---
abstract: 'In order to investigate the environment of the HII region Sh2-163 and to search for evidence of triggered star formation in this region, we performed a multi-wavelength study. Most of our data were taken from large-scale surveys: 2MASS, CGPS, MSX and SCUBA. We also made CO molecular line observations using the 13.7 m telescope of the Purple Mountain Observatory. The ionized region of Sh2-163 is detected in both the optical and the radio continuum. Sh2-163 is partially bordered by an arc-like photodissociation region (PDR), which coincides with the strongest optical and radio emission, indicating interactions between the HII region and the surrounding interstellar medium (ISM). Two molecular clouds were discovered on the border of the PDR. The morphology of these two clouds suggests they are compressed by the expansion of Sh2-163. In cloud A, we found two molecular clumps; star formation in clump A2 appears to be much more active than in clump A1. In cloud B, we found new outflow activity, and massive star(s) are forming inside. Using 2MASS photometry, we searched for embedded young stellar object (YSO) candidates in this region. The close correspondence between the CO emission, the infrared shell and the YSOs suggests that star formation in this region was probably triggered by the expansion of Sh2-163. We also identified the most likely massive protostar related to IRAS 23314+6033.'
author:
- |
Naiping Yu$^{1,2}$[^1], Jun-Jie Wang$^{1,2}$ and Nan Li$^{1,2}$[^2]\
$^{1}$National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China\
$^{2}$NAOC-TU Joint Center for Astrophysics, Lhasa 850000, China
title: ' The environment and star formation of HII region Sh2-163: a multi-wavelength study'
---
HII regions - ISM: molecules - ISM: outflows - stars: formation - stars: protostars
Introduction
============
A great deal of research in astrophysics has been devoted to understanding the formation of massive stars and their feedback on the surrounding ISM (e.g. Zinnecker et al. 2007; Deharveng et al. 2010, and references therein). However, many questions remain open. Multi-wavelength observations are essential for understanding how triggering processes impact massive star formation. Young massive stars tend to form in clusters or groups. The formation of massive stars has an immense impact on their environment through ionizing radiation, heating of dust and the expansion of their HII regions. These processes may trigger the next generation of star formation by compressing neighboring molecular clouds to the point of gravitational instability. Massive stars also have powerful winds which sweep up the surrounding gas, creating interstellar bubbles (e.g. Weaver et al. 1977; Churchwell et al. 2006). In the case of OB associations, the intense ultraviolet radiation released may ionize the surrounding ISM out to tens of parsecs. A number of observations demonstrate that HII regions can strongly affect nearby star formation. Sugitani et al. (1989) showed that the ratios of protostellar luminosity to core mass in bright-rimmed clouds are much higher than those in dark globules. Dobashi et al. (2001) also showed that protostars associated with HII regions are more luminous than those in molecular clouds away from them, indicating that HII regions favour massive star or cluster formation in neighboring molecular clouds. Moreover, in the HII region W5, Karr and Martin (2003) found that the number of star-formation events per unit CO covering area within the influence zone is 4.8 times higher than outside. However, the role of expanding HII regions in triggering star formation is still poorly understood. For example, Dale et al. (2007a,b) argue that the main effect of an expanding HII region may simply be to expose stars that would have formed anyway.
Several mechanisms by which massive stars can affect subsequent star formation around an HII region have been proposed. Two of the most studied processes are known as “radiatively driven implosion” (RDI) (e.g. Lefloch & Lazareff 1994; Miao et al. 2006; Miao et al. 2009) and “collect and collapse” (C&C) (e.g. Elmegreen & Lada 1977; Osterbrock 1989). In the RDI model, the expanding ionization front driven by the UV radiation impinges on pre-existing molecular clouds, leading to the formation of a cometary globule, where new stars may finally be born (Larosa 1983). The C&C model invokes the standard picture of a slowly moving D-type ionization front with an associated shock front that precedes it (Osterbrock 1989). Dense gas may pile up between the two fronts. Over a long time the compressed shocked layer becomes gravitationally unstable and star formation then takes place inside it. Observational evidence of both processes has been reported in a number of HII regions (e.g. Deharveng & Zavagno 2008; Cichowolski et al. 2009; Paron et al. 2011). In this paper, we present a multi-wavelength study of Sh2-163 to find out whether second-generation clusters are forming around it. We also discuss the physical mechanisms which may trigger star formation in this region.
Sh2-163 is an optically visible HII region centered on R.A. (2000) = 23h32m57.9s and Dec. (2000) = 60$^\circ$48$^\prime$01$^\prime$$^\prime$ with a mean diameter of about 10$^\prime$ (Sharpless 1959). The distance to Sh2-163 has been estimated by several authors using different methods. CO observations by Blitz et al. (1982) show it has a velocity of -44.9 $\pm$ 3.8 km/s (with respect to the local standard of rest), corresponding to a kinematic distance of 2.3 $\pm$ 0.7 kpc (Brand & Blitz 1993). A spectrophotometric distance of 2.7 $\pm$ 0.9 kpc was derived by Georgelin (1975). According to Russeil et al. (2007), Sh2-163 belongs to the complex 114.0 - 0.7, which is composed of Sh2-163, Sh2-164, and Sh2-166. All three HII regions are located on the Norma-Cygnus arm. Based on spectroscopic and UBV photometric observations, Russeil et al. (2007) found that Sh2-163 is ionized by an O9V star (R.A. (2000) = 23h33m36.9s, Dec. (2000) = 60$^\circ$45$^\prime$06.8$^\prime$$^\prime$) and an O8V star (R.A. (2000) = 23h33m32.7s, Dec. (2000) = 60$^\circ$47$^\prime$32.1$^\prime$$^\prime$). The two ionizing stars have a mean distance of 3.3 $\pm$ 0.3 kpc. Thus, the distance of Sh2-163 is in the range of 2.3 to 3.3 kpc. We adopt a distance of 2.8 $\pm$ 0.5 kpc in the following discussion.
Data Sets and Observations
==========================
The Canadian Galactic Plane Survey (CGPS) is a project combining radio, millimeter, and infrared surveys of the Galactic plane. The radio surveys were carried out at the Dominion Radio Astrophysical Observatory (DRAO), covering the region 74.$^{\circ}$2 $<$ $\ell$ $<$ 147.$^{\circ}$3, with a Galactic latitude extent of -3.$^{\circ}$6 $<$ $\mathit{b}$ $<$ +5.$^{\circ}$6, at 1420 MHz (Taylor et al. 2003), resolving features as small as 1 arcminute. To match the DRAO images, the CGPS data base also comprises other data sets, such as the Five College Radio Astronomy Observatory (FCRAO) CO(1-0) Survey of the Outer Galaxy (Heyer et al. 1998).
Mid-IR data were taken from the Midcourse Space Experiment (MSX) Galactic Plane Survey (Price et al. 2001). The MSX Band A includes the unidentified infrared bands (UIBs) at 7.7 $\mu$m and 8.6 $\mu$m, with an angular resolution of about 18$^\prime$$^\prime$. Near-IR data were obtained from the Two Micron All Sky Survey (2MASS) Point Source Catalog (Skrutskie et al. 2006).
The SCUBA Legacy Catalogues (Di Francesco et al. 2008) provide two comprehensive sets of continuum maps (and catalogs), using data at 850 and 450 $\mu$m obtained with the Submillimetre Common User Bolometer Array (SCUBA), with angular resolutions of 19$^\prime$$^\prime$ and 11$^\prime$$^\prime$, respectively. The data were reduced with the “matrix inversion” method described by Johnstone et al. (2000). Objects are named by the J2000.0 position of their peak 850 $\mu$m intensity. For each object, the catalogues also provide the maximum 850 $\mu$m intensity, estimates of the total 850 $\mu$m flux and size, and tentative identifications from the SIMBAD database.
In May 2014, we performed CO observations using the 13.7 m millimeter telescope of the Qinghai Station of the Purple Mountain Observatory at Delingha, China. The on-the-fly (OTF) observing mode was applied, with a nine-pixel array receiver whose beams are separated by $\sim$ 180$^\prime$$^\prime$. The receiver was operated in sideband-separation mode, which allows simultaneous observations of three CO isotopic transitions, with $^{12}$CO (1-0) in the upper sideband (USB) and $^{13}$CO (1-0) and C$^{18}$O (1-0) in the lower sideband (LSB). The typical system temperature $T_{sys}$ was between 132 K and 221 K during the observations. The angular resolution of the telescope is about 58$^\prime$$^\prime$, with a beam efficiency between 0.44 at 115 GHz and 0.51 at 110 GHz. The mapping step is 30$^\prime$$^\prime$ and the pointing accuracy is better than 5$^\prime$$^\prime$. A fast Fourier transform (FFT) spectrometer was used as the back end, with a total bandwidth of 1 GHz and 16384 channels. The velocity resolution is about 0.16 km s$^{-1}$ at $\sim$ 110 GHz. The spectral data were reduced and analyzed with the CLASS and GREG software.
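As a quick consistency check on the spectral setup quoted above, the channel width and velocity resolution follow directly from the total bandwidth, the number of channels and the observing frequency; a minimal sketch in Python (the numbers are those given in the text):

```python
# Consistency check: channel width and velocity resolution of the FFT
# spectrometer, using the values quoted in the text.
C_KM_S = 2.99792458e5        # speed of light [km/s]

bandwidth_hz = 1.0e9         # total bandwidth [Hz]
n_channels = 16384
nu_obs_hz = 110.0e9          # approximate observing frequency [Hz]

channel_width_hz = bandwidth_hz / n_channels        # ~61 kHz per channel
delta_v = C_KM_S * channel_width_hz / nu_obs_hz     # ~0.17 km/s

print(f"channel width       = {channel_width_hz / 1e3:.1f} kHz")
print(f"velocity resolution = {delta_v:.3f} km/s at ~110 GHz")
```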
Results and analysis
====================
Fig.1 displays images of Sh2-163 at different wavelengths. The upper panel shows the radio continuum emission at 1420 MHz from the CGPS. An arc of strong radio continuum emission overlying an extended diffuse emission can be noted. From north to south, the radio emission decreases. One of the two ionizing stars found by Russeil et al. (2007) is very close to the center, and the other is close to the peak of the radio emission. The middle panel shows an overlay of the emission at 1420 MHz (line contours) and the optical image (grey-scale). We can see that the arc-like structure of radio continuum emission is coincident with the brightest optical region. The ionized region of Sh2-163 is detected in both the optical and the radio continuum. However, inside the ionized region, the optical emission is weak near R.A. (2000) = 23h33m17.8s and Dec. (2000) = 60$^\circ$46$^\prime$07.7$^\prime$$^\prime$. Interstellar dust in a foreground cloud may be responsible for this optical absorption feature. Indeed, using the data from the FCRAO CO Survey of the Outer Galaxy (Heyer et al. 1998), we found a CO cloud which is spatially coincident with the area lacking optical emission. The bottom panel shows a two-color image of Sh2-163: DSS-R image (green) and MSX band A image (red). An arc-like structure of enhanced mid-infrared emission is evident on the north side, just outside the enhanced optical emission. As observed in many other HII regions, polycyclic aromatic hydrocarbons (PAHs) may be responsible for the emission detected at 8.3 $\mu$m, suggesting the existence of a PDR on the border of Sh2-163. The radio emission seems to penetrate into the PDR, indicating interactions between the HII region and the mid-infrared shell.
------- ----------------- --------------- ---------- ------ ---------------
Cloud   Emission          $V_{lsr}$       $T_{mb}$   rms    FWHM
                          (km s$^{-1}$)   (K)        (K)     (km s$^{-1}$)
A       $^{12}$CO (1-0)   -42.7           20.7       0.52   3.0
        $^{13}$CO (1-0)   -42.6           9.0        0.31   2.1
        C$^{18}$O (1-0)   -42.4           3.2        0.36   1.1
B       $^{12}$CO (1-0)   -45.0           23.3       0.66   2.9
        $^{13}$CO (1-0)   -45.0           7.8        0.33   1.8
------- ----------------- --------------- ---------- ------ ---------------
: Observed parameters of the emissions shown in Figure 3.
\[tb:rotn\]
CO emissions
------------
We first inspected the molecular gas around Sh2-163 in the CGPS data over the whole velocity range and found an interesting feature around $V_{lsr}$ $\sim$ -45 km s$^{-1}$. Fig.2 displays the integrated intensity of $^{12}$CO (1-0) between -39 and -52 km s$^{-1}$ superimposed on the MSX band A and DSS-R images. On the border of Sh2-163, two molecular clouds were found with an arc-like morphology, indicating that they are compressed by the expanding HII region. The main peak of each cloud is also consistent with the mid-infrared emission. We further performed CO observations using the 13.7 m telescope to make a detailed study of these clouds. The resolution of the 13.7 m telescope is a little higher than that of the FCRAO 14 m telescope (58$^{\prime\prime}$ vs. 100$^{\prime\prime}$). In addition, by studying the more optically thin lines of $^{13}$CO and C$^{18}$O (if detected), we expect to probe the inner regions of these clouds.
Fig.3 shows the CO isotopic transitions at the peaks of molecular clouds A and B. C$^{18}$O emission in cloud B was not detected. By fitting the $^{13}$CO lines with Gaussian functions, we obtained the peak velocities and FWHMs (Table 1). The derived $V_{lsr}$ of clouds A and B is -42.6 km/s and -45.0 km/s, respectively. Fig.4 displays the integrated intensities of the $^{12}$CO, $^{13}$CO and C$^{18}$O lines. The morphology of the two clouds is consistent with that detected by the FCRAO. Moreover, in cloud A we found two molecular clumps (denoted clump A1 and clump A2 in Figure 4) in the C$^{18}$O emission. A detailed study of the two clouds is presented in Section 3.3.
------- ----------- ------------------------------------------- ---------- ------------------------ -------------
Cloud   R.A.        Dec.                                        $T_{ex}$   $N(H_2)$                 Mass
        (J2000)     (J2000)                                     (K)        (cm$^{-2}$)              ($M_\odot$)
A       23h34m29s   60$^\circ$51$^\prime$40$^\prime$$^\prime$   24.1       8.4 $\times$ 10$^{21}$   1341
B       23h33m35s   60$^\circ$50$^\prime$34$^\prime$$^\prime$   27.0       7.9 $\times$ 10$^{21}$   591
------- ----------- ------------------------------------------- ---------- ------------------------ -------------
: Derived parameters of the two clouds.
\[tb:rotn\]
We now estimate the molecular column densities and hence the masses of the two clouds from the $^{12}$CO and $^{13}$CO observations. Under the assumptions of local thermodynamic equilibrium (LTE) and optically thick $^{12}$CO emission, the excitation temperature ($T_{ex}$) can be obtained from the peak brightness temperature of $^{12}$CO via:
$$T_{ex} = \frac{5.653}{ln(\frac{5.653}{T_{mb} + 0.837} + 1)}$$
The excitation temperatures of clouds A and B are 24.1 K and 27.0 K, respectively. The total column density of $^{13}$CO can be obtained, assuming the $^{13}$CO emission is optically thin, from (Rohlfs $\&$ Wilson 2004):
$$N(^{13}CO) = 3.0 \times 10^{14}
\frac{T_{ex}}{1 - exp(-5.3/T_{ex})} \int \tau dv$$
and $$\int \tau dv = \frac{1}{J(T_{ex}) - J(T_{bg})} \int T_{mb} dv$$ where $T_{bg}$ is the temperature of the background radiation (2.73 K). The column density of H$_2$ can be obtained by adopting the typical abundance ratios \[H$_2$\]/\[$^{12}$CO\] = 10$^4$ and \[$^{12}$CO\]/\[$^{13}$CO\] $\sim$ \[$^{12}$C\]/\[$^{13}$C\]. We adopted the Galactocentric distance-dependent \[$^{12}$C\]/\[$^{13}$C\] ratio from Wilson $\&$ Rood (1994): $$\frac{^{12}C}{^{13}C} = 7.5 \times R_{GC}[kpc] + 7.6.$$ Using $M = \mu\, m_H\, D^2\, \Omega\, N(H_2)$ we obtain the masses of clouds A and B, where $N(H_2)$ is the H$_2$ column density calculated through the above equations, $D$ = 2.8 kpc is the distance, $\Omega$ is the area of each cloud (within 50$\%$ of its peak emission), and $m_{H}$ is the hydrogen atom mass. We adopt a mean molecular weight per H$_2$ molecule of $\mu$ = 2.72 to include helium. The derived parameters are listed in Table 2.
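The chain of LTE estimates above can be evaluated in a few lines of code. The sketch below is purely illustrative: the integrated $^{13}$CO intensity, the Galactocentric distance and the cloud solid angle are placeholder values, not the measured ones; only the coefficients come from the equations quoted in the text, and the radiation-temperature function is assumed to take its standard form $J(T)=T_0/[\exp(T_0/T)-1]$.

```python
import numpy as np

# Illustrative LTE column-density and mass estimate following the equations
# in the text. All inputs below are placeholders, not the measured values.

def excitation_temperature(t_mb_12co):
    """T_ex from the 12CO peak brightness temperature (optically thick)."""
    return 5.653 / np.log(5.653 / (t_mb_12co + 0.837) + 1.0)

def radiation_temperature(t, t0=5.29):
    """Assumed standard form J(T) = T0 / (exp(T0/T) - 1) near 110 GHz."""
    return t0 / (np.exp(t0 / t) - 1.0)

def n_13co(t_ex, integrated_tmb):
    """N(13CO) [cm^-2] for optically thin emission (Rohlfs & Wilson 2004)."""
    tau_dv = integrated_tmb / (radiation_temperature(t_ex)
                               - radiation_temperature(2.73))
    return 3.0e14 * t_ex / (1.0 - np.exp(-5.3 / t_ex)) * tau_dv

# ---- placeholder inputs ----
t_mb_12co = 20.7        # K, cloud A peak from Table 1
int_tmb_13co = 20.0     # K km/s, assumed integrated 13CO intensity
r_gc_kpc = 10.0         # assumed Galactocentric distance
d_kpc = 2.8             # adopted distance to Sh2-163
omega_sr = 2.0e-7       # assumed cloud solid angle [sr]

t_ex = excitation_temperature(t_mb_12co)
ratio_12c_13c = 7.5 * r_gc_kpc + 7.6                        # Wilson & Rood (1994)
n_h2 = n_13co(t_ex, int_tmb_13co) * ratio_12c_13c * 1.0e4   # [H2]/[12CO] = 1e4

m_h, m_sun, kpc_cm = 1.6735575e-24, 1.989e33, 3.0857e21     # cgs constants
mass = 2.72 * m_h * (d_kpc * kpc_cm)**2 * omega_sr * n_h2 / m_sun

print(f"T_ex = {t_ex:.1f} K, N(H2) = {n_h2:.2e} cm^-2, M ~ {mass:.0f} Msun")
```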
2MASS YSO candidates
--------------------
Both observations and theory indicate that expanding HII regions may trigger the formation of a new generation of stars (e.g. Osterbrock 1989; Cichowolski et al. 2009; Miao et al. 2009; Panwar et al. 2014). To look for evidence of triggered star formation, we searched for young stars in this region. We adopted the criteria developed by Kerton et al. (2008) and converted them to the distance of Sh2-163 and the visual absorption in this direction. From the classical relations $N(H+H_2)$/$E(B-V)$ = 5.8 $\times$ 10$^{21}$ particles cm$^{-2}$ (Bohlin et al. 1978) and $A_V$ = 3.1$E(B-V)$, we obtain $A_V$ $\sim$ 5.34 $\times$ 10$^{-22}$ $N(H_2)$ $\sim$ 4.3 mag. According to the different photometric qualities of 2MASS, we divided the YSO candidates into four groups (P$_{1}$, P$_{1+}$, P$_{2}$, P$_{3}$). P$_{1}$ sources must have valid photometry in all three bands (i.e. ph$_{-}$qual values of A, B, C or D). For sources in group P$_1$, the color criteria are (J-H)$>$0.872 and (J-H)-1.7(H-K)+0.083$<$0; they select stars lying below the reddening vector associated with an O6V star. P$_{1+}$ sources also have valid photometry in all three bands. The color criteria of group P$_{1+}$ are 1.172$<$(J-H)$<$1.472, (J-H)-1.7(H-K)+0.083$>$0, (J-H)-1.7(H-K)-0.3797$<$0, and K$>$14.5; they select YSO candidates lying in the region where T Tauri and main-sequence stars overlap. Sources in group P$_{2}$ are not detected in the J band, so their actual positions along the (J-H) axis should be towards higher values. The color criteria of group P$_2$ are (J-H)-1.7(H-K)+0.083$<$0 and (H-K)$>$0.918. Sources in group P$_{3}$ have J and H magnitudes that are only lower limits, so their (J-H) color cannot be used; the P$_{3}$ color criterion is (H-K)$>$0.918. Following these criteria, we searched for tracers of stellar formation activity in the 2MASS catalogue. Fig.5 shows the color-color diagram (CCD) of the selected YSO candidates. The two parallel lines are reddening vectors, assuming the interstellar reddening law of Rieke & Lebofsky (1985) (A$_{J}$ / A$_{V}$ = 0.282; A$_{H}$ / A$_{V}$ = 0.175; A$_{K}$ / A$_{V}$ = 0.112). Fig.6 shows the locations of the YSO candidates.
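The color cuts above translate directly into a simple classification function. The sketch below encodes them; the function name and the example magnitudes are hypothetical, and only the numerical thresholds are taken from the text.

```python
def classify_yso(j, h, k, j_valid=True, h_valid=True):
    """Apply the P1/P1+/P2/P3 2MASS color cuts described in the text.

    j, h, k are 2MASS magnitudes; for P2 (P3) the J (J and H) values are
    only limits. Returns the group name or None.
    """
    jh, hk = j - h, h - k
    if j_valid and h_valid:
        # P1: below the reddening vector associated with an O6V star
        if jh > 0.872 and jh - 1.7 * hk + 0.083 < 0:
            return "P1"
        # P1+: overlap region of T Tauri and main-sequence stars
        if (1.172 < jh < 1.472 and jh - 1.7 * hk + 0.083 > 0
                and jh - 1.7 * hk - 0.3797 < 0 and k > 14.5):
            return "P1+"
        return None
    if not j_valid and h_valid:
        # P2: J undetected, so the true (J-H) can only be larger than listed
        return "P2" if (jh - 1.7 * hk + 0.083 < 0 and hk > 0.918) else None
    # P3: J and H are only lower limits, so only (H-K) is usable
    return "P3" if hk > 0.918 else None


# Example with hypothetical magnitudes (illustration only):
print(classify_yso(16.2, 14.9, 13.8))   # -> P1
```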
We now discuss the likelihood of triggered star formation in Sh2-163. A group of these sources can be noted right on the north side of Sh2-163. For the two molecular clouds discussed above, there is a very good correspondence between the CO emission, the infrared shell and the YSO candidates. Almost no YSO candidates were found outside the shell, even though strong CO emission is still present there. This suggests that triggered star formation may be taking place in this region due to the expansion of Sh2-163. The morphology of these clouds seen in CO emission cannot be explained purely by the “collect and collapse” model, as the CO emission distribution suggests the presence of pre-existing molecular clouds on the border of the HII region. Nor does the DSS-R image of Figure 1 display a cometary morphology. We thus discard the “RDI” and “C&C” processes. Another possibility is that the shocked expanding layer, prior to the onset of the instability, collides with these pre-existing molecular clouds (A and B). Star formation would then take place at the interface between the layer and the cloud clumps. Sh2-163 is not the only such object; HII regions like Sh2-235 (Kirsanova et al. 2008), Sh2-217 and Sh2-219 (Deharveng et al. 2003) also show such physical processes of sequential star formation.
------- ------------------ -------------------------------- ------------- ------------------------- ---------------------------------
Shift   Integrated range   N (CO)                           M$_{out}$     P$_{out}$                 E$_{out}$
        (km s$^{-1}$)      ($\times$ 10$^{16}$ cm$^{-2}$)   ($M_\odot$)   ($M_\odot$ km s$^{-1}$)   (M$_\odot$ \[km s$^{-1}$\]$^2$)
red     (-43.5, -39)       1.9                              7.6           46                        137
blue    (-46.5, -51)       2.7                              14.2          85                        255
------- ------------------ -------------------------------- ------------- ------------------------- ---------------------------------
\[tb:rotn\]
Star formation in cloud A and B
-------------------------------
We now make a detailed study of star formation in clouds A and B. As mentioned above, two molecular clumps were found in cloud A through the C$^{18}$O observations. Fig.7 shows the integrated intensity of C$^{18}$O superimposed on the near-infrared (NIR) $K$-band image of 2MASS. Six 2MASS YSO candidates are projected onto clump A2, while only one candidate lies in clump A1. Fig.7 also shows the 1420 MHz image (grey-scale) with the C$^{18}$O intensity (contours) overlaid. It seems that star formation in A2 is much more active than in A1. One possible explanation is that the shock reached A2 first (triggering star formation inside) and only later reached A1. However, further observations are needed to test this speculation.
We found outflow activity in cloud B. Fig. 8 shows the $^{12}$CO position-velocity (PV) diagram of cloud B, cut from north to south. Wing emission is obvious in the PV diagram; typical outflows appear as spatially confined wings beyond the emission of the cloud. From the PV diagram, we selected the integration ranges of the wings and determined the outflow intensities of the red and blue lobes (Figure 9). Using the method described in Section 3.1, we obtain the masses of the red and blue lobes of the outflow. We estimate the momentum and energy of the red and blue lobes using $$P_{out} = M_{out} V$$ and $$E_{out} =\frac{1}{2} M_{out} V^2$$ where $V$ is a characteristic velocity estimated as the difference between the maximum velocity of the CO emission in the red and blue wings, respectively, and the ambient molecular velocity ($V_{lsr}$). The derived parameters are shown in Table 3.
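For reference, the momentum and energy estimates follow directly from the lobe masses and the characteristic velocities; a minimal sketch that takes the extreme wing velocities of the integration ranges in Table 3, relative to $V_{lsr}$ = -45.0 km s$^{-1}$, as the characteristic velocities (this choice approximately reproduces the tabulated $P_{out}$ and $E_{out}$):

```python
# Outflow momentum and kinetic energy, P = M*V and E = 0.5*M*V^2,
# with the characteristic velocity taken as the extreme wing velocity
# relative to the ambient velocity V_lsr of cloud B.
v_lsr = -45.0   # km/s

lobes = {
    # name: (lobe mass [Msun], extreme wing velocity [km/s])
    "red":  (7.6, -39.0),
    "blue": (14.2, -51.0),
}

for name, (mass, v_extreme) in lobes.items():
    v_char = abs(v_extreme - v_lsr)        # characteristic velocity [km/s]
    p_out = mass * v_char                  # [Msun km/s]
    e_out = 0.5 * mass * v_char**2         # [Msun (km/s)^2]
    print(f"{name}: V = {v_char:.0f} km/s, P_out = {p_out:.0f}, E_out = {e_out:.0f}")
```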
Near the core of cloud B we found that the IRAS point source 23314+6033 satisfies the protostellar color criteria of Junkes et al. (1992): S$_{100}$ $\ge$ 20 Jy, 1.2 $\le$ $\frac{S_{100}}{S_{60}}$ $\le$ 6.0, $\frac{S_{60}}{S_{25}}$ $\ge$ 1 and Q$_{60}$ + Q$_{100}$ $\ge$ 4, where S$_{\lambda}$ and Q$_{\lambda}$ are the flux density and the quality of the IRAS flux in each of the observed bands, respectively. The total infrared luminosity of IRAS 23314+6033 can be calculated by the method of Casoli et al. (1986): $$\begin{split}
&L_{IR}=(20.65\times S_{12}+7.35\times S_{25}+4.57\times
S_{60}+1.76\times S_{100})\\
&\times D^2\times0.30
\end{split}$$ where D is the distance from the solar system in kpc. According to Smith et al. (2002), the resulting luminosity of 17380 L$_{\odot}$ corresponds to a B1.5 star. Zhang et al. (2005) made a $^{12}$CO(2-1) observation of IRAS 23314+6033 (1$^\prime$ $\times$ 1$^\prime$ in steps of 29$^\prime$$^\prime$), using the 12 m telescope of the National Radio Astronomy Observatory (NRAO) at Kitt Peak. They discovered an unresolved outflow driven by the IRAS point source. IRAS 23314+6033 is associated with the Red MSX Source (RMS) G113.6041-00.6161 (Lumsden et al. 2002), which is about 45$^\prime$$^\prime$ away from the peak of cloud B (upper panel of Figure 9). Considering the spatial resolution of MSX, we do not regard IRAS 23314+6033 as being at the center of cloud B. Our larger-area observations indicate that the outflow discovered by Zhang et al. (2005) seems to be just a part of the main outflow from cloud B, which is driven by young star(s) still deeply embedded at the center of the cloud. NIR H$_2$ emission is a good tracer of shocks in molecular outflows in low-mass star-forming regions. The extended $K$-band emission seen in Figure 9 may be partly due to excited H$_2$ emission at 2.12 $\mu$m.
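The Casoli et al. (1986) relation above can be evaluated directly from the four IRAS flux densities; a minimal sketch (the flux densities below are placeholders, not the catalog values for IRAS 23314+6033):

```python
# Far-infrared luminosity following Casoli et al. (1986):
# L_IR = (20.65*S12 + 7.35*S25 + 4.57*S60 + 1.76*S100) * D^2 * 0.30,
# with the S_lambda in Jy, D in kpc, and L_IR in solar luminosities.
def l_ir_casoli(s12, s25, s60, s100, d_kpc):
    return (20.65 * s12 + 7.35 * s25 + 4.57 * s60
            + 1.76 * s100) * d_kpc**2 * 0.30

# Placeholder IRAS flux densities in Jy (not the catalog values):
print(f"L_IR ~ {l_ir_casoli(10.0, 50.0, 400.0, 900.0, 2.8):.0f} Lsun")
```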
We also searched for sources in the SCUBA legacy catalogues and found a prominent sub-mm source J233342.7+605030 associated with IRAS 23314+6033. The maximum brightness (B$_{850}$) of J233342.7+605030 is 1.91 Jy beam$^{-1}$, with a flux density (F$_{850}$) of 3.3 Jy (Di Francesco et al. 2008). The deconvolved radius is 34.2 arcsec and corresponds to a physical diameter of 0.54 pc. Using the 850 $\mu$m continuum emission, we estimate the dust mass from the relationship of Tej et al. (2006): $$M_{dust}=1.88\times10^{-4}(\frac{1200}{\nu})^{3+\beta} S_{\nu}
(e^{0.048\nu/T_d}-1) D^2$$ where S$_{\nu}$ is the flux density at frequency $\nu$. We assume a dust temperature T$_d$ of 20 K and a dust emissivity index $\beta$ of 2.6 for this temperature in this region, following Hill et al. (2006); D is the distance to Sh2-163 in kpc. We thus obtain a dust mass of M$_{dust}$ $\sim$ 9.1 M$_{\odot}$.
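The Tej et al. (2006) dust-mass relation quoted above is likewise straightforward to evaluate; a minimal sketch using the values stated in the text, with an assumed effective frequency of 353 GHz for the 850 $\mu$m band (the exact result depends on the frequency adopted):

```python
import math

# Dust mass following the relation of Tej et al. (2006) quoted above:
# M_dust = 1.88e-4 * (1200/nu)^(3+beta) * S_nu * (exp(0.048*nu/T_d) - 1) * D^2,
# with nu in GHz, S_nu in Jy, T_d in K, D in kpc, and M_dust in solar masses.
def dust_mass(s_nu_jy, nu_ghz, t_d_k, beta, d_kpc):
    return (1.88e-4 * (1200.0 / nu_ghz) ** (3.0 + beta) * s_nu_jy
            * (math.exp(0.048 * nu_ghz / t_d_k) - 1.0) * d_kpc ** 2)

# S_850 = 3.3 Jy, T_d = 20 K, beta = 2.6, D = 2.8 kpc (from the text);
# nu = 353 GHz is our assumption for the 850 micron band.
print(f"M_dust ~ {dust_mass(3.3, 353.0, 20.0, 2.6, 2.8):.1f} Msun")
```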
Fig.10 shows a composite image of the region surrounding the 850 $\mu$m source, made from the 2MASS J-band (blue), H-band (green) and K-band (red) images. The contours, 850 $\mu$m brightness levels of 0.9, 1.2, 1.5 and 1.8 Jy beam$^{-1}$, trace the very central part of the sub-mm source. Three YSO candidates selected through the criteria described above are located within the sub-mm emission, with star 3 at the center. Fig.11 shows these stars plotted in a 2MASS color-magnitude diagram (CMD). The main sequence is shown with some representative spectral types at a distance of 2.8 kpc. The two parallel lines are A$_{V}$ = 20 reddening vectors for a B0V and a B6V star, using the interstellar reddening law of Rieke & Lebofsky (1985). Arrows indicate sources for which the 2MASS catalog only lists an upper brightness limit in the relevant band. The source labeled 3 in Fig.10 and Fig.11 is probably a highly embedded YSO with spectral type between B0V and B3V, consistent with the IR luminosity analysis above. Thus, we suggest that star 3 is the most likely massive protostar related to IRAS 23314+6033.
Summary
=======
Based on our CO observations with the 13.7 m PMO telescope, together with archival data from the CGPS, 2MASS, MSX, and SCUBA surveys, we have made a multi-wavelength study of Sh2-163. The main results can be summarized as follows:
1. Radio continuum, optical and mid-infrared images of Sh2-163 indicate strong interactions between the HII region and the surrounding ISM.
2. Two molecular clouds were discovered on the border of the PDR. The morphology of these two clouds suggests they are compressed by the expansion of Sh2-163. In cloud A, we found two molecular clumps; our study indicates that star formation in clump A2 seems to be more active than in clump A1. In cloud B, we found outflow activity driven by young star(s) that are still deeply embedded.
3. Using 2MASS photometry, we searched for embedded YSO candidates in this region. The close correspondence between the CO emission, the infrared shell and the YSOs suggests that star formation in this region was probably triggered by the expansion of Sh2-163. We discard the “RDI” and “C&C” processes for Sh2-163 and propose instead that the shocked expanding layer, prior to the onset of the instability, collides with the pre-existing molecular clouds (A and B). Star formation would then take place at the interface between the layer and the cloud clumps.
4. We found a prominent sub-mm source at the location of IRAS 23314+6033, with three embedded YSO candidates. At the peak of the sub-mm emission lies an intermediate-mass young star, consistent with the IR luminosity analysis. We thus regard it as the most likely massive protostar related to IRAS 23314+6033.
ACKNOWLEDGEMENTS {#acknowledgements .unnumbered}
================
We thank the anonymous referee for the constructive suggestions and we are also grateful to the staff at the Qinghai Station of Purple Mountain Observatory (PMO) for their observations. The CGPS is a Canadian project with international partners and is supported by grants from NSERC. Data from the CGPS are publicly available through the facilities of the Canadian Astronomy Data Center (http://cadc.hia.nrc.ca) operated by the Herzberg Institute of Astrophysics, NRC.
Blitz, L., Fich, M., Stark, A. A., 1982, ApJS, 49, 183
Brand, J., Blitz, L., 1993, A&A, 275, 67
Bohlin, R. C., Savage, B. D., Drake, J. F., 1978, ApJ, 224, 132
Casoli, F., Combes, F., Dupraz, C., Gerin, M., Boulanger, F., 1986, A&A, 169, 281
Churchwell, E., Povich, M. S., Allen, D., 2006, ApJ, 649, 759
Cichowolski, S., Romero, G. A., Ortega, M. E., Cappa, C. E., Vasquez, J., 2009, MNRAS, 394, 900
Dale, J. E., Ercolano, B., Clarke, C. J., 2007a, MNRAS, 382, 1759
Dale, J. E., Clark, P. C., Bonnell, I. A., 2007b, MNRAS, 377, 535
Deharveng, L., Zavagno, A., Salas, L., et al., 2003, A&A, 399, 1135
Deharveng, L., Schuller, L. D., Zavagno, A., et al., 2010, A&A, 523, 6
Deharveng, L., Zavagno, A., 2008, in Beuther, H., Henning, T., eds, ASP Conf. Ser. Vol. 387, Massive Star Formation: Observations Confront Theory. Astron. Soc. Pac., San Francisco, p. 338
Di Francesco, J., Johnstone, D., Kirk, H., MacKenzie, T., Ledwosinska, E., 2008, ApJS, 175, 277
Dobashi, K., Yonekura, Y., Matsumoto, T., Momose, M., Sato, F., et al., 2001, PASJ, 53, 85
Elmegreen, B. G., Lada, C. J., 1977, ApJ, 214, 725
Georgelin, Y. M., 1975, PhD thesis, Univ. Provence, Obs. de Marseille
Heyer, M. H., Brunt, C., Snell, R. L., Howe, J. E., Schloerb, F. P., Carpenter, J. M., 1998, ApJS, 115, 241
Hill, T., Thompson, M. A., Burton, M. G., et al., 2006, MNRAS, 368, 1223
Johnstone, D., Wilson, C. D., Moriarty-Schieven, G., Giannakopoulou-Creighton, J., Gregersen, E., 2000, ApJS, 131, 505
Junkes, N., F$\ddot{u}$rst, E., Reich, W., 1992, A&A, 261, 289
Karr, J. L., Martin, P. G., 2003, ApJ, 595, 900
Kerton, C. R., Arvidsson, K., Knee, L. B. G., Brunt, C., 2008, MNRAS, 385, 995
Kirsanova, M. S., Sobolev, A. M., Thomasson, M., et al., 2008, MNRAS, 388, 729
Koornneef, J., 1983, A&A, 128, 84
Larosa, T. N., 1983, ApJ, 274, 815
Lefloch, B., Lazareff, B., 1994, A&A, 289, 559
Lumsden, S. L., Hoare, M. G., Oudmaijer, R. D., Richards, D., 2002, MNRAS, 336, 621
Miao, J., White, G. J., Nelson, R., Thompson, M., Morgan, L., 2006, MNRAS, 369, 143
Miao, J., White, G. J., Thompson, M., Nelson, R., 2009, ApJ, 692, 382
Osterbrock, D. E., 1989, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Mill Valley, CA: Univ. Science Books)
Panwar, N., Chen, W. P., Pandey, A. K., et al., 2014, MNRAS, 443, 1614
Paron, S., Petriella, A., Ortega, M. E., 2011, A&A, 525, 132
Price, S. D., Egan, M. P., Carey, S. J., Mizuno, D. R., Kuchar, T. A., 2001, AJ, 121, 2819
Rieke, G. H., Lebofsky, M. J., 1985, ApJ, 288, 618
Rohlfs, K., Wilson, T. L., 2004, Tools of Radio Astronomy (4th ed.; Berlin: Springer)
Russeil, D., Adami, C., Georgelin, Y. M., 2007, A&A, 470, 161
Sharpless, S., 1959, ApJS, 4, 257
Smith, L., Norris, R. P. F., Crowther, P. A., 2002, MNRAS, 337, 1309
Skrutskie, M. F., et al., 2006, AJ, 131, 1163
Sugitani, K., Fukui, Y., Mizuno, A., Ohashi, N., 1989, ApJ, 342, L87
Taylor, A. R., et al., 2003, AJ, 125, 3145
Tej, A., Ojha, D. K., Ghosh, S. K., et al., 2006, A&A, 452, 203
Weaver, R., McCray, R., Castor, J., Shapiro, P., Moore, R., 1977, ApJ, 218, 377
Wilson, T. L., Rood, R., 1994, ARA&A, 32, 191
Zhang, Q. Z., Hunter, T. R., Brand, J., Sridharan, T. K., Cesaroni, R., Molinari, S., Wang, J., Kramer, K., 2005, ApJ, 625, 864
Zinnecker, H., Yorke, H. W., 2007, ARA&A, 45, 481
\[lastpage\]
[^1]: E-mail: yunaiping09@mails.gucas.ac.cn
[^2]:
---
abstract: 'We present a theoretical scheme to simulate quantum field theory in a discrete curved spacetime, based on the Bose-Hubbard model describing a Bose-Einstein condensate trapped inside an optical lattice. Using the Bose-Hubbard Hamiltonian, we first introduce a hydrodynamic representation of the system evolution in discrete space. We then show that the phase (density) fluctuations of the trapped bosons inside an optical lattice in the superfluid (Mott insulator) state obey the Klein-Gordon equation for a massless scalar field propagating in a discrete curved spacetime. We derive the effective metrics associated with the superfluid and Mott-insulator phases and, in particular, we find that in the superfluid phase the metric exhibits a singularity which can be considered as the manifestation of an analog acoustic black hole. The proposed approach is found to provide a suitable platform for quantum simulation of various spacetime metrics through adjusting the system parameters.'
address:
- 'Department of Physics, Faculty of Science, University of Isfahan, Hezar Jerib, 81746-73441, Isfahan, Iran'
- 'Department of Physics, Faculty of Science, University of Isfahan, Hezar Jerib, 81746-73441, Isfahan, Iran'
- 'Quantum Optics Group, Department of Physics, Faculty of Science, University of Isfahan, Hezar Jerib, 81746-73441, Isfahan, Iran'
author:
- 'F. Bemani'
- 'R. Roknizadeh'
- 'M. H. Naderi'
title: 'Quantum simulation of discrete curved spacetime by the Bose-Hubbard model: from analog acoustic black hole to quantum phase transition'
---
Introduction
============
Two celebrated theories of modern physics are the general theory of relativity (GTR) and quantum field theory (QFT). GTR unifies the special theory of relativity and gravity, and it has advanced our understanding of the universe by providing a geometric interpretation of gravitation. It has some magnificent predictions, such as the deflection of light by gravity, gravitational waves, and black holes [@Walecka]. On the other hand, QFT, which combines the elements of quantum mechanics and the special theory of relativity, theoretically describes the interactions of the fundamental forces and subatomic particles. QFT in curved spacetime leads to fascinating predictions such as particle production by time-dependent gravitational fields, or by time-independent gravitational fields that contain horizons [@Birrell]. QFT in curved spacetime is regarded as the first approximation to quantum gravity.
Even for small values of the spacetime curvature, direct experimental observation of the predictions of QFT in curved spacetime seems to be practically impossible. To examine such predictions, one can employ the notion of quantum simulation (see [@Georgescu] and references therein). Various analog models have been proposed as simulators for these theories [@Novello; @Barcelo2005; @Unruh; @Faccio; @Bemani]. These analog models provide experimentally accessible platforms for quantum field theory in curved spacetime. Some properties (mainly kinematic) of GTR can be investigated through analogies with accessible physical systems [@Novello; @Barcelo2005; @Unruh; @Faccio].
The impressive experimental progress in quantum optics, in particular in manipulating and controlling atomic ensembles and the electromagnetic field, has opened a route towards the accumulation of bosonic particles in the lowest-energy single-particle state, i.e., the formation of a Bose-Einstein condensate (BEC) [@Dalfovo]. Since their first experimental realization in alkali gases [@Anderson; @Davis], BECs have been investigated extensively from both theoretical and experimental points of view [@Dalfovo]. Several aspects of GTR and QFT have been analyzed by analogy with a BEC [@Garay; @Garay2; @Barcelo; @Mayoral; @PAnderson; @Finazzi; @Girelli; @Fedichev1; @Fedichev2; @Fedichev3; @Fischer2004]. The phononic perturbations in a BEC satisfy the Klein-Gordon equation in a curved spacetime, and the corresponding metric possesses a singularity which can be regarded as an acoustic analog black hole. Therefore, BECs offer a unique opportunity to explore the basic principles of QFT in curved spacetime.
By superimposing laser beams propagating along two or three orthogonal directions, a spatially periodic intensity pattern is formed. The resulting periodic potential can be used to confine neutral atoms via the Stark shift. Loading ultracold atomic gases into optical lattices has opened a promising route towards the investigation of strongly correlated quantum many-body systems [@Morsch; @Jaksch; @Lewenstein; @Bloch]. There is a close analogy between ultracold atomic gases inside light-induced periodic potentials and electrons in crystal lattices. The nonlinear atom-atom interactions are responsible for the collective fluid-like behavior of a BEC [@Morsch; @Dalfovo]. Understanding quantum phase transitions and their underlying physics is one of the goals of many-body physics. The superfluid to Mott insulator phase transition, first experimentally reported in Ref. [@Greiner], is the most famous example of such a phase transition at zero temperature. The high flexibility and tunability of optical lattices and of the condensate provide a unique opportunity to explore this quantum phase transition [@Greiner]. Long decoherence times (of the order of seconds) make trapped bosonic atoms in an optical lattice a good candidate for applications in quantum simulation [@Jaksch]. The dynamics of ultracold bosonic atoms in an optical lattice is effectively described by a discrete Bose-Hubbard Hamiltonian [@Morsch; @Jaksch; @Jaksch2; @Zwerger], which describes the tunneling of trapped atoms between the lowest vibrational states of the optical lattice sites.
Discrete Hamiltonians, mainly arising in solid state physics, are well suited to be employed as quantum simulators to investigate QFT on a lattice [@Giuliani; @Katsnelson; @Szpak2011; @Szpak2012; @Szpak2014]. Regardless of the elegance of treating problems in a discrete spacetime, such a treatment has a number of advantages: i) the continuum spacetime can be viewed as the limit of a discrete spacetime as the lattice spacing tends to zero; ii) the direct connection to statistical physics is more obvious; iii) renormalization, scaling, universality, and the role of topology are more apparent in a discrete spacetime; iv) in numerical simulations on a computer, we always approximate the evolution of a continuous field by its finite differences in a discrete space [@smit2002introduction]. In this regard, dealing with a discrete geometry may be more beneficial. Electrons in a sheet of graphene have a dynamics analogous to a quantum field propagating in a discrete space [@Szpak2014]. The effect of curvature on the current flow lines in elastically deformed graphene sheets has been studied in [@Stegmann]. The Dirac Hamiltonian describing electrons and positrons moving in an external field has been simulated by ultracold atoms in bichromatic optical lattices [@Szpak2012].
Inspired by the existing studies on the analogy between the Bose-Hubbard model and a trapped BEC in an optical lattice, and also motivated by the analogy between curved spacetimes and BECs, in the present contribution we apply discrete differential geometry to investigate the analog spacetime corresponding to the phase/density fluctuations of a BEC trapped in an optical lattice, based on the Bose-Hubbard model. The discrete geometric formulation of the quantum fields is of great significance for understanding the quantum phase transitions occurring in the system. To explore the dynamics of the system, applying the mean-field approximation, we first derive the discrete equations of motion for the mean field plus fluctuations and then, using the density-phase representation, we obtain the dynamics of the density as well as the phase fluctuations. In the superfluid phase, within the hydrodynamic limit, the phase fluctuations of the quantum field are found to obey the covariant Klein-Gordon equation for a massless scalar field propagating in a curved discrete spacetime with a specified metric. From the point of view of the quantum phase transition, the two different phases occurring in the system, i.e., the superfluid and the Mott-insulator phases, can be distinguished by their corresponding effective metrics for the phase and density fluctuations, respectively. In addition, various effective metrics can be simulated by adjusting the system parameters.
Dynamics of the system {#Sec:Section2}
=======================
We consider an interacting ultracold gas of bosonic atoms trapped in a simple cubic lattice optical lattice. The system is effectively described by the Bose-Hubbard Hamiltonian [@Zwerger] $$\begin{aligned}
&&\hat {\cal H} = - \sum\limits_{\left\langle {lmn;l'n'm'} \right\rangle } J\hat b_{lmn}^\dag {{{\hat b}_{l'n'm'}}} + \frac{U}{2}\sum\limits_{lmn} {{{\hat n}_{lmn}}} \left( {{{\hat n}_{lmn}} - 1} \right) - \mu \sum\limits_{lmn} {{{\hat n}_{lmn}}} .
\label{BoseHubbardHamiltonian}\end{aligned}$$ Here $\left\langle {lmn;l'n'm'} \right\rangle$ stands for the nearest-neighbor sites, ${\hat {b}}_{nlm}^{\dagger }$ and ${\hat {b}}_{nlm}$ are, respectively, the creation and annihilation operators for a bosonic atom at lattice site $(n,l,m) $ obeying the commutation relation $ [{\hat {b}}_{nlm},{\hat {b}}_{n'l'm'}^{\dagger }]=\delta_{n,n'}\delta_{m,m'}\delta_{l,l'} $, and ${\hat {n}}_{nlm}={\hat {b}}_{nlm}^{\dagger }{\hat {b}}_{nlm}$ is the number operator for site $(n,l,m)$. Moreover, $J$, $\mu$ and $U$ are the tunneling rate between adjacent sites, the chemical potential, and the strength of the on-site interaction, respectively given by $$\begin{aligned}
J &= - \int {{d^3}{\bf{r}}{w^*}({\bf{r}} \!-\! {{\bf{r}}_{nlm}})\left[ { -\frac{1}{2M} {\nabla ^2}\! +\! {V_{{\rm{lat}}}}({\bf{r}})} \right]w({\bf{r}} \!-\! {{\bf{r}}_{kmn}})} \, ,
\label{Eq:TunnlingRate} \\
\mu &= - \int {{d^3}{\bf{r}}{V_{\rm{ext}}}({\bf{r}})\left| {w({\bf{r}} - {{\bf{r}}_{nlm}})} \right|} ^2\, ,
\label{Eq:ChemicalPotential} \\
U &= G\int {{d^3}{\bf{r}}{{\left| {w({\bf{r}} - {{\bf{r}}_{nlm}})} \right|}^4}} \, ,
\label{Eq:OnSiteInteractionRate} \end{aligned}$$ where $w({\bf{r}} - {{\bf{r}}_{nlm}})$ are the Wannier functions in the lowest Bloch band. Moreover, $ V_{\rm{lat}} (\bf{r})$ and $ V_{\rm{ext}} (\bf{r})$ are the lattice periodic potential and the external trapping potential, respectively. $G=4\pi a_s/M $ is the strength of the interactions between different atoms in the BEC, determined by the $s$-wave scattering length $a_s$ and the atomic mass $M$. Attractive (repulsive) on-site interaction corresponds to $U<0$ ($U>0$). In what follows we limit ourselves to the repulsive atom-atom interaction. Although here we consider the Bose-Hubbard Hamiltonian for trapped atoms in an optical lattice, we note that the Bose-Hubbard model can also be employed to describe other physical systems, such as systems of strongly interacting photons (for recent reviews see e.g., [@Hartmann; @Noh]) and coupled optomechanical systems [@Huang].
The Heisenberg equation of motion for the field operator reads $$\begin{gathered}
{\partial _t}{{\hat b}_{lmn}}\! =\! iJ({{\hat b}_{l - 1,mn}} \!+\! {{\hat b}_{l + 1,mn}} \!+\! {{\hat b}_{lm - 1,n}} + {{\hat b}_{lm + 1,n}} \!+ \!{{\hat b}_{lm,n - 1}} + {{\hat b}_{lmn + 1}}) \\
- i\frac{U}{2}({{\hat b}_{lmn}}{{\hat n}_{lmn}} + {{\hat n}_{lmn}}{{\hat b}_{lmn}} - {{\hat b}_{lmn}}) + i\mu {{\hat b}_{lmn}} .
\label{Eq:HeisenbergEquation}\end{gathered}$$ The key concept for the derivation of the curved spacetime on lattice is the central finite differences defined by $$\begin{aligned}
{f_{l \pm 1,m,n}}& = {f_{l,m,n}} \pm a{\Delta _x}{f_{l,m,n}} + \frac{{{a^2}}}{2}\Delta _x^2{f_{l,m,n}} + \ldots \label{Eq:FiniteDifference1}\\
{f_{l,m \pm 1,n}} &= {f_{l,m,n}} \pm a{\Delta _y}{f_{l,m,n}} + \frac{{{a^2}}}{2}\Delta _y^2{f_{l,m,n}} + \ldots \label{Eq:FiniteDifference2}\\
{f_{l,m,n \pm 1}} &= {f_{l,m,n}} \pm a{\Delta _z}{f_{l,m,n}} + \frac{{{a^2}}}{2}\Delta _z^2{f_{l,m,n}} + \ldots
\label{Eq:FiniteDifference3}\end{aligned}$$ where $ a $ is the lattice spacing. Using definitions (\[Eq:FiniteDifference1\]–\[Eq:FiniteDifference3\]) and neglecting terms of order higher than $ \mathcal{O}(a^2) $, we can write $$\begin{aligned}
\Delta _x^2{f_{lmn}} &= \frac{1}{a^2}({f_{l + 1,m,n}} + {f_{l - 1,mn}} - 2{f_{lmn}}),\label{Eq:FiniteDifference4}\\
\Delta _y^2{f_{lmn}} &= \frac{1}{a^2}({f_{l,m + 1,n}} + {f_{l,m - 1,n}} - 2{f_{lmn}}),\label{Eq:FiniteDifference5}\\
\Delta _z^2{f_{lmn}} &= \frac{1}{a^2}({f_{l,m,n + 1}} + {f_{l,m,n - 1}} - 2{f_{lmn}}),\label{Eq:FiniteDifference6}\end{aligned}$$ Using definitions (\[Eq:FiniteDifference4\]–\[Eq:FiniteDifference6\]), we can write Eq. (\[Eq:HeisenbergEquation\]) as $$\partial _t {\hat b}_j = iJ(a^2\Delta ^2{\hat b}_j + 6{\hat b}_j) - i\frac{U}{2} ( {\hat b}_j {\hat n}_j + {\hat n}_j{\hat b}_j - {\hat b}_j ) + i\mu {\hat b}_j\, ,
\label{Eq:DGPE}$$ where we have defined ${\Delta ^2} \equiv \Delta _x^2 + \Delta _y^2 + \Delta _z^2 $. Hereafter, the indices $(l,m,n)$ denoting the spatial dependence of the field operator are replaced by $ j $ for conciseness of notation. Eq. (\[Eq:DGPE\]) represents a discrete version of the Gross-Pitaevskii equation with an effective mass $m_{\rm{eff}}=(2Ja^2)^{-1}$ describing the trapped atoms in the optical lattice. The effective mass can be controlled by manipulating the lattice depth: a small value of $ J $ leads to a large effective mass, whereas a large value of $ J $ gives the atoms a small effective mass so that they can move freely.
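To make the discrete operator concrete, the second differences of Eqs. (\[Eq:FiniteDifference4\])–(\[Eq:FiniteDifference6\]) combine into the standard seven-point Laplacian stencil on the cubic lattice; a minimal numerical sketch (the test field and lattice spacing are arbitrary):

```python
import numpy as np

# Seven-point discrete Laplacian Delta^2 on a cubic lattice of spacing a,
# as it enters the discrete Gross-Pitaevskii equation (interior points only).
def discrete_laplacian(f, a):
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1, 1:-1] = (
        f[2:, 1:-1, 1:-1] + f[:-2, 1:-1, 1:-1]
        + f[1:-1, 2:, 1:-1] + f[1:-1, :-2, 1:-1]
        + f[1:-1, 1:-1, 2:] + f[1:-1, 1:-1, :-2]
        - 6.0 * f[1:-1, 1:-1, 1:-1]
    ) / a**2
    return lap

# Quick check on a quadratic test field, for which Delta^2(x^2+y^2+z^2) = 6:
a = 0.5
x, y, z = np.meshgrid(*(np.arange(8) * a,) * 3, indexing="ij")
print(discrete_laplacian(x**2 + y**2 + z**2, a)[3, 3, 3])   # -> 6.0
```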
Within the framework of the mean-field approximation the operator ${\hat b}_j$ can be decomposed into its mean value, $ b_j=\langle {\hat b}_j \rangle $, and a fluctuation component, $ \delta {{\hat b}_j} $, with zero mean value $${\hat b_j} = {b_j} + {\hat \chi _j} = {b_j}(1 + \delta {\hat b_j}){\mkern 1mu} ,
\label{Eq:mean-field}$$ where, for subsequent mathematical convenience, we have set $ \hat \chi_j= b_j \delta {\hat b}_j $ with $ \langle \delta {{\hat b}_j}\rangle =0 $. At zero temperature the fluctuations are essentially due to the quantum noise and at finite temperature, the thermal noise may also be relevant. By inserting Eq. (\[Eq:mean-field\]) in Eq. (\[Eq:DGPE\]), one may obtain the equations of motion for the mean field and the fluctuation operator which satisfy, respectively $$\partial _t b_j = iJ(a^2\Delta ^2b_j + 6 b_j) - i\frac{Ub_j}{2} ( 2n_j -1 ) + i\mu b_j\, ,
\label{Eq:background}$$ $$\partial _t \delta {\hat b}_j \!=\! iJa^2\!\left[\Delta ^2\delta {\hat b}_j \!+\! 2\frac{\Delta b_j}{b_j}\Delta \delta {\hat b}_j \right]\!-\! iUn_j( \delta {\hat b}_j \!+\! \delta \hat b_j^\dag ) \, .
\label{Eq:fluctuations}$$ Equation (\[Eq:fluctuations\]) is analogous to the Bogoliubov-de Gennes equation describing the dynamics of fluctuations in BEC [@Dalfovo].
![(Color online). Schematic illustration of the atoms over optical lattice sites: (top) superfluid and (bottom) Mott insulator states. The phase transition from superfluid to Mott insulator is achieved by increasing the lattice depth.[]{data-label="Fig:Fig1"}](Fig1.pdf){width="8.5cm"}
Density-phase representation and the effective metric for the fluctuations
==========================================================================
In order to get a clear physical insight into Eqs. (\[Eq:background\]) and (\[Eq:fluctuations\]) we express the atomic operator $ {\hat b}_j $ in the density-phase representation, namely $$\label{Eq:decompositon}
{\hat b}_j = \sqrt {{\hat n}_j} \exp (i{\hat \phi }_j)$$ where ${\hat n}_j$ and ${\hat \phi }_j$ denote the amplitude and the phase of the atomic operator, respectively. We note that some issues arise due to the controversial nature of the quantum phase operator. In principle, one can write the quantum field operator as in Eq. (\[Eq:decompositon\]) as long as $n_j=\langle\hat{n}_j\rangle\gg1$. In the limit of large $n_j$, the local number fluctuation and phase operators are conjugate variables and the Bose-Hubbard Hamiltonian is analogous to the quantum rotor model [@Javanainen]. Using a self-consistent mean-field expansion for large filling factor, one can investigate the time evolution of quantum fluctuations in the Bose-Hubbard model [@Fischer]. In other words, the present model is valid at large filling factors. One can easily relate the usual decomposition of the atomic operator in the linearized regime to the amplitude and phase fluctuations, denoted by $\delta{\hat n}_j$ and $\delta{\hat \phi }_j$, respectively: $$b_j = \sqrt { n_j} \exp (i \phi _j), \qquad \delta {{\hat b}_j} = i\delta {\hat \phi }_j + \delta {\hat n}_j/2n_j .$$ Substituting these two expressions into Eqs. (\[Eq:background\]) and (\[Eq:fluctuations\]) leads to the equations of motion for the mean density and the mean phase $${\partial _t}{n_j} + \Delta .\left( {{n_j}{\bf{v}}_j} \right) = 0\, , \label{Eq:hydrodynamics1}$$ $$\partial _t \phi _j = - J{a^2}\left[ (\Delta \phi _j)^2 - \frac{\Delta ^2n_j}{2n_j} + \frac{{{{(\Delta {n_j})}^2}}}{{4n_j^2}} \right]
+ (6J + U/2 + \mu ) -U{n_j},
\label{Eq:hydrodynamics2}$$ where we have defined ${{\bf v}_j}=2Ja^2 \Delta \phi_j$ to be the local velocity of the fluid. Equation (\[Eq:hydrodynamics2\]) can be transformed into the following equation $${\partial _t}{{\bf v}_j} + {{\bf v}_j}.\Delta {{\bf v}_j} = Ja^2\Delta (\mu +6J+U/2+ V_{\rm{Q}}- U{n_j})\, ,
\label{Eq:hydrodynamics3}$$ where $V_{\rm{Q}} = {J^2}({\Delta ^2}\sqrt {{n_j}} )/\sqrt {n_j} $ is the so-called “quantum potential”. Equations (\[Eq:hydrodynamics1\]) and (\[Eq:hydrodynamics3\]) are the mass and momentum conservation equations for the trapped bosons in optical lattice, respectively. The equations of motion for the density fluctuation and the phase fluctuation are obtained as $$\begin{aligned}
{\partial _t}\delta {{\hat n}_j} = - \Delta ({{\bf{v}}_j}\delta {{\hat n}_j} + 2J{a^2}{n_j}\Delta \delta {{\hat \phi }_j}), \label{Eq:hydrodynamics4}\end{aligned}$$ $$\begin{aligned}
\partial _t\delta {\hat \phi }_j = - {\bf{v}}_j\Delta \delta {\hat \phi }_j - \frac{{c_j^2}}{{2J{a^2}n_j}}\delta {{\hat n}_j} +\frac{{c_j^2}}{{8J{a^2}{n_j}}}{\xi_j ^2}\Delta [{n_j}\Delta (\frac{{\delta {{\hat n}_j}}}{{{n_j}}})], \label{Eq:hydrodynamics5}\end{aligned}$$ where $c_j = a \sqrt {2Jn_jU}$ and $\xi_j = a \sqrt{ 2J/(n_j U)} $ are the local speed of excitations and the so-called healing length, respectively. In the limit of large filling factor $n_j$, one can easily generalize the results of [@Javanainen; @Fischer] to a 3D optical lattice. Therefore, the dispersion relation for waves with frequency $ \omega({\bf k}) $ and wavevector $ {\bf k}= k_x\hat{i}+k_y\hat{j}+k_z\hat{k} $ is given by $$\omega^2 \left( {{k_x},{k_y},{k_z}} \right) = \sum\limits_{l = x,y,z} \left[ 4n_jUJ{{\sin }^2}\left( {\frac{{{k_l}a}}{2}} \right) + 4{J^2}{{\sin }^4}\left( {\frac{{{k_l}a}}{2}} \right) \right].
\label{Eq1}$$ We notice that for $ |{\bf k}| a \ll 1 $ it reduces to the well-known Bogoliubov spectrum $$\omega^2 \left( {{k_x},{k_y},{k_z}} \right) = \sum\limits_{l = x,y,z} {\left( {n_jUJ{a^2} + \frac{{{J^2}{a^4}}}{4}k_l^2} \right)} k_l^2.
\label{Eq2}$$ In this paper, our goal is to simulate a continuum spacetime via a discrete spacetime, and therefore we focus our attention on the case of long-wavelength fluctuations (the hydrodynamic limit). Within the Bogoliubov mean-field approximation for the fluctuations of the weakly depleted condensate, the excitation spectrum given by Eq. (\[Eq1\]) is gapless.
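The long-wavelength reduction of Eq. (\[Eq1\]) to Eq. (\[Eq2\]) can be checked numerically term by term; a minimal sketch with arbitrary illustrative parameter values:

```python
import numpy as np

# Compare one directional term of the lattice dispersion (Eq1) with its
# small-(ka) expansion (Eq2). Parameter values are arbitrary illustrations.
J, U, n, a = 1.0, 0.1, 50.0, 1.0

k = np.linspace(1e-3, np.pi / a, 400)
lattice = 4*n*U*J*np.sin(k*a/2)**2 + 4*J**2*np.sin(k*a/2)**4
longwave = (n*U*J*a**2 + (J**2 * a**4 / 4.0) * k**2) * k**2

rel_dev = np.abs(lattice - longwave) / lattice
for ka in (0.1, 1.0, 2.0):
    i = np.argmin(np.abs(k * a - ka))
    print(f"ka = {ka:3.1f}: relative deviation = {rel_dev[i]:.2e}")
```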
Superfluid phase
----------------
![(Color online). The generation of an acoustic black hole in the analog spacetime in the superfluid state corresponding to the metric of Eq. (\[Eq:Metric\]).[]{data-label="Fig:Fig2"}](Fig2.pdf){width="8.5cm"}
It is well known that the nature of the many-body eigenstates of the Bose-Hubbard Hamiltonian in Eq. (\[BoseHubbardHamiltonian\]) depends on the ratio of the on-site interaction energy $ U $ to the tunneling energy $ J $. An experimental way of controlling this ratio is based on the Feshbach resonance method [@Chin], in which the scattering length is controlled by an external magnetic field. Alternatively, the dimensionless parameter $U/J$ can be controlled by manipulating the depth of the optical lattice [@Islam]. The competition between minimizing the tunneling energy and the on-site atom-atom interaction manifests itself as the well-known superfluid-Mott insulator phase transition in the ground state of the Bose-Hubbard model (Fig. (\[Fig:Fig1\])). In the regime $U/J\ll1$, which corresponds to the superfluid phase, atoms tend to accumulate in a single quantum state and delocalize over the entire lattice. The many-particle state of the superfluid can be written as $${\left| \Psi \right\rangle _{\rm{SF}}} \propto \prod\limits_j {\left| {{\alpha _j}} \right\rangle } \, ,$$ i.e., the state vector at each site can be described as a Glauber coherent state $ {\left| {{\alpha _j}} \right\rangle } $. In the superfluid phase, in the hydrodynamic approximation, i.e., in the regime where the characteristic length of the spatial variations of the condensate density is much larger than $\xi_j$, the last term in Eq. (\[Eq:hydrodynamics5\]) can be safely ignored and thus we have $$\label{Eq:hydrodynamics60}\delta {{\hat n}_j} \simeq - 2J a^2n_j\left( {{\partial _t}\delta {{\hat \phi }_j} + {{\bf{v}}_j}\Delta \delta {{\hat \phi }_j}} \right)/c_j^2\, .$$ Combining Eqs. (\[Eq:hydrodynamics4\]) and (\[Eq:hydrodynamics60\]) results in the following equation for the phase fluctuations of the quantum field $$\label{eqfase}
- ({\partial _t} + \Delta {{\bf{v}}_j})\frac{{J{a^2}{n_j}}}{{c_j^2}}({\partial _t} + {{\bf{v}}_j}\Delta )\delta {{\hat \phi }_j} + \Delta J{a^2}{n_j}\Delta \delta {{\hat \phi }_j} = 0\;.$$ These fluctuations, within the hydrodynamic approximation, are analogous to the collective quantum field on a curved metric. Therefore, the phase fluctuations of the quantum field obey the covariant Klein-Gordon equation for a massless scalar field propagating in a curved discrete spacetime $$\Box \delta \hat \phi_j = \frac{1}{\sqrt{-g}}\Delta_\mu (\sqrt{-g}g^{\mu\nu}\Delta_\nu \delta \hat \phi_j )=0\;,
\label{box}$$ where $g^{\mu \nu}$ is the inverse curved spacetime metric and $g$ is the determinant of the metric $g_{\mu \nu}$ $$g_{\mu \nu }^{SF} = \sqrt {2{n_j}J{a^2}/U} \left[ {\begin{array}{*{20}{c}}
{ - \left( {c_j^2 - {{\bf{v}}_j}.{{\bf{v}}_j}} \right)}&{ - v_j^l}\\
{ - v_j^m}&{{\delta _{lm}}}
\end{array}} \right].
\label{Eq:Metric}$$ Here, the Greek summation indices range from $0$ to $3$ and the spatial indices $(l,m)$ range from $1$ to $3$. The line element of this spacetime is given by $$\begin{aligned}
\Delta {s^2} &= {g_{\mu \nu ,j}}\Delta {x^\mu }\Delta {x^\nu } \nonumber\\
& = \!\sqrt{\frac{{2{n_j}J {a^2} }}{U}}\! \left[ {\! - {c_j^2}d{t^2} \!+\! (\Delta {{\bf{x}}_j} \!-\! {{\bf{v}}_j}dt)(\Delta {{\bf{x}}_j} \!-\! {\bf{v}}_jdt)} \right].\end{aligned}$$ At the point $c_j=|{\bf{v}}_j|$ or, equivalently, when $ U/J = 2{\left( {{\phi _{j + 1}} - {\phi _j}} \right)^2}/{n_j} $, the metric of Eq. (\[Eq:Metric\]) exhibits a singularity. The meaning of singularities in fluid dynamics and general relativity has been discussed in [@Cadoni]. We refer to this condition as an acoustic analog black hole (depicted in Fig. (\[Fig:Fig2\])). In such a situation, the sound waves traveling with $c_j<|{\bf{v}}_j|$ are trapped inside the *supersonic* region and are not able to propagate backward into the *subsonic* region. Since both the sound speed and the fluid velocity can be controlled by the lattice depth, one can generate an analog black hole in the system. The generated analog black hole marks the onset of the superfluid Landau instability [@Raman]. For a homogeneous lattice, $ \Delta \phi_j=0 $ or equivalently $ \textbf{v}_j=0 $, the effective metric has the form $$g_{\mu \nu }^{H} = \sqrt {2 n_j J a^2/U} \left[ {\begin{array}{*{20}{c}}
{ - c_j^2} &{0}\\
{ 0}&{{\delta _{lm}}}
\end{array}} \right],
\label{Eq:MetricHomogeneous}$$ which is suitable for the simulation of various analog metrics in the curved spacetime.
Mott insulator phase
--------------------
Starting from the superfluid phase (see Fig. (\[Fig:Fig1\])), one can reach the Mott insulator phase by increasing the lattice depth. The Mott insulator regime emerges when the tunneling energy is much smaller than the on-site repulsion energy, i.e., $U/J\gg1$. In this regime, all atoms are completely localized over the lattice [@Greiner]. The many-body state vector of the ground state can be expressed as $${\left| \Psi \right\rangle _{\rm{MI}}} \propto \prod\limits_j {\left| {{n_j}} \right\rangle } \, ,$$ where ${\left| {{n_j}} \right\rangle }$ is the eigenstate of the number operator of particles on the $j$th site, $\hat{n}_j$. As stated in the beginning of the section, we consider large filling factor $n_j\gg1$ for the Mott insulator regime. Rewriting Eq. (\[Eq:hydrodynamics5\]) in terms of $ J $ and $ U $ leads to $${\partial _t}\delta {{\hat \phi }_j} = J{a^2}\left[ { - 2\Delta {\phi _j}\Delta \delta {{\hat \phi }_j} + \frac{1}{{2{n_j}}}\Delta [{n_j}\Delta (\frac{{\delta {{\hat n}_j}}}{{{n_j}}})]} \right] - U\delta {{\hat n}_j},
\label{Eq:MottHydrodynamics}$$ In the Mott insulator regime, $U\gg J$, one can approximately neglect the first term in Eq. (\[Eq:MottHydrodynamics\]), and thus $${\partial _t}\delta {{\hat \phi }_j} = - U\delta {{\hat n}_j}. \label{Eq:hydrodynamics6}$$ By taking the time derivative of Eq. (\[Eq:hydrodynamics4\]) and using Eqs. (\[Eq:hydrodynamics1\]), (\[Eq:hydrodynamics2\]) and (\[Eq:hydrodynamics6\]), we readily get $$\begin{aligned}
\label{Eq:hydrodynamics7}
\partial _t^2\delta {{\hat n}_j} = - 2J{a^2}\Delta (\delta {{\hat n}_j}\Delta {\partial _t}{\phi _j} + \Delta {\phi _j}{\partial _t}\delta {{\hat n}_j} + \Delta \delta {{\hat \phi }_j}{\partial _t}{n_j} - U{n_j}\Delta \delta {{\hat n}_j}),\end{aligned}$$ Assuming $\Delta {n_j} \ll {n_j} $, and keeping only the last term in the parentheses one has $$\label{Eq:hydrodynamics7}
\partial _t^2\delta {{\hat n}_j} - \Delta (c_j^2 \Delta \delta {{\hat n}_j}) = 0,$$
Thus, in this regime, the effective spacetime for the density fluctuations is given by the metric $ g_{\mu \nu }^{MI} =g_{\mu \nu }^{H} $ of Eq. (\[Eq:MetricHomogeneous\]).
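A minimal numerical sketch of this density-fluctuation wave equation is given below, assuming $c_j^2 = 2 n_j J a^2 U$ (as implied by keeping only the last term in the derivation above) and discretizing the lattice operator as a forward/backward nearest-neighbour difference pair; the parameter values and the initial Gaussian fluctuation are purely illustrative.

```python
import numpy as np

# Illustrative leapfrog integration of the lattice wave equation
# d^2(dn)/dt^2 = D( c^2 D dn ), with D the nearest-neighbour difference operator.
J, U, a = 1.0, 5.0, 1.0
n = np.full(128, 30.0)
c2 = 2.0 * n * J * a**2 * U                        # local sound speed squared, c_j^2

dn = np.exp(-0.05 * (np.arange(128) - 64.0)**2)    # initial density fluctuation
dn_prev = dn.copy()                                # zero initial velocity
dt = 0.2 / np.sqrt(c2.max())                       # CFL-limited time step

def lattice_term(f):
    grad = np.diff(f)                              # forward difference D f
    flux = 0.5 * (c2[:-1] + c2[1:]) * grad         # c^2 evaluated on the links
    out = np.zeros_like(f)
    out[1:-1] = np.diff(flux)                      # backward difference of the flux
    return out

for _ in range(500):
    dn_next = 2.0 * dn - dn_prev + dt**2 * lattice_term(dn)
    dn_prev, dn = dn, dn_next

print("fluctuation norm after propagation:", np.sum(dn**2))
```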
Realization of various metrics in the Bose-Hubbard model
--------------------------------------------------------
Finally, we consider three other examples of the realization of spacetime metrics. The first example is the Friedmann-Lemaître-Robertson-Walker (FLRW) metric [@Walecka; @Birrell], an exact solution of Einstein’s field equations of GTR that describes a universe which is homogeneous and isotropic on each surface of constant time but is not static (the metric is time dependent). The FLRW metric corresponds to a spacetime geometry defined by the line element $$\label{Eq:FLRW}
ds^2_{\rm{FLRW}}=-c^2dt^2+R^2(t) \left[ dx^2 +dy^2+dz^2\right]\, ,$$ where $c$ is the speed of light and $ R(t) $ is the scale factor. By properly modulating the tunneling energy as $ J(t)=J_0\exp(-Ht)$, one can simulate the FLRW metric with $ R(t)= \exp(Ht/2) $ in the superfluid phase. A modulated tunneling energy leads to a time-dependent effective mass $ m\left( t \right) = {(2J_0a^2)^{-1}}{\exp{(Ht)}}$. According to Eq. (\[Eq:TunnlingRate\]), one can modulate the tunneling energy by modulating the optical lattice (exponential modulation of the tunneling energy was discussed in [@Fischer; @Schutzhold]). The dependence of the effective mass on time and space can be studied by introducing the mass tensor (the details of the analysis are given in ref. [@Barcelo]). In the present case, the effective mass depends only on time and hence the metric of Eq. (\[Eq:Metric\]) is generalized as $$g_{\mu \nu }^{SF} = \sqrt {2{n_j}{J_0}{a^2}/U} \left[ {\begin{array}{*{20}{c}}
{ - \left( {c_j^2 - {{\bf{v}}_j}.{{\bf{v}}_j}} \right)}&{ - v_j^l}\\
{ - v_j^m}&{{e^{Ht}}{\delta _{lm}}}
\end{array}} \right].$$ In the superfluid phase, for a homogeneous lattice the line element can be written as $$\Delta {s^2} = \sqrt {\frac{{2{n_j}{J_0}{a^2}}}{U}} \left[ { - c_j^2d{t^2} + {e^{Ht}}\left( {\Delta {x^2} + \Delta {y^2} + \Delta {z^2}} \right){\mkern 1mu} } \right].$$ As another example let us consider a homogeneous superfluid. The metric of the corresponding spacetime is a Minkowski metric given by $${\eta _{\mu \nu }} = \sqrt {2nJ{a^2}/U} \left[ {\begin{array}{*{20}{c}}
{ - {c^2}}&0\\
0&{{\delta _{lm}}}
\end{array}} \right]{\mkern 1mu} \, .$$ In general the metric ${g_{\mu \nu }}$ can be decomposed into a flat metric given by $ \eta_{\mu \nu} $ and a curved spacetime metric given by $ {h_{\mu \nu }}$. A small deviation from homogeneity in the background field, $ {n_j} = n + {\varepsilon _j}$, manifests itself as a curved spacetime in the metric $ {h_{\mu \nu }} $ given by $${h_{\mu \nu }} = \sqrt {{Ja^2}/{{(2nU)}}} {\varepsilon _j}\left[ {\begin{array}{*{20}{c}}
{ - 3{c^2}}&0\\
0&{{\delta _{lm}}}
\end{array}} \right]\, .$$ Therefore, one can obtain an analog curved spacetime in the superfluid phase by manipulating the background density. Similarly, one can use the Mott insulator phase metric $g_{\mu \nu }^{MI} $ to mimic curved spacetime. In other words, any inhomogeneity in the background can be translated into spacetime curvature.
As the third example, we discuss the generation and propagation of analog gravitational radiation in the trapped atoms inside an optical lattice. As in the previous example, we set $h_{\mu\nu}$ to be a symmetric tensor field (with respect to the swapping of its indices) defined on a flat Minkowski background spacetime. In this case, the tensor field can vary rapidly with time. A 1D gravitational wave metric corresponds to a spacetime geometry defined by the line element $$\label{Eq:GW}
ds^2_{\rm{GW}}=-c^2dt^2+(1+h_+(t)) dx^2 ,$$ where $h_{+} $ is a time-dependent function describing a gravitational wave with $+$ polarization. Choosing a modulated tunneling energy according to $ J(t)=J_0 (1-\epsilon_+\sin\nu_+t) $ where $ \epsilon_+ $ is a small amplitude and $ \nu_+ $ is the frequency of the modulation leads to a time-dependent effective mass $ m(t)\simeq {(2J_0a^2)^{-1}}(1+\epsilon_+\sin\nu_+t) $. Therefore, the metric of Eq. (\[Eq:Metric\]) for a homogeneous 1D lattice with a modulated tunneling energy $ J(t)=J_0 (1-\epsilon_+\sin\nu_+t) $ can be written as $$g_{\mu \nu }^{SF} = \sqrt {2{n_j}{J_0}{a^2}/U} \left[ {\begin{array}{*{20}{c}}
{ - c_j^2 }&0\\
0&{{(1+\epsilon_+\sin\nu_+t) }}
\end{array}} \right].$$ Its corresponding line element can also be written as $$\Delta {s^2} = \sqrt {\frac{{2{n_j}{J_0}{a^2}}}{U}} \left[ { - c_j^2d{t^2} + (1+\epsilon_+\sin\nu_+t)\Delta {x^2} } \right],$$ which is the same as the line element given by Eq. (\[Eq:GW\]) up to a conformal factor.
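As a numerical sanity check on this modulation scheme, the Python sketch below builds the modulated tunneling energy $J(t)=J_0(1-\epsilon_+\sin\nu_+ t)$, the resulting effective mass, and the corresponding metric perturbation, and verifies that it reduces to $h_+(t)\simeq\epsilon_+\sin\nu_+ t$ to first order in $\epsilon_+$. The numerical values of $\epsilon_+$ and $\nu_+$ are arbitrary assumptions.

```python
import numpy as np

# Sketch of the tunneling modulation that mimics a '+'-polarized gravitational wave.
J0, a = 1.0, 1.0
eps_plus, nu_plus = 1e-2, 2.0 * np.pi * 0.1        # assumed modulation amplitude and frequency

t = np.linspace(0.0, 50.0, 2001)
J_t = J0 * (1.0 - eps_plus * np.sin(nu_plus * t))   # modulated tunneling energy J(t)
m_eff = 1.0 / (2.0 * J_t * a**2)                    # time-dependent effective mass
h_plus = m_eff * (2.0 * J0 * a**2) - 1.0            # metric perturbation g_xx - 1

# To first order in eps_plus, h_plus(t) ~ eps_plus*sin(nu_plus*t), as in the GW line element.
print("max |h_+ - eps*sin(nu t)| =",
      np.max(np.abs(h_plus - eps_plus * np.sin(nu_plus * t))))
```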
Conclusions
===========
In summary, we have formulated quantum field theory in discrete spacetime based on the Bose-Hubbard Hamiltonian describing a trapped BEC in an optical lattice. The corresponding phase fluctuations of the field operator can be well described by the Klein-Gordon equation for a massless particle propagating in a curved discrete spacetime. The superfluid and the Mott insulator phases are associated with two different metrics. If trapped atoms in an optical lattice act as a superfluid, they must exhibit the known features of superfluids, including their analogy to space-time geometries. The emergence of the phase transition from the superfluid to Mott insulator depends on the system parameters. In the Mott insulator phase, the phase at each lattice site is random, so it only makes sense to consider amplitude fluctuations. The amplitude fluctuations also satisfy a space-time geometry, but it is quite obvious that this geometry cannot couple space and time, since tunneling is strongly suppressed in the Mott insulator. In addition, various analog metrics can be simulated by adjusting the system parameters. The proposed approach provides a suitable platform for quantum simulation of continuous quantum fields in curved spacetimes by their counterparts in discrete spacetimes. Moreover, the present study can be extended to some other physical systems in which the Bose-Hubbard model is realizable, such as strongly interacting photons [@Hartmann; @Noh] and coupled optomechanical systems [@Huang].
Acknowledgments {#acknowledgments .unnumbered}
===============
We wish to thank the Office of Graduate Studies of the University of Isfahan for its support.
[46]{}
References (only DOI/URL identifiers are recoverable): doi:10.1103/RevModPhys.86.153; doi:10.12942/lrr-2011-3; doi:10.1103/RevModPhys.71.463; doi:10.1126/science.269.5221.198; doi:10.1103/PhysRevLett.75.3969; doi:10.1103/PhysRevLett.85.4643; doi:10.1103/PhysRevA.63.023611; http://stacks.iop.org/0264-9381/18/i=6/a=312; doi:10.1103/PhysRevD.83.124047; doi:10.1103/PhysRevD.87.124018; doi:10.1103/PhysRevLett.108.071101; doi:10.1103/PhysRevD.78.084013; doi:10.1103/PhysRevA.69.033602; doi:10.1103/PhysRevLett.91.240407; doi:10.1103/PhysRevD.69.064021; doi:10.1103/PhysRevA.70.063615; doi:10.1103/RevModPhys.78.179; doi:10.1016/j.aop.2004.09.010; doi:10.1080/00018730701223200; doi:10.1103/RevModPhys.80.885; doi:10.1103/PhysRevLett.81.3108; http://stacks.iop.org/1464-4266/5/i=2/a=352; doi:10.1016/j.aop.2011.10.007; doi:10.1016/j.ssc.2007.02.043; doi:10.1103/PhysRevA.84.050101; doi:10.1103/PhysRevA.94.043842; doi:10.1103/PhysRevA.60.4902; doi:10.1103/PhysRevA.77.043615; doi:10.1103/RevModPhys.82.1225; doi:10.1103/PhysRevD.72.084012; doi:10.1103/PhysRevLett.83.2502; doi:10.1103/PhysRevLett.97.200601
|
---
author:
- |
\
University of Maryland, Department of Astronomy and Joint Space-Science Institute, College Park, MD 20742-2421, USA\
E-mail:
- |
Stratos Boutloukos\
Theoretical Astrophysics, University of Tübingen, Auf der Morgenstelle 10, 72076, Germany\
E-mail:
- |
Ka Ho Lo\
Center for Theoretical Astrophysics and Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080, USA\
E-mail:
- |
Frederick K. Lamb\
Center for Theoretical Astrophysics and Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080, USA and Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801-3074, USA\
E-mail:
title: 'Implications of high-precision spectra of thermonuclear X-ray bursts for determining neutron star masses and radii'
---
Introduction
============
A few years after the discovery of thermonuclear X-ray bursts from accreting neutron stars, Jan van Paradijs proposed a method for using observations of thermonuclear X-ray bursts to constrain both the masses and radii of the stars and hence to provide key information on the properties of cold high-density matter [@vP79]. In brief, the argument was that (1) if the luminosity of a source during the so-called touchdown phase of photospheric radius expansion bursts was the Eddington luminosity of the neutron star, and (2) if during the cooling phase of the bursts the entire surface of the star emits uniformly, then a combination of the observed touchdown flux and area normalization plus knowledge of the distance to the source and the composition of its atmosphere suffices to determine the star’s mass and radius.
The first applications of this method yielded puzzling results. Burst spectra are very close to Planck spectra, but the fitted Planck temperatures are commonly $kT_{\rm fit}\sim 3$ keV at the peaks of bursts, which is higher than is possible if the atmosphere is purely gravitationally confined [@mars82]. Also, in many cases application of this method leads to estimates of the stellar radius that are implausibly small ($<5$ km). It was then pointed out that although the [*shape*]{} of the spectrum may be qualitatively similar to a Planck spectrum, atmospheric opacity effects can shift the peak of the spectrum so that Planck fits of X-ray data yield a fitted temperature that can be up to $\sim 2$ times the surface effective temperature. It has been largely accepted that such models describe the spectra correctly, but prior to our work no comparison had been made with data that are capable of distinguishing between simple Planck or Bose-Einstein spectra and model atmosphere spectra; the differences are subtle, and require data taken with the best available instrument (the [*Rossi*]{} X-ray Timing Explorer Proportional Counter Array \[RXTE PCA\]) from long bursts that maintain steady spectra for tens of seconds as opposed to the tenths of a second that are usual for typical bursts.
Here we describe and elaborate on the comparisons we first reported in [@bout10]. We find, surprisingly, that although a simple Bose-Einstein function fits the highest-precision single spectra available in the RXTE archive, the most commonly used atmospheric spectral models are inconsistent with such spectra. This calls into question inferences made using these models. In Section 2 we give an overview of the principles behind atmospheric spectral models and why they shift the spectral peak. In Section 3 we discuss our comparisons with RXTE data and what they imply. Finally, in Section 4 we discuss ongoing work in which we compare new atmospheric spectral models with the data, and in particular the indications that these models may fit long data sets better than Bose-Einstein models. We discuss the implications of such fits, particularly that they may yield constraints on the mass and radius via joint constraints on the surface gravity and redshift, but caution that the approximations in current models do not yet allow us to draw robust conclusions.
The Principles of Burst Model Atmosphere Spectra
================================================
In the past three decades, many groups have calculated model atmosphere spectra relevant for bursts (e.g., [@lond84; @made04; @majc05]). The high temperatures of the bursts mean that unless the atmosphere has unexpectedly large metallicity, atoms will be fully ionized. The only opacity sources are then free-free absorption (important at sufficiently low photon energies) and Compton scattering (expected to dominate over most or all of the observed PCA energy range).
In an idealized situation where the only opacity is energy-independent scattering, we can understand the shift in the peak of the spectrum caused by the scattering using a simple thought experiment. Suppose that the atmosphere has a net surface radiative flux $F=\sigma T_{\rm eff}^4$, where $\sigma$ is the Stefan-Boltzmann constant and $T_{\rm eff}$ is the effective temperature. Suppose also that it is in complete thermal balance and hence emits a Planck spectrum, but that on top of the atmosphere is a scattering layer that lets an energy-independent fraction $0<f<1$ of the photons through, the rest being reflected and rethermalized in the atmosphere. Because the flux $F$ must emerge, the atmosphere heats up, still in thermal equilibrium, to a temperature $T_{\rm fit}=f^{-1/4}T_{\rm eff}$. The net flux is still $F=f\sigma(f^{-1/4}T_{\rm eff})^4=\sigma T_{\rm eff}^4$, so the effective temperature ([*defined*]{} as $T_{\rm
eff}=(F/\sigma)^{1/4}$) is unchanged and the emergent spectrum is still a perfect Planck spectrum, but its temperature is $f^{-1/4}$ times the effective temperature. Figure \[tcorr\] shows a typical example [@majc05] of how model atmospheres shift the peak of the spectrum upward.
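A quick way to see the size of this effect is to tabulate the ratio $T_{\rm fit}/T_{\rm eff}=f^{-1/4}$ for a few illustrative transmission fractions $f$ (the values of $f$ below are assumed, not measured):

```python
# T_fit / T_eff = f**(-1/4); a ratio of ~1.8 (as in Fig. [tcorr]) corresponds to f ~ 0.1.
for f in (1.0, 0.5, 0.3, 0.1):
    print(f"f = {f:4.2f}  ->  T_fit / T_eff = {f ** -0.25:4.2f}")
```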
![Illustrative figure showing the upward shift in the peak of X-ray burst spectra produced by model atmospheres. The solid line shows a model spectrum from [@majc05], the dashed line is the best fit of the Planck function to the spectrum, with an adjustable normalization to describe the reduction of the emergent flux caused by scattering, and the dotted line is the Planck spectrum at the effective temperature. For this case, where the flux is $\sim 80$% of Eddington, the best-fit Planck temperature has a temperature $\sim 1.8$ times the effective temperature. This is the main reason the dotted Planck spectrum differs from the best-fit Planck spectrum. Although the shape of the model atmosphere spectrum is close to the shape of a Planck spectrum, deviations are evident at low energies and at high energies.[]{data-label="tcorr"}](tcorr.eps){width="60.00000%"}
Figure \[tcorr\] also shows how model atmosphere spectra typically deviate from the Planck spectrum that best fits them. The deviation at low energies is caused primarily by the energy-dependence of the free-free opacity whereas the deviation at high energies is due primarily to the energy-dependent Klein-Nishina correction to the Thomson scattering cross section. Thus although the model atmosphere spectra have shapes that are close to the shape of a Planck spectrum, there are deviations that can in principle be observed.
Comparison of Models with Data
==============================
Prior to our work in [@bout10], very few comparisons had been made of model spectra with burst data, and none used data with enough counts to distinguish between qualitatively different models (e.g., Planck spectra fitted at least as well as model atmosphere spectra in the work of [@fost86]). It is therefore critical to use long stretches of data taken with the RXTE PCA during intervals when the temperature is nearly constant.
Most thermonuclear X-ray bursts last only a few seconds, during which time the temperature changes rapidly enough that a single-temperature fit is only appropriate for data segments shorter than a few tenths of a second. However, we found that around the peak of the superburst from 4U 1820–30 (see [@stro02]), there was a 64-second segment with $\sim\,800,000$ counts that had a nearly constant temperature. This is the most precise available data set. We note that although the nuclear processes in superbursts and canonical bursts are different, their atmospheric processes are the same and hence for the purpose of spectral fitting this is a representative data set. We also note that in the later portions of this burst, high time resolution data show no evidence that the spectrum changes on time scales $<10$ s, supporting our expectation that the time scale of variability is much longer in superbursts than in canonical bursts.
Our first comparison was with a Bose-Einstein spectral model, in which the continuum is $$F(E,T)\propto E^3/\left[\exp((E-\mu)/kT)-1\right]\; .$$ Here $E$ is the photon energy, $T$ is the temperature, and $\mu<0$ is the chemical potential. This spectrum, which generalizes and is more physically realizable than a Planck spectrum, is the equilibrium spectrum for fully saturated Comptonization; it could thus be a reasonable approximation to the spectrum produced in a scattering-dominated atmosphere [@bout10]. In addition to the continuum component, we follow [@stro02] in adding as additional components that originate far from the star a zero-redshift iron emission line, an edge, and photoelectric absorption.
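For reference, the continuum component alone can be written down in a few lines of Python; the iron line, edge, and photoelectric-absorption components described above are omitted here, and the temperature and chemical potential are set to the best-fit values quoted later in the text.

```python
import numpy as np

def bose_einstein_flux(E_keV, kT_keV, mu_keV):
    """Unnormalized Bose-Einstein continuum F(E) ~ E^3 / [exp((E - mu)/kT) - 1].

    mu must be negative; mu = 0 recovers a Planck spectrum."""
    return E_keV**3 / np.expm1((E_keV - mu_keV) / kT_keV)

# Best-fit values quoted for the 4U 1820-30 superburst segment, over the 3-32 keV band.
E = np.linspace(3.0, 32.0, 200)
model = bose_einstein_flux(E, kT_keV=2.85, mu_keV=-0.76)
print("photon energy of spectral peak: %.1f keV" % E[np.argmax(model)])
```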
We show the result in Figure \[befit\]. Remarkably, the simple Bose-Einstein form fits this $\sim$800,000 count spectrum extremely well, with $\chi^2$/dof=55.8/50 over the 3–32 keV range of our fit. The best-fit temperature and chemical potential are $kT=2.85$ keV and $\mu=-0.76$ keV. The data here are from where the flux measured with RXTE is $\sim 90$% of the peak flux of the burst, but we also find good fits to data at 100%, 80%, and 25% of the peak measured flux. The excellent fit of the Bose-Einstein shape is therefore not confined to the peak.
![Fit of a model with a Bose-Einstein continuum plus a zero-redshift iron line and edge and photoelectric absorption to $\sim$800,000 counts of data near the peak of a superburst from 4U 1820–30. The top panel shows the count data (shown with error bars representing the statistical uncertainties in the data) and the fit, shown by a solid line. The bottom panel shows the residuals. Contrary to our initial expectations, the fit is superb. Figure adapted from [@bout10].[]{data-label="befit"}](1820bet64T.eps){width="60.00000%"}
The high quality of this fit suggests challenges for spectral modelers. In particular, two questions emerge: why are the spectra so close to Bose-Einstein, and why is the magnitude of the chemical potential much less than $kT$? To elaborate on the latter point: if there were a significant deficit of photons compared to what would be expected for a Planck spectrum at $kT$, then $\mu<-kT$, so $|\mu|\ll kT$ implies that the supply of photons is close to what is needed to fill a Planck spectrum. Ongoing work by Fred Lamb and Ka Ho Lo suggests that these requirements can be met in extended atmospheres with appropriate densities (low enough that scattering dominates, but high enough that photons can be supplied at the required rate). It is an open question whether these requirements are met in realistic models.
Although a Bose-Einstein model fits the highest-precision PCA data well, the implications are difficult to establish with certainty. This is because, as we indicated earlier, Thomson scattering in the outer atmosphere can in principle impose a large dilution factor without causing any deviation from a nearly-perfect Planck or Bose-Einstein spectrum established at larger optical depths. In this case the efficiency $f$ of the emission can be less than unity by a significant factor. If the emission efficiency is high, the spectrum we have measured implies that the surface radiative flux is significantly super-Eddington and extra confinement is required (e.g., [@bout10] explored confinement by a tangled magnetic field generated during bursts). If instead the efficiency is low, the surface radiative flux could be sub-Eddington. For conventional, gravitationally-confined atmospheres to be favored would require the spectra they predict to fit much better than a Bose-Einstein spectrum, but this is not possible for single data segments because our fits of Bose-Einstein models to such segments yield $\chi^2/{\rm dof}\sim
1$. It is, nonetheless, important to determine whether published model atmosphere spectra also yield $\chi^2/{\rm dof}\sim 1$, because such models are only viable if this is the case.
We show such a comparison in Figure \[model\], where we compare representative models from [@made04; @majc05] with the same 64 seconds of data from the 4U 1820–30 superburst that we used previously. A direct fit of the data is not possible, because the available grids of these models are not fine enough and the relevant composition (pure helium) is not computed. Therefore, as we did in [@bout10], we compare the [*shape*]{} of the model spectra with the [*shape*]{} of Bose-Einstein spectra. That is, starting from our observation that the observed spectra are very close to Bose-Einstein in form, we produce synthetic RXTE data using the model spectra and fit those data with a Bose-Einstein model. As can be seen from Figure \[model\], there are strong and systematic deviations between these shapes. These deviations are similar for different compositions (H/He with no metals versus a solar composition), surface gravities ($\log_{10}
(g/{\rm cm~s}^{-2})=14.8$ versus 14.3), effective temperatures ($T_{\rm eff}=3\times 10^7$ K versus $2\times 10^7$ K), and surface radiative fluxes relative to the Eddington flux ($F=0.8~F_{\rm Edd}$ versus $0.5~F_{\rm Edd}$). We conclude that the spectral shape predicted by these models is significantly different from what is observed. We also found this to be true in a later segment of data where the observed flux was $\sim 50$% of the maximum, versus $\sim
90$% in our primary data set.
![Fit of a Bose-Einstein continuum spectrum to continuum data with $\sim\,800,000$ counts synthesized using (top panel) a H/He composition, $\log_{10} (g/{\rm cm~s}^{-2})=14.8$, $T_{\rm eff}=3\times 10^7$ K model spectrum from [@made04] ($F=0.8~F_{\rm Edd}$ for this spectrum) and (bottom panel) a solar composition, $\log_{10} (g/{\rm cm~s}^{-2})=14.3$, $T_{\rm eff}=2\times 10^7$ K model spectrum from [@majc05] ($F=0.5~F_{\rm Edd}$). Clearly, the predicted model spectra are very different in form from the Bose-Einstein shape, and hence from observed spectra. Caution is therefore appropriate in drawing inferences about stellar masses and radii using these models. Figure adapted from [@bout10].[]{data-label="model"}](residuals.eps){width="60.00000%"}
Given that use of spectral models that are inconsistent with the best data may introduce systematic errors in estimates of neutron star masses and radii, caution seems warranted. An additional indicator of possible biases in such estimates was mentioned briefly by [@guve10], and in more detail by [@stei10]. When the standard assumptions of Eddington luminosity at touchdown and full-surface uniform emission in the burst tail are employed along with the best measurements of quantities such as the distance, touchdown flux, and area normalization, the derived mass and radius are not real but are instead complex quantities, an obvious impossibility. Indeed, [@stei10] find that only a fraction $1.5\times 10^{-8}$ of the prior probability distribution of these quantities employed by [@guve10] allow a solution for 4U 1820–30. Such a small allowed region in parameter space produces small error bars on the mass and radius, but may indicate that the assumptions on which the analysis is based are incorrect.
Although previously published models differ strongly from the most precise data, more recent models show promise of much better fits. We discuss these in the next section.
More Recent Models and Future Directions
========================================
Recently, new burst model atmosphere spectra have been calculated [@sule10]. These spectra were computed using an approximate scattering integral (e.g., the Fokker-Planck approximation was made), but a fine enough grid was constructed with enough different compositions (including pure helium) that they could be fit directly to PCA data. Our preliminary results using these spectra are very encouraging; for example, a pure helium atmosphere with $F=0.95~F_{\rm Edd}$ fits our 64-second segment of data with $\chi^2/{\rm dof}=42.3/48$. This is better than the best fit of Bose-Einstein spectra to the same data, although not significantly so. These new models provide comparably good fits to data later in the burst, when the observed flux is half the maximum and previously published models still have shapes strongly discrepant with what is observed.
This is encouraging, and one might at first imagine that this would allow us to apply the van Paradijs [@vP79] method using the models reported in [@sule10] or new ones computed without some of the current approximations. Unfortunately, this appears not to be the case. We fit 102 consecutive 16-second data segments near the beginning of the 4U 1820–30 superburst (but after apparent touchdown) using the models from [@sule10], and found that even when we fixed the surface gravity and surface redshift (hence fixing the radius of the emitting surface) the inferred size of the emitting area changes systematically by $\sim 20$% over the data segments. One might wonder whether the whole star is, in fact, emitting but the photospheric or thermalization radius is changing. But a change of the amount observed would require a surface radiative flux very close to Eddington to achieve the necessary large scale height, and such fluxes are highly inconsistent with the observed spectra. Instead, it appears that the fraction of the surface that emits changes systematically, in conflict with the standard simplifying assumption of the van Paradijs method.
The encouragingly good fits using the models from [@sule10] do suggest an alternative method for determining the mass and radius, originally suggested in [@majc05b]. In addition to composition and surface radiative flux, the surface gravity is a parameter in the models and to relate the surface spectrum to what we see at infinity we must also include the surface redshift in the fit. The surface gravity $g$ and surface redshift $z$ depend differently on the gravitational mass $M$ and circumferential radius $R$; for example, for a nonrotating star whose exterior spacetime is therefore Schwarzschild, $1+z=(1-2GM/Rc^2)^{-1/2}$ and $g=(GM/R^2)(1+z)$. Inverting then gives us $$R=(c^2/2g)(1-1/(1+z)^2)(1+z)\quad{\rm and}\ M=(Rc^2/2G)(1-1/(1+z)^2)\;,$$ where $c$ is the speed of light and $G$ is Newton’s constant. Thus, if the surface redshift and surface gravity can be constrained separately, we can constrain $M$ and $R$.
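A minimal numerical sketch of this inversion is given below; the formulas are those quoted above for a nonrotating star, while the example values of $g$ and $z$ are assumed for illustration only (the surface gravity matches one of the model values mentioned earlier, $\log_{10} g = 14.3$).

```python
# Mass-radius inversion for a star with a Schwarzschild exterior; example g, z are assumed.
G = 6.674e-8          # cm^3 g^-1 s^-2
c = 2.998e10          # cm s^-1
M_sun = 1.989e33      # g

def mass_radius_from_g_z(g, z):
    """Return (M [g], R [cm]) given surface gravity g [cm s^-2] and surface redshift z."""
    R = (c**2 / (2.0 * g)) * (1.0 - 1.0 / (1.0 + z)**2) * (1.0 + z)
    M = (R * c**2 / (2.0 * G)) * (1.0 - 1.0 / (1.0 + z)**2)
    return M, R

M, R = mass_radius_from_g_z(g=10**14.3, z=0.3)
print("M = %.2f Msun, R = %.1f km" % (M / M_sun, R / 1e5))
```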
To do this requires fits to the data that (1) are dramatically better than Bose-Einstein fits, so that we have some confidence in the inferences we draw from model atmosphere spectra, and (2) distinguish between compositions, surface gravities, and surface redshifts. Our work on this program, which we are undertaking in collaboration with Valery Suleimanov and Juri Poutanen, has yielded good initial results. We find that when we fit the 102 contiguous 16-second segments of data from the 4U 1820–30 superburst mentioned previously, assuming that the composition, surface gravity, and surface redshift remain the same for all segments but that the surface radiative flux can change, one example fit of the Suleimanov et al. models gives $\chi^2/{\rm dof}=5394/5200$. In contrast, the best Bose-Einstein joint fit to the data, where we allow the temperature and chemical potential to vary independently between segments, gives $\chi^2/{\rm dof}=5660/5100$. This comparison strongly favors the model atmosphere spectra, and there are preliminary indications that composition, surface gravity, and surface redshift can be constrained. However, we caution that because the current models are known to make approximations compared to the exact scattering kernel, any conclusions are premature at this point. Nonetheless, this approach seems promising.
In summary, we have recently performed the first comparison of predicted spectra with the highest-precision data available from the RXTE PCA. We found that although a Bose-Einstein spectrum fits all individual segments well, previously published model atmosphere spectra have shapes strongly inconsistent with the observed spectra. This suggests caution in inferences made using these spectra. New spectral models provide promising descriptions of the highest-precision data and may restrict the mass and radius via constraints on the surface gravity and redshift, once they have been made more accurate.
These results are based on research supported by NSF grant AST0708424 at Maryland and by NSF grant AST0709015 and the Fortner Chair at Illinois.
[99]{}
S. Boutloukos, M. C. Miller, & F. K. Lamb, [*Super-Eddington Fluxes During Thermonuclear X-ray Bursts*]{}, ApJ, 720, L15 (2010).
A. J. Foster, A. C. Fabian, & R. R. Ross, [*Neutron star model atmospheres - A comparison with MXB 1728–34*]{}, MNRAS, 221, 409 (1986).
T. Güver, P. Wroblewski, L. Camarota, & F. Özel, [*The Mass and Radius of the Neutron Star in 4U 1820–30*]{}, ApJ, 719, 1807 (2010).
R. A. London, W. M. Howard, & R. E. Taam, [*The spectra of X-ray bursting neutron stars*]{}, ApJ, 287, L27 (1984).
J. Madej, P. C. Joss, & A. R[ó]{}[ż]{}a[ń]{}ska, [*Model atmospheres: Hydrogen-Helium Comptonized Spectra*]{}, ApJ, 602, 904 (2004).
A. Majczyna & J. Madej, [*Mass and radius determination for the neutron star in X-ray burst source 4U/MXB 1728-34*]{}, Act. Astr., 55, 349 (2005).
A. Majczyna, J. Madej, P. C. Joss, & A. R[ó]{}[ż]{}a[ń]{}ska, [*Model atmospheres and X-ray spectra of bursting neutron stars. II. Iron rich comptonized spectra*]{}, A&A, 430, 643 (2005).
H. L. Marshall, [*Constraints on the parameters of X-ray burster emission regions*]{}, ApJ, 260, 815 (1982).
A. W. Steiner, J. M. Lattimer, & E. F. Brown, [*The Equation of State from Observed Masses and Radii of Neutron Stars*]{}, ApJ, 722, 33 (2010).
T. E. Strohmayer & E. F. Brown, [*A Remarkable 3 Hour Thermonuclear Burst from 4U 1820-30*]{}, ApJ, 566, 1045 (2002).
V. Suleimanov, J. Poutanen, & K. Werner, [*X-ray bursting neutron star atmosphere models: spectra and color corrections*]{}, A&A, 527, 139 (2011).
J. van Paradijs, [*Possible observational constraints on the mass-radius relation of neutron stars*]{}, ApJ, 234, 609 (1979).
|
---
abstract: 'Molecular dynamics simulations are used to investigate the connection between thermal history and physical aging in polymer glasses, in particular the effects of a temperature square step. Measurements of two-time correlation functions show that a negative temperature step causes “rejuvenation” of the sample: the entire spectrum of relaxation times appears identical to a younger specimen that did not experience a temperature step. A positive temperature step, however, leads to significant changes in the relaxation times. At short times, the dynamics are accelerated (rejuvenation), whereas at long times the dynamics are slowed (over-aging). All findings are in excellent qualitative agreement with recent experiments. The two regimes can be explained by the competing contributions of dynamical heterogeneities and faster aging dynamics at higher temperatures. As a result of this competition, the transition between rejuvenation and over-aging depends on the length of the square step, with shorter steps causing more rejuvenation and longer steps causing more over-aging. Although the spectrum of relaxation times is greatly modified by a temperature step, the van Hove functions, which measure the distribution of particle displacements, exhibit complete superposition at times when the mean-squared displacements are equal.'
address: 'Department of Physics and Astronomy, The University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1, Canada'
author:
- Mya Warren
- 'J[ö]{}rg Rottler'
title: Modification of the aging dynamics of glassy polymers due to a temperature step
---
Introduction
============
Glassy systems include such diverse materials as network glasses, polymer glasses, spin glasses, disordered metals and many colloidal systems. These materials present great challenges to theory and simulation due to a lack of long-range order and very slow, non-equilibrium dynamics. An intriguing consequence of these “glassy dynamics” is that the properties of glasses are not stationary but depend on the wait time $t_w$ elapsed since vitrification: this phenomenon is usually called physical aging. Polymer glasses are particularly suited to studies of aging, since they exhibit comparatively low glass transition temperatures, and therefore thermally activated aging dynamics are measurable over typical experimental timescales. One of the most comprehensive studies of aging in polymer glasses was presented by Struik [@Struik], who showed that after a rapid quench from the melt state, there is a particularly simple evolution of the properties of the glass. The thermodynamic variables such as the energy and the density evolve logarithmically with wait time, whereas the dynamical properties such as the creep compliance exhibit scaling with the wait time $t/t_w^\mu$, where the exponent $\mu$ is approximately unity. This scaling law has also been confirmed for many other glassy systems [@Berthier_book], but a clear molecular-level explanation of this behavior is still being sought.
The intrinsic aging dynamics can be modified through the influence of stress and temperature [@Struik; @McKenna_JPhys15; @Warren_PRE07; @Rottler_PRL95; @Lacks_PRL93; @Lequeux_JSM; @Lequeux_PRL89]. In polymer glasses, application of a large stress usually has the effect of “rejuvenating” the glass: the entire spectrum of relaxation times is rescaled to shorter times, and closely resembles the spectrum of a younger state of the unperturbed glass [@Struik; @McKenna_PES; @Warren_PRE07]. The term rejuvenation originated with Struik’s hypothesis that the application of stress increases the free-volume available for molecular rearrangements and actually results in an erasure of aging [@Struik]. This hypothesis is still somewhat controversial [@McKenna_JPhys15; @Struik_POLY38]; however, the term persists and has come to denote any acceleration of the intrinsic aging dynamics. In colloidal systems, it has recently been shown that stress can lead to an apparent slowing down of the dynamics (over-aging) as well [@Lacks_PRL93; @Lequeux_PRL89]. The effects of temperature on the aging dynamics are similarly complex. It is well known that glasses age faster at higher temperatures; however, simply taking this fact into account is not sufficient to explain the modifications of the dynamics due to changes in temperature in the glassy state [@Struik]. Glasses have been shown to retain a memory of previous annealing temperatures [@Bellon_EPL]; therefore, the aging dynamics generally depend on the entire thermal history of the sample [@Kovacs_APS]. Recently, a set of detailed experiments was performed on polymer glasses that experienced a temperature square step after a quench to the glassy state [@Lequeux_JSM]. Results showed that the simple rescaling of time with $t_w^\mu$ no longer described the creep compliance curves, and that there were significant changes to the entire spectrum of relaxation times in comparison to a sample that did not experience a temperature step.
In a previous paper, we used molecular dynamics simulations to investigate the dynamics of a model polymer glass under a simple temperature quench followed by a creep experiment [@Warren_PRE07]. It was shown that the model qualitatively reproduces the effects of aging on the creep compliance, including the phenomenon of mechanical rejuvenation at large stresses. Additionally, through measurements of the two-time particle correlation functions, we showed that the aging dynamics of the creep compliance exactly corresponds to the evolution in the cage-escape time (the time-scale for particles to undergo a significant rearrangement leading to a new local environment). In this work, we extend our investigation of the dynamics of aging in polymer glasses by examining the effects of a short temperature square step, modeled after the protocol of the experiments presented in ref. [@Lequeux_JSM]. In agreement with experiment, our simulations indicate that a downward temperature step causes rejuvenation of the relaxation times, but an upward step yields rejuvenation (faster dynamics) at short timescales, and over-aging (slower dynamics) at long timescales. In addition, we investigate the microscopic dynamics behind these relaxation phenomena through measurements of the two-time particle correlation functions and the van Hove distribution functions.
Methodology
===========
We perform molecular dynamics (MD) simulations on a “bead-spring” polymer model that has been studied extensively for its glass-forming properties [@Kremer_Grest; @Binder_PRE57]. Beads interact via a non-specific van der Waals potential (Lennard-Jones), and bonds are modeled as a stiff spring (FENE) that prevents chain crossing. The reference length-scale is $a$, the diameter of the bead; the energy scale, $u_0$, is determined by the strength of the van der Waals potential; and the time scale is $\tau_{LJ}=\sqrt{ma^2/u_0}$, where $m$ is the mass of a bead.
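For concreteness, a minimal sketch of the two interaction terms is given below; the standard Kremer-Grest values $k=30\,u_0/a^2$ and $R_0=1.5\,a$ for the FENE spring are assumptions (the text does not quote them), and in the full model bonded beads additionally interact through the Lennard-Jones term.

```python
import numpy as np

def lennard_jones(r, u0=1.0, a=1.0):
    """Non-specific van der Waals (Lennard-Jones) pair energy between beads."""
    return 4.0 * u0 * ((a / r)**12 - (a / r)**6)

def fene_bond(r, k=30.0, R0=1.5, u0=1.0, a=1.0):
    """Stiff FENE spring for bonded beads; diverges at r = R0*a, preventing chain crossing.

    k (in units of u0/a^2) and R0 (in units of a) are the usual Kremer-Grest
    choices, assumed here since the text does not quote them."""
    return -0.5 * k * (u0 / a**2) * (R0 * a)**2 * np.log(1.0 - (r / (R0 * a))**2)

r = np.linspace(0.8, 1.4, 7)
print(lennard_jones(r))
print(fene_bond(r))
```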
Our simulation method is similar to the one discussed in ref. [@Warren_PRE07]. We simulate 85500 beads in a cubic box with periodic boundary conditions. All polymer chains have 100 beads each. This length is not much greater than the entanglement length; however, as the dynamics in glasses are very slow, reptation effects are minimal over the timescale of our simulations. The thermal procedure is detailed in Fig. \[fig:experiment\], and is very similar to the protocol used in ref. [@Lequeux_JSM] except for the obvious difference in timescales due to the limitations of molecular dynamics simulations. The glass is formed by a rapid quench at constant volume from an equilibrated melt at $T=1.2u_0/k_B$ to a glassy temperature of $T=0.2u_0/k_B$ ($T_g\approx0.35u_0/k_B$ for this model [@Rottler_PRE64]). It is then aged at zero pressure and $T=0.2u_0/k_B$ for $t_1=150\tau_{LJ}$, at which point the temperature is ramped to $T=(0.2+\Delta T)u_0/k_B$ over a period of $75\tau_{LJ}$, held there for $t_{\Delta T}=750\tau_{LJ}$, and then returned to $T=0.2u_0/k_B$ at the same rate. We monitor the dynamics of the system at wait times $t_w$ since the initial temperature quench.
![Simulation protocol for a temperature square step. $\Delta T =$ -0.1, -0.05, 0, 0.05, and 0.1$u_0/k_B$.[]{data-label="fig:experiment"}](fig1.eps)
Results
=======
Dynamics
--------
![Incoherent scattering factor, $C_q(t,t_w)$, versus $t$ for various wait times $t_w$, and $q=6.3a^{-1}$. The curves from left to right in each panel have wait times of 1125, 1800, 3300, 8550, and 23550$\tau_{LJ}$. []{data-label="fig:Cq_all"}](fig2.eps)
Two-time correlation functions are often used to measure the structural relaxations in aging glassy systems. In this study, we measure the incoherent scattering function, $$C_q(t,t_w)=\frac{1}{N}\sum^{N}_{j=1}\exp(i\vec{q} \Delta \vec{r}_j(t,t_w))
\label{eqn:Cq}$$ ($\vec{q}$ is the wave vector, and $\Delta
\vec{r}_j(t,t_w)=\vec{r}_j(t_w+t)-\vec{r}_j(t_w)$ is the displacement vector of the $j^{th}$ atom) as well as the mean squared displacement, $$\langle{\Delta \vec r}(t,t_w)^2\rangle=\frac{1}{N}\sum^{N}_{j=1} \Delta \vec{r}_j(t,t_w)^2
\label{eqn:msd}$$ as functions of the time $t$ after the wait time $t_w$ since the glass was quenched. The two-time correlation functions have two regimes: $$C(t,t_w) = C_{fast}(t)+C_{slow}(t,t_w)$$ $C_{fast}(t)$ describes the unconstrained motion of particles over their mean-free path; $C_{slow}(t,t_w)$ describes the dynamics at much longer timescales, where particle motions arise due to the collective relaxations of groups of particles. Only the slow part of the correlation function experiences aging; therefore, discussions of the aging dynamics of $C(t,t_w)$ refer to $C_{slow}(t,t_w)$.
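A minimal Python sketch of how both observables can be evaluated from two stored configurations is given below; averaging $\cos(q\,\Delta r_\alpha)$ over the three Cartesian components is an assumed isotropic approximation for wave vectors of magnitude $q=6.3a^{-1}$, and the random displacements at the end merely stand in for actual MD output.

```python
import numpy as np

def two_time_observables(r_tw, r_t, q=6.3):
    """Incoherent scattering function and mean-squared displacement between a
    configuration at the wait time t_w and one at t_w + t.

    r_tw, r_t : (N, 3) arrays of bead positions (unwrapped coordinates)."""
    dr = r_t - r_tw
    msd = np.mean(np.sum(dr**2, axis=1))
    cq = np.mean(np.cos(q * dr))     # Re<exp(i q . dr)>, averaged over the x, y, z axes
    return cq, msd

# Toy usage with random displacements standing in for MD output:
rng = np.random.default_rng(0)
r0 = rng.uniform(0.0, 40.0, size=(1000, 3))
r1 = r0 + rng.normal(scale=0.1, size=(1000, 3))
print(two_time_observables(r0, r1))
```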
In ref. [@Warren_PRE07], we showed that if $\Delta T = 0$, both the incoherent scattering factor and the mean squared displacement exhibit superposition with wait time: a simple rescaling of the time variable by a shift factor $a$ causes complete collapse of the curves at different wait times, i. e. $$C_q(t,t_w)=C_q(at,t_w')
\label{eqn:superposition}$$ and $$\langle \Delta \vec{r}(t,t_w)^2\rangle=\langle \Delta \vec{r}(at,t_w')^2\rangle.
\label{eqn:superposition2}$$ The shift factor varies with the wait time in a simple power law, $$a{\sim}t_w^{-\mu}
\label{eqn:aging}$$ and agrees exactly with the shifts obtained from superimposing the creep compliance curves. Such a power law in the shift factors is characteristic of the aging process under a simple quench; however, ref. [@Lequeux_JSM] found that the power law does not hold for a temperature square step protocol. Figure \[fig:Cq\_all\] shows $C_q(t,t_w)$ for three temperature jumps, $\Delta T =$ -0.1, 0 and 0.1$u_0/k_B$. All curves exhibit a flat region at short times where atoms are relatively immobile due to the caging effect of their neighbours, followed by a rapid roll-off at longer times where atoms begin to escape from the local cages. For the negative temperature step, the $C_q(t,t_w)$ curves have the same shape as the reference curves for $\Delta T = 0$. Superposition with wait time continues to apply and the curves are simply shifted forward in time with increasing $t_w$, although the shifts are clearly different from the reference case. A positive temperature jump, however, causes a notable modification of the shape of the correlation function. The curves roll off more slowly at short wait times than long wait times, and superposition with wait time is no longer possible.
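As an illustration of how such shift factors can be extracted in practice, the following Python sketch finds the factor $a$ that best superimposes one relaxation curve onto a reference by a brute-force search in log time; the stretched-exponential test curves are synthetic stand-ins for the measured $C_q(t,t_w)$, and the procedure itself is a generic assumption rather than the exact one used in this work.

```python
import numpy as np

def shift_factor(t, C_ref, C_new, a_grid=np.logspace(-2, 2, 400)):
    """Estimate the factor a such that C_new(a*t) best matches C_ref(t).

    t, C_ref, C_new are 1D arrays on the same time grid; the best a is found by
    brute-force search over a_grid, interpolating C_new in log time."""
    log_t = np.log(t)
    best_a, best_err = 1.0, np.inf
    for a in a_grid:
        shifted = np.interp(log_t + np.log(a), log_t, C_new, left=np.nan, right=np.nan)
        mask = ~np.isnan(shifted)
        if mask.sum() < 5:
            continue
        err = np.mean((shifted[mask] - C_ref[mask])**2)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

# Toy check: a stretched-exponential relaxation shifted by a known factor.
t = np.logspace(0, 4, 200)
C = lambda tau: np.exp(-(t / tau)**0.5)
print(shift_factor(t, C(300.0), C(100.0)))   # prints a value close to 1/3
```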
The changes in the relaxation time spectrum due to a temperature step can clearly be seen in Fig. \[fig:Cq\_teff\](a), where $C_q(t,t_w)$ is plotted for each of the temperature steps at a wait time of $1125\tau_{LJ}$. Compared to the case with $\Delta T = 0$, a step down in temperature causes a constant shift in the correlation function toward shorter times. The glass is said to be “rejuvenated”. A positive temperature step, however, seems to cause faster relaxations (rejuvenation) at short times, and slower relaxations (over-aging) at long times. For each of these curves, we define a shift factor with respect to the $\Delta T = 0$ sample that depends on time,
$$C_q(t,t_w,T_0) = C_q\left(a\left( t \right)t,t_w,T_0+\Delta T\right).
\label{eqn:shift_factor}$$
This quantity is exactly analogous to the effective time, $t_{eff}(t)$, that was used to analyze the changes in the creep compliance curves after a temperature square step in ref. [@Lequeux_JSM]: $$\frac{t_{eff}}{t_w}=a(t)-1.$$ $a(t)-1$ is plotted in Fig. \[fig:Cq\_teff\](b) for the correlation data in Fig. \[fig:Cq\_teff\](a). These results for the incoherent scattering factor are in excellent qualitative agreement with the data obtained from the experimental creep compliance [@Lequeux_JSM]. A step down in temperature causes a relatively constant, negative shift in the relaxation times, whereas a step up in temperature shows an effective time that is negative for short times, but eventually transitions to positive values for longer times.
![(a) Incoherent scattering function for various temperature steps (indicated in the legend in units of $u_0/k_B$). (b) Shift factors with respect to the $\Delta T =
0$ case (Eq. (\[eqn:shift\_factor\])) for the curves in (a).[]{data-label="fig:Cq_teff"}](fig3.eps)
Negative temperature step
-------------------------
![Shift factors (Eq. \[eqn:superposition\]) versus wait time for the $\Delta T = 0$ sample ($\times$), the $\Delta T = -0.1u_0/k_B$ ($\triangle$) sample, and the $\Delta
T =-0.1u_0/k_B$ sample with $t_w'=t_w-t_{\Delta T}$ ($\ocircle$). The solid lines are power law fits to the data, and the dashed line is a guide to the eye.[]{data-label="fig:shifts_downstep"}](fig4.eps)
As superposition of the incoherent scattering factor with wait time continues to hold after a step down in temperature, we can monitor the aging dynamics through the standard procedure of rescaling the time variable to make a master curve of the correlation functions. Figure \[fig:shifts\_downstep\] shows the shift factors versus wait time for the data from Fig. \[fig:Cq\_all\] for $\Delta T = 0$ and $\Delta T =
-0.1u_0/k_B$. The sample with no temperature step ages with the characteristic power law in $t_w$, whereas the sample that experienced a negative temperature step clearly does not. However, if we correct the wait time under the assumption that there is no aging at the lower temperature, $t_w' = t_w-t_{\Delta T}$, then the power law is restored. The aging exponents for these samples agree within the uncertainty of the fit at $\mu=0.86\pm0.09$. A downward temperature step of $\Delta T = -0.1u_0/k_B$ seems to induce a trivial rejuvenation brought about by effectively “freezing” the dynamics at the lower temperature.
Positive temperature step
-------------------------
![Shift factors versus time with respect to the unperturbed sample at $t_w = 1800\tau_{LJ}$ for samples with temperature jumps defined by $\Delta T = 0.1u_0/k_B$, and $(t_1,t_{\Delta T})$ equal to 825 and 750$\tau_{LJ}$ ($\ocircle$), 1313 and 263$\tau_{LJ}$ ($\Box$), and 263 and 1313$\tau_{LJ}$ ($\Diamond$).[]{data-label="fig:teff_vs_t1"}](fig5.eps)
The effect of an increase in temperature cannot be described as either simple rejuvenation or over-aging, but instead the dynamics are accelerated at short times and slowed at long times. To understand the origin of the rejuvenation to over-aging transition, the relevant timescales in the protocol, $t_1$ and $t_{\Delta T}$, are varied for a constant step up in temperature of $0.1u_0/k_B$. We investigate three cases: (1) $t_1 \approx t_{\Delta T}$, (2) $t_1 \gg t_{\Delta T}$ and (3) $t_1 \ll t_{\Delta T}$. The effective time with respect to the sample with no temperature step is shown for these simulation parameters at $t_w=1800\tau_{LJ}$ in Fig. \[fig:teff\_vs\_t1\]. To simplify the analysis, $t_1$ and $t_{\Delta T}$ were chosen such that in each of the three cases, the dynamics are measured at the same time since the end of the step. In cases (1) and (2), we clearly see rejuvenation of the short time region of the relaxation-time spectrum and a transition to over-aging at longer times; however, the transition from rejuvenation to over-aging occurs later for case (2) (shorter $t_{\Delta T}$). In case (3), there is no clear rejuvenation region at all, and at long times, the relaxation spectrum is almost identical to case (1).
These results may be understood based on the transient dynamics at the higher temperature. The aging dynamics in glassy systems consist of spatially and temporally heterogeneous relaxation events, whereby collective motions in a group of particles lead to small rearrangements of the cage [@Weeks_JCM15; @Cipelletti_JPCM17]. These relaxations are often called dynamical heterogeneities [@Kob_PRL79; @Vollmayr_PRE72]. After a first relaxation event (cage escape), it has been shown that the timescale for subsequent relaxations is shorter, and a typical group of atoms will experience many such events before finding a stable configuration [@Kob_PRL99]. Because the aging dynamics are thermally activated, immediately after the temperature of the glass is raised, the total rate of relaxation events rapidly increases, and this enhancement persists for a time related to the “life-time” of the mobile regions. Once these regions finally stabilize, the resulting structure has a lower energy than before and, correspondingly, an over-population of long relaxation times. Overall, as also pointed out in ref. [@Lequeux_JSM], the greater rate of transitions at this temperature leads to accelerated aging with respect to a glass at the lower temperature. The results of our simulations after a temperature square step can then be understood as a competition between the initial rejuvenation due to the dynamical heterogeneities, and the over-aging that results once they have stabilized. Therefore, a short temperature square step causes mostly rejuvenation, and long temperature steps show primarily over-aging. This picture is generally consistent with that of trap models [@Bouchaud_PRL2003] that generate glassy dynamics through a wide distribution of relaxation times. However, ref. [@Lequeux_JSM] found that such models do not explain all aspects of the thermal cycling experiment since they do not address the spatial arrangement of the relaxation processes.
It is curious that the long time part of the relaxation spectrum is the same for case (1) and case (3). This may indicate the end of the rapid transient effects, and the beginning of more steady aging in the glass at the higher temperature. It would be useful to expand the study to even longer $t_{\Delta T}$ to investigate this finding.
![The van Hove function for a glass aged at $T=0.2u_0/k_B$ (no temperature jump) for $t_w=1125\tau_{LJ}$. The curves are for times of $15\tau_{LJ}$ ($\ocircle$), $150\tau_{LJ}$ ($\Box$) and $1575\tau_{LJ}$ ($\Diamond$). The black lines are fits to the curves using Eq. (\[eqn:dist\_fit\]).[]{data-label="fig:dist_nojump_vs_t"}](fig6.eps)
Particle displacement distributions
-----------------------------------
In addition to the correlation functions, which provide an average picture of the particle dynamics with $t$ and $t_w$, molecular dynamics simulations allow us to obtain the full distribution of displacements $P(\Delta r,t,t_w)$, where $\Delta r = |\Delta
\vec{r}|$. This is also called the van Hove function [@Kob_PRE51]. The van Hove function was measured for three temperature jumps ($\Delta T=$ -0.1, 0 and 0.1$u_0/k_B$), and representative curves are shown in Fig. \[fig:dist\_nojump\_vs\_t\] for $\Delta T = 0$. The distribution appears to be the combination of a narrow, caged particle distribution, and a wider distribution of “mobile” particles that have experienced a cage rearrangement. The distribution can be described by the sum of a Gaussian for the caged particles, and an exponential tail for the mobile particles:
![(a)-(c): Fit parameters to the van Hove distributions (Eq. (\[eqn:dist\_fit\])) for three different temperature jumps at $t_w=1125\tau_{LJ}$. (d): Shift factors for $N_1/N$ and $\sigma^2$ together with those from $C_q(t,t_w)$ obtained from Fig. \[fig:Cq\_teff\].[]{data-label="fig:teff_fit_params"}](fig7.eps)
$$P(\Delta r) = N_1 e^{-\Delta r^2/\sigma^2} + N_2 e^{-\Delta r/r_0}.
\label{eqn:dist_fit}$$
If the distribution is normalized, there are three fit parameters: $N_1/N$, the ratio of caged particles $N_1$ to total particles $N=N_1+N_2$; $\sigma^2$, the width of the cage peak; and $r_0$, the characteristic length-scale of the exponential tail. These are shown in Fig. \[fig:teff\_fit\_params\](a)-(c). $N_1/N$ is relatively flat at short times, followed by a decay in the number of trapped particles that signals the onset of cage escape. The width of the cage peak, $\sigma^2$, is constant on the timescale where the particles are predominantly caged, and increases sub-diffusively in the cage escape regime, possibly due to weak coupling between adjacent cages (local relaxations may somewhat affect cages nearby [@Heuer_PRE72]). In contrast, the length scale of the mobile distribution, $r_0$, increases steadily with time even at the shortest timescales. The parameters $N_1/N$ and $\sigma^2$ show large differences for samples undergoing different temperature jumps, whereas $r_0$ does not. A previous study showed that the width of the mobile peak also did not depend on the wait time [@Warren_PRE07]. It seems that the shape of the mobile distribution does not exhibit memory.
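The fit itself can be sketched in a few lines of Python; the synthetic data and starting parameters below are assumptions standing in for a measured histogram of $P(\Delta r)$, with the amplitudes $N_1$ and $N_2$ playing the role of the caged and mobile weights in Eq. (\[eqn:dist\_fit\]).

```python
import numpy as np
from scipy.optimize import curve_fit

def van_hove_model(dr, N1, sigma2, N2, r0):
    """Gaussian cage peak plus exponential mobile tail, as in Eq. (dist_fit)."""
    return N1 * np.exp(-dr**2 / sigma2) + N2 * np.exp(-dr / r0)

# Synthetic data standing in for a measured P(Delta r); all parameters are assumed.
dr = np.linspace(0.01, 2.0, 100)
truth = van_hove_model(dr, 1.0, 0.01, 0.05, 0.2)
noisy = truth * (1.0 + 0.02 * np.random.default_rng(1).normal(size=dr.size))

popt, _ = curve_fit(van_hove_model, dr, noisy, p0=[1.0, 0.02, 0.02, 0.3])
N1, sigma2, N2, r0 = popt
print("N1=%.3f  sigma^2=%.4f  N2=%.3f  r0=%.3f" % (N1, sigma2, N2, r0))
```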
The effects of the temperature step on the van Hove function can be understood by defining a time-dependent shift factor for $\sigma^2$ and $N_1/N$, in analogy to Eq. (\[eqn:shift\_factor\]) for $C_q(t,t_w)$. Surprisingly, we see in Fig. \[fig:teff\_fit\_params\](d) that the shift factors for both fit parameters and for $C_q(t,t_w)$ are identical. This suggests that the van Hove functions after different temperature steps can be superimposed at times where their two-time correlation functions are equal, $$P(\Delta r,t,t_w) = P(\Delta r, \langle \Delta r(t,t_w)^2 \rangle).
\label{eqn:P_superpos}$$ Indeed, this is what we observe in Fig. \[fig:dist\_msd\_t1\](a). The mean-squared displacement for this system is shown Fig. \[fig:dist\_msd\_t1\](b); the distributions are compared at times indicated by the intercept of the $\langle \Delta r^2 \rangle$ curves with the dashed horizontal lines. Note that this is not a simple rescaling of the length parameter $\Delta r/\sqrt{\langle \Delta r^2
\rangle}$, as there are two distinct length-scales in the system, $\sigma$ and $r_0$. Superposition of the van Hove function has also been found for variations in wait time $t_w$ after a simple quench from the melt [@Castillo_NaturePhys3; @Warren_PRE07]. For this thermal protocol, the entire spectrum of relaxation times scales as $t/t_w^\mu$, in which case a simple Landau-theory approximation predicts full scaling of the probability distributions [@Castillo_PRL88]. Full scaling of the relaxation times clearly does not hold after a temperature step, therefore it can not be a necessary condition for superposition of the van Hove distributions. It seems that the relationship described in Eq. (\[eqn:P\_superpos\]) is quite general.
![(a) The van Hove distributions for three temperature steps (see legend) at times where their mean squared displacement are equal. (b) The mean-squared displacement versus time for the temperature steps in (a). The intercepts of the dashed lines and the mean-squared displacement curves indicate the measurement times of the distributions in (a).[]{data-label="fig:dist_msd_t1"}](fig8.eps)
Superposition of the van Hove functions may be a consequence of the fact that, while the relaxation times are greatly modified by wait time and temperature, the spatial scale of the relaxations is not. This microscopic length scale is determined by the density, which is known to be much less sensitive to these parameters. Although correlation lengths of rearranging clusters can be quite large in glasses, these lengths also do not appear to grow with aging [@Weeks_JCM15]. If the typical size of relaxation events is approximately constant, then we can surmise that the only relevant quantity in the van Hove function is the number of relaxation events. This information is exactly expressed by the effective time that we obtained from the average correlation functions. Future work will investigate the particle trajectories in more detail to validate the spatial scale of the cage rearrangements and its relationship to the microscopic length scale of the system.
Conclusion
==========
Molecular dynamics simulations of physical aging in a model polymer glass were performed using a thermal protocol modeled after recent experiments [@Lequeux_JSM]. A temperature square step was applied to the glass after an initial quench from the melt, and the dynamics were monitored through the two-time correlation functions. Results show excellent agreement with experiment. A negative temperature step causes uniform rejuvenation due to reduced aging at low temperatures. A positive temperature step yields a completely different relaxation spectrum: at short times the dynamics are accelerated (rejuvenation), and at long times they are slowed (over-aging). By modifying the length of the step up in temperature, we determined that the transition from rejuvenation to over-aging is controlled by the length of the square step: short steps cause primarily rejuvenation, while long steps show marked over-aging. We also investigated the distribution of displacements, or the van Hove functions. Even though the spectrum of relaxation times was greatly modified by the temperature step, the van Hove functions showed perfect superposition at times where the mean squared displacements were equal.
Acknowledgements
================
We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Foundation for Innovation (CFI) for financial support. Computing resources were provided by WestGrid.
[10]{}
L. C. E. Struik. . Elsevier, Amsterdam, 1978.
J.-L. Barrat, J. Dalibard, M. Feigelman, and J. Kurchan. . Springer, Berlin, 2003.
G. B. McKenna. Mechanical rejuvenation in polymer glasses: fact or fallacy? , 15:S737–S763, 2003.
M. Warren and J. Rottler. Simulations of aging and plastic deformation in polymer glasses. , 76:031802, 2007.
J. Rottler and M. O. Robbins. Unified description of aging and rate effects in yield of glassy solids. , 95:225504, 2005.
D. J. Lacks and M. J. Osborne. Energy landscape picture of overaging and rejuvenation in a sheared glass. , 93:255501, 2004.
H. Montes, V. Viasnoff, S. Juring, and F. Lequeux. Ageing in glassy polymers under various thermal histories. , page P03003, 2006.
V. Viasnoff and F. Lequeux. Rejuvenation and overaging in a colloidal glass under shear. , 89:065701, 2002.
G. B. McKenna and A. J. Kovacs. Physical ageing of poly(methyl methacrylate) in the nonlinear range: torque and normal force measurements. , 24:1131–1141, 1984.
L. C. E. Struik. Rejuvenation of physically aged polymers. , 38:4053–4057, 1997.
L. Bellon, S. Ciliberto, and C. Laroche. , 51:551, 2000.
A.J. Kovacs. , 3:394, 1963.
K. Kremer and G. S. Grest. , 92:5057, 1990.
C. Bennemann, W. Paul, K. Binder, and B. Dünweg. Molecular-dynamics simulations of the thermal glass transition in polymer melts: $\alpha$-relaxation behavior. , 57:843–851, 1998.
J. Rottler and M. O. Robbins. Yield conditions for a deformation of amorphous polymer glasses. , 64:051801, 2001.
R. E. Courtland and E. R. Weeks. Direct visualization of ageing in colloidal glasses. , 15:S359–S365, 2003.
L. Cipelletti and L. Ramos. Slow dynamics in glassy soft matter. , 17:R253, 2005.
Walter Kob, Claudio Donati, Steven J. Plimpton, Peter H. Poole, and Sharon C. Glotzer. Dynamical heterogeneities in a supercooled Lennard-Jones liquid. , 79(15):2827–2830, Oct 1997.
K. Vollmayr-Lee and A. Zippelius. Heterogeneities in the glassy state. , 72:041507, 2005.
P. Chaudhuri, L. Berthier, and W. Kob. Universal nature of particle displacements close to glass and jamming transitions. , 99:060604, 2007.
R. A. Denny, D. R. Reichman, and J. P. Bouchaud. Trap models and slow dynamics in supercooled liquids. , 90:025503, 2003.
Walter Kob and Hans C. Andersen. Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture I: The van Hove correlation function. , 51(5):4626–4641, May 1995.
A. Heuer, B. Doliwa, and A. Saksaengwijit. Potential-energy landscape of a supercooled liquid and its resemblance to a collection of traps. , 73:021503, 2005.
H. E. Castillo and A. Parsaeian. Local fluctuations in the ageing of a simple structural glass. , 3:26–28, 2007.
H. E. Castillo, C. Chamon, L. F. Cugliandolo, and M. P. Kennett. Heterogeneous aging in spin glasses. , 88:237201, 2002.
|
---
abstract: 'We report the appearance of a new radio source at a projected offset of 460 pc from the nucleus of Cygnus A. The flux density of the source (which we designate Cygnus A-2) rose from an upper limit of $<$0.5 mJy in 1989 to 4 mJy in 2016 ($\nu$=8.5 GHz), but is currently not varying by more than a few percent per year. The radio luminosity of the source is comparable to the most luminous known supernovae, it is compact in VLBA observations down to a scale of 4 pc, and it is coincident with a near-infrared point source seen in pre-existing adaptive optics and HST observations. The most likely interpretation of this source is that it represents a secondary supermassive black hole in a close orbit around the Cygnus A primary, although an exotic supernova model cannot be ruled out. The gravitational influence of a secondary SMBH at this location may have played an important role in triggering the rapid accretion that has powered the Cygnus A radio jet over the past $10^7$ years.'
author:
- 'Daniel A. Perley, Richard A. Perley, Vivek Dhawan, and Christopher L. Carilli'
title: |
Discovery of a Luminous Radio Transient 460 pc from the Central\
Supermassive Black Hole in Cygnus A
---
Introduction {#sec:intro}
============
The past decade has witnessed a large expansion in the capabilities of observational astronomers to identify new or variable objects in the sky at a variety of wavelengths. These rapid advances have been made possible largely by the coming of age of wide-field synoptic astronomy, in which a large area on the sky is repeatedly imaged and sophisticated software algorithms search the resulting data stream for new objects or other changes of astronomical interest.
Even in this era, more traditional modes of discovery remain relevant. Large classical observatories with smaller fields of view often have substantially greater sensitivity and resolution, providing greater depth and less confusion. While only a small fraction of the observing time on such facilities is spent on projects expressly designed to find new transients or high-amplitude variables, repeat imaging of certain fields (by design or by chance) offers the possibility of finding new and unexpected sources at these locations. While lacking in the cadence control or blind target selection often employed by dedicated transient surveys, pointed observations can target particularly interesting, exotic, or nearby environments. These locations may host unusual types of objects that are too rare, in too difficult an environment, or too low-luminosity to be easily identified in a large-scale synoptic survey.
In this paper we report the serendipitous discovery of a new radio source very close to, but not coincident with, the nucleus of the Cygnus A host galaxy. Our Karl G. Jansky Very Large Array (VLA) observations leading to the discovery of the transient are described in §\[sec:observations\]. Additional follow-up observations with the VLA and Very Long Baseline Array (VLBA), and archival observations from these and other facilities are also presented. In §\[sec:discussion\] we consider possible physical interpretations for the object, including a rare type of supernova or a fast-accreting secondary supermassive black hole inside Cygnus A. Finally in §\[sec:implications\] we consider the implications of our study for the nature of the Cygnus A system and other luminous radio galaxies, and for radio transient surveys generally.
Observations {#sec:observations}
============
Discovery {#sec:discovery}
---------
Cygnus A is the best-studied powerful radio galaxy by far [@Carilli+1996]. It is the archetype for the [@Fanaroff+1974] class II radio galaxies, in which two powerful oppositely-directed jets of relativistic matter are observed to emanate from a central point source at the galaxy nucleus and terminate at bright hot spots in extensive edge-brightened radio lobes in the halo.
Cygnus A was observed by the VLA during the mid-1980s, revealing its inner jet and luminous arcsecond-scale hotspots at the jet termination points [@Perley+1984; @Carilli+1991]. The inner core and jets of Cygnus A have also been studied extensively on milliarcsecond scales using very long baseline interferometry (VLBI) techniques from 1.4 GHz to 90 GHz [@Carilli+1994; @Krichbaum+1996; @Krichbaum+1998; @Boccardi+2016b; @Boccardi+2016a].
In spite of, or perhaps because of, the success of the early VLA observations, no additional sensitive measurements of Cygnus A at $>$10 pc scales were acquired until quite recently. Motivated by the major improvements to the VLA’s bandwidth and sensitivity [@Perley+2011], a new wideband radio-frequency imaging campaign of the system was initiated in 2015, accompanied by a deep *Chandra* X-ray observation. The radio campaign used all of the VLA’s receivers between 2 and 18 GHz and all four array configurations.
Most of the reduced images show similar structure as the original VLA images from the 1980s, with greater sensitivity and wider frequency coverage. However, in analyzing the higher-frequency extended-configuration observations we noticed a new feature that was not evident in any previously published imaging of the system: a strong point source (4 mJy at 8 GHz) at an offset of 0.42$\arcsec$ west-southwest of the nucleus (Figure \[fig:discovery\]). This is not along the jet axis, but is embedded in the complex and gas-rich inner region of the host galaxy seen in prior optical imaging [[[e.g.,]{} @vandenBergh+1976]]{}. The source is visible at the same location at multiple frequencies and the detection is highly secure ($>$12$\sigma$ detection at all frequencies), leaving no doubt that it is a real object. We designate this source Cygnus A-2 (or A-2 for short).
![Discovery images of the off-nuclear radio transient Cygnus A-2. Both images show VLA observations at 8.5 GHz, using spacings of $>$150k$\lambda$ only. The scale and contours are the same for both panels (the first five contours are 0.5, 0.85, 1.2, 2, and 4 mJy/beam). A point source is detected approximately 0.42$\arcsec$ west-southwest of the nucleus in the new observations (location designated by the green crosshairs). No source at this location was present in 1989.[]{data-label="fig:discovery"}](f1.eps){width="8.5cm"}
VLA Observations {#sec:vla}
----------------
To confirm that A-2 represents a new source (rather than a non-variable object that was below the detection limit of the earlier, less-sensitive observations), we searched the NRAO archives for observations taken in configurations and frequencies suitable in principle to resolve and detect a source at this location. We found two suitable sets of archival observations. A low-frequency observation of the nucleus was taken on 1989 Jan 06 in X-band (program ID AC244), using four spectral windows (centered at 7815, 8165, 8515, 8885 MHz) with 6.25 MHz of bandwidth each (the narrow bandwidth was necessary to reduce chromatic aberration at the 1-arcminute offset of the source’s two hotspots). In addition, high-frequency observations were obtained on 1996 Nov 11–12 (in Q-band) and 1997 Mar 29 (in K-band and Q-band), both part of program ID AP334. These observations were both taken with a subarray using 13 antennas, and with 50 MHz of bandwidth centered at 22.46 GHz (K-band) and 43.34 GHz (Q-band).
We also applied for and received additional VLA observations under director’s discretionary time (program IDs 16B-381 and 16B-396) in order to determine if A-2 was still present one year after the discovery observation and, if so, to better constrain its spectrum and rate of evolution. All frequency bands capable of separately resolving the target from the Cygnus A nucleus in the available configuration were used in these programs: K through Q bands (18–50 GHz) on 2016 Aug 14 in B-configuration, and X through Q bands (8–50 GHz) on 2016 Oct 21 in A-configuration. All observations, as well as the 2015 discovery observations, were taken using the WIDAR correlator in continuum mode, using 3-bit sampling and the maximum bandwidth available for each receiver.
All data were calibrated in AIPS using well-established techniques. The flux scale was set by observations of J1331+3030 (3C286), using the flux density scale of [@PerleyButler2013]. Referenced pointing, utilizing observations of the nearby unresolved source J2007+4029, was used to stabilize the antenna pointing on Cygnus A. We used the Cygnus A nucleus, rather than J2007+4029, to establish the phase calibration: standard phase calibration using the latter source does not fully remove the differential atmospheric phases present between it and Cygnus A. The nucleus can be treated as pointlike at VLA resolutions, as the jet and lobe structures and the hotspots are largely resolved out at spacings beyond 0.5 M$\lambda$. We employ only these long spacings to establish the phase calibration where possible. For the high-frequency A-configuration data, the hotspots are completely resolved out (and furthermore lie near the first null of the antenna pattern), so all interferometer spacings were employed. For the B-configuration data, and for the A-configuration data at the longest wavelengths (C, X, and Ku bands), residual emission from the hotspots was managed using flanking fields in the imaging/deconvolution process. Following this self-calibration step, the data were decimated into 1 GHz-wide (for C and X bands) and 2 GHz-wide (for the other bands) continuum blocks for the imaging stage.
Spectral fluxes for both the nucleus and A-2 were determined using the AIPS task `JMFIT`. The individual spectral windows were averaged together into 1 GHz or 2 GHz wide bins; these values are specified in Table \[tab:fluxes\]. All measurements were consistent with a point source, with no indication of spatial extension at the VLA’s resolution ($\sim$40 mas for the longest spacings at the high frequencies). Note that the values in Table \[tab:fluxes\] do not include systematic errors associated with uncertainty in the flux density scale, which are estimated to be a few percent at each frequency [@PerleyButler2013]. A-2 is not well-resolved from the nucleus in the lowest-frequency 2016 B-configuration spectral window or in any of the 2015 A-configuration observations below 7 GHz, so we do not quote fluxes at these frequencies.
The archival observations in 1989 and 1996–1997 unambiguously show that A-2 was not present then to limits substantially below its discovery value. Using the RMS of the synthesized map at locations close to the position of A-2, we place a limiting flux of $<$0.69 mJy at 8.34 GHz (1989), $<$1.32 mJy at 22 GHz (1996), and $<$0.78 mJy at 45 GHz (1997). This indicates that the flux rose by at least a factor of six between 1989 and 2015 (and at least four between 1997 and 2015).
No obvious variability is seen during the past year. Comparing repeat measurements at the same frequency (i.e., comparing our October 2016 observations to the low-frequency points from 2015 and high-frequency points from August 2016), no measurement changes by more than 5 percent and only one measurement changes by greater than 3$\sigma$. (However, the noise structure close to the nucleus is complex, and a 3$\sigma$ change of this individual point does not securely indicate real variability). There is some indication that the flux may be dropping slightly (at least at the lower frequencies where a longer time baseline is available), but any such behavior is marginally significant at best and at the level of no more than a few percent per year. A longer temporal baseline will be required to determine more clearly if the object is indeed changing with time.
![Spectral energy distribution of the off-nuclear transient (A-2) as measured from three recent VLA epochs (colored circles) and one VLBA epoch (open square). Upper limits from archival observations using the VLA (red circles) and using high-frequency VLBI (red squares; from Krichbaum et al., private communication) are also presented. Error bars are 2$\sigma$. We detect no obvious variability at any frequency over a 1-year baseline, although the source rose by a factor of $\gtrsim$5 between the mid-1990s and 2015. Two different models are shown, one with a self-absorbed turnover at low frequencies $\alpha=5/2$ (dashed line) and a fully optically-thin model with a gentler turnover (solid line).[]{data-label="fig:sed"}](f2.eps){width="9cm"}
A spectral energy distribution (SED) formed by our measurements is presented in Figure \[fig:sed\]. The nature of the multiwavelength SED is most unambiguously constrained by our A-configuration observation from October 2016 in which all frequencies are available simultaneously: this shows a largely flat (in $F_\nu$) overall spectral shape, with some curvature at the low-frequency end. The high-frequency data points ($>20$ GHz) are well-fit by a power-law with a spectral index of $\alpha_{\rm hi}$=$-$0.6$\pm$0.1 (using the convention $F_\nu \propto \nu^{\alpha}$). Below 15 GHz the SED flattens, and probably drops at the lowest frequencies. The low-energy spectral index is poorly constrained and depends largely on what is assumed about the softness of the spectral turnover. The best fits are obtained with a sharp turnover and a rather flat low-frequency index ($\alpha_{\rm lo} \approx +0.4$) but steep low-frequency indices (including the self-absorbed value of $\alpha=+2.5$; see discussion in §\[sec:sed\]) are permitted if the turnover is soft.
Table: VLA flux densities (in mJy) of the nucleus and of A-2, together with the associated measurement uncertainty, for the three epochs described in the text (epoch identification inferred from the frequency coverage of each observation): 2015 (A configuration), 2016 August (B configuration), and 2016 October (A configuration).

| $\nu$ (GHz) | Nucleus (2015) | A-2 (2015) | Unc. | Nucleus (2016 Aug) | A-2 (2016 Aug) | Unc. | Nucleus (2016 Oct) | A-2 (2016 Oct) | Unc. |
|---|---|---|---|---|---|---|---|---|---|
| 7.1 | 1393 | 4.0 | 0.50 | - | - | - | - | - | - |
| 8.5 | 1368 | 4.15 | 0.35 | - | - | - | 1253 | 3.74 | 0.34 |
| 9.5 | 1416 | 4.20 | 0.35 | - | - | - | 1283 | 3.95 | 0.26 |
| 10.5 | 1483 | 4.40 | 0.35 | - | - | - | 1399 | 3.82 | 0.25 |
| 11.5 | 1507 | 4.32 | 0.35 | - | - | - | 1423 | 4.43 | 0.23 |
| 13.0 | 1440 | 4.86 | 0.17 | - | - | - | 1414 | 4.49 | 0.07 |
| 15.0 | 1435 | 4.80 | 0.13 | - | - | - | 1428 | 4.26 | 0.06 |
| 17.0 | 1427 | 4.61 | 0.11 | - | - | - | 1450 | 4.30 | 0.07 |
| 19.2 | - | - | - | - | - | - | 1498 | 4.39 | 0.06 |
| 21.2 | - | - | - | 1527 | 4.37 | 0.26 | 1498 | 4.28 | 0.11 |
| 23.2 | - | - | - | 1505 | 4.13 | 0.16 | 1475 | 4.14 | 0.10 |
| 25.2 | - | - | - | 1438 | 4.02 | 0.09 | 1473 | 4.11 | 0.14 |
| 31.5 | - | - | - | 1320 | 3.56 | 0.09 | 1317 | 3.42 | 0.09 |
| 33.5 | - | - | - | 1280 | 3.45 | 0.08 | 1276 | 3.39 | 0.05 |
| 35.5 | - | - | - | 1220 | 3.26 | 0.07 | 1248 | 3.33 | 0.04 |
| 37.5 | - | - | - | 1199 | 3.24 | 0.07 | 1201 | 3.22 | 0.05 |
| 41.0 | - | - | - | 1198 | 3.17 | 0.06 | 1203 | 2.98 | 0.05 |
| 43.0 | - | - | - | 1176 | 3.08 | 0.06 | 1156 | 2.86 | 0.06 |
| 45.0 | - | - | - | 1158 | 2.88 | 0.06 | 1133 | 2.83 | 0.05 |
| 47.0 | - | - | - | 1141 | 3.00 | 0.08 | 1114 | 2.91 | 0.07 |

\[tab:fluxes\]
VLBA Observations {#sec:vlba}
-----------------
We also acquired director’s discretionary observations with the Very Long Baseline Array (program ID BP213). Observations were taken in the S- and X-bands simultaneously (dual band with a dichroic), in dual polarisation, with a 2 Gbps total recorded bit rate, using the DDC (digital down-converter) signal processing mode. The recording was split equally between S-band (using 2$\times$64 MHz of bandwidth centered at 2230 and 2345 MHz) and X-band (using 2$\times$64 MHz of bandwidth centered at 8350 and 8420 MHz). We used filler time at low priority, with 9 or 10 antennas, in each of four observations. The observations were taken on 2016 Nov 3, Nov 11, Nov 14, and Nov 20. The total on-source time was 3.1 hours.
We observed continuously on Cygnus A, and used the strong nucleus to calibrate delay and phase for the observation. The transient (A-2) is well within the primary beam of the antennas and experiences the same phase and delay variations from the atmosphere, which are hence almost completely removed by calibration. Data reduction followed the standard path for VLBA data in AIPS, i.e., amplitude calibration by applying the pre-measured system temperature and antenna efficiency factors provided by NRAO operations, followed by delay, rate, and phase calibration.
Only the X-band data (Stokes I) were usable. (The S-band observations were unsuccessful due to a combination of factors, including radio-frequency interference, interstellar scattering, and instrumental challenges associated with the luminous lobes within the primary beam; no image could be produced.) Even for the X-band data, the flux on the longest baselines is weak due to a combination of intrinsic structure and/or scattering, so we tapered the final images to 100 M$\lambda$ (from 200 M$\lambda$) to down-weight low SNR data.
Our final reduced image of the system, combining all four epochs, has an RMS of 0.25 mJy and a final beam size of 2.3$\times$1.8 mas (FWHM) and is shown in Figure \[fig:vlba\]. The new source A-2 is clearly detected, with an integrated flux of 3.8$\pm$0.25 mJy and peak brightness of 3.1$\pm$0.25 mJy/beam. Some possible east-west extension is apparent in the image, and formally the FWHM of the source along the major axis is broader than the synthesized beam at 3$\sigma$ significance, hinting that A-2 may be marginally resolved on a scale of $\sim$1–2 mas. However, this extension is marginal and could result from phase-transfer artifacts associated with the use of the primary nucleus for phase calibration. More conservatively, we place an upper limit on the spatial scale of the source by measuring the full-width of the projected profile in the synthesized image. We derive a limit on the diameter of $<$4 mas, equivalent to 4.5 pc at the distance of this system ($z=0.0561$; we assume the standard cosmology of @Bennett+2014).
The precise location of A-2 is $\alpha$=19:59:28.32345, $\delta$=+40:44:01.9133 (J2000, $\pm$1mas), referenced to the previously known astrometric position of the nucleus (established to be $\alpha_{\rm nuc}$=19:59:28.35648, $\delta_{\rm nuc}$=+40:44:02.0963, @Gordon+2016) in the ICRF2 frame [@Fey+2015]. The projected offset from the nucleus is 418 mas, or 458 pc.
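The angular-to-physical conversion used above can be reproduced with a few lines of `astropy`; the cosmological parameters below are a representative flat $\Lambda$CDM approximation to the standard cosmology cited, not necessarily the exact values adopted.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Representative flat LambdaCDM parameters (approximating the adopted cosmology)
cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)
z = 0.0561

pc_per_mas = cosmo.kpc_proper_per_arcmin(z).to(u.pc / u.mas)
print(pc_per_mas)                           # ~1.1 pc per milliarcsecond
print((418 * u.mas * pc_per_mas).to(u.pc))  # projected offset of A-2, ~460 pc
print((4 * u.mas * pc_per_mas).to(u.pc))    # VLBA size limit, ~4.5 pc
```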
![VLBA synthesized image of the A-2 field. A pointlike source with flux consistent with the VLA measurement is detected at a location consistent with the VLA (and NIR) locations, indicating that the source is quite compact ($<4$ pc). Brightness contours are nonzero integer multiples of 0.47 mJy/beam.[]{data-label="fig:vlba"}](f3.eps){width="7.5cm"}
Multifrequency Archival Observations {#sec:oir}
------------------------------------
The Cygnus A nucleus is very luminous at all electromagnetic wavelengths from radio to X-rays ($\sim10^{44}$ erg s$^{-1}$; @Carilli+1996). The 0.42$^{\prime\prime}$ projected offset places A-2 well within the PSF of the nucleus at the typical resolution of nearly all existing observatories, meaning that only a few facilities (those capable of resolutions less than approximately 0.3$\arcsec$) are capable of detecting it. At the present time, this is limited to radio observatories, HST, and ground-based near-infrared adaptive optics instruments.
Cygnus A was the subject of several VLBI monitoring campaigns in the early 1990s. These data are not publicly archived, but we contacted the authors of the most recent large-scale VLBI study [@Krichbaum+1998] who re-investigated their 22 GHz observations for evidence of any emission at the location of A-2. There is no detection of any source to upper limits of $<$0.5 mJy on 1992 June 10 and $<$0.8 mJy on 1994 March 4.
Cygnus A has also been the target of several HST campaigns, beginning with the study of [@Jackson+1994] shortly following the first servicing mission, with additional follow-up studies by [@Jackson+1998] and [@Tadhunter+1999; @Tadhunter+2000; @Tadhunter+2003]. The observations used in these studies were all taken between 1994 and 2001. These images reveal in detail the inner region surrounding the nucleus, which is dominated by a biconic structure that surrounds the two jets. No distinct source is evident at the position of A-2 in the blue and UV observations, which is unsurprising given the very large extinction towards the central part of the Cygnus A host galaxy (as well as through the plane of our own Galaxy). A point source begins to emerge at the location of A-2 redward of approximately 500 nm and is quite distinct in the 814 nm image. It is evident in the near-IR NICMOS images also, but is difficult to cleanly resolve from its complex environment due to the lower resolution of that camera. We searched the HST archive to determine if the source has been observed since 2001 and did not find any constraining imaging.
Cygnus A was also observed using Keck adaptive optics (NIRC2) in May 2002 by [@Canalizo+2003]. Observations were acquired in all three near-infrared bands ($J$, $H$, and $K'$); these unambiguously show a strong point source at the location of A-2 (Figure \[fig:image\]). This source is discussed extensively in that work; the authors favor an interpretation in which it represents the dense, stripped stellar core of a companion galaxy merging into the Cygnus A host. We were provided the original $K'$-band AO image by C. Max and aligned it with our highest-frequency A-configuration image from the VLA. Registering the radio and IR images using the nucleus, the positional coincidence of the point source with A-2 in both images is precise to within 0.01$\arcsec$ or 10 pc, indicating a secure physical association between the two objects.
All of the HST and AO imaging of this field was single-epoch, and to our knowledge no re-observations of this source at a common waveband have been conducted to date. The rate of change of flux of the source is therefore not strongly constrained. Assuming no change between the different HST epochs, this optical counterpart must be very red, as would be expected for a source embedded within the Cygnus A inner environment. It is, however, significantly less red than the nucleus itself, so the extinction column towards A-2 is likely somewhat lower (the nucleus is invisible at all optical wavelengths but the optical/IR counterpart of A-2 can be detected down to $\sim$500 nm).
Interpretation {#sec:discussion}
==============
Association with the Cygnus A Host Galaxy
-----------------------------------------
We consider first the possibility that the variable radio source simply represents a coincidental object unassociated with Cygnus A, such as an active M-dwarf in the disk of our own Galaxy. The probability of this is extremely low. Even at low Galactic latitude ($b=5.75\arcdeg$), the probability that a detectable Galactic field star with $K < 20$ mag would appear within an arcsecond of the nucleus of this galaxy is quite low, $<10^{-4}$. (The possibility of a non-Galactic point source such as a quasar being coincidentally aligned this way is orders of magnitude lower.) The foreground/background hypothesis was rejected on these grounds alone by [@Canalizo+2003], who furthermore noted that the colors of the object are not consistent with typical Galactic stars. The radio detection further strengthens the statistical case for an extragalactic origin, since it is very unlikely that a random foreground star would also be radio-loud. We therefore are quite confident that the source originates in Cygnus A.
Luminosity Constraints {#sec:lum}
----------------------
Placing the source at the redshift of Cygnus A ($D_L=251$ Mpc) allows us to calculate its luminosity: $L_\nu \approx 3 \times 10^{29}$ erg s$^{-1}$ Hz$^{-1}$ or $\nu L_\nu \approx 6\times10^{39}$ erg s$^{-1}$. This alone represents an extremely powerful constraint, since it is orders of magnitude more luminous than almost any known variable radio object. It rules out, in particular, any known nondestructive stellar event (such as a flare, a nova, or any known class of energetic burst from a pre-existing compact stellar object such as a neutron star or stellar-mass black-hole binary): see, for example, the luminosity-timescale diagram of Figure 3 in [@Pietka+2015].
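This is a simple back-of-the-envelope calculation; a minimal sketch is given below, where the frequency at which $\nu L_\nu$ is evaluated (taken near the observed SED peak) is our assumption.

```python
import numpy as np
import astropy.units as u

# Rough luminosity of A-2 from its flux density and luminosity distance
# (values quoted in the text; the choice of frequency is illustrative).
F_nu = 4.0 * u.mJy
D_L = 251 * u.Mpc
nu = 20 * u.GHz                     # roughly where the observed SED peaks

L_nu = (4 * np.pi * D_L**2 * F_nu).to(u.erg / u.s / u.Hz)
print(L_nu)                          # ~3e29 erg / (s Hz)
print((nu * L_nu).to(u.erg / u.s))   # ~6e39 erg / s
```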
Viable explanations consistent with the luminosity of A-2 and its appearance during a $\lesssim$10-year timescale can be sorted into two general categories: an exceptionally luminous class of supernova, or a rapidly accreting supermassive black hole. We will compare these two models in more detail in §\[sec:sn\] and §\[sec:smbh\] after examining a few additional physical considerations.
Size Constraints {#sec:size}
----------------
The VLBA observation directly constrains the maximum size of the object to be less than approximately 4 pc. However, we can also place a *lower* limit on the size on account of the lack of strong variability: given the low Galactic latitude of Cygnus A, for a compact source we would expect to observe large-amplitude interstellar scintillations due to refraction by ionized gas within the plane of the Milky Way.
The transition frequency for the strong interstellar scintillation regime is quite high in this direction (approximately 30 GHz according to the maps of @Walker+2001). At frequencies close to this value, large (of order unity) modulations of the observed flux are to be expected for sources smaller in angular size than the Fresnel scale in this direction, which is $\sim 1$$\mu$as or $\sim10^{-3}$pc in projection [@Walker+1998]; the variation timescale is approximately 1 day. Although we only have three epochs at this time, our observations are all close to the critical frequency and it is very unlikely that even three repeated observations on timescales far greater than the characteristic fluctuation time would provide consistent measurements within a few percent. This suggests that the source is larger (most likely significantly larger) than $10^{-3}$ pc (200 AU).
SED Constraints {#sec:sed}
---------------
The SED of A-2 is nonthermal throughout the observed bands, as would be expected given the very high brightness temperature ($T_B > 1.7\times10^7$K at 8 GHz, given the flux measurement and angular size limit provided by the VLBA observations). This implies a population of highly energetic particles, most likely shocked or otherwise relativistically accelerated electrons radiating via synchrotron emission in a local magnetic field, the same process that is responsible for the radio emission from nearly all energetic transient phenomena (including both SNe and AGNs). The observations are consistent with this; the expected high-frequency spectral index for a shocked synchrotron population is $\alpha = -(p-1)/2$, which for the observed $\alpha=-0.6$ (§\[sec:vla\]) implies $p=2.2$, a standard value for this term.
The observed spectral energy distribution appears to roll over below $\nu_t \approx 15$ GHz. The lack of short-term variability establishes that this is not likely to be due to scintillation (§\[sec:size\]). A spectral turnover could in principle also originate from free-free absorption by ions along the line of sight through the host galaxy. However, the required emission measure to produce a turnover via free-free absorption is enormous ($EM \approx 1.5 \times \nu_{t,{\rm MHz}}^{2.1}$ pc cm$^{-6}$ $\approx$ $8.8 \times 10^8$ pc cm$^{-6}$ for $\nu_t$=15 GHz; @Condon+2016), a factor of $\sim10^7$ times higher than what has been seen in narrow-band H$\alpha$ observations of the galaxy [@Carilli+1989]. And while A-2 is quite optically obscured, the detection of a clear optical/IR counterpart at this location (§\[sec:oir\]) demonstrates that the extinction along this sightline cannot be too extreme (assuming an intrinsically flat SED in $f_\nu$, the colors provided by @Canalizo+2003 imply an extinction factor of no more than $\sim$50 at the wavelength of H$\alpha$). Alternatively, the absorbing matter could have been recently ionized and thus not evident since the time of the most recent H$\alpha$ observations—as would be the case in a Type IIn supernova model, in which the progenitor star releases a dense wind and then ionizes it upon explosion [@Chevalier+1982]. This would require a luminous UV/optical transient to have accompanied the appearance of A-2 and predict strong and ongoing H$\alpha$ emission from the transient; possibilities that could be checked via archival and/or future observations, respectively.
Alternatively, synchrotron self-absorption is commonly seen in extragalactic sources and is generically expected at low frequencies anytime that the accelerated electrons are confined to a relatively limited volume. The observed self-absorption frequency $\nu_{\rm SSA}$ is related to the source size $R$ according to the expression: $R \approx 9$$\times$$10^{15}$${\rm cm} \times (\frac{F_{\rm peak}}{\rm Jy})^{\frac{9}{19}}(\frac{D_L}{\rm Mpc})(\frac{\nu_{\rm SSA}}{\rm 5 GHz})^{-1}$ (@Chevalier+1998; terms of order unity are omitted). For our observed parameters this corresponds to a size of $R \approx 0.1$pc, fully consistent with the size constraints we have derived above (§\[sec:size\]). (If the break is not due to SSA, then this estimate becomes an inequality: $R>$0.1pc).
A spectral break could also originate from the acceleration process itself: for example, if all of the accelerated electrons exceeded a certain minimum (large) Lorentz factor, as is the case for extremely energetic shocks such as those in gamma-ray bursts [[[e.g.,]{} @Sari+1998]]{}. Lower-frequency observations will be needed to determine the low-frequency spectral index (expected to be $\alpha=2.0$ for free-free absorption, $\alpha=2.5$ for synchrotron self-absorption, or $\alpha = 0.33$ for electron injection) and confirm the nature of the break.
Supernova Models {#sec:sn}
----------------
It is very rare for a supernova to reach the radio luminosity we observe for A-2 [@PerezTorres+2015]. Type Ia SNe are never detected in the radio band, and common core-collapse supernova classes reach only $10^{26}$ erg s$^{-1}$ Hz$^{-1}$ (Type IIP supernovae) to $10^{28}$ erg s$^{-1}$ Hz$^{-1}$ (most Ib/c supernovae). However, some exotic classes of supernova can reach the luminosity scales in question.
*Relativistic* supernovae—a subclass of Ib/c events—entrain a significant amount of energy in jets traveling at relativistic velocities. The rarest and most luminous of these, long-duration gamma-ray bursts (GRBs), have extremely powerful on-axis jets whose interaction with their environments creates powerful afterglows with radio luminosities exceeding $10^{31}$ erg s$^{-1}$ Hz$^{-1}$ at peak and lasting for years [@Chandra+2012]. The more common mildly-relativistic versions may or may not possess strong narrow jets, but do accelerate a significant amount of matter to near $c$; examples include SN 2009bb or PTF 11qcj, which exhibited maximum radio luminosities around 10$^{29}$ erg s$^{-1}$ Hz$^{-1}$ [@Soderberg+2010; @Corsi+2016].
*Strongly interacting* (type IIn) supernovae are not relativistic but owe their luminosity to the collision of the expanding supernova envelope with massive, dense circumstellar matter ejected by the star prior to explosion. Only a few of these have been well-studied in the radio, but their luminosities are comparable to mildly-relativistic type Ic supernovae.
It would not be surprising to identify a supernova within the inner environs of Cygnus A. The star-formation rate is very large, perhaps as high as 80 M$_\odot$ yr$^{-1}$ [@Privon+2009; @Hoffer+2012], equivalent to a core-collapse supernova rate of approximately 0.7 yr$^{-1}$ [@Horiuchi+2011], so during any given observation it is likely that there are several young supernovae within a few years of explosion. However it is quite *un*likely that we would find a young *radio-luminous* supernova by sheer chance if star-formation in the Cygnus A host is similar to that in other galaxies. Cygnus A-2 would have to be *the* most radio-luminous non-GRB supernova ever recorded. Broad-lined SNe-Ib/c and SNe-IIn combined represent less than 5% of the core-collapse supernova rate in typical galaxies (@Arcavi+2010 [@Graur+2016]; true GRBs are orders of magnitude rarer still), and radio surveys of these classes indicate that less than 10% of either could be as luminous as our source [@Soderberg+2006; @Corsi+2016]. The chance that a blind observation of Cygnus A would identify an extremely radio-luminous SN within 10 years after explosion is therefore $\lesssim 5 \times 10^{-3}$.
This is far from impossible, and furthermore it is quite possible that gas-rich, high-density environments such as the Cygnus A nucleus may produce a different distribution of supernovae than more mundane star-forming environments. The most luminous known (probable) interacting radio supernova was also located in a dusty nuclear region around an AGN (Markarian 294A), for instance [@Yin+1994], and two examples of optically superluminous type IIn supernovae, SN2006gy and PTF10tpz, have also been found in the nuclear environments of massive galaxies, so it is possible that these environments are particularly friendly to very massive stars and energetic supernovae [@Perley+2016].
However, despite being quite nearby (77 Mpc) SN2006gy was undetected by the VLA [@Ofek+2007], and no radio detection of any other (optically) superluminous SN IIn has been reported to date to our knowledge. Also, larger-scale VLBA radio surveys of ULIRGs with dense, dusty star-forming nuclear environments similar to what is present in Cygnus A have not found clear evidence of an excess number of very luminous supernovae compared to what is expected given their high star-formation rates [@Lonsdale+2006]. Furthermore, optical surveys suggest that (if anything) the relative fractions of IIn and broad-lined Ib/c supernovae actually *decrease* in the most massive and metal-rich galaxies [@Arcavi+2010; @Modjaz+2011; @Graur+2016], as do GRBs [[[e.g.,]{} @Perley+2016a; @Japelj+2016; @Graham+2017]]{}.
A supernova model also is unnatural given our observations of the system to date. GRBs are the only stellar transients actually observed to have exceeded the observed luminosity of our source—but a GRB at this location would have to be many years old to not be varying at any frequency over a timescale of one year, in which case its luminosity would be surprising even for a gamma-ray burst. An old ($>$5 years), highly relativistic event would likewise be expected to have become resolved in VLBA imaging by this point, given the anticipated superluminal lateral expansion of the jet [@Taylor+2004].
Another serious problem with supernova models is the precise coincidence with an optical/IR point-source, which is unexpected and difficult to explain.[^1] This could represent a young and very dense star cluster (compact and luminous star clusters are known in the central Milky Way and in other luminous galaxies), with the Cygnus A transient representing one of the first supernovae from an extremely massive star inside of it. However, the narrow-band optical and spectroscopic NIR observations of this source show no evidence that it produces strong line emission, as would be expected from a young super star-cluster [@Jackson+1998; @Canalizo+2003]. Also, given that the probability of catching an ultra-rare supernova by chance in this galaxy is very low to begin with, the possibility that said SN would *also* happen to align with a particular single cluster and not originate in one of the many other abundant star-forming regions within Cygnus A makes this statistical difficulty even more problematic.
Further radio monitoring will be able to examine the supernova hypothesis more robustly. A multi-year radio light curve should unambiguously show fading for any GRB-like model (and likely for a IIn model as well), higher-frequency VLBA observations will place tighter constraints on jetted emission, and further adaptive optics or HST observations with modern instruments may be able to identify optical or NIR line emission from a late-stage nebular supernova. In the meantime, however, we will focus our attention on the alternative black hole model, for which a high radio luminosity and coincidence with a bright optical point source are natural expectations.
Accreting massive black hole models {#sec:smbh}
-----------------------------------
As Cygnus A-2 is clearly separate from the nucleus that powers the Cygnus A jets, it cannot be related to the primary supermassive black hole in this galaxy. However, there is no reason that a galaxy such as this one cannot harbor more than one large black hole. Indeed, although most giant elliptical galaxies at low redshift are starved of gas and largely devoid of star-formation, the extremely dust- and gas-rich nucleus of Cygnus A (and its rapidly-accreting active nucleus) suggests that a merger has delivered a large gas supply to the center of the galaxy in the relatively recent past. It seems quite plausible that the central supermassive black hole of this infalling satellite has not yet merged with the primary black hole. Accretion of gas onto such a secondary could produce luminous jets of relativistic matter, leading to variable emission on a variety of wavelengths and timescales. This could naturally produce a radio transient on a timescale of several years as well as the persistent optical/NIR emission seen in pre-existing observations.
### Secondary AGN
Our available constraints on the SED, luminosity, size, and variability timescale of Cygnus A-2 are all consistent with what has been observed previously from AGNs [[[e.g.,]{} @Ho+2008]]{}. An AGN model is also consistent with the short-wavelength observations: the optical/IR counterpart of A-2 could represent either the stripped remnant core of the merging galaxy (as originally proposed by @Canalizo+2003) or continuum emission from the AGN itself. The $K$-band near-infrared spectrum of this object (Figure 5 of @Canalizo+2003) does indeed appear quite similar to known AGNs (e.g. @Glikman+2006 [@RamosAlmeida+2009]), although because it is heavily contaminated by light from gas excited by Cygnus A itself it is not clear if these features are intrinsic to the point-source (@Canalizo+2003 note that the centroid of the line emission is offset from the centroid of the continuum). If AGN-dominated, the bolometric luminosity inferred from the NIR observations ($\nu L_\nu \sim 2\times10^{41}$ erg s$^{-1}$) is much less than Eddington for a central supermassive black hole (e.g., $\sim$0.002 $L_{\rm edd}$ for $M_{\rm BH} = 10^6\,{\rm M}_\odot$ and an assumed efficiency of $\eta=0.1$), similar to low-luminosity AGNs. Our unresolved VLBA observation rules out any highly relativistic jet emission, but the more typical mildly-relativistic low-luminosity AGNs are consistent with remaining point-like on this scale.
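The Eddington-ratio estimate quoted above can be checked with a short sketch; the assumed black-hole mass of $10^6\,{\rm M}_\odot$ is illustrative, as in the text.

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, M_sun, c, m_p, sigma_T

# Eddington ratio implied by the NIR-inferred bolometric luminosity
# (~2e41 erg/s from the text) for an assumed 1e6 solar-mass black hole.
L_bol = 2e41 * u.erg / u.s
M_bh = 1e6 * M_sun

L_edd = (4 * np.pi * G * M_bh * m_p * c / sigma_T).to(u.erg / u.s)
print((L_bol / L_edd).decompose())   # ~2e-3, i.e. strongly sub-Eddington
```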
The most distinctive property of A-2 is its rapid appearance: from nondetection to a clear detection over a timescale of less than nine years. Most AGNs vary in flux at some level but do not turn on (or off) completely. Of course, a varying source can move up and down across a detection threshold, and it is entirely possible that the source was present (and accreting) in the 1980s and 1990s at a lower level before brightening over the past decade. Unfortunately the limited sensitivity of 20th century radio facilities makes it difficult to constrain this precisely. A change by a factor of six is very large for a non-blazar AGN; for example, of the blindly-selected radio sources in Stripe 82 of the Sloan Digital Sky Survey examined by [@Hodge+2013], only 1% showed variability by more than a factor of a few on a decade timescale. It seems curious that, should a secondary AGN be present in Cygnus A, it would happen to be one of these strong variables. On the other hand, A-2 was discovered in a very different manner than that by which ordinary AGNs are selected and in an atypical environment, so the statistics on short-term variability based on blindly-selected AGNs may not be representative. Also, sudden changes in AGN fluxes and spectra at a variety of wavelengths have recently been seen in some well-studied nearby AGNs, including NGC2617 [@Shappee+2014], Mrk590 [@Koay+2016], and NGC660 [@Argo+2015], although only in the lattermost of these examples was a sharp increase observed at radio wavelengths.
### Tidal disruption event
Alternatively, a largely or completely quiescent AGN could suddenly become quite luminous as a result of a large-scale accretion event. The most extreme example of this would be the disruption and accretion of a star passing near the tidal radius (a tidal disruption event or TDE). A TDE model is also consistent with the available observational data; some TDEs have luminosities comparable to what we have observed for A-2 and this emission can persist for several years [[[e.g.,]{} @Zauderer+2011]]{}. However the expected TDE rate per galaxy is very low ($\sim10^{-4}$ yr$^{-1}$): as with the rare classes of supernovae discussed in the previous section, the probability that we would catch such an event by chance is quite low *unless* Cygnus A has an atypically high disruption rate. There is some reason to expect that this may indeed be the case: nearly all known TDEs have been localized to galaxies that have undergone recent mergers [@Arcavi+2014; @French+2016; @Lezhnin+2016], although these systems appear to be in much later stages in which the nuclear black holes have most likely already merged. The actual TDE rate in ongoing massive galaxy mergers similar to Cygnus A is very difficult to determine observationally, but the detection of a candidate TDE in a nearby ULIRG gives some reason to think that it may indeed be extremely high [@Tadhunter+2017].
### Mass constraints and IMBH/ULX
Direct observational constraints on the mass of the accreting black hole responsible for A-2 in the SMBH scenario are relatively weak. The lack of any reported large-scale dynamical disturbances close to this location from IR spectroscopy suggests that any black hole there must be significantly smaller than that of the primary Cygnus A SMBH ($M_{\rm primary} \sim 3\times10^9\,{\rm M}_\odot$; @Tadhunter+2003)[^2]. The pre-flare radio non-detection would imply an upper mass limit of $M_{\rm BH}\lesssim 10^8\,{\rm M}_\odot$ via the AGN $L_{\rm radio}-M_{\rm BH}$ correlation of [@Franceschini+1998]—however, more recent work does not confirm this correlation [@Nyland+2017], and it would not apply to the TDE scenario in any case.
The lower limit on the black hole mass is likewise not tightly bounded. If we assume that the black hole is the central BH of a merging companion galaxy then its mass is likely to be at least $>10^5\,{\rm M}_\odot$ (@Reines+2015; see also @Nyland+2012 [@Nguyen+2017]), especially if the companion was a massive galaxy responsible for delivering large amounts of gas to the Cygnus A nuclear region and triggering the ongoing starburst.[^3] In principle, however, our observations permit it to be smaller: Eddington-limit arguments based on the bolometric NIR luminosity limit the mass only to $M_{\rm BH} \gtrsim 2\times10^3\,{\rm M}_\odot$ (and no limit can be placed at all if the optical/IR counterpart’s luminosity is stellar in origin.)
An intermediate-mass black hole (IMBH) of this type could be pre-existing, or could have formed during the ongoing starburst in the center of a dense, young star cluster (but cf. the spectroscopic constraints on line emission from young stars in §\[sec:sn\]). No IMBHs have yet been clearly established to exist in the low-redshift universe, although ultra-luminous X-ray sources (ULXs) represent a possible candidate. ULXs are X-ray flaring events that have been hypothesized to be associated with accretion onto IMBHs in nearby star-forming galaxies [@Mezcua+2011]. However, known ULX flares have radio luminosities orders of magnitude lower than A-2 [@Mezcua+2011; @Webb+2012; @Pietka+2015], so if A-2 originates from an IMBH flare this event would have to be without observational precedent. We conclude that it is much more likely that the A-2 black hole is supermassive and originates from the center of a merging companion galaxy.
Implications {#sec:implications}
============
Regardless of the nature of Cygnus A-2, it is a rare class of object, and the serendipitous detection of such an event so close to one of the most active nuclei in the nearby Universe suggests some sort of physical connection to its local environment.
If it is a rare type of supernova, the detection in this unusual location suggests that star-formation in the Cygnus A inner environment proceeds in an unusual way that is particularly conducive to producing radio-luminous supernova explosions. This could be due to an altered initial mass function (IMF) with much larger numbers of ultra-massive stars, a larger fraction of massive stars in close binaries, an increased rate of rare explosions at very high metallicities, or some combination of these effects. If so, shallow radio supernova surveys of other extreme galaxy environments may be a fruitful means of finding other such transients.
If it is a tidal disruption event, this would confirm the inference of a hugely elevated rate ($\sim10^{-1}$ yr$^{-1}$) within ULIRG-like galaxies as suggested by [@Tadhunter+2017].
The most likely possibility, in our view, is that it represents an outburst from an active galactic nucleus due to a rapid increase in the accretion rate. The inferred order-of-magnitude increase in flux over a ten-year timespan is also somewhat unusual among blindly-selected quasars, and may indicate that the configuration of this system (a secondary, lower-mass supermassive black hole in a distant orbit around a larger one) is particularly conducive to this sort of variability.
Perhaps the most interesting implication of this model, however, is the possibility of a connection between A-2 and the Cygnus A primary with its powerful jet. Assuming a true offset close to the projected 460 pc and a mass interior to this of $\sim10^{10}\,{\rm M}_\odot$, the orbital timescale for the black hole is approximately $10^7$ yr. This is quite similar to the timescale over which the Cygnus A jet system has been active [@Alexander+1984; @Carilli+1991; @Kino+2005], a coincidence also noted by [@Canalizo+2003]. This may be due to chance, but it is conceivable that the connection is a direct one, with the secondary black hole (and its surrounding halo) playing a key role in triggering or regulating the inflow associated with the current jet episode via its gravitational influence [@Hernquist+1989].
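The orbital-timescale estimate is straightforward to verify; the sketch below assumes a circular Keplerian orbit at the projected separation with the quoted enclosed mass.

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, M_sun

# Rough Keplerian period for a secondary orbiting at the projected offset,
# assuming ~1e10 solar masses enclosed (both numbers as quoted in the text).
r = 460 * u.pc
M_enc = 1e10 * M_sun

P = 2 * np.pi * np.sqrt((r**3 / (G * M_enc)).to(u.s**2))
print(P.to(u.yr))   # ~1e7 yr, comparable to the age of the current jet episode
```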
Over a longer timescale, A-2 (and its former host galaxy) may also have been responsible for setting up the conditions necessary for the Cygnus A jet and nuclear starburst to form in the first place. Quasars and radio galaxies are widely believed to result from mergers, and while the presence of ongoing major mergers is often obvious from the morphology of the host galaxies of these objects, for many examples (including Cygnus A itself) the host provides no direct clue to the nature or even presence of a merging companion. If the active nucleus was at the center of the galaxy which delivered the gas to the center of Cygnus A’s elliptical host, further observations may be able to shed light on the age, mass, and nature of the merging galaxy responsible.
Our discovery may also help understand binary black hole evolution in a broader sense. Only a few sub-kpc binaries are known [@Max+2005; @Comerford+2015], despite the important implications of these systems ranging from the nanoHz gravity wave background to black hole growth and AGN feeding. The discovery that the most iconic powerful radio galaxy may be a binary SMBH argues that SMBH binarity may be more prevalent, and more important, than previously considered. Future high-resolution, high-dynamic-range imaging of massive, luminous galaxy mergers similar to Cygnus A could lead to the discovery of additional examples of close, active binaries. This would provide better constraints on the prevalence of this phenomenon and new insights into the SMBH inspiral process, its relation to the surrounding environment, and its implications for AGN triggering.\
0.2cm
D.A.P. acknowledges past support from a Marie Sklodowska-Curie Individual Fellowship within the Horizon 2020 European Union (EU) Framework Programme for Research and Innovation (H2020-MSCA-IF-2014-660113).
The National Radio Astronomy Observatory and the Long Baseline Observatory are facilities of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We acknowledge the use of imaging acquired from the Gemini Observatory Archive. Gemini Observatory is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership.
We thank T. Krichbaum and C. Max for providing us with their reduced archival observations. We also thank the anonymous referee for providing useful input and suggestions that improved the quality of the paper. We acknowledge useful comments and feedback from J. Comerford, K. Nyland, D. A. Kann, M. Vestergaard, and A. V. Filippenko.
[^1]: While the optical luminosity of the counterpart of A-2 ($m_K\approx-18$) is similar to that of SNe, it cannot be a supernova itself, as the point source is visible in both HST images from the mid-1990s and the Keck observations from 2002.
[^2]: This may not apply if A-2 originates from a fast-recoiling black hole associated with a previous merger [[[e.g.,]{} @Blecha+2008]]{}. As no other recoiling black holes are known, however, this seems unlikely in comparison to simply attributing the A-2 black hole to the central black hole of the galaxy responsible for the present merger event.
[^3]: The black hole could also be a pre-existing “stalled” SMBH delivered by an earlier minor merger [@Dosopoulou+2017].
|
---
abstract: 'Many-particle optical lattice clocks have the potential for unprecedented measurement precision and stability due to their low quantum projection noise. However, this potential has so far never been realized because clock stability has been limited by frequency noise of optical local oscillators. By synchronously probing two $^{87} \mbox{Sr}$ lattice systems using a laser with a thermal noise floor of $1 \times 10^{-15}$, we remove classically correlated laser noise from the intercomparison, but this does not demonstrate independent clock performance. With an improved optical oscillator that has a $1 \times 10^{-16}$ thermal noise floor, we demonstrate an order of magnitude improvement over the best reported stability of any independent clock, achieving a fractional instability of $1 \times 10^{-17}$ in 1000 s of averaging time for synchronous or asynchronous comparisons. This result is within a factor of 2 of the combined quantum projection noise limit for a 160 ms probe time with $\sim$10$^3$ atoms in each clock. We further demonstrate that even at this high precision, the overall systematic uncertainty of our clock is not limited by atomic interactions. For the second Sr clock, which has a cavity-enhanced lattice, the atomic-density-dependent frequency shift is evaluated to be $-3.11 \times 10^{-17}$ with an uncertainty of $8.2 \times 10^{-19}$.'
author:
- 'T.L. Nicholson, M.J. Martin, J.R. Williams, B.J. Bloom, M. Bishof'
- 'M.D. Swallows'
- 'S.L. Campbell, and J. Ye'
title: 'Comparison of Two Independent Sr Optical Clocks with $1\times10^{-17}$ Stability at 10$^3$ s'
---
Precise time keeping is foundational to technologies such as high-speed data transmission and communication, GPS and space navigation, and new measurement approaches for fundamental science. Given the increasing demand for better synchronization, more precise and accurate clocks are needed, motivating the active development of atomic clocks based on optical transitions. Several optical clocks have surpassed the systematic uncertainty of the primary Cs standard [@Bize2005; @Parker2010]. Two examples are the NIST trapped $\mbox{Al} ^{+}$ single ion clock, with a systematic uncertainty of $8.6 \times 10^{-18}$ [@Chou2010], and the JILA $^{87} \mbox{Sr}$ neutral atom lattice clock, at the $1.4 \times 10^{-16}$ level [@Ludlow2008; @Swallows2012]. The field of optical atomic clocks has been very active in recent years, with many breakthrough results coming from both the ion clock [@Oskay2006; @Chou2010; @Huntemann2012; @King2012; @Dube2005] and lattice clock [@Swallows2011; @Westergaard2011; @Takamoto2011; @Middelmann2012; @Yamaguchi2012; @Schiller2012] communities.
In principle, the stability of an optical lattice clock can surpass that of a single-ion standard because the simultaneous interrogation of many neutral atoms reduces the quantum projection noise (QPN) of the lattice clock [@Itano1993; @Santarelli1999]. QPN determines the standard quantum limit to the clock stability, and it can be expressed as
$$\sigma_{\mathrm{QPN}}(\tau) = \frac{\chi}{\pi Q} \sqrt{\frac{T_{c}}{N \tau}}.$$
Here, $\sigma_{\mathrm{QPN}}(\tau)$ is the QPN-limited fractional instability of a clock, $Q$ is the quality factor of the clock transition, $N$ is the number of atoms, $T_{c}$ is the clock cycle time, $\tau$ is the averaging time (in seconds), and $\chi$ is a numerical factor near unity that is determined by the line shape of the clock transition spectroscopy. In a typical lattice clock, $N$ is on the order of $10^{3}$. In the case of the $\mbox{Al}^{+}$ ion clock, $N=1$, and a fractional instability of $2.8 \times 10^{-15} / \sqrt{\tau}$ for a two-clock comparison has been demonstrated [@Chou2010]. For typical values of $T_{c}$ and $N$, a QPN-limited [$^{87}$Sr]{} lattice clock could potentially reach a given stability 500 times faster than the $\mbox{Al} ^{+}$ clock.
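To make the scaling of Eq. (1) concrete, the sketch below evaluates the QPN-limited instability for representative lattice-clock parameters; the linewidth, cycle time, and atom number are illustrative assumptions rather than the exact experimental values.

```python
import numpy as np

def sigma_qpn(tau, Q, N, T_c, chi=1.0):
    """Quantum-projection-noise-limited fractional instability, Eq. (1)."""
    return chi / (np.pi * Q) * np.sqrt(T_c / (N * tau))

# Illustrative parameters only (not the exact experimental values):
nu0 = 4.29e14       # 87Sr clock transition frequency in Hz
dnu = 5.0           # assumed Fourier-limited linewidth for a ~160 ms probe, Hz
Q = nu0 / dnu
N, T_c = 1000, 1.0  # atoms per clock and cycle time in s (assumed)

print(sigma_qpn(1.0, Q, N, T_c))      # ~1e-16 at 1 s
print(sigma_qpn(1000.0, Q, N, T_c))   # a few times 1e-18 at 1000 s
```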
Despite this promise, thus far the instability of lattice clocks has been far worse than the QPN limit. Instead, demonstrated lattice clock instability has been dominated by downsampled broadband laser noise (the Dick effect [@Santarelli1998]) at a few times $10^{-15} / \sqrt{\tau}$, similar to that of the best ion systems [@Ludlow2008; @Lodewyck2009; @Westergaard2011; @Takamoto2011]. To improve the precision of lattice clock systematic evaluations while avoiding the challenge of building more stable clock lasers, a synchronous interrogation method can be implemented [@Bize2000; @Takamoto2011]. Synchronous interrogation facilitates laser-noise-free differential measurements between two atomic systems; however, in this approach, these systems are not independent clocks.
In this work, we achieve instability at the $10^{-16}/\sqrt{\tau}$ level for two independent [$^{87}$Sr]{} optical lattice clocks. Using a new clock laser stabilized to a 40 cm optical reference cavity [@Swallows2012] with a thermal noise floor [@Notcutt2006] of $1 \times 10^{-16}$, we directly compare two independently operated [$^{87}$Sr]{} clocks. The combined stability of these clocks is within a factor of 2 of the QPN limit, reaching $1 \times 10^{-17}$ stability in only 1000 s. We also use synchronous interrogation to study the effect of laser noise on clock stability, demonstrating its effectiveness in removing correlated noise arising from a 7 cm cavity with a $1 \times 10^{-15}$ thermal noise floor. Operating with the 40 cm cavity, on the other hand, synchronous and asynchronous interrogations (the latter of which demonstrates independent clock performance) yield nearly the same measurement precision for a given averaging time.
This high measurement precision will permit much shorter averaging times for a range of applications, including investigations of systematic uncertainties in lattice clocks. In particular, we are able to characterize one of the most challenging systematics in a many-particle clock—the density-related frequency shift [@Campbell2009; @Swallows2011; @Lemke2011; @Ludlow2011]—at an uncertainty below $1 \times 10^{-18}$ for our second Sr clock. The only remaining major systematic uncertainty for lattice clocks is the blackbody-radiation-induced Stark shift [@Porsev2006; @Campbell2008; @Sherman2012; @Middelmann2012]. One can mitigate this effect by trapping atoms in a well-characterized blackbody environment or cold enclosure [@Middelmann2011].
Our previous clock comparisons involved referencing our first generation [$^{87}$Sr]{} clock (Sr1) to various clocks at NIST using a 3.5 km underground fiber optic link [@Ludlow2008; @Campbell2008]. To evaluate the stability of the [$^{87}$Sr]{} clock at the highest possible level, we constructed a second Sr clock (Sr2) for a direct comparison between two systems with similar performance. In both systems, [$^{87}$Sr]{} atoms are first cooled with a Zeeman slower and a magneto-optical trap (MOT) on a strong 30 MHz transition at 461 nm. Then a second MOT stage, operating on a 7.5 kHz intercombination transition at 689 nm, cools the atoms to a few $\mu$K. Atoms are then loaded from their 689 nm MOTs into 1D optical lattices and are nuclear spin polarized. The lattices operate near the “magic" wavelength at 813 nm where the differential AC Stark shift for the $^{1}S_{0}$ and $^{3}P_{0}$ clock states is identically zero [@Ye2008].
The lattice for Sr1 is made from the standing wave component of a retroreflected optical beam focused to a 32 $\mu \mbox{m}$ radius. The power in one direction of this beam is 140 mW, corresponding to measured trap frequencies of 80 kHz along the lattice axis and 450 Hz in the radial direction. From this trap frequency we estimate a 22 $\mu \mbox{K}$ trap depth. The Sr2 lattice utilizes an optical buildup cavity so that laser power in one direction of this lattice is 6 W. The cavity has a finesse of 120 and is mounted outside our vacuum chamber. The intracavity beam radius for this lattice is 160 $\mu \mbox{m}$, which yields a much greater trap volume. Trap frequencies in this lattice are 100 kHz and 120 Hz in the axial and radial directions, respectively. We estimate a 35 $\mu \mbox{K}$ trap depth for the cavity-enhanced lattice.
![(a) A cavity-stabilized diode laser is split and sent to each of the lattice clocks. To ensure that both clock laser beams have independent frequency control, Sr1 and Sr2 have separate AOMs. The two clocks have different lattice geometries: Sr1 uses a 1D retroreflected lattice and Sr2 uses a 1D cavity-enhanced lattice. The independent clock laser beams are locked to the ${}^{1}S_{0} \rightarrow {}^{3}P_{0}$ transition by feeding the measured clock transition frequency back to the rf frequencies of the AOMs. The rf frequencies are recorded to determine the difference between the two clocks. (b) Clock comparisons using our 7 cm vertical cavity-stabilized laser (top) required synchronizing the clock probe pulses to perform correlated spectroscopy. Clock comparisons with our lower noise 40 cm horizontal cavity (bottom) used asynchronous pulses to ensure independent clock operation. Synchronous measurements with the 40 cm cavity (not depicted) were also performed.[]{data-label="fig:laser_systems"}](fig_laser_systems.eps){width="\linewidth"}
The optical local oscillator for the Sr1 and Sr2 systems is derived from a common cavity-stabilized diode laser at 698 nm, but two different acousto-optic modulators (AOMs) provide independent optical frequency control for each system \[Fig. \[fig:laser\_systems\](a)\]. For all measurements presented in this Letter, we use Rabi spectroscopy with a 160 ms probe time, corresponding to a Fourier-limited linewidth of 5 Hz. For the stability measurements, we use 1000 atoms for Sr1 and 2000 atoms for Sr2. The optical frequency is locked to the clock transition using a digital servo that provides a correction to the AOM frequency for the corresponding clock.
To provide a quantitative understanding of the role of laser noise in our clock operations, we use two different clock lasers in our experiment. The first clock laser is frequency stabilized to a vertically oriented 7 cm long cavity with a thermal noise floor of $1 \times 10^{-15}$ [@Ludlow2007]. This 7 cm reference cavity was used in much of our previous clock work and represented the state-of-the-art in stable lasers until recently. The second laser is stabilized to a horizontal 40 cm long cavity with a thermal noise floor of $1 \times 10^{-16}$ \[Fig. \[fig:thermal\_noise\](a)\], which is similar to the record performance achieved with a silicon-crystal cavity [@Kessler2012]. The greater cavity length and use of fused silica mirror substrates both reduce the thermal noise floor of this laser [@Swallows2012]. Other significant improvements for the 40 cm system include a better vacuum, active vibration damping, enhanced thermal isolation and temperature control, and an improved acoustic shield.
![(a) The measured thermal noise floor of the two optical reference cavities. The stability of the 7 cm cavity (closed circles) was measured by comparing two cavities of the same design. For the 40 cm cavity (open circles), we determine its frequency stability from a measurement based on the atomic reference. We lock this laser to the [$^{87}$Sr]{} clock transition and subtract off a residual cavity drift of $\sim$1.4 mHz/s. These data include contributions from other technical noise and thus represent an upper bound on the thermal noise floor. (b) A scatter plot of the measured excitation fraction when the clock lasers are locked to the two Sr references. Each point represents the measured excitation fraction for Sr1 versus Sr2 for the same duty cycle. The blue points represent data taken under synchronous interrogation using the 7 cm reference cavity, showing a clear correlation arising from common-mode laser noise. The red points represent data taken under asynchronous interrogation with the low-noise 40 cm reference cavity, clearly indicating a lack of classical correlations. Instead, the distribution indicates near-QPN-limited performance for independent Sr1 and Sr2. The inset compares synchronous measurements using the 40 cm cavity (in green) with the asynchronous data using the same cavity. This distribution shows a slight correlation, indicating a small amount of residual laser noise.[]{data-label="fig:thermal_noise"}](fig_thermalnoise.eps){width="\linewidth"}
When comparing the two [$^{87}$Sr]{} systems using the 7 cm reference cavity, the probe pulses for the Sr1 and Sr2 clock transitions are precisely synchronized \[Fig. \[fig:laser\_systems\](b)\]. The responses of both digital atomic servos are also matched. This synchronous interrogation allows each clock to sample the same laser noise; therefore, the difference between the measured clock transition frequencies for Sr1 and Sr2 benefits from a common-mode rejection of the laser noise. Because of this common-mode laser noise, simultaneous measurements of the excitation fraction for the Sr1 and Sr2 atomic servos show classical correlations \[Fig. \[fig:thermal\_noise\](b)\], as evidenced by the distribution of these measurements in the shape of an ellipse stretched along the correlated (diagonal) direction. The minor axis of this distribution indicates uncorrelated noise such as QPN.
The 40 cm cavity supports a tenfold improvement in laser stability, and we estimate that the Dick effect contribution is close to that of QPN for clock operation. To test this, we operate the two clocks asynchronously, where the clock probes are timed such that the falling edge of the Sr1 pulse and the rising edge of the Sr2 pulse are always separated by 10 ms \[Fig. \[fig:laser\_systems\](b)\]. During this asynchronous comparison, the two clocks sample different laser noise, preventing common-mode laser noise rejection. The Sr1–Sr2 excitation fraction scatter plot \[Fig. \[fig:thermal\_noise\](b)\] resembles a 2D Gaussian distribution, which is consistent with both clocks being dominated by uncorrelated white noise. Synchronous comparisons with the 40 cm cavity were also performed, indicating a similar distribution for the scatter plot of the Sr1 vs. Sr2 excitation \[Fig. \[fig:thermal\_noise\](b) inset\].
With this understanding of laser noise effects in our clocks, we now evaluate the clock stability. In the short term ($<$100 s) the clock stability is limited by laser noise and QPN, and in the long term ($\sim$1000 s) it is limited by drifting systematic shifts. Using the 40 cm cavity, we measure the short- and long-term stability in two ways. The first approach combines information from both a self-comparison and a synchronous comparison to infer the full stability of our clocks \[Fig. \[fig:comparison\](a)\]. A self-comparison involves comparing two independent atomic servos on the Sr2 system [@Swallows2011]. Updates for these two digital servos alternate for each experimental cycle. Thus the difference between these servo frequencies is sensitive to the Dick effect and QPN and therefore represents the short-term stability of an independent clock [@Jiang2011; @Hagemann2012]; however, it does not measure the clock’s long-term stability as it is insensitive to all drifts at time scales greater than 5 s. The other component of this approach, the synchronous comparison, is sensitive to long-term drifts on either system, but in the short term it is free of correlated laser noise. Together these two data sets provide a complete picture of our clock’s short- and long-term stability, and the small difference between them after about 10 s implies that our clocks are only minimally affected by correlated noise.
![(a) The Allan deviation of a synchronous comparison (closed circles) between Sr1 and Sr2 with the low-noise 40 cm cavity. The self-comparison (open circles) is $(\nu_{1} - \nu_{2})/\sqrt{2}$, where $\nu_{1}$ and $\nu_{2}$ are the frequencies to which the two servos are locked. Dividing $(\nu_{1} - \nu_{2})$ by $\sqrt{2}$ extrapolates the self-comparison stability to the expected performance of a comparison between the Sr2 system and an identical clock. The dashed line indicates the QPN limit. (b) An asynchronous comparison between the two Sr clocks (also taken with the 40 cm cavity). The Allan deviation of the comparison fits to $4.4 \times 10^{-16} / \sqrt{\tau}$. The estimated Dick effect is roughly equal to the predicted QPN of $2.0 \times 10^{-16} / \sqrt{\tau}$ (dashed line). The inset depicts typical scans of the clock transition (open circles). The red line is a fit to the data using the Rabi model. All stability data shown in this work represent the combined stability of the two systems. To infer a single clock stability, one would need to divide all the data by $\sqrt{2}$.[]{data-label="fig:comparison"}](fig_comparison.eps){width="\linewidth"}
In the second approach, we measure the full stability of our clock with an asynchronous comparison, which is sensitive to both short- and long-term instability. Beyond the atomic servo response time ($>$20 s), an asynchronous comparison reflects the performance of two independent clocks. Analysis of the Dick effect for our asynchronous pulse sequence (and a thermal-noise-limited local oscillator) shows that our asynchronous comparison reproduces independent clock performance within 6%. The Allan deviation of the comparison signal is shown in Fig. \[fig:comparison\](b). These results demonstrate that one or both of our [$^{87}$Sr]{} clocks reach the $1 \times 10^{-17}$ level in 1000 s, representing the highest stability for an individual clock and marking the first demonstration of a comparison between independent neutral-atom optical clocks with a stability well beyond that of ion systems.
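The stability curves discussed here are Allan deviations of the recorded frequency-difference data. A minimal sketch of such an estimate is given below; the non-overlapping estimator, the toy white-noise record, and its sampling at one point per cycle are assumptions for illustration, not the analysis actually used for Fig. \[fig:comparison\].

```python
import numpy as np

def allan_dev(y, m):
    """Non-overlapping Allan deviation of fractional frequency samples y
    (one sample per cycle) at an averaging time of m cycles."""
    n_blocks = len(y) // m
    means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

# Toy Sr1-Sr2 fractional frequency difference, assumed white at the level
# quoted in the text (a real analysis would use the recorded servo data).
rng = np.random.default_rng(1)
y_diff = 4.4e-16 * rng.standard_normal(100_000)
for m in (1, 10, 100, 1000):
    print(m, allan_dev(y_diff, m))    # falls off roughly as 1/sqrt(m)
```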
The enhanced stability of many-particle clocks can come at the price of higher systematic uncertainty due to density-dependent frequency shifts, which arise from atomic interactions. This shift has received a great deal of attention in recent years, with experiments and theory centered around schemes for explaining and minimizing this effect for optical lattice clocks [@Swallows2011; @Gibble2009; @Rey2009; @Lemke2011; @Yu2010]. To operate at lower densities, the Sr2 system employs a large volume optical lattice created by a buildup cavity that results in a lower density shift than Sr1 [@Westergaard2011]. The large lattice volume also allows Sr2 to trap as many as 50 000 atoms under typical experimental conditions.
We measure the Sr2 density shift with Rabi spectroscopy and synchronous interrogation, using our 7 cm cavity. In this case where laser noise dominates the single-clock instability, synchronous interrogation allows us to evaluate this systematic an order of magnitude faster than we could without the Sr1 reference. This measurement alternates between two independent atomic servos, one addressing a high atom number sample, $N_{\mathrm{high}}$, and one addressing a low atom number $N_{\mathrm{low}}$. The first (second) servo measures a frequency $\nu_{\mathrm{high}}$ ($\nu_{\mathrm{low}}$), and the corresponding $N_{\mathrm{high}}$ ($N_{\mathrm{low}}$) is recorded during each cycle. For a frequency shift that is linear in density, the quantity $(\nu_{\mathrm{high}} - \nu_{\mathrm{low}})/(N_{\mathrm{high}} - N_{\mathrm{low}})$ is the slope of the shift. For our greatest modulation amplitude of $\Delta N = N_{\mathrm{high}} - N_{\mathrm{low}} \simeq 47 000$, we determine that the uncertainty in the density shift per 2000 atoms (corresponding to an average density of 2 to 3 $\times ~ 10^{9} ~\mbox{cm}^{-3}$) reaches the $1 \times 10^{-18}$ level in 1000 s \[Fig. \[fig:density\] inset\].
![The measured Sr2 density shift as a function of $\Delta N$. Each point on this plot represents an average over a bin of 30 measurements of $(\nu_{\mathrm{high}} - \nu_{\mathrm{low}})/(N_{\mathrm{high}} - N_{\mathrm{low}})$. Our statistics show that the density shift for our trapping conditions is linear within our quoted uncertainty of $8.2 \times 10^{-19}$. **Inset**: A single 2000 s long density shift measurement with $\Delta N \simeq 41 000$. The shift per atom was measured and then scaled up to 2000 atoms for a typical running condition. This measurement shows that a single density shift evaluation for 1000 s using a large atom number modulation is sufficient for a $1 \times 10^{-18}$ clock.[]{data-label="fig:density"}](fig_density.eps){width="\linewidth"}
To verify that the shift is linear in atom number, we vary $\Delta N$ by changing $N_{\mathrm{high}}$ while setting $N_{\mathrm{low}}$ to 2000–3000 atoms \[Fig. \[fig:density\]\]. We analyze the density shift data using the statistical analysis from our previous work [@Swallows2011]. Our error bars are inflated by the square root of the reduced chi-square statistic $\chi_{\mathrm{red}}^{2}$ calculated for a model in which the density shift is directly proportional to our atom number. For this measurement, $\sqrt{\chi_{\mathrm{red}}^{2}} = 1.3$. The $\chi_{\mathrm{red}}^{2}$ statistic can differ from unity due to drifts in the calibration of the fluorescence signal used to measure our atom number, slight variations in the optical trapping conditions, or departures from a proportional model. We determine the Sr2 density shift to be $(-3.11 \pm 0.08) \times 10^{-17}$ at 2000 atoms. At this atom number, the total shift is sufficiently small that our clock is stable at the $1 \times 10^{-18}$ level in the presence of typical atom number drifts.
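A minimal sketch of the density-shift analysis described above is given below; the array names are placeholders for the per-cycle servo records, and the inflation of the error bar by $\sqrt{\chi_{\mathrm{red}}^{2}}$ is omitted.

```python
import numpy as np

def density_shift_slope(nu_high, nu_low, N_high, N_low, bin_size=30):
    """Per-cycle shift slope (nu_high - nu_low)/(N_high - N_low), averaged in
    bins of `bin_size` measurements, with the standard error over the bins."""
    slope = (nu_high - nu_low) / (N_high - N_low)
    n_bins = len(slope) // bin_size
    binned = slope[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    return binned.mean(), binned.std(ddof=1) / np.sqrt(n_bins)

# mean_slope, err_slope = density_shift_slope(nu_high, nu_low, N_high, N_low)
# shift_at_2000_atoms  = 2000 * mean_slope   # scaling to the typical operating point
```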
In summary, we have demonstrated comparisons between two independent optical lattice clocks with a combined instability of $4.4 \times 10^{-16} / \sqrt{\tau}$, with a single clock demonstrating $1 \times 10^{-17}$ level instability at 1000 s. We have also determined the density-dependent frequency shift uncertainty in our cavity-enhanced lattice at $8.2 \times 10^{-19}$, with single measurements averaging down to the $1 \times 10^{-18}$ level in 1000 s.
We acknowledge funding support from NIST, NSF, and DARPA. JRW is supported by NRC RAP. MB acknowledges support from NDSEG. SLC acknowledges support from NSF. We thank X. Zhang and W. Zhang for technical contributions.
---
abstract: |
A tree $T$ in an edge-colored graph is called a [*proper tree*]{} if no two adjacent edges of $T$ receive the same color. Let $G$ be a connected graph of order $n$ and $k$ be an integer with $2\leq k \leq n$. For $S\subseteq V(G)$ and $|S| \ge 2$, an $S$-tree is a tree containing the vertices of $S$ in $G$. A set $\{T_1,T_2,\ldots,T_\ell\}$ of $S$-trees is called *internally disjoint* if $E(T_i)\cap E(T_j)=\emptyset$ and $V(T_i)\cap V(T_j)=S$ for $1\leq i\neq j\leq \ell$. For a set $S$ of $k$ vertices of $G$, the maximum number of internally disjoint $S$-trees in $G$ is denoted by $\kappa(S)$. The $\kappa$-connectivity $\kappa_k(G)$ of $G$ is defined by $\kappa_k(G)=\min\{\kappa(S)\mid S$ is a $k$-subset of $V(G)\}$. For a connected graph $G$ of order $n$ and for two integers $k$ and $\ell$ with $2\le k\le n$ and $1\leq \ell \leq \kappa_k(G)$, the *$(k,\ell)$-proper index $px_{k,\ell}(G)$* of $G$ is the minimum number of colors that are needed in an edge-coloring of $G$ such that for every $k$-subset $S$ of $V(G)$, there exist $\ell$ internally disjoint proper $S$-trees connecting them. In this paper, we show that for every pair of positive integers $k$ and $\ell$ with $k \ge 3$, there exists a positive integer $N_1=N_1(k,\ell)$ such that $px_{k,\ell}(K_n) = 2$ for every integer $n \ge N_1$, and also there exists a positive integer $N_2=N_2(k,\ell)$ such that $px_{k,\ell}(K_{m,n}) = 2$ for every integer $n \ge N_2$ and $m=O(n^r)$ $(r \ge 1)$. In addition, we show that for every $p \ge c\sqrt[k]{\frac{\log_a n}{n}}$ ($c \ge 5$), $px_{k,\ell}(G_{n,p})\le 2$ holds almost surely, where $G_{n,p}$ is the Erdős-Rényi random graph model.\
**Keywords:** proper tree; proper index; random graphs; threshold function.\
**AMS subject classification 2010:** 05C15, 05C40, 05C80, 05D40.\
author:
- |
Hong Chang$^1$, Xueliang Li$^1$, Colton Magnant$^2$, Zhongmei Qin$^1$\
[$^1$Center for Combinatorics and LPMC]{}\
[Nankai University, Tianjin 300071, P.R. China]{}\
[Email: changh@mail.nankai.edu.cn, lxl@nankai.edu.cn, qinzhongmei90@163.com]{}\
[$^2$Department of Mathematical Sciences]{}\
[Georgia Southern University, Statesboro, GA 30460-8093, USA]{}\
[Email: cmagnant@georgiasouthern.edu]{}
title: 'The $(k,\ell)$-proper index of graphs[^1]'
---
Introduction
============
All graphs in this paper are undirected, finite and simple. We follow [@BM] for graph theoretical notation and terminology not described here. Let $G$ be a nontrivial connected graph with an associated [*edge-coloring*]{} $c : E(G)\rightarrow \{1, 2, \ldots, r\}$, $r \in \mathbb{N}$, where adjacent edges may have the same color. If adjacent edges of $G$ are assigned different colors by $c$, then $c$ is a [*proper coloring*]{}. The minimum number of colors needed in a proper coloring of $G$ is referred to as the *chromatic index* of $G$ and denoted by $\chi'(G)$. A path of $G$ is said to be a [*rainbow path*]{} if no two edges on the path receive the same color. The graph $G$ is called [*rainbow connected*]{} if for every pair of distinct vertices there is a rainbow path of $G$ connecting them. An edge-coloring of a connected graph is a [*rainbow connecting coloring*]{} if it makes the graph rainbow connected. This concept of rainbow connection of graphs was introduced by Chartrand et al. [@CJMZ] in 2008. The *rainbow connection number* $rc(G)$ of a connected graph $G$ is the smallest number of colors that are needed in order to make $G$ rainbow connected. The readers who are interested in this topic can see [@LSS; @LS] for a survey.
In [@COZ], Chartrand et al. generalized the concept of rainbow connection to rainbow index. First, we recall the concept of generalized connectivity. Let $G$ be a connected graph of order $n$ and $k$ be an integer with $2\leq k \leq n$. For $S\subseteq V(G)$ and $|S| \ge 2$, an $S$-tree is a tree containing the vertices of $S$ in $G$. A set $\{T_1,T_2,\ldots,T_\ell\}$ of $S$-trees is called *internally disjoint* if $E(T_i)\cap E(T_j)=\emptyset$ and $V(T_i)\cap V(T_j)=S$ for every pair of distinct integers $i,j$ with $1\leq i, j\leq \ell$. For a set $S$ of $k$ vertices of $G$, the maximum number of internally disjoint $S$-trees in $G$ is denoted by $\kappa(S)$. The $\kappa$-connectivity $\kappa_k(G)$ of $G$ is defined by $\kappa_k(G)=\min\{\kappa(S)\mid S$ is a $k$-subset of $V(G)\}$. We refer to the book [@LM] for more details about the generalized connectivity.
A tree $T$ in an edge-colored graph is a [*rainbow tree*]{} if no two edges of $T$ have the same color. Let $G$ be a connected graph of order $n$ and let $k,\ell$ be two positive integers with $2\le k\le n$ and $1\leq \ell \leq \kappa_k(G)$. The *$(k,\ell)$-rainbow index* of a connected graph $G$, denoted by $rx_{k,\ell}(G)$, is the minimum number of colors that are needed in an edge-coloring of $G$ such that for every $k$-subset $S$ of $V(G)$, there exist $\ell$ internally disjoint rainbow $S$-trees connecting them. Recently, many relevant results have been published in [@CLS; @CLS2; @CLS3]. In particular, for $\ell=1$, we write $rx_k(G)$ for $rx_{k,1}(G)$ and call it the *$k$-rainbow index* of $G$ (see [@CLZ; @QXY; @CLYZ]).
Motivated by rainbow coloring and proper coloring in graphs, Andrews et al. [@ALLZ] and Borozan et al. [@BFGMMMT] introduced the concept of proper-path coloring. Let $G$ be a nontrivial connected graph with an edge-coloring. A path in $G$ is called a *proper path* if no two adjacent edges of the path are colored with the same color. An edge-colored graph $G$ is [*proper connected*]{} if any two vertices of $G$ are connected by a proper path. For a connected graph $G$, the [*proper connection number*]{} of $G$, denoted by $pc(G)$, is defined as the smallest number of colors that are needed in order to make $G$ proper connected. The [*$k$-proper connection number*]{} of a connected graph $G$, denoted by $pc_k(G)$, is the minimum number of colors that are needed in an edge-coloring of $G$ such that every two distinct vertices of $G$ are connected by $k$ internally pairwise vertex-disjoint proper paths. For more details, we refer to [@GLQ; @LWY] and a dynamic survey [@LC].
Recently, Chen et al. [@CLL] introduced the concept of $k$-proper index of a connected graph $G$. A tree $T$ in an edge-colored graph is a [*proper tree*]{} if no two adjacent edges of $T$ receive the same color. Let $G$ be a connected graph of order $n$ and $k$ be a fixed integer with $2\le k\le n$. An edge-coloring of $G$ is called a *$k$-proper coloring* if for every $k$-subset $S$ of $V(G)$, there exists a proper $S$-tree in $G$. For a connected graph $G$, the *$k$-proper index* of $G$, denoted by $px_k(G)$, is defined as the minimum number of colors that are needed in a $k$-proper coloring of $G$. In [@CLQ], we gave some upper bounds for the 3-proper index of graphs.
A natural idea is to introduce the concept of the $(k,\ell)$-proper index. Let $G$ be a nontrivial connected graph of order $n$ and size $m$. Given two integers $k,\ell$ with $2\le k\le n$ and $1\leq \ell \leq \kappa_k(G)$, the *$(k,\ell)$-proper index* of a connected graph $G$, denoted by $px_{k,\ell}(G)$, is the minimum number of colors that are needed in an edge-coloring of $G$ such that for every $k$-subset $S$ of $V(G)$, there exist $\ell$ internally disjoint proper $S$-trees connecting them. From the definition, it follows that $$1 \le px_{k,\ell}(G) \le \min\{rx_{k,\ell}(G), \chi'(G) \}\le m.$$ Clearly, $px_{2,\ell}(G)=pc_\ell(G)$ and $px_{k,1}(G)=px_k(G)$.
Let us give an overview of the rest of this paper. In Section 2, we study the $(k,\ell)$-proper index of complete graphs using two distinct methods. We show that there exists a positive integer $N_1=N_1(k,\ell)$ such that $px_{k,\ell}(K_n)=2$ for every integer $n\geq N_1$. In Section 3, we turn to investigate the $(k,\ell)$-proper index of complete bipartite graphs by the probabilistic method [@AS]. Similarly, we prove that there exists a positive integer $N_2=N_2(k,\ell)$ such that $px_{k,\ell}(K_{n,n})=2$ for every integer $n\geq N_2$. Furthermore, we can extend the result about $K_{n,n}$ to the more general complete bipartite graph $K_{m,n}$, where $m=O(n^r)$, $r\in\mathbb{R}$ and $r\geq 1$. In Section 4, we show that for every $p \ge c\sqrt[k]{\frac{\log_a n}{n}}$ ($c \ge 5$), $px_{k,\ell}(G_{n,p})\le 2$ holds almost surely, where $G_{n,p}$ is the Erdős-Rényi random graph model [@ER].
Complete graphs
===============
In this section, we will investigate the $(k,\ell)$-proper index of complete graphs. Firstly, we state a useful result about the $k$-connectivity of $K_n$ and present some preliminary results.
\[thm0\][@COZ] For every two integers $n$ and $k$ with $2 \leq k \leq n$, $\kappa_k(K_n)=n-\lceil\frac{k}{2}\rceil$.
[@CLL] Let $G=K_n$ and $k$ be an integer with $3 \le k \le n$. Then $px_{k,1}(G)=px_k(G)=2$.
\[thm1\][@BFGMMMT] Let $G=K_n$, $n\geq4$ and $\ell\ge 2$. If $n\geq2\ell$, then $px_{2,\ell}(G)=pc_\ell(G)=2$.
Based on the previous results, we prove the following.
For every integer $n\ge 4$, $px_{3,2}(K_n)=2$.
[**Case 1.**]{} $n=2p$ for $p \ge 2$.
Take a Hamiltonian cycle $C=v_1,v_2, \ldots, v_{2p}$ of $K_n$ and denote the $v_sv_t$-path in clockwise direction contained in $C$ by $C[v_s,v_t]$. Next, we will provide an edge-coloring of $K_n$ with 2 colors such that for any three vertices of $K_n$, there are two internally disjoint proper trees connecting them. We alternately color the edges of $C$ with colors 1 and 2 starting with color 2, and color the rest of the edges with color 1. Let $S$ be any 3-subset of $V(K_n)$, without loss of generality, we assume that $S=\{v_i, v_j , v_h\}$ with $1 \le i < j < h \le 2p$. It is easy to see that the Hamiltonian cycle $C$ is partitioned into three segments $C[v_i,v_j]$, $C[v_j,v_h]$ and $C[v_h, v_i]$. If $2p+i-h =1$, then we get that $i=1$ and $h=2p$. Note that the edges $v_1v_2$ and $v_{2p-1}v_{2p}$ are colored with color 2 and the edges $v_{2p}v_1$ and $v_{2p-1}v_1$ are colored with color 1. Thus, there are two internally disjoint proper $S$-trees $P_1=v_{2p}v_1C[v_1,v_j]$ and $P_2=v_jv_{2p}v_{2p-1}v_1$ (if $v_j=v_{2p-1}$, then $P_2=v_{2p}v_{2p-1}v_1$). Now we suppose that $2p+i-h \ge 2$. If the edge incident to $v_i$ with color 2 is $v_iv_{i+1}$, then the two internally disjoint proper $S$-trees are $v_hv_iC[v_i,v_j]$ and $C[v_j,v_h]C[v_h,v_i]$. Otherwise, the edge incident to $v_i$ with color 2 is $v_{i}v_{i-1}$. Thus, the two internally disjoint proper $S$-trees are $v_hv_iv_{i-1}v_j$ and $C[v_i,v_j]C[v_j,v_h]$.
[**Case 2.**]{} $n=2p+1$ for $p \ge 2$.
Let $H$ be a complete subgraph of $K_n$ with vertex set $\{v_1, v_2, \ldots, v_{2p}\}$. Color the edges of $H$ as above, and color the edge $v_{2p+1}v_i$ ($1\leq i\leq 2p$) with color 1 for $i$ odd and with color 2 for $i$ even. It is easy to see that for any three vertices of $V(H)$, there are two internally disjoint proper trees connecting them. Now we assume that $S=\{v_i, v_j ,v_{2p+1}\}$ ($1 \le i < j \le 2p$). Since there are two internally disjoint proper paths connecting $v_i$ and $v_j$ in the Hamiltonian cycle $C=v_1,v_2, \ldots, v_{2p}$, it follows that there are two internally disjoint proper $S$-trees in $K_n$.
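The Case 1 coloring can be generated and checked mechanically. The following sketch (not part of the proof) labels vertex $v_i$ by the index $i-1$ and verifies one of the proper $S$-trees used above in $K_8$.

```python
from itertools import combinations

def color_K2p(p):
    """Case 1 coloring of K_{2p}: alternate colors 2,1 along the Hamiltonian
    cycle v_1 v_2 ... v_{2p} (starting with color 2); all chords get color 1."""
    n = 2 * p
    color = {}
    for i in range(n):                                 # cycle edge v_{i+1} v_{i+2}
        color[frozenset((i, (i + 1) % n))] = 2 if i % 2 == 0 else 1
    for e in combinations(range(n), 2):                # remaining chords
        color.setdefault(frozenset(e), 1)
    return color

def is_proper_path(path, color):
    """Check that consecutive edges of the vertex sequence `path` get distinct colors."""
    cols = [color[frozenset((path[i], path[i + 1]))] for i in range(len(path) - 1)]
    return all(a != b for a, b in zip(cols, cols[1:]))

c = color_K2p(4)                                       # K_8
print(is_proper_path([7, 0, 1, 2, 3], c))              # P_1 = v_8 v_1 C[v_1, v_4] for S = {v_1, v_4, v_8}: True
```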
For every integer $n\ge 4$, $px_{n-1,2}(K_n)=2$.
It is well known that if $n$ is even, then $K_n$ can be factored into $\frac{n}{2}$ Hamiltonian paths $\{P_1,P_2,\ldots,P_{\frac{n}{2}}\}$; if $n$ is odd, then $K_n$ can be factored into $\frac{n-1}{2}$ Hamiltonian cycles $\{C_1,C_2,\ldots,C_{\frac{n-1}{2}}\}$. Let $S$ be any $(n-1)$-subset of $V(K_n)$. Without loss of generality, assume that $\{v\}=V(K_n)\setminus S$. If $n$ is even, then for each Hamiltonian path $P_i$, we alternately color the edges of $P_i$ with color 1 and 2. Notice that each vertex will be an end-vertex of a Hamiltonian path. Let $P_i$ be the Hamiltonian path which contains $v$ as one of its end-vertices. Thus, there are two internally disjoint proper $S$-trees $P_i-v$ and $P_j$ $(i\neq j)$. If $n$ is odd, then for each Hamiltonian cycle $C_i$, we alternately color the edges of $C_i$ with color 1 and 2 such that the edges incident to $v_1$ in each cycle are colored the same. If $v=v_1$, then there are two internally disjoint proper $S$-trees $C_1-v_1$ and $C_2-v_1$. Now we suppose $v \ne v_1$. Let $C_i$ be the Hamiltonian cycle in which $v$ is adjacent to $v_1$, and let $v'$ be one of the neighbours of $v_1$ in another Hamiltonian cycle $C_j (i \ne j)$. Thus, there are two internally disjoint proper $S$-trees $C_i- v$ and $C_j - v_1v'$. Hence, $px_{n-1,2}(K_n)=2$.
\[thm2\] For every two integers $n$ and $\ell$ with $n\geq2\ell$, $px_{n,\ell}(K_n)=2$.
It is obvious that $px_{n,\ell}(K_n)\geq2$. To show the reverse inequality, we will provide an edge-coloring of $K_n$ with 2 colors such that there are at least $\ell$ internally disjoint spanning proper trees in $K_n$. It is well known that if $n$ is even, then $K_n$ can be factored into $\frac{n}{2}$ Hamiltonian paths; if $n$ is odd, then $K_n$ can be factored into $\frac{n-1}{2}$ Hamiltonian cycles. In either case, $K_n$ contains $\lfloor\frac{n}{2}\rfloor$ pairwise edge-disjoint Hamiltonian paths. Thus, for each Hamiltonian path, we alternately color the edges using colors 1 and 2 starting with color 1. If any edges remain uncolored, then color them with color 1. It is easy to see that there exist $\lfloor\frac{n}{2}\rfloor \ge \ell$ internally disjoint spanning proper trees. Hence, $px_{n,\ell}(K_n)\leq2$. This completes the proof.
**Remark:** Theorem \[thm2\] is best possible in the sense of the order of $K_n$. It follows from Theorem \[thm0\] that there are at most $\lfloor\frac{n}{2}\rfloor$ internally disjoint trees connecting $n$ vertices of $K_n$. Thus, $\ell\leq \lfloor\frac{n}{2}\rfloor$.
These results naturally lead to the following question: for general integers $k, \ell$ with $3\leq k\leq n-1$, does there exist a positive integer $N_1=N_1(k,\ell)$ such that $px_{k,\ell}(K_n)=2$ for every integer $n\geq N_1$? Now, we use two distinct methods to answer this question. Although the result of Theorem \[Thm:Kn-2-Explicit\] is stronger than the result of Theorem \[thm3\], we include both as a demonstration of the variety of possible approaches to this kind of question.
In order to prove Theorem \[Thm:Kn-2-Explicit\], we need the following result of Sauer as presented in [@BB].
[@S] \[Thm:Sauer\] Given $\delta \geq 3$ and $g \geq 3$, for any $$m \geq \frac{(\delta - 1)^{g - 1} - 1}{\delta - 2},$$ there exists a $\delta$-regular graph $G$ of order $2m$ with girth at least $g$.
By removing a vertex from the graph provided by Theorem \[Thm:Sauer\], we obtain an almost regular graph of odd order, still with girth at least $g$, but the degree of some vertices is $\delta - 1$. Thus, we replace $\delta$ with $\delta + 1$ in Theorem \[Thm:Sauer\] to obtain the following easy corollary.
\[Cor:Sauer\] Given $\delta \geq 3$ and $g \geq 3$, for any $$n \geq 2\frac{\delta^{g - 1} - 1}{\delta - 1} - 1,$$ there exists a graph $G$ of order $n$ with $\delta(G) \geq \delta$ and girth at least $g$.
\[Thm:Kn-2-Explicit\] Let $k \geq 3$ and $\ell \geq 1$. For all $n$ with $$n \geq 2 \frac{ (\ell(k - 1) + k)^{4} - 1 }{(\ell + 1)(k - 1)} - 1,$$ we have $px_{k, \ell}(K_{n}) = 2$.
First note that a proper tree using only two colors must be a path. This means that the goal of this result is to produce an edge-coloring with two colors of a complete graph in which any set of $k$ vertices is contained in $\ell$ internally disjoint proper paths.
By Corollary \[Cor:Sauer\] with $g = 5$ and $\delta = \ell(k - 1) + k$, we have that for any $$n \geq 2 \frac{ (\ell(k - 1) + k)^{4} - 1 }{(\ell + 1)(k - 1)} - 1,$$ there exists a graph $H$ with $n$ vertices, girth at least $5$, and minimum degree at least $\ell(k - 1) + k$. Color the edges of $H$ red and color the complement of $H$ blue to complete a coloring of $G = K_{n}$.
Let $S$ be any set of $k$ vertices in this graph, say with $S = \{v_{1}, v_{2}, \dots, v_{k}\}$. We say that a path $P$ is $S$-alternating if each odd vertex of $P$ is in $S$ while each even vertex of $P$ is in $G \setminus S$. Suppose that we have constructed $t \geq 0$ proper paths that are $S$-alternating, each contain all of $S$, and are vertex-disjoint aside from the vertices of $S$. If $t = \ell$, this is the desired system of paths so suppose $t < \ell$ and choose the constructed set of paths so that $t$ is as large as possible.
Further suppose we have constructed an additional $S$-alternating proper path $P^{i}$ from $v_{1}$ to $v_{i}$ using red edges of the form $v_{j}w_{j}$ for each $j < i$ where $w_{j} \in G \setminus S$ and blue edges of the form $w_{j}v_{j + 1}$. If $i = k$, this contradicts the choice of $t$ so suppose $i < k$ and further choose this constructed path so that $i$ is as large as possible. Note that at most $\ell(k - 1) - 1$ vertices of $G \setminus S$ have been used in these existing paths.
Incident to $v_{i}$, there are at least $\delta - (\ell (k - 1) - 1) - (k - 1) \geq 2$ red edges to vertices of $G \setminus S$ that have not already been used in paths. Let $x$ and $y$ be the opposite ends of these edges. At most one of $x$ and $y$, say $x$, may have a red edge to $v_{i + 1}$ since the red graph was constructed to have girth $5$. This means that the proper path $P^{i}$ can be extended to $P^{i + 1}$ by including the red edge $v_{i}y$ and the blue edge $yv_{i + 1}$. This contradiction completes the proof.
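For concreteness, the order required by Theorem \[Thm:Kn-2-Explicit\] is easy to evaluate; the helper below is only an illustration of the stated bound.

```python
def N1_bound(k, ell):
    """Sufficient order from the theorem: with d = ell*(k-1) + k,
    any n >= 2*(d**4 - 1)/((ell + 1)*(k - 1)) - 1 works."""
    d = ell * (k - 1) + k
    return 2 * (d ** 4 - 1) // ((ell + 1) * (k - 1)) - 1   # the division is exact

print(N1_bound(3, 1))   # 311
print(N1_bound(3, 2))   # 799
```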
Next, we answer the above question by using the probabilistic method.
\[thm3\] Let $k \geq 3$ and $\ell \geq 1$. For all $n$ with $$n \geq 2k(k + \ell) \ln \left( \frac{1}{1 - (1/2)^{2k - 3}} \right),$$ we have $px_{k, \ell}(K_{n}) = 2$.
Obviously, $px_{k,\ell}(K_n)\geq2$. For the converse, we color the edges of $K_n$ with two colors uniformly at random. For a $k$-subset $S$ of $V(K_n)$, let $A_S$ be the event that there exist at least $\ell$ internally disjoint proper $S$-trees. Note that a proper tree using only two colors must be a path. It is sufficient to show that $Pr[\ \underset{S} \bigcap A_S\ ]>0$.
Let $S$ be any $k$-subset of $V(K_n)$; without loss of generality, we assume $S=\{v_1,v_2,\ldots,v_k\}$. For any $(k-1)$-subset $T$ of $V(K_n)\setminus S$, say $T=\{u_1,u_2,\ldots,u_{k-1}\}$, and define $P_T={v_1u_1v_2u_2\cdots v_{k-1}u_{k-1}v_k}$ as a path of length $2k-2$ from $v_1$ to $v_k$, which implies that $P_T$ is an $S$-tree. Note that for $T,T'\subseteq V(K_n)\setminus S$ and $T\cap T'=\emptyset$, $P_T$ and $P_{T'}$ are two internally disjoint $S$-trees. Let $\mathcal{P}=\{P_T \mid T\subseteq V(K_n)\setminus S\}$. Take $\mathcal{P}'$ to be a subset of $\mathcal{P}$ which consists of $\lfloor\frac{n-k}{k-1}\rfloor$ internally disjoint $S$-trees in $\mathcal{P}$. Set $p= Pr[\ P_T \in\mathcal{P}'$ is a proper $S$-tree \]$=\frac{2}{2^{2k-2}}=\frac{1}{2^{2k-3}}$. Let $A_S'$ be the event that there exist at most $\ell-1$ internally disjoint proper $S$-trees in $\mathcal{P}'$. Assuming that $\lfloor\frac{n-k}{k-1}\rfloor > \ell-1$ (that is, $n\ge k+(k-1)\ell$), we have $$\begin{aligned}
Pr[\ \overline{A_S}\ ]&\leq Pr[\ A_S'\ ]\\
&\leq {\lfloor\frac{n-k}{k-1}\rfloor\choose \lfloor\frac{n-k}{k-1}\rfloor-(\ell-1)}(1-p)^{\lfloor\frac{n-k}{k-1}\rfloor-(\ell-1)}\\
&= {\lfloor\frac{n-k}{k-1}\rfloor\choose \ell-1}(1-p)^{\lfloor\frac{n-k}{k-1}\rfloor-(\ell-1)}.\end{aligned}$$
Then over all possible choices of $S$ with $|S| = k$, we get $$\begin{aligned}
Pr[\ \underset{S} \bigcap A_S\ ]&=1-Pr[ \ \bigcup\overline{A_S} \ ]\\
&\geq1-\underset{S} \sum Pr[\overline{A_S}]\\
&>1-{n\choose k}{\lfloor\frac{n-k}{k-1}\rfloor\choose \ell-1}(1-p)^{\lfloor\frac{n-k}{k-1}\rfloor-(\ell-1)}\\
&>1-n^k\left\lfloor\frac{n-k}{k-1}\right\rfloor^{\ell-1}(1-p)^{\lfloor\frac{n-k}{k-1}\rfloor-\ell+1}\\
& >0\end{aligned}$$ for $$n \geq 2k(k + \ell) \ln \left( \frac{1}{1 - (1/2)^{2k - 3}} \right).$$
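The value $p=1/2^{2k-3}$ used above is simply the probability that a fixed path with $2k-2$ uniformly 2-colored edges receives one of its two alternating (i.e., proper) colorings. A small Monte Carlo check, included purely as an illustration, is sketched below.

```python
import random

def prob_proper_path(k, trials=200_000):
    """Monte Carlo estimate of Pr[a fixed path with 2k-2 uniformly 2-colored
    edges is proper]; the exact value used in the proof is 1/2^(2k-3)."""
    m = 2 * k - 2
    hits = 0
    for _ in range(trials):
        cols = [random.randint(1, 2) for _ in range(m)]
        if all(a != b for a, b in zip(cols, cols[1:])):
            hits += 1
    return hits / trials

print(prob_proper_path(3), 1 / 2 ** 3)   # both close to 0.125
```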
Complete bipartite graphs
=========================
In this section, we will turn to study the $(k,\ell)$-proper index of complete bipartite graphs.
\[thm4\] Let $k$ and $\ell$ be two positive integers with $k\geq3$. Then there exists a positive integer $N_2$ such that $px_{k,\ell}(K_{n,n})=2$ for every integer $n\geq N_2$.
Clearly, $px_{k,\ell}(K_{n,n})\geq2$. We only need to show that $px_{k,\ell}(K_{n,n})\leq 2$. Firstly, color the edges of $K_{n,n}$ with two colors uniformly at random. For a $k$-subset $S$ of $V(K_{n,n})$, let $B_S$ denote the event that there exist at least $\ell$ internally disjoint proper $S$-trees. Note that a proper tree using only two colors must be a path. It is sufficient to show that $Pr[\ \underset{S} \bigcap B_S\ ]>0$.
Assume that $K_{n,n}=G[X,Y]$, where $X=\{x_1, x_2, \ldots, x_n\}$ and $Y=\{y_1, y_2, \ldots, y_n\}$. We distinguish the following two cases.
[**Case 1.**]{} Fix the vertices in $S$ in the same class of $K_{n,n}$.
Without loss of generality, we suppose that $S=\{x_1,x_2,\ldots,x_k\}\subseteq X$. Let $T$ be any $(k-1)$-subset of $Y$, say $T=\{y_1,y_2,\ldots, y_{k-1}\}$, and define $P_T=x_1y_1\ldots x_{k-1}y_{k-1}x_k$ as a path of length $2k-2$ from $x_1$ to $x_k$; it follows that $P_T$ is an $S$-tree. Note that for $T,T'\subseteq Y$ and $T\cap T'=\emptyset$, $P_T$ and $P_{T'}$ are two internally disjoint $S$-trees. Let $\mathcal{P}_1=\{P_T \mid T\subseteq Y\}$. Take $\mathcal{P}_1'$ to be a subset of $\mathcal{P}_1$ which consists of $\lfloor\frac{n}{k-1}\rfloor$ internally disjoint $S$-trees in $\mathcal{P}_1$. Set $p_1= Pr[\ P_{T} \in\mathcal{P}_1'$ is a proper $S$-tree \]$=\frac{2}{2^{2k-2}}=\frac{1}{2^{2k-3}}$. Let $B_S'$ be the event that there exist at most $\ell-1$ internally disjoint proper $S$-trees in $\mathcal{P}_1'$. Assuming that $\lfloor\frac{n}{k-1}\rfloor > \ell-1$ (that is, $n\ge(k-1)\ell$), we have $$\begin{aligned}
Pr[\ \overline{B_S}\ ]&\leq Pr[\ B_S'\ ]\\
&\leq {\lfloor\frac{n}{k-1}\rfloor\choose \lfloor\frac{n}{k-1}\rfloor-(\ell-1)}(1-p_1)^{\lfloor\frac{n}{k-1}\rfloor-(\ell-1)}\\
&= {\lfloor\frac{n}{k-1}\rfloor\choose \ell-1}(1-p_1)^{\lfloor\frac{n}{k-1}\rfloor-(\ell-1)}\\
&<(\frac{n}{2})^{\ell-1}(1-p_1)^{\lfloor\frac{n}{k-1}\rfloor-(\ell-1)}.\end{aligned}$$
[**Case 2.**]{} Fix the vertices in $S$ in different classes of $K_{n,n}$.
Suppose that $S\cap X=\{s_1,s_2,\ldots, s_r\}$ and $S\cap Y=\{s_{r+1},s_{r+2},\ldots, s_k\}$, where $r$ is a positive integer with $1\leq r\leq k-1$. For any $(k-r)$-subset $T_X$ of $X\setminus S$ with $T_X=\{x_1,x_2,\ldots, x_{k-r}\}$ and any $r$-subset $T_Y$ of $Y\setminus S$ with $T_Y=\{y_1,y_2,\ldots, y_{r}\}$, let $T=T_X\cup T_Y$, and define $P_T=s_1y_1\ldots s_{r}y_{r}x_1s_{r+1}x_2\ldots s_{k-1}x_{k-r}s_k$ as a path of length $2k-1$ from $s_1$ to $s_k$. Obviously, $P_T$ is an $S$-tree. Let $\mathcal{P}_2=\{P_T \mid T=T_X\cup T_Y, T_X\subseteq X\setminus S, T_Y\subseteq Y\setminus S\}$. Then $\mathcal{P}_2$ has $t=\min\{\lfloor\frac{n-r}{k-r}\rfloor, \lfloor\frac{n-(k-r)}{r}\rfloor\}$ internally disjoint $S$-trees. Take $\mathcal{P}_2'$ to be a subset of $\mathcal{P}_2$ which consists of $t$ internally disjoint $S$-trees in $\mathcal{P}_2$. Set $p_2= Pr[\ P_T \in\mathcal{P}_2'$ is a proper $S$-tree \]$=\frac{2}{2^{2k-1}}=\frac{1}{2^{2k-2}}$. Let $B_S''$ be the event that there exist at most $\ell-1$ internally disjoint proper $S$-trees in $\mathcal{P}_2'$. Note that $\lfloor\frac{n-1}{k-1}\rfloor \le t < \frac{n}{2}$. Here, we assume that $\lfloor\frac{n-1}{k-1}\rfloor > \ell-1$ (that is, $n\ge (k-1)\ell+1$); we obtain $$\begin{aligned}
Pr[\ \overline{B_S}\ ]&\leq Pr[\ B_S''\ ]\\
&\leq {t \choose t-(\ell-1)}(1-p_2)^{t-(\ell-1)}\\
&= {t\choose \ell-1}(1-p_2)^{t-(\ell-1)}\\
&<(\frac{n}{2})^{\ell-1}(1-p_2)^{\lfloor\frac{n-1}{k-1}\rfloor-(\ell-1)}.\end{aligned}$$
Since $p_2<p_1$ and $\lfloor\frac{n-1}{k-1}\rfloor \le \lfloor\frac{n}{k-1}\rfloor$, we get $Pr[\ \overline{B_S}\ ]<(\frac{n}{2})^{\ell-1}(1-p_2)^{\lfloor\frac{n-1}{k-1}\rfloor-(\ell-1)}$ for every $k$-subset S of $V(K_{n,n})$. It yields that $$\begin{aligned}
Pr[\ \underset{S} \bigcap B_S\ ]&=1-Pr[\ \bigcup\overline{B_S}\ ]\\
&\geq1-\underset{S} \sum Pr[\overline{B_S}]\\
&>1-{2n\choose k}(\frac{n}{2})^{\ell-1}(1-p_2)^{\lfloor\frac{n-1}{k-1}\rfloor-(\ell-1)}\\
&>1-2^{k-\ell+1}n^{k+\ell-1}(1-p_2)^{\lfloor\frac{n-1}{k-1}\rfloor-(\ell-1)}.\end{aligned}$$
We are guided by the above inequality in our search for the value of $N_2$: the inequality $2^{k-\ell+1}n^{k+\ell-1}(1-p_2)^{\lfloor\frac{n-1}{k-1}\rfloor-(\ell-1)}\leq1$ implies $Pr[\ \underset{S} \bigcap B_S\ ]>0$. With arguments similar to those in the proof of Theorem \[thm3\], we obtain that there exists a positive integer $N_2$ such that this inequality holds for every integer $n\geq N_2$.
With arguments similar to those in the proof of Theorem \[thm4\], we can extend the above result to the more general complete bipartite graph $K_{m,n}$, where $m=O(n^r)$, $r\in\mathbb{R}$ and $r\geq 1$.
\[thm5\] Let $m$ and $n$ be two positive integers with $m=O(n^r)$, $r\in\mathbb{R}$ and $r\geq 1$. For every pair of integers $k,\ell$ with $k\geq3$, there exists a positive integer $N_3=N_3(k,\ell)$ such that $px_{k,\ell}(K_{m,n})=2$ for every integer $n\geq N_3$.
Random graphs
=============
At the beginning of this section, we introduce some basic definitions about random graphs. The most frequently used probability model of random graphs is the Erdős-Rényi random graph model [@ER]. The model $G_{n,p}$ consists of all graphs on $n$ vertices in which each edge appears independently with probability $p$. We say that an event $\mathcal{A}$ happens almost surely if $Pr[\mathcal{A}]\rightarrow1$ as $n\rightarrow\infty$.
We now focus on the $(k,\ell)$-proper index of the random graph $G_{n,p}$. In what follows, we first show two lemmas which are useful in the main result of this section.
\[lem2\] For any $c\geq 5$, if $p\geq c\sqrt[k]{\frac{\log_a n}{n}}$, then almost surely any $k$ vertices in $G_{n,p}$ have at least $2k^2\log_a n$ common neighbours, where $a=1+\frac{1}{2^{2k-3}-1}$.
For a $k$-subset $S$ of $V(G_{n,p})$, let $C_S$ be the event that all the vertices in $S$ have at least $2k^2\log_a n$ common neighbours. It is sufficient to prove that for $p= c\sqrt[k]{\frac{\log_a n}{n}}$, $Pr[ \ \underset{S} \bigcap C_S\ ]\rightarrow1$ as $n\rightarrow\infty$. Let $C_1$ be the number of common neighbours of all the vertices in $S$. Then $C_1\sim B\left(n-k,\left(c\sqrt[k]{\frac{\log_a n}{n}}\right)^k\right)$, and $E(C_1)=\frac{n-k}{n}c^k\log_a n$. In order to apply the Chernoff bound [@JLR] below, we assume that $n>\frac{kc^k}{c^k-2k^2}$.
By the Chernoff Bound, we obtain $$\begin{aligned}
Pr[\ \overline{C_S}\ ]&= Pr[\ C_1< 2k^2\log_a n\ ]\\
&=Pr[\ C_1<E(C_1)\left(1-\frac{E(C_1)-2k^2\log_a n}{E(C_1)}\right)\ ]\\
&=Pr[\ C_1<\frac{n-k}{n}c^k\log_a n\left(1-\frac{(n-k)c^k-2k^2n}{(n-k)c^k}\right)\ ]\\
&\leq e^{-\frac{n-k}{2n}c^k\log_a n\left(\frac{(n-k)c^k-2k^2n}{(n-k)c^k}\right)^2}\\
&<n^{-\frac{c^k(n-k)}{2n}\left(\frac{(n-k)c^k-2k^2n}{(n-k)c^k}\right)^2}.\end{aligned}$$ Since $1<a<e$, we have $\log_a n>\ln n$, which leads to the last inequality.
As an immediate consequence, we get $$\begin{aligned}
Pr[\ \underset{S} \bigcap C_S\ ]&=1-Pr[\ \underset{S}\bigcup\overline{C_S}\ ]\\
&\geq1-\underset{S} \sum Pr[\ \overline{C_S}\ ]\\
&>1-{n\choose k}n^{-\frac{c^k(n-k)}{2n}(\frac{(n-k)c^k-2k^2n}{(n-k)c^k})^2}\\
&>1-n^{k-\frac{c^k(n-k)}{2n}(\frac{(n-k)c^k-2k^2n}{(n-k)c^k})^2}.\end{aligned}$$ Note that for any $c\geq5$, $k-\frac{c^k(n-k)}{2n}(\frac{(n-k)c^k-2k^2n}{(n-k)c^k})^2<0$ holds for sufficiently large $n$. Thus, $\underset{n\rightarrow\infty}\lim Pr[\ \underset{S} \bigcap C_S\ ]=\underset{n\rightarrow\infty}\lim 1-n^{k-\frac{c^k(n-k)}{2n}(\frac{(n-k)c^k-2k^2n}{(n-k)c^k})^2}=1$.
\[lem3\] Let $a=1+\frac{1}{2^{2k-3}-1}$. If any $k$ vertices in $G_{n,p}$ have at least $2k^2\log_a n$ common neighbours, then $px_{k,\ell}(G_{n,p})\le 2$ holds almost surely.
Firstly, we color the edges of $G_{n,p}$ with two colors uniformly at random. For a $k$-subset $S$ of $V(G_{n,p})$, let $D_S$ be the event that there exist at least $\ell$ internally disjoint proper $S$-trees. Note that a proper tree using only two colors must be a path. If $Pr[\ \underset{S} \bigcap D_S\ ]>0$, then a suitable coloring of $G_{n,p}$ with 2 colors exists, which implies that $px_{k,\ell}(G_{n,p})\leq 2$.
We assume that $S=\{v_1,v_2,\ldots,v_k\}\subseteq V(G_{n,p})$, and let $N(S)$ be the set of common neighbours of all vertices in $S$. Let $T$ be any $(k-1)$-subset of $N(S)$, say $T=\{u_1,u_2,\ldots,u_{k-1}\}$, and define $P_T={v_1u_1v_2u_2\cdots v_{k-1}u_{k-1}v_k}$ as a path of length $2k-2$ from $v_1$ to $v_k$. Obviously, $P_T$ is an $S$-tree. Let $\mathcal{P}^\star=\{P_T \mid T \subseteq N(S)\}$; then $\mathcal{P}^\star$ has at least $\lfloor\frac{2k^2\log_a n}{k-1}\rfloor\geq 2k\log_a n$ internally disjoint $S$-trees (we may and will assume that $2k\log_a n$ is an integer). Take $\mathcal{P}^\star_1$ to be a set of $2k\log_a n$ internally disjoint $S$-trees of $\mathcal{P}^\star$. It is easy to check that $q=$ Pr\[ $P_T \in\mathcal{P}^\star_1$ is a proper $S$-tree \]$=\frac{2}{2^{2k-2}}=\frac{1}{2^{2k-3}}$. So $1-q=a^{-1}$. Let $D_1$ be the number of proper $S$-trees in $\mathcal{P}^\star_1$. Then we get $$\begin{aligned}
Pr[\ \overline{D_S}\ ]&\leq Pr[\ D_1\leq \ell-1\ ]\\
&\leq{2k\log_a n\choose 2k\log_a n-(\ell-1)}(1-q)^{2k\log_a n-(\ell-1)}\\
&={2k\log_a n\choose \ell-1}(1-q)^{2k\log_a n-(\ell-1)}\\
&< (2k\log_a n)^{\ell-1}a^{-(2k\log_a n-(\ell-1))}\\
&=\frac{(2ak\log_a n)^{\ell-1}}{n^{2k}}.\end{aligned}$$
Consequently $$\begin{aligned}
Pr[\ \underset{S} \bigcap D_S\ ]&=1-Pr[\ \bigcup\overline{D_S}\ ]\\
&\geq1-\underset{S} \sum Pr[\ \overline{D_S}\ ]\\
&\geq1-{n\choose k}\frac{(2ak\log_a n)^{\ell-1}}{n^{2k}}\\
&>1-\frac{(2ak\log_a n)^{\ell-1}}{n^{k}}.\end{aligned}$$ It is easy to verify that $\underset{n\rightarrow\infty}\lim 1-\frac{(2ak\log_a n)^{\ell-1}}{n^{k}}=1$, which implies that $\underset{n\rightarrow\infty}\lim Pr[\ \underset{S} \bigcap D_S\ ]=1$; that is, $px_{k,\ell}(G_{n,p})\leq 2$ holds almost surely. This completes the proof.
Combining Lemmas \[lem2\] and \[lem3\], we obtain the following conclusion.
\[thm6\] Let $a=1+\frac{1}{2^{2k-3}-1}$ and $c \ge 5$. For every $p \ge c\sqrt[k]{\frac{\log_a n}{n}}$, $px_{k,\ell}(G_{n,p})\le 2$ holds almost surely.
[1]{}
N. Alon, J.H. Spencer, [*The Probabilistic Method*]{}, John Wiley & Sons, 2004.
E. Andrews, E. Laforge, C. Lumduanhom, P. Zhang, On proper-path colorings in graphs, [*J. Combin. Math. Combin. Comput.*]{}, to appear.
B. Bollobás, [*Extremal graph theory*]{}, volume 11 of London Mathematical Society Monographs. Academic Press, Inc. \[Harcourt Brace Jovanovich, Publishers\], London-New York, 1978.
J.A. Bondy, U.S.R. Murty, [*Graph Theory*]{}, GTM $244$, Springer, $2008$.
V. Borozan, S. Fujita, A. Gerek, C. Magnant, Y. Manoussakis, L. Montero, Z. Tuza, Proper connection of graphs, [*Discrete Math.*]{} [**312**]{} (2012) 2550–2560.
Q. Cai, X. Li, J. Song, Solutions to conjectures on $(k,\ell)$-rainbow index of complete graphs, [*Networks*]{} [**62**]{} (2013) 220–224.
Q. Cai, X. Li, J. Song, The $(k,\ell)$-rainbow index of complete bipartite graphs, [*Bull. Malays. Math. Sci. Soc.*]{}, in press: 10.1007/s40840-016-0348-9.
Q. Cai, X. Li, J. Song, The $(k,\ell)$-rainbow index of random graphs, [*Bull. Malays. Math. Sci. Soc.*]{} [**39(2)**]{} (2016) 765–771.
Q. Cai, X. Li, Y. Zhao, The 3-rainbow index and connected dominating sets, [*J. Comb. Optim.*]{} [**31(2)**]{} (2016) 1142–1159.
Q. Cai, X. Li, Y. Zhao, Note on the upper bound of the rainbow index of a graph, [*Discrete Appl. Math.*]{} [**209**]{} (2016) 68–74.
H. Chang, X. Li, Z. Qin, Some upper bounds for the 3-proper index of graphs, arXiv:1603.07840.
G. Chartrand, G.L. Johns, K.A. McKeon, P. Zhang, The rainbow connectivity of a graph, [*Networks*]{} [**54**]{} (2009) 75–81.
G. Chartrand, F. Okamoto, P. Zhang, Rainbow trees in graphs and generalized connectivity, [*Networks*]{} [**55**]{} (2010) 360–367.
L. Chen, X. Li, J. Liu, The $k$-proper index of graphs, arXiv:1601.06236.
L. Chen, X. Li, K. Yang, Y. Zhao, The 3-rainbow index of a graph, [*Discuss. Math. Graph Theory*]{} [**35**]{} (2015) 81–94.
P. Erdős, A. Rényi, On the evolution of random graphs, [*Publ. Math. Inst. Hungar. Acad. Sci.*]{} [**5**]{} (1960) 17–61.
R. Gu, X. Li, Z. Qin, Proper connection number of random graphs, [*Theoret. Comput. Sci.*]{} [**609(2)**]{} (2016) 336–343.
S. Janson, T. Łuczak, A. Ruciński, [*Random Graphs*]{}, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York, 2000, xii+333pp.
X. Li, C. Magnant, Properly colored notions of connectivity - a dynamic survey, [*Theory & Appl. Graphs*]{} [**0**]{}(1) (2015), Art. 2.
X. Li, Y. Mao, [*Generalized Connectivity of Graphs*]{}, Springer Briefs in math., Springer, New York, 2016.
X. Li, Y. Shi, Y. Sun, Rainbow connections of graphs: A survey, [*Graphs & Combin.*]{} [**29**]{} (2013) 1–38.
X. Li, Y. Sun, [*Rainbow Connections of Graphs*]{}, Springer Briefs in math., Springer, New York, 2012.
X. Li, M. Wei, J. Yue, Proper connection number and connected dominating sets, [*Theoret. Comput. Sci.*]{} [**607**]{} (2015) 480–487.
N. Sauer, Extremaleigenschaften regulärer graphen gegebener taillenweite I & II, [*Sitzungsberichte Österreich. Akad. Wiss. Math. Natur. Kl.*]{}, S-B II, [**176**]{} (1967) 9–25, 27–43.
[^1]: Supported by NSFC No.11371205, 11531011, “973" program No.2013CB834204 and PCSIRT.
---
abstract: 'Simon’s problem asks the following: determine if a function $f: \{0,1\}^n \rightarrow \{0,1\}^n$ is one-to-one or if there exists a unique $s \in \{0,1\}^n$ such that $f(x) = f(x \oplus s)$ for all $x \in \{0,1\}^n$, given the promise that exactly one of the two holds. A classical algorithm that can solve this problem for every $f$ requires $2^{\Omega(n)}$ queries to $f$. Simon [@simon:power] showed that there is a quantum algorithm that can solve this promise problem for every $f$ using only $\mathcal O(n)$ quantum queries to $f$. A matching lower bound on the number of quantum queries was given in [@knp:simonJ], even for functions $f: {{\mathbb{F}_p^n}}\to {{\mathbb{F}_p^n}}$. We give a short proof that $\mathcal O(n)$ quantum queries is optimal even when we are additionally promised that $f$ is linear. This is somewhat surprising because for linear functions there even exists a *classical* $n$-query algorithm.'
author:
- 'Joran van Apeldoorn[^1]'
- Sander Gribling
bibliography:
- 'qc.bib'
title: 'Simon’s problem for linear functions'
---
Introduction
============
In 1994, Simon [@simon:power] showed the existence of a query problem where quantum algorithms offer an exponential improvement over the best randomized classical algorithms that have a bounded error probability of, say, at most 1/3. The problem he considers is the following:
*Given a function $f: \{0,1\}^n \rightarrow \{0,1\}^n$ with the promise that it either (1) is one-to-one or (2) admits a unique $s \in \{0,1\}^n$ such that $f(x) = f(x \oplus s)$ for all $x \in \{0,1\}^n$, decide which of the two holds.*
Simon showed that there is a quantum algorithm which can solve this promise problem for any $f$ using $\mathcal O(n)$ quantum queries to $f$, i.e., using $\mathcal O(n)$ applications of the unitary $|x\rangle |b\rangle \mapsto |x\rangle|b\oplus f(x)\rangle$.[^2] This offers an exponential improvement over classical algorithms, since Simon also showed that at least $2^{\Omega(n)}$ classical queries of the form $x \mapsto f(x)$ are needed in order to succeed with probability at least $2/3$. The question we are interested in is the optimality of Simon’s quantum algorithm and its generalization to finite fields. Let $p$ be a prime power and let ${\mathbb{F}_p}$ be the finite field with $p$ elements. Simon’s problem over ${\mathbb{F}_p}$ can be formulated as follows:
*Given a function $f: {\mathbb{F}_p}^n \rightarrow {\mathbb{F}_p}^n$ with the promise that it either (1) is one-to-one or (2) admits a one-dimensional subspace $H \subset {\mathbb{F}_p}^n$ such that for all $x,y \in {\mathbb{F}_p}^n$, $f(x) = f(y) \Leftrightarrow x-y \in H$, decide which of the two holds.*
Koiran et al. [@knp:simonJ] (for an earlier version see [@knp:simon]) showed that the quantum query complexity of Simon’s problem over ${\mathbb{F}_p}$ is $\Theta(n)$.[^3] Here we show that the lower bound of $\Omega(n)$ quantum queries holds even when $f$ is additionally promised to be linear. That is, a quantum algorithm which can solve Simon’s problem over ${\mathbb{F}_p}$ for any linear function requires $\Omega(n)$ quantum queries to $f$. Interestingly, this shows that for the class of linear functions there is no quantum advantage: classically, one can also fully determine a linear function using $n$ queries, by querying a basis.
Given a linear function $f: {{\mathbb{F}_p^n}}\to {{\mathbb{F}_p^n}}$, with the promise that either $|\ker(f)| = 1$ or $|\ker(f)|=p$, decide which of the two holds.
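The classical $n$-query strategy mentioned above can be made concrete with a short sketch, given here over $\mathbb{F}_2$ for concreteness; the oracle matrix `A` and the helper name are illustrative assumptions. One queries $f$ on the standard basis and reads off $|\ker(f)|$ from the rank of the resulting matrix.

```python
import numpy as np

def kernel_is_trivial(query, n):
    """n-query classical test over F_2: query f on the standard basis and
    compute the rank of the resulting matrix by Gaussian elimination mod 2."""
    # column j of M is f(e_j)
    M = np.array([query(tuple(int(i == j) for i in range(n))) for j in range(n)],
                 dtype=np.int64).T % 2
    rank, row = 0, 0
    for col in range(n):
        pivot = next((r for r in range(row, n) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]
        for r in range(n):
            if r != row and M[r, col]:
                M[r] ^= M[row]
        rank += 1
        row += 1
    return rank == n            # |ker f| = 2^(n - rank), so True iff f is one-to-one

# Toy oracle f(x) = A x over F_2 for an assumed singular matrix A (ker = {000, 111}).
A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
f = lambda x: tuple(A.dot(np.array(x)) % 2)
print(kernel_is_trivial(f, 3))  # False
```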
Our main result (proved in Section \[sec:proofs\]) is the following.
\[thm:mainLB\] Let $\mathcal A$ be a $T$-query quantum algorithm for the Linear Simon’s problem with success probability at least $2/3$. Then $T = \Omega(n)$.
We follow the same proof structure as [@knp:simonJ], using the polynomial method [@bbcmw:polynomialsj]. More specifically, we show that, averaged over a subset of functions, the acceptance probability of a $T$-query quantum algorithm is a polynomial of degree at most $2T$ in the size of the kernel. We then obtain the lower bound by appealing to [@knp:simonJ Lemma 5] which states that any polynomial with the correct success probabilities has degree $\Omega(n)$. However, where [@knp:simonJ] average over all functions, we only consider linear functions over ${\mathbb{F}_p}^n$. Surprisingly this simplifies the proof substantially. We also give a slightly simplified proof of [@knp:simonJ Lemma 5].
#### Notation
For a set $K \subseteq {\mathbb{F}_p}^n$ we call $s:K \rightarrow {\mathbb{F}_p}^n$ a *partial* function and we say that $f:{\mathbb{F}_p}^n \rightarrow {\mathbb{F}_p}^n$ *extends* $s$ if $f(x) = s(x)$ for all $x \in K$. We write $s \preceq f$ if $f$ extends $s$. Let $S_k$ be the set of all partial functions defined on a domain of size at most $k$. Let $\deg_x(f)$ be the degree of $f$ as a polynomial in the variable $x$. We define $F=\{f : {{\mathbb{F}_p^n}}\rightarrow {{\mathbb{F}_p^n}}\ | \ f \text{ linear} \}$ as the set of all linear functions from ${{\mathbb{F}_p^n}}$ to ${{\mathbb{F}_p^n}}$. For each $k \in \{0,1,\ldots,n\}$ and $D = p^k$ we let $F_D$ be the subset of $F$ consisting of linear functions whose kernel has size $D$, i.e., $F_D=\{f \in F \mid |\ker(f)| = D \}$. Finally, we use ${\mathbf i}^2 = -1$ and we use square brackets $[\cdot]: \{\mathrm{true,false}\} \to \{0,1\}$ to denote the function that maps true to $1$ and false to $0$.
Proof of Theorem \[thm:mainLB\] {#sec:proofs}
===============================
The proof of Theorem \[thm:mainLB\] is based on a well-known method of lower bounding the quantum query complexity of a Boolean function $G:\{0,1\}^n \to \{0,1\}$: the polynomial method introduced by Beals et al. [@bbcmw:polynomialsj]. Let us first sketch the polynomial method in the setting of their paper. A $T$-query quantum algorithm $\mathcal A$ for computing $G(x)$ (for every $x \in \{0,1\}^n$) can be described by a Hilbert space ${\mathbb{C}}^n \otimes {\mathbb{C}}^2 \otimes {\mathbb{C}}^m$, a sequence of $T$ unitary matrices $U_0,\ldots, U_T$ acting on the space, and an oracle $O_x$ that is defined as $$O_x: |i\rangle |b \rangle |w\rangle \mapsto |i \rangle |b \oplus x_i \rangle |w\rangle.$$ The definition of the oracle explains the tensor product structure of the Hilbert space ${\mathbb{C}}^n \otimes {\mathbb{C}}^2 \otimes {\mathbb{C}}^m$: the first part corresponds with a query input, the second with a query output, and the last with extra work space. The quantum algorithm then works as follows. It starts in a fixed state, say $|0 \rangle |0 \rangle |0\rangle$, and then alternates between applying the unitaries and queries before deciding on its output via a measurement to the second register of the final state. Concretely, the state of the algorithm before the final measurement is as follows: $$U_T O_x U_{T-1} O_x \cdots O_x U_1 O_x U_0 |0 \rangle |0 \rangle |0 \rangle =: \sum_{(i,b,w) \in [n] \times \{0,1\} \times [m] } \alpha_{i,b,w}(x) |i\rangle |b \rangle |w \rangle$$ where $\alpha_{i,b,w}(x) \in {\mathbb{C}}$. The crucial observation is that the amplitudes $\alpha_{i,b,w}(x)$ of the final state are polynomials in the input variables $x_i$ of degree at most $T$. Indeed, applying the oracle to, e.g., a state $\alpha |i \rangle |0 \rangle |w\rangle + \beta |i \rangle |1\rangle |w\rangle$ leads to the state $$\big((1-x_i) \alpha + x_i \beta\big) |i\rangle |0\rangle |w\rangle + \big(x_i \alpha + (1-x_i) \beta\big) |i\rangle |1\rangle |w\rangle.$$ This shows that applying the oracle once increases the degree by at most 1. Since the unitaries do not depend on $x$ and are linear transformations, they do not increase the degree. Instead of viewing the amplitudes as polynomials in the variables $x_i$, it will be more convenient to think of them as homogeneous (degree $T$) polynomials in the Kronecker delta variables $\delta_{x_i,1}:= x_i$ and $\delta_{x_i,0} := (1-x_i)$. The probability of measuring a $1$ in the second register of the final state, i.e., the acceptance probability $P(x)$, is then given by the sum of the squared amplitudes of states with a $1$ in the second register: $$P(x) = \sum_{i \in [n],w \in [m]} |\alpha_{i,1,w}(x)|^2 = \sum_{\substack{s\subseteq [n]\times \{0,1\}\\|s| \leq 2T}} \beta_s \prod_{(i,b)\in s} \delta_{x_i,b}$$ where the real numbers $\beta_s$ are the coefficients of the monomials $\prod_{(i,b)\in s} \delta_{x_i,b}$ in $P(x)$. If $\mathcal A$ computes $G$ with high success probability, then $P(x)$ will be close to $G(x)$ for every $x \in \{0,1\}^n$ which may be used to prove a degree lower bound on $P(x)$. However, proving lower bounds on the degree of $P(x)$ directly is often complicated. A common technique is to average $P(x)$ over multiple inputs in order to reduce the problem to studying a univariate polynomial. 
For example, for a symmetric[^4] function $G: \{0,1\}^n \rightarrow \{0,1\}$ averaging $P(x)$ over all permutations of $n$ elements reduces the problem to studying univariate polynomials $q(|x|)$ which approximate $G(x)$ (for which tight degree bounds are known) [@bbcmw:polynomialsj].
The above version of the polynomial method is easily generalized to inputs that are not Boolean (see, e.g., ). We will do so here for the setting corresponding to the Linear Simon’s problem.
Let $\mathcal A$ be a $T$-query algorithm for the Linear Simon’s problem and let $P(f)$ be the acceptance probability of $\mathcal A$ on the input $f$. As before, we can write $$P(f) = \sum_{\substack{s\subseteq{{\mathbb{F}_p^n}}\times{{\mathbb{F}_p^n}}\\|s| \leq 2T}} \beta_s \prod_{(x,y)\in s} \delta_{f(x),y}.$$ When we view $s$ as a partial function, this expression can be rewritten in terms of $f$ extending $s$: $$P(f) = \sum_{s\in S_{2T}} \beta_s [ s \preceq f ],$$ where $S_{2T}$ is the set of all partial functions $s$ with $|{\mbox{\rm dom}}(s)|\leq 2T$. As above, it will turn out to be useful to average $P(f)$ over all linear functions $f$ with a kernel of size $D$, i.e., we consider the average acceptance probability $Q(D)$ over all functions with a kernel of size $D$: $$Q(D) = \sum_{f\in F_D} \frac{1}{|F_D|} P(f) = \sum_{f\in F_D} \frac{1}{|F_D|}\sum_{s\in S_{2T}} \beta_s [ s \preceq f ] = \sum_{s\in S_{2T}} \beta_s \frac{1}{|F_D|} \sum_{f\in F_D} [ s \preceq f ] = \sum_{s\in S_{2T}} \beta_s Q_s(D).$$ Here $Q_s(D)$ is the probability that a uniformly random $f \in F_D$ extends $s$: $$Q_s(D) = \frac{1}{|F_D|} \sum_{f\in F_D} [ s \preceq f ] = {{\Pr_{f\in_R F_{D}} \left[ {s\preceq f} \right] } }.$$ In the next two sections we will prove that the degree of $Q$ needs to be at least linear in $n$, and that the degree of each $Q_s$ (and hence of $Q$) is upper bounded by $2T$. Together, these results imply Theorem \[thm:mainLB\].
Lower bound on the degree
-------------------------
For $k \in \{0,1,\dots,n\}$, $Q(p^k)$ represents an acceptance probability and therefore $Q(p^k) \in [0,1]$. Moreover, if the algorithm succeeds with probability at least $2/3$, then $Q(1) \geq 2/3$ and $Q(p) \leq 1/3$. The lemma below shows that such a $Q$ has degree $\Omega(n)$. We give a slightly simplified proof for completeness.
\[lem:deglb\] For every polynomial $Q$ such that $Q(1)\geq 2/3$, $Q(p) \leq 1/3$ and $Q(p^k) \in [0,1]$ for all $k\in \{0,\dots,n\}$, it holds that $\deg(Q) \geq n/4$.
Assume that $Q$ is a polynomial of degree $d\leq n/2$ (otherwise we are done), so that its derivative $Q'$ is of degree $d-1$ and its second derivative $Q''$ is of degree $d-2$. Consider the $2d-2$ intervals of the form $(p^a,p^{a+1})$ where $a = n-(2d-2),\ldots, n-1$. Since together $Q'$ and $Q''$ have at most $2d-3$ roots, there is such an interval for which both polynomials have no roots with real part in it; let $a \geq n-(2d-2)$ be the integer corresponding to this interval and let $M:=\frac{1+p}{2}p^{a}$ be the middle of this interval. By the mean value theorem we know that there is an $x_0 \in [1,p]$ for which $|Q'(x_0)|\geq \frac{1}{3(p-1)}$. To show the degree lower bound it suffices to prove the following chain of inequalities: $$\frac{1}{ p^{2d-2}} \stackrel{(*)}{\leq} \left|\frac{Q'({M})}{Q'(x_0)} \right| \stackrel{(**)}{\leq} \frac{3(p-1)}{\frac{p-1}{2} p^{n-2d+2}}.$$ Indeed, if the above chain of inequalities holds, then $6 \geq p^{n-4d +4} \geq 2^{n-4d+4}$ which implies that $n-4d+4 \leq 3$, i.e., $d \geq \frac{n+1}{4}$.\
$\mathbf{(*)}$ For the lower bound we will use the following elementary fact: $$\label{eq:simplefact}
\text{if } 0\leq v < w\text{ and } 0\leq y, \text{ then } \frac{v+y}{w+y} \geq \frac{v}{w}$$ Denote the roots of $Q'$ by $b_j+c_j {\mathbf i}$, for $j \in [d-1]$. Then $Q'(x) = \lambda \prod_{j=1}^{d-1}(x-b_j-c_j {\mathbf i})$ for some $\lambda \in {\mathbb{R}}$ and hence $$\left|\frac{Q'({M})}{Q'(x_0)} \right| = \left| \prod_{j=1}^{d-1} \frac{{M}- b_j-c_j {\mathbf i}}{x_0-b_j-c_j {\mathbf i}} \right| = \prod_{j=1}^{d-1} \left| \frac{{M}- b_j-c_j{\mathbf i}}{x_0-b_j-c_j{\mathbf i}} \right| = \prod_{j=1}^{d-1} \sqrt{\frac{\left({M}- b_j\right)^2+c_j^2}{\left(x_0-b_j\right)^2 + c_j^2}}$$ We will show that each factor in the product is bounded from below by $1/p^2$. Considering the $j$-th factor, if $| x_0 - b_j| \leq |{M}- b_j|$ then we are clearly done. Hence, assume $|x_0 - b_j| > |{M}- b_j|$, which implies $b_j> \frac{{M}- x_0}{2} \geq p^{a-1}$. We use the elementary fact above: $$\sqrt{\frac{\left({M}- b_j\right)^2+c_j^2}{\left(x_0-b_j\right)^2 + c_j^2}} \geq \left| \frac{{M}- b_j}{x_0-b_j} \right|$$ Since we know that $b_j > p^{a-1}$ and $b_j\not\in (p^a,p^{a+1})$ there are two cases to consider:
- If $b_j \in (p^{a-1},p^a]$, then $\displaystyle{\left| \frac{{M}- b_j}{x_0-b_j} \right| \geq \inf_{x\in (p^{a-1},p^a)} \left| \frac{{M}- x}{x_0-x} \right| = \left| \frac{{M}- p^a}{x_0-p^a} \right| \geq \frac{1}{2}} \geq \frac{1}{p^2}$
- If $b_j \in [p^{a+1},\infty)$, then $\displaystyle{
\left| \frac{{M}- b_j}{x_0-b_j} \right| = \left| \frac{ - \frac{1+p}{2}p^{a} + b_j}{ -x_0+b_j} \right| = \left| \frac{ \frac{p-1}{2}p^a + (b_j - p^{a+1})}{ p^{a+1}-x_0+(b_j-p^{a+1})} \right| \geq \frac{p^{a-1}}{p^{a+1} - x_0} \geq \frac{1}{p^2}}$\
where we use the elementary fact above and $\frac{p-1}{2} \geq \frac{1}{p}$ for the first inequality.\
$\mathbf{(**)}$ By construction $|Q'(x_0)| \geq \frac{1}{3(p-1)}$, so it remains to show that $|Q'({M})| \leq (\frac{p-1}{2} p^{n-2d+2})^{-1}$. Assume towards a contradiction that $|Q'({M})| > (\frac{p-1}{2} p^{a})^{-1}$. Since $Q''$ has no roots with real part in the interval $(p^a,p^{a+1})$, $Q'$ is either strictly increasing or strictly decreasing on the interval $(p^a,p^{a+1})$. Therefore, there is an interval $(\alpha,\beta)$ (with $\alpha,\beta \in \{p^a, {M}, p^{a+1}\}$) of length $\frac{p-1}{2} p^a$ where $|Q'(x)| > (\frac{p-1}{2} p^{a})^{-1}$. By the fundamental theorem of calculus this implies that $|Q(\alpha) - Q(\beta)| >1$. This is a contradiction, since we have $1 \geq |Q(p^{a+1}) - Q(p^a)| \geq |Q(\alpha) - Q(\beta)|$, where the last inequality follows by monotonicity of $Q$ on the interval $(p^a,p^{a+1})$. It follows that $$|Q'({M})| \leq \left(\frac{p-1}{2} p^{a}\right)^{-1} \leq \left(\frac{p-1}{2} p^{n-2d+2}\right)^{-1}.$$ We conclude that $\displaystyle{\frac{1}{ p^{2d-2}} \leq \left|\frac{Q'({M})}{Q'(x_0)} \right| \leq \frac{3(p-1)}{\frac{p-1}{2} p^{n-2d+2}}}$ and hence that $d \geq n/4$.
Upper bound on the degree
-------------------------
We now show that the degree of each $Q_s$ is upper bounded by $2T$.
\[lem:upperbound\] Given a partial linear function $s: {\mbox{\rm dom}}(s) \rightarrow {{\mathbb{F}_p^n}}$, $\deg_D(Q_s)\leq \dim({\mbox{\rm span}}({\mbox{\rm dom}}(s)))$.
Let $K := {\mbox{\rm span}}({\mbox{\rm dom}}(s))$ and $k:=\dim(K)$. We can extend $s$ uniquely to a linear function on $K$. Define $Z:= \ker(s) \subseteq K$ and $z := \dim(Z)$, and $Y := Z^{\perp} \cap K$. For a function $f:{{\mathbb{F}_p^n}}\rightarrow {{\mathbb{F}_p^n}}$ in $F_D$ we write $H:= \ker(f)$, $h := \dim(H)$ and $D := |H| = p^h$. We show that ${{\Pr_{f\in_R F_{D}} \left[ {s\preceq f} \right] } }$ has degree at most $k$ as a polynomial in $D$. We analyze this probability in three parts: $$\begin{aligned}
{{\Pr_{f\in_R F_{D}} \left[ {s\preceq f} \right] } } &= {{\Pr_{f\in_R F_{D}} \left[ {Z \subseteq H \land Y \cap H = \{ 0 \} } \right] } } {{\Pr_{f\in_R F_{D}} \left[ {s\preceq f \mid Z \subseteq H \land Y \cap H = \{ 0 \} } \right] } }\\
&= {{\Pr_{f\in_R F_{D}} \left[ {Z \subseteq H} \right] } } {{\Pr_{f\in_R F_{D}} \left[ {Y \cap H = \{ 0 \} \mid Z \subseteq H } \right] } } {{\Pr_{f\in_R F_{D}} \left[ {s\preceq f \mid Z \subseteq H \land Y \cap H = \{ 0 \} } \right] } }.
\end{aligned}$$ We show that
1. ${{\Pr_{f\in_R F_{D}} \left[ {Z \subseteq H} \right] } }$ is a polynomial in $D$ of degree at most $z$,
2. ${{\Pr_{f\in_R F_{D}} \left[ {Y \cap H = \{ 0 \} \mid Z \subseteq H } \right] } }$ is a polynomial in D of degree at most $k-z$,
3. ${{\Pr_{f\in_R F_{D}} \left[ {s\preceq f \mid Z \subseteq H \land Y \cap H = \{ 0 \} } \right] } }$ does not depend on $D$.
Together, this implies that ${{\Pr_{f\in_R F_{D}} \left[ {s\preceq f} \right] } }$ is a polynomial in $D$ of degree at most $k$.
1. The probability that $Z \subseteq H$ equals the fraction of $h$-dimensional subspaces of ${{\mathbb{F}_p^n}}$ that contain $Z$. There are $\alpha(n,h) = \prod_{i=0}^{h-1} (p^n - p^i)$ ways to pick $h$ linearly independent vectors in a space of dimension $n$, and hence there are $\beta(n,h) = \frac{\alpha(n,h)}{\alpha(h,h)}$ different subspaces of dimension $h$ in ${{\mathbb{F}_p^n}}$. The number of $h$-dimensional subspaces that contain $Z$ equals the number of $(h-z)$-dimensional subspaces in an $(n-z)$-dimensional space. Hence $${{\Pr_{f\in_R F_{D}} \left[ {Z \subseteq H} \right] } } = \frac{\beta(n-z,h-z)}{\beta(n,h)} = \prod_{i=0}^{z-1} \frac{p^h - p^i}{p^n-p^i},$$ which is a degree-$z$ polynomial in terms of $D = p^h$.
2. We have $\displaystyle{{{\Pr_{f\in_R F_{D}} \left[ {Y \cap H = \{ 0 \} \mid Z \subseteq H } \right] } } = {{\Pr_{f\in_R F_{D}} \left[ {Y/Z \cap H/Z = \{ 0 \}} \right] } }}$ where $Y/Z$ and $H/Z$ are subspaces of ${{\mathbb{F}_p^n}}/Z \simeq {\mathbb{F}_p}^{n-z}$. By construction we have that $\dim(Y/Z) = \dim(Y) = k-z$, $\dim(H/Z) = h-z$. The probability $ {{\Pr_{f\in_R F_{D}} \left[ {Y/Z \cap H/Z = \{ 0 \}} \right] } }$ equals the number of $(h-z)$-dimensional subspaces of ${\mathbb{F}_p}^{n-z}$ that intersect $Y/Z$ trivially, divided by $\beta(n-z,h-z)$. That is, $${{\Pr_{f\in_R F_{D}} \left[ {Y/Z \cap H/Z = \{ 0 \}} \right] } } = \frac{\frac{\prod_{i=0}^{h-z-1} (p^{n-z}-p^{k-z+i})}{\alpha(h-z,h-z)}}{\frac{\alpha(n-z,h-z)}{\alpha(h-z,h-z)}}
= \frac{\prod_{i=0}^{h-z-1} (p^{n-z}-p^{k-z+i})}{\alpha(n-z,h-z)} = \frac{\prod_{i=0}^{k-z-1} (p^{n-z}-p^{h-z+i})}{\alpha(n-z,k-z)}$$ where the last equality is obtained using ${\alpha(n-z,h-z) = \alpha(n-z,k-z) \prod_{i=k-z}^{h-z-1} (p^{n-z} - p^i)}$. It follows that $\displaystyle{{{\Pr_{f\in_R F_{D}} \left[ {Y/Z \cap H/Z = \{ 0 \}} \right] } } = \frac{\prod_{i=0}^{k-z-1} (p^{n-z}-p^{h-z+i})}{\alpha(n-z,k-z)}}$ is a polynomial in $D=p^h$ of degree $k-z$. We mention in passing that, alternatively, one can arrive at the same expression by looking at the probability that a random $Y$ is linearly independent from a fixed $H$.
3. Finally we consider $ {{\Pr_{f\in_R F_{D}} \left[ {s\preceq f \mid Z \subseteq H \land Y \cap H = \{ 0 \} } \right] } } $. Since $Z \subseteq H$, we know that $f$ and $s$ agree on $Z$. Hence, $f$ extends $s$ if their values agree on $Y$. Let $b_1,\dots,b_{k-z}$ be a basis for $Y$, then $f$ and $s$ agree on $Y$ if and only if they agree on $b_1, \ldots, b_{k-z}$. Since we condition on the event $Y \cap H = \{0\}$, the probability that this happens does not depend on $D = p^h$.
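The counting arguments above are easy to sanity-check by brute force for tiny parameters. A minimal sketch (ours; it fixes $p=2$, $n=3$ and $Z$ spanned by a single nonzero vector, so $z=1$) compares the empirical value of $\Pr_{f\in_R F_D}[Z \subseteq \ker f]$ with the closed-form product $\prod_{i=0}^{z-1}(p^h-p^i)/(p^n-p^i)$:

```python
from itertools import product

p, n = 2, 3
vectors = list(product(range(p), repeat=n))
z0 = (1, 0, 0)                                            # Z = span{z0}, so z = 1

def mat_vec(M, x):                                        # matrix-vector product over F_p
    return tuple(sum(M[i][j] * x[j] for j in range(n)) % p for i in range(n))

# all linear maps F_p^n -> F_p^n, represented as n x n matrices over F_p
all_maps = list(product(product(range(p), repeat=n), repeat=n))

for k in range(n + 1):
    D = p ** k
    F_D = [M for M in all_maps
           if sum(mat_vec(M, x) == (0,) * n for x in vectors) == D]
    empirical = sum(mat_vec(M, z0) == (0,) * n for M in F_D) / len(F_D)
    closed_form = (D - 1) / (p ** n - 1)                  # product formula with z = 1
    print(f"k={k}  D={D}  empirical={empirical:.4f}  formula={closed_form:.4f}")
```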
Open problems {#sec:concl}
=============
To conclude, we propose the following open problems:
- Koiran et al. [@knp:simonJ] lift the lower bound on Simon’s problem over ${{\mathbb{F}_p^n}}$ to the hidden subgroup problem over finite Abelian groups:
*Given a (finite Abelian) group $G$ and a function $f: G \to X$ with the promise that there is a subgroup $H \leq G$ of rank either $0$ or $1$ (i.e., either trivial, or generated by a single element), such that $f(g) = f(g')$ if and only if $g-g' \in H$, decide which of the two holds.*
One recovers Simon’s problem over ${{\mathbb{F}_p^n}}$ by taking $G = X = {{\mathbb{F}_p^n}}$. A natural question is whether or not the hidden subgroup problem over finite Abelian groups also remains equally hard when we are additionally promised that $f$ is an endomorphism. The reduction used by Koiran et al. combined with our result gives a smaller and more structured set of hard instances of the hidden subgroup problem over Abelian groups. However, the functions obtained from this reduction will only be endomorphisms on a subgroup of $G$, not on all of $G$.
- While the general Simon’s problem has no natural extension to ${\mathbb{R}}^n$, the linear Simon’s problem can possibly be extended to ${\mathbb{R}}^n$. For example: given matrix-vector multiplication queries $x \mapsto Ax$ for a symmetric matrix $A$ with ${\left\lVert A\right\rVert}\leq 1$, decide if $\lambda_{\min}(A)\leq {\varepsilon}$ or $\lambda_{\min}(A)\geq 2{\varepsilon}$. It remains an open question to prove a lower bound on this problem. An $\Omega(n)$ lower bound could have implications for quantum convex optimization. In particular this may resolve an open question posed in recent work [@apeldoor:convexoracles] regarding the number of queries needed to optimize a convex function.
- Aaronson and Ben-David [@aaronson:sculpt] introduced the idea of *sculpting* functions. They characterized the total Boolean functions for which there is a promise on the input such that restricted to that promise there is an exponential separation between quantum and classical query complexity. We propose the related idea of *over-sculpting*: bringing the classical query complexity down to the quantum query complexity. More specifically, for which (possibly partial) Boolean functions $f$ does there exist a promise $P$ such that: $$Q_{1/3}(f) \leq o(R_{1/3}(f))$$ $$Q_{1/3}(f) = \Theta(Q_{1/3}(f|_P)) = \Theta(R_{1/3}(f|_P)).$$ Simon’s problem does not correspond to a Boolean function since the input alphabet is not Boolean[^5], but our results show that Simon’s problem can be over-sculpted in this slightly different setting.
#### Acknowledgements
We would like to thank Ronald de Wolf for many helpful comments and discussions. We would also like to thank András Gilyén for useful discussions.
[^1]: QuSoft, CWI, the Netherlands. Both authors are supported by the Netherlands Organization for Scientific Research, grant number 617.001.351. The first author is also partially supported by QuantERA, project QuantAlgo 680-91-034. [{apeldoor,gribling}@cwi.nl]{}
[^2]: In fact, Simon considered the problem of finding the non-zero string $s$, if it exists. Here we focus on the decision version of his problem. However, all upper bounds mentioned are derived from algorithms which also find $s$.
[^3]: They even prove the analogous lower bound for the hidden subgroup problem over Abelian groups, see Section \[sec:concl\].
[^4]: A Boolean function $G$ is symmetric if $G(x)$ only depends on the Hamming weight $|x|$ of $x$.
[^5]: An input for Simon’s problem is a function $f: {{\mathbb{F}_p^n}}\rightarrow {{\mathbb{F}_p^n}}$, which can be viewed as a string of length $p^n$ over the input alphabet ${{\mathbb{F}_p^n}}$.
|
---
abstract: 'We present *Ward2ICU*, a proxy dataset of vital signs with class labels indicating patient transitions from the ward to intensive care units. Patient privacy is protected using a Wasserstein Generative Adversarial Network to implicitly learn an approximation of the data distribution, allowing us to sample synthetic data. The quality of data generation is assessed directly on the binary classification task by comparing specificity and sensitivity of an LSTM classifier on proxy and original datasets. We initiate a discussion of unintentionally disclosing commercial sensitive information and propose a solution for a special case through class label balancing.'
author:
- |
Daniel Severo\
3778 Healthcare\
São Paulo, Brazil\
`severo@3778.care`\
Flávio Amaro\
3778 Healthcare\
Belo Horizonte, Brazil\
`flavio@3778.care`\
Estevam R. Hruschka Jr\
Carnegie Mellon University\
Pittsburgh, USA\
`estevam@cs.cmu.edu`\
André Soares de Moura Costa\
Mater Dei Healthcare\
Belo Horizonte, Brazil\
`andre.costa@materdei.com.br`\
bibliography:
- 'bibliography.bib'
title: 'Ward2ICU: A Vital Signs Dataset of Inpatients from the General Ward'
---
Introduction
============
Public datasets are a crucial component for the advancement of science [@baxevanis2015importance]. Acquiring labeled data is essential to Machine Learning tasks and often very expensive. These datasets allow for a common ground for comparison between different algorithms and models. Techniques such as transfer learning can be used to lift performance on tasks not originally associated with the published dataset [@yosinski2014transferable]. For example, pre-training on ImageNet [@deng2009imagenet] for computer-vision tasks is now a common practice. Healthcare is no different, but concerns with patient privacy and commercial sensitive information hinder the publication and dissemination of datasets by institutions [@kostkova2016owns]. The Machine Learning community has benefited significantly from datasets such as MNIST [@lecun1998gradient], ImageNet and WordNet [@miller1998wordnet], but there are few widespread databases that lead to well defined machine learning tasks in health and bioinformatics such as MIMIC [@johnson2016mimic] and eICU [@pollard2018eicu].
#### Issues beyond patient privacy
Guaranteeing patient privacy is an ongoing field of study [@beaulieu2019privacy; @walsh2018enabling]. The possibility of unintentionally revealing commercial sensitive information is also a great obstacle for the availability of public datasets and is generally not discussed. For example, if a hospital’s occupancy rate can be inferred by a health insurance company it can be used as leverage during negotiations. Competitors may use patient population statistics derived from clinical datasets for targeted commercial campaigns in an attempt to gain market share. Healthcare providers are also reluctant to disclose the specifics of their care practices, concerned that it may be used for benchmarks by competitors.
#### Our contributions
1. Release a new anonymized vital signs dataset inducing a binary classification task of patient transitions from the general ward to an intensive care unit called *Ward2ICU*;
2. Discuss the aforementioned issue of hiding commercial sensitive data and demonstrate a possible solution in our context.
Although vital signs are not considered as sensitive as other patient data (e.g. exam results, age, gender), we create a proxy dataset using a Conditional WGAN-GP to mitigate privacy concerns [@gulrajani2017improved]. A classifier that shows similar performance when trained on the proxy and original datasets is built using LSTM [@gers1999learning] and a fully connected layer.
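A minimal sketch of such a classifier is given below (ours, for illustration only; the hidden size and other hyperparameters are placeholders, not the values selected by the randomized search described later). It assumes input tensors of shape (batch, 20 samples, 5 vital signs):

```python
import torch
import torch.nn as nn

class VitalSignsClassifier(nn.Module):
    """LSTM followed by a fully connected layer for the binary ward-to-ICU task."""
    def __init__(self, n_signs=5, hidden_size=64, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_signs, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len=20, n_signs=5)
        _, (h_n, _) = self.lstm(x)        # h_n: (num_layers, batch, hidden_size)
        return self.fc(h_n[-1])           # logits of shape (batch, 1)

model = VitalSignsClassifier()
logits = model(torch.randn(8, 20, 5))     # a dummy batch of 8 patients
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.randint(0, 2, (8,)).float())
```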
Vital Signs Unit Lower Upper
----------------------------------- ------------- ------- -------
Temperature C 30 45
Respiratory Rate breaths/min 5 75
Heart Rate beats/min 10 250
Systolic Arterial Blood Pressure mmHg 20 300
Diastolic Arterial Blood Pressure mmHg 10 200
: Lower and upper bounds of vital signs filters.[]{data-label="vs-ranges"}
For experiments, we used TorchGAN [@pal2019torchgan], GNU Parallel [@Tange2011a] and our own source code which has been made available together with a synthetic pre-release of our dataset. [^1] Our long term goal is to progressively publish other datasets after surveying the research community to direct our efforts. [^2]
Original Dataset
================
*Ward2ICU* is a dataset of sequential physiological measurements regarding the vital signs discussed below, together with a binary class label. It derives from Electronic Health Records (EHR) of patients from [*Hospital Mater Dei* (HMD)]{}, a tertiary hospital located in [Belo Horizonte, Brazil]{}. It consists of adult patients with an average age of 40, admitted to the standard ward between the years of 2014 and 2019. Over 25 vital signs are monitored and collected, but only 5 have been made available as of the present date. Each data point was measured and recorded manually by nursing professionals. The default interval between measurements is 6 hours, but this is sometimes overridden when demanded by medical staff. This results in an average of 4 to 4.6 data points for each of the 5 different vital signs taken per day per patient. We define a *sample* as the near-simultaneous measurement of all 5 vital signs for a single patient. For each patient, 20 sequential samples are provided, totaling 100 data points, 20 for each vital sign. A filtering stage removes patients that have *at least one* sample outside the pre-defined ranges shown in the table above. Patients with label $1$ have been moved to the ICU by the time the 21st sample is taken, while $0$ indicates a discharge. The class ratio is commercial sensitive information to [HMD]{}, hence the exact number cannot be disclosed. However, we can confirm that the fraction of ICU transitions (i.e., the minority class) lies between 5 and 30%.
#### Body Temperature (T)
The average human body temperature ranges from 36.5 to 37.5 Celsius, or 97.7 to 99.5 Fahrenheit [@hutchison2008hypothermia]. It was routinely measured using a digital thermometer inserted into the mouth, anus, or placed under the armpit.
#### Respiratory Rate (RR)
The average number of breaths taken per minute. This rate varies depending on the age range. An adult’s normal respiration rate at rest is 12 to 20 breaths per minute [@barrett2009ganong]. RR was measured by looking at the patient’s chest movements and counting the number of cycles of inhalation and exhalation (i.e. the rise and the fall of the chest wall) per minute [@lindh2013delmar].
#### Heart Rate or Pulse Rate (HR)
The number of heart beats over a period of 60 seconds. This vital sign was measured by touching the lateral area of the wrist using the finger tips, where an artery passes close to the surface of an underlying bone. This is a commonly executed maneuver [@lindh2013delmar]. We also used a digital pulse oximeter to measure the heart rate. It consists of a small display and a sensor attached to the patient’s finger that measures and displays the data [@lindh2013delmar].
#### Arterial Blood Pressure (ABP)
The cardiac cycle consists of the events (i.e. diastole and systole) that occur from the beginning of one heartbeat to the beginning of the next. We measured ABP by indirect means, using the Auscultatory method. It consists of inflating a manometer cuff around a patient’s arm and listening with a stethoscope for specific sounds that mark the levels of systolic and diastolic blood pressures.
------------------------ ----------- ----------- ----------- -----------
  Vital Signs                Real        Proxy       Real        Proxy
T 0.441 0.373 0.703 0.761
T, RR 0.601 0.624 0.612 0.590
T, RR, HR 0.494 0.488 0.672 0.785
**T, RR, HR, ABP** **0.732** **0.721** **0.478** **0.380**
------------------------ ----------- ----------- ----------- -----------
: Accuracy on binary classification task.[]{data-label="results"}
Related work
============
Recent work in protecting patient privacy has made significant use of Generative Adversarial Networks (GAN) for synthesizing proxy datasets [@goodfellow2014generative]. GAN are a family of generative models for implicit density estimation where a Generator ($G$) and Discriminator ($D$) are trained simultaneously in a zero-sum game. Given the underlying data distribution $p$, $D$’s objective is to classify incoming samples $\vx$ as being real ($\vx \sim p$) or fake ($\vx \sim q$). Meanwhile, $G$ learns to generate samples that can fool $D$ into classifying them as real, by minimizing $d(p, q)$ for some distance $d$. $G$ has no direct access to $p$ and learns only from the gradient signal provided by $D$ together with some loss function ${\ell}$ induced by $d$. By varying $D$, $G$ and $d$ we can recover most GAN variants. For example, [@goodfellow2014generative] uses the Jensen-Shannon Divergence (JSD) [@lin1991divergence] for $d$ while Least Squares GAN uses Pearson $\chi^2$ [@mao2017least]. Deep Convolutional and Recurrent GAN both minimize JSD, but differ by using convolutional and recurrent architectures, respectively [@radford2015unsupervised; @esteban2017real]. Wasserstein GAN (WGAN) minimize the Earth mover’s distance (EMD), also called Kantorovich–Rubinstein or Wasserstein metric [@arjovsky2017wasserstein]. Intuitively, the EMD between $p$ and $q$ is the minimal effort required to transform $p$ into $q$ by transporting density values $p(\vx)$ to $q(\vy)$, or vice-versa. Training JSD GAN suffers from an issue called mode collapse, where the fake samples generated have low diversity (e.g. on MNIST fake images would all be of the same digit). Theoretical and empirical results are given in [@arjovsky2017wasserstein], showing that WGAN have better convergence properties than JSD GAN due to non-vanishing gradients. Some authors have made extensions to include conditional information $p\left(\vx \mid \vc\right)$ such as class labels during learning, usually through the use of embeddings concatenated with input and hidden layers [@mirza2014conditional; @esteban2017real]. This augmentation can, in theory, be applied to any $(D,G,d)$ triplet.
In practice, training is done by minimizing ${\ell}$, while $D$ and $G$ are neural networks with parameters $\vtheta$. $G$ transforms some seed distribution $\phi$ into $q$ by sampling values $\vz \sim \phi$ such that $G_\vtheta(\vz) = \vx \sim q_\vtheta$. Currently, there is no $(D,G,d)$ combination that reaches state-of-the-art on all synthesis tasks as it is highly sensitive to the domain of $p$ (i.e. the data type). Previous work in generating synthetic datasets for healthcare and bioinformatics has been done for count [@baowaly2018synthesizing; @walonoski2017synthea], binary [@baowaly2018synthesizing; @walonoski2017synthea], categorical [@choi2017generating], time-series [@esteban2017real; @hartmann2018eeg; @abdelfattah2018augmenting; @harada2019biosignal], text [@guan2018generation] and image data [@nie2017medical].
Synthesis
=========
To generate a proxy dataset for *Ward2ICU* we employed a Conditional WGAN-GP [@gulrajani2017improved]. A one-dimensional convolutional network was used for both $D$ and $G$, similar to [@hartmann2018eeg], and can be seen in the architecture table below. The RMSprop [@tieleman2012lecture] optimizer was used for both networks with the default recommended values. The input data was scaled to the interval $[-1, 1]$ by subtracting the mean and dividing by the maximum absolute value *along the channel dimension*. For each epoch, we sampled a mini-batch with repetition, keeping the classes uniformly distributed. 30% of the original data was held out for testing.
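As an illustration of the training objective (a sketch only, not the exact implementation, which is available in the repository), the critic loss of a conditional WGAN-GP combines the Wasserstein estimate with the gradient penalty of [@gulrajani2017improved]; the penalty weight and the `critic(x, labels)` interface below are our own placeholder choices:

```python
import torch

def gradient_penalty(critic, real, fake, labels, gp_weight=10.0):
    """Penalize (||grad_x critic(x, labels)||_2 - 1)^2 on samples interpolated between real and fake."""
    eps = torch.rand(real.size(0), 1, 1, device=real.device)   # one mixing coefficient per sample
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp, labels)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=interp, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1.0) ** 2).mean()

def critic_loss(critic, real, fake, labels):
    # real, fake: (batch, 20, 5) vital-sign sequences; labels: class conditions
    wasserstein = critic(fake, labels).mean() - critic(real, labels).mean()
    return wasserstein + gradient_penalty(critic, real, fake.detach(), labels)
```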
The proxy datasets were made to have the same size as the original but with balanced classes. Synthesis quality is evaluated by training a classifier composed of an LSTM with a fully connected layer and computing accuracy for both classes. We varied the total number of vital signs used throughout the experiments. To obtain the results in the accuracy table above, we did a randomized search on the hyperparameters of the classifier with the real dataset to maximize the balanced accuracy [@brodersen2010balanced]. The final set of hyperparameters was then used to re-train the classifier on the proxy data using the same procedures as before. This was done for each of the 4 sets of vital signs. The synthetic pre-release corresponds to the row in bold. Further details as well as PyTorch [@paszke2017automatic] implementations can be found in the repository [^3].
#### Protecting Commercial Sensitive Information
Classification tasks with imbalanced classes commonly report metrics that are a function $f$ of the confusion matrix $\mM$, such as $F_1$ score and Balanced Accuracy. However, $f\left(\mM\right) = f\left(\mM^\prime\right)$ does not imply that $\mM = \mM^\prime$, making it difficult to evaluate from $f$ alone whether the GAN has properly learned to synthesize each individual class. We cannot make a verbatim report on $\mM$, nor can we divulge multiple values of $f\left(\mM\right)$ for different $\mM$, as it would indirectly disclose [HMD’s]{} ICU to discharge ratio (the ratio between positive and negative classes). Hence, we opted to show the minority and majority class accuracies.[^4] To permanently hide this information, the proxy dataset was generated with balanced classes.
  ------------- --------------------- -------------------- ------------- --------------------- --------------------
                Discriminator                                             Generator
  **Layer**     **Act./Padd./Reg.**   **Output shape**     **Layer**     **Act./Padd./Reg.**   **Output shape**
  Input                               $20 \times s$        Seed                                $m$
  AppEmb                              $20 \times (s + c)$  AppEmb                              $m \times (1 + c)$
                                                           Linear        LReLU/DP              $5 \times h$
  Conv          LReLU/RP              $20 \times h$        Upsample                            $10 \times h$
  Conv          LReLU/RP              $20 \times h$        Conv          LReLU/RP              $10 \times h$
  AvgPool                             $10 \times h$        Conv          LReLU/RP/DP           $10 \times h$
  Conv          LReLU/RP              $10 \times h$        Upsample                            $20 \times h$
  Conv          LReLU/RP              $10 \times h$        Conv          LReLU/RP              $20 \times h$
  AvgPool                             $5 \times h$         Conv          LReLU/RP/DP           $20 \times h$
  Linear        LReLU/RP              $1$                  Conv                                $20 \times s$
  ------------- --------------------- -------------------- ------------- --------------------- --------------------

  : Discriminator and Generator architectures.
Conclusion and Future Work
==========================
We used a Conditional WGAN-GP to synthesize a proxy database of vital signs with an associated binary class label indicating patient transitions from the general ward to the ICU. Commercial sensitive information was hidden by balancing the generated dataset. Evaluation was done by comparing individual class accuracies for an LSTM classifier on proxy and original data. From our preliminary results in the accuracy table above, we argue that data utility is being transferred from the original to the proxy dataset for the *Ward2ICU* binary classification task. Some accuracy on the minority class is lost, as was expected.
Future work will focus on circumventing issues that currently harm data publishing for and beyond patient privacy. Specifically, developing new ways to explicitly trade data utility for protection of commercial sensitive information and finding new ways to generate multi-modal EHR. One path to explore is applying the same theory used to obtain privacy guarantees for patients, called Differential Privacy [@dwork2006calibrating], to commercial sensitive information.
### Acknowledgments {#acknowledgments .unnumbered}
We would like to acknowledge the entire 3778 Research team for insightful discussions and reviews, especially Marcio Aldred Gregory for helping us define a relevant classification task and for bringing up the discussion on commercial sensitive data. We would also like to thank [Hospital Mater Dei]{} for providing the original data. Finally, we would like to thank the founders of 3778 Healthcare, Guilherme Salgado and Fernando Barreto, for intensively investing in research in a country where there is little to no incentives for this type of work.
[^1]: [<https://research.3778.care/publication/ward2icu>]{}
[^2]: [<https://research.3778.care/publication/survey>]{}
[^3]: [<https://github.com/3778/Ward2ICU>]{}
[^4]: Note that we do not train nor cross-validate on these metrics, they are used solely for empirical evaluations.
|
---
abstract: 'We construct the solution of the Riemann problem for the shallow water equations with discontinuous topography. The system under consideration is non-strictly hyperbolic and does not admit a fully conservative form, and we establish the existence of two-parameter wave sets, rather than wave curves. The selection of admissible waves is particularly challenging. Our construction is fully explicit, and leads to formulas that can be implemented numerically for the approximation of the general initial-value problem.'
address:
- ' Philippe G. LeFlochLaboratoire Jacques-Louis Lions & Centre National de la Recherche Scientifique, Université de Paris 6, 4 Place Jussieu, 75252 Paris, France.'
- 'Mai Duc Thanh Department of Mathematics, International University, Quarter 6, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam'
author:
- 'Philippe G. L[e]{}Floch and Mai Duc Thanh'
title: The Riemann problem for the shallow water equations with discontinuous topography
---
Introduction
============
Shallow water equations
-----------------------
We consider the one-dimensional shallow water equations $$\aligned &\pt_th + \pt_x(hu) = 0,
\\
&\pt_t(hu) + \pt_x(h(u^2 + g\frac{h}{2})) = - gh\pt_x a,
\\
&\pt_t a = 0,\\
\endaligned
\label{1.1}$$ where $h$ denotes the height of the water from the bottom to the surface, $u$ the velocity of the fluid, $g$ the gravity constant, and $a$ the height of the river bottom from a given level. Following LeFloch [@LeFloch89] we supplement the first two balance laws for the fluid with the equation $\pt_ta=0$ corresponding to a fixed geometry. Adding the equation $\pt_t a=0$ allows us to view the shallow water equations (the first two equations in [(\[1.1\])]{}), which form a strictly hyperbolic system of balance laws in nonconservative form, as a non-strictly hyperbolic system of balance laws with a linearly degenerate characteristic field.
We are mainly interested in the case that $a$ is piecewise constant $$a(x)=\left\{\begin{array}{ll}a_L,\quad &x< 0,\\
a_R,\quad &x>0, \end{array}\right.$$ where $a_L,a_R$ are two distinct constants. The Riemann problem associated with [(\[1.1\])]{} is the initial-value problem corresponding to the initial conditions of $$(h,u,a)(x,0)= \left\{\begin{array}{ll}(h_L,u_L,a_L),\quad &x< 0,\\
(h_R,u_R,a_R),\quad &x>0. \end{array}\right.
\label{1.2}$$ Since $a$ is discontinuous, the system [(\[1.1\])]{} cannot be written in a fully conservative form, and the standard notion of weak solutions for hyperbolic systems of conservation laws does not apply. However, the equations still make sense within the framework introduced in Dal Maso, LeFloch, and Murat [@DalMasoLeFlochMurat]. (For a recent review see [@LeFloch02; @LeFloch04].)
DLM generalized Rankine-Hugoniot relations
------------------------------------------
Consider an elementary discontinuity propagating with the speed $\ld$ and satisfying the equations [(\[1.1\])]{}. Observe that the Rankine-Hugoniot relation associated with the third equation in [(\[1.1\])]{} simply reads $$-\ld [a] = 0,
\label{1.3}$$ where $[a] := a_+ - a_-$ denotes the jump of the bottom level function $a$, and $a_\pm$ denotes its left- and right-hand traces. Then, we have the following possibilities:
- either the component $a$ remains constant across the propagating discontinuity,
- or $a$ changes its levels across the discontinuity and the discontinuity is stationary, i.e., the speed $\ld$ vanishes.
This observation motivates us to define the admissible elementary waves of the system [(\[1.1\])]{}. First of all, assume that the bottom level $a$ remains constant across a discontinuity; then, $a$ should be constant in a neighborhood of the discontinuity. Eliminating $a$ from [(\[1.1\])]{}, we obtain the following system of two conservation laws $$\aligned &\pt_th + \pt_x(hu) = 0,
\\
&\pt_t(hu) + \pt_x(h(u^2 + g\frac{h}{2})) = 0,
\\
\endaligned
\label{1.4}$$ Thus, the left- and right-hand states are related by the Rankine Hugoniot relations corresponding to [(\[1.4\])]{} $$\aligned &-\ld[h] + [hu] = 0,
\\
&-\ld[hu] + [h(u^2 + g\frac{h}{2})] = 0,
\\
\endaligned
\label{1.5}$$ where $[h] := h_+ - h_-$, etc., and $\bar\ld=\bar\ld(U_0,U)$ is the shock speed.
Second, suppose that the component $a$ is discontinuous so that the speed vanishes. Then, the solution is independent of the time variable, and it is natural to search for a solution obtained as the limit of a sequence of time-independent smooth solutions of [(\[1.1\])]{}. (See below.)
Suppose that $(x,t) \mapsto (h, u, a)$ is a smooth solution of [(\[1.1\])]{}. Then, the system [(\[1.1\])]{} can be written in the following form, as a system of conservation laws for the (now conservative) variables $(h,u,a)$: $$\aligned &\pt_th + \pt_x(hu) = 0,
\\
&\pt_tu + \pt_x\big({u^2\over 2} + g(h+a)\big) = 0,
\\
&\pt_t a =0.\\
\endaligned
\label{1.6}$$ Hence, time-independent solutions of [(\[1.1\])]{} satisfy $$\aligned &(hu)' = 0,
\\
&\big({u^2\over 2} + g(h+a)\big)' = 0,
\\
\endaligned
\label{1.7}$$ where the dash denotes the differentiation with respect to $x$. Trajectories initiating from a given state $(h_0,u_0,a_0)$ are given by $$\aligned &hu = h_0u_0,
\\
&{u^2\over 2} + g(h+a) = {u_0^2\over 2} + g(h_0+a_0).
\\
\endaligned
\label{1.8}$$ It follows from [(\[1.8\])]{} that the trajectories of [(\[1.7\])]{} can be expressed in the form $u=u(h)$, $a=a(h)$. Now, letting $h\to
h_\pm$ and setting $u_\pm = u(h_\pm)$, $a_\pm = a(h_\pm)$, we see that the states $(h_\pm, u_\pm, a_\pm)$ satisfy the Rankine-Hugoniot relations associated with [(\[1.6\])]{}, but with zero shock speed: $$\aligned &[hu] = 0,
\\
&[{u^2\over 2} + g(h+a)] = 0,
\\
\endaligned
\label{1.9}$$
The above discussion leads us to define the elementary waves of interest, as follows.
The admissible waves for the system [(\[1.1\])]{} are the following ones:
- the [rarefaction waves]{}, which are smooth solutions of [(\[1.1\])]{} with constant component $a$ depending only on the self-similarity variable $x/t$;
- the [shock waves]{} which satisfy [(\[1.5\])]{} and Lax shock inequalities and have constant component $a$;
- and the [stationary waves]{} which have zero speed and satisfy [(\[1.9\])]{}.
As will be checked later, the system [(\[1.1\])]{} is [*not strictly hyperbolic*]{}, as was already observed in the previous work [@LeFlochThanh03.2]. Recall that therein we studied the Riemann problem in a nozzle with variable cross-section and constructed all of the Riemann solutions. The present model is analogous, and our main purpose in the present paper is to demonstrate that the technique in [@LeFlochThanh03.2] extends to the shallow water model and to construct the solution of the Riemann problem. The lack of strict hyperbolicity and the nonconservative form of the equation make the problem particularly challenging. Some aspects of this problem are also covered by Alcrudo and Benkhaldoun [@AB]. For works on various related models including scalar conservation laws we refer to [@MarchesinPaes-Leme; @IsaacsonTemple95; @IsaacsonTemple92; @HouLeFloch; @HayesLeFloch; @Gosse; @GoatinLeFloch; @AGG].
Results and perspectives
------------------------
As we will show, waves in the same characteristic field may be repeated in a single Riemann solution. This happens when waves cross the boundary of the strictly hyperbolic regions and the order of characteristic speeds changes. We will also show below that the Riemann problem may not always have a solution. The Riemann problem may admit exactly one, or two, or up to three distinct solutions for different ranges of left-hand and right-hand states. Thus, uniqueness does not hold for the Riemann problem, as was already observed for the nozzle flow system.
Each possible construction leads to a solution that depends continuously on the left-hand and right-hand states. This is a direct consequence of the smoothness of the elementary wave curves; by the implicit function theorem, the intermediate waves depend continuously on their left- or right-hand states as well as on the Riemann data. These results agree with [@LeFlochThanh03.2] which covered fluids in a nozzle with variable cross section.
In the present model, the curve of stationary waves is strictly convex. To find stationary waves, one needs to determine the roots of a nonlinear equation (see the function $\varphi$ in [(\[4.3\])]{} below), which is convex, so that its roots can easily be computed numerically. The Riemann solver derived in the present paper should be useful in combination with numerical methods for shallow water systems developed in [@AndrianovWarnecke; @AudusseBouchutBristeauKleinPerthame; @GreenbergLerouxBarailleNoussair; @GT; @KroenerThanh04; @CGGP], for which we refer to the lecture notes by Bouchut [@Bouchut].
Background
==========
Shallow water equations as a non-strictly hyperbolic system
-----------------------------------------------------------
We now discuss the system [(\[1.1\])]{} in the nonconservative variables $U=(h,u,a)$. From [(\[1.6\])]{} if follows that, for smooth solutions, [(\[1.1\])]{} is equivalent to $$\aligned &\pt_th + u\pt_xh + h\pt_xu = 0,
\\
&\pt_tu + g\pt_xh + u\pt_x u + g\pt_x a = 0,
\\
&\pt_t a =0,\\
\endaligned
\label{2.1}$$ which can be written in the nonconservative form $$\pt_t U + A(U)\pt_x U=0,
\label{2.2}$$ where the Jacobian matrix $A(U)$ is given by $$A(U)=\left(\begin{matrix} u&h&0\\
g&u&g\\
0&0&0\end{matrix}\right).$$
The eigenvalues of $A$ are $$\ld_1(U):=u-\sqrt{gh}<\ld_2(U):=u+\sqrt{gh}, \quad \ld_3(U):=0,
\label{2.3}$$ and corresponding eigenvectors can be chosen as $$\aligned & r_1(U):=(h,-\sqrt{gh},0)^t,\quad
r_2(U):=(h,\sqrt{gh},0)^t,\\
&r_3(U):=(gh,-gu,u^2-gh)^t.\endaligned \label{2.4}$$ We see that the first and the third characteristic fields may coincide: $$(\ld_1(U),r_1(U)) = (\lambda_3(U),r_3(U))$$ on a hypersurface in the variables $(h,u,a)$, which can be identified as $${\mathcal C}_+:=\{(h,u,a) | \quad u=\sqrt{gh}\}.
\label{2.5}$$
Similarly, the second and the third characteristic fields may coincide: $$(\ld_2(U),r_2(U))=(\lambda_3(U),r_3(U))$$ on a hypersurface in the variables $(h,u,a)$, which can be identified as $${\mathcal C}_-:=\{(h,u,a) | \quad u=-\sqrt{gh}\}.
\label{2.6}$$ The third eigenvalue $(\ld_3,r_3)$ is linearly degenerate, and we have $$-\nabla \ld_1(U)\cdot r_1(U) =\nabla\ld_2(U)\cdot r_2(U)={3\over
2}\sqrt{gh}\ne 0,\quad h> 0.$$ Note also that the first and the second characteristic fields $(\ld_1,r_1)$, $(\ld_2,r_2)$ are genuinely nonlinear in the open half-space $\{(h,u,a) |\quad h>0\}$.
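These spectral formulas are straightforward to verify numerically; for instance (the values of $g$, $h$, $u$ below are arbitrary):

```python
import numpy as np

g, h, u = 9.81, 2.0, 1.5
A = np.array([[u,   h,   0.0],
              [g,   u,   g  ],
              [0.0, 0.0, 0.0]])                  # the Jacobian A(U) in the variables (h, u, a)
print(np.sort(np.linalg.eigvals(A).real))        # approximately [u - sqrt(gh), 0, u + sqrt(gh)]
print(u - np.sqrt(g * h), 0.0, u + np.sqrt(g * h))
```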
It is convenient to set $${\mathcal C}={\mathcal C}_+\cup {\mathcal C}_-=\{(h,u,a) |\quad u^2-gh=0\},$$ which is the hypersurface on which the system fails to be strictly hyperbolic.
In conclusion we have established (cf. Figure \[fig21\]):
On the hypersurface ${\mathcal C}_+$ in the variables $(h,u,a)$ the first and the third characteristic speeds coincide and, on the hypersurface ${\mathcal C}_-$, the second and the third characteristic speeds coincide. Hence, the system [(\[1.1\])]{} is non-strictly hyperbolic.
![Projection of strictly hyperbolic regions in the $(h,u)$-plane[]{data-label="fig21"}](fig21.pdf "fig:"){width="70.00000%"}\
The hypersurface ${\mathcal C}$ divides the phase domain into three disjoint regions, denoted by $A_1, A_2$ and $A_3$, in which the system is strictly hyperbolic. More precisely, we define $$\aligned &A_1:=\{(h,u,a) \in \RR_+\times\RR\times\RR_+ |\quad \ld_2(U)>\ld_1(U)>\ld_3(U)\},\\
&A_2:=\{(h,u,a) \in \RR_+\times\RR\times\RR_+ | \quad \ld_2(U)>\ld_3(U)>\ld_1(U)\},\\
&A_2^+:=\{(h,u,a) \in A_2 | \quad u>0\},\\
&A_2^-:=\{(h,u,a) \in A_2 | \quad u<0\},\\
&A_3:=\{(h,u,a)\in \RR_+\times\RR\times\RR_+ |\quad \ld_3(U)>\ld_2(U)>\ld_1(U)\}.\\
\endaligned
\label{2.7}$$ The strict hyperbolicity domain is not connected, which makes the Riemann problem delicate to solve.
Wave curves
-----------
We begin by investigating some properties of the curves of admissible waves.
First, consider shock curves from a given left-hand state $U_0=(h_0,u_0,a_0)$ consisting of all right-hand states $U=(h,u,a)$ that can be connected to $U_0$ by a shock wave. Thus, it follows from [(\[1.4\])]{} that $U$ and $U_0$ are related by the Rankine-Hugoniot relations $$\aligned &-\bar\ld[h] + [hu] = 0,
\\
&-\bar\ld[hu] + [h(u^2 + g\frac{h}{2})] = 0,
\\
\endaligned
\label{3.1}$$ where $[h]=h-h_0$, etc, and $\bar\ld=\bar\ld(U_0,U)$ is the shock speed.
Fix the state $U_0$. A straightforward calculation from the Rankine-Hugoniot relations [(\[3.1\])]{} shows that the restriction to the $(h,u)$ plane of the Hugoniot set consists of two curves given by $$u=u_0\pm \sqrt{g\over 2}(h-h_0)\sqrt{\Big({1\over h}+{1\over
h_0}\Big)}. \label{3.2}$$ Moreover, along these two curves it holds $$\aligned
{du\over dh}
& = \pm \sqrt{g\over 2}\Big(\sqrt{{1\over h}+{1\over
h_0}} -(h-h_0){1\over 2h^2\sqrt{\dfrac{1}{h}+\dfrac{1}{h_0}}}\Big)
\\
& \to\pm\sqrt{g\over h_0} \text{ as $h\to h_0$.}
\endaligned$$ Since the $i$th-Hugoniot curve is tangent to $r_i(U_0)$ at $U_0$, we conclude that the first Hugoniot curve associated with the first characteristic field is $${\mathcal H}_1(U_0) :\quad u:=u_1(h,U_0)=u_0- \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h\ge
0,\label{3.3}$$ while one associated with the second characteristic field is $${\mathcal H}_2(U_0):\quad u:=u_2(h,U_0)=u_0+ \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)},\quad h\ge 0.
\label{3.4}$$
Along the Hugoniot curves ${\mathcal H}_1, {\mathcal H}_2$, the corresponding shock speeds are given by $$\aligned
\bar\ld_{1,2}(U_0,U) &={hu_{1,2}-h_0u_0\over h-h_0}\\
&=u_0\mp\sqrt{{g\over 2}\Big(h+{h^2\over h_0}\Big)},\quad h\ge
0,\endaligned \label{3.5}$$
As is customary, the shock speed $\bar\ld_i(U_0,U)$ is required to satisfy Lax shock inequalities [@Lax71]: $$\ld_i(U)<\bar\ld_i(U_0,U)<\ld_i(U_0),\quad i=1,2. \label{3.6}$$ Thus, the $1$-shock curve ${\mathcal S}_1(U_0)$ initiating from the left-hand state $U_0$ and consisting of all right-hand states $U$ that can be connected to $U_0$ by a Lax shock associated with the first characteristic field is $${\mathcal S}_1(U_0) :\quad u=u_1(h,U_0)=u_0- \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h>
h_0.
\label{3.7}$$
Similarly, the $2$-shock curve ${\mathcal S}_2(U_0)$ issuing from a left-hand state $U_0$ consisting of all right-hand states $U$ that can be connected to $U_0$ by a Lax shock associated with the second characteristic field is $${\mathcal S}_2(U_0) :\quad u=u_2(h,U_0)=u_0+ \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h<
h_0,\label{3.8}$$
We summarize these results in the following proposition.
Given a left-hand state $U_0$, the $1$-shock curve ${\mathcal S}_1(U_0)$ consisting of all right-hand states $U$ that can be connected to $U_0$ by a Lax shock is $${\mathcal S}_1(U_0) :\quad u=u_1(h,U_0)=u_0- \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h> h_0.$$ The $2$-shock curve ${\mathcal S}_2(U_0)$ consisting of all right-hand states $U$ that can be connected to $U_0$ by a Lax shock is $${\mathcal S}_2(U_0) :\quad u=u_2(h,U_0)=u_0+ \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h< h_0.$$
In view of the Lax shock inequalities [(\[3.6\])]{}, we also conclude that the backward $1$-shock curve ${\mathcal S}_1^B(U_0)$ issuing from a right-hand state $U_0$ and consisting of all left-hand states $U$ that can be connected to $U_0$ by a Lax shock associated with the first characteristic field is $${\mathcal S}_1^B(U_0) :\quad u=u_1(h,U_0)=u_0- \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h<
h_0,\label{3.9}$$ Similarly, the backward $2$-shock curve ${\mathcal S}_2^B(U_0)$ issuing from a right-hand state $U_0$ and consisting of all left-hand states $U$ that can be connected to $U_0$ by a Lax shock associated with the second characteristic field is $${\mathcal S}_2^B(U_0) :\quad u=u_2(h,U_0)=u_0+ \sqrt{g\over
2}(h-h_0)\sqrt{\Big({1\over h}+{1\over h_0}\Big)}, \quad h>
h_0,\label{3.10}$$
Next, we discuss the properties of rarefaction waves, i.e., smooth self-similar solutions to the system [(\[1.1\])]{} associated with one of the two genuinely nonlinear characteristic fields. These waves satisfy the ordinary differential equation: $$\frac{dU}{d\xi} = \frac{r_i(U)}{\nabla \ld_i\cdot r_i(U)},\quad
\xi =x/t,\quad i=1,2.\label{3.11}$$ For waves in the first family, we have $$\aligned
\frac{dh(\xi)}{d\xi} &= -\frac{2h(\xi)}{3\sqrt{gh(\xi)}}=-{2\over 3\sqrt{g}}\sqrt{h(\xi)},\\
\frac{du(\xi)}{d\xi} &= \frac{-2\sqrt{gh(\xi)}}{-3\sqrt{gh(\xi)}}={2\over 3},\\
\frac{da(\xi)}{d\xi} &= 0.\\
\endaligned$$ It follows that $$\frac{du}{dh} = -\sqrt{\frac{g}{h}},$$ therefore, the integral curve passing through a given point $U_0=(h_0,u_0,a_0)$ is given by $$u=u_0-2\sqrt{g}(\sqrt{h}-\sqrt{h_0}).$$
Moreover, the characteristic speed should increase through a rarefaction fan, i.e., $$\ld_1(U)\ge \ld_1(U_0),\label{3.14}$$ which implies $$h\le h_0.$$ Thus, we define a rarefaction curve ${\mathcal R}_1(U_0)$ issuing from a given left-hand state $U_0$ and consisting of all the right-hand states $U$ that can be connected to $U_0$ by a rarefaction wave associated with the first characteristic field as $${\mathcal R}_1(U_0):\quad
u=v_1(h,U_0):=u_0-2\sqrt{g}(\sqrt{h}-\sqrt{h_0}),\quad h\le
h_0.\label{3.16}$$ A $1$-rarefaction wave is determined by $$u=u_0+{2\over 3}\Big({x\over t}-{x_0\over t_0}\Big) \label{3.17}$$ while $h$ is determined by the equation [(\[3.16\])]{} and the component $a$ remains constant.
Similarly, we define the rarefaction curve ${\mathcal R}_2(U_0)$ issuing from a given left-hand state $U_0$ and consisting of all the right-hand states $U$ that can be connected to $U_0$ by a rarefaction wave associated with the second characteristic field as $${\mathcal R}_2(U_0):\quad
u=v_2(h,U_0):=u_0+2\sqrt{g}(\sqrt{h}-\sqrt{h_0}),\quad h\ge
h_0.\label{3.18}$$ The $u$-component of the $2$-rarefaction wave is determined by [(\[3.17\])]{} and the $h$-component is given by [(\[3.18\])]{}.
We can summarize the above results in:
Given a left-hand state $U_0$, the $1$-rarefaction curve ${\mathcal R}_1(U_0)$ consisting of all right-hand states $U$ that can be connected to $U_0$ by a rarefaction wave associated with the first characteristic field is $${\mathcal R}_1(U_0):\quad
u=v_1(h,U_0):=u_0-2\sqrt{g}(\sqrt{h}-\sqrt{h_0}),\quad h\le h_0.$$ The $2$-rarefaction curve ${\mathcal R}_2(U_0)$ consisting of all right-hand states $U$ that can be connected to $U_0$ by a rarefaction wave associated with the second characteristic field is $${\mathcal R}_2(U_0):\quad
u=v_2(h,U_0):=u_0+2\sqrt{g}(\sqrt{h}-\sqrt{h_0}),\quad h\ge h_0.$$
We will also need backward curves which we define here for completeness. Given a [*right-hand*]{} state $U_0$, the $1$-rarefaction curve ${\mathcal R}_1^B(U_0)$ consisting of all [*left-hand*]{} states $U$ that can be connected to $U_0$ by a rarefaction wave associated with the first characteristic field is $${\mathcal R}_1^B(U_0):\quad
u=v_1(h,U_0):=u_0-2\sqrt{g}(\sqrt{h}-\sqrt{h_0}),\quad h\ge h_0.
\label{3.19}$$ The $2$-rarefaction curve ${\mathcal R}_2^B(U_0)$ consisting of all [*left-hand*]{} states $U$ that can be connected to $U_0$ by a rarefaction wave associated with the second characteristic field is $${\mathcal R}_2^B(U_0):\quad
u=v_2(h,U_0):=u_0+2\sqrt{g}(\sqrt{h}-\sqrt{h_0}),\quad h\le h_0.
\label{3.20}$$
In turn, we are in a position to define the wave curves, as follows $$\aligned
&{\mathcal W}_1(U_0)={\mathcal S}_1(U_0)\cup{\mathcal R}_1(U_0),\\
&{\mathcal W}_1^B(U_0)={\mathcal S}_1^B(U_0)\cup{\mathcal R}_1^B(U_0),\\
&{\mathcal W}_2(U_0)={\mathcal S}_2(U_0)\cup{\mathcal R}_2(U_0),\\
&{\mathcal W}_2^B(U_0)={\mathcal S}_2^B(U_0)\cup{\mathcal R}_2^B(U_0).\\
\endaligned
\label{3.21}$$ Some properties of the wave curves are now checked.
The wave curve ${\mathcal W}_1(U_0)$ can be parameterized in the form $h\mapsto
u=u(h), {h>0}$, where the function $u$ is strictly convex and strictly decreasing in $h$. The wave curve ${\mathcal W}_2(U_0)$ can be parameterized in the form $h\mapsto u=u(h), h>0,$ where the function $u$ is strictly concave and strictly increasing in $h$.
We only give the proof for the $1$-wave curve ${\mathcal W}_1(U_0)$, the proof for ${\mathcal W}_2(U_0)$ being similar. For the shock part ${\mathcal S}_1(U_0)$, we have $${du\over dh} = -\sqrt{g\over 2}\dfrac{\dfrac{1}{
2h}+\dfrac{1}{h_0}+\dfrac{h_0}{2h^2}}{\sqrt{\dfrac{1}{h}+\dfrac{1}{h_0}}}<0.$$ For the rarefaction part ${\mathcal R}_1(U_0)$, we have $${du\over dh}=-\sqrt{g\over h}<0.$$ This establishes the desired monotonicity property of ${\mathcal W}_1(U_0)$.
The convexity of ${\mathcal W}_1(U)$ follows from the fact that $du/dh$ is increasing. Indeed, along the shock part ${\mathcal S}_1(U_0)$ it holds $${d^2u\over dh^2}=\sqrt{g\over 2}\dfrac{\Big({1\over
2h^2}+{h_0\over h^3}\Big)\sqrt{{1\over h}+{1\over h_0}} +{1\over
2h^2\sqrt{{1\over h}+{1\over h_0}}}\Big({1\over 2h}+{1\over
h_0}+{h_0\over 2h^2}\Big)}{{1\over h}+{1\over h_0}}>0$$ and, along the rarefaction part ${\mathcal R}_1(U_0)$, $${d^2u\over dh^2}= {\sqrt{g}\over 2h^{3/2}}>0,$$ which completes the proof.
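Since the forward wave curves are given by the explicit formulas [(\[3.7\])]{}, [(\[3.8\])]{}, [(\[3.16\])]{} and [(\[3.18\])]{}, they are easy to evaluate numerically; a minimal sketch (ours, with an arbitrary value of $g$):

```python
import numpy as np

def u_wave1(h, h0, u0, g=9.81):
    """u along W1(U0): rarefaction branch (3.16) for h <= h0, Lax shock branch (3.7) for h > h0."""
    h = np.asarray(h, dtype=float)
    rarefaction = u0 - 2.0 * np.sqrt(g) * (np.sqrt(h) - np.sqrt(h0))
    shock = u0 - np.sqrt(g / 2.0) * (h - h0) * np.sqrt(1.0 / h + 1.0 / h0)
    return np.where(h <= h0, rarefaction, shock)

def u_wave2(h, h0, u0, g=9.81):
    """u along W2(U0): Lax shock branch (3.8) for h < h0, rarefaction branch (3.18) for h >= h0."""
    h = np.asarray(h, dtype=float)
    rarefaction = u0 + 2.0 * np.sqrt(g) * (np.sqrt(h) - np.sqrt(h0))
    shock = u0 + np.sqrt(g / 2.0) * (h - h0) * np.sqrt(1.0 / h + 1.0 / h0)
    return np.where(h >= h0, rarefaction, shock)

h = np.linspace(0.1, 5.0, 50)
print(u_wave1(h, h0=1.0, u0=0.0)[:3])
print(u_wave2(h, h0=1.0, u0=0.0)[:3])
```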
Next, we consider the $3$-curve from a state $U_0$, which consists of all states $U$ that can be connected to $U_0$ by a [*stationary wave.*]{} As seen in [(\[1.9\])]{}, $U$ and $U_0$ are related by the Rankine-Hugoniot relations $$\aligned &[hu]=0\\
&[{u^2\over 2}+g(h+a)]=0.\\
\endaligned
\label{3.22}$$ This leads to a natural definition of a curve parameterized in $h$: $${\mathcal W}_3(U_0):
\quad
\begin{cases}
& u=u(h)={h_0u_0\over h},
\\
& a=a(h)=a_0+{u^2-u_0^2\over 2g}+h-h_0.
\end{cases}
\label{3.23}$$
Admissibility conditions for stationary waves
=============================================
Two possible stationary jumps
-----------------------------
In view of the discussion in the previous section, the states across a stationary wave are constrained by the Rankine-Hugoniot relations [(\[3.22\])]{}. From a given left-hand state we have to determine the right-hand state, which has three components, determined by the two equations [(\[3.22\])]{}. Moreover, since the component $a$ changes only through stationary waves (which propagate with zero speed), for given bottom levels $a_\pm$ we should solve for $u$ and $h$ in terms of $a$. Thus, we rewrite [(\[3.22\])]{} in the form $$\aligned
&u={h_0u_0\over h},\\
& a_0-a+{u^2-u_0^2\over 2g}+h-h_0=0. \endaligned$$ Substituting for $u$ and re-arranging the terms, we obtain $$\aligned
&u={h_0u_0\over h},\\
& a_0-a+{u_0^2\over 2g}\Big({h_0^2\over h^2}-1\Big)+h-h_0=0.
\endaligned$$ This leads us to search for roots of the function $$\varphi(h) := a_0-a+{u_0^2\over 2g}\Big({h_0^2\over
h^2}-1\Big)+h-h_0.
\label{4.3}$$ Let us set $$\aligned
& h_{\min}(U_0) :=\Big({u_0^2h_0^2\over g}\Big)^{1/3},\\
&a_{\min}(U_0) :=a_0+{u_0^2\over 2g}\Big({h_0^2\over
h_{\min}^2}-1\Big)+h_{\min}-h_0.
\endaligned
\label{4.4}$$
Some useful properties of the function $\varphi$ in [(\[4.3\])]{} are now derived.
Suppose that $U_0 = (h_0, u_0, a_0)$ and $a$ are given with $u_0\ne 0$. The function $\varphi: (0, +\infty) \to \RR$ is smooth and convex; with $h_{\min}=h_{\min}(U_0)$ given by [(\[4.4\])]{}, it is decreasing in the interval $(0,h_{\min})$ and increasing in the interval $(h_{\min},\infty)$, with $$\lim_{h\to 0} \varphi(h)=\lim_{h\to +\infty} \varphi(h) = +\infty.
\label{4.5}$$ Furthermore, if $a\ge a_{\min}$ then the function $\varphi$ has two roots $h_*(U_0), h^*(U_0)$ with $h_*(U_0)\le h_{\min}(U_0)\le
h^*(U_0)$. These inequalities are strict whenever $a>a_{\min}(U_0)$.
The smoothness of the function $\varphi$ and the limiting conditions are obvious. Moreover, we have $${d\varphi(h)\over dh} = -{u_0^2h_0^2\over gh^3}+1$$ (for $u_0\ne 0$) which is positive if and only if $$h > \Big({u_0^2h_0^2\over g}\Big)^{1/3}=h_{\min}(U_0).$$ This establishes the monotonicity property of $\varphi$. Furthermore, we have $${d^2\varphi(h)\over dh^2}={3u_0^2h_0^2\over gh^4}\ge 0,$$ which shows the convexity of $\varphi$. If $a>a_{\min}(U_0)$, then $\varphi(h_{\min}(U_0))<0$. The other conclusions follow immediately.
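In practice, $h_{\min}(U_0)$, $a_{\min}(U_0)$ and the two roots $h_*(U_0)\le h_{\min}(U_0)\le h^*(U_0)$ can be computed directly from [(\[4.3\])]{}–[(\[4.4\])]{}. The following Python sketch (the value of $g$ and the use of a bracketing root finder are our own choices) locates one root on each side of $h_{\min}$.

```python
import numpy as np
from scipy.optimize import brentq

def stationary_roots(h0, u0, a0, a, g=9.81):
    """h_min, a_min and the roots h_* <= h_min <= h^* of phi(h) in (4.3),
    assuming u0 != 0."""
    def phi(h):
        return a0 - a + u0**2 / (2.0 * g) * (h0**2 / h**2 - 1.0) + h - h0

    h_min = (u0**2 * h0**2 / g) ** (1.0 / 3.0)
    a_min = a0 + u0**2 / (2.0 * g) * (h0**2 / h_min**2 - 1.0) + h_min - h0
    if a < a_min:
        raise ValueError("a < a_min(U_0): no stationary wave exists")
    if phi(h_min) >= 0.0:            # a = a_min: the two roots coincide
        return h_min, a_min, h_min, h_min
    # phi -> +infinity as h -> 0+ and as h -> +infinity, so bracket each root
    lo = h_min
    while phi(lo) <= 0.0:
        lo *= 0.5
    hi = h_min
    while phi(hi) <= 0.0:
        hi *= 2.0
    return h_min, a_min, brentq(phi, lo, h_min), brentq(phi, h_min, hi)
```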
It is straightforward to check:
The function $h_{\min}$ satisfies the following inequalities: $$\aligned
& h_{\min}(U_0) >h_0,\quad\quad U_0\in A_1\cup A_3,\\
& h_{\min}(U_0) <h_0,\quad\quad U_0\in A_2,\\
& h_{\min}(U_0) =h_0,\quad\quad U_0\in {\mathcal C},\\
\endaligned
\label{4.6}$$ The roots $h^*$ and $h_*$ satisfy the following inequalities:
- If $a > a_0$, then $$h_*(U_0) < h_0 < h^*(U_0).
\label{4.7}$$
- If $a < a_0$, then $$\aligned
&h_0 < h_*(U_0) \quad\quad U_0 \in A_1\cup A_3,\\
&h_0 > h^*(U_0) \quad\quad U_0 \in A_2.
\endaligned
\label{4.8}$$
The function $a_{\min}(U_0)$ satisfies the following inequalities: $$\aligned
&a_{\min}(U_0) < a_0,\quad (h_0,u_0)\in A_1 \cup A_2 \cup A_3,
\\
&a_{\min}(U_0) = a_0,\quad (h_0,u_0)\in {\mathcal C}_\pm.
\endaligned
\label{4.9}$$
The states that can be connected by stationary waves are characterized as follows.
\[theo41\] Fix a left-hand state $U_0=(h_0,u_0,a_0)$ and a right-hand bottom level $a$.
- If $u_0\ne 0$ and $a>a_{\min}(U_0)$, then there are two distinct right-hand states $$U_{1,2}:=(h_{1,2}(U_0),u_{1,2}(U_0),a)$$ where $u_i(U_0):=h_0u_0/h_i(U_0), i=1,2$, that can be connected to $U_0$ by a stationary wave satisfying the Rankine-Hugoniot relations.
- If $u_0\ne 0$ and $a=a_{\min}(U_0)$, the two states in (i) coincide and we obtain a unique stationary wave.
- If $u_0\ne 0$ and $a<a_{\min}(U_0)$, there is no stationary wave from $U_0$ to a state with level $a$.
- If $u_0=0$, there is only one stationary jump defined by $$u=u_0=0,\quad h=h_0+a-a_0.$$
We arrive at an important conclusion on stationary jumps.
\[prop42\] For $u_0\ne 0$, the state $(h_{1}(U_0),u_{1}(U_0))$ belongs to $A_1$ if $u_0 < 0$, and belongs to $A_3$ if $u_0 > 0$, while the state $(h_{2}(U_0),u_{2}(U_0))$ always belongs to $A_2$. Moreover, we have $$(h_{\min}(U_0), u=h_0u_0/h_{\min}(U_0)) \in
\begin{cases}
{\mathcal C}^+, & u_0 > 0,
\\
{\mathcal C}^-, & u_0 < 0.
\end{cases}
\label{4.10}$$
It is interesting to observe that the shock speed in the genuinely nonlinear characteristic fields will change sign along the shock curves. Therefore, it exchanges its order with the linearly degenerate field, as stated in the following theorem.
\[theo43\] (a) If $U_0\in A_1$, then there exists $\tilde U_0\in{\mathcal S}_1(U_0)\cap A_{2}^+$ corresponding to $h=\tilde h_0>h_0$ such that $$\aligned
&\bar\ld_1(U_0,\tilde U_0) = 0,\\
&\bar\ld_1(U_0,U) > 0,\quad U\in{\mathcal S}_1(U_0), h \in (h_0,\tilde h_0),\\
&\bar\ld_1(U_0,U) < 0,\quad U\in{\mathcal S}_1(U_0), h \in (\tilde h_0,+\infty).\\
\endaligned
\label{4.11}$$ If $U_0\in A_2\cup A_3$, then $$\bar\ld_1(U_0,U) < 0, \quad U\in {\mathcal S}_1(U_0).
\label{4.12}$$
\(b) If $U_0\in A_3$, then there exists $\bar U_0\in{\mathcal S}_2^B(U_0)\cap A_{2}^-$ corresponding to $h=\bar h_0>h_0$ such that $$\aligned
&\bar\ld_2(U_0,\bar U_0) = 0,\\
&\bar\ld_2(U_0,U) > 0,\quad U\in{\mathcal S}_2^B(U_0), h \in (h_0,\bar h_0),\\
&\bar\ld_2(U_0,U) < 0,\quad U\in{\mathcal S}_2^B(U_0), h \in (\bar h_0,+\infty).\\
\endaligned
\label{4.13}$$
If $U_0\in A_1\cup A_2$, then $$\bar\ld_2(U_0,U) > 0,\qquad U\in {\mathcal S}_2^B(U_0).
\label{4.14}$$
Two-parameter wave sets
-----------------------
From Proposition \[prop42\] and the arguments in the previous section, we can now construct wave composites. It turns out that two-parameter wave sets can be constructed. For definiteness, we now illustrate this feature on a particular case. Suppose that $U_0=(h_0,u_0,a_0)\in A_2^+$. We can use a stationary wave from $U_0$ to a state $U_m=(h_m,u_m,a_m)\in A_2^+$ using $h^*$, followed by another stationary wave from $U_m$ to $U\in A_1$ using the corresponding value $h_*$; we can then continue with $1$-waves, which is possible since the characteristic speed in $A_1$ is positive. As $a_m$ can vary, the set of such states $U$ forms a two-parameter set of composite waves involving waves of the first and third characteristic families. Such wave sets were constructed even for strictly hyperbolic systems by Hayes and LeFloch [@HayesLeFloch].
To make the Riemann problem well-posed, it is necessary to impose an additional admissibility criterion.
The monotonicity criterion
--------------------------
Since the Riemann problem for [(\[1.1\])]{} may in principle admit up to a one-parameter family of solutions, we now require that the Riemann solutions of interest satisfy a monotonicity condition in the component $a$.
- (Monotonicity Criterion) Along any stationary curve ${\mathcal W}_3(U_0)$, the bottom level $a$ is a monotone function in $h$. The total variation of the bottom level component of any Riemann solution must not exceed (and, therefore, is equal to) $|a_L-a_R|$, where $a_L, a_R$ are left-hand and right-hand cross-section levels.
A similar selection criterion was used by Isaacson and Temple [@IsaacsonTemple92; @IsaacsonTemple95] and by LeFloch and Thanh [@LeFlochThanh03.2], and by Goatin and LeFloch [@GoatinLeFloch]. Under the transformation (if necessary) $$x \to -x, \qquad u \to -u,$$ a right-hand state $U=(h,u,a)$ transforms into a left-hand state of the form $U'=(h,-u,a)$. Therefore, it is not restrictive to assume that $$a_L < a_R.
\label{5.1}$$
\[lem51\] The Monotonicity Criterion implies that stationary shocks do not cross the boundary of strict hyperbolicity. In other words, we have:
- If $U_0\in A_1\cup A_3$, then only the stationary shock based on the value $h_*(U_0)$ is admissible.
- If $U_0\in A_2$, then only the stationary shock using $h^*(U_0)$ is admissible.
Recall that the Rankine-Hugoniot relations associated with the linearly degenerate field [(\[3.23\])]{} imply that the component $a$ can be expressed as a function of $h$: $$a=a(h)=a_0+{u^2-u_0^2\over 2g}+h-h_0,$$ where $$u=u(h)={h_0u_0\over h}.$$ Thus, differentiating $a$ with respect to $h$, we find $$\aligned
a'(h)&={uu'(h)\over g}+1=-u{h_0u_0\over gh^2}+1\\
&=-{u^2\over gh}+1\\
\endaligned$$ which is positive (resp. negative) if and only if $$u^2 - g \, h < 0 \qquad \text{(resp. $u^2 - g \, h > 0$)}$$ or $(h,u,a)\in A_2$ (resp. $\in A_1$ or $\in A_3$). Thus, in order that $a'$ keeps the same sign, the point $(h,u,a)$ must remain on the same side as $(h_0,u_0,a_0)$ with respect to ${\mathcal C}_\pm$. The conclusions in (i) and (ii) follow.
It follows from Lemma \[lem51\] that for a given $U_0=(h_0,u_0,a_0)\in A_i, i=1,2,3,$ and a level $a$, we can define a unique point $U=(h,u,a)$ so that the two points $U_0,U$ can be connected by a stationary wave satisfying the (MC) criterion. We have a mapping $$\aligned
SW(.,a): [0,\infty)\times \RR\times \RR_+ & \to [0,\infty)\times \RR\times\RR_+\\
U_0=(h_0,u_0,a_0) &\mapsto SW(U_0,a)= U = (h,u,a),\\
\endaligned
\label{5.2}$$ such that $U_0$ and $U$ can be connected by a stationary wave satisfying the (MC) condition. Observe that this mapping is single-valued except on the hypersurface ${\mathcal C}$, where it is two-valued.
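The branch selected by the (MC) criterion can be made explicit as follows. The sketch below reuses the `stationary_roots` helper from the earlier code sketch and encodes Lemma \[lem51\]: a left-hand state in $A_1\cup A_3$ (i.e. $u_0^2>gh_0$) keeps the root $h_*$, while a state in $A_2$ keeps $h^*$.

```python
def stationary_jump_MC(h0, u0, a0, a, g=9.81):
    """Right-hand state (h, u, a) = SW(U_0, a) selected by the (MC) criterion."""
    h_min, a_min, h_star, h_star_up = stationary_roots(h0, u0, a0, a, g)
    in_A1_or_A3 = u0**2 > g * h0            # U_0 outside the strict hyperbolicity boundary
    h = h_star if in_A1_or_A3 else h_star_up
    u = h0 * u0 / h                         # first relation in (3.22): [hu] = 0
    return h, u, a
```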
Let us use the following notation: $W_i(U_0,U)$ will stand for the $i$th wave from a left-hand state $U_0$ to the right-hand state $U$, $i=1,2,3$. To represent the fact that the wave $W_i(U_1,U_2)$ is followed by the wave $W_j(U_2,U_3)$, we use the notation: $$W_i(U_1,U_2) \oplus W_j(U_2,U_3).
\label{5.3}$$
The Riemann problem
===================
In this section we construct the solutions of the Riemann problem, by combining Lax shocks, rarefaction waves, and stationary waves satisfying the admissibility condition (MC).
Recall that for general strictly hyperbolic systems of conservation laws, the solution to the Riemann problem exists only when the initial jump is sufficiently small, that is, when the right-hand state $U_R$ lies in a small neighborhood of the left-hand state $U_L$. However, for the system [(\[1.1\])]{} we can handle large data and essentially cover a full domain of existence for any given left-hand state. More precisely, we determine the precise range of right-hand states for which the Riemann solution exists.
Solutions containing only one wave of each characteristic family
----------------------------------------------------------------
We begin by constructing solutions containing at most one wave from each characteristic family. This structure of solutions is standard in the theory of [*strictly hyperbolic*]{} systems of conservation laws. In the next subsection we will consider solutions that contain up to two waves of the same family. The following theorem deals with the case where the left-hand state $U_L$ is in $A_1$.
\[theo51\] Let $U_L\in A_1$ and set $U_1:=SW(U_L,a_R)$, $\{U_2\}={\mathcal W}_1(U_1)\cap {\mathcal W}_2^B(U_R)$. Then, the Riemann problem [(\[1.1\])]{}-[(\[1.2\])]{} admits an admissible solution with the following structure $$W_3(U_L,U_1) \oplus W_1(U_1,U_2) \oplus W_2(U_2,U_R),
\label{5.4}$$ provided $h_2\le \tilde h_1$. (Figure \[fig41\]).
![Solution for $U_L\in A_1$[]{data-label="fig41"}](fig41.pdf "fig:"){width="70.00000%"}\
Observe that the set of composite waves $SW({\mathcal W}_1(U_L), a_R)$ consists of three monotone decreasing curves, each of which lies entirely in one of the regions $A_i, i=1,2,3$. The monotone increasing backward curve ${\mathcal W}_2^B(U_R)$ therefore may cut the three composite curves at a unique point, at two points, or else may not meet the wave composite set at all. The Riemann problem therefore may admit a unique solution, two solutions, or no solution.
The state $U_L$ belongs to $A_1$ and in this region, the $\ld_3$ is the smallest of the three characteristic speeds. A stationary wave from $U_L=(h_L,u_L,a_L)$ to $U_1=(h_1,u_1,a_R)$ exists, since $a_L\le a_R$. Moreover, by Lemma \[lem51\], we have $U_1\in A_1$.
If $h_2\le h_1$, then the stationary wave is followed by a $1$-rarefaction wave with positive speed, and then can be continued by a $2$-wave $W_2(U_2,U_R)$. If $h_2>h_1$, then the $1$-wave in [(\[5.4\])]{} is a shock wave. Since $h_2\le \tilde h_1$ and $U_1\in A_1$, the shock speed $\bar\ld_1(U_1,U_2)\ge 0$, and thus it can follow a stationary wave (with zero speed). Moreover, it is derived from [(\[3.5\])]{} that $$\aligned
\bar\ld_{1}(U_1,U_2) &=u_1-\sqrt{{g\over 2}\Big(h_2+{h_2^2\over h_1}\Big)},\\
&={h_2u_2-h_1u_1\over h_2-h_1}\\
&=u_2-\sqrt{{g\over 2}\Big(h_1+{h_1^2\over h_2}\Big)},
\endaligned$$ thus $$\aligned
\bar\ld_{1}(U_1,U_2)
&\le u_2+\sqrt{{g\over 2}\Big(h_R+{h_R^2\over h_2}\Big)}
=\bar\ld_{2}(U_2,U_R).
\endaligned \label{5.5}$$ This means the $1$-shock $S_1(U_1,U_2)$ can always be followed by the $2$-shock $ S_2(U_2,U_R)$. The argument for rarefaction waves is similar. Therefore, the solution structure [(\[5.4\])]{} holds.
The following theorem deals with the case where the left-hand state $U_L$ is in $A_1\cup A_2$.
\[theo52\] Let $U_L\in A_1\cup A_2$. Then there exists a region of values $U_R$ such that $SW({\mathcal W}_1(U_L),a_R)\cap{\mathcal W}_2^B(U_R)\ne \emptyset$. In this case, the intersection may contain either only one or both of the points $U_1\in A_2$ and $U_2\in A_3$. The Riemann problem [(\[1.1\])]{}-[(\[1.2\])]{} therefore has a solution with the structure $$W_1(U_L,U_3) \oplus W_3(U_3,U_1) \oplus W_2(U_1,U_R),
\label{5.6}$$ where $U_3\in {\mathcal W}_1(U_L)$ is the point such that $U_1=SW(U_3,a_R)$, and also $$W_1(U_L,U_4) \oplus W_3(U_4,U_2) \oplus W_2(U_2,U_R),
\label{5.7}$$ where $U_4\in {\mathcal W}_1(U_L)$ is the point such that $U_2=SW(U_4,a_R)$, if $h_2\ge \bar h_R$ whenever $U_R\in A_2^-$. (Figure 3 )
![Solution for $U_L\in A_1\cup A_2$[]{data-label="fig42"}](fig42.pdf "fig:"){width="70.00000%"}\
The solution may begin with a $1$-wave, either a $1$-shock with negative shock speed to a state $U_3$, or a $1$-rarefaction wave with $\ld_1(U_3)\le 0$, followed by a stationary wave $W_3(U_3,U_1)$ from $U_3$ to $U_1$, then followed by a $2$-wave $W_2(U_1,U_R)$ from $U_1$ to $U_R$. The case of $U_2$ is similar. However, in order that the stationary wave $W_3(U_4,U_2)$, where $U_4\in
{\mathcal W}_1(U_L)\cap A_3$, can be followed by a $2$-wave $W_2(U_2,U_R)$, it is required that this wave is a shock with non-negative shock speed $\ld_2(U_2,U_R)$. This is equivalent to $h_2\ge \bar h_R$.
\[theo53\] Let $U_L\in A_3$ and $U_R\in A_1\cup A_2$, and set $U_1=SW({\mathcal W}_2^B(U_R),a_L)\cap {\mathcal W}_1(U_L)$ and $U_2=SW(U_1,a_R)\in {\mathcal W}_2^B(U_R)$.
- If $U_1\in A_2^+\cup {\mathcal C}_+\cup\{u=0\}$, the Riemann problem [(\[1.1\])]{}-[(\[1.2\])]{} has a solution with the following structure $$W_1(U_L,U_1) \oplus W_3(U_1,U_2) \oplus W_2(U_2,U_R).
\label{5.8}$$
- If $U_1\in A_2^-\cup {\mathcal C}_-$, provided $h_R\ge \bar
h_2$, the Riemann solution [(\[5.8\])]{} also exists.
- If $U_1\in A_1\cup A_3$, the construction [(\[5.8\])]{} does not make sense.
(Figure \[fig43\]).
![Solution for $U_L\in A_3$[]{data-label="fig43"}](fig43.pdf "fig:"){width="70.00000%"}\
If $U_1\in A_2\cup{\mathcal C}$, the non-positive speed wave $W_1(U_L,U_1)$ can be followed by a stationary wave $W_3(U_1,U_2)$.
When $U_2\in A_2^-$, if $U_1\in A_2^+\cup {\mathcal C}_+\cup\{u=0\}$, then this stationary wave can always be followed by a $2$-wave $W_2(U_2,U_R)$, since the wave speed of the $2$-wave is positive. This establishes (i).
If $U_1\in A_2^-\cup {\mathcal C}_-$, the wave speed of the $2$-wave $W_2(U_2,U_R)$ is non-negative if and only if $h_R\ge \bar h_2$. This proves (ii).
If $U_1\in A_1$, the $1$-wave has positive speed, so it cannot be followed by a stationary wave. If $U_1\in A_3$, then $U_2\in A_3$ by the (MC) criterion, so the $2$-wave $W_2(U_2,U_R)$ has negative speed and cannot be preceded by a stationary wave. This proves (iii).
The above theorem enables $U_R$ to vary in each region $A_1, A_2$, and $A_3$. The next theorem enables $U_L$ to vary in all the three regions.
\[theo54\] Let $U_R\in A_3$. Set $U_1=SW(U_R,a_L)$, $U_2={\mathcal W}_2^B(U_1)\cap {\mathcal W}_1(U_L)$. A Riemann solution exists and has the following structure
$$W_1(U_L,U_2) \oplus W_2(U_2,U_1) \oplus W_3(U_1,U_R),
\label{5.9}$$
provided $ h_2\le \bar h_1$. (Figure \[fig44\]).
![$U_L$ may be anywhere[]{data-label="fig44"}](fig44.pdf "fig:"){width="70.00000%"}\
The stationary wave $W_3(U_1,U_R)$ turns out to have the greatest wave speed. In order for this wave to be preceded by the $2$-wave $W_2(U_2,U_1)$, the wave speed of this $2$-wave has to be non-positive. This is equivalent to the condition $h_2\le \bar h_1$, according to Theorem \[theo43\]. Similar to [(\[5.7\])]{}, we have $$\ld_1(U_L,U_2)\le \ld_2(U_2,U_1),$$ so that the $1$-wave $W_1(U_L,U_2)$ can be followed by the $2$-wave $ W_2(U_2,U_1)$.
Solutions containing more than one wave of each characteristic family
---------------------------------------------------------------------
It is a remarkable feature of the shallow water system that we can also construct solutions with [*four*]{} elementary waves, using the three available characteristic fields. This illustrates one of the difficulties in coping with the Riemann problem when the system under consideration is not strictly hyperbolic.
Let $U_L\in A_2\cup A_3$ and set $U_+={\mathcal W}_1(U_L)\cap {\mathcal C}_+,\{U_1\}=SW(U_+,a_R)\cap
A_1,\{U_2\}={\mathcal W}_1(U_1)\cap{\mathcal W}_2^B(U_R)$. The Riemann problem [(\[1.1\])]{}-[(\[1.2\])]{} has a solution with the following structure $$R_1(U_L,U_+) \oplus W_3(U_+,U_1) \oplus W_1(U_1,U_2)\oplus
W_2(U_2,U_R),
\label{5.11}$$ provided $h_2\le \tilde h_1$. (Figure \[fig45\]).
![Solution with repeated two $1$-waves[]{data-label="fig45"}](fig45.pdf "fig:"){width="70.00000%"}\
For any $U_L$, set $ \{U_1\}=SW({\mathcal C}_-,a_R)\cap {\mathcal W}_2^B(U_R)\cap A_2$, $U_2=(h_2,u_2,a_L)\in {\mathcal C}_-$ such that $U_1=SW(U_2,a_R)$, and $\{U_3\}={\mathcal W}_2^B(U_2)\cap {\mathcal W}_1(U_L).$ Then the Riemann problem [(\[1.1\])]{}-[(\[1.2\])]{} has a solution with the following structure $$W_1(U_L,U_3) \oplus R_2(U_3,U_2) \oplus W_3(U_2,U_1)\oplus
W_2(U_1,U_R),
\label{5.12}$$ provided $h_R\ge \bar h_1$ and $h_3\le h_2$. (Figure \[fig46\]).
![Solution with repeated two $2$-waves[]{data-label="fig46"}](fig46.pdf "fig:"){width="70.00000%"}\
Thus, we see that the Riemann problem [(\[1.1\])]{}-[(\[1.2\])]{} has a solution consisting of a $1$-, a $3$-, and two $2$-waves.
It is interesting to note that there are solutions satisfying the (MC) criterion which contain three waves with the same speed (zero). This is the case when a stationary wave jumps from the level $a=a_L$ to an intermediate level $a_m$ between $a_L$ and $a_R$, followed by an “intermediate” $k$-shock with zero speed at the level $a_m$, $k=1,2$, and then followed by another stationary wave jumping from the level $a_m$ to $a_R$. Thus, there are only two possibilities:
- $U_L$ belongs to $A_1$ and a $1$-shock with zero speed is used.
- $U_R$ belongs to $A_3$ and a $2$-shock with zero speed is used.
We describe only the first case (i), as the second case is similar. Recall from Theorem \[theo43\] that for any $U\in A_1$, there exists a unique point denoted by $\tilde U\in {\mathcal W}_1(U)\cap A_2$ such that $$\bar\ld_1(U,\tilde U)=0.$$
Let $U_L\in A_1$ and set $$\aligned
&SW(U_L,[a_L,a_R]) :=\cup_{a\in [a_L,a_R]} SW(U_L,a),\\
&\widetilde{SW}(U_L,[a_L,a_R]):=\{\tilde U | \ U \in
SW(U_L,[a_L,a_R])\}.\\
\endaligned$$ Whenever $$\widetilde{SW}(U_L,[a_L,a_R])\cap {\mathcal W}_2^B(U_R)\ne \emptyset$$ there exist $a_m\in [a_L,a_R]$, $U_1=SW(U_L,a_m)$, and $$U_2\in \widetilde{SW}(U_L,[a_L,a_R])\cap {\mathcal W}_2^B(U_R)$$ that defines a solution with the structure $$W_3(U_L,U_1) \oplus S_1(U_1,\tilde U_1) \oplus W_3(\tilde
U_1,U_2)\oplus W_2(U_2,U_R).
\label{5.14}$$
Acknowledgments {#acknowledgments .unnumbered}
===============
The first author (P.G.L.) was supported by the A.N.R. Grant 06-2-134423: [*Mathematical methods in general relativity*]{} (MATH-GR) and the Centre National de la Recherche Scientifique (CNRS).
[10]{}
F. Alcrudo and F. Benkhaldoun, Exact solutions to the Riemann problem of the shallow water equations with a bottom step, Comput. Fluids 30 (2001), 643–671.
D. Amadori, L. Gosse, and G. Guerra, Godunov-type approximation for a general resonant balance law with large data, J. Differential Equations 198 (2004), 233–274.
N. Andrianov and G. Warnecke, . , 64(3):878–901, 2004.
E. Audusse, F. Bouchut, M-O. Bristeau, R. Klein, and B. Perthame, A fast and stable well-balanced scheme with hydrostatic reconstruction for shallow water flows. , 25(6):2050–2065, 2004.
F. Bouchut, [*Nonlinear stability of finite volume methods for hyperbolic conservation laws and well-balanced schemes for sources,*]{} Frontiers in Mathematics. Birkhäuser Verlag, Basel, 2004.
M.J. Castro, J.A. García-Rodríguez, J.M. González-Vida, and C. Parés, A parallel 2D finite volume scheme for solving the bilayer shallow-water system: modellization of water exchange at the Strait of Gibraltar, Parallel computational fluid dynamics, 199–206, Elsevier B. V., Amsterdam, 2005.
G. Dal Maso, P.G. LeFloch, and F. Murat, Definition and weak stability of nonconservative products. , 74:483–548, 1995.
P. Goatin and P.G. LeFloch, . , 21:881–902, 2004.
L. Gosse, Localization effects and measure source terms in numerical schemes for balance laws, Math. Comp. 71 (2002), 553–582.
L. Gosse and G. Toscani, Asymptotic-preserving and well-balanced schemes for radiative transfer and the Rosseland approximation, Numer. Math. 98 (2004), no. 2, 223–250.
J.M. Greenberg, A.Y. Leroux, R. Baraille, and A. Noussair, Analysis and approximation of conservation laws with source terms. , 34:1980–2007, 1997.
B.T. Hayes and P.G. LeFloch, SIAM J. Math. Anal. 31 (2000), 941–991.
T.Y. Hou and P.G. LeFloch, Why nonconservative schemes converge to wrong solutions: error analysis, Math. Comp. 62 (1994), 497–530.
E. Isaacson and B. Temple, Nonlinear resonance in systems of conservation laws. , 52:1260–1278, 1992.
E. Isaacson and B. Temple, Convergence of the $2\times 2$ Godunov method for a general resonant nonlinear balance law. , 55:625–640, 1995.
D. Kröner and M.D. Thanh, . , 43(2):796–824, 2005.
P.D. Lax, . , pp. 603–634, 1971.
P.G. LeFloch, . , 593, 1989.
P.G. LeFloch, Lectures in Mathematics, ETH Zuerich, Birkhauser, 2002.
P.G. LeFloch, 1:243–289, 2004.
P.G. LeFloch and M.D. Thanh, . , 1(4):763–797, 2003.
D. Marchesin and P.J. Paes-Leme, . , 12:433–455, 1986.
---
abstract: 'A new coset matrix for low–energy limit of heterotic string theory reduced to three dimensions is constructed. The pair of matrix Ernst potentials uniquely connected with the coset matrix is derived. The action of the symmetry group on the Ernst potentials is established.'
---
[**Matrix Ernst Potentials and Orthogonal Symmetry\
for Heterotic String in Three Dimensions**]{}

[**Alfredo Herrera–Aguilar**]{}\
Laboratory of Computing Techniques and Automation\
Joint Institute for Nuclear Research, Dubna, M.R. 141980 Russia\
e–mail: alfa@cv.jinr.dubna.su

Department of Electromagnetic Processes and Nuclear Interactions\
Nuclear Physics Institute, Moscow State University, Moscow 119899 Russia\
e–mail: kechkin@monet.npi.msu.su

[April 1997]{}
Review of Previous Results
==========================
In one–loop approximation the heterotic string theory leads to the effective action which describes matter fields coupled to gravity [@ms]: $$S=\int d^Dx\,\sqrt{-G\D}\,e^{-\p\D}\left(R\D+\p\D_{;M}\p^{(D);M}-
\frac{1}{12}H\D_{MNP}H^{(D)MNP}-\frac{1}{4}F^{(D)I}_{MN}F^{(D)IMN}\right),$$ where $$\aligned
&F^{(D)I}_{MN}=\pa_MA^{(D)I}_N-\pa_NA^{(D)I}_M,\\
&H\D_{MNP}=\pa_MB\D_{NP}-\frac{1}{2}A^{(D)I}_MF^{(D)I}_{NP}+
\mbox{\rm cycl. perms. of $M$, $N$, $P$}.
\endaligned$$ Here $G\D_{MN}$ is the $D$-dimensional metric, $B\D_{MN}$ is the antisymmetric Kalb–Ramond field, $\p\D$ is the dilaton and $A^{(D)I}_M$ denotes a set ($I=1,\,2,\,...,n$) of Abelian vector fields. For the self–consistent heterotic string theory $D=10$ and $n=16$ [@s], but in this work, following [@ms], we shall leave these parameters arbitrary.
The action (1) can be generalized for the case of Yang–Mills gauge fields; it can also include mass, Gauss–Bonnet terms, etc. But only the simplest variant (1) of the theory possesses remarkable analytical properties which are important for our consideration.
In [@ms]–[@s] it was shown that after the Kaluza–Klein compactification of $d=D-3$ dimensions on a torus, the resulting theory is S\^[(3)]{}\[g\_,B\_,,A\_,M\]= d\^3 x g\^ \[R + \_[;]{} \^[;]{} - &&H\_ H\^ -\
&&e\^[-2]{}F\^T\_M\^[-1]{}F\^- Tr(J\^M)\^2\]. Here the symmetric matrix $M$ has the following structure M=( G\^[-1]{} & G\^[-1]{}(B+C) & G\^[-1]{}A (-B+C)G\^[-1]{} & (G-B+C)G\^[-1]{}(G+B+C) & (G-B+C)G\^[-1]{}A A\^[T]{}G\^[-1]{} & A\^[T]{}G\^[-1]{}(G+B+C) & I\_n+A\^[T]{}G\^[-1]{}A ) with block elements defined by &&G=(G\_[pq]{} G\_[p+2,q+2]{}),\
&&B=(B\_[pq]{} B\_[p+2,q+2]{}),\
&&A=(A\^I\_p A\^[(D)I]{}\_[p+2]{}), where $C=\frac{1}{2}AA^{T}$ and $p,q=1,2,...,d$. Matrix $M$ satisfies the $O(d,d+n)$ group relation MLM=L, where L=( O & I\_d & 0 I\_d & 0 & 0 0 & 0 & -I\_n ); thus $M\in O(d,d+n)/O(d)\times O(d+n)$.
The remaining $3$–fields are defined in the following way: for dilaton and metric fields one has &&=-lndetG,\
&&g\_=e\^[-2]{}(G\_-G\_[p+2,]{} G\_[q+2,]{}G\^[pq]{} ). Then, the set of Maxwell strengths $F^{(a)}_{\mu\nu}$ ($a=1,2,...,2d+n$) is constructed on $A^{(a)}_{\mu}$, where &&A\^p\_=G\^[pq]{}G\_[q+2,]{}\
&&A\^[I+2d]{}\_=-A\^[(D)I]{}\_+A\^I\_qA\^q\_,\
&&A\^[p+d]{}\_=B\_[p+2,]{}-B\_[pq]{}A\^q\_+ A\^I\_[p]{}A\^[I+2d]{}\_. Finally, the $3$–dimensional axion $$H_{\mu\nu\rho}=\pa_{\mu}B_{\nu\rho}+2A^a_{\mu}L_{ab}F^b_{\nu\rho}+
\mbox{\rm cycl. perms. of $\mu$, $\nu$, $\rho$}$$ depends on the $3$–dimensional Kalb–Ramond field $$B_{\mu\nu}=B\D_{\mu\nu}-4B_{pq}A^p_{\mu}A^q_{\nu}-
2\left(A^p_{\mu}A^{p+d}_{\nu}-A^p_{\nu}A^{p+d}_{\mu}\right).$$
The dimensionally reduced system (2) admits two simplifications. Namely, in three dimensions, the Kalb–Ramond field $B_{\mu\nu}$ becomes a non–dynamical variable and can be omitted [@s]. Moreover, the fields $A_\mu^a$ can be dualized on–shell as follows e\^[-2]{}MLF\_=E\_\^; so, the final system is defined by the quantities $M$, $\p$ and $\psi$. As it had been established by Sen in [@s], it is possible to introduce the matrix \_S=( M+e\^[2]{}\^T & -e\^[2]{}& ML+(\^[T]{}L) -e\^[2]{}\^T & e\^[2]{} & -e\^[2]{}\^[T]{}L \^TLM+e\^[2]{}\^T(\^[T]{}L) & -e\^[2]{}\^[T]{}L& e\^[-2]{}+\^TLML+e\^[2]{}(\^[T]{}L)\^2 ), in terms of which the action of the system adopts the standard chiral form S\^[(3)]{}\[g\_,\_S\]= d\^3 x g\^ , where $J^{\M_S}=\nabla\M_S\M^{-1}_S$. This matrix is symmetric $\M_S=\M^T_S$ and satisfies the $O(d+1,d+n+1)$–group relation \_SŁ\_S\_S=Ł\_S with Ł\_S=( L & 0 & 0 0 & 0 & 1 0 & 1 & 0 ), so that, $\M_S$ belongs to the coset $O(d+1,d+n+1)/O(d+1)\times O(d+n+1)$.
It is easy to see that the coset $O(d+1,d+n+1)/O(d+1)\times O(d+n+1)$ can be obtained from the coset $O(d,d+n)/O(d)\times O(d+n)$ by the replacement $d\rightarrow d+1$. At the same time, $\M_S$ has a quite different structure in comparison with $M$. Making use of these facts, one can hope that there is another chiral matrix $\M$ possessing the same structure that $M$ with block components $\G$, $\B$ and $\A$ of $(d+1)\times (d+1)$, $(d+1)\times (d+1)$ and $(d+1)\times n$ dimensions, respectively.
In this paper we show that such a matrix can actually be constructed. We establish that its block components allow to define two matrices (“matrix Ernst potentials") which permit to represent the theory under consideration in the Einstein–Maxwell (EM) form. At the end of the paper we study how the $O(d+1,d+n+1)$ group of transformations acts on the matrix Ernst potentials and establish the relations between its subgroups on the base of the discrete strong–weak coupling duality transformations (SWCDT) found in [@s].
Matrix Ernst Potentials
=======================
We start from the consideration of the kinetic term of the matrix $M$ S\^[(3)]{}\[M\]=-d\^3 x g\^ Tr(J\^[M]{})\^2. The Euler–Lagrange equation corresponding to (11) is J\^[M]{}=0. In terms of the block components $G$, $B$ and $A$ it reads &&J\^[G]{}-(J\^B)\^2 +AA\^T G\^[-1]{}=0,\
&&J\^[B]{}-J\^GJ\^B=0,\
&&(G\^[-1]{}A)-G\^[-1]{}J\^BA=0, where &&J\^G=GG\^[-1]{},\
&&J\^B=G\^[-1]{}. Eqs. (13) are the motion equations for the action S\^[(3)]{}\[G,B,A\]=-d\^3 x g\^ Tr{+ A\^TG\^[-1]{}A}, which is equivalent to (11) and can be obtained by straightforward but tedious algebraic calculations. (The coefficient $\frac{1}{4}$ can easily be established by comparison of Eqs. (11) and (15) in the case when $B=A=0$).
One can introduce the matrix variable $X=G+B+\frac{1}{2}AA^{T}$, which was introduced for the first time by Maharana and Schwarz in the case when $A=0$ [@ms]; it defines, together with $A$, the most compact constraint–free representation of the system: S\^[(3)]{}\[X,A\]=-d\^3 x g\^ Tr, where $G=\frac{1}{2}\left(X+X^T-AA^T\right)$. The form of this action is very similar to the stationary Einstein–Maxwell one [@iw]–[@m]. Thus, in string gravity the matrix $X$ formally plays the role of the gravitational potential $\E$, whereas the matrix $A$ corresponds to the electromagnetic potential $\Phi$ of EM theory [@e]. At the same time, one can notice a direct correspondence between the transposition of $X$ and $A$ on the one hand, and the complex conjugation of $\E$ and $\Phi$, on the other. This analogy will be useful to study the symmetry group of string gravity in the last section of the paper.
For the complete theory, i.e., for the theory with nontrivial fields $\p$ and $\psi$, the chiral current $J^M$ is not conserved and one has the equation J\^M+4e\^[-2]{}FF\^TM\^[-1]{}=0 instead of (12). The additional $\p$– and $\psi$–equations of motion are: &&\^2+e\^[2]{}\^TM\^[-1]{}=0,\
&&\_(e\^[-2]{}M\^[-1]{}F\^)=0. They can be derived from the action S\^[(3)]{}\[M,,\]=-d\^3 x g\^ Trby the usual variational procedure.
Our main aim is to represent the action (20) in a form similar to (11). We suppose that it can be done by the $[2(d+1)+n]\times [2(d+1)+n]$ matrix $\M$ defined by the block components $\G$, $\B$ and $\A$ in the same way that the $[2d+n]\times [2d+n]$ matrix $M$ is defined by $G$, $B$ and $A$: =( \^[-1]{} & \^[-1]{}(+) & \^[-1]{} (-+)\^[-1]{} & (-+)\^[-1]{}(++) & (-+)\^[-1]{} \^[T]{}\^[-1]{} & \^[T]{}\^[-1]{}(++) & I\_n+\^[T]{}\^[-1]{}). This matrix also is a symmetric one and satisfies the $O(d+1,d+n+1)$–group relation Ł=Ł, where Ł=( 0 & I\_[d+1]{} & 0 I\_[d+1]{} & 0 & 0 0 & 0 & -I\_n ), and belongs to the coset $O(d+1,d+n+1)/O(d+1)\times O(d+n+1)$.
This hypothesis means that the action (20) can be expressed in the form S\^[(3)]{}\[\]=-d\^3 x g\^ Tr(J\^)\^2 with $J^{\M}=\nabla\M\M^{-1}$; in view of (21), one can rewrite it as S\^[(3)]{}\[,,\]=-d\^3 x g\^ Tr{+ \^T\^[-1]{}}.
In order to establish the explicit form of the matrix $\M$ one can procede as follows. On the one hand, it is useful to represent the column $\psi$ in the form Ł\_S=( u v s ). Then Eq. (25) transforms to S\^[(3)]{}\[G,B,A,,u,v,s\]=-d\^3 x g\^ {()\^2+Tr- e\^[2]{}(u+ (B+C)v+As)\^TG\^[-1]{} (u+ (B+C)v+As)+ v\^TGv+(s-A\^Tv)\^T (s-A\^Tv)}. On the other hand, the parametrization [^1] =( -f+v\^TGv & v\^TG Gv & G ), =( 0 & w\^T -w & B ), =( s\^T A ), with $\tilde w=\tilde u+B\tilde v$, leads to the following expressions for the 1–st and 3–rd terms of Eq. (25) S\^[(3)]{}\[\]=-d\^3 x g\^ Tr(J\^)\^2=-d\^3 x g\^ {- f\^[-1]{}v\^TGv}, S\^[(3)]{}\[\]=-d\^3 x g\^ Tr(A\^T\^[-1]{}A)= -d\^3 x &&g\^ \[Tr(A\^TG\^[-1]{}A)-\
&&f\^[-1]{} (s-A\^Tv)\^T (s-A\^Tv) \]. One can see that Eq. (29) gives the 1–st, 2–nd and 6–th terms of Eq. (27) if f=e\^[-2]{} v=v. On the other hand Eq. (30) is equivalent to the 4–th and 7–th items of Eq. (25) if s=-s+A\^Tv. The second term of Eq. (25) S\^[(3)]{}\[\]=&&d\^3 x g\^ Tr(J\^)\^2=d\^3 x g\^ {Tr(J\^)\^2+f\^[-1]{} .\
&&.\^TG\^[-1]{} } corresponds to the remaining 3–rd and 5–th items of Eq. (27) if u=u+As.
Thus, the block components of the matrix $\M$ are defined by Eqs. (28), (31), (32) and (34). Consequently, the matrices $\G$ and $\B$ are =( -e\^[-2]{}+v\^TGv & v\^TG Gv & G ), =( 0 & w\^T -w & B ), where $w=u+Bv+\frac{1}{2}As$. Finally, for the matrix Ernst potentials $\X$ and $A$ one has =( -e\^[-2]{}+v\^TXv-v\^TAs-s\^Ts & v\^TX+u\^T+s\^TA\^T Xv-u & X ), =( -s\^T+v\^TA A ).
Matrix Ehlers–Harrison Transformations
======================================
In this section we establish the action of the symmetry group $O(d+1,d+n+1)$ on the matrix Ernst potentials $\X$ and $\A$. It is evident that the action S\^[(3)]{}\[g\_,,\]=-d\^3 x g\^ {R-Tr}, where $\G=\frac{1}{2}\left(\X+\X^T-\A\A^T\right)$, is invariant under the “rotation" &&=\_0,\
&&=\_0, where $\H\H^T=I_n$; this map generalizes the duality rotation of the electromagnetic sector in the stationary EM theory [@k]. One can also see that the “scaling" &&=§\^T\_0,\
&&=§\^T\_0§, where $det\S\ne 0$, corresponds to the scale transformation of EM system. The gauge transformation of the potential $\A$ reads &&=\_0,\
&&=\_0+\_1 with $\R^T_1=-\R_1$, whereas for the gauge shift of the potential $\X$ one obtains &&=\_0+\_1,\
&&=\_0-\_1\^T\_0-\_1\^T\_1. These transformations are the matrix analogues of the shifts of the rotational and electromagnetic variables of the stationary EM theory.
In order to find nontrivial transformations one can use SWCDT [@s]. This symmetry transformation \^[-1]{} can be expressed in terms of the matrices $\X$ and $\A$ as follows &&-(+\^T)\^[-1]{},\
&&(+\^T)\^[-1]{}\^T(\^T+\^T)\^[-1]{}.
Using this map it is possible to obtain new transformations from the known ones (38)–(41). However, the scaling matrix subgroups remain invariant ($\H\rightarrow H$ and $\S\rightarrow (\S^T)^{-1}$) under (43). It turns out that the shift subgroups give rise to the actually non–linear transformations &&=\^[-1]{}\_0,\
&&(+\^T)\^[-1]{}=(\_0+\_0\^T\_0)\^[-1]{}+\_2, where $\R^T_2=-\R_2$, and &&=\^[-1]{},\
&&+\^T=\^[-1]{} (\_0+\_0\^T\_0). Formula (44) generalizes the Ehlers transformation [@eh] for the string system, whereas Eq. (45) provides the matrix analogue of the Harrison (“charging") transformation [@k].
At the end of the paper we would like to remark that the relations (38)–(41) and (44)-(45) form the full set of transformations of the $O(d+1,d+n+1)$ group. Actually, the general $O(d+1,d+n+1)$ matrix $\K$, which defines the authomorphism $\M \rightarrow \K^T\M\K$, can be represented in the following form =\_[\_2]{}\_[\_2]{}\_§\_\_[R\_1]{}\_[\_1]{}, where \_[\_2]{}=( I\_[d+1]{} & 0 & 0 K\_[\_2]{} & I\_[d+1]{} & \_2\^T\_2 & 0 & I\_n ), & \_[\_2]{}=( I\_[d+1]{} & 0 & 0 \_2 & I\_[d+1]{} & 0 0 & 0 & I\_n ), \_[§]{}=( (§\^T)\^[-1]{} & 0 & 0 0 & §& 0 0 & 0 & I\_n ), & \_=( I\_[d+1]{} & 0 & 0 0 & I\_[d+1]{} & 0 & 0 & ), \_[\_1]{}=( I\_[d+1]{} & \_1 & 0 0 & I\_[d+1]{} & 0 0 & 0 & I\_n ), & \_[\_1]{}=( I\_[d+1]{} & K\_[\_1]{} & T\_1 0 & I\_[d+1]{} & 0 0 & \^T\_1 & I\_n ). Here $K_{\T_2}=\frac{1}{2}\T_2\T^T_2$ and $K_{\T_1}=\frac{1}{2}\T_1\T^T_1$; moreover, $\left[\K_{\T_2},\K_{\R_2}\right]=\left[\K_{\R_1},\K_{\T_1}\right]=
\left[\K_{\S},\K_{\H}\right]=0$, and under the map (42) one has &&\_[\_1]{}\_[\_2]{},\
&&\_\_,\
&&\_[§]{}\_[(§\^T)\^[-1]{}]{},\
&&\_[\_1]{}\_[\_2]{}, where $\R_1\rightarrow \R_2$ and $\T_1 \rightarrow -\T_2$. Thus, the complete $O(d+1,d+n+1)$ group consists of six subgroups defined by the matrices $\H$, $\S$; $\T_1$, $\T_2$; $\R_1$, $\R_2$. These subgroups are the same as the ones considered above (see Eqs. (38)–(41) and (44)–(45)).
Conclusion and Discussion
=========================
In this paper we study the $O(d+1, d+n+1)$–symmetric low–energy limit of heterotic string theory reduced to three dimensions. It is shown that such a theory can be represented in terms of the $(d+1)\times(d+1)$ matrix $\X$ and $(d+1)\times n$ matrix $\A$. These matrices appear to be the analogues of the gravitational and electromagnetic potentials ($\E$ and $\Phi$, respectively) of the stationary EM theory. The matrices $\G=\frac{1}{2}\left(\X+\X^T-\A\A^T\right)$, $\B=\frac{1}{2}\left(\X-\X^T\right)$ and $\A$ define the chiral matrix $\M\in O(d+1,d+n+1)/O(d+1)\times O(d+n+1)$ of the theory in the same way that matrices $G$, $B$ and $A$ (constructed on the extra components of the metric, Kalb–Ramond and electromagnetic fields, respectively) define the coset matrix $M\in O(d,d+n)/O(d)\times O(d+n)$.
It is established that the $O(d+1,d+n+1)$ symmetry group can be decomposed into six subgroups using the strong–weak coupling duality transformation. It turns out that two subgroups (the rescaling of the potentials $\X$ and $\A$) are invariant under SWCDT. At the same time, the remaining transformations combine into two pairs which map one into another under SWCDT. We show that the gauge shift of $\X$ maps into the matrix Ehlers transformation, whereas the shift of $\A$ maps into the matrix Harrison one.
All subgroups of transformations are written in quasi–Einstein–Maxwell form. This fact underlines the analogy, in the $3$–dimensional case, between the string gravity system with orthogonal symmetry on the one hand and the EM theory on the other.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank our colleagues at DEPNI (NPI) and JINR for their encouraging attitude towards our work, and the ICTP for the hospitality and facilities provided during our stay in Trieste, where the final version of this paper was prepared. One of the authors (A.H.) was supported in part by CONACYT and SEP.
[76]{} J. Maharana and J.H. Schwarz, Nucl. Phys. [**B390**]{} (1993) 3; and references therein. A. Sen, Nucl. Phys. [**B434**]{} (1995) 179. W. Israel and G.A. Wilson, J. Math. Phys. [**13**]{} (1972) 865. P.O. Mazur, Acta Phys. Pol. [**14**]{} (1983) 219. F.J. Ernst, Phys. Rev. [**168**]{} (1968) 1415. O. Kechkin and M. Yurova, “Symplectic Gravity Models in Four, Three and Two Dimensions", report hep-th/9610222. O. Kechkin and M. Yurova, Phys. Rev. [**D54**]{} (1996) 6135. W. Kinnersley, J. Math. Phys. [**18**]{} (1977) 529; and references therein. J. Ehlers, in “Les Theories de la Gravitation” (CNRS, Paris, 1959).
[^1]: The parametrization of the matrices $G$ and $B$ is written using the analogy between the theory under consideration and the theories with symplectic symmetry [@ky1]– [@ky2].
---
abstract: 'Quadratic eigenvalue problems (QEP) and more generally polynomial eigenvalue problems (PEP) are among the most common types of nonlinear eigenvalue problems. Both problems, especially the QEP, have extensive applications. A typical approach to solve QEP and PEP is to use a linearization method to reformulate the problem as a higher dimensional linear eigenvalue problem. In this article, we use homotopy continuation to solve these nonlinear eigenvalue problems without passing to higher dimensions. Our main contribution is to show that our method produces substantially more accurate results, and finds all eigenvalues with a certificate of correctness via Smale’s $\alpha$-theory. To explain the superior accuracy, we show that the nonlinear eigenvalue problem we solve is better conditioned than its reformulated linear eigenvalue problem, and our homotopy continuation algorithm is more stable than QZ algorithm — theoretical findings that are borne out by our numerical experiments. Our studies provide yet another illustration of the dictum in numerical analysis that, for reasons of conditioning and stability, it is sometimes better to solve a nonlinear problem directly even when it could be transformed into a linear problem with the same solution mathematically.'
address:
- 'Department of Statistics, University of Chicago, Chicago, IL'
- 'Computational and Applied Mathematics Initiative, Department of Statistics, University of Chicago, Chicago, IL'
- 'Computational and Applied Mathematics Initiative, Department of Statistics, University of Chicago, Chicago, IL'
author:
- Yiling You
- Jose Israel Rodriguez
- 'Lek-Heng Lim'
bibliography:
- 'BIB\_AccuratePEP.bib'
title: Accurate Solutions of Polynomial Eigenvalue Problems
---
Introduction {#Intro}
============
The study of polynomial eigenvalue problems (PEPs) is an important topic in numerical linear algebra. Such problems arise in partial differential equations and in various scientific and engineering applications (more on these below). A special case of PEP that has been extensively and thoroughly studied is the *quadratic eigenvalue problem* (QEP), often formulated as below, where we follow the notations in [@QEPsurvey].
Let $M,C,K \in \mathbb{C}^{n\times n}$. The QEP corresponding to these matrices is to determine all solutions $(x,\lambda)$ to the following equation $$\label{QEP_def}
Q(\lambda)x=0\quad \text{ where }\quad Q(\lambda):=\lambda^2 M+\lambda C+K.$$ The dimension of the QEP is $n$. We call a solution $(x,\lambda)$ an eigenpair, $\lambda$ an eigenvalue, and $x$ an eigenvector.
More generally, we may consider *polynomial eigenvalue problems* (PEP), of which the QEP is the special case when $m = 2$.
\[problem:pep\] Let $A_0,A_1,\dots,A_m \in \mathbb{C}^{n\times n}$. The PEP corresponding to these matrices is to determine all solutions $(x,\lambda)$ to the equation $$\label{eq:pep_def}
P(\lambda)x=0\quad\text{ where }\quad P(\lambda):=\lambda^mA_m+\lambda^{m-1}A_{m-1}+\cdots+A_0.$$ Here $m$ is the degree of the PEP and again the solutions are called eigenpairs. A general PEP has $mn$ eigenpairs (we use ‘general’ in the sense of algebraic geometry; those unfamiliar with this usage may think of it as ‘random’).
Well-known examples where such problems arise include the acoustic wave problem [@MIMS_ep2011.116; @acoustic_wave], which gives a QEP, and the planar waveguide problem [@MIMS_ep2011.116; @stowell2010guided], which gives a PEP of higher degree; they are discussed in \[ss:QEPEx\] and \[ss:PEPEx\] respectively. The encyclopedic survey [@QEPsurvey] contains many more examples of QEPs.
There are several methods for solving the QEP, including iterative methods such as the Arnoldi method [@10.2307/43633863] and the Jacobi–Davidson method [@Jacobi-Davison1; @Jacobi-Davison2], as well as the linearization method. However, the existing methods invariably suffer from some form of inadequacy: Either (i) they do not apply to PEPs of arbitrary degree, or (ii) they require matrices with special structures, or (iii) they only find the largest or smallest eigenvalues. Essentially the only existing method that potentially avoids these inadequacies is the linearization method, as it is based on a reduction to a usual (linear) eigenvalue problem. With this consideration, the linearization method forms the basis for comparison with our proposed method, which does not suffer from any of the aforementioned inadequacies. We test our method with the software <span style="font-variant:small-caps;">Bertini</span> [@bertinibook] and compare the numerical results with those obtained using the linearization method described in \[ss:linearization\].
Our main contribution is to propose the use of homotopy method to directly solve a nonlinear problem and find *all* eigenpairs of a PEP or QEP, suitably adapted to take advantage of the special structures of these problems. Since this article is written primarily for numerical analysts, we would like to highlight that the study of *homotopy method for solving a system of multivariate polynomial equations* (as opposed to systems involving non-algebraic or transcendental functions) has undergone enormous progress within the past decade — both theoretically, with the import of powerful results from complex algebraic geometry, and practically, with the development of new software packages implementing greatly improved algorithms. Some of the recent milestones include the resolution of Smale’s 17th Problem [@SIAMNews] and finding all tens of millions of solutions of kinematic problems in biologically-inspired linkage design [@bioLinkage]. Homotopy method has been used for symmetric EVPs and GEPs in the traditional, non-algebraic manner; for instance, [@Lui-Golub; @Zhang-Law-Golub] rely on Rayleigh quotient iterations rather than Newton’s method as the corrector method and their results are not certified. Fortunately for us, PEP and QEP fall in this realm of algebraic problems — they are special systems of multivariate polynomial equations. In fact, we will see that when formulated as such a system of polynomial equations, the PEP is better conditioned than its alternative formulation as a generalized eigenvalue problem obtained using linearization. Our numerical experiments also show that as the dimension $n$ crosses a threshold of around $20$, homotopy method begins to significantly outperform linearization method in terms of the normwise backward errors. Moreover, our homotopy method approach is not limited to a specific class of matrices but applies robustly to a wide range of matrices — dense, sparse, badly scaled, etc. In addition, we will see that our homotopy method approach is naturally suited to parallelization. Perhaps most importantly, a unique feature of our approach is that our outputs are certifiable using Smale’s $\alpha$-theory.
Linearization method {#ss:linearization}
====================
A popular approach to solve QEP and PEP is the *linearization method*. The goal of which is to transform a PEP into a generalized eigenvalue problem (GEP) [@QEPsurvey] involving an equivalent linear $\lambda$-matrix $A-\lambda B$. A $2n \times 2n$ linear $\lambda$-matrix is a *linearization* of $Q(\lambda)$ [@gohberg2005matrix; @lancaster1985theory] if $$\begin{bmatrix}
Q(\lambda) & 0\\
0 & I_n\\
\end{bmatrix}
=E(\lambda)(A-\lambda B)F(\lambda)$$ where $E(\lambda)$ and $F(\lambda)$ are $2n\times 2n$ $\lambda$-matrices with constant nonzero determinants. The eigenvalues of the quadratic $\lambda$-matrix $Q(\lambda)$ and the linear $\lambda$-matrix $A-\lambda B$ coincide.
Linearizations of PEP are not unique, but two linearizations commonly used in practice are the *first* and *second companion forms*: $$\label{L1andL2}
\text{L1:} \quad
\begin{bmatrix}
0 & N\\
-K & -C\\
\end{bmatrix}
-\lambda
\begin{bmatrix}
N & 0\\
0 & M\\
\end{bmatrix},
\qquad
\text{L2:} \quad
\begin{bmatrix}
-K & 0\\
0 & N\\
\end{bmatrix}
-\lambda
\begin{bmatrix}
C & M\\
N & 0\\
\end{bmatrix},$$ where $N$ can be any nonsingular $n\times n$ matrix. The choice between the two companion forms \[L1andL2\] usually depends on the nonsingularity of $M$ and $K$ [@afolabi1987linearization].
More generally, a PEP \[eq:pep\_def\] can also be transformed into a GEP of dimension $mn$. The most common linearization is called the *companion linearization* where $A$ and $B$ are: $$\label{eq:gep}
A=
\begin{bmatrix}
A_0 & & & & \\
& I & & & \\
& & \ddots & & \\
& & & I & \\
& & & & I
\end{bmatrix},\qquad
B=
\begin{bmatrix}
-A_1 & -A_2 & \cdots & \cdots & -A_m\\
I & 0 & \cdots & \cdots & 0\\
0 & I & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & I& 0
\end{bmatrix}.$$ We will call such a GEP corresponding to a PEP its *companion GEP*. One reason for the popularity of the companion linearization is that eigenvectors of PEP can be directly recovered from eigenvectors of this linearization (see [@doi:10.1137/050628283] for details).
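As an illustration of the companion linearization \[eq:gep\], the following Python sketch builds the pencil $(A,B)$ and solves the resulting GEP $Av=\lambda Bv$ with a dense QZ-based eigensolver. Recovering the eigenvector of the PEP from the leading $n$ components of $v$ is one possible choice, and the sketch assumes there are no infinite eigenvalues.

```python
import numpy as np
from scipy.linalg import eig

def companion_gep(coeffs):
    """Companion pencil (A, B) of the PEP P(lam) = sum_k lam^k coeffs[k]."""
    m, n = len(coeffs) - 1, coeffs[0].shape[0]
    A = np.eye(m * n, dtype=complex)
    A[:n, :n] = coeffs[0]
    B = np.zeros((m * n, m * n), dtype=complex)
    for k in range(1, m + 1):
        B[:n, (k - 1) * n:k * n] = -coeffs[k]   # first block row: -A_1, ..., -A_m
    B[n:, :-n] = np.eye((m - 1) * n)            # subdiagonal identity blocks
    return A, B

def polyeig_by_linearization(coeffs):
    """All eigenpairs of the PEP via the companion GEP A v = lam B v."""
    A, B = companion_gep(coeffs)
    n = coeffs[0].shape[0]
    lam, V = eig(A, B)
    X = V[:n, :]                                # v = [x, lam x, ..., lam^(m-1) x]
    return lam, X / np.linalg.norm(X, axis=0)
```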
Stability {#ss:Accuracy}
=========
To assess the stability and quality of the numerical methods in this article, we use the backward error for PEP as defined and discussed in [@tisseur2000backward]. Since we do not run into zero or infinite eigenvalues in all the numerical experiments and application problems in this article, our backward errors are always well-defined. For an approximate eigenpair $(\widetilde{x},\widetilde{\lambda})$ of $Q(\lambda)$, the *normwise backward error* is defined as $$\eta(\widetilde{x},\widetilde{\lambda}) := \min \big\{ \epsilon :\bigl(Q(\widetilde{\lambda})+\Delta Q(\widetilde{\lambda})\bigr)\widetilde{x}=0, \;
\| \Delta M\| \leq \epsilon \alpha_2 ,\; \| \Delta C\| \leq \epsilon \alpha_1,\; \| \Delta K\| \leq \epsilon \alpha_0\bigr\},$$ where $\Delta Q(\lambda)$ denotes the perturbation $$\Delta Q(\lambda) = \lambda^2 \Delta M+\lambda \Delta C +\Delta K,$$ and $\alpha_k$’s are nonnegative parameters that allow freedom in how perturbations are measure, e.g., in an absolute sense ($\alpha_k\equiv 1$) or a relative sense ($\alpha_2 = \|M \|$, $\alpha_1 = \|C \|$, $\alpha_0 = \|K \|$). In [@tisseur2000backward], the backward error is shown to be equal to the following scaled residual, $$\label{scaled residual}
\eta(\widetilde{x},\widetilde{\lambda})=
\frac{\bigl\| Q(\widetilde{\lambda})\widetilde{x}\bigr\|}{\bigl(|\widetilde{\lambda}|^2 \alpha_2+|\widetilde{\lambda}| \alpha_1+\alpha_0\bigr) \|\widetilde{x} \|},$$ which has the advantage of being readily computable.
We adopt the usual convention and say that a numerical method is *numerically stable* if all of the computed eigenpairs have backward errors of the same order as the unit roundoff. It is known that a numerically stable reduction of a GEP may be obtained by computing the *generalized Schur decomposition*: $$\label{generalized Schur decomposition}
W^*AZ=S, \qquad W^*BZ=T,$$ where $W$ and $Z$ are unitary and $S$ and $T$ are upper triangular. Then $\lambda_i={s_{ii}/t_{ii}}, i=1,\dots, 2n$, with the convention that $s_{ii}/t_{ii}=\infty$ whenever $t_{ii}=0$.
The QZ algorithm [@moler1973algorithm; @golub2012matrix] is numerically stable for computing the decomposition \[generalized Schur decomposition\] and solving the GEP, but it is not stable for the solution of the QEP. To be more precise, one may solve a QEP via the linearization followed by the QZ algorithm applied to the resulting GEP. An approximate eigenvector $\widetilde{x}$ of the QEP can then be recovered from either the first $n$ components or the last $n$ components of the approximate eigenvector of the GEP computed by the QZ algorithm, a $2n$-vector $\widetilde{\xi}^\tp=(\widetilde{x}^\tp_1,\widetilde{x}^\tp_2)$, whichever yields the smaller backward error \[scaled residual\]. Nevertheless, this method is in general unstable [@tisseur2000backward]. Even though its backward stability is not guaranteed, it is still the method-of-choice when all eigenpairs are desired and if the coefficient matrices have no special structure and are not too large.
The discussions above regarding QEP extends to PEP. For an approximate eigenpair $(\widetilde{x},\widetilde{\lambda})$ of $P(\lambda)$, the normwise backward error is defined in [@tisseur2000backward] as $$\eta(\widetilde{x},\widetilde{\lambda}) := \min \bigl\{ \epsilon :\bigl(P(\widetilde{\lambda})+\Delta P(\widetilde{\lambda})\bigr)\widetilde{x}=0,\;
\| \Delta A_k\| \leq \epsilon \| E_k\|,\; k=0,\dots, m \bigr\}$$ where $\Delta P(\lambda)$ denotes the perturbation $$\Delta P(\lambda)=\lambda^m \Delta A_m+\lambda^{m-1} \Delta A_{m-1}+\cdots+\Delta A_0$$ and the matrices $E_k, k= 0,\dots,m$ are arbitrary and represent tolerances against which the perturbations $\Delta A_k$ to $A_k$ will be measured. As in the case of QEP, the normwise backward error of PEP may be computed [@tisseur2000backward] via the following expression $$\label{PEP scaled residual}
\eta(\widetilde{x},\widetilde{\lambda})=\frac{\bigl\| P(\widetilde{\lambda})\widetilde{x}\bigr\|}{\bigl( \|E_0\| + |\widetilde{\lambda}| \|E_1\| + |\widetilde{\lambda}|^2 \|E_2\| \dots + |\widetilde{\lambda}|^m \|E_m\| \bigr) \|\widetilde{x} \|}.$$
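The scaled residual \[PEP scaled residual\] is readily computable; a minimal sketch is given below, where we take the tolerances $\|E_k\|=\|A_k\|$ (perturbations measured in a relative sense) and the vector and matrix $2$-norms. Both of these choices are ours and can be replaced as needed.

```python
import numpy as np

def pep_backward_error(coeffs, lam, x, tol_norms=None):
    """Normwise backward error of an approximate eigenpair (x, lam) of the PEP
    with coefficients coeffs = [A_0, ..., A_m], cf. the scaled residual above."""
    if tol_norms is None:                       # relative sense: ||E_k|| = ||A_k||
        tol_norms = [np.linalg.norm(Ak, 2) for Ak in coeffs]
    residual = sum(lam**k * Ak for k, Ak in enumerate(coeffs)) @ x
    scale = sum(abs(lam)**k * ek for k, ek in enumerate(tol_norms))
    return np.linalg.norm(residual, 2) / (scale * np.linalg.norm(x, 2))
```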
Conditioning {#ss:Conditioning}
============
We will see examples in \[sec:cond\] where a PEP as formulated in \[eq:pep\_def\] is far better conditioned than its companion GEP in \[eq:gep\] obtained using linearization. Thus for accurate solutions, it is better to solve the nonlinear PEP problem directly rather than to first transform it into a mathematically equivalent GEP. As an analogy, this is much like solving a least squares problem directly versus solving its normal equations.
To characterize the sensitivity of solutions to these problems, we will describe their condition numbers in this section. As previously mentioned, in all of our numerical experiments and applications, we will not encounter zero or infinite eigenvalues. So in principle we just need a notion of condition number [@tisseur2000backward] that is based on the nonhomogeneous matrix polynomial $P(\lambda)$ in \[eq:pep\_def\]. However, since the <span style="font-variant:small-caps;">Matlab</span> `polyeig` function that we use for comparison (see \[sec:polyeig\]) implements a more general notion of condition number [@DEDIEU200371; @doi:10.1137/050628283] based on the homogenized version (permitting zero and infinite eigenvalues), we will briefly review the homogenized eigenvalue problem and define its condition number accordingly.
We rewrite the polynomial matrix $P(\lambda)$ in \[eq:pep\_def\] in homogeneous form $$\label{eq:HPEP}
P(\lambda_0,\lambda_1) = \sum_{i=0}^{m} \lambda_0^i \lambda_1^{m-i} A_i$$ and consider eigenvalues as pairs $(\lambda_0,\lambda_1) \neq (0,0)$ that are solutions of the equation $\det P(\lambda_0,\lambda_1)=0$. Let $T_{(\lambda_0,\lambda_1)}\mathbb{P}_1$ denote the tangent space at $(\lambda_0,\lambda_1)$ to $\mathbb{P}_1$, the projective space of lines through the origin in $\mathbb{C}^2$. In [@DEDIEU200371], a condition operator $K(\lambda_0,\lambda_1):(\mathbb{C}^{n \times n})^{m+1} \to T_{(\lambda_0,\lambda_1)}\mathbb{P}_1$ of the eigenvalue $(\lambda_0,\lambda_1)$ is defined as the differential of the map from the $(m+1)$-tuple $(A_0,\dots,A_m)$ to $(\lambda_0,\lambda_1)$ in projective space. If we write a representative of an eigenvalue $(\lambda_0,\lambda_1)$ as a row vector $[\lambda_0,\lambda_1] \in \mathbb{C}^{1 \times 2}$, the condition number $\kappa_P (\lambda_0,\lambda_1)$ can be defined as a norm of the condition operator [@doi:10.1137/050628283], $$\kappa_P(\lambda_0,\lambda_1) := \max_{\| \Delta A\|\leq 1}\frac{\|K(\lambda_0,\lambda_1)\Delta A\|_2}{\|[\lambda_0,\lambda_1]\|_2},$$ for any arbitrary norm on $\Delta A$. We will choose the norm on $(\mathbb{C}^{n \times n})^{m+1}$ to be the $\omega$-weighted Frobenius norm $$\|A\| :=\|(A_0,\dots,A_m)\|=\bigl\|[\omega_0^{-1}A_0,\dots,\omega_m^{-1}A_m]\bigr\|_F,$$ with weights $\omega_i > 0$, $i =0,\dots,m$. If we define the operators $\partial_{\lambda_0}:= \partial/\partial \lambda_0$ and $\partial_{\lambda_1}:= \partial/\partial \lambda_1$, then the normwise condition number $\kappa_P(\lambda_0,\lambda_1)$ of a simple eigenvalue $(\lambda_0,\lambda_1)$ is given by $$\kappa_P(\lambda_0,\lambda_1) = \left( \sum_{i=0}^m |\lambda_0|^{2i}|\lambda_1|^{2(m-i)}\omega_i^2 \right)^{1/2} \frac{\|y\|_2\|x\|_2}{|y^*(\overline{\lambda}_1\partial_{\lambda_0}P-\overline{\lambda}_0\partial_{\lambda_1}P)|_{(\lambda_0,\lambda_1)}x|},$$ where $x,y$ are the corresponding right and left eigenvectors, respectively [@doi:10.1137/050628283].
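For reference, the formula above can be evaluated directly once the left and right eigenvectors are available. In the sketch below, the default weights $\omega_i=\|A_i\|_F$ are an assumption on our part; any positive weights may be supplied instead.

```python
import numpy as np

def pep_eigenvalue_condition(coeffs, lam0, lam1, x, y, weights=None):
    """kappa_P(lam0, lam1) for a simple eigenvalue of P(l0, l1) = sum l0^i l1^(m-i) A_i,
    with right/left eigenvectors x, y and weights omega_i."""
    m = len(coeffs) - 1
    if weights is None:                       # assumed default: omega_i = ||A_i||_F
        weights = [np.linalg.norm(Ai, 'fro') for Ai in coeffs]
    dP0 = sum(i * lam0**(i - 1) * lam1**(m - i) * Ai
              for i, Ai in enumerate(coeffs) if i > 0)
    dP1 = sum((m - i) * lam0**i * lam1**(m - i - 1) * Ai
              for i, Ai in enumerate(coeffs) if i < m)
    w = np.sqrt(sum(abs(lam0)**(2 * i) * abs(lam1)**(2 * (m - i)) * weights[i]**2
                    for i in range(m + 1)))
    denom = abs(np.vdot(y, (np.conj(lam1) * dP0 - np.conj(lam0) * dP1) @ x))
    return w * np.linalg.norm(y) * np.linalg.norm(x) / denom
```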
Homogenization allows one to better handle eigenvalues at infinity. In general, the characteristic polynomial is the determinant of the matrix polynomial and takes the form $\det (A_m)\lambda_0^{mn}+\cdots+\det (A_0)\lambda_1^{mn}$. Therefore, $P(\lambda_0,\lambda_1)$ has $mn$ finite eigenvalues when the matrix $A_m$ is nonsingular. However, when $\det(A_m)=0$, the characteristic polynomial has degree $r < mn$ and there are $r$ finite eigenvalues and $mn-r$ infinite eigenvalues. Those infinite eigenvalues correspond to $\lambda_1=0$. None of the numerical experiments we consider in this article has eigenvalues at infinity.
Certification {#sec:cert}
=============
A major advantage of our homotopy method approach for solving the PEP is that we may use Smale’s $\alpha$-theory (also known as Shub–Smale’s $\alpha$-theory, see [@BCSS1998 Chapter 8]) to certify that the Newton iterations will converge quadratically to an eigenpair. In numerical analysis lingo, this means we can control the *forward error*, not just the backward error.
To apply Smale’s $\alpha$-theory, we view the PEP as a collection of $n$ polynomial equations \[eq:pep\_def\] and an affine linear constraint $L(x)=0$, which yields a polynomial map $f = (f_0,\dots,f_n):\mathbb{C}^{n+1}\to\mathbb{C}^{n+1}$. The affine linear polynomial $L(x)$ is chosen randomly as described in the numerical experiments \[Numerical results\].
Let $\mathcal{V}(f):=\left\{
\zeta\in\mathbb{C}^{n+1} : f(\zeta)=0
\right\}$ and let $Df(z)$ be the Jacobian matrix of the system $f$ at $z=(x,\lambda)$. Consider the map $N_f:\mathbb{C}^{n+1}\to\mathbb{C}^{n+1}$ defined by $$N_f(z):=\begin{cases}
z-Df\left(z\right)^{-1}f\left(z\right) & \text{if }Df\left(z\right)\text{ is invertible,}\\
z & \text{otherwise}.
\end{cases}$$ We say the point $N_f(z)$ is the *Newton iteration of $f$ starting at $z$*. The $k$th Newton iteration of $f$ starting at $x$ is denoted by $N_f^{k}(z)$. Now we define precisely what we mean by an approximate solution to $f$.
\[def:approxSol\]*[@BCSS1998 p. 155]* With the notation above, a point $z\in\mathbb{C}^{n+1}$ is an *approximate solution to $f$ with *associated solution* $\zeta\in\mathcal{V}(f)$*, if for every $k\in\mathbb{N}$, $$\left\Vert N_{f}^{k}\left(z\right)-\zeta\right\Vert \leq\left(\frac{1}{2}\right)^{2^{k}-1}\left\Vert z-\zeta\right\Vert,$$ where the norm is the $2$-norm $\lVert z
\rVert=\bigl(
|z_1|^2+\cdots+|z_{n+1}|^2
\bigr)^{1/2}$.
Smale’s $\alpha$-theory gives a condition for when a given point $z$ is an approximate solution to $f=0$ using the following constants when $Df(z)$ is invertible: $$\begin{aligned}
\alpha(f,z) &:=\beta(f,z)\gamma(f,z),\\
\beta(f,z) &:=\left\Vert z-N_f(z) \right\Vert=
\left\Vert
Df(z)^{-1}f(z)
\right\Vert,\\
\gamma(f,z) &:=
\sup_{k\geq 2}\left\Vert
\frac{Df(z)^{-1}D^kf(z)}{k!}
\right\Vert^{1/(k-1)}.\end{aligned}$$ The following theorem from [@HS12] is a version of Theorem 2 in [@BCSS1998 p. 160] and it provides a certificate that a point $z$ is an approximate solution to $f=0$.
\[smale\] If $f:\mathbb{C}^{n+1}\to\mathbb{C}^{n+1}$ is a polynomial system and $z\in\mathbb{C}^{n+1}$, with $$\alpha(f,z)<\frac{13-3\sqrt{17}}{4}\approx 0.157671,$$ then $z$ is an approximate solution to $f=0$.
The quantity $\gamma(f,z)$ is difficult to compute in general. In [@HS12], this quantity is bounded in terms of an alternative that is more readily computable. Define $$\label{eq:mu}
\mu(f,z):=
\max\left\{
1,\Vert f\Vert\cdot \Vert Df(z)^{-1}
\Delta_{(d)}(z)\Vert
\right\},$$ where $\Delta_{(d)}(z)$ is an $(n+1)\times(n+1)$ diagonal matrix with $i$th diagonal entry $d_i^{1/2}(1+\Vert z\Vert^2)^{(d_i-1)/2}$ and $d_i := \operatorname{deg}(f_i)$. For a polynomial $g=\sum_{|\nu|\leq d}a_\nu z^\nu$, we define the norm $\Vert g\Vert$ according to [@HS12], $$\label{eq:norm1}
\Vert g\Vert^2:=
\sum_{|\nu|\leq d} |a_\nu|^2\frac{\nu!(d-|\nu|)!}{d!}.$$ This can be extended to a polynomial system $f$ via $$\label{eq:norm2}
\Vert f\Vert^2:=\sum_{i=0}^{n}\Vert f_i\Vert^2$$ where each $ \Vert f_i\Vert^2$ is as defined in \[eq:norm1\].
With the notations above, [@HS12] gives the following bound: $$\label{eq:gammaBound}
\gamma(f,z)\leq \frac{\mu(f,z)d_{\max}^{3/2}}{2(1+\Vert z\Vert^2)^{1/2}},$$ where $d_{\max}$ is the maximal degree of a polynomial in the system.
When $f$ comes from a PEP, \[pep\_certification\] follows from \[eq:mu\]–\[eq:norm2\] and a straightforward calculation of $Df(z)^{-1}\Delta_{(d)}(z)$. This yields the value of $\mu(f,z)$ and thus a bound for $\gamma(f,z)$ for the PEP. We will rely on our bound in the acoustic wave problem \[Certification\_APP1\] to certify the eigenvalues of the PEP therein.
\[pep\_certification\] Let $f=0$ denote the PEP in \[problem:pep\], and let $L$ denote the affine constraint on $x$. Then, $$\label{mu_formula}
\mu(f,x,\lambda) =
\left[\Vert L\Vert^2+\sum_{k=0}^m \frac{k!(m-k)!}{(m+1)!}\|A_k\|_F^2\right]^{1/2}
\left\|
\begin{bmatrix}
\frac{P(\lambda)}{\sqrt{m+1}(1+\|[x^\tp,\lambda]\|^2)^{m/2}} & P'(\lambda)x \\
\frac{(\nabla_x L)^\tp}{\sqrt{m+1}(1+\|[x^\tp,\lambda]\|^2)^{m/2}} & 0
\end{bmatrix}
^{-1}\right\|.$$
Homotopy method for the polynomial eigenvalue problem {#ss:homotopymethod}
=====================================================
We now describe our approach of using the *homotopy continuation method*, also called the *homotopy method*, to solve PEPs. Our goal is to find all eigenpairs.
Homotopy method deforms solutions of a *start system* $S(z)=0$ that is easy to solve to solutions of a *target system* $T(z)=0$ that is of interest. More precisely, a *straight-line homotopy* with *path parameter* $t$ is defined as $$\label{eq:H}
H(z,t):=(1-t)S(z)+tT(z), \quad t\in[0,1].$$ When $t=0$ or $t=1$, the system $H(z,t) = 0$ gives the start system $H(z,0)=S(z)=0$ or the target system $H(z,1)=T(z)=0$ respectively.
A start system for the homotopy $\eqref{eq:H}$ is said to be *chosen correctly* if the following properties [@li_1997] hold:
1.  the solutions of the start system $S(z)=0$ are known or easy to obtain;
2. the solution set of $H(z,t)=0$ for $0 \leq t < 1$ consists of a finite number of smooth paths, each parametrized by $t$ in $[0,1)$;
3.  for each isolated solution of the target system $T(z)=0$, there is some path originating at $t=0$, that is, at a solution of the start system $S(z)=0$.
For an example of homotopy familiar to numerical linear algebraists, consider $P(\lambda) = I - \lambda A$, i.e., $m=1$, $A_0 = I$, and $A_1 = -A$. Let $D$ be the diagonal matrix whose diagonal entries are the diagonal entries of $A$. The proof of the strengthened Gershgorin Circle Theorem [@varga] exactly illustrates a straight-line homotopy $H(t)=(1-t)D+tA$, $t\in[0,1]$. Note, however, that such a $D$ would be a poor choice for us, as the resulting start system is not guaranteed to be chosen correctly: the solution set of the corresponding homotopy $H(z,t)=0$ for $0 \leq t < 1$ need not consist of a finite number of smooth paths.
In this article, we consider a target system to solve the PEP in given by $$\label{eq:targetpep}
T(z)=\begin{bmatrix}
P(\lambda)x\\
L(x)
\end{bmatrix}$$ where $z=(x,\lambda)$ and $L(x)$ is a general affine linear polynomial, chosen randomly so that we have a polynomial system $T : \mathbb{C}^{n+1} \to \mathbb{C}^{n+1}$ as defined in \[sec:cert\]. The requirement that $L(x) = 0$ also fixes the scaling indeterminacy in the polynomial eigenvector $x$. In more geometric language, $x$ is a point in projective space and the random choice of $L(x)$ restricts this space to a general affine chart so that the eigenvectors are not at infinity. If one instead had chosen $L(x)=x_1-1$, then eigenvectors with a first coordinate equal to zero would not be solutions to the system.
There is an obvious choice of start system — we choose random diagonal matrices $D_i$’s to replace the coefficient matrices $A_i$’s in $P(\lambda)$: $$\label{eq:startpep}
S(z)=\begin{bmatrix}
(\lambda^mD_m+\lambda^{m-1}D_{m-1}+\cdots+
D_0)x\\
L(x)
\end{bmatrix}$$
One observes that $S(x,\lambda)=0$ is a polynomial system with linear products of $(x,\lambda)$. Specifically, let $d_{j,i}$ denote the $i$th diagonal entry of $D_j$ and let $x_i$ be the $i$th entry of $x$. Then we may factor the univariate polynomial $$\bigl(\lambda^m d_{m,i}+\lambda^{m-1} d_{m-1,i}+\dots+d_{0,i}\bigr)x_i=d_{m,i}(\lambda-r_{m,i})( \lambda-r_{m-1,i}) \cdots ( \lambda-r_{1,i})x_i,$$ where $ r_{j,i}\in \mathbb{C}$, $j =1,\dots, m$, are the roots of the respective monic polynomial (obtained via, say, the Schur decomposition of its companion matrix). The solutions to $S(z)=0$ are then simply $$\lambda = r_{j,i}, \qquad
x_k = 0 \; \text{for all}\; k \neq i, \qquad
L(0,\dots,0,x_i,0,\dots,0)=0,$$ for every $j = 1,2,\dots,m$ and $i=1,2,\dots,n$.
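The following MATLAB sketch makes this construction explicit for a random diagonal start system; the affine constraint is written here as $L(x)=c^{\mathsf T}x+d$, which is one convenient parametrization (the method only requires $L$ to be a general affine linear polynomial).

```matlab
% Enumerate the m*n start solutions of S(z) = 0 for random diagonal D_0,...,D_m.
n = 5;  m = 2;
D = arrayfun(@(j) diag(randn(n,1)+1i*randn(n,1)), 0:m, 'UniformOutput', false);
c = randn(n,1) + 1i*randn(n,1);  d = randn + 1i*randn;      % L(x) = c.'*x + d
start = {};
for i = 1:n
    coeffs = zeros(1, m+1);
    for j = 0:m, coeffs(m+1-j) = D{j+1}(i,i); end            % descending powers for roots()
    lam = roots(coeffs);                                     % the m roots r_{j,i}
    xi  = -d / c(i);                                         % solves L(0,...,x_i,...,0) = 0
    for j = 1:m
        x = zeros(n,1);  x(i) = xi;
        start{end+1} = struct('x', x, 'lambda', lam(j));
    end
end
```

Each of the $mn$ pairs collected in `start` is then handed to the path tracker as the value of the homotopy at $t=0$.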
\[null-methodn=2\] Let $m=n=2$. $$\begin{aligned}
\nonumber S(x,\lambda)&=\Bigg(\lambda_1^2
\begin{bmatrix}
m_{11} & 0\\
0 & m_{22}\\
\end{bmatrix}
+\lambda_0\lambda_1
\begin{bmatrix}
c_{11} & 0\\
0 & c_{22}\\
\end{bmatrix}
+\lambda_0^2
\begin{bmatrix}
k_{11} & 0\\
0 & k_{22}\\
\end{bmatrix}
\Bigg)
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}\\
\nonumber &=
\begin{bmatrix}
(\lambda_1^2 m_{11}+\lambda_0\lambda_1 c_{11}+\lambda_0^2 k_{11}) & 0\\
0& (\lambda_1^2 m_{22}+\lambda_0\lambda_1 c_{22}+\lambda_0^2 k_{22}) \\
\end{bmatrix}
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}
\\
\nonumber &=
\begin{bmatrix}
d_1(\lambda_1-\lambda_0 \alpha_1)(\lambda_1-\lambda_0 \alpha_2) & 0\\
0 & d_2(\lambda_1-\lambda_0 \beta_1)(\lambda_1-\lambda_0 \beta_2)
\end{bmatrix}
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}\end{aligned}$$ The polynomials $(\lambda_1^2 m_{ii}+\lambda_0\lambda_1 c_{ii}+\lambda_0^2 k_{ii})$, for $i=1,2$ can be factored over the complex numbers into linear products. The total degree of each equation is $m+1$, but the equations are linear in $x$ and degree $m$ in $\lambda$.
From the known eigenpairs at $t=t_0$, solutions at $t=t_0+\Delta t$ can be obtained by iterative methods [@allgower2012numerical; @allgower1993continuation]. This process is called path tracking. These steps are repeated until $t$ reaches $1$, or until $t$ is sufficiently close to $1$ by some criterion. The output is regarded as an approximation of a solution to the PEP.
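A minimal predictor/corrector step is sketched below. This is only a schematic version of path tracking (Bertini additionally uses adaptive step lengths, adaptive precision and endgames), and the struct interface with function handles `f` and `df` for the start and target systems is merely a convention of this sketch.

```matlab
% One Euler-predictor / Newton-corrector step for H(z,t) = (1-t)*S(z) + t*T(z).
function z = track_step(S, T, z, t, dt)
H  = @(z,t) (1-t)*S.f(z)  + t*T.f(z);      % homotopy residual
Hz = @(z,t) (1-t)*S.df(z) + t*T.df(z);     % Jacobian with respect to z
Ht = @(z)   T.f(z) - S.f(z);               % partial derivative with respect to t
z = z - Hz(z,t) \ (Ht(z) * dt);            % Euler predictor along the path
for it = 1:3                               % a few Newton corrector iterations at t + dt
    z = z - Hz(z,t+dt) \ H(z,t+dt);
end
end
```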
The path tracking becomes numerically unstable as we get close to the space of ill-posed systems, also called the *discriminant locus*. In other words, this occurs when the Jacobian of $H(z,t_0)$ with respect to $z$ has a large condition number. Using randomization and adaptive precision, we are able to avoid these ill-conditioned problems in the middle of path tracking. This is because the discriminant locus has real codimension two in the space of coefficients, whereas the homotopy is over a real one-dimensional path.
A QEP may be expressed as a univariate polynomial root finding problem by taking the determinant of $Q(\lambda)$. Say, $n=2$, then we have $$f(\lambda,M,C,K):=\det Q(\lambda)=\det\biggl(\lambda^2
\begin{bmatrix}
m_{11} & m_{12}\\
m_{21} & m_{22}\\
\end{bmatrix}
+\lambda
\begin{bmatrix}
c_{11} & c_{12}\\
c_{21} & c_{22}\\
\end{bmatrix}
+
\begin{bmatrix}
k_{11} & k_{12}\\
k_{21} & k_{22}\\
\end{bmatrix}
\biggr).$$ Solving this univariate problem is numerically unstable — in fact this is a special case of the resultant method, which is unstable in general [@NT2016]. The discriminant locus is the set of $(M,C,K) \in \mathbb{C}^{2 \times 2 \times 3}$ such that there exists a $\lambda$ where the following is satisfied: $f(\lambda,M,C,K)= {\partial f}/{\partial \lambda}=0.$ If we express $f$ as a sum of monomials, then the equation defining the discriminant locus is the determinant of a Sylvester matrix. Recall that the entries of a Sylvester matrix of two polynomials are coefficients of the polynomials, which in our case are $f$ and $\partial f/\partial \lambda$, i.e., $${\begin{bmatrix}
s_4 & s_3 &s_2&s_1&s_0& &\\
&s_4 & s_3 &s_2&s_1&s_0& \\
& &s_4 & s_3 &s_2&s_1&s_0\\
4s_4&3s_3&2s_2&s_1& & &\\
&4s_4&3s_3&2s_2&s_1& &\\
&&4s_4&3s_3&2s_2&s_1& \\
&&&4s_4&3s_3&2s_2&s_1\\
\end{bmatrix},
}$$ where the $s_i$’s are polynomials in the entries of $(M,C,K)$, defined as the coefficients of $$f(\lambda,M,C,K)=s_4\cdot\lambda^4+s_3\cdot\lambda^3+s_2\cdot\lambda^2+s_1\cdot\lambda+s_0.$$ The determinant of this Sylvester matrix is zero exactly when the two polynomials $f$ and $\partial f/\partial \lambda$ have a common root, so its vanishing defines where the problem is ill-conditioned. By taking a randomized start system, the homotopy method will avoid this space.
Numerical experiments {#Numerical results}
=====================
In this section, we provide numerical experiments to compare the speed and accuracy of (i) solving the PEP as formulated in \[eq:pep\_def\] with homotopy continuation (henceforth abbreviated as *homotopy method*), and (ii) solving the companion GEP in \[eq:gep\] with QZ algorithm (henceforth abbreviated as *linearization method*). We will also compare the conditioning of the PEP in \[eq:pep\_def\] and its companion GEP in \[eq:gep\]. All experiments were performed on a computer running Windows 8 with an Intel Core i7 processor. For the linearization method, we use the implementation in the <span style="font-variant:small-caps;">Matlab</span> `polyeig` function; for the homotopy method, we implemented it with <span style="font-variant:small-caps;">Bertini</span> 1.5.1. We tested both methods in serial on a common platform: <span style="font-variant:small-caps;">Matlab</span> R2016a with the interface <span style="font-variant:small-caps;">BertiniLab</span> 1.5 [@Bates2016]. We also tested the homotopy method in parallel using Intel MPI Library 5.1 compiled with intel/16.0 on the Midway1 compute cluster[^1] in the University of Chicago Research Computing Center.
The `polyeig` and `quadeig` functions {#sec:polyeig}
-------------------------------------
In our numerical experiments, we rely on the `polyeig` function in <span style="font-variant:small-caps;">Matlab</span> to solve PEPs with the linearization method and to compute condition numbers. This routine adopts the companion linearization, uses the QZ factorization to compute a generalized Schur decomposition, recovers the right eigenvectors with minimal residuals, and returns a condition number for each eigenvalue. While different linearization forms can have widely varying eigenvalue condition numbers [@doi:10.1137/050628283], `polyeig` does not allow for other choices.
A more recent linearization-based algorithm, `quadeig`, was proposed in [@Hammarling:2013:ACS:2450153.2450156] for QEPs. While `polyeig` is a built-in <span style="font-variant:small-caps;">Matlab</span> function, `quadeig` is a third-party program implemented in <span style="font-variant:small-caps;">Matlab</span>. It incorporates extra preprocessing steps that scale the problem’s parameters and choose the linearization with favorable conditioning and backward stability properties. However, we remain in favor of using `polyeig` as the basis of comparison against our homotopy method for solving QEPs because (i) the scaling is redundant, since the $2$-norms of our random coefficient matrices are all approximately one; and (ii) for a uniform comparison we prefer a method that works for all $m$, whereas `quadeig` does not extend to higher-order PEPs since the scaling process cannot be generalized. These issues are discussed further in \[sec:cond\].
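For reference, the linearization baseline amounts to a single `polyeig` call; the snippet below is a sketch of how such a call looks on a random QEP (the exact benchmark scripts are not reproduced here).

```matlab
% Linearization baseline on a random QEP (lambda^2*M + lambda*C + K) x = 0.
n  = 50;
cg = @() (randn(n) + 1i*randn(n)) / sqrt(2);   % entries ~ standard complex Gaussian
M = cg();  C = cg();  K = cg();
[X, e, s] = polyeig(K, C, M);                  % polyeig(A_0,...,A_m) solves sum_k lambda^k A_k x = 0
% X: n-by-2n eigenvectors, e: 2n eigenvalues, s: eigenvalue condition numbers
```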
Speed comparisons for QEP {#sec:speedQEP}
-------------------------
We first test and compare timings for our homotopy method and the linearization method for general quadratic matrix polynomials. The matrix polynomial is generated at random[^2] with coefficient matrices $M,C,K$ in \[eq:pep\_def\] having independent entries following the standard complex Gaussian distribution $\mathcal{N}_{\mathbb{C}}(0,1)$, using the `randn` function in <span style="font-variant:small-caps;">Matlab</span>. The coefficients of $L(x)$ are also chosen randomly in the same way. We performed three sets of numerical experiments:
1. \[item:lin\] \[timing\_linearization\] gives the elapsed timings for computing eigenpairs with the linearization method for dimensions $n=2,\dots,100$.
2. \[item:serial\] \[timing\_homotopy\_serial\] gives the elapsed timings for computing eigenpairs with the homotopy method for dimensions $n=2,\dots,80$.
3.  \[item:par\] \[timing\_homotopy\_parallel\] gives the elapsed timings for computing the eigenpairs with the homotopy method in parallel on $20$ cores for dimensions $n=20,30,\dots,100$.
For \[item:lin\] and \[item:serial\] the timings include both the setup and the solution process; for \[item:par\], the timings only include the solution process running <span style="font-variant:small-caps;">Bertini</span> in parallel. For each method and each dimension, we run our experiments ten times and record the best, average, median, and worst performance. The conclusions drawn from these experiments are discussed in \[sec:conclude\].
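The timing protocol can be summarized by the following sketch; it is a reconstruction of the loop behind the tables for the linearization method, not the verbatim benchmark script.

```matlab
% Ten timed runs per dimension; report best/average/median/worst elapsed times.
n = 50;  reps = 10;  times = zeros(reps, 1);
for r = 1:reps
    M = (randn(n)+1i*randn(n))/sqrt(2);  C = (randn(n)+1i*randn(n))/sqrt(2);
    K = (randn(n)+1i*randn(n))/sqrt(2);
    tic;  [X, e] = polyeig(K, C, M);  times(r) = toc;
end
stats = [min(times), mean(times), median(times), max(times)];
```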
$n$ \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- ----------- ----------- ----------- --------
2 4 2.4550E-4 0.0015 8.0279E-4 0.0082
3 6 3.1406E-4 6.7019E-4 6.0923E-4 0.0013
4 8 3.6414E-4 6.7064E-4 5.9055E-4 0.0011
5 10 4.2818E-4 6.7003E-4 5.2938E-4 0.0013
6 12 4.6144E-4 0.0010 7.3403E-4 0.0034
7 14 5.8542E-4 9.3642E-4 9.1938E-4 0.0015
8 16 6.7327E-4 9.5284E-4 8.5719E-4 0.0013
9 18 7.3280E-4 0.0027 0.0012 0.0158
10 20 9.2533E-4 0.0015 0.0013 0.0028
11 22 9.2246E-4 0.0014 0.0012 0.0027
12 24 0.0011 0.0017 0.0015 0.0027
13 26 0.0012 0.0017 0.0017 0.0025
14 28 0.0016 0.0020 0.0019 0.0025
15 30 0.0015 0.0023 0.0021 0.0035
16 32 0.0020 0.0030 0.0029 0.0050
17 34 0.0025 0.0037 0.0033 0.0058
18 36 0.0029 0.0037 0.0036 0.0056
19 38 0.0032 0.0048 0.0040 0.0091
20 40 0.0031 0.0038 0.0038 0.0048
30 60 0.0089 0.0109 0.0103 0.0164
40 80 0.0114 0.0157 0.0152 0.0220
50 100 0.0225 0.0253 0.0252 0.0295
60 120 0.0328 0.0407 0.0400 0.0511
70 140 0.0418 0.0527 0.0538 0.0624
80 160 0.0642 0.0762 0.0768 0.0850
90 180 0.0867 0.0983 0.0957 0.1191
100 200 0.1177 0.1285 0.1229 0.1504
: <span style="font-variant:small-caps;">Speed of QEP — linearization.</span> Elapsed timings (in seconds) for the linearization method with dimensions $n=2,3,\dots,100$ (not all displayed).[]{data-label="timing_linearization"}
$n$ \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- ----------- ----------- ----------- -----------
2 4 0.3692 0.4185 0.3950 0.6064
3 6 0.4717 0.5509 0.5225 0.7905
4 8 0.5005 1.1083 0.5553 5.9034
5 10 0.6316 0.7809 0.6735 1.0826
6 12 0.7767 1.0083 0.9946 1.2599
7 14 0.8593 1.0333 0.9418 1.4684
8 16 1.0043 1.4300 1.1941 2.6119
9 18 1.2054 1.7269 1.3505 2.8268
10 20 1.4373 1.6836 1.5428 2.9608
11 22 1.6972 2.2203 2.0245 3.3329
12 24 1.9914 2.7784 2.3702 4.6917
13 26 2.1979 3.3690 2.6251 7.5556
14 28 2.6687 3.8757 3.7469 5.3345
15 30 3.1057 5.2769 5.1792 9.3289
16 32 3.4994 7.2866 6.6201 16.3712
17 34 3.8719 6.1690 5.9380 9.3421
18 36 4.9051 7.8015 8.2610 13.4794
19 38 5.4613 10.4376 11.2833 13.4682
20 40 6.4268 14.0531 14.0381 23.9232
30 60 15.2266 32.1909 32.1480 63.8345
40 80 74.4945 103.9896 93.9979 161.6461
50 100 133.0395 244.3541 245.9194 394.0840
60 120 309.0921 532.1712 485.0595 943.0120
70 140 705.9720 1200.4053 1101.9838 1796.4483
80 160 1207.1300 1848.9342 1745.6093 2974.4295
: <span style="font-variant:small-caps;">Speed of QEP — homotopy in serial.</span> Elapsed timings (in seconds) for the homotopy method with dimensions $n=2,3,\dots,80$ (not all displayed).[]{data-label="timing_homotopy_serial"}
$n$ \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- ---------- ---------- ---------- ----------
20 40 2.5010 2.8091 2.7410 3.5160
30 60 2.5890 5.6856 4.6965 9.4390
40 80 8.3950 12.7100 13.2490 16.3700
50 100 16.1620 22.2175 21.8220 26.9460
60 120 18.2790 24.4677 23.4870 29.1530
70 140 36.5210 53.7870 54.7050 64.7610
80 160 63.6320 83.7379 77.9380 105.2450
90 180 101.4390 142.3974 149.6170 177.7420
100 200 164.3740 196.5620 199.4585 239.5180
: <span style="font-variant:small-caps;">Speed of QEP — homotopy in parallel.</span> Elapsed timings (in seconds) for the homotopy method ran in parallel on $20$ cores and with dimensions $n=20, 30,\dots,100$.[]{data-label="timing_homotopy_parallel"}
Accuracy comparisons for QEP {#sec:accuracyQEP}
----------------------------
Next we test and compare the absolute and relative backward errors of the computed eigenpairs corresponding to the smallest and largest eigenvalues, for our homotopy method and the linearization method on randomly generated matrix polynomials with dimensions $n=2,\dots,100$. All tests are averaged over ten runs. They are compared side-by-side in \[Ave\_abs\_berr\] (absolute error) and \[Ave\_rel\_berr\] (relative error), and graphically in \[Ab\_BErr\_2-100\] (absolute error) and \[Rel\_BErr\_2-100\] (relative error). The conclusions drawn from these experiments are presented in \[sec:conclude\].
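We report normwise backward errors; one standard way to compute them for an approximate eigenpair is sketched below. The Frobenius-norm weighting of the coefficient matrices is an assumption of this sketch; other weightings change only the denominator.

```matlab
% Normwise backward error of an approximate eigenpair (x, lambda) of
% P(lambda) = sum_k lambda^k A_k, with Frobenius-norm weights (an assumption).
function eta = backward_error(A, x, lambda)
m = numel(A) - 1;
r = zeros(size(x));  w = 0;
for k = 0:m
    r = r + lambda^k * (A{k+1} * x);                 % residual P(lambda)*x
    w = w + abs(lambda)^k * norm(A{k+1}, 'fro');     % scaling of admissible perturbations
end
eta = norm(r) / (w * norm(x));
end
```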
----- ------------- ------------- ------------- -------------
$n$ SMALLEST LARGEST SMALLEST LARGEST
2 3.39024E-16 2.01468E-16 2.81624E-16 2.13291E-16
5 1.03302E-15 8.68237E-16 4.98414E-16 3.21280E-16
10 2.34081E-15 2.43521E-15 1.03488E-15 7.96172E-16
20 5.21350E-15 4.81234E-15 1.20237E-15 1.25302E-15
30 8.24479E-15 6.06398E-15 1.31006E-15 3.27204E-15
40 1.18830E-14 8.06968E-15 1.83843E-15 1.56400E-15
50 1.37506E-14 9.77673E-15 2.66411E-15 3.04532E-15
60 1.69521E-14 9.92973E-15 2.37774E-15 3.37814E-15
70    1.99600E-14   1.12571E-14   2.66186E-15   3.414932E-15
80 2.27208E-14 1.61510E-14 2.91539E-15 4.21745E-15
90 2.41366E-14 1.63954E-14 3.23766E-15 3.10200E-15
100 2.85801E-14 1.75552E-14 3.51036E-15 3.62939E-15
----- ------------- ------------- ------------- -------------
: <span style="font-variant:small-caps;">Accuracy of QEP — linearization vs homotopy.</span> Absolute backward errors of computed smallest and largest eigenpairs with dimensions $n=2,\dots,100$; the left pair of columns is for the linearization method, the right pair for the homotopy method. []{data-label="Ave_abs_berr"}
![Graphs for \[Ave\_abs\_berr\]. Blue for linearization and red for homotopy.](Absolute_BErr_min_2-20_3_snip "fig:") ![Graphs for \[Ave\_abs\_berr\]. Blue for linearization and red for homotopy.](Absolute_BErr_max_2-20_3_snip "fig:")\
![Graphs for \[Ave\_abs\_berr\]. Blue for linearization and red for homotopy.](Absolute_BErr_min_20-100_3_snip "fig:") ![Graphs for \[Ave\_abs\_berr\]. Blue for linearization and red for homotopy.](Absolute_BErr_max_20-100_3_snip "fig:") \[Ab\_BErr\_2-100\]
----- ------------- ------------- ------------- -------------
$n$ SMALLEST LARGEST SMALLEST LARGEST
2 2.02195E-16 1.07268E-16 1.53449E-16 1.38260E-16
5 3.21357E-16 2.63883E-16 1.36922E-16 8.70572E-17
10 3.97923E-16 4.17780E-16 1.95933E-16 1.37749E-16
15 5.64010E-16 4.37375E-16 2.82767E-16 1.26614E-16
20 6.25007E-16 5.71766E-16 1.42798E-16 1.49327E-16
30 8.03800E-16 5.77483E-16 1.25286E-16 3.20068E-16
40 9.85766E-16 6.64878E-16 1.49583E-16 1.29559E-16
50 9.98415E-16 7.09502E-16 1.92299E-16 2.26032E-15
60 1.11238E-15 6.56447E-16 1.59075E-16 2.24618E-16
70 1.21839E-15 6.95665E-16 1.63888E-16 2.10081E-16
80 1.29842E-15 8.85652E-16 1.65893E-16 2.42794E-16
90 1.30478E-15 8.85652E-16 1.72905E-16 1.66882E-16
100 1.44644E-15 8.90062E-16 1.80208E-16 1.84720E-16
----- ------------- ------------- ------------- -------------
: <span style="font-variant:small-caps;">Accuracy of QEP — linearization vs homotopy.</span> Relative backward errors of computed smallest and largest eigenpairs with dimensions $n=2,\dots,100$; the left pair of columns is for the linearization method, the right pair for the homotopy method.[]{data-label="Ave_rel_berr"}
![Graphs for \[Ave\_rel\_berr\]. Blue for linearization and red for homotopy.](Relative_BErr_min_2-20_3_snip "fig:") ![Graphs for \[Ave\_rel\_berr\]. Blue for linearization and red for homotopy.](Relative_BErr_max_2-20_3_snip "fig:")\
![Graphs for \[Ave\_rel\_berr\]. Blue for linearization and red for homotopy.](Relative_BErr_min_20-100_3_snip "fig:") ![Graphs for \[Ave\_rel\_berr\]. Blue for linearization and red for homotopy.](Relative_BErr_max_20-100_3_snip "fig:") \[Rel\_BErr\_2-100\]
Speed and accuracy comparisons for PEP {#PEP_test}
--------------------------------------
We repeat the timing and accuracy tests in \[sec:speedQEP\] and \[sec:accuracyQEP\] for PEP where $m=4$. The timing results are presented in \[timing\_linearization\_PEP\] and \[timing\_homotopy\_PEP\]. The accuracy results are presented side-by-side in tabulated form in \[Ave\_abs\_berr\_PEP\] (absolute error) and \[Ave\_rel\_berr\_PEP\] (relative error), and graphically in \[Abs\_BErr\_pep\] (absolute error) and \[Rel\_BErr\_pep\] (relative error). As in the case of QEP, for speed comparisons, we run our experiments ten times and record the best, average, median, and worst performance for each method and each dimension; for accuracy comparisons, the tests are averaged over ten runs. The conclusions drawn from these experiments are discussed in \[sec:conclude\].
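The quartic test instances are generated in the same way as the quadratic ones; a sketch of the setup and of the linearization call is:

```matlab
% Random quartic PEP (m = 4) and its linearization solve.
n = 40;
A = arrayfun(@(k) (randn(n)+1i*randn(n))/sqrt(2), 0:4, 'UniformOutput', false);
[X, e] = polyeig(A{:});      % solves (A{1} + lambda*A{2} + ... + lambda^4*A{5}) x = 0
```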
$n$ \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- ----------- ----------- ----------- --------
2 8 3.6209E-4 7.3132E-4 6.2914E-4 0.0016
4 16 6.4617E-4 9.8930E-4 8.6807E-4 0.0017
6 24 0.0010 0.0017 0.0015 0.0028
8 32 0.0017 0.0022 0.0022 0.0031
10 40 0.0022 0.0030 0.0031 0.0042
12 48 0.0030 0.0040 0.0037 0.0055
14 56 0.0045 0.0054 0.0050 0.0074
16 64 0.0068 0.0080 0.0080 0.0100
18 72 0.0091 0.0108 0.0099 0.0141
20 80 0.0106 0.0121 0.0116 0.0149
30 120 0.0262 0.0337 0.0349 0.0373
40 160 0.0656 0.0751 0.0695 0.0929
50 200 0.0920 0.1225 0.1095 0.1724
60 240 0.1606 0.1906 0.1807 0.2513
70 280 0.2401 0.2719 0.2675 0.3262
80 320 0.3757 0.4189 0.4140 0.4986
90 360 0.4947 0.5507 0.5296 0.6968
100 400 0.7903 0.8285 0.8210 0.9458
: <span style="font-variant:small-caps;">Speed of PEP — linearization.</span> Elapsed timings (in seconds) for the linearization method with dimension $n=2,3,\dots,100$ (not all displayed).[]{data-label="timing_linearization_PEP"}
n \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- ----------- ----------- ----------- -----------
2 8 0.4449 0.5051 0.4725 0.6503
4 16 0.6237 0.8137 0.6946 1.2260
6 24 1.0655 2.4848 2.0590 4.9239
8 32 1.6349 3.5440 2.7851 7.6895
10 40 2.7746 8.6236 9.2868 16.6303
12 48 7.3469 16.3581 16.8977 25.2626
14 56 10.0197 26.8512 18.3648 57.4097
16 64 22.8207 45.5760 46.3289 66.0404
18 72 19.6662 61.4427 56.9810 94.0309
20 80 47.8803 119.9619 98.6018 245.0955
30 120 14.0910 30.9721 30.2710 56.8700
40 160 35.1230 55.5960 51.7820 77.9600
50 200 75.2800 111.3593 111.8870 148.0980
60 240 165.5460 214.3197 205.0230 290.4680
70 280 290.4960 397.6789 421.8780 474.0510
80 320 479.8200 616.7326 574.7630 811.5990
90 360 684.9450 1037.8848 1114.4720 1258.1440
100 400 1046.2700 1452.7988 1423.2730 1947.1600
: <span style="font-variant:small-caps;">Speed of PEP — homotopy in serial and in parallel.</span> Elapsed timings (in seconds) for the homotopy method with dimension $n=2,3,\dots,100$; in serial for $n = 2,3,\dots,20$ (not all displayed); in parallel on $20$ cores for $n = 30,40,\dots,100$.[]{data-label="timing_homotopy_PEP"}
----- ------------- ------------- ------------- -------------
$n$ SMALLEST LARGEST SMALLEST LARGEST
2 3.97673E-16 4.06234E-16 1.93084E-16 4.72180E-16
5 1.30486E-15 1.01101E-15 5.14466E-16 5.52493E-16
10 3.40010E-15 2.43585E-15 6.66926E-16 9.29498E-16
15 4.06465E-15 3.66940E-15 1.13208E-15 9.14666E-16
20 5.58414E-15 5.16130E-15 1.81496E-15 1.81759E-15
30 8.03892E-15 6.24574E-15 1.38205E-15 1.39749E-15
40 1.13754E-14 8.64433E-15 1.70770E-15 1.81052E-15
50 1.38394E-14 1.07375E-14 2.23095E-15 2.08415E-15
60 1.76932E-14 1.24398E-14 2.45192E-15 3.05775E-15
70 1.90902E-14 1.42487E-14 2.61274E-15 2.70850E-15
80 2.24766E-14 1.57303E-14 2.80187E-15 3.28150E-15
90 2.56978E-14 1.77254E-14 3.14128E-15 3.88126E-15
100 2.64759E-14 1.94732E-14 3.01357E-15 2.92889E-15
----- ------------- ------------- ------------- -------------
: <span style="font-variant:small-caps;">Accuracy of PEP — linearization vs homotopy.</span> Absolute backward errors of computed smallest and largest eigenpairs with dimension $n=2,\dots,100$; the left pair of columns is for the linearization method, the right pair for the homotopy method.[]{data-label="Ave_abs_berr_PEP"}
![Graphs for \[Ave\_abs\_berr\_PEP\]. Blue for linearization and red for homotopy.](Absolute_BErr_min_2-20_pep_3_snip "fig:") ![Graphs for \[Ave\_abs\_berr\_PEP\]. Blue for linearization and red for homotopy.](Absolute_BErr_max_2-20_pep_3_snip "fig:")\
![Graphs for \[Ave\_abs\_berr\_PEP\]. Blue for linearization and red for homotopy.](Absolute_BErr_min_20-100_pep_3_snip "fig:") ![Graphs for \[Ave\_abs\_berr\_PEP\]. Blue for linearization and red for homotopy.](Absolute_BErr_max_20-100_pep_3_snip "fig:") \[Abs\_BErr\_pep\]
----- ------------- ------------- ------------- -------------
$n$ SMALLEST LARGEST SMALLEST LARGEST
2 2.38800E-16 2.42932E-16 1.20966E-16 2.63005E-16
5 3.65363E-16 2.64597E-16 1.46731E-16 1.58114E-16
10 5.84645E-16 4.42737E-16 1.18572E-16 1.67121E-16
15 5.76099E-16 5.22943E-16 1.60022E-16 1.29557E-16
20 6.84554E-16 6.19752E-16 2.19221E-16 2.22975E-16
30 7.73547E-16 6.07382E-16 1.33975E-16 1.36049E-16
40 9.28358E-16 7.18160E-16 1.39950E-16 1.49331E-16
50 1.02753E-15 7.95390E-16 1.64387E-16 1.52488E-16
60 1.18312E-15 8.28990E-16 1.63573E-16 2.05105E-16
70 1.16505E-15 8.77305E-16 1.53855E-16 1.67640E-16
80 1.29478E-15 8.95204E-16 1.62206E-16 1.85393E-16
90 1.38225E-15 9.62668E-16 1.69063E-16 2.10661E-16
100 1.33725E-15 9.86613E-16 1.53193E-16 1.45843E-16
----- ------------- ------------- ------------- -------------
: <span style="font-variant:small-caps;">Accuracy of PEP — linearization vs homotopy.</span> Relative backward errors of computed smallest and largest eigenpairs with dimension $n=2,\dots,100$; the left pair of columns is for the linearization method, the right pair for the homotopy method.[]{data-label="Ave_rel_berr_PEP"}
![Graphs for \[Ave\_rel\_berr\_PEP\]. Blue for linearization and red for homotopy.](Relative_BErr_min_2-20_pep_3_snip "fig:") ![Graphs for \[Ave\_rel\_berr\_PEP\]. Blue for linearization and red for homotopy.](Relative_BErr_max_2-20_pep_3_snip "fig:")\
![Graphs for \[Ave\_rel\_berr\_PEP\]. Blue for linearization and red for homotopy.](Relative_BErr_min_20-100_pep_3_snip "fig:") ![Graphs for \[Ave\_rel\_berr\_PEP\]. Blue for linearization and red for homotopy.](Relative_BErr_max_20-100_pep_3_snip "fig:") \[Rel\_BErr\_pep\]
Conclusions {#sec:conclude}
-----------
From the results of the numerical experiments in preceding subsections, we may draw several conclusions regarding the speed and accuracy of our proposed homotopy method versus linearization method, the existing method-of-choice.
Speed
: From the elapsed times, the linearization method is consistently much faster than the homotopy method, even when the latter is run in parallel.
Accuracy
: From the normwise backward errors, homotopy method is more accurate than the linearization method across coefficient matrices of all dimensions.
Dimension
: The gap in accuracy increases significantly with the dimension of the coefficient matrices, favoring homotopy method for higher dimensional problems, particularly when all eigenpairs are needed.
Stability
: Our results reflect what is known about linearization method (see \[ss:Accuracy\]), namely, it is numerically unstable for both QEP and PEP.
Initialization
: Our results reflect what is known about homotopy method (see \[ss:homotopymethod\]), namely, its dependence on the choice of start system — one that leads to a path passing near a singularity takes longer to converge.
Parallelism
: The timings of homotopy method can be improved substantially with parallel computing. On the other hand, the linearization method exhibits no obvious parallelism.
It is also evident from these results that both the linearization and homotopy methods take considerably longer to solve a PEP with $m=4$ than to solve a QEP. This can be attributed to the fact that a PEP with $m =4$ has $5/3$ times as many parameters as a QEP: $\operatorname{dim} (\mathbb{C}^{n \times n \times 5}) = 5n^2$ versus $\operatorname{dim} (\mathbb{C}^{n \times n \times 3}) = 3n^2$, and that a PEP with $m=4$ has twice as many eigenpairs as a QEP: $4n$ versus $2n$ eigenpairs.
It is interesting to observe that accuracy does not exhibit a similar deterioration with increasing $m$ — for a fixed $n$, the averaged backward error for a PEP with $m=4$ can be smaller than that for a QEP.
Conditioning and accuracy {#sec:cond}
=========================
It has been shown [@doi:10.1137/050628283] that if the $2$-norms of the coefficient matrices in a PEP are all approximately $1$, then the companion linearization and original PEP have similar condition numbers. In particular, define $$\label{rho}
\rho = \frac{\max_i \|A_i \|_2}{\min(\|A_0 \|_2,\|A_m\|_2)} \geq 1.$$ When $\rho$ is of order 1, there exists a linearization for a particular eigenvalue that is about as well conditioned as the original PEP itself for that eigenvalue, to within a small constant factor. However this is no longer true when the $A_i$’s vary widely in norm: The companion GEP is potentially far more ill-conditioned than the PEP.
For the QEP in \[QEP\_def\], the quantity $$\rho = \frac{\max(\|M\|,\|C\|,\|K\|)}{\min(\|M\|,\|K\|)}$$ is of order $1$ if $\|C\| \lesssim \max(\|M\|,\|K\|)$ and $\|M\| \approx \|K\|$. When these are not satisfied, a scaling of $Q(\lambda)$ will typically improve the conditioning of the linearization — provided that $Q(\lambda)$ is not too heavily damped, i.e., $\|C\|\lesssim \sqrt{\|M\|\|K\|}$. However, in general such a scaling is unavailable; for instance, it is still not known how one should scale a heavily damped QEP. We compare how linearization and homotopy methods perform on damped QEPs:
1. We generate random $20 \times 20$ coefficient matrices $M,C,K$ having independent entries that follow the standard real Gaussian distribution $\mathcal{N}_\mathbb{R}(0,1)$.
2. For $Q_k(\lambda)=\lambda^2M+\lambda (2^k\cdot C)+K$, $k=0,1,\dots,5$, we determine the relative backward errors of all computed eigenpairs for both linearization and homotopy methods. The results are in \[RBError\_ScaleC\].
3. At the same time, for each $Q_k(\lambda)$, we compute the condition number of each eigenvalue in both the original QEP (used in homotopy method) and its companion GEP (used in linearization method). The results are in \[ConditionNumber\_ScaleC\].
In both \[RBError\_ScaleC\] and \[ConditionNumber\_ScaleC\], the horizontal axis is the index of eigenvalues in ascending order of magnitude, the vertical axis is on a log scale.
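A sketch of this experiment, with the relative backward error computed inline, is given below; it is illustrative only and omits the condition-number computation.

```matlab
% Scale the damping term by 2^k and record the backward error of every eigenpair
% returned by the linearization solve (cf. the blue dots in the figure).
n = 20;  M = randn(n);  C = randn(n);  K = randn(n);
for k = 0:5
    Ck = 2^k * C;
    [X, e] = polyeig(K, Ck, M);                     % companion linearization + QZ
    eta = zeros(size(e));
    for j = 1:numel(e)
        lam = e(j);  x = X(:,j);
        w   = norm(K,'fro') + abs(lam)*norm(Ck,'fro') + abs(lam)^2*norm(M,'fro');
        eta(j) = norm((lam^2*M + lam*Ck + K)*x) / (w * norm(x));
    end
    [~, idx] = sort(abs(e));                        % order eigenvalues by magnitude
    semilogy(eta(idx), '.');  hold on
end
```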
![Relative backward errors (in Frobenius norm) of all computed eigenpairs. Blue dots: linearization method. Red crosses: homotopy method.](Relative_berr_k_3_snip "fig:")\
\[RBError\_ScaleC\]
![Condition numbers of all computed eigenpairs. Blue dots: companion GEPs. Red crosses: original QEPs.](condition_number_k_3_snip "fig:") \[ConditionNumber\_ScaleC\]
We may deduce the following from these results:
1.  From \[RBError\_ScaleC\] we see that the homotopy method is backward stable for all eigenvalues and all $k = 0, 1, \dots, 5$; on the other hand, the linearization method becomes significantly less stable for the larger eigenvalues as $k$ increases past $3$. Note that the larger the value of $k$, the more heavily damped the QEP.
2. From \[ConditionNumber\_ScaleC\] we see that the larger eigenvalues in the original QEP are far better conditioned than in its companion GEP, whereas the smaller eigenvalues are similarly conditioned in both problems.
3. From \[ConditionNumber\_ScaleC\] we see that as the QEP becomes more heavily damped, the companion GEP becomes exceedingly worse-conditioned than the original QEP.
In summary, the homotopy method is evidently the preferred method for accurate determination of large eigenvalues in heavily damped QEPs.
Although it is possible to find an alternative linearization of a QEP into a GEP that is better conditioned towards the larger eigenvalues [@tisseur2000backward], there is no known linearization with conditioning comparable to the original QEP across *all* eigenvalues (in fact, such an ideal linearization quite likely does not exist). In principle, one might use different linearizations to determine eigenvalues in different ranges, and then combine the results to obtain a full set of eigenvalues. However this is not only impractical but suffers from a fallacy — we do not know a priori which eigenvalues from which linearizations are more accurate.
Applications {#sec:app}
============
In this section we provide numerical experiments on real data (as opposed to randomly generated data in \[Numerical results\]) arising from the two application problems mentioned in \[Intro\]. All experiments here are conducted in the same environment as described in \[Numerical results\], except that the matrices defining the problems are generated with the `nlevp` function in the <span style="font-variant:small-caps;">Matlab</span> toolbox <span style="font-variant:small-caps;">Nlevp</span> [@MIMS_ep2011.116].
Acoustic wave problem {#ss:QEPEx}
---------------------
This application is taken from [@acoustic_wave]. Consider an acoustic medium with constant density and space-varying sound speed $c(x)$ occupying the volume $\Omega \subseteq \mathbb{R}^d$. The homogeneous wave equation for the acoustic pressure $p(x,t)=\widehat p(x)e^{\widehat{\lambda} t}$ has a factored form that simplifies the wave equation to the following, where $\widehat{\lambda}$ is the eigenvalue: $$\label{time_harmonic}
-\Delta \widehat{p}(x)-\bigl(\widehat{\lambda}/c\bigr)^2 \cdot \widehat{p}(x)=0.$$ For our purpose, it suffices to consider the one-dimensional acoustic wave problem, i.e., $d=1$ and $\Omega=[0,1]$, with Dirichlet boundary condition $\widehat{p} = 0$ and impedance boundary condition $\partial p / \partial n + i\widehat\lambda p/ \zeta = 0$.
### Numerical results
The quadratic matrix polynomial $Q(\lambda)=\lambda^2 M+ \lambda C+K$ arises from a finite element discretization of \[time\_harmonic\]. In our numerical experiments we set impedance $\zeta=1$. The three $n\times n$ matrices in this QEP are $$M=-4\pi^2 \frac{1}{n}\left(I_n-\frac{1}{2}e_ne_n^\tp\right),
\quad C=2\pi i \frac{1}{\zeta}e_ne_n^\tp,
\quad K=n\begin{bmatrix} 2 & -1 & & \\ -1 &\ddots & \ddots &\\ & \ddots &2&-1\\ & & -1&1 \end{bmatrix}.$$
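These matrices are easy to reproduce; the sketch below builds them directly from the formulas above (for $\zeta=1$) and runs the linearization baseline for comparison.

```matlab
% 1D acoustic wave QEP: assemble M, C, K for dimension n and impedance zeta = 1.
n = 100;  zeta = 1;
en = zeros(n,1);  en(n) = 1;
M = -4*pi^2/n * (eye(n) - 0.5*(en*en'));
C = 2*pi*1i/zeta * (en*en');
K = n * (2*eye(n) - diag(ones(n-1,1),1) - diag(ones(n-1,1),-1));
K(n,n) = n;                       % the (n,n) entry of the stencil is 1, not 2
[X, e] = polyeig(K, C, M);        % linearization baseline
```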
As before, we compare the accuracy and timings of homotopy and linearization methods on this problem. We tabulate the absolute and relative backward errors of the computed eigenpairs corresponding to the smallest and largest eigenvalues for dimensions $n=20,30,\dots,100$ in \[Ave\_abs\_berr\_APP1\] and \[Ave\_rel\_berr\_APP1\], and plot them graphically in \[ex1\_BErr\_20-100\]. All accuracy tests are averaged over ten runs. The elapsed timings are tabulated in \[timing\_linearization\_APP1\] and \[timing\_homotopy\_APP1\]. All speed tests are run ten times with the best, average, median and worst timings recorded.
----- ------------- ------------- ------------- -------------
n SMALLEST LARGEST SMALLEST LARGEST
20 1.91540E-14 3.26558E-14 5.50499E-15 1.20102E-15
30 2.48304E-14 3.98265E-14 8.69382E-15 6.41943E-16
40 4.10844E-14 4.28048E-14 1.09178E-14 4.76802E-16
50 4.78653E-14 4.97587E-14 1.04868E-14 4.50403E-16
60 6.85333E-14 5.29234E-14 1.40925E-14 1.91169E-16
70 7.68982E-14 4.65017E-14 1.90018E-14 1.07398E-16
80 8.80830E-14 4.12431E-14 1.58457E-14 8.02790E-17
90 1.11451E-13 4.90212E-14 1.37846E-14 9.11717E-17
100 1.14562E-13 4.49546E-14 1.52145E-14 1.52150E-16
----- ------------- ------------- ------------- -------------
: <span style="font-variant:small-caps;">Acoustic wave problem — absolute backward errors.</span> Computed smallest and largest eigenpairs with dimensions $n=20,30,\dots,100$; the left pair of columns is for the linearization method, the right pair for the homotopy method.[]{data-label="Ave_abs_berr_APP1"}
----- ------------- ------------- ------------- -------------
n SMALLEST LARGEST SMALLEST LARGEST
20 4.19838E-16 7.81869E-15 1.20664E-16 2.87557E-16
30 3.87748E-16 1.35042E-14 1.35761E-16 2.17667E-16
40 5.03756E-16 1.88157E-14 1.33868E-16 2.09588E-16
50 4.86198E-16 2.68895E-14 1.06521E-16 2.43396E-16
60 5.96657E-16 3.39445E-14 1.22691E-16 1.22614E-16
70 5.87499E-16 3.45264E-14 1.45173E-16 7.97402E-17
80 6.00842E-16 3.47937E-14 1.08089E-16 6.77252E-17
90 6.87825E-16 4.63156E-14 8.50724E-17 8.61397E-17
100 6.46406E-16 4.70234E-14 8.58465E-17 1.59152E-16
----- ------------- ------------- ------------- -------------
: <span style="font-variant:small-caps;">Acoustic wave problem — relative backward errors.</span> Computed smallest and largest eigenpairs with dimensions $n=20,30,\dots,100$; the left pair of columns is for the linearization method, the right pair for the homotopy method.[]{data-label="Ave_rel_berr_APP1"}
n \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- -------- --------- -------- --------
20 40 0.0056 0.0139 0.0071 0.0754
30 60 0.0107 0.0175 0.0134 0.0592
40 80 0.0202 0.0310 0.0221 0.1127
50 100 0.0365 0.0499 0.0387 0.1464
60 120 0.0469 0.0673 0.0600 0.1380
70 140 0.0680 0.0826 0.0830 0.1040
80 160 0.0933 0.1192 0.1004 0.2491
90 180 0.1304 0.1549 0.1445 0.2135
100 200 0.1723 0.2015 0.1818 0.3137
: <span style="font-variant:small-caps;">Acoustic wave problem — linearization method.</span> Elapsed timings (in seconds) for the linearization method with dimensions $n=20, 30, \dots,100$.[]{data-label="timing_linearization_APP1"}
n \# ROOTS BEST AVERAGE MEDIAN WORST
----- ---------- ---------- ----------- ----------- -----------
20 40 7.7472 19.5057 16.7535 34.7300
30 60 104.2553 166.2379 155.9914 257.1145
40 80 265.2321 557.2342 526.8778 1003.7452
50 100 30.0660 37.9267 35.5240 49.5630
60 120 74.2800 92.1235 94.7845 113.7100
70 140 149.3000 189.5564 182.7230 280.6460
80 160 252.4920 378.7414 370.6855 519.2240
90 180 530.7180 691.8141 736.6715 841.8600
100 200 559.9340 1079.0177 1154.3995 1464.4170
: <span style="font-variant:small-caps;">Acoustic wave problem — homotopy method.</span> Elapsed timings (in seconds) for the homotopy method with dimensions $n=20, 30, \dots,100$, in serial for $n = 20,30,40$, and in parallel on $20$ cores for $n = 50,60,\dots,100$.[]{data-label="timing_homotopy_APP1"}
![<span style="font-variant:small-caps;">Acoustic wave problem — absolute and relative backward errors.</span> Computed smallest and largest eigenpairs with dimensions $n=20,30,\dots,100$. Blue dashed lines: linearization method. Red solid lines: homotopy method.](abe_smallest_3_snip "fig:") ![<span style="font-variant:small-caps;">Acoustic wave problem — absolute and relative backward errors.</span> Computed smallest and largest eigenpairs with dimensions $n=20,30,\dots,100$. Blue dashed lines: linearization method. Red solid lines: homotopy method.](abe_largest_3_snip "fig:")\
![<span style="font-variant:small-caps;">Acoustic wave problem — absolute and relative backward errors.</span> Computed smallest and largest eigenpairs with dimensions $n=20,30,\dots,100$. Blue dashed lines: linearization method. Red solid lines: homotopy method.](rbe_smallest_3_snip "fig:") ![<span style="font-variant:small-caps;">Acoustic wave problem — absolute and relative backward errors.</span> Computed smallest and largest eigenpairs with dimensions $n=20,30,\dots,100$. Blue dashed lines: linearization method. Red solid lines: homotopy method.](rbe_largest_3_snip "fig:") \[ex1\_BErr\_20-100\]
### Conclusions {#conclusions}
From these results, we may draw essentially the same conclusions as in \[sec:conclude\], which we drew from our numerical experiments on randomly generated data. In particular, the homotopy method vastly outperforms the linearization method in accuracy, as measured by normwise backward errors. The difference in this case is that $M,C,K$ are very sparse, highly structured matrices.
### Conditioning and accuracy {#conditioning-and-accuracy}
We next examine the accuracy of the two methods for the case $n=100$ more closely — computing all eigenvalues, instead of just the largest and smallest. We will also compute the condition number for each eigenvalue in both the original QEP formulation and in the companion GEP formulation. The results are plotted in \[RBError\_ConditionNumber\_APP1\]. In this case, $\|M\|_F=3.9$, $\|C\|_F=6.3$, $\|K\|_F=2439.3$, which is not regarded as a heavily damped QEP.
![<span style="font-variant:small-caps;">Acoustic wave problem with $n =100$ — accuracy and conditioning.</span> Horizontal axes: index of eigenvalues in ascending order of magnitude. Vertical axis: log scale. <span style="font-variant:small-caps;">Left plot:</span> relative backward errors of computed eigenpairs; blue dots for the linearization method, red crosses for the homotopy method. <span style="font-variant:small-caps;">Right plot:</span> condition numbers of each eigenvalue; blue dots for the companion GEP, red crosses for the original QEP.](app1_backward_error_comparison_3_snip "fig:") ![<span style="font-variant:small-caps;">Acoustic wave problem with $n =100$ — accuracy and conditioning.</span> Horizontal axes: index of eigenvalues in ascending order of magnitude. Vertical axis: log scale. <span style="font-variant:small-caps;">Left plot:</span> relative backward errors of computed eigenpairs; blue dots for the linearization method, red crosses for the homotopy method. <span style="font-variant:small-caps;">Right plot:</span> condition numbers of each eigenvalue; blue dots for the companion GEP, red crosses for the original QEP.](app1_condition_number_comparison_3_snip "fig:") \[RBError\_ConditionNumber\_APP1\]
Again, we see that homotopy method is numerically stable across all eigenpairs whereas the linearization method is unstable for nearly 90% of eigenpairs. The original QEP is also better-conditioned than its companion GEP: the condition number of each eigenvalue of the former is consistently smaller than, or very close to, that of the same eigenvalue in the latter.
### Certification {#certification}
Last but not least, we will use Smale’s $\alpha$-theory, discussed in \[sec:cert\], to deduce that for $n=20,60,100,140$, the Newton iterations in an application of homotopy method to the acoustic wave problem will converge quadratically to all eigenpairs.
Specifically, for $n=20,60,100,140$, we solve for all eigenpairs, compute $\mu(f,x,\lambda)$ using \[mu\_formula\], and obtain a bound for $\gamma(f,x,\lambda)$ according to \[pep\_certification\]. The Newton residual $\beta(f,x,\lambda)$ is also computed, which together with $\gamma(f,x,\lambda)$ yields an upper bound for $\alpha(f,x,\lambda)$. We present our results in \[Certification\_APP1\].
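The sketch below shows one way to carry out this test for a single computed eigenpair; it computes $\mu$ directly from its definition in \[eq:mu\], and it assumes the operator $2$-norm for the matrix norm and the parametrization $L(x)=c^{\mathsf T}x+d$ for the affine constraint.

```matlab
% alpha-test for one eigenpair (x, lambda) of the PEP with constraint L(x) = c.'*x + d.
function ok = alpha_certified(A, c, d, x, lambda)
m = numel(A) - 1;  n = numel(x);  z = [x; lambda];
P = zeros(n);  dP = zeros(n);
for k = 0:m
    P = P + lambda^k * A{k+1};
    if k >= 1, dP = dP + k*lambda^(k-1)*A{k+1}; end
end
f    = [P*x; c.'*x + d];
Df   = [P, dP*x; c.', 0];
beta = norm(Df \ f);                                    % Newton residual beta(f,z)
deg  = [(m+1)*ones(n,1); 1];                            % degrees of the PEP rows and of L
Delta = diag(sqrt(deg) .* (1 + norm(z)^2).^((deg-1)/2));
nf2  = abs(d)^2 + norm(c)^2;                            % ||L||^2 in the norm (eq:norm1)
for k = 0:m
    nf2 = nf2 + factorial(k)*factorial(m-k)/factorial(m+1) * norm(A{k+1},'fro')^2;
end
mu      = max(1, sqrt(nf2) * norm(Df \ Delta));         % definition (eq:mu), 2-norm assumed
gammaUB = mu * (m+1)^(3/2) / (2*sqrt(1 + norm(z)^2));   % bound (eq:gammaBound), d_max = m+1
ok      = beta * gammaUB < (13 - 3*sqrt(17))/4;         % threshold of Smale's alpha-test
end
```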
![<span style="font-variant:small-caps;">Acoustic wave problem — certification.</span> The values of $\beta(f,x,\lambda)$, $\mu(f,x,\lambda)$, and upper bound values of $\alpha(f,x,\lambda)$, $\gamma(f,x,\lambda)$ for the acoustic wave problem with $n=20,60,100,140$. The circle in the middle, the whiskers above and below represent the mean, maximum, and minimum value of each of these quantities taken over all eigenpairs. The red horizontal dashed line in the plot for $\alpha(f,x,\lambda)$ represents the threshold in \[smale\]. Note that the vertical axis is in log scale.](alpha_snip "fig:") ![<span style="font-variant:small-caps;">Acoustic wave problem — certification.</span> The values of $\beta(f,x,\lambda)$, $\mu(f,x,\lambda)$, and upper bound values of $\alpha(f,x,\lambda)$, $\gamma(f,x,\lambda)$ for the acoustic wave problem with $n=20,60,100,140$. The circle in the middle, the whiskers above and below represent the mean, maximum, and minimum value of each of these quantities taken over all eigenpairs. The red horizontal dashed line in the plot for $\alpha(f,x,\lambda)$ represents the threshold in \[smale\]. Note that the vertical axis is in log scale.](beta_snip "fig:")\
![<span style="font-variant:small-caps;">Acoustic wave problem — certification.</span> The values of $\beta(f,x,\lambda)$, $\mu(f,x,\lambda)$, and upper bound values of $\alpha(f,x,\lambda)$, $\gamma(f,x,\lambda)$ for the acoustic wave problem with $n=20,60,100,140$. The circle in the middle, the whiskers above and below represent the mean, maximum, and minimum value of each of these quantities taken over all eigenpairs. The red horizontal dashed line in the plot for $\alpha(f,x,\lambda)$ represents the threshold in \[smale\]. Note that the vertical axis is in log scale.](gamma_snip "fig:") ![<span style="font-variant:small-caps;">Acoustic wave problem — certification.</span> The values of $\beta(f,x,\lambda)$, $\mu(f,x,\lambda)$, and upper bound values of $\alpha(f,x,\lambda)$, $\gamma(f,x,\lambda)$ for the acoustic wave problem with $n=20,60,100,140$. The circle in the middle, the whiskers above and below represent the mean, maximum, and minimum value of each of these quantities taken over all eigenpairs. The red horizontal dashed line in the plot for $\alpha(f,x,\lambda)$ represents the threshold in \[smale\]. Note that the vertical axis is in log scale.](mu_snip "fig:") \[Certification\_APP1\]
For every dimension that we test, the value of $\alpha(f,x,\lambda)$ for each eigenpair is much smaller than the threshold given in \[smale\]. In other words, this certifies that all eigenpairs that we computed using the homotopy method are accurate solutions to the PEP, for $n=20,60,100,140$.
Planar waveguide problem {#ss:PEPEx}
------------------------
This example is taken from [@stowell2010guided]. The $129\times 129$ quartic matrix polynomial $P(\lambda)=\lambda^4A_4+\lambda^3A_3+\lambda^2A_2+\lambda A_1+A_0$ arises from a finite element solution of the equation for the modes of a planar waveguide using piecewise linear basis $\varphi_i$, $i=0,\dots,128$. The coefficient matrices are defined by: $$\begin{gathered}
A_1=\frac{\delta^2}{4}\operatorname{diag}(-1,0,0,\dots,0,0,1), \quad A_3=\operatorname{diag}(1,0,0,\dots,0,0,1),\\
A_0(i,j)=\frac{\delta^4}{16}\langle \varphi_i,\varphi_j\rangle,\quad A_2(i,j)=\langle\varphi_i',\varphi_j'\rangle -\langle q\varphi_i,\varphi_j\rangle,\quad A_4(i,j)=\langle\varphi_i,\varphi_j\rangle.\end{gathered}$$ The parameter $\delta$ describes the difference in refractive index between the cover and substrate of the waveguide and $q$ is a function from the variational formulation. The dimension of this PEP is fixed at $n=129$. As before, we compare the accuracy and timings of homotopy and linearization methods on this problem. We present the absolute and relative backward errors for all computed eigenpairs in \[ex2\_BErr\] and tabulate the best, average, median, worst performance in \[Ave\_berr\_APP2\]. The elapsed timings are given in \[timing\_APP2\]. All speed and accuracy tests are run ten times with the best, average, median and worst results recorded.
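Since this problem is part of the NLEVP collection, the coefficient matrices can be loaded with a single call; the problem name `planar_waveguide` below is our assumption about the identifier used by `nlevp`.

```matlab
% Load the 129-by-129 quartic PEP from NLEVP and solve it with the linearization baseline.
coeffs = nlevp('planar_waveguide');   % cell array {A_0,...,A_4} (assumed problem name)
[X, e] = polyeig(coeffs{:});          % all 4*129 = 516 eigenpairs
```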
$n=129$ LINEARIZATION ABS. BK. ERR. HOMOTOPY ABS. BK. ERR.
--------- ----------------------------- ------------------------
BEST 4.77790E-15 6.06981E-19
MEAN 5.38568E-13 2.05656E-15
MEDIAN 2.83418E-13 6.90006E-16
WORST 1.76856E-12 1.166201E-14
$n=129$ LINEARIZATION REL. BK. ERR. HOMOTOPY REL BK. ERR.
BEST 1.14289E-15 1.72613E-17
MEAN 2.53466E-12 9.48695E-17
MEDIAN 5.37243E-14 7.64027E-17
WORST 1.87416E-11 4.24275E-16
: <span style="font-variant:small-caps;">Planar waveguide problem — absolute and relative backward errors.</span> Best, average, median, worst absolute (top table) and relative (bottom table) backward errors of all eigenpairs computed via homotopy and linearization methods.[]{data-label="Ave_berr_APP2"}
![<span style="font-variant:small-caps;">Planar waveguide problem — sorted absolute and relative backward errors.</span> Sorted absolute and relative backward errors of all $516$ computed eigenpairs. Blue dashed lines represent linearization method; red solid lines represent homotopy method. Tests are averaged over ten runs and vertical axis is in log scale.](app2_abe_sorted_log_3_snip "fig:") ![<span style="font-variant:small-caps;">Planar waveguide problem — sorted absolute and relative backward errors.</span> Sorted absolute and relative backward errors of all $516$ computed eigenpairs. Blue dashed lines represent linearization method; red solid lines represent homotopy method. Tests are averaged over ten runs and vertical axis is in log scale.](app2_rbe_sorted_log_3_snip "fig:") \[ex2\_BErr\]
$n=129$ LINEARIZATION TIMINGS HOMOTOPY TIMINGS
--------- ----------------------- ------------------
BEST 2.6670 2673.0350
MEAN 2.7321 3013.9718
MEDIAN 2.7213 2973.6600
WORST 2.8414 3460.5290
: <span style="font-variant:small-caps;">Planar waveguide problem — speed.</span> Elapsed timings (in seconds) for homotopy and linearization methods. The homotopy method is run in parallel on $80$ cores.[]{data-label="timing_APP2"}
The results obtained for this quartic PEP arising from the planar waveguide problem are consistent with what we have observed for the acoustic wave QEP in \[ss:QEPEx\] as well as the randomly generated PEPs and QEPs in \[Numerical results\] — while the homotopy method requires much longer running times than the linearization method, its results are also vastly superior in terms of accuracy.
In summary, if our main goal is to obtain accurate solutions to polynomial eigenvalue problems, particularly when all eigenpairs are needed, then expending additional resources (more cores and longer computing time) to employ the homotopy method is not only worthwhile but perhaps inevitable — we know of no other alternative that would achieve the same level of accuracy.
Acknowledgment {#acknowledgment .unnumbered}
==============
The work in this article is generously supported by DARPA D15AP00109 and NSF IIS 1546413. LHL is supported by a DARPA Director’s Fellowship. JIR is supported by a University of Chicago Provost Postdoctoral Scholarship.
[^1]: <https://rcc.uchicago.edu/resources/high-performance-computing>
[^2]: Results on matrix polynomials arising from actual applications will be presented in \[sec:app\].
|
[Centre de Physique Théorique[^1] - CNRS - Luminy, Case 907]{}
[F–13288 Marseille - Cedex 9]{}
[LIGHT-CONE WAVE FUNCTIONS OF THE PHOTON: CLASSIFICATION UP TO TWIST FOUR]{}
[**Gautier STOLL**]{}
Fond National Suisse de la Recherche Scientifique
[**Abstract**]{}
The different light-cone wave functions of the photon up to twist four are defined. Some explicit expressions are extracted from results of Balitskii et al. [@BBK] and Ali and Braun [@BA].
Key-Words : Non-perturbative QCD, Conformal expansion.
March 1999
CPT-99/P. 3805
anonymous ftp: ftp.cpt.univ-mrs.fr\
www.cpt.univ-mrs.fr
Introduction
============
Understanding the non-perturbative aspects of QCD is still an unsolved problem, but several ways of parameterizing these effects exist.
One method is to use the notion of “wave function” or distribution amplitude (the distribution in momentum fractions of the partons in a particular meson state). This notion, first introduced by Brodsky and Lepage [@BL], is especially useful when one deals with hard exclusive processes.
In another approach, wave functions appear as the basic non-perturbative objects in the method of QCD light-cone sum rules [@B] (the basic non-perturbative objects of the usual QCD sum rules are vacuum condensates [@SVZ]).
In the approach of Brodsky and Lepage [@BL], the parton decomposition is considered in the infinite momentum frame (a mathematically equivalent approach is light-cone quantization [@BPP]). In this paper, I take the point of view of Braun et al. [@BF; @BBK; @BBKT; @BB; @BBS], where the wave functions are extracted from matrix elements of gauge-invariant light-cone operators between the vacuum and a physical state (meson or photon). In this approach, one directly obtains the inputs for the method of light-cone QCD sum rules; in addition, the exact equation of motion can be used.
In this paper, I classify the different wave functions of the photon up to twist-4 (the meson case has already been treated [@BF; @BB; @BBKT; @BBS; @GS]). Then I extract some explicit expressions from [@BBK; @BA].
General Framework
=================
In this paper, the photon wave functions are extracted from vacuum expectation values of operators (massless quark fields and gluon field) on the light-cone, at first order in the electric charge in an external classical electromagnetic field.
There are the two-point wave functions, extracted from matrix elements of the form $$\left\langle 0 \left|\ov{\psi}(x)\,\Gamma\,[x,-x]\,\psi(-x)\right|0 \right\rangle_{F_{\mu\nu}}$$ \[2matel\] and the three-point wave functions, extracted from matrix elements of the form $$\left\langle 0 \left| \ov{\psi}(x)\,[x,vx]\, g\, G_{\mu\nu}(vx)\,[vx,-x]\,\Gamma\,\psi(-x)\right|0\right\rangle_{F_{\mu\nu}}$$ \[3matel\] where $x$ is almost on the light cone, $v \in [0,1]$, $\Gamma$ is any product of $\gamma_{\mu}$ matrices and $[x,y]$ is a path-ordered gauge factor along the straight line connecting $x$ and $y$: $$[x,y]={\rm P}\exp\Bigl\{ ig\int^1_0 dt\, (x-y)_\mu A^\mu\bigl(tx+(1-t)y\bigr)\Bigr\}$$ $A_\mu$ is the gluon field and $B_\mu$ is the electromagnetic field, which is related to the gauge-invariant classical electromagnetic tensor field $F_{\mu\nu}=i \exp(iqx)({\epsilon}_\mu q_\nu-{\epsilon}_\nu q_\mu)$.
The gauge factor is a way to introduce the interaction (see [@Bali]); it ensures the gauge invariance of these non-local matrix elements (I will sometimes omit writing it).
In order to classify these wave functions, the projector onto the directions orthogonal to $q$ and $x$ is useful: $$g^{\perp}_{\mu\nu}=g_{\mu\nu} -\frac{q_\mu x_\nu+q_\nu x_\mu}{qx}$$
The following notation will often be used: $$a_{\cdot}\equiv a_{\mu}z^{\mu}, \qquad a_{*}\equiv a_{\mu}p^{\mu}/(pz)$$
In order to preserve Lorentz invariance and gauge invariance, the matrix elements \[2matel\] and \[3matel\] can only be functions of $q_\mu$, $x_\mu$, $F_{\mu\nu}$, $g^{\perp}_{\mu\nu}$ and $qx$. $q^2$ and $x^2$ are set to zero (a non-zero $x^2$ can appear explicitly for twist-4 corrections to twist-2 wave functions).
Definition of the different wave functions
==========================================
Twist classification
--------------------
For local operators, twist means dimension minus spin. For non-local matrix elements, however, the definition of twist is a little different: it is built in analogy with the case of Deep Inelastic Scattering, where the different twists give contributions at different powers of the hard momentum transfer. A good description of the notion of twist can be found in [@JJ]. In this paper, the classification is made in analogy with the meson wave functions ([@GS; @BBKT; @BBS]), up to twist-4.
Two-points wave functions
-------------------------
Chiral-Even:
0 |(x)\_(-x)|0 \_[F\_]{} & = & e\_\_0\^1 du F\_(x)x\^C(u)\
0 |(x)\_\_5 (-x)|0 \_[F\_]{} &=& e\_\_0\^1 du F\_(x) x\^\_a(u) where $\xi=2u-1$. $x^2$ is set to zero ($x^2$ corrections are already twist 5). $e_\psi$ is the electric charge of the (massless) quark field $\psi$.
Chiral-Odd:
0 |(x)\_(-x)|0 \_[F\_]{} = & & e\_\_0\^1 du F\_(x)\
&+& e\_\_0\^1 du (F\_(x) x\^x\_- F\_(x)x\^x\_)b(u)\
\[t2chiodd\] $x^2$ is not set to zero in front of $d'(u)$, because this wave function is a twist-4 correction to $d(u)$.
The matrix element $\left\langle 0 |\ov{\psi}(x)\psi(-x)|0 \right\rangle_{F}$ vanishes at first order in the electric charge.
Three-points wave functions
---------------------------
Here, I build the classification in analogy with Ball and Braun [@BBS].
Chiral even: 0 | (x)g\_s \_(vx) \_\_5 (-x)|0\_[F\_]{} &=& e\_ F\_(\[x\])q\_A()\
0 | (x)g\_s G\_(vx) i\_(-x)|0 \_[F\_]{} &=& e\_ F\_(\[x\])q\_V() where $\unal=\{\alpha_1,\alpha_2,\alpha_g\}$ is a set of momentum fractions. The integration measure is defined as \_0\^1 d\_1 \_0\^1 d\_2\_0\^1 d\_g (1-\_i) and F\_(\[x\])F\_(x(\_1-\_2+v\_g))
The different wave functions are classified by their projections onto light-cone components; see Table 1 (the symbol $\perp$ means projection onto the plane perpendicular to $x$ and $q$). When one compares this list of distributions with the one for the vector meson (in [@BBS]), one sees that there are fewer possibilities in the case of the photon. This is a consequence of electromagnetic gauge invariance.
$$\begin{array}{|c|lcc|lc|}
\hline
{\rm Twist} &(\mu\nu\alpha)
& \bar\psi \widetilde{G}_{\mu\nu}\gamma_\alpha\gamma_5
\psi & \bar\psi G_{\mu\nu}\gamma_\alpha \psi
&(\mu\nu\alpha\beta) & \bar\psi G_{\mu\nu} \sigma_{\alpha\beta}\psi
\\ \hline
3 & \cdot\perp \cdot & A & & \cdot\perp
\cdot\perp & \\ \hline
4 & & & &
\perp\perp\cdot\!\perp & T_1
\\
& & & & \cdot\perp
\perp\perp & T_2 \\
& & & & \cdot * \cdot\perp & T_3\\
& & & & \cdot\perp\! \cdot\, * & T_4\\\hline
\end{array}$$
Chiral odd:\
& &e\_{-() } T\_1()\
&+&e\_{-() } T\_2()\
&+&e\_F\_(\[x\]) (q\_x\_- q\_x\_) T\_3()\
&+&e\_F\_(\[x\]) (q\_x\_- q\_x\_) T\_4()\
$$\left\langle 0 \left| \bar\psi(x)\,g_s G_{\mu\nu}(vx)\, \psi(-x)\right|0\right\rangle_{F_{\mu\nu}}= e_\psi \int\!{\cal D}\unal\; F_{\mu\nu}([x])\,S(\unal)\,, \qquad
\left\langle 0 \left| \bar\psi(x)\,ig_s \widetilde{G}_{\mu\nu}(vx)\, \psi(-x)\right|0\right\rangle_{F_{\mu\nu}}= e_\psi \int\!{\cal D}\unal\; F_{\mu\nu}([x])\,\widetilde{S}(\unal)$$
Some explicit expressions
=========================
In this part, I use the results of [@BA; @BBK] to obtain some evaluations of the wave functions. The technique is based on the conformal expansion [@BBK; @BF; @Ma; @GS; @Oh] (an expansion into parts which renormalize multiplicatively).
For the case of two-points chiral-odd wave functions, expressions can be extracted from [@BA]:
|(0)\_ (x)\_F &=& e\_|\_0\^1 duF\_(ux)\
&+&e\_|\_0\^1 dug\_\^[(2)]{}(u)(ux)\
with $$\begin{aligned}
\phi_\gamma(u) &=& 6u(1-u) {\nonumber}\\
g_\gamma^{(1)}(u) &=&-\frac{1}{8}(1-u)(3-u)
\nonumber\\
g_\gamma^{(2)}(u) &=&-\frac{1}{4}(1-u)^2\end{aligned}$$ and the value of $\chi$ (the magnetic susceptibility) is[@chi]: $$\chi(\mu= 1\,\text{GeV}) =-4.4 \,\text{GeV}^{-2}$$
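As an elementary cross-check (simple integration of the expression quoted above, not an additional result of [@BA]), the leading-twist function integrates to unity,
$$\int_0^1 du\,\phi_\gamma(u) = \int_0^1 du\, 6u(1-u) = 6\left(\frac{1}{2}-\frac{1}{3}\right) = 1 \,,$$
so $\phi_\gamma$ is normalized in the standard way.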
These relations can be transformed in order to obtain expressions for $d(u)$, $d'(u)$ and $b(u)$ (defined in equation \[t2chiodd\]):
$$\begin{aligned}
d(u)&=&6\, u(1-u)\nonumber\\
d'(u)&=&u(u-2)\nonumber\\
b(u)&=&-u^2\end{aligned}$$
For the case of three-points chiral-odd wave functions, the results of [@BBK] can be used:\
& &e\_\_0\^1 d\_1 d\_2 d\_3 (\_i -1) \_(\_1\_2\_3)F\_(u\_1 x+\_2 x +v\_3 x)\
\
& &e\_\_0\^1 d\_1 d\_2 d\_3 (\_i -1) \_(\_1\_2\_3)F\_(u\_1 x+\_2 x +v\_3 x) with \_(\_i)&=&30(\_1-\_2){\_3\^2+12\_2 \_1\_2\_3}\
\_(\_i)&=&30\^2\_3{(1-\_3)+\_1 (1-\_3)(1-2\_3)+\_2}\
and &=&0.2\
\_1&=&0.4\
\_2&=&0.3
These expressions imply: S()-()=15\^2\_3{(1-\_3)+\_1 (1-\_3)(1-2\_3)+\_2}\
& &30(\_1-\_2){\_3\^2+12\_2 \_1\_2\_3}\
Conclusion
==========
In this paper, I classified the different wave functions of the photon and extracted some explicit expressions. An evaluation of all these wave functions (using the technique of conformal expansion) would certainly be very useful, in order to better understand QCD at low energies and to apply the method of QCD light-cone sum rules to decays with photons with good accuracy.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is supported by the Schweizerischer Nationalfonds. I am grateful to V. Braun for introducing me to the subject.
[999]{}
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B147**]{} (1979) 385.
V. L. Chernyak and A. R. Zhitnitsky, Phys. Rep. [**112**]{} (1984) 173.
S. J. Brodsky and G. P. Lepage, in: [*Perturbative Quantum Chromodynamics*]{}, ed. by A. H. Mueller, p. 93, World Scientific (Singapore) 1989.
S. J. Brodsky, H.-C. Pauli and S. Pinsky, Phys. Rept. [**301**]{}, 299 (1998)
V. M. Braun and I. E. Filyanov, Z. Phys. [**C48**]{} (1990) 239.
I. I. Balitskii, V. M. Braun and A. V. Kolesnichenko, Nucl. Phys. [**B312**]{} (1989) 509.
P. Ball, V. M. Braun, Y. Koike and K. Tanaka, Nucl. Phys. [**B529**]{} (1998) 323-382.
R. L. Jaffe and X. Ji, Nucl. Phys. [**B375**]{} (1992) 527.
P. Ball and V. M. Braun, hep-ph/9810475.
Th. Ohrndorf, Nucl. Phys. [**B198**]{} (1982) 26.
Yu. M. Makeenko, Yad. Fiz. [**33**]{} (1981) 842.
V.M. Braun and A.V. Kolesnichenko, Phys. Lett. B [**175**]{} (1986) 485; Sov. J. Nucl. Phys. [**44**]{} (1986) 489.
P. Ball, V. M. Braun, Phys. Rev. [**D58**]{} (1998) 094016
I. I. Balitsky, Phys. Lett. [**124**]{} (1983) 230.
V. M. Braun, hep-ph/9801222
A. Ali and V. M. Braun, Phys. Lett. [**B359**]{} (1995) 223.
G. Stoll, hep-ph/9812432.
V.M. Belyaev and Ya.I. Kogan, Yad. Fiz. [**40**]{} (1984) 1035;\
I.I. Balitsky, A.V. Kolesnichenko and A.V. Yung, Yad. Fiz. [**41**]{} (1985) 282.
[^1]: Unité Propre de Recherche 7061
|
---
abstract: 'We explore the nature of the Bose condensation transition in driven open quantum systems, such as exciton-polariton condensates. Using a functional renormalization group approach formulated in the Keldysh framework, we characterize the dynamical critical behavior that governs decoherence and an effective thermalization of the low frequency dynamics. We identify a critical exponent special to the driven system, showing that it defines a new dynamical universality class. Hence critical points in driven systems lie beyond the standard classification of equilibrium dynamical phase transitions. We show how the new critical exponent can be probed in experiments with driven cold atomic systems and exciton-polariton condensates.'
author:
- |
L. M. Sieberer$^{1,2}$, S. D. Huber$^{3,4}$, E. Altman$^{4,5}$, and S. Diehl$^{1,2}$\
[$^1$*Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria*]{}\
[$^2$*Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria*]{}\
[$^3$*Theoretische Physik, Wolfgang-Pauli-Strasse 27, ETH Zurich, CH-8093 Zurich, Switzerland*]{}\
[$^4$*Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 76100, Israel*]{}\
[$^5$*Department of Physics, University of California, Berkeley, CA 94720, USA*]{}\
title: 'Dynamical Critical Phenomena in Driven-Dissipative Systems'
---
Recent years have seen major advances in the exploration of many-body systems in which matter is strongly coupled to light [@carusotto12]. Such systems include for example polariton condensates [@kasprzak06], superconducting circuits coupled to microwave resonators [@schoelkopf08; @clarke08], cavity quantum electrodynamics [@hartmann08] as well as ultracold atoms coupled to high finesse optical cavities [@ritsch12]. As in traditional quantum optics settings, these experiments are subject to losses, which may be compensated by continuous drive, yet they retain the many-body character of condensed matter. This combination of ingredients from atomic physics and quantum optics in a many-body context defines a qualitatively new class of quantum matter far from thermal equilibrium. An intriguing question from the theoretical perspective is what new universal behavior can emerge under such conditions.
A case in point are exciton-polariton condensates. Polaritons are short lived optical excitations in semiconductor quantum wells. Continuous pumping is required to maintain their population in steady state. But in spite of the non-equilibrium conditions, experiments have demonstrated Bose condensation [@kasprzak06] and, more recently, have even observed the establishment of a critical phase with power-law correlations in a two dimensional system below a presumed Kosterlitz-Thouless phase transition [@roumpos12]. At a fundamental level however there is no understanding of the condensation transition in the presence of loss and external drive, and more generally of continuous phase transitions under such conditions.
In this letter we develop a theory of dynamical critical phenomena in driven-dissipative systems in three dimensions. Motivated by the experiments described above we focus on the case of Bose condensation with the following key results. *(i) Low-frequency thermalization* – The microscopic dynamics of a driven system is incompatible with an equilibrium-like Gibbs distribution at steady state. Nevertheless a scale independent effective temperature emerges at low frequencies in the universal regime near the critical point, and all correlations in this regime obey a classical fluctuation-dissipation relation (FDR). Such a phenomenon of low frequency effective equilibrium has been identified previously in different contexts [@mitra06; @diehl08; @dallatorre10; @dallatorre12; @oztop12; @wouters2006]. *(ii) Universal low-frequency decoherence* – In spite of the effective thermalization, the critical dynamics is significantly affected by the non-equilibrium conditions set by the microscopic theory. Specifically we show that all coherent dynamics, as measured by standard response functions, fades out at long wavelengths as a power-law with a new universal critical exponent. The decoherence exponent cannot be mimicked by any equilibrium model and places the critical dynamics of a driven system in a new dynamical universality class beyond the Halperin-Hohenberg classification of equilibrium dynamical critical behavior [@hohenberg77].
*Open system dynamics–* A microscopic description of driven open systems typically starts from a Markovian quantum master equation or an equivalent Keldysh action (see Supplementary Information (SI)). However, the novel aspects in the critical dynamics of driven dissipative systems discussed below can be most simply illustrated by considering an effective mesoscopic description of the order parameter dynamics using a stochastic Gross-Pitaevskii equation [@carusotto2005] $$\label{SGP}
i \partial_t \psi = \left[ - \left( A - i D \right) \nabla^2 - \mu + i \chi +
\left( \lambda - i \kappa \right) {\left\lvert \psi \right\rvert}^2 \right] \psi
+ \zeta.$$ As we show below, this equation can be rigorously derived from a fully quantum microscopic description of the condensate when including only the relevant terms near the critical point. The different terms in have a clear physical origin. $ \chi= \left( \g_p - \g_l \right)/2$ is the effective gain, which combines the incoherent pump field minus the local single-particle loss terms. $\kappa,\lambda>0$ are respectively two-body loss and interaction parameters. The diffusion term $D$ is not contained in the original microscopic model, and is not crucial to describe most non-universal aspects of, e.g., exciton-polariton condensates [@wouters07] (but see [@wouters10]). In a systematic treatment of long-wavelength universal critical behavior, however, such term is generated upon integrating out high frequency modes during the renormalization group (RG) flow, irrespective of its microscopic value. We therefore include it at the mesoscopic level with a phenomenological coefficient. Finally $\zeta$ is a Gaussian white noise with correlations $\av{\zeta^*(t,\mathbf{x}) \zeta(t',\mathbf{x}')} = \g\delta(t-t')
\delta(\mathbf{x} - \mathbf{x}') $ where $\gamma = \gamma_p + \gamma_l$. Such noise is necessarily induced by the losses and sudden appearances of particles due to pumping.
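To make the role of the different terms concrete, the following is a minimal numerical sketch (not part of the original analysis; the one-dimensional grid, time step and parameter values are illustrative assumptions only) of an Euler-Maruyama integration of the stochastic equation above, with the noise discretized so that $\langle\zeta^{*}\zeta\rangle = \gamma\,\delta(t-t')\,\delta(x-x')$ on the lattice:

```python
import numpy as np

# Minimal sketch of a 1D Euler-Maruyama integration of the stochastic
# Gross-Pitaevskii equation; all parameter values below are illustrative
# assumptions, not values taken from the text.
A, D = 1.0, 0.5          # coherent propagation and diffusion
lam, kap = 1.0, 0.5      # two-body interaction and two-body loss
chi, gamma = 0.2, 0.1    # net single-particle gain and noise strength
mu = lam * chi / kap     # chemical potential of the stationary condensate

L, N, dt, steps = 50.0, 256, 1.0e-3, 5000
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
rng = np.random.default_rng(1)

# Start from the mean-field condensate |psi|^2 = chi/kappa.
psi = np.sqrt(chi / kap) * np.ones(N, dtype=complex)

for _ in range(steps):
    lap = np.fft.ifft(-(k ** 2) * np.fft.fft(psi))   # spectral Laplacian
    rhs = (-(A - 1j * D) * lap - mu * psi + 1j * chi * psi
           + (lam - 1j * kap) * np.abs(psi) ** 2 * psi)
    # Complex white noise with <zeta* zeta> = gamma / (dx dt) per site and step.
    zeta = np.sqrt(gamma / (2.0 * dx * dt)) * (
        rng.standard_normal(N) + 1j * rng.standard_normal(N))
    psi = psi - 1j * dt * (rhs + zeta)               # i d_t psi = rhs + zeta

print("time-evolved density:", float(np.mean(np.abs(psi) ** 2)),
      " mean-field chi/kappa:", chi / kap)
```

Lowering $\chi$ towards zero in such a simulation lets one watch how the noise smears the mean-field transition discussed next, which is precisely the fluctuation regime addressed by the renormalization group treatment below.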
The dGP describes a mean field transition from a stationary condensate solution with density $|\psi|^2=\chi/\kappa$ for $\chi
> 0$ to the vacuum state when $\chi$ crosses zero. Dynamical stability [@keeling10] determines the chemical potential as $\mu=\lambda|\psi|^2$. Similar to a temperature, the noise term in Eq. can drive a transition at finite particle density, thereby inducing critical fluctuations.
As the equation of motion is cast in Langevin form, one might suspect that it can be categorized into one of the well-known models of dynamical critical phenomena classified by Hohenberg and Halperin [@hohenberg77]. However, this is not true in general. Crucially coherent (real parts of the couplings in Eq. ) and dissipative (imaginary parts) dynamics have different physical origins in driven-dissipative systems. In particular, the dissipative dynamics is determined by the intensity of the pump and loss terms, independently of the intrinsic Hamiltonian dynamics of the system. Equilibrium models [@hohenberg77], on the other hand, are constrained to have a specific relation between the reversible and dissipative terms to ensure a thermal Gibbs ensemble in steady state [@chaikin95:_princ; @tauber07] (see below). The unconstrained dynamics in driven systems is the key feature that can lead to novel dynamic critical behavior.
*Microscopic Model* – Having illustrated the nature of the problem with the effective classical equation we turn to a fully quantum description within the Keldysh framework. Our starting point is a non-unitary quantum evolution described by a many-body master equation in Lindblad form, or equivalently by the following dissipative Keldysh action (see SI for details of the correspondence) $$\begin{gathered}
\mathcal{S} = \int_{t,\mathbf{x}} \biggl\{ \left( \phi_c^{*},\phi_q^{*}
\right)
\begin{pmatrix}
0 & P^A\\
P^R & P^K
\end{pmatrix}
\begin{pmatrix}
\phi_c \\ \phi_q
\end{pmatrix} + i 4 \kappa \phi_c^{*} \phi_c \phi_q^{*} \phi_q \\
- \left[ \left( \lambda + i \kappa \right) \left( \phi_c^{*2} \phi_c \phi_q
+ \phi_q^{*2} \phi_c \phi_q \right) + c.c. \right] \biggr\}.
\label{eq:micro}\end{gathered}$$ Here $\phi_c$, $\phi_q$ are the “classical” and “quantum” fields, defined by the symmetric and anti-symmetric combinations of the fields on the forward and backward parts of the Keldysh contour [@kamenev09:_keldy; @altlandsimons]. The microscopic inverse Green’s functions are given by $P^R = i \partial_t + A
\nabla^2 + \mu - i \chi$, $P^A = P^{R \dag}$, $P^K = i \gamma$.
The importance of the various terms in the microscopic action in the vicinity of the critical point can be inferred from canonical power counting, which serves as a valuable guideline for the explicit evaluation of the problem. Vanishing of the mass scale $\chi$ defines a Gaussian fixed point with dynamical critical exponent $z=2$ ($ \w\sim k^z$, $k$ a momentum scale). Canonical power counting determines the scaling dimensions of the fields and interaction constants with respect to this fixed point: At criticality, the spectral components of the Gaussian action scale as $P^{R/A} \sim k^2$, while the Keldysh component generically takes a constant value, i.e., $P^K \sim
k^0$. Hence, to maintain scale invariance of the quadratic action, the scaling dimensions of the fields must be $[\phi_c] = \frac{d - 2}{2}$ and $[\phi_q] =
\frac{d + 2}{2}$. From this result we read off the canonical scaling dimensions of the interaction constants. This analysis shows that in the case of interest $d=3$, local vertices containing more than two quantum fields or more than five classical fields are irrelevant. For the critical problem, the last terms in both lines of Eq. can thus be skipped, massively simplifying the complexity of the problem. The only marginal term with two quantum fields is the Keldysh component of the single-particle inverse Green’s function, i.e., the noise vertex. In this sense, the critical theory is equivalent to a stochastic *classical* problem [@msr73; @dedominics76], as previously observed in [@mitra06; @mitra11_1]. But as noted above it cannot be *a priori* categorized in one of the dynamical universality classes [@hohenberg77] subject to an intrinsic equilibrium constraint.
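To make the counting explicit (elementary dimensional analysis with the field dimensions quoted above), the canonical dimensions of the couplings of two of the quartic vertices at $z=2$ read
$$\Bigl[\phi_c^{*2}\phi_c\phi_q\Bigr]:\;\; (2+d) - 3\,\frac{d-2}{2} - \frac{d+2}{2} = 4-d \;\overset{d=3}{=}\; 1>0\,, \qquad
\Bigl[\phi_c^{*}\phi_c\phi_q^{*}\phi_q\Bigr]:\;\; (2+d) - 2\,\frac{d-2}{2} - 2\,\frac{d+2}{2} = 2-d \;\overset{d=3}{=}\; -1<0\,,$$
so the classical vertex $\propto(\lambda+i\kappa)$ with a single quantum field is relevant in $d=3$, while the quartic vertex $\propto i4\kappa$ with two quantum fields is irrelevant, in line with the statements above.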
*Functional RG* – In order to focus quantitatively on the critical behavior we use a functional RG approach formulated originally by Wetterich [@wetterich93] and adapted to the Keldysh real time framework in Refs. [@gasenzer08; @berges09] (see SI for details). At the formal level this technique provides an exact functional flow equation for an effective action functional $\G_\L[\phi_c,\phi_q]$, which includes information on increasingly long wavelength fluctuations (at the microscopic cutoff scale $\G_{\L_0}\approx
\mathcal S$). In practice one works with an ansatz for the effective action and thereby projects the functional flow onto scaling equations for a finite set of coupling constants. For the description of general equilibrium [@berges02; @salmhofer01; @pawlowski07; @delamotte07; @rosten12; @boettcher12] and Ising dynamical [@canet07] critical behavior the functional RG gave results that are competitive with high-order epsilon expansion and with Monte Carlo simulations already in rather simple approximation schemes.
Our ansatz for the effective action is motivated by the power counting arguments introduced above. We include in $\G_\L$ all couplings that are relevant or marginal in this scheme:
$$\label{eq:ansatz}
\Gamma_{\Lambda} = \int_{t,\mathbf{x}} \left\{ \left( \phi_c^{*},\phi_q^{*} \right)
\begin{pmatrix}
0 & i Z \partial_t + \bar{K} \nabla^2 \\
i Z^{*} \partial_t + \bar{K}^{*} \nabla^2 & i \bar{\gamma}
\end{pmatrix}
\begin{pmatrix}
\phi_c \\ \phi_q
\end{pmatrix} - \left( \frac{\partial \bar{U}}{\partial \phi_c} \phi_q +
\frac{\partial \bar{U}^{*}}{\partial \phi_c^{*}} \phi_q^{*} \right)
\right\}.$$
The dynamical couplings $Z$ and $\bar K$ have to be taken complex valued in order to be consistent with power counting, even if the respective imaginary parts vanish (or are very small) at the microscopic scale: Successive momentum mode elimination implemented by the RG flow generates these terms due to the simultaneous presence of local coherent and dissipative couplings in the microscopic model. The fact that the spectral components of the effective action depend only linearly on $\phi_q$ allowed us to introduce an effective potential $\bar U$ determined by the complex static couplings. $\bar{U}(\rho_c)=
\frac{1}{2} \bar{u} \left( \rho_c - \rho_{0} \right)^2 + \frac{1}{6} \bar{u}'
\left( \rho_c - \rho_{0} \right)^3$ is a function of the $U(1)$ invariant combination of classical fields $\rho_c = \phi^*_c \phi_c$ alone. It has a mexican hat structure ensuring dynamical stability. With this choice we approach the transition from the ordered side, taking the limit of the stationary state condensate $\rho_0 = \phi_c^* \phi_c^{} |_{\rm ss} = \phi_0^* \phi_0^{} \to 0$.
All the parameters appearing in including the stationary condensate density $\rho_0$ are functions of the running cutoff $\Lambda$. Hence, the functional flow of $\Gamma_{\Lambda}$ is reduced by means of the approximate ansatz to the flow of a finite number of couplings $\mathbf{g} = \left( Z, \bar{K}, \rho_0, \bar{u}, \bar{u}', \bar{\gamma}
\right)^T$ determined by the $\beta$-functions $\Lambda \partial_{\Lambda}
\mathbf{g} = \beta_{\mathbf{g}}(\mathbf{g})$ (see SI). The critical system is described by a scaling solution to these flow equations. It is obtained as a fixed point of the flow of dimensionless renormalized couplings, which we derive in the following. First we rescale couplings with $Z$, $$\label{eq:1}
K = \bar{K}/Z, \quad u = \bar{u}/Z, \quad u' = \bar{u}'/Z, \quad \gamma =
\bar{\gamma}/{\left\lvert Z \right\rvert}^2.$$ Coherent and dissipative processes are encoded, respectively, in the real and imaginary parts of the renormalized coefficients $K = A + i D$, $u = \lambda + i
\kappa$, and $u' = \lambda' + i \kappa'$.
We define the first three dimensionless scaling variables to be the ratios of coherent to dissipative coefficients: $r_K = A/D$, $r_u = \lambda/\kappa$, and $r_{u'} = \lambda'/\kappa'$. Another three dimensionless variables are defined by rescaling the loss coefficients $\kappa$ and $\kappa'$ and the condensate density $\rho_0$: $$\label{eq:6}
w = \frac{2 \kappa \rho_0}{\L^2 D}, \quad
\tilde{\kappa} = \frac{\gamma \kappa}{2 \L D^2}, \quad
\tilde{\kappa}' = \frac{\gamma^2 \kappa'}{4 D^3}.$$ The flow equations for the couplings $\mathbf{r} = \left( r_K,r_u,r_{u'}
\right)^T$ and $\mathbf{s} = \left( w,\tilde{\kappa},\tilde{\kappa}' \right)^T$ form a closed set, $$\label{eq:7}
\Lambda \partial_{\Lambda} \mathbf{r} = \beta_{\mathbf{r}}(\mathbf{r},\mathbf{s}), \quad
\Lambda \partial_{\Lambda} \mathbf{s} = \beta_{\mathbf{s}}(\mathbf{r},\mathbf{s})$$ (see SI for the explicit form). As a consequence of the transformations and , these $\b$-functions acquire a contribution from the running anomalous dimensions $\eta_a({\mathbf r},{\mathbf
s}) = - \Lambda \partial_{\Lambda} \ln a$ associated with $a = Z, D, \gamma$.
![Flow in the complex plane of dimensionless renormalized couplings. (a) The microscopic action determines the initial values of the flow. Typically, the coherent propagation will dominate over the diffusion, $A \gg D$, while two-body collisions and two-body loss are on the same order of magnitude, $\tilde{\lambda} \approx \tilde{\kappa}$, with a similar relation for the marginal complex coupling $\tilde{u}'$. The initial flow is non-universal. (b) At criticality, the infrared (IR) flow approaches a universal linear domain encoding the critical exponents and anomalous dimensions. In particular, this regime is independent of the precise microscopic initial conditions. (c) The Wilson-Fisher fixed point describing the interacting critical system is purely imaginary.[]{data-label="fig:flow"}](Figure)
*Critical properties –* The universal behavior near the critical point is controlled by the infrared flow to a Wilson-Fisher like fixed point. The values of the coupling constants at the fixed point, determined by solving $\b_{\bf
s}(\br_{*},\bs_{*})=0$ and $\b_{\bf r}(\br_{*},\bs_{*})=0$, are given by: $$\label{eq:3}
\begin{split}
\mathbf{r}_{*} & = \left( r_{K *},r_{u *},r_{u' *} \right) = \mathbf{0},\\
\mathbf{s}_{*} & = \left( w_{*},\tilde{\kappa}_{*},\tilde{\kappa}'_{*}
\right)\approx \left( 0.475,5.308,51.383 \right).
\end{split}$$ The fact that $\mathbf{r}_*=0$ implies that the fixed point action is purely imaginary (or dissipative), as in Model A of Hohenberg and Halperin [@hohenberg77], cf. Fig. \[fig:flow\] (c). We interpret the fact that the ratios of coherent vs. dissipative couplings are zero at the fixed point as a manifestation of decoherence at low frequencies in an RG framework. The coupling values $\bs_{*}$ are identical to those obtained in an equilibrium classical $O(2)$ model from functional RG calculations at the same level of truncation [@berges02].
Let us turn to the linearized flow, which determines the universal behavior in the vicinity of the fixed point. We find that the two sectors corresponding to $\bs$ and $\br$ decouple in this regime, giving rise to a block diagonal stability matrix $${\partial\over \partial\ln \L}
\begin{pmatrix}
\delta \mathbf{r} \\ \delta \mathbf{s}
\end{pmatrix}
=
\begin{pmatrix}
N & 0 \\
0 & S
\end{pmatrix}
\begin{pmatrix}
\delta \mathbf{r} \\ \delta \mathbf{s}
\end{pmatrix},$$ where $\delta \mathbf{r} \equiv \mathbf{r}$, $\delta \mathbf{s} \equiv
\mathbf{s} - \mathbf{s}_{*}$, and $N,S$ are $3\times 3$ matrices (see SI).
The anomalous dimensions entering this flow are found by plugging the fixed point values $\br_{*}, \bs_{*}$ into the expressions for $\eta_a(\br,\bs)$. We obtain the scaling relation between the anomalous dimensions $\eta_Z=\eta_{\bar
\g}$, valid in the universal infrared regime. This leads to cancellation of $\eta_Z$ with $\eta_{\bar \g}$ in the static sector $S$ (see SI). The critical properties in this sector, encoded in the eigenvalues of $S$, become identical to those of the standard $O(2)$ transition. This includes the correlation length exponent $\nu \approx 0.716$ and the anomalous dimension $\eta \approx 0.039$ associated with the bare kinetic coefficient $\bar{K}$. These values are in good agreement with more sophisticated approximations [@guida98].
The equilibrium-like behavior in the $S$ sector can be seen as a result of an emergent symmetry. Locking of the noise to the dynamical term implied by $\eta_Z
= \eta_{\bar \gamma}$ leads to invariance of the long wavelength effective action (times $i$) under the transformation $\Phi_c(t,\mathbf{x}) \to \Phi_c(-
t,\mathbf{x}), \Phi_q(t,\mathbf{x}) \to \Phi_q(- t,\mathbf{x}) +
\frac{2}{\gamma} \sigma^z \partial_t \Phi_c(- t,\mathbf{x}), i \to - i$ with $\Phi_\nu = (\phi_\nu, \phi_\nu^*)^T, \nu = (c,q)$, $\sigma^z$ the Pauli matrix. It generalizes the symmetry noted in Refs. [@aron10; @canet11] to models that include also reversible couplings. The presence of this symmetry implies a classical FDR with a distribution function $F = 2T_\text{eff}/\omega$, governed by an effective temperature $T_\text{eff} = \bar\gamma/ (4
{\left\lvert Z \right\rvert})$. This quantity becomes scale independent in the universal critical regime where $\bar \gamma \sim k^{- \eta_{\bar{\gamma}}}$ and $Z \sim
k^{-\eta_Z}$ cancel. We interpret this finding as an asymptotic low-frequency thermalization mechanism of the driven system at criticality. The thermalized regime sets in below the Ginzburg scale where fluctuations start to dominate, for which we estimate perturbatively $\chi_G = \left( \gamma \kappa
\right)^2/\left( 16 \pi^2 D^3 \right)$ (see SI). The values entering here are determined on the mesoscopic scale, and we specify them for exciton-polariton systems in the SI based on Ref. [@wouters10]. Above the scale $\chi_G$, no global (scale independent) temperature can be defined in general. We note that, unlike Hohenberg-Halperin type models, here the symmetry implied by $\eta_Z =
\eta_{\bar\gamma}$ is not imposed at the microscopic level of the theory, but rather is emergent at the critical point.
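For concreteness, the classical FDR referred to here has the standard Keldysh form (conventions as in, e.g., [@kamenev09:_keldy]; this is a reminder rather than a new result),
$$G^K(\omega,\mathbf{q}) = F(\omega)\left[G^R(\omega,\mathbf{q}) - G^A(\omega,\mathbf{q})\right], \qquad F(\omega) = \frac{2 T_\text{eff}}{\omega}\,,$$
which is the low-frequency (classical) limit of the equilibrium distribution $\coth(\omega/2T)$ with $T\to T_\text{eff}$.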
The key new element in the driven-dissipative dynamics is encoded in the decoupled “drive” sector (the $3 \times 3$ matrix $N$ in our case). It describes the flow towards the emergent purely dissipative Model A fixed point (see Fig. \[fig:flow\] (b)) and thus reflects a mechanism of low frequency decoherence. This sector has no counterpart in the standard framework of dynamical critical phenomena and is special to driven-dissipative systems. In the deep infrared regime, only the lowest eigenvalue of this matrix governs the flow of the ratios. This means that only one new critical exponent $\eta_r
\approx - 0.101$ is encoded in this sector. Just as the dynamical critical exponent $z$ is independent of the static ones, the block diagonal structure of the stability matrix ensures that the drive exponent is independent of the exponents of the other sectors.
The fact that the inverse Green’s function in Eq. is specified by three real parameters, ${\mathop{\mathrm{Re}}}\bar{K}, {\mathop{\mathrm{Im}}}\bar{K}$, and $|Z|$ (the phase of $Z$ can be absorbed by a $U(1)$ transformation) allows for only three independent anomalous dimensions: $\eta_D$, $\eta_Z$ and the new exponent $\eta_r$. Hence the extension of critical dynamics described here is *maximal*, i.e., no further independent exponent will be found. Moreover this extension of the purely relaxational (Model A) dynamics leads to different universality than an extension that adds reversible couplings compatible with relaxation towards a Gibbs ensemble. The latter is obtained by adding real couplings to the imaginary ones with the same ratio of real to imaginary parts for all couplings [@graham73; @deker75; @tauber01; @longpaper]; in this case the above symmetry is present, while absent in the general non-equilibrium case. The compatible extension adds only an independent $1 \times 1$ sector $N$ to the purely relaxational problem, for which we find $\eta_R = - 0.143 \neq \eta_r$. This proves that the independence of dissipative and coherent dynamics defines indeed a new non-equilibrium universality class with no equilibrium counterpart. It is rooted in different symmetry properties of equilibrium vs. non-equilibrium situation.
*Experimental detection* – The novel anomalous dimension identified here leaves a clear fingerprint in single-particle observables accessible with current experimental technologies on different platforms. For ultracold atomic systems this can be achieved via RF-spectroscopy [@Stewart2008] close to the driven-dissipative BEC transition. In exciton-polariton condensates, the dispersion relation can be obtained from the energy- and momentum resolved luminescence spectrum as demonstrated in [@Utsunomiya2008]. Using the RG scaling behavior of the diffusion and propagation coefficients $D \sim D_0 \Lambda^{- \eta_D}$, $A = D
r_K \sim A_0 \Lambda^{- \eta_r - \eta_D}$, we obtain the anomalous scaling of the frequency and momentum resolved, renormalized retarded Green’s function $G^R
(\omega,\mathbf{q}) = (\omega - A_0 {\left\lvert \mathbf{q} \right\rvert}^{2 - \eta_r -\eta_D} + i
D_0 {\left\lvert \mathbf{q} \right\rvert}^{2 -\eta_D})^{-1}$, with $A_0$ and $D_0$ non-universal constants. Peak position and width are implied by the complex dispersion $\omega
\approx A_0 {\left\lvert \mathbf{q} \right\rvert}^{2.22} - i D_0 {\left\lvert \mathbf{q} \right\rvert}^{2.12} $. The energy resolution necessary to probe the critical behavior is again set by the Ginzburg scale $\chi_G$ (see above).
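As a consistency check on these numbers (simple arithmetic with the exponents quoted above; the value of $\eta_D$ is read off from the quoted dispersion rather than being stated separately here):
$$2-\eta_D \approx 2.12 \;\Rightarrow\; \eta_D \approx -0.12\,, \qquad 2-\eta_r-\eta_D \approx 2 + 0.10 + 0.12 \approx 2.22\,,$$
so the mismatch between the exponents of the real and imaginary parts of the complex dispersion, ${\left\lvert \mathbf{q} \right\rvert}^{2.22}$ versus ${\left\lvert \mathbf{q} \right\rvert}^{2.12}$, directly measures the new drive exponent $\eta_r \approx -0.101$.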
*Conclusions –* We have developed a Keldysh field theoretical approach to characterize the critical behavior of driven-dissipative three dimensional Bose systems at the condensation transition. The main result presents a hierarchical extension of classical critical phenomena. First, all static aspects are identical to the classical $O(2)$ critical point. In the next shell of the hierarchy a sub-class of the dynamical phenomena is identical to the purely dissipative Model A dynamics of the equilibrium critical point. Finally we identify manifestly non-equilibrium features of the critical dynamics, encoded in a new independent critical exponent that betrays the driven nature of the system.
*Acknowledgements* – We thank J. Berges, M. Buchhold, I. Carusotto, T. Esslinger, T. Gasenzer, A. Imamoglu, J. M. Pawlowski, P. Strack, S. Takei, U. C. Täuber, C. Wetterich and P. Zoller for useful discussions. This research was supported by the Austrian Science Fund (FWF) through the START grant Y 581-N16 and the SFB FoQuS (FWF Project No. F4006-N16).
I. Carusotto, C. Ciuti, Rev. Mod. Phys. [**85**]{}, 299 (2013).
J. Kasprzak *et al.*, Nature [**443**]{}, 409 (2006).
R. J. Schoelkopf and S. M. Girvin, Nature [**451**]{}, 7179 (2008).
J. Clarke and F. K. Wilhelm, Nature [**453**]{}, 1031 (2008). M. J. Hartmann, F. G. S. L. Brandao, M. B. Plenio, Laser & Photon. Rev. [ **2**]{}, No. 6, 527 (2008).
H. Ritsch, P. Domokos, F. Brennecke, T. Esslinger, arXiv:1210.0013 (2012).
G. Roumpos *et al.*, PNAS [**109**]{} no. 17, 6467 (2012).
A. Mitra, S. Takei, Y. B. Kim and A. J. Millis, Phys. Rev. Lett. [**97**]{}, 236808 (2006).
S. Diehl *et al.*, Nature Physics [**4**]{}, 878 (2008); S. Diehl *et al.*, Phys. Rev. Lett. [**105**]{}, 015702 (2010).
E. G. Dalla Torre, E. Demler, T. Giamarchi, E. Altman, Nature Physics [**6**]{}, 806 (2010).
E. G. Dalla Torre *et al.*, Phys. Rev. A [**87**]{}, 023831 (2013).
B. Öztop, M. Bordyuh, O. E. Müstecaplioglu, and H. E. Türeci, New J. Phys. [**14**]{}, 085011 (2012).
M. Wouters and I. Carusotto, Physical Review B [**74**]{}, 245316 (2006).
P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. [**49**]{}, 435 (1977).
I. Carusotto and C. Ciuti, Physical Review B [**72**]{}, 125335 (2005).
M. Wouters and I. Carusotto, Phys. Rev. Lett. [**99**]{}, 140402 (2007); J. Keeling, P. R. Eastham, M. H. Szymanska, and P. B. Littlewood, Phys. Rev. Lett. [**93**]{}, 226403 (2004); M. H. Szymanska, J. Keeling, and P. B. Littlewood, Phys. Rev. Lett. [**96**]{}, 230602 (2006); J. Keeling and N.G. Berloff, Phys. Rev. Lett [**100**]{}, 250401 (2008).
M. Wouters and I. Carusotto, Phys. Rev. Lett. [**105**]{}, 020602 (2010); M. Wouters, T. C. H. Liew, and V. Savona, Phys. Rev. B [**82**]{}, 245315 (2010).
J. Keeling, M. H. Szymanska, P. B. Littlewood, in *Optical Generation and Control of Quantum Coherence in Semiconductor Nanostructures*, Nanoscience and Technology, edited by G. Slavcheva and P. Roussignol (Springer Berlin, 2010) pp. 293-329. P. M. Chaikin and T. C. Lubensky, *Principles of Condensed Matter Physics*, (Cambridge University Press, 1995)
U. C. Täuber, Lecture Notes Phys. [**716**]{} 295 (2007).
A. Kamenev and A. Levchenko, Advances in Physics, [**58(3)**]{}, 197 (2009).
A. Altland and B. Simons, *Condensed Matter Field Theory* (Cambridge University Press, 2010).
P. C. Martin, E. D. Siggia, H. A. Rose, Phys. Rev. A [**8**]{}, 423 (1973).
C. De Dominicis, J. Physique (Paris) [**37**]{}, C1 (1976).
A. Mitra, A. Rosch, Phys. Rev. Lett. [**106**]{}, 106402 (2011).
C. Wetterich, Phys. Lett. B [**301**]{}, 90 (1993); Z. Phys. C [**57**]{}, 451 (1993).
T. Gasenzer, J. M. Pawlowski, Phys. Lett. B [**670**]{} 135 (2008).
J. Berges, G. Hoffmeister, Nucl. Phys. B [**813**]{} 383 (2009). J. Berges, N. Tetradis, C. Wetterich, Phys. Rept. [**363**]{} 223 (2002).
M. Salmhofer and C. Honerkamp, Prog. Theor. Phys. [**105**]{}, 1 (2001).
J. M. Pawlowski, Annals Phys. [**322**]{}, 2831 (2007).
B. Delamotte, cond-mat/0702365 (2007).
O. J. Rosten, Physics Reports [**511**]{}, 177 (2012).
I. Boettcher, J. M. Pawlowski, S. Diehl, Nucl. Phys. Proc. Suppl. [**228**]{}, 63 (2012).
L. Canet and H. Chaté J. Phys. A: Math. Theor. [**40**]{}, 1937 (2007).
R. Guida and J. Zinn-Justin, J. Phys. A **31**, 8103 (1998).
C. Aron, G. Biroli, and L. F. Cugliandolo, J. Stat. Mech. [**[1011]{}**]{}, 11018 (2010).
L. Canet, H. Chaté, B. Delamotte, J. Phys. A: Math. Theor. [**44**]{}, 495001 (2011).
L. M. Sieberer, S. D. Huber, E. Altman, S. Diehl, in preparation (2013).
R. Graham, Springer Tracts in Modern Physics, Vol. 66 (Springer-Verlag, Berlin, 1973).
U. Deker and F. Haake, Phys. Rev. A [**11**]{}, 2043 (1975).
U. C. Täuber, V. K. Akkineni, J. E. Santos, Phys. Rev. Lett. [**88**]{} 045702 (2002).
J. T. Stewart, J. P. Gaebler, and D. S. Jin, Nature [**454**]{}, 744 (2008).
S. Utsunomiya *et al.*, Nature Phys. [**4**]{}, 700 (2008).
|
---
abstract: 'We suggest that white dwarf (WD) pulsars can compete with neutron star (NS) pulsars for producing the excesses of cosmic ray electrons and positrons ($e^{\pm}$) observed by the PAMELA, ATIC/PPB-BETS, Fermi and H.E.S.S experiments. A merger of two WDs leads to a rapidly spinning WD with a rotational energy ($\sim 10^{50} \mathrm{erg}$) comparable to the NS case. The birth rate ($\sim 10^{-2} \mbox{-} 10^{-3} \mathrm{/yr/galaxy}$) is also similar, providing the right energy budget for the cosmic ray $e^{\pm}$. Applying the NS theory, we suggest that the WD pulsars can in principle produce $e^{\pm}$ up to $\sim 10$ TeV. In contrast to the NS model, the adiabatic and radiative energy losses of $e^{\pm}$ are negligible since their injection continues after the expansion of the pulsar wind nebula, and hence it is enough that a fraction $\sim 1 \%$ of WDs are magnetized ($\sim 10^7$–$10^9$ G) as observed. The long activity also increases the number of nearby sources ($\sim 100$), which reduces the Poisson fluctuation in the flux. The WD pulsars could dominate the quickly cooling $e^{\pm}$ above TeV energy as a second spectral bump or even surpass the NS pulsars in the observing energy range $\sim 10\mathrm{GeV} \mbox{-} 1 \mathrm{TeV}$, providing a background for the dark matter signals and a nice target for the future AMS-02, CALET and CTA experiment.'
author:
- Kazumi Kashiyama
- Kunihito Ioka
- Norita Kawanaka
title: 'White Dwarf Pulsars as Possible Cosmic Ray Electron-Positron Factories'
---
Introduction {#sec1}
============
Recently, the observational windows to the electron and positron ($e^{\pm}$) cosmic rays are rapidly expanding the energy frontier, revealing new aspects of our Universe. The PAMELA satellite [@Adriani:2008zr] shows that the cosmic ray positron fraction (the ratio of positrons to electrons plus positrons) rises in the energy range of $10$ to $100$ GeV, contrary to the theoretical prediction of secondary positrons produced by hadronic cosmic rays interacting with the interstellar medium (ISM) [@Moskalenko:1997gh]. Shortly thereafter, ATIC/PPB-BETS [@chang:2008; @Torii:2008xu] suggest an sharp excess of the $e^{\pm}$ with a peak at $600$ GeV, and although not confirming the ATIC/PPB-BETS sharp peak spectrum [^1], Fermi [@Ackermann:2010_8; @Abdo:2009zk; @Moiseev:2007js] and H.E.S.S [@Collaboration:2008aa; @Aharonian:2009ah] also suggest an excess of the $e^{\pm}$ total flux around $100$ GeV – $1$ TeV compared to theoretical predictions based on low energy cosmic ray $e^{\pm}$ spectrum [@Baltz:1998xv; @Ptuskin:2006b]. All these observations of the $e^{\pm}$ excesses probably connected with the PAMELA positron excess, and most likely suggest a new source, possibly the astrophysical accelerators [@Kawanaka:2009dk; @Hooper:2008kg; @Yuksel:2008rf; @Profumo:2008ms; @Malyshev:2009tw; @Grasso:2009ma; @Kistler:2009wm; @Heyl:2010md; @Fujita:2009wk; @Shaviv:2009bu; @Hu:2009bc; @Blasi:2009hv; @Blasi:2009bd; @Mertsch:2009ph; @Biermann:2009qi; @Ahlers:2009ae; @Kachelriess:2010gt; @Heinz:2002qj; @Ioka:2008cv; @Calvez:2010fd; @KIOK:2010] or dark matter annihilation [@Asano:2006nr; @ArkaniHamed:2008qn; @Baltz:1998xv; @Barger:2008su; @Barger:2009yt; @Bergstrom:2008gr; @Bertone:2008xr; @Borriello:2009fa; @Chen:2008fx; @Cheng:2002ej; @Cholis:2008qq; @Cholis:2008wq; @Cirelli:2008pk; @Cirelli:2008jk; @Crocker:2010gy; @Feldman:2009wv; @Fox:2008kb; @Hall:2008qu; @Harnik:2008uu; @Hisano:2004ds; @Hisano:2008ah; @Hisano:2009rc; @Hooper:2008kv; @Hooper:2009fj; @Feldman:2009; @Ibe:2008ye; @Ishiwata:2008cv; @Kadota:2010xm; @MarchRussell:2008tu; @Meade:2009iu; @Nomura:2008ru; @Pospelov:2008jd; @Yin:2008bs; @Zavala:2009zr; @Zhang:2008tb] /decay [@Arvanitaki:2009yb; @Arvanitaki:2008hq; @Barger:2009yt; @Borriello:2009fa; @Buchmuller:2009xv; @Chen:2008dh; @Chen:2008yi; @Chen:2008qs; @Cirelli:2008pk; @DeLopeAmigo:2009dc; @Fukuoka:2009cu; @Hamaguchi:2008ta; @Hamaguchi:2008rv; @Hisano:2008ah; @Ibarra:2008jk; @Ibarra:2009dr; @Ishiwata:2008cu; @Mardon:2009gw; @Meade:2009iu; @Nardi:2008ix; @Okada:2009bz; @Shirai:2009fq; @Yin:2008bs; @Zhang:2008tb], although there might remain alternatives such as the propagation effects [@Delahaye:2007fr; @Cowsik:2009ga; @Katz:2009yd; @Stawarz:2009ig; @Schlickeiser:2009qq] or proton contamination [@Israel:2009; @Fazely:2009jb; @Schubnell:2009gk]. These discoveries have excited the entire particle and astrophysics communities and prompted over 300 papers within a year. See [@YiZhong:2010] for a recent review.
The most fascinating possibility for the $e^{\pm}$ excesses is the dark matter, such as weakly interacting massive particles (WIMPs) that only appear beyond the Standard Model. Dark matter is a stable particle that accounts most of the matter in the Universe but the nature is not known yet. Usually, the observed $e^{\pm}$ excesses are far larger than expected in the conventional dark matter annihilation scenarios. The annihilation cross section must be enhanced by two or three orders of magnitudes larger than that for dark matter to leave the desired thermal relic density. Astrophysical boosts from substructure are difficult to accommodate such large enhancements. A possible solution is that dark matter interacts with a light force carrier, enhancing the annihilation by the Sommerfeld effect, only at the present time (not at freeze out) [@ArkaniHamed:2008qn; @Cholis:2008qq; @Hisano:2004ds]. The other possibilities include the dark matter decay [@Arvanitaki:2009yb; @Arvanitaki:2008hq; @Barger:2009yt; @Borriello:2009fa; @Buchmuller:2009xv; @Chen:2008dh; @Chen:2008yi; @Chen:2008qs; @Cirelli:2008pk; @DeLopeAmigo:2009dc; @Fukuoka:2009cu; @Hamaguchi:2008ta; @Hamaguchi:2008rv; @Hisano:2008ah; @Ibarra:2008jk; @Ibarra:2009dr; @Ishiwata:2008cu; @Mardon:2009gw; @Meade:2009iu; @Nardi:2008ix; @Okada:2009bz; @Shirai:2009fq; @Yin:2008bs; @Zhang:2008tb] and the annihilation boosted by resonances [@Feldman:2009; @Ibe:2008ye]. Because the PAMELA anti-proton observations show no excess [@Adriani:2008zq; @Adriani:2010rc], any dark matter model should preferentially produce leptons rather than hadrons. The other multi-messenger constraints with radio, gamma-ray and neutrino observations are also getting tight but not completely excluding the dark matter models [@Meade:2009iu; @Cirelli:2008pk; @Yin:2008bs; @Barger:2008su; @Barger:2009yt; @Nardi:2008ix; @Zhang:2008tb; @Bertone:2008xr; @Zavala:2009zr; @Borriello:2009fa; @Crocker:2010gy; @Hisano:2008ah; @DeLopeAmigo:2009dc; @Ishiwata:2008cu; @Ackermann:2010rg; @Abdo:2010ex; @Abdo:2010dk].
More conservative candidates are the astrophysical accelerators in our Galaxy, such as neutron star (NS) pulsars [@Kawanaka:2009dk; @Hooper:2008kg; @Yuksel:2008rf; @Profumo:2008ms; @Malyshev:2009tw; @Grasso:2009ma; @Kistler:2009wm; @Heyl:2010md], supernova remnants (SNRs)[@Fujita:2009wk; @Shaviv:2009bu; @Hu:2009bc; @Blasi:2009hv; @Blasi:2009bd; @Mertsch:2009ph; @Biermann:2009qi; @Ahlers:2009ae; @Kachelriess:2010gt], microquasars [@Heinz:2002qj], or possibly a gamma-ray burst [@Ioka:2008cv; @Calvez:2010fd]. Under plausible assumptions, they can supply sufficient energy for $e^{\pm}$ cosmic rays, as already known before the PAMELA era [@Shen70; @MaoShen72; @boulares89; @Aha:95; @Atoyan:1995; @chi:1996; @zhang:2001; @grimani:2007; @Buesching:2008hr; @ShenBerkey68; @cowsik79; @erlykin02; @pohl98; @Moskalenko:1997gh; @Strong:1998fr; @Kobayashi:2003kp; @Berezhko:2003pf; @Strong:2004de]. Cosmic ray $e^{\pm}$ propagate via diffusion in our Galaxy deflected by magnetic fields [@Berezinski:1990]. Since $e^{\pm}$ cannot propagate far away due to energy losses by the synchrotron and inverse Compton emission, the sources should be located nearby ($\lesssim 1$ kpc). This proximity of the source provides a chance to directly probe the as-yet-unknown cosmic particle acceleration [@Kobayashi:2003kp] and investigate how the $e^{\pm}$ cosmic rays escape from the source to the ISM [@KIOK:2010]. Unlike dark matter, the astrophysical models generally predict, if at all, a broad spectral peak due to the finite source duration [@Kawanaka:2009dk; @Ioka:2008cv]. The hadronic models such as SNRs also predicts the antiproton excess above $\sim 100$ GeV [@Fujita:2009wk; @Blasi:2009bd] (but see [@Kachelriess:2010gt]), as first pointed out by Fujita et al. [@Fujita:2009wk], as well as the excesses of secondary nuclei such as the boron-to-carbon and titanium-to-iron ratio [@Mertsch:2009ph; @Ahlers:2009ae]. The arrival anisotropy [@MaoShen72; @Buesching:2008hr; @Ioka:2008cv] is also useful to discriminate between dark matter and astrophysical origins. The exciting thing is that these signatures will be soon proved by the next generation experiments, such as AMS-02 [@Beischer:2009; @Pato:2010ih], CALET [@torii:2006; @torii:2008] onboard the Experiment Module of the International Space Station, and CTA [@CTA:2010] on the ground, in coming several years.
With the forthcoming next breakthrough, it is important to lay down the theoretical foundation for the TeV $e^{\pm}$ windows. In particular, there could still be room for additional astrophysical signals since the $e^{\pm}$ cosmic rays have only $\lesssim 1\%$ energy budget of the hadronic cosmic rays. Although the supernova (SN)-related sources such as NS pulsars and SNRs may be the most plausible sources of the TeV $e^{\pm}$, there should be only a few local sources [@Watters:2010], while $e^{\pm}$ from distant sources can not reach us due to the fast inverse Compton and synchrotron cooling [@Kobayashi:2003kp; @Kawanaka:2009dk]. Hence a clean window is possibly open for the dark matter or other astrophysical signals. A part of this window may have been already implied by the spectral cutoff around $\sim 1$ TeV in the H.E.S.S. data. The future AMS-02 experiment will detect $e^{\pm}$ up to $\sim 1$ TeV [@Beischer:2009; @Pato:2010ih], while CALET will observe electrons up to $\sim 10$ TeV with an energy resolution better than a few % ($>100$ GeV) [@torii:2006; @torii:2008]. Also CTA will be able to measure the cosmic ray electron spectrum up to $\sim 15$ TeV [@CTA:2010].
In this paper, we propose yet another $e^{\pm}$ source – white dwarf (WD) pulsars – that could potentially dominate the $\gtrsim$ TeV $e^{\pm}$ window or even already have been detected as the $e^{\pm}$ excesses above the conventional models [@Baltz:1998xv; @Ptuskin:2006b]. A WD pulsar is an analogue of the NS pulsar with the central compact object being a WD rather than a NS. A spinning magnetized compact object generates huge electric fields (potential differences) in the magnetosphere via unipolar induction [@goldreich:1969; @ruderman:1975; @cheng:1986], and accelerates particles to produce $e^{\pm}$ pairs if certain conditions are met. Then, almost all the spindown energy is transferred to the outflows of relativistic $e^{\pm}$, resulting in the cosmic ray $e^{\pm}$.
In our model, a rapidly spinning WD is mainly formed by a merger of two ordinary WDs (or possibly by an accretion), since the observed WDs are usually slow rotators [@Kawaler:2003sr]. Such a merger scenario was proposed to explain Type Ia supernovae (SNIa). However, it is not clear that such mergers lead to the SN explosions [@Pakmor:2009yx]. It seems reasonable that about half of mergers leave rapidly spinning WDs with the event rate of about one per century in our Galaxy [@Nelemans:2001hp; @Farmer:2003pa]. The strong magnetic fields ($> 10^6$ G) are also expected as a fraction $\sim 10\%$ of WDs [@Liebert:2002qu; @Schmidt:2003ip]. Combining these facts, we will estimate that the WD pulsars can potentially provide the right amount of energy for the cosmic ray $e^{\pm}$ (see Sec.\[sec2\]). We note that the WD mergers are also related to the low frequency gravitational wave background for LISA [@LISA:2010].
The WD pulsars have been theoretically adopted to interpret the observational features of the anomalous X-ray pulsars [@paczynski:1990; @usov:1993; @usov:1988], the close binary AE Aquarii [@Ikhsanov:2005qf], and the transient radio source GCRT J1745–3009 [@Zhang:2005kz]. Our calculations for the $e^{\pm}$ production are essentially similar to those of Usov [@usov:1988; @usov:1993] and Zhang & Gil [@Zhang:2005kz]. However, as far as we know, this is the first time that WD pulsars have been applied to the $e^{\pm}$ cosmic rays. We also discuss the adiabatic energy losses of $e^{\pm}$ in the pulsar wind nebula, which are found to be negligible in contrast to the NS model. From the observational viewpoint, the WD pulsars have not been firmly established, whereas there are several indications for their existence, such as the hard X-ray pulsation in AE Aquarii [@Terada:2007br]. The WD pulsars are likely still below the current level of detection because they are rare, $\sim 10^{-4}$ of all WDs, and relatively dim.
This paper is organized as follows. In Sec.\[sec2\], we show that the WD pulsars can produce and accelerate $e^{\pm}$ up to the energy above TeV. At first we show that the energy budgets of WD pulsars are large enough to explain the PAMELA positron excess by order-of-magnitude estimates. Then we discuss, more closely, whether or not WD pulsars can produce and accelerate $e^{\pm}$ up to the energy above TeV by considering the magnetospheres and pulsar wind nebulae. We also point out that there should be much more nearby active WD pulsars compared with NS pulsars since the lifetime of WD pulsars are much longer. In Sec.\[sec3\], we discuss the propagation of the $e^{\pm}$ from WD pulsars, and show the possible energy spectrum observed by the current and future observations in the WD pulsar dominant model and the WD and NS pulsar mixed model. As complements, we also give a short review of the current status of the observations of WD pulsar candidates. In Sec.\[sec4\], we summarize our paper and discuss open issues.
White dwarf pulsars {#sec2}
===================
Energy Budgets of White Dwarf Pulsars
-------------------------------------
In this subsection, we show that WDs potentially have enough rotational energy for producing high energy $e^{\pm}$ cosmic rays.
NS pulsars, which are formed after the SN explosions, are one of the most promising candidates for the astrophysical sources of high energy positrons. For the PAMELA positron excess, each NS pulsar should provide mean energy $\sim 10^{48}\ {\rm erg}$ to positrons [@Kawanaka:2009dk; @Hooper:2008kg], since the energy budgets of cosmic ray positrons is $\sim 0.1\%$ of that of cosmic ray protons, which is estimated as $\sim 10^{50}\ {\rm erg}$ per each SN, and the positrons suffer from the radiative cooling during the propagation more than the protons. The intrinsic energy source is the rotational energy of a newborn NS, which is typically $$\label{Erot1}
E_{\text{rot,NS}} \approx \frac{1}{2} I \Omega^2 \sim 10^{50} \left(\frac{M}{1.0 M_{\odot}} \right) \left(\frac{R}{10^{6} \text{cm}} \right)^{2} \left(\frac{\Omega}{10^{2} \text{s}^{-1}} \right)^{2} \text{erg},$$ where $I$ is the moment of inertia of the NS. Then, if all the NS pulsars are born with the above rotational energy and the $\sim 1 \%$ energy is used for producing and accelerating $e^{\pm}$, the NS pulsars can supply enough amounts of $e^{\pm}$ for explaining the PAMELA positron excess [@Kawanaka:2009dk].
Let us show that double degenerate WD binary mergers can also supply a sufficient amount of rotational energy. Here we consider the mass $0.6 M_{\odot}$ and radius $R \sim 10^{8.7} \text{cm}$ for each WD, which are typical observed values [@Falcon:2010]. Just after a merger of the binary, the rotational speed $v_{\text{rot}}$ can be estimated as $v_{\text{rot}} \approx (GM/R)^{1/2} \sim 10^8 \text{cm/s}$, which corresponds to the mass shedding limit, and the angular frequency is about $\Omega = v_{\text{rot}}/R \sim 0.1 \text{s}^{-1}$. Then, the rotational energy of the merged object is $$\label{Erot2}
E_{\text{rot,WD}}\approx \frac{1}{2} I \Omega^2 \sim 10^{50}\left(\frac{M}{1.0 M_{\odot}} \right) \left(\frac{R}{10^{8.7} \text{cm}} \right)^{2} \left(\frac{\Omega}{0.1 \text{s}^{-1}} \right)^{2} \text{erg},$$ which is comparable to the NS pulsar case in Eq.(\[Erot1\]). The event rate $\eta_{\text{WD}}$ of the double degenerate WD mergers in our Galaxy remains uncertain. Any theoretical estimate requires a knowledge of the initial mass function for binary stars, the distribution of their initial separation, and also the evolution of the system during periods of nonconservative mass transfer. There are still reasonable estimates in the range [@Nelemans:2001hp; @Farmer:2003pa], $$\label{eta}
\eta_{\text{WD}} \sim 10^{-2}\mbox{--}10^{-3} \ /\text{yr}/\text{galaxy}.$$ This is comparable to the typical birth rate of NS pulsars [@Narayan:1987; @Lorimer:1993]. Therefore, from the viewpoint of energy budget in Eqs. (\[Erot1\]), (\[Erot2\]) and (\[eta\]), the WDs are also good candidates for the high energy $e^{\pm}$ sources as the NS pulsars, if the merged binaries can efficiently produce and accelerate $e^{\pm}$.
The estimated merger rate is also similar to that of SNIa, which is one of the reasons that the double degenerate WD mergers are possible candidates for SNIa. Since the typical WD mass is $0.6M_{\odot}$, the merged objects do not exceed the Chandrasekhar limit $1.4M_{\odot}$ even without any mass loss. Then, they leave fast-rotating WDs, as suggested by some recent simulations [@Loren:2009], and could become WD pulsars. In this paper, we assume that a fair fraction of double degenerate WD mergers result in the WD pulsars. [^2] The accretion scenario is another possibility for the formation of rapidly rotating WDs. In the single degenerate binary, which consists of a WD and a main sequence star, there should be a mass transfer from the main sequence star to the WD as the binary separation becomes smaller and the Roche radius becomes larger than the radius of the main sequence star. In this stage, the angular momentum is also transferred to the WD, and the WD can spin up to around the mass shedding limit with the rotational energy as large as Eq.(\[Erot2\]). In Sec.\[observation\], we refer to such a WD pulsar candidate, AE Aquarii.
Since the birth rate is relatively uncertain in the accretion scenario, we just concentrate on the merger scenario in this paper.
$e^{\pm}$ Production and Acceleration {#Sec.condition}
-------------------------------------
In this subsection we discuss the possibility that WD pulsars emit high energy $e^{\pm}$ above TeV. In order to produce the TeV $e^{\pm}$, a pulsar has to
1. [produce $e^{\pm}$ pairs]{}
2. [accelerate $e^{\pm}$ up to TeV.]{}
We show that WD pulsars can meet both of the conditions. From now on we set fiducial parameters of the WD pulsar’s surface dipole magnetic field, angular frequency, and radius as $B_{\text{p}} = 10^{8}\text{G}$, $\Omega = 0.1 \text{s}^{-1}$ and $R = 10^{8.7}\text{cm}$, respectively. For comparison, we set fiducial parameters of the NS pulsars as $B_{\text{p}} = 10^{12}\text{G}$, $\Omega = 10^2\text{s}^{-1}$ and $R = 10^{6}\text{cm}$.
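For orientation, these fiducial values imply a rough spin-down luminosity (here we simply quote the standard magnetic-dipole formula, $L_{\text{sd}} \simeq B_{\text{p}}^2 \Omega^4 R^6/6c^3$, which is not derived in this section, so the numbers are indicative only):
$$L_{\text{sd}} \simeq \frac{B_{\text{p}}^2 \Omega^4 R^6}{6c^3} \sim 10^{32} \left( \frac{B_{\text{p}}}{10^8 \text{G}}\right)^{2} \left( \frac{\Omega}{0.1 \text{s}^{-1}}\right)^{4} \left( \frac{R}{10^{8.7} \text{cm}}\right)^{6} \text{erg} \ \text{s}^{-1},$$
so that the spin-down time $\sim E_{\text{rot,WD}}/L_{\text{sd}}$ is many orders of magnitude longer than for the fiducial NS pulsar, consistent with the long activity of WD pulsars emphasized above.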
### $e^{\pm}$ pair production in magnetosphere
Some of the observed WDs have strong magnetic fields of $B \sim 10^{7\mbox{-}9} \text{G}$ [@Liebert:2002qu; @Schmidt:2003ip]. For such WDs, if they are rapidly rotating as we discussed in the previous subsection, an electric field along the magnetic field is induced on the surface and charged particles come out from the surface layer of the pulsars. Then we can expect that, as in the case of ordinary NS pulsars, a corotating magnetosphere is formed around the WDs, in which the charge distribution of the plasma should be the Goldreich-Julian (GJ) density in a stationary case [@goldreich:1969], $$\label{GJ}
\rho_0 = {\bf \nabla} \cdot \frac{(\bf{\Omega} \times {\bf r}) \times {\bf B}}{4 \pi c} \approx - \frac{\bf{\Omega} \cdot \bf B}{2 \pi c} \sim -\frac{10^{5}}{|Z|} \left( \frac{B_{\text{p}}}{10^8 \text{G}}\right) \left( \frac{\Omega}{0.1\text{s}^{-1}}\right) \text{cm}^{-3},$$ where $Z$ is the elementary charge of particles in the plasma. Here we assume that the large scale configuration of the magnetic field is dipole. Since the corotating speed of the magnetic field lines cannot exceed the speed of light, the magnetic field cannot be closed outside the light cylinder $R_{\text{lc}} = c/\Omega$. This fact leads to the open magnetic field lines in the polar region. The electric potential difference across this open field lines is [@goldreich:1969] $$\label{DelVmax}
\Delta V_{\text{max}} = \frac{B_{\text{p}} \Omega^2 R^3}{2 c^2} \sim 10^{13} \left( \frac{B_{\text{p}}}{10^8 \text{G}}\right) \left( \frac{\Omega}{0.1 \text{s}^{-1}}\right)^{2} \left( \frac{R}{10^{8.7} \text{cm}}\right)^{3} \text{Volt},$$ which is the maximum value for the pulsars in principle.
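As a quick numerical check of the two estimates above (plugging the fiducial values into the quoted expressions; this is plain arithmetic, with the GJ value read as a number density for $|Z|=1$):
$$\frac{\Omega B_{\text{p}}}{2 \pi c e} \simeq \frac{0.1 \times 10^{8}}{2\pi \left(3\times 10^{10}\right)\left(4.8\times 10^{-10}\right)} \ \text{cm}^{-3} \simeq 1\times 10^{5} \ \text{cm}^{-3}, \qquad \frac{B_{\text{p}} \Omega^2 R^3}{2 c^2} \simeq 7\times 10^{10} \ \text{statvolt} \simeq 2\times 10^{13} \ \text{Volt},$$
and the corresponding maximum energy gain $e \Delta V_{\text{max}} \sim 10^{13}\ \text{eV}$ is what allows $e^{\pm}$ energies of up to $\sim 10$ TeV in this scenario.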
If the GJ density is completely realized in the magnetosphere, electric fields along the magnetic field lines are absent: ${\bf E} \cdot {\bf B} = 0$. Since the charged particles are tied to the strong magnetic field, the acceleration of $e^{\pm}$ cannot occur. That leads to the absence of high energy $\gamma$-ray emission from accelerated $e^{\pm}$ and of successive pair production avalanches. However, there are two prospective scenarios for forming a region where the charge density is not equal to the GJ density, so that $e^{\pm}$ are accelerated and produced in the NS pulsar magnetosphere: the polar cap [@ruderman:1975; @Arons:1979] and outer gap [@cheng:1986] models. From now on, we assume that the magnetosphere structure of WD pulsars is similar to that of NS pulsars, and discuss the $e^{\pm}$ pair production especially in the polar cap region.
In polar cap models, electric potential drops along the magnetic fields are formed in the polar region of the pulsars. There are several different types of polar cap models. First, the angle between the magnetic and rotational axes determines the sign of the electric charge of the particles propagating along the open magnetic field lines, in accordance with Eq.(\[GJ\]) [@goldreich:1969]. The GJ density in the polar cap region is positive when ${\bf \Omega} \cdot {\bf B} < 0$ and negative when ${\bf \Omega} \cdot {\bf B} > 0$. Second, polar cap models depend on whether or not steady charge currents flow out from the surface of the pole region. After the GJ density is realized, there are no electric forces working on the charged particles in the surface layer. Hence, whether or not the charged particles come out from the surface is determined by the competition between the binding energy of ions or electrons at the surface and the thermal energy. In the original model proposed by Ruderman and Sutherland [@ruderman:1975], they assume that the binding energy is larger. Then, due to the outflow along the open magnetic field, a gap where the charge density is almost $0$ is formed in the pole region. On the other hand, if the thermal energy is larger, there exists a positive or negative space-charge-limited flow [@Arons:1979]. Even in this case, it is shown that, by virtue of the curvature of the magnetic fields, the charge density deviates from the GJ density and electric potential drops along the open magnetic field lines can be formed [@Arons:1979]. Although a general relativistic frame dragging effect also contributes to forming electric potential drops in the polar cap region [@MuslimovTsygan:1991], the effect can be neglected compared with the effect of magnetic field curvature in the case of the WDs [@Zhang:2005kz].
In the polar cap region, where the GJ density is not realized, primary electrons or positrons are accelerated, and they emit curvature radiation, which interacts with the magnetic fields and produces secondary $e^{\pm}$ pairs, $\gamma + B \rightarrow e^- + e^+ $ [@ruderman:1975]. The secondary $e^{\pm}$ are also accelerated and emit curvature radiation that produces further $e^{\pm}$ pairs (pair creation avalanche). Inverse Compton scatterings can also serve as a way to produce high energy $e^{\pm}$ and successive pair creation avalanches [@ZhangQiao:1996]. Due to the abundant charges supplied by the avalanche, the GJ density is realized at a finite distance from the surface and the polar cap formation stops. In the quasi-steady state, the size of the polar cap region can be approximated as $h \approx l$, where $l$ is the mean free path of the $e^{\pm}$ pair creation process. Put the other way around, $e^{\pm}$ pair creation avalanches can be formed only when the available size of the polar cap region $h_{\text{max}}$ is larger than $l$. Chen & Ruderman first derived the condition for NS pulsars and succeeded in showing the NS pulsar “death line” [@ChenRuderman:1993]. Harding & Muslimov also derived the NS death line under more general conditions [@Harding:2001; @Harding:2002]. Here we follow Chen & Ruderman’s approach and derive the $e^{\pm}$ pair production avalanche condition in the case of WD pulsars. We discuss the validity of this simple treatment in Sec.\[sec4\].
Going through any potential drop $\Delta V$ along the open magnetic field lines, $e^{\pm}$ are accelerated up to the Lorentz factor $$\label{gamma}
\gamma = \frac{e \Delta V }{m_{\text{e}} c^2},$$ where $m_{\text{e}}$ is the mass of electrons. The characteristic frequency of curvature radiation photons from the accelerated $e^{\pm}$ is $$\label{omega}
\omega_{\text{c}} = \gamma^3 \frac{c}{r_{\text{c}}},$$ where $r_{\text{c}}$ is the curvature radius of the magnetic field lines. The mean free path of a photon of energy $\hbar \omega > 2m_{\text{e}} c^2$ moving through a region of magnetic fields is [@Erber:1966] $$l = 4.4 \ \frac{\hbar c}{e^2}\frac{\hbar}{m_{\text{e}} c} \frac{B_{\text{q}}}{B_{\perp}} \exp \left( \frac{4}{3 \chi} \right); \ (\chi << 1), \notag$$ $$\label{chi}
\chi\equiv \frac{\hbar \omega}{2 m_{\text{e}} c^2} \frac{B_{\perp}}{B_{\text{q}}}.$$ Here $B_{\text{q}}=m_{\text{e}}^2 c^3/e \hbar = 4.4 \times 10^{13}\text{G}$ and $B_{\perp} = B_{\text{s}} \sin \theta$, where $\theta$ is the angle between the direction of propagation of the photon and the surface magnetic field lines of the pulsar. $B_{\text{s}}$ is the local magnetic field at the surface of the pulsar, which does not necessarily coincide with the dipole field $B_{\text{p}}$. At a distance $h$ above the pulsar surface, $\sin \theta$ can be approximated as $h/r_{\text{c}}$, so that $$\label{Bperp}
B_{\perp} \approx B_{\text{s}} \frac{h}{r_{\text{c}}}.$$ We consider the situation $l \approx h$, which is realized when $\chi^{-1} = {\it O}(10)$ almost independently of the precise values of the parameters characterizing NS or WD pulsars, since small changes in $\chi$ correspond to exponentially large changes in $l$. Here we take the critical value $\chi^{-1} = 15$ following [@ChenRuderman:1993]. Substituting Eq.(\[gamma\]), (\[omega\]) and (\[Bperp\]) into Eq.(\[chi\]), this condition is given by $$\label{pair_cre_con}
\left( \frac{e \Delta V}{m_{\text{e}} c^2} \right)^3 \frac{\hbar}{2 m_{\text{e}} c r_{\text{c}}} \frac{h}{r_{\text{c}}} \frac{B_{\text{s}}}{B_{\text{q}}} \approx \frac{1}{15}.$$ Eq.(\[pair\_cre\_con\]) corresponds to a general condition for $e^{\pm}$ pair production avalanches in the polar cap region of pulsars. Then we have to specify $h$, $\Delta V$, $B_{\text{s}}$ and $r_{\text{c}}$. The thickness $h$ and the potential drop $\Delta V$ in the polar cap region depend on which polar cap model we adopt. Here we consider the original polar cap model proposed by Ruderman and Sutherland [@ruderman:1975]. In this case, the relation between $h$ and $\Delta V$ is given by $$\label{DelV}
\Delta V = \frac{B_{\text{s}} \Omega h^2}{2c}.$$ Then, since $\Delta V$ cannot exceed the maximum potential drop available in a pulsar magnetosphere, $\Delta V_{\text{max}}$ in Eq.($\ref{DelVmax}$), $h$ also cannot exceed the maximum thickness $$h_{\text{max}} \approx \left( \frac{R^3 \Omega}{c} \right)^{1/2}.$$ $B_{\text{s}}$ and $r_{\text{c}}$ depend on the configuration of the surface magnetic field, which is very uncertain even in the case of the NS pulsars. Here we suppose curved magnetic fields in the polar cap region and set $r_{\text{c}} \approx R$ and $B_{\text{s}} \approx B_{\text{p}}$. In this case, the condition for $e^{\pm}$ pair production avalanche (Eq.(\[pair\_cre\_con\])) is $$\left( \frac{e \Delta V_{\text{max}}}{m_{\text{e}} c^2} \right)^3 \frac{\hbar}{2 m_{\text{e}} c R} \frac{h_{\text{max}}}{R} \frac{B_{\text{p}}}{B_{\text{q}}} \gtrsim \frac{1}{15},$$ which is equivalent to $$\label{avalanche_polar}
4 \log B_{\text{p}} -6.5 \log P + 9.5 \log R \gtrsim 96.7,$$ where the units of $B_{\text{p}}$, $P=2\pi/\Omega$ and $R$ are \[G\], \[sec\] and \[cm\], respectively. By substituting $R \sim 10^{6}\text{cm}$, the typical radius of NSs, Chen and Ruderman succeeded in explaining the NS pulsar death line [@ChenRuderman:1993]. In the case of WD pulsars, substituting our fiducial parameters $B_{\text{p}} \sim 10^8 \text{G}$, $P \sim 50 \text{s}$ ($\Omega \sim 0.1 \text{s}^{-1}$) and $R \sim 10^{8.7} \text{cm}$, we find that WD pulsars well satisfy Eq.(\[avalanche\_polar\]), and thus also the condition (i) in Sec.\[Sec.condition\]. Fig.\[death\_line\] shows the death lines of the WD and NS pulsars together with the fiducial parameters of the WD pulsars. We also plot the parameters of the observed WD pulsar candidates, AE Aquarii and EUVE J0317-855. As we discuss in Sec.\[observation\], pulsed emission like that of ordinary NS pulsars is observed for AE Aquarii but not for EUVE J0317-855, which is consistent with the death line.
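As a quick numerical check (a minimal sketch, not part of the original analysis), the condition of Eq.(\[avalanche\_polar\]) can be evaluated directly for the fiducial WD parameters and, for comparison, for an assumed typical NS pulsar ($B_{\text{p}} \sim 10^{12}$ G, $P \sim 1$ s, $R \sim 10^{6}$ cm):

```python
import numpy as np

def death_line_lhs(B_p, P, R):
    """Left-hand side of the pair-creation avalanche condition, Eq. (avalanche_polar):
    4 log10(B_p) - 6.5 log10(P) + 9.5 log10(R) >= 96.7, with B_p in G, P in s, R in cm."""
    return 4.0*np.log10(B_p) - 6.5*np.log10(P) + 9.5*np.log10(R)

# fiducial WD pulsar: above the death line (lhs ~ 104 > 96.7)
print(death_line_lhs(1e8, 50.0, 10**8.7))
# assumed typical NS pulsar for comparison: also above (lhs ~ 105)
print(death_line_lhs(1e12, 1.0, 1e6))
```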
![This figure shows the death lines of WD (solid line) and NS (dashed line) pulsars. The cross indicates the fiducial parameters of the WD pulsars, $B_{\text{p}} = 10^8 \text{G}$ and $P = 50 \text{s}$. The observed data of the rapidly rotating magnetised WDs, AE Aquarii (filled circle) and EUVE J0317-855 (open square), are also plotted. The parameters and observational properties of these WDs are given in Sec.\[observation\].[]{data-label="death_line"}](deathline.eps){width="110mm"}
### $e^{\pm}$ acceleration and cooling in pulsar wind nebula
In the previous subsection we showed that WD pulsars can produce $e^{\pm}$ pairs in their magnetospheres. In this subsection we discuss the acceleration and cooling of the $e^{\pm}$ in the pulsar wind nebulae.
Fig.\[wind\_nebula\] shows the schematic picture of an expected WD pulsar wind nebula. Once a WD pulsar is formed, the relativistic wind blasts off from the pulsar magnetosphere at $\sim R_{\text{lc}}$. The supersonic wind becomes subsonic by passing the shock front at $\sim R_{\text{in}}$, reaches the ISM and forms a contact discontinuity. Since the wind is continuously injected by the pulsar, the contact discontinuity keeps sweeping up the interstellar matter, and the outer shock front is formed at $\sim R_{\text{out}}$. We emphasize that, unlike the case of NS pulsars, no SN shock front exists outside the shocked region, since there is supposed to be no SN explosion when the WD pulsar is formed.
First we estimate the energy of $e^{\pm}$ available in the wind region $R_{\text{lc}} < r < R_{\text{in}}$. In principle, $e^{\pm}$ can be accelerated up to the energy at which equipartition is realized between the wind particles and the magnetic field, $\epsilon N = B^2/8\pi$, that is $$\label{wind_gamma}
\epsilon = \frac{B^2}{8\pi N},$$ where $N$ is the number density of $e^{\pm}$. If the number flux is conserved in the wind region, $4\pi r^2 c N \approx \text{const}$, $N$ can be described as $$N = N_{\text{lc}} \left( \frac{R_{\text{lc}}}{r} \right)^2 ,$$ where $N_{\text{lc}}$ is the number density at the light cylinder which can be estimated as $$\label{Ndensity}
N_{\text{lc}} = \frac{\rho_{\text{lc}}}{e} {\cal M} = \frac{B_{\text{lc}}\Omega}{2\pi c e} {\cal M},$$ where $\rho_{\text{lc}}$ and $B_{\text{lc}}$ are the GJ density (Eq.(\[GJ\])) and magnetic field strength at the light cylinder, respectively and ${\cal M}$ is the multiplicity of $e^{\pm}$ in the magnetosphere. Inside the light cylinder $r < R_{\text{lc}} = c/\Omega$, the magnetic field is almost pure dipole, $$\label{Bconf1}
B = B_{\text{p}} \left( \frac{R}{r} \right)^3 .$$ For the fiducial parameters of WD pulsars, the radius of the light cylinder is $R_{\text{lc}} \sim 3\times 10^{11} \text{cm}$ and $B_{\text{lc}} = B_{\text{p}} (R/(c/ \Omega))^3 \sim 1 \text{G}$. Outside the light cylinder $r > R_{\text{lc}}$, if the energy flux of the magnetic field is also conserved $B\cdot r \approx \text{const}$, then $$\label{Bconf2}
B = B_{\text{lc}} \frac{R_{\text{lc}}}{r}. %= B_p \left( \frac{R}{c/\Omega} \right)^3 \left( \frac{c/\Omega}{r} \right) = \frac{2\Delta V_{\text{max}}}{r}.$$ Substituting $B_{\text{lc}} = B_{\text{p}} (\Omega R/c)^3$ and Eq.(\[Ndensity\]) to Eq.(\[wind\_gamma\]), the typical energy of $e^{\pm}$ in the wind region can be described as $$\label{emax}
\epsilon = \frac{e \Delta V_{\text{max}}}{{\cal M}} \sim 10 {\cal M}^{-1} \left( \frac{B_{\text{p}}}{10^{8}\text{G}} \right) \left( \frac{\Omega}{0.1 \text{s}^{-1}} \right)^2 \left( \frac{R}{10^{8.7}\text{cm}} \right)^3 \text{TeV},$$ where $\Delta V_{\text{max}}$ is given in Eq.(\[DelVmax\]). The multiplicity of $e^{\pm}$ in the pulsar magnetosphere and wind nebula has not been understood clearly even in the case of NS pulsars, and there are several discussions [@chi:1996; @zhang:2001]. Although details of the multiplicity in the magnetosphere cannot be discussed at this stage[^3], TeV energy $e^{\pm}$ can come out of the wind region and the condition (ii) in Sec.\[Sec.condition\] can be fulfilled as long as ${\cal M}$ is not large.
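The order of magnitude of Eq.(\[emax\]) can be checked with a short sketch; here we assume the standard polar cap form $\Delta V_{\text{max}} = B_{\text{p}} \Omega^2 R^3 / 2c^2$ for the maximum potential drop of Eq.(\[DelVmax\]), which is not restated in this section:

```python
# cgs constants
c = 2.998e10            # speed of light [cm/s]
e = 4.803e-10           # elementary charge [esu]
erg_to_TeV = 1.0/1.602  # 1 erg ~ 0.62 TeV

def eps_wind(B_p=1e8, Omega=0.1, R=10**8.7, M=1.0):
    """Typical e+- energy in the wind region, Eq. (emax): eps = e*DeltaV_max/M,
    with the assumed polar cap potential DeltaV_max = B_p*Omega^2*R^3/(2*c^2)."""
    dV_max = B_p * Omega**2 * R**3 / (2.0*c**2)   # [statvolt]
    return e * dV_max / M * erg_to_TeV            # [TeV]

print(eps_wind())        # of order 10 TeV for the fiducial WD pulsar
print(eps_wind(M=10.0))  # scales as 1/M
```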
Secondly we estimate the adiabatic and radiative cooling of $e^{\pm}$ in the shocked region. To that end, we have to identify the radii of the inner and outer shock front $R_{\text{in}}$ and $R_{\text{out}}$. The equation of motion for the outer shock front is $$\label{windEOM}
\frac{d}{dt} \left\{ \frac{4 \pi}{3} R_{\text{out}}^3 \rho \frac{dR_{\text{out}}}{dt} \right\} = 4\pi R_{\text{out}}{}^2 P_{\text{sh}},$$ where $P_{\text{sh}}$ is the pressure of the shocked region and $\rho$ is the density of the ISM $\rho \sim 10^{-24} \text{g cm}^{-3}$. The energy conservation law at the outer shock front is $$\label{windEcon}
\frac{d}{dt} \left\{ \frac{4 \pi}{3} R_{\text{out}}{}^3 \frac{3}{2} P_{\text{sh}} \right\} = L - P_{\text{sh}} \frac{d}{dt} \left\{ \frac{4 \pi}{3} R_{\text{out}}{}^3 \right\} .$$ Here $L$ is the spin down luminosity of WD pulsars, $$\label{Lspindown}
L = \frac{B_{\text{p}}^2 \Omega^4 R^6}{c^3},$$ and we suppose that in the shocked region the particles are relativistic and its internal energy is $3P/2 $. Solving Eq.(\[windEOM\]) and (\[windEcon\]) for $R_{\text{out}}(t)$, $$\label{Rout}
\begin{split}
R_{\text{out}}(t) &= \left( \frac{125}{154 \pi} \right)^{1/5} \left( \frac{L}{\rho} \right)^{1/5} t^{3/5} \\
&\sim 10^{16} \left( \frac{B_{\text{p}}}{10^{8}\text{G}} \right)^{2/5} \left( \frac{\Omega}{0.1 \text{s}^{-1}} \right)^{4/5} \left( \frac{R}{10^{8.7}\text{cm}} \right)^{6/5} \left( \frac{t}{\text{yr}} \right)^{3/5} \text{cm}.
\end{split}$$
The outer shock finally decays when the pressure of the shocked region $P_{\text{sh}}$ becomes equal to that of the ISM, $p$. At this stage the shocked region becomes physically continuous with the ISM. Solving Eq.(\[windEOM\]) and (\[windEcon\]) for $P_{\text{sh}}$, $$\label{P_s}
\begin{split}
P_{\text{sh}} &= \frac{7}{25} \left ( \frac{125}{154\pi} \right)^{2/5} \rho{}^{3/5} L^{2/5} t^{-4/5} \\
&\sim 10^{-8} \left( \frac{B_{\text{p}}}{10^{8}\text{G}} \right)^{4/5} \left( \frac{\Omega}{0.1 \text{s}^{-1}} \right)^{8/5} \left( \frac{R}{10^{8.7}\text{cm}} \right)^{12/5} \left( \frac{t}{\text{yr}} \right)^{-4/5} \text{dyn} \ \text{cm}^{-2}.
\end{split}$$ Meanwhile, assuming that the density of the ISM is $\rho \sim 10^{-24} \text{g} \ \text{cm}^{-3}$, i.e. a hydrogen number density $n \sim 1 \text{cm}^{-3}$, the ISM pressure can be estimated as $$\label{p}
p = nk_{\text{B}} T \sim 10^{-13} \left( \frac{T}{10^3 \text{K}} \right) \text{dyn} \ \text{cm}^{-2},$$ where $k_{\text{B}} = 1.4 \times 10^{-16} \text{erg} \ \text{K}^{-1}$ is the Boltzmann constant and $T$ is the temperature of the ISM. From Eq.(\[P\_s\]) and (\[p\]), the outer shock decays at about $$\label{decay}
t_{\text{dec}} \sim 10^{6} \left( \frac{T}{10^3 \text{K}} \right)^{5/4} \text{yr},$$ for the fiducial parameters of the WD pulsars. The lifetime of a pulsar $\tau$ can be estimated as $$\label{tau}
\tau = \frac{E_{\text{rot}}}{L}.$$ From Eq.(\[Erot2\]) and (\[Lspindown\]), for the fiducial parameters of the WD pulsars, $$\label{WDlifetime}
\tau_{\text{WD}} \sim 10^9 \left( \frac{M}{1.0 M_{\odot}} \right) \left( \frac{B_{\text{p}}}{10^8 \text{G}} \right)^{-2} \left( \frac{\Omega}{0.1\text{s}^{-1}} \right)^{-2} \left( \frac{R}{10^{8.7} \text{cm}} \right)^{-4} \text{yr}.$$ Comparing Eq.(\[decay\]) and (\[WDlifetime\]), we find that the outer shock decays at a very early stage of the lifetime of WD pulsars. For $t < t_{\text{dec}}$, the ram pressure of the wind balances the pressure of the shocked region at the inner shock front, $$\label{Rs}
\frac{L}{4\pi R_{\text{in}}{}^2 c}= P_{\text{sh}}.$$ Then the radius of the inner shock front can be estimated as $$\label{Rin1}
\begin{split}
R_{\text{in}}(t < t_{\text{dec}}) &= \left( \frac{25}{28\pi} \right)^{1/2} \left( \frac{154\pi}{125} \right)^{1/5} \left( \frac{L}{\rho c^{5/3}} \right)^{3/10} t^{2/5} \\
&\sim 10^{15} \left( \frac{t}{\text{yr}} \right)^{2/5} \text{cm},
\end{split}$$ for the fiducial parameters. For $t > t_{\text{dec}}$, there is no well-defined shocked region any more; the radius of the inner shock front is determined by the balance between the wind pressure and the pressure of the ISM $p$ instead of $P_{\text{sh}}$, and $R_{\text{in}}(t)$ becomes constant in time. For the fiducial parameters, $$\label{Rin_2}
R_{\text{in}}(t > t_{\text{dec}}) \sim 10^{17} \text{cm}.$$ In the case of NS pulsars, the adiabatic cooling due to the expansion of the shocked region is a considerable cooling process in the pulsar wind nebula. However, in the case of WD pulsars, since the outer edge of the shocked region does not expand after $t \gtrsim t_{\text{dec}}$, the adiabatic cooling gives only a minor contribution to the cooling of the high energy $e^{\pm}$.
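The order-of-magnitude estimates of Eqs.(\[Rout\]), (\[P\_s\]), (\[decay\]) and (\[Rin1\]) for the fiducial WD pulsar can be reproduced with the following sketch; the inner radius is obtained from the ram-pressure balance $L/4\pi R_{\text{in}}^2 c = P_{\text{sh}}$, and the numerical constants are the standard cgs values:

```python
import numpy as np

# cgs constants and fiducial WD pulsar / ISM parameters
c, kB, yr = 2.998e10, 1.4e-16, 3.156e7
B_p, Omega, R = 1e8, 0.1, 10**8.7           # surface field [G], spin [s^-1], radius [cm]
rho, T = 1e-24, 1e3                          # ISM mass density [g/cm^3] and temperature [K]

L = B_p**2 * Omega**4 * R**6 / c**3          # spin-down luminosity, Eq. (Lspindown) [erg/s]

def R_out(t):                                # outer shock radius, Eq. (Rout) [cm]; t in s
    return (125.0/(154.0*np.pi))**0.2 * (L/rho)**0.2 * t**0.6

def P_sh(t):                                 # shocked-region pressure, Eq. (P_s) [dyn/cm^2]
    return 7.0/25.0 * (125.0/(154.0*np.pi))**0.4 * rho**0.6 * L**0.4 * t**-0.8

def R_in(t):                                 # inner shock radius from L/(4 pi R^2 c) = P_sh
    return np.sqrt(L / (4.0*np.pi*c*P_sh(t)))

p_ISM = 1.0 * kB * T                         # ISM pressure for n ~ 1 cm^-3, Eq. (p)
t_dec = yr * (P_sh(yr)/p_ISM)**1.25          # P_sh scales as t^(-4/5), so solve P_sh(t_dec) = p_ISM

print(f"L     ~ {L:.1e} erg/s")
print(f"R_out ~ {R_out(yr):.1e} cm at t = 1 yr")    # of order 1e16 cm, cf. Eq. (Rout)
print(f"R_in  ~ {R_in(yr):.1e} cm at t = 1 yr")     # of order 1e15 cm, cf. Eq. (Rin1)
print(f"t_dec ~ {t_dec/yr:.1e} yr")                 # ~1e6 yr, cf. Eq. (decay)
```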
Now we discuss the $e^{\pm}$ radiative cooling in the shocked region $r > R_{\text{in}}$. In the region swept by the shock, the magnetic field may be highly fluctuating, and the high energy $e^{\pm}$ coming from the wind region are trapped by multiple scattering off the field and lose energy via synchrotron radiation and inverse Compton scattering. Here we take the Bohm limit, where the fluctuation of the magnetic field $\delta B$ is comparable to the coherent magnetic field strength $B$. In this limit, the diffusion coefficient $D_{\text{sh}}$ can be approximated by $$D_{\text{sh}} = \frac{c r_{\text{g}}}{3},$$ where $r_{\text{g}} = \epsilon/eB$ is the Larmor radius of the $e^{\pm}$ with energy $\epsilon$. The time scale $t_{\text{dif}}$ for the $e^{\pm}$ trapping in the shocked region is given by $$\label{tdiff}
t_{\text{dif}} = \frac{d^2}{2D_{\text{sh}}} = \frac{3}{2} \frac{eBd^2}{\epsilon c},$$ where $d$ is the size of the shocked region.
We consider the age $t = \tau_{\text{WD}} > t_{\text{dec}}$. For $t > t_{\text{dec}}$, we set the size of the shocked region to the radius of the forward shock front at $t = t_{\text{dec}}$, that is $$\label{D_WD}
d \approx R_{\text{out}}(t=t_{\text{dec}}) \sim 10^{19} \text{cm},$$ for the fiducial parameters. As we have shown in Eq.(\[Rin\_2\]), the radius of the inner shock front is about $R_{\text{in}} \sim 10^{17} \text{cm}$ at $t = \tau_{\text{WD}}$. From Eq.(\[Bconf2\]), the strength of the magnetic field at the inner edge $B_{\text{in}}$ can be estimated as $$\label{B_in_WD}
B_{\text{in}} \sim 3 \times 10^{-6} \left( \frac{R_{\text{in}}}{10^{17}\text{cm}} \right)^{-1} \text{G},$$ which is almost the same as that of the ISM. Then, substituting Eq.(\[D\_WD\]) and Eq.(\[B\_in\_WD\]) into Eq.(\[tdiff\])[^4], the time scale for high energy $e^{\pm}$ with energy $\epsilon$ to be trapped in the shocked region is $$t_{\text{dif}} \sim 3 \times 10^{4} \left( \frac{\epsilon}{10\text{TeV}} \right)^{-1} \text{yr}.$$ The synchrotron energy loss of the $e^{\pm}$ with energy $\epsilon$ is described as $$\label{synchrotron}
\frac{d \epsilon}{d t} = -\frac{4}{3}\sigma_{\text{T}} c \beta^2 \frac{B^2}{8\pi} \left( \frac{\epsilon}{m_{\text{e}} c^2} \right)^2,$$ where $\sigma_{\text{T}}$ is the Thomson scattering cross section, and $\beta = v_{\text{e}}/c$ is the velocity in terms of the speed of light. Then from Eq.(\[synchrotron\]), the typical energy loss of the electron with energy $\epsilon$ during the time scale $t_{\text{dif}}$ can be estimated as, $$\label{Eloss}
\frac{\Delta \epsilon}{\epsilon} \sim 0.1 \left( \frac{B_{\text{in}}}{3 \times 10^{-6} \text{G}} \right)^3.$$ This means that the high energy $e^{\pm}$ injected into the shocked region lose roughly $10 \%$ of their energy by synchrotron radiation before diffusing out into the ISM. Although inverse Compton scattering is also a relevant radiative cooling process, it would be comparable to the synchrotron cooling. We can therefore conclude that the radiative energy loss of $e^{\pm}$ in the pulsar wind nebula is not so large.
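A minimal numerical check of the trapping time Eq.(\[tdiff\]) and the synchrotron loss fraction Eq.(\[Eloss\]) for 10 TeV $e^{\pm}$, using the fiducial values $B_{\text{in}} \sim 3\times10^{-6}$ G and $d \sim 10^{19}$ cm:

```python
import numpy as np

# cgs constants
c, e, sigma_T, m_e = 2.998e10, 4.803e-10, 6.65e-25, 9.109e-28
yr, erg_per_TeV = 3.156e7, 1.602

B_in = 3e-6                 # field at the inner edge, Eq. (B_in_WD) [G]
d = 1e19                    # size of the shocked region, Eq. (D_WD) [cm]
eps = 10.0 * erg_per_TeV    # e+- energy: 10 TeV in erg

# trapping time in the Bohm limit, Eq. (tdiff), and synchrotron loss rate, Eq. (synchrotron), beta ~ 1
t_dif = 1.5 * e * B_in * d**2 / (eps * c)
dedt = (4.0/3.0) * sigma_T * c * (B_in**2/(8.0*np.pi)) * (eps/(m_e*c**2))**2

print(f"t_dif         ~ {t_dif/yr:.1e} yr")       # of order 1e4 yr
print(f"Delta eps/eps ~ {dedt*t_dif/eps:.2f}")    # ~0.1, cf. Eq. (Eloss)
```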
Differences between white dwarf and neutron star pulsars {#Difference_in_NS_WD}
--------------------------------------------------------
In this subsection, we discuss the differences between WD pulsars and NS pulsars as TeV $e^{\pm}$ sources.
Ordinary NS pulsars have already been discussed as candidates for the high energy $e^{\pm}$ sources responsible for the PAMELA positron excess ([@Kawanaka:2009dk] and the references listed in Sec.\[sec1\]). Compared with the NS pulsars, the WD pulsars have distinct features as high energy $e^{\pm}$ sources. As we saw in the previous sections, the WD pulsars can provide high energy $e^{\pm}$ and their intrinsic energy budgets are almost the same as those of the NS pulsars. However, the magnetic field and rotation speed of the WD pulsars are much smaller than those of the NS pulsars. As a result, the spin down luminosity (Eq.(\[Lspindown\])) of the WDs is much smaller than that of the NSs, $$L_{\text{WD}} \sim 10^{41} \left(\frac{B_{\text{p}}}{10^8 \text{G}} \right)^2 \left(\frac{\Omega}{0.1 \text{s}^{-1}} \right)^{4} \left(\frac{R}{10^{8.7} \text{cm}} \right)^6 \text{erg/yr} \sim 10^{-4} L_{\text{NS}}.$$ Then, from Eq.(\[tau\]), the lifetime of the WD pulsars is much longer than that of the NS pulsars, $$\tau_{\text{WD}} \sim 10^9 \text{yr} \sim 10^4 \tau_{\text{NS}}.$$ Therefore, the number of WD pulsars which are currently TeV $e^{\pm}$ sources is much larger than that of NS pulsars. Since high energy electrons above TeV cannot propagate more than $\sim 1$ kpc in our Galaxy, the number density of the WD pulsars which can be TeV $e^{\pm}$ sources is $$\label{nWD}
n_{\text{WD}} = \frac{\alpha \cdot \eta_{\text{WD}} \cdot \tau_{\text{WD}}}{V_{\text{G}}} \sim 10^3 \alpha \left( \frac{\eta_{\text{WD}}}{10^{-2} \text{yr}^{-1} \text{galaxy}^{-1}} \right) \left( \frac{\tau_{\text{WD}}}{10^{9} \text{yr}} \right) \left( \frac{V_{\text{G}}}{10^3 \text{kpc}^{3}} \right)^{-1} \text{kpc}^{-3},$$ where $V_{\text{G}}$ is the volume of our Galaxy and $\eta_{\text{WD}}$ is the event rate of the double degenerate WD binary merger in our Galaxy, Eq.(\[eta\]). The parameter $\alpha$ is the fraction of the binary mergers that lead to WD pulsars with strong magnetic fields $B \gtrsim 10^{8} \text{G}$. Eq.(\[nWD\]) means that there may be enough WD pulsars supplying TeV $e^{\pm}$ near the Earth, although the parameter $\alpha$ has a large ambiguity at this stage. On the other hand, the number density of the TeV $e^{\pm}$ sources for the NS pulsars is $$\label{rate_in_Galaxy}
n_{\text{NS}} \sim 0.1 \text{kpc}^{-3} \sim 10^{-4} \alpha^{-1} n_{\text{WD}}.$$ Eq.(\[rate\_in\_Galaxy\]) means that it is uncertain whether NS pulsars are $e^{\pm}$ sources above TeV energy or not.
Another important difference is the environment of the pulsars, especially the strength of the magnetic field in the pulsar wind nebulae. The magnetic field is crucial for the cooling process since it determines how the high energy $e^{\pm}$ produced by the pulsars are trapped and lose their energy by synchrotron radiation in the pulsar wind nebulae. In the case of the WD pulsars, the strength of the magnetic field in the shocked region is, for most of their lifetime, comparable to that of the ISM. As we saw in the previous subsection, this may imply that most of the accelerated $e^{\pm}$ directly escape into the ISM without cooling in the shocked region. On the other hand, in the NS pulsar wind nebulae the situation is quite different. First, the magnetic fields are much stronger than in the WD pulsar case. Second, a SN shock front exists outside the pulsar wind nebula. These facts make the cooling process in the pulsar wind nebula more complicated, and the escape process of $e^{\pm}$ into the ISM is still uncertain.
In the case of the NS pulsars, almost all the spin down luminosity is transformed into the kinetic energy of the $e^{\pm}$ wind before the wind goes into the shocked region [@KennelCoroniti:1984]. The NS pulsars are consistent with being the source of the observed $e^{\pm}$ if the $e^{\pm}$ lose $\sim 99 \%$ of their energy in the shocked region [@Kawanaka:2009dk]. As we discussed in Sec.\[sec2\], the total energy budgets of the WD and NS pulsars are almost the same when almost all the double degenerate WD binaries merge to become WD pulsars, that is, when $\alpha = 1$. Since the $e^{\pm}$ lose only $\sim 10 \%$ of their energy in the WD pulsar wind nebulae (Eq.(\[Eloss\])), the expected amount of $e^{\pm}$ from WD pulsars could then exceed the current observational bound. Hence if $$\label{fiducial_alpha}
\alpha \sim 0.01,$$ we can expect that the PAMELA positron excess can be explained by the WD pulsars without any contribution from other sources. We note that the fraction in Eq.(\[fiducial\_alpha\]) seems consistent with the observed fraction of magnetized WDs, $\sim 10 \%$. A brief summary of the comparison between WD and NS pulsars is given in Table \[comparison\].
|           | $E_{\text{rot}}$ \[erg\] | $L$ \[erg/yr\] | $\tau$ \[yr\] | birth rate \[yr$^{-1}$ galaxy$^{-1}$\] | $n$ \[kpc$^{-3}$\] | $e^{\pm}$ escape fraction \[%\] | $e^{\pm}$ injection rate \[erg yr$^{-1}$ kpc$^{-3}$\] |
|-----------|----------------|----------------|---------------|-------------------|---------------------|-----------|-----------------------|
| WD pulsar | $\sim 10^{50}$ | $\sim 10^{41}$ | $\sim 10^{9}$ | $\sim \alpha/100$ | $\sim 10^{3}\alpha$ | $\sim 90$ | $\sim 10^{44} \alpha$ |
| NS pulsar | $\sim 10^{50}$ | $\sim 10^{45}$ | $\sim 10^{5}$ | $\sim 1/100$      | $\sim 0.1$          | $\sim 1$  | $\sim 10^{42}$        |
: The comparison between WD and NS pulsars as $e^{\pm}$ sources.[]{data-label="comparison"}
Energy Spectrum Calculation {#sec3}
===========================
In this section, we calculate the $e^{\pm}$ energy spectrum observed at the solar system after the propagation in our Galaxy for the WD pulsar model. We solve the diffusion equation taking into account the Klein-Nishina (KN) effect.
Electron distribution function from a single source
---------------------------------------------------
Here we formulate the cosmic ray $e^{\pm}$ propagation through our Galaxy according to [@Atoyan:1995]. For simplicity we assume that the diffusion approximation is good (e.g., neglecting convection), and that the $e^{\pm}$ propagate in a spherically symmetric way and diffuse homogeneously in our Galaxy [^5]. Under these assumptions, the $e^{\pm}$ propagation equation can be written as follows: $$\label{diff_eq}
\frac{\partial f}{\partial t}=\frac{D}{r^2}\frac{\partial }{\partial r}\left( r^2\frac{\partial f}{\partial r} \right)- \frac{\partial }{\partial \epsilon}(Pf)+Q.$$ Here $f(t,\epsilon,r)~\mathrm{[m^{-3} \cdot GeV^{-1}]}$ is the energy distribution function of $e^{\pm}$. $P(\epsilon)$ is the cooling function of the $e^{\pm}$, which corresponds to the energy loss rate during the propagation. $D(\epsilon)$ denotes the diffusion coefficient, which does not depend on the position $r$. $Q(t,\epsilon,r)$ is the energy injection term. Considering a $\delta$-function injection at the time $t=t_0$, that is $$Q(t,\epsilon,r) = \Delta N(\epsilon) \delta(r)\delta(t-t_0),$$ we can obtain the analytical solution [@Atoyan:1995]. For an arbitrary injection spectrum $\Delta N(\epsilon)$, the energy distribution can be described as $$\label{dis_short}
f(r,t,\epsilon) = \frac{\Delta N(\epsilon_{t,0})}{\pi^{3/2}r_{\text{dif}}{}^3}\frac{ P(\epsilon_{t,0})}{P(\epsilon)}\exp \left( -\frac{r^2}{r_{\text{dif}}{}^2} \right).$$ Here $\epsilon_{t,0}$ corresponds to the energy of $e^{\pm}$ which are cooled down to $\epsilon$ during the time $t-t_0$, and is obtained by solving the integral equation $$t-t_0= \int^{\epsilon_{t,0}}_{\epsilon} \frac{d \epsilon'}{P(\epsilon')}.$$ The $e^{\pm}$ propagate to the diffusion length defined by $$\label{r_diff}
r_{\text{dif}}(\epsilon, \epsilon_{t,0}) = 2\left( \int^{\epsilon_{t,0}}_{\epsilon}
\frac{D(\epsilon')}{P(\epsilon')} d \epsilon' \right)^{1/2}.$$ Eq.(\[dis\_short\]) is the distribution function for the $\delta$-functional (short term) injection source, i.e., the Green function of Eq.(\[diff\_eq\]). From now on, we set the observation time to $t=0$.
Even for a continuous (long term) injection source, the distribution function can be calculated by integrating Eq.(\[dis\_short\]) over the active time of the source. The integration can be done numerically by transforming the integration variable from $dt_0$ to $d \epsilon_{t,0} = P(\epsilon_{t,0}) dt_0$. (That is, we take $\epsilon_{t,0}$ as the time coordinate.) Substituting $\Delta N(\epsilon_{t,0}(\epsilon,t_0)) = Q(\epsilon_{t,0}(\epsilon,t_0))dt_0$ into Eq.(\[dis\_short\]) and integrating over $dt_0$, the resulting distribution function reads $$\label{dis_long}
f(\epsilon,r) = \frac{1}{\pi^{3/2} P(\epsilon)}\int^{\epsilon_{\hat{t}}}_{\epsilon}\frac{Q(\epsilon_{t,0})}{r_{\text{dif}}(\epsilon,\epsilon_{t,0})^3}\exp \left( -\frac{r^2}{r_{\text{dif}}(\epsilon,\epsilon_t)^{2}} \right)d \epsilon_{t,0}.$$ Here $\epsilon_{\hat{t}}$ is the energy of $e^{\pm}$ when they leave the source at the source birth time $t=\hat{t}~(<0)$, that is $$\hat{t} = -\int^{\epsilon_{\hat{t}}}_{\epsilon} \frac{d \epsilon'}{P(\epsilon')}.$$ The flux at $r$ is given by $\Phi(\epsilon,r)=(c/4\pi) f(\epsilon,r) \mathrm{[m^{-2} \cdot s^{-1} \cdot sr^{-1} \cdot GeV^{-1}]}$.
Now, in order to estimate the observed $e^{\pm}$ flux, we have to specify the cooling function $P(\epsilon)$, diffusion coefficient $D(\epsilon)$ and injected energy spectrum $Q(\epsilon)$. First, we formulate the $e^{\pm}$ cooling function including the KN effect. Following the equation (5) in [@Stawarz:2009ig], the energy loss rate of the $e^{\pm}$ including the KN effect is written as $$\label{cooling_KN}
P(\epsilon) = -\frac{d \epsilon}{dt}= \frac{4}{3}\sigma_{\text{T}} c \left( \frac{\epsilon}{m_{\text{e}} c^2} \right)^2 \left[ \frac{B_{\text{ISM}}^2}{8\pi} + \int d \epsilon_{\text{ph}} u_{\text{tot}}(\epsilon_{\text{ph}}) f_{\text{KN}}\left(\frac{4 \epsilon \epsilon_{\text{ph}}}{m_{\text{e}}{}^2 c^4} \right) \right].$$ Here $\sigma_{\text{T}} = 6.65 \times 10^{-25} \mathrm{cm^2}$ is the Thomson scattering cross section and $\epsilon_{\text{ph}}$ is the energy of the background photons. $B_{\text{ISM}}$ is the magnetic field strength in the ISM, for which we set $B_{\text{ISM}} = 1 \mathrm{\mu G}$. $f_{\text{KN}}$ is the KN suppression function, which is explicitly given in [@Moderski:2005]: $$f_{\text{KN}}(\tilde{b}) = \frac{9g(\tilde{b})}{\tilde{b}^3},$$ where $$g(\tilde{b}) = \left( \frac{1}{2}\tilde{b} +6 +\frac{6}{\tilde{b}} \right)\ln (1+\tilde{b})-\left(\frac{11}{12}\tilde{b}^3 +6\tilde{b}^2+ 9\tilde{b}+4 \right) \frac{1}{(1+\tilde{b})^2} - 2 + 2\text{Li}_2(-\tilde{b})$$ and $\text{Li}_2$ is the dilogarithm $$\text{Li}_2(z) = \int^{0}_{z}\frac{\ln(1-s)ds}{s}.$$ The ISM photon field consists of the stellar radiation, the radiation reemitted by dust, and the CMB, $$u_{\text{tot}} = u_{\text{star}} + u_{\text{dust}} + u_{\text{CMB}}.$$ Here we model the interstellar radiation field using the results of the GALPROP code [@Poter:2008].
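For reference, the KN suppression factor above can be evaluated numerically; the sketch below implements $f_{\text{KN}}$ and the dilogarithm directly from the integral definition given in the text, and recovers $f_{\text{KN}} \to 1$ in the Thomson regime $\tilde{b} \ll 1$:

```python
import numpy as np
from scipy.integrate import quad

def dilog(z):
    """Dilogarithm Li_2(z) = int_z^0 ln(1-s)/s ds, as defined in the text."""
    val, _ = quad(lambda s: np.log(1.0 - s)/s, z, 0.0, epsabs=1e-13, epsrel=1e-13)
    return val

def f_KN(b):
    """Klein-Nishina suppression factor f_KN(b) = 9 g(b)/b^3 (Moderski et al. 2005)."""
    g = ((0.5*b + 6.0 + 6.0/b)*np.log1p(b)
         - (11.0/12.0*b**3 + 6.0*b**2 + 9.0*b + 4.0)/(1.0 + b)**2
         - 2.0 + 2.0*dilog(-b))
    return 9.0*g/b**3

# f_KN -> 1 in the Thomson regime (b << 1) and is strongly suppressed for b >> 1
for b in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(b, f_KN(b))
```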
![The energy density of the ISM photon field at $8\mathrm{kpc}$ from the center of our Galaxy [@Poter:2008].[]{data-label="fig:photon"}](Photon_field_fit.eps){width="70mm"}
Fig.\[fig:photon\] shows the ISM radiation field energy density $\epsilon_{\text{ph}} \times u_{\text{tot}}(\epsilon_{\text{ph}})$ at $\sim 8\mathrm{kpc}$ from the center of our Galaxy. Following the formulation above, we numerically calculate the $e^{\pm}$ cooling function including the KN effect.
![The cooling function $P(\epsilon)$ for $e^{\pm}$ at $\sim 8$ kpc from the center of our Galaxy.[]{data-label="fig:cooling"}](Cooling_GAL_KN.eps){width="70mm"}
Fig.\[fig:cooling\] shows the cooling function for $e^{\pm}$ with or without the KN effect. The solid line shows the function $P(\epsilon)$ in Eq.(\[cooling\_KN\]). The dotted line shows the cooling function when we set $f_{\text{KN}} = 1$. We can see that the KN effect becomes relevant for $\epsilon \gtrsim 1 \mathrm{TeV}$.
Second, we formulate the diffusion coefficient. As the diffusion coefficient $D(\epsilon)$ for the $e^{\pm}$ propagating through our Galaxy, we use an empirical law given by the boron-to-carbon ratio observation, that is $$\label{diffusion}
D(\epsilon)=D_0\left(1+ \frac{\epsilon}{3\mathrm{GeV}} \right)^{\delta}.$$ Here $D_0 = 5.8 \times 10^{28} \mathrm{cm^2 s^{-1}} $, $\delta=1/3$ [@Baltz:1998xv].
Finally, we assume that the intrinsic energy spectrum at the source is described by the cutoff power law, that is $$\label{injection}
Q(\epsilon,t_0,\hat{t}) = Q_0 \epsilon^{-\nu} \exp \left( -\frac{\epsilon}{\epsilon_{\text{cut}}} \right) \left( 1+\frac{t_0-\hat{t}}{\tau} \right)^{-2}.$$ Here $\tau$ is the lifetime of the source and $t_0$ is the time when the $e^{\pm}$ leave the source. Then, substituting Eq.(\[cooling\_KN\]), (\[diffusion\]) and (\[injection\]) into Eq.(\[dis\_long\]), we obtain the observed electron distribution function $f(\epsilon,r,\hat{t})$ from a pulsar located at distance $r$ from the solar system and born at $t=\hat{t}$.
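A schematic numerical implementation of the single-source spectrum Eq.(\[dis\_long\]) is sketched below. For brevity it replaces the full KN cooling of Eq.(\[cooling\_KN\]) by a Thomson-like $P(\epsilon) = b_0\epsilon^2$ with an assumed $b_0 \sim 10^{-16}\,\mathrm{GeV^{-1}\,s^{-1}}$, so it illustrates only the integration scheme, not the actual cooling used in our results:

```python
import numpy as np
from scipy.integrate import quad

# --- simplified ingredients (illustration only) ---
yr, kpc = 3.156e7, 3.086e21                     # [s], [cm]; energies below are in GeV
b0 = 1.0e-16                                    # assumed Thomson-like cooling constant [GeV^-1 s^-1]
D0, delta = 5.8e28, 1.0/3.0                     # diffusion normalisation [cm^2/s] and index, Eq. (diffusion)

def P(eps):                                     # cooling function, simplified to P = b0*eps^2
    return b0 * eps**2

def D(eps):                                     # diffusion coefficient, Eq. (diffusion)
    return D0 * (1.0 + eps/3.0)**delta

def r_dif(eps, eps0):                           # diffusion length, Eq. (r_diff) [cm]
    val, _ = quad(lambda e: D(e)/P(e), eps, eps0)
    return 2.0*np.sqrt(val)

def Q(eps, t0, that, Q0=1.0, nu=1.9, eps_cut=1e3, tau=1e9*yr):
    """Injection spectrum, Eq. (injection); eps_cut ~ 1 TeV = 1e3 GeV."""
    return Q0 * eps**(-nu) * np.exp(-eps/eps_cut) * (1.0 + (t0 - that)/tau)**(-2)

def f_single(eps, r, that):
    """Single-source distribution, Eq. (dis_long), for a pulsar born at t = that (< 0),
    observed at t = 0 and distance r. For P = b0*eps^2 the cooling integrals are analytic."""
    denom = 1.0 + b0*eps*that                   # that < 0; eps_hat = eps/denom from -that = (1/eps - 1/eps_hat)/b0
    eps_hat = eps/denom if denom > 0.0 else 1e5 # source older than the cooling time: eps_hat -> "infinity"
    def integrand(eps0):
        rd = r_dif(eps, eps0)
        t0 = -(1.0/b0) * (1.0/eps - 1.0/eps0)   # emission time of the e+- observed now at eps
        return Q(eps0, t0, that) / rd**3 * np.exp(-(r/rd)**2)
    val, _ = quad(integrand, eps*1.0001, eps_hat, limit=200)
    return val / (np.pi**1.5 * P(eps))

# example: spectrum from one WD pulsar born 1e8 yr ago at r = 0.5 kpc (arbitrary normalisation Q0)
for eps in [10.0, 100.0, 1000.0]:
    print(eps, f_single(eps, 0.5*kpc, -1e8*yr))
```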
$e^{\pm}$ distribution function from multiple sources
-----------------------------------------------------
Here we consider the $e^{\pm}$ distribution function from multiple sources. As we showed in Sec.\[Difference\_in\_NS\_WD\], there should be multiple pulsars which contribute to the observed $e^{\pm}$ flux.
To calculate the distribution function from multiple sources, we integrate Eq.(\[dis\_long\]) for the pulsar birth time $\hat{t}$ and the pulsar position $r$, taking into account the birth rate of the pulsars. Then the observed $e^{\pm}$ distribution function is $$\label{multi_spec}
F(\epsilon) = \int^{0}_{-\tau_{\text{WD}}} d \hat{t} \int^{r_{\text{dif}}(\epsilon,\epsilon_{\hat{t}})}_{0} 2 \pi r dr \cdot \alpha \cdot \eta_{\text{WD}} f(\epsilon, r, \hat{t}).$$ Again, $\eta_{\text{WD}}$ is the merger rate of the double degenerate WD binaries, and $\alpha$ is the fraction of the mergers resulting in WD pulsars. We take the lifetime of the WD pulsars, $\hat{t}=-\tau_{\text{WD}}$, as the lower limit of the time integral. We have confirmed that the following results do not depend on this limit as long as it is smaller than $-\tau_{\text{WD}}$. As the upper limit of the space integral we take the diffusion length $r_{\text{dif}}(\epsilon ,\epsilon_{\hat{t}})$, which is defined in the same way as Eq.(\[r\_diff\]). Over the distance $r_{\text{dif}}(\epsilon, \epsilon_{\hat{t}})$, the energy of the propagating $e^{\pm}$ changes from $\epsilon_{\hat{t}}$ to $\epsilon$.
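Building on the single-source function above, the mean flux of Eq.(\[multi\_spec\]) can be sketched as a discretized double sum over birth time and source position; this reuses `f_single` and `r_dif` from the previous sketch, is schematic and slow, and in practice the integrals would be tabulated:

```python
import numpy as np

# reuses f_single and r_dif from the previous sketch
yr, kpc = 3.156e7, 3.086e21
birth_rate = 1e-7 / yr / kpc**2                 # alpha*eta_WD ~ 1e-7 /yr/kpc^2 [cm^-2 s^-1]
tau_WD = 1e9 * yr                               # WD pulsar lifetime [s]

def F_multi(eps, n_t=20, n_r=20):
    """Mean distribution function, Eq. (multi_spec), as a discretised double sum.
    The radial cutoff uses the largest diffusion length; contributions beyond the true
    r_dif are exponentially suppressed inside f_single anyway."""
    r_max = r_dif(eps, 1e5)
    thats = -tau_WD * (np.arange(n_t) + 0.5) / n_t   # midpoint grid in birth time, avoids that = 0
    rs = r_max * (np.arange(n_r) + 0.5) / n_r        # midpoint grid in radius
    dthat, dr = tau_WD / n_t, r_max / n_r
    return sum(2*np.pi*r * birth_rate * f_single(eps, r, that) * dr * dthat
               for that in thats for r in rs)

print(F_multi(100.0))   # arbitrary units (Q0 = 1 in the previous sketch)
```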
Since Eq.(\[multi\_spec\]) is the mean value, we also estimate the standard deviation of the calculated energy spectrum, that is $$\label{delta_F}
(\delta F)^2 = \int^{0}_{-\tau_{\text{WD}}} d \hat{t} \int^{r_{\text{dif}}(\epsilon ,\epsilon_{\hat{t}})}_{0} 2 \pi r dr \cdot \alpha \cdot \eta_{\text{WD}} f^2 - N f_{\text{ave}}^2 ,$$ where $N$ is the number of pulsars in our Galaxy that are sources of the observed $e^{\pm}$, that is $$N = \int^{0}_{-\tau_{\text{WD}}} d \hat{t} \int^{r_{\text{dif}}(\epsilon ,\epsilon_{\hat{t}})}_{0} 2 \pi r dr \cdot \alpha \cdot \eta_{\text{WD}} ,$$ and $f_{\text{ave}} = F(\epsilon)/N$ is the averaged $e^{\pm}$ spectrum per pulsar. We should note that the integral of Eq.(\[delta\_F\]) contains a serious divergence at $\hat{t}=0$ because of the large but improbable contributions from very young and nearby sources [@lee79; @Berezinski:1990; @lagutin95; @Ptuskin:2006]. Here we follow Ptuskin et al. (2006) and set the cutoff parameter as $$\hat{t}_{\text{c}}(\epsilon) = -(4\pi \eta_{\text{WD}} \cdot \alpha D(\epsilon) )^{-1/2},$$ which approximately corresponds to the birth time of the newest pulsar that contributes $e^{\pm}$ with energy $\epsilon$.
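This cutoff can be evaluated with a short sketch, assuming $\alpha \eta_{\text{WD}} \sim 10^{-7}\,\mathrm{yr^{-1}\,kpc^{-2}}$ as in the models of the next subsection:

```python
import numpy as np

yr, kpc = 3.156e7, 3.086e21
alpha_eta = 1e-7 / yr / kpc**2          # alpha*eta_WD ~ 1e-7 /yr/kpc^2 [cm^-2 s^-1]

def D(eps):                              # diffusion coefficient, Eq. (diffusion); eps in GeV
    return 5.8e28 * (1.0 + eps/3.0)**(1.0/3.0)

def t_cut(eps):                          # |t_c| = (4 pi alpha eta_WD D(eps))^(-1/2), in seconds
    return (4.0*np.pi*alpha_eta*D(eps))**-0.5

print(t_cut(1e3)/yr)                     # ~1e6 yr for eps ~ 1 TeV
```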
Results
-------
Here we consider two types of models. Fig.\[model\_A\] shows the WD pulsar dominant model. In the left panel, the $e^{\pm}$ flux from multiple WD pulsars (thin solid line) is shown with the standard deviations (thin dashed line), background flux (dotted line) and total flux (thick solid line). For each WD pulsar, we set the cutoff energy of the injection spectrum $\epsilon_{\text{cut}} \sim 1 \mathrm{TeV}$ (Eq.(\[injection\])), intrinsic spectral index $\nu = 1.9$, lifetime $\tau_{\text{WD}} \sim 10^{9} \mathrm{yr}$, total energy for each $\sim 10^{50} \mathrm{erg}$, merger rate of double degenerate WD binaries $\eta = 10^{-5} \mathrm{yr}^{-1} \mathrm{kpc}^{-2}$ and probability of forming WD pulsars $\alpha = 0.01$, which means that the birth rate of WD pulsars in our Galaxy is $\sim 10^{-7} \mathrm{yr}^{-1} \mathrm{kpc}^{-2}$. The left panel of Fig.\[model\_A\] includes the observational data of cosmic ray electrons plus positrons given by the balloon and satellite experiments ATIC/PPB-BETS/Fermi [@chang:2008; @Torii:2008xu; @Ackermann:2010_8; @Abdo:2009zk; @Moiseev:2007js], and also the data of the ground-based experiments H.E.S.S./KASCADE/GRAPES/CASA-MIA [@Collaboration:2008aa; @Aharonian:2009ah; @Schatz:2003; @Gupta:2009; @Chantell:1997]. For KASCADE/GRAPES/CASA-MIA, the plots show the observed flux of the diffuse gamma rays. Since a gamma-ray entering the atmosphere first produces an $e^{\pm}$ pair to begin a cascade, its shower will look very similar to that of an $e^{\pm}$ of equivalent energy [@Kistler:2009wm]. Thus we regard these data as upper limits on the $e^{\pm}$ flux. The H.E.S.S. electron data are also partly contaminated by photons. Therefore, a viable model should not significantly overshoot these points. The background flux consists of the primary electrons, which are conventionally attributed to SNRs, and the secondary $e^{\pm}$ produced by hadronic interactions between cosmic ray protons and the interstellar matter with successive pion decays. For the secondary $e^{\pm}$ flux, we adopt the fitting function in Baltz & Edsj$\ddot{\text{o}}$ (1999) [@Baltz:1998xv; @Ptuskin:2006b; @Moskalenko:1997gh]. For the primary electron flux, we also refer to Baltz & Edsj$\ddot{\text{o}}$ (1999) but with an exponential cutoff at $5 \mathrm{TeV}$ [^6], which is similar to that shown in Aharonian et al. (2008) [@Collaboration:2008aa]. Our result fits the observational data of H.E.S.S. and Fermi well.
The right panel of Fig.\[model\_A\] shows the positron fraction using the same parameters as the left panel. The results show that the observed positron excess can be explained by considering only the contribution from multiple WD pulsars, and the positron fraction is expected to drop at around the WD pulsar cutoff energy $\sim \mathrm{TeV}$. The background contribution of the positron fraction begins to rise around $\sim 3 \mathrm{TeV}$ since we set the exponential cutoff only for the primary electron background, not for the secondary $e^{\pm}$ background. This treatment is appropriate since the abundance of cosmic ray protons is observationally robust in this energy range and so is the amount of the secondary $e^{\pm}$ background. As we discuss in Sec.\[sec4\], our calculations become less reliable below $\lesssim 10 \mathrm{GeV}$ since we neglect the anisotropic effects during the diffusion in the Galactic disk. Note that in these energy range, the solar modulation is also relevant.
Fig.\[model\_B\] shows the WD and NS pulsar mixed model. In the left panel, the thin solid line shows the $e^{\pm}$ flux from multiple WD pulsars, each with a total energy $\sim 5 \times 10^{49} \mathrm{erg}$, a cutoff energy of the injection spectrum $\epsilon_{\text{cut}} \sim 10 \mathrm{TeV}$, and the same values for the other parameters as in Fig.\[model\_A\]. In our WD pulsar model, the different $\epsilon_{\text{cut}}$ corresponds to different values of the multiplicity ${\cal M}$, the magnetic field strength $B_{\text{p}}$, the angular frequency $\Omega$ and the radius $R$ according to Eq.(\[emax\]). The dot-dashed line shows the $e^{\pm}$ flux from multiple NS pulsars with a total energy $\sim 10^{48} \mathrm{erg}$ each, a cutoff energy of the injection spectrum $\epsilon_{\text{cut}} \sim 1 \mathrm{TeV}$, a lifetime $\sim 10^{5} \mathrm{yr}$ and a birth rate in our Galaxy $\sim 10^{-5} \mathrm{yr}^{-1} \mathrm{kpc}^{-2}$. The standard deviation of the $e^{\pm}$ energy flux from the WD pulsars is relatively small compared with that from the NS pulsars. This is because a larger abundance of WD pulsars is expected, as we discussed in the previous sections. The dotted line shows the same background contribution as in Fig.\[model\_A\]. The total flux and its deviations are shown by the thick solid and thick dashed lines, respectively. The excess in the $e^{\pm}$ flux in the range $100 \mathrm{GeV} \lesssim \epsilon \lesssim 1 \mathrm{TeV}$ is explained by the multiple NS pulsars. By adding the contribution of multiple WD pulsars, a smooth “double bump” is formed in the energy spectrum around $1 \mathrm{TeV}$ and $10 \mathrm{TeV}$, which may be observable by future experiments like CALET [@torii:2006; @torii:2008] and CTA [@CTA:2010].
The right panel of Fig.\[model\_B\] shows the positron fraction for the mixed model. The observed positron excess can be explained and, in this case, there will be no flux drop around $\sim \mathrm{TeV}$ in contrast to Fig. \[model\_A\].
Observed WD pulsar candidates {#observation}
-----------------------------
Finally, in this subsection we give two interesting examples of observed WD pulsar candidates, AE Aquarii and EUVE J0317-855. So far, a few thousand WDs have been discovered, and the magnetic field and the rotational period have been measured for some of them [@Wickramasinghe:2000; @Schmidt:2003; @Vanlandingham:2005; @Mereghetti:2009]. Forthcoming experiments like ASTRO-H [@Takahashi:2008] will find more magnetized and rapidly spinning WDs, which will reveal the detailed characteristics of such WDs. Then, we will know whether a sufficient number of WD pulsars exists in our Galaxy or not.\
**AE Aquarii**
AE Aquarii is a magnetized cataclysmic variable consisting of a primary WD and a spectral type K5V main sequence star, located at $\sim 100 \mathrm{pc}$ from the solar system. The primary WD has a spin period $\sim 33\mathrm{s}$, identified by the approximately sinusoidal profiles of the observed emission at energies below $\sim 4 \mathrm{keV}$ [@Patterson:1979; @Eracleous:1994]. Recently the Suzaku satellite discovered that AE Aquarii shows sharp hard X-ray pulsations at a period consistent with its rotation [@Terada:2007br]. TeV gamma-ray emission during optical flares was also reported [@Meintjes:1992; @Meintjes:1994], although there have been no detections since then. The primary WD is spinning down at a rate $\sim 6 \times 10^{-14} \mathrm{sec} \ \mathrm{sec}^{-1}$, implying a spin down luminosity $\sim 10^{33} \mathrm{erg/sec}$, which is three orders of magnitude larger than the UV to X-ray emissions. The magnetic field strength inferred from the spin down luminosity is $\sim 5 \times 10^{7}\mathrm{G}$ [@Ikhsabov:1998].
Since AE Aquarii is an accreting binary system, the density of the plasma surrounding the primary WD may be much higher than the GJ density. However, both theoretical [@Wynn:1997] and observational works suggest that the rapid rotation and strong magnetic field produce a low-density region around the WD, and particle acceleration by the same mechanism as in spin-powered pulsars could be possible. The parameters of AE Aquarii satisfy the condition Eq.(\[avalanche\_polar\]), i.e., it lies above the death line of WD pulsars (Fig.\[death\_line\]).\
**EUVE J0317-855 (RE J0317-853)**
EUVE J0317-855 is a hydrogen-rich magnetized WD discovered by the ROSAT and EUVE surveys [@Barstow:1995; @Ferrario:1997]. By analyzing the photometric, spectroscopic and polarimetric variations, EUVE J0317-855 is shown to rotate with a period $\sim 725 \mathrm{s}$, one of the fastest among isolated WDs, and the dipole magnetic field is $\sim 4.5 \times 10^{8} \mathrm{G}$. EUVE J0317-855 has a DA WD companion located at $\gtrsim 10^3 \mathrm{AU}$ from EUVE J0317-855. Because of the large separation, there is supposed to be no interaction between the two WDs. By analysing the emission from the companion, Barstow et al (1995) [@Barstow:1995] noted that EUVE J0317-855 is located at $\sim 35\mathrm{pc}$ from the solar system, and that its mass is $1.31\mbox{-}1.37 M_{\odot}$, which is relatively large compared with the typical WD mass $\sim 0.6M_{\odot}$. Its rapid rotation and large mass suggest that EUVE J0317-855 may be the outcome of a double degenerate WD binary merger [@Ferrario:1997]. Relevant pulsed emission from EUVE J0317-855 has not been observed yet, which may suggest that the $e^{\pm}$ creation and acceleration do not occur. When we put the parameters of EUVE J0317-855 on Fig.\[death\_line\], it falls below the death line, which is also consistent with the observation.
Summary and Discussion {#sec4}
======================
We have investigated the possibility that WD pulsars constitute a new TeV $e^{\pm}$ source. We have supposed that a fair fraction of double degenerate WD binaries merge to become WD pulsars, and that these WDs have magnetospheres and pulsar wind nebulae. The $e^{\pm}$ pair creation in the magnetospheres and their acceleration and cooling in the wind nebulae have been discussed, and we have found the following.
1. If a double degenerate WD binary merges into a maximally spinning WD, its rotational energy will become $\sim 10^{50} \mathrm{erg}$, which is comparable to that of a NS pulsar. Also the birth rate $\sim 10^{-2} \mbox{-} 10^{-3} \mathrm{/yr/galaxy}$ is similar to the NS case, which provides the right energy budget for cosmic ray $e^{\pm}$.
2. Applying the theory of NS magnetospheres, we have given the $e^{\pm}$ pair creation condition (“the death line”) for WD pulsars. Since our fiducial parameters of WD pulsars meet the condition, WD pulsars are eligible to be $e^{\pm}$ factories. The death line is consistent with the observations of some WD pulsar candidates.
3. By assuming energy equipartition between the $e^{\pm}$ and the magnetic field in the wind region, we have shown that the $e^{\pm}$ produced in the WD pulsar magnetosphere can be accelerated up to $\sim 10 \mathrm{TeV}$ when the WD pulsar has a rapid rotation ($P \sim 50 \mathrm{s}$) and strong magnetic fields ($B \sim 10^{8} \mathrm{G}$) and the $e^{\pm}$ multiplicity is not so large (${\cal M} \sim 1$).
4. In contrast to the NS case, the adiabatic energy losses of $e^{\pm}$ in the pulsar wind nebula region are negligible in the case of the WD pulsars, since they continue to inject $e^{\pm}$ after the nebula stops expanding. The radiative cooling of $e^{\pm}$ is also not large, and the high energy $e^{\pm}$ can escape from the nebula without losing much energy. As a consequence, it is enough that a fraction $\sim 1 \%$ of WDs are magnetized, as observed, for the WD pulsars to become relevant TeV $e^{\pm}$ sources.
Based on the WD pulsar model above, we have calculated the observed $e^{\pm}$ flux from multiple WD pulsars in our Galaxy. We have solved the diffusion equation including the KN effect, and found the following.
1. We have shown two model $e^{\pm}$ fluxes. In one model (the WD pulsar dominant model), considering only the contribution from the multiple WD pulsars, we can explain the reported excess of the $e^{\pm}$ flux around $100 \mathrm{GeV} \lesssim \epsilon \lesssim 1 \mathrm{TeV}$ and also the PAMELA positron excess. In the other model (the WD and NS pulsar mixed model), the combination of multiple WD and NS pulsars can also explain the existing observations and forms a double bump in the energy spectrum of $e^{\pm}$, which can be a signature for future $e^{\pm}$ observations like CALET [@torii:2006; @torii:2008] and CTA [@CTA:2010]. Since the lifetime of WD pulsars is relatively long, the number of nearby active sources can be huge, which gives a small Poisson fluctuation in the $e^{\pm}$ flux compared with NS pulsars.
As we have shown, WD pulsars could dominate the quickly cooling $e^{\pm}$ above TeV energy as a second spectral bump, or even surpass the NS pulsars in the observed energy range $\sim 100$ GeV, providing a background for dark matter signals and a nice target for the future AMS-02 [@Beischer:2009; @Pato:2010ih], CALET [@torii:2006; @torii:2008] and CTA [@CTA:2010]. As future work, we should consider observational signatures other than $e^{\pm}$ for the coming multi-messenger astronomy era. For example, we have to consider the radio to $\gamma$-ray emission from WD pulsars based on our model. The number of observed pulsars in the Galactic disk should be proportional to $\sim$ (number density) $\times$ (radio luminosity). Since about $\sim 10^3$ NS pulsars have been discovered by radio telescopes, assuming that WD pulsars can convert their spin down luminosity to radio emission with the same efficiency as NS pulsars, the number of WD pulsars which should have already been detected by radio observations can be estimated as $$10^3 \left( \frac{\alpha \cdot \eta_{WD}}{\eta_{NS}} \right) \left( \frac{\tau_{WD}}{\tau_{NS}} \right) \left( \frac{L_{WD}}{L_{NS}} \right) \sim 10 \left( \frac{\alpha}{0.01} \right).$$ Thus $O(10)$ WD pulsars may well be observed as radio pulsars with relatively long periods $P \sim 50 \mathrm{sec}$. However, since the efficiency of the radio emission depends on the detailed conditions in the polar cap regions, whether WD pulsars have the same efficiency as NS pulsars is highly uncertain at this stage. Other than the electromagnetic emissions, double degenerate WD mergers, which we consider as the origin of WD pulsars, are a promising source for the future gravitational wave observation by LISA [@LISA:2010]. It would be very interesting if we could obtain a strong constraint on the event rate of the mergers in our Galaxy by observing the high energy $e^{\pm}$. In this paper, we consider only merged WDs as a source of high energy $e^{\pm}$ emission. In single-degenerate binaries, the accretion could induce the rapid rotation of the WDs. These accreting binary systems could also become WD pulsars if they have strong magnetic fields, as in AE Aquarii.
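As a minimal arithmetic check of this estimate, using the fiducial ratios quoted above:

```python
# expected number of radio-detected WD pulsars, assuming the same radio efficiency as NS pulsars
alpha = 0.01                        # fraction of mergers producing WD pulsars
eta_ratio = alpha * 1e-2 / 1e-2     # alpha * eta_WD / eta_NS
tau_ratio = 1e9 / 1e5               # tau_WD / tau_NS
L_ratio = 1e-4                      # L_WD / L_NS
print(1e3 * eta_ratio * tau_ratio * L_ratio)   # ~10 for alpha ~ 0.01
```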
![The comparison of our result and the GALPROP code for $e^{\pm}$ total flux from multiple NS pulsars.[]{data-label="test"}](multi_NS_pulsar_test.eps){width="70mm"}
At the current moment, our model has several crucial assumptions which should be considered more carefully. Finally, we discuss these points and what should be considered in future work.
1. We have discussed the death line of WD pulsars based on the simplest polar cap model, considering only the curvature radiation for the $e^{\pm}$ pair creation photons and a vacuum polar cap gap in which $\rho =0$. It has been shown that inverse Compton scattered photons are important for the $e^{\pm}$ pair creation in the polar cap, and the observed death line of NS pulsars is also well explained by the space charge limited flow model [@Harding:2001; @Harding:2002]. Especially for the WD, even in the case of ${\bf \Omega} \cdot {\bf B} < 0$, space-charge-limited flows may exist since the binding energy of the ions could be smaller than the thermal energy at the surface. Hence we have to investigate the death line for WD pulsars based on, for example, the Harding & Muslimov model [@Harding:2001; @Harding:2002]. Also, the $e^{\pm}$ multiplicity in the magnetosphere is crucial for the maximum energy, and we have to calculate it consistently with the polar cap model.
2. There are uncertainties in the acceleration and cooling processes of $e^{\pm}$ in the pulsar wind nebula. The energy flux of the magnetic field may not be conserved in the wind region, as in the Crab nebula [@Rees:1974; @KennelCoroniti:1984]; here we have assumed Eq.(\[Bconf2\]) for simplicity. Moreover, we have to evaluate more precisely the inverse Compton scattering in the pulsar wind nebula as a radiative cooling process; in this paper we have roughly estimated it to be comparable to the synchrotron radiation. We also have to examine whether the wind mainly consists of $e^{\pm}$, which is still under debate even in the case of NS pulsars.
3. When calculating the $e^{\pm}$ flux from the multiple sources, we assume that the source distribution and the $e^{\pm}$ diffusion process are isotropic. However, compact objects like WDs and NSs may be distributed more densely near the center of our Galaxy. (Also, the large kick which could be given at their birth may affect the spatial distribution of the pulsars.) Since the arrival anisotropy can be useful to discriminate the origin of the observed $e^{\pm}$, we should take into account these anisotropic effects. For $e^{\pm}$ with relatively low energy $\lesssim 10 \mathrm{GeV}$, the inverse Compton energy losses become less important and consequently the propagation range of the $e^{\pm}$ increases, i.e. the anisotropic effects during the propagation, for example the effect of the Galactic disk structure, become more prominent. In this low energy range, the public GALPROP code [@GALPROP] can provide a more reliable calculation of the propagation from distant sources arbitrarily distributed in our Galaxy. Fig.\[test\] shows the comparison of our result and the GALPROP code (WEBRUN [@GALPROP]) for the primary $e^{\pm}$ total flux from multiple NS pulsars with the same parameters as in Fig.\[model\_B\]. We have confirmed that our result is consistent with the more realistic calculation in the high energy region and begins to deviate below $\lesssim 10 \mathrm{GeV}$. The bump around $0.5 \mathrm{GeV}$ in the result of the GALPROP code is formed mainly due to the diffusive reacceleration of $e^{\pm}$ during the propagation in our Galaxy. Note that in this region the solar modulation is relevant and the uncertainty becomes large.
Acknowledgements
================
We thank T. Piran, I. V. Moskalenko, Y. Suwa, Y. Ohira, K. Murase, H. Okawa, F. Takahara, S. Shibata and T. Nakamura for many useful discussions and comments. K.K. acknowledges the support of the Grant-in-Aid for the Global COE Program “The Next Generation of Physics, Spun from Universality and Emergence” from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. This work is also supported by Grants-in-Aid from the MEXT of Japan, Nos. 19047004, 21684014, 22244019, 22244030 for K.I. and 22740131 for N.K.
[99]{} O. Adriani et al. (PAMELA Collaboration), Nature,458,2009,607. J. Chang et al., Nature,456,2008,362.
S. Torii et al. (PPB-BETS Collaboration), arXiv:0809.0760. Ackermann, M. et al. \[The Fermi LAT Collaboration\], arXiv:1008.3999.
A. A. Abdo et al. \[The Fermi LAT Collaboration\], Phys. Rev. Lett. [**102**]{}, 181101 (2009) \[arXiv:0905.0025 \[astro-ph.HE\]\]. A. A. Moiseev, J. F. Ormes and I. V. Moskalenko, arXiv:0706.0882.
F. A. Aharonian et al. (H.E.S.S. Collaboration), Phys. Rev. Lett.,101,2008,261104. F. A. Aharonian et al. (H.E.S.S. Collaboration), Astron. Astrophys.,508,2009,561.
Y. Fan, B. Zhang and J.Chang, arXiv:1008.4646.
N. Kawanaka, K. Ioka and M. M. Nojiri, ApJ,710,2010,958. D. Hooper, P. Blasi and P. D. Serpico, J. Cosmol. Astropart. Phys.,0901,2009,025. H. Yuksel, M. D. Kistler and T. Stanev, Phys. Rev. Lett.,103,2009,051101. S. Profumo, arXiv:0812.4457. D. Malyshev, I. Cholis and J. Gelfand, Phys. Rev. D,80,2009,063005. D. Grasso et al., Astropart. Phys.,32,2009,140. M. D. Kistler and H. Yuksel, arXiv:0912.0264 \[astro-ph.HE\]. J. S. Heyl, R. Gill and L. Hernquist, arXiv:1005.1003 \[astro-ph.HE\]. Y. Fujita, K. Kohri, R. Yamazaki and K. Ioka, Phys. Rev. D,80,2009,063003. N. J. Shaviv, E. Nakar and T. Piran, Phys. Rev. Lett.,103,2009,111302. H. B. Hu, Q. Yuan, B. Wang, C. Fan, J. L. Zhang and X. J. Bi, ApJ,700,2009,L170. P. Blasi, Phys. Rev. Lett.,103,2009,051104. P. Blasi and P. D. Serpico, Phys. Rev. Lett.,103,2009,081103. P. Mertsch and S. Sarkar, Phys. Rev. Lett.,103,2009,081104. P. L. Biermann, J. K. Becker, A. Meli, W. Rhode, E. S. Seo and T. Stanev, Phys. Rev. Lett.,103,2009,061101. M. Ahlers, P. Mertsch and S. Sarkar, Phys. Rev. D [**80**]{}, 123017 (2009) \[arXiv:0909.4060 \[astro-ph.HE\]\]. M. Kachelriess, S. Ostapchenko and R. Tomas, arXiv:1004.1118 \[astro-ph.HE\]. N. Kawanaka, K. Ioka, Y. Ohira and K. Kashiyama, arXiv:1009.1142 \[astro-ph.HE\].
S. Heinz and R. A. Sunyaev, Astron. Astrophys.,390,2002,751. K. Ioka, Prog. Theor. Phys. [**123**]{}, 743 (2010) \[arXiv:0812.4851 \[astro-ph\]\]. A. Calvez and A. Kusenko, arXiv:1003.0045 \[astro-ph.HE\]. M. Asano, S. Matsumoto, N. Okada and Y. Okada, Phys. Rev. D,75,2007,063506. N. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner, Phys. Rev. D,79,2009,015014. E. A. Baltz and J. Edsjo, Phys. Rev. D,59,1999,023511. V. S. Ptuskin, I. V. Moskalenko, F. C. Jones, A. W. Strong and V. N. Zirakashvili, Astrophys. J. [**642**]{}, 902 (2006)
V. Barger, W. Y. Keung, D. Marfatia and G. Shaughnessy, Phys. Lett. B [**672**]{}, 141 (2009) \[arXiv:0809.0162 \[hep-ph\]\]. V. Barger, Y. Gao, W. Y. Keung, D. Marfatia and G. Shaughnessy, Phys. Lett. B [**678**]{}, 283 (2009) \[arXiv:0904.2001 \[hep-ph\]\]. L. Bergstrom, T. Bringmann and J. Edsjo, Phys. Rev. D,78,2008,103520. G. Bertone, M. Cirelli, A. Strumia and M. Taoso, JCAP [**0903**]{}, 009 (2009) \[arXiv:0811.3744 \[astro-ph\]\]. E. Borriello, A. Cuoco and G. Miele, Astrophys. J. [**699**]{}, L59 (2009) \[arXiv:0903.1852 \[astro-ph.GA\]\]. C. R. Chen, K. Hamaguchi, M. M. Nojiri, F. Takahashi and S. Torii, J. Cosmol. Astropart. Phys.,0905,2009,015. H. C. Cheng, J. L. Feng and K. T. Matchev, Phys. Rev. Lett. [**89**]{}, 211301 (2002) \[arXiv:hep-ph/0207125\]. I. Cholis, D. P. Finkbeiner, L. Goodenough and N. Weiner, JCAP [**0912**]{}, 007 (2009) \[arXiv:0810.5344 \[astro-ph\]\]. I. Cholis, G. Dobler, D. P. Finkbeiner, L. Goodenough and N. Weiner, Phys. Rev. D,80,2009,123518. M. Cirelli, M. Kadastik, M. Raidal and A. Strumia, Nucl. Phys. B,813,2009,1. M. Cirelli and A. Strumia, arXiv:0808.3867. R. M. Crocker, N. F. Bell, C. Balazs and D. I. Jones, Phys. Rev. D [**81**]{}, 063516 (2010) \[arXiv:1002.0229 \[hep-ph\]\]. D. Feldman, Z. Liu, P. Nath and B. D. Nelson, Phys. Rev. D [**80**]{}, 075001 (2009) \[arXiv:0907.5392 \[hep-ph\]\]. P. J. Fox and E. Poppitz, Phys. Rev. D [**79**]{}, 083528 (2009) \[arXiv:0811.0399 \[hep-ph\]\]. J. Hall and D. Hooper, Phys. Lett. B,681,2009,220. R. Harnik and G. D. Kribs, Phys. Rev. D [**79**]{}, 095007 (2009) \[arXiv:0810.5557 \[hep-ph\]\]. J. Hisano, S. Matsumoto, M. M. Nojiri and O. Saito, Phys. Rev. D,71,2005,063528. J. Hisano, M. Kawasaki, K. Kohri and K. Nakayama, Phys. Rev. D,79,2009,043516; Phys. Rev. D,79,2009,063514; \[Errata;,80,2009,029907\]. J. Hisano, M. Kawasaki, K. Kohri, T. Moroi and K. Nakayama, Phys. Rev. D [**79**]{}, 083522 (2009) \[arXiv:0901.3582 \[hep-ph\]\]. D. Hooper, A. Stebbins and K. M. Zurek, Phys. Rev. D,79,2009,103513. D. Hooper and K. M. Zurek, Phys. Rev. D [**79**]{}, 103529 (2009) \[arXiv:0902.0593 \[hep-ph\]\]. D. Feldman, Z. Liu and P. Nath, Phys. Rev. D [**79**]{}, 063509 (2009)
M. Ibe, H. Murayama and T. T. Yanagida, Phys. Rev. D [**79**]{}, 095009 (2009) \[arXiv:0812.0072 \[hep-ph\]\]. K. Ishiwata, S. Matsumoto and T. Moroi, Phys. Lett. B,675,2009,446. K. Kadota, K. Freese and P. Gondolo, Phys. Rev. D [**81**]{}, 115006 (2010) \[arXiv:1003.4442 \[hep-ph\]\]. J. D. March-Russell and S. M. West, Phys. Lett. B,676,2009,133. P. Meade, M. Papucci, A. Strumia and T. Volansky, Nucl. Phys. B [**831**]{}, 178 (2010) \[arXiv:0905.0480 \[hep-ph\]\]. Y. Nomura and J. Thaler, Phys. Rev. D [**79**]{}, 075008 (2009) \[arXiv:0810.5397 \[hep-ph\]\]. M. Pospelov and A. Ritz, Phys. Lett. B [**671**]{}, 391 (2009) \[arXiv:0810.1502 \[hep-ph\]\]. P. f. Yin, Q. Yuan, J. Liu, J. Zhang, X. j. Bi and S. h. Zhu, Phys. Rev. D [**79**]{}, 023512 (2009) \[arXiv:0811.0176 \[hep-ph\]\]. J. Zavala, V. Springel and M. Boylan-Kolchin, arXiv:0908.2428 \[astro-ph.CO\]. J. Zhang, X. J. Bi, J. Liu, S. M. Liu, P. F. Yin, Q. Yuan and S. H. Zhu, Phys. Rev. D,80,2009,023007. A. Arvanitaki, S. Dimopoulos, S. Dubovsky, P. W. Graham, R. Harnik and S. Rajendran, Phys. Rev. D [**80**]{}, 055011 (2009) \[arXiv:0904.2789 \[hep-ph\]\]. A. Arvanitaki, S. Dimopoulos, S. Dubovsky, P. W. Graham, R. Harnik and S. Rajendran, Phys. Rev. D [**79**]{}, 105022 (2009) \[arXiv:0812.2075 \[hep-ph\]\]. W. Buchmuller, A. Ibarra, T. Shindou, F. Takayama and D. Tran, JCAP [**0909**]{}, 021 (2009) \[arXiv:0906.1187 \[hep-ph\]\]. C. R. Chen and F. Takahashi, J. Cosmol. Astropart. Phys.,0902,2009,004. C. R. Chen, F. Takahashi and T. T. Yanagida, Phys. Lett. B,671,2009,71; Phys. Lett. B,673,2009,255. C. R. Chen, M. M. Nojiri, F. Takahashi and T. T. Yanagida, PTP,122,2009,553. S. De Lope Amigo, W. Y. Cheung, Z. Huang and S. P. Ng, JCAP [**0906**]{}, 005 (2009) \[arXiv:0812.4016 \[hep-ph\]\]. H. Fukuoka, J. Kubo and D. Suematsu, Phys. Lett. B [**678**]{}, 401 (2009) \[arXiv:0905.2847 \[hep-ph\]\]. K. Hamaguchi, S. Shirai and T. T. Yanagida, Phys. Lett. B,673,2009,247. K. Hamaguchi, E. Nakamura, S. Shirai and T. T. Yanagida, Phys. Lett. B [**674**]{}, 299 (2009) \[arXiv:0811.0737 \[hep-ph\]\]. A. Ibarra and D. Tran, JCAP [**0902**]{}, 021 (2009) \[arXiv:0811.1555 \[hep-ph\]\]. A. Ibarra, D. Tran and C. Weniger, JCAP [**1001**]{}, 009 (2010) \[arXiv:0906.1571 \[hep-ph\]\]. K. Ishiwata, S. Matsumoto and T. Moroi, Phys. Rev. D,78,2008,063505; Phys. Rev. D,79,2009,043527; JHEP [**0905**]{}, 110 (2009); Phys. Lett. B [**679**]{}, 1 (2009). J. Mardon, Y. Nomura and J. Thaler, Phys. Rev. D [**80**]{}, 035013 (2009) \[arXiv:0905.3749 \[hep-ph\]\]. E. Nardi, F. Sannino and A. Strumia, JCAP [**0901**]{}, 043 (2009) \[arXiv:0811.4153 \[hep-ph\]\]. N. Okada and T. Yamada, Phys. Rev. D [**80**]{}, 075010 (2009) \[arXiv:0905.2801 \[hep-ph\]\]. S. Shirai, F. Takahashi and T. T. Yanagida, Phys. Lett. B [**680**]{}, 485 (2009) \[arXiv:0905.0388 \[hep-ph\]\]. O. Adriani et al. (PAMELA Collaboration), Phys. Rev. Lett. 102,2009,051101. O. Adriani [*et al.*]{} \[PAMELA Collaboration\], arXiv:1007.0821 \[astro-ph.HE\]. A. A. Abdo [*et al.*]{} \[Fermi-LAT Collaboration\], JCAP [**1004**]{}, 014 (2010) \[arXiv:1002.4415 \[astro-ph.CO\]\]. M. Ackermann [*et al.*]{}, JCAP [**1005**]{}, 025 (2010) \[arXiv:1002.2239 \[astro-ph.CO\]\]. A. A. Abdo [*et al.*]{}, Astrophys. J. [**712**]{}, 147 (2010) \[arXiv:1001.4531 \[astro-ph.CO\]\].
T. Delahaye, R. Lineros, F. Donato, N. Fornengo and P. Salati, Phys. Rev. D [**77**]{}, 063527 (2008). R. Cowsik and B. Burch, arXiv:0905.2136. B. Katz, K. Blum and E. Waxman, MNRAS [**405**]{}, 1458 (2010). L. Stawarz, V. Petrosian and R. D. Blandford, ApJ [**710**]{}, 236 (2010). R. Schlickeiser and J. Ruppel, arXiv:0908.2183. M. H. Israel, Physics [**2**]{}, 53 (2009).
A. R. Fazely, R. M. Gunasingha and S. V. Ter-Antonyan, arXiv:0904.2371. M. Schubnell, arXiv:0905.0444. C. S. Shen, ApJ [**162**]{}, L181 (1970).
C. Y. Mao and C. S. Shen, Chin. J. Phys. [**10**]{}, 16 (1972).
A. Boulares, ApJ [**342**]{}, 807 (1989).
F. A. Aharonian, A. M. Atoyan and H. J. V$\ddot{\rm o}$lk, Astron. Astrophys. [**294**]{}, L41 (1995).
X. Chi, K. S. Cheng and E. C. M. Young, ApJ [**459**]{}, L83 (1996).
L. Zhang and K. S. Cheng, Astron. Astrophys. [**368**]{}, 1063 (2001).
C. Grimani, Astron. Astrophys. [**474**]{}, 339 (2007).
I. Buesching, O. C. de Jager, M. S. Potgieter and C. Venter, ApJ [**678**]{}, L39 (2008).
C. S. Shen and G. B. Berkey, Phys. Rev. [**171**]{}, 1344 (1968).
R. Cowsik and M. A. Lee, ApJ [**228**]{}, 297 (1979).
A. M. Atoyan, F. A. Aharonian and H. J. V$\ddot{\rm o}$lk, Phys. Rev. D [**52**]{}, 3265 (1995).
A. D. Erlykin and A. W. Wolfendale, J. Phys. G [**28**]{}, 359 (2002).
M. Pohl and J. A. Esposito, ApJ [**507**]{}, 327 (1998).
I. V. Moskalenko and A. W. Strong, ApJ [**493**]{}, 694 (1998). A. W. Strong, I. V. Moskalenko and O. Reimer, ApJ [**537**]{}, 763 (2000) \[Erratum: ApJ [**541**]{}, 1109 (2000)\]. T. Kobayashi, Y. Komori, K. Yoshida and J. Nishimura, ApJ [**601**]{}, 340 (2004). E. G. Berezhko, L. T. Ksenofontov, V. S. Ptuskin, V. N. Zirakashvili and H. J. Voelk, Astron. Astrophys. [**410**]{}, 189 (2003) \[arXiv:astro-ph/0308199\]. A. W. Strong, I. V. Moskalenko and O. Reimer, ApJ [**613**]{}, 962 (2004). V. S. Berezinski, S. V. Bulanov, V. A. Dogiel, V. L. Ginzburg and V. S. Ptuskin, 1990, Astrophysics of Cosmic Rays (Amsterdam: North-Holland).
B. Beischer, P. von Doetinchem, H. Gast, T. Kirn and S. Schael, New J. Phys. [**11**]{}, 105021 (2009).
M. Pato, D. Hooper and M. Simet, JCAP [**1006**]{}, 022 (2010) \[arXiv:1002.3341 \[astro-ph.HE\]\]. S. Torii (CALET Collaboration), Nucl. Phys. B (Proc. Suppl.) [**150**]{}, 345 (2006).
S. Torii (CALET Collaboration), J. Phys. Conf. Ser. [**120**]{}, 062020 (2008).
The CTA Consortium, arXiv:1008.3703.
K. P. Watters and R. W. Romani, arXiv:1009.5305 \[astro-ph.HE\].
P. Goldreich and W. H. Julian, ApJ [**157**]{}, 869 (1969).
M. A. Ruderman and P. G. Sutherland, ApJ [**196**]{}, 51 (1975).
K. S. Cheng, C. Ho and M. Ruderman, ApJ [**300**]{}, 500 (1986).
S. D. Kawaler, arXiv:astro-ph/0301539. G. Nelemans, L. R. Yungelson and S. F. Portegies Zwart, Astronomy and Astrophysics, v.375, p.890-898 (2001). A. J. Farmer and E. S. Phinney, Mon. Not. Roy. Astron. Soc. [**346**]{}, 1197 (2003) \[arXiv:astro-ph/0304393\]. R. Pakmor, M. Kromer, F. K. Roepke, S. A. Sim, A. J. Ruiter and W. Hillebrandt, Nature, Volume 463, Issue 7277, pp. 61-64 (2010). J. Liebert, P. Bergeron and J. B. Holberg, Astron. J. [**125**]{}, 348 (2003) \[arXiv:astro-ph/0210319\]. G. D. Schmidt [*et al.*]{} \[SDSS Collaboration\], Astrophys. J. [**595**]{}, 1101 (2003) \[arXiv:astro-ph/0307121\]. Ruiter, A., Belczynski, K.,Benacquista, M., Larson, S., Williams, G., 2010, ApJ, 717, 1006.
B. Paczynski, ApJ [**365**]{}, L9 (1990).
V. V. Usov, ApJ [**410**]{}, 761 (1993).
V. V. Usov, Sov. Astron. Lett. [**14**]{}, 258 (1988).
N. R. Ikhsanov and P. L. Biermann, Astron. Astrophys. [**445**]{}, 305 (2006). Y. Terada [*et al.*]{}, Publ. Astron. Soc. Japan [**60**]{}, 387 (2008). B. Zhang and J. Gil, Astrophys. J. [**631**]{}, L143 (2005) \[arXiv:astro-ph/0508213\]. V. S. Ptuskin, F. C. Jones, E. S. Seo and R. Sina, Adv. Space Res. [**37**]{}, 1909 (2006).
M. A. Lee, ApJ [**229**]{}, 424 (1979).
A. A. Lagutin and Y. A. Nikulin, J. Exp. Theor. Phys. [**81**]{}, 825 (1995).
R. E. Falcon, M. H. Winget, M. H. Montgomery & K. A. Williams, 2010, ApJ, 712, 585.
R. Narayan, 1987, ApJ, 319, 162.
D. R. Lorimer, M. Bailes,R. J. Dewey, & P. A. Harrison, 1993, MNRAS, 263, 401.
P. Lor$\acute{\text{e}}$n-Aguilar, J. Isern, & E. Garc$\acute{\text{i}}$a-Berro, 2009, A & A, 500, 1193.
J. Arons, & E. T. Scharlemann, 1979, ApJ, 231, 854.
A. G. Muslimov, & A. I. Tsygan, 1991, MNRAS, 225, 61.
B. Zhang, & G. J. Qiao, 1996, A& A, 310, 135.
K. Chen, & M. Ruderman, 1993, ApJ, 402, 264.
T. Erber, 1966, Rev. Mod. Phys. 38, 626.
C. F. Kennel, & F. V. Coroniti, 1984, ApJ, 283, 710.
M. J. Rees, & J. E. Gunn, 1974, MNRAS, 167, 1.
P. Patterson, 1979, ApJ, 234, 978.
M. Eracleous, et al. 1994, ApJ, 433, 313.
P. J. Meintjes, B. C. Raubenheimer, O. C. de Jager, C. Brink, H. I. Nel, A. R. North, G. van Urk & B. Visser, 1992, ApJ, 401, 325.
P. J. Meintjes, O. C. de Jager, B. C. Raubenheimer, H. I. Nel, A. R. North, D. A. H. Buckley & C. Koen, 1994, ApJ, 434, 292.
N. R. Ikhsanov, 1998, A & A, 338, 526.
G. A. Wynn, A. R. King & K. Horn, 1997, MNRAS, 286, 436.
M. A. Barstow, S. Jordan, D. O’Donoghue, M. R. Burleigh, R. Napiwotzki & M. K. Harrop-Allin, 1995, MNRAS, 277, 971.
L. Ferrario, S. Vennes, D. T. Wickramasinghe, J. A. Bailey & D. J. Christian, 1997, MNRAS, 292, 205.
R. Moderski, M. Sikora, P. S. Coppi & F. Aharonian, 2005, MNRAS, 363, 952.
T. A. Porter, I. V. Moskalenko, A. W. Strong, E. Orlando & L. Bouchet, 2008, ApJ, 682, 400.
G. Schatz, et al. 2003, Proc. 28th Intl. Cosmic Ray Conf., Tsukuba, 4, 2293.
S. Gupta, 2009, Proc. 31st Intl. Cosmic Ray Conf.
M. C. Chantell, et al. 1997, Phys. Rev. Lett. 79, 1805.
D. T. Wickramasinghe & L. Ferrario, 2000, PASP, 112, 873.
G. Schmidt, et al. 2002, ApJ, 510, 1101.
K. M. Vanlandingham, et al. 2005, ApJ, 130, 734.
S. Mereghetti, A. Tiengo, P. Esposito, N. La Palombara, G. L. Israel, L. Stella, Science, 325, 1222 (2009).
T. Takahashi, et al. 2008, Proc. SPIE, 7001, 7001.
A. K. Harding & A. G. Muslimov, 2001, ApJ, 556, 987.
A. K. Harding & A. G. Muslimov, 2002, ApJ, 568, 862.
http://galprop.stanford.edu/
[^1]: The difference between the ATIC/PPB-BETS and Fermi results is still under debate [@Israel:2009]. In this paper we call these features as a whole “[*excesses*]{}”.
[^2]: Since highly magnetized WDs have a higher mean mass ($\sim 0.95 M_{\odot}$) than the overall average ($\sim 0.6 M_{\odot}$) [@Liebert:2002qu], the fraction of mergers that leave spinning WDs could be lower than the average.
[^3]: In [@usov:1993], Usov discussed the multiplicity in the magnetosphere for the X-ray pulsar 1E 2259+586 based on the WD pulsar model by investigating the observed X-ray luminosity, in which ${\cal M} \sim 0.1$.
[^4]: In this case, the diffusion coefficient can be estimated as $$D_{\text{sh}} \sim 10^{24} \left( \frac{\epsilon}{3 \mathrm{GeV}}\right) \mathrm{cm^2/s}.$$ This $D_{\text{sh}}$ is smaller than the diffusion coefficient in the ISM (see Eq.(\[diffusion\])), which means that we consider the situation where the $e^{\pm}$ are highly trapped in the shocked region.
[^5]: These assumptions become worse as the energy of $e^{\pm}$ decreases below $\lesssim 10 \mathrm{GeV}$. We discuss the validity of our results by comparing with more realistic calculation using the GALPROP code [@GALPROP] in Sec.\[sec4\].
[^6]: We also reduce the flux by $30 \%$ since the fitting function of Baltz & Edsj$\ddot{\text{o}}$ (1999) provides a larger flux than the data even without other contributions.
---
abstract: 'We report the discovery of a Neptune-size planet ($R_p = 3.0 R_\oplus$) in the Hyades Cluster. The host star is in a binary system, comprising a K5V star and M7/8V star with a projected separation of 40 AU. The planet orbits the primary star with an orbital period of 17.3 days and a transit duration of 3 hours. The host star is bright ($V=11.2$, $J=9.1$) and so may be a good target for precise radial velocity measurements. K2-136A c is the first Neptune-sized planet to be found orbiting in a binary system within an open cluster. The Hyades is the nearest star cluster to the Sun, has an age of 625-750 Myr, and forms one of the fundamental rungs in the distance ladder; understanding the planet population in such a well-studied cluster can help us understand and set constraints on the formation and evolution of planetary systems.'
author:
- 'David R. Ciardi, Ian J. M. Crossfield, Adina D. Feinstein, Joshua E. Schlieder, Erik A. Petigura, Trevor J. David, Makennah Bristow, Rahul I. Patel, Lauren Arnold, Björn Benneke, Jessie L. Christiansen, Courtney D. Dressing, Benjamin J. Fulton, Andrew W. Howard, Howard Isaacson, Evan Sinukoff, Beverly Thackeray\'
title: 'K2-136: A Binary System in the Hyades Cluster Hosting a Neptune-Sized Planet'
---
Introduction {#sec:intro}
============
Most stars are thought to form in open clusters [@ll2003], but most planets have been found around old, isolated stars that have long since left their nascent cluster families. There has been a series of studies aimed at finding planets in open clusters. Part of the reason to study planets in open clusters is that the stars are typically well understood in terms of mass, metallicity, and age (especially in comparison to field stars); because the derivation of planet parameters requires accurate and precise knowledge of the host stars, any planets found within open clusters would also be much better understood. With the pending release of Gaia distances, most field star planetary systems will be defined nearly as well as is currently possible for systems in open clusters – with the exception of ages. The discovery of exoplanets in open clusters enables us to explore possible evolutionary effects on the distribution and characteristics of exoplanets as a function of time and age.
While there are more than 1300 confirmed exoplanets with mass determinations and more than 2200 statistically validated planets [@NEA; @akeson2013], only about $\sim$1% have been discovered in open clusters, and the majority of these are Jupiter-sized planets. The first planet discovered in any open cluster was in the Hyades; $\epsilon$ Tauri b is a $\approx 7M_{Jup}$ mass planet in a 600 day orbit around an evolved K0 giant star [@sato2007]. Since that discovery, there have been a handful of planets discovered via radial velocity in young open clusters, including an additional planet in the Hyades \[[HD285507 b, @quinn2014]\], two planets in the Taurus region \[[V830 Tau b, @donati2016]; [Cl Tau b, @jk2016]\], one planet in the distant cluster NGC 2423 \[[NGC2423-3 b, @lm2007]\], and three planets in the Praesepe Cluster \[[Pr0201 b, Pr0211 b, @quinn2012]; [Pr0211 c, @malavolta2016]\]. However, most cluster transit surveys prior to the Kepler mission were not sensitive enough, or did not have large enough samples, to detect the more common Neptune-sized and smaller planets [e.g., @pg2006; @quinn2012; @quinn2014; @brucalassi2017].
With Kepler and K2, a handful of transiting small planets have been discovered in open clusters. Kepler was sensitive enough to detect two super-Earth-sized planets in the billion-year-old cluster NGC 6811, located more than 1000 pc away [Kepler-66b, $2.80$R$_\oplus$; Kepler-67b, $2.94$R$_\oplus$; @meibom2013]. K2, through its larger survey area, has been surveying open clusters much closer to home [@howell2014]. With K2, a sub-Saturn-sized planet (K2-33b, $R=5.04$R$_\oplus$) was discovered in the 5-10 Myr old cluster Upper Scorpius [@david2016b; @mann2016]; six planets spanning super-Earth to Neptune sizes (K2-95b, K2-100b, 101b, 102b, 103b, 104b) have been detected orbiting K and M dwarfs in the Praesepe Cluster [@obermeier2016; @mann2017], and a Neptune-sized planet (K2-25b, $R=3.47$R$_\oplus$) was discovered orbiting an M4.5V star in the Hyades [@mann2016; @david2016a].
A key goal of young cluster exoplanet searches is to test whether planets around young cluster stars have the same occurrence distribution as mature planets around field stars [e.g., @meibom2013]; this would be a relatively expected result, if field stars are indeed primarily born in clusters. Thus, understanding the frequency and distribution of planets in open clusters – particularly those that are younger than the field stars – can help constrain the formation and evolution mechanisms that shape the frequency and distribution of planets observed in the older field stars.
The Hyades is the nearest open cluster to the Sun and is one of the best studied clusters. The cluster center is located $46.34\pm0.27$ pc away [@vanleeuwen2009], but the cluster members themselves span an extent that is 10-20 pc across [e.g., @mann2016]. The Hyades has a metallicity slightly higher than solar (\[Fe/H\] $\approx 0.13\pm0.01$; @paulson2003 [@maderak2013]), and typically, the age of the Hyades is quoted as $625\pm50$ Myr [@perryman1998], although some recent work indicates that the cluster may be slightly older [$750\pm100$ Myr; @bh2015; @david2015]. The stellar binarity rate within the Hyades is also fairly well documented showing a strong dependence on stellar type; stars earlier than solar have nearly a 100% companion fraction and that fraction drops to below 50% for early-K stars [@bv2007].
![image](f1.pdf)
Recent work has suggested that the presence of stellar companions may inhibit the formation of planets [e.g., @kraus2016], but other work has suggested the stellar companion rate of planet hosting stars is similar to the field star companion rate [e.g., @horch2014]. Additionally, stellar encounters (i.e., fly-bys and collisions) within the cluster environment may alter the formation and/or survival of planets and planetary systems, in comparison to what might be expected for single isolated stars [e.g., @malmberg2011]. Finding planets within the Hyades cluster can help yield important constraints on planet formation and evolutionary theories – particularly, if the frequency of planets in the Hyades as a function of stellar type and stellar multiplicity can be established.
This paper presents the discovery of a Neptune-sized planet hosted by the K dwarf EPIC 247589423 within the Hyades cluster. The detection was made with K2; we have performed a suite of follow-up observations which include high-resolution imaging and spectroscopy. In addition to the transit light curve of the planet, the imaging was used to detect a late M-dwarf stellar companion; spectroscopy was utilized to derive precise stellar parameters of the primary host star and to show, based upon kinematic arguments, that the star is indeed a Hyades member. The primary star is a K5V and has an M7/8V stellar companion located approximately 40 AU (projected) from the primary star. The light curve modeling and validation are consistent with a Neptune-sized planet ($\sim3.0$ R$_\oplus$) orbiting the primary star with a period of $\sim17.3$ days. With the discovery of the stellar companion and the planet, we set the nomenclature of the system: K2-136A is the K5V primary star; K2-136B is the M7/8V stellar companion. Finally, we show that the Neptune-sized planet most likely orbits the primary star.
K2 Detection {#sec:k2detect}
============
EPIC 247589423 (LP 358-348) was observed by K2 at a 30-minute cadence in Campaign 13, which ran from 2017 March 08 until 2017 May 27. The star was proposed for observation by numerous K2 General Observer programs: 13008, A. Mann; 13018, I. Crossfield; 13023, L. Rebull; 13049, E. Quintana; 13064, M. Agueros; 13077, M. Endl; and 13090, J. Glaser. The properties of EPIC 247589423 are summarized in Table \[tab:stellar\].
We identified the transit candidate in the light curve analysis of raw K2 cadence data using a series of free software tools made available by the community, following the same approach described by @crossfield2016 and Christiansen et al. (in review). In brief: we processed the cadence data into target pixel files with `Kadenza`[^1] [@kadenza], generated time-series photometry and removed K2’s well-known systematics using `k2phot`[^2], and searched for candidate planet transits using `TERRA`[^3] [@petigura2013b; @petigura2013a]. Fig. \[fig:fits\] shows the several stages of light curve processing.
The resulting light curve shows coherent variation with a peak-to-peak amplitude of roughly 1%, as seen in Fig. \[fig:fits\]c. `TERRA` also identified one strong transit-like signal clearly visible in Fig. \[fig:fits\] with $P\approx17.3$ d, a depth of $\sim 1500$ ppm, and with a S/N=18. We saw no obvious secondary eclipse ($\lesssim 240$ ppm) or evidence of flux modulation on the detected period. After masking out those transits, `TERRA` found no other transit signals with S/N$\ge$7.
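As a rough illustration of this kind of transit search (not the actual `TERRA` matched-filter pipeline), a box least squares periodogram can recover a 17.3 d, $\sim1500$ ppm box-shaped signal injected into white noise; all of the synthetic-data choices in the sketch below are for demonstration only.

```python
# Illustrative sketch only: recover a ~17.3 d, ~1500 ppm box-shaped transit
# signal with a box least squares periodogram (not the TERRA pipeline itself).
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(0)
time = np.arange(0.0, 80.0, 0.5 / 24.0)             # ~80 d of 30-minute cadences
flux = 1.0 + 5e-4 * rng.standard_normal(time.size)  # white noise at the ~500 ppm level
period, t0, depth, duration = 17.3077, 5.0, 1.5e-3, 3.6 / 24.0
in_transit = np.abs((time - t0 + 0.5 * period) % period - 0.5 * period) < 0.5 * duration
flux[in_transit] -= depth                           # inject the transit

bls = BoxLeastSquares(time, flux)
result = bls.power(np.linspace(1.0, 30.0, 20000), duration)
best = np.argmax(result.power)
print(f"recovered P = {result.period[best]:.2f} d, "
      f"depth = {result.depth[best] * 1e6:.0f} ppm")
```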
Follow-Up Observations {#sec:fop}
======================
Following the detection of the candidate planet around EPIC 247589423 in the K2 light curve, we began our standard follow-up process to assess the stellar parameters of the targets and to validate the planetary candidate as a true planetary system utilizing both archival data and new imaging and spectroscopy data [e.g., @crossfield2016; @martinez2017; @dressing2017a; @petigura2017].
![POSS1 Blue and Red plates observed in 1950. The circle shows the location of EPIC 247589423 at the 2017 position of the star. Between 1950 and 2017, the star moved by $\sim 6\arcsec$, which can be clearly seen in the POSS images. The POSS1 plate rules out a background star coincident with the current location of EPIC 247589423 to $\Delta B\sim 3$ mag and $\Delta R\sim 4$ mag. \[fig:pm\]](f2.pdf)
Archival Imaging and Proper Motion {#subsec:pm}
----------------------------------
EPIC 247589423 is a high proper motion star [$+81.8$ mas yr$^{-1}$ in right ascension and $-35.2$ mas yr$^{-1}$ in declination; UCAC4, @zacharias2013]. In the 67 years since the 1950 Palomar Observatory Sky Survey (POSS) images, EPIC 247589423 has moved more than 6$\arcsec$, enabling us to utilize archival POSS data to search for background stars that are now, in 2017, hidden by EPIC 247589423. The Blue POSS1 image has better resolution ($\sim 2\arcsec$ *vs.* $\sim 4\arcsec$), but the Red POSS1 image goes deeper ($\Delta \mathrm{mag} \sim 4$ *vs.* $\Delta \mathrm{mag} \sim 6$).
Using the 1950 POSS data (Fig. \[fig:pm\]), we find no evidence of a background star at the current position of EPIC 247589423 to a differential magnitude of $\Delta B\sim 3$ mag in the blue and $\Delta R\sim 4$ mag in the red. Because EPIC 247589423 is slightly saturated in the POSS images, this sensitivity was estimated by placing fake sources at the epoch 2017 position of EPIC 247589423 in the epoch 1950 images and estimating the 5$\sigma$ threshold for detection. The photometric scale of the image (and hence, the magnitudes of the injected test stars) was set using the star located $40\arcsec$ to the southeast of EPIC 247589423, which has optical magnitudes of approximately $B\approx 17$ mag and $R\approx 16$ mag.
This analysis rules out a $\lesssim10\%$ eclipsing binary that is $3-4$ magnitudes fainter than the primary star, but it does not rule out the more extreme background eclipsing binaries (a 100% eclipsing binary could produce a 1500 ppm transit at a differential magnitude of $\sim 7$ magnitudes). However, this analysis was sufficient for us to initiate the remainder of the follow-up observations.
![image](f3.pdf)
Spectroscopy {#subsec:spec}
------------
We performed both near-infrared and optical spectroscopy in order to characterize the host star properties and to search for secondary spectral lines.
### IRTF SpeX {#subsubsec:irtf}
We observed EPIC 247589423 with the near-infrared cross-dispersed spectrograph SpeX [@rayner2003; @raynor2004] on the 3m NASA Infrared Telescope Facility on 2017 July 24 UT (Program 2017A019, PI C. Dressing). While available photometry indicates that the star is late-type, follow-up spectroscopy is essential to measure the spectral type and fundamental parameters.
We observed EPIC 247589423 under clear skies with an average seeing of $\sim$ 0.7$^{\prime\prime}$. We used SpeX in its short cross-dispersed mode (SXD) with the 0.3$\times$15$^{\prime\prime}$ slit, allowing us to observe the star over $0.7-2.55\ \mu$m at resolution $R \sim 2000$. The target was observed at two locations along the slit in three AB nod pairs using a 50s integration time in each frame, providing a total integration time of 300s. The slit position angle was synced to the parallactic angle to avoid differential slit losses. An A0 standard, HD31411, was observed after our target and flat and arc lamp exposures were taken immediately after that, to allow for telluric correction and wavelength calibration using the data reduction package, SpeXTool [@vacca2003; @cushing2004].
SpeXTool performs flat fielding, bad pixel removal, wavelength calibration, sky subtraction, flux calibration, and spectral extraction and combination. The final extracted and combined spectra have signal-to-noise ratios (SNR) of 175 per resolution element in the $J$-band (1.25$\mu$m), 217 per resolution element in the $H$-band (1.6$\mu$m), and 208 per resolution element in the $K$-band (2.2$\mu$m). The $JHK$-band spectra were compared to late-type standards from the IRTF Spectral Library [@rayner2009], seen in Figure \[fig:spex\]. EPIC 247589423 is an approximate visual match to the K5 standard across all three bands. The increased noise visible in the regions of strong H$_2$O absorption is a result of increased telluric contamination, potentially due to the relatively large $\sim$19$^{\circ}$ separation between the primary target and the available A0 standard.
Following the methods presented in [@mann2013], we use our SpeX spectrum to estimate the fundamental parameters of effective temperature ([$T_\mathrm{eff}$]{}), radius ($R_*$), mass ($M_*$), and luminosity ($L_*$) for EPIC 247589423. We used the index-based temperature relations of @mann2013 to estimate the temperature in each of the $J$-, $H$- and $K$-bands and calculated the mean of the three values. We estimated the uncertainty by adding in quadrature the standard deviation of the mean and the scatter in each of the @mann2013 index relations. The resulting [$T_\mathrm{eff}$]{} = 4360 $\pm$ 206 K was then used to estimate the remaining stellar parameters and their uncertainties using the polynomial relations from @mann2013. We estimate $R_*/R_\odot$ = 0.674 $\pm$ 0.061, $M_*/M_\odot$ = 0.696 $\pm$ 0.070, and $L_*/L_\odot$ = 0.152 $\pm$ 0.052. The radius and mass yield an empirical stellar density of $3.2 \pm 1.0$ g cm$^{-3}$.
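The empirical density quoted above follows directly from the mass and radius; a minimal sketch of that calculation, with a simple Monte Carlo propagation of the quoted uncertainties (our own choice of method, not one specified in the text), is:

```python
import numpy as np

M_SUN_G, R_SUN_CM = 1.989e33, 6.957e10
rng = np.random.default_rng(1)

# SpeX-derived mass and radius from above, sampled with their 1-sigma errors
m_star = rng.normal(0.696, 0.070, 100_000)   # [M_sun]
r_star = rng.normal(0.674, 0.061, 100_000)   # [R_sun]

rho = (m_star * M_SUN_G) / (4.0 / 3.0 * np.pi * (r_star * R_SUN_CM) ** 3)
print(f"rho_* = {rho.mean():.1f} +/- {rho.std():.1f} g cm^-3")
# close to the 3.2 +/- 1.0 g cm^-3 quoted above
```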
We also used the TiO5 and CaH3 molecular indices from @lepine2003 to measure a spectral type of K7 $\pm$ 0.5. This visible index based spectral type is consistent with the estimated stellar parameters [@pecaut2013] and the visual comparison to standards in the near-IR. Given these results, we adopt a conservative dwarf spectral type of K5 $\pm$ 1 for EPIC 247589423. The late-M type companion detected at close separation in near-IR adaptive optics imaging is $\sim$5-10 magnitudes fainter than the K5 star at visible to near-IR wavelengths (see §\[subsec:hri\]). Thus, it only contributes $\lesssim$1% of the flux across the wavelength ranges used in the SpeX analyses and does not significantly affect the results.
### Keck HIRES {#subsubsec:hires}
We also observed the star on UT 2017 Aug 04 with the HIRES spectrometer [@vogt1994] on the Keck I telescope. We observed for 64 s using the C2 decker and no iodine cell, achieving 10,000 counts on the HIRES exposure meter (corresponding to S/N of 22 pix$^{-1}$ on blaze). As an independent check of the SpeX derived values, stellar parameters were estimated from the iodine-free template spectrum using the SpecMatch-Emp code [@yee2017][^4].
SpecMatch-Emp contains a dense spectral library of $\sim$400 touchstone stars with well-determined properties. This library is made up of HIRES spectra taken at high signal to noise ($SNR > 100$/pix). SpecMatch-Emp fits an unknown target spectrum by finding the optimum linear combination of library spectra that best matches the target spectrum. SpecMatch-Emp performs particularly well when analyzing cool stars with ${\ensuremath{T_\mathrm{eff}}}< 4700$ K (SpT $\ge$ K4). At low temperatures, the onset of dense molecular bands challenges LTE spectral synthesis codes. SpecMatch-Emp achieves an accuracy of 70 K in [$T_\mathrm{eff}$]{}, 10% in $R_*$, and 0.12 dex in \[Fe/H\] [@yee2017]. Because the spectral library radii are measured using model-independent techniques such as interferometry or spectrophotometry, the derived radii do not suffer from model-dependent offsets associated with converting [$T_\mathrm{eff}$]{}, $\log g$, and \[Fe/H\] into $R_*$.
The HIRES SpecMatch-Emp results are consistent with the SpeX results and the adopted spectral type of K5 $\pm$ 1; we find ${\ensuremath{T_\mathrm{eff}}}=4364 \pm 70$ K, $R_*=0.71\pm0.10\ R_\odot$, \[Fe/H\]=$+0.15\pm0.09$. We note that the stellar metallicity is consistent with the Hyades cluster metallicity of \[Fe/H\]$ = 0.13$. We use the `isochrones` package [@morton2015] to convert the SpecMatch-Emp stellar parameters ([$T_\mathrm{eff}$]{}, $R_*$, and \[Fe/H\]) and the $K_s$ magnitude into a stellar mass and $\log g$. With these inputs, we find $M_*=0.71 \pm 0.06 M_\odot$ and $\log g=4.63 \pm 0.11$. We also used the HIRES spectrum to measure the star’s radial velocity, RV = 39.6 $\pm$ 0.2 km s$^{-1}$, and projected rotational velocity, $v\mathrm{sin}i = 3.9\pm1.0$ km s$^{-1}$ (see Table \[tab:stellar\]).
To search for stellar companions at small separations, we ran the secondary line search algorithm presented by @kolbl2015 on the HIRES spectrum. There is no evidence of secondary lines in the spectrum for companions down to $\Delta V\lesssim5$ mag and $\Delta$RV$\gtrsim$10 km s$^{-1}$. These results complement the results of the high resolution imaging where the spectroscopy can probe regions inside the inner working angle of the imaging. The results are also consistent with the results of the infrared high-resolution imaging presented in the next section where a late M-dwarf has been detected. That M-dwarf would be $\sim$10 magnitudes fainter than the K5V star in the $V$-band and beyond the sensitivity of the HIRES spectrum.
![Contrast sensitivities and inset images of EPIC 247589423 in the J, H, and $Br$-$\gamma$ filters as observed with the Palomar Observatory Hale Telescope adaptive optics system; the secondary companion $\sim 0.72\arcsec$ to the south of the primary target is clearly detected. The $5\sigma$ contrast limits for additional companions, in $\Delta$magnitude, are plotted against angular separation in arcseconds for each of the filters. The black points represent one step in the FWHM resolution of the images. \[fig:ao\_contrast\]](f4.pdf)
High-resolution Imaging {#subsec:hri}
-----------------------
As part of our standard process for validating transiting exoplanets, we observed EPIC 247589423 with infrared high-resolution adaptive optics (AO) imaging, both at Keck Observatory and Palomar Observatory. The Keck Observatory observations were made with the NIRC2 instrument on Keck-II behind the natural guide star AO system. The observations were made on 2017 Aug 20 in the narrow-band $Br-\gamma$ filter in the standard 3-point dither pattern that is used with NIRC2 to avoid the left lower quadrant of the detector which is typically noisier than the other three quadrants. The dither pattern step size was $3\arcsec$ and was repeated three times, with each dither offset from the previous dither by $0.5\arcsec$. The observations utilized an integration time of 3 seconds with one coadd per frame for a total of 27 seconds. The camera was in the narrow-angle mode with a full field of view of $10\arcsec$ and a pixel scale of approximately $0.1\arcsec$ per pixel. The Keck AO observations clearly detected a faint companion approximately $0.7\arcsec$ to the south of the primary target. However, good relative photometry of the detected companion was hampered by the fixed speckle pattern, which our post-processing was unable to fully remove.
EPIC 247589423 was re-observed with the $200\arcsec$ Hale Telescope at Palomar Observatory on 2017 Sep 06 utilizing the near-infrared AO system P3K and the infrared camera PHARO [@hayward2001]. PHARO has a pixel scale of $0.025\arcsec$ per pixel with a full field of view of approximately $25\arcsec$. The data were obtained with a narrow-band $Br$-$\gamma$ filter $(\lambda_o = 2.166; \Delta\lambda = 0.02\mu$m ), a narrow-band $H$-continuum filter $(\lambda_o = 1.668; \Delta\lambda = 0.0018\mu$m ), and a standard $J$-band filter $(\lambda_o = 1.246; \Delta\lambda = 0.162\mu$m).
The AO data were obtained in a 5-point quincunx dither pattern with each dither position separated by 4$^{\prime\prime}$. Each dither position is observed 3 times with each pattern offset from the previous pattern by $0.5^{\prime\prime}$ for a total of 15 frames. The integration time per frame was 4.2 seconds, 9.9 seconds, and 1.4 seconds in the $Br$-$\gamma$, $H$-cont, and $J$ filters. We use the dithered images to remove sky background and dark current, and then align, flat-field, and stack the individual images. The PHARO AO data have a resolution of 0.10$^{\prime\prime}$ (FWHM) in the $Br$-$\gamma$ filter and 0.08$^{\prime\prime}$ (FWHM) in the $H$-$cont$ and $J$ filters, respectively.
The sensitivities of the AO data were determined by injecting fake sources into the final combined images with separations from the primary targets in integer multiples of the central source’s FWHM [@furlan2017]. The sensitivity curves shown in Figure \[fig:ao\_contrast\] represent the 5$\sigma$ limits of the imaging data.
The nearby stellar companion was detected in all three filters with PHARO. The companion separation was measured from the $Br$-$\gamma$ image and found to be $\Delta\alpha = 0.10\arcsec \pm 0.003\arcsec$ and $\Delta\delta = 0.723\arcsec \pm 0.03\arcsec$. At the distance of the Hyades, the companion has a projected separation from the primary star of $\approx 40$ AU. The AO imaging rules out the presence of any additional stars within $\sim 0.5\arcsec$ of the primary ($\sim 30$ AU) and the presence of any brown dwarf or widely-separated tertiary components beyond $0.5\arcsec$ ($\sim 30-1000$ AU). The presence of the blended companion must be taken into account to obtain the correct transit depth and planetary radius [@ciardi2015].
Table \[tab:stellar\] presents the deblended magnitudes of both stars. The stars have blended 2MASS magnitudes of $J = 9.343 \pm 0.026$ mag, $H=8.496 \pm 0.02$ mag and $K_s = 9.196 \pm 0.023$ mag. The stars have measured magnitude differences of $\Delta J = 4.97 \pm 0.04$ mag, $\Delta H = 4.96 \pm 0.03$ mag, and $\Delta K_s = 4.65 \pm 0.03$ mag. $Br$-$\gamma$ has a central wavelength that is sufficiently close to $Ks$ to enable the deblending of the 2MASS magnitudes into the two components. The primary star has deblended apparent magnitudes of $J_1 = 9.11 \pm 0.04$ mag, $H_1 = 8.51 \pm 0.02$ mag, and $Ks_1 = 8.38 \pm 0.02$ mag, corresponding to $(J-H)_1 = 0.60 \pm 0.05$ mag and $(H-K_s)_1 = 0.13 \pm 0.02$ mag; the companion star has deblended apparent magnitudes of $J_2 = 14.1 \pm 0.1$ mag, $H_2 = 13.47 \pm 0.04$ mag, and $Ks_2 = 13.03 \pm 0.03$ mag, corresponding to $(J-H)_2 = 0.63 \pm 0.11$ mag and $(H-K_s)_2 = 0.44 \pm 0.05$ mag. Utilizing the $(Kepmag - Ks)\ vs.\ (J-Ks)$ color relationships [@howell2012], we derive approximate deblended Kepler magnitudes of the two components of $Kepmag_1 = 10.9\pm0.1$ mag and $Kepmag_2 = 17.4\pm0.2$ mag, for a Kepler magnitude difference of $\Delta Kepmag = 6.5\pm0.2$ mag, which is used when fitting the light curves and deriving a true transit depth.
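A minimal sketch of the deblending arithmetic (splitting a blended magnitude into two components given the measured magnitude difference) is shown below; the final call uses generic example values rather than the measured blends, which should be taken from Table \[tab:stellar\].

```python
import numpy as np

def deblend(m_blend, dmag):
    """Split a blended magnitude into primary and secondary components,
    given the blend and the measured primary-secondary magnitude difference."""
    m_primary = m_blend + 2.5 * np.log10(1.0 + 10.0 ** (-0.4 * dmag))
    return m_primary, m_primary + dmag

# The AO magnitude differences imply the companion contributes only ~1% of the
# blended near-IR flux, so the deblending correction is only ~0.01-0.015 mag.
for band, dmag in (("J", 4.97), ("H", 4.96), ("Ks", 4.65)):
    frac = 10.0 ** (-0.4 * dmag) / (1.0 + 10.0 ** (-0.4 * dmag))
    print(f"Delta {band} = {dmag}: companion flux fraction = {frac:.4f}")

m1, m2 = deblend(10.0, 4.65)   # generic example values, not the measured blend
```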
The companion star has infrared colors that are consistent with M7/8V spectral type (Figure \[fig:cc\]). It is unlikely that the star is a heavily reddened background star. Based upon an $R=3.1$ extinction law, an early-F or late-A star would have to be attenuated by more than 6 magnitudes of extinction to make the star appear as a late M-dwarf. The entire line-of-sight extinction through the Galaxy is only $A_V\approx2$ mag [@sf2011] making a background A or F star an unlikely source of the detected companion.
![2MASS $JHKs$ color-color diagram showing the dwarf branch locus (green), the giant branch locus (blue), and the brown dwarf locus (red). The black dashed lines represent the direction of reddening induced by extinction ($A_V$). The positions of the stellar components are overplotted showing the primary component is consistent with being a K5V, and the secondary component is consistent with being an M7/8V.\[fig:cc\]](f5.pdf)
Association with Hyades Cluster {#sec:hyades}
===============================
There is a sparse amount of literature on our target star, but it has been consistently regarded as a Hyades member. The star was first proposed as a Hyades member by @weis1983 on the basis of photometry and proper motions, and included in a later study on the H-R diagram of the cluster [@reid1993]. The star was also detected as an X-ray source from ROSAT observations [@stern1995] and as a GALEX NUV source, possibly due to the low-mass companion which we report here. Finally, it was included in a previous search for transiting planets in the Hyades using photometry from the WASP telescope, and indeed reported as a candidate transiting planet host [@gaidos2014]. However, the period and depth of the candidate signal detected by those authors ($P$=3.169 d, $\delta$=0.38%) bears no resemblance to any transit or stellar variability signal observed in the K2 photometry.
The current Gaia release (DR1) only has a photometric magnitude ($G=10.4$ mag) for the primary star and has no detection for the companion star. The association of the stars with the Hyades cluster can be investigated via photometric and/or kinematic methods.
The spectroscopic observations (§\[subsec:spec\]) and the infrared colors of the primary star are consistent with the primary star being a K5V (see Figure \[fig:cc\]). In the V-band, the M7/8V companion is expected to be more than 10 magnitudes fainter; as a result, the measured optical magnitude of $V=11.20\pm0.03$ mag is dominated by the primary star at the 99.99% level. Thus, the V-band magnitude can be utilized to determine the photometric distance to the primary star.
Based upon the 625–800 Myr isochrone models from @choi2016, a K5V star has an absolute magnitude of $M_V=7.57$, corresponding to a distance of $d_{phot}\sim 53\pm1$ pc. Given the $10-20$ pc spread in the Hyades cluster [@mann2016], this distance is in reasonable agreement with the cluster center distance of 45 pc.
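The photometric distance is simply the distance modulus inverted; a minimal sketch (neglecting extinction, which is small toward the Hyades) is:

```python
V_apparent = 11.20   # APASS V; the primary dominates the optical flux
M_V = 7.57           # absolute V magnitude of a K5V on a 625-800 Myr isochrone
d_phot = 10.0 ** ((V_apparent - M_V + 5.0) / 5.0)
print(f"d_phot ~ {d_phot:.0f} pc")   # ~53 pc
```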
The kinematics of the Hyades cluster center have been re-evaluated with the release of the Gaia DR1 and have the following values for the cluster center radial and proper motions: $v_{rad}=39.1\pm0.02$ km/s, $\mu_\alpha = 104.92 \pm 0.12$ mas/yr, and $\mu_\delta = -28.00 \pm 0.09$ mas/yr. The values measured for EPIC 247589423 are very similar to those of the Hyades cluster center (Table \[tab:stellar\]).
Using the measured proper motions from UCAC4 and the radial velocity derived from the HIRES spectrum ($v_{rad} = 39.6\pm0.2$ km/s, $\mu_\alpha = 81.8 \pm 1.0$ mas/yr, $\mu_\delta = -35.2 \pm 0.9$ mas/yr), we recalculated the $UVW$ components for the target, but allowed the distance to vary from 1 pc to 100 pc in steps of 1 pc. By minimizing the differences between the derived $UVW$ velocities and those established for the Hyades cluster center [@vanleeuwen2009], we derived a kinematic distance of $d_\mathrm{kin} = 58\pm2$ pc. We also used the star’s partial kinematics and the methods presented in @lepine2009 to calculate the predicted radial velocity of the star if it is a Hyades member. We find RV$_p$ = 37.8 $\pm$ 0.9 km s$^{-1}$, consistent with our measured HIRES RV at the 2$\sigma$ level. With the general agreements between the photometric and kinematic distances and the general agreement with the kinematic parameters and distance of the Hyades cluster center, we regard EPIC 247589423A as a Hyades cluster member with $>90\%$ probability.
The association of the M7/8V companion to the cluster can only be based upon photometric considerations. The absolute magnitudes of a late M-dwarf (M7/8) star span $M_J \approx 10-11$ mag and $M_K \approx 9-10$ mag corresponding to a distance for the detected M-dwarf companion of $d \sim 40-60$ pc [@choi2016]. The photometrically derived distances are consistent with the average distance to the Hyades and with the distance to primary K5V star. While not definitive, the spatial coincidence and the similar distances of the K5V and M7/8V stars suggests that the M-dwarf companion may be a physically associated star, and EPIC 247589423 is a wide binary system. Additional high-resolution imaging will be required to demonstrate common proper motion and physical association.
![Stellar density as a function of the stellar mass as derived from the @choi2016 models for a 625 Myr and an 800 Myr set of isochrones, both with \[Fe/H\]=+0.13, appropriate for the Hyades. The horizontal dashed line and associated shaded area represent the derived stellar density and uncertainties from the transit fit (assuming a circular orbit). The vertical gray lines indicate the adopted primary mass and approximate secondary mass. \[fig:density\]](f6.pdf)
Discussion {#sec:discussion}
==========
Neptune-sized Planet Orbiting the Primary Star {#subsec:planet}
----------------------------------------------
The large $4\arcsec$ pixels of Kepler mean that we cannot isolate the transit to either the primary or the secondary star from the Kepler data alone. However, we rule out the possibility that the observed transits are of the M7/M8 star due to the lack of a secondary eclipse. We also show that the transit duration strongly favors a planet transiting the K5 star and not the M7/8 dwarf companion.
With a flux difference in the Kepler bandpass of 6.5 magnitudes, the transit/eclipse would have to be $\gtrsim 65\%$ deep in order to be occurring around the M7/8, after the dilution of the brighter K5V is taken into account. Such a deep eclipse would require the late-type star to be an eclipsing binary star system, which would require a secondary eclipse depth of $\gtrsim 25\%$. Even with the dilution of the primary star, a 25% eclipse would still produce an observed eclipse that is $\approx 1\%$ deep. Yet, no secondary eclipse (to a limit of $\approx 0.03\%$) is observed, indicating that the transit event is not a stellar eclipse around the companion M-star.
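The dilution argument can be made quantitative in a few lines; the exact threshold depends on the adopted Kepler-band magnitude difference, so the $\gtrsim60-65\%$ figure returned by the sketch below is only approximate.

```python
# Required eclipse depth on the faint companion to mimic the observed event.
depth_obs = 1.5e-3                  # observed (blended) transit depth
d_kep = 6.5                         # Kepler-band magnitude difference
f2 = 10.0 ** (-0.4 * d_kep)         # companion flux relative to the primary
frac2 = f2 / (1.0 + f2)             # companion's share of the total flux
print(f"required eclipse depth on the companion: {depth_obs / frac2:.0%}")
```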
Additionally, the observed transit duration is more consistent with the orbiting event being around the primary K5V star rather than the M7/8V companion. The time between first and last contact is $T_{14}=3.59\pm0.15$ hr. For a K5V star and the measured stellar radius of $R\sim0.7$ R$_\odot$ and mass of $M \sim0.7$ M$_\odot$, a circular orbit with a period of 17.3 days would have a transit duration of $T_{14}\approx3.7$ hr. If instead the star that is transited is the M7/8V star, the stellar radius and mass reduce to $R\sim0.1$ R$_\odot$ and $M\sim0.1$ M$_\odot$, and corresponding transit duration would only last 1 hr – significantly shorter than the observed transit duration [@smo2003].
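A back-of-the-envelope version of this duration comparison, assuming a central transit on a circular orbit with $R_p \ll R_*$, reproduces the $\sim3.7$ hr *vs.* $\sim1$ hr contrast:

```python
import numpy as np

def duration_hours(period_days, m_star, r_star):
    """Approximate central-transit duration for a circular orbit (Rp << R*)."""
    a_au = (period_days / 365.25) ** (2.0 / 3.0) * m_star ** (1.0 / 3.0)  # Kepler's third law
    a_rsun = a_au * 215.03                                                # AU -> solar radii
    return (period_days * 24.0 / np.pi) * np.arcsin(r_star / a_rsun)

print(duration_hours(17.3077, 0.70, 0.70))   # K5V host:   ~3.7 hr
print(duration_hours(17.3077, 0.10, 0.10))   # M7/8V host: ~1.0 hr
```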
The transit duration could, of course, be longer if the orbit is not circular: $$\frac{t_\mathrm{ecc}}{t_\mathrm{circ}} = \frac{\sqrt{(1-e^2)}}{1+e\cos(\omega-90^\circ)}$$ where $e$ is the eccentricity and $\omega$ is the argument of periastron [e.g., @kane2012]. In order to achieve a transit duration near what is observed ($\sim3.5$ hr), the eccentricity would need to be $e\gtrsim0.85$ and the transit would need to occur near apoapsis. For any other argument of periastron, the eccentricity would need to be even larger. While this is not impossible, it seems a rather contrived scenario; of the 876 confirmed planets with eccentricity estimates, only 14 have eccentricities of $e\gtrsim0.8$ and only 7 have eccentricities of $e\gtrsim0.85$. Given that none of these systems have orbital periods less than 70 days, and these systems represent only $1-2\%$ of the 876 confirmed planets with measured eccentricities, we consider such a scenario for the planet presented here to be unlikely.
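Evaluating the expression above for a transit near apoapsis shows how extreme the orbit would have to be:

```python
import numpy as np

def duration_ratio(ecc, omega_deg):
    """t_ecc / t_circ from the expression above."""
    return np.sqrt(1.0 - ecc ** 2) / (1.0 + ecc * np.cos(np.radians(omega_deg - 90.0)))

# e = 0.85 with the transit near apoapsis (omega = 270 deg) stretches the duration
# by ~3.5x, roughly what is needed to turn a ~1 hr M-dwarf transit into ~3.5 hr.
print(duration_ratio(0.85, 270.0))
```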
Finally, the stellar density from the transit duration of the light curve is more consistent with the host star being a K5V star than being an M7/8V star. Based upon the @choi2016 models, a mid- to late-M-dwarf with a mass of $M \sim0.1$ M$_\odot$ should have a stellar density near $\rho \gtrsim 30$ g cm$^{-3}$. By comparison, the stellar density for a K5V star with a mass of $M \sim0.7$ M$_\odot$ should be near $\rho \approx 3.5$ g cm$^{-3}$, and is in reasonable agreement with the derived stellar density, assuming a circular orbit (see Figure \[fig:density\]). We also note that the higher end of the measured stellar density distribution is most consistent with our adopted primary mass and radius.
We also applied the `vespa` planet validation tool to this system. This tool assumes that a planet candidate orbits a single main-sequence star, so here we assume that the planet orbits the brighter of our two stars and that stars in the Hyades have converged onto the main sequence. In this analysis, which also incorporates our high-resolution imaging data and our exclusion of additional spectroscopic companions, `vespa` returns a false positive probability of $8\times10^{-5}$. Because of the caveats already mentioned, we do not take this as the true false positive probability, but qualitatively it indicates that, if the planet orbits the brighter K5V star, then it is likely not a false positive.
We regard all these items – the lack of a secondary eclipse, the length of the transit duration, the agreement of the derived stellar density with that of a K5V star, and the `vespa` results – as sufficient evidence to indicate that the observed transit most likely occurs around the primary star, that it is caused by a planet, and that, given the transit depth and the stellar radius ($0.71$R$_\odot$), the transiting planet is Neptune-sized.
As in our team’s previous work [@schlieder2016; @crossfield2017; @dressing2017b] we use the free `BATMAN`[^5] software [@kreidberg2015] to derive transit parameters from our light curve. We ran light curve fits while imposing Gaussian priors on the limb-darkening coefficients, using values appropriate for K5V stars, and included the dilution of the transit caused by the blending of the K5V with the M7/8V star. The derived transit and planet parameters are presented in Table \[tab:planet\], and the final fit to the phase-folded light curve is shown in Fig. \[fig:fits\]d. Based upon the transit fits and the HIRES stellar parameters, the transit is caused by a Neptune-sized planet (Rp$=3.03^{+0.53}_{-0.47}$ R$_\oplus$) orbiting the K5V primary star with an orbital period of P$=17.3077\pm0.0013$ days.
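A minimal sketch of how a diluted model light curve can be generated with the `BATMAN` package is shown below; the orbital and stellar values are round numbers taken from the tables, and the limb-darkening coefficients are placeholders rather than the priors used in the actual fit.

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0               # mid-transit time (phase-folded) [d]
params.per = 17.3077          # orbital period [d]
params.rp = 0.039             # Rp/R* for ~3.0 R_earth around a ~0.71 R_sun star
params.a = 35.0               # a/R*, roughly appropriate for this period and star
params.inc = 89.5             # orbital inclination [deg]
params.ecc = 0.0
params.w = 90.0
params.limb_dark = "quadratic"
params.u = [0.6, 0.1]         # placeholder limb-darkening coefficients

t = np.linspace(-0.15, 0.15, 1000)          # time from mid-transit [d]
model = batman.TransitModel(params, t)
flux_primary = model.light_curve(params)

# Dilute by the M7/8V companion (Delta Kepmag ~ 6.5 -> ~0.25% of the total flux)
f2 = 10.0 ** (-0.4 * 6.5)
flux_observed = (flux_primary + f2) / (1.0 + f2)
```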
We now refer to the planetary system as the following separate components: K2-136A is the primary K5V star and K2-136B is the M7/8V stellar companion. In the course of writing this paper, the authors became aware of a similar discovery paper [@mann2017b]. In that paper, they report the simultaneous discovery of the Neptune-sized planet reported here. They derive a very similar planetary radius ($R_p\approx2.9\pm0.1\ R_\oplus$ [*vs.*]{} $R_p\approx3.0\pm0.5\ R_\oplus$). In addition, they report two other planets in the system: an inner Earth-sized planet ($R_p=0.99\pm0.05\ R_\oplus$) and an outer super-Earth-sized planet ($R_p=1.45\pm0.1\ R_\oplus$). In their paper, they “letter” the planets in order of orbital period: the inner planet as K2-136 b, the outer planet as K2-136 d, and the Neptune-sized planet, jointly discovered, is referred to as K2-136 c. We adopt the same lettering scheme in this paper; however, given our discovery of the stellar companion, the planets should be referred to as K2-136A b, K2-136A c, and K2-136A d.
Stellar Rotation Period and Alignment {#subsec:rotation}
-------------------------------------
The light curve is clearly modulated by stellar variability that appears to be quasi-periodic (Fig. \[fig:fits\]b). The full amplitude of the variations is $\sim0.5\%$ ($\sim 5$ mmag), which is comparable to that of field K-dwarfs [@ciardi2011]. Being $\sim 6.5$ magnitudes fainter than the K-dwarf, the M-dwarf companion would need to have variability amplitudes on the order of $1-2$ magnitudes in order to produce the observed amplitude of variability. That level of variability associated with quasi-periodic rotation is typically not observed in field or Hyades M-dwarfs [@ciardi2011; @douglas2016]. Thus, the (primary) source of the observed variability is likely the primary component of the system: K2-136A.
A Lomb-Scargle periodogram of the light curve shows its strongest peak at $15.2\pm0.2$ d, and an autocorrelation of the light curve shows its strongest (non-zero-lag) peak at $13.8\pm1.0$ d. Such a period is consistent with the periods of other Hyades members of a similar mass [@delorme2011; @douglas2016] and further evidence that the spot modulation pattern is due to the primary, rather than the secondary which would be expected to be rotating more rapidly; it therefore seems possible that the stellar rotation period of K2-136A lies in this range.
A rotation period of $14-15$ days is expected to produce an equatorial velocity for a $0.71$ R$_\odot$ star of $V_\mathrm{eq}\approx 2.4-2.5$ km/s. The HIRES spectrum yields [$v \sin i$]{}=$3.9\pm1.0$ km s$^{-1}$, which is marginally consistent with the expected equatorial velocity derived from the rotation periods. The modest inconsistencies between our measured [$v \sin i$]{} and expectations from the stellar radius and photometric rotation period might be accounted for by (1) systematic effects involved in our estimation of [$v \sin i$]{}, of order 1 km s$^{-1}$, and (2) surface differential rotation. Measured and expected differential rotation rates in K-dwarfs are in the range of $\lesssim$0.05 rad d$^{-1}$ [@barnes2005; @kitchatinov2012]. If the modulation pattern in the K2 photometry is due to surface features at higher, more slowly rotating, latitudes, it is possible the equatorial rotation period is shorter by $\sim$1 d.
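The expected equatorial velocity follows directly from the stellar radius and the candidate rotation periods:

```python
import numpy as np

R_SUN_KM, DAY_S = 6.957e5, 86400.0
r_star = 0.71                           # [R_sun]
for p_rot in (13.8, 15.2):              # candidate rotation periods [d]
    v_eq = 2.0 * np.pi * r_star * R_SUN_KM / (p_rot * DAY_S)
    print(f"P_rot = {p_rot:4.1f} d  ->  v_eq ~ {v_eq:.1f} km/s")
```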
While not definitive, the marginal agreement between the measured [$v \sin i$]{} and the rotation period indicates that the star’s rotational axis is nearly perpendicular to the orbital plane of K2-136A c. If there were a significant misalignment, we would expect a more significant difference between the light curve derived rotation period and the measured [$v \sin i$]{}, although a longer time baseline would be useful to confirm this.
Finally, we note that the orbital period of the planet and the rotation period of the star are similar, but not the same. We estimated how long the planet would take to circularize ($\tau_\mathrm{circ}$) using the equation given by @al2006:
$$\begin{split}
\tau_\mathrm{circ} = 1.6\ Gyr \times \left (\frac{Q_p}{10^6}\right )
\times \left (\frac{M_*}{M_\odot}\right )^{-1.5}\\
\times \left (\frac{M_p}{M_\mathrm{Jup}}\right )
\times \left (\frac{R_p}{R_\mathrm{Jup}}\right )^{-5}
\times \left (\frac{a}{0.05 AU}\right )^{6.5}
\end{split}$$
The tidal circularization time scales linearly with $Q_p$, and the tidal parameter ($Q_p$) is notoriously uncertain. However, the Neptune value is estimated to be $\sim 10^5$ with a possible range of $10^4 - 10^6$ [@maness2007], indicating that the circularization timescale for K2-136A c may be $\sim500 - 600$ Myr – a timescale very similar to the age of the Hyades Cluster. If the tidal parameter is more akin to Jupiter ($10^6$), the circularization timescale would be closer to 5 Gyr, well beyond the age of the Hyades.
![The two-dimensional distribution of planet size and incident stellar flux, adopted from @fulton2017, is shown with the location of the previously known transiting planet in the Hyades Cluster: K2-25b (green diamond) and the planets in K2-136. The blue square represents K2-136A c from this work and the magenta circles represent the K2-136 planets as measured by @mann2017b. K2-136A d is off the right-side of the figure with an insolation flux of $\approx 5$. \[fig:evaporation\]](f7.pdf)
![The two-dimensional distribution of planet size and stellar mass. The positions of the known cluster transiting planets from the Praesepe, Upper Sco, and Hyades clusters are shown. K2-136A c is marked with the red star. The blue points represent the known transiting planets with orbital periods of $\le 30$ days. Data were gathered from the NASA Exoplanet Archive.\[fig:rad\_vs\_mass\]](f8.pdf)
Comparison to Other Systems {#subsec:compare}
---------------------------
K2-136A c is in a 17 day orbit around a K5V star and experiences a stellar insolation flux of S$\sim 12$ S$_\oplus$ (Table \[tab:planet\]); as a result, it has an expected equilibrium temperature of $T_\mathrm{eq} = 400 - 600$ K depending on the planet albedo and the atmospheric re-circulation. The other previously detected transiting planet in the Hyades [K2-25b, @mann2016] orbits an M4.5V star and has a much shorter orbital period of 3.485 days, but it experiences a similar insolation flux (S$\sim 10$ S$_\oplus$) to K2-136A c.
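A rough sketch of the insolation and equilibrium-temperature estimates (full heat redistribution is assumed; dayside-only redistribution would raise the temperatures by a further factor of $2^{1/4}$, toward the upper end of the quoted range):

```python
import numpy as np

L_star = 0.152                                                   # [L_sun], from the SpeX analysis
a_au = (17.3077 / 365.25) ** (2.0 / 3.0) * 0.71 ** (1.0 / 3.0)   # semi-major axis [AU]
S = L_star / a_au ** 2                                           # insolation in Earth units (~11-12)

# Equilibrium temperature for full redistribution; 278.3 K is Earth's T_eq at A = 0
for albedo in (0.0, 0.3, 0.5):
    T_eq = 278.3 * (S * (1.0 - albedo)) ** 0.25
    print(f"A = {albedo}: S ~ {S:.0f} S_earth, T_eq ~ {T_eq:.0f} K")
```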
The two Neptune-sized planets in K2-25 and K2-136, respectively, occupy a similar location in the two dimensional distribution of planet radius *vs.* stellar insolation flux (see Figure \[fig:evaporation\]). Both planets are at the edge of the distribution, perhaps indicating that the $\sim0.6-0.8$ Gyr Hyades planets, in comparison to the $>$Gyr Kepler sample used to define the evaporation valley, may still need to undergo significant evolution.
If the known young cluster transiting planets are compared to the old field star transiting planets (primarily dominated by Kepler detections) in a two dimensional distribution of planet radius *vs.* stellar mass (see Figure \[fig:rad\_vs\_mass\]), the distribution of the cluster planets does not look significantly different than the distribution of old field planets. This suggests that the long term evolution of the cluster planets should lead them to the distribution of planets currently observed in the field.
Potential for More Follow-Up {#subsec:followup}
----------------------------
K2-136A c orbits a relatively bright star in the infrared ($K\sim 9$ mag) in a very well-studied and nearby open cluster and, thus, offers the opportunity for more detailed studies (e.g., with Spitzer and JWST). In the optical, the star is a bit fainter, with $V \approx 11$ mag. If K2-136A c has a similar density to Neptune, the expected radial velocity (RV) amplitude caused by the orbital motion of the planet should be on the order of 5 m s$^{-1}$, well within the reach of modern radial velocity spectrographs. As noted above, the system has been detected as an X-ray source which may indicate that the star is active; however, this could be the M-dwarf companion and not the K-dwarf primary. The spectrum of the K-dwarf appears to have an activity indicator of $S_{HK}=1.03$, suggesting that the RV jitter could be 1–10 m s$^{-1}$ [@isaacson:2010]. Thus, an RV measurement of the planet’s mass may be feasible. Such a measurement would provide an all-too-rare constraint on the bulk properties of a young sub-Neptune.
Unfortunately, the ecliptic latitude of K2-136A is $\sim 1^\circ$ and (like most K2 targets) it will not be observed in the prime TESS mission. However, the transit depth is approximately 1.5 mmag and, thus, ground-based observations of the transits may be possible to refine the transit ephemeris and to search for long-term timing variations indicative of the other planets in the system [e.g., @lendl:2017; @barros:2017].
Summary {#sec:summary}
=======
We present the discovery of a sub-Neptune-sized ($3.0$ R$_\oplus$) planet in a 17.3 day orbit around a K-dwarf in the Hyades cluster. The host star also appears to have a late M-dwarf companion that is separated from the primary star by at least 40 AU. This planetary system, K2-136A c, represents the fourth planet discovered in the Hyades cluster, and only the second transiting planet in the Hyades. Both transiting planets now known in the Hyades are Neptune-sized and orbit relatively low-mass stars; K2-25b orbits an M4.5V dwarf and the newly presented K2-136A c orbits a K5V dwarf, which also has two other smaller planets and a low-mass M-dwarf companion.
By finding and studying planets in clusters spanning a range of stellar ages, we may begin to understand how and on what timescales planetary systems form and evolve. The planets discovered in the Upper Sco, Praesepe, and Hyades clusters provide snapshots in time and represent the first steps in mapping out this evolution. As we begin to understand the planetary distribution in the nascent clusters in which stars and their planetary systems are born, we can begin to set constraints on and understand how planetary systems form and evolve into the systems we see today in the field of stars.
The authors thank Andrew Mann and his collaborators for contacting us regarding their efforts so that we could work together to submit our respective discovery papers. We also note that another paper was submitted after the submission of our paper which is consistent with the results presented here [@livingston2017]. The authors thank the referee for comments which helped to improve the clarity of the manuscript.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This research has made use of the NASA Exoplanet Archive and the ExoFOP website, which are operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. MB acknowledges support from the North Carolina Space Grant Consortium. LA acknowledges support from NASA’s Minority University Research and Education Program Institutional Research Opportunity to the University of the Virgin Islands. BT acknowledges support from the National Science Foundation Graduate Research Fellowship under grant number DGE1322106 and NASA’s Minority University Research and Education Program. Finally, DRC would like to dedicate this paper to Teresa Ciardi for her years of insight to all of my papers - and this paper was no exception.
Adams, F. C., & Laughlin, G. 2006, , 649, 1004
Akeson, R. L., Chen, X., Ciardi, D., et al. 2013, , 125, 989
Barentsen, G. 2017, DOI:10.5281/zenodo.344973
Barnes, J. R., Collier Cameron, A., Donati, J.-F., et al. 2005, , 357, L1
Barros, S. C. C., Gosselin, H., Lillo-Box, J., et al. 2017, arXiv:1709.00865
B[ö]{}hm-Vitense, E. 2007, , 133, 1903
Brucalassi, A., Koppenhoefer, J., Saglia, R., et al. 2017, , 603, A85
Brandt, T. D. & Huang, C. X. 2015, , 807, 58
Choi, J., Dotter, A., Conroy, C., et al. 2016, , 823, 102
Ciardi, D. R., von Braun, K., Bryden, G., et al. 2011, , 141, 108
Ciardi, D. R., Beichman, C. A., Horch, E. P., & Howell, S. B. 2015, , 805, 16
Coelho, P., Barbuy, B., Mel[é]{}ndez, J., Schiavon, R. P., & Castilho, B. V. 2005, , 443, 735
Crossfield, I. J. M., Ciardi, D. R., Petigura, E. A., et al. 2016, , 226, 7
Crossfield, I. J. M., Ciardi, D. R., Isaacson, H., et al. 2017, , 153, 255
Cushing, M. C., Vacca, W. D., & Rayner, J. T. 2004, , 116, 362
David, T. J. et al. 2016, , 534, 658
David, T. J., Conroy, K. E., Hillenbrand, L. A., et al. 2016, , 151, 112
David, T. J. & Hillenbrand, L. A. 2015, , 804, 146
Delorme, P., Cameron, A. C., Hebb, L., et al. 2011, 16th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, 448, 841
Donati, J. F., Moutou, C., Malo, L., et al. 2016, , 534, 662
Douglas, S., Ag[ü]{}eros, M., Covey, K., et al. 2016, 19th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun (CS19), 106
Dressing, C. D., Newton, E. R., Schlieder, J. E., et al. 2017, , 836, 167
Dressing, C. D., Vanderburg, A., Schlieder, J. E., et al. 2017, arXiv:1703.07416
Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, , 154, 109
Furlan, E., Ciardi, D. R., Everett, M. E., et al. 2017, , 153, 71
Gaidos, E., Anderson, D. R., L[é]{}pine, S., et al. 2014, , 437, 3133
Hayward, T. L., Brandl, B., Pirger, B., et al. 2001, , 113, 105
Horch, E. P. et al. 2014, ApJ, 795, 60
Howell, S. B., Rowe, J. F., Bryson, S. T., et al. 2012, , 746, 123
Howell, S. B., Sobeck, C., Haas, M., et al. 2014, , 126, 398
Huber, D., Bryson, S. T., Haas, M. R., et al. 2016, VizieR Online Data Catalog, 222
Isaacson, H. & Fischer, D, 2010, ApJ, 725, 875
Johns-Krull, C. M., McLane, J. N., Prato, L., et al. 2016, , 826, 206
Kane, S. R., Ciardi, D. R., Gelino, D. M., & von Braun, K. 2012, , 425, 757
Kitchatinov, L. L. & Olemskoy, S. V. 2012, , 423, 3344
, R., [Marcy]{}, G. W., [Isaacson]{}, H., & [Howard]{}, A. W. 2015, , 149, 18
Kraus, A. et al. 2016, , 152, 8
Kreidberg, L. 2015, , 127, 1161
Lada, C. J., & Lada, E. A. 2003, , 41, 57
Lendl, M., Ehrenreich, D., Turner, O. D., et al. 2017, A&A, 603, 5
L[é]{}pine, S., Rich, R. M., & Shara, M. M. 2003, , 125, 3
L[é]{}pine, S., & Simon, M. 2009, , 137, 3632
Livingston, J. H., Dai, F., Hirano, T., et al. 2017, arXiv:1710.07203
Lovis, C., & Mayor, M. 2007, , 472, 657
Maderak, R. M., Deliyannis, C. P., King, J. R., & Cummings, J. D. 2013, , 146, 143
Malavolta, L., Nascimbeni, V., Piotto, G., et al. 2016, , 588, A118
Malmberg, D., Davies, M. B., & Heggie, D. C. 2011, , 411, 859
Maness, H. L., Marcy, G. W., Ford, E. B., et al. 2007, , 119, 90
Mann, A. W., Gaidos, E., Ansdell, M 2013, , 779, 188
Mann, A. W., Newton, E. R., Rizzuto, A. C., et al. 2016, , 152, 61
Mann, A. W., Gaidos, E., Vanderburg, A., et al. 2017, , 153, 64
Mann, A. W., Vanderburg, A., Rizzuto, A. C., et al. 2017, arXiv:1709.10328
Martinez, A. O., Crossfield, I. J. M., Schlieder, J. E., et al. 2017, , 837, 72
Meibom, S., Torres, G., Fressin, F., et al. 2013, , 499, 55
Morton, T. D. 2012, , 761, 6
Morton, T. D. 2015, Astrophysics Source Code Library, ascl:1503.010
NASA Exoplanet Archive, 2017, Update 2017 September 15
Obermeier, C. et al. 2016, , 152, 223
Paulson, D. B., Sneden, C., & Cochran, W. D. 2003, , 125, 3185
Pecaut, M. J., & Mamajek, E. E. 2013, , 208, 9
Pepper, J., & Gaudi, B. S. 2006, Acta Astronomica, 56, 183
Perryman, M. A. C., Brown, A. G. A., Lebreton, Y., et al. 1998, , 331, 81
, E. A., [Howard]{}, A. W., & [Marcy]{}, G. W. 2013, Proceedings of the National Academy of Science, 110, 19273
, E. A., [Marcy]{}, G. W., & [Howard]{}, A. W. 2013, , 770, 69
Petigura, E. A. 2015, Ph.D. Thesis
Petigura, E. A., Crossfield, I. J. M., Isaacson, H. 2017, submitted
Quinn, S. N., White, R. J., Latham, D. W., et al. 2012, , 756, L33
Quinn, S. N., White, R. J., Latham, D. W., et al. 2014, , 787, 27
Rayner, J. T., Toomey, D. W., Onaka, P. M., et al. 2003, , 115, 362
Rayner, J. T., Onaka, P. M., Cushing, M. C., & Vacca, W. D. 2004, , 5492, 1498
Rayner, J. T., Cushing, M. C., & Vacca, W. D. 2009, , 185, 289
Reid, N. 1993, , 265, 785
Sato, B., Izumiura, H., Toyota, E., et al. 2007, , 661, 527
Schlafly, E. F., & Finkbeiner, D. P. 2011, , 737, 103
Schlieder, J. E., Crossfield, I. J. M., Petigura, E. A., et al. 2016, , 818, 87
Seager, S., & Mall[é]{}n-Ornelas, G. 2003, , 585, 1038
Stern, R. A., Schmitt, J. H. M. M., & Kahabka, P. T. 1995, , 448, 683
Vacca, W. D., Cushing, M. C., & Rayner, J. T. 2003, , 115, 389
van Leeuwen, F. 2009, , 497, 209
, S. S., [Allen]{}, S. L., [Bigelow]{}, B. C., [et al.]{} 1994, in , Vol. 2198, Instrumentation in Astronomy VIII, ed. D. L. [Crawford]{} & E. R. [Craine]{}, 362
Weis, E. W. 1983, , 95, 29
Yee, S. W., Petigura, E. A., & von Braun, K. 2017, , 836, 77
Yu, L., Donati, J.-F., H[é]{}brard, E. M., et al. 2017, , 467, 1342
Zacharias, N., Finch, C. T., Girard, T. M., et al. 2013, , 145, 44
[l r r ]{}\[bt\]\
EPIC ID & 247589423 &\
$\alpha$ R.A. (hh:mm:ss) & 04:29:39.0 & Gaia\
$\delta$ Dec. (dd:mm:ss) & +22:52:57.8 & Gaia\
$\mu_{\alpha}$ (mas yr$^{-1}$) & $+81.8\pm 1.0$ & UCAC4\
$\mu_{\delta}$ (mas yr$^{-1}$) & $-35.2\pm0.9$ & UCAC4\
Barycentric RV (km s$^{-1}$) & $39.6\pm0.2$ & HIRES; This Work\
$S_{HK}$ & 1.027 & HIRES; This Work\
Distance (pc) & $50-60$ & This Work\
Age (Myr) & $625 - 750$ & @perryman1998\
& & @bh2015\
\
NUV (mag) ........ & $19.47 \pm 0.10$ & GALEX\
B (mag) .......... & $12.479 \pm 0.041$ & APASS\
V (mag) .......... & $11.200 \pm 0.030$ & APASS\
g (mag) .......... & $11.969 \pm 0.030$ & APASS\
r (mag) .......... & $10.746 \pm 0.040$ & APASS\
Kepmag (mag) & $10.771$ & @huber2016\
i (mag) .......... & $10.257\pm 0.020$ & APASS\
J (mag) .......... & $9.096 \pm 0.022$ & 2MASS\
H (mag) .......... & $8.496 \pm 0.020$ & 2MASS\
Ks(mag) ......... & $8.368 \pm 0.019$ & 2MASS\
\
\
\
Kepmag (mag) & $10.9 \pm 0.1$ & A-Component\
J (mag) .......... & $9.11 \pm 0.04$ & A-Component\
H (mag) .......... & $8.51 \pm 0.02$ & A-Component\
Ks(mag) ......... & $8.38 \pm 0.02$ & A-Component\
\
Kepmag (mag) & $17.4\pm 0.2$ & B-Component\
J (mag) .......... & $14.1 \pm 0.1$ & B-Component\
H (mag) .......... & $13.47 \pm 0.04$ & B-Component\
Ks(mag) ......... & $13.03 \pm 0.03$ & B-Component\
\
\
Spectral Type & K5V $\pm$ 1 & SpeX\
[$T_\mathrm{eff}$]{} (K) & $4364 \pm 70$ & HIRES\
& $4360 \pm 206$& SpeX\
$[$Fe/H$]$ & +0.15 $\pm$ 0.09 & HIRES\
$M_*$ ($M_\odot$) & $0.71 \pm 0.06$ & HIRES\
& $0.70 \pm 0.07$ & SpeX\
$R_*$ ($R_\odot$) & $0.71 \pm 0.10$ & HIRES\
& $0.67 \pm 0.06$ & SpeX\
$L_*$ ($L_\odot$) & $0.164 \pm 0.031$ & HIRES\
& $0.152 \pm 0.052$ & SpeX\
$\log_\mathrm{10} g$ (cgs) & $4.63 \pm 0.11$ & HIRES\
& $4.62^{+0.05}_{-0.10}$ & SpeX\
[$v \sin i$]{} (km s$^{-1}$) & $3.9\pm 1.0$ & HIRES
[llll]{}\[bt\]
Time of Transit Center & $T_{0}- 2454833$ & $BJD_\mathrm{TDB} $ & $2997.0235\pm0.0025$\
Orbital Period & $P$ & d & $17.3077\pm0.0013$\
Orbital Inclination & $i$ & deg & $89.30^{+0.49}_{-0.76}$\
Planet/Star Radius Ratio & $R_P/R_*$ & % & $3.85^{+0.47}_{-0.20}$\
Linear Limb Darkening & $\alpha$ & – & $0.900\pm0.030$\
Quadratic Limb Darkening & $\beta$ & – & $0.486\pm0.030$\
Transit Duration ($1^{st}-4^{th}$) & $T_{14}$ & hr & $3.59^{+0.17}_{-0.14}$\
Transit Duration ($2^{nd}-3^{rd}$) & $T_{23}$ & hr & $3.22^{+0.15}_{-0.18}$\
Stellar Radius-Orbit Ratio & $R_*/a$ & – & $0.0287^{+0.0075}_{-0.0027}$\
Impact Parameter & $b$ & – & $0.43\pm0.28$\
Stellar Density & $\rho_{*,circ}$ & g cm$^{-3}$ & $2.67^{+0.90}_{-1.34}$\
Semi-major Axis & $a$ & AU & $0.11728\pm0.00048$\
Planet Radius & $R_P$ & $R_\oplus$ & $3.03^{+0.53}_{-0.47}$\
Incident Flux & $S_{inc}$ & $S_\oplus$ & $11.9^{+3.7}_{-3.2}$\
Secondary Eclipse Depth & $\delta_\mathrm{ecl}\ (3\sigma)$ & ppm & $< 238$
[^1]: <https://github.com/KeplerGO/kadenza>
[^2]: <https://github.com/petigura/k2phot>
[^3]: <https://github.com/petigura/terra>
[^4]: <https://github.com/samuelyeewl/specmatch-emp>
[^5]: <https://github.com/lkreidberg/batman>
|
---
abstract: 'We establish an optimal transportation inequality for the Poisson measure on the configuration space. Furthermore, under the Dobrushin uniqueness condition, we obtain a sharp transportation inequality for the Gibbs measure on $\mathbb{N}^\Lambda$ or the continuum Gibbs measure on the configuration space.'
address:
- 'School of Mathematical Sciences and Lab. Math. Com. Sys., Beijing Normal University, 100875 Beijing, China. '
- 'College of Science, Minzu University of China, 100081 Beijing, China. '
- 'School of Mathematics, Wuhan University, 430072 Hubei, China. '
- 'Laboratoire de Mathématiques, CNRS UMR 6620, Université Blaise Pascal, avenue des Landais 63177 Aubière, France and Institute of Applied Mathematics, Chinese Academy of Sciences, 100190 Beijing, China. '
author:
-
-
-
-
title: 'Transportation inequalities: From Poisson to Gibbs measures'
---
Introduction {#sec1}
============
*Transportation inequality $W_1H$.* Let $\mathcal{X}$ be a Polish space equipped with the Borel $\sigma$-field $\mathcal{B}$ and $d$ be a lower semi-continuous metric on the product space $\mathcal
{X}\times
\mathcal{X}$ (which does not necessarily generate the topology of $\mathcal{X}$). Let $\mathcal{M}_1(\mathcal{X})$ be the space of all probability measures on $\mathcal{X}$. Given $p\ge1$ and two probability measures $\mu$ and $\nu$ on $\mathcal{X}$, we define the quantity $$W_{p, d}(\mu, \nu)=\inf\biggl(\int \hspace{-2pt}\int {d}(x, y)^p\,\mathrm{d}\pi(x, y)\biggr)^{1/p},$$ where the infimum is taken over all probability measures $\pi$ on the product space $\mathcal{X}\times\mathcal{X}$ with marginal distributions $\mu$ and $\nu$ (say, coupling of $(\mu, \nu)$). This infimum is finite provided that $\mu$ and $\nu$ belong to $\mathcal{M}_1^p(\mathcal
{X},d):=\{\nu\in
\mathcal{M}_1(\mathcal{X});
\int d^p(x,x_0)\,\mathrm{d}\nu<+\infty\}$, where $x_0$ is some fixed point of $\mathcal{X}$. This quantity is commonly referred to as the *$L^p$-Wasserstein distance* between $\mu$ and $\nu
.$ When $d$ is the trivial metric $d(x, y)= 1_{x\neq y}, 2W_{1,d}(\mu,
\nu)=\|\mu-\nu\|_{\mathrm{TV}},$ the total variation of $\mu-\nu.$
The Kullback information (or relative entropy) of $\nu$ with respect to $\mu$ is defined as $$\label{kullback} H(\nu/\mu)=
\cases{
\displaystyle\int\log\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\,\mathrm{d}\nu &\quad\mbox{if} $\nu\ll \mu$, \cr
+\infty&\quad\mbox{otherwise}.
}$$ Let $\alpha$ be a non-decreasing left-continuous function on $\mathbb{R}^+=[0,+\infty)$ which vanishes at $0$. If, moreover, $\alpha$ is convex, we write $\alpha\in\mathcal{C}$. We say that the probability measure $\mu$ satisfies the *transportation inequality $\alpha$-$W_1H$ with deviation function $\alpha$* on $(\mathcal{X}, d)$ if $$\label{W1H*} \alpha(W_{1, d}(\mu, \nu)
)\le H(\nu
/\mu)
\qquad \forall\nu\in\mathcal{M}_1(\mathcal{X}).$$ This transportation inequality $W_1H$ was introduced and studied by Marton [@Mar96] in relation with measure concentration, for quadratic deviation function $\alpha$. It was further characterized by Bobkov and Götze [@BG99], Djellout, Guillin and Wu [@DGW], Bolley and Villani [@BV05] and others. The latest development is due to Gozlan and Léonard [@GL], in which the general $\alpha$-$W_1H$ inequality above was introduced in relation to large deviations and characterized by concentration inequalities, as follows.
\[Gozleo\] Let $\alpha\in\mathcal{C}$ and $\mu\in\mathcal{M}_1^1(\mathcal{X},d)$. The following statements are then equivalent:

(a) the transportation inequality $\alpha$-$W_1H$ (\[W1H\*\]) holds;

(b) for all $\lambda\geq0$ and all $F\in b\mathcal{B}$ with $\|F\|_{{\rm Lip}(d)}:=\sup_{x\ne y}\frac{|F(x)-F(y)|}{d(x,y)}\le1$, $$\log\int_\mathcal{X} \exp\bigl(\lambda[F-\mu(F)]\bigr)\mu(\mathrm{d}x)\leq\alpha^\ast(\lambda),$$ where $\mu(F):=\int_\mathcal{X} F\,\mathrm{d}\mu$ and $\alpha^\ast(\lambda):=\sup_{r\ge0}(\lambda r - \alpha(r))$ is the semi-Legendre transformation of $\alpha$;

(b$'$) for all $\lambda\geq0$ and all $F,G\in C_b(\mathcal{X})$ (the space of all bounded and continuous functions on $\mathcal{X}$) such that $F(x)-G(y)\le d(x,y)$ for all $x,y\in\mathcal{X}$, $$\log\int_\mathcal{X} \mathrm{e}^{\lambda F}\mu(\mathrm{d}x)\leq\lambda\mu(G)+ \alpha^\ast(\lambda);$$

(c) for any measurable function $F$ such that $\|F\|_{{\rm Lip}(d)}\le1$, the following concentration inequality holds true: for all $n\geq1, r\geq0$, $$\label{gozleo-1}
\mathbb{P}\Biggl(\frac{1}{n}\sum_1^nF(\xi_k)\geq\mu(F)+r\Biggr)\leq \mathrm{e}^{-n\alpha(r)},$$ where $(\xi_n)_{n\geq1}$ is a sequence of i.i.d. $\mathcal{X}$-valued random variables with common law $\mu$.
The estimate on the Laplace transform in (b) and the concentration inequality in (\[gozleo-1\]) are the main motivations for the transportation inequality ($\alpha$-$W_1H$).
*Objective and organization*. The objective of this paper is to prove the transportation inequality $(\alpha$-$W_1H)$ for:
(1) (the free case) the Poisson measure $P^0$ on the configuration space consisting of Radon point measures $\omega=\sum_i \delta_{x_i},
x_i\in E$ with some $\sigma$-finite intensity measure $m$ on $E$, where $E$ is some fixed locally compact space;
(2) (the interaction case) the continuum Gibbs measure over a compact subset $E$ of $\mathbb{R}^d$, $$P^{\phi}(\mathrm{d}\omega)=\frac{\mathrm{e}^{-(1/2)\sum_{x_i, x_j\in\operatorname{supp}\omega, i\ne j}\phi(x_i-x_j)
-\sum_{k, x_i\in\operatorname{supp}(\omega)}\phi(x_i-y_k)}}{Z}P^0(\mathrm{d}\omega),$$ where $\phi\dvtx \mathbb{R}^d\to[0,+\infty]$ is some pair-interaction non-negative even function (see Section \[sec4\] for notation) and $P^0$ is the Poisson measure with intensity $z\,\mathrm{d}x$ on $E$.
For Poisson measures on $\mathbb{N}$, Liu [@Liu09] obtained the optimal deviation function by means of Theorem \[Gozleo\]. For transportation inequalities of Gibbs measures on discrete sites, see [@Marton04] and [@Wu06].
For an illustration of our main result (Theorem \[w1hgibbs\]) on the continuum Gibbs measure $P^\phi$, let $E:=[-N,N]^d$ ($1\le N\in
\mathbb{N}$) and $f\dvtx [-N,N]^d\to\mathbb{R}$ be measurable and periodic with period $1$ at each variable so that $|f|\le M$. Consider the empirical mean per volume $F(\omega):=\omega(f)/(2N)^d$ of $f$. Under Dobrushin’s uniqueness condition $D:=z
\int_{\mathbb{R}^d}(1-\mathrm{e}^{-\phi(y)})\, \mathrm{d}y<1$, we have (see Remark \[rem41\] for proof) $$\label{a1}
P^\phi\bigl(F>P^\phi(F) +r \bigr)\le\exp\biggl(-
\frac{(2N)^d(1-D)r}{2M} \log\biggl( 1+\frac{(1-D)r}{zM} \biggr)
\biggr),\qquad r>0,$$ an explicit Poissonian concentration inequality which is sharp when $\phi=0$.
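To get a feel for the numbers, the right-hand side of (\[a1\]) is easy to evaluate. The short Python snippet below does so for assumed, purely illustrative values of $d$, $N$, $z$, $M$ and $D$ (none of them taken from the text above); it is only meant to show how the bound behaves as the deviation $r$ grows.

```python
import numpy as np

# assumed, purely illustrative parameters: dimension d, box half-width N,
# activity z, bound M on |f|, and Dobrushin constant D < 1
d, N, z, M, D = 2, 5, 1.0, 1.0, 0.5
vol = (2 * N) ** d                      # (2N)^d, the volume of E = [-N, N]^d

def tail_bound(r):
    """Right-hand side of (a1) as a function of the deviation r > 0."""
    return np.exp(-vol * (1 - D) * r / (2 * M) * np.log(1 + (1 - D) * r / (z * M)))

for r in (0.05, 0.1, 0.2):
    print(r, tail_bound(r))
```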
The paper is organized as follows. In the next section, we prove $(\alpha$–$W_1H)$ for the Poisson measure on the configuration space with respect to two metrics: in both cases, we obtain optimal deviation functions. Our main tool is Gozlan and Leonard’s Theorem \[Gozleo\] and a known concentration inequality in [@Wu00]. Section \[sec3\], as a prelude to the study of the continuum Gibbs measure $P^\phi$ on the configuration space, is devoted to the study of a Gibbs measure on $\mathbb{N}^{\Lambda}$. Our method is a combination of a lemma on $W_1H$ for mixed measure, Dobrushin’s uniqueness condition and the McDiarmid–Rio martingale method for dependent tensorization of the $W_1H$-inequality. Finally, in the last section, by approximation, we obtain a sharp $(\alpha$–$W_1H)$ inequality for the continuum Gibbs measure $P^\phi$ under Dobrushin’s uniqueness condition $D=z
\int_{\mathbb{R}^d}(1-\mathrm{e}^{-\phi(y)})\,\mathrm{d}y<1$. The latter is a sharp sufficient condition, both for the analyticity of the pressure functional and for the spectral gap; see [@Wu04].
Poisson point processes {#sec2}
=======================
*Poisson space.* Let $E$ be a complete, locally compact metric space with the Borel field $\mathcal{B}_E$ and $m$ a $\sigma$-finite positive Radon measure on $E$. The Poisson space $(\Omega,\mathcal{F},P^0)$ is given by:
(1) $\Omega:=\{\omega=\sum_i\delta_{x_i} \mbox{(Radon
measure); } x_i\in
E\}$ (the so-called configuration space over $E$);
(2) $\mathcal{F}=\sigma(\omega\rightarrow\omega(B)|B\in
\mathcal{B}_E)$;
(3) $\forall B\in\mathcal{B}_E, \forall
k\in\mathbb{N}\mbox{: } P^0(\omega
\dvtx \omega(B)=k)=\mathrm{e}^{-m(B)}\frac{m(B)^k}{k!}$;
(4) $\forall B_1,\dots,B_n\in\mathcal{B}_E$ disjoint, $\omega(B_1),\ldots,\omega(B_n)$ are $P^0$-independent,
where $\delta_x$ denotes the Dirac measure at $x$. Under $P^0$, $\omega$ is exactly the Poisson point process on $E$ with intensity measure $m(\mathrm{d}x)$. On $\Omega$, we consider the vague convergence topology, that is, the coarsest topology such that $\omega\to
\omega(f)$ is continuous, where $f$ runs over the space $C_0(E)$ of all continuous functions with compact support on $E$. Equipped with this topology, $\Omega$ is a Polish space and this topology is the weak convergence topology (of measures) if $E$ is compact.
Letting $\varphi$ be a positive measurable function on $E$, we define a metric $d_\varphi(\cdot, \cdot)$ (which may be infinite) on the Poisson space $(\Omega, \mathcal{F}, P^0)$ by $$\begin{aligned}
d_\varphi(\omega, \omega')=\int_E\varphi \,\mathrm{d}|\omega-\omega'|,\end{aligned}$$ where $|\nu|:=\nu^++\nu^-$ for a signed measure $\nu$ ($\nu^\pm$ are, respectively, the positive and negative parts of $\nu$ in the Hahn–Jordan decomposition).
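For purely atomic, finite configurations the metric $d_\varphi$ is elementary to compute. The following Python sketch (an illustration under the assumption that the two configurations are given as finite lists of points, with equal atoms encoded by identical coordinates) makes the definition concrete.

```python
from collections import Counter

def d_phi(omega, omega_prime, phi=lambda x: 1.0):
    """d_phi(omega, omega') = int_E phi d|omega - omega'| for finite point configurations.

    A configuration is a list of points (repetitions allowed); the measure
    |omega - omega'| charges each point by the absolute difference of its multiplicities.
    """
    w, wp = Counter(omega), Counter(omega_prime)
    return sum(phi(x) * abs(w[x] - wp[x]) for x in set(w) | set(wp))

# with phi = 1 this is the total variation distance |omega - omega'|(E) used below
print(d_phi((0.1, 0.1, 0.7), (0.1, 0.4)))   # -> 3.0
```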
If $\varphi$ is continuous, then the metric $d_\varphi$ is lower semi-continuous on $\Omega$.
Indeed, for any $\omega,\omega'\in\Omega$, $$d_\varphi(\omega, \omega')=\sup_{f} |\omega(f)-\omega'(f)|,$$ where the supremum is taken over all bounded $\mathcal{B}_E$-measurable functions $f$ with compact support such that $|f|\le\varphi$. Now, as $\varphi$ is continuous, we can approximate such $f$ by $f_n\in
C_0(E)$ in $L^1(E, \omega+\omega')$ and $|f_n|\le\varphi$. Then $$d_\varphi(\omega, \omega')=\sup_{f\in C_0(E), |f|\le\varphi}
|\omega(f)-\omega'(f)|.$$ As $(\omega,\omega')\to|\omega(f)-\omega'(f)|$ is continuous on $\Omega\times\Omega$, $d_\varphi(\omega, \omega')$ is lower semi-continuous on $\Omega\times\Omega$.
Assume from now on that $\varphi$ is continuous. Then, for any $\nu,\mu\in\mathcal{M}_1(\Omega),$ we have the Kantorovitch–Rubinstein equality [@Kel84; @Leo; @Villani], $$\begin{aligned}
W_{1,d_\varphi}(\mu,\nu)
&=&\sup\biggl\{\int F \,\mathrm{d}\nu-\int G\,\mathrm{d}\mu\Big| F,G\in C_b(\Omega),
F(\omega)-G(\omega')\leq d_\varphi(\omega, \omega')\biggr\}
\\
&=&\sup\biggl\{\int G \,\mathrm{d}(\nu-\mu)\dvtx G\in b\mathcal{F}, \|G\|_{{\rm
Lip}(d_\varphi)}\leq1\biggr\}.\end{aligned}$$ Here, $b\mathcal{F}$ is the space of all real, bounded and $\mathcal
{F}$-measurable functions.
*The difference operator $D$.* We denote by $L^0(\Omega,P^0)$ the space of all $P^0$-equivalent classes of real measurable functions w.r.t. the completion of $\mathcal{F}$ by $P^0$. Hence, the difference operator $D\dvtx L^0(\Omega,P^0)\rightarrow L^0(E\times
\Omega, m\otimes P^0)$ given by $$F\rightarrow
D_xF(\omega):=F(\omega+\delta_x)-F(\omega)$$ is well defined (see [@Wu00]) and plays a crucial role in the Malliavin calculus on the Poisson space.
\[lem22\] Given a measurable function $F\dvtx \Omega\rightarrow\mathbb{R}$, $\| F\|_{{\rm
Lip}(d_\varphi)}\leq1$ if and only if $|D_xF(\omega)|\leq\varphi(x)$ for all $\omega\in\Omega$ and $x\in E$.
If $\|F\|_{\mathrm{Lip}(d_\varphi)}\le1$, since $$|D_xF(\omega)|=|F(\omega+\delta_x)-F(\omega)|\leq
d_\varphi(\omega+\delta_x, \omega)=\int_E \varphi
\,\mathrm{d}|(\omega+\delta_x)-\omega|=\varphi(x),$$ the necessity is true. We now prove the sufficiency. For any $\omega,
\omega'\in\Omega,$ we write $\omega=\sum_{k=1}^i\delta_{x_k}+\omega\wedge\omega'$ and $\omega'=\sum_{k=1}^j\delta_{y_k}+\omega\wedge\omega'$, where $\omega\wedge\omega':=\frac12(\omega+\omega' - |\omega-\omega
'|)$. We then have $$\begin{aligned}
|F(\omega)-F(\omega')|&\le& |F(\omega)-F(\omega
\wedge\omega')|+|F(\omega')-F(\omega\wedge\omega')|
\\
&\le&\sum_{k=1}^i\Biggl|F\Biggl(\omega\wedge\omega'+\sum
_{l=1}^k\delta_{x_l}\Biggr)-F\Biggl(\omega\wedge\omega'+\sum
_{l=1}^{k-1}\delta_{x_l}\Biggr)\Biggr|
\\
&&{} +\sum_{k=1}^j\Biggl|F\Biggl(\omega\wedge\omega'+\sum
_{l=1}^k\delta_{y_l}\Biggr)-F\Biggl(\omega\wedge\omega'+\sum
_{l=1}^{k-1}\delta_{y_l}\Biggr)\Biggr|
\\
&\leq &\sum_{k=1}^i\varphi(x_k)+\sum_{k=1}^j\varphi(y_k)=\int_E\varphi
\,\mathrm{d}|\omega-\omega'|=d_\varphi(\omega, \omega'),\end{aligned}$$ which implies that $\| F\|_{\mathrm{Lip}(d_\varphi)}\leq1$.
When $\varphi=1$, we denote $d_\varphi$ by $d$. Obviously, $d(\omega, \omega')=|\omega-\omega'|(E)=\|
\omega-\omega'\|_{\mathrm{TV}}$, that is, $d$ is exactly the total variation distance.
The following result, due to the fourth-named author [@Wu00], was obtained by means of the $L^1$-log-Sobolev inequality and will play an important role.
\[Wuptrf\] Let $F\in L^1(\Omega, P^0)$. If there is some $0\le\varphi\in
L^2(E, m)$ such that $|D_xF(\omega)|\leq\varphi(x)$, $m\otimes
P^0$-a.e., then for any $\lambda\geq0$, $$\begin{aligned}
\mathbb{E}^{P^0}
\mathrm{e}^{\lambda(F-P^0(F))}\leq\exp\biggl\{\int_E(\mathrm{e}^{\lambda\varphi
}-\lambda\varphi-1)\,\mathrm{d}m\biggr\}.\end{aligned}$$ In particular, if $m$ is finite and $|D_xF(\omega)|\leq1$ for $m\times P^0$-a.e. $(x,\omega)$ on $E\times\Omega$ (i.e., $\varphi(x)=1)$, then $$\begin{aligned}
\mathbb{E}^{P^0}
\mathrm{e}^{\lambda(F-P^0(F))}\leq\exp\{(\mathrm{e}^{\lambda}-\lambda
-1)m(E)\}.\end{aligned}$$
We now state our main result on the Poisson space.
\[main\] Let $(\Omega, \mathcal{F}, P^0)$ be the Poisson space with intensity measure $m(dx)$ and $\varphi$ a bounded continuous function on $E$ such that $0<\varphi\leq M$ and $\sigma^2=\int_E \varphi^2 \,\mathrm{d}m<+\infty$. Then $$\label{W1honpoisson} \frac1M h_c (W_{1,d_\varphi}(Q,
P^0))\leq
H(Q|P^0)\qquad \forall Q\in\mathcal{M}_1(\Omega),$$ where $c=\sigma^2/M$ and $$\label{main2}
h_{c}(r)=c\cdot h\biggl(\frac{r}{c}\biggr), \qquad
h(r)=(1+r)\log(1+r)-r.$$
Note that $h^*(\lambda):=\sup_{r\ge0}(\lambda r-h(r))=\mathrm{e}^\lambda
-\lambda-1$ and $h_c^*(\lambda)=ch^*(\lambda)$.
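For the reader's convenience, this identity can be checked directly: the supremum defining $h^*(\lambda)$ is attained where $h'(r)=\log(1+r)=\lambda$, that is, at $r^*=\mathrm{e}^\lambda-1$, so that $$h^*(\lambda)=\lambda(\mathrm{e}^\lambda-1)-\bigl[\lambda\mathrm{e}^\lambda-(\mathrm{e}^\lambda-1)\bigr]=\mathrm{e}^\lambda-\lambda-1,$$ and the substitution $s=r/c$ gives $h_c^*(\lambda)=\sup_{s\ge0}\bigl(\lambda cs-c\,h(s)\bigr)=c\,h^*(\lambda)$.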
[Proof of Theorem \[main\]]{} Since the function $(\mathrm{e}^{\lambda\varphi}-\lambda\varphi-1)/\varphi^2$ is increasing in $\varphi,$ it is easy to see that $$\label{legen}
\int_E(\mathrm{e}^{\lambda\varphi}-\lambda\varphi-1)\,\mathrm{d}m\leq\frac
{\mathrm{e}^{\lambda
M}-\lambda M-1}{M^2}\int\varphi^2\,\mathrm{d}m.$$ Further, the Legendre transformation of the right-hand side of (\[legen\]) is, for $r\ge0$, $$\begin{aligned}
\sup_{\lambda\geq0}\biggl\{\lambda r-\frac{\mathrm{e}^{\lambda M}-\lambda
M-1}{M^2}\int\varphi^2\,\mathrm{d}m\biggr\}&=&\biggl(\frac{r}{M}+\frac{\int
\varphi^2
\,\mathrm{d}m}{M^2}\biggr)\log\biggl(\frac{Mr}{\int\varphi^2 \,\mathrm{d}m}+1
\biggr)-\frac{r}{M}
\\
&=& \frac1M h_{c}(r).\end{aligned}$$ The desired result then follows from Theorem \[Gozleo\], by Lemma \[Wuptrf\].
Let $\beta(\lambda):=\int_E(\mathrm{e}^{\lambda\varphi}-\lambda\varphi-1)\,\mathrm{d}m$ and $\alpha(r):=\sup_{\lambda\ge0}(\lambda r -\beta(\lambda))$. The proof above gives us $$\alpha(W_{1,d_\varphi}(Q,P^0))\le H(Q|P^0)
\qquad \forall Q\in\mathcal{M}_1(\Omega).$$ This less explicit inequality is sharp. Indeed, assume that $E$ is compact and let $F(\omega):=\int_E
\varphi(x) (\omega- m)(\mathrm{d}x)$. We have $\|F\|_{\mathrm{Lip}(d_\varphi)}=1$ and $$\log\mathbb{E}^{P^0} \mathrm{e}^{\lambda F} = \beta(\lambda).$$ The sharpness is then ensured by Theorem \[Gozleo\].
\[prosharp\] If $\varphi=1$ and $m$ is finite, then the inequality turns out to be $$\label{sharp}h_{m(E)}(W_{1, d}(Q, P^0))\le H(Q|P^0)\qquad
\forall
Q\in\mathcal{M}_1(\Omega).$$ In particular, for the Poisson measure $\mathcal{P}(\lambda)$ with parameter $\lambda>0$ on $\mathbb{N}$ equipped with the Euclidean distance $\rho$, $$\label{sharp2}h_\lambda(W_{1,
\rho}(\nu, \mathcal{P}(\lambda)))\le H(\nu|\mathcal{P}(\lambda
)) \qquad \forall\nu
\in
\mathcal{M}_1(\mathbb{N}).$$
The inequality (\[sharp\]) is a particular case of (\[W1honpoisson\]) with $\varphi=1$ and it holds on $\Omega^0:=\{
\omega\in\Omega; \omega(E)<+\infty\}$ (for $P^0$ is actually supported in $\Omega^0$ as $m$ is finite). For (\[sharp2\]), let $m(E)=\lambda$ and consider the mapping $\Psi\dvtx \Omega^0\to
\mathbb{N}$, $\Psi(\omega)=\omega(E)$. Since $|\Psi(\omega)-\Psi
(\omega
')|=|\omega(E)-\omega'(E)|\le d(\omega,
\omega')$, $\Psi$ is Lipschitzian with the Lipschitzian coefficient less than $1$. Thus, (\[sharp2\]) follows from (\[sharp\]) by [@DGW], Lemma 2.1 and its proof.
\[rem1\] The transportation inequality (\[sharp2\]) was shown by Liu [@Liu09] by means of a tensorization technique and the approximation of $\mathcal{P}(\lambda)$ by binomial distributions. It is optimal (therefore, so is (\[sharp\])). In fact, consider another Poisson distribution $\mathcal{P}(\lambda')$ with parameter $\lambda'>\lambda$. On the one hand, $$\begin{aligned}
H(\mathcal{P}(\lambda')|\mathcal{P}(\lambda
))&=&\int_\mathbb{N}\log
\frac{\mathrm{d}\mathcal{P}(\lambda')}{\mathrm{d}\mathcal{P}(\lambda)} \,\mathrm{d}\mathcal
{P}(\lambda')=
\sum_{n=0}^{\infty}\mathcal{P}(\lambda')(n)\log\biggl(\frac
{\mathrm{e}^{-\lambda
'}\lambda'^n}{n!}\Big/\frac{\mathrm{e}^{-\lambda}\lambda^n}{n!}\biggr)
\\
&=&\lambda-\lambda'+\sum_{n=0}^{\infty}\mathcal{P}(\lambda')(n)
n\log\frac{\lambda'}{\lambda}
\\
&=&\lambda-\lambda'+\lambda'\log\frac{\lambda'}{\lambda}.\end{aligned}$$ On the other hand, let $r:=\lambda'-\lambda>0$. Let $X, Y$ be two independent random variables having distributions $\mathcal
{P}(\lambda)$ and $\mathcal{P}(r)$, respectively. Obviously, the law of $X+Y$ is $\mathcal{P}(\lambda').$ Then $$W_{1,
\rho}(\mathcal{P}(\lambda'), \mathcal{P}(\lambda))\le\mathbb
{E}|X-(X+Y)|=\mathbb{E}Y=r.$$ Now, supposing that $(X, X')$ is a coupling of $\mathcal{P}(\lambda')$ and $\mathcal{P}(\lambda)$, we have $$\mathbb{E}|X-X'| \ge|\mathbb{E}X-\mathbb{E}X'|=r,$$ which implies that $W_{1, \rho}(\mathcal{P}(\lambda'), \mathcal
{P}(\lambda))\ge r.$ Then $W_{1, \rho}(\mathcal{P}(\lambda'), \mathcal{P}(\lambda))= r$ (and $(X,X+Y)$ is an optimal coupling for $\mathcal{P}(\lambda)$ and $\mathcal{P}(\lambda
')$). Therefore, $$h_{\lambda}(W_{1, \rho}(\mathcal{P}(\lambda'),
\mathcal{P}(\lambda)))=h_{\lambda}(r)=H(\mathcal{P}(\lambda
')|\mathcal{P}(\lambda
)).$$ Namely, $h_\lambda$ is the optimal deviation function for the Poisson distribution $\mathcal{P}(\lambda)$.
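This closed-form computation is easy to confirm numerically. The snippet below (plain Python with assumed parameters $\lambda=1.5$, $\lambda'=2.3$; the entropy series is truncated at $n=80$) checks that the relative entropy, its closed form and $h_\lambda$ evaluated at the $W_1$ distance all coincide.

```python
import numpy as np
from math import exp, factorial, log

lam, lam2 = 1.5, 2.3                         # assumed parameters with lam2 > lam
r = lam2 - lam                               # = W_1 distance between P(lam2) and P(lam)

# closed form of the relative entropy derived above
H_closed = lam - lam2 + lam2 * log(lam2 / lam)

# relative entropy from its definition, truncating the sum over n
n = np.arange(80)
p2 = np.array([exp(-lam2) * lam2**k / factorial(k) for k in n])
p1 = np.array([exp(-lam) * lam**k / factorial(k) for k in n])
H_num = float(np.sum(p2 * np.log(p2 / p1)))

# optimal deviation function h_lambda evaluated at r
h = lambda x: (1 + x) * np.log(1 + x) - x
print(H_closed, H_num, lam * h(r / lam))     # the three values agree
```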
A discrete spin system {#sec3}
======================
*The model and the Dobrushin interdependence coefficient.* Let $\Lambda=\{1, \dots, N\}$ ($2\le N\in\mathbb{N}$) and $\gamma\dvtx
\Lambda\times\Lambda\mapsto[0,+\infty]$ be a *non-negative* interaction function satisfying $\gamma_{ij}=\gamma_{ji} $ and $\gamma_{ii}=0$ for all $i,j\in\Lambda$. Consider the Gibbs measure $P$ on $\mathbb{N}^{\Lambda}$ with $$\label{Gibbs}P(x_1, \dots,
x_N)=\mathrm{e}^{-\sum_{i<j}\gamma_{ij}x_ix_j}\prod_{i=1}^{N}\mathcal
{P}(\delta_i)(x_i)\Big/C,$$ where $\mathcal{P}(\delta_i)(x_i)=\mathrm{e}^{-\delta_i}
\frac{\delta_i^{x_i}}{x_i!}, x_i\in\mathbb{N}$, is the Poisson distribution with parameter $\delta_i>0$ and $C$ is the normalization constant. Here and hereafter, the convention that $0\cdot\infty=0$ is used. Let $P_i(\mathrm{d}x_i|x_{\Lambda})$ be the given regular conditional distribution of $x_i$ given $x_{\Lambda\setminus\{i\}},$ which is, in the present case, the Poisson distribution $\mathcal{P}(\delta_i
\mathrm{e}^{-\sum_{j\neq i}\gamma_{ij}x_j})$ with parameter $\delta_i
\mathrm{e}^{-\sum_{j\neq i}\gamma_{ij}x_j}$, with the convention that the Poisson measure $\mathcal{P}(0)$ with parameter $\lambda=0$ is the Dirac measure $\delta_0$ at $0$. Define the Dobrushin interdependence matrix $C:=(c_{ij})_{i,j\in\Lambda}$ w.r.t. the Euclidean metric $\rho$ by $$\label{Dobrushindef} c_{ij}=\sup_{x_\Lambda
=x'_\Lambda
{\rm off }j}\frac{W_{1, \rho}(P_i(\mathrm{d}x_i|x_{\Lambda}),
P_i(\mathrm{d}x_i'|x_{\Lambda}') )} {|x_j-x_j'|} \qquad\forall i,
j\in\Lambda$$ (obviously, $c_{ii}=0$). The Dobrushin uniqueness condition [@Dobrushin68; @Dobrushin70] is then $$D:=\sup_{j}\sum_{i}c_{ij}<1.$$ For this model, we can identify $c_{ij}.$
\[Dobrushin\] Recall that $\gamma_{ij}\ge0$. We have $$c_{ij}=\delta_i(1-\mathrm{e}^{-\gamma_{ij}}).$$
By Remark \[rem1\], if $x_\Lambda=x'_\Lambda$ off $j$, then $$W_{1, \rho}(P_i(\mathrm{d}x_i|x_{\Lambda}),
P_i(\mathrm{d}x_i'|x_{\Lambda}'))=\delta_i
|\mathrm{e}^{-\sum_{k}\gamma_{ik}x_k}-\mathrm{e}^{-\sum_{k}\gamma_{ik}x_k'}|.$$ Without loss of generality, suppose that $x_j=x_j'+x$ with $x\ge1$. We have then $$\begin{aligned}
c_{ij}&=&
\delta_i \sup_{x_\Lambda= x_\Lambda' \mathrm{off} j}\frac
{|\mathrm{e}^{-\sum_{k}\gamma_{ik}x_k}-\mathrm{e}^{-\sum_{k}\gamma
_{ik}x_k'}|}{|x_j-x_j'|}
\\
&=& \delta_i \sup_{x\ge1}\frac{1-\mathrm{e}^{-\gamma_{ij}x}}{x}
\qquad \mbox{(taking $x_k=x_k'=0$ for $k\ne j$, $x_j'=0$)}
\\
&=&\delta_i(1-\mathrm{e}^{-\gamma_{ij}}).\end{aligned}$$ Here, the first equality holds since $\gamma_{ij}$ is non-negative and the last equality is due to the fact that $(1-\mathrm{e}^{-\gamma_{ij}x})/x$ is decreasing in $x>0.$
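For concreteness, the Dobrushin matrix and constant of this model can be computed in a few lines; the Python sketch below uses assumed toy parameters (four sites with nearest-neighbour repulsion) purely for illustration.

```python
import numpy as np

N = 4                                            # assumed: four sites with nearest-neighbour repulsion
delta = np.array([0.3, 0.5, 0.4, 0.2])           # Poisson parameters delta_i
gamma = np.zeros((N, N))
for i in range(N - 1):
    gamma[i, i + 1] = gamma[i + 1, i] = 0.8      # gamma_ij = gamma_ji >= 0, gamma_ii = 0

c = delta[:, None] * (1.0 - np.exp(-gamma))      # c_ij = delta_i (1 - e^{-gamma_ij})
D = c.sum(axis=0).max()                          # D = sup_j sum_i c_ij
print(c)
print("D =", D, "; Dobrushin uniqueness holds:", D < 1)
```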
*The transportation inequality $W_1H$ for mixed measure.* We return to the general framework of the Introduction. Let $\mathcal{X}$ be a general Polish space and $d$ be a metric on $\mathcal{X}$ which is lower semi-continuous on $\mathcal{X}\times
\mathcal{X}$. Consider a mixed probability measure $\mu:=\int_I \mu_\lambda \,\mathrm{d}\sigma
(\lambda)$ on $\mathcal{X}$, where, for each $\lambda\in I$, $\mu
_{\lambda}$ is a probability on $\mathcal{X}$ and $\sigma$ is a probability measure on another Polish space $I$. Let $\rho$ be a lower semi-continuous metric on $I$.
\[translate\] Suppose that:
(i) for any $\lambda\in I$, $\mu_\lambda$ satisfies $\alpha$–$W_1H$ with deviation function $\alpha\in\mathcal{C}$, $$\alpha(W_{1, d}(\nu, \mu_{\lambda}))\le H(\nu|\mu_{\lambda}) \qquad
\forall\nu\in\mathcal{M}_1(\mathcal{X});$$

(ii) $\sigma$ satisfies a $\beta$–$W_1H$ inequality on $I$ with deviation function $\beta\in\mathcal{C}$, $$\beta(W_{1,\rho}(\eta,\sigma)) \le H(\eta|\sigma) \qquad \forall
\eta\in\mathcal{M}_1(I);$$

(iii) $\lambda\to\mu_\lambda$ is Lipschitzian, that is, for some constant $M>0$, $$W_{1, d}(\mu_{\lambda}, \mu_{\lambda'})\le M \rho(\lambda,
\lambda') \qquad \forall\lambda,\lambda'\in I.$$
The mixed probability $\mu=\int_{I} \mu_{\lambda} \,\mathrm{d}\sigma(\lambda)$ then satisfies $$\label{w1hmunu} \tilde{\alpha}(W_{1, d}(\nu, \mu
))\le
H(\nu|\mu) \qquad \forall\nu\in\mathcal{M}_1(\mathcal{X}),$$ where $$\tilde{\alpha}(r)=\sup_{b\ge0}\{b r - [\alpha^*(b)+\beta
^*(b M)]\},\qquad r\ge0.$$
By Gozlan and Leonard’s Theorem \[Gozleo\], it is enough to show that for any Lipschitzian function $f$ on $\mathcal{X}$ with $\|f\|_{\mathrm{Lip}(d)}\le1$ and $b\ge0$, $$\int_\mathcal{X}\mathrm{e}^{b [f(x) - \mu(f)]} \,\mathrm{d}\mu(x)\le\exp\bigl(\alpha
^*(b) +
\beta^*(b M)\bigr).$$ Let $g(\lambda):=\int_\mathcal{X}f(x) \,\mathrm{d}\mu_\lambda(x)=\mu_\lambda
(f)$. We have $\sigma(g)=\mu(f)$ and, by Kantorovitch’s duality equality and our condition (iii), $|g(\lambda)-g(\lambda')|\le M
\rho(\lambda,\lambda')$. Using Theorem \[Gozleo\] and our conditions (i) and (ii), we then get, for any $b\ge0$, $$\begin{aligned}
\int_\mathcal{X}\mathrm{e}^{b [f(x) - \mu(f)]} \,\mathrm{d}\mu&=&
\int_I \biggl(\int_\mathcal{X}\mathrm{e}^{b [f(x) - \mu_\lambda(f)]}
\,\mathrm{d}\mu_\lambda(x)\biggr)\mathrm{e}^{b [g(\lambda) - \sigma(g)]}
\,\mathrm{d}\sigma(\lambda),
\\
&\le& \mathrm{e}^{\alpha^*(b)+\beta^*(b M)},\end{aligned}$$ which is the desired result.
We now turn to a mixed Poisson distribution, $$\label{mu}
\mu=\int_{0}^{a}\mathcal{P}(\lambda)\sigma(\mathrm{d}\lambda),$$ where $a>0$. By Proposition \[prosharp\], we know that w.r.t. the Euclidean metric $\rho$, $$h_{\lambda}(W_{1, \rho}(\nu, \mathcal{P}(\lambda)))\le
H(\nu|\mathcal{P}(\lambda))$$ and $W_{1, \rho}(\mathcal{P}(\lambda), \mathcal{P}(\lambda
'))=|\lambda-\lambda'|.$ Since $h_{\lambda}$ is decreasing in $\lambda,$ the hypotheses in Proposition \[translate\] with $\mathcal{X}=\mathbb{N}$, $I=[0,a]$, both equipped with the Euclidean metric $\rho$, are satisfied with $\alpha(r)=h_a(r)=a h(\frac{r}{a})$ and $\beta(r)=2r^2/a^2$ (the well-known CKP inequality). On the other hand, obviously, $$h(r)=(1+r)\log(1+r)-r\le\frac{r^2}{2}, \qquad r\ge0,$$ which implies that $$h_{a^2/4}(r)=\frac{a^2}4 h\biggl(\frac{4r}{a^2}\biggr)\le
\frac{2r^2}{a^2}=\beta(r).$$ Since $h_c^*(\lambda)=c (\mathrm{e}^\lambda-\lambda-1)$, $$\sup_{b\ge
0}\{br-[(h_a(b))^{\ast}+(h_{a^2/4}(b))^{\ast}]\}=\sup
_{b\ge
0}\{br-(a+a^2/4)(\mathrm{e}^b-b-1)\}=h_{a+a^2/4}(r).$$ By Proposition \[translate\], we have, for the mixed Poisson measure $\mu$ given in (\[mu\]), $$\label{mixed} h_{a+a^2/4}(W_{1,
d}(\nu, \mu))\le H(\nu|\mu)\qquad \forall\nu\in\mathcal
{M}_1(\mathbb{N}
).$$
See Chafai and Malrieu [@CM09] for fine analysis of transportation or functional inequalities for mixed measures. We can now state the main result of this section.
\[dispoisson\] Let $P$ be the Gibbs measure given in (\[Gibbs\]) with $\gamma_{ij}\ge0$. Assume Dobrushin’s uniqueness condition $$D:=\sup_{j\in\Lambda} \sum_{i\in\Lambda}
\delta_i(1-\mathrm{e}^{-\gamma_{ij}})<1.$$ For any probability measure $Q$ on $\mathbb{N}^\Lambda$ equipped with the metric $\rho_H(x_\Lambda, y_\Lambda):=\sum_{i\in\Lambda}
|x_i-y_i|$ (the index $H$ refers to Hamming), we then have, for $c:=\sum_{i\in\Lambda} (\delta_i+\delta_i^2/4)$, $$h_c\bigl((1-D)W_{1, \rho_H}(Q, P)\bigr)
\le H(Q|P)\qquad \forall Q\in\mathcal{M}_1(\mathbb{N}^\Lambda).$$
This result, without the extra constants $\delta_i^2/4$, would become sharp if $\gamma=0$ (i.e., without interaction) or $P=\mathcal{P}(\delta)^{\otimes\Lambda}$.
[Proof of Theorem \[dispoisson\]]{} By Theorem \[Gozleo\], it is equivalent to prove that for any $1$-Lipschitzian functional $F$ w.r.t. the metric $\rho_H$, $$\label{disGL} \log\mathbb{E}^P \mathrm{e}^{\lambda(F-\mathbb
{E}^ P F)}\le
h^*_{c}\biggl(\frac{\lambda}{1-D}\biggr)=c
h^*\biggl(\frac{\lambda}{1-D}\biggr)\qquad \forall\lambda>0.$$ We prove the inequality by the McDiarmid–Rio martingale method (as in [@DGW; @Wu06]). Consider the martingale $$M_0=\mathbb{E}^{P}(F), \qquad M_k(x_1^k)=\int F(x_1^k, x_{k+1}^N) P
(\mathrm{d}x_{k+1}^N|x_1^k), \qquad 1\le k\le N,$$ where $x_i^j=(x_k)_{i\le
k\le j}, P(dx_{k+1}^N|x_1^k)$ is the conditional distribution of $x_{k+1}^N$ given $x_1^k.$ Since $M_N=F,$ we have $$\mathbb{E}^P \mathrm{e}^{\lambda(F-\mathbb{E}^P F )}=\mathbb{E}^P\exp
\Biggl(\lambda\sum
_{k=1}^N (M_k-M_{k-1})\Biggr).$$ By induction, for (\[disGL\]), it suffices to establish that for each $k=1, \dots, N, P$-a.s., $$\label{subdis}
\log\int
\exp\bigl(\lambda\bigl(M_k(x_1^{k-1},
x_k)-M_{k-1}(x_1^{k-1})\bigr)\bigr)P(\mathrm{d}x_k|x_1^{k-1})\le
(\delta_k+\delta_k^2/4) h^*\biggl(\frac{\lambda}{1-D}\biggr).$$ By (\[mixed\]), $P(\mathrm{d}x_k|x_1^{k-1})$, being a convex combination of Poisson measures $P_k(\mathrm{d}x_k|x_\Lambda)=\mathcal{P}(\delta_k
\mathrm{e}^{-\sum_{j\neq k}\gamma_{kj}x_j})$ (over $x_{k+1}^N$), satisfies the $W_1H$-inequality with the deviation function $h_{\delta_k+\delta_k^2/4}$. Hence, by Theorem \[Gozleo\], (\[subdis\]) holds if $$\label{Lip}|M_k(x_1^{k-1},
x_k)-M_{k}(x_1^{k-1}, y_k)|\le\frac{1}{1-D} |x_k-y_k|.$$ In fact, the inequality (\[Lip\]) has been proven in [@Wu06], step 2 in the proof of Theorem 4.3. The proof is thus complete.
For a previous study on transportation inequalities for Gibbs measures on discrete sites, see Marton [@Marton04] and Wu [@Wu06]. Our method here is quite close to that in [@Wu06], but with two new features: (1) $W_1H$ for mixed probability measures; (2) Gozlan and Léonard’s Theorem \[Gozleo\] as a new tool.
Every Poisson distribution $\mathcal{P}(\lambda)$ satisfies the Poincaré inequality ([@Wu00], Remark 1.4) $$\operatorname{Var}_{\mathcal{P}(\lambda)} (f) \le\lambda\int_\mathbb{N}(Df(x))^2
\,\mathrm{d}\mathcal{P}(\lambda)(x) \qquad \forall f\in L^2(\mathbb{N},\mathcal
{P}(\lambda)),$$ where $Df(x):=f(x+1)-f(x)$ and $\operatorname{Var}_\mu(f):=\mu(f^2)-[\mu(f)]^2$ is the variance of $f$ w.r.t. $\mu$. By [@Wu06], Theorem 2.2 we have the following Poincaré inequality for the Gibbs measure $P$: if $D<1$, then $$\operatorname{Var}_P(F) \le\frac{\max_{1\le i\le N}\delta_i}{1-D}
\int_{\mathbb{N}^\Lambda} \sum_{i\in\Lambda} (D_i F)^2(x) \,\mathrm{d}P(x) \qquad
\forall
F\in L^2(\mathbb{N}^\Lambda, P),$$ where $D_iF(x_1,\dots,x_N):=F(x_1,\dots, x_{i-1}, x_i+1,
x_{i+1},\dots, x_N)-F(x_1,\dots,x_N)$. We remind the reader that an important open question is to prove the $L^1$-log-Sobolev inequality (or entropy inequality) $$H(F P|P) \le C \int_{\mathbb{N}^\Lambda} \sum_{i\in\Lambda} D_i F
\cdot D_i
\log F \,\mathrm{d}P \qquad \mbox{for all $P$-probability densities } F$$ (which is equivalent to the exponential convergence in entropy of the corresponding Glauber system) under Dobrushin’s uniqueness condition, or at least for high temperature.
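The one-dimensional Poincaré inequality for $\mathcal{P}(\lambda)$ quoted at the beginning of this remark is also easy to test numerically; the lines below (an illustration with an arbitrary test function and a truncated sum, taking $\lambda=2$) do so.

```python
import numpy as np
from math import exp, factorial

lam = 2.0
n = np.arange(60)
p = np.array([exp(-lam) * lam**k / factorial(k) for k in n])   # truncated Poisson(lam) weights

f = np.sqrt(n)                                # an arbitrary test function f
var = float(np.sum(p * f**2) - np.sum(p * f) ** 2)
Df = f[1:] - f[:-1]                           # Df(x) = f(x+1) - f(x)
rhs = lam * float(np.sum(p[:-1] * Df**2))
print(var, "<=", rhs, ":", var <= rhs)
```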
$W_1H$-inequality for the continuum Gibbs measure {#sec4}
=================================================
We now generalize the result for the discrete sites Gibbs measure in Section \[sec3\] to the continuum Gibbs measure (continuous gas model), by an approximation procedure.
Let $(\Omega,\mathcal{F},P^0)$ be the Poisson space over a compact subset $E$ of $\mathbb{R}^d$ with intensity $m(\mathrm{d}x)=z \,\mathrm{d}x$, where the Lebesgue measure $|E|$ of $E$ is positive and finite, and $z>0$ represents the *activity*. Given a *non-negative* pair-interaction function $\phi\dvtx \mathbb{R}^d\mapsto[0,+\infty]$, which is measurable and even over $\mathbb{R}^d$, the corresponding Poisson space is denoted by $(\Omega,\mathcal{F}, P^0)$ and the associated Gibbs measure is given by $$P^{\phi}(\mathrm{d}\omega)=\frac{\mathrm{e}^{-(1/2)\sum_{x_i, x_j\in{\rm \operatorname{supp}}(\omega), i\ne j}\phi(x_i-x_j)
-\sum_{k, x_i\in{\rm \operatorname{supp}}(\omega)}\phi(x_i-y_k)}}{Z}P^0(\mathrm{d}\omega),$$ where $Z$ is the normalization constant and $\{y_k, k\}$ is an at most countable family of points in $\mathbb{R}^d\backslash E$ such that $\sum_k
\phi(x-y_k)<+\infty$ for all $x\in E$ (boundary condition). The main result of this section is the following theorem.
\[w1hgibbs\] Assume that the Dobrushin uniqueness condition holds, that is, $$\label{D}D:=z
\int_{\mathbb{R}^d}\bigl(1-\mathrm{e}^{-\phi(y)}\bigr)\,\mathrm{d}y<1.$$ Then, w.r.t. the total variation distance $d=d_\varphi$ with $\varphi=1$ on $\Omega$, $$\label{w1hgibbs2}
h_{z |E|}\bigl((1-D)W_{1,d}(Q,
P^{\phi})\bigr)\le H(Q| P^{\phi}) \qquad \forall Q\in\mathcal
{M}_1(\Omega
).$$
Without interaction (i.e., $\phi=0$), $D=0$ and the $W_1H$-inequality (\[w1hgibbs2\]) is exactly the optimal $W_1H$-inequality for the Poisson measure $P^0$ in Proposition \[prosharp\]. In the presence of non-negative interaction $\phi$, it is well known that $D<1$ is a sharp condition for the analyticity of the pressure functional $p(z)$: indeed, the radius $R$ of convergence of the power series of $p(z)$ at $z=0$ satisfies $R \int_{\mathbb{R}^d}(1-\mathrm{e}^{-\phi(y)})\,\mathrm{d}y<1$; see [@Ru], Theorem 4.5.3. The corresponding sharp Poincaré inequality for $P^\phi$ was established in [@Wu04].
[Proof of Theorem \[w1hgibbs\]]{} We shall establish this sharp $\alpha$–$W_1H$ inequality for $P^\phi$ by approximation.
By part (b$'$) of Theorem \[Gozleo\], it is equivalent to show that for any $F,G\in C_b(\Omega)$ such that $F(\omega)-G(\omega')\le
d(\omega,\omega'),~ \omega,\omega'\in\Omega$, and for any $\lambda>0$,
$$\label{w1hgibbs01} \log\int_\Omega \mathrm{e}^{\lambda
F}\,\mathrm{d}P^\phi\le
\lambda P^\phi(G) + z|E| h^*\biggl(\frac\lambda{1-D}\biggr),$$
where $h^*(\lambda)=\mathrm{e}^\lambda-\lambda-1$.
*Step 1. $\phi$ is continuous and $\{y_k,k\}$ is finite.* We want to approximate $P^\phi$ by the discrete sites Gibbs measures given in the previous section. To this end, assume first that $\phi$ is continuous ($+\infty$ is regarded as the one-point compactification of $\mathbb{R}^+$) or, equivalently, that $\mathrm{e}^{-\phi
}\dvtx \mathbb{R}
^d\to
[0,1]$ is continuous with the convention that $\mathrm{e}^{-\infty}:=0$.
For each $N\ge2$, let $\{E_1, \dots, E_N\}$ be a measurable decomposition of $E$ such that, as $N$ goes to infinity, $\max_{1\le
i\le N} \operatorname{Diam}(E_i)\to0$ and $\max_{1\le i\le N}|E_i|\to0$, where $|E|$ is the Lebesgue measure of $E$ and ${\rm
Diam}(E_i)=\sup_{x,y\in E_i}|x-y|$ is the diameter of $E_i$. Fix $x_i^0\in E_i$ for each $i$. Consider the probability measure $P_N$ on $\mathbb{N}^\Lambda$ ($\Lambda:=\{1,\dots,N\}$) given by, for all $(n_1,\dots,n_N)\in\mathbb{N}^\Lambda$, $$\begin{aligned}
P_N(n_1,\dots,n_N) &=&(1/Z)\mathrm{e}^{-(1/2)\sum_{i\neq
j}\phi(x_i^0-x_j^0)n_in_j-\sum_{i,k} \phi(x_i^0-y_k)n_i
}\prod_{i=1}^N\mathcal{P}(z|E_i|)(n_i)
\\
&=&(1/Z') \mathrm{e}^{-\sum_{i<
j}\phi(x_i^0-x_j^0)n_in_j}\prod_{i=1}^N\mathcal{P}(\delta_{N,i})(n_i),\end{aligned}$$ where $Z,Z'$ are normalization constants and $\delta_{N,i}=z|E_i| \mathrm{e}^{-
\sum_k\phi(x_i^0-y_k)}\le z|E_i|$. Consider the mapping $\Phi\dvtx \mathbb{N}^\Lambda\to\Omega$ given by $$\Phi(n_1,\dots,n_N)=\sum_{i=1}^N n_i \delta_{x_i^0}.$$ $\Phi$ is isometric from $(\mathbb{N}^\Lambda,\rho_H)$ to $(\Omega,d)$, where $d=d_\varphi$ with $\varphi=1$ (given in Section \[sec2\]). Finally, let $P^N$ be the push-forward of $P_N$ by $\Phi$. It is straightforward to see that $P^N\to P^\phi$ weakly.
The Dobrushin constant $D_N$ associated with $P_N$ is given by $$D_N =\sup_j\sum_{i}\delta_{N,i}
\bigl(1-\mathrm{e}^{-\phi(x_i^0-x_j^0)}\bigr)\le\sup_j \sum_{i} z|E_i|
\bigl(1-\mathrm{e}^{-\phi(x_i^0-x_j^0)}\bigr).$$ When $N$ goes to infinity, $$\limsup_{N\to\infty}D_N\le\sup_{y\in
\mathbb{R}^d} z\int_E \bigl(1-\mathrm{e}^{-\phi(x-y)}\bigr) \,\mathrm{d}x = z\int_{\mathbb{R}^d}
\bigl(1-\mathrm{e}^{-\phi(x)}\bigr) \,\mathrm{d}x=D.$$ Therefore, if $D<1$ and $D_N<1$ for all $N$ large enough, then the $W_1H$-inequality in Theorem \[dispoisson\] holds for $P_N$. By the isometry of the mapping $\Phi$, $P^N$ satisfies the same $W_1H$-inequality on $\Omega$ w.r.t. the metric $d$, which gives us, by Theorem \[Gozleo\](b$'$), $$\log\mathbb{E}^{P^N} \mathrm{e}^{\lambda F} \le\lambda P^N(G) +
\biggl(\sum_{i\in\Lambda} [\delta_{N,i} + \delta_{N,i}^2/4]\biggr)
h^*\biggl(\frac\lambda{1-D_N}\biggr).$$ By letting $N$ go to infinity, this yields (\[w1hgibbs01\]), for $P^N\to P^\phi$ weakly and $$\sum_{i\in\Lambda} [\delta_{N,i} + \delta_{N,i}^2/4]\le
\sum_{i\in\Lambda} z|E_i|(1+ z|E_i|/4)\to z|E|.$$
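To make the discretisation of Step 1 concrete, here is a minimal numerical sketch under assumed toy choices ($E=[0,1]^2$, an assumed smooth repulsive pair potential, empty boundary configuration $\{y_k\}$) of the cells $E_i$, the parameters $\delta_{N,i}$ and the discrete Dobrushin constant $D_N$.

```python
import numpy as np

z = 0.5                                           # assumed activity
phi = lambda r: 2.0 * np.exp(-10.0 * np.sum(r**2, axis=-1))   # assumed non-negative even potential

n_side = 20                                       # the square E = [0,1]^2 is cut into n_side^2 cells E_i
h = 1.0 / n_side
centers = np.array([[(i + 0.5) * h, (j + 0.5) * h]
                    for i in range(n_side) for j in range(n_side)])   # representative points x_i^0
delta = z * h**2 * np.ones(len(centers))          # delta_{N,i} = z |E_i| (empty boundary condition)

# Dobrushin matrix of the discretised model: c_ij = delta_i (1 - e^{-phi(x_i^0 - x_j^0)}), c_ii = 0
diff = centers[:, None, :] - centers[None, :, :]
c = delta[:, None] * (1.0 - np.exp(-phi(diff)))
np.fill_diagonal(c, 0.0)
D_N = c.sum(axis=0).max()
print("D_N =", D_N)                               # discrete analogue of D = z * int (1 - e^{-phi}) dx
```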
*Step 2. General $\phi$ and $\{y_k,k\}$ is finite.* For general measurable non-negative and even interaction function $\phi$, we take a sequence of continuous, even and non-negative functions $(\phi_n)$ such that $1-\mathrm{e}^{-\phi_n}\to1-\mathrm{e}^{-\phi}$ in $L^1(\mathbb{R}^d,\mathrm{d}x)$. Now, note that $ \frac{\mathrm{d}P^{\phi_n}}{\mathrm{d}P^0} \to\frac{\mathrm{d}P^{\phi}}{\mathrm{d}P^0} $ in $L^1(\Omega,P^0)$, that is, $P^{\phi_n}\to P^\phi$ in total variation. Hence, (\[w1hgibbs01\]) for $P^{\phi_n}$ (proved in step 1) yields (\[w1hgibbs01\]) for $P^{\phi}$.
*Step 3. General case.* Finally, if the set of points $\{y_k,k\}$ is infinite, approximating $\sum_{k=1}^\infty\phi(x_i-y_k)$ by $\sum_{k=1}^n \phi(x_i-y_k)$ in the definition of $P^\phi$, we get (\[w1hgibbs01\]) for $P^{\phi}$, as in step 2.
\[rem41\] The explicit Poissonian concentration inequality (\[a1\]) follows from Theorem \[w1hgibbs\] by Theorem \[Gozleo\](c) (with $n=1$) by noting that the observable $F(\omega)=\omega(f)/(2N)^d$ there is Lipschitzian w.r.t. $d$ with $\|F\|_{\rm Lip(d)}\le M/(2N)^d$ and $h(r)\ge(r/2)\log(1+r)$.
A quite curious phenomenon occurs in the continuous gas model: the [*extra*]{} constant $\delta_i^2/4$ coming from the mixture of measures now disappears.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are grateful to the referee for his conscientious comments. Yutao Ma was supported by NSFC Grant No. 10721091.
[99]{}
Bobkov, S.G. and Götze, F. (1999). Exponential integrability and transportation cost related to logarithmic [S]{}obolev inequalities. *J. Funct. Anal.* **163** 1–28.
Bolley, F. and Villani, C. (2005). Weighted [C]{}siszár–[K]{}ullback–[P]{}insker inequalities and applications to transportation inequalities. *Ann. Fac. Sci. Toulouse* **14** 331–352.
Chafai, D. and Malrieu, F. (2010). On fine properties of mixtures with respect to concentration and Sobolev type inequalities. *Ann. Inst. H. Poincaré.* Preprint. To appear.
Djellout, H., Gullin, A. and Wu, L.M. (2004). Transportation cost-information inequalities for random dynamical systems and diffusions. *Ann. Probab.* **32** 2702–2732.
Dobrushin, R.L. (1968). The description of a random field by means of conditional probabilities and condition of its regularity. [*Theory Probab. Appl.*]{} **13** 197–224.
Dobrushin, R.L. (1970). Prescribing a system of random variables by conditional distributions. *Theory Probab. Appl.* **15** 458–486.
Gozlan, N. and Léonard, C. (2007). A large deviation approach to some transportation cost inequalities. *Probab. Theory Related Fields* **139** 235–283.
Kellerer, H.G. (1984). Duality theorems for marginal problems. *Z. Wahrsch. Verw. Gebiete* **67** 399–432.
Léonard, C. (2007). Transport inequalities: A large deviation point of view. In *Course in Chinese Summer School for Ph.D. Students, Wuhan*.
Liu, W. Optimal transportation-entropy inequalities for several usual distributions on $\mathbb{R}$. [Preprint. Submitted.]{}
Marton, K. (1996). Bounding $\bar{d}$-distance by informational divergence: A way to prove measure concentration. *Ann. Probab.* **24** 857–866.
Marton, K. (2004). Measure concentration for Euclidean distance in the case of dependent random variables. *Ann. Probab.* **32** 2526–2544.
Ruelle, D. (1969). *Statistical Mechanics: Rigorous Results*. New York: Benjamin.
Villani, C. (2003). *Topics in Optimal Transportation*. Providence, [RI]{}: [Amer. Math. Soc.]{}
Wu, L.M. (2000). A new modified logarithmic Sobolev inequality for Poisson processes and several applications. *Probab. Theory Related Fields* **118** 427–438.
Wu, L.M. (2004). Estimate of the spectral gap for continuous gas. *Ann. Inst. H. Poincaré Probab. Statist.* **40** 387–409.
Wu, L.M. (2006). Poincaré and transportation inequalities for Gibbs measures under the Dobrushin uniqueness condition. *Ann. Probab.* **34** 1960–1989.
|
---
abstract: 'The retrieval of phases from intensity measurements is a key process in many fields in science, from optical microscopy to x-ray crystallography. Here we study phase retrieval of a one-dimensional multi-phase object that is illuminated by quantum states of light. We generalize the iterative Gerchberg-Saxton algorithm to photon correlation measurements on the output plane, rather than the standard intensity measurements. We report a numerical comparison of classical and quantum phase retrieval of a small one-dimensional object of discrete phases from its far-field diffraction. While the classical algorithm was ambiguous and often converged to wrong solutions, quantum light produced a unique reconstruction with smaller errors and faster convergence. We attribute these improvements to a larger Hilbert space that constrains the algorithm.'
author:
- 'Liat Liberman[^1]'
- 'Yonatan Israel$^{\ast}$'
- Eilon Poem
- Yaron Silberberg
title: Quantum Enhanced Phase Retrieval
---
Introduction
============
Quantum states of light have been widely explored in recent years for their ability to offer considerable enhancement in measurement sensitivity over classical ones [@QuantumMetrologyReview2011]. Quantum states were mostly considered for enhancing the sensitivity of the measurement of a single optical phase using an optical interferometer. Recently, the problem of simultaneous estimation of several optical phases using quantum light was investigated [@spagnolo2012; @MultiPhaseWalmsleyPRL2013]. Here we investigate an iterative phase-retrieval technique for the estimation of one-dimensional phase objects with quantum states of light, and show that the technique is more robust than its classical version.
The problem of phase retrieval is one of great scientific interest, which arises when the intensity recorded in the far-field is used to determine the phase structure of an object, information which is otherwise undetected. It stems from the fact that detectors record the intensity of waves, while often the important information is encoded in their phases. Phase retrieval has been intensively investigated [@shechtmanPR] and has found applications across many fields of science, from astronomy (wave-front sensing) [@PR_astronomy] to nanotechnology (x-ray crystallography and electron microscopy) [@MiaoXray; @ElectronMicroscopyBook]. In the most common scenario, the far-field diffraction intensity pattern is measured; this measured far-field intensity, together with known constraints on the illumination (e.g. its intensity profile), is used in an iterative algorithm to derive the phase structure of the object [@GerchbergAndSaxton1972]. It is known, however, that phase retrieval has certain limitations; for example, phase retrieval of one-dimensional objects is problematic, and often leads to multiple ambiguous solutions [@shechtmanPR].
![A schematic description of phase retrieval using quantum states. A quantum state $|\psi\rangle$, entangled over $m$ modes, is input to a multi-mode interferometric system. The state passes through $m$ phases denoted by $\vec{\theta} = \{\theta_1,\theta_2,\dots,\theta_m\}$, and followed by a transformation $\hat{U}$. Photon correlation measurement is carried out on the output state. \[fig:system\]](system2.eps){width=".85\columnwidth"}
Here we study the use of quantum states of light to measure a one-dimensional multi-phase object, as shown schematically in Fig. \[fig:system\], using a phase-retrieval approach. We present a protocol that utilizes entangled states to reconstruct multiple phases simultaneously using an iterative error-reduction algorithm. Our numerical results show that a quantum approach has several advantages over approaches that use classical light. First, phase retrieval using quantum light can be unambiguous, as it reaches the single, correct solution. The algorithm also converges much faster than in the classical case. Furthermore, the use of quantum states can enhance the sensitivity of the retrieved phases over cases where classical light is used, for the same number of photons probing the system. These quantum enhanced capabilities are already revealed for two-photon entangled states, and are particularly important when probing delicate samples which are sensitive to illumination intensity, such as biological samples [@BiologicalMicroscopyReview2003], quantum gases [@QNDofQuantumGasesNatPhys2007], and atomic ensembles [@EntandledProbingDelicateMaterialNatPhoton2012]. With recent advances in generating quantum states of light in the x-ray regime [@XRAY-PDC_ShwartzPRL2012], their application to phase retrieval holds great promise.
An important milestone in the field of phase retrieval was the iterative algorithm of Gerchberg and Saxton [@GerchbergAndSaxton1972], which reduces the error in the phase object with every iteration. It does so by iterating between the input plane $E_{in}(x)$ and the output plane $\tilde{E}_{out}(u)$, which are related by the known transformation of the system, and by imposing at each iteration the known input-plane intensity and the measured output-plane intensity. In the common case of far-field diffraction, the transformation between the input and output planes is the Fourier transform, where $x$ and $u$ are the coordinates along the input and output planes, respectively. Similarly, phase retrieval can be applied to the frequency-time domains, where the temporal field $E_{in}(t)$ is related to its spectral one $\tilde{E}_{out}(\omega)$ by a Fourier transformation, and the problem relates to determining temporal amplitudes and phases from spectral power measurements [@trebino93]. However, phase reconstruction is not unique in general. Since the amplitudes of the input and output planes, $|E_{in}(x)|$ and $|\tilde{E}_{out}(u)|$, are only restricted by their measured intensities, there may be additional solutions for the phase image which are incorrect. Some examples include shifted images $E_{in}(x-x_0)$, mirror images $E_{in}^*(-x)$, and global phases $e^{i\Phi}E_{in}(x)$, which are exact solutions of the phase retrieval problem, yet there may exist many other non-trivial exact solutions as well [@Walther; @sanz1985], which to the best of our knowledge were never analyzed.
Quantum enhanced phase retrieval uses quantum states to probe the object, as we outline in the next section. In the far-field, instead of intensity measurements, we employ measurement of photon correlations on the output quantum states. Using these measurements and the knowledge of the input state, we describe the algorithm used for the retrieval of the phases of the object. We then describe a specific example, where we also discuss in some detail the sensitivity of using quantum light for phase retrieval.
Methodology {#sec:methodology}
===========
Let us consider the problem of estimating a one-dimensional object of multiple phases, probed by a quantum state of light, as shown in Fig. \[fig:system\]. A phase object is characterized by a set of $m$ unknown phases, $\vec{\theta}=\{\theta_1,\dots,\theta_m\}$.
Quantum light
-------------
An initial pure state of $N$ photons in $m$ modes has the form $$\begin{aligned} \label{eq:psi_input}
|\psi\rangle = \sum_{k=1}^D \alpha_k |n_1^{(k)},n_2^{(k)},\dots,n_m^{(k)}\rangle = \sum_{k=1}^D \alpha_k |\vec{n}^{(k)}\rangle,
\end{aligned}$$ where $\vec{n}^{(k)}$ is a vector of length $m$ with photon number components $n_{x}^{(k)}$ in each mode $x$ and for each configuration $k$, such that $\sum_{x=1}^m n_x^{(k)} = N$. The set of amplitudes $\vec{\alpha} = \{\alpha_k\}$ (where $k=1,\dots,D$) is normalized $\sum_{k=1}^D |\alpha_k|^2 =1$, and the total number of configurations is $D = \binom{N+m-1}{N}$.
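As a small side illustration (not part of the derivation), the $D$ configurations can be enumerated explicitly in a few lines of Python; the helper name `configurations` is our own and is reused in the sketches below.

```python
from itertools import combinations_with_replacement
from math import comb

def configurations(N, m):
    """All occupation vectors (n_1, ..., n_m) of N indistinguishable photons in m modes."""
    configs = []
    for occupied in combinations_with_replacement(range(m), N):
        n = [0] * m
        for x in occupied:
            n[x] += 1
        configs.append(tuple(n))
    return configs

# Example: N = 2 photons in m = 6 modes gives D = C(N+m-1, N) = C(7, 2) = 21 configurations
assert len(configurations(2, 6)) == comb(2 + 6 - 1, 2) == 21
```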
After passing through the phase object the state accrues $m$ phases, as described by the unitary transformation $\hat{U}_{\vec{\theta}} = \exp(\imath \sum_{x=1}^m \theta_x \hat{n}_x)$, where $\hat{n}_x$ is the number operator for mode $x$. The state in Eq. \[eq:psi\_input\] then becomes $$\label{eq:psi_theta}
|\psi_{\vec{\theta}}\rangle = \hat{U}_{\vec{\theta}} |\psi\rangle = \sum_{k=1}^D \alpha_k e^{\imath \phi_k} |\vec{n}^{(k)}\rangle,$$ where the set of phases accrued by the state, $\vec{\phi} = \{\phi_k\}$ is related to the object phases $\vec{\theta}$ by $\phi_k = \vec{\theta} \cdot \vec{n}^{(k)}$.
Next, this quantum state undergoes a transformation, most commonly Fourier transformation via diffraction, which transforms the state in Eq. \[eq:psi\_theta\] to the final state at the output $$\label{eq:psi_F}
|\psi_{F}\rangle = \hat{U} |\psi_{\vec{\theta}}\rangle = \sum_{t=1}^D \beta_t |\vec{n}^{(t)}\rangle.$$ In Eq. \[eq:psi\_F\] we assumed that the transformation described by $\hat{U}$ is unitary; it can therefore be represented by a unitary $m\times m$ matrix $U$. Such an operation transforms the photon creation operators, in any mode $x$, by $\hat{a}^{\dagger}_x \rightarrow \sum_{y=1}^m [U]_{x,y} \hat{a}^{\dagger}_y$. One example of such an operation is the discrete Fourier transform (DFT), which we use as the transformation to the far-field plane. The DFT is represented by an $m \times m$ matrix $[U]_{x,y}=\exp(\imath 2 \pi (x-1)(y-1)/m)/\sqrt{m}$ [@marek_MultiportBS_1997]. The set of amplitudes denoted by $\vec{\beta}$ in Eq. \[eq:psi\_F\] can be calculated as a function of the input state amplitudes $\vec{\alpha}$ of Eq. \[eq:psi\_input\], $$\label{eq:psi_perm}
\beta_t = \langle \vec{n}^{(t)}|\hat{U}|\psi_{\vec{\theta}} \rangle = \sum_{k=1}^D \frac{\alpha_k e^{\imath \phi_k} \textrm{Per}(V_{k,t})} {\sqrt{\prod_{x=1}^{m}(n_{x}^{(k)})!\prod_{y=1}^{m}(n_{y}^{(t)})!}},$$ where $V_{k,t}$ is an $N\times N$ sub-matrix of the matrix $U$ constructed by repeating the $x$th row of $U$ $n^{(k)}_x$ times, and then repeating the $y$th column of the result matrix $n^{(t)}_y$ times for all $x$ and $y$, and $\textrm{Per}(V_{k,t})$ is the permanent of the matrix $V_{k,t}$.
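The computation of $\vec{\beta}$, and hence of the measured probabilities $\vec{P}_\beta$, can be sketched directly from Eq. \[eq:psi\_perm\]. The Python code below is an illustration only: it follows the row/column construction of $V_{k,t}$ exactly as stated above and uses a naive permanent that is adequate only for small photon numbers; as a consistency check, the output probabilities sum to one for a sample two-photon input.

```python
import numpy as np
from itertools import combinations_with_replacement, permutations
from math import factorial, prod

def configurations(N, m):
    out = []
    for occupied in combinations_with_replacement(range(m), N):
        n = [0] * m
        for x in occupied:
            n[x] += 1
        out.append(tuple(n))
    return out

def permanent(A):
    """Naive O(n!) permanent, fine for the small matrices used here."""
    n = len(A)
    return sum(np.prod([A[i][p[i]] for i in range(n)]) for p in permutations(range(n)))

def V(U, n_in, n_out):
    """V_{k,t}: repeat row x of U n_in[x] times, then column y of the result n_out[y] times."""
    return np.repeat(np.repeat(U, n_in, axis=0), n_out, axis=1)

def output_probabilities(alpha, theta, U, configs):
    """P_beta_t = |beta_t|^2 with beta_t as in Eq. (eq:psi_perm)."""
    beta = []
    for n_out in configs:
        b = 0j
        for a, n_in in zip(alpha, configs):
            if a == 0:
                continue
            phi_k = np.dot(theta, n_in)                      # phi_k = theta . n^(k)
            norm = np.sqrt(prod(map(factorial, n_in)) * prod(map(factorial, n_out)))
            b += a * np.exp(1j * phi_k) * permanent(V(U, n_in, n_out)) / norm
        beta.append(b)
    return np.abs(np.array(beta)) ** 2

m, N = 6, 2
U = np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)   # DFT matrix
configs = configurations(N, m)
alpha = np.zeros(len(configs))
alpha[configs.index((1, 1, 0, 0, 0, 0))] = 1.0               # e.g. one photon in each of modes 1 and 2
P_beta = output_probabilities(alpha, np.zeros(m), U, configs)
print(len(configs), P_beta.sum())                            # 21 configurations, probabilities sum to 1
```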
Finally, all $D$ probabilities in the output state $\vec{P}_{\beta} = \{{P_{\beta}}_t\} = \{|\beta_t|^2\}$ are measured by employing $N$-photon coincidence detection in $m$ modes, as described in Fig. \[fig:system\].
![A schematic description of phase retrieval algorithm for (a) quantum and (b) classical light. Each iteration in the algorithm uses the input state amplitudes, $\vec{\alpha}$ or $\vec{E_{in}}$, transforms these amplitudes ($\hat{U}$), and applies the measured photon correlations $\vec{P}_\beta$ or intensities $\vec{I}_{out}$, which is followed by the inverse transformation ($\hat{U}^{\dagger}$), for the quantum or classical algorithms respectively. The object phases $\vec{\theta}$ evolve over the iterations of the algorithm, while for the quantum algorithm these phases are found from $\vec{\phi}$. \[fig:diagram\]](PR_diagram_v4.eps){width="\columnwidth"}
Phase retrieval algorithm
-------------------------
We will now describe the procedure for reconstructing the set of unknown phases $\vec{\theta}$, which generalizes the Gerchberg-Saxton (GS) error reduction iterative algorithm [@GerchbergAndSaxton1972] to quantum light, as shown in Fig \[fig:diagram\](a). The goal is to retrieve the phases $\vec{\theta}$ from the known amplitudes of the input state $\vec{\alpha}$, and the measured probabilities of the output state $\vec{P}_{\beta}$.
We begin by guessing random initial values for the set of $m$ phases, $\vec{\theta^{(0)}}$. The $i$th iteration of the algorithm begins by constructing an input state $|\tilde{\psi}_{\vec{\theta}}^{(i)}\rangle = \sum_{k=1}^D \alpha_k \exp(\imath \phi_k^{(i-1)}) |\vec{n}^{(k)}\rangle$, as in Eq. \[eq:psi\_theta\], where $\phi_k^{(i-1)} = \vec{\theta}^{(i-1)}\cdot\vec{n}^{(k)}$. Then, the state $|\tilde{\psi}_{\vec{\theta}}^{(i)}\rangle$ is transformed as in Eq. \[eq:psi\_F\] to find a set of output state amplitudes $\vec{\beta}^{(i)}$. The arguments of these complex amplitudes are combined with the measured output probabilities $\vec{P}_{\beta}$ to yield a new estimate of the output state $|\psi_F^{(i)}\rangle = \sum_{t=1}^D \sqrt{{P_{\beta}}_t}\exp(\imath \, \textrm{arg}\, (\beta_t^{(i)}))|\vec{n}^{(t)}\rangle$. This state is then transformed back to retrieve the corresponding input state $|\psi_{\vec{\theta}}^{(i)}\rangle = \hat{U^{\dagger}}|\psi_{F}^{(i)}\rangle = \sum_{k=1}^D \alpha_k^{(i)} \exp(\imath \phi_k^{(i)})|\vec{n}^{(k)}\rangle$, from which a new estimate for the set of phases $\vec{\theta}^{(i)}$ is found by inverting the relation $\phi_k^{(i)} = \vec{\theta}^{(i)}\cdot\vec{n}^{(k)}$.
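A minimal sketch of this loop is given below for the simplest, single-photon case, where the transformation reduces to multiplication by the matrix $U$ and the state phases coincide with the object phases, $\phi_k=\theta_k$; for $N\ge2$ the two matrix products would be replaced by the permanent-based transform sketched above (and its inverse, built from $U^\dagger$), and $\vec{\theta}$ recovered from $\phi_k=\vec{\theta}\cdot\vec{n}^{(k)}$ on the support of $\vec{\alpha}$. Note that this single-photon reduction is essentially the classical algorithm and may, as discussed in the Introduction, stagnate at an ambiguous solution.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 6
U = np.exp(2j * np.pi * np.outer(np.arange(m), np.arange(m)) / m) / np.sqrt(m)   # DFT
alpha = np.ones(m) / np.sqrt(m)                         # known input amplitudes
theta_true = np.append(0.0, rng.uniform(0, 2 * np.pi, m - 1))
P_meas = np.abs(U @ (alpha * np.exp(1j * theta_true))) ** 2   # "measured" output probabilities

theta = rng.uniform(0, 2 * np.pi, m)                    # random initial guess theta^(0)
for _ in range(500):
    beta = U @ (alpha * np.exp(1j * theta))             # forward step, imposing the known |alpha|
    psi_out = np.sqrt(P_meas) * np.exp(1j * np.angle(beta))   # impose the measured probabilities
    psi_in = U.conj().T @ psi_out                       # inverse transform
    theta = np.angle(psi_in) - np.angle(psi_in[0])      # new phase estimate (global phase removed)

err_F = np.sum((np.abs(U @ (alpha * np.exp(1j * theta))) ** 2 - P_meas) ** 2)   # delta P_F^2
print("output-plane error:", err_F)
print("retrieved:", np.mod(theta, 2 * np.pi))
print("true:     ", np.mod(theta_true, 2 * np.pi))
```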
The GS algorithm is known to always converge to a solution by means of error reduction [@GerchbergAndSaxton1972; @Fienup:82], not necessarily to the correct solution. In order to quantify the error with which the algorithm converges we use two different measures: the error in the Fourier output state in the $i$th iteration, $\delta P_F(i)$, $$\label{eq:E_F}
\delta P_F^2(i) = \sum_{t=1}^D \left( |\beta_t^{(i)}|^2 - P_{\beta_t}\right)^2,$$ and the phase error in the $i$th iteration, $\delta\vec{\theta}(i)$, $$\label{eq:E_theta}
\delta\vec{\theta}^2(i) = \sum_{x=2}^m \mod(|\theta_x^{(i)}- \theta_x| ,2\pi)^2,$$ where $\mod(a,b) = a - b\,\mathrm{round}(a/b)$ is the remainder of the division $a/b$ with the quotient rounded to the nearest integer.
Required conditions for uniqueness {#subsection:uniqueness}
----------------------------------
In order to achieve a unique phase retrieval the input quantum state amplitudes $\vec{\alpha}$ of Eq. \[eq:psi\_input\] are chosen such that they satisfy two conditions.
### Avoiding trivial ambiguities
In order to eliminate the trivial ambiguities, the input states should be chosen such that they have no symmetries of translation and reflection with respect to the phases of the modes. The simplest solution is to arrange the average photon number in these modes to break those symmetries.
### Phase transformation
The object phases $\vec{\theta}$ are evaluated by first estimating the phases $\vec{\phi}$ of the quantum states that are used as the interrogating field. To uniquely determine the $m$ phases, clearly one has to start with at least $m$ non-zero basis-state amplitudes $\vec{\alpha}$. When $m$ initial amplitudes are used, the matrix that expresses the relations between the $m$ object phases $\vec{\theta}$ and the subset of $m$ phases that are used from $\vec{\phi}$ should have a non-zero determinant, so that the phase of each mode can be uniquely extracted from the reconstructed phases of the basis vectors. In addition, since the phases of the state $\vec{\phi}$ are reconstructed up to an integer number of $2\pi$, we need to make sure that this shift remains an integer number of $2\pi$ for the object phases $\vec{\theta}$ as well.
Example: Retrieval of Six Phases
================================
We describe here a phase retrieval problem with $m=6$ as an instructive example. We assume that the transformation $\hat{U}$ is the discrete Fourier transform (DFT) which is the most relevant one for many practical realizations. We begin with the following quantum two-photon state ($N=2$): $$\begin{aligned}
\label{eq:psi_6}
|\psi_6\rangle=\frac{1}{\sqrt{6}}(&|2,0,0,0,0,0\rangle+ |1,1,0,0,0,0\rangle+ \nonumber\\
&|1,0,1,0,0,0\rangle+|1,0,0,1,0,0\rangle+ \nonumber\\
&|1,0,0,0,1,0\rangle+|0,1,0,0,0,1\rangle).\end{aligned}$$ The input state of Eq. \[eq:psi\_6\] contains only six of the $D_{m=6}= 21$ configurations of two photons in six modes, all six with equal amplitudes of $1/\sqrt{6}$. This input state was constructed with care in order to fulfil the two requirements for uniqueness (section \[subsection:uniqueness\]): First, the state is chosen to eliminate trivial ambiguities, i.e. translation or reflection. Indeed, the input state of Eq. \[eq:psi\_6\] has non-equal average photon numbers in the various ports; the intensity ratio between the modes is $6:2:1:1:1:1$, which breaks both symmetries. Second, the input state basis is chosen such that it enables the extraction of measured object phases $\vec{\theta}$, from the phases of the quantum state $\vec{\phi}$, which are actually estimated by the algorithm.
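For concreteness, the occupation basis and the amplitude vector of Eq. \[eq:psi\_6\] can be generated as follows (an illustrative sketch; the ordering of the $D=21$ configurations is arbitrary).

```python
import numpy as np
from itertools import combinations_with_replacement

m, N = 6, 2
# All D = C(m+N-1, N) = 21 configurations of N photons in m modes.
basis = [tuple(int(v) for v in np.bincount(c, minlength=m))
         for c in combinations_with_replacement(range(m), N)]
assert len(basis) == 21

# The six populated configurations of Eq. (psi_6), each with amplitude 1/sqrt(6).
populated = [(2, 0, 0, 0, 0, 0), (1, 1, 0, 0, 0, 0), (1, 0, 1, 0, 0, 0),
             (1, 0, 0, 1, 0, 0), (1, 0, 0, 0, 1, 0), (0, 1, 0, 0, 0, 1)]
alpha = np.array([1 / np.sqrt(6) if n in populated else 0.0 for n in basis])

# Mean photon number per mode: ratio 6:2:1:1:1:1, breaking translation/reflection symmetry.
mean_photons = (alpha[:, None] ** 2 * np.array(basis)).sum(axis=0)
print(mean_photons * 6)   # -> [6. 2. 1. 1. 1. 1.]
```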
To compare the phase retrieval performance with quantum light to that with classical light, we performed two sets of simulations, with the same set of object phases that was chosen randomly to be $\vec{\theta}_{obj}=\{0, 3.22, 4.10, 4.57, 1.35, 4.11\}$.
![Different solutions found by the phase retrieval algorithm with classical light. Solutions (a)-(g) are wrong reconstructions, while (h) is the correct one. \[fig:phase\_rec\_6phases\]](phase_rec_cl_fig.eps){width="\columnwidth"}
![Comparing the performance of the phase retrieval algorithm with quantum and classical light. (a) Histograms of the retrieved phase error $\delta\vec{\theta}$ and (b) the Fourier errors $\delta P_F$ as a function of the iteration number $i$, using classical and quantum light for retrieval of $\vec{\theta}_{obj}$ for $1000$ runs of the algorithm. In the classical case, only $\sim 16\%$ correct reconstructions were achieved, while the erroneous solutions account for the majority of the instances. The Fourier error of the classical algorithm ($\delta P_F^{(cl)}$) is shown only for cases that converged to the correct solution. The quantum case, which used the entangled two-photon state given in Eq. \[eq:psi\_6\], always converged to the correct phases. \[fig:errors\]](hist_phase_error_and_Fourier.eps){width="\columnwidth"}
First, we applied the GS algorithm with classical light, as shown in Fig. \[fig:diagram\](b). We assumed a classical coherent input field with input amplitudes $\vec{E}_{in}=\{\sqrt{6},\sqrt{2},1,1,1,1\}$, such that it reproduces the same intensity ratio as the quantum state in Eq. \[eq:psi\_6\], and, similarly, does not have reflection and translation symmetries. This input field was transformed with the phases $\vec{\theta}_{obj}$ and then by the DFT to obtain a set of six complex output amplitudes $\vec{\tilde{E}}_{out}$. The output intensities $\vec{I}_{out}$ are then used as the input to the GS algorithm. We ran the algorithm a large number of times, each run starting with a different random set of initial phases. The algorithm almost always converged, i.e. found a solution which reproduces the Fourier-plane intensities with a very low error, $\delta P_F\ll10^{-3}$, but most of the time it did not find the correct set of phases $\vec{\theta}_{obj}$. All the solutions that were found using classical light are presented in Fig. \[fig:phase\_rec\_6phases\], where 7 out of 8 of these solutions are actually wrong. A histogram showing the phase error distribution for 1000 runs of the algorithm is shown in Fig. \[fig:errors\](a). In this representative example, the algorithm converged to the correct phases only in about 16% of the runs.
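A minimal classical GS loop for this $m=6$ example might look as follows (again an illustrative sketch rather than the code used for the figures; the number of iterations and the random seed are arbitrary).

```python
import numpy as np

m = 6
x, y = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
F = np.exp(2j * np.pi * x * y / m) / np.sqrt(m)            # DFT matrix

E_in = np.sqrt(np.array([6., 2., 1., 1., 1., 1.]))         # known input amplitudes
theta_obj = np.array([0, 3.22, 4.10, 4.57, 1.35, 4.11])    # unknown object phases
I_out = np.abs(F @ (E_in * np.exp(1j * theta_obj))) ** 2   # "measured" far-field intensities

theta = 2 * np.pi * np.random.rand(m)                      # random initial guess
for _ in range(500):
    E = E_in * np.exp(1j * theta)                          # impose known input moduli
    E_far = F @ E
    E_far = np.sqrt(I_out) * np.exp(1j * np.angle(E_far))  # impose measured intensities
    E_back = F.conj().T @ E_far
    theta = np.angle(E_back)                               # keep only the phases

# A given run may converge to one of the wrong solutions; compare up to a global phase.
print(np.mod(theta - theta[0] - (theta_obj - theta_obj[0]), 2 * np.pi))
```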
In contrast, the quantum algorithm always found the correct solution. The quantum simulation was performed in an analogous way: the input state of Eq. \[eq:psi\_6\] was first transformed by the phase vector $\vec{\theta}_{obj}$ according to Eq. \[eq:psi\_theta\], and then by the DFT to calculate the set of output amplitudes $\vec{\beta}$, as in Eq. \[eq:psi\_perm\]. Note that in contrast to the $m=6$ amplitudes that characterized the classical case, here there are $D_{m=6}=21$ amplitudes, that describe all the combinations of two photons in six modes. The values of $\vec{P}_{\beta}=|\vec{\beta}|^2$ are used as input to the GS algorithm, that was run many times with random initial phases. As shown in Fig. \[fig:errors\](a), it always converged to the correct phase vector $\vec{\theta}_{obj}$.
Even when the algorithm using classical light converged to the correct phases (i.e. in about 16% of the runs), it did so less efficiently than the quantum one. Fig. \[fig:errors\](b) shows the progressive reduction of the Fourier-plane error $\delta P_F$ with increasing iterations for both the quantum and classical states of light, averaged over many runs. For the quantum state, this error is given by Eq. \[eq:E\_F\], and similarly, for the classical state it is given by $(\delta P_F^{(cl)})^2 = \sum_{x=1}^m (|\tilde{E}_x^{(i)}|^2 - |\tilde{E}_x|^2)^2 / ( \sum_{x=1}^m |\tilde{E}_x|^2)$, where $\tilde{E}_x^{(i)}$ is the far-field amplitude of the $x$th mode in the $i$th iteration [@Fienup:82].
Sensitivity
===========
In the simulations described above, the far-field amplitudes, either classical or quantum, were calculated from theory. In practice, these values would be measured. Measurements with classical or quantum states of light are often limited in sensitivity due to shot-noise. In fact, metrology with nonclassical states of light is more often than not motivated by its superior sensitivity in phase measurements. While this is not the focus of this work, for completeness, we wish to compare the precision of the algorithms with classical and quantum light. For this purpose, we performed Monte-Carlo simulations with quantum and classical light, for the same example that we discussed in the previous section, given a total number of photons passing through the sample $N_{T}$. Again, in the classical case, we considered only the runs that yielded the correct phases, which were less than $16\%$ of the total runs. In practice, of course, there is no way to identify the correct solution, but here we are interested in checking the ultimate precision of the algorithm.
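One simple way to emulate such shot-noise-limited data (our own illustrative choice, not necessarily the exact procedure used here) is to draw the detected $N$-photon coincidence counts from a multinomial distribution and feed the empirical frequencies to the retrieval loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_probabilities(P_beta, N_T, N=2):
    """Empirical output probabilities for a total of N_T photons (N photons per event)."""
    n_events = N_T // N                                  # each detected event uses N photons
    counts = rng.multinomial(n_events, P_beta / P_beta.sum())
    return counts / n_events
```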
![Phase error for the algorithm using quantum and classical light input $\delta\vec{\theta}$, and the ultimate minimal error limit achievable with classical light $\delta\vec{\theta}^{\texttt{(min)}}$, as a function of the total number of photons probing the system, $N_{T}$. Here, too, the classical line is drawn only for the subset of runs, $\sim 16\% $ of all runs, that converged to the correct solution. \[fig:E\_phi\_N\]](E_theta_noise4.eps){width="\columnwidth"}
The phase error $\delta\vec{\theta}$ for the quantum and classical light are shown as a function of $N_T$ in Fig. \[fig:E\_phi\_N\]. Also shown in the figure is the minimal error that could be achieved with classical light, $\delta\vec{\theta}^{\texttt{(min)}} = (m-1)/\sqrt{N_{T}}$ [@MultiPhaseWalmsleyPRL2013]. The phase error $\delta\vec{\theta}$, achieved with the particular quantum state of Eq. \[eq:psi\_6\] and for the particular phase object $\vec{\theta}_{obj}$, can be fitted by $8.0/\sqrt{N_T}$. From Fig. \[fig:E\_phi\_N\] it is evident that this phase error of the quantum state is better than that achieved with classical light, $11.8/\sqrt{N_T}$, again pointing to the relative advantage of the quantum algorithm.
Our particular state does not show sensitivity below classical limits (the ultimate classical limit in our example is $5/\sqrt{N_T}$). Still, it performs 48$\%$ better than classical light with the same intensity distribution, again, even if compared only with the small fraction of classical solutions that converged to the correct solution. We note that a method for calculating the ultimate limit of sensitivity for any input pure state is given in Ref. [@MultiPhaseWalmsleyPRL2013]. Furthermore, enhancement factors of sensitivity greater than shown here are probably possible, either by considering higher photon numbers $N>2$ than in our example, or by optimizing the input state amplitudes, as well as the transformation $\hat{U}$.
Discussion
==========
The solution found with quantum light is unambiguous, converges faster, and is more precise than the one found using classical light. The reason for this lies in the fact that the quantum states are characterized by $D={{m+N-1} \choose N}$ amplitudes, which is significantly larger than the $m$ amplitudes of the classical case for any $N>1$, 21 vs. 6 in our two-photon in six modes example above. This significant increase in the number of constraints imposed on the Fourier plane quantum state amplitudes for the same number of unknown phases is most probably what leads to the elimination of extra solutions, the faster convergence, and the very small error.
We have considered in the example presented above the phase retrieval of 6 modes; however, we have checked that our approach also performs perfectly for higher numbers of modes. We tested our quantum algorithm with two-photon states ($N=2$) for $m = 10$, $20$, and $30$ unknown phases. For that, we have generalized the quantum state of Eq. \[eq:psi\_6\] by introducing additional terms of the form $|1,0,\dots,0,1,0,\dots,0\rangle$, having in total $m$ terms of equal amplitudes, still meeting the symmetry requirements for the input state. The algorithm performed just as well, retrieving the phases accurately for all values of $m$ tested, while the classical algorithm with coherent light with intensities that matched that of the generalized quantum state had a much reduced rate of successful reconstruction, less than $1\%$ for $m\geq10$. Finally, we discuss the considerations for practical realization of the suggested method. First, arbitrary entangled states of light are generally hard to generate; however, probabilistic generation of multi-mode correlated two-photon states is technically possible, although, to the best of our knowledge, it has never been demonstrated in multi-mode systems, perhaps due to lack of interest. The DFT can be implemented by Fourier multiport devices, which have been experimentally demonstrated for quantum states of light [@spagnolo2012; @Poem_MMW_PRL2012]. Additionally, measurement of $N$-photon correlations over $m$ modes requires a set of $D$ measurements, which for a large number of modes/photons can be challenging. A similar problem has been recently encountered in realizations of the boson-sampling problem [@White_BS13; @Walmsley_BS13; @Walther_BS13; @Sciarrino_BS14]. For a quantum state of two photons, however, the problem involves only $m(m+1)/2$ measurements, which is quite practical, using, for example, large arrays of single photon detectors [@DetectingQLight_Silberhorn2007], or cameras with single-photon sensitivities [@EntanglementwithCamera].
It is important to note here that quantum measurements of correlations on classical coherent input light will not be useful: they will not yield any additional information or any other advantage, as classical states are uncorrelated and separable, unlike quantum states which can exhibit inherent photon correlations between the modes. We also note that these findings raise many interesting theoretical questions, for example on the optimal choice of the input quantum state and on its ultimate sensitivity, and on how the technique will perform with two-dimensional objects, as well as with objects of both phase and absorption.
Summary
=======
We studied the use of quantum states of light for phase retrieval. We showed that quantum states of two photons exhibit a few advantages in retrieving the phase of a one-dimensional object from its far-field diffracted intensity, as compared with classical states of light. The quantum approach achieves a unique reconstruction, which converges faster, and is more robust when subjected to shot-noise, when compared with classical approaches.
Funding Information {#funding-information .unnumbered}
===================
Financial support of this research by the ERC grant QUAMI, the ICore program of the ISF, the Israeli Nanotechnology FTA program, the Minerva foundation and the Crown Photonics Center is gratefully acknowledged. E.P. would like to acknowledge an EU Marie Curie Fellowship, a British-Technion Society Coleman-Cohen Fellowship, and the Oxford Martin School for initial support.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Ben Leshem, Oren Raz and Dan Oron for fruitful discussions.
V. Giovannetti, S. Lloyd, and L. Maccone, “Advances in quantum metrology,” Nature Photonics **5**, 222–229 (2011).
N. Spagnolo, L. Aparo, C. Vitelli, A. Crespi, R. Ramponi, R. Osellame, P. Mataloni, and F. Sciarrino, “Quantum interferometry with three-dimensional geometry,” Scientific reports **2** (2012).
P. C. Humphreys, M. Barbieri, A. Datta, and I. A. Walmsley, “Quantum enhanced multiple phase estimation,” Phys. Rev. Lett. **111**, 070403 (2013).
Y. Shechtman, Y. Eldar, O. Cohen, H. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: A contemporary overview,” Signal Processing Magazine, IEEE **32**, 87–109 (2015).
J. Dainty and J. Fienup, “Phase retrieval and image reconstruction for astronomy,” in “Image Recovery: Theory and Application,” , H. Stark, ed. (Elsevier Science, 1987), chap. 7, pp. 231–275.
J. Miao, T. Ishikawa, Q. Shen, and T. Earnest, “Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes,” Annu. Rev. Phys. Chem. **59**, 387–410 (2008).
S. Amelinckx, D. van Dyck, J. van Landuyt, and G. van Tendeloo, *Electron microscopy: principles and fundamentals* (John Wiley & Sons, 2008).
R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik **35**, 237–246 (1972).
D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cell imaging,” Science Signaling **300**, 82 (2003).
K. Eckert, O. Romero-Isart, M. Rodriguez, M. Lewenstein, E. S. Polzik, and A. Sanpera, “Quantum non-demolition detection of strongly correlated systems,” Nature Physics **4**, 50–54 (2007).
F. Wolfgramm, C. Vitelli, F. A. Beduini, N. Godbout, and M. W. Mitchell, “Entanglement-enhanced probing of a delicate material system,” Nature Photonics (2012).
S. Shwartz, R. N. Coffee, J. M. Feldkamp, Y. Feng, J. B. Hastings, G. Y. Yin, and S. E. Harris, “X-ray parametric down-conversion in the langevin regime,” Phys. Rev. Lett. **109**, 013602 (2012).
R. Trebino and D. J. Kane, “Using phase retrieval to measure the intensity and phase of ultrashort pulses: frequency-resolved optical gating,” JOSA A **10**, 1101–1111 (1993).
A. [Walther]{}, “[The Question of Phase Retrieval in Optics]{},” Optica Acta **10**, 41–49 (1963).
J. L. Sanz, “Mathematical considerations for the problem of fourier transform phase retrieval from magnitude,” SIAM Journal on Applied Mathematics **45**, 651–664 (1985).
Z. Marek, A. Zeilinger, and M. Horne, “Realizable higher-dimensional two-particle entanglements via multiport beam splitters,” Physical Review A **55**, 2564 (1997).
J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21**, 2758–2769 (1982).
E. Poem, Y. Gilead, and Y. Silberberg, “Two-photon path-entangled states in multimode waveguides,” Physical review letters **108**, 153602 (2012).
M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. C. Ralph, and A. G. White, “Photonic boson sampling in a tunable circuit,” Science **339**, 794–798 (2013).
J. B. Spring, B. J. Metcalf, P. C. Humphreys, W. S. Kolthammer, X.-M. Jin, M. Barbieri, A. Datta, N. Thomas-Peter, N. K. Langford, D. Kundys *et al.*, “Boson sampling on a photonic chip,” Science **339**, 798–801 (2013).
M. Tillmann, B. Daki[ć]{}, R. Heilmann, S. Nolte, A. Szameit, and P. Walther, “Experimental boson sampling,” Nature Photonics **7**, 540–544 (2013).
N. Spagnolo, C. Vitelli, M. Bentivegna, D. J. Brod, A. Crespi, F. Flamini, S. Giacomini, G. Milani, R. Ramponi, P. Mataloni *et al.*, “Experimental validation of photonic boson sampling,” Nature Photonics **8**, 615–620 (2014).
C. Silberhorn, “Detecting quantum light,” Contemporary Physics **48**, 143–156 (2007).
M. P. Edgar, D. S. Tasca, F. Izdebski, R. E. Warburton, J. Leach, M. Agnew, G. S. Buller, R. W. Boyd, and M. J. Padgett, “Imaging high-dimensional spatial entanglement with a camera,” Nature communications **3**, 984 (2012).
[^1]: These authors contributed equally to this work
---
abstract: 'The augmented Lagrangian method (also called the method of multipliers) is an important and powerful optimization method for many smooth and nonsmooth variational problems in modern signal processing, imaging, optimal control and so on. However, one usually needs to solve the coupled and nonlinear system simultaneously, which is very challenging. In this paper, we propose several semismooth Newton methods to solve the nonlinear subproblems arising in image restoration, which leads to several highly efficient and competitive algorithms for image processing. With the analysis of the metric subregularities of the corresponding functions, we give both the global convergence and the local linear convergence rate for the proposed augmented Lagrangian methods with semismooth Newton solvers.'
author:
- Hongpeng Sun
title: An Investigation on Semismooth Newton based Augmented Lagrangian Method for Image Restoration
---
Introduction
============
The augmented Lagrangian method (abbreviated as ALM throughout this paper) originated in [@HE; @POW]. The early developments can be found in [@BE; @FG; @Roc2] and the extensive studies in infinite dimensional spaces with various applications can be found in [@FG; @KK] and so on. Comprehensive studies of ALM for convex, nonsmooth optimization and variational problems can be found in [@BE; @KK]. In [@Roc2], the celebrated connections between ALM and proximal point algorithms are established, where ALM is found to be equivalent to the proximal point algorithm applied to the essential dual problem. The convergence can thus be concluded in the general and powerful proximal point algorithm framework for convex optimization [@Roc1; @Roc2].
ALM is very flexible for constrained optimization problems including both equality and inequality constraints [@BE; @KK]. However, the challenging part is solving, simultaneously, the nonlinear and coupled systems that usually appear when applying ALM. This is different from alternating direction method of multipliers (ADMM) type methods [@GL1; @FG], which decouple the unknown variables, deal with each subproblem separately and update them consecutively. For ALM, however, the extra effort is worthwhile if the nonlinear system can be solved efficiently, due to the appealing asymptotic linear or superlinear convergence of ALM with increasing step sizes [@Roc1; @Roc2; @LU]. It is well known that semismooth Newton methods are efficient solvers for nonsmooth and nonlinear systems. Semismooth Newton based augmented Lagrangian methods already have many successful applications in semidefinite programming [@ZST], compressed sensing [@LST; @ZZST], friction and contact problems [@Gor; @Gor1] and imaging problems [@HK1; @KK1]. In this paper, we propose several novel semismooth Newton based augmented Lagrangian methods for the ROF image denoising model [@ROF]. The ROF model is fundamental for imaging problems and is claimed to be a pivotal component in producing the first image of a black hole [@ROFwiki]. Besides, it is a typical nonsmooth and convex optimization problem which is very challenging to solve and is standard for testing various algorithms [@CP]. Currently, there are no systematic studies of semismooth Newton based ALM for the ROF model. To the best of our knowledge, the most related work is [@HK1; @KK1]. In [@HK1], the Moreau-Yosida regularization of the dual problem of the anisotropic ROF model in Banach space is considered and solved with a semismooth Newton based ALM. In [@KK1], the primal-dual active set method is employed for the nonlinear systems of the ALM applied to the regularized primal problems, with an additional regularization strategy for solving the nonlinear system.
All our algorithms are based on applying ALM to the primal ROF model, whose strong convexity can be exploited. Our contributions are as follows. First, by introducing auxiliary variables, we use a primal-dual semismooth Newton method [@HS] for the nonlinear system of ALM, which is very efficient and does not need any globalization strategy such as Armijo line search numerically. The proposed ALM is very efficient compared to popular existing algorithms including the primal-dual first-order method [@CP], and is especially fast for the anisotropic ROF model. Second, we prove that the maximal monotone KKT (Kuhn-Tucker) mapping is metrically subregular for the anisotropic ROF model. We thus get the asymptotic linear or superlinear convergence rate of both the primal and dual sequences for the anisotropic case by the framework in [@Roc1; @Roc2; @LU]. With the help of the calm intersection theorem [@KKU], we also prove the metric subregularity of the maximal monotone operator associated with the dual problem under a mild condition. This leads to the asymptotic linear or superlinear convergence rate of the dual sequence for the isotropic case [@Roc1; @Roc2; @LU]. To the best of our knowledge, these subregularity results are novel for the ROF model. Third, we also give a systematic investigation of another two kinds of semismooth Newton methods for solving the nonlinear system that appears when ALM is applied to ROF, which are also very efficient in some cases. One involves the soft thresholding operator and the other involves the projection operator. We found that both semismooth Newton methods need a globalization strategy such as line search, and the corresponding numerical results are also presented.
The rest of this paper is organized as follows. In section \[sec:rofALM\], we give an introduction to the ROF model and the ALM including the isotropic and anisotropic cases. In section \[sec:alm:pdssn\], we present discussions on the primal-dual semismooth Newton method for ALM obtained by introducing auxiliary variables, which turns out to be very efficient. In sections \[sec:alm:ssnp:thres\] and \[sec:alm:ssnp:proj\], we give detailed discussions on the semismooth Newton methods involving the soft thresholding operators and the projection operators correspondingly. Although all the semismooth Newton algorithms solve the same nonlinear system for the isotropic or the anisotropic ROF model, different formulations lead to different algorithms with different efficiency. In section \[sec:convergece:ssn:alm\], we give the metric subregularity of the maximal monotone KKT mapping for the anisotropic ROF model and of the maximal monotone operator associated with the dual problem of the isotropic ROF model. Together with the convergence of the semismooth Newton method, we get the corresponding asymptotic linear convergence. In section \[sec:numer\], we present detailed numerical tests for all the algorithms and a comparison with typical efficient first-order algorithms. In the last section \[sec:conclude\], we give some conclusions.
ROF model and Augmented Lagrangian Methods {#sec:rofALM}
==========================================
The total variation regularized model is as follows. $$\label{eq:ROF}
\min_{u\in X} D(u) + \alpha \| \nabla u\|_{1}, \tag{P}$$ where $D(u) = {\|{u - f}\|_{2}}^2/{2}$ being the ROF model. We define the finite dimensional discrete image space $X$ and the auxiliary space $Y$ as $$X = \{u: \Omega \rightarrow \mathbb{R} \}, \quad Y = \{p: \Omega \rightarrow \mathbb{R}^2\}, \quad \Omega \subset \mathbb{R}^2,$$ with the standard $L^2$ scalar product. The functional spaces $X$ and $Y$ and all other functional spaces setting are finite dimensional throughout this paper. We denote $\nabla \in \mathcal{L}(X,Y)$ as the discretized gradient operator where $ \mathcal{L}(X,Y)$ denotes the linear and bounded operator mapping $X$ to $Y$. Finite differences are used to discretize the operator $\nabla$ and its adjoint operator $\nabla^{*} = -\operatorname{div}$ with homogeneous Neumann and Dirichlet boundary conditions respectively, $$\label{eq:gradient:div:adjoint}
\langle \nabla u, p\rangle_{Y} = \langle u, \nabla^{*} p\rangle_{X} = -\langle u, \operatorname{div}p \rangle_{X}, \quad \forall u \in X, \ p \in Y.$$ We denote $|\cdot|$ as the Euclidean norm including the absolute value for real valued scalar. With $\nabla u = (\nabla_1 u, \nabla_2 u)^{T} \in \mathbb{R}^2$, the isotropic or anisotropic TV is as follows, $$\label{eq:alm:up:ani:pd1}
\|\nabla u\|_{1}: = \int_{\Omega}|\nabla u|dx, \quad \text{or} \quad \|\nabla u\|_{1}: = \int_{\Omega}|\nabla_1 u|dx + \int_{\Omega}|\nabla_2 u|dx,$$ where $dx$ is the volume element of $\mathbb{R}^2$. By the Fenchel-Rockafellar duality theory [@HBPL; @KK], the primal-dual form of can be written as $$\label{eq:tv-denoising-saddle}
\min_{\substack{u \in X}} \max_{\substack{\lambda \in Y}} \
D(u) + {\langle{{\nabla}u},{\lambda}\rangle_{L^2}} -
{\mathcal{I}}_{\{{\|{\lambda}\|_{\infty}} \leq \alpha\}}(\lambda),$$ and the dual form of the ROF model can be written $$\label{eq:dual:rof}
\max_{\lambda \in Y}\left\{-\left(d(\lambda) := \frac{1}{2} \| \operatorname{div}\lambda +f\|_{2}^2 -\frac{1}{2}\|f\|_{2}^2+ {\mathcal{I}}_{\{{\|{\lambda}\|_{\infty}} \leq \alpha\}}(\lambda)\right)\right\}. \tag{D}$$ The optimality conditions on the saddle points $(\bar u, \bar \lambda)$ are as follows $$\label{eq:opti:primaldual}
\bar u- f + \nabla^*\bar \lambda = 0, \quad
\nabla \bar u \in \partial {\mathcal{I}}_{\{{\|{\bar \lambda}\|_{\infty}} \leq \alpha\}}(\bar \lambda).$$ By the Fenchel-Rockafellar duality theory, we have $\bar \lambda \in \partial \alpha \| \nabla \bar u\|_{1}$ and $$\langle \bar \lambda, \nabla \bar u\rangle = {\mathcal{I}}_{\{{\|{\bar \lambda}\|_{\infty}} \leq \alpha\}}(\bar \lambda) + \alpha \| \nabla \bar u\|_{1}.$$ The optimality condition for $\bar \lambda$ in is also equivalent to $${\mathcal{P}}_{\alpha}(\bar \lambda + \sigma \nabla \bar u) = \bar \lambda, \quad \forall \sigma >0,$$ where $\mathcal{P}_{\alpha}$ is the projection onto the set $\{p:{\|{p}\|_{\infty}} \leq \alpha\}$ with $p=(p_1,p_2)^T$, i.e., $$\label{eq:projection:ani:iso}
\mathcal{P}_{\alpha}(p) =\frac{p}{\max(1.0, {|p|}/{\alpha})} \ \ \text{or} \ \ \mathcal{P}_{\alpha}(p) =\left(\frac{p_1}{\max(1, {|p_1|}/{\alpha})},\frac{p_2}{\max(1, {|p_2|}/{\alpha})}\right)^{T}.$$ Introducing $p=\nabla u$, the ROF model becomes the following constrained optimization problem $$\min_{u\in X} D(u) + \alpha\|p\|_{1}, \quad \text{such that} \ \ \nabla u=p.$$ The augmented Lagrangian method thus follows, with nondecreasing update of $\sigma_k \rightarrow c_{\infty}$, $$\begin{aligned}
(u^{k+1}, p^{k+1}) &= \operatorname*{arg\,min}_{u,p} L(u,p;\lambda^k) := \frac{1}{2}\|u-f\|_{2}^2 + \alpha \|p\|_{1} + \langle \lambda^k, \nabla u -p \rangle + \frac{\sigma_k}{2}\|\nabla u -p\|_{2}^2, \label{eq:alm:up}\\
\lambda^{k+1} &= \lambda^k + \sigma_k(\nabla u^{k+1} - p^{k+1}),\label{eq:update:lambda}\end{aligned}$$ where $p=(p_1, p_2)^{T} \in \mathbb{R}^2$, the norm $\|\cdot\|_{1}$ is based on the following isotropic or anisotropic norm $$\label{eq:l1:iso:pd}
|p| := \sqrt{p_1^2 + p_2^2} \ \ (\text{isotropic})\quad \text{or} \quad |p|_1 = |p_1|+|p_2| \ \ (\text{anisotropic}).$$ Similarly, the constraint ${\mathcal{I}}_{\{{\|{\lambda}\|_{\infty}} \leq \alpha\}}(\lambda)$ in and for the isotropic case means $$|\lambda| = \sqrt{\lambda_1^2 + \lambda_2^2} \leq \alpha,$$ while for the anisotropic case, the constraint means $$|\lambda_i| \leq \alpha, \quad i=1,2.$$ We will use the semismooth Newton method to solve . The optimality conditions with fixed $\sigma_k$ and $\lambda^k$ for are $$\begin{aligned}
&&u - f + \nabla^* \lambda^k + \sigma_k \nabla^*(\nabla u - p) = 0, \label{eq:opti:u} \\
&& \partial \alpha \| p\|_{1} - \lambda^k + \sigma_k(p-\nabla u) \ni 0,\label{eq:opti:p}\end{aligned}$$ where $(u,p)=(u^{k+1},p^{k+1})$ are the optimal solutions of . The equation leads to $$\label{eq:moreau:p:update}
\lambda^k + \sigma_k \nabla u \in (\sigma_k I + \partial \alpha \|\cdot\|_{1})p \Rightarrow p = (I + \frac{1}{\sigma_k} \partial \alpha \|\cdot\|_{1})^{-1}(\frac{\lambda^k + \sigma_k \nabla u}{\sigma_k}): = S_{\frac{\alpha}{\sigma_k}}(\frac{\lambda^k}{\sigma_k} + \nabla u),$$ where $S_{\frac{\alpha}{\sigma_k}}(\cdot)$ is the soft thresholding operator for the isotropic or anisotropic $\|\cdot\|_{1}$ norm. With relation , the augmented Lagrangian $L(u,p;\lambda^k)$ can be reformulated as $$\begin{aligned}
\Phi_k(u; \lambda^k, \sigma_k)&: = L(u, S_{\frac{\alpha}{\sigma_k}}(\frac{\lambda^k}{\sigma_k} + \nabla u); \lambda^k) \notag \\
&= \frac{1}{2}\|u-f\|_{2}^2 + \alpha \|p\|_{1} + \frac{\sigma_k}{2}\|\frac{\lambda^k}{\sigma_k}+ \nabla u - S_{\frac{\alpha}{\sigma_k}}(\frac{\lambda^k}{\sigma_k} + \nabla u)\|_2^2 -\frac{1}{2\sigma_k}\|\lambda^k\|_2^2, \label{eq:augmented:lagrangian:only:u}\end{aligned}$$ which will be more convenient once the globalization strategy including the line search is employed. Substituting $p$ of into , we get $$\label{eq:u:alm:suntoh1}
u - f + \nabla^* \lambda^k + \sigma_k \nabla^*\nabla u -\sigma_k \nabla^*(I + \frac{1}{\sigma_k} \partial \alpha \|\cdot\|_{1})^{-1}(\frac{\lambda^k + \sigma_k \nabla u}{\sigma_k})=0.$$ Denoting $G^*(p) = \alpha \|p\|_{1}$, we see the Fenchel dual function of $G^*$ is $G(h) = I_{\{ \|\cdot\|_{\infty} \leq \alpha\}}(h)$. With the Moreau’s identity, $$\label{eq:moreau:indentity}
x = (I + \tau \partial G)^{-1}(x) + \tau (I +\frac{1}{\tau} \partial G^*)^{-1}(\frac{x}{\tau}),$$ we arrive at $$\label{eq:moreau:sub}
\sigma_k p = \sigma_k(I + \frac{1}{\sigma_k} \partial \alpha \|\cdot\|_{1})^{-1}(\frac{\lambda^k + \sigma_k \nabla u}{\sigma_k})=\lambda^k + \sigma_k \nabla u - (I + \sigma_k \partial G)^{-1}(\lambda^k + \sigma_k \nabla u ).$$ Substituting $p$ of into leads to the equation of $u$ $$\label{eq:u:proj:solve}
u - f + \nabla ^* (I + \sigma_k \partial G)^{-1}(\lambda^k + \sigma_k \nabla u ) = 0.$$ Indeed, we can solve or directly with semismooth Newton method, which will be discussed in the subsequent sections. Now let’s turn to another formulation by introducing an auxiliary variable $$\label{eq:opti:modi:pro}
h: = (I + \sigma_k \partial G)^{-1}(\lambda^k + \sigma_k \nabla u ) = \mathcal{P}_{\alpha}(\lambda^k + \sigma_k \nabla u).$$ By the definition of the projection and taking the isotropic case for example, becomes $$\label{eq:proj:cons}
h = \mathcal{P}_{\alpha}(\lambda^k + \sigma_k \nabla u) = \dfrac{\lambda^k + \sigma_k \nabla u}{\max(1.0, {|\lambda^k + \sigma_k \nabla u|}/{\alpha})}.$$ The equation thus becomes $$\label{eq:opti:modi:h}
u - f + \nabla^*h = 0.$$ Combining , and , we get the following equations of $(u, \lambda)$ instead of $(u,p)$, $$\label{eq:opti:u:lambda}
\mathcal{F}(u,h) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \mathcal{F}(u,h):=
\begin{bmatrix}
u - f + \nabla^*h \\
- \sigma_k \nabla u - \lambda^k + \max \big(1.0, \dfrac{|\lambda^k + \sigma_k \nabla u|}{\alpha} \big) h
\end{bmatrix}
.$$ It can also be reformulated as $$\label{eq:opti:u:lambda:similar}
\mathcal{F}(u,h) =
\begin{bmatrix}
u - f + \nabla^*h \\
- \alpha \nabla u - \dfrac{\alpha}{\sigma_k} \lambda^k + \max \big(\dfrac{\alpha}{\sigma_k}, |\dfrac{\lambda^k}{\sigma_k} + \nabla u| \big) h
\end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$ Once any of the equations , and is solved and the solution $u$ is obtained, we can update $p$ by or accordingly. Then the Lagrangian multiplier $\lambda^{k+1}$ can be updated by . Actually, according to , compared to the update of $\lambda^{k+1}$ in , the update of the multiplier can also be $$\label{eq:update:multiplier:projection}
\lambda^{k+1} = (I + \sigma_k \partial G)^{-1}(\lambda^k + \sigma_k \nabla u )=\mathcal{P}_{\alpha}(\lambda^k + \sigma_k \nabla u),$$ which is a nonlinear update compared to the linear update . We refer to [@KK] (chapter 4) for general nonlinear updates of Lagrangian multipliers with different derivations and frameworks of ALM. Throughout this paper, we will apply the semismooth Newton method to any of the subproblems , and . The different formulations , and lead to different algorithms with different efficiency. ALM with the update of $u^{k+1}$ in and $\lambda^{k+1}$ in was pointed out in [@KK] (chapter 4.7.2), without solvers such as the semismooth Newton method and without numerics. We begin with the semismoothness, where the Newton derivative can be chosen as the Clarke generalized derivative [@KK; @KKU1].
$F: D \subset X \rightarrow Z$ is called Newton differentiable at $x$ if there exist an open neighborhood $N(x) \subset D$ and mapping $G: N(x)\rightarrow \mathcal{L}(X,Z)$ such that (Here the spaces $X$ and $Z$ are Banach spaces.) $$\lim_{|h|\rightarrow 0} \frac{|F(x+h)-F(x)-G(x+h)h|_{Z}}{|h|_X}=0.$$ The family $\{G(s): s \in N(x)\}$ is called an Newton derivative of $F$ at $x$.
If $F:\mathbb{R}^n \rightarrow \mathbb{R}^m$ and the set of mappings $G$ is the Clarke generalized derivative $\partial F$, we call $F$ semismooth [@KKU].
Let $F: O \subseteq X \rightarrow Y$ be a locally Lipschitz continuous function on the open set $O$. $F$ is said to be semismooth at $x \in O$ if $F$ is directionally differentiable at $x$ and for any $V\in \partial F(x+ \Delta x)$ with $\Delta x \rightarrow 0$, $$F(x+\Delta x) -F(x) - V\Delta x = {o}(\|\Delta x\|).$$
The Newton derivatives of vector-valued functions can be computed componentwise [@Cla] (Theorem 9.4). Together with the definition of semismoothness, we have the following lemma.
\[lem:vector:semismooth:newton\] Suppose $F : \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $F = (F_1(x), F_2(x), \cdots, F_l(x))^{T}$ with $F_i : \mathbb{R}^n \rightarrow \mathbb{R}^{l_i} $ being semismooth. Here $l_i \in \mathbb{Z}^{+}$ and $\sum_{i=1}^l l_i=m$. Assuming the Newton derivative $ D_N F_i(x) \in \partial F_i(x)$, $i = 1,2,\cdots, l$, then the Newton derivative of $F$ can be chosen as $$D_N F(x) = \begin{bmatrix}
D_N F_1(x), D_N F_2(x)
, \cdots, D_N F_l(x) \end{bmatrix}^T.$$
Now we turn to the semismooth Newton method for solving , and . The semismooth Newton method for the general nonlinear equation $F(x)=0$ can be written as $$\label{semi:smoothnewton:cal:newton:direc}
{\mathcal{V}}(x^l)\delta x^{l+1} = - F (x^l),$$ where ${\mathcal{V}}(x^l)$ is a semismooth Newton derivative of $F$ at $x^l$, for example ${\mathcal{V}}(x^l) \in \partial F(x^l)$. Additionally, ${\mathcal{V}}(x)$ satisfies the *regularity condition* [@Cla; @MU]. Henceforth, we say ${\mathcal{V}}(x)$ satisfies the *regularity condition* if ${\mathcal{V}}(x)^{-1}$ exists and is uniformly bounded in a small neighborhood of the solution $x^*$ of $F(x^*)=0$. When the globalization strategy including line search is necessary, one can obtain the Newton update $x^{l+1}$ from the Newton direction $\delta x^{l+1}$ in . When the globalization strategy is not needed, the semismooth Newton iteration can also be written as follows, with $x^{l+1}$ updated directly $$\label{semi:smoothnewton:sys}
{\mathcal{V}}(x^l)x^{l+1} = {\mathcal{V}}(x^l)x^l - F(x^l).$$
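In pseudocode, the plain (un-globalized) semismooth Newton iteration reads as follows; this is a generic illustrative sketch with a user-supplied residual `F` and a map `newton_derivative` returning an element ${\mathcal{V}}(x)$, here represented as a dense matrix for simplicity.

```python
import numpy as np

def semismooth_newton(F, newton_derivative, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with V(x^l) * dx = -F(x^l), then x^{l+1} = x^l + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        V = newton_derivative(x)          # an element of the Clarke generalized Jacobian
        x = x + np.linalg.solve(V, -r)
    return x

# Example: F(x) = max(x, 0) + x - 1 (solution x = 0.5), with slope 2 on {x > 0}, 1 otherwise.
F = lambda x: np.maximum(x, 0.0) + x - 1.0
dF = lambda x: np.diag(np.where(x > 0, 2.0, 1.0))
print(semismooth_newton(F, dF, np.array([-3.0])))   # -> [0.5]
```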
Augmented Lagrangian Method with Primal-Dual Semismooth Newton Method {#sec:alm:pdssn}
=====================================================================
The Isotropic Total Variation
-----------------------------
Now we turn to the semismoothness of nonlinear system . The only nonlinear or nonsmooth part comes from the function $\max (1.0, |\lambda^k + \sigma_k \nabla u|/\alpha)$.
\[lem:semismooth:max\] The function ${\mathcal{G}}(u):=\max (1.0, \dfrac{|\lambda^k + \sigma_k \nabla u|}{\alpha})$ is semismooth on $u$ and its Clarke’s generalized gradient for $u$ is as follows,$$\label{eq:newton:deri:max:iso}
\left\{\chi^s_{\lambda^k, u}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u, \nabla \cdot \ \rangle }{ |\lambda^k + \sigma_k \nabla u |} \ | \ s\in[0,1]\right\} = \partial_{u}(\max (1.0, \dfrac{|\lambda^k + \sigma_k \nabla u|}{\alpha})),$$ where $\chi^s_{\lambda^k, u}$ is the generalized Clarke derivatives of $\max(\cdot, 1.0)$, $$\label{eq:newton:deri:max}
\chi^s_{\lambda^k, u} = \begin{cases}
1, \quad & |\lambda^k +\sigma_k \nabla u | /\alpha >1.0, \\
s, \quad & |\lambda^k +\sigma_k \nabla u | /\alpha=1.0, \ s\in [0,1], \\
0, \quad & |\lambda^k + \sigma_k \nabla u | / \alpha <1.0.
\end{cases}$$ Furthermore, ${\mathcal{F}}(u,h)$ in is semismooth on $(u,h)$. Henceforth, we choose $s=1$ for the Newton derivative of ${\mathcal{G}}(u)$ in and denote $\chi_{\lambda^k, u} := \chi^1_{\lambda^k, u}$.
We claim that ${\mathcal{G}}(u)$ is actually a $PC^{\infty}$ function of $u$. Introduce ${\mathcal{G}}_1(u) = 1.0$ and ${\mathcal{G}}_2(u) = {|\lambda^k + \sigma_k \nabla u|}/{\alpha}$, which are *selection functions* of ${\mathcal{G}}(u)$, and ${\mathcal{G}}(u)$ is a *continuous selection* of the functions ${\mathcal{G}}_1(u)$ and ${\mathcal{G}}_2(u)$ [@Sch] (Chapter 4) (or Definition 4.5.1 of [@FP]). We see that ${\mathcal{G}}_1(u)$ is a smooth function and ${\mathcal{G}}_2(u)$ is smooth in any open set outside the closed set $D_0:=\{u \ | \ |\lambda^k + \sigma_k \nabla u|=0\}$. Thus for any $u\in D_{\alpha}:=\{u \ | \ |\lambda^k + \sigma_k \nabla u|=\alpha\}$, there exists a small open neighborhood of $u$ such that ${\mathcal{G}}_1(u)$ and ${\mathcal{G}}_2(u)$ are smooth functions. We thus conclude that ${\mathcal{G}}(u)$ is a $PC^{\infty}$ function of $u$. ${\mathcal{G}}(u)$ is also $PC^1$ and hence semismooth on $u$ [@MU] (Proposition 2.26). Moreover, $\nabla_u {\mathcal{G}}_1(u) = 0$ and $\nabla_u {\mathcal{G}}_2(u) = {\sigma_k\langle \lambda^k + \sigma_k \nabla u, \nabla \cdot \ \rangle }/{ (\alpha|\lambda^k + \sigma_k \nabla u |)}$ outside $D_0$. For any $u \in D_{\alpha}$, by [@Sch] (Proposition 4.3.1), we see $$\partial_u {\mathcal{G}}(u) = \text{co}\{\nabla_u {\mathcal{G}}_1(u), \nabla_u {\mathcal{G}}_2(u)\},$$ where “$\text{co}$” denotes the convex hull of the corresponding set [@CL]. We thus obtain the equation . Since each component of $\mathcal{F}(u,h)$ is an affine function of $h$ and is semismooth in $u$, the semismoothness of $\mathcal{F}(u,h)$ on $(u,h)$ then follows [@MU] (Proposition 2.10).
By Lemma \[lem:vector:semismooth:newton\] and \[lem:semismooth:max\], denoting $x^l = (u^l, h^l)$, $x = (u, h)$ and $\mathcal{F}(u,h) = ({\mathcal{F}}_1(u,h), {\mathcal{F}}_2(u,h))^{T}$, the Newton derivative of $F(u,h)$ can be chosen as $$\label{eq:newton:vector:deri}
D_{N}{\mathcal{F}}= (D_N {\mathcal{F}}_1(u,h), D_N {\mathcal{F}}_2(u,h))^{T}.$$ Thus the Newton derivative of the nonlinear equation can be chosen as $${\mathcal{V}}^{I}(x^l) = \begin{bmatrix}
I & \nabla^* \\
-\sigma_k \nabla + \chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla \cdot \ \rangle }{ |\lambda^k + \sigma_k \nabla u^l |}h^l & \max \big(1.0, \dfrac{|\lambda^k + \sigma_k \nabla u^l|}{\alpha}\big)
\end{bmatrix}.$$ Let’s introduce $$\begin{aligned}
&D_l =
U_{\sigma_k}(\lambda^k,u^l): = \max \big(1.0, \dfrac{|\lambda^k + \sigma_k \nabla u^l|}{\alpha}\big),
\quad
B_l = \begin{bmatrix}
\chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla \cdot \rangle }{ |\lambda^k + \sigma_k \nabla u^l |}h^l
\end{bmatrix},\\
&C_l = -\sigma_k\nabla +B_l.\end{aligned}$$ It can be readily verified that $${\mathcal{V}}^{I}(x^l)x^l - \mathcal{F}(x^l) =\begin{bmatrix}
f \\ \lambda^k + \chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla u^l \rangle }{ |\lambda^k + \sigma_k \nabla u^l |}h^l
\end{bmatrix}=\begin{bmatrix} f \\ \lambda^k + B_lu^l \end{bmatrix}:= \begin{bmatrix} f \\ b_2^l \end{bmatrix}.$$ Next, we turn to solve the Newton update $$\label{eq:system:u:h}
\begin{bmatrix}
I & \nabla^* \\
C_l& D_l
\end{bmatrix}
\begin{bmatrix}
u^{l+1} \\ h^{l+1}
\end{bmatrix}
=\begin{bmatrix} f \\ b_2^l \end{bmatrix}.$$ For solving the linear system , it is convenient either to solve $u^{l+1}$ first, i.e., solving the equation of the Schur complement ${\mathcal{V}}^I(x^l)/D_l$. Substituting $$\label{eq:ufirst:h}
h^{l+1} = \bigg( b_2^l + \sigma_k \nabla u^{l+1} - \chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla u^{l+1} \rangle } { |\lambda^k + \sigma_k \nabla u^l |}h^l \bigg) \bigg/ \bigg(\max \big(1.0, \dfrac{|\lambda^k + \sigma_k \nabla u^l|}{\alpha}\big) \bigg)$$ into the first equation on $u^{k+1}$, we have $$\label{eq:newton:equation:iso}
(I-\nabla^*D_l^{-1}C_l)u^{l+1} = f + \operatorname{div}\dfrac{b_2^l}{U_{\sigma_k}( \lambda^k, u^l)},$$ which is also the following equation in detail $$\begin{aligned}
\label{eq:calculate:u:first}
u^{l+1} - \operatorname{div}\dfrac{\sigma_k\nabla u^{l+1} - \chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla u^{l+1} \rangle } { |\lambda^k + \sigma_k \nabla u^l |}h^l}{U_{\sigma_k}( \lambda^k, u^l)} = f + \operatorname{div}\dfrac{b_2^l}{U_{\sigma_k}( \lambda^k, u^l)}.\end{aligned}$$ After calculating $u^{l+1}$ in , we get $h^{l+1}$ by .
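To make this step concrete, a matrix-free realization of the above $u$-update might look as follows. This is our own sketch under standard assumptions: forward differences with homogeneous Neumann boundary conditions for $\nabla$, the corresponding adjoint $-\operatorname{div}$, GMRES for the (in general nonsymmetric) Schur-complement system, and illustrative variable names (`lam1`, `h1`, etc.); it is not the implementation used for the numerical tests.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]             # forward differences, Neumann BC
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(p1, p2):                                   # adjoint: <grad u, p> = -<u, div p>
    d = np.zeros_like(p1)
    d[0, :] += p1[0, :]; d[1:-1, :] += p1[1:-1, :] - p1[:-2, :]; d[-1, :] -= p1[-2, :]
    d[:, 0] += p2[:, 0]; d[:, 1:-1] += p2[:, 1:-1] - p2[:, :-2]; d[:, -1] -= p2[:, -2]
    return d

def newton_step_u(f, u_l, h1, h2, lam1, lam2, sigma, alpha, eps=1e-12):
    """One u-update of the primal-first semismooth Newton step (isotropic case)."""
    gx_l, gy_l = grad(u_l)
    w1, w2 = lam1 + sigma * gx_l, lam2 + sigma * gy_l
    absw = np.sqrt(w1 ** 2 + w2 ** 2)
    U = np.maximum(1.0, absw / alpha)              # U_sigma(lambda^k, u^l)
    chi = (absw >= alpha).astype(float)            # active set, with s = 1
    coeff = chi * sigma / (alpha * np.maximum(absw, eps))

    def B(vx, vy):                                 # B_l applied to a gradient field
        dot = w1 * vx + w2 * vy
        return coeff * dot * h1, coeff * dot * h2

    def apply_schur(u_flat):                       # (I - grad^* D_l^{-1} C_l) u
        u = u_flat.reshape(f.shape)
        gx, gy = grad(u)
        Bg1, Bg2 = B(gx, gy)
        return (u - div((sigma * gx - Bg1) / U, (sigma * gy - Bg2) / U)).ravel()

    Bu1, Bu2 = B(gx_l, gy_l)                       # so that b_2^l = lambda^k + B_l u^l
    rhs = f + div((lam1 + Bu1) / U, (lam2 + Bu2) / U)
    A = LinearOperator((f.size, f.size), matvec=apply_schur)
    u_new, _ = gmres(A, rhs.ravel(), x0=u_l.ravel(), atol=1e-10)
    return u_new.reshape(f.shape)
```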
We can also calculate $h^{l+1}$ first, followed by the calculation of $u^{l+1}$, i.e., solve the equation of the Schur complement ${\mathcal{V}}^I(x^l)/I$. Solving the dual variables first can also be found in [@HPRS], where a primal-dual semismooth Newton method is employed for total generalized variation. Substituting $$\label{eq:u:h:recover}
u^{l+1} = \operatorname{div}h^{l+1} + b_1^l = \operatorname{div}h^{l+1} + f,$$ into , we obtain the linear equation of $h^{l+1}$ $$(D_l-C_l\nabla^*)h^{l+1} = b_2^k - C_l f,$$ which is the folloing equation in detail,
\[eq:ssn:pdd:h\] $$\begin{aligned}
&\max \big(1.0, \dfrac{|\lambda^k + \sigma_k \nabla u^l|}{\alpha}\big)h^{l+1} -\sigma_k \nabla \operatorname{div}h^{l+1}+\chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla \operatorname{div}h^{l+1} \rangle } { |\lambda^k + \sigma_k \nabla u^l |}h^l \\
&= b_2^l +\sigma_k \nabla f -\chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla f\rangle } { |\lambda^k + \sigma_k \nabla u^l |}h^l \\
& = \lambda^k+ \sigma_k \nabla f +\chi_{\lambda^k, u^l}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla (u^l-f) \rangle } { |\lambda^k + \sigma_k \nabla u^l |}h^l.\end{aligned}$$
We then recover $u^{l+1}$ by after calculating $h^{l+1}$.
We have the following lemma for the regularity conditions of the Newton derivative.
\[lem:positive:iso:pd\] If the feasibility of $h^l$ is satisfied, i.e., $|h^l|\leq \alpha$ by , we have the positive definiteness of the Schur complement ${\mathcal{V}}^I(x^l)/D_l=(I-\nabla^*D_l^{-1}C_l)$ of $D_l$ respecting to ${\mathcal{V}}^I(x^l)$, i.e., $$\label{eq:positive:pd:iso}
\langle -\nabla^*D_l^{-1}C_lu, u \rangle_{2} \geq 0 \Rightarrow \langle (I-\nabla^*D_l^{-1}C_l)u,u \rangle_{2} \geq \|u\|_{2}^2.$$ Thus ${\mathcal{V}}^I(x^l)/D_l$ satisfies the regularity condition. Furthermore ${\mathcal{V}}^I(x^l)$ can be chosen as a Newton derivative of ${\mathcal{F}}(u,h)$. The linear operator ${\mathcal{V}}^I(x^l)$ and the Schur complement ${\mathcal{V}}^I(x^l)/I$ satisfy the regularity condition for any fixed $\sigma_k$ and $\lambda^k$, i.e., ${\mathcal{V}}^I(x^l)/I$, and ${\mathcal{V}}^I(x^l)$ are nonsingular and the corresponding inverse are uniformly bounded for any fixed $\sigma_k$ and $\lambda^k$.
We first prove the regular condition of ${\mathcal{V}}^I(x^l)/D_l$, whose proof is essentially similar to the proof of Lemma 3.3 in [@HS]. Denote $$\Omega^{+}: = \{ (x_1,x_2) \in \Omega: |\lambda^k + \sigma_k\nabla u | \geq \alpha\}, \quad \Omega^{-}: = \{ (x_1,x_2) \in \Omega: |\lambda^k + \sigma_k\nabla u | < \alpha\}, \quad \Omega = \Omega^{+} \cup \Omega^{-}.$$ Since for any $u\in L^2(\Omega)$, we have $$\begin{aligned}
&\langle -\nabla^*D_l^{-1}C_lu, u \rangle_{2} =-\langle D_l^{-1}C_lu, \nabla u \rangle_{2}
=\langle -D_l^{-1}(-\sigma_k \nabla + B_l)u, \nabla u\rangle_{2} \notag \\
&=\langle \sigma_k D_l^{-1}\nabla u, \nabla u \rangle_{2} - \langle D_l^{-1}B_lu, \nabla u \rangle_{2} \label{eq:inequ:iso:posi:esi}\\
&=\int_{\Omega^{-}}\sigma_kD_l^{-1}|\nabla u|^2 d\sigma + \int_{\Omega^{+}}\sigma_kD_l^{-1}|\nabla u|^2 d\sigma-\int_{\Omega^{+}} \langle D_l^{-1} \dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla u \rangle }{ |\lambda^k + \sigma_k \nabla u^l |}h^k,\nabla u \rangle d\sigma. \notag
\end{aligned}$$ With the assumption $|h^l| \leq \alpha$ and direct calculations, we see $$\label{eq:inequ:iso:posi:esiII}
|\langle D_l^{-1} \dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda^k + \sigma_k \nabla u^l, \nabla u \rangle }{ |\lambda^k + \sigma_k \nabla u^l |}h^l,\nabla u \rangle |
\leq D_l^{-1} \dfrac{\sigma_k}{\alpha}\dfrac{|\lambda^k + \sigma_k \nabla u^l|| \nabla u| }{ |\lambda^k + \sigma_k \nabla u^l|}|h^l||\nabla u|
\leq \sigma_k D_l^{-1} |\nabla u|^2.$$ Combining and , we arrive at $$\langle -\nabla^*D_l^{-1}C_lu, u \rangle_{2} \geq \int_{\Omega^{-}}\sigma_kD_l^{-1}|\nabla u|^2 d\sigma \geq 0,$$ which leads to . For the regularity condition of ${{\mathcal{V}}^I}(x^l)$, it is known that ([@Zhangfu] formula 0.8.1 which is similar to the Banachiewicz inversion formula) $${{\mathcal{V}}^I}(x^l)^{-1} = \begin{bmatrix}
({\mathcal{V}}^I(x^l)/D_l)^{-1} & -({\mathcal{V}}^I(x^l)/D_l)^{-1}\nabla^*D_l^{-1} \\
-D_l^{-1}C_l ({\mathcal{V}}^I(x^l)/D_l)^{-1} & D_l^{-1} + D_l^{-1}C_l ({\mathcal{V}}^I(x^l)/D_l)^{-1}\nabla^*D_l^{-1}.
\end{bmatrix}$$ By the boundedness of $({\mathcal{V}}^I(x^l)/D_l)^{-1}$, $C_l$ and $D_l^{-1}$, we get the boundedness of ${{\mathcal{V}}^I}(x^l)^{-1}$.
Similarly, for the existence and boundedness of $({\mathcal{V}}^I(x^l)/I)^{-1}$, by the Duncan inversion formula (see 0.8.1 and 0.8.2 of [@Zhangfu]) or Woodbury formula, we have $$({\mathcal{V}}^I(x^l)/I)^{-1} = (D_l -C_l\nabla^*)^{-1} = D_l^{-1} + D_l^{-1}C_l ({\mathcal{V}}^I(x^l)/D_l)^{-1}\nabla^*D_l^{-1}.$$ We thus get the boundedness of $({\mathcal{V}}^I(x^l)/I)^{-1}$.
The Anisotropic Total Variation
-------------------------------
For anisotropic $l_1$ norm in , the projection to the set $\{y:{\|{y}\|_{\infty}} \leq \alpha\}$ actually is $$\begin{aligned}
\label{eq:proj:cons:ani}
&\mathcal{P}_{\alpha}^{A}(y) =\left(\frac{y_1}{\max(1, {|y_1|}/{\alpha})},\frac{y_2}{\max(1, {|y_2|}/{\alpha})}\right)^{T}, \\
&\mathcal{P}_{\alpha}^{A}(\lambda^k + \sigma_k \nabla u) = \left(\dfrac{\lambda_1^k + \sigma_k \nabla_1 u}{\max(1.0, {|\lambda_1^k + \sigma_k \nabla_1 u|}/{\alpha})}, \dfrac{\lambda_2^k + \sigma_k \nabla_2 u}{\max(1.0, {|\lambda_2^k + \sigma_k \nabla_2 u|}/{\alpha})}\right)^T.\end{aligned}$$ After completely similar analysis as the isotropic case, the equation becomes $$\label{eq:opti:u:lambda:ani}
{\mathcal{F}}^A(u,h) = \begin{bmatrix} 0 \\ 0 \\0 \end{bmatrix}, \quad {\mathcal{F}}^A(u,h) :=
\begin{bmatrix}
u - f + \nabla^*h \\
- \sigma_k \nabla_1 u - \lambda_1^k + \max \big(1.0, \dfrac{|\lambda_1^k + \sigma_k \nabla_1 u|}{\alpha} \big) h_1 \\
- \sigma_k \nabla_2 u - \lambda_2^k + \max \big(1.0, \dfrac{|\lambda_2^k + \sigma_k \nabla_2 u|}{\alpha} \big) h_2
\end{bmatrix}
.$$ Similar to Lemma \[lem:semismooth:max\], we have the following lemma, whose proof is completely similar to Lemma \[lem:semismooth:max\] and we omit here.
\[lem:ani:pdd\] The functions $\max (1.0, {|\lambda_1^k + \sigma_k \nabla_1 u|/\alpha})$ and $\max (1.0, {|\lambda_2^k + \sigma_k \nabla_2 u|}/{\alpha})$ are semismooth functions of $u$ and their Clarke generalized gradients are as follows, $$\label{eq:subdifffer:ani:pd:max}
\left\{\chi_{\lambda^k, u}^{i,s}\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_i^k + \sigma_k \nabla_i u, \nabla_i \cdot \ \rangle }{ |\lambda_i^k + \sigma_k \nabla_i u |} \ | \ s\in [0,1] \right \} = \partial_{u}(\max (1.0, \dfrac{|\lambda_i^k + \sigma_k \nabla_i u|}{\alpha})), \quad i=1,2
$$ where $\chi_{\lambda^k, u}^{1,s}$ and $\chi_{\lambda^k, u}^{2,s}$ are the generalized derivatives of $\max(\cdot, 1.0)$, $$\label{eq:ani:pd:actset}
\chi_{\lambda^k, u}^{1,s} = \begin{cases}
1, \quad & |\lambda_1^k +\sigma_k \nabla_1 u | / \alpha > 1.0, \\
s, \quad & |\lambda_1^k +\sigma_k \nabla_1 u | / \alpha = 1.0, \\
0, \quad & |\lambda_1^k + \sigma_k \nabla_1 u | / \alpha < 1.0,
\end{cases} \quad
\chi_{\lambda^k, u}^{2,s} = \begin{cases}
1, \quad & |\lambda_2^k +\sigma_k \nabla_2 u | / \alpha \geq 1.0, \\
s, \quad & |\lambda_2^k +\sigma_k \nabla_2 u | / \alpha = 1.0, \ s \in [0,1],\\
0, \quad & |\lambda_2^k + \sigma_k \nabla_2 u | / \alpha < 1.0.
\end{cases}$$ Furthermore $F^A(u,h)$ is semismooth on $(u,h)$. Henceforth, denote $\chi_{\lambda^k, u}^{1}:=\chi_{\lambda^k, u}^{1,1}$ and $\chi_{\lambda^k, u}^{2}: = \chi_{\lambda^k, u}^{2,1} $ for $s=1$ cases in .
Let’s introduce $$D_l^{A} = \begin{bmatrix}
U_{\sigma_k}^1 & 0\\
0 & U_{\sigma_k}^2
\end{bmatrix},
\quad
B_l^{A} = \begin{bmatrix}
\chi_{\lambda^k, u^l}^1\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_1^k + \sigma_k \nabla_1 u^l, \nabla_1 \cdot \ \rangle } { |\lambda_1^k + \sigma_k \nabla_1 u^l |}h_1^l \\
\chi_{\lambda^k, u^l}^2\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_2^k + \sigma_k \nabla_2 u^l, \nabla_2 \cdot \ \rangle } { |\lambda_2^k + \sigma_k \nabla_2 u^l |}h_2^l
\end{bmatrix}, \ \ C_l^{A} = -\sigma \nabla + B_l^A,$$ where $$U_{\sigma_k}^1( \lambda^k, u^l)=\max \left(1.0, \dfrac{|\lambda_1^k + \sigma_k \nabla_1 u^l|}{\alpha}\right),\quad
U_{\sigma_k}^2( \lambda^k, u^l)=\max \left(1.0, \dfrac{|\lambda_2^k + \sigma_k \nabla_2 u^l|}{\alpha}\right).$$ By Lemma \[lem:vector:semismooth:newton\] and \[lem:ani:pdd\], since $F^A(u,h)$ is an affine function of $h$, together with Lemma \[lem:ani:pdd\], it can be readily verified that, we can choose the Newton derivative of the nonlinear equation as $${\mathcal{V}}^{A} = \begin{bmatrix}
I & \nabla ^* \\
C_l^{A} & D_l^{A}
\end{bmatrix}.$$ The right-hand side becomes $${\mathcal{V}}^A(x^l)x^l - {\mathcal{F}}^A(x^l) =\begin{bmatrix}
f \\ \lambda^k + B_l^{A}u^l
\end{bmatrix}
=
\begin{bmatrix}
f \\ \lambda_1^k + \chi_{\lambda^k, u^l}^1\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_1^k + \sigma_k \nabla_1 u^l, \nabla_1 u^l \rangle }{ \|\lambda_1^k + \sigma_k \nabla_1 u^l \|}h_1^l
\\ \lambda_2^k + \chi_{\lambda^k, u^l}^2\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_2^k + \sigma_k \nabla_2 u^l, \nabla_2 u^l \rangle }{ \|\lambda_2^k + \sigma_k \nabla_2 u^l \|}h_2^l
\end{bmatrix}: = \begin{bmatrix} f \\ b_1^l \\b_2^l \end{bmatrix}.$$ For solving $u^{l+1}$ first, the Newton update becomes $$\label{eq:newton:equation:ani}
(I-\nabla^*{{D^A_l}}^{-1}{C^{A}_l})u^{l+1} = f + \operatorname{div}{D^{A}_l}^{-1}b^l,$$ where $b^l = (b_1^l, b_2^l)^{T}$. Then $h^{l+1}$ can be recovered by $$\label{eq:ufirst:h:ani}
h^{l+1} = \begin{bmatrix}\left( b_1^l + \sigma_k \nabla_1 u^{l+1} - \chi_{\lambda^k, u^l}^1\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_1^k + \sigma_k \nabla_1 u^l, \nabla_1 u^{l+1} \rangle } { |\lambda_1^k + \sigma_k \nabla_1 u^l |}h_1^l \right) \bigg / U_{\sigma_k}^1( \lambda^k, u^l) \\
\left( b_2^l + \sigma_k \nabla_2 u^{l+1} - \chi_{\lambda^k, u^l}^2\dfrac{\sigma_k}{\alpha}\dfrac{\langle \lambda_2^k + \sigma_k \nabla_2 u^l, \nabla_2 u^{l+1} \rangle } { |\lambda_2^k + \sigma_k \nabla_2 u^l |}h_2^l \right) \bigg/ U_{\sigma_k}^2( \lambda^k, u^l)
\end{bmatrix}.$$ For solving $h^{l+1}$ first, the Newton update is $$\label{eq:pdssn:h:ani}
(D_l^A - C_l^A\nabla^*)h^{l+1}= (D_l^A +\sigma_k \nabla \nabla^* - B_l^A\nabla^*)h^{l+1} = b^l +\sigma_k \nabla f -B_l^Af,$$ where $b^l = (b_1^l, b_2^l)^{T}$. Then $u^{l+1}$ can be recovered through $$u^{l+1}= f-\nabla^*h^{l+1}.$$ For the positive definiteness and the regularity condition of the Schur complement of $D_l^A$ with respect to ${\mathcal{V}}^{A}(x^l)$, we have the following lemma, whose proof is completely similar to the proof of Lemma \[lem:positive:iso:pd\] and is omitted here.
\[lem:positive:ani:pd\] If the feasibility of $h^l$ is satisfied, i.e., $|h_i^l|\leq \alpha$ by , $i=1,2$ we have the positive definiteness of $(I-\nabla^*{D^A_l}^{-1}C^A_l)$,i.e., $$\label{eq:positive:pd:ani:iso}
\langle -\nabla^*{D^A_l}^{-1}C^A_l u, u \rangle_{2} \geq 0 \Rightarrow \langle (I-\nabla^*{D^A_l}^{-1}C^A_l)u,u \rangle_{2} \geq \|u\|_{2}^2.$$ We thus conclude ${\mathcal{V}}^A(x^l)$ can be chosen as a Newton derivative of ${\mathcal{F}}^A(u,h)$. ${\mathcal{V}}^A(x^l)/D_l^A$ satisfies the regularity condition. The linear operator ${\mathcal{V}}^A(x^l)$ and the Schur complement ${\mathcal{V}}^A(x^l)/I$ satisfy the regularity condition for any fixed $\sigma_k$ and $\lambda^k$.
We conclude section \[sec:alm:pdssn\] with the following Algorithms \[alm:SSN\_PDP\] and \[alm:SSN\_PDD\], which are the subproblem solvers for the $k$th iteration of ALM applied to .
(Algorithm \[alm:SSN\_PDP\]: primal-first semismooth Newton iteration, solving (or for the anisotropic case) for $u^{l+1}$ and then recovering $h^{l+1}$.)

(Algorithm \[alm:SSN\_PDD\]: dual-first semismooth Newton iteration, solving for $h^{l+1}$ and then recovering $u^{l+1}$.)
The projection onto the feasible set $\{h:{\|{h}\|_{\infty}} \leq \alpha\}$ is very important for the positive definiteness of or . It can bring more efficiency for solving the linear systems or numerically, as in our numerical tests (see also [@HS]).
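For reference, the pointwise operators used above, the isotropic and anisotropic projections $\mathcal{P}_{\alpha}$, the soft thresholding $S_{\alpha/\sigma_k}$ obtained from the Moreau identity, and the linear and projection-based multiplier updates, can be implemented in a few lines. The following sketch is our own illustration (with $p$ stored as two arrays $p_1$, $p_2$), not the code used in the numerical experiments.

```python
import numpy as np

def proj_iso(p1, p2, alpha):
    """Isotropic projection onto {|p| <= alpha} (pointwise Euclidean norm)."""
    scale = np.maximum(1.0, np.sqrt(p1 ** 2 + p2 ** 2) / alpha)
    return p1 / scale, p2 / scale

def proj_aniso(p1, p2, alpha):
    """Anisotropic (componentwise) projection onto {|p_i| <= alpha}."""
    return np.clip(p1, -alpha, alpha), np.clip(p2, -alpha, alpha)

def soft_iso(q1, q2, tau):
    """Isotropic soft thresholding S_tau(q) = q - P_tau(q), via the Moreau identity."""
    r1, r2 = proj_iso(q1, q2, tau)
    return q1 - r1, q2 - r2

def multiplier_update_linear(lam1, lam2, gx, gy, p1, p2, sigma):
    """lambda^{k+1} = lambda^k + sigma_k (grad u^{k+1} - p^{k+1})."""
    return lam1 + sigma * (gx - p1), lam2 + sigma * (gy - p2)

def multiplier_update_proj(lam1, lam2, gx, gy, sigma, alpha):
    """lambda^{k+1} = P_alpha(lambda^k + sigma_k grad u^{k+1}), the projection-type update
    (shown here for the isotropic case; the anisotropic case would use proj_aniso)."""
    return proj_iso(lam1 + sigma * gx, lam2 + sigma * gy, alpha)
```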
ALM with Semismooth Newton Involving Soft Thresholding Operators {#sec:alm:ssnp:thres}
================================================================
ALM with Semismooth Newton: Isotropic Case
------------------------------------------
For the isotropic TV, we can rewrite the equation as follows $$\label{eq:newton:iso:u:p}
F^I(u)=0, \quad F^I(u):=u-f + \nabla^* \lambda^k + \sigma_k \nabla^*\nabla u - \sigma_k \nabla^* S_{\frac{\alpha}{\sigma_k}}^{I}(\hat \lambda^k+\nabla u),$$ where $\hat \lambda^k := \lambda^k/\sigma_k$. We now turn to the semismooth Newton method to solve . The main difficulty comes from the soft thresholding operator $ S_{\frac{\alpha}{\sigma_k}}^{I}(\cdot)$.
With the Moreau identity and direct calculations, we have $$\label{eq:proj:thres:rela}
S_{\frac{\alpha}{\sigma_k}}^{I}(\hat \lambda^k+ \nabla u ) = \hat \lambda^k + \nabla u- \mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u).$$\[eq:proj:thres:rela:gradient\] By [@CL] (Corollary 2 in section 2.3.3), we have $$\partial_u(S_{\frac{\alpha}{\sigma_k}}^{I}(\hat \lambda^k+ \nabla u ) ) = \nabla - \partial_u( \mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u)).$$ Denote $P(u) = \mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u)$. Let’s introduce the active sets $$\label{eq:act:set:iso:kun:shreshold}
\chi_{u, \hat \lambda^k}^{+} = \begin{cases}
1, \ \ \ |\hat \lambda^k+ \nabla u | \geq {\alpha}/{\sigma_k}, \\
0, \ \ \ |\hat \lambda^k+ \nabla u | < {\alpha}/{\sigma_k},
\end{cases} \quad
\chi_{u, \hat \lambda^k}^{-} = \begin{cases}
1, \ \ \ |\hat \lambda^k+ \nabla u | < {\alpha}/{\sigma_k}, \\
0, \ \ \ |\hat \lambda^k+ \nabla u | \geq {\alpha}/{\sigma_k}.
\end{cases}$$
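On a discrete image, the two sets above are simply pointwise masks on the pixel grid. A minimal numpy sketch follows; the array shapes and the names `lam_hat` and `grad_u` are illustrative assumptions, not notation from any released code:

```python
import numpy as np

# lam_hat = lambda^k / sigma_k and grad_u = nabla u, both stored as
# arrays of shape (2, N, M): two gradient components per pixel.
def active_masks(lam_hat, grad_u, alpha, sigma):
    q = lam_hat + grad_u
    norm_q = np.sqrt(q[0]**2 + q[1]**2)     # pointwise |lambda_hat + grad u|
    chi_plus = (norm_q >= alpha / sigma)    # active set chi^+
    chi_minus = ~chi_plus                   # inactive set chi^-
    return chi_plus, chi_minus
```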
Actually, we have the following lemma.
\[eq:proj:semismooth\] $ P(u)=\mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u)$ is a semismooth function of $u$. Furthermore, we have $$\label{eq:inclusion:iso}
\left\{ A^{P,s}_u (\nabla \cdot \ ) \ | \ s \in [0,1] \right\} \subset \partial_{u} \mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u).$$ This means that for any direction $v$ and any $s \in [0,1]$, we have $$\label{eq:first:minus}
A^{P,s}_u(\nabla \cdot \ ) = \begin{cases}
A_{u}^{-}(\nabla \cdot \ ) := \nabla \cdot \ , &|\hat \lambda^k+ \nabla u | <{\alpha}/{\sigma_k}, \\
\dfrac{\alpha}{\sigma_k}\left( \dfrac{\nabla \cdot \ }{|\hat \lambda^k+ \nabla u |} - s\dfrac{\langle \hat \lambda^k + \nabla u, \nabla \cdot \ \rangle (\hat \lambda^k + \nabla u) }{|\hat \lambda^k+ \nabla u |^3} \right), \ &|\hat \lambda^k+ \nabla u | ={\alpha}/{\sigma_k},\\
A_{u}^{+}(\nabla \cdot \ )=\dfrac{\alpha}{\sigma_k}\left( \dfrac{\nabla \cdot \ }{|\hat \lambda^k+ \nabla u |} - \dfrac{\langle \hat \lambda^k + \nabla u, \nabla \cdot \ \rangle (\hat \lambda^k + \nabla u) }{|\hat \lambda^k+ \nabla u |^3} \right), &|\hat \lambda^k+ \nabla u | > {\alpha}/{\sigma_k}.
\end{cases}$$ Throughout this paper, we choose the Newton derivative $A^{P}$ with $s=1$ in and , i.e., $$\label{eq:iso:proj}
A_{u,I}^{P} := \chi_{u, \hat \lambda^k}^{+} A_{u}^{+} + \chi_{u, \hat \lambda^k}^{-} A_{u}^{-}.$$
The semismoothness of $P(u)$ can be seen as follows. It is known that $L(y) = \mathcal{P}_{\frac{\alpha}{\sigma_k}}(y)$ is a $PC^{\infty}$ function [@MU] (Example 5.16) (or see [@FP] Theorem 4.5.2 for more general projections of $PC^1$ functions). Since $y(u) = \nabla u + \hat \lambda^k $ is a differentiable affine function of $u$, $y(u)$ is also semismooth in $u$. We thus get the semismoothness of $P(u)=L(y(u))$ in $u$ [@MU] (Proposition 2.9).
When $|\hat \lambda^k+ \nabla u | < {\alpha}/{\sigma_k}$, $$P(u) = P_1(u) := \mathcal{P}_{\frac{\alpha}{\sigma_k}}(\hat \lambda^k+ \nabla u ) = \hat \lambda^k + \nabla u.$$ Since $P_1(u)$ is an affine and differentiable function of $u$, we have $$P_1(u) = \hat \lambda^k + \nabla u, \quad \nabla_u P_1(u) = \nabla.$$ The Gâteaux derivative of $\mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u)$ can be directly calculated as in . We thus have the first case of .
While $|\hat \lambda^k+ \nabla u | > \frac{\alpha}{\sigma_k}$, $P(u)$ is a smooth function on $u$. Similarly, we have $$\begin{aligned}
P(u) &= P_2(u):= \mathcal{P}_{\frac{\alpha}{\sigma_k}}( \hat \lambda^k + \nabla u) = \dfrac{\alpha}{\sigma_k}\dfrac{ (\hat \lambda^k+ \nabla u)}{ |\hat \lambda^k + \nabla u|}, \ \ |\hat \lambda^k+ \nabla u | > {\alpha}/{\sigma_k}.\\
\nabla_uP_2(u) &= \dfrac{\alpha}{\sigma_k}\left[\dfrac{\nabla \cdot }{ |\hat \lambda^k + \nabla u|} - \dfrac{ \langle \hat \lambda^k + \nabla u, \nabla \cdot \ \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3}\right], \ \ |\hat \lambda^k+ \nabla u | > {\alpha}/{\sigma_k}.
\end{aligned}$$ Actually, it can be readily verified by the directional derivative as follows. $$\begin{aligned}
&\partial_{u}\dfrac{ (\hat \lambda^k+ \nabla u)}{ |\hat \lambda^k + \nabla u|}(v)
= \lim_{t \rightarrow 0}\dfrac{1}{t}\left( \dfrac{\hat \lambda^k+ \nabla u + t\nabla v}{ |\hat \lambda^k + \nabla u+t\nabla v| } - \dfrac{\hat \lambda^k + \nabla u}{ |\hat \lambda^k + \nabla u| }\right)\\
&=\lim_{t \rightarrow 0}\dfrac{1}{t}\left( \dfrac{\hat \lambda^k+ \nabla u + t\nabla v}{ \sqrt{ |\hat \lambda^k + \nabla u|^2 + 2t \langle \hat \lambda^k + \nabla u, \nabla v\rangle +t^2 |\nabla v|^2} } - \dfrac{\hat \lambda^k + \nabla u}{ |\hat \lambda^k + \nabla u| }\right) \notag \\
&=\lim_{t \rightarrow 0}\dfrac{1}{t}\left( \dfrac{\hat \lambda^k+ \nabla u + t\nabla v}{ |\hat \lambda^k + \nabla u|}\bigg(1-\dfrac{1}{2}\dfrac{2t \langle \hat \lambda^k + \nabla u, \nabla v\rangle + t^2 |\nabla v|^2 }{|\hat \lambda^k + \nabla u|^2} + \mathcal{O}(t^2)\bigg) - \dfrac{\hat \lambda^k+ \nabla u}{ |\hat \lambda^k + \nabla u| }\right) \notag \\
& = \lim_{t \rightarrow 0}\dfrac{1}{t}\left( \dfrac{t\nabla v}{ |\hat \lambda^k + \nabla u|} - \dfrac{t \langle \hat \lambda^k + \nabla u, \nabla v \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3} + \mathcal{O}(t^2) \right) \\
& = \dfrac{\nabla v}{ |\hat \lambda^k + \nabla u|} - \dfrac{ \langle \hat \lambda^k + \nabla u, \nabla v \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3}.
\end{aligned}$$ For any $u$ such that $|\hat \lambda^k+ \nabla u | = {\alpha}/{\sigma_k} $, by [@Sch] (Proposition 4.3.1), we have $$\text{co}\{ (\nabla_u P_1(u)), (\nabla_uP_2(u))\} = A^P_u (\nabla \cdot) \subset \partial_u P(u),$$ which leads to and . By [@CL] (Corollary 2.6.6), we have $$\nabla^* A^P_u(\nabla v) \subset (\nabla^* \partial_uP(u))(v) = \partial_u(\nabla^*P(u))(v).$$
Similarly, we have the following lemma.
\[lem:positive:iso:p\] $F^I(u)$ is a semismooth function of $u$. We have $$\begin{aligned}
&A^S_u (\nabla v)\subset \partial_{u}\big( S_{\frac{\alpha}{\sigma_k}}^{I}(\hat\lambda^k+\nabla u)\big)(v), \label{eq:inclusion:iso:p} \\
A^S_u(\nabla v ) &= \begin{cases}
0, &|\hat \lambda^k+ \nabla u | < \frac{\alpha}{\sigma_k} \\
\left(\nabla v - \dfrac{\alpha}{\sigma_k}\left( \dfrac{\nabla v}{|\hat \lambda^k+ \nabla u |} - s\dfrac{\langle \hat \lambda^k + \nabla u, \nabla v \rangle (\hat \lambda^k + \nabla u) }{|\hat \lambda^k+ \nabla u |^3} \right)\right), \ s \in [0,1], \ &|\hat \lambda^k+ \nabla u | = \frac{\alpha}{\sigma_k} \\
A_{u}^{+}(\nabla v) :=\left(\nabla v - \dfrac{\alpha}{\sigma_k}\left( \dfrac{\nabla v}{|\hat \lambda^k+ \nabla u |} - \dfrac{\langle \hat \lambda^k + \nabla u, \nabla v \rangle (\hat \lambda^k + \nabla u) }{|\hat \lambda^k+ \nabla u |^3} \right)\right). &|\hat \lambda^k+ \nabla u | > \frac{\alpha}{\sigma_k}
\end{cases}\label{eq:first:minus1p}
\end{aligned}$$ Throughout this paper, we choose the following generalized gradient for computations $$A_{u}^I(\nabla v) := \chi_{u, \hat \lambda^k}^{+} A_{u}^{+}(\nabla v) \subset A^S_u(\nabla v ) ,$$ where we always choose $s=1$ in . The Newton derivative of can be chosen as $$\label{eq:sub:iso:primal}
I +\sigma_k \nabla^*\nabla - \sigma_k\nabla ^* A_u^I \nabla,$$ which is positive definite in $u$ with a lower bound and thus satisfies the regularity condition, $$\langle (I +\sigma_k \nabla^*\nabla - \sigma_k\nabla ^* A_u^I \nabla)u, u\rangle_{2} \geq \|u\|_{2}^2.$$
By Lemma \[eq:proj:semismooth\] and , we see $S_{\frac{\alpha}{\sigma_k}}^{I}(\hat \lambda^k+ \nabla u ) $ is semismooth and the semismoothness of $F^I(u)$ then follows. Furthermore, by , we obtain . Since $$I +\sigma_k \nabla^*\nabla - \sigma_k\nabla ^* A_u^I \nabla \subset \partial F^I(u) = I +\sigma_k \nabla^*\nabla - \sigma_k \nabla^* \partial_u(S_{\frac{\alpha}{\sigma_k}}^{I}(\frac{\lambda^k}{\sigma_k}+\nabla u)),$$ we can choose $ I +\sigma_k \nabla^*\nabla - \sigma_k\nabla ^* A_u^I \nabla$ as a Newton derivative for $F^I(u)$.
For the positive definiteness of the Newton derivative , we have $$\begin{aligned}
&\langle (I + \sigma_k \nabla^* \nabla -\sigma_k\nabla ^* A_u^I \nabla)u, u\rangle_{2} = \|u\|_{2}^2 +\sigma_k\langle \nabla u, \nabla u \rangle_{2} -\sigma_k \langle A_u^I \nabla u, \nabla u \rangle_{2} \\
=& \|u\|_{2}^2 + \sigma_k\langle (I-\chi_{u, \hat \lambda}^{+ }) \nabla u, \nabla u \rangle_{2} + \alpha \langle \chi_{u, \hat \lambda}^{+ }\dfrac{\nabla u}{ |\hat \lambda^k + \nabla u|} - \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3}, \nabla u\rangle_{2} \\
=& \|u\|_{2}^2 + \sigma_k\langle (I-\chi_{u, \hat \lambda}^{+ }) \nabla u, \nabla u \rangle_{2} + \alpha \langle \chi_{u, \hat \lambda}^{+ }\dfrac{\nabla u}{ |\hat \lambda^k + \nabla u|} , \nabla u\rangle_{2} \\
&-\alpha \langle \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3}, \nabla u\rangle_{2}
\geq \|u\|_{2}^2,
\end{aligned}$$ since $\chi_{u, \hat \lambda}^{+ } \leq I$ and, by comparing the integrands, $$\begin{aligned}
&\langle \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u |^3}, \nabla u\rangle = \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle^2 }{|\hat \lambda^k + \nabla u|^3} \\
& \leq \chi_{u, \hat \lambda}^{+ }\dfrac{ | \hat \lambda^k + \nabla u|^2 |\nabla u|^2}{|\hat \lambda^k + \nabla u|^3} = \chi_{u, \hat \lambda}^{+ }\dfrac{ |\nabla u|^2}{|\hat \lambda^k + \nabla u|}.
\end{aligned}$$
The semismooth Newton method for solving follows $$\label{eq:ssn:PT:iso}
(I + \sigma_k \nabla ^* \nabla - \sigma_k \nabla^* A_{u^l}^I \nabla)\delta u^{l+1} = - F^I(u^l).$$
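For illustration, the following is a minimal matrix-free sketch of one inner iteration of the above update for the isotropic case, assuming a forward-difference gradient with Neumann boundary conditions and scipy's BiCGSTAB as the inner linear solver; the function names (`grad`, `gradT`, `ssnpt_step`) and parameter handling are ours and purely illustrative, not part of any released code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

def grad(u):                                   # forward differences, Neumann boundary
    g = np.zeros((2,) + u.shape)
    g[0, :-1, :] = u[1:, :] - u[:-1, :]
    g[1, :, :-1] = u[:, 1:] - u[:, :-1]
    return g

def gradT(p):                                  # adjoint of grad, i.e. nabla^* = -div
    v = np.zeros(p.shape[1:])
    v[:-1, :] -= p[0, :-1, :]; v[1:, :] += p[0, :-1, :]
    v[:, :-1] -= p[1, :, :-1]; v[:, 1:] += p[1, :, :-1]
    return v

def soft_iso(q, t):                            # isotropic soft thresholding S_t(q)
    n = np.maximum(np.sqrt(q[0]**2 + q[1]**2), 1e-12)
    return np.maximum(1.0 - t / n, 0.0) * q

def ssnpt_step(u, f, lam, alpha, sigma):
    """One semismooth Newton update for F^I(u) = 0 at the current iterate u."""
    lam_hat = lam / sigma
    q = lam_hat + grad(u)
    nq = np.maximum(np.sqrt(q[0]**2 + q[1]**2), 1e-12)
    chi = (nq >= alpha / sigma)                # active set chi^+

    # residual F^I(u) = u - f + nabla^* lam + sigma nabla^*(nabla u - S(q))
    F = u - f + gradT(lam) + sigma * gradT(grad(u) - soft_iso(q, alpha / sigma))

    def matvec(vflat):                         # Newton derivative applied to a direction v
        v = vflat.reshape(u.shape)
        g = grad(v)
        Ag = chi * (g - (alpha / sigma) * (g / nq
                    - (q[0] * g[0] + q[1] * g[1]) * q / nq**3))
        return (v + sigma * gradT(g - Ag)).ravel()

    J = LinearOperator((u.size, u.size), matvec=matvec, dtype=np.float64)
    delta, _ = bicgstab(J, -F.ravel(), maxiter=200)
    return u + delta.reshape(u.shape)
```

The outer ALM loop would call such a step repeatedly and then update the multiplier; this sketch only illustrates the structure of the Newton system, not a tuned implementation.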
ALM with Semismooth Newton: Anisotropic Case
--------------------------------------------
For the anisotropic $l_1$ norm , we have $$\label{eq:h:ani:newton}
p = (p_1,p_2)^T:=H(u)=S_{\frac{\alpha}{\sigma_k}}(\hat\lambda^k+\nabla u) = (S_{\frac{\alpha}{\sigma_k}}(\hat\lambda_1^k +\nabla_1 u),S_{\frac{\alpha}{\sigma_k}}(\hat\lambda_2^k +\nabla_2 u) )^T,$$ $$\label{eq:p:shreshold}
p_i =S_{\frac{\alpha}{\sigma_k}}(\hat\lambda_i^k +\nabla_i u) = \begin{cases}
\hat \lambda_i^k + \nabla_i u - \dfrac{\alpha}{\sigma_k}, \quad &\hat \lambda_i^k + \nabla_i u > {\alpha}/{\sigma_k}, \\
0, \quad &|\hat \lambda_i^k + \nabla_i u| \leq {\alpha}/{\sigma_k}, \\
\hat \lambda_i^k + \nabla_i u + {\alpha}/{\sigma_k}, \quad & \hat \lambda_i^k + \nabla_i u < -{\alpha}/{\sigma_k}.
\end{cases} \quad i=1,2,$$ With , the equation becomes $$\label{eq:u:cancel:p}
F^A(u)=0, \quad F^A(u): = u - f + \nabla^* \lambda^k + \sigma_k \nabla^*\nabla u - \sigma_k \nabla^*
(I + \frac{\alpha}{\sigma_k} \partial \|\cdot\|_{1})^{-1}({\lambda^k}/{\sigma_k} + \nabla u).$$ The Newton derivative is of critical importance for the semismooth Newton method to solve the equation . Let’s first introduce $$\chi_{1, \hat \lambda^k}^{+,s} = \begin{cases}
1, \ \ \ |\hat \lambda_1^k + \nabla_1 u| > {\alpha}/{\sigma_k}, \\
s, \ \ \ |\hat \lambda_1^k + \nabla_1 u| = {\alpha}/{\sigma_k},\\
0, \ \ \ |\hat \lambda_1^k + \nabla_1 u| < {\alpha}/{\sigma_k},
\end{cases} \quad
\chi_{2, \hat \lambda^k}^{+,s} = \begin{cases}
1, \ \ \ |\hat \lambda_2^k + \nabla_2 u| > {\alpha}/{\sigma_k}, \\
s, \ \ \ |\hat \lambda_2^k + \nabla_2 u| = {\alpha}/{\sigma_k},\\
0, \ \ \ |\hat \lambda_2^k + \nabla_2 u| < {\alpha}/{\sigma_k}.
\end{cases}
s \in [0,1].$$ Introduce $\chi_{i, \hat \lambda^k}^{+} : = \chi_{i, \hat \lambda^k}^{+,s}$ for $s=1$ with $i=1,2$. For the Newton derivative of equation , we have the following lemma.
\[lem:positive:ani:p\] The function $F^A(u)$ is semismooth on $u$. We have $$\label{eq:subdifffer:ani:threshold}
\left\{\chi_{i, \hat\lambda^k}^{+,s} \nabla_i \ | \ s\in [0,1] \right \} = \partial_{u}(S_{\frac{\alpha}{\sigma_k}}(\dfrac{\lambda_i^k}{\sigma_k} +\nabla_i u)), \quad i=1,2.
$$ The Newton derivative of the equation can be chosen as $$\label{eq:gene:newton:deri:ani}
I + \sigma_k \nabla^*\nabla - \sigma_k \nabla^* A_u^A(\nabla),$$ where $A_u^A(\nabla)$ can be chosen as the Newton derivative of $H(u)$ $$\label{eq:subdiff:ani:shreshold12}
A_u^A(\nabla) =\begin{bmatrix}
\chi_{1, \hat \lambda^k}^{+} & 0 \\
0 & \chi_{2, \hat \lambda^k}^{+}
\end{bmatrix} \begin{bmatrix} \nabla_1 \\ \nabla_2\end{bmatrix}.$$ Furthermore, the Newton derivative is positive definite and satisfies the regularity condition, $$\langle (I + \sigma_k \nabla^*\nabla - \sigma_k \nabla^* A_u^A(\nabla))u, u\rangle_{2} \geq \|u\|_{2}^2.$$
It can be seen that $H_i(u) = S_{\frac{\alpha}{\sigma_k}}({\lambda_i^k}/{\sigma_k} +\nabla_i u)$ is piecewise differentiable, $i=1,2$. Take $H_1(u)$ for example; the case ${\lambda_1^k}/{\sigma_k} + \nabla_1 u < -{\alpha}/{\sigma_k}$, where $H_1(u)={\lambda_1^k}/{\sigma_k} + \nabla_1 u + {\alpha}/{\sigma_k}$, is completely analogous and is omitted. Denote $H_1^1(u) ={\lambda_1^k}/{\sigma_k} + \nabla_1 u - {\alpha}/{\sigma_k} $ and $H_1^2(u)=0$ as *selection functions* of $H_1(u)$. For ${\lambda_1^k}/{\sigma_k} + \nabla_1 u > {\alpha}/{\sigma_k}$, $$H_1(u) = H_1^1(u), \quad \nabla_u H_1(u) = \nabla_u H_1^1(u) = \nabla_1,$$ and while $|{\lambda_1^k}/{\sigma_k} + \nabla_1 u| < {\alpha}/{\sigma_k}$, $$H_1(u) = H_1^2(u), \quad \nabla_u H_1(u) = \nabla_u H_1^2(u)=0.$$ Finally, while ${\lambda_1^k}/{\sigma_k} + \nabla_1 u = {\alpha}/{\sigma_k}$, $$\partial_u H_1(u) = \text{co} \{ \nabla_u H_1^1(u), \nabla_u H_1^2(u)\} = \text{co}\{ 0, \nabla_1 \},$$ which leads to . By Lemma \[lem:vector:semismooth:newton\], we find that $A_u^A(\nabla)$ in can be chosen as a Newton derivative of $H(u)$. Since $\nabla^*$ is a linear, bounded, and thus differentiable mapping, the Newton derivative of $F^A(u)$ can be chosen as by [@KK] (Lemma 8.15). For the positive definiteness, we see $$\langle (I + \sigma_k \nabla^*(\nabla - A_u^A(\nabla)))u, u\rangle_{2} = \|u\|_{2}^2 + \sigma_k \left[\langle \nabla_1 u, (I-\chi_{1, \hat \lambda^k}^{+})\nabla_1 u \rangle_2 + \langle \nabla_2 u, (I-\chi_{2, \hat \lambda^k}^{+})\nabla_2 u \rangle_2\right] \geq \|u\|_{2}^2,$$ since $\chi_{1, \hat \lambda^k}^{+} \leq I$ and $\chi_{2, \hat \lambda^k}^{+} \leq I$.
Then the semismooth Newton method for solving equation follows $$\label{eq:ssn:PT:ani}
(I + \sigma_k \nabla^*\nabla - \sigma_k\nabla^* A_{u^l}^A(\nabla))(\delta u^{l+1}) = -F^A(u^l).$$
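In the anisotropic case the building blocks are even simpler: the scalar soft-thresholding operator applied componentwise and the diagonal masks $\chi_{i, \hat \lambda^k}^{+}$. A small numpy sketch (illustrative names only, no discretization details):

```python
import numpy as np

def soft_scalar(x, t):                 # componentwise soft thresholding S_t(x)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def aniso_masks(lam_hat, grad_u, alpha, sigma):
    # q_i = lambda_hat_i + nabla_i u, i = 1, 2; chi_i^+ marks |q_i| >= alpha/sigma
    q = lam_hat + grad_u               # shape (2, N, M)
    chi1 = np.abs(q[0]) >= alpha / sigma
    chi2 = np.abs(q[1]) >= alpha / sigma
    return chi1, chi2

# The Newton derivative then acts as
#   v  ->  v + sigma * nabla^*( (I - diag(chi1, chi2)) nabla v ),
# so only the inactive components of nabla v contribute to the second term.
```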
We conclude this section with the following Algorithm \[alm:SSN\_PT\], i.e., the semismooth Newton method for the primal problem involving the soft thresholding operator (SSNPT) , which solves the subproblem of the $k$th ALM iteration applied to .
ALM with Semismooth Newton Involving Projection Operators {#sec:alm:ssnp:proj}
=========================================================
The Isotropic Total Variation
-----------------------------
For the isotropic TV, we rewrite the equation as follows $$\begin{aligned}
&F_{P}^I(u)=0, \quad F_P^I(u): = u -f + \nabla^*\mathcal{P}_{\alpha}(\lambda^k + \sigma_k \nabla u ) , \label{eq:newton:iso:u:st}\end{aligned}$$ where $\mathcal{P}_{\alpha}$ is the same as in . We thus use the semismooth Newton to solve , i.e., $$\label{eq:newton:deri:st:iso}
u - f + \nabla^*\left( \dfrac{(\lambda^k + \sigma_k \nabla u)}{\max(1.0, {|\lambda^k + \sigma_k \nabla u|}/{\alpha})}\right)=0 \Leftrightarrow u - f + \sigma_k \nabla^*\left( \dfrac{ \hat \lambda^k + \nabla u}{\max(1.0, |\hat \lambda^k+ \nabla u|/(\alpha \sigma_k^{-1}))}\right)=0.$$ The equation is the same as the equation with $h$ defined in . Now we solve it directly with the semismooth Newton method instead of introducing the auxiliary variable $h$. Let’s turn to the Newton derivative of . With Lemma \[eq:proj:semismooth\], we have the following lemma.
$F_P^I(u)$ in is a semismooth function of $u$. The Newton derivative of $F_P^I(u)$ in can be chosen as $$\label{eq:newton:deri:iso:proj}
I + \sigma_k \nabla ^* A_{u,I}^P \nabla,$$ where $A_{u,I}^P $ is as in of Lemma \[eq:proj:semismooth\]. The Newton derivative is positive definite in $u$ with a lower bound and thus satisfies the regularity condition, $$\langle (I + \sigma_k \nabla ^* A_{u,I}^P \nabla)u, u\rangle_{2} \geq \|u\|_{2}^2.$$
For the positive definiteness of the Newton derivative of , we have $$\begin{aligned}
&\langle (I + \sigma_k\nabla ^* A_{u,I}^P \nabla)u, u\rangle_{2} = \|u\|_{2}^2 +\sigma_k \langle A_{u,I}^P \nabla u, \nabla u \rangle_{2} \\
=& \|u\|_{2}^2 + \sigma_k\langle \chi_{u, \hat \lambda}^{-} \nabla u, \nabla u \rangle_{2} + \alpha \langle \chi_{u, \hat \lambda}^{+ }\dfrac{\nabla u}{ |\hat \lambda^k + \nabla u|} - \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3}, \nabla u\rangle_{2} \\
=& \|u\|_{2}^2 + \sigma_k\langle \chi_{u, \hat \lambda}^{-} \nabla u, \nabla u \rangle_{2} + \alpha \langle \chi_{u, \hat \lambda}^{+ }\dfrac{\nabla u}{ |\hat \lambda^k + \nabla u|} , \nabla u\rangle_{2} \\
&-\alpha \langle \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u|^3}, \nabla u\rangle_{2}
\geq \|u\|_{2}^2 + \sigma_k\langle \chi_{u, \hat \lambda}^{-} \nabla u, \nabla u \rangle_{2}\geq \|u\|_{2}^2,
\end{aligned}$$ since, by comparing the integrands, $$\begin{aligned}
&\langle \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle (\hat \lambda^k+ \nabla u )}{|\hat \lambda^k + \nabla u |^3}, \nabla u\rangle = \chi_{u, \hat \lambda}^{+ }\dfrac{ \langle \hat \lambda^k + \nabla u, \nabla u \rangle^2 }{|\hat \lambda^k + \nabla u|^3} \\
& \leq \chi_{u, \hat \lambda}^{+ }\dfrac{ | \hat \lambda^k + \nabla u|^2 |\nabla u|^2}{|\hat \lambda^k + \nabla u|^3} = \chi_{u, \hat \lambda}^{+ }\dfrac{ |\nabla u|^2}{|\hat \lambda^k + \nabla u|}.
\end{aligned}$$
The semismooth Newton method for solving follows $$\label{eq:ssn:pp:linear:iso}
(I + \sigma_k\nabla^* A_{u^l,I}^P\nabla)\delta u^{l+1} = - F_P^I(u^l).$$
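A compact sketch of the residual $F_P^I(u)$ used in this projection-based variant, under the same forward-difference discretization assumptions as before (illustrative helper names):

```python
import numpy as np

def grad(u):                                  # forward differences, Neumann boundary
    g = np.zeros((2,) + u.shape)
    g[0, :-1, :] = u[1:, :] - u[:-1, :]
    g[1, :, :-1] = u[:, 1:] - u[:, :-1]
    return g

def gradT(p):                                 # adjoint of grad, i.e. nabla^* = -div
    v = np.zeros(p.shape[1:])
    v[:-1, :] -= p[0, :-1, :]; v[1:, :] += p[0, :-1, :]
    v[:, :-1] -= p[1, :, :-1]; v[:, 1:] += p[1, :, :-1]
    return v

def proj(p, alpha):                           # pointwise projection onto |p_ij| <= alpha
    return p / np.maximum(np.sqrt(p[0]**2 + p[1]**2) / alpha, 1.0)

def F_P_iso(u, f, lam, alpha, sigma):
    # F_P^I(u) = u - f + nabla^* P_alpha(lambda^k + sigma_k * nabla u)
    return u - f + gradT(proj(lam + sigma * grad(u), alpha))
```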
The Anisotropic Total Variation
-------------------------------
For the anisotropic case, the projection onto the $l_1$ ball $\mathbb{B}_{\alpha}$ becomes $$\mathcal{P}_{\frac{\alpha}{\sigma_k}}(y) = (\mathcal{P}_{\frac{\alpha}{\sigma_k}}(y_1) ,\mathcal{P}_{\frac{\alpha}{\sigma_k}}(y_2) )^{T}, \quad y \in \mathbb{R}^2,$$ which is still a $PC^{1}$ function and hence semismooth. The nonlinear equation becomes $$\label{eq:ani:kunisch}
F_P^A(u): = u - f + \sigma_k [\nabla_x^*, \nabla_y^* ]\begin{bmatrix}
\dfrac{\hat \lambda_1^k + \nabla_1 u}{\max(1.0, |\hat \lambda_1^k + \nabla_1 u|/(\alpha\sigma_k^{-1}))} \\
\dfrac{\hat \lambda_2^k + \nabla_2 u}{\max(1.0, |\hat \lambda_2^k + \nabla_2 u|/(\alpha\sigma_k^{-1}))}
\end{bmatrix}=0.$$ Let’s introduce
\[eq:act:set:iso:kun:shreshold:sni\] $$\begin{aligned}
&\chi_{u, 1,\hat \lambda^k}^{+} = \begin{cases}
1, \ \ \ |\hat \lambda_1^k+ \nabla_1 u | \geq {\alpha}/{\sigma_k}, \\
0, \ \ \ |\hat \lambda_1^k+ \nabla_1 u | < {\alpha}/{\sigma_k},
\end{cases} \quad
\chi_{u, 1, \hat \lambda^k}^{-} = \begin{cases}
1, \ \ \ |\hat \lambda_1^k+ \nabla_1 u | < {\alpha}/{\sigma_k}, \\
0, \ \ \ |\hat \lambda_1^k+ \nabla_1 u | \geq {\alpha}/{\sigma_k}.
\end{cases} \\
& \chi_{u, 2,\hat \lambda^k}^{+} = \begin{cases}
1, \ \ \ |\hat \lambda_2^k+ \nabla_2 u | \geq {\alpha}/{\sigma_k}, \\
0, \ \ \ |\hat \lambda_2^k+ \nabla_2 u | < {\alpha}/{\sigma_k},
\end{cases} \quad
\chi_{u, 2, \hat \lambda^k}^{-} = \begin{cases}
1, \ \ \ |\hat \lambda_2^k+ \nabla_2 u | < {\alpha}/{\sigma_k}, \\
0, \ \ \ |\hat \lambda_2^k+ \nabla_2 u | \geq {\alpha}/{\sigma_k}.
\end{cases}
\end{aligned}$$
Similar to Lemma \[eq:proj:semismooth\] of the isotropic case, for the anisotropic case, we have the following lemma on the Newton derivative involving the projection operator. The proof is similar and we omit it here.
\[eq:proj:semismooth:ani\] The anisotropic projection $\mathcal{P}_{\frac{\alpha}{\sigma_k}}(\hat \lambda^k + \nabla u )$ is a semismooth function of $u$. Furthermore, we have $$\label{eq:inclusion:iso:ani}
[A^{P,s_1}_{u,1}(\nabla_1 \cdot \ ),A^{P,s_2}_{u,2}(\nabla_2 \cdot \ ) ]^{T} \subset (\partial_{u} \mathcal{P}_{\frac{\alpha}{\sigma_k}}(\hat \lambda^k + \nabla u )).$$ For any $v$, we have $$\label{eq:first:minus:ani}
A^{P,s_i}_{u,i}(\nabla_i v ) = \begin{cases}
A_{u,i}^{-}(\nabla_i v) := \nabla_i v, &|\hat \lambda_i^k+ \nabla_i u | < \frac{\alpha}{\sigma_k} \\
\dfrac{\alpha}{\sigma_k}\left( \dfrac{\nabla_i v}{|\hat \lambda_i^k+ \nabla_i u |} - s_i\dfrac{\langle \hat \lambda_i^k + \nabla_i u, \nabla_i v \rangle (\hat \lambda_i^k + \nabla_i u) }{|\hat \lambda_i^k+ \nabla_i u |^3} \right), \ s_i \in [0,1], \ &|\hat \lambda_i^k+ \nabla_i u | = \frac{\alpha}{\sigma_k} \\
A_{u,i}^{+}(\nabla_i v) :=\dfrac{\alpha}{\sigma_k}\left( \dfrac{\nabla_i v}{|\hat \lambda_i^k+ \nabla_i u |} - \dfrac{\langle \hat \lambda_i^k + \nabla_i u, \nabla_i v \rangle (\hat \lambda_i^k + \nabla_i u) }{|\hat \lambda_i^k+ \nabla_i u |^3} \right). &|\hat \lambda_i^k+ \nabla_i u | > \frac{\alpha}{\sigma_k}
\end{cases}$$ Throughout this paper, we choose the following subgradient for computations $$A_{u,A}^P(\nabla v) = \begin{bmatrix}
\chi_{u, 1, \hat \lambda_1^k}^{+} A_{u,1}^{+}(\nabla_1 v) + \chi_{u, 1, \hat \lambda_1^k}^{-} A_{u,1}^{-}(\nabla_1 v) \\
\chi_{u, 2, \hat \lambda_2^k}^{+} A_{u,2}^{+}(\nabla_2 v) + \chi_{u, 2, \hat \lambda_2^k}^{-} A_{u,2}^{-}(\nabla_2 v)
\end{bmatrix},$$ which means we always choose $s_i=1$ in .
\[eq:ani:kunisch:positive2\] $F_P^A(u)$ in is a semismooth function of $u$. The Newton derivative of $F_P^A(u)$ in can be chosen as $$I + \sigma_k\nabla ^* A_{u,A}^P \nabla,$$ which is positive definite in $u$ with a lower bound and thus satisfies the regularity condition, $$\langle (I + \sigma_k\nabla ^* A_{u,A}^P \nabla)u, u\rangle_{2} \geq \|u\|_{2}^2.$$
The linear equation for Newton update becomes $$\label{eq:ssn:pp:linear:ani}
(I + \sigma_k\nabla ^* A_{u^l,A}^P \nabla )\delta u^{l+1} = -F_P^A(u^l).$$ We conclude this section with the following semismooth Newton method for solving the primal problem with the projection operator (SSNPP) , which solves the subproblem of the $k$th ALM iteration applied to .
Convergence of the Augmented Lagrangian Methods and the Corresponding Semismooth Newton Methods {#sec:convergece:ssn:alm}
===============================================================================================
Convergence of the semismooth Newton method
-------------------------------------------
Suppose $x^*$ is a solution to $F(x)=0$ and that $F$ is Newton differentiable at $x^*$ with Newton derivative $G$. If $G$ is nonsingular for all $x \in N(x^*)$ and $\{ \|G(x)^{-1}\| : x \in N(x^*)\}$ is bounded, then Newton iteration $$x^{l+1} = x^l-G(x^l)^{-1}F(x^l),$$ converges superlinearly to $x^*$ provided that $|x^0-x^*|$ is sufficiently small.
For the semismooth Newton method, once the Newton derivative exists and is nonsingular at the solution point, we can employ it [@KK], possibly with some globalization strategy [@DFC]. The existence of the Newton derivative depends on the semismoothness of the corresponding functions or mappings. We now turn to the semismoothness for our cases, where each Newton derivative is nonsingular (uniformly regular) and has a lower bound whenever it exists; see Lemmas \[lem:positive:iso:pd\], \[lem:positive:ani:pd\], \[lem:positive:ani:p\], \[lem:positive:iso:p\]. The convergence of the semismooth Newton methods of Algorithms \[alm:SSN\_PP\] and \[alm:SSN\_PT\] is standard under suitable stopping criteria and globalization strategies. For the convergence of the semismooth Newton methods in Algorithms \[alm:SSN\_PDP\] and \[alm:SSN\_PDD\], we refer to [@HS; @SH] for the analysis of the semismooth Newton method under perturbations.
Here, we follow the standard stopping criterion for the inexact augmented Lagrangian method [@Roc1; @Roc2] and [@LST; @ZZST]. $$\begin{aligned}
& \Phi_k(u^{k+1},h^{k+1}) - \inf \Phi_k \leq \epsilon_k^2/2\sigma_k, \quad \sum_{k=0}^{\infty}\epsilon_k < \infty, \label{stop:a} \tag{A} \\
& \Phi_k(u^{k+1},h^{k+1}) - \inf \Phi_k \leq \delta_k^2/2\sigma_k\|\lambda^{k+1}-\lambda^k\|^2, \quad \sum_{k=0}^{\infty}\delta_k < +\infty, \label{stop:b1} \tag{B1}\\
&\text{dist}(0, \partial \Phi_k(u^{k+1}, h^{k+1})) \leq \delta_k'/\sigma_k\|\lambda^{k+1} - \lambda^k\|, \quad 0 \leq \delta_k' \rightarrow 0. \label{stop:b2}\tag{B2}\end{aligned}$$
We conclude this section with the following algorithmic framework of ALM with Algorithm \[alm:SSN\_ALM\]. Henceforth, we denote ALM-PDP or ALM-PDD as the ALM with the Algorithm \[alm:SSN\_PDP\] (SSNPDP) or Algorithm \[alm:SSN\_PDD\] (SSNPDD). We also denote ALM-PT or ALM-PP as the ALM with the Algorithm \[alm:SSN\_PT\] (SSNPT) or Algorithm \[alm:SSN\_PP\] (SSNPP).
For Algorithms \[alm:SSN\_PDP\], \[alm:SSN\_PDD\] or Algorithm \[alm:SSN\_PP\], the multiplier is updated by , i.e., $$\lambda^{k+1} = \mathcal{P}_{\alpha}(\lambda^k + \sigma_k \nabla u^{k+1}).$$
Convergence of the Augmented Lagrangian Method
----------------------------------------------
It is well-known that the augmented Lagrangian method can be seen as applying the proximal point algorithm to the dual problem [@Roc1; @Roc2]. The convergence and the corresponding rate of the augmented Lagrangian method are closely related to the convergence of the proximal point algorithm. In particular, the local linear convergence of the multipliers and of the primal or dual variables is mainly determined by the metric subregularity of the corresponding maximal monotone operators [@Roc1; @Roc2; @LE; @LU]. We now turn to analyzing the metric subregularity of these maximal monotone operators, which is usually sufficient for the asymptotic (or local) linear convergence of ALM.
Now we introduce some basic definitions and properties of multivalued mappings from convex analysis [@DR; @LST]. Let $F: X \Longrightarrow Y $ be a multivalued mapping. The graph of $F$ is defined as the set $$\text{gph} F: = \{ (x,y) \in X\times Y| y\in F(x)\}.$$ The inverse of $F$, i.e., $F^{-1}: Y \Longrightarrow X$, is defined as the multivalued mapping whose graph is $\{(y,x)| (x,y) \in \text{gph} F\}$. The distance of $x$ from the set $C\subset X$ is defined by $$\text{dist}(x,C): = \inf\{\|x-x'\|\ | \ x' \in C\}.$$ Let’s introduce the error bound condition and metric subregularity for $F$ [@DR; @LST].
\[def:errorbound\] Suppose $y \in Y$ and $F^{-1}(y) \neq \emptyset$. A mapping $F: X \Longrightarrow Y$ is said to satisfy the error bound condition for the point $y$ with modulus $\kappa \geq 0$ if there exists $\epsilon >0$ such that if $x \in X$ with $\text{dist}(y, F(x)) \leq \epsilon$, then $$\label{eq:error:bound}
\text{dist}(x, F^{-1}(y)) \leq \kappa \text{dist}(y, F(x)).$$
\[def:metricregular\] A mapping $F: X \Longrightarrow Y$ is called metrically subregular at $\bar x$ for $\bar y $ if $(\bar x, \bar y) \in \text{gph} F$ and there exists $\kappa \geq 0$ along with neighborhoods $U$ of $\bar x$ and $V$ of $\bar y$ such that $$\label{eq:metricregular}
\text{dist}(x, F^{-1}(\bar y)) \leq \kappa \text{dist}(\bar y, F(x) \cap V ) \quad \text{for all} \ \ x \in U.$$
A mapping $S: \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ is said to be calm at $\bar y$ for $\bar x$ if $(\bar y, \bar x) \in \text{gph} \ S$, and there is a constant $\kappa \geq 0$ along with neighborhoods $U$ of $\bar x$ and $V $ of $\bar y$ such that $$\label{calmness:def}
S(y) \cap U \subset S(\bar y) + \kappa \|y-\bar y\| \mathbb{B}, \quad \forall y \in V.$$ In , $\mathbb{B}$ denotes the closed unit ball in $\mathbb{R}^n$.
Actually, with Definitions \[def:errorbound\] and \[def:metricregular\], one can see that if $F$ satisfies the error bound condition for $\bar y$ with modulus $\kappa$, then it is also metrically subregular at $\bar x$ for $\bar y$ with the same modulus for any $\bar x\in F^{-1}(\bar y)$.
Let’s now turn to the finite dimensional space setting in detail. Suppose $$\begin{aligned}
& \nabla_x \in \mathbb{R}^{m\times n}: \mathbb{R}^n \rightarrow \mathbb{R}^m, \ \ \nabla_y\in \mathbb{R}^{m\times n}: \mathbb{R}^n \rightarrow \mathbb{R}^m, \ \ u=(u^1, \cdots, u^n)^{T} \in \mathbb{R}^n, \\
&p=(p_1,\cdots, p_m)^T \in \mathbb{R}^{2m}, \quad p_i = (p_i^1, p_i^2)^T \in \mathbb{R}^2, \\
& \lambda=(\lambda_1,\cdots, \lambda_m)^T \in \mathbb{R}^{2m}, \quad \lambda_i = (\lambda_i^1, \lambda_i^2)^T \in \mathbb{R}^2. \\\end{aligned}$$ For the anisotropic case, we see $$\|p\|_{1} =\sum_{i=1}^m \|p_i\|_1 = \sum_{i=1}^m|p_i^1| + \sum_{i=1}^m|p_i^2|=\sum_{i=1}^m(|p_i^1| + |p_i^2|),$$ which is a polyhedral function.
For the isotropic case, we notice $$\|p\|_{1} = \sum_{i=1}^m |p_i| = \sum_{i=1}^m \sqrt{{p_i^1}^2 +{p_i^2}^2}$$ which is not a polyhedral function. Fortunately, it is a group Lasso norm [@YY; @ZZST]. The norm $ \|p\|_{\infty} $ is defined as $$\quad \|p\|_{\infty} = \sup_{i}\{ \|p_i\|_1 | \ i=1, \cdots, m\}.$$ Now, let’s turn to the anisotropic case first. Introduce the Lagrangian function $$l(u,p,\lambda) = \frac{1}{2}\|Au-f\|_{2}^2 + \alpha \|p\|_{1} + \langle \nabla u-p, \lambda \rangle.$$ It is well-known that $l$ is a convex-concave function of $(u,p,\lambda)$. Define the maximal monotone operator $T_{l}$ by $$T_{l}(u,p,\lambda) =\{(u',p',\lambda')|(u',p',-\lambda')\in \partial l(u,p,\lambda)\},$$ and the corresponding inverse is given by $$T_{l}^{-1}(u',p',\lambda') =\{(u,p,\lambda)|(u',p',-\lambda')\in \partial l(u,p,\lambda)\}.$$
\[thm:metric:regular:lag\] For the anisotropic ROF model, assuming the KKT system has at least one solution, then $T_{l}$ is metrically subregular at $(\bar u, \bar p, \bar \lambda)^T$ for the origin.
Actually, with $x=(u,p,\lambda)^{T}$, we have $$T_{l}(u,p,\lambda) = (A^*(Au-f)+\nabla^*\lambda, \ -\lambda + \alpha \partial \|p\|_{1}, \ p-\nabla u)^{T} = \mathcal{A}(x) + \mathcal{B}(x),$$ where $$\mathcal{A}\begin{pmatrix}
u\\p\\ \lambda
\end{pmatrix}
:=\begin{pmatrix}
0 & 0& 0\\
0 & \alpha \partial \|\cdot\|_{1} &0 \\
0 &0&0
\end{pmatrix}\begin{pmatrix}
u\\p\\ \lambda
\end{pmatrix},
\quad
\mathcal{B}\begin{pmatrix}
u\\p\\ \lambda
\end{pmatrix} :=
\begin{pmatrix}
A^*A & 0 &\nabla^* \\
0&0&-I \\
-\nabla & I &0
\end{pmatrix}
\begin{pmatrix}
u\\p\\ \lambda
\end{pmatrix}
+\begin{pmatrix}
-A^*f \\ 0 \\0
\end{pmatrix}.$$ It can be seen that the monotone operator $\mathcal{A}$ is polyhedral since the anisotropic $\|\cdot\|_{1}$ is a polyhedral convex function, and the operator $\mathcal{B}$ is a maximal monotone and affine operator. Thus $T_{l}$ is a polyhedral mapping [@Rob]. By the corollary in [@Rob], we see that $T_{l}$ satisfies the error bound condition for the origin and thus is also metrically subregular at $(\bar u, \bar p, \bar \lambda)^{T}$ for the origin.
Let’s now turn to the metric subregularity of $\partial d$ for the dual problem , supposing $(\partial d)^{-1}(0) \neq \emptyset$ and there exists $\bar \lambda$ such that $0 \in (\partial d)(\bar \lambda)$, $$\label{eq:subgradient:dual}
(\partial d)(\lambda) = \operatorname{div}^*(\operatorname{div}\lambda + f) + \partial g(\lambda), \quad g(\lambda): = {\mathcal{I}}_{\{{\|{\lambda}\|_{\infty}} \leq \alpha\}}(\lambda).$$
For the anisotropic case, actually, the constraint set is a polyhedral convex set in $\mathbb{R}^{2m}$, since $$g(\lambda) = 0 \Leftrightarrow \left\{\lambda=(\lambda_1,\lambda_2, \cdots, \lambda_m)^{T} \ | \ \lambda_i \in \mathbb{R}^{2}, \ \|\lambda_i\|_1= \ |\lambda_i^1| + |\lambda_i^2| \leq \alpha, \ i =1, 2, \cdots, m \right\}.$$ Together with $\operatorname{div}^*(\operatorname{div}\lambda + f) $ being an affine and monotone mapping, $\partial d$ is a polyhedral mapping by [@Rob]. This leads to the metric subregularity of $\partial d$ at $\bar \lambda$ for the origin by a similar argument as in Theorem \[thm:metric:regular:lag\].
Now we turn to the isotropic case. The metric subregularity of $\partial d$ is more subtle, since the constraint set $$g(\lambda) = 0 \Leftrightarrow \left\{\lambda=(\lambda_1, \cdots, \lambda_m)^{T} \ | \ \lambda_i \in \mathbb{R}^{2}, \ |\lambda_i|= \sqrt{ (\lambda_i^1)^2 + (\lambda_i^2)^2} \leq \alpha, \ i =1,\cdots, m\right\}$$ is not a polyhedral set. Introduce $$g_i(\lambda_i) = {\mathcal{I}}_{\{ |\lambda_i|\leq \alpha\}}(\lambda_i), \quad i=1,2,\cdots, m.$$ Henceforth, let’s denote $\mathbb{B}_{a}(\bar \lambda_i)$ or $\mathbb{B}^k_{a}(\bar \lambda_i)$ as the Euclidean closed ball centered at $\bar \lambda_i \in \mathbb{R}^2$ with radius $a$. Furthermore, denote $ \mathbb{B}_{a}( \bar\lambda) = \Pi_{i=1}^m \mathbb{B}_{a}(\bar \lambda_i)$ with $\bar \lambda = (\bar \lambda_1, \cdots, \bar \lambda_m)^T$. We can thus write $$\partial g = \Pi_{i=1}^m\partial g_{i} = \Pi_{i=1}^m\partial {\mathcal{I}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_i).$$ It is known that each $\partial {\mathcal{I}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_i)$ is metrically subregular at every $(\bar \lambda_i, \bar v_i) \in \text{gph} \partial {\mathcal{I}}_{\mathbb{B}_{\alpha}^i(0)} $ [@YY]. For the metric subregularity of $\partial g$, we have the following lemma.
\[lem:metric:subregular:g\] For any $(\bar \lambda, \bar v)^T \in \emph{gph} \ \partial g$, $\partial g$ is metrically subregular at $\bar \lambda$ for $\bar v$.
For any $(\bar \lambda, \bar v)^T \in \text{gph} \ \partial g$, each $\partial g_i$ is metrically subregular at $\bar\lambda_i$ for $\bar v_i$ with some modulus $\kappa_i$. Hence, for any $\lambda$ in a suitable neighborhood of $\bar \lambda$, $$\begin{aligned}
&\text{dist}^2(\lambda, (\partial g)^{-1}(\bar v) ) = \sum_{i=1}^m \text{dist}^2(\lambda_i, (\partial g_i)^{-1}(\bar v_i) ) \\
& \leq \sum_{i=1}^m \kappa_i^2 \text{dist}^2(\bar v_i, (\partial g_i)(\lambda_i))
\leq \max_{1\leq i \leq m}\kappa_i^2 \sum_{i=1}^m \text{dist}^2(\bar v_i, (\partial g_i)(\lambda_i)) \\
& = \max_{1\leq i \leq m}\kappa_i^2 \ \text{dist}^2(\bar v, (\partial g)(\lambda)).
\end{aligned}$$ Thus, with the choice $\kappa = \max_{1\leq i\leq m}\kappa_i$, we find that $\partial g$ is metrically subregular at $\bar \lambda$ for $\bar v$ with modulus $\kappa$.
By [@HS] (Theorem 2.1), the solution $\bar u$ of the primal problem and the solution $\bar \lambda$ of the dual problem have the following relations $$\begin{aligned}
-\alpha \nabla \bar u + |\nabla \bar u|\bar \lambda = 0, \quad &|\bar \lambda| = \alpha, \label{eq:hk:resi:neq0} \\
\nabla \bar u = 0, \quad &|\bar \lambda| < \alpha, \label{eq:hk:resi:eq0}\end{aligned}$$ which can be derived from the optimality conditions . Now we turn to a more general model compared to . Suppose $A \in \mathbb{R}^{m\times 2m}: \mathbb{R}^{2m} \rightarrow \mathbb{R}^m$, $$\label{eq:dual:ROF:general}
f(\lambda) = \frac{\| A \lambda -b\|^2 }{2} + \langle q, \lambda \rangle + \Pi_{i=1}^m{\mathcal{I}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_{i}),
\quad b\in \mathbb{R}^m,$$ where ${\mathcal{I}}_{\mathbb{B}_{\alpha}^i(0)}(x)$ is the indicator function for the following $l_2$ ball constraints, $$\mathbb{B}_{\alpha}^i(0) := \left\{\lambda_{i}: = (\lambda_{i}^1,\lambda_{i}^2)^{T} \in \mathbb{R}^2 \ | \ |\lambda_{i}| = \sqrt{(\lambda_{i}^1)^2 + (\lambda_{i}^2)^2} \leq \alpha\right\}, \ \ i = 1,\cdots, m, \ \alpha >0.$$ Supposing $g(\lambda) = \Pi_{i=1}^m{\mathcal{I}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_{i})$, $f_1(\lambda) = {\| A \lambda -b\|^2 }/{2} + \langle q, \lambda \rangle $, let’s introduce $$\begin{aligned}
&\mathcal{X}: = \{ \lambda \ | \ A\lambda=\bar y, \quad -\bar g \in \partial g(\lambda)\}, \\
&\Gamma_1(p^1) = \{ \lambda \ | \ A \lambda - \bar y = p^1 \}, \quad
\Gamma_2(p^2) = \{ \lambda \ | \ p^2 \in \bar g + \partial g(\lambda) \}, \\
&\hat \Gamma(p^1) = \Gamma_1(p^1)\cap\Gamma_2(0) = \{ \lambda\ | \ p^1 = A\lambda - \bar y, \ 0 \in \bar g + \partial g(\lambda) \},\\\end{aligned}$$ where $\bar y$ denotes the (common) value of $A\lambda$ over the solution set of , $\mathcal{X}$ is actually this solution set, and $$\bar g := A^T \nabla h(\bar y) +q=(\bar g_1, \bar g_2, \cdots, \bar g_m)^{T},\ h(y) = \|y-b\|^2/2, \quad \bar g_i \in \mathbb{R}^2.$$ We also need another two set-valued mappings, $$\begin{aligned}
&\Gamma(p^1, p^2):=\{\lambda \ |\ p^1 = A \lambda -\bar y, \quad p^2 \in \bar g + \partial g(\lambda) \},\\
&S(p):=\{\lambda \ | \ p \in \nabla f_1(\lambda)+ \partial g(\lambda) \} \Rightarrow \mathcal{X}=S(0).\end{aligned}$$ Actually the metric subregularity of $\partial f$ at $(\bar \lambda, 0)$ is equivalent to the calmness of $S$ at $(0, \bar \lambda)$ [@DR]. Now we turn to the calmness of $S$. By [@YY] (Proposition 7), the calmness of $S$ at $(0, \bar \lambda)$ is equivalent to the calmness of $\Gamma$ at $(0,0, \bar \lambda)$ for any $\bar \lambda \in S(0)$. We will use the following calm intersection theorem to prove the calmness of $\Gamma$.
\[prop:calm:inter\] Let $T_1: \mathbb{R}^{q_1} \rightrightarrows \mathbb{R}^n$, $T_2: \mathbb{R}^{q_2} \rightrightarrows \mathbb{R}^n$ be two set-valued maps. Define set-valued maps $$\begin{aligned}
T(p^1, p^2): &= T_1(p^1)\cap T_2(p^2), \\
\hat T(p^1):&= T_1(p^1)\cap T_2(0).
\end{aligned}$$ Let $\tilde x \in T(0,0)$. Suppose that both set-valued maps $T_1$ and $T_2$ are calm at $(0, \tilde x)$ and $T_1^{-1}$ is pseudo-Lipschitiz at $(0,\tilde x)$. Then $T$ is calm at $(0,0, \tilde x)$ if and only if $\hat T$ is calm at $(0, \tilde x)$.
We need the following assumption first, which is actually a mild condition by and .
\[asump:existence\] Let’s assume that $\bar \lambda \in \mathcal{X}$ and, for each $i=1,\cdots,m$,

- either $\bar \lambda_{i} \in \emph{bd}\, \mathbb{B}_{\alpha}^i(0)$ and there exists $\bar g_i \neq 0 $ such that $\bar g_i \in {\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\bar \lambda_i)$,

- or $\bar \lambda_{i} \in \emph{int}\, \mathbb{B}_{\alpha}^i(0)$.
\[thm:metric:regular:dual:iso\] For the problem , supposing the dual problem has at least one solution $\bar \lambda$ satisfying the Assumption \[asump:existence\], then $\partial f$ is metrically subregular at $\bar \lambda$ for the origin.
We mainly need to prove the calmness of $\hat\Gamma(p^1)$ at $(0,\bar \lambda)$. By the metric subregularity of $\partial g$ from Lemma \[lem:metric:subregular:g\] and the fact that $\Gamma_1^{-1}$ is pseudo-Lipschitz and boundedly metrically subregular at $(0,\bar \lambda)$ ([@YY], Lemma 3), together with the calm intersection theorem of Proposition \[prop:calm:inter\], we get the calmness of $\Gamma$ at $(0,0, \bar \lambda)$. We thus obtain the calmness of $S$ at $(0, \bar \lambda)$ and the metric subregularity of $\partial f$ at $\bar \lambda$ for the origin.
Now let’s focus on the calmness of $\hat\Gamma(p^1)$ at $(0,\bar \lambda)$. Our proof is an extension of Proposition 11 of [@YY]. Without loss of generality, suppose $$\begin{aligned}
& \bar \lambda_{i} \in \text{int} \mathbb{B}_{\alpha}^i(0), \quad i = 1, \cdots, L; \\
& \bar \lambda_{i} \in \text{bd} \mathbb{B}_{\alpha}^i(0), \quad \bar g_i \neq 0 \in {\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\bar \lambda_i), \quad i = L+1, \cdots, m, \quad 1<L<m.\end{aligned}$$ For $i=1,\cdots, L$, since $\bar \lambda_{i} \in \text{int} \mathbb{B}_{\alpha}^i(0)$, we have $\bar g_i \in {\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\bar \lambda_{i}) = \{0\}$. We thus conclude $\bar g_i = 0$ and $$\Gamma_2^i(0) = \{\lambda_i | 0 \in{\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_i) \} = \mathbb{ B}_{\alpha}^i(0), \quad i = 1, \cdots, L.$$ For $i=L+1,\cdots, m$, since $\bar \lambda_{i} \in \text{bd}\mathbb{ B}_{\alpha}^i(0)$, for any $\lambda_i \in \mathbb{B}_{\epsilon}(\bar \lambda_{i}) \cap \mathbb{ B}_{\alpha}^i(0)$ we have either $\lambda_i \in \text{bd}\mathbb{ B}_{\alpha}^i(0)$ or $\lambda_i \in \text{int}\mathbb{ B}_{\alpha}^i(0)$. While $\lambda_i \in \text{int}\mathbb{ B}_{\alpha}^i(0)$, by the definition of $\Gamma_2(0)$, together with ${\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_i) = \{0\}$, we would get $\bar g_i = 0$, which contradicts $\bar g_i \neq 0$. While $\lambda_i \in \text{bd}\mathbb{ B}_{\alpha}^i(0)$, since $${\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\lambda_i) = \{s \lambda_i|s\geq 0\}, \quad{\mathcal{N}}_{\mathbb{B}_{\alpha}^i(0)}(\bar \lambda_i) = \{s_1 \bar \lambda_i|s_1\geq 0\},$$ together with the definition of $\Gamma_2$, we see the only choice is $$\Gamma_2^i(0) = \{ \bar \lambda_{i}\} , \quad i = L+1, \cdots, m.$$ Choose $\epsilon >0$ small enough such that $\mathbb{B}_{\epsilon}^i(\bar \lambda_{i}) \subset \mathbb{B}_{\alpha}^i(0)$, $i=1,\cdots, L$. We thus conclude that $$\Gamma_2(0) \cap \mathbb{B}_{\epsilon}(\bar \lambda) = \mathbb{B}_{\epsilon}^1(\bar \lambda_{1}) \times \cdots \times \mathbb{B}_{\epsilon}^L(\bar \lambda_{L}) \times \{\bar \lambda_{{L+1}}\} \times \cdots \times \{\bar \lambda_{m}\}.$$ Suppose $p=(p_1, p_2, \cdots, p_m)^{T}$ and $ \lambda \in \Gamma_1(p) \cap \Gamma_2(0) \cap \mathbb{B}_{\epsilon}(\bar \lambda)$, $p_i \in \mathbb{R}^2 $, $i=1, 2, \cdots, m$. Introduce the following constraint set on $\lambda$ $$\mathcal{R}: = \{ \lambda \ | \ \lambda_{i} \in \mathbb{R}^2, \ i=1, \cdots, L, \ \lambda_{i} = \bar \lambda_{i}, \ i=L+1, \cdots, m \},$$ which is a convex and closed polyhedral set. This can be seen as follows. Denote $\bar L = m-L$, let $0_{2L\times2m} \in \mathbb{R}^{2L\times2m}$ and $0_{2 \bar L\times2L} \in \mathbb{R}^{2 \bar L\times2L}$ be zero matrices, and let $I_{2\bar L\times 2\bar L} \in \mathbb{R}^{2 \bar L\times2 \bar L}$ be the identity matrix. Introduce $$E_{+} = [0_{2L\times2m};
0_{2 \bar L\times2L} \ I_{2 \bar L\times 2 \bar L}] \in \mathbb{R}^{2m \times 2m}, \quad
E_{-} = [0_{2L\times2m};
0_{2 \bar L\times2L} \ -I_{2 \bar L\times 2 \bar L}] \in \mathbb{R}^{2m \times 2m}$$ $$E = [E_{+}; E_{-}] \in \mathbb{R}^{4m \times 2m}, \quad \bar \lambda_0 = [0, \cdots, 0, \bar \lambda_{L+1}, \cdots, \bar \lambda_{m}] \in \mathbb{R}^{2m}.$$ We have $\mathcal{R} = \{ \lambda \ | \ E \lambda \leq E \bar \lambda_0 \}$, which is a polyhedral set. Consider the following problem $$A \tilde \lambda - \bar y = p, \quad \tilde \lambda \in \mathcal{R},$$ and the corresponding solution set, which is also a polyhedral set $$\hat \Gamma_1(p) = \{ \lambda \ | \ A \lambda - \bar y = p, \quad \lambda \in \mathcal{R}\}.$$ For any $\lambda \in \Gamma_1(p)\cap\Gamma_2(0) \cap \mathbb{B}_{\epsilon}(\bar \lambda) $, denote $\tilde \lambda$ as its projection on $\hat \Gamma_1(0)$. We thus have $$\|\lambda - \tilde \lambda \| \leq \| \lambda - \bar \lambda\| \Rightarrow \tilde \lambda \in \mathbb{B}_{\epsilon}(\bar \lambda).$$ Together with the celebrated Hoffman error bound [@PVZ], there exists a constant $\kappa$ such that $$\text{dist}(\lambda, \hat \Gamma(0)) \leq \|\lambda -
\tilde \lambda\| = \text{dist}(\lambda, \hat \Gamma_1(0) ) \leq \kappa \|p\|,\quad \forall \lambda \in \hat \Gamma(p) \cap \mathbb{B}_{\epsilon}(\bar \lambda).$$ We thus get the calmness of $\hat\Gamma(p^1)$ at $(0,\bar \lambda)$ and the proof is finished.
A similar result is also given in [@YTP] (see Example 4.1(ii) of [@YTP]), where a delicate analysis based on LMI-representable (Linear Matrix Inequality) functions is employed.
Henceforth, we denote $\mathcal{X}^A$ and $\mathcal{X}^I$ as the solution sets for the dual problem for anisotropic ROF and isotropic ROF respectively. With the stopping criterion , we have the following global and local convergence.
\[thm:ani:KKT\] For the anisotropic ROF model, denote by $(u^k, p^k,\lambda^k)$ the iteration sequence generated by ALM-PDP, ALM-PDD, ALM-PT or ALM-PP with stopping criterion . Then the sequence $(u^k, p^k,\lambda^k)$ is bounded and converges to $(u^*, p^*, \lambda^*)$. If $T_d: =\partial d$ is metrically subregular at $\lambda^*$ for the origin with modulus $\kappa_d$ and the additional stopping criterion is employed, then the sequence $\{\lambda^k\}$ converges to $\lambda^* \in \mathcal{X}^A$ and, for all sufficiently large $k$, $$\label{eq:convergence:rate:dual:ani}
\emph{dist}(\lambda^{k+1}, \mathcal{X}^A) \leq \theta_k \emph{dist}(\lambda^k, \mathcal{X}^A),$$ where $$\theta_k = [\kappa_d(\kappa_d^2 + \sigma_k^2)^{-1/2} + \delta_k](1-\delta_k)^{-1}, \ \emph{as} \ k\rightarrow \infty, \ \theta_k \rightarrow \theta_{\infty} = \kappa_d(\kappa_d^2+ \sigma_{\infty}^2)^{-1/2} < 1.$$ If, in addition, $T_l$ is metrically subregular at $(u^*, p^*, \lambda^*)$ for the origin with modulus $\kappa_l$ and the stopping criterion is employed, then for sufficiently large $k$, we have $$\label{eq:convergence:rate:up:ani}
\|(u^{k+1}, p^{k+1}) - (u^*, p^*)\| \leq \theta_k'\|\lambda^{k+1}-\lambda^k\|,$$ where $\theta_k'=\kappa_l(1+\delta_k')/\sigma_k$ with $\displaystyle{\lim_{k\rightarrow \infty}\theta_k' = \kappa_l/\sigma_{\infty}}$.
Since $X$ is a finite dimensional reflexive space and the primal functional is proper, l.s.c., and strongly convex, hence coercive, the existence of a solution is guaranteed [@KK] (Theorem 4.25). Furthermore, since $\operatorname{dom}D = X$ and $\operatorname{dom}\|\cdot \|_1 = Y$, by Fenchel-Rockafellar theory [@KK] (Chapter 4.3) (or Theorem 5.7 of [@Cla]), the solution set of the dual problem is not empty and $$\inf_{u \in X} D(u) + \alpha \|\nabla u\|_{1} = \sup_{\lambda \in Y} -d(\lambda).$$ By [@Roc2] (Theorem 4) (or Theorem 1 of [@Roc1], where the augmented Lagrangian method essentially arises from the proximal point method applied to the dual problem through $\partial d$), with criterion , we get the boundedness of $\{\lambda^k\}$. The uniqueness of $(u^*,p^*)$ follows from the strong convexity of $F(u)$ and $p^*=\nabla u^*$, which is one of the KKT conditions. The boundedness of $(u^k,p^k)$ and the convergence of $(u^k, p^k, \lambda^k)$ then follow by [@Roc2] (Theorem 4).
The local convergence rate of $\{\lambda^k\}$ under the metric subregularity of $T_d$ and the stopping criterion can be obtained from [@Roc2] (Theorem 5) (or Theorem 2 of [@Roc1]). Now we turn to the local convergence rate of $(u^k,p^k)$. By the metric subregularity of $T_l$, for sufficiently large $k$, we have $$\| (u^{k+1}, p^{k+1}) - (u^*, p^*)\| + \text{dist} (\lambda^k, \mathcal{X}^A) \leq \kappa_l \text{dist} (0, T_l(u^{k+1}, p^{k+1}, \lambda^{k+1})).$$ Together with the stopping criterion and [@Roc2] (Theorem 5 and the corollary with formula (4.21)), we arrive at $$\begin{aligned}
\| (u^{k+1}, p^{k+1}) - (u^*, p^*))\| &\leq \kappa_l \sqrt{{\delta_k'^2}{\sigma_k^{-2}} \|\lambda^{k+1} -\lambda^k\|^2 + {\sigma_k^{-2}} \|\lambda^{k+1} -\lambda^k\|^2 } \\
& = \kappa_l\sqrt{\delta_k'^2+1}\sigma_k^{-1}\|\lambda^{k+1} -\lambda^k\| \leq \theta_k' \|\lambda^{k+1} -\lambda^k\|,
\end{aligned}$$ which leads to .
For the isotropic case, we can get similar results with metric regularity of the dual problem.
For the isotropic ROF model, denote by $(u^k, p^k,\lambda^k)$ the iteration sequence generated by ALM-PDP, ALM-PDD, ALM-PT or ALM-PP with stopping criterion . Then the sequence $(u^k, p^k,\lambda^k)$ is bounded and converges to $(u^*, p^*, \lambda^*)$, and holds. If $T_d:=\partial d$ is metrically subregular at $\lambda^*$ for the origin with modulus $\kappa_d$ and the additional stopping criterion is employed, then the sequence $\{\lambda^k\}$ converges to $\lambda^* \in \mathcal{X}^I$ and, for all sufficiently large $k$, $$\emph{dist}(\lambda^{k+1}, \mathcal{X}^I) \leq \theta_k \emph{dist}(\lambda^k, \mathcal{X}^I),$$ where $$\theta_k = [\kappa_d(\kappa_d^2 + \sigma_k^2)^{-1/2} + \delta_k](1-\delta_k)^{-1}, \ \emph{as} \ k\rightarrow \infty, \ \theta_k \rightarrow \theta_{\infty} = \kappa_d(\kappa_d^2+ \sigma_{\infty}^2)^{-1/2} < 1.$$
Numerical Experiments {#sec:numer}
=====================
We employ the standard finite difference discretization of the discrete gradient $\nabla$ and divergence operator $\operatorname{div}$ [@CP], which satisfies and is convenient for an implementation based on operator actions. We use the following quantities to monitor the iteration sequences. The residual of $u$ is $$\text{res}(u)(u^{k+1},\lambda^{k+1}): = \|u^{k+1} - f + \operatorname{div}\lambda^{k+1}\|_F,$$ where $\|\cdot\|_F$ denotes the Frobenius norm henceforth. The residual of $\lambda$ is defined by $$\text{res}(\lambda)(u^{k+1},\lambda^{k+1}) := \|\lambda^{k+1} - \mathcal{P}_{\alpha}(\lambda^{k+1}+ c_0 \nabla u^{k+1})\|_F.$$ The residual originating from the analysis in [@KK1] is $$\text{Kun}(u^{k+1},\lambda^{k+1}):= \|{\mathcal{I}}_{\{{\|{ \lambda}\|_{\infty}} \leq \alpha\}}( \lambda^{k+1}) + \alpha \| \nabla u^{k+1}\|_{1} - \langle \lambda^{k+1}, \nabla u^{k+1}\rangle\|_F.$$ Another residual comes from the relation $|\nabla u| \lambda -\alpha \nabla u = 0$ for $|\nabla u| \neq 0$ in (see also [@HS]). Since for $\nabla u =0$ in we also have $|\nabla u| \lambda -\alpha \nabla u=0$, we arrive at the following criterion $$\text{Hin}(u^{k+1},\lambda^{k+1})=\|\alpha \nabla u^{k+1}-|\nabla u^{k+1}| \lambda^{k+1} \|_F.$$ The primal-dual gap in [@CP] is $${\mathfrak{G}}(u^{k+1},\lambda^{k+1}) = \frac{{\|{u^{k+1}-f}\|_{2}}^2}{2}
+ \alpha {\|{{\nabla}u^{k+1}}\|_{1}} + \frac{{\|{\operatorname{div}\lambda^{k+1} + f}\|_{2}}^2}{2}
- \frac{{\|{f}\|_{2}}^2}{2} + {\mathcal{I}}_{\{{\|{\lambda}\|_{\infty}} \leq \alpha\}}(\lambda^{k+1}).$$ We use the following normalized primal dual gap [@CP] $$\label{eq:l2-tv-gap}
\text{gap}(u^{k+1},\lambda^{k+1}) : ={\mathfrak{G}}(u^{k+1},\lambda^{k+1}) /NM, \quad \text{with} \ \ NM = N*M, \ \ u^k \in \mathbb{R}^{N\times M}.$$ We employ the scaled residual of $u$ and $\lambda$ as our stopping criterion, $$\text{Err}(u^{k+1}, \lambda^{k+1}): = (\text{res}(u)(u^{k+1},\lambda^{k+1}) + \text{res}(\lambda)(u^{k+1},\lambda^{k+1}))/\|f\|_F.$$
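A small numpy sketch of these monitoring quantities (res($u$), res($\lambda$) and the normalized primal-dual gap) is given below, assuming the same forward-difference discretization of $\nabla$ and $\operatorname{div}$; the helper names are illustrative, the constant `c0` is left as a parameter (set to 1 here), and the indicator term of the gap is omitted, i.e., $\lambda$ is assumed feasible:

```python
import numpy as np

def grad(u):                                  # forward differences, Neumann boundary
    g = np.zeros((2,) + u.shape)
    g[0, :-1, :] = u[1:, :] - u[:-1, :]
    g[1, :, :-1] = u[:, 1:] - u[:, :-1]
    return g

def div(p):                                   # discrete divergence, div = -nabla^*
    v = np.zeros(p.shape[1:])
    v[:-1, :] += p[0, :-1, :]; v[1:, :] -= p[0, :-1, :]
    v[:, :-1] += p[1, :, :-1]; v[:, 1:] -= p[1, :, :-1]
    return v

def proj_ball(p, alpha):                      # pointwise projection onto |p_ij| <= alpha
    return p / np.maximum(np.sqrt(p[0]**2 + p[1]**2) / alpha, 1.0)

def monitors(u, lam, f, alpha, c0=1.0):
    res_u = np.linalg.norm(u - f + div(lam))                       # res(u), Frobenius norm
    res_lam = np.linalg.norm(lam - proj_ball(lam + c0 * grad(u), alpha))
    g = grad(u)
    tv = np.sum(np.sqrt(g[0]**2 + g[1]**2))                        # isotropic TV of u
    # primal-dual gap (indicator term dropped, lam assumed feasible)
    G = (0.5 * np.sum((u - f)**2) + alpha * tv
         + 0.5 * np.sum((div(lam) + f)**2) - 0.5 * np.sum(f**2))
    return res_u, res_lam, G / u.size                               # gap normalized by N*M
```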
Let’s now turn to the stopping criterion for the linear iterative solvers, including BiCGSTAB (biconjugate gradient stabilized method) and CG (conjugate gradient), for each linear system for the Newton update in Algorithms \[alm:SSN\_PDP\], \[alm:SSN\_PDD\], \[alm:SSN\_PT\] and \[alm:SSN\_PP\]. For the anisotropic TV in Algorithm \[alm:SSN\_PT\], we use CG due to the symmetric linear system . For any other linear system, e.g., , , , and their anisotropic counterparts, we use BiCGSTAB (see Figure 9.1 of [@VAN]), which is very efficient for nonsymmetric linear systems. The following stopping criterion is employed for solving the linear systems for the Newton updates with BiCGSTAB or CG [@HS], $$\label{eq:stop:bicg}
\text{tol}_{k+1}: =0.1\min\left\{ \left(\frac{\text{res}_k}{\text{res}_0}\right)^{1.5}, \ \frac{\text{res}_k}{\text{res}_0} \right\},$$ which helps capture the superlinear convergence of the semismooth Newton methods we employ. Here $\text{res}_k$ in denotes the residual of the Newton linear system after the $k$-th BiCGSTAB or CG iteration, while $\text{res}_0$ denotes the initial residual before the iterations.
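In code this adaptive tolerance is a one-line rule; a sketch with illustrative variable names:

```python
def newton_linear_tol(res_k, res_0):
    # tol_{k+1} = 0.1 * min((res_k/res_0)**1.5, res_k/res_0), cf. the criterion above
    r = res_k / res_0
    return 0.1 * min(r**1.5, r)
```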
Now we turn to the most important stopping criteria , , of each ALM iteration, which determine how many Newton iterations are needed when solving the corresponding nonlinear systems , or . The criterion is not practical. New stopping criteria of ALM for cone programming can be found in [@CST]. We found that the following empirical stopping criterion for each ALM iteration is efficient numerically. For the semismooth Newton method involving the soft thresholding operator (SSNPT), we employ $$\label{eq:alm:stop:ssnpt}
\text{res-alm}_{SSNPT}^l: = \|u^l - f + \nabla^* \lambda^k + \sigma_k \nabla^*\nabla u^l -\sigma_k \nabla^*(I + \frac{1}{\sigma_k} \partial \alpha \|\cdot\|_{1})^{-1}(\frac{\lambda^k + \sigma_k \nabla u^l}{\sigma_k})\|_F.$$ For the semismooth Newton method involving the projection operator (SSNPP), we employ $$\label{eq:alm:stop:ssnpp}
\text{res-alm}_{SSNPP}^l := \|u^l - f + \nabla ^* (I + \sigma_k \partial G)^{-1}(\lambda^k + \sigma_k \nabla u^l )\|_F.$$ For the primal-dual semismooth Newton method with the auxiliary variable, we empirically employ $$\label{eq:stop:alm:ssn:dual}
\text{res-alm}_{SSNPD}^l = \|- \sigma_k \nabla u^l - \lambda^k + \max \big(1.0, \dfrac{|\lambda^k + \sigma_k \nabla u^l|}{\alpha} \big) h^l\|_F,$$ where $h^l$ is computed by Algorithm \[alm:SSN\_PDP\] or \[alm:SSN\_PDD\] before the projection to the feasible set. Here we use $\text{res-alm}_{SSNPD}^l $ without $\text{res}(u)$, in contrast to , since we found that $\text{res}(u)$ is usually much smaller than in our numerical tests. We employ the following stopping criterion $$\label{eq:stop:alm}
\text{res-alm}_{SSNPD}^l, \ \text{res-alm}_{SSNPT}^l, \ \text{or} \ \ \text{res-alm}_{SSNPP}^l \leq {\delta_k}/{\sigma_k},$$ where $\delta_k$ is a small parameter which is chosen as a fixed constant such as $10^{-2}$ or $10^{-4}$ in our numerical tests. We emphasize that the division by $\sigma_k$ is of critical importance for the convergence of ALM, which is also required by the stopping criteria , , .
--------- -- ------------------ -- ---------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
ALM-PDP 4(5.42s) 2.46e-5 6.96e-3 4.98e-5 3.53e-5 5.89e-8 19.25 1e-4
ALM-PDD 4(23.98s) 1.68e-13 6.96e-3 4.98e-5 3.53e-5 5.89e-8 19.25 1e-4
ALM-PT 6(118.67s) 1.60e-5 9.83e-4 6.84e-6 4.85e-6 6.38e-9 19.25 1e-4
ALM-PP 5(513.66s) 1.69e-8 6.63e-3 4.79e-5 3.39e-5 7.43e-8 19.25 1e-4
ALG2 1032(6.71s) 1.30e-2 6.92e-4 7.86e-6 5.56e-6 1.00e-8 19.25 1e-4
ALM-PDP 6(11.45s) 3.66e-5 9.12e-6 6.91e-8 4.93e-8 2.58e-11 19.25 1e-6
ALM-PDD 7(56.49s) 3.66e-5 7.78e-5 1.55e-6 1.11e-6 7.85e-11 19.25 1e-6
ALM-PT 7(135.83s) 2.13e-7 2.08e-5 1.51e-7 1.07e-7 7.14e-11 19.25 1e-6
ALM-PP 7(686.86s) 4.92e-10 8.86e-6 6.70e-8 4.78e-8 2.63e-11 19.25 1e-6
ALG2 14582(90.28s) 1.37e-4 2.26e-7 2.46e-9 1.83e-9 4.93e-13 19.25 1e-6
ALM-PDP 9(18.83s) 1.06e-6 3.74e-10 3.03e-12 2.16e-12 4.02e-16 19.25 1e-8
ALM-PDD 7(69.76s) 2.80e-7 1.07e-7 8.62e-10 6.22e-10 2.02e-13 19.25 1e-8
ALM-PT 8(142.21s) 3.76e-7 8.22e-10 1.36e-11 9.64e-12 1.25e-15 19.25 1e-8
ALM-PP 8(746.32s) 7.90e-7 1.05e-7 8.40e-10 6.06e-10 1.99e-13 19.25 1e-8
ALG2 365019(2069.36s) 1.37e-6 2.88e-13 3.37e-15 2.38e-15 $<$eps 19.25 1e-8
--------- -- ------------------ -- ---------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
: For $n(t)$ in the first column for each algorithm, $n$ denotes the number of outer ALM iterations (or the iteration number for ALG2) needed to bring the scaled residual Err$(u^{k+1}, \lambda^{k+1})$ below the stopping value, and $t$ denotes the CPU time. The notation “$<$eps" means that the corresponding quantity is smaller than the machine precision, i.e., “eps" in Matlab. []{data-label="tab:rof:lena:ani"}
--------- -- ------------------ -- --------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
ALM-PDP 3(4.75s) 2.32e-3 1.14e-2 5.61e-5 9.36e-5 9.70e-8 19.18 1e-4
ALM-PDD 5(57.29s) 8.61e-3 2.61e-3 1.37e-5 2.19e-5 2.01e-8 19.18 1e-4
ALM-PT 7(2507.66s) 1.74e-5 2.60e-3 1.39e-5 2.19e-5 1.94e-8 19.18 1e-4
ALM-PP 5(852.79s) 2.38e-5 1.59e-2 8.01e-5 1.31e-4 1.37e-7 19.85 1e-4
ALG2 479(2.65s) 1.59e-2 6.03e-6 7.38e-6 7.01e-4 8.69e-9 19.18 1e-4
ALM-PDP 7(20.35s) 6.75e-5 8.08e-5 4.46e-7 6.96e-7 4.01e-10 19.18 1e-6
ALM-PDD 7(191.50s) 3.98e-5 4.94e-5 2.73e-7 4.24e-7 2.28e-10 19.18 1e-6
ALM-PT — — — — — — — 1e-6
ALM-PP 8(3249.58s) 1.53e-6 8.10e-5 4.50e-7 6.99e-7 4.02e-10 19.18 1e-6
ALG2 10808(55.16s) 1.60e-4 1.60e-9 2.46e-9 2.63e-7 6.56e-13 19.18 1e-6
ALM-PDP 10(243.57s) 1.07e-6 4.22e-7 2.43e-9 3.82e-9 1.04e-12 19.18 1e-8
ALM-PDD — — — — — — — 1e-8
ALM-PT — — — — — — — 1e-8
ALM-PP — — — — — — — 1e-8
ALG2 262811(1348.55s) 1.60e-6 5.01e-13 7.47e-13 8.11e-11 $<$eps 19.25 1e-8
--------- -- ------------------ -- --------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
: For $n(t)$ in the first column for each algorithm, $n$ denotes the number of outer ALM iterations (or the iteration number for ALG2) needed to bring the scaled residual Err$(u^{k+1}, \lambda^{k+1})$ below the stopping value, and $t$ denotes the CPU time. “—" means that the computation took more than 5000 seconds. The notation “$<$eps" is the same as in Table \[tab:rof:lena:ani\].[]{data-label="tab:rof:cameraman:iso"}
--------- -- ------------------- -- --------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
ALM-PDP 6(25.50s) 1.56e-2 1.15e-3 7.31e-5 2.28e-5 1.61e-5 18.90 1e-4
ALM-PDD 4(159.38) 7.60e-3 1.97e-2 2.73e-4 2.28e-4 4.76e-8 18.90 1e-4
ALM-PT 6(1329.11s) 6.37e-3 1.89e-3 1.37e-5 9.68e-6 4.63e-9 18.90 1e-4
ALM-PP 5(4059.90) 1.08e-7 1.33e-2 9.87e-5 6.99e-5 5.81e-8 18.90 1e-4
ALG2 933(28.23s) 3.35e-2 1.83e-3 2.10e-5 1.49e-5 1.08e-8 18.90 1e-4
ALM-PDP 7(52.52s) 5.68e-5 2.25e-6 2.03e-8 1.45e-8 9.42e-13 18.90 1e-6
ALM-PDD 6(377.60s) 2.18e-5 1.13e-5 9.03e-8 6.43e-8 1.25e-11 18.90 1e-6
ALM-PT 8(2351.80s) 1.51e-4 2.17e-7 1.76e-9 1.24e-9 1.80e-13 18.90 1e-6
ALM-PP 7(5427.86s) 3.98e-5 1.07e-5 8.58e-8 6.11e-8 1.24e-11 18.90 1e-6
ALG2 14842(476.20s) 3.53e-4 3.40e-7 3.94e-9 2.99e-9 3.09e-13 18.90 1e-6
ALM-PDP 7(54.51s) 5.65e-7 2.25e-6 2.03e-8 1.45e-8 9.38e-13 18.90 1e-8
ALM-PDD 14(122.23s) 2.68e-6 3.69e-12 3.44e-14 2.47e-14 $<$eps 18.90 1e-8
ALM-PT 9(2523.95s) 1.58e-6 1.95e-10 6.66e-12 4.71e-12 $<$eps 18.90 1e-8
ALM-PP 8(5606.62s) 2.94e-6 9.94e-8 8.08e-10 5.74e-10 6.78e-14 18.90 1e-8
ALG2 371060(11339.88s) 3.54e-6 6.98e-13 8.20e-15 5.80e-15 $<$eps 18.90 1e-8
--------- -- ------------------- -- --------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
: For $n(t)$ in the first column for each algorithm, $n$ denotes the number of outer ALM iterations (or the iteration number for ALG2) needed to bring the scaled residual Err$(u^{k+1}, \lambda^{k+1})$ below the stopping value, and $t$ denotes the CPU time. The notation “$<$eps" is the same as in Table \[tab:rof:lena:ani\].[]{data-label="tab:rof:sails:ani"}
--------- -- ------------------ -- --------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
ALM-PDP 3(25.12) 1.12e-2 1.43e-2 7.31e-5 1.20e-4 4.29e-8 29.89 1e-4
ALM-PDD 5(370.53s) 9.89e-3 5.82e-3 3.01e-5 4.89e-5 1.54e-8 29.89 1e-4
ALM-PT 7(30224.3s) 4.16e-3 1.31e-2 8.06e-6 1.87e-4 3.33e-9 29.89 1e-4
ALM-PP 6(15872.39s) 7.69e-9 5.76e-3 3.00e-5 4.85e-5 1.52e-8 29.89 1e-4
ALG2 525(15.77s) 2.86e-2 7.69e-6 1.02e-5 1.07e-3 4.76e-9 29.89 1e-4
ALM-PDP 7(145.27s) 7.98e-5 1.80e-4 9.79e-7 1.54e-6 3.15e-10 29.89 1e-6
ALM-PDD 7(1379.33s) 1.47e-7 1.09e-4 5.96e-7 9.33e-7 1.78e-10 29.89 1e-6
ALM-PT — — — — — — — 1e-6
ALM-PP — — — — — — — 1e-6
ALG2 12049(387.21s) 2.87e-4 2.61e-9 3.79e-9 4.27e-7 3.63e-13 29.89 1e-6
ALM-PDP 10(1549.91s) 1.79e-6 1.06e-6 5.90e-9 9.05e-9 8.94e-13 29.89 1e-8
ALM-PDD — — — — — — — 1e-8
ALM-PT — — — — — — — 1e-8
ALM-PP — — — — — — — 1e-8
ALG2 292442(8590.28s) 2.87e-6 1.56e-12 2.11e-12 2.33e-10 $<$eps 29.89 1e-8
--------- -- ------------------ -- --------- -- ---------- -- ---------- -- ---------- -- ---------- -- ------- -- ------ --
: For the entry $n(t)$ listed for each algorithm, $n$ denotes the number of outer ALM iterations (or of iterations for ALG2) needed for the scaled residual Err$(u^{k+1}, \lambda^{k+1})$ to fall below the stopping value, and $t$ denotes the corresponding CPU time. “—" denotes that the iteration time exceeds $10^4$ seconds. []{data-label="tab:rof:butterfly:iso"}
For numerical comparisons, we mainly choose the accelerated primal-dual algorithm ALG2 [@CP] with asymptotic convergence rate $\mathcal{O}(1/k^2)$, which is a very efficient, robust and standard algorithm for imaging problems. We use the same parameters for ALG2 as in [@CP] and the corresponding software. The test images Lena and Cameraman have size $256\times 256$, and the images Monarch and Sails have size $768 \times 512$. The original, noisy and denoised images can be seen in Figures \[lena:cameraman:denoise\] and \[monarch:sails:denoise\]. We acknowledge that the residuals $\text{Kun}(u^{k+1},\lambda^{k+1})$ and $\text{Gap}(u^{k+1},\lambda^{k+1})$ can be unbounded without projection of $\lambda^{k+1}$ onto the feasible set as in ALM-PT. We leave them in the tables for reference.
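For readers who wish to reproduce the baseline, the following is a minimal NumPy sketch of the accelerated primal-dual iteration (ALG2 of [@CP]) applied to the isotropic ROF model $\min_u \|\nabla u\|_1 + \frac{\lambda}{2}\|u-f\|_2^2$. The forward-difference discretization, the fidelity weight `lam` and the initial step sizes are illustrative assumptions and not the exact settings used in the experiments reported here.

```python
import numpy as np

def grad(u):
    # forward differences with Neumann boundary conditions
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad, so that <grad u, p> = -<u, div p>
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def rof_alg2(f, lam=10.0, iters=300):
    """Accelerated primal-dual (ALG2) for min_u ||grad u||_1 + lam/2 ||u - f||^2."""
    L2 = 8.0                                  # bound on ||grad||^2 for this stencil
    tau = 0.02
    sigma = 1.0 / (tau * L2)                  # tau * sigma * L2 <= 1
    gamma = lam                               # strong-convexity modulus of the fidelity term
    u, u_bar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / norm, py / norm         # projection onto the pointwise unit ball
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
        tau, sigma = theta * tau, sigma / theta
        u_bar = u + theta * (u - u_old)
    return u
```

A typical call is `u = rof_alg2(noisy_image, lam=8.0, iters=500)`; the anisotropic variant only changes the projection step so that each dual component is clipped to $[-1,1]$ separately.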
From Table \[tab:rof:lena:ani\] and Table \[tab:rof:sails:ani\], it can be seen that the proposed ALM-PDP and ALM-PDD, especially ALM-PDP, are very efficient and competitive for the anisotropic ROF model and are very robust across different image sizes. The algorithms ALM-PT and ALM-PP are also efficient for the anisotropic case when high accuracy is required. From Table \[tab:rof:cameraman:iso\] and Table \[tab:rof:butterfly:iso\], it can be seen that the proposed ALM-PDP is still very efficient and competitive for the isotropic case. ALG2 appears to be nearly equally efficient for the isotropic and anisotropic cases.
[l\*[14]{}[c]{}r]{} & & & & & &\
res($u$) &3.50e-6 &5.30e-7 &1.53e-5&6.14e-8 &3.29e-8 &1.94e-7\
res($\lambda$) & 9.70 &1.09 &9.99e-2 &6.96e-3 &3.07e-4 &9.13e-6\
Gap &1.61e-4 &1.41e-5 &1.14e-6&5.89e-8 &1.58e-9 &2.58e-11\
$N_{SSN}$ &10 &9 &8&11 &9 &7\
$N_{ABG}$ & 20 &29 &51 &56 &102 &116\
[l\*[14]{}[c]{}r]{} & & & & & & & &\
res($u$) &3.10e-7 &4.36e-8 &1.14e-6&2.19e-9 &5.72e-8 &9.08e-9 &1.76e-8& 2.54e-7\
res($\lambda$) & 5.22 &6.95e-1 &1.09e-1 &1.62e-2 &2.61e-3 &4.62e-4&8.08e-5 & 1.49e-5\
Gap &8.03e-5 &8.56e-6 &1.12e-6&1.41e-7 & 1.95e-8 &2.81e-9 &4.01e-10&5.77e-11\
$N_{SSN}$ &9 &7 &8&9 &8 &9 &10& 10\
$N_{ABG}$ & 21 &35 &57 &91 &139 &360 &570&535\
It is known that the computation of the isotropic case is usually more challenging than that of the anisotropic case. However, our proposed algorithms are surprisingly much more efficient for the anisotropic case than for the isotropic case. Besides Theorems \[thm:metric:regular:lag\] and \[thm:metric:regular:dual:iso\], we present another comparison between the isotropic TV and the anisotropic TV with algorithm ALM-PDP. For Tables \[tab:lena:ssn:iter:ani\] and \[tab:cameraman:ssn:iter:iso\], we allow more Newton iterations than in the previous tables, with the stopping criterion $\text{res-alm}_{SSNPD}^l \leq {10^{-3}}/{\sigma_k}$ for each ALM iteration and $\text{tol}_{k+1}\leq 10^{-5}$ for each Newton iteration. The ALM iterations stop when Err$\leq 10^{-7}$, both for the isotropic and the anisotropic TV. It can be seen that the isotropic TV generally needs more BiCGSTAB iterations than the anisotropic TV. Besides, the residual error $\text{res}(\lambda)(u^{k+1},\lambda^{k+1})$ drops more slowly in the isotropic case.
Discussion and Conclusions {#sec:conclude}
==========================
In this paper, we proposed several semismooth Newton based ALM algorithms. The proposed algorithms are very efficient and competitive, especially for anisotropic TV. Global convergence and the corresponding asymptotic convergence rates are also discussed by means of metric subregularity. Numerical tests show that, compared with first-order algorithms, the semismooth Newton based ALM deserves more computational attention and effort. Currently, no preconditioners are employed by BiCGSTAB or CG for solving the linear systems arising from the Newton updates. Preconditioners for BiCGSTAB or CG are desperately needed, especially for the isotropic case. Additionally, the asymptotic convergence rate of the KKT residuals is also an interesting topic for future study [@CST].
[ **Acknowledgements** The author acknowledges the support of the NSF of China under grant No. 11701563. This work originated during the author’s visit to Prof. Defeng Sun of the Hong Kong Polytechnic University in October 2018. The author is very grateful to Prof. Defeng Sun for introducing the framework of semismooth Newton based ALM developed by him and his collaborators. The author is also very grateful to Prof. Kim-Chuan Toh, Dr. Chao Ding, Dr. Xudong Li and Dr. Xinyuan Zhao for discussions on the semismooth Newton based ALM, and to Prof. Michael Hinterm[ü]{}ller for discussions on the primal-dual semismooth Newton method during the author’s visit to the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), supported by the Alexander von Humboldt Foundation, in 2017. ]{}
[99]{}
H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, New York, 2011.
D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, Paris, 1982.
A. Chambolle, T. Pock, *A first-order primal-dual algorithm for convex problems with applications to imaging*, J. Math. Imaging and Vis., 40(1), pp. 120–145, 2011.
F. H. Clarke, Optimization and Nonsmooth Analysis, Vol. 5, Classics in Applied Mathematics, SIAM, Philadelphia, 1990.
C. Clason, Nonsmooth Analysis and Optimization, Lecture notes, <https://arxiv.org/abs/1708.04180>, 2018.
Y. Cui, D. Sun, K. Toh, [*On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming*]{}, Math. Program., Ser. A, 178, pp. 381–415, 2019, https://doi.org/10.1007/s10107-018-1300-6.
T. De Luca, F. Facchinei, C. Kanzow, [*A semismooth equation approach to the solution of nonlinear complementarity problems*]{}, Math. Program., 75, pp. 407–439, 1996.
A. L. Dontchev, R. T. Rockafellar, [Implicit Functions and Solution Mappings: A View from Variational Analysis]{}, Second Edition, Springer Science+Business Media, New York, 2014.
F. Facchinei, J. Pang, [Finite-Dimensional Variational Inequalities and Complementarity Problems]{}, Volume I, Springer-Verlag New York, Inc, 2003.
M. Fortin, R. Glowinski (eds.), [ Augmented Lagrangian Methods: Applications to the Solution of Boundary Value Problems]{}, North-Holland, Amsterdam, 1983.
R. Glowinski, S. Osher, W. Yin (eds.), Splitting Methods in Communication, Imaging, Science, and Engineering, Springer, 2016.
M. R. Hestenes, [*Multiplier and gradient methods*]{}, J. Optim. Theory Appl., 4, pp. 303–320, 1968
M. Hintermüller, K. Kunisch, [*Total bounded variation regularization as a bilaterally constrained optimization problem*]{}, SIAM J. Appl. Math., 64(4), pp. 1311–1333, 2004.
M. Hintermüller, K. Papafitsoros, C. N. Rautenberg, H. Sun, [*Dualization and automatic distributed parameter selection of total generalized variation via bilevel optimization*]{}, preprint, to appear, 2019.
M. Hintermüller, G. Stadler, [*An infeasible primal-dual algorithm for total bounded variation-based inf-convolution-type image restoration*]{}, SIAM J. Sci. Comput., 28(1), pp. 1–23, 2006.
D. Klatte, B. Kummer, [*Constrained minima and Lipschitzian penalties in metric spaces*]{}, SIAM J. Optim., 13(2), pp. 619–633, 2002.
D. Klatte, B. Kummer, Nonsmooth Equations in Optimization. Regularity, Calculus, Methods and Applications, Series Nonconvex Optimization and Its Applications, Vol. 60, Springer, Boston, MA, 2002.
K. Ito, K. Kunisch, Lagrange Multiplier Approach to Variational Problems and Applications, Advances in Design and Control 15, SIAM, Philadelphia, 2008.
K. Ito, K. Kunisch, [*An active set strategy based on the augmented Lagrangian formulation for image restoration*]{}, RAIRO, Math. Mod. and Num. Analysis, 33(1), pp. 1–21, 1999.
D. Leventhal, [*Metric subregularity and the proximal point method*]{}, J. Math. Anal. Appl., 360, pp. 681–688, 2009.
R. Mifflin, [*Semismooth and semiconvex functions in constrained optimization*]{}, SIAM J. Control Optim., 15(6), pp. 959–972, 1977.
X. Li, D. Sun, K. Toh, [*A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems*]{}, SIAM J. Optim., 28(1), pp. 433–458, 2018.
F. J. Luque, [*Asymptotic convergence analysis of the proximal point algorithm*]{}, SIAM J. Control Optim., 22(2), pp. 277–293, 1984.
J. Pena, J. Vera, L. Zuluaga, [*New characterizations of Hoffman constants for systems of linear constraints*]{}, <https://arxiv.org/abs/1905.02894>, 2019.
M. J. D. Powell, [*A method for nonlinear constraints in minimization problems*]{}, in Optimization, R. Fletcher, ed., Academic Press, New York, pp. 283–298, 1968.
S. M. Robinson, [*Some continuity properties of polyhedral multifunctions*]{}, in Mathematical Programming at Oberwolfach, Math. Program. Stud., Springer, Berlin, Heidelberg, pp. 206–214, 1981.
R. T. Rockafellar, [*Monotone operators and the proximal point algorithm*]{}, SIAM J. Control Optim., 14(5), pp. 877–898, 1976.
R. T. Rockafellar, [*Augmented Lagrangians and applications of the proximal point algorithm in convex programming*]{}, Math. Oper. Res., 1(2), pp. 97-116, 1976.
L. I. Rudin, S. Osher, E. Fatemi, [*Nonlinear total variation based noise removal algorithms*]{}, Physica D., 60(1-4), pp. 259–268, 1992.
S. Scholtes, Introduction to Piecewise Differentiable Equations, Springer Briefs in Optimization, Springer, New York, 2012.
G. Stadler, [*Semismooth Newton and augmented Lagrangian methods for a simplified friction problem*]{}, SIAM J. Optim., 15(1), pp. 39–62, 2004.
G. Stadler, Infinite-Dimensional Semi-Smooth Newton and Augmented Lagrangian Methods for Friction and Contact Problems in Elasticity, PhD thesis, University of Graz, 2004.
D. Sun and J. Han, [*Newton and quasi-Newton methods for a class of nonsmooth equations and related problems*]{}, SIAM J. Optim., 7, pp. 463–480, 1997.
M. Ulbrich, Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces, MOS-SIAM Series on Optimization, 2011.
H. A. Van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, 2003.
Wikipedia: The Free Encyclopedia. Wikimedia Foundation Inc. Updated 12 October 2019, at 12:15 (UTC). Encyclopedia on-line. Available from <https://en.wikipedia.org/wiki/Total_variation_denoising>. Internet. Retrieved 15 November 2019.
J. Ye, X. Yuan, S. Zeng, J. Zhang, [*Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems*]{}, preprint, <http://www.optimization-online.org/DB_HTML/2018/10/6881.html>, 2018.
P. Yu, G. Li, T. K. Pong, [*Deducing Kurdyka–Łojasiewicz exponent via inf-projection*]{}, <https://arxiv.org/abs/1902.03635>, 2019.
F. Zhang (eds.), [The Schur Complement and Its Applications]{}, Numerical Methods and Algorithms 4, Springer US, 2005.
Y. Zhang, N. Zhang, D. Sun, K. Toh, [*An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems*]{}, Mathematical Programming A, <https://doi.org/10.1007/s10107-018-1329-6>, 2018.
X. Zhao, D. Sun, K. Toh, [*A Newton-CG augmented Lagrangian method for semidefinite programming*]{}, SIAM J. Optim., 20(4), pp. 1737–1765, 2010.
|
---
abstract: 'The Minority Game (MG) behaves as a stochastically perturbed deterministic system due to the coin-toss invoked to resolve tied strategies. Averaging over this stochasticity yields a description of the MG’s deterministic dynamics via mapping equations for the strategy score and global information. The strategy-score map contains both restoring-force and bias terms, whose magnitudes depend on the game’s quenched disorder. Approximate analytical expressions are obtained and the effect of ‘market impact’ discussed. The global-information map represents a trajectory on a De Bruijn graph. For small quenched disorder, an Eulerian trail represents a stable attractor. It is shown analytically how anti-persistence arises. The response to perturbations and different initial conditions are also discussed.'
author:
- |
P. Jefferies, M.L. Hart and N.F. Johnson\
Physics Department, Clarendon Laboratory\
Oxford University, Oxford OX1 3PU, U.K.
title: Deterministic Dynamics in the Minority Game
---
Introduction
============
The Minority Game (MG) introduced by Challet and Zhang [@Challet-Zhang_Original-MG] offers possibly the simplest paradigm for a complex, dynamical system comprising many competing agents. Models based on the Minority Game concept have a broad range of potential applications, for example financial markets, biological systems, crowding phenomena and routing problems [@EconPhys_Website]. There have been many studies of the statistical properties of the MG [@Challet-Zhang_Original-MG; @EconPhys_Website; @Challet_Reduced-Space; @Mike_Belgium; @Us_Crowd-AntiCrowd; @Savit_Original-MG; @Sherrington_TMG; @Us_TMG; @Challet_MG-Memory; @Cavagna_RandomHist; @heimel; @Mike_TMG_Markov; @Challet_StationaryStates; @Zheng_EfficientStats] which treat the game as a quasi-stochastic system.
In this paper we examine the MG from a different perspective by treating it as a primarily *deterministic* system and then exploring the rich dynamics which result. Our desire to look at microscopic dynamical properties, as opposed to global statistics, is motivated by the fact that the physical systems we are interested in modeling are only realized once (e.g. the time-evolution of a financial market). Only limited insight is therefore available from taking configuration averages in such cases. In addition it is of great interest to examine transient effects such as the response of the system to perturbations, and the mechanisms which determine the game’s trajectories in time. We find that we are able to provide a description of the resulting deterministic dynamics via mapping equations, and can hence investigate these important effects. The outline of the paper is as follows: after briefly discussing the MG in the remainder of this section, Sec. 2 examines the MG as a functional map. Section 3 focuses on the effect of the underlying (‘quenched’) disorder arising from unequal population of the strategy-space. Section 4 discusses the dynamics of the game on a de Bruijn graph. Section 5 provides the conclusions.
The most basic formulation of a MG comprises an odd number of agents $N$ who at each turn of the game choose between two options ‘0’ and ‘1’ [@Challet-Zhang_Original-MG; @EconPhys_Website]. These options could be used to represent buy/sell, choose road A/road B etc. The aim of the agents is common: to choose the least-subscribed option, the ‘minority’ group. At the end of each turn of the game, the winning decision corresponds to the minority group and is announced to all the agents. The agents have a *memory* of $m$ bits, hence they can recall the last $m$ winning decisions. The *global information* $\mu$ available to each and every agent is therefore a binary word $m$ bits long, hence $\mu$ belongs to the set $\left\{ 0,1,...P-1\right\} $ where $P=2^{m}$. In order to make a decision about which option to choose, each agent is allotted $s$ *strategies* at the outset of the game, which cannot be altered during the game. Each strategy $R$ maps every possible value of $\mu$ to a prediction $a_{R}^{\mu}\in\left\{ -1,1\right\} $ where $-1\Rightarrow\left( \text{option
0}\right) $ and $1\Rightarrow\left( \text{option 1}\right) $. There are $2^{2^{m}}$ different possible binary strategies. However, many of the strategies in this space are similar to one another, i.e. they are separated by a small Hamming distance. It has been shown [@Challet_Reduced-Space] that the principal features of the MG are reproduced in a smaller *Reduced Strategy Space* (RSS) of $2^{m+1}$ strategies, in which any two strategies are separated by a Hamming distance of $2^{m}$ or $2^{m-1}$, i.e. the two strategies are *anti-correlated* or *un-correlated* respectively.
The agents follow the prediction of their historically best-performing strategy. They measure this performance by rewarding strategies with the correct mapping of global information to winning decision, and penalizing those with an incorrect mapping. Strategies are scored in this manner irrespective of whether they are played. As each agent will reward and penalize the same strategy in the same way, there is a common set of strategy scores which are collected together to form the *strategy score vector* $\underline{S}$. The common perception of a strategy’s success or failure will lead to agents deciding to use or avoid the same strategy in groups - this leads to crowd behavior as analyzed in Refs. [@Mike_Belgium; @Us_Crowd-AntiCrowd].
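As a concrete illustration of these rules, the following is a minimal NumPy sketch of the basic game loop (agents drawn with random strategies from the full strategy space, digital payoff, and ties between an agent's strategies broken by a tiny random perturbation standing in for the coin toss); the parameter values are arbitrary and only meant to show the update order.

```python
import numpy as np

def minority_game(N=101, m=2, s=2, T=1000, seed=0):
    rng = np.random.default_rng(seed)
    P = 2 ** m
    # each agent holds s fixed strategies: a map from the P histories to a prediction in {-1,+1}
    strategies = rng.choice([-1, 1], size=(N, s, P))
    scores = np.zeros((N, s))
    mu = rng.integers(P)                      # current m-bit history, stored as an integer
    attendance = []
    for _ in range(T):
        # each agent follows its best-scoring strategy; the tiny noise breaks ties at random
        best = np.argmax(scores + 1e-9 * rng.random((N, s)), axis=1)
        actions = strategies[np.arange(N), best, mu]
        A = actions.sum()                     # attendance
        winner = -np.sign(A)                  # the minority side wins (A is odd, so never 0)
        scores += strategies[:, :, mu] * winner   # digital payoff: +1 if correct, -1 if not
        mu = (2 * mu + (winner > 0)) % P      # append the winning bit to the m-bit history
        attendance.append(A)
    return np.array(attendance)
```

The variance of the returned attendance series, normalized by $N$, gives the volatility usually studied as a function of $\alpha=P/N$.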
MG as a functional map
======================
The Minority Game is often introduced heuristically as a set of rules determining the update of the agents’ strategies and the global information. It can however easily be cast into a functional map which reproduces the game when iterated. Moreover, this functional map can be iterated *without* having to keep track of the labels for individual agents. We achieve this by introducing a formalism which groups together agents who hold the same combination of strategies, and hence respond in an identical way to all values of the global information set $\mu=\{0,1,...,P-1\}$. This grouping is achieved via the tensor $\underline{\underline{\Omega}}$ which is initialized at the outset of the game and quantifies the particular quenched disorder for that game [@Mike_Belgium]. $\underline{\underline{\Omega}}$ is $s$-dimensional with rows/columns of length $2P$ (in the RSS) such that entry $\underline{\underline{\Omega}}_{R1,R2,...}$ is the number of agents holding strategies $\{R1,R2,...\}$. The entries of $\underline{\underline{\Omega}}$ (and also of the strategy score vector $\underline{S}$) are ordered by increasing decimal equivalent. For example, strategies from the RSS for $m=2$ are ordered $\{0000,0011,0101,0110,...\}$, therefore strategy $R$ is anti-correlated to strategy $2P+1-R$. $\underline{\underline{\Omega}}$ is randomly filled with uniform probability such that $$\sum_{R,R^{\prime},...}\underline{\underline{\Omega}}_{R,R^{\prime},...}=N$$ It is useful to construct a configuration of this tensor, $\underline{\underline{\Psi}}$, which is symmetric in the sense that $\underline{\underline{\Psi}}_{\{R1,R2,...\}}=\underline{\underline{\Psi}}_{p\{R1,R2,...\}}$ where $p\{R1,R2,...\}$ is any permutation of strategies $R1,R2,...$ . For $s=2$ we let $\underline{\underline{\Psi}}=\frac{1}{2}\left( \underline{\underline{\Omega}}+\underline{\underline{\Omega}}^{\text{T}}\right) $ [@countpsi]. Now we proceed to a formula for the attendance $A$ of the MG (i.e. the sum of all the agents’ predictions and hence actions): $$A=\underline{a}^{\mu}\cdot\underline{n}=\sum_{R=1}^{2P}a_{R}^{\mu}n_{R}\label{Eq Attendance Basic}$$ where $a_{R}^{\mu}$ is the response of strategy $R$ to global information $\mu$ and $n_{R}$ is the number of agents playing strategy $R$. We can define $n_{R}$ in terms of the strategy score vector $\underline{S}$ and $\underline{\underline{\Psi}}$, and hence rewrite Eq. \[Eq Attendance Basic\] to give the following for $s=2$: $$\begin{aligned}
A\left[ \underline{S},\mu\right] & =\sum_{R=1}^{2P}a_{R}^{\mu}\sum_{R^{\prime}=1}^{2P}\left( 1+\operatorname{sgn}\left[ S_{R}-S_{R^{\prime}}\right] \right) \underline{\underline{\Psi}}_{R,R^{\prime}}\label{Eq Attendance}\\
& +\sum_{R\neq R^{\prime}}^{2P}a_{R}^{\mu}\delta_{S_{R},S_{R^{\prime}}}\left( \operatorname{bin}\left[ 2\underline{\underline{\Psi}}_{R,R^{\prime
}},\frac{1}{2}\right] -\underline{\underline{\Psi}}_{R,R^{\prime}}\right)
\nonumber\end{aligned}$$ where $\operatorname{bin}\left[ n,p\right] $ is a sample from a binomial distribution of $n$ trials with probability of success $p$. Here the constraint $\operatorname{bin}\left[ 2\underline{\underline{\Psi}}_{R,R^{\prime}},\frac{1}{2}\right] +\operatorname{bin}\left[ 2\underline
{\underline{\Psi}}_{R^{\prime},R},\frac{1}{2}\right] =2\underline
{\underline{\Psi}}_{R,R^{\prime}}$ applies in order to conserve agent number. The second term of this attendance equation (Eq. \[Eq Attendance\]) introduces a stochastic element in the game; it corresponds to the situation where agents may have several top-scoring strategies and must thereby toss a coin to decide which to use. We note that Eq. \[Eq Attendance\] could be re-written replacing the $\operatorname*{sgn}$ function with a $\tanh$. The effect of this would be to make the number of agents playing strategy $R1$ (as opposed to their other strategy $R2$) vary smoothly as a function of the separation in score of the two strategies, rather than simply playing the best. This modification is similar in concept to that of the Thermal Minority Game (TMG) [@Sherrington_TMG; @Us_TMG] wherein agents play their best strategy with a certain probability depending on its score. The difference here would be that, in contrast to the TMG, the system would still be entirely deterministic hence lending itself readily to similar analysis as presented here.
With this formalism, the game can be described concisely by the following coupled mapping equations: $$\begin{aligned}
\underline{S}\left[ t\right] & =\underline{S}\left[ t-1\right]
-\underline{a}^{\mu\left[ t-1\right] }\chi\left[ A\left[ \underline
{S}\left[ t-1\right] ,\mu\left[ t-1\right] \right] \right]
\label{Eq Score Update}\\
\mu\left[ t\right] & =2\mu\left[ t-1\right] -P\operatorname{H}\left[
\mu\left[ t-1\right] -\frac{P}{2}\right] +\operatorname{H}\left[ -A\left[
\underline{S}\left[ t-1\right] ,\mu\left[ t-1\right] \right] \right]
\label{Eq Hist Update}\end{aligned}$$ where $\operatorname{H}\left[ x\right] $ is the Heaviside function and $\chi\left[ A\right] $ is a monotonic, increasing function of the game attendance quantifying the particular choice of reward structure (i.e. payoff). In most of the MG literature $\chi\left[ A\right]
=\operatorname{sgn}\left[ A\right] $ or $\chi\left[ A\right] =A$ [@Savit_Original-MG]. Although the macroscopic statistical properties of the MG are largely unaltered by the choice of $\chi$, we later demonstrate that the microscopic dynamics can be affected markedly.
This formulation shows that the MG obeys a one-step, stochastically perturbed deterministic mapping between states $\{\underline{S}\left[ t\right]
,\mu\left[ t\right] \}$ and $\{\underline{S}\left[ t+1\right] ,\mu\left[
t+1\right] \}$. It is interesting to ask the following question: ‘How important is the stochastic term of Eq. \[Eq Attendance\] to the resultant dynamics?’. Table 1 shows the frequency with which the outcome ($\operatorname{sgn}\left[ -A\right] $) is changed by the stochastic perturbation to the mapping. We can see that the stochastic term has a small but non-negligible effect on the game. For the strategy reward system $\chi=\operatorname{sgn}$, the number of instances of coin-tossing agents affecting the outcome is greater than with the proportional reward system of $\chi=1$. This is easily understood in terms of the homogeneity of the score-vector $\underline{S}$; the $\chi=\operatorname{sgn}$ scoring system is much more likely to generate tied strategies than the $\chi=1$ system which also incorporates the dynamics of the attendance $A$. Therefore, in the $\chi=\operatorname{sgn}$ scoring system there will be a much higher proportion of coin-tossing agents and thus a greater effect on the game.
The general effect of the stochastic contribution to the MG is to break the pattern of behavior emergent from the deterministic part of the map. It is therefore of great interest to examine further what the dynamics of this deterministic behavior are. To do this we replace the stochastic term of Eq. \[Eq Attendance\] by its mean. The equation thus becomes ($A_{D}\left[
\underline{S},\mu\right] $ in Ref. [@Mike_TMG_Markov]): $$A\left[ \underline{S},\mu\right] =\sum_{R=1}^{2P}a_{R}^{\mu}\sum_{R^{\prime
}=1}^{2P}\left( 1+\operatorname{sgn}\left[ S_{R}-S_{R^{\prime}}\right]
\right) \underline{\underline{\Psi}}_{R,R^{\prime}}\label{Eq Det Attendance}$$ Physically this replacement is an averaging process; when $S_{R1}=S_{R2}$ we have half the agents who hold $\{R1,R2\}$ playing $R1$ and the other half playing $R2$ [@zero]. Equations \[Eq Score Update\],\[Eq Hist Update\] & \[Eq Det Attendance\] now define a deterministic map which replicates the behavior of the MG between perturbative events caused by the coin-tossing agents - we refer to this system as the ‘Deterministic Minority Game’ (DMG). We will now use this system to investigate the emergence of microscopic and macroscopic dynamics.
Disorder in $\underline{\underline{\Psi}}\label{Sec Disorder}$
==============================================================
The game is conditioned at the start with the initial state $\{\underline
{S}\left[ 0\right] ,\mu\left[ 0\right] \}$. It is also given a $\underline{\underline{\Psi}}$ tensor for a particular parameter set $N,m,s$. The game’s future behavior will be inherited from $\underline{\underline
{\Psi}}$; games with sparsely and densely filled tensors hence behave in entirely different ways. By assuming each entry of $\underline{\underline
{\Omega}}$ is an independent binomial sample $\underline{\underline{\Omega}}_{R1,R2}=\operatorname{bin}\left[ N,\frac{1}{\left( 2P\right) ^{s}}\right] $ we may categorize the disorder in the $\underline{\underline
{\Omega}}$ tensor by the standard deviation of an element divided by its mean size. For $s=2$, this gives $$\frac{\sigma\left[ \underline{\underline{\Omega}}_{R1,R2}\right] }{\mu\left[ \underline{\underline{\Omega}}_{R1,R2}\right] }=\sqrt
{\frac{\left( 2P\right) ^{2}-1}{N}}$$ which rapidly becomes large as $m$ increases. For low $m$ and high $N$, the game is said to be in an ‘efficient phase’ [@EconPhys_Website] where all states of the global information set $\mu$ are visited equally and hence, on average, there is no drift in the strategies’ scores i.e. $\left\langle
S_{R}\right\rangle _{t}=0$. In this regime, the disorder in the $\underline
{\underline{\Omega}}$ tensor is small and thus all elements are approximately of equal magnitude. This in turn implies that the dynamics of the game are dominated by the movement of $\underline{S}$ rather than by the asymmetry of $\underline{\underline{\Omega}}$. The attendance of the ($s=2 $) game here reduces to $$A\left[ \underline{S},\mu\right] \thickapprox\frac{N}{4P^{2}}\sum_{R=1}^{2P}a_{R}^{\mu}\sum_{R^{\prime}=1}^{2P}\operatorname{sgn}\left[
S_{R}-S_{R^{\prime}}\right] \label{Eq Flat Attendance}$$ The second sum in Eq. \[Eq Flat Attendance\] corresponds to a quantity $q_{R}$ which is based on the rank of strategy $R$; specifically $q_{R}=2P+1-2\rho_{R}$ where $\rho_{R}$ is the rank of strategy $R$, with $\rho_{R}=1$ being the highest scoring and $\rho_{R}=2P$ being the lowest scoring. Hence Eq. \[Eq Flat Attendance\] becomes $$A\left[ \underline{S},\mu\right] \thickapprox\frac{N}{4P^{2}}\underline
{a}^{\mu}\cdot\underline{q}\ \ \ .\label{Eq Attendance Approx}$$ We now examine the increment in strategy score, $\underline{\delta S}\left[
t\right] =\underline{S}\left[ t\right] -\underline{S}\left[ t-1\right] .$ For simplicity, we here assume the proportional scoring system of $\chi=1$. Hence $$\underline{\delta S}=-\underline{a}^{\mu}A\left[ \underline{S},\mu\right]
\thickapprox-\frac{N}{4P^{2}}\underline{a}^{\mu}\left( \underline{a}^{\mu
}\cdot\underline{q}\right) \ \ .$$ If we average over uniformly occurring states of $\mu$, we then have for each strategy $$\left\langle \delta S_{R}\right\rangle _{\mu}\thickapprox-\frac{N}{4P^{2}}\sum_{R^{\prime}=1}^{2P}\left\langle a_{R}^{\mu}a_{R^{\prime}}^{\mu
}\right\rangle _{\mu}q_{R^{\prime}}$$ We now use the orthogonality of strategies in the RSS: $\frac{1}{P}\sum_{\mu
}a_{R1}^{\mu}a_{R2}^{\mu}=\{0$ for $R1\neq R2,1$ for $R1=R2,-1$ for $R2=\overline{R1}\}$ . This yields $$\left\langle \delta S_{R}\right\rangle _{\mu}\thickapprox\frac{N}{2P^{2}}\left( \rho_{R}-\rho_{\overline{R}}\right) \label{Eq Score Inc Rho}$$ where $\overline{R}=2P+1-R$ is the anti-correlated strategy to $R$. Equation \[Eq Score Inc Rho\] now shows us explicitly that strategies and their anti-correlated partners attract each other in pairs. The magnitude of the score increment is also of interest; for low $m$ and high $N$ the attractive force is large, which will cause the strategies to overshoot each other and thus perform a constant cycle of swapping positions. As we increase $m$ or decrease $N$ the attractive force becomes weaker and so the score cycling adopts a longer time-period; it eventually becomes too weak to overcome the separate force arising from the asymmetry in $\underline{\underline{\Psi}}$. Hence the system moves away from the strongly mean-reverting behavior in $\underline{S}$.
We can investigate this change of regime further by examining $\left\langle
\delta S_{R}\right\rangle _{\mu}$ for finite disorder in $\underline
{\underline{\Psi}}$ [@efficient]. Again using the orthogonality of strategies in the RSS, we have $$\begin{aligned}
\left\langle \delta S_{R}\right\rangle _{\mu} & =\delta S_{R}^{bias}+\delta
S_{R}^{restoring}=\label{Eq Score Inc}\\
& -\sum_{R^{\prime}=1}^{2P}\left( \underline{\underline{\Psi}}_{R,R^{\prime
}}-\underline{\underline{\Psi}}_{\overline{R},R^{\prime}}\right) \\
& -\sum_{R^{\prime}=1}^{2P}\left( \operatorname{sgn}\left[ S_{R}-S_{R^{\prime}}\right] \underline{\underline{\Psi}}_{R,R^{\prime}}+\operatorname{sgn}\left[ S_{R}+S_{R^{\prime}}\right] \underline
{\underline{\Psi}}_{\overline{R},R^{\prime}}\right) \ \ .\nonumber\end{aligned}$$ Equation \[Eq Score Inc\] has two distinct contributions. The first term $\delta S_{R}^{bias}$ arises from disorder in $\underline{\underline{\Psi}}$ alone and is time-independent, representing a constant bias on the score increment. The second term $\delta S_{R}^{restoring}$ acts as a mean-reverting force on the strategy score; its magnitude depends on how many strategies lie between it and its anti-correlated partner (just as in Eq. \[Eq Score Inc Rho\]). Figure 1 illustrates this for a case where $S_{R}>0$; here the net contribution to $\delta S_{R}^{restoring}$ is likely to be negative as there are more contributing elements with a negative sign than with a positive sign. The strategies $R^{\prime}\ni-\left| S_{R}\right|
<S_{R^{\prime}}<\left| S_{R}\right| $ always contribute terms $-\operatorname{sgn}\left[ S_{R}\right] \left( \Psi_{R,R^{\prime}}+\Psi_{\overline{R},R^{\prime}}\right) $ to $\delta S_{R}^{restoring}$ and so will always act as a mean-reverting component. Terms from strategies outside this range will always be divided into equally sized positive and negative groups as shown in Fig. 1. These groups will on average cancel out each other’s effect on the score increment.
We can model the average magnitude of each term in Eq. \[Eq Score Inc\] by using the same binomial representation for the elements of $\underline
{\underline{\Omega}}$ as before. The mean magnitude of the bias and restoring force terms $\left\langle \left| \delta S_{R}^{bias}\right| \right\rangle
_{R}$ and $\left\langle \left\langle \left| \delta S_{R}^{restoring}\right|
\right\rangle _{S_{R}}\right\rangle _{R}$ are thus approximately given as follows: $$\begin{aligned}
\left\langle \left| \delta S_{R}^{bias}\right| \right\rangle _{R} &
\thickapprox\sqrt{\frac{N}{P\pi}\left( 1-\frac{1}{\left( 2P\right) ^{2}}\right) }\label{Eq Approx Inc Terms}\\
\left\langle \left\langle \left| \delta S_{R}^{restoring}\right|
\right\rangle _{S_{R}}\right\rangle _{R} & \thickapprox\frac{N\gamma}{4P^{2}}\ \ .\nonumber\end{aligned}$$ The term $\gamma$ enumerates the average net number of terms in $\delta
S_{R}^{restoring}$ that act to mean revert $S_{R}$ i.e. the excess number of terms with sign $-\operatorname{sgn}\left[ S_{R}\right] $. Averaged over the entire set of strategies, we have $\gamma=2P$. Figure 2 shows that our approximate form for the average strategy score bias in Eq. \[Eq Approx Inc Terms\] is extremely good over the entire range of $\alpha=P/N$ whereas the approximation of the restoring force term becomes progressively worse as $\alpha$ is increased. This effect can be explained in terms of the ‘market impact’ of a strategy. The greater the number of agents using a strategy $n_{R}$, the greater its contribution is to the attendance as can be seen from Eq. \[Eq Attendance Basic\]. As $n_{R}$ is increased above $n_{R^{\prime
}\neq R}$, the probability increases that the game outcome ($-\operatorname{sgn}\left[ A\right] $) is opposed to $a_{R}^{\mu}$ and hence that strategy $R$ is penalized. This effect will arise if the quenched disorder in $\underline{\underline{\Psi}}$ is such that more agents hold strategy $R$ than $R^{\prime}\neq R$. As $\alpha$ is raised and the quenched disorder in $\underline{\underline{\Psi}}$ grows, this effect will become increasingly important. Hence it can be seen that $\underline{\underline{\Psi
}}_{R,R^{\prime}}$ and $\left\{ S_{R},S_{R^{\prime}}\right\} $ are not independent as assumed in obtaining Eq. \[Eq Approx Inc Terms\], but are instead correlated through the effect of market impact; this correlation becomes more significant as $\alpha$ is increased.
The nature of the correlation between $\underline{\underline{\Psi}}_{R,R^{\prime}}$ and $\left\{ S_{R},S_{R^{\prime}}\right\} $ introduced by market impact is non-trivial in form as can be seen from Fig. 3. We will not discuss the details of an analytic reconstruction of $\underline
{\underline{\Psi}}_{\rho,\rho^{\prime}}$ here, but will instead simply note some straightforward constraints on its form. Let us take the approximation that on average the ranking of the strategies $\{\rho_{R}\}$ is given by the ranking of their bias terms $\{\delta S_{R}^{bias}\}$. This will be true *on average* for a system described by Eq. \[Eq Score Inc\]. We then use the approximation that $\delta S_{R}^{bias}\backsim N\left[ 0,\sqrt
{\frac{N}{2P}\left( 1-\frac{1}{4P^{2}}\right) }\right] $. Ordering the bias terms resulting from samples drawn from this distribution, gives us that $\underline{\underline{\Psi}}_{\rho,\rho^{\prime}}$ satisfies $$\operatorname*{Erf}\left[ \frac{\left\langle \delta S_{\rho}^{bias}\right\rangle }{\sqrt{\frac{N}{P}\left( 1-\frac{1}{4P^{2}}\right) }}\right]
=\frac{P-\rho}{P}$$ with $\delta S_{\rho}^{bias}$ given by $-\sum_{\rho^{\prime}=1}^{2P}\left(
\underline{\underline{\Psi}}_{\rho,\rho^{\prime}}-\underline{\underline{\Psi}}_{\overline{\rho},\rho^{\prime}}\right) $ as in Eq. \[Eq Score Inc\]. This relation gives us an indication of how the rank of a strategy is affected by its excess population, and is consistent with the form of $\underline
{\underline{\Psi}}_{\rho,\rho^{\prime}}$ as shown in Fig. 3. Note that in the absence of market impact we would not be able to write down any equation linking these parameters and Fig. 3 would be flat with no structure.
We have thus shown how market impact is profoundly manifest within the structure of the MG [@Challet-Zhang_Original-MG]. In particular, Fig. 2 shows clearly that consideration of market impact is necessary in the calculation of the transition point from efficient to inefficient regimes [@Challet-Zhang_Original-MG]. The game enters the inefficient regime if the magnitude of the bias term to the score increment (arising from disorder in $\underline{\underline{\Psi}}$) exceeds the magnitude of the restoring force term. We can calculate when *on average* strategies begin to drift by looking at when $\left\langle \left| \delta S_{R}^{bias}\right|
\right\rangle _{R}=\left\langle \left\langle \left| \delta S_{R}^{restoring}\right| \right\rangle _{S_{R}}\right\rangle _{R}$ in Eq. \[Eq Approx Inc Terms\]. This occurs near $\alpha=\alpha_{c}\thickapprox\frac{\pi
}{4}$. This over-estimation of the transition point (which numerically occurs in the DMG at around $\alpha_{c}=0.39$) could be corrected by taking into account the non-flat structure of $\underline{\underline{\Psi}}_{\rho
,\rho^{\prime}}$. We would like to stress here that only *on average* does there exist a specific point at which the game passes from mean-reverting to biased behavior (efficient to inefficient regime). Because the behavior of the game is dictated by the disorder in and not just by the specific parameters $N,m,s$ alone, a knowledge of $\alpha$ is not enough information to classify the game as being in either the efficient or inefficient regime. Therefore it seems arguable as to whether $\alpha_{c}$ is a ‘critical’ value for any particular realization of this system.
Equation \[Eq Score Inc\] can also yield insight into the dynamics in the regime past the transition point. We were able to predict from Eq. \[Eq Score Inc Rho\] that in the efficient regime, pairs of anti-correlated strategies would cycle around each other thus producing an ever changing score rank vector $\underline{\rho}$. In the inefficient regime wherein the strategy scores have appreciable bias, it would be natural to assume that $\underline{\rho}$ would rapidly find a steady state as the strategy scores diverged. This in fact does not happen; for example, consider the outermost pair of strategies in the score-space (i.e. the current best, and its anti-correlated partner the worst) at a point in the game. For these strategies, Eq. \[Eq Score Inc\] is given approximately by $$\left\langle \delta S_{R}\right\rangle _{\mu}\thickapprox-\sum_{R^{\prime}=1}^{2P}\left( \underline{\underline{\Psi}}_{R,R^{\prime}}-\underline
{\underline{\Psi}}_{\overline{R},R^{\prime}}\right) -\operatorname{sgn}\left[ S_{R}\right] \sum_{R^{\prime}=1}^{2P}\left( \underline
{\underline{\Psi}}_{R,R^{\prime}}+\underline{\underline{\Psi}}_{\overline
{R},R^{\prime}}\right) \ \ .$$ Irrespective of the disorder in $\underline{\underline{\Psi}}$, we have $\left| \delta S^{bias}\right| \lesssim\left| \delta S^{restoring}\right|
$. It is thus likely that this strategy pair attract each other until at least one other pair take their place as best/worst. This behavior will lead to a non-stationary $\underline{\rho}$-vector, even in this regime.
The present analysis has described general properties of the game such as the transition in behavior between efficient and inefficient regimes. It has also shown that dynamical processes such as the changing nature of $\underline
{\rho}$ can be quantitatively explained purely in terms of the quenched disorder in the strategy population tensor $\underline{\underline{\Omega}}$.
Dynamics in $\mu$-space\[Sec Mu Dynamics\]
==========================================
The previous section was concerned with the behavior of the strategy score vector $\underline{S}$, and often treated the dynamical variable $\mu$ as a random process to be averaged over. This however glosses over the subtle and very interesting dynamics of $\mu$ itself as dictated by Eq. \[Eq Hist Update\]. (References [@Challet_MG-Memory] and [@Mike_TMG_Markov] also consider aspects of $\mu$ dynamics). To aid in our discussion, we note that Eq. \[Eq Hist Update\] describes a trajectory along the edges of a directed de Bruijn graph $\operatorname{D}_{2}\left[ m\right] $. Fig. 4 shows an example of such a graph for $m=2$. As explained in the previous section, in the efficient regime $\underline{S}$ is strongly mean reverting. This implies that the set of states of the game $\left\{ \underline{S},\mu\right\} $ is finite. As the system is Markovian and deterministic, this in turn implies that it must exhibit periodic behavior in this regime as return to a past state would then be followed by the revisiting of the trajectory from that state. In the inefficient regime where the strategy scores are biased, the set of states $\left\{ \underline{S},\mu\right\} $ is unbounded and we may expect aperiodic behavior of the DMG.
We now examine the structure of the periodic behavior in the efficient regime. One observation from numerical simulations is that the period, i.e. the return time to any state $\left\{ \underline{S},\mu\right\} $, is found over many runs to be $T=2P$ for the $\chi=\operatorname*{sgn}$ scoring system, whereas for the $\chi=1$ system the period is much longer and run-dependent. This periodic behavior seems able to exist up to the point where the occurrence of zero attendance $A\left[ \underline{S},\mu\right] =0$ causes a stochastic perturbation to $\mu$ [@zero]; after this point we can no longer treat our system as deterministic. Such periodic behavior must satisfy the conditions $\left\{ \underline{\Delta S}_{cycle}=0,\underline{\Delta\mu}_{cycle}=0\right\} $. $T=2P$ is in fact the shortest possible period which satisfies these conditions. The two edges leading away from any vertex $\mu$ on the de Bruijn graph must necessarily incur score increments of opposite sign: $+\underline{a}^{\mu}\left| \chi\left[ A\right] \right| ,-\underline{a}^{\mu}\left| \chi\left[ A\right] \right| $ corresponding to positive and negative attendance respectively. The vectors $\underline{a}^{\mu1}$ and $\underline{a}^{\mu2\neq\mu1}$ are orthogonal; hence the only way that an increment to the score of $\underline{a}^{\mu\left[ t\right] }\chi\left[
A\left[ \underline{S}\left[ t\right] ,\mu\left[ t\right] \right]
\right] $ can be negated in order to achieve $\underline{\Delta S}_{cycle}=0$, is to return to that vertex (i.e. $\mu\left[ t^{\prime}\right]
=\mu\left[ t\right] $) a particular number of times such that $$\sum_{\left\{ t^{\prime}\right\} }\chi\left[ A\left[ \underline{S}\left[
t^{\prime}\right] ,\mu\left[ t\right] \right] \right]
=0\label{Eq Cycle Condition}$$ This condition must be satisfied at all vertices of the graph because the set $\left\{ t^{\prime}\right\} $ which satisfies Eq. \[Eq Cycle Condition\] must have a minimum of two entries (each of opposite attendance) thereby leading the game to different, new vertices until all are spanned.
Consider the $\chi=\operatorname*{sgn}$ scoring system. The condition corresponding to Eq. \[Eq Cycle Condition\] is easily satisfied at each vertex with a set $\left\{ t^{\prime}\right\} $ of exactly $2\lambda$ entries, $\lambda$ being an integer. We now have the situation where all edges of the graph are visited equally. The shortest way of doing this is with $\lambda=1$; this cycle is known as an ‘Eulerian trail’. This dynamical stable state of the game acts as an attractor; the MG in the efficient phase will rapidly find this state after undergoing a stochastic perturbation. We note that the Time-Horizon Minority Game [@Mike_TMG_Markov] exhibits similar behavior for special values of the time horizon $\tau$. This trajectory of the DMG along a Eulerian trail corresponds to the occurrence of perfect anti-persistence in the $[A|\mu]$ time series. This anti-persistence has been empirically observed in many studies of the MG [@Challet-Zhang_Original-MG; @Savit_Original-MG; @Zheng_EfficientStats].
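The number of such Eulerian trails on $\operatorname{D}_{2}\left[ m\right]$ is small (two for $m=2$; cf. the caption of Fig. 4), and for small $m$ it can be checked directly by brute-force backtracking, as in the following sketch.

```python
def count_eulerian_trails(m):
    """Count closed Eulerian trails of the directed de Bruijn graph D_2[m] by backtracking
    (exhaustive, so only practical for small m)."""
    P = 2 ** m
    edges = [(mu, (2 * mu) % P + b) for mu in range(P) for b in (0, 1)]

    def walk(v, used):
        if all(used):
            return 1 if v == 0 else 0        # closed trail back at the start vertex
        total = 0
        for i, (a, b) in enumerate(edges):
            if a == v and not used[i]:
                used[i] = True
                total += walk(b, used)
                used[i] = False
        return total

    # each circuit passes through vertex 0 twice, so it is found once per outgoing
    # edge of the start vertex: divide by the out-degree 2
    return walk(0, [False] * len(edges)) // 2

print(count_eulerian_trails(2))   # 2
print(count_eulerian_trails(3))   # 16 = 2^{2^m}/2^{m+1} for m = 3
```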
Now consider the $\chi=1$ scoring system. The condition of Eq. \[Eq Cycle Condition\] is very much harder to achieve over all vertices as the dynamics of $A$ are incorporated back into the score vector making the set $\left\{ \underline{S},\mu\right\} $ very much larger. This explains the very much longer period of this game which, even over very long time windows, can appear aperiodic. The Eulerian trail will still however be an attractor to the dynamics within $\mu$-space, since the anti-persistence in $[A|\mu]$ is still strong (in the efficient phase). It is not however perfect as was the case for the DMG using the $\chi=\operatorname*{sgn}$ scoring system.
To quantitatively explain this anti-persistence, we make the following approximation: $$\operatorname*{sgn}\left[ A\right] \thickapprox\operatorname*{sgn}\left[
\underline{a}\cdot\underline{S}\right] \ \ .\label{Eq Sgn A Approx}$$ This approximation can be understood by referring back to Eq. \[Eq Attendance Approx\] where $\underline{S}$ now plays the same role as the rank-measure $\underline{q}$. It is valid for the regime where the strategy scores are densely spaced, i.e. for the efficient regime/low disorder in $\underline{\underline{\Psi}}$. Consider the $\chi=\operatorname*{sgn}$ scoring system wherein the score vector is simply given by $\underline
{S}\left[ t\right] =\underline{S}\left[ 0\right] -\sum_{j=1}^{t-1}\operatorname*{sgn}\left[ A\left[ j\right] \right] \underline
{a}^{\mu\left[ j\right] }$. We use the fact that the vectors $\underline{a}^{\mu1}$ and $\underline{a}^{\mu2\neq\mu1}$ are orthogonal to transform Eq. \[Eq Sgn A Approx\] to the following form: $$\operatorname*{sgn}\left[ A\left[ t\right] \right] \thickapprox
\operatorname*{sgn}\left[ \underline{a}^{\mu}\cdot\underline{S}\left[
0\right] -2P\sum_{\left\{ t^{\prime}\right\} }\operatorname*{sgn}\left[
A\left[ t^{\prime}\right] \right] \right] \label{Eq Antipersistence}$$ where we recall that the set of times $\left\{ t^{\prime}\right\} $ are such that $\mu\left[ t^{\prime}\right] =\mu\left[ t\right] =\mu$ for $0<t^{\prime}<t$. This dynamical process occurring over times $t^{\prime}$ rather than $t$ is zero-reverting. Let us demonstrate this by taking an example. Let $P=4$ and the initial strategy score be such that $\underline
{a}^{\mu}\cdot\underline{S}\left[ 0\right] =20$. The time-series of $\operatorname*{sgn}\left[ A\left[ t\right] \right] $ thus becomes as shown in Table 2. Hence the game cascades from its initial state, the attendance at a given vertex of the de Bruijn graph ($[A|\mu]$) exhibiting persistent behavior until a point is reached such that $\left| \underline
{a}^{\mu}\cdot\underline{S}\left[ 0\right] -2P\sum_{\left\{ t^{\prime
}\right\} }\operatorname*{sgn}\left[ A\left[ t^{\prime}\right] \right]
\right| <2P$. Subsequently the attendance $[A|\mu]$ becomes perfectly *anti*-persistent. When this anti-persistence occurs at each vertex, the game has locked into one of the $2^{2^{m}}/2^{m+1}$ Eulerian trails. The analysis above can be generalized for different scoring systems (such as $\chi=1$) where in general it is found that the game exhibits strong but not perfect anti-persistence in $[A|\mu]$ in this regime.
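The cascade described by Eq. \[Eq Antipersistence\] and Table 2 is easily reproduced. The short snippet below iterates the sign recursion at a single vertex for the example values $P=4$ and $\underline{a}^{\mu}\cdot\underline{S}\left[ 0\right] =20$, showing the initial persistent run followed by perfect anti-persistence.

```python
def vertex_sign_series(a_dot_S0=20, P=4, steps=10):
    """Iterate sgn[A[t]] = sgn[a.S[0] - 2P * (sum of previous signs at this vertex)]."""
    signs = []
    for _ in range(steps):
        arg = a_dot_S0 - 2 * P * sum(signs)
        signs.append(1 if arg > 0 else -1)
    return signs

print(vertex_sign_series())   # [1, 1, 1, -1, 1, -1, 1, -1, 1, -1]
```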
In the analysis above we introduced the effect of the initial condition on the score vector $\underline{S}\left[ 0\right] $ (see also Ref. [@Challet_StationaryStates]). However, we could just as correctly view $\underline{S}\left[ 0\right] $ as the current state, left by some other game process such as a shock to the system, a build up from some other game mechanism or a stochastic perturbation. It is therefore interesting to examine how the DMG evolves after a given state $\left\{ \underline{S}\left[
0\right] ,\mu\left[ 0\right] \right\} $ is imposed. The ‘initial’ condition $\underline{S}\left[ 0\right] $ must obey the form $S_{R}=-S_{\overline{R}}$; this is to ensure that a priori no strategies are given a bias. It would be unphysical to break this rule; strategy $R$ always loses the same number of points as its anti-correlated partner $\overline{R}$ gains in any reasonable physical mechanism. We expect that if the elements $S_{R}\left[ 0\right] $ have magnitude less than $2P$ then the system will very quickly lock into the Eulerian trail trajectory and visit all $\mu
$-states equally. However, if the elements $\left| S_{R}\left[ 0\right]
\right| \gg2P$ then Eq. \[Eq Antipersistence\] predicts that there will be persistence in $[A|\mu]$ until the dynamical stable state is found. This persistence in trajectory at each node of the de Bruijn graph will lead to the game visiting only a small subset of the vertices on the graph unlike in the stable-state situation. This reduced cycling effect may lead to a bias in the attendance over a significant period of time, i.e. a ‘crash’ or ‘rally’.
We now demonstrate the recovery of the DMG from a randomly chosen initial score vector $\underline{S}\left[ 0\right] $. We take a system with low disorder in $\underline{\underline{\Psi}}$ and $m=2$ (such that $2P=8$). However we draw $S_{R}\left[ 0\right] $ from a much wider uniform distribution, spanning $-100...100$. (Note we maintain $S_{R}=-S_{\overline
{R}}$ as required). Figure 5 shows the evolution of the game out of this state. The initial condition is soon ‘worked out’ of the system - it rapidly finds the Eulerian cycle $\mu=0,0,1,3,3,2,1,2,..$ after only 174 turns. As can also be seen, the game adopts several different types of cycle on its way towards this stable state. The switch between cycle types occurs as each vertex snaps from persistent to anti-persistent behavior.
We have hence discussed and explained the dynamics of the stable state, and how the system enters that state from an initial or perturbed state. This analysis has been for the system in the efficient phase where the quenched disorder of $\underline{\underline{\Psi}}$ is low. The inefficient regime will in general show a different set of dynamics. As discussed earlier, the inefficient phase is characterized by score vectors which have an appreciable drift; this is an effect of the disorder in $\underline{\underline{\Psi}}$. The corresponding unbounded $\underline{S}\left[ t\right] $ vector leads to an unbounded set of states for the system $\left\{ \underline{S},\mu\right\}
$. This suggests that the overall dynamics may be aperiodic, i.e. the system never returns to a past state. We can however say something about the nature of the resulting dynamics in $\mu$-space. As the score vector diverges the score rank vector $\underline{\rho}$ becomes more well defined (although not completely stationary in time, as mentioned in Sec. \[Sec Disorder\]). This is tantamount to there being a certain degree of persistence in the attendance at a vertex $[A|\mu]$. This will lead to the motion around the de Bruijn graph being limited to a certain sub-space, just as that described above for the recovery from an initial score vector $\underline{S}\left[ 0\right] $. This difference in the dynamics for the efficient and inefficient regimes leads to the well-documented result that the occurrence of different $m+1$ bit words is even in the efficient regime but very uneven in the inefficient regime [@Savit_Original-MG].
Conclusion
==========
The results in this paper confirm that the MG can be usefully viewed as a stochastically perturbed deterministic system, and that this deterministic system can be described concisely by coupled mapping equations (Eqs. \[Eq Score Update\], \[Eq Hist Update\] and \[Eq Det Attendance\]). We used this system to explore the dynamics of the score vector $\underline{S}\left[
t\right] $. We showed that the score increment comprises a bias and restoring-force term, the comparative magnitude of these terms being governed by the disorder in the strategy population tensor $\underline{\underline
{\Omega}}$. Furthermore, we were able to obtain analytic approximations for the bias and restoring force terms. We showed how the market-impact effect correlated the strategy population to the score vector and how this then affected our approximations.
We also discussed the dynamics of the global information $\mu\left[ t\right]
$ as a trajectory on a de Bruijn graph. We were able to show that in the efficient regime the system would be periodic and that the favored periodic trajectory was that of an Eulerian Trail. Analytically we were able to demonstrate how anti-persistence and persistence arise in the attendance at a vertex $[A|\mu]$, and how this would manifest itself in efficient and inefficient regimes either in response to a perturbed state or an initial condition of $\underline{S}\left[ 0\right] $.
We are grateful to A. Short and P.M. Hui for useful discussions and comments.
[99]{} D. Challet and Y.C. Zhang, Physica A **246**, 407 (1997); *ibid.* **269**, 30 (1999); D. Challet and M. Marsili, Phys. Rev. E **60**, R6271 (1999); D. Challet, M. Marsili, and R. Zecchina, Phys. Rev. Lett. **84**, 1824 (2000); M Marsili, D. Challet and R. Zecchina, cond-mat/9908480.
See http://www.unifr.ch/econophysics for a detailed account of the Minority Game literature.
D. Challet and Y.C. Zhang, Physica A **256**, 514 (1998).
M. Hart, P. Jefferies, N.F. Johnson and P.M. Hui, cond-mat/0008385 (to appear in Eur. Phys. J. B, 2001).
M. Hart, P. Jefferies, N.F. Johnson and P.M. Hui, cond-mat/0005152; cond-mat/0003486; N.F. Johnson, P.M. Hui, D. Zheng and M. Hart, J. Phys. A: Math. Gen. **32** L427 (1999); N.F. Johnson, M. Hart and P.M. Hui, Physica A **269**, 1 (1999).
R. Savit, R. Manuca and R. Riolo, Phys. Rev. Lett. **82**, 2203 (1999); Physica A **276**, 234 (2000); *ibid.* 265 (2000). See also Y. Li, A. VanDeemen and R. Savit, nlin.AO/0002004. This group’s work shows that similar statistical properties arise for different payoff functions. This can be explained using the crowd-anticrowd theory of Ref. \[4,5\], since the probability function $P(r^{\prime}={\bar{r}})$ has a similar form for a wide range of payoff functions \[P.M. Hui et. al. (unpublished)\].
A. Cavagna, J.P. Garrahan, I. Giardina and D. Sherrington, Phys. Rev. Lett. **83**, 4429 (1999); J.P. Garrahan, E. Moro and D. Sherrington, cond-mat/0012269.
M. Hart, P. Jefferies, N.F. Johnson and P.M. Hui, Phys. Rev. E. **63**, 017102 (2001); P. Jefferies, M. Hart, N.F. Johnson and P.M. Hui, J. Phys. A: Math. Gen. **33** L409 (2000).
M. Marsili and D. Challet, cond-mat/0004196.
A. Cavagna, Phys. Rev. E **59**, R3783 (1999).
J. A. F. Heimel and A. C. C. Coolen, cond-mat/0012045.
M.L. Hart and P. Jefferies and N.F. Johnson, cond-mat/0102384.
M. Marsili and D. Challet, cond-mat/0102257.
D. Zheng and B.H. Wang, cond-mat/0101225.
The expression $\underline{\underline{\Omega}}+\underline{\underline{\Omega}}^{\text{T}}$ double-counts the diagonal terms of $\underline{\underline{\Omega}}$. This property is compensated for by the form of Eq. \[Eq Attendance\].
Partitioning the number of agents with tied strategies in this way, may lead to non-integer $n_{R}$. In turn a zero attendance is possible even from an odd number of agents. In these (rare) occurrences we assign $A=\pm1$ randomly. This break in the determinism of the system is of particular relevance to the discussion in Sec. \[Sec Mu Dynamics\].
As the game is moved away from the efficient regime wherein states of $\mu$ are visited equally, averaging over $\mu$ should strictly become invalid. However some average properties of the game are reasonably insensitive to the replacement of the $\mu$-process of Eq. \[Eq Hist Update\] by a uniform random process. See Ref. [@Challet_MG-Memory; @Cavagna_RandomHist] for a discussion of the validity of random $\mu$.
**TABLES**
m & $\chi=\operatorname{sgn}$ & $\chi=1$\
2 & $7.2\pm4.2$ & $0.7\pm0.6$\
3 & $6.3\pm3.0$ & $2.4\pm0.8$\
4 & $9.4\pm2.1$ & $3.4\pm0.8$\
TABLE 1. Percentage of time-steps in which the minority room is changed by the stochastic decision of agents with tied strategies. Percentages are shown for the digital and proportional payoffs. Statistics obtained from 16 numerical runs of the MG with $N=101$, $s=2$, and over 1000 time-steps.
$\underline{a}^{\mu}\cdot\underline{S}\left[ 0\right] -2P\sum_{\left\{
t^{\prime}\right\} }\operatorname*{sgn}\left[ A\left[ t^{\prime}\right]
\right] $ & $\operatorname*{sgn}\left[ A\left[ t\right] \right] $\
$20-8\times\left( 0\right) =20$ & $1$\
$20-8\times\left( 0+1\right) =12$ & $1$\
$20-8\times\left( 0+1+1\right) =4$ & $1$\
$20-8\times\left( 0+1+1+1\right) =-4$ & $-1$\
$20-8\times\left( 0+1+1+1-1\right) =4$ & $1$\
$20-8\times\left( 0+1+1+1-1+1\right) =-4$ & $-1$\
TABLE 2. An example of how the game cascades from an initial state (c.f. Eq. \[Eq Antipersistence\]). Here $P=4$ and $\underline{a}^{\mu}\cdot
\underline{S}\left[ 0\right] =20$. The attendance (right column) exhibits persistent, and then anti-persistent, behavior.
**FIGURE CAPTIONS**
FIG. 1. Schematic representation of the signs of contributing terms to $\delta
S^{restoring}$.
FIG. 2. Numerical and approximate analytical magnitude of average score increment terms $\left\langle \left| \delta S_{R}^{bias}\right|
\right\rangle _{R}$ and $\left\langle \left\langle \left| \delta
S_{R}^{restoring}\right| \right\rangle _{S_{R}}\right\rangle _{R}$.
FIG. 3. Contour plot of $\left\langle \underline{\underline{\Psi}}_{\rho
,\rho^{\prime}}\right\rangle $, i.e. an average of the strategy population tensor re-ordered each turn with strategies running from highest to lowest score (top to bottom and left to right). Black areas indicate low population and white areas indicate high population. The averaging is carried out over 50 runs (different $\underline{\underline{\Omega}}$) and 1000 turns within each run. MG game parameters $\alpha=0.32$, $s=2$.
FIG. 4. De Bruijn graph $\operatorname{D}_{2}\left[ 2\right] $ corresponding to $m=2$. Vertices are labelled with the state $\mu$, edges are labelled with the quantity $\underline{\delta S}/\left| \chi\left[ A\right] \right| $. The dotted line shows one of the two possible Eulerian trails of this graph.
FIG. 5. An example of the convergence of the DMG onto the Eulerian trail attractor. Top graph shows the dynamics in the global information $\mu\left[
t\right] $. Bottom graph shows the dynamics in score $S_{R}\left[ t\right]
$ for $1\leqslant R\leqslant4$ (out of $2P=8$). Game locks into attractor at turn 174. Game parameters $N=101$, $m=2$, $s=2$.
|
---
abstract: 'Finding universal conditions that describe the freeze-out parameters has been the subject of various phenomenological studies. In the present work, we introduce a new condition based on a constant trace anomaly (or interaction measure) calculated in the hadron resonance gas (HRG) model. Various extensions to the [*ideal*]{} HRG, which are conjectured to take into consideration different types of interactions, have been analysed. When comparing HRG thermodynamics to that of lattice quantum chromodynamics, we conclude that the hard-core radii are practically irrelevant, especially when the HRG includes all resonances with masses less than $2~$GeV. It is found that the constant trace anomaly (or interaction measure) agrees well with most of the previous conditions.'
author:
- 'A. Tawfik'
title: 'Constant Trace Anomaly as a Universal Condition for the Chemical Freeze-Out'
---
Introduction
============
The QCD trace anomaly, which is defined as $(\epsilon-3 p)/T^4$, is also known as the interaction measure. This quantity can be derived from the trace of the energy-momentum tensor, $T_{\mu}^{\mu}=\epsilon-3 p$, and is conjectured to be sensitive to the presence of massive hadronic states. For instance, for a non-interacting hadron gas at vanishing chemical potential $$\frac{\epsilon-3p}{T^4} \;=\; \frac{1}{2\pi^2}\int_0^{\infty} dm\; \rho(m)\, \left(\frac{m}{T}\right)^3 K_1\!\left(\frac{m}{T}\right), \label{eq:Imuel1}$$ where $\rho(m)$ is the mass spectrum [@hgdrn] which relates the number of hadronic resonances to their masses as an exponential. In general, all thermodynamic quantities are sensitive to $\rho(m)$. For the hadronic resonances which have not yet been measured in experiments, a parametrization for the total spectral weight has been introduced [@brnt]. In the present work, we only include known resonance states with mass $\leq 2~$GeV instead of the Hagedorn mass spectrum [@hgdrn]. In the classical limit, the trace anomaly of a single species at finite chemical potential $\mu$ reads $$\frac{\epsilon-3p}{T^4} \;\simeq\; \frac{g}{2\pi^2}\, e^{\mu/T}\, \left(\frac{m}{T}\right)^3 K_1\!\left(\frac{m}{T}\right).$$ In addition to these aspects, there are other reasons that speak for utilizing even the [*ideal*]{} hadron resonance gas (HRG) model in predicting the hadron abundances and their thermodynamics. As will be discussed, the HRG model seems to provide a good description for the thermal evolution of the thermodynamic quantities in the hadronic matter [@Tawfik:2004sw; @Karsch:2003vd; @Karsch:2003zq; @Redlich:2004gp; @Tawfik:2004vv; @Tawfik:2006yq; @Tawfik:2010uh; @Tawfik:2010pt; @Tawfik:2012zz]. Furthermore, the HRG model has been successfully utilized in characterizing two different conditions generating the chemical freeze-out at finite densities, namely $s/T^3=7$ [@sT3p1; @sT3p2; @Tawfik:2005qn; @Tawfik:2004ss] and $\kappa\, \sigma^2=0$ [@Tawfik:2013dba]. As introduced in Ref. [@Tawfik:2004ss], constant $s/T^3$ is accompanied with constant $s/n$. Recently, the HRG has been used to calculate the higher order moments of particle multiplicity using a grand canonical partition function of an ideal gas with all experimentally observed states up to a certain large mass as constituents [@Tawfik:2012si]. The grand canonical ensemble includes two important features [@Tawfik:2004sw]; the kinetic energies and the summation over all degrees of freedom and energies of the resonances. On the other hand, it is known that the formation of resonances can only be achieved through strong interactions [@hgdrn]; [*Resonances (fireballs) are composed of further resonances (fireballs), which in turn consist of resonances (fireballs) and so on*]{}. In other words, the contributions of the hadron resonances to the partition function are the same as those of free particles with some effective mass. At temperatures comparable to the resonance half-width, the effective mass approaches the physical one [@Tawfik:2004sw]. Thus, at high temperatures, the strong interactions are conjectured to be taken into consideration through including heavy resonances. It is found that the hadron resonances with masses up to $2\;$GeV include a suitable set of constituents needed for the partition function [@Karsch:2003vd; @Karsch:2003zq; @Redlich:2004gp; @Tawfik:2004sw; @Tawfik:2004vv; @Tawfik:2006yq; @Tawfik:2010uh; @Tawfik:2010pt; @Tawfik:2012zz]. In such a way, the singularity expected at the Hagedorn temperature [@Karsch:2003zq; @Karsch:2003vd] can be avoided, while the strong interactions are still assumed to be taken into consideration. In light of this, the validity of the HRG is limited to temperatures below the critical one, $T_c$.
An extensive discussion on this point will be elaborated in section \[sec:extn\].
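To make the role of the heavy states concrete, the following minimal Python sketch evaluates the Boltzmann-limit trace anomaly of Eq. (\[eq:Imuel1\]) for a short, purely illustrative list of resonances and degeneracies; the particle list, masses and degeneracies below are placeholders, not the full table of known states with masses $\leq 2~$GeV used in this work.

```python
import numpy as np
from scipy.special import kn  # modified Bessel function of the second kind, integer order

# Purely illustrative subset of the hadron spectrum: (name, mass in GeV, degeneracy g).
# The calculations in the text use all known states with m <= 2 GeV.
resonances = [
    ("pi",    0.138,  3),
    ("K",     0.496,  4),
    ("eta",   0.548,  1),
    ("rho",   0.775,  9),
    ("omega", 0.783,  3),
    ("N",     0.939,  8),
    ("Delta", 1.232, 32),
]

def trace_anomaly(T):
    """(epsilon - 3p)/T^4 of a non-interacting hadron gas in the Boltzmann
    approximation at mu = 0: sum_i g_i/(2 pi^2) (m_i/T)^3 K_1(m_i/T)."""
    return sum(g / (2.0 * np.pi**2) * (m / T) ** 3 * kn(1, m / T)
               for _, m, g in resonances)

for T in (0.10, 0.14, 0.16):  # temperatures in GeV
    print(f"T = {1000*T:.0f} MeV:  (e-3p)/T^4 = {trace_anomaly(T):.3f}")
```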
In high energy experiments, the produced particles and their correlations are conjectured to provide information about the nature, composition and size of the medium from which they are originating. To determine the freeze-out parameters at various center-of-mass energies $\sqrt{s_{NN}}$, the particle yields are analysed in terms of temperature $T$ and baryon chemical potential $\mu$. The baryon density is related to the chemical potential, which is given by the nucleon stopping in the collision region. Furthermore, the chemical freeze-out is defined as the stage in the evolution of the hadronic system when inelastic collisions entirely cease and the relative particle ratios become fixed. Both $T$ and $\mu$ can be related to $\sqrt{s_{NN}}$ [@jean2006]. In the present work, we introduce constant trace anomaly (or constant interaction measure) to re-estimate the available sets of $T-\mu$ that were deduced from the different experiments, Fig. \[fig:e3p\].
The model is introduced in section \[sec:model\]. In section \[sec:extn\], we discuss different extensions to the ideal hadron gas. Section \[sec:rslt\] is devoted to introducing the novel condition describing the freeze-out parameters and their dependence on $\sqrt{s_{NN}}$. The physics of the constant trace anomaly is presented in section \[sec:phys\]. Other conditions for the chemical freeze-out are reviewed in section \[sec:others\]. The conclusions are outlined in section \[sec:conc\].
The Hadron Resonance Gas Model {#sec:model}
==============================
The hadron resonances treated as a free gas [@Karsch:2003vd; @Karsch:2003zq; @Redlich:2004gp; @Tawfik:2004sw; @Tawfik:2004vv] are conjectured to add to the thermodynamic pressure in the hadronic phase (below $T_c$). This statement is valid for free as well as for strongly interacting resonances. It has been shown that the thermodynamics of a strongly interacting system can also be approximated by an ideal gas composed of hadron resonances with masses $\le 2~$GeV [@Tawfik:2004sw; @Vunog]. Therefore, the confined phase of QCD, the hadronic phase, is modelled as a non-interacting gas of resonances. The grand canonical partition function reads $$Z(T, \mu, V) \;=\; {\rm Tr}\left[\exp\!\left(\frac{\mu\,N-H}{T}\right)\right],$$ where $H$ is the Hamiltonian of the system, $N$ is the number operator and $T$ ($\mu$) is the temperature (chemical potential). The Hamiltonian is given by the sum of the kinetic energies of relativistic Fermi and Bose particles. The main motivation of using this Hamiltonian is that it contains all relevant degrees of freedom of confined and [*strongly interacting*]{} matter. It includes implicitly the interactions that result in the formation of resonances. In addition, it has been shown that this model can offer a quite satisfactory description of the particle production in heavy-ion collisions. With the above assumptions, the partition function can be calculated exactly and expressed as a sum over [*single-particle partition*]{} functions $Z_i^1$ of all hadrons and their resonances, $$\ln Z(T, \mu_i ,V) \;=\; \sum_i \ln Z^1_i(T,V) \;=\; \sum_i \pm\frac{V\,g_i}{2\pi^2}\int_0^{\infty} k^2\, dk \, \ln\left\{1 \pm \exp\left[\frac{\mu_i -\varepsilon_i(k)}{T}\right]\right\}, \label{eq:lnz1}$$ where $\varepsilon_i(k)=(k^2+ m_i^2)^{1/2}$ is the $i$-th particle dispersion relation, $g_i$ is the spin-isospin degeneracy factor and $\pm$ stands for bosons and fermions, respectively.
Before the discovery of QCD, a probable phase transition of a massless pion gas to a new phase of matter was speculated [@lsm1]. Based on statistical models like the Hagedorn model [@hgdrn1] and the statistical Bootstrap model [@boots1; @boots2], the thermodynamics of such an ideal pion gas has been studied extensively. After QCD, the new phase of matter is known as the quark-gluon plasma (QGP). The physical picture was that at $T_c$ the additional degrees of freedom carried by the QGP are released, resulting in an increase in thermodynamic quantities like the energy and pressure densities. The success of the HRG model in reproducing lattice QCD results at various quark flavors and masses (below $T_c$) changed this physical picture drastically. Instead of releasing additional degrees of freedom at $T>T_c$, the hadronic system reduces its effective degrees of freedom at $T<T_c$. In other words, the hadron gas has more degrees of freedom than the QGP.
At finite temperature $T$ and baryon chemical potential $\mu_i $, the pressure of the $i$-th hadron or resonance species reads $$\label{eq:prss}
p(T,\mu_i ) = \pm \frac{g_i}{2\pi^2}T \int_{0}^{\infty}\,
k^2\, dk \ln\left\{1 \pm \exp[(\mu_i -\varepsilon_i)/T]\right\}.$$ As no phase transition is conjectured in the HRG model, summing over all hadron resonances results in the final thermodynamic pressure (of the hadronic phase).
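For orientation, a direct numerical evaluation of Eq. (\[eq:prss\]) for a single species might look as follows. This is only a sketch (units $\hbar=c=k_B=1$, energies in GeV), with the momentum integral cut off at a finite upper limit rather than infinity; the example masses and degeneracies are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def pressure(T, mu, m, g, fermion):
    """Pressure of a single species from Eq. (prss):
    p = +/- g/(2 pi^2) T int_0^inf k^2 dk ln{1 +/- exp[(mu - eps_k)/T]},
    with eps_k = sqrt(k^2 + m^2); upper sign for fermions, lower for bosons."""
    s = 1.0 if fermion else -1.0
    integrand = lambda k: s * k * k * np.log1p(s * np.exp((mu - np.sqrt(k * k + m * m)) / T))
    val, _ = quad(integrand, 0.0, 30.0 * T + 10.0 * m)  # finite cutoff instead of infinity
    return g / (2.0 * np.pi**2) * T * val

# example: pions (boson, g = 3) and nucleons (fermion, g = 4) at T = 150 MeV, mu = 0
print(pressure(0.150, 0.0, 0.138, 3, fermion=False))
print(pressure(0.150, 0.0, 0.939, 4, fermion=True))
```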
The switching between hadron and quark chemistry is given by the relations between the [*hadronic*]{} chemical potentials and the quark constituents; $\mu_i =3\, n_b\, \mu_q + n_s\, \mu_S$, with $n_b$($n_s$) being baryon (strange) quantum number. The chemical potential assigned to the light quarks is given as $\mu_q=(\mu_u+\mu_d)/2$ and the one assigned to strange quark reads $\mu_S=\mu_q-\mu_s$. It is worthwhile to notice that the strangeness chemical potential $\mu_S$ should be calculated as a function of $T$ and $\mu_i $. In doing this, it is assumed that the overall strange quantum number has to remain conserved in the heavy-ion collisions [@Tawfik:2004sw].
Extensions to the ideal hadron gas: hadron interactions {#sec:extn}
-------------------------------------------------------
In the literature, three types of interactions can be implemented into the [*ideal*]{} hadron gas. The repulsive interactions between hadrons are considered as a phenomenological extension, which would be exclusively based on the van der Waals excluded volume [@exclV1; @exclV2; @exclV3; @exclV4]. Accordingly, considerable modifications in the thermodynamics of the hadron gas, including energy, entropy and number densities, are likely. There are intensive theoretical works devoted to estimating the excluded volume and its effects on the particle production and fluctuations [@exclV5], for instance. It is conjectured that the hard-core radius of hadrons can be related to the multiplicity fluctuations [@exclV6]. Assuming that hadrons are spheres and all have the same radius, we compare different radii in Fig. \[fig:vdW\]. On the other hand, the assumption that the radii would depend on the hadron masses and sizes could come up with only a very small improvement.
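For illustration, the thermodynamically consistent implementation of the excluded volume used in Refs. [@exclV1; @exclV2; @exclV3; @exclV4], $p^{\rm excl}(T,\mu)=p^{\rm id}(T,\mu-v\,p^{\rm excl})$ with eigenvolume $v=\tfrac{16}{3}\pi r^3$, can be solved by a simple fixed-point iteration. The sketch below does this for a single species in the Boltzmann limit; it is meant only to illustrate the procedure, not to reproduce the multi-component calculation behind Fig. \[fig:vdW\], and the example values are placeholders.

```python
import numpy as np
from scipy.special import kn

HBARC = 0.19733  # GeV fm

def p_ideal(T, mu, m, g):
    """Boltzmann-limit ideal-gas pressure of one species (GeV^4)."""
    return g / (2.0 * np.pi**2) * T**2 * m**2 * kn(2, m / T) * np.exp(mu / T)

def p_excluded_volume(T, mu, m, g, r_fm, tol=1e-12, it_max=500):
    """Thermodynamically consistent excluded-volume pressure,
    p_ev = p_id(T, mu - v * p_ev), solved by fixed-point iteration;
    v = (16/3) pi r^3 is the van der Waals eigenvolume of a hard sphere of radius r (in fm)."""
    v = (16.0 / 3.0) * np.pi * (r_fm / HBARC) ** 3  # eigenvolume in GeV^-3
    p = p_ideal(T, mu, m, g)                         # starting guess
    for _ in range(it_max):
        p_new = p_ideal(T, mu - v * p, m, g)
        if abs(p_new - p) < tol:
            break
        p = p_new
    return p

# example: nucleons at T = 160 MeV, mu_b = 0, hard-core radius r = 0.3 fm
print(p_ideal(0.160, 0.0, 0.939, 4), p_excluded_volume(0.160, 0.0, 0.939, 4, 0.3))
```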
The first-principle lattice QCD simulations for various thermodynamic quantities offer an essential framework to check the ability of the extended [*ideal*]{} hadron gas, in which the excluded volume is taken into consideration [@apj], to describe the hadronic matter in a thermal and dense medium. Figure \[fig:vdW\] compares the normalized energy density and trace anomaly as calculated in lattice QCD and in the HRG model. The symbols with error bars represent the lattice QCD simulations for $2+1$ quark flavors with physical quark masses in the continuum limit, i.e. vanishing lattice spacing [@latFodor]. The curves are the HRG calculations at different hard-core radii of hadron resonances, $r$. We note that increasing the hard-core radius reduces the ability to reproduce the lattice QCD results. Two remarks are now in order. At $0\leq r<0.2~$fm, the ability of the HRG model to reproduce the lattice energy density or trace anomaly is apparently very high. Furthermore, we note that varying $r$ in this region has almost no effect, i.e., the three radii $r=[0.0,0.1,0.2]~$fm give almost the same results. At $r>0.2~$fm, the disagreement becomes obvious and increases with increasing $r$. At higher temperatures, the resulting thermodynamic quantities, for instance the energy density and trace anomaly, become [*non*]{}-physical. For example, the energy density and trace anomaly nearly tend toward vanishing. So far, we conclude that the excluded volume is practically irrelevant. It comes up with a negligible effect at $r\leq 0.2~$fm. On the other hand, a remarkable deviation from the lattice QCD calculations appears, especially when relatively large values are assigned to $r$. In this regard, it has to be taken into consideration that the excluded volume itself is conjectured to assure the thermodynamic consistency in the HRG model [@exclV1; @exclV2; @exclV3; @exclV4; @exclV5; @exclV6].
It is obvious that the thermodynamic quantities calculated from the HRG model are likely to diverge at $T_c$ [@hgdrn; @hgdrn1]. It is a remarkable finding that despite the mass cut-off at $2~$GeV, the energy density remains finite even when $T$ exceeds $T_c$. Apparently, this is the main reason why the trace anomaly gets negative values. The correction to the pressure is tiny or negligible [@exclV1; @exclV2; @exclV3; @exclV4; @exclV5; @exclV6]. Nevertheless, the finite hard core should not be expected to reproduce the lattice QCD simulations at $T>T_c$. The validity of the HRG model is strictly limited to $T<T_c$.
The second type of extensions has been introduced by Uhlenbeck and Gropper [@uhlnb]. This is mainly the correlation. The [*non*]{}-ideal (correlated) hadron statistics is given by the classical integral if the Boltzmann factor $\exp(-\phi_{ij}/T)$ is corrected as follows: $$\exp\left(-\frac{\phi_{ij}}{T}\right)\left[1\pm\exp\left(-\frac{m\,T\,r_{ij}^2}{\hbar^2}\right)\right], \label{eq:uhlb}$$ where $r_{ij}$ is the average correlation distance between the $i$-th and $j$-th particles and $\phi_{i j}$ is the interaction potential between the $i$-th and $j$-th particle pairs. Apparently, the summation over all pairs gives the total potential energy. This kind of modification takes into account correlations and also the [*non-ideality*]{} of the hadron gas. The latter would, among others, refer to the discreteness of the energy levels. Uhlenbeck and Gropper introduced an additional correction but concluded that it is only valid at very low temperatures [@uhlnb]. It should be noticed that the correction, expression (\[eq:uhlb\]), belongs to [*generic*]{} types of correlation interactions.
As introduced in Ref. [@Tawfik:2004sw], the third type of interactions to be implemented in the ideal hadron gas is the attraction. The Hagedorn states are considered as the framework to study the physics of strongly interacting matter for temperatures $<T_c$. The hadron resonances add an attractive interaction to the partition function. In other words, the Hagedorn interaction finds its description in the hadron mass spectrum, $\rho(m)$, Eq. (\[eq:Imuel1\]). Using the hadrons and resonances which are verified experimentally, the limits of the exponential form of $\rho(m)$ can be determined. For instance, the mass cut-off may vary from strange to non-strange states [@brnt], $1.5~$GeV for strange and $2.0~$GeV for non-strange states, or from bosons to fermions, etc. According to the statistical Bootstrap model [@boots1; @boots2], the fireballs are treated as hadronic massive states possessing all conventional hadronic properties. It is apparent that the fireball mass is determined by the mass spectrum. Furthermore, fireballs consist of further fireballs. This is only valid at the statistical equilibrium of an ensemble consisting of an undetermined number of states (fireballs).
On one hand, the whole spectrum of possible interactions can be represented by the $S$-matrix. According to [@Tawfik:2004sw], the fugacity term can be expanded to include various kinds of interactions. In such a way, the $S$-matrix would give the plausible scattering processes taking place in the system of interest. This can be partly understood in the sense that including hadron resonances with some effective masses has almost the same effect as that of free particles with the same masses. At high energy, the effective mass approaches the physical value. In other words, even strong interactions are taken into consideration via heavy resonances. These conclusions suggest that the grand canonical partition function is able to simulate various types of interactions, when hadron resonances with masses up to $2~$GeV are included. As discussed, this sets the limits of the Hagedorn mass spectrum, Eq. (\[eq:Imuel1\]). The mass cut-off avoids the Hagedorn singularity. A conclusive and convincing proof has been presented through confronting the HRG with lattice QCD results [@Karsch:2003vd; @Karsch:2003zq; @Redlich:2004gp; @Tawfik:2004sw; @Tawfik:2004vv]. Figure \[fig:vdW\] illustrates the excellent agreement between the HRG with $r=0~$fm and the lattice QCD calculations. It should be noticed that the results at $T>T_c$ are out of the scope of the HRG model.
So far, we conclude that the attractive interaction is sufficient to overcome the hard-core repulsive interaction. In light of this, we comment on the conclusion of Ref. [@apj]. In the framework of an interacting hadron resonance gas, an evaluation of the thermal evolution of the thermodynamic quantities has been proposed. The interactions implemented into the HRG model are mainly the van der Waals repulsion, introduced through a correction for the finite size of hadrons. Different values for the hadron radii can be assigned to the baryons and mesons. The authors studied the sensitivity of the modified HRG model calculations to the hadron radii. The results on different thermodynamic quantities were confronted with predictions from lattice QCD simulations. Therefore, the conclusion can be understood, as hadron resonances with masses up to $3~$GeV are taken into consideration. At this mass cut-off, the exponential description of the mass spectrum would no longer be valid. Furthermore, it is straightforward to deduce from textbooks that heavier masses lead to lower thermodynamic quantities. It is correctly emphasized [@lFodor10] that including all known hadrons up to $2.5$ or even $3.0$ GeV would increase the number of hadron resonances by a few states with masses $>2~$GeV. An attempt to improve the HRG model by including an exponential mass spectrum for these very heavy resonances has been proposed [@brnt]. In Refs. [@lFodor10; @apj] only known states and not the mass spectrum are taken into account. For this reason, the authors of [@apj] concluded that the HRG model with small or even vanishing radii gives thermodynamic quantities which rise less steeply than in the case of an [*ideal*]{} HRG.
It is apparent that the excluded volume has almost no effect, especially when the baryon chemical potential is small. Also, the first principle lattice QCD calculations are reliable at small baryon chemical potential. On the other hand, the statistical-thermal models would make no sense, if the hadron resonances are point-like and the baryon chemical potential is large.
Results {#sec:rslt}
=======
The HRG calculations are performed as follows. Starting with a certain value of the baryon chemical potential $\mu_b$, the temperature $T$ is increased very slowly. At this value of $\mu_b$ and at each rise in $T$, the strangeness chemical potential $\mu_S$ is determined under the condition that the strange quantum numbers should remain conserved in the heavy-ion collisions. Having the three values of $\mu_b$, $T$ and $\mu_S$, all thermodynamic quantities including $(\epsilon-3p)/T^4$ are calculated. When the trace anomaly reaches the value $7/2$, the temperature $T$ and chemical potential $\mu_b$ are registered. This procedure is repeated over all values of $\mu_b$. It is worthwhile to mention that the normalized trace anomaly is calculated using grand canonical statistics. Furthermore, in the HRG calculations no statistical fitting has been applied in determining any thermodynamic quantities, including pressure, entropy density and trace anomaly.
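The scan just described can be summarized in a few lines of Python. In this sketch the full HRG thermodynamics (including the strangeness-neutrality condition fixing $\mu_S$) is hidden behind a placeholder function with crude, purely illustrative parameters, so the numbers it prints are not the ones shown in Fig. \[fig:e3p\]; only the logic of the scan is illustrated.

```python
import numpy as np
from scipy.special import kn
from scipy.optimize import brentq

def trace_anomaly_hrg(T, mu_b):
    """Placeholder for the full HRG (epsilon - 3p)/T^4 at (T, mu_b), with mu_S
    fixed internally by strangeness neutrality.  Here a single crude 'effective
    resonance' in the Boltzmann limit stands in for the full resonance sum,
    only so that the scan below runs end to end (m_eff, g_eff are not physical)."""
    m_eff, g_eff = 1.0, 200.0
    x = m_eff / T
    return g_eff / (2.0 * np.pi**2) * x**3 * kn(1, x) * np.cosh(mu_b / T)

def freeze_out_T(mu_b, target=3.5, T_lo=0.02, T_hi=0.25):
    """At fixed mu_b, find the temperature where (epsilon - 3p)/T^4 = 7/2."""
    return brentq(lambda T: trace_anomaly_hrg(T, mu_b) - target, T_lo, T_hi)

for mu_b in (0.0, 0.2, 0.4, 0.6):  # GeV
    print(f"mu_b = {mu_b:.1f} GeV  ->  T = {1000*freeze_out_T(mu_b):.1f} MeV")
```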
In Fig. \[fig:e3p\], the freeze-out parameters, temperature $T$ and baryon chemical potential $\mu_b$, as calculated in the HRG model are plotted in a log-log graph (solid curve). The symbols with error bars represent the phenomenologically estimated parameters known as experimental data [@jean2006; @dataCR]. The experimental data cover center-of-mass energies ranging from a couple of GeV at the SchwerIonen Synchrotron (SIS) to several TeV at the Large Hadron Collider (LHC). HADES [@hades] and FOPI [@fopi] results are also included.
The phenomenological parameters [@jean2006; @dataCR] have been estimated as follows. In different high-energy experiments (corresponding to different center-of-mass energies), various particle ratios measured in these experiments are re-calculated by the thermal models [@jean2006; @dataCR]. In doing this, the chemical potential $\mu_b$ measured from the stopping power is used as an input. In light of this, the thermal models are used to merely estimate the freeze-out temperature. Figure \[fig:e3p\] presents the freeze-out diagram relating $T$ to $\mu_b$ (symbols with error bars). In the present paper, a new universal description is suggested. It is assumed that constant trace anomaly is able to reproduce the freeze-out diagram, $T$ vs. $\mu_b$.
Physics of constant trace anomaly or constant interaction measure {#sec:phys}
=================================================================
The QCD equation of state can be deduced from the energy-momentum tensor; for instance, the normalized pressure can be obtained from the integral of $(\epsilon-3 p)/T^5$. For completeness, we mention that the trace anomaly is related to the QCD coupling constant, $I(T)\propto T^4\, \alpha_s^2$ [@peter], where $I(T)=\epsilon(T)-3 p(T)$. In light of this, essential information about weakly coupled systems would be provided through the trace anomaly.
A universal parametrization for the QCD trace anomaly at $\mu=0$ was proposed [@FodorFit1] $$\frac{I(T)}{T^4} \;=\; e^{-h_1/t-h_2/t^2}\left\{h_0+\frac{f_0\left[\tanh(f_1\,t+f_2)+1\right]}{1+g_1\,t+g_2\,t^2}\right\}, \label{eq:parmQCD}$$ where $t=T/200$. The fitting parameters are listed in Ref. [@FodorFit1]. This gives the thermal evolution of $I(T)$. A parametrization at finite $\mu$ was suggested [@OweRev] $$\frac{I(T,\mu_b)}{T^4} \;=\; \frac{I(T)}{T^4} + \frac{\mu_b^2}{2\,T}\,\frac{\partial}{\partial T}\left(\frac{\chi_2(T,\mu_b)}{T^2}\right), \label{eq:OweP1}$$ where the susceptibility $\chi_2(T,\mu_b)$ is given by the second derivative of the partition function, $$\chi_2(T,\mu_b) \;=\; \frac{\partial^2}{\partial \mu_b^2}\,\frac{T}{V}\,\ln Z(T,\mu_b,V).$$ In the classical limit, the derivative of $\chi_2(T,\mu_b)$ with respect to temperature reads $$\frac{\partial}{\partial T}\left(\frac{\chi_2(T,\mu_b)}{T^2}\right) \;\simeq\; \frac{g}{2\pi^2}\, e^{\mu_b/T}\, \frac{1}{T}\left(\frac{m}{T}\right)^3 K_1\!\left(\frac{m}{T}\right). \label{eq:My1}$$ When implementing this result in Eq. (\[eq:OweP1\]), we get $$\frac{I(T,\mu_b)}{T^4} \;\simeq\; \frac{I(T)}{T^4} + \frac{g}{4\pi^2}\, e^{\mu_b/T}\, \frac{\mu_b^2}{T^2}\left(\frac{m}{T}\right)^3 K_1\!\left(\frac{m}{T}\right). \label{eq:Icls}$$
When assuming that $I(T,\mu_b)/T^4=7/2$ at vanishing and finite $\mu_b$, then &=& T () K\_2()
In grand canonical ensemble at $T<T_c$ \[eq:OweP3\] &=& ()\^2 +\
&& ()\^2 \_0\^ ( - .\
&& 3 + .) p\^2 dp,\
&=& ()\^2 +\
&& ()\^2 \_0\^ e\^[\_b/T]{} p\^2 dp, where F(T,\_b) &=& {
[l l]{} 2 e\^[2 \_b/T]{} + e\^[(+\_b)/T]{} + e\^[2 /T]{} &\
&\
3 e\^[(+\_b)/T]{} - 4 e\^[2 \_b/T]{} - e\^[2 /T]{} &\
.. The coefficient of $(\mu_b/T)^2$ seems to play a crucial role in estimating the chemical freeze-out parameters, $T$ and $\mu_b$, of the freeze-out diagram. The second term of Eq. (\[eq:OweP3\]) can be decomposed into bosonic and fermionic parts, revealing that the fermionic susceptibility is to a large extent responsible for the $T-\mu_b$ curvature.
Other conditions for the chemical freeze-out {#sec:others}
============================================
Starting from phenomenological observations at SIS energies, it was found that the average energy per average particle $\epsilon / n \approx 1~$GeV [@jeanRedlich], where Boltzmann approximations are applied; this constant ratio is assumed to describe the whole $T-\mu_b$ diagram. For completeness, we mention that the authors assumed that the pions and rho-mesons get dominant at high $T$ and small $\mu_b$. The second criterion assumes that the total baryon number density $ n_b + n_{\bar{b}} \approx 0.12~$fm$^{-3}$ [@nb01]. In the framework of percolation theory, a third criterion has been suggested [@percl]. As shown in Fig. 2 of [@Tawfik:2005qn], the last two criteria seem to give almost identical results. Both of them apparently stem from phenomenological observations. A fourth criterion based on lattice QCD simulations was introduced in Ref. [@Tawfik:2005qn; @Tawfik:2004ss]. Accordingly, the entropy normalized to the cubic temperature is assumed to remain constant over the whole range of baryon chemical potentials, which is related to the nucleus-nucleus center-of-mass energies $\sqrt{s_{NN}}$ [@jean2006]. An extensive comparison between constant $\epsilon / n$ and constant $s/T^3$ is given in [@Tawfik:2005qn; @Tawfik:2004ss].
In the HRG model, the thermodynamic quantities generating the chemical freeze-out are deduced [@Tawfik:2005qn; @Tawfik:2004ss]. The motivation for suggesting a constant normalized entropy is the comparison with lattice QCD simulations with two and three quark flavors. We found $s/T^3=5$ for two flavors and $s/T^3=7$ for three flavors. Furthermore, we confront the hadron resonance gas results with the experimental estimates of the freeze-out parameters, $T$ and $\mu_b$.
Another novel condition characterizing the freeze-out parameters has been introduced in Ref. [@Tawfik:2013dba]. To this end, the higher order moments are applied [@Tawfik:2012si]. Vanishing ${\kappa}\, \sigma^2$, or equivalently $m_4/\chi=3$, results in $T-\mu_b$ sets coincident with the phenomenologically estimated ones. Recently, lattice QCD calculations confirmed the same connection between the ratios of higher order fluctuations and the freeze-out parameters [@fodorFO; @nakamura1].
Figure \[fig:comp\] compares the present condition, $(\epsilon-3 p)/T^4=7/2$, with two other conditions, namely $s/T^3=7$ [@Tawfik:2005qn; @Tawfik:2004ss] and ${\kappa}\, \sigma^2=0$ [@Tawfik:2013dba]. The agreement is convincing.
Conclusions {#sec:conc}
===========
So far, we conclude that the freeze-out parameters deduced at various center-of-mass energies are well reproducible in the HRG model using the condition that the trace anomaly (interaction measure) $(\epsilon-3 p)/T^4$ remains constant. We found that $(\epsilon-3 p)/T^4=7/2$ reproduces very well the freeze-out diagram. From Eq. (\[eq:Icls\]), constant normalized $I(T,\mu_b)=I(T)$ leads to \_2(T,\_b) &=& e\^[\_b/T]{} . Then, the chemical freeze-out potential \_b &=& m . seems to be related to the effective mass, $m$.
Furthermore, we conclude the present condition, $(\epsilon-3 p)/T^4=7/2$ agrees with two other conditions, namely $s/T^3=7$ [@Tawfik:2005qn; @Tawfik:2004ss] and ${\kappa}\, \sigma^2=0$ [@Tawfik:2013dba].
Acknowledgements {#acknowledgements .unnumbered}
================
The author likes to thank Prof. Antonino Zichichi for his kind invitation to attend the International School of Subnuclear Physics 2013 at the ”Ettore Majorana Foundation and Centre for Scientific Culture” in Erice-Italy, where the present script was completed.
[99]{} R. Hagedorn, Suppl. Nuovo Cimento [**III**]{}, 147 (1965); Nuovo Cimento [**35**]{}, 395 (1965).
A. Majumder and B. Müller, Phys. Rev. Lett. [**105**]{}, 252002, (2010). A. Tawfik, Phys. Rev. D [**71**]{} 054502 (2005).
F. Karsch, K. Redlich and A. Tawfik, Eur. Phys. J. C [**29**]{}, 549 (2003).
F. Karsch, K. Redlich and A. Tawfik, Phys. Lett. B [**571**]{}, 67 (2003).
K. Redlich, F. Karsch and A. Tawfik, J. Phys. G [**30**]{}, S1271 (2004).
A. Tawfik, J. Phys. G [**G31**]{}, S1105-S1110 (2005).
A. Tawfik, Indian J. Phys. [**85**]{}, 755-766 (2011).
A. Tawfik, Prog. Theor. Phys. [**126**]{}, 279-292 (2011).
A. Tawfik, Nucl. Phys. A [**859**]{}, 63-72 (2011).
A. Tawfik, Int. J. Theor. Phys. [**51**]{}, 1396-1407 (2012).
M. A. Stankiewicz, [*”Entropy in the thermal model”*]{}, e-Print: nucl-th/0509058
J. Cleymans, H. Oeschler, K. Redlich and S. Wheaton, Phys. Lett. B [**615**]{}, 50 (2005).
A. Tawfik, Nucl. Phys. A [**764**]{}. A. Tawfik, Europhys. Lett. [**75**]{}, 420 (2006).
A. Tawfik, [*”Chemical Freeze-Out and Higher Order Multiplicity Moments”*]{}, arXiv:1306.1025 \[hep-ph\].
A. Tawfik, Adv. High Energy Phys. [**2013**]{}, 574871 (2013). J. Cleymans, H. Oeschler, K. Redlich and S. Wheaton, Phys. Rev. C [**73**]{}, 034905 (2006).
R. Venugopalan, M. Prakash, Nucl. Phys. A [**546**]{}, 718 (1992).
M. Gell-Mann and M. Levy, Il Nuovo Cimento [**16**]{}, 705–726 (1960). R. Hagedorn, Nuovo Cim. Suppl. [**6**]{}, 311-354 (1968); Nuovo Cim. A [**56**]{}, 1027-1057 (1968).
R. Hagedorn, ”Springer Lecture Notes in Physics”, [**221**]{}, ed. K. Kajantie, Springer-Verlag, Berlin, Heidelberg, New York (1985).
J. Letessier, J. Rafelski, Hadrons and Quark-Gluon Plasma, Cambridge University Press, 2002.
D. H. Rischke, M. I. Gorenstein, H. Stöcker, and W. Greiner, Z. Phys. C [**51**]{}, 485 (1991).
J. Cleymans, M. I. Gorenstein, J. Stalnacke, and E. Suhonen, Z. Phys. C [**8**]{}, 347 (1993).
G. D. Yen, M. I. Gorenstein, W. Greiner, and S. N. Yang, Phys. Rev. C [**56**]{}, 2210 (1997).
M.I. Gorenstein, M. Gazdzicki, and W. Greiner, Phys. Rev. C [**72**]{}, 024909 (2005).
M. I. Gorenstein, M. Hauer and D.O. Nikolajenko, Phys. Rev. C [**76**]{}, 024901 (2007).
M.I. Gorenstein, M. Hauer, and O.N. Moroz, Phys. Rev. C [**77**]{}, 024911 (2008).
A. Andronic, P. Braun-Munzinger, J. Stachel and M. Winn, Phys. Lett. B [**718**]{}, 80-85 (2012). Sz. Borsanyi, G. Endrodi, Z. Fodor, S. D. Katz, S. Krieg, C. Ratti and K. K. Szabo, JHEP [**1208**]{}, 053 (2012).
G. E. Uhlenbeck and L. Gropper, Phys. Rev. [**41**]{}, 79 (1932).
Sz. Borsanyi, [*et al.*]{}, JHEP [**1011**]{}, 077 (2010). A. Andronic, P. Braun-Munzinger and J. Stachel, Nucl. Phys. A [**772**]{}, 167 (2006).
G. Agakishiev, [*et al.*]{} \[HADES Collaboration\], Eur. Phys. J. A [**47**]{}, 21 (2011).
X. Lopez, [*et al.*]{} \[FOPI Collaboration\], Phys. Rev. C [**76**]{}, 052203 (2007).
P. Petreczky, PoS LATTICE [**2012**]{}, 069 (2012).
S. Borsányi, G. Endrõdi, Z. Fodor, A. Jakovác, S. D. Katz, [et al.]{}, JHEP [**1011**]{}, 077 (2010). Owe Philipsen, Prog. Part. Nucl. Phys. [**70**]{}, 55-107 (2012). J. Cleymans and K. Redlich, Phys. Rev. C [**60**]{}, 054908 (1999).
P. Braun-Munzinger and J. Stachel, J. Phys. G [**28**]{}, 1971 (2002).
V. Magas and H. Satz, Eur. Phys. J. C [**32**]{}, 115 (2003).
S. Borsanyi, Z. Fodor, S. D. Katz, S. Krieg, C. Ratti and K. K. Szabo, arXiv:1305.5161 \[hep-lat\]
A. Nakamura and K. Nagata, arXiv:1305.0760 \[hep-ph\]
|
---
abstract: 'The abstract goes here.'
author:
-
-
-
title: 'Bare Demo of IEEEtran.cls for Conferences'
---
Introduction
============
This demo file is intended to serve as a “starter file” for IEEE conference papers produced under LaTeX using IEEEtran.cls version 1.7 and later. I wish you the best of success.
mds
January 11, 2007
Subsection Heading Here
-----------------------
Subsection text here.
### Subsubsection Heading Here
Subsubsection text here.
Conclusion
==========
The conclusion goes here.
Acknowledgment {#acknowledgment .unnumbered}
==============
The authors would like to thank...
[1]{}
H. Kopka and P. W. Daly, *A Guide to LaTeX*, 3rd ed. Harlow, England: Addison-Wesley, 1999.
|
---
abstract: 'The problem of the Hamiltonian matrix in the oscillator and orthogonalized Laguerre basis construction from a given S-matrix is treated in the context of the algebraic analogue of the Marchenko method.'
---
[**A discrete version of the inverse scattering problem and the J-matrix method**]{}\
[S. A. Zaytsev[^1]]{}\
[*Department of Physics, Khabarovsk State University of Technology,\
Tikhookeanskaya 136, Khabarovsk 680035, Russia*]{}\
Introduction
============
The J-matrix [@HY] theory of scattering is based on the fact that the $\ell$th partial wave kinetic energy or the Coulomb Hamiltonian $H^0$ is represented in a certain square-integrable basis set by an infinite symmetric tridiagonal matrix. In the harmonic oscillator and the Laguerre basis sets $\left\{\phi_n^{\ell}\right\}_{n=0}^{\infty}$ the eigenvalue problem for $H^0$ can be solved analytically. The J-matrix method yields an exact solution to a model scattering Hamiltonian where the given short-range potential is approximated by truncation to a finite subset $\left\{\phi_n^{\ell}\right\}_{n=0}^{N-1}$.
In Refs. [@Z1; @Z2] an inverse scattering formalism within the J-matrix method has been proposed, where the matrix $\|V_{n,m}\|$ of the potential $$V_{\ell}(r, \, r')=\hbar\omega \sum \limits_{n, \, m=0}^{N-1}
\phi_n^{\ell}(x)\,V_{n, \, m}\,\phi_m^{\ell}(x') \label{Pot}$$ with the oscillator form factors $$\phi_n^{\ell}(x)=(-1)^n\,\sqrt{\frac{2n!}{\rho
\Gamma(n+\ell+\frac32)}}\,
x^{\ell+1}e^{-x^2/2}\,L_n^{\ell+1/2}(x^2) \label{bf}$$ is determined from a given S-matrix. Here, $x=r/\rho$ is the relative coordinate in units of the oscillator radius $\rho=\sqrt{\hbar/\mu\omega}$, $\mu$ is the reduced mass.
Obviously, a correlation can be made between the J-matrix method and a discrete model of quantum mechanics, within which a finite-difference Schrödinger equation is used. As a result, J-matrix versions of the algebraic analogue of the Gel’fand-Levitan and Marchenko methods can be formulated. For instance, the J-matrix method formally and computationally is quite similar to the R-matrix theory. It is this analogy that the previous J-matrix version of inverse scattering theory [@Z1; @Z2; @ZK] \[also see [@PC]\] leans upon. Within the J-matrix approach the discrete representation of the Green function in the finite subspace of the basis functions $\left\{ \phi_n^{\ell}\right\}_{n=0}^{N-1}$ $${\bf G}(\epsilon)=\left(\epsilon {\bf I}-{\bf h}\right)^{-1}$$ is used. Here, ${\bf I}$ is the identity matrix and ${\bf h}$ is the truncated Hamiltonian matrix of order $N$ in the oscillator basis (\[bf\]). We measure the energy $E$ in units of the oscillator basis parameter $\hbar\omega$, i.e. $E=\hbar\omega\epsilon$ and $\epsilon=q^2/2$, where $q$ is the dimensionless momentum: $q=k\rho$. In particular, the element $\left[{\bf G}(\epsilon)
\right]_{N-1,\,N-1}\equiv\mathcal{P}_N(\epsilon)$ can be presented in the two rational forms [@Z1; @YAA]: $$\label{GR1}
\mathcal{P}_N(\epsilon)= \frac{\prod \limits _{j=0}^{N-2}(\epsilon-\mu_j)}
{\prod \limits_{j=0}^{N-1}(\epsilon-\lambda_j)}$$ where $\left\{\lambda_j\right\}_{j=0}^{N-1}$ and $\left\{\mu_j\right\}_{j=0}^{N-2}$ satisfy the interlacing property $$\lambda_0 < \mu_0 < \lambda_1 < \ldots < \lambda_{N-2} < \mu_{N-2} < \lambda_{N-1},$$ and [@YF] $$\label{GR2}
\mathcal{P}_N(\epsilon)= \sum \limits_{j=0}^{N-1}\frac{Z_{N-1,\,j}^2}
{\epsilon-\lambda_j}.$$ Here, $\left\{Z_{N-1,\,j} \right\}_{j=0}^{N-1}$ are the elements of the $N$th row of the eigenvector matrix ${\bf Z}$ of the truncated Hamiltonian matrix ${\bf h}$. The sets $\left\{\lambda_j\right\}_{j=0}^{N-1}$, $\left\{\mu_j\right\}_{j=0}^{N-2}$ and $\left\{\lambda_j\right\}_{j=0}^{N-1}$, $\left\{Z_{N-1,\,j}
\right\}_{j=0}^{N-1}$ are derived from the S-matrix, which is intimately connected with $\mathcal{P}_N$ \[see Eq. (\[SN\])\]. Notice that both sets of the spectral parameters determine a unique \[apart from the sign of the off-diagonal elements\] Hamiltonian matrix ${\bf h}$ of Jacobi form [@G; @GW] $$\|h_{n,\,m}\| = \left(
\begin{array}{cccccc}
a_0 & b_1 & & & \\
b_1 & a_1 & b_2 & & \mbox{\Large $0$} & \\
& b_2 & a_2 & b_3 & & \\
&& b_3& \times & \times & \\
&\mbox{\Large $0$} &&\times & \times & b_{N-1}\\
&&&& b_{N-1} & a_{N-1} \\
\end{array}
\right). \label{hnm}$$ Hence, the sought-for potential matrix $\|V_{n,\,m}\|$ is also of a Jacobi form. Recall that the kinetic energy operator $$H^0 = \frac{\hbar^2}{2\mu}\left(-\frac{\displaystyle
d^2}{\displaystyle d\,r^2}+ \frac{\ell(\ell+1)}{\displaystyle
r^2}\right)\label{H0}$$ matrix representation $\|T_{n,\,m}^{\ell}\|=\frac{1}{\hbar\omega}\|H^0_{n,\,m}\|$ in the harmonic oscillator basis (\[bf\]) is symmetric tridiagonal [@HY; @YF]: $$\begin{array}{c}
T_{n,\,n}^{\ell}=\frac12\left(2n+\ell+\frac32\right),\\[3mm]
T_{n,\,n+1}^{\ell}=T_{n+1,\,n}^{\ell}=-\frac12\sqrt{(n+1)
\left(n+\ell+\frac32\right)}.\\
\end{array}
\label{T}$$
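For reference, the truncated kinetic-energy matrix (\[T\]) is trivial to tabulate numerically; the following minimal Python sketch builds it for given $N$ and $\ell$ (in units of $\hbar\omega$).

```python
import numpy as np

def kinetic_matrix(N, ell):
    """Truncated kinetic-energy matrix T^ell of Eq. (T) in the oscillator
    basis (bf), in units of hbar*omega: tridiagonal with
    T_{n,n} = (2n + ell + 3/2)/2 and T_{n,n+1} = -sqrt((n+1)(n + ell + 3/2))/2."""
    T = np.zeros((N, N))
    for n in range(N):
        T[n, n] = 0.5 * (2 * n + ell + 1.5)
        if n + 1 < N:
            T[n, n + 1] = T[n + 1, n] = -0.5 * np.sqrt((n + 1) * (n + ell + 1.5))
    return T

print(kinetic_matrix(4, ell=0))
```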
Thus the inverse scattering problem within the J-matrix approach admits of the solution in the tridiagonal Hamiltonian matrix form. In this regard the J-matrix method is similar to a discrete model of quantum mechanics, in the framework of which the Hamiltonian matrix representation is also symmetric tridiagonal. Note that the tridiagonal matrix representation of both the kinetic energy operator and the Hamiltonian is fundamental for a finite-difference analogue of the Gel’fand-Levitan equations \[see e.g. [@Chabanov]\] as well as for a discrete version [@Case] of the Marchenko equations. As shown below, the similarity between the J-matrix method and a finite-difference approach can be plainly extended to the inverse scattering formalism as well. In the present paper, in particular, an inverse scattering J-matrix approach via the Marchenko equations (JME) is given.
In JME the expansion coefficients $c_n$ of the wave function $\psi$ in terms of the $L^2$ basis set play a role similar to that of the values $\psi_n=\psi(x_n)$ at points $x_n=n\Delta$ within the finite-difference inverse scattering approach. Here, the completeness relation for the solutions $c_n$ of the discrete analogue of the Schrödinger equation is also exploited, which involves an integration of $c_n\overline{c_m}$ over the energy from zero to infinity. This raises the question as to whether the integrals converge. As shown below, taking into account the asymptotic behavior at large $k$ of the phase shift $\delta^N_{\ell}(k)$ corresponding to the potential (\[Pot\]) of finite rank $N$ provides the convergence of the integrals. Generally, with a potential (\[Pot\]) matrix of finite order $N$ it is possible to reproduce the phase shift $\delta_{\ell}$ only on a finite energy interval $[0, \epsilon_0]$ with $\epsilon_0 < \lambda_{N-1}$. This is why the eigenvalue $\lambda_{N-1}$ and the corresponding eigenvector component $Z_{N-1, \,N-1}$ are the variational parameters within the previous J-matrix version [@Z1; @Z2; @ZK; @PC] of the inverse scattering theory. By contrast, in JME the phase shift is used on the infinite energy interval, even if modified in accordance with the asymptotic feature of $\delta_{\ell}^N$. As a result JME has no variational parameters (apart from $N$ and $\rho$).
The elements of the J-matrix method formalism are presented in Sec. 2. In Sec. 3 the inverse scattering J-matrix approach in the context of the Marchenko equations is formulated. The features of JME numerical realization are discussed in Sec. 4. In Sec. 5 JME is expanded to the Laguerre basis case. Here, we are dealing with the tridiagonal Hamiltonian matrix construction in an orthogonalized Laguerre basis set, in which the kinetic energy operator matrix is also tridiagonal. In Sec. 6 we summarize our conclusions.
The direct problem
==================
The oscillator-basis J-matrix formalism is discussed in detail elsewhere. We present here only some relations needed for understanding the inverse scattering J-matrix approach. Within the J-matrix method, the radial wave function $\psi(k,\,r)$ is expanded in an oscillator function (\[bf\]) series $$\psi(k,\,r) = \sum \limits_{n=0}^{\infty}c_n(k) \,
\phi_n^{\ell}(x). \label{rk}$$ In the assumption that the Hamiltonian matrix is of the form (\[hnm\]) the functions $c_n$ are the solutions to the set of equations $$\begin{array}{c}
a_0\,c_0(k)+b_1\,c_1(k)=\epsilon\,c_0(k)\\[3mm]
b_n\,c_{n-1}(k)+a_n\,c_n(k)+b_{n+1}\,c_{n+1}(k)=\epsilon\,c_n(k),
\quad n=1,\, 2, \, \ldots \, .\\
\end{array}
\label{PEq}$$ The asymptotic behavior of $c_n(k)$ for $k>0$ as $n \rightarrow
\infty$ is given by $$c_n(k)=f_n(k)\equiv \frac{\mbox{i}}{2}
\left[\;\mathcal{C}_{n,\,\ell}^{(-)}(q)-S(k)\,\mathcal{C}_{n,\,\ell}^{(+)}(q)\;
\right]. \label{f01}$$ Here, the functions $$\begin{array}{c}
\mathcal{C}_{n,\,\ell}^{(\pm)}(q)=\mathcal{C}_{n,\,\ell}(q)
\pm\mbox{i}\mathcal{S}_{n,\,\ell}(q),\\[3mm]
\mathcal{S}_{n,\,\ell}(q) = \sqrt{\frac{\pi\rho
n!}{\Gamma(n+\ell+\frac32)}}\,
q^{\ell+1}e^{-q^2/2}\,L_n^{\ell+1/2}(q^2),\\[3mm]
\mathcal{C}_{n,\,\ell}(q) = \sqrt{\frac{\pi\rho
n!}{\Gamma(n+\ell+\frac32)}}\,
\frac{\Gamma(\ell+1/2)}{\pi\,q^{\ell}}\,e^{-q^2/2}\,
F\left(-n-\ell-1/2,\, -\ell+1/2;\, q^2 \right)\\
\end{array}
\label{SC}$$ obey the “free” equations $$T_{n, \, n-1}^{\ell}\,e_{n-1}(q)+ T_{n, \,
n}^{\ell}\,e_n(q)+T_{n, \, n+1}^{\ell}\,e_{n+1}(q)=\epsilon
\,e_n(q), \quad n=1, \, 2, \, \ldots \, .\label{TRR}$$ $\mathcal{S}_{n,\,\ell}$ satisfy in addition the equation $$T_{0, \, 0}^{\ell}\,\mathcal{S}_{0,\,\ell}(q)+T_{0, \,
1}^{\ell}\,\,\mathcal{S}_{1,\,\ell}(q) =
\epsilon\,\mathcal{S}_{0,\,\ell}(q).$$ Besides, $\mathcal{S}_{n,\,\ell}$ meet the completeness relation $$\frac{2}{\pi}\int \limits_0^{\infty}dk\,\mathcal{S}_{n,\,\ell}(q)
\,\mathcal{S}_{m,\,\ell}(q)=\delta_{n,\,m}. \label{SCompl}$$
Notice that $$\begin{array}{c}
\widetilde{S}(r)=\sum \limits
_{n=0}^{\infty}\mathcal{S}_{n,\,\ell}(q)\,\phi_n^{\ell}(x),\\[3mm]
\widetilde{C}(r)=\sum \limits
_{n=0}^{\infty}\mathcal{C}_{n,\,\ell}(q)\,\phi_n^{\ell}(x),\\
\end{array}$$ subject to the asymptotic condition [@YF] $$\label{scasym}
\begin{array}{c}
\widetilde{S}(r) \mathop{\sim}\limits_{r\rightarrow\infty}\sin(kr-\ell\pi/2),\\[3mm]
\widetilde{C}(r) \mathop{\sim}\limits_{r\rightarrow\infty}\cos(kr-\ell\pi/2).\\
\end{array}$$
As for the coefficients of the expansion $$\psi_{\nu}(r) = \sum
\limits_{n=0}^{\infty}c_n(\mbox{i}\kappa_{\nu}) \,
\phi_n^{\ell}(x) \label{ik}$$ of the normalized bound state wave function $\psi_{\nu}$ with the energy $-\kappa_{\nu}^2/2$, $$c_n(\mbox{i}\kappa_{\nu})=f_n(\mbox{i} \kappa_{\nu})\equiv
\mathcal{M}_{\nu}\,
\mbox{i}^{\ell}\,\mathcal{C}_{n,\,\ell}^{(+)}(\mbox{i}\kappa_{\nu}\rho)
\label{f02}$$ holds as $n \rightarrow \infty$. Here, $\mathcal{M}_{\nu}$ is the bound state normalization constant which is related to the residue of the S-matrix [@Baz]: $$\mbox{i}\mathop{Res}\limits _{k=\mbox{\scriptsize
i}\kappa_{\nu}}S(k) = (-1)^{\ell}\, \mathcal{M}_{\nu}^2.
\label{M}$$
It can be easy verified that from the completeness relation for the solutions $\psi(k, \, r)$, $\psi_{\nu}(r)$ [@CS] $$\frac{2}{\pi}\int \limits _0^{\infty} dk
\,\psi(k,\,x)\,\overline{\psi(k,\,y)} + \sum \limits _{\nu}
\psi_{\nu}(x)\, \overline{\psi_{\nu}(y)}= \delta(x-y)
\label{ComplSE}$$ it follows that $$\frac{2}{\pi}\int \limits _0^{\infty} dk \,
c_n(k)\,\overline{c_m(k)} + \sum \limits _{\nu}
c_n(\mbox{i}\kappa_{\nu})\, \overline{c_m(\mbox{i}\kappa_{\nu})}=
\delta_{n,\, m}. \label{Compl}$$
The inverse problem
===================
To take advantage of the algebraic analogue of the Marchenko method it is essential that there exist coefficients $K_{n, \,
m}$ \[independent of $k$\] such that $$c_n(k) = \sum \limits _{m=n}^{\infty}K_{n, \, m}\, f_m(k).
\label{EC}$$ By analogy with Ref. [@Case] assume that $$c_n(k) = \, f_n(k), \quad n \ge N$$ \[$N$ specifies the order of a potential matrix\]. If $f_N$ and $f_{N+1}$ are inserted \[instead of respectively $c_N$ and $c_{N+1}$\] into Eq. (\[PEq\]) for $n=N$, we obtain, in view of Eq. (\[TRR\]), $$\begin{array}{c}
c_{N-1}(k) = \left( \right.T_{N,\, N-1}^{\ell}f_{N-1}(k)+[T_{N,\,
N}^{\ell}-a_N]\,f_{N}(k)+\qquad \qquad\\[3mm]
\qquad \qquad \qquad \qquad \qquad \qquad
+[T_{N,\,N+1}^{\ell}-b_{N+1}]\,f_{N+1}(k)
\left.\right)/b_{N}.\\
\end{array}
\label{fN}$$ Then, using the three-term recursion relation (\[TRR\]) with every $n=N~-1, \ldots,1$ we obtain $$c_n(k) = \sum \limits _{m=n}^{2N-n-1}K_{n, \, m}\, f_m(k), \quad
n=0, \, 1, \, \ldots, \, N-1 \label{Sol}$$ \[which in the limit $N \rightarrow \infty$ gives (\[EC\])\].
The coefficients $K_{n,\, m}$ are found from the completeness relation (\[Compl\]). From the condition of the orthogonality of $c_n$ and every $c_m$, $m > n$ follows the condition of the orthogonality of $c_n$ and every $f_m$, $m > n$, i. e. $$\frac{2}{\pi}\int \limits _0^{\infty} dk \,
c_n(k)\,\overline{f_m(k)} + \sum \limits _{\nu}
c_n(\mbox{i}\kappa_{\nu})\,\overline{f_m(\mbox{i}\kappa_{\nu})}=
0, \quad m > n. \label{Em}$$ Inserting the expansion of (\[Sol\]) in (\[Em\]) gives the system of linear equations in $K_{n, \, m}$ $$K_{n,\, n}\,Q_{n, \,m}+\sum \limits_{p=n+1}^{2N-n-1} K_{n, \,
p}\,Q_{p, \, m}=0, \quad m>n.
\label{KEq1}$$ Then, inserting Eq. (\[Sol\]) into (\[Compl\]) and putting $n=m$, we obtain, in view of Eq. (\[Em\]), the equation in $K_{n,\,n}$ $$K_{n,\, n}\left(K_{n,\, n}\,Q_{n,\, n}+\sum \limits_{p=n+1}^{2N-n-1} K_{n, \,
p}\,Q_{p, \, n} \right)=1.
\label{KEq2}$$ Note that from Eq. (\[KEq1\]) it follows that $K_{n, \, m}$, $m>n$ are proportional to $K_{n,\,n}$. In Eqs. (\[KEq1\]) and (\[KEq2\]) $Q_{n,\, m}$ are defined from the scattering data by $$
Q_{n, \, m}= \frac{2}{\pi}\int \limits _0^{\infty} dk \,
f_n(k)\,\overline{f_m(k)} + \sum \limits _{\nu}
f_n(\mbox{i}\kappa_{\nu})\, \overline{f_m(\mbox{i}\kappa_{\nu})}.
\label{Q}$$
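In practice $Q_{n,\,m}$ has to be assembled numerically. The following Python sketch performs the quadrature of Eq. (\[Q\]) for a user-supplied routine returning $f_n(k)$ and for a given set of bound-state values $f_n(\mbox{i}\kappa_{\nu})$; the routines `f` and `f_bound` and the finite upper limit `k_max` are placeholders standing in for the actual construction of Eqs. (\[f01\]) and (\[f02\]).

```python
import numpy as np
from scipy.integrate import quad

def Q_matrix(f, size, k_max, f_bound=()):
    """Assemble Q_{n,m} of Eq. (Q) by quadrature.
    f(n, k)  -- user-supplied routine returning f_n(k) of Eq. (f01);
    f_bound  -- sequence of routines, one per bound state, returning f_n(i kappa_nu);
    k_max    -- finite upper limit replacing infinity (the modified phase
                shift must have died out well below k_max).
    For a real phase shift f_n(k) conj(f_m(k)) = g_n(k) g_m(k) is real
    (cf. Sec. 4), so only the real part is kept."""
    Q = np.zeros((size, size))
    for n in range(size):
        for m in range(n, size):
            val, _ = quad(lambda k: (f(n, k) * np.conj(f(m, k))).real,
                          0.0, k_max, limit=200)
            q = 2.0 / np.pi * val
            for fb in f_bound:
                q += (fb(n) * np.conj(fb(m))).real
            Q[n, m] = Q[m, n] = q
    return Q
```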
The elements $a_n$ and $b_n$ of the sought-for Hamiltonian matrix (\[hnm\]) are related to $K_{n,\,m}$ by the equations $$\begin{array}{c}
a_n = T_{n,\,n}^{\ell}+\frac{K_{n,\,n+1}}{K_{n,\,n}}\,T_{n+1,\,n}^{\ell}-
\frac{K_{n-1,\,n}}{K_{n-1,\,n-1}}\,T_{n,\,n-1}^{\ell}, \\[3mm]
b_n=\frac{K_{n,\,n}}{K_{n-1,\,n-1}}\,T_{n,\,n-1}^{\ell}, \quad
n=1,\, 2, \, 3, \, \ldots.\\
\end{array}
\label{abK}$$ $a_0$ is specified by the solutions $c_0$ and $c_1$ to Eq. (\[PEq\]).
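Eqs. (\[KEq1\])-(\[abK\]) translate directly into a small linear-algebra routine. The sketch below (Python) assumes that the matrix $Q_{n,\,m}$ (covering indices up to $2N-1$) and the free matrix $T^{\ell}_{n,\,m}$ (of order at least $N+1$) have already been prepared, and it leaves $a_0$ to the separate determination from $c_0$ and $c_1$ mentioned above.

```python
import numpy as np

def solve_K_row(Q, n, N):
    """Solve Eqs. (KEq1)-(KEq2) for the n-th row of K.
    Returns a dict {m: K_{n,m}} for m = n, ..., 2N-n-1."""
    ps = list(range(n + 1, 2 * N - n))                 # p = n+1, ..., 2N-n-1
    A = np.array([[Q[p, m] for p in ps] for m in ps])  # rows labelled by m
    b = np.array([Q[n, m] for m in ps])
    x = np.linalg.solve(A, -b)                         # x_p = K_{n,p}/K_{n,n}
    norm = Q[n, n] + sum(xp * Q[p, n] for xp, p in zip(x, ps))
    Knn = 1.0 / np.sqrt(norm)                          # normalization, Eq. (KEq2)
    row = {n: Knn}
    row.update({p: Knn * xp for xp, p in zip(x, ps)})
    return row

def hamiltonian_from_K(Q, T, N):
    """Recover a_n, b_n of Eq. (hnm) via Eq. (abK).  Q must cover indices
    0..2N-1 and T (the free matrix (T)) must be of order at least N+1;
    a_0 is determined separately from c_0 and c_1."""
    K = [solve_K_row(Q, n, N) for n in range(N)]
    a, b = np.zeros(N), np.zeros(N)
    for n in range(1, N):
        b[n] = K[n][n] / K[n - 1][n - 1] * T[n, n - 1]
        a[n] = (T[n, n]
                + K[n].get(n + 1, 0.0) / K[n][n] * T[n + 1, n]
                - K[n - 1].get(n, 0.0) / K[n - 1][n - 1] * T[n, n - 1])
    return a, b, K
```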
A numerical realization
=======================
To this point the assumption has been made that the phase shift $\delta_{\ell}(k)$ is a continuous function of the wave number $k$ that meets the conditions [@CS] $$\delta_{\ell}(\infty)=0, \qquad \int \limits
^{\infty}k^{-1}|\delta_{\ell}(k)|dk < \infty.$$ In this case $\delta_{\ell}$ must satisfy stringent requirements.
g_n(q)= \mathcal{S}_{n\,\ell}(q)\,\cos\delta_{\ell}
+\mathcal{C}_{n\,\ell}(q)\,\sin\delta_{\ell}.$$ It is obvious that a sufficient condition for the convergence of the integrals in (\[Q\]) is that the functions $g_n$ are square-integrable. Notice that from (\[SC\]) it follows that $$\label{AS}
\mathcal{S}_{n\,\ell}(q)
\mathop{\sim}\limits_{q\rightarrow\infty} (-1)^n
\sqrt{\frac{\pi\rho}{n!\;\Gamma(n+\ell+\frac32)}}\,q^{2n+\ell+1}\,e^{-q^2/2},$$ i.e. the first term in (\[fn\]) decays exponentially at asymptotically large $q$. However, $\mathcal{C}_{n\,\ell}$ grows exponentially with increasing $q$: $$\label{AC}
\mathcal{C}_{n\,\ell}(q)\mathop{\sim}\limits_{q\rightarrow\infty}
(-1)^{n+1}\frac{\sqrt{\pi\rho\,n!\;\Gamma(n+\ell+\frac32)}}{\pi}\,
q^{-(2n+\ell+2)}e^{q^2/2}.$$ This suggests that the phase shift $\delta_{\ell}$ must decay rapidly enough to provide the convergence of the integral in r.h.s. of Eq. (\[Q\]).
Actually the phase shift $\delta_{\ell}^N$ corresponding to the potential (\[Pot\]) of rank $N$ [@YF] $$\label{tdl}
\tan \delta_{\ell}^N=-\frac{\mathcal{S}_{N-1,\, \ell}(q)-
\mathcal{P}_N(\epsilon)\,T_{N-1,\, N}^{\ell}\,\mathcal{S}_{N,\, \ell}(q)}
{\mathcal{C}_{N-1,\, \ell}(q)-\mathcal{P}_N(\epsilon)\,T_{N-1,\,
N}^{\ell}\,
\mathcal{C}_{N,\, \ell}(q)},$$ as seen in Eqs. (\[AS\]), (\[AC\]), fulfills even more strict requirement $$\label{dla}
\delta_{\ell}^N\mathop{\sim}
\limits_{q\rightarrow\infty}
\frac{\pi\,(2N+\ell-\frac12)}{(N-1)!\;\Gamma(N+\ell+\frac12)}\,
q^{4N+2\ell-3}\,e^{-q^2}.$$ Because of the restriction (\[dla\]) on the phase shift $\delta_{\ell}^N$ the potential (\[Pot\]) of finite rank $N$ generally is incapable of describing the scattering data on the infinite interval $k \in [0, \, \infty)$. At most, we can set ourselves the task of constructing the potential (\[Pot\]) that describes the experimental phase shift $\delta_{\ell}$ on some finite interval $[0, \, k_0]$, since generally $\delta_{\ell}$ needs to be modified in the region $k>k_0$ to provide at least the convergence of the integrals in Eq. (\[Q\]).
As an example we consider the $s$-wave scattering case. The “experimental” phase shift $\delta_{\ell}$ \[dotted curve in figure 1\] is that of the scattering on a potential given by a square well of depth $V_0$: $\sqrt{\frac{2\mu}{\hbar^2}\,V_0}\,R=1.5$. The potential (\[Pot\]) is sought that describes the phase shift on the interval $[0, \, k_0]$, $k_0R=6$ \[in figure 1 crosses represent a modified phase shift\]. The phase shift $\delta_{\ell}^{N}$ corresponding to the resulting potential (\[Pot\]) of rank $N=6$ and $\rho=\frac12R$ is shown in figure 1 \[solid curve\]. Notice that $\mathcal{C}_{n\,\ell}$ explodes exponentially with increasing $q$. Thus, the contribution from the region $q>q_0=\rho k_0=3$ to the integral in Eq. (\[Q\]) may become overwhelming \[see figure 2 where $g_0(q)^2$ is plotted\], with the result that the method fails. Matters can be improved by a transition to a smaller $\rho$, which shifts $q_0$ to a region where $\mathcal{C}_{n\,\ell}$ is not that large, or by replacing $\delta_{\ell}$ at $q>q_0$ with a function that decays rapidly enough.
In the second example, scattering data for a potential with a bound state have been used as input. The phase shift $\delta_{\ell}$ \[dotted curve in figure 3\] corresponds to the s-wave scattering on a spherically symmetric potential in the form of a square well. The well parameter $\sqrt{\frac{2\mu}{\hbar^2}\,V_0}\,R=2$ determines the bound state with the energy $E=-\kappa^2$, $\kappa R=0.638045$ and the asymptotic normalization constant $\mathcal{M}R^{1/2}=1.583324$. $k_0$ and $\rho$ have been taken the same as in the first example \[the modified phase shift is represented by crosses in figure 3\]. It is well known that a phase shift does not depend on the energy positions and asymptotic normalization constants of bound states. Thus, the inverse scattering problem in the presence of a bound state can be split into two steps.
In the first step we focus on describing the phase shift and, although the whole of the scattering data is used \[see Eq.(\[Q\])\], do not seek to describe the bound state with a high degree of accuracy. The phase shift $\delta_{\ell}^{N}$ corresponding to the potential (\[Pot\]) parameters, which together with $\kappa R$ and $\mathcal{M}R^{1/2}$ are presented in the left half of the Table, is shown in figure 3 \[solid curve\].
In the second step, to improve the description of the bound states we use the relationship (\[M\]) between the poles and residues of the S-matrix and the characteristics of the bound states \[see e.g. Ref. [@PC]\]. Here, the smallest eigenvalue $\lambda_0$ and the corresponding eigenvector component $Z_{N-1,\,0}$ associated with the bound state are found from the system $$\label{ZN}
\sum \limits _{j=0}^{N-1} \,Z_{N-1,\,j}^2=1,$$ $$\label{DZ}
\mathcal{P}_N(-\kappa^2\rho^2/2)
=\frac{1}{T_{N-1,\, N}^{\ell}}\frac{\mathcal{C}_{N-1,\, \ell}^{(+)}
(\mbox{i}\kappa\rho)}
{\mathcal{C}_{N,\, \ell}^{(+)}(\mbox{i}\kappa\rho)},$$ $$\label{DDZ}
\frac{\mathcal{C}_{N-1,\, \ell}^{(-)}(\mbox{i}\kappa\rho)-
\mathcal{P}_N(-\kappa^2 \rho^2/2)\,T_{N-1,\, N}^{\ell}\,
\mathcal{C}_{N,\, \ell}^{(-)}(\mbox{i}\kappa\rho)}
{\frac{d}{dq}\left\{\mathcal{C}_{N-1,\, \ell}^{(+)}(q)-
\mathcal{P}_N(q^2/2)\,T_{N-1,\,
N}^{\ell}\,
\mathcal{C}_{N,\, \ell}^{(+)}(q)\right\}\left.\vphantom{I^I}
\right|_{q=\mbox{\scriptsize i}\kappa\rho}} =\mbox{i}
(-1)^{\ell+1}\, \rho \mathcal{M}^2.$$ Eqs. (\[DZ\]), (\[DDZ\]) are derived from the S-matrix formula for the potential (\[Pot\]) [@YF] $$\label{SN}
S_{\ell}^N=\frac{\mathcal{C}_{N-1,\, \ell}^{(-)}(q)-
\mathcal{P}_N(\epsilon)\,T_{N-1,\, N}^{\ell}\,\mathcal{C}_{N,\, \ell}^{(-)}(q)}
{\mathcal{C}_{N-1,\, \ell}^{(+)}(q)-\mathcal{P}_N(\epsilon)\,T_{N-1,\,
N}^{\ell}\,
\mathcal{C}_{N,\, \ell}^{(+)}(q)}$$ and Eq. (\[M\]). Notice that the component $Z_{N-1,\,N-1}$ corresponding to the leading eigenvalue $\lambda_{N-1}$ is involved to meet the normalization condition (\[ZN\]). The phase shift is scarcely affected by changing the parameters $\left\{\lambda_0, \, Z_{N-1, \, 0}\right.$, $\left.Z_{N-1, \,
N-1} \right\}$ from the initial values obtained on the first step to the ones that are evaluated from Eqs.(\[ZN\]) - (\[DDZ\]). The potential parameters, which provide the correct values of $\kappa R$ and $\mathcal{M}R^{1/2}$, are presented in the right half of Table.
The Laguerre basis
==================
Preliminaries
-------------
For simplicity’s sake we restrict our consideration to the scattering of neutral particles. However, the resulting equations still stand in the presence of the repulsive Coulomb interaction. The potential sought is given by the expression $$V_{\ell}(r, \, r')=\frac{\hbar^2}{2\mu} \sum_{n, \, m =0}^{N-1}
\overline{\phi}_n^{\ell}(x) V_{n, \,
m}\overline{\phi}_{m}^{\ell}(x') \label{LPot}$$ where the functions $\overline{\phi}_n^{\ell}$ $$\overline{\phi}_n^{\ell}(x) =\frac{\displaystyle n!}
{\displaystyle r \, (n+2\ell+1)!}\, \phi_n^{\ell}(x) \label{OLbf}$$ are bi-orthogonal to the base Laguerre functions $\phi_n^{\ell}$: $$\phi_n^{\ell}(x) = (2 b r)^{\ell+1} \,
e^{-b r} L_n^{2\ell+1}(2 b r),\label{Lbf}$$ i.e. $$\int \limits_0^{\infty} dr \overline{\phi}_n^{\ell}(x)\,
\phi_m^{\ell}(x)=\delta_{n,\,m}. \label{O1}$$ Here, $b$ is the scale parameter: $x=br$.
The coefficients $u_n$ of the expansion $$\psi(k,\,r) = \sum \limits_{n=0}^{\infty}u_n(k) \,
\phi_n^{\ell}(x) \label{Lrk}$$ of the Schrödinger equation regular solution $\psi(k,\,r)$ satisfy the system of equations $$\left(h^0_{n, \, m}+V_{n, \, m} \right)u_m(k)=k^2\,A^{\ell}_{n, \,
m}\,u_m(k), \qquad n=0,\, 1,\, \ldots\, . \label{LEq}$$ Here, $\|h^0_{n, \, m}\|$ is the symmetric tridiagonal matrix of the reference Hamiltonian $\frac{2\mu}{\hbar^2}H^0$ (\[H0\]) calculated in the basis (\[Lbf\]) [@HY]: $$\begin{array}{c}
h^0_{n,\,n}=
b\frac{(n+2\ell+1)!}{n!}(n+\ell+1),\\[3mm]
h^0_{n,\,n+1}=h^0_{n+1,\,n}=
b\frac{(n+2\ell+2)!}{2n!}, \qquad n=0,\, 1, \, \ldots \,.\\
\end{array}
\label{LH0}$$ $\|A^{\ell}_{n, \, m}\|$ signifies the basis-overlap matrix $$A^{\ell}_{n, \, m}=\int \limits_0^{\infty} dr
\phi_n^{\ell}(x)\,\phi_m^{\ell}(x)\label{OVLD}$$ which is also of Jacobi form: $$\begin{array}{c}
A^{\ell}_{n,\,n}=\frac{(n+2\ell+1)!}{b\,n!}(n+\ell+1),\\[3mm]
A^{\ell}_{n,\,n+1}=A_{n+1,\,n}=-\frac{(n+2\ell+2)!}{2bn!},
\qquad n=0,\, 1, \, \ldots \,.\\
\end{array}
\label{OVLM1}$$
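The matrices (\[LH0\]) and (\[OVLM1\]) can be generated directly from their defining expressions; a minimal Python sketch reads:

```python
import numpy as np
from math import factorial

def laguerre_h0_overlap(N, ell, b):
    """Tridiagonal reference-Hamiltonian matrix h^0 of Eq. (LH0) and
    basis-overlap matrix A^ell of Eq. (OVLM1) in the Laguerre basis (Lbf)."""
    h0 = np.zeros((N, N))
    A = np.zeros((N, N))
    for n in range(N):
        d = factorial(n + 2 * ell + 1) / factorial(n)
        h0[n, n] = b * d * (n + ell + 1)
        A[n, n] = d * (n + ell + 1) / b
        if n + 1 < N:
            o = factorial(n + 2 * ell + 2) / factorial(n)
            h0[n, n + 1] = h0[n + 1, n] = b * o / 2.0
            A[n, n + 1] = A[n + 1, n] = -o / (2.0 * b)
    return h0, A

h0, A = laguerre_h0_overlap(5, ell=0, b=1.0)
print(np.allclose(h0, h0.T), np.allclose(A, A.T))
```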
The asymptotic behaviour of the coefficients $u_n(k)$, $k>0$, as $n\rightarrow\infty$ is given by the following expression: $$\label{Lkp}
u_n(k)=w_n(k) \equiv \frac{\mbox{
i}}{2}\left[\mathcal{C}_{n,\,\ell}^{(-)}(k)-S(k)\,\mathcal{C}_{n,\,\ell}^{(+)}(k)
\right]$$ where the functions [@YF] $$\begin{array}{c}
\mathcal{C}_{n,\,\ell}^{(\pm)}(k)=-\frac{n!}{(n+\ell+1)!}
\frac{(-\xi)^{\pm(n+1)}}{\left(2\, \sin \zeta \right)^{\ell}}\,
{_2F_1}(-\ell, \, n+1; \; n+\ell+2; \; \xi^{\pm 2}),\\[3mm]
\xi=e^{\mbox{\scriptsize i}\zeta}=\frac{ \displaystyle\mbox{i}b -
k }
{ \displaystyle \mbox{i}b + k },\\
\end{array}
\label{LCpm}$$ obey the inhomogeneous “free” equation $$J^{\ell}_{n,\,m}(k)\,\mathcal{C}_{m,\,\ell}^{(\pm)}(k)=\delta_{n,\,0}\,
\frac{k}{\mathcal{S}_{0,\,\ell}(k)}, \qquad n=0, \, 1, \, \ldots
\,. \label{LJC}$$ Here, $\|J^{\ell}_{n,\,m}(k)\|=\|h^0_{n,\,m}-k^2\,A^{\ell}_{n,\,m}\|$ is the so-called J-matrix. $\mathcal{S}_{n,\,\ell}$ are the solutions of the system of equations $$J^{\ell}_{n,\,m}(k)\,\mathcal{S}_{m,\,\ell}(k)=0, \qquad n=0, \,
1, \, \ldots, \label{LJS}$$ $$\mathcal{S}_{n,\,\ell}(k)=\frac{\ell!\left(2\, \sin \zeta
\right)^{\ell+1}}{2\,(2\ell+1)!}(-\xi)^n\,{_2F_1}(-n, \ell+1;\;
2\ell+2;\;1-\xi^{-2}). \label{LS}$$ It can easily be shown that the completeness relation for the functions $\mathcal{S}_{n,\,\ell}$ of Ref. [@Broad] can be rewritten as $$\frac{2}{\pi}\int \limits _0^{\infty} dk\,
\mathcal{S}_{n,\,\ell}(k)\,A^{\ell}_{n',\,m}\,\mathcal{S}_{m,\,\ell}(k)=\delta_{n,
\,n'}.\label{LCRS}$$
The coefficients $u_n(\mbox{i}\kappa_{\nu})$ of the expansion of the bound state normalized wave function $\psi_{\nu}(r)$ with the energy $-\kappa_{\nu}^2$ have the following asymptotic behaviour $$u_n(\mbox{i}\kappa_{\nu})=w_n(\mbox{i}\kappa_{\nu}) \equiv
\mathcal{M}_{\nu}\,\mbox{i}^{\ell}\,
\mathcal{C}_{n,\,\ell}^{(+)}(\mbox{i}\kappa_{\nu}) \label{LBA}$$ as $n\rightarrow\infty$.
Notice that the sine-like J-matrix solutions $\widetilde{S}(r)=\sum \limits
_{n=0}^{\infty}\mathcal{S}_{n,\,\ell}(k)\,\phi_n^{\ell}(x)$ and the cosine-like one $\widetilde{C}(r)=\sum \limits
_{n=0}^{\infty}\mathcal{C}_{n,\,\ell}(k)\,\phi_n^{\ell}(x)$, where $\mathcal{C}_{n,\,\ell}(k)=\frac12\left(\mathcal{C}^{(+)}_{n,\,\ell}(k)+\right.$ $\left.\mathcal{C}^{(-)}_{n,\,\ell}(k) \right)$, have the asymptotic behaviour (\[scasym\]).
The completeness relation (\[ComplSE\]) is then transformed into $$\frac{2}{\pi}\int \limits _0^{\infty} dk\, u_n(k)\,
A^{\ell}_{n',\,m}\overline{u_m(k)}+\sum \limits
_{\nu}u_n(\mbox{i}\kappa_{\nu})\,A^{\ell}_{n',\,m}\,
\overline{u_m(\mbox{i}\kappa_{\nu})}=\delta_{n,\,n'}. \label{CR2}$$
Inverse problem
---------------
In the framework of the J-matrix version [@ZK] of the inverse scattering problem, the spectral parameter set $\left\{\lambda_j,
\, Z_{N-1,\,j} \right\}_{j=0}^{N-1}$ of the truncated Hamiltonian matrix of order $N$ is obtained from the scattering data; the matrix is taken in the orthogonal basis $\varphi_n^{\ell}=\sum_{m=0}^{N-1}
D_{n,\,m}^{\ell}\,\phi_m^{\ell}$, where $$\label{DM}
D_{n,\,m}^{\ell}=\left\{ \begin{array}{lr}
d_n^{\ell}, & n \ge m,\\[3mm]
0, & n<m, \\
\end{array} \right.
\quad d_n^{\ell} = \sqrt{\frac{2b\,n!}{(n+2\ell+2)!}},$$ i. e. $$\label{bas1}
\varphi_n^{\ell}(x)=d_n^{\ell}(2br)^{\ell+1}e^{-br}L_n^{2\ell+2}(2br).$$ Clearly the set $\left\{\lambda_j, \, Z_{N-1,\,j}
\right\}_{j=0}^{N-1}$ determines a tridiagonal Hamiltonian matrix of order $N$ in any orthogonal basis $\chi_n^{\ell}=\sum_{m=0}^{N-1}
P_{n,\,m}^{\ell}\,\varphi_m^{\ell}$ where $\| P_{n,\,m}^{\ell}\|$ is an arbitrary orthogonal $(N \times N)$-matrix of the form $$\label{PT}
\| P_{n,\,m}^{\ell}\|= \left(
\begin{array}{cccc}
P_{0,\,0}^{\ell} & \cdots & P_{0,\,N-2}^{\ell} & 0 \\
\vdots & \vdots & \vdots & \vdots \\
P_{N-2,\,0}^{\ell} & \cdots & P_{N-2,\,N-2}^{\ell} & 0 \\
0 & \cdots & 0 & 1
\end{array}
\right).$$ Let us assume that $\| P_{n,\,m}^{\ell}\|$ is the orthogonal transformation matrix that performs the change from $\left\{\varphi_n^{\ell}\right\}_{n=0}^{N-1}$ to the new basis $\left\{\chi_n^{\ell}\right\}_{n=0}^{N-1}$ in which the kinetic energy operator truncated matrix is tridiagonal. To perfect the analogy with the oscillator basis case, let us denote the kinetic energy operator $\frac{2\mu}{\hbar^2}H^0$ (\[H0\]) tridiagonal matrix in the basis $\left\{\chi_n^{\ell}\right\}_{n=0}^{N-1}$ by $\|T_{n,\, m}^{\ell} \|$. The sought for Hamiltonian $\frac{2\mu}{\hbar^2}H$ matrix $\|h_{n,\,m}\|$ of order $N$ is presumed to be of a Jacobi form (\[hnm\]) in the basis $\left\{\chi_n^{\ell}\right\}_{n=0}^{N-1}$.
Thus, the first $N-1$ expansion coefficients of the wave function $\psi(k, \, r)$ in the combined basis set $\left\{\{\chi_n^{\ell}\}_{n=0}^{N-1},\;
\{\phi_n^{\ell}\}_{n=N}^{\infty}\right\}$ obey the equations $$\label{Eab}
\begin{array}{c}
a_0\,c_0(k)+b_1\,c_1(k)=k^2\,c_0(k)\\[3mm]
b_n\,c_{n-1}(k)+a_n\,c_n(k)+b_{n+1}\,c_{n+1}(k)=k^2\,c_n(k),
\quad n=1,\, \ldots \, N-2.\\
\end{array}$$ It is easy to check that a sufficient condition for the algebraic version of the Marchenko method to be applicable to the construction of the tridiagonal Hamiltonian matrix (\[hnm\]) is that $$\label{NC}
a_{n+1}=T_{n+1,\,n+1}^{\ell},\; b_{n+1}=T_{n,\,n+1}^{\ell}, \quad
\mbox{for } n=M, \ldots,\, N-2, \; M=\lceil\frac{N}{2}\rceil.$$ If $N$ is odd, the constraint $a_M=T_{M,\,M}^{\ell}$ must be added to (\[NC\]). In this case $c_{M+1}=f_{M+1}$, $c_{M}=f_{M}$, where $f_n$ satisfy the “free” equations $$\label{J1}
T_{n,n-1}^{\ell}f_{n-1}(k)+T_{n,n}^{\ell}f_{n}(k)
+T_{n,n+1}^{\ell}f_{n+1}(k)=k^2f_n(k),
\quad n=1, \, \ldots, \, N-2,$$ and we obtain for $n \le M-1$ $$\label{EM}
c_n(k) = \sum \limits_{m=n}^{N-n-1}K_{n,\,m}f_{m}(k).$$
Notice that in going from the initial Laguerre basis $\left\{
\phi_n^{\ell}\right\}_{n=0}^{\infty}$ to the combined basis set $\left\{\{\chi_n^{\ell}\}_{n=0}^{N-1},\right.$ $\left.
\{\phi_n^{\ell}\}_{n=N}^{\infty}\right\}$ the submatrices $\|h^0_{n,m}\|_{n,m=0}^{N-1}$ and\
$\|A^{\ell}_{n,m}\|_{n,m=0}^{N-1}$ are transformed into $\|T^{\ell}_{n,m}\|_{n,m=0}^{N-1}$ and the identity matrix of order $N$ respectively. In addition, the elements $h^0_{N-1,N}$, $A^{\ell}_{N-1,N}$ and $h^0_{N,N-1}$, $A^{\ell}_{N,N-1}$ are multiplied by $d_{N-1}^{\ell}$. The rest of the (infinite) matrices $\|h^0_{n,m}\|$ and $\|A^{\ell}_{n,m}\|$ is unaltered. It thus follows that $f_n$ must satisfy (in addition to (\[J1\])) the equations $$\label{J2}
T^{\ell}_{N-1,\,N-2}f_{N-2}(k)+
T^{\ell}_{N-1,\,N-1}f_{N-1}(k)+
d^{\ell}_{N-1}J^{\ell}_{N-1,\,N}f_N(k)=k^2f_{N-1}(k),$$ $$\label{J3}
d^{\ell}_{N-1}J^{\ell}_{N,N-1}(k)f_{N-1}(k)+
J^{\ell}_{N,N}(k)f_N(k)+
J^{\ell}_{N,N+1}(k)f_{N+1}(k)=0.$$ $$\label{J4}
J^{\ell}_{n,\,m}(k)f_m(k)=0, \quad n=N+1, \ldots\,.$$ Putting $f_n=\mathcal{S}_{n,\ell}$ for $n \ge N$ and $f_{N-1}=\mathcal{S}_{N-1,\ell}/d^{\ell}_{N-1}$ \[in view of equations (\[J4\]) and (\[J3\]), respectively\], from Eq. (\[J2\]) by using the three-term recursion relation (\[J1\]) we obtain the coefficients $\widetilde{\mathcal{S}}_{n,\ell}$ with $n=0,\ldots\,, N-2$. Similarly, setting $f_n=\mathcal{C}^{(\pm)}_{n,\ell}$ for $n \ge N$ and $f_{N-1}=\mathcal{C}^{(\pm)}_{N-1,\ell}/d^{\ell}_{N-1}$, we obtain the coefficients $\widetilde{\mathcal{C}}^{(\pm)}_{n,\ell}$ with $n=0,\ldots\,,
N-2$. From the Wronskian-like relation (see e.g. [@BR]) $$\label{W1}
J^{\ell}_{n+1,\,n}(k)\left(\mathcal{C}_{n+1,\ell}^{(\pm)}(k)\mathcal{S}_{n,\ell}(k)-
\mathcal{C}_{n,\ell}^{(\pm)}(k)\mathcal{S}_{n+1,\ell}(k)\right)=k,
\; n \ge 0$$ it follows that $$\label{W2}
T^{\ell}_{n+1,\,n}\left(\widetilde{\mathcal{C}}_{n+1,\ell}^{(\pm)}(k)
\widetilde{\mathcal{S}}_{n,\ell}(k)-
\widetilde{\mathcal{C}}_{n,\ell}^{(\pm)}(k)
\widetilde{\mathcal{S}}_{n+1,\ell}(k)\right)=k, \; 0 \le n \le N-2.$$ Besides, since the system of equations (\[LJS\]) in $\mathcal{S}_{n,\ell}$ is homogeneous, the sets $\left\{\mathcal{S}_{n,\ell} \right\}_{n=0}^{\infty}$ and $\left\{\widetilde{\mathcal{S}}_{n,\ell} \right\}_{n=0}^{\infty}$ are connected by a linear transformation and therefore $\widetilde{\mathcal{S}}_{n,\ell}$ also satisfy the homogeneous equation $$\label{S2}
T^{\ell}_{0,0}\widetilde{\mathcal{S}}_{0,\ell}(k)+
T^{\ell}_{0,1}\widetilde{\mathcal{S}}_{1,\ell}(k)=k^2
\widetilde{\mathcal{S}}_{0,\ell}(k),$$ whereas $\widetilde{\mathcal{C}}_{n,\ell}^{(\pm)}$ obey the inhomogeneous one $$\label{C2}
T^{\ell}_{0,0}\widetilde{\mathcal{C}}^{(\pm)}_{0,\ell}(k)+
T^{\ell}_{0,1}\widetilde{\mathcal{C}}^{(\pm)}_{1,\ell}(k)=k^2
\widetilde{\mathcal{C}}^{(\pm)}_{0,\ell}(k)+
\frac{k}{\widetilde{\mathcal{S}}_{0,\ell}(k)}.$$ Thus, the two sets, $\left\{\widetilde{\mathcal{S}}_{n,\ell}
\right\}_{n=0}^{\infty}$ and $\left\{\widetilde{\mathcal{C}}_{n,\ell} \right\}_{n=0}^{\infty}$, $\widetilde{\mathcal{C}}_{n,\ell}=\frac12\left(
\widetilde{\mathcal{C}}^{(-)}_{n,\ell}\right.+$ $\left.\widetilde{\mathcal{C}}^{(+)}_{n,\ell}\right)$, are independent “free” solutions to Eqs. (\[J1\])-(\[J4\]), sine-like \[$\widetilde{\mathcal{S}}_{n,\ell}= \mathcal{S}_{n,\ell}$, $n \ge
N$\] and cosine-like \[$\widetilde{\mathcal{C}}^{(\pm)}_{n,\ell}=
\mathcal{C}^{(\pm)}_{n,\ell}$, $n \ge N$\], respectively (see e.g. [@YF]).
From the above discussion it follows that to obtain $f_n$ with $0
\le n \le N-1$, which are involved in Eq. (\[EM\]), we can set $f_N=w_N$, $f_{N-1}=w_{N-1}/d^{\ell}_{N-1}$, where $w_n$ are defined by (\[Lkp\]). Then, inserting these $f_{N}$ and $f_{N-1}$ in Eq. (\[J2\]) gives $f_{N-2}$. Once $f_{N-2}$ and $f_{N-1}$ are known, $f_{n}$ for $n=N-3,\, \ldots,\, 0$ are obtained by using the three-term recursion relation (\[J1\]). $K_{n,\,m}$ are determined by the equations (\[KEq1\]) and (\[KEq2\]) \[in which the upper limit in the sums is equal to $N-n-1$\]. The expressions for $\{a_n\}$, $\{b_n\}$ are the same as (\[abK\]).
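The backward sweep just described can be summarized in a few lines of code. The following sketch is purely illustrative (the interface is ours; `T` denotes the truncated tridiagonal matrix $\|T^{\ell}_{n,m}\|$, `dN1` stands for $d^{\ell}_{N-1}$, `J_N1_N` for $J^{\ell}_{N-1,N}(k)$, and `wN`, `wN1` for the asymptotic values $w_N(k)$ and $w_{N-1}(k)$, all assumed to be precomputed):

```python
import numpy as np

def backward_fn_sweep(T, dN1, J_N1_N, wN, wN1, k):
    """Generate f_0, ..., f_{N-1} from the asymptotic values f_N = w_N and
    f_{N-1} = w_{N-1}/d_{N-1}, using Eq. (J2) once and then Eq. (J1) downward."""
    N = T.shape[0]
    f = np.zeros(N + 1, dtype=complex)
    f[N] = wN
    f[N - 1] = wN1 / dN1
    # Eq. (J2) rearranged for f_{N-2}
    f[N - 2] = ((k**2 - T[N - 1, N - 1]) * f[N - 1]
                - dN1 * J_N1_N * f[N]) / T[N - 1, N - 2]
    # Eq. (J1) rearranged for f_{n-1}, swept from n = N-2 down to n = 1
    for n in range(N - 2, 0, -1):
        f[n - 1] = ((k**2 - T[n, n]) * f[n]
                    - T[n, n + 1] * f[n + 1]) / T[n, n - 1]
    return f[:N]
```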
$Q_{n,\,n'}$ with $n \le N-1$ in Eqs. (\[KEq1\]) and (\[KEq2\]) are defined \[in view of the overlap matrix form in the combined basis\] by $$\label{CR3}
Q_{n,\,n'}=\frac{2}{\pi}\int \limits _0^{\infty} dk\, f_n(k)\,
\overline{f_{n'}(k)}+\sum \limits
_{\nu}f_n(\mbox{i}\kappa_{\nu})\,
\overline{f_{n'}(\mbox{i}\kappa_{\nu})}, \quad n' \le N-2,$$ $$\label{CR4}
\begin{array}{c}
Q_{n,\,N-1}=\frac{2}{\pi}\int \limits
_0^{\infty} dk\, f_n(k)\,[\overline{w_{N-1}(k)}/d^{\ell}_{N-1}+
d^{\ell}_{N-1}A^{\ell}_{N-1,N}\overline{w_{N}(k)}]+
\qquad\qquad \qquad \\[3mm]
\qquad +\sum \limits _{\nu}f_n(\mbox{i}\kappa_{\nu})\,[
\overline{w_{N-1}(\mbox{i}\kappa_{\nu})}/d^{\ell}_{N-1}+
d^{\ell}_{N-1}A^{\ell}_{N-1,N}\overline{w_{N}
(\mbox{i}\kappa_{\nu})}],\\
\end{array}$$ $$\label{CR5}
Q_{n,\,n'}=\frac{2}{\pi}\int \limits
_0^{\infty} dk\, f_n(k)\,A^{\ell}_{n',m}\overline{w_m(k)}+ \sum \limits
_{\nu}f_n(\mbox{i}\kappa_{\nu})\,A^{\ell}_{n',m}
\overline{w_m(\mbox{i}\kappa_{\nu})}, \quad n' \ge N.$$ Notice that at large $k$, as seen in Eq. (\[LCpm\]), $\mathcal{C}^{(\pm)}_{n,\ell}(k) \sim k^{\ell}$. Thus, as in the case of the oscillator basis, we should restrict ourselves to the description of the scattering data on a finite energy interval, beyond the boundary of which the phase shift generally needs to be modified to ensure the convergence of the integrals in Eqs. (\[CR3\])-(\[CR5\]).
Conclusion
==========
In the potential scattering case the finite-difference approach and the J-matrix method share the tridiagonal representation of the Hamiltonian. The analogy can be carried over to the inverse scattering problem formalism. Here, the J-matrix version of the Marchenko equation algebraic analogue is formulated and the features of its numerical realization are considered. The merit of JME is that it is free from the parameter fit inherent in the previous J-matrix inverse scattering approach [@Z1; @Z2; @ZK; @PC]. We also construct a tridiagonal Hamiltonian matrix of some order $M$ in an orthogonalized Laguerre basis; in doing so it is sufficient to tridiagonalize the matrix representation of the reference Hamiltonian $H^0$ in the finite orthogonal basis subset of size $N=2M$. As has been shown in Ref. [@PC], in the two coupled-channel case without threshold the sought-for interaction matrix may be of a “quasi-tridiagonal” form. On this assumption JME can easily be extended to multichannel scattering.
Acknowledgments {#acknowledgments .unnumbered}
---------------
The author acknowledges helpful conversations with A. M. Shirokov. This work was partially supported by the State Program “Russian Universities” and by the Russian Foundation for Basic Research, grant No. 02-02-17316.
[99]{}
E. J. Heller, H. A. Yamani, Phys. Rev. A [**9**]{}, 1209 (1974).
S. A. Zaitsev, Teoret. Mat. Fiz. [**115**]{}, 263 (1998) \[Theor. Math. Phys. [**115**]{}, 575 (1998)\].
S. A. Zaitsev, Teoret. Mat. Fiz. [**121**]{}, 424 (1999) \[Theor. Math. Phys. [**121**]{}, 1617 (1999)\].
S. A. Zaitsev, E. I. Kramar, J. Phys. G, [**27**]{}, 2037 (2001).
A. M. Shirokov, A. I. Mazur, S. A. Zaytsev, J. P. Vary, T. A. Weber, Phys. Rev. C [**70**]{}, 044005 (2004).

H. A. Yamani, A. D. Alhaidari, and M. S. Abdelmonem, Phys. Rev. A [**64**]{}, 042703 (2001).
H. A. Yamani, L. Fishman, J. Math. Phys. [**16**]{}, 410 (1975).
K. Ghanbari, IP [**17**]{}, 211 (2001).
G. M. L. Gladwell, N. B. Willms, IP [**5**]{}, 165 (1989).
V. M. Chabanov, J. Phys. A, [**37**]{}, 9139 (2004).
K. M. Case, J. Math. Phys. [**14**]{}, 916 (1973).
B. N. Zakhariev, A. A. Suzko, [*Direct and inverse problems. In: Potentials in quantum scattering*]{} (2-nd ed. Berlin, Heidelberg, New York: Springer-Verlag, 1990).
A. I. Baz, Ya. B. Zeldovitch, and A. M. Perelomov, [*Scattering, Reactions and Decays in Non-relativistic Quantum Mechanics*]{} (Moscow: Nauka, 1971).
K. Chadan, P. C. Sabatier, [*Inverse Problems in Quantum Scattering Theory*]{} (New York, Heidelberg, Berlin: Springer-Verlag, 1977).
J. T. Broad, Phys. Rev. A [**31**]{}, 1494 (1985).
J. T. Broad, W. P. Reinhardt, J. Phys. B [**9**]{}, 1491 (1976).
Table
$$\begin{array}{ccc|cc}
\hline \hline \multicolumn{5}{c}{\vphantom{{{C^C}_C}^C} N=7, \; \rho=\frac{R}{2}}\\
\hline
\multicolumn{3}{c}{\begin{array}{c}\kappa R=.6512647458, \\
\mathcal{M}R^{1/2}=1.6017576599\\
\end{array}
} &
\multicolumn{2}{|c}{\begin{array}{c}\kappa R=.6380449999, \\
\mathcal{M}R^{1/2}=1.5833238674\\
\end{array}
}\\
\hline
j & Z_{N-1, \, j} & \lambda_j & Z_{N-1, \, j} & \lambda_j\\
\hline
\begin{array}{c}
0\\ 1\\ 2\\ 3\\ 4\\ 5\\ 6\\
\end{array}
& \begin{array}{l}
0.0356514517\\ 0.1482712147\\ 0.2309801539\\ 0.3094585382\\ 0.4084394267\\
0.5275945630\\ 0.6184249465\\
\end{array}
& \begin{array}{l}
-0.0381260178\\ \phantom{-} 0.4605384282\\ \phantom{-} 1.3246452781\\
\phantom{-} 2.5044702689\\ \phantom{-} 4.1865934000\\ \phantom{-} 6.7063360348\\
\phantom{-} 10.1425219887\\
\end{array}
& \begin{array}{l}
0.0362075259\\ 0.1482712147\\ 0.2309801539\\ 0.3094585382\\ 0.4084394267\\
0.5275945630\\ 0.6183926387\\
\end{array}
& \begin{array}{l}
-0.0353279575\\ \phantom{-} 0.4605384282\\ \phantom{-} 1.3246452781\\
\phantom{-} 2.5044702689\\ \phantom{-} 4.1865934000\\ \phantom{-} 6.7063360348\\
\phantom{-} 10.1425219887\\
\end{array}\\
\hline \hline
\end{array}$$
[^1]: This work has been done partially while the author was visiting the Institute for Nuclear Theory, University of Washington.
---
abstract: 'We study the properties of $K$ and $\bar K$ mesons in nuclear matter at finite temperature from a chiral unitary approach in coupled channels which incorporates the $s$- and $p$-waves of the kaon-nucleon interaction. The in-medium solution accounts for Pauli blocking effects, mean-field binding on all the baryons involved, and $\pi$ and kaon self-energies. We calculate $K$ and $\bar K$ (off-shell) spectral functions and single particle properties. The $\bar K$ effective mass gets lowered by about $-50$ MeV in cold nuclear matter at saturation density and by half this reduction at $T=100$ MeV. The $p$-wave contribution to the ${\bar K}$ optical potential, due to $\Lambda$, $\Sigma$ and $\Sigma^*$ excitations, becomes significant for momenta larger than 200 MeV/c and reduces the attraction felt by the $\bar K$ in the nuclear medium. The $\bar K$ spectral function spreads over a wide range of energies, reflecting the melting of the $\Lambda (1405)$ resonance and the contribution of $YN^{-1}$ components at finite temperature. In the $KN$ sector, we find that the low-density theorem is a good approximation for the $K$ self-energy close to saturation density due to the absence of resonance-hole excitations. The $K$ potential shows a moderate repulsive behavior, whereas the quasi-particle peak is considerably broadened with increasing density and temperature. We discuss the implications for the decay of the $\phi$ meson at SIS/GSI energies as well as in the future FAIR/GSI project.'
author:
- |
L. Tolós $^{1}$, D. Cabrera$^2$ and A. Ramos$^3$\
$^1$ FIAS. Goethe-Universität Frankfurt am Main,\
Ruth-Moufang-Str. 1, 60438 Frankfurt am Main, Germany\
$^2$Departamento de Física Teórica II, Universidad Complutense,\
28040 Madrid, Spain\
$^3$ Departament d’Estructura i Constituents de la Matèria\
Universitat de Barcelona, Diagonal 647, 08028 Barcelona, Spain
title: Strange mesons in nuclear matter at finite temperature
---
0.5 cm
[*PACS:*]{} 13.75.-n; 13.75.Jz; 14.20.Jn; 14.40.Aq; 21.65.+f; 25.80.Nv
[*Keywords:*]{} Effective $s$-wave meson-baryon interaction, Coupled $\bar K N$ channels, Finite temperature, Spectral function, $\Lambda(1405)$ in nuclear matter.
Introduction {#sec:intro}
============
The properties of hadrons and, in particular, of mesons with strangeness in dense matter have been a matter of intense investigation over the last years, in connection to the study of exotic atoms [@Friedman:2007zz] and the analysis of heavy-ion collisions (HIC’s) [@Rapp:1999ej].
At zero temperature, the study of the $\bar K$ interaction in nuclei has revealed some interesting characteristics. First, the presence below the $\bar K N$ threshold of the $\Lambda(1405)$ resonance gives rise to the failure of the $T \rho$ approximation for the $\bar K$ self-energy. Whereas the $\bar K N$ interaction is repulsive at threshold, the phenomenology of kaonic atoms requires an attractive potential. The consideration of Pauli blocking on the intermediate $\bar K N$ states [@Koch; @schaffner] was found to shift the excitation energy of the $\Lambda (1405)$ to higher energies, hence changing the real part of the $\bar K N$ amplitude from repulsive in free space to attractive in a nuclear medium already at very low densities. Further steps were taken to account for self-consistency in the evaluation of the $\bar K$ self-energy [@Lutz] as well as for relevant medium effects on the intermediate meson-baryon states [@Ramos:1999ku], which results in a moderate final size of the attractive potential as well as in a sizable imaginary part associated to several in-medium decay mechanisms [@Ramos:1999ku; @TOL00; @Tolos:2002ud; @TOL06]. Several studies of the $\bar K$ potential based on phenomenology of kaonic atoms have pointed towards a different class of deeply attractive potentials [@gal]. Unfortunately, the present experimental knowledge is unable to solve this controversy, as both kinds of potentials fairly describe the data from kaonic atoms [@gal; @baca].
A different direction in the study of this problem was given in [@akaishi], where a highly attractive antikaon-nucleus potential was constructed leading to the prediction of narrow strongly bound states in few body systems [@akaishi; @dote04; @akaishi05]. This potential has been critically discussed in [@toki] because of the omission of the direct coupling of the $\pi
\Sigma$ channel to itself, the assumption of the nominal $\Lambda(1405)$ as a single bound $\bar{K}$ state, the lack of self-consistency in the calculations and the seemingly too large nuclear densities obtained, of around ten times normal nuclear matter density at the center of the nucleus. Experiments devoted to the observation of deeply bound kaonic states, measuring particles emitted after the absorption of $K^-$ in several nuclei, reported signals that could actually be interpreted in terms of conventional nuclear physics processes. The experimental observations could be explained simply either by the two-body absorption mechanism raised in [@toki], without [@kek1; @kek2; @finuda1; @magas2] or with [@finuda2; @magas1] final state interactions, or by a three-body absorption process [@kek3; @finuda3; @magas3]. Actually, recent improved few-body calculations using realistic ${\bar K} N$ interactions and short-range correlations [@shevchenko; @shevchenko2; @Ikeda:2007nz; @dote_hyp06] predict few-nucleon kaonic states bound only by 50–80 MeV and having large widths of the order of 100 MeV, thereby disclaiming the findings of Refs. [@akaishi; @dote04; @akaishi05].
Relativistic heavy-ion experiments at beam energies below 2AGeV [@Forster:2007qk; @FOPI] constitute another experimental scenario that has been testing the properties of strange mesons not only in a dense but also in a hot medium. Some interesting conclusions have been drawn comparing the different theoretical transport-model predictions and the experimental outcome on production cross sections, and energy and polar-angle distributions [@Forster:2007qk]. For example, despite the significantly different thresholds in binary $NN$ collisions, there is a clear coupling between the $K^-$ and $K^+$ yields since the $K^-$ is predominantly produced via strangeness exchange from hyperons which, on the other hand, are created together with $K^+$ mesons. Furthermore, the $K^-$ and $K^+$ mesons exhibit different freeze-out conditions as the $K^-$ are continuously produced and reabsorbed, leaving the reaction zone much later than the $K^+$ mesons. However, there is still no consensus on the influence of the kaon-nucleus potential on those observables [@Cassing:2003vz].
Compared to the $\bar K N$ interaction, the $KN$ system has received comparatively less attention. Because of the lack of resonant states in the $S=+1$ sector, the single-particle potential of kaons has usually been calculated in a $T \rho$ approximation, with a repulsion of around 30 MeV for nuclear matter density (the information on $T$ is taken from scattering lengths and energy dependence is ignored). However, a recent analysis of the $K N$ interaction in the Jülich model has demonstrated that the self-consistency induces a significant difference in the optical potential with respect to the low-density approximation at saturation density, and the kaon potential exhibits a non-trivial momentum dependence [@Tolos:2005jg].
A precise knowledge of kaon properties in a hot and dense medium is also an essential ingredient to study the fate of the $\phi$ meson. Electromagnetic decays of vector mesons offer a unique probe of high density regions in nuclear production experiments and HIC’s [@Rapp:1999ej]. The $\phi$ meson predominantly decays into $\bar K K$, which are produced practically at rest in the center of mass frame, each carrying approximately half of the mass of the vector meson. Such a system is highly sensitive to the available phase space, so that small changes in the kaon effective masses or the opening of alternative baryon-related decay channels may have a strong repercussion in the $\phi$-meson mass and decay width. The analysis of the mass spectrum of the $\phi$ decay products in dedicated experiments has drawn inconclusive results since the long-lived vector meson mostly decays out of the hot/dense system [@Akiba:1996ab; @Adler:2004hv; @Adamova:2005jr; @Ishikawa:2004id; @Muto:2006eg; @:2007mga]. In addition, despite the sizable modifications predicted in most theoretical studies [@Hatsuda:1991ez; @Asakawa:1994tp; @Zschocke:2002mn; @Klingl:1997tm; @Oset:2000eg; @Cabrera:2002hc; @Smith:1997xu; @AlvarezRuso:2002ib], the current experimental resolution typically dominates the observed spectrum from $\phi$ decays. Still, recent measurements of the $\phi$ transparency ratio in the nuclear photoproduction reaction by the LEPS Collaboration [@Ishikawa:2004id] have shed some light on the problem and seem to indicate an important renormalization of the absorptive part of the $\phi$ nuclear potential, as it was suggested in the theoretical analysis of Ref. [@Cabrera:2003wb]. The unprecedented precision achieved by the CERN NA60 Collaboration in the analysis of dimuon spectrum data from In-In collisions at 158 AGeV [@Arnaldi:2006jq], as well as the advent of future studies of vector meson spectral functions to be carried out at the HADES [@HADES] and CBM [@CBM] experiments in the future FAIR facility, advise an extension of our current theoretical knowledge of the $\phi$ spectral function to the $(\mu_B, T)$ plane, and hence, of the $K$ and $\bar K$ properties at finite temperature and baryon density.
In this work we evaluate the $K$ and $\bar K$ self-energy, spectral function and nuclear optical potentials in a nuclear medium at finite temperature. We follow the lines of Refs. [@Ramos:1999ku; @TOL06; @TOL07; @Oset:1997it] and build the $s$-wave $K N$ and $\bar K N$ $T$-matrix in a coupled channel chiral unitary approach. Medium effects are incorporated by modifying the intermediate meson-baryon states. We account for Pauli blocking on intermediate nucleons, baryonic binding potentials and meson self-energies for pions and kaons. The latter demands a self-consistent solution of the $K$ and $\bar K$ self-energies as one sums the kaon nucleon scattering amplitude over the occupied states of the system, whereas the $T$-matrix itself incorporates the information of the kaon self-energies in the intermediate meson-baryon Green’s functions. The interaction in $p$-wave is also accounted for in the form of $YN^{-1}$ excitations, which lead to a sizable energy dependence of the $\bar K$ self-energy below the quasi-particle peak. Finite temperature calculations have been done in the Imaginary Time Formalism in order to keep the required analytical properties of retarded Green’s functions which, together with the use of relativistic dispersion relations for baryons (and, of course, for mesons), improves on some approximations typically used in former works. The organization of the present article goes as follows: in Sect. \[sec:Form\] we develop the formalism and ingredients on which the calculation is based. Sect. \[sec:Resul\] is devoted to the presentation of the results. The $\bar K$ and $K$ self-energies and spectral functions are discussed in Sects. \[ssec:Resul-Kbar-spectral\] and \[ssec:Resul-Kaon-spectral\], respectively. We devote Sect. \[ssec:optical\] to the discussion of momentum, density and temperature dependence of $\bar K$ and $K$ nuclear optical potentials. Finally, in Sect. \[sec:Conclusion\] we draw our conclusions as well as the implications of the in-medium properties of kaons at finite temperature in transport calculations and $\phi$ meson phenomenology. We also give final remarks pertaining to the present and future works.
Kaon nucleon scattering in hot nuclear matter {#sec:Form}
=============================================
In this section we discuss the evaluation of the effective kaon nucleon scattering amplitude in a dense nuclear medium, extending the unitarized chiral model for $\bar K$ of Refs. [@Ramos:1999ku; @TOL06] to account for finite temperature. This allows us to obtain the in-medium $K$ and ${\bar K}$ self-energy, spectral function and nuclear optical potential. We follow closely the lines of Ref. [@TOL07], where a similar study was reported for open-charm mesons in hot nuclear matter.
$s$-wave kaon nucleon scattering and kaon self-energy {#ssec:swave-kaon-self-energy}
-----------------------------------------------------
The kaon nucleon interaction at low energies has been successfully described in Chiral Perturbation Theory ($\chi$PT) [@Gasser:1984gg; @Meissner:1993ah; @Bernard:1995dp; @Pich:1995bw; @Ecker:1994gg]. The lowest order chiral Lagrangian which couples the octet of light pseudoscalar mesons to the octet of $1/2^+$ baryons is given by $$\begin{aligned}
{\cal L}_1^{(B)} &=& \langle \bar{B} i \gamma^{\mu} \nabla_{\mu} B
\rangle - M \langle \bar{B} B\rangle \nonumber \\
&& + \frac{1}{2} D \left\langle \bar{B} \gamma^{\mu} \gamma_5 \left\{
u_{\mu}, B \right\} \right\rangle + \frac{1}{2} F \left\langle \bar{B}
\gamma^{\mu} \gamma_5 \left[u_{\mu}, B\right] \right\rangle \ ,
\label{chiralLag}\end{aligned}$$ where $B$ is the $SU(3)$ matrix for baryons, $M$ is the baryon mass, $u$ contains the $\Phi$ matrix of mesons and the symbol $\langle \, \rangle$ denotes the flavour trace. The $SU(3)$ matrices appearing in Eq. (\[chiralLag\]) are standard in notation and can be found, for instance, in [@Oset:1997it]. The axial-vector coupling constants have been determined in [@Jido:2003cb] and read $D=0.8$ and $F=0.46$.
Keeping terms with up to two meson fields, the covariant derivative term in Eq. (\[chiralLag\]) provides the following interaction Lagrangian in $s$-wave, $${\cal L}_1^{(B)} = \left\langle \bar{B} i \gamma^{\mu} \frac{1}{4 f^2}
[(\Phi\, \partial_{\mu} \Phi - \partial_{\mu} \Phi \Phi) B
- B (\Phi\, \partial_{\mu} \Phi - \partial_{\mu} \Phi \Phi)]
\right\rangle \ . \label{lowest}$$ Expanding the baryon spinors and vertices in $M_B^{-1}$ the following expression of the $s$-wave kaon nucleon tree level amplitude can be derived $$\begin{aligned}
V_{i j}^s= - C_{i j} \, \frac{1}{4 f^2} \, (2 \, \sqrt{s}-M_{B_i}-M_{B_j})
\left( \frac{M_{B_i}+E_i}{2 \, M_{B_i}} \right)^{1/2} \, \left(
\frac{M_{B_j}+E_j}{2 \, M_{B_j}} \right)^{1/2} \ ,
\label{swa}\end{aligned}$$ with $M_{B_i}$ and $E_i$ the mass and energy of the baryon in the $i$th channel, respectively. The coefficients $C_{ij}$ form a symmetric matrix and are given in [@Oset:1997it]. The meson decay constant $f$ in the $s$-wave amplitude is taken as $f=1.15 f_\pi$. This renormalized value provides a satisfactory description of experimental low energy $\bar K N$ scattering observables (such as cross sections and the properties of the $\Lambda (1405)$ resonance) using only the interaction from the lowest order chiral Lagrangian plus unitarity in coupled channels. We have considered the following channels in the calculation of the scattering amplitude: in the strangeness sector $S=-1$ we have $\bar K N$, $\pi \Sigma$, $\eta \Lambda$ and $K \Xi$ for isospin $I=0$; $\bar K N$, $\pi \Lambda$, $\pi
\Sigma$, $\eta \Sigma$ and $K \Xi$ for $I=1$. For $S=1$, there is only one single channel, $K N$, for each isospin.
In Refs. [@Oset:1997it; @Ramos:1999ku; @TOL06] unitarization of the tree level amplitudes in coupled channels was implemented, which extends the applicability of $\chi$PT to higher energies and in particular allows to account for dynamically generated resonances. In particular, the $\Lambda(1405)$ shows up in the unitarized $s$-wave ${\bar K} N$ amplitude. Following [@Oset:1997it], the effective kaon-nucleon scattering amplitude is obtained by solving the Bethe-Salpeter equation in coupled channels (in matrix notation), $$T = V + \overline{V G T} \, ,$$ where we use the $s$-wave tree level amplitudes as the potential (kernel) of the equation, $V^s_{ij}$, and $$\label{G_vacuum}
G_i (\sqrt{s}) = {\rm i} \,
\int \frac{d^4q}{(2\, \pi)^4} \,
\frac{M_i}{E_i(-\vec{q}\,)} \,
\frac{1}{\sqrt{s} - q_0 - E_i(-\vec{q}\,) + {\rm i} \varepsilon} \,
\frac{1}{q_0^2 - \vec{q}\,^2 - m_i^2 + {\rm i} \varepsilon}$$ stands for the intermediate two-particle meson-baryon Green’s function of channel $i$ ($G$ is diagonal). In principle, both $V$ and $T$ enter off-shell in the momentum integration ($\overline{VGT}$ term) of the meson-baryon loop. However, as it was shown in Refs. [@Oset:1997it; @TOL06], the (divergent) off-shell contributions of $V$ and $T$ in the $s$-wave interaction can be reabsorbed in a renormalization of the bare coupling constants and masses order by order. Therefore, both $V$ and $T$ can be factorized on-shell out of the meson-baryon loop, leaving the four-momentum integration only in the two-particle meson-baryon propagators. An alternative justification of solving the Bethe-Salpether equation with on-shell amplitudes may be found in the framework of the $N/D$ method, applied for meson-meson interactions in Ref. [@Oller:1998zr] and for meson-baryon interactions in Ref. [@Oller:2000fj]. We are thus left with a set of linear algebraic equations with trivial solution, $$T = [1 - V G ]^{-1} V \,\,\, .
\label{eq:BSalgeb}$$ The meson-baryon loop function, $G_i$, needs to be regularized. We apply a cut-off in the three-momentum of the intermediate particles, which provides a simple and transparent regularization method for in-medium calculations, cf. [@Ramos:1999ku; @TOL06].
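At each energy, Eq. (\[eq:BSalgeb\]) amounts to a small complex matrix inversion once the on-shell kernel $V_{ij}(\sqrt{s})$ and the regularized loop functions $G_i(\sqrt{s})$ have been tabulated. A minimal sketch (ours, not the authors' code; channel ordering and normalization conventions are left to the caller) could read:

```python
import numpy as np

def coupled_channel_T(V, G):
    """Solve T = V + V G T for the on-shell amplitude, i.e. T = (1 - V G)^{-1} V.
    V: (n x n) complex kernel at a given sqrt(s); G: length-n array of the
    diagonal (cutoff-regularized) loop functions."""
    n = V.shape[0]
    return np.linalg.solve(np.eye(n) - V @ np.diag(G), V)
```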
In order to obtain the effective $s$-wave $\bar K(K) N$ amplitude in hot and dense matter, we incorporate in the loop functions the modifications on the properties of the mesons and baryons induced by temperature and density.
In the Imaginary Time Formalism (ITF), the baryon propagator in a hot medium is given by: $${\cal G}_B(\omega_m,\vec{p};T) =
\frac{1}{{\rm i}\omega_m-E_B(\vec{p},T)}\ ,
\label{eq:nuc}$$ where ${\rm i} \omega_m={\rm i} (2m+1)\pi T + \mu_B$ is the fermionic Matsubara frequency, with $\mu_B$ the baryon chemical potential, and $E_B$ is the baryon single particle energy, which, in the case of nucleons and singly strangeness hyperons, will also contain the medium binding effects obtained within a temperature dependent Walecka-type $\sigma -\omega$ model (see Ref. [@KAP-GALE]). According to this model, the nucleon energy spectrum in mean-field approximation is obtained from $$\begin{aligned}
E_N(\vec{p},T)=\sqrt{\vec{p}\,^2+M_N^*(T)^2}+\Sigma^v \ ,\end{aligned}$$ with the vector potential $\Sigma^v$ and the effective mass $M_N^*(T)$ given by $$\begin{aligned}
\Sigma^v&=&\left(\frac{g_v}{m_v}\right)^2 \rho \nonumber \\
M_N^*(T)&=&M_N-\Sigma^s, ~~~~~~~~~{\rm with}~\Sigma^s=
\left(\frac{g_s}{m_s}\right)^2 \rho_s \ ,\end{aligned}$$ where $m_s$ and $m_v$ are the meson masses ($m_s=440$ MeV, $m_v=782$ MeV), while $g_s$ and $g_v$ are the scalar and vector density dependent coupling constants. These constants are obtained by reproducing the energy per particle of symmetric nuclear matter at $T=0$ coming from a Dirac-Brueckner-Hartree-Fock calculation (see Table 10.9 of Ref. [@Machleidt:1989tm]). The vector ($\rho$) and scalar ($\rho_s$) densities are obtained by momentum integration, namely $$\begin{aligned}
\rho_{(s)} = 4 \,\int \frac{ d^3p}{(2\pi)^3} \,
n_N^{(s)}(\vec{p},T) \ , \label{eq:density}\end{aligned}$$ of the corresponding vector \[$n_N(\vec p, T)$\] and scalar \[$n_N^s(\vec p, T)$\] density distributions, which are defined in terms of the nucleon Fermi-Dirac function as $$n_N(\vec{p}, T)=\frac{1}{1+\exp{\left [(E_N(\vec{p},
T)-\mu_B)/T\right ]}} \label{eq:density-dist}$$ and $$n_N^s(\vec{p}, T)=\frac{M_N^*(T)n_N(\vec
p,T)}{\sqrt{\vec{p}\,^2+M_N^*(T)^2}}\ , \label{eq:density-dist-s}$$ respectively. The quantities $E_N(\vec{p}, T), M_N^*(T)$ and $\mu_B$ are obtained simultaneously and self-consistently for given $\rho$ and $T$ and for the corresponding values of $g_s$ and $g_v$.
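The self-consistent determination of $E_N(\vec{p},T)$, $M_N^*(T)$ and $\mu_B$ sketched above is a simple fixed-point problem. The Python fragment below is only a schematic illustration (variable names, the momentum integration range and the bracketing interval for the chemical-potential search are ad hoc choices; a strictly positive temperature is assumed, and the couplings `gs2`, `gv2` stand for $(g_s/m_s)^2$ and $(g_v/m_v)^2$ in units of MeV$\,$fm$^3$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.327  # MeV fm

def densities(mu, Mstar, Sigma_v, T):
    """Vector and scalar densities (fm^-3), Eqs. (eq:density)-(eq:density-dist-s)."""
    def nF(p):
        E = np.sqrt(p**2 + Mstar**2) + Sigma_v
        x = np.clip((E - mu) / T, -700.0, 700.0)   # avoid overflow in exp
        return 1.0 / (1.0 + np.exp(x))
    pmax = 1000.0 + 20.0 * T                        # MeV, generous upper limit
    rho = 2.0 / np.pi**2 * quad(lambda p: p**2 * nF(p), 0.0, pmax)[0] / HBARC**3
    rho_s = 2.0 / np.pi**2 * quad(
        lambda p: p**2 * nF(p) * Mstar / np.sqrt(p**2 + Mstar**2),
        0.0, pmax)[0] / HBARC**3
    return rho, rho_s

def solve_mean_field(rho, T, gs2, gv2, MN=939.0, tol=1e-6):
    """Iterate M_N*(T) and mu_B at fixed density rho (fm^-3) and temperature T (MeV)."""
    Mstar = MN
    Sigma_v = gv2 * rho                             # vector potential, fixed by rho
    for _ in range(200):
        # fix mu_B so that the vector density reproduces rho
        mu = brentq(lambda m: densities(m, Mstar, Sigma_v, T)[0] - rho,
                    -2000.0, Sigma_v + Mstar + 3000.0)
        rho_s = densities(mu, Mstar, Sigma_v, T)[1]
        Mstar_new = MN - gs2 * rho_s
        if abs(Mstar_new - Mstar) < tol:
            return Mstar_new, mu, Sigma_v
        Mstar = 0.5 * (Mstar + Mstar_new)           # damped update
    return Mstar, mu, Sigma_v
```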
The hyperon masses and energy spectra, $$\begin{aligned}
E_{Y}(\vec{p},T)=\sqrt{\vec{p}\,^2+M_{Y}^*(T)^2}+\Sigma_{Y}^v \ ,\end{aligned}$$ can be easily inferred from those for the nucleon as $$\begin{aligned}
\Sigma_{Y}^v&=&\frac{2}{3} \left(\frac{g_v}{m_v}\right)^2 \rho= \frac{2}{3}\Sigma^v \nonumber \ ,\\
M_{Y}^*(T)&=&M_{Y}-\Sigma_{Y}^s=M_{Y}-\frac{2}{3}\left(\frac{g_s}{m_s}\right)^2 \rho_s \nonumber \\
&=& M_{Y}-\frac{2}{3}(M_N-M_N^*(T)) \ .\end{aligned}$$ Here we have assumed that the $\sigma$ and $\omega$ fields only couple to the $u$ and $d$ quarks, as in Refs. [@Tsushima:2002cc; @Tsushima:2003dd], so the scalar and vector coupling constants for hyperons and charmed baryons are $$\begin{aligned}
g_v^{Y}=\frac{2}{3}g_v , \hspace{1cm} g_s^{Y}=\frac{2}{3}g_s .\end{aligned}$$ In this way, the potential for hyperons follows the simple light quark counting rule as compared with the nucleon potential: $V_{Y}=2/3 \, V_N$. As reference, we quote in Table \[table:dmuB\] the nucleon and hyperon single particle properties for three densities ($0.25\rho_0$, $\rho_0$ and $2\rho_0$), where $\rho_0=0.17$ fm$^{-3}$ is the normal nuclear matter saturation density, and four temperatures ($T=0$, 50, 100 and 150 MeV). The hyperon attraction at $\rho=\rho_0$ and $T=0$ MeV is about $-50$ MeV, the size of which gets reduced as temperature increases turning even into repulsion, especially at higher densities. This behavior results from the fact that the temperature independent vector potential takes over the strongly temperature-dependent scalar potential which decreases with temperature. We note that the quark-meson coupling (QMC) calculations of Refs. [@Tsushima:2002cc; @Tsushima:2003dd], performed at $T=0$, obtained a somewhat smaller scalar potential (about half the present one) for the $\Lambda$ and $\Sigma$ baryons due to the inclusion of non-linear terms associated to quark dynamics. To the best of our knowledge, no temperature effects have been studied within this framework.
$\rho[{\rm fm}^{-3}]$ T\[MeV\] $\mu_B$\[MeV\] $M_N^*$\[MeV\] $\Sigma^v$\[MeV\] $M_{\Lambda}^*$\[MeV\] $\Sigma_{\Lambda}^v$\[MeV\] $M_{\Sigma}^*$\[MeV\] $\Sigma_{\Sigma}^v$\[MeV\]
----------------------- ---------- ---------------- ---------------- ------------------- ------------------------ ----------------------------- ----------------------- ---------------------------- -- --
0.0425 0 920 781 121 1010 81 1088 81
0.0425 50 820 793 121 1019 81 1096 81
0.0425 100 618 805 121 1026 81 1104 81
0.0425 150 364 815 121 1033 81 1110 81
0.17 0 920 579 282 872 188 950 188
0.17 50 892 605 282 893 188 970 188
0.17 100 783 634 282 913 188 990 188
0.17 150 618 659 282 929 188 1006 188
0.34 0 979 443 422 787 281 865 281
0.34 50 969 470 422 803 281 881 281
0.34 100 905 510 422 830 281 907 281
0.34 150 787 545 422 853 281 930 281
: Nucleon and hyperon single-particle properties obtained with the $\sigma-\omega$ model at finite temperature, for several densities.
\[table:dmuB\]
The meson propagator in a hot medium is given by $$D_M(\omega_n,\vec{q};T) = \frac{1}{({\rm i} \omega_n)^2-\vec{q}\,^2 - m_M^2 -
\Pi_M(\omega_n,\vec{q};T)} \ ,
\label{eq:prop1}$$ where ${\rm i} \omega_n = {\rm i} 2 n \pi T$ is the bosonic Matsubara frequency and $\Pi_M (\omega_n, \vec{q}; T)$ is the meson self-energy. Note that throughout this work we set the mesonic chemical potential to zero, since we are dealing with an isospin symmetric nuclear medium with zero strangeness. We will consider the dressing of pions and kaons.
An evaluation of the in-medium self-energy for pions at finite temperature and baryonic density was given in the Appendix of Ref. [@Tolos:2002ud], which generalized the zero temperature evaluation of the pion self-energy from Refs. [@Oset:1989ey; @Ramos:1994xy] by incorporating thermal effects. We recall that the pion self-energy in nuclear matter at $T=0$ is strongly dominated by the $p$-wave coupling to particle-hole ($ph$) and $\Delta$-hole ($\Delta h$) components (a small, repulsive $s$-wave contribution takes over at small momenta), as well as to 2$p$-2$h$ excitations, which account for pion absorption processes, and short range correlations. We come back to the pion self-energy in Sec. \[ssec:pion-self-energy\], where we improve on some approximations in previous works.
In the case of the kaons, the self-energy receives contributions of comparable size from both $s$- and $p$-wave interactions with the baryons in the medium. We evaluate the $s$-wave self-energy from the effective $\bar K (K)N$ scattering amplitude in the medium, a procedure which, as will be shown explicitly in the following, must be carried out self-consistently. The $p$-wave part of the kaon self-energy will be discussed separately in the next section.
The evaluation of the effective $\bar K (K)N$ $s$-wave scattering amplitude in the hot medium proceeds by first obtaining the meson-baryon two-particle propagator function (meson-baryon loop) at finite temperature in the ITF, ${\cal G}_{MB}$. Given the analytical structure of ${\cal G}_{MB}$ it is convenient to use the spectral (Lehmann) representation for the meson propagator, $$\begin{aligned}
\label{Lehmann}
D_M(\omega_n,\vec{q};T) &=& \int d\omega \,
\frac{S_M(\omega,\vec{q};T)}{{\rm i}\omega_n -
\omega}
\nonumber \\
&=&
\int_0^{\infty} d\omega \,
\frac{S_M(\omega,\vec{q};T)}{{\rm i}\omega_n - \omega}
-
\int_0^{\infty} d\omega \,
\frac{S_{\bar M}(\omega,\vec{q};T)}{{\rm i}\omega_n + \omega}
\,\,\,,\end{aligned}$$ where $S_M$, $S_{\bar M}$ stand for the spectral functions of the meson and its corresponding anti-particle. The separation in the second line of Eq. (\[Lehmann\]) reflects the retarded character of the meson self-energy and propagator, ${\rm Im} \, \Pi
(-q_0,\vec{q};T) = - {\rm Im} \, \Pi (q_0,\vec{q};T)$. Due to strangeness conservation, $K$ and ${\bar K}$ experience markedly different interactions in a nuclear medium [@Ramos:1999ku], which justifies using a different notation ($S_M$, $S_{\bar M}$) for the spectral functions in Eq. (\[Lehmann\])[^1]. Combining Eqs. (\[eq:prop1\]) and (\[Lehmann\]), conveniently continued analytically from the Matsubara frequencies onto the real energy axis, one can write $$S_M(\omega,{\vec q}; T)= -\frac{1}{\pi} {\rm Im}\, D_M(\omega,{\vec q};T)
= -\frac{1}{\pi}\frac{{\rm Im}\, \Pi_M(\omega,\vec{q};T)}{\mid
\omega^2-\vec{q}\,^2-m_M^2- \Pi_M(\omega,\vec{q};T) \mid^2 } \ .
\label{eq:spec}$$
Applying the finite-temperature Feynman rules, the meson-baryon loop function in the ITF reads $$\begin{aligned}
\label{G_ITF}
{\cal G}_{MB}(W_m,\vec{P};T) &=& - T \int \frac{d^3q}{(2\pi)^3} \,
\sum_n \frac{1}{{\rm i} W_m - {\rm i}\omega_n - E_B(\vec{P}-\vec{q},T)} \,
\nonumber \\
&\times&
\int_0^{\infty} d\omega \,
\left( \frac{S_M(\omega,\vec{q};T)}{{\rm i}\omega_n - \omega}
- \frac{S_{\bar M}(\omega,\vec{q};T)}{{\rm i}\omega_n + \omega} \right)
\,\,\, ,\end{aligned}$$ where $\vec{P}$ is the external total three-momentum and $W_m$ an external fermionic frequency, ${\rm i} W_m={\rm i} (2m+1)\pi T + \mu_B$. Note that we follow a quasi-relativistic description of baryon fields all throughout this work: in Eq. (\[G\_ITF\]) the negative energy part of the baryon propagator has been neglected but we use relativistic dispersion relations. All the Dirac structure is included in the definition of the tree level amplitudes. The Matsubara sums can be performed using standard complex analysis techniques for each of the two terms in the meson propagator and one finds $$\begin{aligned}
\label{G_ITF:Matsu-summed}
{\cal G}_{MB}(W_m,\vec{P};T) &=&
\int \frac{d^3q}{(2\pi)^3} \,
\int_0^{\infty} d\omega \,
\left[ S_M(\omega,\vec{q};T) \,
\frac{1-n_B(\vec{P}-\vec{q},T)+f(\omega,T)}
{{\rm i} W_m - \omega - E_B(\vec{P}-\vec{q},T)} \right.
\nonumber \\
&+&
\left.
S_{\bar M}(\omega,\vec{q};T) \,
\frac{n_B(\vec{P}-\vec{q},T)+f(\omega,T)}
{{\rm i} W_m + \omega - E_B(\vec{P}-\vec{q},T)} \, \right]
\,\,\, ,\end{aligned}$$ with $f(\omega,T) = [\exp (\omega / T) - 1]^{-1}$ the meson Bose distribution function at temperature $T$. The former expression can be analytically continued onto the real energy axis, $G_{MB}(P_0+{\rm i} \varepsilon \, ,\vec{P}; T) = {\cal
G}_{MB}({\rm i} W_m \to P_0 + {\rm i} \varepsilon \, , \vec{P}; T )$, cf. Eq. (\[G\_ITF:Matsu-summed\]). With these medium modifications the meson-baryon retarded propagator at finite temperature (and density) reads $$\begin{aligned}
\label{eq:gmed}
{G}_{\bar K(K) N}(P_0+{\rm i} \varepsilon,\vec{P};T)
&=&\int \frac{d^3 q}{(2 \pi)^3}
\frac{M_N}{E_N (\vec{P}-\vec{q},T)} \nonumber \\
&\times &\left[ \int_0^\infty d\omega
S_{\bar K(K)}(\omega,{\vec q};T)
\frac{1-n_N(\vec{P}-\vec{q},T)}{P_0 + {\rm i} \varepsilon - \omega
- E_N
(\vec{P}-\vec{q},T) } \right. \nonumber \\
&+& \left. \int_0^\infty d\omega
S_{K (\bar K)}(\omega,{\vec q};T)
\frac{n_N(\vec{P}-\vec{q},T)} {P_0 +{\rm i} \varepsilon + \omega -
E_N(\vec{P}-\vec{q},T)} \right] \ ,\end{aligned}$$ for $\bar K(K)N$ states and $$\begin{aligned}
\label{eq:gmed_piY}
{G}_{\pi Y}(P_0+{\rm i} \varepsilon,\vec{P}; T)
&= & \int \frac{d^3 q}{(2 \pi)^3} \frac{M_{Y}}{E_{Y}
(\vec{P}-\vec{q},T)} \nonumber \\
& \times &
\int_0^\infty d\omega
S_\pi(\omega,{\vec q},T)
\left[
\frac{1+f(\omega,T)}
{P_0 + {\rm i} \varepsilon - \omega - E_{Y}
(\vec{P}-\vec{q},T) } \right.
\nonumber \\
& + &
\left.
\frac{f(\omega,T)}
{P_0 + {\rm i} \varepsilon + \omega - E_{Y}
(\vec{P}-\vec{q},T) } \right]\end{aligned}$$ for $\pi \Lambda$ or $\pi \Sigma$ states, where $P=(P_0,\vec{P})$ is the total two-particle momentum and ${\vec q}$ is the meson three-momentum in the nuclear medium rest frame. Note that for consistency with the free space meson-baryon propagator, given in Eq. (\[G\_vacuum\]), we have included the normalization factor $M_B / E_B$ in the baryon propagator. We have explicitly written the temperature dependence of the baryon energies indicating that we account for mean-field binding potentials as discussed above. The second term in the $\bar K
N$ loop function typically provides a small, real contribution for the energy range in $P_0$ we are interested in. In order to simplify the numerical evaluation of the self-consistent ${\bar K}N$ amplitude we replace $S_{K}(\omega, \vec q;T )$ by a free-space delta function in Eq. (\[eq:gmed\]). This approximation is sensible as long as the $K$ spectral function in the medium still peaks at the quasi-particle energy and the latter does not differ much from the energy in vacuum, as we will confirm in Sect. \[sec:Resul\]. In Eq. (\[eq:gmed\]) we have neglected the kaon distribution function, since we expect Bose enhancement to be relevant only for the lightest meson species in the range of temperatures explored in the present study, $T = 0$ – $150$ MeV.
The $\pi Y$ loop function, in particular, incorporates the $1+f(\omega ,T)$ enhancement factor which accounts for the contribution from thermal pions at finite temperature, cf. Eq. (\[eq:gmed\_piY\]). In this case, we have neglected the fermion distribution for the participating hyperons, which is a reasonable approximation for the range of temperature and baryonic chemical potential that we have studied (cf. Table \[table:dmuB\]).
In the case of $\eta \Lambda$, $\eta \Sigma$ and $K \Xi$ intermediate states, we simply consider the meson propagator in free space and include only the effective baryon energies modified by the mean-field binding potential, namely $$\begin{aligned}
G_l(P_0+{\rm i} \varepsilon,\vec{P};T)= \int \frac{d^3 q}{(2 \pi)^3} \,
\frac{1}{2 \omega_l (\vec q\,)} \frac{M_l}{E_l (\vec{P}-\vec{q},T)} \,
\frac{1}{P_0 +
{\rm i} \varepsilon - \omega_l (\vec{q}\,) - E_l (\vec{P}-\vec{q},T) } \, .
\label{eq:gprop}\end{aligned}$$ The latter channels are less relevant in the unitarization procedure of the $s$-wave scattering amplitude. They are important to maintain SU(3) symmetry through using a complete basis of states in the coupled-channels procedure, as well as for producing a better description of branching ratios between the various scattering transitions at threshold. However, the width and position of the $\Lambda (1405)$ are basically determined from the unitarized coupling of ${\bar K} N$ and $\pi \Sigma$ channels [@Oset:1997it]. In addition, the changes that kaons [@Waas:1996fy; @Tolos:2005jg] and $\eta$ mesons [@Waas:1997pe; @GarciaRecio:2002cu] experience in the medium at moderate densities are comparably weaker than for $\pi$ and ${\bar K}$, which justifies the simplification adopted here.
As mentioned above, all meson-baryon loop functions in our approach are regularized with a cut-off in the three-momentum integration, $q_{\rm max}$. We adopt here the regularization scale that was set in [@Oset:1997it], where the ${\bar K}N$ scattering amplitude was evaluated in free space, leading to a remarkable description of several ${\bar K} N$ scattering observables and the dynamical generation of the $\Lambda (1405)$ resonance with a single parameter, $q_{\rm max}=630$ MeV/c.
Finally, we obtain the in-medium $s$-wave $\bar K(K)$ self-energy by integrating $T_{\bar K (K) N}$ over the nucleon Fermi distribution at a given temperature, $$\begin{aligned}
\Pi^s_{\bar K(K)}(q_0,{\vec q};T)= \int \frac{d^3p}{(2\pi)^3}\,
n_N(\vec{p},T) \, [{T}^{(I=0)}_{\bar K(K)N}(P_0,\vec{P};T) +
3{T}^{(I=1)}_{\bar K(K)N}(P_0,\vec{P};T)]\ , \label{eq:selfd}\end{aligned}$$ where $P_0=q_0+E_N(\vec{p},T)$ and $\vec{P}=\vec{q}+\vec{p}$ are the total energy and momentum of the $\bar K(K)N$ pair in the nuclear medium rest frame, and $q$ stands for the momentum of the $\bar K(K)$ meson also in this frame. The kaon self-energy must be determined self-consistently since it is obtained from the in-medium amplitude, $ T_{\bar K(K)N}$, which requires the evaluation of the $\bar K(K)N$ loop function, $G_{\bar K(K)N}$, and the latter itself is a function of $\Pi_{\bar K (K)}(q_0, \vec q; T)$ through the kaon spectral function, cf. Eqs. (\[eq:spec\]), (\[eq:gmed\]). Note that Eq. (\[eq:selfd\]) is valid in cold nuclear matter. In the Appendix we provide a derivation of the $s$-wave self-energy from the kaon nucleon $T$-matrix in ITF and give arguments for the validity of Eq. (\[eq:selfd\]) as a sensible approximation.
$p$-wave kaon self-energy {#ssec:pwave-kaon-self-energy}
-------------------------
The main contribution to the $p$-wave kaon self-energy comes from the $\Lambda$ and $\Sigma$ pole terms, which are obtained from the axial-vector couplings in the Lagrangian ($D$ and $F$ terms in Eq. (\[chiralLag\])). The $\Sigma^* (1385)$ pole term is also included explicitly with couplings to the kaon-nucleon states which were evaluated from SU(6) symmetry arguments in [@Oset:2000eg].
In Ref. [@TOL06], the combined $s$- and $p$-wave $\bar K$ self-energy was obtained self-consistently in cold nuclear matter by unitarizing the tree level amplitudes in both channels. Unitarization of the $p$-wave channel, however, did not provide dramatic effects over the tree level pole terms. Those were used in [@Oset:2000eg] to evaluate the $p$-wave self-energy. The latter is built up from hyperon-hole ($Yh$) excitations and it provides sizable strength in the $\bar K$ spectral function below the quasi-particle peak and a moderate repulsion (as it is expected from the excitation of subthreshold resonances) at nuclear matter density. Medium effects and unitarization actually move the position of the baryon pole with respect to that in free space [@TOL06]. We incorporate this behaviour here in an effective way through the use of baryon mean field potentials and re-evaluate the relevant many-body diagrams for the $p$-wave self-energy at finite temperature and density, which considerably simplifies the numerical task. We shall work in the ITF, improving on some approximations of former evaluations [@Oset:2000eg; @Cabrera:2002hc; @Tolos:2002ud]. Hence, the total $K$ and ${\bar K}$ self-energy will consist of the sum of the $s$- and $p$-wave contributions described here and in the former section. Note that the $p$-wave self-energy enters the self-consistent calculation of the $s$-wave self-energy as the $K$ and ${\bar K}$ meson propagators in the intermediate states on the $s$-wave kaon-nucleon amplitude are dressed with the total self-energy at each iteration.
We can write the ${\bar K}$ $p$-wave self-energy as the sum of each of the $Yh$ contributions, $$\begin{aligned}
\label{eq:pwave-expression}
\Pi_{\bar{K}}^p(q_0,\vec{q}; T) &=& \frac{1}{2} \,
\tilde{V}^2_{\bar K N \Lambda} \, \vec{q} \,^2 \,
f_{\Lambda}^2(q_0,\vec{q}\,) \, U_{\Lambda N^{-1}}(q_0,\vec{q}; T)
\nonumber \\
&+& \frac{3}{2} \, \tilde{V}^2_{\bar{K} N \Sigma} \, \vec{q}\, ^2
\, f_{\Sigma}^2(q_0,\vec{q}\,) \, U_{\Sigma N^{-1}}(q_0,\vec{q};
T)
\nonumber \\
&+& \frac{1}{2} \, \tilde{V}^2_{\bar K N \Sigma^*} \, \vec{q}\, ^2
f_{\Sigma^*}^2(q_0,\vec{q}\,) \, U_{\Sigma^* N^{-1}}(q_0,\vec{q};
T) \ , \label{eq:self}\end{aligned}$$ where $U_{Y}$ stands for the $YN^{-1}$ Lindhard function at finite temperature and baryonic density, and $\tilde{V}^2_{\bar K NY}$ represents the $\bar K NY$ coupling from the chiral Lagrangian in a non-relativistic approximation (leading order in a $M_B^{-1}$ expansion) and includes the required isospin multiplicity. They can be found, for instance, in Ref. [@Oset:2000eg]. The $f_Y$ factors account for relativistic recoil corrections to the ${\bar
K}NY$ vertices, which improve on the lowest order approximation and still allow to write the self-energy in a simple form, where all the dynamical information from the $p$-wave coupling is factorized out of the momentum sum in the fermionic loop. These factors read $$\begin{aligned}
\label{eq:recoilfactors}
f_{\Lambda , \Sigma}^2(q_0,\vec{q}\,) &=&
\left[ 2 \,M_{\Lambda , \Sigma} + 2\,q_0\, (M_{\Lambda , \Sigma}-M_N)
- q^2 + 2 \, M_N E_{\Lambda , \Sigma} (\vec{q}\,) \right]
/ 4 \, M_N E_{\Lambda , \Sigma} (\vec{q}\,) \ \ \ ,
\nonumber \\
f_{\Sigma^*}^2(q_0,\vec{q}\,) &=&
(1 - q_0 /M_{\Sigma^*})^2 \ \ \ .\end{aligned}$$ We have also accounted for the finite size of the vertices by incorporating phenomenological hadronic form factors of dipole type via the replacement $\vec{q}\,^2 \to F_K (\vec{q}\,^2) \,
\vec{q}\,^2$ with $F_K (\vec{q}\,^2) = (\Lambda_K^2 / (\Lambda_K^2
+ \vec{q}\,^2))^2$, where $\Lambda_K=1050$ MeV. We provide explicit expressions for $U_{Y N^{-1}}$ in Appendix \[app-Linds\].
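For completeness, Eq. (\[eq:self\]) translates almost literally into code once the Lindhard functions, couplings and recoil factors are available. The fragment below is a sketch with placeholder names (not the authors' implementation):

```python
import numpy as np

def kbar_pwave_selfenergy(q0, q, U_L, U_S, U_Sst, V2_L, V2_S, V2_Sst,
                          f2_L, f2_S, f2_Sst, Lambda_K=1050.0):
    """Sum of Lambda, Sigma and Sigma* hyperon-hole terms, Eq. (eq:self).
    U_*: YN^-1 Lindhard functions at (q0, q); V2_*: squared couplings with isospin
    factors absorbed in the prefactors below; f2_*: recoil factors of Eq. (eq:recoilfactors)."""
    FK = (Lambda_K**2 / (Lambda_K**2 + q**2))**2   # dipole form factor
    q2 = FK * q**2
    return (0.5 * V2_L * q2 * f2_L * U_L
            + 1.5 * V2_S * q2 * f2_S * U_S
            + 0.5 * V2_Sst * q2 * f2_Sst * U_Sst)
```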
In Refs. [@Oset:2000eg; @Cabrera:2002hc] the $K$ and ${\bar K}$ self-energy in $p$-wave was obtained for cold nuclear matter, and it was extended to the finite temperature case in [@Tolos:2002ud]. We would like to present here a comparison of our results with different approximations, such as the ones used in former evaluations, particularly in the $T \to 0$ limit. In Fig. \[fig\_pwave\_comparison\] we present the imaginary part of the ${\bar K}$ $p$-wave self-energy as a function of the kaon energy, evaluated at nuclear matter density and two different kaon momenta. In the upper panels the dashed lines have been obtained by using, in Eq. (\[eq:self\]), the standard non-relativistic evaluation of the $YN^{-1}$ Lindhard function [@Oset:2000eg; @Cabrera:2002hc], the analytical expression of which is also quoted in Appendix \[app-Linds\]. The solid lines correspond to our calculation in Eq. (\[eq:Lind-rel-YN-elaborate\]) for $T=0$ MeV. Both results agree quite well, which ensures that our calculation has the correct $T\to 0$ limit. The observed differences, particularly the threshold energies at which each $YN^{-1}$ component is open/closed, are related to the use of non-relativistic baryon energies (in dashed lines) and some further approximations that we discuss below.
At $T=100$ MeV (lower panels), the dashed lines correspond to the evaluation of the Lindhard function in [@Tolos:2002ud], which extends the zero-temperature, non-relativistic calculation by replacing the nucleon occupation number, $\theta(p_F-p)$, by the corresponding Fermi-Dirac distribution, $n_N(\vec{p},T)$. Note that, in our result (solid lines), the Matsubara sum automatically generates a non-vanishing term proportional to the hyperon distribution function, cf. Eqs. (\[eq:Lind-rel-YN\],\[eq:Lind-rel-YN-elaborate\]) in Appendix \[app-Linds\]. Despite the absence of a crossed kinematics mechanism (as for instance in $ph$ and $\Delta h$ excitations for pions) this guarantees that the imaginary part of the $\bar K$ self-energy identically vanishes at zero energy ($q_0=0$) as it follows from the (retarded) crossing property of the thermal self-energy, ${\rm
Im} \, \Pi_{\bar K}^p (-q_0,\vec{q};T) = - {\rm Im} \, \Pi_K^p
(q_0,\vec{q};T)$ [^2]. The two calculations at $T=100$ MeV exhibit some non-trivial differences. In the finite-$T$ extension of the non-relativistic result the $YN^{-1}$ structures and thresholds are more diluted and the $\Sigma$ component is not resolved even at low momentum, whereas in the relativistic result the three components are clearly identifiable at $q=150$ MeV$/c$. In both calculations the strength of the imaginary part extends to lower energies so that the energy gap in cold nuclear matter is absent here. However, as mentioned before, the relativistic calculation in the ITF vanishes exactly at $q_0=0$ whereas the non-relativistic, finite-$T$ extended result does not. In addition, the standard non-relativistic calculations (at $T=0$ and finite $T$) [@Oset:2000eg; @Cabrera:2002hc; @Tolos:2002ud] omit $\vec{p}\,^2$ terms proportional to $(M_Y^{-1}-M_N^{-1})$ in the energy balance of the two-particle propagator, which are responsible for cutting the available $YN^{-1}$ phase space for high external energies ($q_0$). As a consequence, the dashed lines in the lower panels exhibit a high energy tail which is attenuated by the nucleon distribution whereas the relativistic result has a clear (temperature dependent) end point. This happens for each of the excited hyperon components independently and it is particularly visible for the $\Sigma^*$. The $\Lambda$ and $\Sigma$ tails mix with each other and are responsible for the washing out of the $\Sigma$ structure at finite momentum.
![Imaginary part of the $\bar K$ $p$-wave self-energy from different evaluations of the $YN^{-1}$ Lindhard function (see text) at $\rho=\rho_0$. The upper panels correspond to $T=0$, whereas in the lower panels $T=100$ MeV.[]{data-label="fig_pwave_comparison"}](fig1.eps){width="14cm"}
Finally, the $K$ $p$-wave self-energy can be obtained following the same procedure as above. The excitation mechanisms in this case correspond exactly to the crossed kinematics of those of the $\bar K$. Taking advantage of the crossing property of the thermal self-energy, we obtain the $K$ self-energy from the $\bar K$ one replacing $q_0 \to - q_0$ in $\Pi_{\bar K}^p (q_0,\vec{q};T)$ (modulo a sign flip in the imaginary part). The crossed kinematics causes the $YN^{-1}$ excitations to be far off-shell. In cold nuclear matter, for instance, $\Pi_K^p$ is real and mildly attractive. At finite temperature, though, the fermion distributions of the nucleon and hyperon can accommodate low-energy (off-shell) kaons and $\Pi_K^p$ receives a small imaginary part which rapidly decays with increasing energy.
Pion self-energy {#ssec:pion-self-energy}
----------------
We briefly discuss here the relevant many-body mechanisms that modify the pion propagator, which enters the evaluation of the in-medium ${\bar K} N$ amplitude, cf. Eqs. (\[eq:BSalgeb\]), (\[eq:gmed\_piY\]). In cold nuclear matter, the pion spectral function exhibits a mixture of the pion quasi-particle mode and $ph$, $\Delta h$ excitations [@Oset:1989ey]. The meson-baryon chiral Lagrangian in Eq. (\[chiralLag\]) provides the $\pi NN$ $p$-wave vertex, while the $\pi N\Delta$ vertex can be determined from the standard non-relativistic derivation of the Rarita-Schwinger interaction Lagrangian. However, we shall use phenomenological $\pi NN$ and $\pi N \Delta$ coupling constants determined from analyses of pion-nucleon and pion-nucleus reactions. Their values are $f_N/m_\pi = 0.007244$ MeV$^{-1}$ and $f_{\Delta}/f_N=2.13$. The lowest order $p$-wave pion self-energy due to $ph$ and $\Delta h$ excitations then reads $$\label{eq:piself-ph-Dh}
\Pi_{\pi NN^{-1}+\pi\Delta N^{-1}}^p (q_0,\vec{q};T) =
\left( \frac{f_N}{m_{\pi}} \right) ^2
\vec{q}\,^2 \, \left[ U_{NN^{-1}} (q_0,\vec{q};T)
+ U_{\Delta N^{-1}} (q_0,\vec{q};T) \right]
\,\,\, ,$$ where the finite temperature Lindhard functions for the $ph$ and $\Delta h$ excitations are given in Appendix \[app-Linds\]. Note that, for convenience, we have absorbed the $\pi N \Delta$ coupling in the definition of $U_{\Delta N^{-1}}$.
The strength of the considered collective modes is modified by repulsive, spin-isospin $NN$ and $N\Delta$ short range correlations [@Oset:1981ih], which we include in a phenomenological way with a single Landau-Migdal interaction parameter, $g'=0.7$. The RPA-summed pion self-energy then reads $$\label{eq:piself-total}
\Pi^p_{\pi} (q_0,\vec{q};T) =
\frac{\left( \frac{f_N}{m_{\pi}} \right) ^2
F_{\pi}(\vec{q}\,^2) \, \vec{q}\,^2 \,
\left[ U_{NN^{-1}} (q_0,\vec{q};T) + U_{\Delta N^{-1}} (q_0,\vec{q};T) \right]}
{1 - \left( \frac{f_N}{m_{\pi}} \right) ^2 \, g' \,
\left[ U_{NN^{-1}} (q_0,\vec{q};T) + U_{\Delta N^{-1}} (q_0,\vec{q};T) \right]}
\,\,\, ,$$ which also contains the effect of the same monopole form factor at each $\pi NN$ and $\pi N \Delta$ vertex as used in $T=0$ studies, namely $F_{\pi}(\vec{q}\,^2) = (\Lambda_{\pi}^2 - m_{\pi}^2) /
[\Lambda_{\pi}^2 - (q_0)^2 + \vec{q}\,^2 ]$, with $\Lambda_{\pi}=1200$ MeV, as is needed in the empirical study of $NN$ interactions. Finally, for consistency with former evaluations of the pion self-energy, we have also accounted for one-body $s$-wave scattering and $2p2h$ mechanisms, following the results in Refs. [@Ramos:1994xy; @Seki:1983sh; @Meirav:1988pn], which we have kept the same as at $T=0$.
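For orientation, the RPA sum of Eq. (\[eq:piself-total\]) is simple to evaluate once the Lindhard functions are known. The sketch below (not the code used to produce the figures of this work) treats $U_{NN^{-1}}$ and $U_{\Delta N^{-1}}$ as externally supplied complex numbers, in MeV$^2$, and uses the couplings, Landau-Migdal parameter and cutoff quoted above; the values passed in the example call are placeholders.

```python
import numpy as np

f_N_over_mpi = 0.007244   # piNN coupling divided by m_pi, in MeV^-1
g_prime = 0.7             # Landau-Migdal parameter
m_pi = 138.0              # MeV
Lambda_pi = 1200.0        # monopole cutoff, MeV

def form_factor(q0, q):
    """Monopole form factor F_pi attached to each piNN / piNDelta vertex."""
    return (Lambda_pi**2 - m_pi**2) / (Lambda_pi**2 - q0**2 + q**2)

def pion_pwave_self_energy(q0, q, U_NN, U_DeltaN):
    """RPA-summed p-wave pion self-energy of Eq. (eq:piself-total), in MeV^2.

    U_NN, U_DeltaN: complex ph and Delta-h Lindhard functions at (q0, q, T),
    assumed to be provided by an external routine."""
    c = f_N_over_mpi**2
    U = U_NN + U_DeltaN
    return c * form_factor(q0, q) * q**2 * U / (1.0 - c * g_prime * U)

# Illustrative call with made-up Lindhard values (MeV^2), not actual results:
print(pion_pwave_self_energy(150.0, 300.0, -9.0e3 - 4.0e3j, -6.0e3 - 1.0e3j))
```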
In Fig. \[fig:pion-spectral\] we show the pion spectral function at normal nuclear matter density for two different momenta. At $T=0$ (upper panels) one can easily distinguish the different modes populating the spectral function. At low momentum, the pion quasi-particle peak carries most of the strength together with the $ph$ structure at lower energies. The $\Delta h$ mode starts to show up on the right-hand side of the pion quasi-particle peak. Note that the pion mode feels a sizable attraction with respect to that in free space. At higher momentum, the excitation of the $\Delta$ is clearly visible and provides a considerable amount of strength which mixes with the pion mode. As a consequence, the latter broadens considerably. The solid lines include also the contributions from the pion $s$-wave self-energy and two-body absorption. These mechanisms, especially the latter, generate a background of strength which further broadens the spectral function, softening in particular the pion peak at low momentum and the $\Delta$ excitation at higher momentum.
The lower panels correspond to a temperature of $T=100$ MeV. The softening of the nucleon occupation number due to thermal motion causes a broadening of the three modes present in the spectral function. At $q=450$ MeV/c, the $ph$, $\Delta h$ and pion peaks are completely mixed, although some distinctive strength still prevails at the $\Delta h$ excitation energy. The $s$-wave and $2p2h$ self-energy terms completely wash out the structures that were still visible at zero temperature. For higher temperatures we have checked that no further structures can be resolved in the spectral function, in agreement with Ref. [@Rapp-rho-finiteT]. We find, however, differences at the numerical level, as expected, since we have implemented different hadronic form factors and Landau-Migdal interactions in our model. Moreover, to stay closer to phenomenology, we have also implemented in our model the energy-dependent $p$-wave decay width of the $\Delta$, which favors the mixing of the $\Delta h$ excitation with the pion mode.
![Pion spectral function at $\rho=\rho_0$ for two different momenta and temperatures, $T=0$ (upper panels) and $T=100$ MeV (lower panels). The dashed lines correspond to the $p$-wave self-energy calculation including $ph$, $\Delta
h$ and short range correlations. The solid lines include, in addition, the (small) $s$-wave self-energy and the $2p2h$ absorption mechanisms.[]{data-label="fig:pion-spectral"}](fig2.eps){width="14cm"}
Results and Discussion {#sec:Resul}
======================
The $\bar K$ meson spectral function in a hot nuclear medium {#ssec:Resul-Kbar-spectral}
------------------------------------------------------------
![Imaginary part of the in-medium ${\bar K} N$ $s$-wave amplitude for $I=0$ and $I=1$ at $\rho_0$ as a function of the center-of-mass energy $P_0$, for $T=0$, $100$ MeV, and for the different approaches discussed in the text.[]{data-label="fig_amp"}](fig3.eps){width="14cm"}
We start this section by showing in Fig. \[fig\_amp\] the $s$-wave $\bar K N$ amplitude for $I=0$ and $I=1$ as a function of the center-of-mass energy, $P_0$, calculated at nuclear matter density, $\rho_0$, and temperatures $T=0$ MeV (first row) and $T=100$ MeV (second row). We have considered four different in-medium approaches: a first iteration that only includes Pauli blocking on the nucleon intermediate states (dotted lines), and three other self-consistent calculations of the $\bar K$ meson self-energy increasing gradually the degree of complexity: one includes the dressing of the $\bar K$ meson (long-dashed lines), another considers in addition the mean-field binding of the baryons in the various intermediate states (dot-dashed lines), and, finally, the complete model which includes also the pion self-energy (solid lines). Recall that the $I=0$ amplitude is governed by the behavior of the dynamically generated $\Lambda(1405)$ resonance.
We begin by commenting on the $T=0$ results shown in the upper panels, where we clearly see that the inclusion of Pauli blocking on the intermediate nucleon states generates the $I=0$ $\Lambda(1405)$ at higher energies than its position in free space. This has been discussed extensively in the literature [@Lutz; @Koch; @Ramos:1999ku; @Tolos:2002ud; @TOL06; @TOL00] and it is due to the restriction of available phase space in the unitarization procedure. The self-consistent incorporation of the attractive ${\bar K}$ self-energy moves the $\Lambda(1405)$ back in energy, closer to the free position, while it gets diluted due to the opening of the $\Lambda(1405) N \rightarrow \pi N \Lambda,
\pi N \Sigma$ decay modes [@Lutz; @Ramos:1999ku; @Tolos:2002ud; @TOL06]. The inclusion of baryon binding has mild effects, slightly lowering the position of the resonance peak. A similar smoothing behavior is observed for the $I=1$ amplitude as we include medium modifications on the intermediate meson-baryon states. As already pointed out in Refs. [@Ramos:1999ku; @Tolos:2002ud; @TOL06], when pions are dressed new channels are available, such as $\Lambda N N^{-1}$ or $\Sigma N N^{-1}$ (and similarly with $\Delta N^{-1}$ components), so the $\Lambda(1405)$ gets further diluted.
At a finite temperature of 100 MeV (lower panels), the $\Lambda(1405)$ resonance gets diluted and is produced, in general, at lower energies due to the smearing of the Fermi surface that reduces the Pauli blocking effects. When pions are dressed the strongly diluted resonance moves slightly to higher energies compared to the zero-temperature case. The cusp-like structures that appear on the low-energy side signal the opening of the $\pi\Sigma$ threshold on top of the already opened $YNN^{-1}$ one. Note that the cusp-like structure appears enhanced in the $I=0$ case. We believe that this is a manifestation at finite temperature and density of the two-pole structure of the $\Lambda(1405)$. As seen in Refs. [@Oller:2000fj; @Garcia-Recio:2002td; @Jido:2002yz; @Jido:2003cb; @Garcia-Recio:2003ks], this resonance is, in fact, the combination of two poles in the complex plane that appear close in energy and couple strongly to either $\bar K N$ or $\pi
\Sigma$ states. These two poles move apart and can even be resolved in a hot medium because density and temperature influence each of them differently. Although not shown in the figure, we find that, at twice nuclear matter density, the pole that couples more strongly to $\pi\Sigma$ states moves further below the $\pi\Sigma$ threshold and acquires a clear Breit-Wigner shape. This allows us to conclude that the cusp observed in the $I=0$ amplitude at $T=100$ MeV and $\rho=\rho_0$ is essentially a reflection of this pole. The shape of the corresponding resonance appears distorted with respect to a usual Breit-Wigner because the pole lies just below the threshold of the $\pi\Sigma$ channel to which it couples very strongly, a behavior known as the Flatté effect [@Flatte:1976xu]. Note that these structures are not seen in the $T=0$ results because the $\pi\Sigma$ threshold in that case is located at around 1295 MeV, out of the range of the plot.
![ Real and imaginary parts of the $\bar K$ self-energy and spectral density, as functions of the $\bar K$ energy, for $q=0$ MeV/c and $q=450$ MeV/c. Results have been obtained at $T=0$ and $\rho_0$, including $(s+p)$-wave contributions, for the three self-consistent approaches discussed in the text. []{data-label="fig_selftot_T0"}](fig4.eps){width="14cm"}
Results for the ${\bar K}$ self-energy and spectral function at $\rho_0$ and $T=0$, including $s$- and $p$-wave contributions, obtained in the different self-consistent approaches are compared in Fig. \[fig\_selftot\_T0\]. At $q=0$ MeV/c, the inclusion of the baryon binding potential (dot-dashed lines) moves the quasi-particle peak of the spectral function, which is defined as $$E_{qp}(\vec{q}\,)^2=\vec{q}\,^2+m_{\bar K}^2+{\rm
Re}\,\Pi_{\bar K}(E_{qp}(\vec{q}\,),\vec{q}\,) \ , \label{eq:Qparticle}$$ to higher energies with respect to the case with no binding (dashed lines). The pion dressing (solid lines) further alters the behavior of the self-energy and, hence, that of the spectral function. The attraction of the antikaon mode decreases, while its width increases due to the opening of new decay channels induced by the $ph$ and $\Delta h$ pion excitations. At finite momentum, the $p$-wave $\Lambda N^{-1}$, $\Sigma N^{-1}$ and $\Sigma^* N^{-1}$ excitation modes are clearly visible around 300, 400 and 600 MeV, respectively. The latter mode mixes very strongly with the quasi-particle peak, making the differences between the various self-consistent approaches less visible in the ${\bar K}$ spectral function. Similar results were obtained in Ref. [@TOL06]. The differences between the two $T=0$ calculations arise mainly from the use in this work of different (and more realistic) baryon binding potentials. The effects of the particular details of the nucleon spectrum on the $\bar K$ spectral function have been noted recently in the $T=0$ study of Ref. [@Lutz:2007bh].
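The quasi-particle energies entering this discussion solve the implicit Eq. (\[eq:Qparticle\]). Since the real part of the self-energy is a smooth function of the energy, a plain fixed-point iteration is usually sufficient; in the sketch below `re_Pi(E, q)` is a hypothetical routine (e.g. an interpolation of a precomputed table of ${\rm Re}\,\Pi_{\bar K}$), and the example call uses a toy constant attraction.

```python
import numpy as np

def quasiparticle_energy(q, re_Pi, m_K=495.7, tol=1e-6, max_iter=200):
    """Solve E^2 = q^2 + m_K^2 + Re Pi(E, q) by fixed-point iteration (MeV).

    re_Pi: callable returning the real part of the Kbar self-energy at
    energy E and momentum q (assumed to be supplied externally)."""
    E = np.sqrt(q**2 + m_K**2)                 # start from the free dispersion
    for _ in range(max_iter):
        E_new = np.sqrt(q**2 + m_K**2 + re_Pi(E, q))
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

# Toy constant attraction corresponding to roughly -40 MeV at q = 0:
print(quasiparticle_energy(0.0, lambda E, q: -2.0 * 495.7 * 40.0))
```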
![Real and imaginary parts of the $\bar K$ self-energy and spectral function at $\rho_0$, for $q=0,450$ MeV/c and $T=0$, $100$ MeV as function of the $\bar K$ energy, including $s$- and $(s+p)$-wave contributions in the full self-consistent calculation. []{data-label="fig_selftot"}](fig5.eps){width="14cm"}
The effect of finite temperature on the different partial-wave contributions to the $\bar K$ self-energy is shown in Fig. \[fig\_selftot\]. In this figure we display the real and imaginary parts of the $\bar K$ self-energy together with the $\bar K$ spectral function for the self-consistent approximation that dresses baryons and includes the pion self-energy. We show the results as a function of the $\bar K$ energy for two different momenta, $q=0$ MeV/c (left column) and $q=450$ MeV/c (right column). The different curves correspond to $T=0$ and $T=100$ MeV including the $s$-wave and the $(s+p)$-wave contributions.
According to Eq. (\[eq:self\]), the present model does not give an explicit $p$-wave contribution to the ${\bar K}$ self-energy at zero momentum. The small differences between the $s$ and the $s+p$ calculations observed at $q=0$ MeV/c are due to the indirect effects of having included the $p$-wave self-energy in the intermediate meson-baryon loop. The importance of the $p$-wave self-energy is more evident at a finite momentum of $q=450$ MeV/c. The effect of the subthreshold $\Lambda N^{-1}$, $\Sigma N^{-1}$ and $\Sigma^*
N^{-1}$ excitations is repulsive at the $\bar K N$ threshold. This repulsion, together with the strength below threshold induced by those excitations, can be easily seen in the spectral function at finite momentum (third row). The quasi-particle peak moves to higher energies while the spectral function falls off slowly on the left-hand side.
Temperature results in a softening of the real and imaginary parts of the self-energy as the Fermi surface is smeared out. The peak of the spectral function moves closer to the free position while it extends over a wider range of energies.
![The $\bar K$ meson spectral function for $q=0$ MeV/c and $q=450$ MeV/c at $\rho_0$ and $2\rho_0$ as a function of the $\bar K$ meson energy for different temperatures and for the self-consistent calculation including the dressing of baryons and pions.[]{data-label="fig_spectot_Kbar"}](fig6.eps){width="14cm"}
For completeness, we show in Fig. \[fig\_spectot\_Kbar\] the evolution of the $\bar K$ spectral function with increasing temperature for two different densities, $\rho_0$ (upper row) and $2\rho_0$ (lower row), and two momenta, $q=0$ MeV/c (left column) and $q=450$ MeV/c (right column), in the case of the full self-consistent calculation which includes the dressing of baryons and the pion self-energy. At $q=0$ MeV/c the quasi-particle peak moves to higher energies with increasing temperature due to the loss of strength of the attractive effective ${\bar K}N$ interaction. Furthermore, the collisional broadening of the quasi-particle peak is enhanced and it gets mixed with the strength associated with the $\Lambda(1405)$, appearing both to the right (slow fall-off) and to the left (cusp-like structures) of the peak. All these effects are less pronounced at a finite momentum of $q=450$ MeV/c. In this case, the region of the quasi-particle peak is exploring ${\bar K}$ energies of around 700 MeV, where the self-energy has a weaker energy dependence with temperature, as seen in Fig. \[fig\_selftot\]. We note that, as opposed to the zero-momentum case, the width of the quasi-particle peak decreases with increasing temperature because of the reduction of the inter-mixing with the $\Sigma^* N^{-1}$ excitations, which get diluted in a hot medium. As for the density effects, we just note that the quasi-particle peak widens at larger nuclear density due to the enhancement of collision and absorption processes. In fact, a significant amount of strength is visible at energy values substantially below the quasi-particle peak. The fact that the $\bar{K}$ spectral function spreads to lower energies, even at finite momentum, may have relevant implications for the phenomenology of the $\phi$ meson propagation and decay in a nuclear medium. We will further elaborate on this point in the Conclusions section.
$K$ meson in nuclear matter at finite temperature {#ssec:Resul-Kaon-spectral}
-------------------------------------------------
![The $K$ meson spectral function for $q=0$ MeV/c and $q=450$ MeV/c at $\rho_0$ and $2\rho_0$ as a function of the $K$ meson energy for different temperatures.[]{data-label="fig_spectot_K"}](fig7.eps){width="14cm"}
The evolution with temperature and density of the properties of kaons is also a matter of high interest. In particular, kaons are ideal probes to test the high-density phase of relativistic heavy-ion collisions at incident energies ranging from 0.6 to 2 AGeV and to study the stiffness of the nuclear equation of state [@Forster:2007qk]. Moreover, there is a strong interconnection between the $K^+$, $K^-$ and $\phi$ channels, which can lead to important changes in the $\phi$-meson production in heavy-ion collisions [@Mangiarotti:2003es].
In the $S=1$ sector, only the $KN$ channel is available and, hence, the many-body dynamics of the $K$ meson in the nuclear medium is simplified with respect to the $\bar K N$ case. The $K$ spectral function at $q=0$ and $q=450$ MeV/c is displayed in Fig. \[fig\_spectot\_K\] for the self-consistent calculation that includes the dressing of baryons, for $\rho_0$ (upper row) and $2\,\rho_0$ (lower row) and different temperatures. The $K$ meson is described by a narrow quasi-particle peak which dilutes with temperature and density as the phase space for collisional $KN$ states increases. The $s$-wave self-energy provides a moderate repulsion at the quasi-particle energy, which translates into a shift of the $K$ spectral function to higher energies with increasing density. In contrast to the $\bar K N$ case, the inclusion of $p$-waves has a mild effect on the kaon self-energy (compare thin-dashed lines to solid lines at $T=100$ MeV and $q=450$ MeV/c), as they arise from far off-shell $YN^{-1}$ excitations in crossed kinematics. These excitations provide a small, attractive and barely energy-dependent contribution to the $K$ self-energy.
![The $K$ mass shift for $T=0$ MeV as a function of density, obtained within the self-consistent calculation and in the $T\rho$ approximation.[]{data-label="fig_upot_dens"}](fig8.eps){width="10cm"}
We can define the $K$ optical potential in the nuclear medium as $$U_{K}(\vec{q},T)=\frac{\Pi_{K}(E_{qp}(\vec{q}\,),\vec{q},T)}{2\sqrt{m_{K}^2+\vec{q}\,^2}} \ ,
\label{eq:Kpot}$$ which, at zero momentum, can be identified as the in-medium shift of the $K$ meson mass. The $K$ mass shift obtained in the self-consistent calculation that considers also the nucleon binding effects is displayed in Fig. \[fig\_upot\_dens\] as a function of the nuclear density for $T=0$ MeV. Our self-consistent results are compared to those of the low-density or $T \rho$ approximation, obtained by replacing the medium-dependent amplitude by the free-space one in Eq. (\[eq:selfd\]). We observe that the kaon potential at nuclear saturation density in the $T \rho$ approximation is 4 MeV less repulsive than in the case of the self-consistent approach, which gives a repulsion of 29 MeV. This value is in qualitative agreement with other self-consistent calculations [@Tolos:2005jg; @LUTZ-KORPA] and close to the 20 MeV of repulsion obtained in $K^+$ production on nuclei by the ANKE experiment of the COSY collaboration [@Nekipelov]. We conclude that the low-density theorem for densities below normal nuclear matter density is fulfilled within 15% due to the smooth energy dependence of the $K N$ interaction tied to the absence of resonant states close to the $KN$ threshold.
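The comparison above can be organized schematically with Eq. (\[eq:Kpot\]). In the sketch below the $T\rho$ self-energy is taken to be strictly linear in the density, with its normalization fixed by hand to the 25 MeV of repulsion quoted for that approximation at $\rho_0$; this is only an illustration of the low-density scaling, not a calculation of this work.

```python
import numpy as np

m_K = 495.7      # MeV
rho0 = 0.17      # fm^-3, normal nuclear matter density

def optical_potential(Pi, q=0.0):
    """U_K = Pi_K(E_qp, q) / (2 sqrt(m_K^2 + q^2)), Eq. (eq:Kpot), in MeV."""
    return Pi / (2.0 * np.sqrt(m_K**2 + q**2))

# In the T rho approximation Pi_K is linear in the density.  The effective
# (real) amplitude is normalized so that U_K(rho0) = 25 MeV, the value quoted
# above for this approximation (placeholder normalization, MeV^2 per fm^-3).
T_KN_eff = 25.0 * 2.0 * m_K / rho0

for rho in (0.5 * rho0, rho0, 2.0 * rho0):
    print(rho / rho0, optical_potential(T_KN_eff * rho))   # 12.5, 25.0, 50.0 MeV
```

The self-consistent repulsion of 29 MeV quoted above at $\rho_0$ then sits 4 MeV above the linear estimate printed for $\rho=\rho_0$.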
In-medium $\bar K$ and $K$ optical potentials at finite temperature {#ssec:optical}
-------------------------------------------------------------------
![The $\bar K$ potential for the full self-consistent calculation at $T=100$ MeV and $0.25\rho_0$, $\rho_0$ and $2\rho_0$ as a function of momentum. The $\bar K$ potential at $T=0$ and $\rho_0$ including $(s+p)$-waves is also shown. []{data-label="fig_upot_Kbar"}](fig9.eps){width="14cm"}
![The $K$ potential for the full self-consistent calculation at $T=100$ MeV and $0.25\rho_0$, $\rho_0$ and $2\rho_0$ as a function of momentum. The $K$ potential at $T=0$ and $\rho_0$ including $(s+p)$-waves is also shown.[]{data-label="fig_upot_K"}](fig10.eps){width="14cm"}
In this last subsection we provide the $K$ and ${\bar K}$ optical potentials at conditions reached in heavy-ion collisions for beam energies of the order of or below 2 AGeV, where temperatures can reach values of $T=100$ MeV together with densities up to a few times normal nuclear density [@Forster:2007qk; @FOPI].
In Figures \[fig\_upot\_Kbar\] and \[fig\_upot\_K\], we show the $\bar K$ and $K$ optical potentials at $T=100$ MeV for different densities ($0.25\rho_0$, $\rho_0$ and $2\rho_0$), including $s$- (dotted lines) and $(s+p)$-waves (solid lines), as functions of the meson momentum. In the case of $\rho_0$, we also show the potential at $T=0$ including $(s+p)$-waves (dashed lines).
The real part of the $\bar K$ potential becomes more attractive as we increase the density, going from $-4$ MeV at $0.25 \rho_0$ to $-45$ MeV at $2\rho_0$ for $q=0$, when both $s$- and $p$-waves are included. The repulsive $p$-wave contributions to the $\bar K$ potential become larger as density increases, reducing substantially the amount of attraction felt by the ${\bar K}$. Compared to a previous self-consistent calculation using the Jülich meson-exchange model [@TOL00; @Tolos:2002ud], here we observe a stronger dependence of the optical potential on the ${\bar K}$ momentum.
The imaginary part of the ${\bar K}$ optical potential at $T=100$ MeV is little affected by $p$-waves, which, as seen in Fig. \[fig\_selftot\], basically modify the ${\bar K}$ self-energy below the quasi-particle peak. The imaginary part of the potential shows a flat behavior at low momentum and, eventually, its magnitude decreases with increasing momentum as the quasi-particle energy moves away from the region of $YN^{-1}$ excitations.
With respect to the zero temperature case, shown for $\rho_0$ in the middle panels of Fig. \[fig\_upot\_Kbar\], the optical potential at $T=100$ MeV shows less structure. The real part amounts to basically half the attraction obtained at $T=0$ MeV, while the imaginary part gets enhanced at low momentum, due to the increase of collisional width, and reduced at high momentum, due to the decoupling of the ${\bar K}$ quasi-particle mode from the $\Sigma^*
N^{-1}$ one.
The real part of the $K$ meson potential changes from 7 MeV at $0.25 \rho_0$ to 74 MeV at $2\rho_0$ for $q=0$ MeV/c and receives its major contribution from the $s$-wave interaction, the $p$-wave providing a moderately attractive correction. The imaginary part moves from $-2$ MeV at $0.25 \rho_0$ to $-25$ MeV at $2\rho_0$ for $q=0$ MeV/c and its magnitude grows moderately with increasing momentum as the available phase space also increases.
Finite temperature affects the real part of the $K$ meson potential only mildly, as can be seen for $\rho_0$ by comparing the dashed and solid lines in the middle panels of Fig. \[fig\_upot\_K\]. The magnitude of the imaginary part increases with temperature for all momenta, consistent with the increase in the number of thermally excited nucleon states.
From our results for $\bar K$ and $K$ mesons, it is clear that $p$-wave effects can be neglected at subnuclear densities at the level of the quasi-particle properties. However, they become substantially important as density increases and are also responsible for a considerable amount of the strength at low energies in the spectral function.
Summary, conclusions and outlook {#sec:Conclusion}
================================
We have obtained the $\bar K$ and $K$ self-energies in symmetric nuclear matter at finite temperature from a chiral unitary approach, which incorporates the $s$- and $p$-waves of the kaon-nucleon interaction. At tree level, the $s$-wave amplitude is obtained from the Weinberg-Tomozawa term of the chiral Lagrangian. Unitarization in coupled channels is imposed by solving the Bethe-Salpeter equation with on-shell amplitudes. The model generates dynamically the $\Lambda
(1405)$ resonance in the $I=0$ channel. The in-medium solution of the $s$-wave amplitude, which proceeds by a re-evaluation of the meson-baryon loop function, accounts for Pauli-blocking effects, mean-field binding on the nucleons and hyperons via a temperature-dependent $\sigma-\omega$ model, and the dressing of the pion and kaon through their corresponding self-energies. This requires a self-consistent evaluation of the $K$ and $\bar K$ self-energies. The $p$-wave self-energy is accounted for through the corresponding hyperon-hole ($YN^{-1}$) excitations. Finite temperature expressions have been obtained in the Imaginary Time Formalism, giving a formal justification of some approximations typically done in the literature and, in some cases, improving upon the results of previous works. For instance, in this formalism, the Lindhard function of $YN^{-1}$ excitations automatically accounts for Pauli blocking on the excited hyperons and satisfies the analytical constraints of a retarded self-energy.
The $\bar K$ self-energy and, hence, its spectral function show a strong mixing between the quasi-particle peak and the $\Lambda(1405)N^{-1}$ and $YN^{-1}$ excitations. The effect of the $p$-wave $YN^{-1}$ subthreshold excitations is repulsive for the $\bar K$ potential, compensating in part the attraction provided by the $s$-wave ${\bar K} N$ interaction. Temperature softens the $p$-wave changes on the spectral function at the quasi-particle energy. On the other hand, together with the $s$-wave mechanisms, the $p$-wave self-energy provides a low-energy tail which spreads the spectral function considerably, due to the smearing of the Fermi surface for nucleons. Similarly, the size of the imaginary part of the potential decreases with momentum, as the $\bar K$ mode decouples from subthreshold absorption mechanisms.
The narrow $K$ spectral function dilutes with density and temperature as the number of collisional $KN$ states is increased. A moderate repulsion, coming from the dominant $s$-wave self-energy, moves the quasi-particle peak to higher energies in the hot and dense medium. The absence of resonant states close to threshold validates the use of the low-density theorem for the $K$ optical potential approximately up to saturation density. The inclusion of $p$-waves has a mild attractive effect on the $K$ self-energy and potential, which results from $YN^{-1}$ excitations in crossed kinematics.
The properties of strange mesons at finite temperature for densities of 2-3 times normal nuclear matter density have been the object of intensive research in the context of relativistic heavy-ion collisions at beam energies below 2 AGeV [@Forster:2007qk]. The comparison between the experimental results on production cross sections, energy distributions and polar angle distributions, and the different transport-model calculations has led to several important conclusions, such as the coupling between the $K^-$ and the $K^+$ yields by strangeness exchange and the fact that the $K^+$ and $K^-$ mesons exhibit different freeze-out conditions. However, there is still debate on the influence of the kaon-nucleus potential on those observables. The in-medium modifications of the $\bar K$ and $K$ properties obtained in this paper could be used in transport calculations and tested against the data from the current experimental programs in heavy ions [@Forster:2007qk; @FOPI].
The fact that the $\bar{K}$ spectral function spreads to low energies, even at finite momentum, may have relevant implications on the phenomenology of the $\phi$ meson propagation and decay in a nuclear medium. The reduced phase space for the dominant decay channel in vacuum, $\phi \to \bar K K$, makes the $\phi$ meson decay width a sensitive probe of kaon properties in a hot and dense medium (the $p$-wave nature of the $\phi \bar K K$ coupling further enhances this sensitivity). In [@Oset:2000eg; @Cabrera:2002hc] the $\phi$ meson mass and decay width in nuclear matter were studied from a calculation of the $\bar K$ and $K$ self-energies in a chiral unitary framework similar to the present work (the most relevant differences and novelties introduced in this work have been discussed in previous sections). The overall attraction of the $\bar K$ meson together with a sizable broadening of its spectral function (which reflects the fate of the $\Lambda (1405)$ in a nuclear medium), induced a remarkable increase of $\Gamma_{\phi}$ of almost one order of magnitude at $\rho=\rho_0$ as compared to the width in free space, as several decay mechanisms open in the medium such as $\phi N \to K Y$ and $\phi N \to K \pi Y$. The LEPS Collaboration [@Ishikawa:2004id] has confirmed that the $\phi$ meson width undergoes strong modifications in the medium from the study of the inclusive nuclear $\phi$ photoproduction reaction on different nuclei. The observed effects even surpass the sizable modifications obtained in [@Oset:2000eg; @Cabrera:2002hc] and predicted for the $\phi$ photoproduction reaction in [@Cabrera:2003wb].
At finite temperature (CERN-SPS, SIS/GSI and FAIR/GSI conditions), despite the $\bar K$ peak returning towards its free position, we expect a similar or even stronger broadening of the $\phi$ meson, as $S_{\bar K}$ further dilutes in the medium, effectively increasing the available phase space. Note, in addition, that the presence of thermally excited mesons induces “stimulated” $\phi \to \bar K K$ decays (as well as diffusion processes) [@Gale:1990pn; @Haglin:1994ap]. Since Bose enhancement is more effective on the lighter modes of the system, the low-energy tail of the $\bar K$ spectral function may contribute substantially to the $\phi$ decay width.
At RHIC and LHC conditions, the hot medium is expected to have a lower net baryon content. One may conclude, as a consequence, that the contribution from interactions with baryons will be smaller. The relevance of baryonic density effects even at high temperatures has been stressed for the $\rho$ and $\phi$ meson clouds in hot and dense matter [@Smith:1997xu; @Rapp:1999ej; @Rapp:2000pe]. In the case of $\phi \to \bar K K$ decays, a finite density of antibaryons allows the $K$ to interact with the medium through the charge-conjugated mechanisms described here for the $\bar K$, and vice versa (in the limit of $\mu_B =0$ the $\bar K$ and $K$ self-energies are identical). At small net baryon density, whereas the effective contribution from the real parts of the $K$, $\bar K$ self-energies tends to vanish, this is not the case for the imaginary parts, which are always cumulative. Thus even at small baryonic chemical potential, the presence of antibaryons makes up for the loss of reactivity from having smaller nuclear densities. Additionally, the relevance of kaon interactions with the mesonic gas (ignored in this work) becomes manifest in this regime, as has been pointed out in [@AlvarezRuso:2002ib; @Holt:2004tp; @Faessler:2002qb; @Santini:2006cm]. These mechanisms, together with the Bose enhancement of $\bar K K$ decays, point towards a sizable increase of the $\phi$ reactivity even at very high temperatures.
Therefore, we plan to study the influence of the $\bar K K$ cloud on the properties of the $\phi$ meson in a nuclear medium at finite temperature [@Dani], extending our previous analysis for cold nuclear matter [@Oset:2000eg; @Cabrera:2002hc; @Cabrera:2003wb]. Such changes in the $\phi$ meson properties are a matter of interest in the current and future experimental heavy-ion programs [@FOPI; @HADES; @CBM]. In particular, the future FAIR facility at GSI will devote special attention to the in-medium vector meson spectral functions. The HADES experiment will operate at a higher beam energy of the order of 8-10 AGeV, providing information on the evolution of the vector-meson spectral functions complementary to the current research program. On the other hand, CBM will measure the in-medium spectral functions of short-lived vector mesons directly through their decay into dilepton pairs.
With this work we expect to pave the way towards an understanding of kaon properties in hot and dense matter and to provide an essential ingredient for the $\phi$-meson phenomenology in heavy-ion collisions. Our results are based on a self-consistent many-body calculation at finite temperature which relies on a realistic model of the kaon-nucleon interaction, thoroughly confronted with kaon nuclear phenomenology.
Acknowledgments
===============
We thank E. Oset for useful discussions. We also thank R. Rapp for helpful discussions and comments at the initial stage of the project. This work is partly supported by the EU contract FLAVIAnet MRTN-CT-2006-035482, by the contract FIS2005-03142 from MEC (Spain) and FEDER and by the Generalitat de Catalunya contract 2005SGR-00343. This research is part of the EU Integrated Infrastructure Initiative Hadron Physics Project under contract number RII3-CT-2004-506078. L.T. wishes to acknowledge support from the BMBF project “Hadronisierung des QGP und dynamik von hadronen mit charm quarks” (ANBest-P and BNBest-BMBF 98/NKBF98). D.C. acknowledges support from the “Juan de la Cierva” Programme (Ministerio de Educación y Ciencia, Spain).
Finite-temperature Lindhard functions {#app-Linds}
=====================================
We quote here some Lindhard function expressions in the ITF to point out the main differences with previous evaluations.
$YN^{-1}$ excitations
---------------------
In the ITF, the $YN^{-1}$ Lindhard function reads
$$\begin{aligned}
\label{eq:Lind-rel-YN} {\mathcal U}_{Y N^{-1}}(\omega_n,\vec{q};
T) = 2\, \int \frac{d^3p}{(2\pi)^3}
\frac{n_N(\vec{p},T)-n_Y(\vec{p}+\vec{q},T)} {{\rm i}\omega_n +
E_N(\vec{p},T) - E_Y(\vec{p}+\vec{q},T)} \ ,\end{aligned}$$
where $\omega_n$ is a bosonic Matsubara frequency (${\rm i} \omega_n={\rm i}2n\pi T$) and the factor 2 stands for spin degeneracy. Consistently with the approximations employed in this work, we have only kept the positive-energy part of the baryon propagators, while keeping the baryon energies fully relativistic and including also mean-field binding potentials. The nucleon and hyperon Fermi distributions, $n_{N,Y}(\vec{p},T)=
[e^{(E_{N,Y}(\vec{p},T)-\mu_B)/T}+1]^{-1}$, depend on the temperature and baryon chemical potential, so that for fixed $T$ and $\mu_B$ the nucleon and hyperon densities are given by $$\label{eq:NandYdensities} \rho_N = \nu_N\, \int
\frac{d^3p}{(2\pi)^3} n_N(\vec{p},T) \,\,\, , \ \ \ \rho_Y = \nu_Y
\, \int \frac{d^3p}{(2\pi)^3} n_Y(\vec{p},T) \,\,\, ,$$ with $\nu_B$ the corresponding spin-isospin degeneracy factors, namely $\nu_N=4$ and $\nu_Y = (2,6,12)$ for $Y=(\Lambda, \Sigma,
\Sigma^*)$. All the hyperons are considered as stable particles.
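In practice, for a prescribed density and temperature the chemical potential entering these distributions is obtained by inverting Eq. (\[eq:NandYdensities\]). A minimal numerical sketch follows; it uses free relativistic nucleon energies (the $\sigma$-$\omega$ binding employed in this work is omitted) and neglects the hyperon contribution to the baryon density, so the printed value is only indicative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M_N = 939.0              # MeV
hc3 = 197.327**3         # (MeV fm)^3, converts MeV^3 to fm^-3

def n_F(E, mu, T):
    """Fermi-Dirac occupation number."""
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

def rho_N(mu, T, nu=4):
    """Nucleon density in fm^-3 for spin-isospin degeneracy nu (free dispersion)."""
    integrand = lambda p: p**2 * n_F(np.sqrt(p**2 + M_N**2), mu, T)
    val, _ = quad(integrand, 0.0, 3000.0)
    return nu * val / (2.0 * np.pi**2) / hc3

def chemical_potential(rho_target, T):
    """Invert rho_N(mu, T) = rho_target (fm^-3) for the baryon chemical potential."""
    return brentq(lambda mu: rho_N(mu, T) - rho_target, 500.0, 1500.0)

print(chemical_potential(0.17, 100.0))   # mu_B at rho_0 and T = 100 MeV
```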
Note that, in contrast to the zero-temperature case shown below, at finite temperature there are occupied hyperon states and therefore the $Yh$ excitation is suppressed by hyperon Pauli blocking, as is evident from the hyperon distribution, $n_Y$, which enters with a minus sign in the numerator of the Lindhard function. This can be seen explicitly by rewriting $n_N-n_Y$ as $n_N (1-n_Y) - n_Y (1-n_N)$.
Analytical continuation to real energies (${\rm i} \omega_n \to
q_0 + {\rm i} \varepsilon$) gives the following expressions for the real and imaginary parts of the finite temperature $YN^{-1}$ Lindhard function (including mean-field binding potentials), $$\begin{aligned}
\label{eq:Lind-rel-YN-elaborate} {\rm Re}\, U_{Y
N^{-1}}(q_0,\vec{q}; T) &=& \frac{1}{2\pi^2} \int_0^{\infty} dp\,
p^2\,\,{\cal P} \int_{-1}^{+1} du \,
\frac{n_N(\vec{p},T)-n_Y(\vec{p}+\vec{q},T)} {q_0 + E_N(\vec{p},T)
- E_Y(\vec{p}+\vec{q},T)} \,\,\, ,
\nonumber \\
{\rm Im}\, U_{Y N^{-1}}(q_0,\vec{q}; T) &=& - \pi \,
\frac{1}{2\pi^2} \int_0^{\infty} dp\, p^2\,
\frac{q_0+E_N(\vec{p},T) - \Sigma_Y}{p\,q} \,
\nonumber \\
&\times& [n_N(\vec{p},T)-n_Y(\vec{p}+\vec{q},T)]_{u_0} \, \theta
(1-|u_0|) \, \theta (q_0 +E_N(\vec{p},T) - \Sigma_Y) \,\,\,,
\nonumber \\\end{aligned}$$ with $u_0 \equiv u_0 (q_0,q,p) = [(q_0+E_N-\Sigma_Y)^2 - (M_Y^*)^2
- p^2 - q^2]/(2\,p\,q)$, where here $q$, $p$ refer to the modulus of the corresponding three-momentum.
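As an illustration of how the angular integration collapses the imaginary part to a single radial integral, the sketch below evaluates ${\rm Im}\, U_{YN^{-1}}$ of Eq. (\[eq:Lind-rel-YN-elaborate\]) for free-space masses ($\Sigma_Y=0$, $M_Y^*=M_Y$, no binding potentials), with the chemical potential supplied as an external placeholder; it is not the code behind the results of this work.

```python
import numpy as np
from scipy.integrate import quad

M_N = 939.0   # MeV

def n_F(E, mu, T):
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

def im_lindhard_YN(q0, q, T, mu, M_Y):
    """Im U_{YN^-1}(q0, q; T) in MeV^2, with Sigma_Y = 0 and M_Y* = M_Y."""
    def integrand(p):
        E_N = np.sqrt(p**2 + M_N**2)
        w = q0 + E_N                       # hyperon energy fixed by the delta function
        if w <= 0.0:
            return 0.0
        u0 = (w**2 - M_Y**2 - p**2 - q**2) / (2.0 * p * q)
        if abs(u0) > 1.0:                  # angular theta function
            return 0.0
        return p**2 * (w / (p * q)) * (n_F(E_N, mu, T) - n_F(w, mu, T))
    # The sharp kinematical boundaries may trigger accuracy warnings in quad.
    val, _ = quad(integrand, 1.0, 3000.0, limit=300)
    return -np.pi * val / (2.0 * np.pi**2)

# Lambda N^-1 channel at roughly rho_0 conditions (mu_B is a placeholder):
print(im_lindhard_YN(q0=250.0, q=300.0, T=100.0, mu=770.0, M_Y=1115.7))
```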
In the limit of zero temperature and fixed baryon chemical potential, $\mu_B$, which then coincides with the nucleon Fermi energy, $E_F=E_N(p_F)$, with $p_F$ the Fermi momentum, we have $$\label{T0limit} n_N(\vec{p},T) \to n_N(\vec{p}\,) = \theta (p_F-p) \
, \ n_Y(\vec{p},T) \to 0 \ .$$ The $YN^{-1}$ Lindhard function then reads $$\label{LindYN-T0} U_{Y N^{-1}}(q_0,\vec{q}; \rho) = 2\, \int
\frac{d^3p}{(2\pi)^3} \frac{n_N(\vec{p}\,)} {q_0 + E_N(\vec{p}\,)
- E_Y(\vec{p}+\vec{q}\,) + {\rm i} \varepsilon} \ ,$$ and $\rho = 2\, p_F^3 / 3\,\pi^2$. In [@Oset:2000eg; @Cabrera:2002hc; @Tolos:2002ud] analytical expressions are provided for the non-relativistic Fermi gas (i.e., $E_B(\vec{p}\,)=M_B+\vec{p}\,^2/2M_B$ above), which we quote here for completeness, $$\begin{aligned}
\label{LindYN-T0-norel}
{\rm Re}\, U^{{\rm nr}}_{YN^{-1}}(q_0,\vec{q};\rho)
&=& \frac{3}{2} \rho
\frac{M_Y}{q p_F} \Bigg\{ z+ \frac{1}{2}(1-z^2) {\rm ln}
\frac{|z+1|}{|z-1|} \Bigg\} \ ,
\nonumber \\
{\rm Im}\, U^{{\rm nr}}_{YN^{-1}}(q_0,\vec{q};\rho)
&=&-\frac{3}{4} \pi \rho
\frac{M_Y}{q p_F} \lbrace (1-z^2) \theta(1-|z|) \rbrace \ ,\end{aligned}$$ with $$z=\frac{M_Y}{q p_F}\left\{q_0-\frac{\vec{q}\,^2} {2
M_Y}-(M_Y-M_N)\right\} \ .$$ We note that terms proportional to $(M_Y^{-1}-M_N^{-1})$ in the denominator of Eq. (\[LindYN-T0\]) are neglected when doing the angular integration to arrive at Eq. (\[LindYN-T0-norel\]).
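Equation (\[LindYN-T0-norel\]) can be transcribed directly; the sketch below does so for a single hyperon species (couplings and isospin factors multiplying the Lindhard function in the self-energy are not included), with example values chosen only for illustration.

```python
import numpy as np

M_N = 939.0   # MeV

def lindhard_YN_T0(q0, q, p_F, M_Y):
    """Complex U_{YN^-1}(q0, q) of Eq. (LindYN-T0-norel), non-relativistic, T = 0."""
    rho = 2.0 * p_F**3 / (3.0 * np.pi**2)
    z = (M_Y / (q * p_F)) * (q0 - q**2 / (2.0 * M_Y) - (M_Y - M_N))
    pref = rho * M_Y / (q * p_F)
    re = 1.5 * pref * (z + 0.5 * (1.0 - z**2) * np.log(abs(z + 1.0) / abs(z - 1.0)))
    im = -0.75 * np.pi * pref * (1.0 - z**2) if abs(z) < 1.0 else 0.0
    return re + 1j * im

# Lambda N^-1 excitation at rho_0 (p_F ~ 268 MeV/c), for illustration only:
print(lindhard_YN_T0(q0=200.0, q=300.0, p_F=268.0, M_Y=1115.7))
```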
In the literature, one often finds approximate expressions for the finite temperature Lindhard function that have been obtained from extensions of the former $T=0$ equations. In [@Tolos:2002ud] the finite temperature generalization of Eq. (\[LindYN-T0\]) was obtained by replacing the nucleon occupation number, $n_N(\vec{p}\,)
= \theta (p_F-p)$, by the corresponding Fermi distribution at finite temperature, $n_N(\vec{p}\,) \to n_N(\vec{p},T)$. Analytical expressions (up to a momentum integration) read $$\begin{aligned}
\label{LindYN-Tfinita-norel}
{\rm Re}\, U^{{\rm nr}}_{YN^{-1}}(q_0,\vec{q};T)
&=& \frac{1}{\pi^2}
\frac{M_Y}{q} \int dp \ p \ n_N(\vec{p},T)
\ {\rm ln} \frac{|z+1|}{|z-1|} \ ,
\nonumber \\
{\rm Im}\, U^{{\rm nr}}_{YN^{-1}}(q_0,\vec{q};T)
&=& - \frac{1}{\pi} \, T
\frac{M_N \, M_Y}{q} \ {\rm ln} \frac{1}{1-n_N(\vec{p}_m,T)}\ ,\end{aligned}$$ with $$\begin{aligned}
z&=&\frac{M_Y}{q p}\left\{q_0-\frac{\vec{q}\,^2}
{2 M_Y}-(M_Y-M_N)\right\} \ , \nonumber \\
p_m&=&\frac{M_Y}{q} \left | q_0-(M_Y-M_N)-\frac{\vec{q}\,^2}{2 M_Y}
\right | \ .\end{aligned}$$
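A direct transcription of Eq. (\[LindYN-Tfinita-norel\]) is equally simple; in the sketch below the chemical potential is an external input (for instance obtained from the density inversion sketched above) and the example numbers are placeholders.

```python
import numpy as np
from scipy.integrate import quad

M_N = 939.0   # MeV

def n_N_nr(p, T, mu):
    """Non-relativistic nucleon occupation, E = M_N + p^2 / 2 M_N."""
    return 1.0 / (np.exp((M_N + p**2 / (2.0 * M_N) - mu) / T) + 1.0)

def lindhard_YN_finiteT(q0, q, T, mu, M_Y):
    """Complex U_{YN^-1}(q0, q; T) of Eq. (LindYN-Tfinita-norel)."""
    def z(p):
        return (M_Y / (q * p)) * (q0 - q**2 / (2.0 * M_Y) - (M_Y - M_N))
    re_integrand = lambda p: p * n_N_nr(p, T, mu) * np.log(abs(z(p) + 1.0) / abs(z(p) - 1.0))
    # Integrable logarithmic singularity where z(p) = +-1; quad may warn.
    re, _ = quad(re_integrand, 1.0, 2500.0, limit=200)
    re *= M_Y / (np.pi**2 * q)
    p_m = (M_Y / q) * abs(q0 - (M_Y - M_N) - q**2 / (2.0 * M_Y))
    im = -(T / np.pi) * (M_N * M_Y / q) * np.log(1.0 / (1.0 - n_N_nr(p_m, T, mu)))
    return re + 1j * im

print(lindhard_YN_finiteT(q0=200.0, q=300.0, T=100.0, mu=770.0, M_Y=1115.7))
```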
$ph$ and $\Delta h$ excitations
-------------------------------
The evaluation of the $ph \equiv NN^{-1}$ Lindhard function at finite temperature and density can be found, for instance, in [@Rapp-rho-finiteT; @mattuck]. Analytical continuation to real energies from the ITF expression leads to $$\begin{aligned}
\label{eq:Lind-rel-ph}
U_{N N^{-1}}(q_0,\vec{q}; T) =
\nu_N \, \int \frac{d^3p}{(2\pi)^3} \frac{n_N(\vec{p},T)-n_N(\vec{p}+\vec{q},T)}
{q_0 + {\rm i}\varepsilon + E_N(\vec{p},T) - E_N(\vec{p}+\vec{q},T)}
\,\,\, ,\end{aligned}$$ with $\nu_N=4$. Similarly, for the $\Delta h$ Lindhard function we arrive at $$\begin{aligned}
\label{eq:Lind-rel-Dh}
U_{\Delta N^{-1}}(q_0,\vec{q}; T)
&=&
\nu_{\Delta} \,
\int \frac{d^3p}{(2\pi)^3}
\left[
\frac{n_N(\vec{p},T)-n_{\Delta}(\vec{p}+\vec{q},T)}
{q_0 + {\rm i} \frac{\Gamma_{\Delta}(q_0,\vec{q}\,)}{2}
+ E_N(\vec{p},T) - E_{\Delta}(\vec{p}+\vec{q},T)}
\right.
\nonumber \\
&+&
\left.
\frac{n_{\Delta}(\vec{p},T)-n_N(\vec{p}+\vec{q},T)}
{q_0 + {\rm i} \frac{\Gamma_{\Delta}(-q_0,\vec{q}\,)}{2}
+ E_{\Delta}(\vec{p},T) - E_N(\vec{p}+\vec{q},T)}
\right]
\,\,\, ,\end{aligned}$$ which we have written explicitly in terms of direct ($\Delta
N^{-1}$) plus crossed ($N\Delta^{-1}$) contributions. For convenience, the $\pi N\Delta$ coupling is absorbed in the definition of $U_{\Delta N^{-1}}$ and thus $\nu_{\Delta} =
\frac{16}{9} (f_{\Delta} / f_N)^2$. Note that in Eq. (\[eq:Lind-rel-Dh\]) we have accounted for the decay width of the $\Delta$ resonance. A realistic treatment of the $\Delta h$ mechanism should account for the full energy-dependent $\Delta$ decay width into its dominant channel, $\pi N$. We implement the $\Delta$ decay width using the formulae in [@Oset:1989ey], in which $\Gamma_{\Delta}$ only depends on the pion energy and momentum (a more detailed study of the in-medium $\Delta$ self-energy at finite temperature and density has been reported in [@vanHees:2004vt]). $\Gamma_{\Delta}(q_0,\vec{q}\,)$ accounts for both direct and crossed kinematics and hence the retarded property of $U_{\Delta N^{-1}}$ is preserved. We have not accounted for binding effects on the nucleon and $\Delta$ in the pion self-energy. First, in the $ph$ excitation the baryonic potentials cancel to a large extent. Second, the binding potentials for the $\Delta$ resonance are not well known experimentally and we omit them. Therefore, for consistency, we do not dress the nucleon in the $\Delta h$ excitation either.
The $T\to 0$ limit (at nuclear matter conditions) of $U_{NN^{-1}}$ and $U_{\Delta N^{-1}}$ can be easily obtained with similar prescriptions as in the $YN^{-1}$ case, namely, $n_N(\vec{p},T)\to n_N(\vec{p}\,)$ and $n_{\Delta} \to 0$. Analytic expressions for the non-relativistic $ph$ and $\Delta h$ Lindhard functions can be found in [@Oset:1989ey].
$s$-wave self-energy from $T_{{\bar K}(K) N}$ {#app-swave}
=============================================
We derive in this section a general expression for the contribution to the kaon self-energy from the effective in-medium $\bar K (K) N$ scattering amplitude at finite temperature. Let us denote by $T_{{\bar K}(K)N}$ the isospin-averaged kaon-nucleon scattering amplitude. The kaon self-energy, $\Pi_{{\bar K}(K)N}$, is obtained by closing the nucleon external lines and, according to the Feynman rules in the ITF, reads $$\label{s-wave-ITF}
\Pi_{{\bar K}(K)N} (\omega_n,\vec{q}; T) = T\, \sum_{m=-\infty}^{\infty}
\int \frac{d^3p}{(2\pi)^3} \, \frac{1}{{\rm i} W_m - E_N (\vec{p}\,)}
\, T_{{\bar K}(K)N} (\omega_n + W_m , \vec{P} ; T)
\ ,$$ where $\omega_n$ and $W_m$ are bosonic and fermionic Matsubara frequencies, respectively, with ${\rm i} \omega_n = {\rm i} 2n\pi T$ and ${\rm i} W_m = {\rm i} (2m+1)\pi T + \mu_B$. The sum over the index $m$ is not straightforward since $T_{{\bar K}(K)N}$ depends on $m$ in a non-trivial way. To circumvent this complication, we can invoke a spectral representation for the $T$-matrix (inherited from the analytical structure of the meson-baryon loop function) and we have $$\begin{aligned}
\label{s-wave-ITF-2}
\Pi_{{\bar K}(K)N} (\omega_n,\vec{q}; T)
&=&
-T\, \sum_{m=-\infty}^{\infty}
\int \frac{d^3p}{(2\pi)^3} \,
\frac{1}{\pi}\int_{-\infty}^{\infty} d\Omega \,
\frac{{\rm Im}\,T_{{\bar K}(K)N}(\Omega,\vec{P};T)}
{[{\rm i} W_m - E_N (\vec{p}\,)] [{\rm i} \omega_n + {\rm i} W_m - \Omega]}
\nonumber \\
&=&
- \int \frac{d^3p}{(2\pi)^3} \,
\frac{1}{\pi}\int_{-\infty}^{\infty} d\Omega \,
\frac{{\rm Im}\,T_{{\bar K}(K)N}(\Omega,\vec{P};T)}
{{\rm i} \omega_n - \Omega + E_N (\vec{p}\,)}
\, [n_N(\vec{p},T) - n(\Omega,T)] \, ,
\nonumber \\\end{aligned}$$ with $n(\Omega,T) = [e^{(\Omega-\mu_B)/T}+1]^{-1}$ here. The former result, after continuation into the real energy axis (${\rm i}\omega_n \to q_0+{\rm
i}\varepsilon$), provides the thermal kaon self-energy evaluated from the kaon-nucleon scattering amplitude. Note that it includes a Pauli blocking correction term, $n(\Omega,T)$, convoluted with the spectral strength from the imaginary part of the $T$-matrix. In the region where the principal value of the spectral integration gets its major contribution, $\Omega \approx q_0 +
E_N(\vec{p}\,)$, the fermion distribution $n(\Omega,T)$ behaves as a slowly varying exponential tail (for the present temperatures under study). We can approximate this term by a constant, namely, $n(\Omega,T)\simeq n(q_0 +
E_N(\vec{p}\,),T)$ and take it out of the integral. The dispersion integral over $\Omega$ then recovers the full amplitude $T_{{\bar K}(K)N}$ and the self-energy can be approximated by: $$\label{s-wave-ITF-3}
\Pi_{{\bar K}(K)N} (q_0+{\rm i}\varepsilon,\vec{q}; T) =
\int \frac{d^3p}{(2\pi)^3} \,
T_{{\bar K}(K)N} (q_0 + E_N(\vec{p}\,),\vec{P};T) \,
[n_N(\vec{p},T) - n(q_0 + E_N(\vec{p}\,),T)]
\ .$$ Note that this procedure is exact for the imaginary part. Eq. (\[eq:selfd\]) follows from the former result by neglecting the Pauli blocking correction on the fermion degrees of freedom excited in the kaon nucleon amplitude (note that in the isospin zero amplitude the strength peaks around the $\Lambda(1405)$ resonance, and thus one expects this correction to be small with respect to that on the nucleon).
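Numerically, Eq. (\[s-wave-ITF-3\]) reduces to a single radial integral once the dependence of the amplitude on the total momentum $\vec{P}$ is treated (or, as in the rough sketch below, simply neglected). `T_KN` is a hypothetical external input, replaced here by a toy Breit-Wigner, and the chemical potential is a placeholder; none of the numbers are results of this work.

```python
import numpy as np
from scipy.integrate import quad

M_N = 939.0   # MeV

def n_F(E, mu, T):
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

def swave_self_energy(q0, T, mu, T_KN):
    """Pi(q0) = int d^3p/(2 pi)^3 T_KN(q0 + E_N(p)) [n_N(p) - n(q0 + E_N(p))]."""
    def integrand(p):
        E_N = np.sqrt(p**2 + M_N**2)
        occ = n_F(E_N, mu, T) - n_F(q0 + E_N, mu, T)
        return p**2 * T_KN(q0 + E_N) * occ
    re, _ = quad(lambda p: integrand(p).real, 0.0, 3000.0, limit=200)
    im, _ = quad(lambda p: integrand(p).imag, 0.0, 3000.0, limit=200)
    return (re + 1j * im) / (2.0 * np.pi**2)

# Toy amplitude (MeV^-1) peaked near the Lambda(1405), for illustration only:
toy_T = lambda P0: 2.5 / (P0 - 1405.0 + 25.0j)
print(swave_self_energy(450.0, 100.0, 770.0, toy_T))
```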
[999]{} E. Friedman and A. Gal, Phys. Rept. [**452**]{}, 89 (2007). R. Rapp and J. Wambach, Adv. Nucl. Phys. [**25**]{}, 1 (2000). V. Koch, Phys. Lett. B [**337**]{}, 7 (1994). J. Schaffner-Bielich, V. Koch and M. Effenberger, Nucl. Phys. A [**669**]{}, 153 (2000).
M. Lutz, Phys. Lett. B [**426**]{}, 12 (1998).
A. Ramos and E. Oset, Nucl. Phys. A [**671**]{}, 481 (2000).
L. Tolos, A. Ramos, A. Polls and T. T. S. Kuo, Nucl. Phys. A [**690**]{}, 547 (2001). L. Tolos, A. Ramos and A. Polls, Phys. Rev. C [**65**]{}, 054907 (2002).
L. Tolos, A. Ramos and E. Oset, Phys. Rev. C [**74**]{}, 015203 (2006). C. J. Batty, E. Friedman and A. Gal, Phys. Rept. [**287**]{}, 385 (1997). A. Baca, C. Garcia-Recio and J. Nieves, Nucl. Phys. A [**673**]{}, 335 (2000).
Y. Akaishi and T. Yamazaki, Phys. Rev. C [**65**]{}, 044005 (2002). A. Dote, H. Horiuchi, Y. Akaishi and T. Yamazaki, Phys. Rev. C [**70**]{}, 044313 (2004). Y. Akaishi, A. Dote and T. Yamazaki, Phys. Lett. B [**613**]{}, 140 (2005). E. Oset and H. Toki, Phys. Rev. C [**74**]{}, 015207 (2006). T. Suzuki [*et al.*]{}, Phys. Lett. B [**597**]{}, 263 (2004). M. Sato [*et al.*]{}, Phys. Lett. B [**659**]{}, 107 (2008); M. Iwasaki [*et al.*]{}, Nucl. Phys. A [**804**]{}, 186 (2008).
M. Agnello [*et al.*]{} \[FINUDA Collaboration\], Nucl. Phys. A [**775**]{}, 35 (2006).
A. Ramos, V. K. Magas, E. Oset and H. Toki, Nucl. Phys. A [**804**]{}, 219 (2008).
M. Agnello [*et al.*]{} \[FINUDA Collaboration\], Phys. Rev. Lett. [**94**]{}, 212303 (2005). V. K. Magas, E. Oset, A. Ramos and H. Toki, Phys. Rev. C [**74**]{}, 025206 (2006). T. Suzuki [*et al.*]{} \[KEK-PS E549 Collaboration\], arXiv:0709.0996 \[nucl-ex\]. M. Agnello [*et al.*]{} \[FINUDA Collaboration\], Phys. Lett. B [**654**]{}, 80 (2007). V. K. Magas, E. Oset and A. Ramos Phys. Rev. C [**77**]{}, 065210 (2008).
N. V. Shevchenko, A. Gal and J. Mares, Phys. Rev. Lett. [**98**]{}, 082301 (2007). N. V. Shevchenko, A. Gal, J. Mares and J. Revai, Phys. Rev. C [**76**]{}, 044004 (2007). Y. Ikeda and T. Sato, Phys. Rev. C [**76**]{}, 035203 (2007). A. Doté and W. Weise, proceedings of the IX International Conference on Hypernuclear and Strange Particle Physics, Mainz (Germany), October 10-14, 2006. Edited by J. Pochodzalla and Th. Walcher, (Springer, Germany, 2007), 249; A. Doté, T. Hyodo and W. Weise, Nucl. Phys. A [**804**]{}, 197 (2008).
A. Forster [*et al.*]{}, Phys. Rev. C [**75**]{}, 024906 (2007). http://www.gsi.de/forschung/kp/kp1/experimente/fopi/index.html
W. Cassing, L. Tolos, E. L. Bratkovskaya and A. Ramos, Nucl. Phys. A [**727**]{}, 59 (2003). L. Tolos, D. Cabrera, A. Ramos and A. Polls, Phys. Lett. B [**632**]{}, 219 (2006).
Y. Akiba [*et al.*]{} \[E-802 Collaboration\], Phys. Rev. Lett. [**76**]{}, 2021 (1996). S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. C [**72**]{}, 014903 (2005). D. Adamova [*et al.*]{} \[CERES Collaboration\], Phys. Rev. Lett. [**96**]{}, 152301 (2006).
T. Ishikawa [*et al.*]{}, Phys. Lett. B [**608**]{}, 215 (2005).
R. Muto [*et al.*]{}, Nucl. Phys. A [**774**]{}, 723 (2006).
R. Nasseripour [*et al.*]{} \[CLAS Collaboration\], Phys. Rev. Lett. [**99**]{}, 262302 (2007).
T. Hatsuda and S. H. Lee, Phys. Rev. C [**46**]{}, 34 (1992). M. Asakawa and C. M. Ko, Nucl. Phys. A [**572**]{}, 732 (1994). S. Zschocke, O. P. Pavlenko and B. Kampfer, Eur. Phys. J. A [**15**]{}, 529 (2002). F. Klingl, T. Waas and W. Weise, Phys. Lett. B [**431**]{}, 254 (1998) E. Oset and A. Ramos, Nucl. Phys. A [**679**]{}, 616 (2001). D. Cabrera and M. J. Vicente Vacas, Phys. Rev. C [**67**]{}, 045203 (2003). W. Smith and K. L. Haglin, Phys. Rev. C [**57**]{}, 1449 (1998).
L. Alvarez-Ruso and V. Koch, Phys. Rev. C [**65**]{}, 054901 (2002).
D. Cabrera, L. Roca, E. Oset, H. Toki and M. J. Vicente Vacas, Nucl. Phys. A [**733**]{}, 130 (2004).
R. Arnaldi [*et al.*]{} \[NA60 Collaboration\], Phys. Rev. Lett. [**96**]{}, 162302 (2006).
http://www-hades.gsi.de
http://www.gsi.de/fair/experiments/CBM/
L. Tolos, A. Ramos and T. Mizutani, Phys. Rev. C [**77**]{}, 015207 (2008). E. Oset and A. Ramos, Nucl. Phys. A [**635**]{}, 99 (1998).
J. Gasser and H. Leutwyler, Nucl. Phys. B [**250**]{}, 465 (1985). U. G. Meissner, Rept. Prog. Phys. [**56**]{}, 903 (1993). V. Bernard, N. Kaiser and U. G. Meissner, Int. J. Mod. Phys. E [**4**]{}, 193 (1995). A. Pich, Rept. Prog. Phys. [**58**]{}, 563 (1995). G. Ecker, Prog. Part. Nucl. Phys. [**35**]{}, 1 (1995).
D. Jido, J. A. Oller, E. Oset, A. Ramos and U. G. Meissner, Nucl. Phys. A [**725**]{}, 181 (2003). J. A. Oller and E. Oset, Phys. Rev. D [**60**]{}, 074023 (1999).
J. A. Oller and U. G. Meissner, Phys. Lett. B [**500**]{}, 263 (2001).
J.I. Kapusta and C. Gale, [*Finite Temperature Field Theory Principles and Applications*]{}, 2nd. edition (Cambridge Univ. Press, 2006).
R. Machleidt, Adv. Nucl. Phys. [**19**]{}, 189 (1989). K. Tsushima and F. C. Khanna, Phys. Lett. B [**552**]{}, 138 (2003). K. Tsushima and F. C. Khanna, J. Phys. G [**30**]{}, 1765 (2004).
E. Oset, P. Fernandez de Cordoba, L. L. Salcedo and R. Brockmann, Phys. Rept. [**188**]{}, 79 (1990). A. Ramos, E. Oset and L. L. Salcedo, Phys. Rev. C [**50**]{}, 2314 (1994). T. Waas, N. Kaiser and W. Weise, Phys. Lett. B [**379**]{}, 34 (1996). T. Waas and W. Weise, Nucl. Phys. A [**625**]{}, 287 (1997). C. Garcia-Recio, J. Nieves, T. Inoue and E. Oset, Phys. Lett. B [**550**]{}, 47 (2002)
E. Oset, H. Toki and W. Weise, Phys. Rept. [**83**]{}, 281 (1982). R. Seki and K. Masutani, Phys. Rev. C [**27**]{}, 2799 (1983).
O. Meirav, E. Friedman, R. R. Johnson, R. Olszewski and P. Weber, Phys. Rev. C [**40**]{}, 843 (1989).
M. Urban, M. Buballa, R. Rapp and J. Wambach, Nucl. Phys. A [**673**]{}, 357 (2000).
C. Garcia-Recio, J. Nieves, E. Ruiz Arriola and M. J. Vicente Vacas, Phys. Rev. D [**67**]{}, 076009 (2003). D. Jido, A. Hosaka, J. C. Nacher, E. Oset and A. Ramos, Phys. Rev. C [**66**]{}, 025203 (2002). C. Garcia-Recio, M. F. M. Lutz and J. Nieves, Phys. Lett. B [**582**]{}, 49 (2004).
S. M. Flatte, Phys. Lett. B [**63**]{}, 224 (1976). M. F. M. Lutz, C. L. Korpa and M. Moller, Nucl. Phys. A [**808**]{}, 124 (2008).
A. Mangiarotti [*et al.*]{}, Nucl. Phys. A [**714**]{}, 89 (2003).
C. L. Korpa and M. F. M. Lutz, Heavy Ion Phys. [**17**]{}, 341 (2003).
M. Nekipelov [*et al.*]{}, Phys. Lett. B [**540**]{}, 207 (2002). C. Gale and J. I. Kapusta, Nucl. Phys. B [**357**]{}, 65 (1991).
K. L. Haglin and C. Gale, Nucl. Phys. B [**421**]{}, 613 (1994).
R. Rapp, Phys. Rev. C [**63**]{}, 054907 (2001). L. Holt and K. Haglin, J. Phys. G [**31**]{}, S245 (2005).
A. Faessler, C. Fuchs, M. I. Krivoruchenko and B. V. Martemyanov, Phys. Rev. Lett. [**93**]{}, 052301 (2004). E. Santini, G. Burau, A. Faessler and C. Fuchs, Eur. Phys. J. A [**28**]{}, 187 (2006).
D. Cabrera, L. Tolós and A. Ramos, [*in preparation*]{}.
R. D. Mattuck, [*A Guide To Feynman Diagrams In The Many Body Problem*]{}, 2nd. edition (Dover Publications, 2002).
H. van Hees and R. Rapp, Phys. Lett. B [**606**]{}, 59 (2005).
[^1]: In the case of pions, for instance, in an isospin symmetric nuclear medium, all the members of the isospin triplet ($\pi^\pm$, $\pi^0$) acquire the same self-energy and one can write $S_{\pi}(-q_0,\vec{q};T)=-S_{\pi}(q_0,\vec{q};T)$ with the subsequent simplification of Eq. (\[Lehmann\]).
[^2]: At zero energy ($q_0=0$) the $K$ and $\bar K$ modes cannot be distinguished and thus ${\rm Im} \,
\Pi_{\bar K}^p (0,\vec{q};T) = - {\rm Im} \, \Pi_K^p (0,\vec{q};T)
\equiv 0$. This is accomplished exactly in Eq. (\[eq:Lind-rel-YN-elaborate\]) in Appendix \[app-Linds\] for $q_0=0$.
---
abstract: 'We construct a Riemannian metric $g$ on $\mathbb{R}^4$ (arbitrarily close to the euclidean one) and a smooth simple closed curve $\Gamma\subset \mathbb R^4$ such that the unique area minimizing surface spanned by $\Gamma$ has infinite topology. Furthermore the metric is almost Kähler and the area minimizing surface is calibrated.'
address:
- |
School of Mathematics, Institute for Advanced Study, 1 Einstein Dr., Princeton NJ 08540, USA\
and Universität Zürich
- 'SISSA Via Bonomea 265, I34136 Trieste, Italy'
- 'Mathematisches Institut, Universität Leipzig, Augustusplatz 10, D-04109 Leipzig, Germany'
author:
- Camillo De Lellis
- Guido De Philippis
- Jonas Hirsch
title: Nonclassical minimizing surfaces with smooth boundary
---
Introduction
============
Consider a smooth closed simple curve $\Gamma$ in $\mathbb R^n$. The existence of oriented surfaces which bound $\Gamma$ and minimize the area can be approached in two different ways. Following the classical work of Douglas and Rado we can fix an abstract connected smooth surface $\Sigma_{{\textsl{g}}}$ of genus ${{\textsl{g}}}$ whose boundary $\partial \Sigma_{{\textsl{g}}}$ consists of a single connected component and look at smooth maps $\Phi: \Sigma_{{\textsl{g}}}\to \mathbb R^n$ with the property that the restriction of $\Phi$ to $\partial \Sigma_{{\textsl{g}}}$ is a homeomorphism onto $\Gamma$. We then consider the infimum $A_{{\textsl{g}}}(\Gamma)$, over all such $\Phi$ and all Riemannian metrics $h$ on $\Sigma_{{\textsl{g}}}$, of $$\int_{\Sigma_{{\textsl{g}}}} |\nabla \Phi|^2\,{\rm dvol}_h\, .$$ If $A_{{\textsl{g}}}(\Gamma) < A_{{{\textsl{g}}}-1} (\Gamma)$, then there is a minimizer $(\Phi, h)$ and the image of $\Phi$ is an immersed surface of genus ${{\textsl{g}}}$, with possible branch points, see [@Douglas; @Shiffman39; @Courant40] and also [@Jost85; @TomiTromba88]. The second, more intrinsic, approach was pioneered later by De Giorgi, in the codimension $1$ case [@De-Giorgi55], and by Federer and Fleming in higher codimension [@FF]. They look at a suitable measure-theoretic generalization of smooth oriented surfaces, called integral currents $T$, whose generalized boundary is given by $\a{\Gamma}$ and minimize a suitable generalization of the area, called mass. In this framework a minimizer always exists and competitors do not have any topological restriction.
A basic question is whether the Federer-Fleming solution $T$ coincides with the Douglas-Rado solutions for some genus ${{\textsl{g}}}$. This is true in the codimension $1$ case $n=3$ if the curve $\Gamma$ is sufficiently regular ($C^{k,\alpha}$ for $k+\alpha>2$): indeed, combining De Giorgi’s interior regularity theorem [@DG] with Hardt and Simon’s boundary regularity theorem [@HS], we know that every minimizer is an embedded $C^2$ surface up to the boundary $\Gamma$; in particular it has finite genus ${{\textsl{g}}}_0$. As corollaries, any conformal parametrization $\Phi$ of $T$ gives a minimizer in the sense of Douglas and Rado, while $A_{{\textsl{g}}}(\Gamma) = A_{{{\textsl{g}}}_0} (\Gamma)$ for every ${{\textsl{g}}}> {{\textsl{g}}}_0$. If we instead merely assume that $\Gamma$ has finite length, Fleming showed in [@Fleming56] that it is possible to have $A_{{{\textsl{g}}}+1} (\Gamma)<A_{{{\textsl{g}}}} (\Gamma)$ for ${{\textsl{g}}}$ arbitrarily large, implying in particular that every integral current minimizer has infinite topology, see also [@AlmgrenThurston77] for related phenomena.
In higher codimension, namely for $n\geq 4$, it is known that the minimizer $T$ is in general not regular, neither in the interior nor at the boundary. Concerning the interior regularity, it has been shown by Chang in [@Chang] that $T$ is smooth in ${{\mathbb R}}^n\setminus \Gamma$ up to a discrete set of singular branch points and self-intersections (we in fact refer to [@DSS1; @DSS2; @DSS3; @DSS4] for a complete proof, as Chang needs a suitable modification of the techniques of Almgren’s monumental monograph [@Alm] to start his argument, and the former has been given in full detail in [@DSS3]). As a corollary we know therefore that for any point $p\not\in \Gamma$ there is a neighborhood $U$ in which $T$ is the union of finitely many topological disks. Nonetheless it is still an open problem whether “globally” such solutions $T$ have finite topology. So far this can only be concluded if $\Gamma$ is of class $C^{k,\alpha}$ for $k+\alpha >2$ and lies in the boundary of a uniformly convex open set, because Allard’s boundary regularity theorem [@AllB] rules out boundary singularities.
In the general case, however, very little is known about the boundary regularity of area minimizing integral currents. The first result has been established by the authors and A. Massaccesi in the recent work [@DDHM], which shows that, if $\Gamma$ is of class $C^{k,\alpha}$ for $k+\alpha >3$, then the set of regular boundary points is open and dense in $\Gamma$. On the other hand the same paper gives a smooth simple closed curve $\Gamma$ in ${{\mathbb R}}^4$ bounding a (unique) minimizer $T$ which has infinitely many singularities. Such $T$ is, however, still an immersed disk, which has a countable number of self-intersections accumulating towards a boundary branch point: it is, in particular, a Douglas-Rado solution with genus ${{\textsl{g}}}= 0$.
In his work [@White97] White conjectures that the Federer-Fleming solution has finite genus if $\Gamma$ is real analytic. If White’s conjecture were true, then the main theorem in [@White97] would imply that, for real analytic $\Gamma$, the set of boundary and interior singular points is finite and it would also exclude the presence of branch points at the boundary: the (finitely many) singular boundary points would all arise as self intersections.
As already mentioned, the example in [@DDHM] shows that the latter conclusion would certainly be false for smooth $\Gamma$ in ${{\mathbb R}}^4$. In this note we show that, if we perturb the Euclidean metric in an appropriate way, the same curve bounds a unique area minimizing integral current with infinite topology. In particular, if we look at White’s conjecture in Riemannian manifolds, real analyticity is a necessary assumption to exclude infinite topology of the Federer-Fleming solution. Our precise theorem is the following, where we denote by $\delta$ the standard Euclidean metric.
\[t:main\] For every $\varepsilon >0$ and every $N\in \mathbb N$ there is a smooth metric $g$ on ${{\mathbb R}}^4$, a smooth oriented curve $\Gamma$ in the unit ball ${{\mathbf B}}_1$ passing through the origin and a smooth oriented surface $\Sigma$ in ${{\mathbf B}}_1 \setminus \{0\}$ such that:
- (a) $g = \delta$ on ${{\mathbb R}}^4\setminus {{\mathbf B}}_1$ and $\|g-\delta\|_{C^N} < \varepsilon$;
- (b) $\a{\Sigma}$ is the unique area minimizing integral current in the Riemannian manifold $({{\mathbb R}}^4, g)$ which bounds $\a{\Gamma}$;
- (c) $\Sigma$ has infinite topology.
In our example $\Sigma$ has (only) one singularity at the origin. The latter is a boundary singular point and $\Sigma$ displays a sequence of interior necks accumulating to it. A simple modification of our proof gives the existence of an area-minimizing current which bounds a smooth curve in a smooth Riemannian manifold and has an infinite number of interior branch points accumulating to the boundary. For the precise statement see Theorem \[t:branching\] below. For the proofs of both Theorem \[t:main\] and Theorem \[t:branching\] it is essential that we are allowed to perturb the Euclidean metric. In particular, the question whether such examples can exist in some Euclidean space remains open.
As pointed out, the question of whether the Federer-Fleming solution coincides with a Douglas-Rado solution is closely related to the regularity theory for area minimizers. We therefore close this introduction with a brief (and certainly not exhaustive) review of what is known for the Douglas-Rado solution. Interior branch points can be excluded in codimension $1$, i.e. for surfaces in ${{\mathbb R}}^3$, see [@Osserman70; @Alt72; @Alt73; @Gulliver73] and the discussion in [@DierkesHildebrandtTromba10 Section 6.4]. In higher codimension both interior branch points and self intersections are possible (primary examples are holomorphic curves in $\mathbb C^k = \mathbb R^{2k}$). Concerning boundary branch points, it is well known that they can exist in higher codimension if the boundary curve is just $C^k$. The example of [@DDHM] mentioned above shows that they can exist even if it is $C^\infty$, while the aforementioned paper of White [@White97] excludes their existence when $\Gamma$ is real analytic. In fact the same conclusion was drawn much earlier in codimension $1$ by a classical paper of Gulliver and Lesley, [@GulliverLesley73].
In codimension $1$ the existence of boundary branch points for the Douglas-Rado solution is still an open question, and it is probably the most important one in the field; we refer again to the discussion in [@DierkesHildebrandtTromba10 Section 6.4] for a detailed account of the known results. In [@Gulliver91] Gulliver provides an interesting example of a $C^\infty$ curve in $\mathbb R^3$ which bounds a minimal disk with one boundary branch point; however it is not known whether this surface is a Douglas-Rado solution. We note in passing that Gulliver’s proof also gives a Douglas-Rado disk-type solution (in fact a Federer-Fleming solution) in ${{\mathbb R}}^6$ spanning a $C^\infty$ curve and with a boundary branch point.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors would like to thank Claudio Arezzo and Emmy Murphy for several interesting discussions. The work of G.D.P. is supported by the INDAM grant “Geometric Variational Problems”.
Preliminaries {#sec:prel}
=============
The Riemannian manifold $({{\mathbb R}}^4, g)$ of Theorem \[t:main\] has in fact a very special geometric structure, since it is an almost Kähler manifold.
\[d:Kaehler\] An almost complex structure on a smooth $4$-dimensional manifold $M$ is given by a smooth $(1,1)$ tensor $J$ with the property that $J^2 = - \operatorname{Id}$. The structure is almost Kähler if there is a smooth Riemannian metric $g$ with the properties that:
- (i) $J$ is isometric, namely $g (JV, JW) = g (V,W)$ for all vector fields $V$ and $W$;
- (ii) The $2$-form defined by $\omega (V, W) := -g (V, JW)$ is closed.
$\omega$ will be called the almost Kähler form associated to the almost Kähler structure.
Theorem \[t:main\] will then be a corollary of the following
\[t:main2\] For every $\varepsilon >0$ and every $N\in \mathbb N$ there is a smooth metric $g$ on ${{\mathbb R}}^4$, a smooth oriented curve $\Gamma$ in the unit ball ${{\mathbf B}}_1$ passing through the origin and a smooth oriented surface $\Sigma$ in ${{\mathbf B}}_1 \setminus \{0\}$ such that:
- (a) $g = \delta$ on ${{\mathbb R}}^4\setminus {{\mathbf B}}_1$ and $\|g-\delta\|_{C^N} < \varepsilon$;
- (b1) there is an almost complex structure $J$ for which Definition \[d:Kaehler\](i)&(ii) hold;
- (b2) $\a{\Sigma}$ bounds $\a{\Gamma}$ and the pull-back of the corresponding $\omega$ on $\Sigma$ is the volume form with respect to the metric $g$;
- (c) $\Sigma$ has infinite topology.
Property (b2) is usually referred to as $\omega$ calibrating the surface $\Sigma$. It is a classical elementary, yet powerful, remark of Federer that the conditions (b1)-(b2) imply, by an inequality of Wirtinger, the minimality of the current $\a{\Sigma}$, cf. [@Fed]. Wirtinger’s theorem shows that $$\omega (V,W) \leq 1$$ whenever $$|V\wedge W|_g :=\sqrt{g (V,V) g (W,W) - g(V,W)^2} \leq 1$$ and that the equality holds if and only if $W = JV$. In the language of geometric measure theory Wirtinger’s inequality implies that the comass (relative to the metric $g$) of the form $\omega$ is $1$. Moreover, we infer from the second part of Wirtinger’s Theorem (the characterization of the equality case) that $\omega$ is pulled back to the standard volume form on $\Sigma$ if and only if there is a positively oriented tangent frame of the tangent bundle to $\Sigma$ of the form $\{V, JV\}$. Consider now any current (not necessarily integral!) $T$ which bounds $\a{\Gamma}$. Since $\omega$ is closed and ${{\mathbb R}}^4$ has trivial topology, $\omega$ has a primitive $\alpha$. We then must have $$T (\omega ) = T (d\alpha) = \int_\Gamma \alpha = \int_\Sigma d\alpha = \int_\Sigma \omega = {{\mathbf{M}}}(\a{\Sigma})\, .$$ On the other hand Wirtinger’s inequality implies that the comass of $\omega$ in the metric $g$ is $1$ and thus the mass of $T$ is necessarily larger than $T (\omega)$.
This shows that $\a{\Sigma}$ is area minimizing. In order to conclude that it is the unique minimizer, we must appeal to the boundary regularity theory developed in [@DDHM]. First of all observe that, by [@DDHM Theorem 2.1] the interior regular set $\Lambda := {\rm Reg}_i (T)$ of the current $T$ is connected, it is an orientable submanifold of $\mathbb R^4$ and (up to a change of orientation) $T = \a{\Lambda}$. Moreover, by [@DDHM Theorem 1.6] there is at least one point $p\in \Gamma\setminus \{0\}$ and a neighborhood $U$ of $p$ such that $\Lambda \cap U$ is a smooth oriented surface with smooth oriented boundary $\Gamma \cap U$. By the argument above we must have $T (\omega) = {{\mathbf{M}}}(T)$ and this implies, by Wirtinger’s Theorem, that the tangent planes to $\Lambda$ are invariant under the action of $J$. The same holds for the tangent planes to $\Sigma$. In particular, the tangents to $\Sigma$ and $\Lambda$ must coincide at every point $q\in \Gamma \cap U$ and they must have the same orientation. Since both are smooth minimal surfaces in $U$, the unique continuation for elliptic systems implies that they coincide in a neighborhood of $q$. Again, thanks to the unique continuation principle and the connectedness of $\Lambda$ we conclude that $\Lambda$ is in fact a subset of $\Sigma$. However, since they have the same area, this implies that $\a{\Sigma} = T$.
Proof of Theorem \[t:main2\]: Part I {#s:complex}
====================================
In this section we slightly modify the construction given in [@DDHM Section 2.3] to achieve a smooth curve $\Gamma$ in ${{\mathbb R}}^4$ and an integral current $T$ in ${{\mathbb R}}^4$ such that
- (i) $T$ bounds $\a{\Gamma}$ and is area minimizing in $({{\mathbb R}}^4, \delta)$ (i.e. with respect to the Euclidean metric); in fact $T$ is induced by a holomorphic subvariety in ${{\mathbb R}}^4\setminus \Gamma$;
- (ii) $T$ is regular at $\Gamma\setminus \{0\}$;
- (iii) $0$ is an accumulation point for the interior singular set of $T$, denoted by ${\rm Sing}_i (T)$;
- (iv) At each $p\in {\rm Sing}_i (T)$ there is a neighborhood $U$ such that $T$ in $U$ consists of two holomorphic curves intersecting transversally at $p$.
First of all consider the complex plane with an infinite slit $$\mathbb K := \mathbb C \setminus \{z\in \mathbb R : z\leq 0\}\, .$$ We consider the usual inverse $\arctan: \mathbb R \to (-\frac{\pi}{2}, \frac{\pi}{2})$ of the trigonometric function $\tan$ and we fix the determination of the complex logarithm on $\mathbb K$ which coincides with $${\rm Log}\, z = \log |z| + i \arctan \frac{{\rm Im}\, z}{{\rm Re}\, z}$$ on the open half plane $\mathbb H := \{z\in \mathbb C : {\mathrm {Re}\,}z > 0\}$. Correspondingly we define the functions $z^{-\alpha} = \exp (-\alpha {\rm Log}\, z)$ for $\alpha\in (0,1)$ and $$f_k (z) = \exp (- z^{-\alpha}) \sin \left({\rm Log}\, z + \frac{3-2k}{6} \pi i \right)\, \qquad \mbox{for $k=0,1,2,3$.}$$ Observe that:
- If we extend each $f_k$ to the origin as $0$, then $f_k$ is a smooth function over any wedge $$\mathbb K_a := \{z: - {\mathrm {Re}\,}z \leq a |{\mathrm {Im}\,}z|\}$$ with $a$ positive.
- Since $\exp (- z^{-\alpha})$ does not vanish on $\overline{\mathbb H}\setminus \{0\}$, the zero set $Z_k$ of $f_k$ in $\overline{\mathbb H}\setminus \{0\}$ is given by $$Z_k= \left\{z\in \overline{\mathbb H}: {\rm Log}\, z + \frac{3-2k}{6} \pi i \in \pi \mathbb Z\right\}\, ,$$ namely by $$\label{e:formula_Z_k}
Z_k = \left\{\exp \left(n \pi + i\frac{2k-3}{6}\pi \right): n \in \mathbb Z\right\}\, .$$ (A one-line verification of this description is given right after this list.)
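For the reader’s convenience, here is the short computation behind the last formula; it only uses the determination of the logarithm fixed above. For $z = \exp \big(n \pi + i\frac{2k-3}{6}\pi \big)$ with $n\in\mathbb Z$ we have $|z| = e^{n\pi}$ and, since $\frac{2k-3}{6}\pi \in [-\frac{\pi}{2}, \frac{\pi}{2}]$ for $k=0,\ldots,3$, $${\rm Log}\, z = n\pi + i\,\frac{2k-3}{6}\pi\, , \qquad\mbox{hence}\qquad {\rm Log}\, z + \frac{3-2k}{6}\pi i = n\pi \in \pi \mathbb Z\, ,$$ so that $\sin\big({\rm Log}\, z + \frac{3-2k}{6}\pi i\big) = 0$. Conversely, any $z\in \overline{\mathbb H}\setminus\{0\}$ with ${\rm Log}\, z + \frac{3-2k}{6}\pi i \in \pi\mathbb Z$ is of this form, since the imaginary part of ${\rm Log}\, z$ must then equal $\frac{2k-3}{6}\pi$ and its real part must be an integer multiple of $\pi$.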
Consider next the function $$g(z) = \prod_{k=0}^3 f_k (z)\, .$$ We then conclude that $g$ is holomorphic on $\mathbb K$, it is $C^\infty$ on $\mathbb K_a$ for every $a>0$ and its zero set in $\overline{\mathbb H}$, which we denote by $Z$, is given by $$Z = \{0\} \cup \bigcup_{k=0}^3 Z_k\, .$$ Define now the map $G: \mathbb K\to \mathbb C^2$ by $G(z) = (z^3, g(z))$. We consider a smooth simple curve $\gamma\subset \mathbb K_1$ which in a neighborhood of the origin is tangent to the imaginary axis and we let $D\subset \mathbb K_1$ be the open disk bounded by $\gamma$. Following the arguments of [@DDHM Section 2.3] it is not difficult to see that $\gamma$ can be chosen so that:
- (A) $\{g=0\} \cap \gamma = \{0\}$;
- (B) $\{g=0\}\cap D\subset \overline{\mathbb H}$, hence $\{g=0\}\cap D \subset Z$ and, for each $k\in \{0, \ldots , 3\}$, it contains all sufficiently small elements of $Z_k$, namely there is a positive constant $c_0$ such that $\{z\in Z_k : |z|\leq c_0\} \subset D$.
The current $T := G_\sharp \a{D}$ is integer rectifiable and has multiplicity one (in particular it coincides with $\a{G (D)}$), and $$\partial T = G_\sharp \partial \a{D} = G_\sharp \a{\gamma}\, .$$ Observe that $G (D)$ is a holomorphic curve of $\mathbb C^2$, which carries a natural orientation. If $\a{G(D)}$ denotes the corresponding integer rectifiable current, we can then follow the argument in [@DDHM Section 2.3] to show that $T = \a{G(D)}$, and Federer’s classical argument implies that $T$ is area minimizing for the standard Euclidean metric.
The arguments given in [@DDHM Section 2.3] show that $G_\sharp \a{\gamma} = \a{G (\gamma)}$ and $G (\gamma) \subset \mathbb C^2 = \mathbb R^4$ is a smooth embedded curve. The same arguments also show that $G (D)$ is a smooth immersed surface, that it is embedded outside the discrete set $G (Z)$ and that at each point $q\in G (Z\cap D)$ it consists of two holomorphic graphs intersecting transversally.
Proof of Theorem \[t:main2\]: Part II {#s:desingularization}
=====================================
In order to conclude the proof of Theorem \[t:main\] the idea is to modify the example of the previous section and substitute the self-intersection of each singular point $q\in G(Z\cap D)$ with a neck. In order for the new surface to be area minimizing we will then perturb the Euclidean metric and the standard complex structure to a nearby metric and a nearby almost Kähler structure. More precisely, order the points $\{p_k\}_{k\in \mathbb N}$ of the discrete set $G (Z\cap D)$. Fix sufficiently small balls ${{\mathbf B}}_{100r_k} (p_k)$ so that they are all disjoint and do not intersect the boundary curve $\Gamma$. Recall that $G (D) \cap {{\mathbf B}}_{100 r_k} (p_k)$ consist of two holomorphic disks intersecting transversally at $p_k$. In particular, we can assume that the two tangents to these disks are given by $\pi_1$ and $\pi_2$, where $\pi_1$ and $\pi_2$ are two distinct affine complex planes, namely $$\begin{aligned}
\pi_1 &= p_k + \{(z,w): a_1 z+ b_1 w =0\}\\
\pi_2 &=p_k + \{(z,w) : a_2 z+ b_2 w =0\}\end{aligned}$$ for two different points $[a_1, b_1], [a_2, b_2] \in \mathbb C \mathbb P^1$. The idea is to choose a sufficiently small $\eta_k>0$ and substitute the surface $G(D)$ inside ${{\mathbf B}}_{r_k} (p_k)$ with the holomorphic subvariety $$\Lambda_k := \{p_k + (z,w): (a_1z + b_1 w) (a_2z + b_2 w) = \eta_k\}\, ,$$ while glueing it back to the original surface $G(D)$ in the annulus ${{\mathbf B}}_{100 r_k} (p_k)\setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$.
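To see why this replacement produces a neck, it is convenient to pass to the purely auxiliary linear complex coordinates $u := a_1 z + b_1 w$ and $v := a_2 z + b_2 w$ centered at $p_k$ (these letters are introduced here only for illustration); they are well defined because $[a_1,b_1]\neq [a_2,b_2]$ in $\mathbb C \mathbb P^1$. In these coordinates $$\pi_1\cup\pi_2 = p_k + \{uv = 0\}\, , \qquad \Lambda_k = p_k + \{uv = \eta_k\} = p_k + \big\{(u,v) : v = \eta_k/u,\ u\in\mathbb C\setminus\{0\}\big\}\, ,$$ so that $\Lambda_k$ is a smooth embedded annulus (a neck joining the two sheets) which converges to the pair of crossing disks as $\eta_k\to 0$.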
If $\eta_k$ and $r_k$ are sufficiently small, we can assume that $\Lambda_k \cap {{\mathbf B}}_{100r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ and $G (D) \cap {{\mathbf B}}_{100r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ consist each of two annuli, respectively $\Lambda^1_k$, $\Lambda^2_k$ and $\Sigma^1_k$, $\Sigma^2_k$, where $\Lambda^i_k$ is close to $\Sigma^i_k$. Moreover, again by assuming that $\eta_k$ and $r_k$ are sufficiently small, each $\Lambda^i_k$ and $\Sigma^i_k$ are graphs of holomorphic functions over the plane $p_k +\pi_i$. We now wish to glue the surfaces $\Sigma^1_k$ and $\Lambda^1_k$ and $\Sigma^2_k$ and $\Lambda^2_k$ and modify the Euclidean metric and the standard Kähler structure in the annulus ${{\mathbf B}}_{100r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ to a nearby Riemannian metric with a corresponding almost Kähler structure, so that the glued surface is calibrated by the associated almost Kähler form. Both the new metric and the corresponding almost Kähler form will coincide with the Euclidean metric and the standard Kähler form outside of a neighborhood of the glued surface. By assuming $r_k$ and $\eta_k$ very small, we can reduce to perform such glueing in neighborhoods of the planar annuli $(p_k +\pi_1) \cap {{\mathbf B}}_{100r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ and $(p_k +\pi_2) \cap {{\mathbf B}}_{100r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$, which are disjoint. In particular we can assume that we glue the two pairs of surfaces and we modify the metric and the Kähler form in two separate regions. A schematic picture summarizing our discussion is given in the picture below.
\[fig:1\] ![A schematic picture of the procedure outlined above. The figure contains cross sections of the corresponding objects with the real affine plane $p_k + \mathbb R \times \mathbb R \subset \mathbb C \times \mathbb C$. In particular, $G (D)$ is pictured by the thick continuous curves, which in $p_k$ are tangent to the union of two crossing complex lines $p_k + \pi_1$ and $p_k + \pi_2$. The dashed lines represent the hyperbola $\Lambda_k$. The surface $\Sigma$ will coincide with the dashed lines in the inner ball, with the thick lines outside the outer ball and with a smooth interpolation between the two surfaces in the annular region. The interpolation will take place in the shadowed region, where both $\Lambda_k$ and $G(D)$ are graphical over the corresponding portion of $p_k + \pi_i$.](fig1.eps "fig:")
The corresponding metric $g_k$ will coincide with the euclidean one outside of the annulus and will have the property that, if we set $\bar{k} := \max \{k, N\}$, then $$\label{e:geometric}
\|g_k - \delta\|_{C^{\bar k}} < \varepsilon 2^{-\bar k-1}\, ,$$ where $\varepsilon$ is the constant of Theorem \[t:main2\]. The latter estimate will be achieved by choosing $\eta_k$ appropriately small, so that the graphs $\Lambda^i_k$ almost coincide with the graphs $\Sigma^i_k$.
The surface $\Sigma$ and the metric $g$ of Theorem \[t:main2\] will then be defined as follows:
- Outside of $\bigcup_k {{\mathbf B}}_{100 r_k} (p_k)$ $\Sigma$ coincides with $G (D)$ and the metric $g$ is the Euclidean metric.
- Inside each $\overline{{{\mathbf B}}_{r_k} (p_k)}$ $\Sigma$ coincides with the holomorphic submanifold $\Lambda_k$ and the metric $g$ is the Euclidean metric.
- In the annulus ${{\mathbf B}}_{100r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ $\Sigma$ is the glued surface and $g$ is the metric $g_k$ described above.
The existence of the (local) glued surface and of the metric $g_k$ is thus the key point and is guaranteed by the glueing proposition below (after appropriate rescaling). In the rest of the note we use the following notation:
- $D_r (p) \subset \mathbb C$ is the disk centered at $p\in \mathbb C$ of radius $r$; $p$ will be omitted if it is the origin.
- $\omega_0$ is the Kähler form on $\mathbb R^4 = \mathbb C^2$ and $\delta$ is the Euclidean metric on $\mathbb R^4$.
- $J_0$ is the standard complex structure on $\mathbb R^4$, namely $J_0 (a,b,c,d) = (-b,a,-d,c)$.
- Norms on functions, tensors, etc. are computed with respect to the Euclidean metric.
\[l:glueing\] For every $\eta>0, N\in \mathbb N$ there is $\varepsilon >0$ with the following property. Assume that $f, h: D_{10} \setminus \overline{D}_1 \to \mathbb C$ are two holomorphic maps with $$\|f\|_{C^{N+2}} +\|h\|_{C^{N+2} }\leq \varepsilon\, .$$ Then there are
- a metric $g\in C^\infty$ with $\|g-\delta\|_{C^N} \leq \eta$ and $g=\delta$ outside $(D_9 \setminus \overline{D}_1)\times D_{2\eta}$,
- an almost Kähler structure $J$ compatible with $g$ such that $\|J-J_0\|_{C^N} \leq \eta$ and $J=J_0$ outside $(D_9 \setminus \overline{D}_1)\times D_{2\eta}$,
- an associated almost Kähler form $\omega$ with $\|\omega-\omega_0\|_{C^N} \leq \eta$ and $\omega=\omega_0$ outside $(D_9 \setminus \overline{D}_1)\times D_{2\eta}$
- and a function $\zeta: D_{10} \setminus \overline{D}_1 \to D_\eta$
such that
- $\zeta = ({\mathrm {Re}\,}f, {\mathrm {Im}\,}f)$ on $D_2\setminus \overline{D_1}$ and $\zeta = ({\mathrm {Re}\,}h, {\mathrm {Im}\,}h)$ on $D_{10}\setminus \overline{D_9}$;
- $\omega$ calibrates the graph of $\zeta$.
Proof of the glueing proposition
================================
Before coming to the proof, let us recall some known facts from symplectic geometry. First of all, a $2$-form $\alpha$ on $\mathbb R^{2n}$ is called nondegenerate if for every point $p\in \mathbb R^{2n}$ the corresponding skew-symmetric bilinear map $\alpha_p : \mathbb R^{2n}\times \mathbb R^{2n} \to \mathbb R$ is nondegenerate, namely $$\label{e:nondegenerate}
\forall u \in \mathbb R^{2n}\setminus \{0\}\; \exists v\in \mathbb R^{2n} \mbox{ with } \alpha_p (u,v)\neq 0\, .$$ Given such a skew-symmetric $2$-form $\alpha$ we can define $A_p: {{\mathbb R}}^{2n}\to {{\mathbb R}}^{2n}$ as $$\label{e:defA}
\alpha_p(v,w)=-\delta(v,A_pw)\, ,$$ where we recall that $\delta$ is the Euclidean metric. The nondegeneracy condition is equivalent to $\ker A_p = \{0\}$. Note that for $\alpha = \omega_0$, the standard Kähler form of ${{\mathbb R}}^{2n}$, we have $A_p=J_0$, and that $$\|\alpha-\omega_{0}\|_{C^N}\le \eta \Rightarrow \|A_p-J_0\|_{C^N}\le C\eta\, .$$ In particular, any $2$-form which is sufficiently close to $\omega_0$ in the $C^0$ norm is necessarily nondegenerate.
We start with the following particular version of the Poincaré Lemma. Since we have not been able to find a precise reference, we give the explicit argument.
\[l:Poincare\] Assume $U\subset {{\mathbb R}}^4$ is a star-shaped domain with respect to the origin and let $\beta$ be a closed $2$-form, with the property that the pull back of $\beta$ on $\{X_3=X_4=0\}$ vanishes. Then there is a primitive $\alpha$ with the properties that
- $\alpha$ vanishes identically on $\{X_3=X_4=0\}$;
- $\|\alpha\|_{C^N} \leq C \|\beta\|_{C^{N+1}}$, where the constant $C$ depends only on the diameter of $U$.
First of all recall the standard formula for the primitive of a form given by integration along rays (cf. [@Spivak Theorem 4.1]). Namely, if $$\bar \beta = \sum_{i< j} \bar \beta_{ij}\, dX_{i} \wedge dX_{j}\, ,$$ then a primitive $\bar \alpha$ can be computed using the formula $$\label{e:Spivak}
\bar \alpha (X) = \sum_{i} \sum_{j} \left(\int_0^1 t\, \bar \beta_{ij} (tX)\, dt\right) X_{i}\, dX_{j}$$ with the convention that $\bar \beta_{ij} = -\bar \beta_{ji}$ if $i>j$. Using the latter expression we obviously have $\|\bar \alpha\|_{C^N} \leq C \|\bar \beta\|_{C^N}$. Moreover, if $\bar \beta$ vanishes identically on $\{X_3=X_4=0\}$ then clearly $\bar \alpha$ vanishes identically on $\{X_3=X_4=0\}$.
Given a general closed $2$-form $\beta$, we then look for a $1$-form $\vartheta$ which vanishes on $\{X_3=X_4 =0\}$ and with the property that $\bar \beta := \beta - d\vartheta$ vanishes on $\{X_3=X_4=0\}$. The resulting $\alpha$ will then be found as $\bar{\alpha} + \vartheta$, where $\bar\alpha$ is the primitive of $\bar \beta$ given in the formula \[e:Spivak\]. In order to find $\vartheta$ we first write $\beta$ in the form $$\beta = f\, dX_1\wedge dX_2 + \underbrace{(a_1 dX_1 + a_2\, dX_2)}_{=:\lambda} \wedge dX_3 + \underbrace{(b_1\, dX_1 + b_2\, dX_2 + b_3\, dX_3)}_{=:\mu} \wedge dX_4\, .$$ By assumption $f$ equals $0$ on $\{X_3=X_4=0\}$. Let us set $$\vartheta = -X_3 \lambda -X_4 \mu\, ,$$ so that, since $d\vartheta = \lambda\wedge dX_3 + \mu \wedge dX_4 - X_3\, d\lambda - X_4\, d\mu$, $$\beta -d\vartheta= f\, dX_1\wedge dX_2 + X_3\, d\lambda + X_4\, d\mu\, .$$ Since $f$ vanishes on $\{X_3=X_4=0\}$ we then get the desired property that $\beta-d\vartheta$ vanishes on it as well.
We will focus on the construction of the triple, whereas the estimates are a simple consequence of the algorithm.
*Step 1: Definition of $\zeta$ and a new system of coordinates*: First we smoothly extend $f$ inside $D_1$ and we then define $\zeta$ as $$\zeta = ({\mathrm {Re}\,}f, {\mathrm {Im}\,}f) \varphi + ({\mathrm {Re}\,}h, {\mathrm {Im}\,}h) (1-\varphi)\, .$$ where $\varphi \in C^\infty_c (D_5)$ with $0\leq \varphi \leq 1$ and $\varphi\equiv 1$ on $D_4$. In particular $$\zeta=f \qquad\text{on \(D_4\)}\qquad\text{and}\qquad\zeta=h \qquad\text{outside \(D_5\)}.$$ We now choose a system of coordinates $X:=(X_1,\dots, X_4)$ such that $\|X-\operatorname{Id}\|_{C^{N+1}}\le C{\varepsilon}$, $$\label{e:system1}
\Sigma=\mathrm{graph}(\zeta)=\{X_3=X_4=0\}\,$$ and $$\label{e:system2}
T_{p} \Sigma=\operatorname{Ker}dX_3\cap \operatorname{Ker}dX_4\qquad T_{p} \Sigma^\perp=\operatorname{Ker}dX_1\cap \operatorname{Ker}dX_2 \qquad \mbox{for every } p\in\Sigma\, .$$ Note that this can be done by, for instance, taking normal coordinates around $\Sigma$, provided ${\varepsilon}$ is chosen sufficiently small.
More precisely, we first choose two vector fields $\xi, \tau$ along $\Sigma$ such that:
- $|\xi_p|=|\tau_p|=1$ and $\xi_p\perp \tau_p$ (in the euclidean metric);
- $T_{p}\Sigma^{\perp}=\mathrm{span} (\xi_p, \tau_p)$.
We set $$Y(x_1,x_2, x_3, x_4)=(x_1,x_2, \zeta_1(x_1,x_2), \zeta_2(x_1,x_2))+x_3 \xi(x_1,x_2)+x_4 \tau(x_1,x_2)\, ,$$ where $\zeta=(\zeta_1, \zeta_2)$, $$p=(x_1,x_2, \zeta_1(x_1,x_2), \zeta_2(x_1,x_2))\in \Sigma, \qquad \xi(x_1,x_2)=\xi_p \qquad \tau(x_1, x_2)=\tau_p.$$ In order to get vector fields $\xi$ and $\tau$ whose derivatives are under control, a standard procedure is to take the standard vector fields $e_3 = (0,0,1,0)$, $e_4 = (0,0,0,1)$, project them orthogonally onto $T_{p} \Sigma^\perp$ and apply the Gram-Schmidt orthogonalization procedure to them. Simple computations give that $$\|\xi-e_3\|_{C^{N+1}} + \|\tau-e_4\|_{C^{N+1}} \leq C \|\zeta\|_{C^{N+2}}\, .$$ Note in particular that, if ${\varepsilon}$ is chosen sufficiently small, $Y$ is a diffeomorphism onto its image and that the latter contains $D_{8}\times D_{8}$. Letting $X=Y^{-1}$, it is immediate to check that \[e:system1\] is satisfied and thus also the first equality in \[e:system2\]. To check the second one simply note that, by the very definition of $X$, $$\{X_1=c_1, X_2=c_2\}=p+T_{p} \Sigma^\perp \qquad \mbox{where } X(p)=(c_1,c_2,0,0)\, .$$ From now on, with a slight abuse of notation, we will denote by $D_{r}\times D_{s}$ the product of disks in the $X$ system of coordinates, that is $$D_{r}\times D_{s}=\{ X_1^2+X_2^2< r^2\,, \,X_3^2+X_4^2< s^2\}\, ,$$ and we will work in the domain $D_8\times D_8$. Given that $\|DX - {\rm Id}\|_{C^0} + \|DY - {\rm Id}\|_{C^0} \leq C\varepsilon$ and assuming, without loss of generality, that $X$ and $Y$ keep the origin fixed, such sets are comparable to the corresponding products $D^e_{r} \times D^e_{s}$ in the Euclidean system of coordinates, namely $$D^e_{C^{-1} r} \times D^e_{C^{-1} s} \subset D_r\times D_s \subset D^e_{C r} \times D^e_{C s}\, ,$$ where the constant $C$ approaches $1$ as ${\varepsilon}\to 0$.
*Step 2: Construction of the $2$ form*: We take $\sigma \ll \eta$ and, provided ${\varepsilon}\ll \sigma$, we claim the existence of a $2$-form $\omega$ on $D_{8}\times D_8$ such that
- (a) $\omega$ is closed (and hence exact);
- (b) the pull-backs of $\omega$ and $\omega_0$ to $\Sigma$ coincide;
- (c) for all $p \in \Sigma\cap\bigl( (D_{7}\setminus \overline{D_{3}})\times D_{8}\bigr)$ $$\label{e:omegaspecial}
\omega_{p}(v,w)=0 \qquad \text{for all \(v\in T_{p} \Sigma\) and all \(w\in T_p\Sigma^\perp\);}$$
- (d) $\omega=\omega_0$ outside of $(D_7\setminus \overline{D}_3)\times D_{2\sigma}$;
- (e) $\|\omega-\omega_0\|_{C^N}\le \eta$.
To construct the form we observe that, on $\Sigma$, $i_\Sigma^\sharp \omega_0=a(X_1,X_2) dX_1\wedge dX_2$ for a suitable smooth function $a$. Extending $a$ constant in the $X_3,X_4$ coordinates we can write $$\label{e:omega0onsigma}
\omega_0=a(X_1, X_2)dX_1\wedge d X_2+\overline {\omega}\, ,$$ where $\bar \omega$ is pulled back to $0$ on $\Sigma$. Note that $d \bar \omega=0$, since $\omega_0$ is closed and $d\bigl(a(X_1,X_2)\, dX_1\wedge dX_2\bigr)=0$. Moreover $$\label{e:estimates}
\|a-1\|_{C^{N+1} (D_8)}+\|\overline {\omega}-dX_3\wedge dX_4\|_{C^{N+1} (D_8\times D_8)}\le o_{\varepsilon}(1)$$ where $ o_{\varepsilon}(1)\to 0$ as ${\varepsilon}\to 0$. We define $$\label{e:beta}
\beta=a(X_1,X_2) dX_1\wedge dX_2+dX_3\wedge dX_4.$$ We now apply Lemma \[l:Poincare\] to find a primitive $\vartheta$ of $\omega_{0}-\beta=\overline {\omega}-dX_3\wedge dX_4$ which equals $0$ on $\Sigma$. We also let $\varphi$ be a smooth cut-off function such that $$\begin{cases}
\varphi\equiv 0 &\text{on \((D_{6}\setminus D_{4})\times D_{\sigma}\)}.
\\
\varphi\equiv 1\qquad &\text{outside $(D_7 \setminus \overline{D}_3)\times D_{2\sigma}$}
\end{cases}$$ Note in particular that, provided ${\varepsilon}\ll \sigma$, $$\label{e:inclusione}
\Sigma\cap \{\varphi \neq 0\} \subset \operatorname{graph}f\cup \operatorname{graph}h\, .$$ We define $$\omega=\beta +d(\varphi \vartheta)=\beta+\varphi (\omega_0-\beta)+d\varphi\wedge \vartheta.$$ Clearly $\omega$ satisfies (a) and (d). Property (e) follows, by choosing ${\varepsilon}\ll \sigma$, as a consequence of the construction of Lemma \[l:Poincare\] and of \[e:estimates\]. Moreover, since $\vartheta$ vanishes on $\Sigma$ and the pull-backs of $\beta$ and $\omega_0$ on $\Sigma$ are the same, also (b) is satisfied. To check (c) we note that, due to \[e:system2\] and the definition of $\beta$, we have $\beta_p(v,w)=0$ for all $p\in\Sigma$ (in the domain of $X$) and all $v \in T_p\Sigma$, $w \in T_p\Sigma^\perp$. In particular \[e:omegaspecial\] is satisfied on $\{\varphi=0\}$. Since $f,h$ are holomorphic outside $D_2$, by \[e:inclusione\], for $p\in \{\varphi \neq 0 \}\cap \bigl((D_8\setminus \overline{D_2})\times D_{8}\bigr)$ the spaces $T_{p} \Sigma$ and $T_{p}\Sigma^\perp$ are perpendicular complex lines. Hence $\omega_0$ satisfies \[e:omegaspecial\] there and, since $$\omega\bigr|_{\Sigma}=(1-\varphi)\beta+ \varphi \omega_0\, ,$$ $\omega$ satisfies \[e:omegaspecial\] there as well, so that (c) is verified.
*Step 3: Definition of the almost complex structure and of the metric*: To conclude the proof it will be enough to construct a metric $g$ and a compatible almost complex structure $J$. Here we follow a method used in [@Bellettini]. Let $A_p$ be the skew-symmetric matrix defined in \[e:defA\], applied to the $2$-form $\omega$ constructed in Step 2. In particular $Q_p=-A_p^2 =A_pA_p^{t}$ defines a positive definite quadratic form and thus it admits a (positive definite) square root. We set $$g_{p}=\big(-A_p^2\big)^{\frac{1}{2}}\qquad\text{and}\qquad J_{p}=g_{p}^{-1}A_p.$$ Note that $g_{p}$ and $J_p$ equal, respectively, the Euclidean metric $\delta$ and the usual complex structure $J_0$ where $\omega=\omega_0$. Furthermore, since $g_p$ commutes with $A_p$, one immediately verifies that $$J_{p}^{2}=-\operatorname{Id}\qquad g_{p}(J_{p} v, J_{p} w)=g_{p}( v, w)\qquad \omega_{p}(v,w)=-g_{p}( v, J_{p} w)$$ so that the triple $(g, J, \omega)$ defines an almost Kähler structure on $D_8\times D_8$ which coincides with the canonical one where $\omega=\omega_0$. We are thus left to prove that $\omega$ calibrates $\Sigma$ on $(D_{10}\setminus \overline{D_{1}})\times D_{10}$. This is clear in the region where $\omega=\omega_0$, because in that region $\Sigma$ equals either the graph of $f$ or that of $h$ and these are holomorphic outside $D_1$. Hence it is enough to verify that $\Sigma$ is calibrated in $\bigl( (D_{7}\setminus \overline{D_{3}})\times D_{8}\bigr)$. To this end note that, if $p\in \Sigma\cap\bigl( (D_{7}\setminus \overline{D_{3}})\times D_{8}\bigr)$, by \[e:omegaspecial\] and the definition of $A_p$, $$0=\omega_{p}(v,w)=-\delta(v,A_pw)=\delta(A_pv, w) \qquad \mbox{ for all \(v\in T_{p}\Sigma\), \(w\in T_{p}\Sigma^\perp\).}$$ In particular $A_p$ maps $T_p\Sigma$ into itself (and $T_{p} \Sigma^\perp$ into itself as well). The same is true then for $J_p$ and thus, if $g_p(v,v)=1$, $(v, J_pv)$ is a $g_p$-orthonormal frame of $T_{p}\Sigma$. This implies that $\omega$ is pulled back on $\Sigma$ to the $g$-volume form and concludes the proof.
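For the reader’s convenience we record the elementary computation behind the three displayed identities; it uses only the skew-symmetry of $A_p$ with respect to $\delta$ and the fact that the symmetric matrix $g_p=(-A_p^2)^{1/2}$ commutes with $A_p$ (as in the proof above, the matrix $g_p$ and the bilinear form $g_p(x,y)=\delta(x, g_p\, y)$ are identified): $$J_p^2 = g_p^{-1}A_p\, g_p^{-1}A_p = g_p^{-2}A_p^2 = (-A_p^2)^{-1}A_p^2 = -\operatorname{Id}\, ,\qquad J_p^t = A_p^t\, g_p^{-1} = -g_p^{-1}A_p = -J_p\, ,$$ $$g_p(J_pv, J_pw) = \delta\big(v, J_p^t\, g_p\, J_p\, w\big) = \delta\big(v, -J_p^2\, g_p\, w\big) = g_p(v,w)\, ,\qquad \omega_p(v,w) = -\delta(v, A_p w) = -\delta(v, g_p J_p w) = -g_p(v, J_p w)\, ,$$ where the first equality in the last chain is \[e:defA\] applied to $\omega$.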
Branching singularities
=======================
A simple modification of the ideas outlined above proves the following
\[t:branching\] For every $\varepsilon >0$ and every $N\in \mathbb N$ there is a smooth metric $g$ on ${{\mathbb R}}^4$, a smooth oriented curve $\Gamma$ in the unit ball ${{\mathbf B}}_1$ passing through the origin and a smooth oriented surface $\Sigma$ in ${{\mathbf B}}_1 \setminus \{0\}$ such that:
- (a) $g = \delta$ on ${{\mathbb R}}^4\setminus {{\mathbf B}}_1$ and $\|g-\delta\|_{C^N} < \varepsilon$;
- (b) $\a{\Sigma}$ is the unique area minimizing integral current in the Riemannian manifold $({{\mathbb R}}^4, g)$ which bounds $\a{\Gamma}$;
- (c') There is an infinite number of branching singularities $p_k\in \Sigma\setminus \Gamma$ accumulating to the only boundary singular point $0$.
The idea of the proof is to produce an analogue of Theorem \[t:main2\] in which the conclusion (c) therein is replaced by the conclusion (c') above. Here we sketch the necessary modifications to the arguments given for Theorem \[t:main2\].
We start by constructing an example of a holomorphic subvariety inducing an area minimizing current $T$ as in Section \[s:complex\], where however the property (iv) is replaced by
- At each $p\in {\rm Sing}_i (T)$ there is a neighborhood $U$ such that $T$ in $U$ consists of [*four*]{} holomorphic curves intersecting transversally at $p$.
More precisely there are four distinct elements $[a_1, b_1], [a_2, b_2], [a_3, b_3], [a_4, b_4] \in \mathbb C \mathbb P^1$ such that the tangent cone to $T$ at $p$ is given by the union of four corresponding complex lines: $$\left\{(z,w) : \prod_{i=1}^4 (a_i z + b_i w) =0 \right\}\, .$$ In order to achieve such an object we construct a function $g$ similar to that of Section \[s:complex\], by defining $$f_k (z) = \exp (-z^{-\alpha}) \sin \left({\rm Log}\, z + \frac{7 - 2k}{14} \pi i \right) \qquad \mbox{for $k = 0, 1, \ldots , 7$}$$ and $$g (z) = \prod_{k=0}^7 f_k (z)\, .$$ We then proceed as in Section \[s:complex\] to define the zero sets $Z_k$ of $f_k$ on $\overline{\mathbb H}\setminus \{0\}$, the set $Z = \{0\} \cup \bigcup_k Z_k$, the curve $\gamma$ and the corresponding disk $D$, where we require the properties analogous to (A) and (B) therein. We finally define the map $G (z) := (z^7, g (z))$ and the current $T$ is thus given by $\a{G(D)}$.
Next, proceeding as in Section \[s:desingularization\], in a sufficiently small ball of radius $r_k$ centered at $p_k\in {\rm Sing}_i (T)$ we wish to replace $G (D)$ with another holomorphic subvariety, which has a branching singularity at $p_k$. Since $G (D)$ is, at small scale, very close to the cone $$C_k :=\bigcup_{i=1}^4 \underbrace{\left\{p_k + (z,w) : (a_i z + b_i w) =0 \right\}}_{=:\pi_{k,i}}\, ,$$ the idea is to choose $$\Lambda_k := \left\{p_k + (z,w) : \prod_{i=1}^4 (a_i z + b_i w) = \eta_k (z^3-w^2)\right\}\, ,$$ where $\eta_k$ is again a very small parameter. Choosing $r_k$ and $\eta_k$ sufficiently small, we can ensure that $G (D) \cap {{\mathbf B}}_{100 r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ and $\Lambda_k \cap {{\mathbf B}}_{100 r_k} (p_k) \setminus \overline{{{\mathbf B}}}_{r_k} (p_k)$ each consist of four annuli which are graphs over corresponding annular regions of the four distinct complex lines $\pi_{k, i}$, $i=1, \ldots, 4$. We can obviously engineer such graphs to be arbitrarily close to the corresponding planes, and hence to fall, after appropriate rescaling, under the assumptions of the glueing Proposition \[l:glueing\]. Hence the construction of $\Sigma$ and of the almost Kähler structure $(g, J, \omega)$ follows the same arguments.
[10]{}
W. K. Allard. On the first variation of a varifold: boundary behavior. , 101:418–446, 1975.
F. J. Almgren, Jr. , volume 1 of [*World Scientific Monograph Series in Mathematics*]{}. World Scientific Publishing Co. Inc., River Edge, NJ, 2000.
F. J. Almgren, Jr. and W. P. Thurston. Examples of unknotted curves which bound only surfaces of high genus within their convex hulls. , 105(3):527–538, 1977.
H. W. Alt. Verzweigungspunkte von [$H$]{}-[F]{}lächen. [I]{}. , 127:333–362, 1972.
H. W. Alt. Verzweigungspunkte von [$H$]{}-[F]{}lächen. [II]{}. , 201:33–55, 1973.
C. Bellettini. Semi-calibrated 2-currents are pseudoholomorphic, with applications. , 46(4):881–888, 2014.
S. X. Chang. Two-dimensional area minimizing integral currents are classical minimal surfaces. , 1(4):699–778, 1988.
R. Courant. The existence of minimal surfaces of given topological structure under prescribed boundary conditions. , 72:51–98, 1940.
E. [De Giorgi]{}. . , 4:95–113, 1955.
E. De Giorgi. . Seminario di Matematica della Scuola Normale Superiore di Pisa, 1960-61. Editrice Tecnico Scientifica, Pisa, 1961.
C. [De Lellis]{}, G. [De Philippis]{}, J. [Hirsch]{}, and A. [Massaccesi]{}. . , page arXiv:1809.09457, Sep 2018.
C. [De Lellis]{}, E. [Spadaro]{}, and L. [Spolaor]{}. . , Aug. 2015.
C. [De Lellis]{}, E. [Spadaro]{}, and L. [Spolaor]{}. . , Aug. 2015.
C. De Lellis, E. Spadaro, and L. Spolaor. Regularity [T]{}heory for 2-[D]{}imensional [A]{}lmost [M]{}inimal [C]{}urrents [II]{}: [B]{}ranched [C]{}enter [M]{}anifold. , 3(2):3:18, 2017.
C. De Lellis, E. Spadaro, and L. Spolaor. Uniqueness of tangent cones for two-dimensional almost-minimizing currents. , 70(7):1402–1421, 2017.
U. Dierkes, S. Hildebrandt, and A. J. Tromba. , volume 340 of [*Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\]*]{}. Springer, Heidelberg, second edition, 2010. With assistance and contributions by A. Küster.
J. Douglas. Minimal surfaces of higher topological structure. , 40(1):205–298, 1939.
H. Federer. . Die Grundlehren der mathematischen Wissenschaften, Band 153. Springer-Verlag New York Inc., New York, 1969.
H. Federer and W. H. Fleming. Normal and integral currents. , 72:458–520, 1960.
W. H. Fleming. An example in the problem of least area. , 7:1063–1074, 1956.
R. Gulliver. A minimal surface with an atypical boundary branch point. In [*Differential geometry*]{}, volume 52 of [*Pitman Monogr. Surveys Pure Appl. Math.*]{}, pages 211–228. Longman Sci. Tech., Harlow, 1991.
R. Gulliver and F. D. Lesley. On boundary branch points of minimizing surfaces. , 52:20–25, 1973.
R. D. Gulliver, II. Regularity of minimizing surfaces of prescribed mean curvature. , 97:275–305, 1973.
R. Hardt and L. Simon. Boundary regularity and embedded solutions for the oriented [P]{}lateau problem. , 110(3):439–486, 1979.
J. Jost. Conformal mappings and the [P]{}lateau-[D]{}ouglas problem in [R]{}iemannian manifolds. , 359:37–54, 1985.
R. Osserman. A proof of the regularity everywhere of the classical solution to [P]{}lateau’s problem. , 91:550–569, 1970.
M. Shiffman. The [P]{}lateau problem for minimal surfaces of arbitrary topological structure. , 61:853–882, 1939.
M. Spivak. . W. A. Benjamin, Inc., New York-Amsterdam, 1965.
F. Tomi and A. J. Tromba. Existence theorems for minimal surfaces of nonzero genus spanning a contour. , 71(382):iv+83, 1988.
B. White. Classical area minimizing surfaces with real-analytic boundaries. , 179(2):295–305, 1997.
---
abstract: 'Scanning ion conductance microscopy (SICM) can image the surface topography of specimens in ionic solutions without mechanical probe–sample contact. This unique capability is advantageous for imaging fragile biological samples but its highest possible imaging rate is far lower than the level desired in biological studies. Here, we present the development of high-speed SICM. The fast imaging capability is attained by a fast Z-scanner with active vibration control and pipette probes with enhanced ion conductance. By the former, the delay of probe Z-positioning is minimized to sub-, while its maximum stroke is secured at . The enhanced ion conductance lowers a noise floor in ion current detection, increasing the detection bandwidth up to . Thus, temporal resolution 100-fold higher than that of conventional systems is achieved, together with spatial resolution around .'
author:
- Shinji Watanabe
- Satoko Kitazawa
- Linhao Sun
- Noriyuki Kodera
- Toshio Ando
bibliography:
- '../Biblio/reference\_20141021.bib'
title: 'Development of High-Speed Ion Conductance Microscopy'
---
Introduction
============
Tapping mode atomic force microscopy (AFM) [@hansma1994tapping] has been widely used to visualize biological samples in aqueous solution with high spatial resolution. However, when the sample is very soft, like eukaryotic cell surfaces, the intermittent tip-sample contact significantly deforms the sample and hence blurs its image [@zhang2012scanning; @ushiki2012scanning; @ando2018high]. Moreover, when the sample is extremely fragile, it is often seriously damaged [@seifert2015comparison; @ando2018high]. SICM was invented to overcome this problem [@hansma1989scanning]. SICM uses as a probe an electrolyte-filled pipette having a nanopore at the tip end, and measures an ion current that flows between an electrode inside the pipette and another electrode in the external bath solution. The ionic current resistance between the pipette tip and sample surface (referred to as the access resistance) increases when the tip approaches the sample. This sensitivity of access resistance to the tip–sample distance enables imaging of the sample surface without mechanical tip-sample contact [@del2014contact; @thatenhorst2014effect] (Fig. \[FIG1\]). To improve fundamental performances of SICM, several devices have recently been introduced, including a technique to control the pore size of pipettes [@steinbock2013controllable; @xu2017controllable; @sze2015fine] and a feedback control technique based on tip–sample distance modulation [@pastre2001characterization; @li2014phase; @li2015amplitude]. Moreover, the SICM nanopipette has recently been used to measure surface charge density [@mckelvey2014surface; @mckelvey2014bias; @page2016fast; @perry2015simultaneous; @perry2016surface; @klausen2016mapping; @fuhs2018direct] and electrochemical activity [@kang2017simultaneous; @takahashi2012topographical] as well as to deliver species [@bruckbauer2002writing; @babakinejad2013local; @page2017quantitative; @takahashi2011multifunctional]. Thus, SICM is now becoming a useful tool in biological studies, especially for characterizing single cells with very soft and fragile surfaces [@page2017multifunctional].
However, the imaging speed of SICM is low; it takes from a few minutes to a few tens of minutes to capture an SICM image, which is in striking contrast to AFM. High-speed AFM is already established [@ando2008high] and has been used to observe a variety of proteins molecules and organelles in dynamic action [@ando2014filming]. The slow performance of SICM is due mainly to a low signal-to-noise ratio (SNR) of ion current sensing, resulting in its low detection bandwidth (and hence low feedback bandwidth). Moreover, the low resonant frequency of the Z-scanner also limits the feedback bandwidth.
When the vertical scan of the pipette towards the sample is performed with velocity $v_{\textrm{z}}$, the time delay of feedback control $t_{\textrm{delay}}$ causes the vertical scan to overshoot by $t_{\textrm{delay}} \times v_{\textrm{z}}$. This overshoot distance should be smaller than the closest tip–sample distance ($d_{\textrm{c}}$) to be maintained during imaging (see the approach curve in Fig. \[FIG1\]). That is,
$$\label{fall_v}
v_{\textrm{z}} \leq \frac{d_{\textrm{c}}}{t_{\textrm{delay}}}.$$
An appropriate size of $d_{\textrm{c}}$ is related to the pipette geometry, such as the tip aperture radius $r_{\textrm{a}}$, the cone angle $\theta_{\textrm{c}}$, and the outer radius of the tip $r_{\textrm{o}}$ [@rheinlaender2009image; @del2014contact; @korchev1997specialized], but $d_{\textrm{c}} \approx 2 r_{\textrm{a}}$ is typically used to achieve the highest possible resolution. The size of $t_{\textrm{delay}}$ can be roughly estimated from the resonant frequency of the Z-scanner $f_{\textrm{z}}$ and the bandwidth of ion current detection $B_{\textrm{id}}$, as $t_{\textrm{delay}} \approx 1/f_{\textrm{z}} + 1/B_{\textrm{id}}$. In typical SICM setups, the values of these parameters are $r_{\textrm{a}} \approx$ , $f_{\textrm{z}} \approx$ and $B_{\textrm{id}} \approx$ , yielding $v_{\textrm{z}} <$ . In the representative SICM imaging mode referred to as the hopping mode [@novak2009nanoscale], the tip approach-and-retract cycle is repeated over a distance (hopping amplitude) slightly larger than the sample height, $h_{\textrm{s}}$. For example, when $v_{\textrm{z}} =$ is used for the sample with $h_{\textrm{s}} \approx$ , it takes at least $>$ for pixel acquisition, which depends on the retraction speed. This pixel acquisition time corresponds to an image acquisition time longer than 8.3 min for 100 $\times$ 100 pixel resolution [@novak2014imaging]. When the pipette retraction speed can be set much higher than the approach speed $v_{\textrm{z}}$, the image acquisition time can be improved, but not by much.
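The timing budget above can be reproduced with a few lines of arithmetic. The following minimal Python sketch evaluates Eq. (\[fall\_v\]) and the hopping-mode frame time; all numerical inputs are illustrative assumptions chosen for the example, not the specific values quoted in the text:

```python
# Hopping-mode SICM timing estimate based on Eq. (1): v_z <= d_c / t_delay,
# with t_delay ~ 1/f_z + 1/B_id.  All numbers below are illustrative assumptions.

r_a   = 50e-9      # tip aperture radius [m] (assumed)
d_c   = 2 * r_a    # closest tip-sample distance, d_c ~ 2*r_a
f_z   = 1e3        # Z-scanner resonant frequency [Hz] (assumed, conventional scanner)
B_id  = 1e3        # ion-current detection bandwidth [Hz] (assumed)
h_s   = 2e-6       # sample height [m] (assumed)
hop   = 1.2 * h_s  # hopping amplitude, slightly larger than h_s
n_pix = 100 * 100  # number of pixels per image

t_delay = 1.0 / f_z + 1.0 / B_id   # feedback delay [s]
v_z     = d_c / t_delay            # maximum safe approach speed [m/s]
t_pixel = hop / v_z                # approach time per pixel [s] (retraction assumed fast)
t_frame = n_pix * t_pixel          # time per frame [s]

print(f"v_z     = {v_z * 1e6:.1f} um/s")
print(f"t_pixel = {t_pixel * 1e3:.1f} ms")
print(f"t_frame = {t_frame / 60:.1f} min")
```

With these inputs the frame time comes out in the range of several minutes, in line with the estimate above; raising both $f_{\textrm{z}}$ and $B_{\textrm{id}}$ by two orders of magnitude shrinks $t_{\textrm{delay}}$, and hence the frame time, by roughly the same factor, which is the basis of the $\sim$100-fold speed-up targeted in this work.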
Several groups have attempted to increase $v_{\textrm{z}}$ [@shevchuk2012alternative; @novak2014imaging; @jung2015closed; @kim2015alternative; @li2014phase; @li2015phase]. One of the approaches used is to mount a shear piezoactuator with a high resonant frequency (but with a small stroke length) on the Z-scanner and to use this fast piezoactuator as a ‘brake booster’ [@shevchuk2012alternative; @novak2014imaging]; that is, this piezoactuator is activated only in the initial retraction phase where the tip is in close proximity to the surface. This method could cancel an overshooting displacement and therefore increase $v_{\textrm{z}}$ 10-fold. Another approach is to increase $B_{\textrm{id}}$ by improving the SNR of current signal detection with the use of a current-source amplification scheme [@kim2015alternative] or by the use of an AC bias voltage between the electrodes (the AC current in phase with the AC bias voltage is used as an input for feedback control) [@li2014phase; @li2015phase]. This bias voltage modulation method is further improved by capacitance compensation [@li2015amplitude]. The improvement of SICM speed performance by these methods is, however, limited to a factor of a few at most. Very recently, two studies demonstrated fast imaging of live cells with the use of their high-speed SICM (HS-SICM) systems [@ida2017high; @simeonov2019high]. However, one of these studies used temporary tip–sample contact to alter the hopping amplitude [@ida2017high], while the other used pipettes with $r_{\textrm{a}}$ = 80– and abandoned optical observation of the sample [@simeonov2019high]. Note that in SICM the temporal resolution has a trade-off relationship with the spatial resolution, as in other measurement techniques. Thus far, no attempts have been made to increase both $f_{\textrm{z}}$ and $B_{\textrm{id}}$ extensively without compromising the spatial resolution and the non-contact imaging capability of SICM.
Here, we report the development of HS-SICM and demonstrate its high-speed and high-resolution imaging capability. The imaging rate was improved by a factor of $\sim$100 or slightly more. This remarkable enhancement in speed was achieved by two improvements: (i) fast pipette positioning achieved with the developed fast scanner and vibration suppression techniques and (ii) an enhanced SNR of current detection by reduction of the ionic resistance arising from the inside of the pipette (referred to as the pipette resistance). The improved $f_{\textrm{z}}$ of the Z-scanner resulted in a mechanical response time of $\sim$, corresponding to a $\sim$100-fold improvement over conventional SICM systems. The SNR of current detection was improved by a factor of $\sim$8, enhancing $B_{\textrm{id}}$ from 1 to or slightly higher. The HS-SICM system was demonstrated to be able to capture topographic images of low-height biological samples at 0.9– and live cells at 20–. These high imaging rates are compatible with a spatial resolution of 15–.
![\[FIG1\] Working principle of SICM. The electrolyte-filled pipette with a nanopore at its end (see the transmission electron micrograph in the left panel) is mounted on the scanner. The ion current through the nanopore generated by the application of bias voltage between the two Ag/AgCl electrodes is measured by the ion current detector. The measured ion current, which is dependent on the tip–surface separation $d$, is used as a pipette Z-position control signal. ](Fig1.pdf)
Results and Discussion
======================
Strategy towards HS-SICM
------------------------
The speed of pipette approach towards the sample ($v_{\textrm{z}}$) is limited, as expressed by Eq. (\[fall\_v\]). As $v_{\textrm{z}}$ depends on $f_{\textrm{z}}$ and $B_{\textrm{id}}$, the improvement of both $f_{\textrm{z}}$ and $B_{\textrm{id}}$ is required to achieve HS-SICM. To increase $f_{\textrm{z}}$, we need a fast Z-scanner for displacing the pipette along its length. Note that all commercially available Z-scanners for SICM have $f_{\textrm{z}} <$ . In addition, we need to establish a method to mount the pipette ($\sim$ in length) to the Z-scanner in order to minimize the generation of undesirable vibrations of the pipette. We previously developed a fast XYZ scanner with $f_{\textrm{z}} \approx$ , a resonant frequency of $\sim$ in the XY directions, and stroke distances of $\sim$ and $\sim$ for Z and XY, respectively [@watanabe2017high]. In this study, we further improved the dynamic response of this fast scanner. Considering this high $f_{\textrm{z}}$ with improved dynamic response, we need to increase $B_{\textrm{id}}$ to the level of $\sim$. As the ion current change caused by an altered tip-sample distance is generally small ($\sim$), the current signal noise largely limits $B_{\textrm{id}}$. At a high frequency regime ($>$ ), the dominant noise source is the interaction between the amplifier’s current noise and the total capacitance at the input [@rosenstein2012integrated; @levis1993use; @rosenstein2013single]. Therefore, we have to lower the total capacitance and increase the current signal to achieve $B_{\textrm{id}} \approx$ , without increasing the pipette pore size.
![image](Fig2.pdf)
High-speed Z-scanner
--------------------
The structure of the fast scanner we developed is shown in Fig. \[FIG2\]a-c. A key mechanism for minimizing unwanted Z-scanner vibrations is momentum cancellation; the hollow Z-piezoactuator is sandwiched between a pair of identical diaphragm-like flexures, so that the center of mass of the Z-piezoactuator hardly changes during its fast displacement ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 1). The pipette is mechanically connected only with the top flexure through being glued to the top clamp. Thanks to these designs, no noticeable resonance peaks are induced except at the resonant frequency of the Z-piezoactuator (Fig. \[FIG2\]e). We achieved a product value of (maximum displacement) $\times$ (resonant frequency) in this Z-scanner, which exceeds more than 10-fold the value of conventional designs of SICM Z-scanner, $\times$ 1–. In the present study, we further improved the dynamic response of the Z-scanner. The sharp resonant peak shown in Fig. \[FIG2\]e (blue line) induces unwanted vibrations. In fact, the application of a square-like-waveform voltage to the Z-scanner (black line in Fig. \[FIG2\]f) generated an undesirable ringing displacement of the Z-scanner (blue line in Fig. \[FIG2\]f). To damp this ringing, we developed feedforward (FF) and feedback (FB) control methods (Fig. \[FIG2\]d). The FF control system was implemented in a field-programmable gate array (FPGA). The gain-controlled output signal from a mock Z-scanner (an electric circuit) with a transfer function similar to that of the real Z-scanner was first differentiated and then subtracted from the signal input to the Z-piezodriver [@kodera2005active]. Although this method was effective in reducing the Q-factor of the Z-scanner, the drift behavior of the transfer function of the real Z-scanner would affect the reduced Q-factor during long-term scanning. To suppress the drift effect, the FB control implemented in an analog circuit was added as follows. The velocity of Z-scanner displacement was measured using the transimpedance amplifier via a small capacitor of $\sim$ positioned near the Z-scanner. The gain-controlled velocity signal was subtracted from the output of the FF controller. In this way, excessively fast movement of the Z-scanner was prevented [@kageshima2006wideband], resulting in nearly complete damping of unwanted vibrations, as shown with the red lines of Figs. \[FIG2\]e and \[FIG2\]f. Thus, the open-loop response time of the Z-scanner, $Q/\pi f_{\textrm{z}}$, was improved from $\sim$ to (critical damping). Note that the measured Z-scanner displacements (blue and red lines) include the latency of the laser vibrometer used () ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 1, Fig. S2). The FF/FB damping control was also applied to the XY scanners to improve their dynamic response ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 1, Fig. S3).
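To illustrate the effect of the mock-scanner-based damping described above, the following short Python sketch computes the frequency response of a second-order scanner model with and without velocity-proportional compensation. It is a minimal sketch of the idealized scheme, assuming the mock filter exactly matches the scanner and is driven by the corrected drive signal; the resonant frequency, Q-factor and gain are illustrative assumptions, not the measured values of our scanner:

```python
import numpy as np

# Idealized model of mock-scanner-based damping: a second-order scanner H(s) and a
# matched mock filter whose differentiated output (~velocity) is subtracted from the
# drive signal.  Assuming the mock is driven by the corrected drive (a local loop
# around the summing junction), the effective response is H_eff = H / (1 + G*s*H).
# All parameter values are illustrative assumptions.

f0 = 100e3                    # scanner resonant frequency [Hz] (assumed)
Q  = 40.0                     # scanner quality factor (assumed)
w0 = 2 * np.pi * f0

def H(s):
    """Normalized second-order scanner transfer function (unity DC gain)."""
    return w0**2 / (s**2 + (w0 / Q) * s + w0**2)

G = 1.0 / w0                  # damping gain; 1/Q_eff = 1/Q + G*w0  ->  Q_eff ~ 1

w = 2 * np.pi * np.logspace(3, 6, 20000)   # 1 kHz ... 1 MHz
s = 1j * w
H_open = H(s)
H_damp = H(s) / (1 + G * s * H(s))

print(f"resonance peak, undamped : {np.abs(H_open).max():5.1f}  (~Q)")
print(f"resonance peak, damped   : {np.abs(H_damp).max():5.2f}")
print(f"effective Q              : {1.0 / (1.0 / Q + G * w0):5.2f}")
```

In this idealized picture the compensation simply adds an extra damping term to the denominator, bringing the effective Q-factor close to unity; the FB loop added in our instrument serves to keep this cancellation valid when the real scanner drifts away from the mock.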
Enhancement of SNR with Salt Concentration Gradient
---------------------------------------------------
![\[FIG4\] Spatial distribution of average concentration of K$^+$ and Cl$^-$ obtained by FEM simulation. The surface charge density of the tip was set at . (**a**) Average ion concentration profile (red) and its derivative (blue) along the white arrow shown in (**b**). $c_{\textrm{in}}$ (KCl) = , $c_{\textrm{out}}$ (KCl) = and $V_{\textrm{b}}$ = were used. The vertical axis represents the Z-distance from the tip aperture normalized with the tip aperture diameter; the tip aperture position is zero as indicated by the broken line. (**b**) Spatial distribution of average ion concentration under the same conditions as (**a**). (**c**) Enhancement of tip conductivity by ICG. The vertical axis represents the enhancement factor of the tip conductivity with respect to the ion conductivity at $c_{\textrm{in}}$ = . (**d, e**) Spatial distributions of average ion concentration at $V_{\textrm{b}}$ = (**d**) and $V_{\textrm{b}}$ = (**e**) for $c_{\textrm{in}}$ (KCl) = and $c_{\textrm{out}}$ (KCl) = . ](Fig3.pdf)
We describe here a method to improve $B_{\textrm{id}}$ by increasing the SNR of current signal sensing. In the frequency region $>$ , the dominant noise source of the ion current detector is the total capacitance at the transimpedance amplifier input, $\Sigma \textrm{C}$ [@rosenstein2012integrated; @levis1993use]:
$$I_{\textrm{RMS}} \propto B_{\textrm{id}}^{3/2} \Sigma \, \textrm{C},$$
where $I_{\textrm{RMS}}$ represents a root-mean-square current noise. The electrode-wiring and the pipette capacitance $C_\textrm{p}$ dominate the total capacitance. As the $C_{\textrm{p}}$ derives from the part of the pipette immersed in solution, thicker wall pipettes are useful in reducing $C_{\textrm{p}}$. We used quartz capillaries with a wall thickness of 0.5–. The total capacitance in our setup was estimated to be $\sim$ ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 2), yielding $I_{\textrm{RMS}}$ $\sim$ at $B_{\textrm{id}}$ = (although $I_{\textrm{RMS}}$ was $\sim$ at $B_{\textrm{id}}$ = ), which was still too large. Then, we decided to increase the ion current to improve the SNR further. Since the bias voltage ($V_{\textrm{b}}$) larger than a typical value of $\pm$ induces an unstable ion current [@clarke2012pipette], we need to reduce the pipette resistance ($R_{\textrm{p}}$). The ion current $I_{\textrm{i}}$ through the pipette opening is approximately described as $$I_{\textrm{i}}(d) = \frac{V_{\textrm{b}}}{R_{\textrm{a}}(d) + R_{\textrm{p}}},
\label{I(d)}$$ where $d$ is the tip-surface distance and $R_{\textrm{a}}$ is the access resistance that depends on $d$ [@edwards2009scanning]. In Eq. \[I(d)\], the surface charge-dependent ion current rectification in the pipette is not considered [@wei1997current]. $R_{\textrm{p}}$ is usually $\sim$100-times larger than $R_{\textrm{a}}$ even at $d \approx d_{\textrm{c}}$, and therefore, the reduction of $R_{\textrm{p}}$ directly increases $I_{\textrm{i}}$. To reduce $R_{\textrm{p}}$, we examined the ion concentration gradient (ICG) method; a pipette back-filled with a high salt solution is immersed in a low salt solution. Since the pipette opening is very small, a concentration gradient is expected to be formed only in the close vicinity of the pipette opening. Although several studies have been performed on ICG from the viewpoint of its effect on the ion current rectification in nanopores [@cao2011concentration; @deng2014effect; @yeh2014tuning], it is unclear whether or not the ICG method is really useful and applicable to SICM, as the physiological salt concentration used in the external bath solutions is relatively high. To check this issue, we first performed a finite element method (FEM) simulation using the coupled Poisson–Nernst–Planck (PNP) equations that have been widely adopted to study the transport behavior of charged species [@bazant2009towards; @klausen2016mapping; @perry2016characterization]. Full details of our PNP simulation setup are described in [Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 3 and Methods. Figures \[FIG4\]a,b show a FEM simulation result obtained for the spatial profile of total ion concentration $(c_{\textrm{K}^+} + c_{\textrm{Cl}^-})/2$, when KCl and physiological KCl solutions were used for the inside and outside of the pipette, respectively. As seen there, the region of ICG is confined in a small volume around the pipette opening, while the outside salt concentration is maintained at $\sim$ and $<$ in the regions distant from the opening by $\sim$2$r_{\textrm{a}}$ and $>$ $4r_{\textrm{a}}$, respectively. Figure \[FIG4\]c (square plots) shows a simulation result for changes of ion conductance 1/$R_{\textrm{p}}$ when the KCl concentration inside the pipette ($c_{\textrm{in}}$) was altered, while the outside bulk KCl concentration ($c_{\textrm{out}}$) was kept at . This result was very consistent with that obtained experimentally (red plots in Fig. \[FIG4\]c). The value of $1/R_{\textrm{p}}$ at $c_{\textrm{in}}$ = was $\sim$8-times larger than that at $c_{\textrm{in}}$ = . We also confirmed that the conditions of $c_{\textrm{in}}$ = and $c_{\textrm{out}}$ = generate a steady current with $|V_{\textrm{b}}| <$ , and hence, allow stable SICM measurements for $\leq r_{\textrm{a}} \leq$ . Note that the high KCl concentration region can be confined to a smaller space when a negative bias voltage is used because of an ion current rectification effect of the negatively charged pipette (Fig. \[FIG4\]d, e). To confirm the SNR enhancement of $I_{\textrm{i}}$ by ICG formed by the use of $c_{\textrm{in}}$ = and $c_{\textrm{out}}$ = , we measured the dynamic responses of $I_{\textrm{i}}$ to quick change of $d$ under $V_{\textrm{b}}$ = , in the presence and absence of IGC. To measure the responses, the pipette with $r_{\textrm{a}}$ = was initially positioned at a Z-point showing 5$\%$ reduction of $I_{\textrm{i}}$ (see Fig. \[FIG5\]a). 
Then, the pipette was quickly retracted by within , and after a while quickly approached by within (Fig. \[FIG5\]b, Top), by the application of a driving signal with a rectangle-like waveform (Fig. \[FIG5\]b, Bottom) to the developed Z-scanner. The ion current responses measured using the transimpedance amplifier with $B_{\textrm{id}}$ = are shown in Fig. \[FIG5\]b (Middle). With ICG, a clear response was observed (blue line), whereas without ICG no clear response was observed (red line) due to a large noise floor at this high bandwidth. With ICG, the SNR of detected current response increased linearly with increasing $V_{\textrm{b}}$ (Fig. \[FIG5\]c, blue plots; [Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 4, Fig. S8), although the instability of detected $I_{\textrm{i}}$ was confirmed at $V_{\textrm{b}} >$ (not shown). Thus, the SNR of current detection was $\sim$8 times improved by the ICG method (Fig. \[FIG5\]c).
The rising and falling times of the measured current changes with ICG were indistinguishable from those of the piezodriver voltage (Fig. \[FIG5\]b, Middle and Bottom), indicating no noticeable delay ($<$ $\sim$) in the measured current response. Note that the physically occurring (not measured) response of current change (or the rearrangement of ion distribution) must be much faster than the response of measured current changes, because the actual response is governed by the local mass transport time in the nanospace around the pipette opening. The response time is roughly estimated to be 133 by adopting the diffusion time ($\tau$) required for ion transport by a distance of $2r_{\textrm{a}} =$ : $2r_{\textrm{a}} = \sqrt{2D\tau}$, where $D$ is the nearly identical diffusion coefficient of K$^{+}$ and Cl$^{-}$ in water ($\sim$) [@robinson2002electrolyte].
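This order-of-magnitude estimate is easy to reproduce from $2r_{\textrm{a}} = \sqrt{2D\tau}$. The short sketch below uses illustrative values of $D$ and $r_{\textrm{a}}$; they are assumptions for demonstration only, not necessarily the exact values in the text.

```python
# Rough estimate of the ion rearrangement time tau from 2*r_a = sqrt(2*D*tau),
# i.e. tau = (2*r_a)^2 / (2*D). All numbers below are illustrative assumptions.
D = 2.0e-9      # m^2/s, approximate diffusion coefficient of K+ and Cl- in water
r_a = 10e-9     # m, assumed tip aperture radius
tau = (2 * r_a) ** 2 / (2 * D)
print(f"tau ~ {tau * 1e9:.0f} ns")   # ~100 ns for these assumed numbers
```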
Contrary to our expectation, the normalized approach curve ($I_{\textrm{i}}$ vs $d$) was nearly identical between the pipettes with and without ICG (Fig. \[FIG5\]a) although their $R_{\textrm{p}}$ values were largely different. This indicates a nearly identical $R_{\textrm{a}}/R_{\textrm{p}}$ ratio between the two cases. This result was confirmed by FEM simulations performed by the use of various surface charge densities of pipette and substrate in a range of 0– ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 5).
![\[FIG5\] Enhancement of ion current response by ICG. (**a**) Approach curves with (blue) and without (red) ICG method. (**b**) Dynamic response of ion current at $V_{\textrm{b}}$ = when the tip is vertically moved (shown in green) in close proximity to the glass surface, and its dependence on the use (shown in blue) and non-use (shown in red) of ICG method. (**c**) Increase of SNR of ion current measurement with increasing $V_{\textrm{b}}$ and its dependence on the use (blue) and non-use (red) of ICG method. ](Fig4.pdf)
In the final part of this subsection, we considered how SICM measurements with ICG would affect the membrane potential of live cells in a physiological solution. The ICG modulates local ion concentrations around the pipette tip end, which might induce a change in the local membrane potential only when the tip is in close vicinity to the cell surface. However, it is difficult to perform experimental measurements of such a transient change of the local membrane potential. Here we estimated this change for the nonexcitable HeLa cells used in this study and for typical excitable cells, using the Goldman-Hodgkin-Katz voltage equation [@goldman1943potential; @hodgkin1949effect]. In this estimation, extracellular ion concentrations around the tip end were obtained by FEM simulations. Full details of this analysis are described in [Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 3. For both cell types, we found that their local membrane potentials were changed by the pipette tip with ICG to various extents depending on the value of $V_{\textrm{b}}$ (see Supplementary material SI 3, Tab. S3). However, for nonexcitable HeLa cells, we expect that the net contribution of an ICG-induced local membrane potential change is negligible as the tip pore size is very small. It may, however, not be negligible for excitable cells, because a local membrane potential change would trigger the opening of voltage-gated sodium ion channels and thus generate an action potential, which would propagate over the cell membrane. A quantitative estimation for this possibility is beyond the scope of the present study. Nevertheless, our FEM simulations indicate that the ICG-induced local membrane potential change can be attenuated by the use of different $V_{\textrm{b}}$ values and/or a high-concentration NaCl solution instead of a KCl solution ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 3, Tab. S4).
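For reference, the GHK voltage equation itself is straightforward to evaluate. The sketch below is a minimal illustration with textbook permeability ratios and ion concentrations; all numbers, including the elevated local K$^{+}$ concentration that stands in for an FEM-derived profile, are illustrative assumptions rather than the values used in our analysis.

```python
import math

def ghk_voltage(P, c_out, c_in, T=310.0):
    """Goldman-Hodgkin-Katz voltage equation for monovalent ions.

    P     : relative permeabilities, e.g. {"K": 1.0, "Na": 0.05, "Cl": 0.45}
    c_out : extracellular concentrations (mM)
    c_in  : intracellular concentrations (mM)
    Returns the membrane potential in volts. The anion (Cl-) enters with
    swapped inside/outside concentrations, as required by the GHK equation.
    """
    R, F = 8.314, 96485.0
    num = P["K"] * c_out["K"] + P["Na"] * c_out["Na"] + P["Cl"] * c_in["Cl"]
    den = P["K"] * c_in["K"] + P["Na"] * c_in["Na"] + P["Cl"] * c_out["Cl"]
    return (R * T / F) * math.log(num / den)

# Illustrative resting potential with typical mammalian concentrations
P = {"K": 1.0, "Na": 0.05, "Cl": 0.45}
c_out = {"K": 5.0, "Na": 145.0, "Cl": 110.0}
c_in = {"K": 140.0, "Na": 10.0, "Cl": 10.0}
v_rest = ghk_voltage(P, c_out, c_in)

# Same cell when the local extracellular K+ near the tip is elevated by ICG
c_out_icg = dict(c_out, K=30.0)   # hypothetical local value standing in for an FEM profile
v_icg = ghk_voltage(P, c_out_icg, c_in)
print(f"{v_rest * 1e3:.1f} mV -> {v_icg * 1e3:.1f} mV")
```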
Evaluation of Improved $v_{\textrm{z}}$
---------------------------------------
![\[FIG6\] Evaluation of improved approach velocity. The pipette tip was periodically moved in the $z$-direction, in close proximity to the glass substrate (right panel). (Left panel) The green line indicates the time course of tip displacement estimated from the Z-scanner’s drive voltage. The red line indicates the detected ion current signal. The purple line indicates the velocity of tip displacement estimated from the output current of the Z-piezodriver. (Bottom panel) An enlarged view showing these three quantities. The ion current signal in the shaded region (shown in pink) is a false one (mostly leakage current) caused by a capacitive coupling between the Z-piezoactuator and the signal line of ion current detection. A set point value of 2$\%$ was used. ](Fig5.pdf)
![image](Fig6.pdf)
Here we describe a quantitative evaluation of how significantly the pipette approach velocity is improved by the ICG method and the developed Z-scanner. In addition, we describe a problem encountered during this evaluation. For this evaluation, the pipette filled with KCl was vertically moved above the glass substrate in KCl solution (the Z-displacement and its velocity are shown with the green and purple lines in Fig. \[FIG6\], respectively), while the ion current was measured using the transimpedance amplifier with $B_{\textrm{id}}$ = . For the initiation of $\sim$ retraction of the pipette by feedback control, the set point of ion current was set at 98$\%$ of the reference ion current (i.e., 2$\%$ reduction). In the approaching regime, the ion current decreased as the tip came into close proximity to the surface (red line in Fig. \[FIG6\]). However, in the retraction regime following the deceleration phase, the detected ion current behaved strangely; $I_{\textrm{i}}$ initially decreased rather than increased and then reversed the changing direction, similar to the behavior of pipette Z-velocity ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 6, Fig. S10). We confirmed that this abnormal response of $I_{\textrm{i}}$ was due to a leakage current caused by a capacitive coupling between the Z-piezoactuator and the signal line detecting $I_{\textrm{i}}$. We could mitigate this adverse effect by subtracting the gain-controlled Z-velocity signal from the measured $I_{\textrm{i}}$ (Fig. S10). Although this abnormal response could not be completely cancelled as shown with the pink line in the shaded region of Fig. \[FIG6\], it affected neither the feedback control nor SICM imaging. This is because the $I_{\textrm{i}}$ signal in the retraction regime is not used in the operation of SICM. In the repeated approach and retraction experiments with the use of the ICG method (Fig. \[FIG6\]), we achieved $v_{\textrm{z}}$ = for $r_{\textrm{a}}$ = , corresponding to 63$\%$ of the value estimated as 2 $\times$ /\[ + ()$^{-1}$\] = . The $v_{\textrm{z}}$ value attained here is a more than 300-fold improvement over the $v_{\textrm{z}}$ value used in a recent SICM imaging study on biological samples ( for $r_{\textrm{a}}$ = ) with a conventional design of SICM Z-scanner [@novak2014imaging]. Very recently, the Sch[ä]{}ffer group successfully increased $v_{\textrm{z}}$ up to for $r_{\textrm{a}}$ = 80– using their sample stage scanner and a step retraction sequence called ‘turn step’ [@simeonov2019high]. Our $v_{\textrm{z}}$ value achieved for even 3–4 times smaller $r_{\textrm{a}}$ still surpasses their result. We emphasize that the leakage of high salt to the outside of the tip is too small to change the bulk concentrations of ions outside and inside the tip. In addition, the region of ICG is confined to the vicinity of the tip for $V_{\textrm{b}}$ values ($V_{\textrm{b}} \leq |$$|$) typically used in SICM measurements (Figs. \[FIG4\]d, e), and therefore, the sample remains in the bath salt condition most of the time as the time when the sample stays within a distance of $\sim$2 $\times$ $r_{\textrm{a}}$ from the tip opening is very short ($\sim$).
![image](Fig7.pdf)
![image](Fig8.pdf)
High-Speed Imaging of Grating Patterns
--------------------------------------
We evaluated the performance of our HS-SICM system by capturing topographic images of a sample made of polydimethylsiloxane that had a periodic 5 $\times$ checkerboard pattern with a height step of (Grating 1). For this imaging in hopping mode, we used $r_{\textrm{a}}$ = 5– and $B_{\textrm{id}}$ = , values smaller than those used in the above evaluation test. Therefore, we reduced $v_{\textrm{z}}$ to 150–. Other imaging conditions were $V_{\textrm{b}}$ = and number of pixels = 100–400 $\times$ 100. Figure \[FIG9\]a shows a topographic image of Grating 1 captured at over a 25 $\times$ area with 100 $\times$ 100 pixels. Figure \[FIG9\]b shows a topographic image of a rougher surface area of another grating sample (Grating 2) containing an object with a height of $\sim$ (Fig. \[FIG9\]c). Even for this rougher surface, its imaging was possible at . Figures \[FIG9\]d–f show images of a narrower area of Grating 2 marked with the small rectangle in Fig. \[FIG9\]b, captured at 3.5, 9 and , respectively. Although fine structures were more visible in the images captured at 9 and , this difference was not due to the lower imaging rates but due to the larger number of pixels. Averaged pixel rates were 2.85, 4.44 and for Figs. \[FIG9\]d, e and f, respectively, demonstrating the high temporal and spatial resolution of our HS-SICM. Note that the imaging rate depends not only on $v_{\textrm{z}}$ and the number of pixels but also on the hopping amplitude, hopping rate and the performance of the lateral movement of our scanner. In [Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), Fig. S11, we show examples of images captured at higher rates ($\sim$4 and $\sim$).
High-Speed Imaging of Biological Samples
----------------------------------------
Next, we examined the applicability of our HS-SICM system to biological samples. The first test sample was a live HeLa human cervical cancer cell. The imaging was carried out in hopping mode using a pipette with $r_{\textrm{a}}$ = 5–, $B_{\textrm{id}}$ = and $V_{\textrm{b}}$ = . Figure \[FIG10\] shows topographic images of a peripheral edge region of a HeLa cell locomoting on a glass substrate in phosphate-buffered saline, captured at 20– with 200 $\times$ 100 pixels for a scan area of 12 $\times$ 12 . During the overall downward locomotion, until the cell disappeared from the imaging area within , sheet-like structures (lamellipodia) with $\sim$ height were observed to grow and retract. In this imaging, the pixel rate was 670–, in sharp contrast to the pixel rate of $\sim$ used in previous hopping-mode-SICM imaging of live cells without significant surface roughness [@shevchuk2012alternative; @novak2014imaging]. Additional HS-SICM images capturing the movement of a HeLa cell in a peripheral edge region are provided in Figs. S12 and S13 ([Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/rsi_sup_%28sw%29191112%28clear%29.pdf), SI 7). Bright-field optical microscope images before and after these SICM measurements are also provided in Fig. S14. In these imaging experiments, $r_{\textrm{a}}$ = 2–, $B_{\textrm{id}}$ = , $V_{\textrm{b}}$ = , pixel rate = , and hopping amplitude = were used.
To demonstrate the applicability of our HS-SICM system also to live cells with significant surface roughness, we next imaged a central region (2 $\times$ ) of a HeLa cell at , using pipettes with $r_{\textrm{a}}$ = 5–, $B_{\textrm{id}}$ = , $V_{\textrm{b}}$ = , and hopping amplitude of (Fig. \[FIG11\] and Supplementary Movie 1). The captured images show moving, growing and retracting microvilli with straight-shaped and ridge-like structures [@seifert2015comparison; @ida2017high; @gorelik2003dynamic]. The red arrows on frames 11–21 in Fig. \[FIG11\] indicate the formation and disappearance of a single microvillus. During this dynamic process captured with a pixel rate of , the full width at half maximum (FWHM) of the microvillus was less than (lower right in Fig. \[FIG11\]), when analyzed for frame 15 (FWHM values of traces 1 and 2 are 38.8 $\pm$ and 42.3 $\pm$ , respectively). These values are smaller than the single-pixel resolution in previous SICM measurements (100–) [@seifert2015comparison; @ida2017high]. As demonstrated here, we can get SICM images with higher spatial resolution without sacrificing the temporal resolution, as shown in Table \[tab1\] (compare the values of pixel rate and $r_{\textrm{a}}$ among the three studies with their respective HS-SICM systems).
In Figs. \[FIG10\] and \[FIG11\], slight image discontinuities appear as noise between adjacent fast scan lines. In contrast, there are no such discontinuities in the images of stationary grating samples (Fig. \[FIG9\]a) captured with an even higher imaging rate than those used in Figs. \[FIG10\] and \[FIG11\]. Moreover, during these imaging experiments, significant ion current reductions that could be caused by tip–sample contact [@ida2017high] were not observed. Therefore, we speculate that the image discontinuities that appeared in the images of live HeLa cells are due to autonomous movement of the cells and the small pipette aperture; small movements of HeLa cells that cannot be detected with a large-aperture pipette ($r_a$ $\sim$ ) appear in our high-resolution images.
[llllll]{} Observation & Image rate (s/frame) & Pixel rate (s$^{-1}$) & $r_{\textrm{a}}$ (nm) & PRDA (s$^{-1}$nm$^{-1}$) & Reference\
Endocytosis/Exocytosis & 6 & 68 & 50 & 1.4 & Shevchuk et al. [@shevchuk2012alternative]\
Peripheral edge & 0.6 & 1707 & 80–100 & 17–21 & Simeonov and Sch[ä]{}ffer [@simeonov2019high]\
Peripheral edge & 20–28 & 714–1000 & 5–7.5 & 95–200 & This work\
Microvilli & 18 & 228 & 50–100 & 2.3–4.6 & Ida et al. [@ida2017high]\
Microvilli & 1.4 & 2926 & 80–100 & 29.3–36.6 & Simeonov and Sch[ä]{}ffer [@simeonov2019high]\
Microvilli & 20 & 455 & 5–7.5 & 60.7–91.0 & This work\
\[tab1\]
![image](Fig9.pdf)
Next, we performed HS-SICM imaging of actin filaments of $\sim$ in diameter, under the conditions of $r_{\textrm{a}}$ = 5–, $B_{\textrm{id}}$ = , $V_{\textrm{b}}$ = , pixel rate of 556–, and hopping amplitude of 100–. Figure \[FIG8\]a shows a topographic image captured at $\sim$ of an actin filament specimen placed on a glass substrate coated with positively charged aminopropyl-triethoxysilane. Figure \[FIG8\]b shows its enlarged image for the area shown with the red rectangle in Fig. \[FIG8\]a. The image exhibited a height variation along the filament, as indicated in Figs. \[FIG8\]c and d. The measured height obtained from the arrow position 3 was $\sim$. However, the measured heights obtained from the arrow positions 1, 2, 4, 5, and 6 were around . These results may indicate that the specimen partly contains two vertically stacked actin filaments. The measured FWHM was 38.1 $\pm$ for the arrow position 3. This value is 5 times larger than the diameter of an actin filament. This result can be explained by the side wall effect [@dorwling2018simultaneous]; the tip wall thickness, i.e., $r_{\textrm{o}}-r_{\textrm{i}}$, would expand the diameter of a small object measured with SICM [@rheinlaender2015lateral]. The side wall effect also explains why the measured FWHMs for the arrow positions 1 (66.0 $\pm$ ) and 2 (50.6 $\pm$ ) were larger than that for the arrow position 3. Despite the pixel size of 8 $\times$ , the crossover repeat of the two-stranded actin helix could not be resolved. This is not due to insufficient vertical resolution but due to insufficient lateral resolution of the pipette used.
Next, we used mica-supported neutral lipid bilayers containing biotin-lipid, instead of the aminosilane-coated glass substrate, to avoid possible bundling of actin filaments on the positively charged surface. Figures \[FIG8\]e, f show images captured at for partially biotinylated actin filaments immobilized on the lipid bilayers through streptavidin molecules with a low surface density. Measured heights of these filaments were $\sim$6– (Figs. \[FIG8\]g and h). However, the measured height of the lipid bilayer was $\sim$ from the mica surface (Fig. \[FIG8\]), much larger than the bilayer thickness of $\sim$ [@leonenko2004investigation]. This large measured thickness is possibly due to a sensitivity of $I_{\textrm{i}}$ to negative charges on the mica surface [@mckelvey2014surface; @klausen2016mapping; @perry2016surface; @fuhs2018direct]. The surface charges of objects can change $d_{\textrm{c}}$ even at constant $V_{\textrm{b}}$ and constant set point [@klausen2016mapping], which can yield measured heights that deviate largely from the real ones. In the high-resolution image of immobilized filaments (Fig. \[FIG8\]f), the measured value of FWHM was 33.3 $\pm$ (Fig. \[FIG8\]h).
As demonstrated here, our HS-SICM enables fast imaging of molecules without sacrificing the pixel resolution, unlike previous works [@novak2014imaging; @shevchuk2012alternative]. When the number of pixels was reduced to 50 $\times$ 50, we could achieve sub-second imaging for a low-height sample. [Supplementary Movie 2](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/movie_2.mp4) captured at with 50 $\times$ 50 pixels over a 0.8 $\times$ area shows high fluidity-driven morphological changes of polymers formed from a silane coupling agent placed on mica.
Outlook for Higher Spatiotemporal Resolution
--------------------------------------------
Finally, we discuss further possible improvements of HS-SICM towards higher spatiotemporal resolution. The speed performance of SICM can be represented by the value of pixel rate divided by $r_{\textrm{a}}$ (we abbreviate this quantity as PRDA), because of a trade-off relationship between temporal resolution and spatial resolution. In Table \[tab1\], PRDA values of our HS-SICM imaging are shown, together with those of HS-SICM imaging in other labs. Although PRDA depends on sample height, our HS-SICM system set the highest record, PRDA = 95–, in the imaging of a peripheral edge region of a HeLa cell. This record was attained by two means: the ICG method granting a high SNR of current detection and the high resonance frequency of the Z-scanner. Since we have not yet introduced other devices proposed previously for increasing the temporal resolution, there is still room for further speed enhancement. One of the candidates to be added is (i) the ‘turn step’ procedure (applying a step function to the Z-piezodriver) developed by Simeonov and Sch[ä]{}ffer for rapid pipette retraction [@simeonov2019high]. Other candidates would be (ii) further current noise reduction of the transimpedance amplifier in the high-frequency region, and (iii) lock-in detection of AC current produced by modulation of the pipette Z-position with a small amplitude [@pastre2001characterization]. Since our Z-scanner has a much higher resonance frequency than ever before, we will be able to use high-frequency modulation ($\sim$) to achieve faster lock-in detection of AC current. For higher spatial resolution, we need to explore methods to fabricate a pipette with smaller $r_{\textrm{a}}$ and $r_{\textrm{o}}$, without significantly increasing $R_{\textrm{p}}$. One possibility would be the use of a short carbon nanotube inserted into the nanopore of a glass pipette with low $R_{\textrm{p}}$.
Conclusions
===========
The establishment of HS-SICM has been desired not only to improve the time efficiency of imaging but also to make it possible to visualize dynamic biological processes occurring in very soft, fragile or suspended (not on a substrate) samples that cannot be imaged with HS-AFM. As demonstrated in this study, the fast imaging capability of SICM can be achieved by improving the speed performance of pipette Z-positioning and ion current detection. The former was attained by the new Z-scanner and the implementation of vibration damping techniques in the Z-scanner. The latter was attained by the minimization of the total capacitance at the amplifier input and by the reduction of $R_{\textrm{p}}$ achieved with the ICG method, resulting in an increased SNR of ion current detection. The resulting $v_{\textrm{z}}$ reached for $r_{\textrm{a}}$ = 5– and for $r_{\textrm{a}}$ = . The value of is larger than the recent fastest record achieved by Simeonov and Sch[ä]{}ffer: for $r_{\textrm{a}}$ = 80–. Consequently, the highest possible imaging rate was enhanced $\sim$100 times, compared to conventional SICM systems. Even sub-second imaging is now possible for a scan area of 0.8 $\times$ with 50 $\times$ 50 pixels, without compromising spatial resolution. The achieved speed performance will contribute to significantly extending the application of SICM in biological studies.
Methods
=======
Fabrication of Nanopipettes
---------------------------
We prepared pipettes by pulling laser-heated quartz glass capillaries, QF100-70-7.5 (outer diameter, ; inner diameter, ; with filament) and Q100-30-15 (outer diameter, 1.0 ; inner diameter, ; without filament) from Sutter Instrument, using a laser puller (Sutter Instrument, P-2000). Just before pulling, the capillaries were softly plasma-etched for at under oxygen gas flow (), using a plasma etcher (South Bay Technology, PE2000), to remove unwanted contamination inside the pipette. The size and cone angle of each pipette tip were estimated from scanning electron micrographs (Zeiss, SUPRA 40VP), transmission electron micrographs (JEOL, JEM-2000EX), and the measured electrical resistance. Pipettes prepared from QF100-70-7.5 were used for the conductance measurement shown in Fig. \[FIG4\]c.
Measurements of Z-scanner Transfer Function and Time Domain Response
--------------------------------------------------------------------
The Z-scanner displacement was measured with a laser vibrometer (Polytech, NLV-2500 or Iwatsu, ST-3761). The transfer function characterizing the Z-scanner response was obtained using a network analyzer (Agilent Technology, E5106B). A square-like-waveform voltage generated with a function generator (NF Corp., WF1948) and then amplified with a piezodriver (MESTECK, M-2141; gain, $\times$15; bandwidth, ) was used for the measurement of the time-domain response of the Z-scanner.
Measurement of Pipette Conductance with and without ICG
-------------------------------------------------------
To evaluate the enhancement of pipette conductance (1/$R_{\textrm{p}}$) by ICG, we prepared a pair of pipettes simultaneously produced from one pulled capillary, which exhibited a difference in $r_{\textrm{a}}$ less than $\pm$10$\%$, as confirmed by electrical conductance measurements under an identical condition. One of the pair of pipettes was applied to a conductance measurement under ICG, while the other to a conductance measurement without ICG. To obtain each plot in Fig. \[FIG4\]c, we measured (1/$R_{\textrm{p}}$) for more than 5 sets of pipettes. To avoid the non-linear current-potential problem arising from an ion current rectification effect, we used $V_{\textrm{b}}$ ranging between and .
Measurements of Approach Curves and Response of Ion Current
-----------------------------------------------------------
To obtain the approach curves ($I_{\textrm{i}}$ vs $d$) shown in Fig. \[FIG5\]a, a digital 6th-order low-pass filter was used with a cutoff frequency of . After the measurement of each curve under $V_{\textrm{b}}$ = , the pipette was moved to a Z-position where a 5$\%$ reduction of the ion current had been detected in the approach curve just obtained. Next, the cut-off frequency of the low-pass filter was increased to 400 kHz for the measurement of fast ion current response and $V_{\textrm{b}}$ was set at a measurement value. Then, the experiment shown in Fig. \[FIG5\]b was performed. This series of measurements was repeated under different values of $V_{\textrm{b}}$ and with/without ICG. The same pipette was used throughout the experiments to remove variations that would arise from varied pipette shapes. After completing the experiments without ICG, the KCl solution inside the pipette was replaced with a KCl solution by immersing the pipette in a KCl solution for a sufficiently long time ($>$ ). The $R_{\textrm{p}}$ value of the pipette with ICG prepared in this way was confirmed to be nearly identical to that of a similar pipette filled with a KCl solution from the beginning.
HS-SICM Apparatus
-----------------
The HS-SICM apparatus used in this study was controlled with home-written software built with LabVIEW 2015 (National Instruments), which was also used for data acquisition and analysis. The HS-SICM imaging head includes the XYZ-scanner composed of AE0505D08D-H0F and AE0505D08DF piezoactuators for the Z and XY directions (both NEC/tokin), respectively, as shown in Fig. \[FIG2\]a. The Z- and XY-piezoactuators were driven using M-2141 and M-26110-2-K piezodrivers (both MESTEK), respectively. The overall control of the imaging head was performed with a home-written FPGA-based system (NI-5782 and NI-5781 with NI PXI-7954R for the Z- and XY-position control, respectively; all National Instruments). For coarse Z-positioning, the imaging head was vertically moved with an MTS25-Z8 stepping-motor-based linear stage (travel range, ; THORLABS). For the FF control, FPGA-integrated circuits were used, while homemade analog circuits were used for FB control and noise filtering. The sample was placed onto the home-built XY-coarse positioner with a travel range of , which was placed onto an ECLIPSE Ti-U inverted optical microscope (Nikon). The ion current through the tip nanopore was detected via transimpedance amplifiers CA656F2 (bandwidth, ; NF) and LCA-400K-10M (bandwidth, ; FEMTO). A WF1948 function generator (NF) was used for the application of the tip bias potential.
HS-SICM Imaging in Hopping Mode
-------------------------------
The tip was moved towards the surface at $v_{\textrm{z}}$ (0.05–) until the ion current reached a set point value. The set point was set at a 1–2$\%$ reduction of the “reference current” flowing when the tip was far away from the surface (10– when the ICG method was adopted). The voltage applied to the Z-scanner yielding the set point current was recorded as the sample height at the corresponding pixel position. Here, the output from the transimpedance amplifier was high-pass filtered at 5– to suppress the effect of current drift on SICM imaging. Then, the pipette was retracted by a hopping distance (– within 20–), depending on the hopping amplitude, during which the pipette was moved laterally towards the next pixel position. After full retraction, the tip approach was performed again. Through all the HS-SICM experiments, the pipette resistance did not show a significant change, indicating no breakage of the pipette tip during scanning. The value of FWHM was calculated from five height profiles of each HS-SICM image. The error of FWHM was estimated from the standard deviation.
Sample Preparation
------------------
**(1) Glass substrate**\
Cover slips (Matsunami Glass, C024321) cleaned with a piranha solution for 60 min at were used as a glass substrate.\
**(2) HeLa cells on glass**\
HeLa cells were cultured in Dulbecco’s Modified Eagle’s Medium (Gibco) supplemented with 10$\%$ fetal bovine serum. The cells were deposited on a MAS-coated glass (Matsunami Glass, S9441) and maintained in a humidified 5$\%$ CO$_2$ incubator at until observation. Then, the culture medium was changed to phosphate-buffered saline (Gibco, PBS). HS-SICM measurements were then performed at room temperature.\
**(3) HeLa cells on plastic dish**\
HeLa cells were seeded on plastic dishes (AS ONE, 1-8549-01) in Dulbecco’s Modified Eagle’s Medium (Gibco) supplemented with 10$\%$ fetal bovine serum. The cells were incubated at with 5$\%$ CO$_2$ and measured by HS-SICM 3–4 days after seeding. Before HS-SICM measurements, the culture medium was changed to phosphate-buffered saline (Gibco, PBS). HS-SICM measurements were performed at room temperature.\
**(4) Actin filaments on glass substrate**\
The glass surface was first coated with (3-aminopropyl) triethoxysilane (APTES; Sigma Aldrich). Then, a drop (12 ) of actin filaments prepared according to the method [@sakamoto2000direct] and diluted to 3 in Buffer A containing KCl, MgCl$_2$, EGTA, imidazole-HCl (pH7.6) was deposited to the glass surface and incubated for 15 . Unattached actin filaments were washed out with Buffer A.\
**(5) Actin filaments on lipid bilayer**\
The mica surface was coated with lipids containing 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) and 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(cap biotinyl) (biotin-cap-DPPE) in a weight ratio of 0.99:0.01, according to the method [@yamamoto2010high]. Partially biotinylated actin filaments in Buffer A prepared according to the method [@kodera2010video] were immobilized on the lipid bilayer surface through streptavidin with a low surface density. Unattached actin filaments were washed out with Buffer A.\
FEM Simulations
---------------
We employed three-dimensional FEM simulations to study electrostatics and ionic mass transport processes in the pipette tip with ICG. In the simulation, we used the rotational symmetry along the pipette axis to reduce the simulation time. Full details are described in SI 3. Briefly, the following set of equations was solved numerically:
$$\begin{aligned}
\label{eq_P}
&&\nabla^2 V = - \frac{F}{\varepsilon_0 \varepsilon} \sum_{j=1}^{2} Z_j c_j \\
\label{eq_NP}
&&{\bf J}_j = -D_j(c) \nabla c_j - \frac{F Z_j c_j D_j(c)}{RT} \nabla V , \nonumber \\
&&\nabla \cdot {\bf J}_j = 0 \\
\label{eq_SCD}
&&{\bf n} \cdot \nabla V = \frac{- \sigma}{\varepsilon_0 \varepsilon}.\end{aligned}$$
The Poisson equation Eq. (\[eq\_P\]) describes the electrostatic potential $V$ and electric field with a spatial charge distribution in a continuous medium of permittivity $\varepsilon$ containing the ions $j$ of concentration $c_j$ and charge $Z_j$. $F$ and $\varepsilon_0$ are the Faraday constant and the vacuum permittivity, respectively. We assume that the movement of the tip is sufficiently slow not to agitate the solution, and thus, the time-independent Nernst–Planck equation, Eq. (\[eq\_NP\]), holds, where $\textbf{J}_j$, $D_j (c)$, $R$ and $T$ are the ion flux of species $j$, the concentration-dependent diffusion constant of $j$, the gas constant, and the temperature in kelvin, respectively. This equation describes the diffusion and migration of the ions. The boundary condition for Eq. (\[eq\_NP\]) is determined so that a zero flux or constant concentration condition is satisfied. On the other hand, the boundary conditions of Eq. (\[eq\_P\]) are given so that a fixed potential or the spatial distribution of the surface charge $\sigma$, as described in Eq. (\[eq\_SCD\]), holds. In Eq. (\[eq\_SCD\]), $\textbf{n}$ represents the surface normal vector.
Supplementary Materials {#supplementary-materials .unnumbered}
=======================
The following data are available as [Supplementary material](https://aip.scitation.org/doi/suppl/10.1063/1.5118360): performance of XYZ-Scanner, current noise in our SICM system, finite-element simulation, dynamic response of measured ion current with and without the use of the ICG method, simulated approach curves obtained with and without the use of the ICG method, current noise caused by capacitive couplings between Z-scanner and signal line of current detection, high-speed SICM imaging of grating samples and the peripheral edge of HeLa cells, HS-SICM images of microvilli dynamics of a HeLa cell ([Movie 1](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/movie_2.mp4)), and HS-SICM images of polymers ([Movie 2](https://aip.scitation.org/doi/suppl/10.1063/1.5118360/suppl_file/movie_2.mp4)).\
This work was supported by a grant from JST SENTAN (JPMJSN16B4 to S.W.), a Grant for Young Scientists from Hokuriku Bank (to S.W.), JSPS Grant-in-Aid for Young Scientists (B) (JP26790048 to S.W.), JSPS Grant-in-Aid for Young Scientists (A) (JP17H04818 to S.W.), JSPS Grant-in-Aid for Scientific Research on Innovative Areas (JP16H00799 to S.W.), JSPS Grant-in-Aid for Challenging Exploratory Research (JP18K19018 to S.W.), and JSPS Grant-in-Aid for Scientific Research (S) (JP17H06121 and JP24227005 to T.A.). This work was also supported by a Kanazawa University CHOZEN project and the World Premier International Research Center Initiative (WPI), MEXT, Japan.
---
abstract: |
Cyber threats affect all kinds of organisations. Risk analysis is an essential methodology for cybersecurity as it allows organisations to deal with the cyber threats potentially affecting them, prioritise the defence of their assets and decide what security controls should be implemented. Many risk analysis methods are present in cybersecurity models, compliance frameworks and international standards. However, most of them employ risk matrices, which suffer shortcomings that may lead to suboptimal resource allocations. We propose a comprehensive framework for cybersecurity risk analysis, covering the presence of both adversarial and non-intentional threats and the use of insurance as part of the security portfolio. A case study illustrating the proposed framework is presented, serving as template for more complex cases.
**Keywords**: cybersecurity, risk analysis, adversarial risk analysis, cyber insurance, resource allocation
author:
- 'D. Rios Insua'
- 'A. Couce-Vieira'
- 'J.A. Rubio'
- 'W. Pieters'
- 'K. Labunets'
- 'D. G. Rasines'
title: An Adversarial Risk Analysis Framework for Cybersecurity
---
Introduction {#sec:intro}
============
At present, all kinds of organisations are being critically impacted by cyber threats [@Anderson2008; @Cyberwarfare2013]. Cyberspace is even described as a fifth military operational space in which movements by numerous countries are common [@LeakSource2014]. Risk analysis is a fundamental methodology to help manage such issues. With it, organisations can assess the risks affecting their assets and what security controls should be implemented to reduce the likelihood of such threats and/or their impacts, should they materialise.
Numerous frameworks have been developed to screen cybersecurity risks and support resource allocation, including CRAMM [@CRAMM2003], ISO 27005 [@ISO27005], MAGERIT [@Magerit2012], EBIOS [@ANSSI1995], SP 800-30 [@NIST2012], or CORAS [@CORAS2001]. Similarly, several compliance and control assessment frameworks, like ISO 27001 [@ISO27001], Common Criteria [@CC2012], or CCM [@CSA2016] provide guidance on the implementation of cybersecurity best practices. These standards suggest detailed security controls to protect an organisation’s assets against risks. They have virtues, particularly their extensive catalogues of threats, assets and security controls, which provide detailed guidelines for the implementation of countermeasures and the protection of digital assets. Even so, much remains to be done regarding cybersecurity risk analysis from a methodological point of view. Indeed, a detailed study of the main approaches to cybersecurity risk management and control assessment reveals that they often rely on risk matrices, with shortcomings well documented in Cox [@Cox2008]: compared to more stringent methods, the qualitative ratings in risk matrices (likelihood, severity and risk) are more prone to ambiguity and subjective interpretation, and very importantly for our application area, they systematically assign the same rating to quantitatively very different risks, potentially inducing suboptimal security resource allocations. Hubbard and Seiersen [@hubbard] and Allodi and Massacci [@allodi] provide additional views on the use of risk matrices in cybersecurity. Moreover, with few exceptions, such as IS1 [@HMG], those methodologies do not explicitly take into account the intentionality of certain threats. Thus, ICT owners may obtain unsatisfactory results in relation to the proper prioritisation of risks and the measures they should implement.
In this context, a complementary way of dealing with cyber risks through risk transfer is emerging: cyber insurance products of a very different nature have been introduced in recent years, although not in every country, by companies like AXA, Generali, Allianz, or Zurich. However, cyber insurance has yet to take off [@Survey2017].
In this paper we propose a more rigorous framework for risk analysis in cybersecurity. We emphasise adversarial aspects to better predict threats, and we include cyber insurance. Sect. \[sec:araframe\] presents our framework, supported by a case study in Sect. \[sec:casestudy\]. We conclude with a brief discussion.
A cybersecurity adversarial risk analysis framework {#sec:araframe}
===================================================
We introduce our integrated risk analysis approach to facilitate resource allocation decision-making regarding cybersecurity. Our aim is to improve current cyber risk analysis frameworks, introducing dynamic schemes that incorporate all relevant parameters, including decision-makers’ preferences and risk attitudes [@Clemen2013] and the intentionality of adversaries. Moreover, we introduce decisions concerning cyber insurance adoption to complement other risk management decisions through risk transfer. Fielder et al. [@fielder2016] review and introduce various approaches to cyber security investment, which cover optimisation and/or game theoretic elements, under strong common knowledge assumptions. Our framework combines optimisation with an adversarial risk analysis (ARA) approach to deal with adversarial agents.
We present the framework stepwise, analysing the elements involved progressively. We describe the models through influence diagrams (ID) and bi-agent influence diagrams (BAID) [@Banks2015] detailing the relevant elements: assets, threats, security controls, costs and benefits. We provide a brief verbal description of the diagrams introduced and a generic mathematical formulation at each step.
System performance evaluation {#subsec:sysperformance}
-----------------------------
Fig. \[basicspe\] describes the starting outline for a system under study. Costs associated with system operation over the relevant planning period are indicated by $c$. Such costs are typically uncertain, modelled with a probability distribution $p(c)$. We introduce a utility function $u(c)$ [@Ortega2017] over costs to cater for risk attitudes. The evaluation of system performance under normal conditions, i.e. in the absence of relevant cyber incidents, is based on its associated expected utility $$\psi_n = \int u(c) \, p(c) \ dc .$$
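As a purely illustrative numerical sketch (not part of the formal framework), $\psi_n$ can be approximated by Monte Carlo once $p(c)$ and $u(c)$ are specified. Here we assume a lognormal operating-cost distribution and an exponential (constant risk aversion) utility; both are hypothetical modelling choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(cost, rho=1e-5):
    """Illustrative exponential utility, decreasing in the (monetary) cost."""
    return -np.expm1(rho * cost)

# Hypothetical operating-cost model p(c): lognormal with median 50k euros
costs = rng.lognormal(mean=np.log(50_000), sigma=0.3, size=100_000)
psi_n = utility(costs).mean()   # Monte Carlo estimate of the expected utility
print(f"psi_n ~ {psi_n:.4f}")
```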
![Basic influence diagram for performance evaluation.[]{data-label="basicspe"}](basicspe)
![Cybersecurity attributes for performance evaluation.[]{data-label="ciaspe"}](ciaspe)
This basic scheme can be extended in several directions. For example, there could be several performance functions. A typical case is to consider attributes concerning information availability ($a$), integrity ($i$) and confidentiality ($s$) [@Mowbray2013], Fig. \[ciaspe\]. These nodes could be, in turn, predecessors of the cost node. We use $p(a, i, s)$ as the distribution modelling uncertainty about system performance. If $u(a, i, s)$ represents the corresponding multi-attribute utility, the expected utility would be $$\psi_{n} = \iiint u(a, i, s) \, p(a, i, s) \ da \, di \, ds .$$ We use $p(a, i, s)$ if interrelationships between such attributes are expected. If this were not so, e.g. in the case of independence, we would describe the model graphically as in Fig. \[ciaspe\], through $$p(a, i, s) = p(a) \, p(i) \, p(s) .$$
Cybersecurity risk assessment {#subsec:ranacyber}
-----------------------------
Adopting the basic scheme in Fig. \[basicspe\], on which we focus to simplify the exposition, we consider the problem of cybersecurity risk assessment in Fig. \[racyber\]. For instance, consider a model with just two threats, one of them ($t_1$) physical (e.g., fire) and another one ($t_2$) cyber (e.g., DDoS attack[^1]). Both $t_1$ and $t_2$ are random variables. We also consider two types of assets, one traditional (e.g., facilities) and the other cyber (e.g., computers). Impacts over these assets are, respectively, $c_t$ and $c_c$ and, typically, uncertain. If there is a relationship between them given either threat, the corresponding model would be of the form $$p(c_t, c_c | t_1, t_2) \, p(t_1, t_2),$$ where $ p(t_1, t_2) $ describes the probability of the threats happening, and $ p(c_t, c_c | t_1, t_2) $ describes the probability over asset impacts, given the eventual occurrence of threats. Costs are added at the total cost node $c$, which aggregates those under normal circumstances with those due to the incidents. Then, the expected utility taking into account the threats and specific dependencies in Fig. \[racyber\] would be $$\psi_r = \int \dots \int \
u(c_n + c_t + c_c) \, p(c_n) \, p(c_{t} | t_{1}, t_{2}) \, p(c_{c} | t_{1}, t_{2}) \, p(t_{1}) \, p(t_{2}) \
dt_2 \, dt_1 \, dc_c \, dc_t \, dc_n.$$
![Risk assessment in cybersecurity.[]{data-label="racyber"}](racyber)
We have assumed that consequences are additive, but we could have a generic utility $ u(c_n, c_c, c_t) $. Finally, we evaluate the expected utility loss $ \psi_n - \psi_r $ when the threats are taken into account, compared with normal conditions. When it is sufficiently large, incidents are expected to harm the system significantly and we should manage such risks.
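A minimal Monte Carlo sketch of this comparison could look as follows; the threat probabilities, impact distributions and utility function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

def utility(cost, rho=1e-6):
    """Illustrative exponential utility over total annual cost (euros)."""
    return -np.expm1(rho * cost)

# Baseline operating cost p(c_n): illustrative lognormal
c_n = rng.lognormal(np.log(50_000), 0.3, N)

# Threats: t1 = fire, t2 = DDoS; occurrence probabilities are assumptions
t1 = rng.random(N) < 0.01
t2 = rng.random(N) < 0.20

# Impacts given threats: c_t over traditional assets, c_c over cyber assets
c_t = np.where(t1, rng.gamma(2.0, 40_000, N), 0.0)
c_c = (np.where(t1, rng.gamma(2.0, 10_000, N), 0.0)
       + np.where(t2, rng.gamma(2.0, 15_000, N), 0.0))

psi_n = utility(c_n).mean()               # performance under normal conditions
psi_r = utility(c_n + c_t + c_c).mean()   # performance under threats
print(f"expected-utility loss psi_n - psi_r = {psi_n - psi_r:.4f}")
```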
The model can be extended to include a bigger number of threats and assets, as well as additional types of costs. Finally, several utility nodes could be incorporated to describe the preferences of multiple stakeholders.
Risk mitigation in cybersecurity risk management {#subsec:rmancyber}
------------------------------------------------
The next step adds security controls to the model. We introduce a portfolio of them to reduce the likelihood of threats and/or their impact (Fig. \[rmcyber\]). Examples of controls include firewalls, employee training, or making regular backups.
![Risk assessment of cybersecurity controls.[]{data-label="rmcyber"}](rmcyber)
For simplicity, in Fig. \[rmcyber\] we assume that all controls influence all events and impacts. This will not always be so: a fire detector makes a fire less harmful, but not less likely; resource accounting mechanisms [@DDoS], which manage access based on user privileges, make a DDoS less likely, but usually not less harmful. Node $e$ describes the portfolio of controls, whose cost we model through the distribution $p(c_e | e)$. Controls might influence threat likelihoods $p(t_1 | e)$ and $p(t_2 | e)$, as well as asset impact likelihoods $p(c_t | t_1, t_2, e)$ and $p(c_c |t_1, t_2, e)$. All costs are aggregated in the total cost node $c$, under appropriate additivity assumptions.
In this case, the expected utility when portfolio $e$ is implemented is $$\psi (e) = \int \dots \int \
u(c_n + c_e + c_t + c_c) \, p(c_n) \, p(c_e | e) \, p(c_t | t_1, t_2, e) \, p(c_c | t_1, t_2, e) \, p(t_1 | e) \, p(t_2 | e) \ dt_2 \, dt_1 \, dc_e \, dc_t \, dc_c \, dc_n .$$ We would then look for the maximum expected utility portfolio solving for $$\psi_{e}^{*} = \max_{e \in E} \psi (e),$$ where $E$ is the set of feasible portfolios. Based on the available controls, we define portfolios that meet different constraints, which may be economic (e.g., not exceeding a budget), legal (e.g., complying with data protection laws), logistic or physical.
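To illustrate the search for $\psi_{e}^{*}$, the sketch below enumerates a small hypothetical catalogue of controls under a budget constraint and selects the maximum expected utility portfolio by Monte Carlo; control costs, their effects on likelihoods and impacts, and the underlying distributions are assumptions for demonstration only.

```python
import itertools
import numpy as np

def utility(cost, rho=1e-6):
    """Illustrative exponential utility over total annual cost (euros)."""
    return -np.expm1(rho * cost)

# Hypothetical catalogue: control -> (annual cost, factor on P(t2), factor on impact)
controls = {
    "firewall":        (3_000, 0.6, 0.9),
    "ddos_protection": (6_000, 0.3, 0.5),
    "training":        (2_000, 0.8, 0.8),
}
budget = 10_000   # assumed economic constraint defining the feasible set E

def expected_utility(portfolio, N=100_000):
    # Common random numbers across portfolios make the comparison fairer
    rng = np.random.default_rng(42)
    c_e = sum(controls[k][0] for k in portfolio)
    p_t2, impact_scale = 0.20, 1.0          # baseline attack probability and impact
    for k in portfolio:
        p_t2 *= controls[k][1]
        impact_scale *= controls[k][2]
    c_n = rng.lognormal(np.log(50_000), 0.3, N)
    t2 = rng.random(N) < p_t2
    c_c = np.where(t2, impact_scale * rng.gamma(2.0, 15_000, N), 0.0)
    return utility(c_n + c_e + c_c).mean()

feasible = [p for r in range(len(controls) + 1)
            for p in itertools.combinations(controls, r)
            if sum(controls[k][0] for k in p) <= budget]
best = max(feasible, key=expected_utility)
print("maximum expected utility portfolio:", best)
```

The same enumeration extends directly to portfolio-insurance pairs by adding the premium and the covered fraction of impacts to the simulated total cost.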
Cyber insurance in cybersecurity risk management {#subsec:cyberins}
------------------------------------------------
As a relevant element of increasing interest, we introduce cyber insurance. Its costs will typically depend on the implemented portfolio of controls, as in Fig. \[rtcyber\]: the better such portfolio is, the lower the premium will be. This cost will also depend on the assets to be protected. We could include the insurance within the portfolio of controls; however, it is convenient to represent them separately, since premiums will typically depend on the controls deployed.
![Cyber insurance in cybersecurity risk assessment.[]{data-label="rtcyber"}](rtcyber)
Decision node $i$ describes the cyber insurance adopted, with entailed costs $c_i$ with probability $p(c_i | i, e)$, although they will usually be deterministic. In addition, insurance and security controls will typically affect impacts, modelled through $p(c_t | t_1, t_2, e, i)$ and $p(c_c | t_1, t_2, e, i)$. Costs are aggregated in the total cost node $c$. The expected utility when portfolio $e$ is implemented together with insurance $i$ is $$\begin{gathered}
\psi (e, i) = \int \dots \int \
u(c_n + c_i + c_e + c_t + c_c) \, p(c_n) \, p(c_i | i, e) \, p(c_e | e) \, p(c_t | t_1, t_2, e, i) \, p(c_c | t_1, t_2, e, i) \, \times \\ \times \, p(t_1 | e) \, p(t_2 | e) \ dt_2 \, dt_1 \, dc_c \, dc_t \, dc_e \, dc_i \, dc_n.\end{gathered}$$ We seek the maximum expected utility portfolio-insurance pair through $$\max_{e \in E , i\in I} \psi (e, i),$$ where $I$ represents the insurance catalogue. The pair $(e,i)$ could be further restricted jointly, e.g., by a common budget constraint or legal requirements.
Adversarial risk analysis in cybersecurity {#subsec:aracyber}
------------------------------------------
As discussed previously, intentionality is a key factor when analysing certain cyber threats. We shall use ARA [@Banks2015] to model the intentions and strategic behaviour of adversarial cyber threats. Under ARA, the attacker has his own utility function $u_A$, seeking to maximise the effectiveness of his attack. This paradigm is applicable to multiple types of strategic interactions between attackers and defenders. Two of them are especially relevant in cybersecurity. First, the sequence *defence-attack*, in which the Defender deploys her security controls and the Attacker is able to observe them prior to attacking. Second, the sequence *defence-attack-defence*, in which the Defender deploys her preventive controls, then the Attacker observes them to decide his attack and, finally, the Defender recovers from the attack, should it be successful.
### Defence-attack model
The original examples, Figs. \[racyber\] and \[rmcyber\] evolve into Fig. \[figarada1\], modelling an adversarial case through a BAID with a Defender and an Attacker: physical threat $t_{1}$ remains unintentional whereas cyber threat $t_{2}$ becomes adversarial through a decision node for the Attacker, who needs to decide whether or not to launch an attack to his benefit. It corresponds to a sequential defence-attack model [@Banks2015].
![Adversarial risk analysis in cybersecurity: defence-attack problem.[]{data-label="figarada1"}](aradefatt)
![Attacker problem in the defence-attack model.[]{data-label="figarada2"}](araatt)
The Defender problem is described in Fig. \[rmcyber\]. Its resolution was covered in Sect. \[subsec:rmancyber\]. There, the cyber attack is described probabilistically through $p(t_{2} | e)$, which represents the probability that the Defender assigns to cyber threat $t_{2}$ materialising, had portfolio $e$ been adopted. However, the strategic nature of this problem, Fig. \[figarada1\], requires the analysis of the Attacker decision about which attack to perform. Under the ARA defence-attack paradigm, the Defender should analyse the Attacker strategic problem in Fig. \[figarada2\].
Specifically, given portfolio $e$, and assuming that the Attacker maximises expected utility, the Defender would compute, for each attack $t_{2}$, the expected utility for the Attacker $$\psi_A (t_2 | e) =
\iiint u_A (t_2, c_t, c_c) \, p_A (c_t | t_1, t_2, e) \, p_A (c_c |t_1, t_2, e) \, p_A (t_1 | e) \ dt_1 \, dc_c \, dc_t,$$ where $u_A$ and $p_A$ are, respectively, the utilities and probabilities of the Attacker. The Defender must then find the attack maximising the Attacker’s expected utility, $$\max_{t_2\in T_2} \psi_A (t_2 | e) ,$$ where $T_2$ is the attack set.
However, the Defender will not typically know $u_{A}$ and $p_{A}$. Suppose we are capable of modelling her uncertainty about them with random probabilities $P_{A}$ and a random utility function $U_{A}$ [@Banks2015]. Then, the optimal random attack given $e$ is $$T^{*}_{2} (e) =
\arg\max_{t_2 \in T_2}
\iiint U_A (t_2, c_t, c_c) \, P_A (c_t | t_1, t_2, e) \, P_A (c_c | t_1, t_2, e) \, P_A (t_1 | e) \ dt_1 \, dc_c \, dc_t.$$ Finally, the distribution over attacks that we were looking for satisfies $$p (t_2 | e) = P \big(T^*_2 (e) = t_2 \big) ,$$ assuming that $T_2$ is discrete (e.g., when referring to attack options), and similarly if it is continuous (e.g., when referring to attack efforts). Such a distribution can be estimated through Monte Carlo (MC) simulation as in Algorithm \[algo1\] (Appendix), where the distribution of random utilities and probabilities is designated by $$F =
\Big(
U_{A} (t_2, c_t, c_c), P_A (c_t | t_1, t_2, e), P_A (c_c | t_1, t_2, e), P_A (t_1 | e)
\Big)$$
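The following sketch indicates how such an MC approximation of $p(t_2 | e)$ might be organised for a fixed portfolio $e$: in each iteration, random utilities and probabilities are drawn for the Attacker, his optimal attack is computed, and the relative frequencies of the optimal attacks estimate the desired distribution. The attack set and all distributions below are hypothetical placeholders for $F$.

```python
import numpy as np

rng = np.random.default_rng(1)
attacks = ["no_attack", "ddos_small", "ddos_large"]   # illustrative attack set T2

def optimal_attack_draw():
    """One draw from the Defender's random model (U_A, P_A) of the Attacker,
    for a fixed defender portfolio e (kept implicit in the assumed numbers)."""
    gain = {"no_attack": 0.0,
            "ddos_small": rng.gamma(2.0, 5_000),
            "ddos_large": rng.gamma(2.0, 12_000)}      # uncertain market-share gain
    cost = {"no_attack": 0.0, "ddos_small": 1_000, "ddos_large": 3_000}
    p_detect = {"no_attack": 0.0,
                "ddos_small": rng.beta(2, 8),
                "ddos_large": rng.beta(4, 6)}          # uncertain detection probability
    penalty = rng.gamma(2.0, 20_000)                   # uncertain loss if detected
    rho = rng.uniform(1e-5, 1e-4)                      # random risk attitude of the Attacker
    u = lambda x: -np.expm1(-rho * x)                  # random utility, increasing in net gain

    def expected_utility(a):
        net = gain[a] - cost[a]
        return p_detect[a] * u(net - penalty) + (1 - p_detect[a]) * u(net)

    return max(attacks, key=expected_utility)

M = 20_000
draws = [optimal_attack_draw() for _ in range(M)]
p_t2_given_e = {a: draws.count(a) / M for a in attacks}   # estimate of p(t2 | e)
print(p_t2_given_e)
```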
### Defence-attack-defence model
As mentioned, cybersecurity risk management also comprises reactive measures that can be put in place to counter an attack, should it happen. Therefore, we split the security portfolio into two groups: preventive security controls $e_p$ and reactive security controls $e_r | t_2$. This corresponds to a sequential defence-attack-defence model [@Banks2015] in which the first move is by the Defender (preventive portfolio $e_p$), the second is by the Attacker (attack after observing preventive controls, $t_2|e_p$) and the third one is by the Defender (reactive portfolio $e_r|t_2$).
![Adversarial risk analysis in cybersecurity: defence-attack-defence problem.[]{data-label="figaradad1"}](aradad_defatt)
The Defender problem is solved similarly to Sect. \[subsec:rmancyber\], reflecting changes caused by splitting the security control node. Specifically, the expected utility when portfolio $ e=(e_p, e_r) $ is implemented is $$\begin{gathered}
\psi (e) = \int ...\int
u(c_n + c_e + c_t + c_c) \, p(c_n) \, p(c_e | e_p, e_r) \, p(c_t | t_1, t_2, e_p, e_r) \, p(c_c | t_1, t_2, e_p, e_r) \, \times \\ \times \, p(t_1 | e_p) \, p(t_2 | e_p) \ dt_2 \, dt_1 \, dc_c \, dc_t \, dc_e \, dc_n .\end{gathered}$$ We would then look for the maximum expected utility portfolio $$(e_p^*, e_r^*) = \underset{(e_p, e_r) \in E_p \times E_r}{\arg\max} \psi(e_p, e_r),$$ where $E_p$ and $E_r$, respectively, define constraints for preventive and reactive portfolios, some of which could be joint. The Attacker problem providing $p(t_2|e_p)$ would be solved in a similar fashion to the defence-attack case.
A case study template {#sec:casestudy}
=====================
We illustrate our framework for cybersecurity risk analysis with a defence-attack case study, which can serve as a template for more complex cases. The Defender is an SME dedicated to document management with 60 people and 90 computers. A cyber attack might affect, mainly, the online document management service. For confidentiality reasons, the number of relevant issues has been simplified and data conveniently masked. This simplification will allow us to better illustrate key modelling concepts and the overall scheme to follow for other case studies. Moreover, we include uncertain phenomena in which data are available and others in which they are not and, thus, we shall need to rely on expert judgement [@dias2018]. Prices and rates refer to Spain, where the incumbent SME is located.
In essence, we first structure the problem, identifying assets, threats and security controls. The latter may have implementation costs in exchange for reducing threat likelihoods and/or eventual impacts. Subsequently, we assess the impacts that may have an effect on asset values to find the optimal risk management portfolio. Since adversarial threats are included, we also model the Attacker decision problem. Indeed, in this case there is a single potential Attacker, who contemplates a DDoS attack with the objective of disrupting the SME's services, causing operational disruption and reputational damage, with the consequent loss of customers to competitors, besides contractual penalties potentially affecting the SME's continuity. Then, we simulate from this problem to obtain the attack probabilities, which feed back into the Defender problem. In this way an optimal defence can be obtained. We consider a one-year planning horizon.
The problem we focus on is finding the optimal security portfolio and insurance product for the company, in the sense of maximising expected utility. Other formulations are discussed in Sect. \[subsec:further\].
Problem structuring {#subsec:problemstructuring}
-------------------
We structure the problem through the BAID in Fig. \[fig8\]. Lighter nodes refer to issues concerning solely the Defender; darker nodes refer to issues relevant only for the Attacker; nodes with a striped background affect both. Should there be several attackers, we would use more background patterns or colours. Arcs have the same interpretation as in [@shachter1986evaluating]. The only non-standard arc is that linking the security controls node and the attack node, meaning that the Attacker will implement his action once he identifies the controls adopted by the Defender.
![image](casebaid.pdf)
### Assets
We first identify the Defender assets at risk. We could obtain them from catalogues like those of the methodologies mentioned in the Introduction. Here we consider:
- *Facilities*: Offices potentially affected by threats. Without them, the organisation could not operate.
- *Computer equipment*: The data centre and workstations essential for this organisation. Should they be affected, costs could be substantial.
- *Market share*, directly impacting the company profits.
Other assets not considered in this case include the company’s development software, business information, the mobile computing elements or the staff.
### Non-intentional threats
We consider threats over the identified assets deemed relevant and having non-intentional character. This may include threats traditionally insurable as well as new ones potentially cyber insurable. We use a simplification of the catalogues in the methodologies in the introduction:
- *Fire*: It may affect facilities, as well as computers, which could even be destroyed. No impact over market share is contemplated, as the organisation has a backup system. We assume that a fire can occur only by accident, not considering the possibility of sabotage.
- *Computer virus*: Aimed at disrupting normal operations of computer systems. We consider this threat non-intentional, as most viruses propagate ubiquitously: their occurrence tends to be random from the defender perspective. It may degrade computer performance.
We model each threat with a probabilistic node associated with the Defender problem. Other non-intentional threats, not considered here, could be water damage, power outages or employee errors.
### Intentional threats
This category may include both cyber and physical threats. Again, we may use catalogues from, e.g., MAGERIT. We should identify the corresponding attackers, as well as their attack options available. In our case, we just consider a relevant attacker.
- *Competitor attack*: Our competitor may attempt a DDoS, to undermine the availability of the Defender’s site, compromising her customer services. Should it happen, it would negatively impact the Defender’s market share, damaging her reputation and, consequently, losing customers, who would be gained by the Attacker. The decision is whether to launch the attack and the number of attempts.
We integrate attack options into a single decision node associated with the Attacker problem. Other intentional attacks, not modelled here, could include an abuse of access privileges, launching an advanced persistent threat, insiders or bombs.
### Uncertainties affecting threats
We consider now those uncertainties affecting the Defender’s assets.
- *Duration of DDoS*, which will depend on the number of attacks and the security controls deployed.
- *Fire duration*, which can be reduced with an anti-fire system.
Each one is modelled with a probabilistic node. Other related uncertainties could come, e.g., from a more detailed modelling of the virus (e.g., infection probability given the OS) or the fire propagation to adjacent buildings.
### Attacker uncertainties
Additionally, we consider uncertainties that the Attacker might find relevant in his problem and affect only him.
- *Detection of Attacker*. If detected, his reputation would suffer and might face legal prosecution.
Each of them is modelled with a probabilistic node. Other attacker uncertainties include the number of customers affected by the DDoS or the performance of the attack platform.
### Relevant security controls
We identify security controls relevant to counter the threats. We may use listings from the above mentioned methodologies. In our case we consider:
- *Anti-fire system*. It can detect a fire facilitating early mitigation.
- *Firewall*. It protects a network from malicious traffic.
- Implementation of *risk mitigation procedures* for cybersecurity and fire protection.
- *Cloud-based DDoS protection*, diverting DDoS traffic from the target to a cloud-based site absorbing malicious traffic.
We associate a Defender decision node with the security controls. Other measures, not included here, could be a system resource management policy, a cryptographic data protocol or a wiring protection.
### Insurance
We also consider the possibility of purchasing insurance to transfer risk. The premium will depend on the protected assets and contextual factors such as location, company type and, quite importantly, the implemented controls. Available insurance products are in Table \[table2\].
**Product**        **Coverage**
------------------ -----------------------------------------------------------
*No insurance*     None
*Traditional*      80% of fire losses over facilities and computer equipment
*Cyber*            80% of the expenses related to virus removal
*Comprehensive*    All of the above.

: \[table2\] Insurance product features.
We associate a Defender decision node with the insurance to be contracted. As its cost depends on the implemented controls, we include the corresponding decision node as a predecessor.
### Impacts for Defender
Having identified threats and assets, we present their potential impacts over the Defender’s interests:
- *Impact over facilities*: Economic losses caused by fire over them.
- *Impact over computers*: Economic losses caused by fire or viruses over computers. We split them into insurable impacts and non-insurable ones. We need this split to calculate the eventual insurance coverage.
- *Impact over market share*: Costs due to market share lost.
We model each impact with a probabilistic node. We also consider the impacts associated with safeguards.
- *Cost of security controls* implemented by Defender.
- *Cost of insurance* acquired.
- *Insurance coverage*.
Finally, a node aggregates all Defender’s consequences:
- *Total costs*: It summarises the above to establish the final monetary impact of the Defender problem.
The above cost nodes will be deterministic. Besides, we could also include other types such as corporate image or staff safety.
### Impacts for Attacker
We consider the following impacts:
- *Attacker earnings* from increasing market share, transferred from that lost by the Defender.
- *Costs when detected*, covering eventual sanctions by the regulator, legal costs as well as loss of customers and reputation, if detected.
- The final *results of attack* combines all previous earnings and costs, as well as those of undertaking the attack, such as acquiring malicious tools or hiring hackers.
We model the costs when detected as a probabilistic node. The remaining nodes are deterministic.
### Preferences
Value nodes describe how the corresponding agent evaluates consequences. We use the expected utility paradigm. We, therefore, include these nodes:
- *Utility of Defender:* Models the Defender preferences and risk attitudes over the total costs.
- *Utility of Attacker:* It describes the Attacker preferences and risk attitudes.
We include a value node for each of the utility functions.
### Defender and Attacker problems
Figs. \[fig9a\] and \[fig9b\] respectively represent the Defender and Attacker problems derived from the strategic problem in Fig. \[fig8\]. For the Defender problem, this converts the Attacker’s decision nodes into chance nodes and eliminates the Attacker’s nodes not affecting the Defender problem, as well as the corresponding utility node. Similarly for the Attacker. We use both diagrams to guide judgement elicitation from the Defender.
![image](casedef.pdf)
![Attacker problem.[]{data-label="fig9b"}](caseatt.pdf)
Assessing the Defender’s non-strategic beliefs and preferences {#subsec:defenderproblem}
--------------------------------------------------------------
We now provide the quantitative assessment of the Defender’s beliefs and preferences not requiring strategic analysis. Some of them will be based on data and expert judgement, others just on expert judgement, due to the lack of data typical of many cybersecurity environments. In this way, we populate most nodes in her problem. We incorporate in Sect. \[subsec:attackerproblem\] those requiring strategic analysis. Finally, in Sect. \[subsec:simuldefender\] we analyse the Defender problem to find the optimal controls and insurance. Where relevant, we provide the pertinent utility $u()$, random utility $U_A()$, probability $p()$, random probability $P_A()$ or deterministic model at the corresponding node.
### Economic value of Defender assets
We consider the following values for the assets at risk:
- *Facilities*: Their value is 5,000,000€, reflecting only acquisition costs.
- *Computer equipment*: Valued at 200,000€, under similar considerations.
- *Market share*: Currently estimated at 50%. Translated into next year foreseen profits, we value it at 1,500,000€.
### Modelling security controls {#subsubsec:modcontrols}
#### Security controls decision, $ s $:
The security portfolios that the Defender could implement derive from these options:
- Install an anti-fire system.
- Install a firewall to protect the infrastructure.
- Train employees on safety and cybersecurity procedures.
- Subscribe a cloud-based DDoS protection system with a choice of 2, 5, 10 or 1000 gbps (1 tbps) capacity.
We thus have 40 portfolios. These could be further constrained by, e.g., a budget, as in Sect. \[subsec:further\].
#### Cost of security controls, $ c_s | s $:
This node models the cost of implemented controls. Table \[table31\] provides their costs, from which we derive the portfolio costs.
**Security control** **Cost**
----------------------- ----------
Anti-fire system 1,500
Firewall 2,250
Risk mitigation proc. 2,000
: \[table31\] Cost of individual security controls.
### Modelling the insurance product {#subsubsec:modinsurance}
#### Insurance decision, $ i $:
This refers to the insurance product that the Defender could purchase (Table \[table32\]) once the controls have been selected.
#### Insurance cost, $ c_i | i $:
This models the insurance premiums. It depends on the controls implemented by the organisation (Table \[table32\]).
-------- ------ ----------- ----- -------
****
None Anti-fire Proc.
None 0 0 0 0
Trad. 500 300 500 500
Cyber 300 300 200 250
Compr. 700 500 600 650
-------- ------ ----------- ----- -------
: \[table32\] Insurance product cost.
#### Insurance coverage, $ g_i | i, b, q_i $:
This node models $ g_i $, the insurance product coverage reflected in Table \[table2\]. Traditional and comprehensive insurances cover 80% of burnt facilities and computer costs. The cyber and comprehensive insurances will cover 80% of the expenses related with virus removal.
### Modelling fire risk {#subsubsec:modfire}
#### Fire likelihood, $ p (f) $:
This node provides the annual probability of suffering a fire in our facility. We use data from the Vitoria fire brigade [@vitoria09], concerning interventions on industrial buildings (Table \[table3\]).
**Year** **Buildings** **Fires**
---------- --------------- -----------
2005 1220 32
2006 1266 29
2007 1320 30
2008 1347 28
2009 1314 28
: \[table3\] Industrial fire data in Vitoria (2005-2009).
The fire rate remains fairly stable over the years. We estimate the probability that an organisation suffers a fire in a year using a beta-binomial model with prior $\beta e (1/2, 1/2)$. The posterior would be $$f | \textrm{data} \sim \beta e \Big(1/2 + \sum_{i=1}^{5} x_{i}, 1/2 + \sum_{i=1}^{5} (n_{i} - x_{i}) \Big) \equiv \beta e(147.5, 6320.5),$$ where $x_{i}$ designates the number of fires affecting industrial buildings and $n_{i}$, that of such buildings in the $i-$th year, $i= 1,...,5$. Such distribution can be reasonably summarised through its posterior expectation, $\hat{p} = 0.022$, since the posterior variance is small; under this estimate, the probability that there are no fires in a year is $ p(0) = 1 - \hat{p} = 0.978$. The number of fires can be approximated with a Poisson $\mathcal{P}(0.022)$ distribution. However, we consider at most one fire, since probabilities beyond that are tiny ($p (f > 1) = 0.00024 $). Thus, the number $f$ of fires will follow $$f \sim \min [ 1, \mathcal{P}(0.022)].$$
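For concreteness, the following R sketch (illustrative only, not the authors' implementation; all names are ours) reproduces the beta-binomial update and the resulting simulation of the number of annual fires.

```r
# Beta-binomial update for the annual fire probability (illustrative sketch)
fires     <- c(32, 29, 30, 28, 28)              # fires per year, Vitoria 2005-2009
buildings <- c(1220, 1266, 1320, 1347, 1314)    # industrial buildings per year

a.post <- 0.5 + sum(fires)                      # 147.5
b.post <- 0.5 + sum(buildings - fires)          # 6320.5
p.hat  <- a.post / (a.post + b.post)            # posterior mean, approx. 0.022

# Number of fires during the planning year, truncated at one
sim.fires <- function(n) pmin(1, rpois(n, p.hat))
```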
#### Fire duration, $ p ( o | f, s ) $:
It is a major impact determinant [@Bagchi2013]: the longer the fire, the more damaging it will be. To study its duration, we employ the above Vitoria data. Fig. \[fig10\] presents the histogram of industrial fire durations, with modal duration between 30 minutes and one hour.
![Industrial fire duration histogram. Vitoria, Spain (2005-2009).[]{data-label="fig10"}](fig10)
Adopting the approach in [@wiper01], we model fire duration with a gamma $ \Gamma (\textrm{shape}=\gamma, \textrm{rate}=\gamma/\mu) $ distribution, so that $\mu$ is the mean duration. We assume a non-informative, but proper, prior $\gamma \sim Exp(0.01) $ and $\mu \sim \textrm{Inv-}\Gamma(1, 1) $. No analytical expression for the posterior distribution is available, but we can use a Markov chain Monte Carlo scheme to sample from $\mu | data$ and $\gamma | data$, [@wiper01]. Based on this, we estimate that E($\gamma | data$) $\approx 0.85$ and E($\mu | data$) $\approx 78$.
The only security control among the proposed ones that may have an effect on fire duration is the anti-fire system, which enables faster fire detection. Using expert judgement [@dias2018], we elicit the fire duration under the proposed system, with suggested minimum, modal and maximum durations of 1, 10 and 60 min, respectively. To mitigate expert overconfidence [@galway], we consider a triangular distribution with quantiles 0.05 at 1 and 0.95 at 60 min, resulting in $ Tri(0.8, 63, 10) $, which models the fire duration $ o $ if there is a fire ($f=1$) and portfolio $s$ contains the anti-fire system. On the other hand, $$o \sim \Gamma(0.85, 0.01089)$$ if the portfolio does not contain the anti-fire system.
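A possible simulation of the fire duration node, again an illustrative sketch with our own names and a hand-rolled triangular sampler, would be:

```r
# Triangular sampler via inverse CDF; a = minimum, b = maximum, c = mode
rtri <- function(n, a, b, c) {
  u <- runif(n)
  f <- (c - a) / (b - a)
  ifelse(u < f,
         a + sqrt(u * (b - a) * (c - a)),
         b - sqrt((1 - u) * (b - a) * (b - c)))
}

# Fire duration in minutes, given that a fire occurs
sim.duration <- function(n, antifire) {
  if (antifire) rtri(n, 0.8, 63, 10)               # elicited Tri(0.8, 63, 10)
  else rgamma(n, shape = 0.85, rate = 0.01089)     # gamma fitted to the Vitoria data
}
```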
#### Fire impact:
It models the impacts assuming that the fraction of affected assets is related with fire duration. After consulting with experts, we consider that a fire lasting 120 minutes would degrade the facilities by 100% in absence of controls. To simplify, we assume that the effect of fire duration is linear. This impact will be assessed in Sect. \[subsubsec:modimpact\].
Additionally, the impact over computer equipment derives from the percentage of facility degradation caused by fire. Assuming that computers are evenly distributed through the premises, a fire lasting 120 minutes would also degrade computer equipment by 100%. This impact is potentially insurable and will be modelled in Sect. \[subsubsec:modimpact\].
### The computer virus risk {#subsubsec:modvirus}
#### Computer virus likelihood, $ p ( v | s ) $:
This node provides the number of virus infections during a year in the organisation. The number of infections over the year is modelled through a binomial distribution $\mathcal{B}(m, q)$, where $m$ counts computer-months and $q$ represents the probability that a computer gets infected in a given month. For infection duration, we assume that the virus remains active until detected through appropriate controls; then, it is eradicated by the system administrator. Various statistics suggest that the rate of virus infections worldwide is 33% [@Panda2015], so we adopt $\hat{q} = 0.33 $ as the probability that a computer is infected in a given month. The organisation has 90 computers, which we assume have the same security controls and are equally likely to be infected. Since the analysis covers 12 months, we use $ m = 12 \cdot 90 = 1080 $. Additionally, we consider the effect of our controls:
1. If a firewall is implemented, the probability that a computer gets infected is reduced to $\hat{q} = 0.005$; the threat is not completely eliminated, even if the firewall includes continuous updating based on the latest virus signatures.
2. If the risk mitigation procedures are implemented, the infection probability is reduced by 50%, whether a firewall is present or not, as this control entails improvements in the organisation such as imposing safety requirements on acquired systems.
The number $v$ of infections is, therefore, modelled as in Table \[tablevirus\].
**Controls**         **Distribution**
-------------------- ---------------------------------------
Firewall and proc. $ v \sim \mathcal{B} (1080, 0.0025) $
Firewall $ v \sim \mathcal{B} (1080, 0.005) $
Procedure $ v \sim \mathcal{B} (1080, 0.1666) $
Otherwise $ v \sim \mathcal{B} (1080, 0.33) $
: \[tablevirus\] Number of annual virus infections.
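The number of infections in Table \[tablevirus\] can be simulated as in the following sketch (function and argument names are ours):

```r
# Annual number of virus infections as a function of the implemented controls
sim.virus <- function(n, firewall, procedures) {
  q <- if (firewall && procedures) 0.0025
       else if (firewall)          0.005
       else if (procedures)        0.1666
       else                        0.33
  rbinom(n, size = 1080, prob = q)    # 90 computers over 12 months
}
```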
#### Computer virus impact:
Viruses may affect the three information security dimensions. The impact on integrity and availability could lead to information corruption or unavailability. Impacts over confidentiality are variable, as they depend on the stolen information. The average daily cost of these infections was estimated at 2,683 [@Solutionary2013], although this varies according to the monetary value of the information and services the victim systems support. Bigger losses come from sophisticated campaigns (e.g., global ransomware like WannaCry) or targeted malware which, under our paradigm, should be better modelled as an adversarial threat. In our case, repairing a computer infected by a virus costs 31 (two technician hours). Insurance options potentially cover the removal of computer viruses. Therefore, this impact is modelled within the insurable aspects in Sect. \[subsubsec:modimpact\].
Most computer viruses cause performance reductions in aspects such as the initialisation of the OS. Although small, this causes time losses to the user. We assume that most of the work time of the organisation is spent in front of a computer (70%), and that it would take, on average, five days (40 h of work) to detect the problem. Consequently, when a computer is infected, 28 hours of its usage are affected by the virus. We model the time loss as a uniform distribution $ \mathcal{U}(0,0.05) $, representing that the percentage of time lost because of a virus is between 0 and 5%. The hourly cost of the employee is 20/hour. Therefore, for each virus infection, the cost would be $ 20 \times 28 \times \mathcal{U}(0, 0.05) $. Insurance options in node $ i $ do not cover this loss and, thus, we model it within the non-insurable aspects in Sect. \[subsubsec:modimpact\].
### Modelling the DDoS threat {#subsubsec:modddosdef}
We consider now non-strategic aspects of the DDoS threat. A model for the DDoS likelihood is in Sect. \[subsec:attackerproblem\].
#### DDoS Duration, $ p(l | a, s) $:
This node models the duration $l$ in hours of all successful DDoS attacks. Its length will depend on the intensity of the attacking campaign, how well crafted the attack is and the security controls implemented by the targeted organisation. Typical controls mitigating DDoS attacks involve configuring the digital system so that users and processes dedicate some resources for a certain period of time, or distributing loads through a load balancer. An emerging alternative is cloud-based systems that absorb traffic from the customer site when it becomes the victim of a DDoS. Otherwise, if no control is deployed, it would be virtually impossible to block such an attack. Based on information in [@Securelist; @verizonddos], the average attack lasts 4 hours, averaging 1 gbps, with peaks of 10 gbps. We model $ l_j $, the length of the $j$-th individual DDoS attack, as a $ \Gamma(4, 1)$, so that its average duration is 4 hours. This duration is conditional on whether the attack actually saturates the target, which depends on the capability of the DDoS platform minus the absorption of the cloud-based system. We assume that the Attacker uses a professional platform capable of 5 gbps attacks, modelled through a $ \Gamma(5, 1)$ distribution. We then subtract the $ s_{\textrm{gbps}} $ absorbed by the protection system to determine whether the DDoS is successful, which happens when its traffic overflows the protection system. Since the campaign might take $ a $ attacks, the output of this node is $$l = \sum_j^a l_j,$$ with $ l_j \sim \Gamma(4,1) $ if $ \Gamma(5,1) - s_{\textrm{gbps}} > 0 $, and $ l_j = 0 $, otherwise.
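An illustrative simulation of this node (our own sketch) could be:

```r
# Total hours of successful DDoS over a campaign of 'a' attacks, given the
# capacity s.gbps (in gbps) of the cloud-based protection system
sim.ddos.hours <- function(n, a, s.gbps) {
  if (a == 0) return(rep(0, n))
  replicate(n, {
    gbps    <- rgamma(a, shape = 5, rate = 1)          # traffic of each attempt
    success <- gbps > s.gbps                           # protection overflowed?
    sum(rgamma(a, shape = 4, rate = 1) * success)      # hours of the successful attacks
  })
}
```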
#### DDoS impact:
The DDoS duration might cause a reputational loss that would affect the organisation’s market share. Recall that the current market share is 50%, valued at 1,500,000. To simplify, we assume that the market share is lost at a linear rate, being fully lost after, say, 5-8 days of unavailability (120-192 hours of DDoS duration): in the fastest case the loss rate would be $ 0.5/120 = 0.00417 $ per hour, whereas in the slowest one it would be $0.0026 $. We model this rate with a uniform distribution $ \mathcal{U}(0.0026, 0.00417) $.
### Modelling impacts over assets {#subsubsec:modimpact}
We recall now the impacts over the assets.
#### Impact over facilities, $ p(b|o) $:
This node models monetary losses $b$ due to the degradation of facilities by fire. Following Sect. \[subsubsec:modfire\], we model $b$ through $$b \sim 5000000 \times \min \Big( 1, \frac{o}{120} \Big) .$$
#### Insurable impacts over computers, $ p(q_i | o, v) $:
This models the monetary losses $q_i$ due to degradation of computers to be covered by an insurance. This may be caused by fire, Sect. \[subsubsec:modfire\], and by repairing the computers infected with viruses, Sect. \[subsubsec:modvirus\]. We then model $q_i$ through $$q_i \sim 31 v + 200000 \times \min \Big( 1, \frac{o}{120} \Big) .$$
#### Non-insurable impacts over computers, $ p(q_n | v) $:
This models the monetary losses $q_n$ caused by the degradation of computers not covered by insurance, due to the time lost because of viruses in computer systems. Following Sect. \[subsubsec:modvirus\], we model $q_n$ through $$q_n \sim 560 v \times \mathcal{U}(0,0.05) .$$
#### Impact over market share, $ p( m | l ) $:
This models the monetary value $m$ of market share lost. Following Sect. \[subsubsec:modddosdef\], we use $$m \sim \min [1500000, l \times \mathcal{U} (0.0026, 0.00417)] .$$
#### Total costs for the Defender, $ c_d | g_i, c_i, c_s, m, b, q_i, q_n $:
This models the costs $c_d$ suffered by the Defender through $$c_d = m + b + q_i + q_n + c_s + c_i - g_i ,$$ where $c_s$ is the cost of security controls, $c_i$ the cost of insurance, $g_i$ the insurance coverage (which reduces losses) and $m$, $b$, $q_i$ and $q_n$ are the impacts over assets described earlier.
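Putting the impact nodes together, a single draw of the Defender's total cost could be sketched as follows (an illustrative sketch; the control cost, premium and coverage are passed as arguments, to be read from the corresponding tables):

```r
# One draw of the Defender's total cost, given the simulated fire duration o (minutes),
# number of virus infections v, DDoS hours l, control cost c.s, premium c.i and coverage g.i
total.cost <- function(o, v, l, c.s, c.i, g.i) {
  burn <- min(1, o / 120)                                 # fraction of premises degraded
  b    <- 5000000 * burn                                  # impact over facilities
  q.i  <- 31 * v + 200000 * burn                          # insurable computer losses
  q.n  <- 560 * v * runif(1, 0, 0.05)                     # non-insurable (lost work time)
  m    <- min(1500000, l * runif(1, 0.0026, 0.00417))     # market share loss
  m + b + q.i + q.n + c.s + c.i - g.i
}
```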
### Defender utility, $ u(c_d) $:
The organisation is constant risk averse over costs. Its utility function is strategically equivalent to $$u(c_d) = a - b \exp(-k \, c_d).$$ We adjust it by calibrating the function with three costs: the worst, the best and an intermediate one. The worst reasonable loss $\max c_d $ is based on the sum of all costs and impacts (except the computer virus one), which equals 6,755,300. Computer virus impacts do not have an upper limit; based on simulations, it is reasonable to assume that they would not exceed 50,000. Allowing an additional margin, we assume that $\max c_d = 7000000$. The best loss is $\min{c_d} = 0$. For an intermediate cost $c_d^*$, we find its probability equivalent [@Ortega2017] $\alpha$ so that $u(c_d^*)= \alpha$. For instance, asking the company, we have $u(c_d^*= 2660000)\simeq .5$. Additionally, we rescale the costs to the (0,1) range through $1 - \frac{c_d}{7000000}$. Then, the utility function is $$u(c_d) = \frac{1}{\textrm{e}-1} \Bigg[ \exp\Bigg(1 - \frac{c_d}{7000000}\Bigg) - 1 \Bigg] .$$
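In R, this utility and its calibration check read, for instance:

```r
# Defender utility over total costs, calibrated so that u(0) = 1 and u(7000000) = 0
u.def <- function(c.d) (exp(1 - c.d / 7000000) - 1) / (exp(1) - 1)
u.def(2660000)   # approx. 0.50, the elicited probability equivalent
```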
Assessing the Attacker’s random beliefs and preferences {#subsec:attackerproblem}
-------------------------------------------------------
In the Defender problem, the competitor attack is a probabilistic node modelling the number of attacks launched by the Attacker, given the security controls implemented by the Defender. We model the Attacker problem and simulate it to forecast his actions, thus obtaining the required probability distribution.
We must estimate the probability that the Attacker executes the DDoS, given the Defender controls implemented. For that, we consider his decision problem in Fig. \[fig9b\]. Its solution would give the Attacker’s optimal action. However, as argued in Sect. \[subsec:aracyber\], we model our uncertainty about his preferences and beliefs through random utilities and probabilities to find the random optimal attack.
### Defender’s security controls
This node is probabilistic for the Attacker. However, we assume that he may observe through network exploration tools whether the Defender has implemented relevant controls against his attack.
### Competitor attack decision: $ a | s $
In the Attacker problem, this is reflected in a decision node modelling how many attacks (between 0 and 30) the DDoS campaign will consist of. Attackers usually give up once the attack has been mitigated and move on to the next target or try other disruption methods. However, when the sole objective is the victim, the Attacker might continue the campaign for several days, causing a pervasive impact. In our case, we assume that a DDoS platform would need a day to deploy its resources to launch a powerful and hidden DDoS.
### Duration of the DDoS: $ P_A(l | a, s) $
We base our estimation on that of the Defender (Sect. \[subsubsec:modddosdef\]). The length of the $j$-th individual DDoS attack is modelled through a random gamma distribution $ \Gamma_{\textrm{length}} (\upsilon, \upsilon/\mu) $ with $ \upsilon \sim \mathcal{U}(3.6,4.8) $ and $ \upsilon/\mu \sim \mathcal{U}(0.8,1.2) $, so that we add uncertainty about the average duration (between 3 and 6 hours) and the dispersion. Similarly, the attack gbps are modelled through a random gamma distribution $ \Gamma_{\textrm{gbps}} (\omega, \omega/\eta) $ with $ \omega \sim \mathcal{U}(4.8,5.6) $ and $ \omega/\eta \sim \mathcal{U}(0.8,1.2) $. Next, we subtract $ s_{\textrm{gbps}} $ from $ \Gamma_{\textrm{gbps}} $ to determine whether the DDoS is successful, which happens when its traffic overflows the protection system. As in Sect. \[subsubsec:modddosdef\], the number $l$ of hours for which the site is unavailable during the campaign is modelled as $$l = \sum_j^a l_j,$$ with $ l_j \sim \Gamma_{\textrm{length}} $ if $ \Gamma_{\textrm{gbps}} - s_{\textrm{gbps}} > 0 $, and $ l_j = 0 $ otherwise.
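A sketch of the corresponding random simulation (names are ours; the random gamma parameters are drawn anew at each call) is:

```r
# Random-probability version of the DDoS duration used in the Attacker problem
sim.ddos.hours.A <- function(a, s.gbps) {
  if (a == 0) return(0)
  shape.l <- runif(1, 3.6, 4.8); rate.l <- runif(1, 0.8, 1.2)   # attack length parameters
  shape.g <- runif(1, 4.8, 5.6); rate.g <- runif(1, 0.8, 1.2)   # attack gbps parameters
  gbps <- rgamma(a, shape.g, rate.g)
  sum(rgamma(a, shape.l, rate.l) * (gbps > s.gbps))             # hours of successful attacks
}
```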
### Impact over market share: $ P_A( m | l ) $
We base our estimation on that of the Defender (Sect. \[subsubsec:modimpact\]), adding some uncertainty around such assessment. The market share value and percentage are not affected by the uncertainty, as this information is available to both agents. However, we model uncertainty in the market loss rate so that the fastest one (5 days in the Defender problem) is between 4 and 6 days in the Attacker problem and the slowest one (8 for Defender) is between 7 and 9. Therefore, the random distribution describing the market loss $ m $ is $$m \sim \min \Big[ 1500000, l \times \mathcal{U}(\alpha, \beta) \Big]$$ with $ \alpha \sim \mathcal{U}(0.0021, 0.0031) $ and $ \beta \sim \mathcal{U}(0.00367, 0.00467) $.
### Attacker earnings: $ e | m $
This node models the Attacker gain $e$ in terms of market share, derived from the DDoS duration. As the sole competitor, we assume that $e$ corresponds to the share lost by the defender $ e = m $. The random uncertainty in the earnings is derived from the randomness of the preceding nodes.
### Detection of Attacker: $ P_A(t | a) $
This node represents the chance of the Attacker being detected. In most cyber attacks, the attacker is not identified or prosecuted[^2]. The detection probability is estimated via expert judgement at 0.2% per attack, should the Attacker attempt a DDoS. Detection over the campaign is thus modelled through a binomial distribution $ \mathcal{B}(a, 0.002) $, where the number of trials is the number $a$ of attacks and the detection probability is 0.002. To add some uncertainty, we model the detection probability for each attack through a beta distribution $ \beta e (2, 998) $[^3]. Therefore, the detection of the attacker $ t $ is modelled through a random binomial distribution that produces the output *detected* if $ \mathcal{B}(a, \phi) > 0 $ with $ \phi \sim \beta e (2, 998) $, and *not detected*, otherwise.
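In R, for instance:

```r
# Is the Attacker detected during a campaign of 'a' attacks?
sim.detected <- function(a) {
  phi <- rbeta(1, 2, 998)              # random per-attack detection probability, mean 0.002
  rbinom(1, size = a, prob = phi) > 0
}
```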
### Costs for Attacker when detected: $ p_A(c_t | t ) $
This node models the consequences associated with being detected when executing a DDoS. As a competitor, if the Attacker is disclosed, it would entail a serious discredit, together with compensation and legal costs besides incurring criminal responsibilities. To fix ideas, we use this cost decomposition:
- Expected reputational costs, due to the necessary communication actions to preserve credibility: 550,000.
- Expected legal costs: 30,000.
- Expected civil indemnities and regulatory penalties: 350,000.
- Expected suspension costs, related with losses derived from prohibition to operate for some time: 1,500,000.
To add uncertainty, we model the costs as a normal distribution with mean 2430000 and standard deviation 400000, i.e., $$c_t|t \sim \mathcal{N} (2430000, 400000) .$$
### Result of attack: $ c_a | e, c_t, a $
This node combines the attacker earnings and costs if detected, as well as the cost of undertaking the attacks. To estimate these, we consider that using a botnet to launch the DDoS attack would cost on average around 33 per hour [@Incapsula2015]. Each attack would take one day, entailing costs of 792. Therefore, $$c_a = e - c_t - 792 a .$$
### Attacker’s random utility: $ U_A(c_a) $
We assume that the Attacker is risk prone, with utility function strategically equivalent to $$u(c_{a}) = (c'_a)^{k_a} , \qquad k_a>1 ,$$
where $ c'_a $ are the costs $ c_a $ normalised to $ [0,1] $, and $ k_a $ the risk seeking attitude of the attacker. To induce uncertainty, we assume $k_{a}$ follows a $\mathcal{U}(8, 10)$ distribution. Therefore, the attacker random utility is $$U_A(c_{a}) = (c'_a)^{K_a}$$ with $ K_a \sim \mathcal{U}(8,10) $.
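A sketch of this random utility follows; note that the text does not specify the bounds used to normalise $c_a$, so the values below are placeholders of our own.

```r
# Attacker random utility; min.r and max.r are assumed normalisation bounds for c_a
u.att <- function(c.a, k.a, min.r = -3500000, max.r = 1500000) {
  x <- pmin(1, pmax(0, (c.a - min.r) / (max.r - min.r)))
  x^k.a
}
# A draw of the random utility corresponds to fixing k.a, e.g. k.a <- runif(1, 8, 10)
```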
### Simulating the Attacker problem {#subsec:attackersolving}
Summarising the earlier assessments, the *distribution of random utilities and probabilities in the Attacker problem* is $$F = \Big ( U_A ( c_a ), p_A ( c_t | t), P_A ( t | a ), P_A ( m | l), P_A ( l | a, s) \Big) .$$
We calculate the *random optimal attack*, given the security controls $s$ implemented, as $$A^*(s) = \arg\max_{a} \int \dots \int \
U_A(c_a) \, p_A(c_t | t) \, P_A(t | a) \, P_A(m | l) \, P_A(l | a, s) \
dl \, dm \, dt \, dc_t .$$
To approximate it, we may use an MC approach as in Algorithm \[algo1\] (see Appendix), which we implemented in R. For each size $ s $ of the DDoS protection system, we can assess the distribution of the random optimal attack. Table \[attdistro\] displays the probabilities of the attacks, conditional on the protection implemented, with $K=1000$. For instance, if the security portfolio contains no DDoS-protection system, an attack seems certain and the campaign would comprise between 18 and 30 attacks, with 29 and 30 being the most likely attack sizes. From this, we create the probability distribution $p(a|s)$, so that the Defender problem is fully specified and ready to be solved.
------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------- -------- -------- -------- -------- --------
**0** **1** **2** **3** **4** **5** **6** **7** **8** **9** **10** **11** **12** **13** **14** **15**
**1 tbps** 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
**10 gbps** 0.000 0.001 0.003 0.003 0.004 0.005 0.012 0.012 0.015 0.013 0.017 0.024 0.024 0.022 0.030 0.035
**5 gbps** 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.001 0.001 0.001 0.002
**2gbps** 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
**none** 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------- -------- -------- -------- -------- --------
------------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- --------
**16** **17** **18** **19** **20** **21** **22** **23** **24** **25** **26** **27** **28** **29** **30**
**1 tbps** 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
**10 gbps** 0.026 0.041 0.025 0.044 0.042 0.053 0.050 0.048 0.047 0.060 0.050 0.059 0.065 0.081 0.089
**5 gbps** 0.008 0.006 0.012 0.017 0.007 0.028 0.031 0.055 0.070 0.061 0.096 0.117 0.143 0.141 0.203
**2gbps** 0.000 0.000 0.002 0.001 0.002 0.013 0.013 0.020 0.034 0.069 0.091 0.112 0.144 0.223 0.276
**none** 0.000 0.000 0.003 0.001 0.004 0.008 0.010 0.022 0.042 0.058 0.081 0.105 0.173 0.246 0.247
------------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- -------- --------
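As a rough illustration of the Monte Carlo scheme behind Table \[attdistro\], the following simplified sketch reuses the helper functions given above. It is not Algorithm \[algo1\] itself: the inner sample size, the normalisation bounds inside `u.att` and the fact that the random gamma parameters are redrawn within the inner loop are simplifications of ours.

```r
# Approximate distribution of the random optimal attack, given the DDoS protection s.gbps
attack.distribution <- function(s.gbps, K = 1000, n.inner = 200) {
  opt <- replicate(K, {
    # one draw of the Attacker's random preferences and beliefs
    k.a   <- runif(1, 8, 10)
    alpha <- runif(1, 0.0021, 0.0031)
    beta  <- runif(1, 0.00367, 0.00467)
    eu <- sapply(0:30, function(a) {
      r <- replicate(n.inner, {
        l   <- sim.ddos.hours.A(a, s.gbps)                    # hours of outage
        m   <- min(1500000, l * runif(1, alpha, beta))        # Defender's market loss
        c.t <- if (sim.detected(a)) rnorm(1, 2430000, 400000) else 0
        m - c.t - 792 * a                                     # result of the attack
      })
      mean(u.att(r, k.a))                                     # expected utility of 'a' attacks
    })
    which.max(eu) - 1                                         # random optimal attack size
  })
  table(factor(opt, levels = 0:30)) / K                       # estimate of p(a | s)
}
```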
Solution of the Defender problem {#subsec:simuldefender}
--------------------------------
Summarising earlier assessments about the Defender problem, the corresponding probabilities are $$G = \Big(p(m | l), p(q_n | v), p(q_i | o, v),
p(b | o), p(l | a, s), p(a | s), p(v | s), p(o | f,s), p(f) \Big) .$$
The expected utility when the security portfolio $s$ is implemented together with insurance $i$ is $$\begin{gathered}
\psi(s, i) = \int ... \int \
u( c_d ) \, p(m | l) \, p(q_n | v) \, p(q_i | o, v) \, p(b | o) \, p(l | a, s ) \, p(a | s) \, p(v | s) \, p(o | f, s) \, p(f) \\
df \, do \, dv \, da \, dl \, db \, dq_i \, dq_n \, dm .\end{gathered}$$
We can calculate the *optimal allocation* as the maximum expected utility portfolio-insurance pair $$(s^*, i^*) = \arg\max_{s , i} \psi (s, i).$$ We may use Algorithm \[algo4\] (see Appendix) to approximate the portfolio expected utilities and the optimal portfolio for the Defender. We have implemented it in R to calculate them (Table \[table6c\]). Specifically, *the best portfolio* consists of:
- 1 tbps cloud-based DDoS protection system.
- Firewall.
- Anti-fire system.
- Comprehensive insurance.
**Anti-fire** **Firewall** **Procedure** **DDoS protection** **Insurance** **Expected utility**
--------------- -------------- --------------- --------------------- --------------- ----------------------
anti-fire firewall no procedure 1 tbps comprehensive 0.9954
anti-fire firewall no procedure 1 tbps traditional 0.9950
no anti-fire firewall no procedure 1 tbps comprehensive 0.9949
… … … … … …
no anti-fire no firewall no procedure no protection no insurance 0.8246
no anti-fire firewall no procedure no protection cyber 0.8246
anti-fire no firewall no procedure no protection no insurance 0.8242
Besides the ranking of countermeasures, we can obtain additional information from the simulation. For instance, the best security controls contain a firewall, 1 tbps DDoS protection and no risk mitigation procedures. Additionally, the best portfolios also include insurance, either traditional or comprehensive.
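For completeness, a schematic R version of the expected utility estimation behind Table \[table6c\] could look as follows. It is only a sketch in the spirit of Algorithm \[algo4\]: `s` is a hypothetical list describing the portfolio, and `cost.controls()`, `cost.insurance()` and `coverage()` stand for lookups of Tables \[table31\], \[table32\] and \[table2\] that we do not reproduce here.

```r
# Monte Carlo estimate of psi(s, i) for portfolio s, insurance i and attack distribution p.attack
# Example portfolio: s <- list(antifire = TRUE, firewall = TRUE, procedures = FALSE, gbps = 1000)
psi.hat <- function(s, i, p.attack, N = 10000) {
  mean(replicate(N, {
    f <- min(1, rpois(1, 0.022))                                # fire this year?
    o <- if (f == 1) sim.duration(1, s$antifire) else 0         # fire duration (minutes)
    v <- sim.virus(1, s$firewall, s$procedures)                 # virus infections
    a <- sample(0:30, 1, prob = p.attack)                       # number of attacks, from p(a | s)
    l <- sim.ddos.hours(1, a, s$gbps)                           # DDoS hours
    c.d <- total.cost(o, v, l, cost.controls(s), cost.insurance(i, s),
                      coverage(i, o, v))
    u.def(c.d)
  }))
}
# The optimal pair maximises psi.hat over the 40 portfolios and the four insurance products.
```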
Further analysis {#subsec:further}
----------------
The previous ARA model can be used to perform other relevant analyses, as we briefly discuss.
### Sensitivity analysis
By introducing variations in the probabilities (e.g., the probability of fire), we can evaluate the robustness of the previous solution, checking whether variations in the probabilities and parameters of the model alter the optimal solution or the relevance of the different controls. This is especially relevant in a case like ours, with small differences in expected utility among the top alternatives and many inputs of a purely judgemental nature. The approach would require implementing additional algorithms for sensitivity analysis that indicate whether a small deviation in a parameter may lead to a large effect on the outcome of the model [@Rios1990]. Additionally, sensitivity analysis can be used to explore the maximum cyber insurance price that the Defender would be willing to pay. This may be used to price insurance products, as well as to find the best portfolio for different cybersecurity budgets.
### Introducing constraints
As we mentioned, we may introduce constraints over the security portfolios. For example, we could add a budget limit of, say, 15,000, so that the problem would involve only those portfolios satisfying such a constraint. We can also consider constraints linking insurance and security controls, since insurance policies often include requirements regarding the controls that the company must comply with in order to be insured. Other types of constraints could be dealt with similarly.
### Return on security investment
Our formulation focused on choosing the best portfolio, but an additional aspect that could be addressed with our model is calculating the return on security investment (ROSI) to assess the cost effectiveness of a cybersecurity budget [@enisaROSI; @Schatz2017]. Calculating the optimal solution over a range of budgets (e.g., from 5,000 to 25,000) allows us to generate a function that, for a given budget, provides the optimal solution and its expected utility, thus exploring the return on risk mitigation investments. Additionally, we could find the optimal increase in the portfolio so as to attain a certain expected utility level or reach a certain risk appetite level.
Discussion {#sec:discussion}
==========
Current cybersecurity risk analysis frameworks provide a thorough knowledge base for understanding cyber threats, security policies and impacts over assets that depend on the digital infrastructure. However, such frameworks provide risk analysis methods that are neither sufficiently formalised nor comprehensive enough. Most of them suggest risk matrices as their main analytic tool, which provide a fast but frequently rudimentary study of risks.
Hence, we present an ARA framework providing a formal method supporting all the steps relevant to undertake a comprehensive cybersecurity risk analysis. It involves structuring the cybersecurity problem as a decision model based on multiagent influence diagrams. ARA enables the assessment of the beliefs and preferences of the organisation regarding cybersecurity risks, as well as of the security portfolio and insurance it can implement to treat such risks. It takes into account, in addition to non-intentional threats, the strategic behaviour of adversarial threats. We model the intentional factor through the decision problems of the Attackers. The case introduced is a simplification of a real example but serves as a template for other cases. Among other things, we had to rely on expert judgement for the uncertainty nodes for which we lacked data.
From the decision-making point of view, ARA enables the calculation of optimal cybersecurity resource allocations, facilitating the selection of security and insurance portfolios. Furthermore, it also enables sensitivity analysis to evaluate whether the optimal portfolio remains as optimal, in case different elements affecting risk change. This may be used for insurance pricing.
Future work involves the application of this paradigm to study other cybersecurity adversarial problems. The proposed problem refers to strategic/tactical decisions; it would be interesting to develop dynamic schemes integrating strategic and operational decisions. Similarly, we shall address the development of parametric cyber insurance schemes, aimed at supporting the derivation of premiums that, as a complement to the implemented controls, facilitate more effective risk management. Another relevant activity would be the development of a software environment that supports the implementation of the ARA framework for cybersecurity, based on the R routines developed, as well as optimisation algorithms beyond enumeration to reduce the computational burden.
When compared with standard approaches in cybersecurity, our paradigm entails a more comprehensive method leading to a more detailed modelling of risk problems, yet more demanding in terms of analysis. We believe, though, that in many organisations, especially critical infrastructures and sectors, the stakes at play are high enough that the additional work should be worth the effort.
[99]{}
Agence Nationale de la Sécurité des Systèmes d’Information (France). 2010. *Expression des Besoins et Identification des Objectifs de Sécurité*.
Allodi, L., Massacci, F. 2017. “Security Events and Vulnerability Data for Cybersecurity Risk Estimation”. *Risk Analysis*, Vol. 37, pp. 1606–1627.
Anderson, R. 2008. *Security Engineering*, Wiley.
Andress, J. and Winterfeld, S. 2013. *Cyber Warfare: Techniques, Tactics and Tools for Security Practitioners*. Elsevier.
Bagchi, A., Sprintson, A. and Singh, C. 2013. “Modeling the Impact of Fire Spread on an Electrical Distribution Network”. *Electric Power Systems Research*, Vol. 100, pp. 15–24.
Banks, D., Ríos, J. and Ríos Insua, D. 2015. *Adversarial Risk Analysis*. Francis and Taylor.
Central Communication and Telecommunication Agency (UK). 2003. *Risk Analysis and Management Method*.
Clemen, R. T., Reilly, T. 2013. *Making Hard Decisions with Decision Tools*. Cengage Learning.
Cloud Security Alliance. 2016. *Cloud Controls Matrix*.
The Common Criteria Recognition Agreement Members. 2009. *Common Criteria for Information Technology Security Evaluation, Version 3.1 Release 4*.
Cooke, R. and Bedford. T. 2001. *Probabilistic Risk Analysis: Foundations and Methods*. Cambridge University Press.
Cox, L. A. 2008. “What’s Wrong with Risk Matrices?”. *Risk Analysis*, Vol. 28, No. 2, pp. 497–512.
Departamento de Seguridad Ciudadana, Ayto. de Vitoria-Gasteiz (Spain). 2009. *Memoria 2009 del Servicio de Prevención Extinción de Incendios y Salvamentos*
Dias, L.C., Morton, A. and Quigley, J. 2018. *Elicitation: State of the Art and Science*. Springer.
European Network and Information Security Agency. 2012. *Introduction to Return on Security Investment*.
Federal Bureau of Investigation, Internet Crime Complaint Center (USA). 2016. *2016 Internet Crime Report*.
Fielder, A., Panaousis, E., Malacaria, P., Hankin, C. and Smeraldi, F. 2016. “Decision Support Approaches for Cyber Security Investment”. *Decision Support Systems*, Vol. 86, pp. 13-23.
French, S. and Ríos Insua, D. 2000. *Statistical Decision Theory*. Wiley.
Galway, L. A. 2007. *Subjective probability distribution elicitation in cost risk analysis: A review*. Tech. Rep. 410, Rand Corporation.
Hubbard, D.W. and Seiersen, R. 2016. *How to Measure Anything in Cybersecurity Risk*. John Wiley & Sons.
Incapsula (USA). 2015. *Global DDoS Threat Landscape Report: Attacks Resemble Advanced Persistent Threats*.
International Organization for Standardization. 2013. *ISO/IEC 27001 – Information Security Management Systems - Requirements*.
International Organization for Standardization. 2013. *ISO/IEC 27005. Information Security Risk Management*.
Kaspersky, Securelist (Russia). 2016. *DDoS attacks in Q4 2016*.
Leak Source. 2014. “CSEC Document Reveals Suspected France Intelligence Spyware “Babar””. \[Retrieved 25/Sep/2017\]
Lund, M.S., Solhaug, B. and Stølen, K. 2010. *Model-driven risk analysis: the CORAS approach*. Springer.
Marotta, A., Martinelli, F., Nanni, S., Orlando, A. and Yautsiukhin, A. 2017. “Cyber-insurance survey”. *Computer Science Review*. Vol. 24, pp 35–61
Milke, J. A., Kodur, V. and Marrion, C. 2002. “An overview of fire protection in buildings”. *Appendix A, World Trade Center Building Performance Study*. Federal Emergency Management Agency (USA).
Ministerio de Hacienda y Administraciones Públicas (Spain). 2012. *Metodología de Análisis y Gestión de Riesgos de los Sistemas de Información, version 3*.
Mirkovic, J. and Reiher, P. 2004. “A taxonomy of DDoS attack and DDoS defense mechanisms.” *ACM SIGCOMM Computer Communication Review* Vol.34, pp 39–45.
Mowbray, T. J. 2013. *Cybersecurity: Managing Systems, Conducting Testing, and Investigating Intrusions*. Wiley.
National Institute of Standards and Technology (USA) *NIST SP 800-30 Rev. 1 – Guide for Conducting Risk Assessments*.
National Technical Authority for Information Assurance (UK). 2012. *HMG IA Standard Number 1*.
Ortega, J., Radovic, V. and Rios Insua, D. 2018. “Utility elicitation”. In: Días, L., Morton, A. and Quigley, J., editors. *Handbook of judgement elicitation*. Springer.
Panda Security (Spain). 2015. *Informe PandaLabs Q2 2015*.
Rios Insua, D. 1990. *Sensitivity Analysis in Multi-objective Decision Making*. Springer.
Schatz, D. and Bashroush, R. 2017. “Economic valuation for information security investment: a systematic literature review”. *Information Systems Frontiers*, Vol. 19, No. 5, pp 1205–1228.
Shachter, R.D. 1986. “Evaluating influence diagrams”. *Operations Research* Vol. 34, No. 6, pp 871–882.
Solutionary (US). 2013. *Global Threat Intelligence Report*.
Verisign (USA). 2017. *Q1 2017 DDoS Trends Report*
Wiper, M, Rios Insua, D. and Ruggeri F. 2001. “Mixtures of gamma distributions with applications.” *Journal of Computational and Graphical Statistics* Vol. 10, pp. 440–454.
Appendix {#appendix .unnumbered}
========
Algorithm \[algo4\] initialises $ \psi(s, i) = 0 $ for each portfolio-insurance pair, estimates each $\psi(s, i)$ by Monte Carlo simulation from the distributions in $G$, and approximates $$(\hat{s}^*, \hat{i}^*) = \arg\max_{s, i} \psi (s, i).$$ Algorithm \[algo1\] proceeds analogously for the Attacker problem, simulating from $F$ to obtain the distribution of the random optimal attack $A^*(s)$.
[^1]: A distributed denial of service (DDoS) is a network attack consisting of a high number of infected computers flooding with network traffic a victim computer or network device, making it inaccessible.
[^2]: For instance, the FBI Internet Crime Complaint Center prosecuted two cases, and investigated 73, of nearly 298,728 complaints received in 2016 [@2016fbi]
[^3]: Its mean is 0.002
---
abstract: 'In this paper we develop an algebraic framework that allows us to extend families of two-valued states on orthomodular lattices to Baer $^*$-semigroups. We apply this general approach to study the full class of two-valued states and the subclass of Jauch-Piron two-valued states on Baer $^*$-semigroups.'
author:
- |
[Hector Freytes]{}[^1] $^{1,2}$, [Graciela Domenech]{}$^{*}$ $^3$\
and [Christian de Ronde]{}$^{*}$ $^{4,5}$
bibliography:
- 'pom.bib'
title: ' Two-valued states on Baer $^*$-semigroups '
---
1\. Instituto Argentino de Matemática (IAM)\
Saavedra 15 - 3er piso - 1083 Buenos Aires, Argentina\
e-mail: hfreytes@dm.uba.ar - hfreytes@gmail.com\
2. Dipartimento di Matematica e Informatica “U. Dini”\
Viale Morgagni, 67/a - 50134 Firenze, Italia\
3. Instituto de Astronomía y Física del Espacio (IAFE)\
Casilla de Correo 67, Sucursal 28 . 1428 Buenos Aires, Argentina\
e-mail: domenech@iafe.uba.ar\
4. Instituto de Filosofía “Dr. Alejandro Korn”\
(UBA-CONICET), Buenos Aires, Argentina\
5. Center Leo Apostel (CLEA) and Foundations of the Exact Sciences (FUND)\
Brussels Free University, Krijgskundestraat 33 - 1160 Brussels, Belgium\
e-mail: cderonde@vub.ac.be
[**Keywords:**]{} Baer $^*$-semigroups, two-valued states, orthomodular lattices
[**PACS numbers:**]{} 02.10 De\
\[section\]
\[theo\][Definition]{}
\[theo\][Lemma]{}
\[theo\][Method]{}
\[theo\][Proposition]{}
\[theo\][Corollary]{}
\[theo\][Example]{}
\[theo\][Problem]{}
\[theo\][Remark]{}
\[theo\][Example]{}
Introduction
============
Recently, several authors have paid attention to the study of the concept of “state” by extending it to classes of algebras more general than $\sigma$-algebras, such as orthomodular posets [@DGG; @PUL], MV-algebras [@DV1; @KM; @KR; @NAV2; @PUL1] or effect algebras [@F; @RIE; @RIE2]. In the particular case of quantum mechanics (QM), different families of states are investigated not only because they provide different representations of the event structure of quantum systems [@NAV1; @TK1; @TK2] but also because of their importance for the understanding of QM [@GUD; @JAU1; @PIR1; @PTAK].
In [@DFD], a general theoretical framework to study families of two-valued states on orthomodular lattices is given. We shall use these ideas for a general study of two-valued states extended to Baer $^*$-semigroups. Moreover, we investigate varieties of Baer $^*$-semigroups expanded with a unary operation that allows us to capture the notion of two-valued states in an algebraic structure.
The paper is organized as follows: Section \[BASICNOTION\] contains generalities on universal algebra, orthomodular lattices, and Baer $^*$-semigroups. In Section \[EVENTSEC\], motivations for a natural extension of the concept of two-valued state from orthomodular lattices to Baer $^*$-semigroups are presented. In Section \[ALGEBRAICAP\], we introduce the concept of $IE_
B^*$-semigroup. It is presented as a Baer $^*$-semigroup with a unary operation that enlarges the language of the structure. This operation is defined by equations giving rise to a variety denoted by ${\mathcal{IE}}^*_B$. In this way, ${\mathcal{IE}}^*_B$ defines a common abstract framework in which several families of two-valued states can be algebraically treated as unary operations on Baer $^*$-semigroups. In Section \[VARIETIES\], we give a decidable procedure to extend equational theories of two-valued states on orthomodular lattices to Baer $^*$-semigroups determining sub-varieties of ${\mathcal{IE}}^*_B$. In Section \[FULLCLASS\] and Section \[JAUCHPIRON\], we apply the results obtained in an abstract way to two important classes of two-valued states, namely the full class of two-valued states and the subclass of Jauch-Piron two-valued states. In Section \[APROBLEM\], we study some problems about equational completeness related to subvarieties of ${\mathcal{IE}}^*_B$. Finally, in Section \[OPERATORE\], we introduce subvarieties ${\mathcal{IE}}^*_B$ whose equational theories are determined by classes of two-valued states on orthomodular lattices.
Basic notions {#BASICNOTION}
=============
First we recall from [@Bur] some notions of universal algebra that will play an important role in what follows. A [*variety*]{} is a class of algebras of the same type defined by a set of equations. If ${\mathcal A}$ is a variety and ${\mathcal B}$ is a subclass of ${\mathcal A}$, we denote by ${\mathcal V}({\mathcal B})$ the subvariety of ${\mathcal A}$ generated by the class ${\mathcal B}$, i.e. ${\mathcal V}({\mathcal B})$ is the smallest subvariety of ${\mathcal A}$ containing ${\mathcal B}$. Let ${\mathcal A}$ be a variety of algebras of type $\tau$. We denote by Term$_{\mathcal A}$ the [*absolutely free algebra*]{} of type $\tau$ built from the set of variables $V = \{x_1, x_2,...\}$. Each element of Term$_{\mathcal
A}$ is referred as a [*term*]{}. We denote by Comp($t$) the complexity of the term $t$ and by $t = s$ the equations of Term$_{\mathcal A}$.
For $t\in$ Term$_{\mathcal A}$ we often write $t(x_1, \ldots x_n)$ to indicate that the variables occurring in $t$ are among $x_1,
\ldots x_n$. Let $A \in {\mathcal A}$. If $t(x_1, \ldots x_n) \in$ Term$_{\mathcal A}$ and $a_1,\dots, a_n \in A$, by $t^A(a_1,\dots,
a_n)$ we denote the result of the application of the term operation $t^A$ to the elements $a_1,\dots, a_n$. A [*valuation*]{} in $A$ is a function $v:V\rightarrow A$. Of course, any valuation $v$ in $A$ can be uniquely extended to an ${\mathcal A}$-homomorphism $v:$Term$_{\mathcal A} \rightarrow A$ in the usual way, i.e., if $t_1, \ldots, t_n \in$ Term$_{\mathcal A}$ then $v(t(t_1, \ldots,
t_n)) = t^A(v(t_1), \ldots, v(t_n))$. Thus, valuations are identified with ${\mathcal A}$-homomorphisms from the absolutely free algebra. If $t,s \in$ Term$_{\mathcal A}$, $A \models t = s$ means that for each valuation $v$ in $A$, $v(t) = v(s)$ and ${\mathcal A}\models t=s$ means that for each $A\in {\mathcal A}$, $A \models t = s$.
For each algebra $A \in {\mathcal A}$, we denote by Con($A$) the congruence lattice of $A$, the diagonal congruence is denoted by $\Delta$ and the largest congruence $A^2$ is denoted by $\nabla$. $\theta$ is called [*factor congruence*]{} iff there is a congruence $\theta^*$ on $A$ such that, $\theta \land \theta^* =
\Delta$, $\theta \lor \theta^* = \nabla$ and $\theta$ permutes with $\theta^*$. If $\theta$ and $\theta^*$ is a pair of factor congruences on $A$ then $A \cong A/\theta \times A/\theta^*$. $A$ is [*directly indecomposable*]{} if $A$ is not isomorphic to a product of two non trivial algebras or, equivalently, $\Delta,\nabla$ are the only factor congruences in $A$. We say that $A$ is [*subdirect product*]{} of a family of $(A_i)_{i\in I}$ of algebras if there exists an embedding $f: A \rightarrow \prod_{i\in I} A_i$ such that $\pi_i f : A\! \rightarrow A_i$ is a surjective homomorphism for each $i\in I$ where $\pi_i$ is the projector onto $A_i$. $A$ is [*subdirectly irreducible*]{} iff $A$ is trivial or there is a minimum congruence in Con($A$)$ - \Delta$. It is clear that a subdirectly irreducible algebra is directly indecomposable. An important result due to Birkhoff is that every algebra $A$ is a subdirect product of subdirectly irreducible algebras. Thus, the class of subdirectly irreducible algebras rules the valid equations in the variety ${\mathcal A}$.
Now we recall from [@KAL; @MM] some notions about orthomodular lattices. A [*lattice with involution*]{} [@Ka] is an algebra $\langle L, \lor, \land, \neg \rangle$ such that $\langle L, \lor,
\land \rangle$ is a lattice and $\neg$ is a unary operation on $L$ that fulfills the following conditions: $\neg \neg x = x$ and $\neg
(x \lor y) = \neg x \land \neg y$. An [*orthomodular lattice*]{} is an algebra $\langle L, \land, \lor, \neg, 0,1 \rangle$ of type $\langle 2,2,1,0,0 \rangle$ that satisfies the following conditions:
1. $\langle L, \land, \lor, \neg, 0,1 \rangle$ is a bounded lattice with involution,
2. $x\land \neg x = 0 $.
3. $x\lor ( \neg x \land (x\lor y)) = x\lor y $
We denote by ${\mathcal{OML}}$ the variety of orthomodular lattices. Let $L$ be an orthomodular lattice. Two elements $a,b$ in $L$ are [*orthogonal*]{} (noted $a \bot b$) iff $a\leq \neg b$. For each $a\in L$ let us consider the interval $[0,a] = \{x\in L : 0\leq x
\leq a \}$ and the unary operation in $[0,a]$ given by $\neg_a x =
\neg x \land a$. As one can readily realize, the structure $L_a =
\langle [0,a], \land, \lor, \neg_a, 0, a \rangle$ is an orthomodular lattice.
[*Boolean algebras*]{} are orthomodular lattices satisfying the [*distributive law*]{} $x\land (y \lor z) = (x \land y) \lor (x
\land z)$. We denote by ${\mathbf 2}$ the Boolean algebra of two elements. Let $L$ be an orthomodular lattice. An element $c\in L$ is said to be a [*complement*]{} of $a$ iff $a\land c = 0$ and $a\lor c
= 1$. Given $a, b, c$ in $L$, we write: $(a,b,c)D$ iff $(a\lor
b)\land c = (a\land c)\lor (b\land c)$; $(a,b,c)D^{*}$ iff $(a\land
b)\lor c = (a\lor c)\land (b\lor c)$ and $(a,b,c)T$ iff $(a,b,c)D$, (a,b,c)$D^{*}$ hold for all permutations of $a, b, c$. An element $z$ of $L$ is called [*central*]{} iff for all elements $a,b\in L$ we have $(a,b,z)T$. We denote by $Z(L)$ the set of all central elements of $L$ and it is called the [*center*]{} of $L$.
\[eqcentro\] Let $L$ be an orthomodular lattice. Then we have:
1. $Z(L)$ is a Boolean sublattice of $L$ [[@MM Theorem 4.15]]{}.
2. $z \in Z(L)$ iff for each $a\in L$, $a = (a\land z) \lor (a \land \neg z)$ [[@MM Lemma 29.9]]{}.
------------------------------------------------------------------------
Now we recall from [@AD; @FOU; @KAL] some notions about Baer $^*$-semigroups. A [*Baer $^*$-semigroup*]{} [@FOU] also called Foulis semigroup [@AD; @BLIJ; @KAL] is an algebra $\langle S,
\cdot , ^*, ', 0 \rangle$ of type $\langle 2,1,1,0 \rangle$ such that, upon defining $1= 0'$, the following conditions are satisfied:
1. $\langle S, \cdot \rangle$ is a semigroup,
2. $0\cdot x = x \cdot 0 = 0$,
3. $1\cdot x = x \cdot 1 = x$,
4. $(x \cdot y)^* = y^* \cdot x^*$,
5. $x^{**} = x $,
6. $x\cdot x' = 0$,
7. $x' \cdot x' = x' = (x')^*$,
8. $x'\cdot y \cdot (x\cdot y)' = y\cdot (x\cdot y)' $.
Let $S$ be a Baer $^*$-semigroup. An element $e\in S$ is a [*projector*]{} iff $e = e^* = e\cdot e$. The set of all projectors of $S$ is denoted by $P(S)$. A projector $e \in P(S)$ is said to be closed iff $e'' = e$. We denote by $P_c(S)$ the set of all closed projectors. Moreover we can prove that: $$P_c(S) = \{x': x\in S \}$$ We can define a partial order $\langle P(S), \leq \rangle$ as follows: $$e \leq f \Longleftrightarrow e \cdot f = e$$
In [[@MM Theorem 37.2]]{} it is proved that, for any $e, f
\in P_c(S)$, $e \leq f$ iff $e\cdot S \subseteq f\cdot S$. The facts stated in the next proposition are either proved in [@FOU] or follow immediately from the results in [@FOU]:
\[PBAER1\] Let $S$ be a Baer $^*$-semigroup. Then:
1. If $x,y \in P(S)$ and $x\leq y$ then $y'\leq x'$,
2. $(x\cdot y)'' = (x'' \cdot y)'' \leq y''$,
3. $(x^* \cdot x)'' = x''$,
4. for each $x\in P_c(S)$, $0\leq x \leq 1$,
5. $x \cdot y = 0$ iff $y = x'\cdot y$
------------------------------------------------------------------------
Observe that item 5 was one of the original conditions in the definition of a Baer \*-semigroup in [@FOU]. In the presence of conditions 1–7 of the definition of a Baer \*-semigroup, the latter condition is equivalent to condition 8 (see [@AD Proposition 2]).
\[PRO1\][[@MM Theorem 37.8]]{} Let $S$ be a Baer $^*$-semigroup. For any $e_1, e_2 \in P_c(S)$, we define the following operations:
1. $e_1 \land e_2 = e_1\cdot (e_2' \cdot e_1)'$,
2. $e_1 \lor e_2 = (e_1' \land e_2')'$.
Then $\langle P_c(S), \land, \lor, ', 0,1 \rangle$ is an orthomodular lattice with respect to the order $\langle P(S), \leq
\rangle$.
------------------------------------------------------------------------
We can build a Baer $^*$-semigroup from an orthomodular lattice [@FOU]. In the following we briefly describe this construction.
Let $\langle A, \leq, 0,1 \rangle$ be a bounded partially ordered set. An order-preserving function $\phi: A \rightarrow A$ is called a [*residuated function*]{} iff there is another order-preserving function $\phi^+: A \rightarrow A$, called a [*residual function*]{} of $\phi$, such that $\phi \phi^+(x) \leq x \leq \phi^+\phi (x)$. It can be proved that if $\phi$ admits a residual function $\phi^+$, then $\phi^+$ is completely determined by $\phi$.
[We will adopt the notation in [@AD $\S$1] in which residuated functions are written on the right. More precisely, if $\phi, \psi$ are residuated functions, $x\phi$ indicates the value $\phi(x)$ and $\psi \phi$ is interpreted as the function $x\psi \phi
= (x\psi) \phi$. ]{}
We denote by $S(A)$ the set of residuated functions of $A$. Let $\theta$ be the constant function in $A$ given by $x\theta = 0$. Clearly $\theta$ is an order-preserving function and $\theta^+$ is the constant function $x\theta^+ = 1$. Thus $\theta \in S(A)$ and $\langle S(A),
\circ, \theta \rangle$, where $\psi \circ \phi = \psi \phi$, is a semigroup.
\[PRO2\][[@AD Proposition 2]]{} Let $L$ be an orthomodular lattice. For each $a\in L$ we define
$x\phi_a = (x\lor \neg a) \land a$ [([*Sasaki projection*]{})]{}
If we define the following unary operations in $S(L)$:
- $\phi^*$: such that $x\phi^* = \neg ((\neg x) \phi^+ )$,
- $\phi': = \phi_{\neg 1\phi} $
then:
1. $\langle S(L), \circ, ^*, ', \theta \rangle$ is a Baer $^*$-semigroup.
2. $P_c(S(L)) = \{\phi_a : a \in L \}$,
3. $f_L:L \rightarrow P_c(S(L))$ such that $f_L(a) = \phi_a$ is an ${\mathcal{OML}}$-isomorphism.
------------------------------------------------------------------------
If $L$ is an orthomodular lattice, the Baer $^*$-semigroup $\langle
S(L), \circ, ^*, ', \theta \rangle$, or $S(L)$ for short, will be referred to as [*the Baer $^*$-semigroup of the residuated functions of $L$*]{}.
Let $L$ be an orthomodular lattice. We say that a Baer $^*$-semigroup $S$ [*coordinatizes*]{} $L$ iff $L$ is ${\mathcal{OML}}$-isomorphic to $P_c(S)$.
Two-valued states and Baer $^*$-semigroups {#EVENTSEC}
===========================================
The study of two-valued states becomes relevant in different frameworks. From a physical point of view, two-valued states are distinguished among all classes of states because of their relation to hidden variable theories of quantum mechanics [@GUD]. Another motivation for the analysis of two-valued states is rooted in the study of algebraic and topological representations of the event structures in quantum logic. Examples of these are the characterization of Boolean orthoposets by means of two-valued states [@TK3] and the representation of orthomodular lattices via clopen sets in a compact Hausdorff closure space [@TK2], later extended to orthomodular posets in [@HP]. We are interested in a theory of two-valued states on Baer $^*$-semigroups as a natural extension of two-valued states on orthomodular lattices. Formally, a [*two-valued state*]{} on an orthomodular lattice $L$ is a function $\sigma:L \rightarrow \{0,1\}$ satisfying the following:
1. $\sigma(1) = 1$,
2. if $x \bot y$ then $\sigma(x \lor y) = \sigma(x) + \sigma(y)$.
Let $L$ be an orthomodular lattice and $\sigma:L \rightarrow
\{0,1\}$ be a two-valued state. The following properties are derived directly from the definition of two-valued state:
$\sigma(\neg x) = 1-\sigma(x)$ and if $x\leq y$ then $\sigma(x) \leq \sigma(y)$
Based on the two properties mentioned above, Boolean pre-states are introduced in [@DFD] as a general theoretical framework to study families of two-valued states on orthomodular lattices. We shall use these ideas for a general study of two-valued states extended to Baer $^*$-semigroups. Thus, we first give the definition of a Boolean pre-state.
Let $L$ be an orthomodular lattice. By a [*Boolean pre-state*]{} on $L$ we mean a function $\sigma:L \rightarrow \{0,1\}$ such that:
1. $\sigma(\neg x) = 1 - \sigma(x)$,
2. if $x\leq y$ then $\sigma(x) \leq \sigma(y)$.
Let us consider the orthomodular lattice $MO2 \times {\mathbf
2}$ whose Hasse diagram has the following form:
[Hasse diagram of $MO2 \times {\mathbf 2}$: bottom element $0$, atoms $a, b, c, d, e$, coatoms $\neg a, \neg b, \neg c, \neg d, \neg e$, and top element $1$.]
If we define the function $\sigma: MO2 \times {\mathbf 2}
\rightarrow \{0,1\}$ such that: $$\sigma(x) = \cases {1, & if $x \in
\{1,\neg a, \neg b, \neg c, \neg d, \neg e\}$ \cr 0 , & if $x \in
\{0, a, b, c, d, e\}$ \cr}$$ we can see that $\sigma$ is a Boolean pre-state. This function fails to be a two-valued state since $b \leq \neg c$ but $\sigma(b\lor c) \not = \sigma(b) + \sigma(c)$. In fact $\sigma(b\lor c) = \sigma(\neg a) = 1$ and $\sigma(b) + \sigma(c) = 0$.
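The verification above can also be done mechanically. The sketch below (ours; plain Python, with $MO2\times {\mathbf 2}$ built as pairs rather than the letters of the Hasse diagram, so the orthogonal atoms $b,c$ of the example correspond here to two atoms of the form $(p,0)$ and $(\neg p,0)$) confirms that $\sigma$ is a Boolean pre-state and exhibits the failure of additivity.

```python
# Illustrative sketch: the Boolean pre-state of this example, checked by brute
# force on MO2 x 2.  MO2 has elements 0, a, A, b, B, 1 with A = ~a and B = ~b.
MO2 = ["0", "a", "A", "b", "B", "1"]
NEG2 = {"0": "1", "1": "0", "a": "A", "A": "a", "b": "B", "B": "b"}

def leq2(u, v):                                  # order of MO2
    return u == v or u == "0" or v == "1"

L = [(u, t) for u in MO2 for t in (0, 1)]        # MO2 x 2, ordered componentwise

def leq(x, y):
    return leq2(x[0], y[0]) and x[1] <= y[1]

def neg(x):
    return (NEG2[x[0]], 1 - x[1])

def join(x, y):                                  # least upper bound, found by brute force
    ubs = [z for z in L if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

bottom = ("0", 0)
atoms = [x for x in L if x != bottom and
         all(y in (bottom, x) for y in L if leq(y, x))]

def sigma(x):                                    # 0 on the bottom and on the atoms, 1 elsewhere
    return 0 if (x == bottom or x in atoms) else 1

# sigma is a Boolean pre-state ...
assert all(sigma(neg(x)) == 1 - sigma(x) for x in L)
assert all(sigma(x) <= sigma(y) for x in L for y in L if leq(x, y))

# ... but it is not a two-valued state: b and c below are orthogonal atoms
b, c = ("a", 0), ("A", 0)
assert leq(b, neg(c))                            # b <= ~c, i.e. b is orthogonal to c
print(sigma(join(b, c)), sigma(b) + sigma(c))    # prints: 1 0
```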
We denote by ${\mathcal E}_B$ the category whose objects are pairs $(L,\sigma)$ such that $L$ is an orthomodular lattice and $\sigma$ is a Boolean pre-state on $L$. Arrows in ${\mathcal E}_B$ are $(L_1, \sigma_1) \stackrel{f}{\rightarrow} (L_2, \sigma_2) $ such that $f:L_1 \rightarrow L_2$ is an $OML$-homomorphism, and the following diagram is commutative:
[Diagram: the triangle $L_1 \stackrel{\sigma_1}{\longrightarrow} \{0,1\}$, $L_1 \stackrel{f}{\longrightarrow} L_2 \stackrel{\sigma_2}{\longrightarrow} \{0,1\}$ commutes, i.e. $\sigma_1 = \sigma_2 \circ f$.]
These arrows are called ${\mathcal E}_B$-homomorphisms.\
Let $L$ be an orthomodular lattice and let $\sigma:L \rightarrow \{0,1\}$ be a Boolean pre-state. Since we can identify $L$ with $P_c(S(L))$, we ask whether the Boolean pre-state $\sigma$ admits a natural extension to the whole of $S(L)$. In other words, whether there exists some kind of function of the form $\sigma^*:S(L) \rightarrow \{0,1\}$ such that the following diagram is commutative:
[Diagram: the triangle $L \stackrel{\sigma}{\longrightarrow} \{0,1\}$, $L \stackrel{f_L}{\longrightarrow} S(L) \stackrel{\sigma^*}{\longrightarrow} \{0,1\}$ commutes, i.e. $\sigma = \sigma^* \circ f_L$.]
where $f_L$ is the ${\mathcal{OML}}$-isomorphism $f_L: L \rightarrow P_c(S(L))$ given in Theorem \[PRO2\]-3. The simplest way to do this would be to associate with each element $\phi \in S(L)$ an appropriate closed projection $\phi_x \in P_c(S(L))$ and to define $\sigma^*(\phi) = \sigma^*(\phi_x) = \sigma(x)$. An obvious choice for $\phi_x$ is $\phi'' = \phi_{1\phi}$. In view of this suggestion, we introduce the following concept:
\[BOOLEANSTAR\]
Let $S$ be a Baer $^*$-semigroup. A Boolean$^*$ pre-state over $S$ is a function $\sigma: S \rightarrow \{0,1\}$ such that
1. $\sigma(x') = 1 - \sigma(x)$
2. the restriction $\sigma/_{P_c(S)}$ is a Boolean pre-state on $P_c(S)$.
We denote by ${\mathcal E}^*_B$ the category whose objects are pairs $(S,\sigma)$ such that $S$ is a Baer $^*$-semigroup and $\sigma$ is a Boolean$^*$ pre-state on $S$. Arrows in ${\mathcal E}^*_B$ are $(S_1, \sigma_1) \stackrel{f}{\rightarrow} (S_2, \sigma_2) $ such that $f:S_1 \rightarrow S_2$ is a Baer $^*$-semigroup homomorphism, and the following diagram is commutative:
[Diagram: the triangle $S_1 \stackrel{\sigma_1}{\longrightarrow} \{0,1\}$, $S_1 \stackrel{f}{\longrightarrow} S_2 \stackrel{\sigma_2}{\longrightarrow} \{0,1\}$ commutes, i.e. $\sigma_1 = \sigma_2 \circ f$.]
These arrows are called ${\mathcal E}^*_B$-homomorphisms. Up to now we have presented a notion that would naturally extend the notion of Boolean pre-state to Baer $^*$-semigroups. However, we have not yet proved that this extension can be formally realized. This will be shown in Theorem \[BSIGMA3\]. To see this, we first need the following basic results:
\[BS1\] Let $S$ be a Baer $^*$-semigroup and $\sigma$ be a Boolean$^*$ pre-state on $S$. Then
1. $\sigma(x'') = \sigma(x)$.
2. If $x,y \in P(S)$ and $x\leq y$ then, $\sigma(y')\leq \sigma(x')$ and $\sigma(x)\leq \sigma(y)$.
3. $\sigma(x \cdot y) = \sigma(x'' \cdot y) \leq \sigma(y)$.
4. $\sigma(x^* \cdot x ) = \sigma(x)$.
1\) Is immediate. 2) Suppose that $x,y \in P(S)$ and $x\leq y$. By Proposition \[PBAER1\]-1, $y'\leq x'$ and taking into account that $x',y' \in P_c(S)$, $\sigma(y')\leq \sigma(x')$. By Proposition \[PBAER1\]-1 again and since $y'\leq x'$ we have that $x'' \leq
y''$. Hence, by item 1, $\sigma(x) = \sigma(x'') \leq \sigma(y'') =
\sigma(y)$. 3) By Proposition \[PBAER1\]-2, $(x\cdot y)'' = (x''
\cdot y)'' \leq y''$. By item 1, $\sigma(x\cdot y) = \sigma((x\cdot
y)'') = \sigma((x'' \cdot y)'') = \sigma(x'' \cdot y)$. Since $(x''
\cdot y)''$ and $y''$ are closed projections, by item 1, we have that $\sigma(x'' \cdot y) =\sigma((x'' \cdot y)'') \leq \sigma(y'')
= \sigma(y)$. 4) By Proposition \[PBAER1\]-3 $(x^* \cdot x)'' =
x''$. Then, by item 1, $\sigma(x^* \cdot x) = \sigma((x^* \cdot
x)'') = \sigma(x'') = \sigma(x)$.
------------------------------------------------------------------------
\[BSIGMA3\] Let $S$ be a Baer $^*$-semigroup and $\sigma$ a Boolean pre-state on $P_c(S)$. Then $\sigma_S$ defined as $$\sigma_S(x) = \sigma(x'')$$ is the unique Boolean$^*$ pre-state on $S$ such that $\sigma_S
/_{P_c(S)} = \sigma $.
If $x\in S$ then $x'' \in P_c(S)$ and $\sigma(x'')$ is defined. Then $\sigma_S$ is well defined as a function. Note that if $x \in
P_c(S)$ then $\sigma_S(x) = \sigma(x'') = \sigma(x)$ since $'$ is an orthocomplementation on the orthomodular lattice $P_c(S)$. Thus $\sigma_S /_{P_c(S)} = \sigma$. Let $x\in S$. Then $\sigma_S(x') =
\sigma(x''') = 1- \sigma(x'') = 1- \sigma_S(x)$. Thus $\sigma_S$ is a Boolean$^*$ pre-state on $S$. Let $\sigma_1$ be a Boolean$^*$ pre-state on $S$ such that $\sigma_1 /_{P_c(S)} = \sigma $. Let $x\in S$. Since $x''\in P_c(S)$, by Proposition \[BS1\]-2, $\sigma_1(x) = \sigma_1(x'') = \sigma(x'') = \sigma_S(x)$. Hence $\sigma_1 = \sigma_S$ and $\sigma_S$ is the unique Boolean$^*$ pre-state on $S$ such that $\sigma_S /_{P_c(S)} = \sigma $.
------------------------------------------------------------------------
An algebraic approach for two-valued states on Baer $^*$-semigroups {#ALGEBRAICAP}
===================================================================
In this section we study a variety of Baer $^*$-semigroups enriched with a unary operation that allows us to capture the concept of two-valued states on Baer $^*$-semigroups in an equational theory. We begin by showing a way to deal with families of Boolean pre-states on orthomodular lattices as varieties, in which the concept of Boolean pre-state is captured by adding a unary operation to the orthomodular lattice structure.
Let $L$ be an orthomodular lattice and $\sigma: L \rightarrow \{0,1\}$ be a Boolean pre-state. If we define the function $s:L \rightarrow Z(L)$ such that $s(x) = 0^L$ if $\sigma(x) = 0$ and $s(x) = 1^L$ if $\sigma(x) = 1$, then $s$ has properties (s1)–(s5) in the following definition:
\[E\]
An [*orthomodular lattice with internal Boolean pre-state*]{} [$IE_B$-lattice for short]{} is an algebra $ \langle L, \land, \lor,
\neg, s, 0, 1 \rangle$ of type $ \langle 2, 2, 1,1, 0, 0 \rangle$ such that $ \langle L, \land, \lor, \neg, 0, 1 \rangle$ is an orthomodular lattice and $s$ satisfies the following equations for each $x,y \in L$:
1. (s1) $s(1) = 1$,

2. (s2) $s(\neg x) = \neg s(x)$,

3. (s3) $s(x \lor s(y)) = s(x) \lor s(y)$,

4. (s4) $y = (y \land s(x)) \lor (y \land \neg s(x)) $,

5. (s5) $s(x \land y) \leq s(x)\land s(y) $.
Thus, the class of $IE_B$-lattices is a variety that we call ${\mathcal {IE}}_B$. The following proposition provides the main properties of $IE_B$-lattices.
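As a concrete instance of the definition (and of the remark preceding it), consider the orthomodular lattice $MO2$ together with the Boolean pre-state $\sigma$ that takes the value $1$ on $a$, $b$ and $1$ and the value $0$ elsewhere; the induced map $s_\sigma$ lands in $Z(MO2)=\{0,1\}$ and satisfies (s1)–(s5). The sketch below (ours; plain Python, $MO2$ hard-coded) checks this by brute force.

```python
# Illustrative sketch: an internal Boolean pre-state on MO2 = {0, a, A, b, B, 1},
# where A = ~a, B = ~b and Z(MO2) = {0, 1}.
MO2 = ["0", "a", "A", "b", "B", "1"]
NEG = {"0": "1", "1": "0", "a": "A", "A": "a", "b": "B", "B": "b"}

def leq(x, y):
    return x == y or x == "0" or y == "1"

def join(x, y):
    if leq(x, y): return y
    if leq(y, x): return x
    return "1"

def meet(x, y):
    if leq(x, y): return x
    if leq(y, x): return y
    return "0"

SIGMA = {"0": 0, "a": 1, "A": 0, "b": 1, "B": 0, "1": 1}    # a Boolean pre-state on MO2

def s(x):                                                    # induced map L -> Z(L)
    return "1" if SIGMA[x] == 1 else "0"

assert s("1") == "1"                                                                # (s1)
assert all(s(NEG[x]) == NEG[s(x)] for x in MO2)                                     # (s2)
assert all(s(join(x, s(y))) == join(s(x), s(y)) for x in MO2 for y in MO2)          # (s3)
assert all(y == join(meet(y, s(x)), meet(y, NEG[s(x)])) for x in MO2 for y in MO2)  # (s4)
assert all(leq(s(meet(x, y)), meet(s(x), s(y))) for x in MO2 for y in MO2)          # (s5)
print("(s1)-(s5) hold on MO2 for this choice of s")
```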
\[E1\] [[@DFD Proposition 3.5]]{} Let $L$ be a $IE_B$-lattice. Then we have:
1. $\langle s(L), \lor, \land, \neg, 0, 1 \rangle$ is a Boolean sublattice of $Z(L)$,
2. If $x\leq y$ then $s(x) \leq s(y)$,
3. $s(x) \lor s(y) \leq s(x\lor y)$,
4. $s(s(x)) = s(x)$,
5. $x\in s(L)$ iff $s(x) = x$,
6. $s(x\land s(y))= s(x)\land s(y)$.
------------------------------------------------------------------------
A crucial question that must be answered is under which conditions a class of two-valued states over an orthomodular lattice can be characterized by a subvariety of ${\mathcal{IE}}_B$. To do this, we first need the following two basic results:
\[FUNC00\][[@DFD Theorem 4.4]]{} Let $L$ be an $IE_B$-lattice. Then there exists a Boolean pre-state $\sigma:L \rightarrow \{0,1\}$ such that $\sigma(x) = 1$ iff $\sigma(s(x))=1$.
------------------------------------------------------------------------
Observe that the Boolean pre-state in the last proposition is not necessarily unique. When we have an $IE_B$-lattice and a Boolean pre-state $\sigma:L \rightarrow \{0,1\}$ such that $\sigma(x) = 1$ iff $\sigma(s(x))=1$, we say that $s, \sigma$ are [*coherent*]{}. On the other hand, we can build $IE_B$-lattices from objects in the category ${\mathcal E}_B$, as shown in the following proposition:
\[FUNC0\][[@DFD Theorem 4.10]]{} Let $L$ be an orthomodular lattice and $\sigma$ be a Boolean pre-state on $L$. If we define ${\mathcal I}(L) = \langle L, \land,
\lor, \neg, s_{\sigma}, 0,1 \rangle$ where $$s_{\sigma}(x) =
\cases {1^L, & if $\sigma(x)=1$ \cr 0^L , & if $\sigma(x)=0$ \cr}$$ then:
1. ${\mathcal I}(L)$ is a $IE_B$-lattice and $s_{\sigma}$ is coherent with $\sigma$.
2. If $(L_1, \sigma_1) \stackrel{f}{\rightarrow} (L_2, \sigma_2) $ is a ${\mathcal E}_B$-homomorphism then $f:{\mathcal I}(L_1) \rightarrow
{\mathcal I}(L_2)$ is a $IE_B$-homomorphism.
------------------------------------------------------------------------
Note that ${\mathcal I}$ in the above proposition defines a functor of the form ${\mathcal I}: {\mathcal E}_B \rightarrow {\mathcal{IE}}_B $. Now it is very important to characterize the class $\{{\mathcal I}(L): (L,\sigma) \in {\mathcal E}_B \}$. To do this, directly indecomposable algebras in ${\mathcal{IE}}_B$ play an important role and the following proposition provides this result:
\[PROD2\] [[@DFD Proposition 5.6]]{} Let $L$ be an $IE_B$-lattice. Then:
1. $L$ is directly indecomposable in ${\mathcal{IE}}_B$ iff $s(L) =
{\mathbf 2}$.
2. If $L$ is directly indecomposable in ${\mathcal{IE}}_B$ then the function $$\sigma_s(x) = \cases {1, & if $s(x)=1^L$ \cr 0 , & if
$s(x)=0^L$ \cr}$$ is the unique Boolean pre-state coherent with $s$.
------------------------------------------------------------------------
Thus, an immediate consequence of Proposition \[FUNC0\] and Proposition \[PROD2\] is the following proposition:
\[PROD3\] Let ${\mathcal D}({\mathcal{IE}}_B)$ be the class of directly indecomposable algebras in ${\mathcal{IE}}_B$. Then $${\mathcal D}({\mathcal{IE}}_B) = \{{\mathcal I}(L): (L,\sigma) \in {\mathcal E}_B \}$$ and ${\mathcal I}: {\mathcal E}_B \rightarrow {\mathcal D}({\mathcal{IE}}_B) $ is a categorical equivalence when we consider ${\mathcal D}({\mathcal{IE}}_B)$ as a category whose arrows are $IE_B$-homomorphisms.
------------------------------------------------------------------------
Since ${\mathcal D}({\mathcal{IE}}_B)$ contains the subdirectly irreducible algebras of ${\mathcal{IE}}_B$, we have that: $${\mathcal{IE}}_B \models t= s \hspace{0.4cm} \mathrm{iff} \hspace{0.4cm} {\mathcal D}({\mathcal{IE}}_B) \models t= s$$ Hence, the class of orthomodular lattices admitting Boolean pre-states can be identified with the directly indecomposable algebras in ${\mathcal{IE}}_B$ that determine the variety ${\mathcal{IE}}_B$. We can use these ideas to give a general criterion to characterize families of two-valued states over orthomodular lattices by a subvariety of ${\mathcal{IE}}_B$.
Let ${\mathcal A}_I$ be a subvariety of ${\mathcal{IE}}_B$. We denote by ${\mathcal D}({\mathcal A}_I)$ the class of directly indecomposable algebras in ${\mathcal A}_I$.
\[DEF1\]
Let ${\mathcal A}$ be a subclass of ${\mathcal E}_B$ and let ${\mathcal A}_I$ be a subvariety of ${\mathcal{IE}}_B$. Then we say that ${\mathcal A}_I$ equationally characterizes ${\mathcal A}$ iff the following two conditions are satisfied:
1. For each $(L, \sigma) \in {\mathcal A}$, $\langle {\mathcal I}(L),
\land, \lor, \neg, s_{\sigma}, 0,1 \rangle$ belong to ${\mathcal
D}({\mathcal A}_I)$ where $ s_{\sigma}(x) = \cases {1^L, & if
$\sigma(x)=1$ \cr 0^L, & if $\sigma(x)=0$ \cr} $
2. For each $L \in {\mathcal D}({\mathcal A}_I)$, $(L, \sigma_s ) \in
{\mathcal A}$ where $\sigma_s$, the unique Boolean pre-state coherent with $s$, is given by $ \sigma_s(x) = \cases {1, & if
$s(x)=1^L$ \cr 0 , & if $s(x)=0^L$ \cr} $
Since ${\mathcal D}({\mathcal A}_I)$ contains the subdirectly irreducible algebras of ${\mathcal A}_I$, we have that: $${{\mathcal
D}({\mathcal A}_I)} \models t=s \hspace{0.3cm} \mathrm{iff}
\hspace{0.3cm} {{\mathcal A}_I}\models t=s$$ where $t,s$ are terms in the language of ${\mathcal A}_I$.
Thus, when we say that a subclass ${\mathcal A}$ of ${\mathcal E}_B$ is equationally characterizable by a subvariety ${\mathcal A}_I$ of ${\mathcal IE}_B$, this means that the objects of ${\mathcal A}$ are identifiable with the directly indecomposable algebras of ${\mathcal A}_I$ according to items 1 and 2 in Definition \[DEF1\].
Taking into account the concept of $IE_B$-lattice we introduce a way to study the notion of Boolean$^*$ pre-state given in Definition \[BOOLEANSTAR\] via a unary operation added to the Baer $^*$-semigroups structure.
In fact, let $S$ be a Baer $^*$-semigroup. A unary operation $s$ on $S$ that allows us to capture the notion of Boolean$^*$ pre-state would have to satisfy the following basic conditions:
- (a) $s(x') = s(x)'$.

- (b) The restriction $s/_{P_c(S)}$ defines a unary operation in $P_c(S)$ such that $\langle P_c(S), \lor,\land, ', s/_{P_c(S)}, 0,1 \rangle$ is an $IE_B$-lattice.

- (c) $s$ should satisfy a version of Theorem \[BSIGMA3\], i.e., $s$ should always be obtainable as the unique extension of $s/_{P_c(S)}$.
These ideas motivate the following general definition:
\[E\]
An $IE^*_B$-semigroup is an algebra $ \langle S,
\cdot, ^*, ', s, 0 \rangle$ of type $ \langle 2, 1,1, 1, 0 \rangle$ such that $ \langle S, \cdot, ^*, ', 0 \rangle$ is a Baer $^*$-semigroup and $s$ satisfies the following equations for each $x,y \in S$:
1. (bs1) $s(1) = 1$,

2. (bs2) $s(x') = s(x)'$,

3. (bs3) $s(x)'' = s(x)$,

4. (bs4) $s(x' \lor s(y')) = s(x') \lor s(y')$,

5. (bs5) $y' = (y' \land s(x)) \lor (y' \land s(x)') $,

6. (bs6) $s(x' \land y') \leq s(x')\land s(y') $.
Thus, the class of $IE^*_B$-semigroups is a variety that we call ${\mathcal IE}^*_B$.
\[BS2\] Let $S$ be an $IE^*_B$-semigroup. Then
1. $s(x) \in Z(P_c(S))$.
2. $\langle P_c(S), \lor, \land, ', s/_{P_c(S)}, 0,1 \rangle$ is an $IE_B$-lattice and $\langle s(S), \lor, \land, ', 0,1 \rangle$ is a Boolean subalgebra of $Z(P_c(S))$.
3. $s(x'') = s(x)$.
4. If $x,y \in P(S)$ and $x\leq y$ then, $s(y')\leq s(x')$ and $s(x)\leq s(y)$.
5. $s(x \cdot y) = s(x'' \cdot y) \leq s(y)$.
6. $s(x^* \cdot x ) = s(x)$.
1 and 2) By bs3, for each $x\in S$, $s(x) \in P_c(S)$. Then, by Proposition \[eqcentro\]-2 and bs5, $s(x) \in Z(P_c(S))$. Since the image of $'$ is $P_c(S)$, from the rest of the axioms, $\langle
P_c(S), \lor, \land, ', s/_{P_c(S)}, 0,1 \rangle$ is an $IE_B$-lattice and $\langle s(S), \lor, \land, ', 0,1 \rangle$ is a Boolean subalgebra of $Z(P_c(S))$. 3,4,5,6) Follow from similar arguments used in the proof of Proposition \[BS1\].
------------------------------------------------------------------------
\[BSS3\] Let $S$ be a Baer $^*$-semigroup and $\langle P_c(S), \lor, \land,
', s, 0,1 \rangle$ be an $IE_B$-lattice. Then the operation $s_S:S
\rightarrow S $ such that: $$s_S(x) = s(x'')$$ defines the unique $IE^*_B$-semigroup structure on $S$ such that $s_S /_{P_c(S)} = s $.
If $x\in S$ then $x'' \in P_c(S)$ and $s(x'')$ is defined. Then $s_S$ is well defined as a function. Since $x\in P_c(S)$ iff $x=x''$, $s_S(x) = s(x'') = s(x)$ for each $x\in P_c(S)$. Thus $s_S /_{P_c(S)} = s$. Now we prove the validity of the axioms bs1,...,bs6.
bs1) Is immediate. bs2) $s_S(x') = s(x''') = s(x'')' = s_S(x)'$. bs3) $s_S(x)'' = s(x'')'' = s(x'')$ since $s(x) \in P_c(S)$ and $'$ is an orthocomplementation on $ P_c(S)$. Hence $s_S(x)'' = s_S(x)$. bs4, bs5, bs6) Follow from the fact that $s_S /_{P_c(S)} = s$ and $\langle P_c(S), \lor, \land, ', s, 0,1 \rangle$ is an $IE_B$-lattice. Hence $s_S$ defines an $IE^*_B$-semigroup structure on $S$ such that $s_S /_{P_c(S)} = s$.
Suppose that $ \langle S, \cdot, ^*, ', s_1, 0 \rangle$ is an $IE^*_B$-semigroup such that $s_1 /_{P_c(S)} = s $. Let $x\in S$. Since $x''\in P_c(S)$, by Proposition \[BS2\]-3, $s_1(x) =
s_1(x'') = s(x'') = s_S(x)$. Hence $s_1 = s_S$ and $s_S$ defines the unique $IE^*_B$-semigroup structure on $S$ such that $s_S /_{P_c(S)}
= s $.
------------------------------------------------------------------------
By Proposition \[BS2\] and Theorem \[BSS3\] we can see that the definition of $IE^*_B$-semigroup satisfies the conditions required by items (a), (b), (c).
\[BS4\] Let $\langle L, \lor, \land, \neg, s, 0,1 \rangle$ be an $IE_B$-lattice and $S(L)$ be the Baer $^*$-semigroup of residuated functions of $L$. If for each Sasaki projection $\phi_a$ we define $\bar{s}(\phi_a) = \phi_{s(a)}$, then:
1. $\langle P_c(S(L)), \lor, \land,' , \bar{s}, 0,1 \rangle$ is an $IE_B$-lattice and $f:L \rightarrow P_c(S(L))$ such that $f(a) =
\phi_a$ is an $IE_B$-isomorphism.
2. The operation ${\bar{s}}_S(\varphi) = \phi_{s(\varphi(1))}$ defines the unique $IE^*_B$-semigroup structure on $S(L)$ such that $L$ is $IE_B$-isomorphic to $P_c(S(L))$.
1\) By Theorem \[PRO2\], there exists an $OML$-isomorphism $f:L \rightarrow P_c(S(L))$. It is not very hard to see that the composition $\bar{s} = f s f^{-1}$ satisfies (s1),...,(s5) and $f(s(x)) = (f s f^{-1}) f(x) = \bar{s}f(x)$, i.e. $f$ preserves $\bar{s}$. Then $L$ is $IE_B$-isomorphic to $P_c(S(L))$.
2\) Let $\varphi \in S(L)$. Then ${\bar{s}}_S(\varphi) =
\phi_{s(\varphi(1))} = \bar{s}(\phi_{\varphi(1)}) =
\bar{s}(\phi_{\neg \neg \varphi(1)}) = s(\varphi'')$. Therefore ${\bar{s}}_S$ is the extension of $\bar{s}$ given in Theorem \[BSS3\]. Hence the operation ${\bar{s}}_S$ defines the unique $IE^*_B$-semigroup structure on $S(L)$ such that $L$ is $IE_B$-isomorphic to $P_c(S(L))$.
------------------------------------------------------------------------
\[BS5\] Let $ \langle S, \cdot, ^*, ', s, 0 \rangle$ be an $IE^*_B$-semigroup. Suppose that $S_1$ is a sub Baer $^*$-semigroup of $S$ and $P_c(S_1)$ is a sub $IE_B$-lattice of $P_c(S)$. Then the restriction $s/_{S_1}$ defines the unique $IE^*_B$-semigroup structure on $S_1$. In this way, $S_1$ is also a sub $IE^*_B$-semigroup of $S$.
Let $S_1$ be a sub Baer $^*$-semigroup of $S$ such that $P_c(S_1)$ is a sub $IE_B$-lattice of $P_c(S)$. If for each $x\in S_1$ we define $s_{S_1}(x) = s/_{P_c(S_1)}(x'') = s(x'')$ then $s_{S_1} = s/_{S_1}$ and, by Theorem \[BSS3\], it defines the unique $IE^*_B$-semigroup structure on $S_1$ that coincides with $s/_{P_c(S_1)}$ on $P_c(S_1)$. In this way, $S_1$ is also a sub $IE^*_B$-semigroup of $S$.
------------------------------------------------------------------------
\[PRO11\] Suppose that $(S_i)_{i\in I}$ is a family of $IE^*_B$-semigroups. Then, $\prod_{i\in I}P_c(S_i)$ is $IE_B$-lattice isomorphic to $P_c(\prod_{i\in I}S_i)$.
Since the operations in $\prod_{i\in I}S_i$ are defined pointwise, for each $(x_i)_{i\in I} \in \prod_{i\in I}S_i$, $(x_i)'_{i\in I} =
(x'_i)_{i\in I}$. Then it is straightforward to prove that $f((x_i)'_{i\in I}) = (x'_i)_{i\in I}$ defines an $OML$-isomorphism $f: P_c(\prod_{i\in I}S_i) \rightarrow \prod_{i\in I}P_c(S_i)$. We have to prove that this function preserves $s$. In fact $f(s((x_i)'_{i\in I})) = f( (s(x_i))'_{i\in I}) = (s(x_i)')_{i\in I}
= (s(x'_i))_{i\in I} = s((x'_i)_{i\in I}) = s(f((x_i)'_{i\in I}))$. Hence $f$ is an $IE_B$-lattice isomorphism.
------------------------------------------------------------------------
In what follows we study the relation between Boolean$^*$ pre-states and $IE^*_B$-semigroups.
\[FUNC32\] Let $ \langle S, \cdot, ^*, ', s, 0 \rangle$ be an $IE^*_B$-semigroup. Then there exists a Boolean$^*$ pre-state $\sigma:S \rightarrow \{0,1\}$ such that $s/_{P_c(S)}$ is coherent with $\sigma/_{P_c(S)}$.
By Proposition \[FUNC00\] there exists a Boolean pre-state $\sigma_0: P_c(S) \rightarrow \{0,1\} $ such that $s/_{P_c(S)}$ is coherent with $\sigma_0$. By Theorem \[BSIGMA3\], there exists a unique Boolean$^*$ pre-state $\sigma:S \rightarrow \{0,1\}$ such that $\sigma_0 = \sigma/_{P_c(S)}$. Hence $s/_{P_c(S)}$ is coherent with $\sigma/_{P_c(S)} = \sigma_0$.
------------------------------------------------------------------------
The following result gives a kind of converse of the last proposition:
\[FUNC33\] Let $S$ be a Baer $^*$-semigroup and $\sigma:S \rightarrow \{0,1\}$ be a Boolean$^*$ pre-state. If we define $$s_{\sigma}(x) = \cases
{1^{P_c(S)}, & if $\sigma(x)=1$ \cr 0^{P_c(S)} , & if $\sigma(x)=0$ \cr}$$then $ \langle S, \cdot, ^*, ', s_{\sigma}, 0 \rangle$ is an $IE^*_B$-semigroup and $s_\sigma/_{P_c(S)}$ is coherent with $\sigma/_{P_c(S)}$
By Proposition \[FUNC0\], $\langle P_c(S), \land, \lor,', s_\sigma/_{P_c(S)}, 0,1 \rangle$ is an $IE_B$-lattice and $s_\sigma/_{P_c(S)}$ is coherent with $\sigma/_{P_c(S)}$. Since $\sigma(x) = \sigma(x'')$ then $s_\sigma(x) = s_\sigma(x'') = s_\sigma/_{P_c(S)}(x'')$. Hence, by Theorem \[BSS3\], $s_\sigma$ defines the unique $IE^*_B$-semigroup structure on $S$ that extends $s_\sigma/_{P_c(S)}$.
------------------------------------------------------------------------
Varieties of $IE_B$-lattices determining varieties of $IE^*_B$-semigroups {#VARIETIES}
=========================================================================
When a family of two-valued states over an orthomodular lattice is equationally characterizable by a variety of $IE_B$-lattices in the sense of Definition \[DEF1\], one may ask whether there exists a variety of $IE^*_B$-semigroups that equationally characterizes the mentioned family of two-valued states. The following definition provides a “natural candidate” for such a class of $IE^*_B$-semigroups.
\[AIASTE\]
Let ${\mathcal A}_I$ be a subvariety of ${\mathcal IE}_B$. Then we define the subclass ${\mathcal A}^*_I$ of ${\mathcal IE}^*_B$ as follows: $${\mathcal A}^*_I = \{S \in {\mathcal IE}^*_B: P_c(S)\in {\mathcal A}_I \}$$
Before proceeding, we have to make sure that ${\mathcal A}^*_I$ is a non-empty subclass of ${\mathcal IE}^*_B$.

\[NONEMPTY\] If ${\mathcal A}_I$ is a non-empty subvariety of ${\mathcal IE}_B$ then ${\mathcal A}^*_I$ is a non-empty subclass of ${\mathcal IE}^*_B$.
Suppose that $ \langle L, \land, \lor, \neg, s, 0,1 \rangle$ belongs to ${\mathcal A}_I$. By Theorem \[PRO2\] we can consider the Baer $^*$-semigroup $S(L)$ of residuated functions in $L$, in which $L$ is $OML$-isomorphic to $P_c(S(L))$. Identifying $L$ with $P_c(S(L))$, by Theorem \[BSS3\], there exists an operation $s_{S(L)}$ on $S(L)$ that defines the unique $IE^*_B$-semigroup structure on $S(L)$ such that $s_{S(L)}/_{P_c(S(L))} = s$. Hence $S(L) \in {\mathcal A}^*_I$ and ${\mathcal A}^*_I$ is a non-empty subclass of ${\mathcal IE}^*_B$.
------------------------------------------------------------------------
In what follows we shall demonstrate not only that ${\mathcal A}^*_I$ is a variety but also give a decidable method to find an equational system that defines ${\mathcal A}^*_I$ from an equational system that defines ${\mathcal A}_I$. In order to study this we first introduce the following concept:
\[TRANS\]
We define the [*$*$-translation*]{} $\tau:$ Term$_{{\mathcal
IE}_B} \rightarrow$ Term$_{{\mathcal IE}^*_B}$ as follows:
- $\tau(0) = 0$ and $\tau(1) = 1$
- $\tau(x) = x'$ for each variable $x$,
- $\tau(\neg t) = \tau(t)'$,
- $\tau (t \land s) = (\tau(t)'\cdot \tau(s))' \cdot \tau(s)$,
- $\tau(t \lor s) = \tau(\neg(\neg t \land \neg s))$,
- $\tau(s(t)) = s(\tau(t))$.
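The $*$-translation is purely syntactic, so it is straightforward to implement. The sketch below (ours; plain Python, with terms encoded as nested tuples — an encoding chosen here only for illustration) computes $\tau$ recursively, following the clauses above.

```python
# Illustrative sketch: the *-translation tau on terms written as nested tuples,
# e.g. ("and", "x", ("neg", "y")) stands for  x ^ ~y.
def tau(t):
    if t in ("0", "1"):                       # constants are left unchanged
        return t
    if isinstance(t, str):                    # a variable x is sent to x'
        return ("'", t)
    op = t[0]
    if op == "neg":                           # tau(~t) = tau(t)'
        return ("'", tau(t[1]))
    if op == "and":                           # tau(t ^ r) = (tau(t)' . tau(r))' . tau(r)
        a, b = tau(t[1]), tau(t[2])
        return (".", ("'", (".", ("'", a), b)), b)
    if op == "or":                            # tau(t v r) = tau(~(~t ^ ~r))
        return tau(("neg", ("and", ("neg", t[1]), ("neg", t[2]))))
    if op == "s":                             # tau(s(t)) = s(tau(t))
        return ("s", tau(t[1]))
    raise ValueError(f"unknown operation: {op}")

# Example: translating the term  x ^ (y v z)
print(tau(("and", "x", ("or", "y", "z"))))
```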
\[EXTENS\] Let $S$ be an $IE^*_B$-semigroup, $v:$ [Term]{}$_{{\mathcal IE}^*_B}\rightarrow S $ be a valuation and $\tau$ be the $*$-translation. Then:
1. For each $t\in$ [Term]{}$_{{\mathcal IE}_B}$, $v(\tau(t)) \in
P_c(S)$,
2. There exists a valuation $v_c:$ [Term]{}$_{{\mathcal IE}_B}
\rightarrow P_c(S)$ such that for each $t\in$ [Term]{}$_{{\mathcal
IE}_B}$, $v_c(t) = v(\tau(t))$.
1\) Let $t\in$ Term$_{{\mathcal IE}_B}$. If $t$ is of the form $\neg r$ then $v(\tau(t)) = v(\tau(\neg r)) = v(\tau(r)') = v(\tau(r))' \in P_c(S)$. If $t$ is of the form $s(r)$ then $v(\tau(s(r))) = v(s(\tau(r))) = s(v(\tau(r))) \in Z(P_c(S)) \subseteq P_c(S)$. For the other case we use induction on the complexity of terms in Term$_{{\mathcal IE}_B}$. If Comp($t$)$ = 0$ then $t$ is $0$, $1$, or a variable $x$. In these cases $v(\tau(1)) = v(1) = 1^S$, $v(\tau(0)) = v(0) = 0^S$ and $v(\tau(x)) = v(x') = v(x)'$. Thus $v(\tau(t)) \in P_c(S)$. Assume that $v(\tau(t)) \in P_c(S)$ whenever Comp($t$)$ < n$. Suppose that Comp($t$)$ = n$. We have to consider the case in which $t$ is of the form $p \land r$. Then $v(\tau(t)) = v(\tau(p\land r)) = v((\tau(p)'\cdot \tau(r))' \cdot \tau(r)) = (v(\tau(p))'\cdot v(\tau(r)))' \cdot v(\tau(r)) = v(\tau(r))\land v(\tau(p))$ because $v(\tau(r))\in P_c(S)$ and $v(\tau(p)) \in P_c(S)$. Thus $v(\tau(t)) \in P_c(S)$. This proves that for each $t\in$ Term$_{{\mathcal IE}_B}$, $v(\tau(t)) \in P_c(S)$.
2) Consider the valuation $v_c:$ Term$_{{\mathcal IE}_B} \rightarrow
P_c(S)$ such that for each variable $x$, $v_c(x) = v(x')$. Now we proceed by induction on the complexity of terms in Term$_{{\mathcal
IE}_B}$. If Comp($t$)$ = 0$ then $t$ is $0$, $1$, or a variable $x$. Then, $v_c(1) = 1^S = v(1) = v(\tau(1))$, $v_c(0) = 0^S = v(0) =
v(\tau(0))$ and $v_c(x) = v(x') = v(\tau(x))$. Assume that $v_c(t)
= v(\tau(t))$ whenever Comp($t$)$ < n$. Suppose that Comp($t$)$ =
n$. We have to consider three possible cases:
$t$ is of the form $\neg r$. Then $v_c(t) = v_c(\neg r) = v_c(r)' = v(\tau(r))' = v(\tau(r)') = v(\tau(\neg r)) = v(\tau(t))$.

$t$ is of the form $p \land r$. Then $v_c(t) = v_c(p\land r) = v_c(p) \land v_c(r) = v(\tau(p)) \land v(\tau(r)) = (v(\tau(p))'\cdot v(\tau(r)))' \cdot v(\tau(r)) = v((\tau(p)'\cdot \tau(r))' \cdot \tau(r)) = v(\tau(p\land r)) = v(\tau(t))$.

$t$ is of the form $s(r)$. Then $v_c(t) = v_c(s(r)) = s(v_c(r)) = s(v(\tau(r))) = v(s(\tau(r))) = v(\tau(s(r))) = v(\tau(t))$.
This proves that for each $t\in$ Term$_{{\mathcal IE}_B}$, $v_c(t)
= v(\tau(t))$.
------------------------------------------------------------------------
\[EXTENS2\] Let $S$ be an $IE^*_B$-semigroup and $v:$ [Term]{}$_{{\mathcal IE}_B} \rightarrow P_c(S) $ be a valuation. Then there exists a valuation $v^*:$ [Term]{}$_{{\mathcal IE}^*_B} \rightarrow S $ such that for each $t\in$ [Term]{}$_{{\mathcal IE}_B}$, $v^*(\tau(t)) = v(t)$.
Consider the valuation $v^*:$ Term$_{{\mathcal IE}^*_B} \rightarrow S $ such that for each variable $x$, $v^*(x) = v(\neg x)$. Let $t\in$ Term$_{{\mathcal IE}_B}$. We use induction on the complexity of terms in Term$_{{\mathcal IE}_B}$. If Comp($t$)$ = 0$ then $t$ is $0$, $1$, or a variable $x$. Then, $v^*(\tau(1)) = v^*(1) = 1^S = v(1)$, $v^*(\tau(0)) = v^*(0) = 0^S = v(0)$ and $v^*(\tau(x)) = v^*(x') = v^*(x)' = v(\neg x)' = v(x)'' = v(x)$ since $v(x) \in P_c(S)$. Assume that $v^*(\tau(t)) = v(t) $ whenever Comp($t$)$ < n$. Suppose that Comp($t$)$ = n$. We have to consider three possible cases:
$t$ is of the form $\neg r$. Then $v^*(\tau(t)) = v^*(\tau(\neg r)) = v^*(\tau(r)') = v^*(\tau(r))' = v(r)' = v(\neg r) = v(t)$.

$t$ is of the form $p \land r$. Then $v^*(\tau(t)) = v^*(\tau(p \land r)) = v^*( (\tau(p)'\cdot \tau(r))' \cdot \tau(r)) = (v^*(\tau(p))'\cdot v^*(\tau(r)))' \cdot v^*(\tau(r)) = (v(p)' \cdot v(r))' \cdot v(r) = v(p)\land v(r) = v(p\land r) = v(t)$.

$t$ is of the form $s(r)$. Then $v^*(\tau(t)) = v^*(\tau(s(r))) = v^*(s(\tau(r))) = s(v^*(\tau(r))) = s(v(r)) = v(s(r)) = v(t)$.
This proves that for each $t\in$ Term$_{{\mathcal IE}_B}$, $v^*(\tau(t)) = v(t)$.
------------------------------------------------------------------------
\[1EQ\] Let ${\mathcal A}_I$ be a subvariety of ${\mathcal IE}_B$ and assume that $\{t_i = s_i \}_{i\in I}$ is a set of equations in the language of ${\mathcal IE}_B$ that defines ${\mathcal A}_I$. Then $${\mathcal A}^*_I = \{S \in {\mathcal IE}^*_B: \forall i\in I, S \models \tau(t_i) = \tau(s_i) \}$$
On the one hand, assume that $S \in {\mathcal A}^*_I$, i.e., $P_c(S)\models t_i = s_i$ for each $i\in I$. Suppose that there exists $i_0 \in I$ such that $S \not \models \tau(t_{i_0}) = \tau(s_{i_0})$. Then there exists a valuation $v:$ Term$_{{\mathcal IE}^*_B} \rightarrow S $ such that $v(\tau(t_{i_0})) \not =
v(\tau(s_{i_0}))$. By Proposition \[EXTENS\] there exists a valuation $v_c:$ Term$_{{\mathcal IE}_B} \rightarrow P_c(S)$ such that for each $t\in$ Term$_{{\mathcal IE}_B}$, $v_c(t) =
v(\tau(t))$. Then $v_c(t_{i_0}) = v(\tau(t_{i_0})) \not =
v(\tau(s_{i_0})) = v_c(s_{i_0})$ and $P_c(S)\not \models t_{i_0} =
s_{i_0}$ which is a contradiction. Hence $S \models \tau(t_i) =
\tau(s_i)$ for each $i\in I$.
On the other hand, assume that $S \in {\mathcal IE}^*_B$ and $S
\models \tau(t_i) = \tau(s_i)$ for each $i\in I$. Suppose that there exists $i_0 \in I$ such that $P_c(S)\not \models t_{i_0} =
s_{i_0}$. Then there exists a valuation $v:$ Term$_{{\mathcal
IE}_B} \rightarrow P_c(S)$ such that $v(t_{i_0}) \not = v(s_{i_0})$. By Proposition \[EXTENS2\], there exists a valuation $v^*:$ Term$_{{\mathcal IE}^*_B} \rightarrow S $ such that for each $t\in$ Term$_{{\mathcal IE}_B}$, $v^*(\tau(t)) = v(t)$. Then $
v^*(\tau(t_{i_0})) = v(t_{i_0}) \not = v(s_{i_0}) =
v^*(\tau(s_{i_0})) $ and $S \not \models \tau(t_{i_0}) =
\tau(s_{i_0})$ which is a contradiction. Hence, for each $i\in I$, we have $P_c(S)\models t_i = s_i$.
------------------------------------------------------------------------
Baer $^*$-semigroups and the full class of two-valued states {#FULLCLASS}
============================================================
The category of orthomodular lattices admitting a two-valued state, denoted by ${\mathcal TE}_B$, is the full subcategory of ${\mathcal E}_B$ whose objects are $E_B$-lattices $(L,\sigma)$ satisfying the following condition: $$x\bot y \hspace{0.2cm} \Longrightarrow \hspace{0.2cm} \sigma(x\lor y) = \sigma(x)+\sigma(y).$$ In [@DFD Theorem 6.3] it is proved that the variety $${\mathcal ITE}_B = {\mathcal IE}_B + \{s(x \lor (y \land \neg x) )= s(x) \lor s(y \land \neg x) \}$$ equationally characterizes ${\mathcal TE}_B$ in the sense of Definition \[DEF1\]. Thus, the objects of ${\mathcal TE}_B$ are identifiable with the directly indecomposable algebras of ${\mathcal ITE}_B$. By Theorem \[1EQ\] we can give an equational theory in the frame of Baer $^*$-semigroups that captures the concept of two-valued state. In fact this is done through the variety $${\mathcal ITE}^*_B = {\mathcal IE}^*_B + \{s(x' \lor (y' \land x'') )= s(x') \lor s(y' \land x'') \}$$
Baer $^*$-semigroups and Jauch-Piron two-valued states {#JAUCHPIRON}
======================================================
The category of orthomodular lattices admitting a Jauch-Piron two-valued state [@RU], denoted by ${\mathcal JPE}_B$, is the full subcategory of ${\mathcal TE}_B$ whose objects are $E_B$-lattices $(L,\sigma)$ in ${\mathcal TE}_B$ also satisfying the following condition: $$\sigma(x) = \sigma(y) = 1 \hspace{0.3cm} \Longrightarrow \hspace{0.3cm} \sigma(x\land y) = 1$$ In [@DFD Theorem 7.3] it is proved that the variety $${\mathcal IJPE}_B = {\mathcal ITE}_B + \{s(x) \land s(\neg x \lor y) = s(x\land y) \}$$ equationally characterizes ${\mathcal JPE}_B$ in the sense of Definition \[DEF1\]. Thus the objects of ${\mathcal JPE}_B$ are identifiable with the directly indecomposable algebras of ${\mathcal IJPE}_B$. By Theorem \[1EQ\] we can give an equational theory in the frame of Baer $^*$-semigroups that captures the concept of Jauch-Piron two-valued state. In fact this is done through the variety $${\mathcal IJPE}^*_B = {\mathcal ITE}^*_B + \{ s(x') \land s(x'' \lor y') = s(x'\land y')\}$$
The problem of equational completeness in ${\mathcal A}^*_I$ {#APROBLEM}
============================================================
Let ${\mathcal A}$ be a family of $E_B$-lattices. Suppose that the subvariety ${\mathcal A}_I$ of ${\mathcal IE}_B$ equationally characterizes ${\mathcal A}$ in the sense of Definition \[DEF1\]. Then, through a functor ${\mathcal I}$, ${\mathcal A}$ is identifiable with the directly indecomposable algebras of the variety ${\mathcal A}_I$. In this way, we can state that ${\mathcal A}$ determines the equational theory of ${\mathcal A}_I$. With the natural extension of Boolean pre-states to Baer $^*$-semigroups, encoded in ${\mathcal A}^*_I$, this kind of characterization may be lost. More precisely, the class ${\mathcal A}$ may “not rule” the equational theory of ${\mathcal A}^*_I$ in the way ${\mathcal A}$ does with ${\mathcal A}_I$. The following example shows such a situation:
\[problem1\]
Let $\widetilde{{\mathcal B}}$ be the subclass of ${\mathcal E}_B$ formed by the pairs $(B,\sigma)$ such that $B$ is a Boolean algebra and $\sigma$ is a Boolean pre-state. $\widetilde{{\mathcal B}}$ is a non-empty class since Boolean homomorphisms of the form $B\rightarrow {\mathbf 2} $ always exist for each Boolean algebra $B$ and they are examples of Boolean pre-states. It is clear that the class $${\widetilde{{\mathcal B}}}_I = {\mathcal IE}_B + \{x \land (y \lor z) = (x\land y) \lor (x \land z ) \}$$ equationally characterizes the class $\widetilde{{\mathcal B}}$ in the sense of Definition \[DEF1\]. Note that ${\widetilde{{\mathcal B}}}_I$ may be seen as a subvariety of ${\mathcal IE}^*_B$ since each algebra $B$ in ${\widetilde{{\mathcal B}}}_I$, in the signature $\langle \land, *, \neg, s, 0 \rangle$ where $*$ is the identity, is an $IE^*_B$-semigroup. Then the equational theory of ${\widetilde{{\mathcal B}}}_I$, as a variety of $IE^*_B$-semigroups, is determined by the algebras of $\widetilde{{\mathcal B}}$. Note that algebras of ${\widetilde{{\mathcal B}}}_I$ are commutative Baer $^*$-semigroups and then we have $${\widetilde{{\mathcal B}}}_I \models x\cdot y = y \cdot x$$ What we want to point out is the following: ${\widetilde{{\mathcal B}}}_I$ captures (although in some sense a trivial one) the concept of Boolean pre-states over Boolean algebras in a variety. Moreover $\widetilde{{\mathcal B}}$ also determines the equational theory of ${\widetilde{{\mathcal B}}}_I$ when ${\widetilde{{\mathcal B}}}_I$ is seen as a variety of $IE^*_B$-semigroups.
Let us now compare the last result with Definition \[TRANS\] and Theorem \[1EQ\]. The variety ${\widetilde{{\mathcal B}}}^*_I$ given by $$\begin{aligned}
{\widetilde{{\mathcal B}}}^*_I & = & {\mathcal IE}^*_B + \{ \tau(x \land (y \lor z)) = \tau ((x\land y) \lor (x \land z )) \} \\
& = & {\mathcal IE}^*_B + \{x' \land (y' \lor z') = (x'\land y')
\lor (x' \land z' ) \}\end{aligned}$$ is the biggest subvariety of ${\mathcal IE}^*_B$ whose algebras have a lattice of closed projections with Boolean structure and then ${\widetilde{{\mathcal B}}}_I \subseteq {\widetilde{{\mathcal
B}}}^*_I$. We shall prove that ${\widetilde{{\mathcal B}}}_I \not = {\widetilde{{\mathcal B}}}^*_I$, i.e., the inclusion is proper. In fact:
Let $B_4$ be the Boolean algebra of four elements $\{0, a, \neg a, 1
\}$ endowed with the operation $s(x) = x$, i.e., the identity on $B_4$. In this case $B_4 \in {\widetilde{{\mathcal B}}}_I$. According to Theorem \[PRO2\], we consider the Baer $^*$-semigroup $S(B_4)$ of residuated functions of $B_4$. Since we can identify $B_4$ with $P_c(S(B_4))$, by Theorem \[BSS3\] we can extend $s$ to $S(B_4)$. Therefore $S(B_4)$ may be seen as an algebra of ${\widetilde{{\mathcal B}}}^*_I$. Consider the function $\phi: B_4 \rightarrow B_4$ such that $0\phi =\phi(0) = 0$, $1\phi
=\phi(1) = 1$, $a\phi = \phi(a) = \neg a$ and $(\neg a)\phi =
\phi(\neg a) = a$. Note that $\phi$ is an order-preserving function and the composition $\phi \phi = 1_{B_4}$. Hence $\phi$ is the residual function of itself and then $\phi \in S(B_4)$. Let $\phi_a$ be the Sasaki projection associated with $a$. Then $x\phi_a = (x\lor
\neg a) \land a = x \land a$. Note that $a(\phi \phi_a) =
(a\phi)\land a = \neg a \land a = 0$ and $a(\phi_a \phi) =
(a\phi_a)\phi = a\phi = \neg a$.
This proves that $\phi \phi_a \not = \phi_a \phi$ and then, $S(B_4)
\not \in {\widetilde{{\mathcal B}}}_I$ because ${\widetilde{{\mathcal B}}}_I$ is a variety of commutative Baer $^*$-semigroups. Thus ${\widetilde{{\mathcal B}}}_I \not =
{\widetilde{{\mathcal B}}}^*_I$ and the inclusion is proper.
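The computations of this example can be replayed mechanically. The sketch below (ours; plain Python with $B_4$ hard-coded, writing A for $\neg a$; functions are applied on the left, so $a(\phi\phi_a)$ of the text corresponds to `phi_a(phi("a"))` here) checks that $\phi$ is its own residual and that the two compositions already differ at $a$.

```python
# Illustrative sketch: non-commutativity of S(B4) for B4 = {0, a, A, 1}, A = ~a.
B4 = ["0", "a", "A", "1"]
NEG = {"0": "1", "1": "0", "a": "A", "A": "a"}

def leq(x, y):
    return x == y or x == "0" or y == "1"

def meet(x, y):
    if leq(x, y): return x
    if leq(y, x): return y
    return "0"                                 # a ^ ~a = 0

def join(x, y):
    return NEG[meet(NEG[x], NEG[y])]

def phi(x):                                    # the order-preserving involution of the example
    return {"0": "0", "1": "1", "a": "A", "A": "a"}[x]

def phi_a(x):                                  # Sasaki projection of a: (x v ~a) ^ a = x ^ a
    return meet(join(x, NEG["a"]), "a")

# phi is its own residual, hence phi belongs to S(B4)
assert all(leq(phi(y), x) == leq(y, phi(x)) for x in B4 for y in B4)

# The two compositions disagree at a: a(phi.phi_a) = 0 while a(phi_a.phi) = ~a
print(phi_a(phi("a")), phi(phi_a("a")))        # prints: 0 A
```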
Hence the directly indecomposable algebras of ${\widetilde{{\mathcal B}}}_I$ considered as $IE^*_B$-semigroups do not determine the equational theory of ${\widetilde{{\mathcal B}}}^*_I$. Consequently, $\widetilde{{\mathcal B}}$ does not significantly contribute to the equational theory of ${\widetilde{{\mathcal B}}}^*_I$.
Taking into account Example \[problem1\], the following problem may be posed:
- [*Let ${\mathcal A}$ be a class of $E_B$-lattices and suppose that the subvariety ${\mathcal A}_I$ of ${\mathcal IE}_B$ equationally characterizes ${\mathcal A}$ in the sense of Definition \[DEF1\]. Give a subvariety ${\mathcal G}^*$ of ${\mathcal A}^*_I$ whose equational theory can be determined from the class ${\mathcal A}$.*]{}
We conclude this section by defining the meaning of the statement that the class ${\mathcal A}$ of $E_B$-lattices determines the equational theory of a subvariety of ${\mathcal A}^*_I$.
\[problem3\]
Let ${\mathcal A}$ be a class of $E_B$-lattices. Suppose that the variety ${\mathcal A}_I$ of $IE_B$-lattices equationally characterizes the class ${\mathcal A}$ and ${\mathcal I}: {\mathcal
A} \rightarrow {\mathcal D}({\mathcal A}_I)$ is the functor that provides the categorical equivalence between ${\mathcal A}$ and the category ${\mathcal D}({\mathcal A}_I)$ of directly indecomposable algebras of ${\mathcal A}_I$. We say that ${\mathcal A}$ determines the equational theory of a subvariety ${\mathcal G}^*$ of ${\mathcal
A}^*_I$ iff there exists a class operator $${\mathbb G} : {\mathcal
A}_I \rightarrow {\mathcal A}^*_I$$ such that:
1. For each $L \in {\mathcal A}$, ${\mathbb G}{\mathcal I}(L)$ is a directly indecomposable algebra in ${\mathcal A}^*_I$.
2. ${\mathcal G}^* = {\mathcal V}(\{{\mathbb G}{\mathcal I}(L): L \in
{\mathcal A} \})$.
In the next section, we will study a class operator, denoted by ${\mathbb S}_0$, that will allow us to define a subvariety of ${\mathcal A}^*_I$ whose equational theory is determined by ${\mathcal A}$ in the sense of Definition \[problem3\].
The class operator ${\mathbb S}_0$ {#OPERATORE}
==================================
Let $\langle L, \land, \lor, \neg, s, 0,1 \rangle$ be an $IE_B$-lattice. By Corollary \[BS4\], we consider the $IE^*_B$-semigroup $S(L)$ of residuated functions of $L$. By abuse of notation, we also denote by $s$ the operation $s_{S(L)}$ on $S(L)$ where $s_{S(L)}(x) = s(x'')$. Let $S_0(L)$ be the sub Baer $^*$-semigroup of $S(L)$ generated by the Sasaki projections on $L$. In the literature, $S_0(L)$ is referred to as [*the small Baer $^*$-semigroup of products of Sasaki projections on $L$*]{} [@AD; @FOU]. By Corollary \[BS5\], $S_0(L)$ with the restriction $s/_{S_0(L)}$ is a sub $IE^*_B$-semigroup of $S(L)$. This $IE^*_B$-semigroup will be denoted by ${\mathbb S}_0(L)$.
Since $P_c({\mathbb S}_0(L))$ is $IE_B$-isomorphic to $L$, if ${\mathcal A}_I$ is a subvariety of ${\mathcal IE}_B$ and $L \in
{\mathcal A}_I$ then ${\mathbb S}_0(L) \in {\mathcal A}^*_I$. These results allow us to define the following class operator: $${\mathbb
S}_0: {\mathcal A}_I \rightarrow {\mathcal A}^*_I \hspace{0.4cm}s.t.
\hspace{0.2cm} L\mapsto {\mathbb S}_0(L)$$
\[HOMBAER\] Let $L_1, L_2$ be two $IE_B$-lattices and $f:L_1 \rightarrow L_2$ be an $IE_B$-homomorphism.
1. If $\phi_{a_1} \ldots \phi_{a_n}$ are Sasaki projections in $L_1$ then for each $x \in L_1$ we have $f(x\phi_{a_1} \ldots \phi_{a_n})
= f(x) \phi_{f(a_1)} \ldots \phi_{f(a_n)} $.
2. If $f$ is a surjective function then there exists a unique $IE^*_B$-homomorphism $g: {\mathbb S}_0(L_1) \rightarrow {\mathbb S}_0(L_2)$ such that, identifying $L_1$ with $P_c({\mathbb S}_0(L_1))$, $g/_{L_1} = f$. Moreover, $g$ is a surjective function.
3. If $f$ is bijective then ${\mathbb S}_0(L_1)$ and ${\mathbb
S}_0(L_2)$ are $IE^*_B$-isomorphic.
1\) We use induction on $n$. Suppose that $n=2$. Then $$\begin{aligned}
f(x\phi_{a_1} \phi_{a_2}) & = & f( (((x\lor \neg a_1)\land a_1) \lor \neg a_2) \land a_2 ) \\
& = & (((f(x)\lor \neg f(a_1))\land f(a_1)) \lor \neg f(a_2)) \land f(a_2) \\
& = &f(x)\phi_{f(a_1)} \phi_{f(a_2)}\end{aligned}$$ Suppose that the result holds for $m < n$. Then: $$\begin{aligned}
f(x\phi_{a_1} \ldots \phi_{a_n}) & = & f((x\phi_{a_1} \ldots \phi_{a_{n-1}} \lor \neg a_n) \land a_n ) \\
& = & (f(x) \phi_{f(a_1)} \ldots \phi_{f(a_{n-1})} \lor \neg f(a_n)) \land f(a_n) \\
& = & f(x)\phi_{f(a_1)} \ldots \phi_{f(a_n)}\end{aligned}$$
2\) Suppose that $f:L_1 \rightarrow L_2$ is a surjective $IE_B$-homomorphism. If $\phi \in {\mathbb S}_0(L_1)$ then $\phi =
\phi_{a_1} \ldots \phi_{a_n}$ where $\phi_{a_i}$ are Sasaki projections on $L_1$. We define the function $g: {\mathbb S}_0(L_1)
\rightarrow {\mathbb S}_0(L_2)$ such that $$g(\phi) = g(\phi_{a_1}
\ldots \phi_{a_n}) = \phi_{f(a_1)} \ldots \phi_{f(a_n)}$$ We first prove that $g$ is well defined. Suppose that $\phi =
\phi_{a_1} \ldots \phi_{a_n} = \phi_{c_1} \ldots \phi_{c_m}$. Let $b \in L_2$. Since $f$ is a surjective function then there exists $a\in L_1$ such that $f(a) = b$. Then by item 1, $$\begin{aligned}
b\phi_{f(c_1)} \ldots \phi_{f(c_m)} & = & f(a) \phi_{f(c_1)} \ldots \phi_{f(c_m)} \\
& = & f(a\phi_{c_1} \ldots \phi_{c_m}) \\
& = & f(a\phi_{a_1} \ldots \phi_{a_n}) \\
& = & f(a)\phi_{f(a_1)} \ldots \phi_{f(a_n)} \\
& = & b \phi_{f(a_1)} \ldots \phi_{f(a_n)}\end{aligned}$$ Thus $g(\phi_{a_1} \ldots \phi_{a_n}) = g(\phi_{c_1} \ldots
\phi_{c_m})$ and $g$ is well defined. Note that for each $a\in
L_1$, $g(\phi_a) = \phi_{f(a)}$ and then $g/_{L_1} = f$ identifying $L_1$ with $P_c({\mathbb S}_0(L_1))$. The surjectivity of $g$ follows immediately from the surjectivity of $f$. By definition of $g$, it is immediate that $g$ is a $\langle \circ, ^*, 0
\rangle$-homomorphism where $\psi \circ \phi = \psi \phi$. We prove that $g$ preserves the operation $'$. Suppose that $\phi =
\phi_{a_1} \ldots \phi_{a_n}$. By Theorem \[PRO2\] $\phi' =
\phi_{\neg 1\phi} = \phi_{\neg 1\phi_{a_1} \ldots \phi_{a_n}}$. By item 1 we have that $$\begin{aligned}
g(\phi') & = & g(\phi_{\neg 1\phi_{a_1} \ldots \phi_{a_n}}) \\
& = & \phi_{f(\neg 1\phi_{a_1} \ldots \phi_{a_n})} \\
& = & \phi_{\neg f(1)\phi_{f(a_1)} \ldots \phi_{f(a_n)}}\\
& = & \phi_{\neg 1 g(\phi)}\\
& = & g(\phi)'\end{aligned}$$ Thus, $g$ preserves the operation $'$. Now we prove that $g$ preserves $s$. By Proposition \[BS2\]-3 $s(\phi) = s(\phi'')$. Then there exists $a\in L_1$ such that $\phi'' = \phi_a$. By Corollary \[BS4\], $g(s(\phi)) = g(s(\phi_a)) = g(\phi_{s(a)}) =
\phi_{f(s(a))} = \phi_{s(f(a))} = s(\phi_{f(a)}) = s(g (\phi_a)) =
s(g(\phi'')) = s(g(\phi)'') = s(g(\phi))$ and $g$ preserves the operation $s$. Hence $g$ is a surjective $IE^*_B$-homomorphism such that, identifying $L_1$ with $P_c({\mathbb S}_0(L_1))$, $g/_{L_1} = f$. We have to prove that $g$ is unique. Suppose that there exists an $IE^*_B$-homomorphism $h: {\mathbb S}_0(L_1) \rightarrow {\mathbb S}_0(L_2)$ such that $h/_{L_1} = f$. Let $\phi = \phi_{a_1} \ldots \phi_{a_n} \in {\mathbb S}_0(L_1)$. Then $ h(\phi) = h(\phi_{a_1} \ldots \phi_{a_n}) = h(\phi_{a_1}) \ldots h(\phi_{a_n}) = f(\phi_{a_1}) \ldots f(\phi_{a_n}) = g(\phi_{a_1} \ldots \phi_{a_n} ) = g(\phi)$. Thus, $h = g$ and this proves the uniqueness of $g$.
3\) To prove this item, we assume that $f$ is bijective and use the function $g$ of item 2. Then we have to prove that $g$ is injective. Suppose that $g(\phi) = g(\psi)$ where $\phi, \psi \in {\mathbb S}_0(L_1)$. Suppose that $\phi = \phi_{a_1} \ldots \phi_{a_n}$ and $\psi =\phi_{c_1} \ldots \phi_{c_m}$. By item 1, for each $x\in
L_1$ we have that: $$\begin{aligned}
f(x\phi_{a_1} \ldots \phi_{a_n}) & = & f(x) \phi_{f(a_1)} \ldots \phi_{f(a_n)}\\
& = & f(x)g(\phi_{a_1} \ldots \phi_{a_n})\\
& = & f(x)g(\phi) \\
& = & f(x)g(\psi)\\
& = & f(x)\phi_{f(c_1)} \ldots \phi_{f(c_m)}\\
& = & f(x\phi_{c_1} \ldots \phi_{c_m})\end{aligned}$$ Since $f$ is bijective, $x\phi_{a_1} \ldots \phi_{a_n} = x\phi_{c_1} \ldots \phi_{c_m}$ for each $x\in L_1$, and then $\phi = \psi$. Thus $g$ is bijective.
------------------------------------------------------------------------
\[HOMBAER1\] Let $A$ be a sub $IE_B$-lattice of $L$. Then there exists a sub $IE^*_B$-semigroup $S_A$ of ${\mathbb S}_0(L)$ such that $A$ is $IE_B$-isomorphic to $P_c(S_A)$.
Consider the set $$S_A = \bigcup_{n \in {\mathbb{N}}} \{
\phi_{a_1}\phi_{a_2}\ldots \phi_{a_n}: a_i\in A \}$$ where $\phi_{a_i}$ are Sasaki projections on $L$. Note that in general $S_A \not = {\mathbb S}_0(A)$ since the domain of Sasaki projections $\phi_{a_i}$ is $L$ (and not $A$). In [[@AD Proposition 10]]{} it is proved that $S_A$ is a sub Baer $^*$-semigroup of ${\mathbb S}_0(L)$ in which $A$ is $OML$-isomorphic to $P_c(S_A)$ and then, $A$ is $IE_B$-isomorphic to $P_c(S_A)$. Thus, by Corollary \[BS5\], $S_A$ is a sub $IE^*_B$-semigroup of ${\mathbb S}_0(L)$.
------------------------------------------------------------------------
\[HOMBAER3\] Let $S$ be an $IE^*_B$-semigroup and for each $a\in S$ we define the function $\psi_a: P_c(S) \rightarrow P_c(S)$ such that $\psi_a(x) =
(x a)''$. Then
1. If $a \in P_c(S)$ then $\psi_a = \phi_a$.
2. $f: S \rightarrow S(P_c(S))$ such that $f(a) = \psi_a$ is an $IE^*_B$-homomorphism.
3. Let $S_0$ be the sub $IE^*_B$-semigroup of $S$ generated by $P_c(S)$. If we consider the restriction $f/_{S_0}$ then Imag($f/_{S_0}$)$ = {\mathbb S}_0(P_c(S))$.
1\) Suppose that $a\in P_c(S)$. By [[@MM Lemma 37.10 ]]{}, $\psi_a(x) = (x a)'' = (x \lor \neg a) \land a = x\phi_a$. Hence $\psi_a = \phi_a$.
2\) In [[@AD Proposition 7]]{} it is proved that $f$ preserves the operations $\langle \cdot,^*,',0 \rangle$. Then we have to prove that $f$ preserves the operation $s$. Note that, by item 1, $f/_{P_c(S)}$ is the $IE_B$-isomorphism $a\mapsto \phi_a$. Then, by Corollary \[BS4\], $f(s(a)) = s(f(a))$ for each $a\in P_c(S)$. Taking into account that for each $a\in S$, $s(a) = s(a'')$ we have that, $f(s(a)) = f(s(a'')) = s(f(a'')) = s(f(a)'') = s(f(a))$. Thus $f$ is an $IE^*_B$-homomorphism.
3\) Suppose that $\varphi \in {\mathbb S}_0(P_c(S))$. Then $\varphi =
\phi_{a_1} \ldots \phi_{a_n}$ for some $a_1, \ldots ,a_n$ in $P_c(S)$. If we consider the element $a = a_1 a_2 \ldots a_n$ then $a\in S_0$ and, since $f$ is an $IE^*_B$-homomorphism, $f(a) = f(a_1
a_2 \ldots a_n) = f(a_1) f(a_2) \ldots f(a_n) = \phi_{a_1}
\phi_{a_2} \ldots \phi_{a_n} = \varphi$. Hence Imag($f/_{S_0}$)$ = {\mathbb S}_0(P_c(S))$.
------------------------------------------------------------------------
\[PRODG0\] Let $(L_i)_{i\in I}$ be a family of $IE_B$-lattices. Then:
1. If $\vec{a} = (a_i)_{i\in I} \in \prod_{i\in I} L_i$ then the Sasaki projection $\phi_{\vec{a}}: \prod_{i\in I} L_i \rightarrow
\prod_{i\in I} L_i$ satisfies that for each $\vec{x} = (x_i)_{i\in
I}$, $\vec{x}\phi_{\vec{a}} = (x_i\phi_{a_i})_{i\in I}$.
2. If $\vec{a} = (a_i)_{i\in I}$ and $\vec{b} = (b_i)_{i\in I}$ are elements in $\prod_{i\in I} L_i$ then for each $\vec{x} =
(x_i)_{i\in I}$, $\vec{x} \phi_{\vec{a}} \phi_{\vec{b}} =
(x_i\phi_{a_i} \phi_{b_i})_{i\in I}$.
3. ${\mathbb S}_0(\prod_{i\in I} L_i)$ is $IE^*_B$-isomorphic to $\prod_{i\in I} {\mathbb S}_0 (L_i)$
1\) Let $\vec{a} = (a_i)_{i\in I} \in \prod_{i\in I} L_i$. Then $$\begin{aligned}
\vec{x} \phi_{\vec{a}} & = & ( (x_i)_{i\in I} \lor \neg (a_i)_{i\in I} ) \land (a_i)_{i\in I} \\
& = & (( x_i \lor \neg a_i ) \land a_i)_{i\in I}\\
& = & (x_i\phi_{a_i})_{i\in I}\end{aligned}$$
2\) Let $\vec{a} = (a_i)_{i\in I}$ and $\vec{b} = (b_i)_{i\in I}$ be two elements in $\prod_{i\in I} L_i$ and $\vec{x} = (x_i)_{i\in I}$. Then, by item 1, we have that: $$\begin{aligned}
\vec{x} \phi_{\vec{a}} \phi_{\vec{b}} & = & (\vec{x} \phi_{\vec{a}}) \phi_{\vec{b}} \\
& = & ((x_i\phi_{a_i})_{i\in I}) \phi_{\vec{b}}\\
& = & (x_i\phi_{a_i} \phi_{b_i})_{i\in I}\end{aligned}$$
3\) Follows from item 2.
------------------------------------------------------------------------
\[DIRECTINDB\] Let $L$ be an $IE_B$-lattice. Then, $L$ is directly indecomposable iff ${\mathbb S}_0(L)$ is directly indecomposable
Suppose that ${\mathbb S}_0(L)$ admits a non trivial decomposition in direct products of $IE^*_B$-semigroups i.e. ${\mathbb S}_0(L) =
\prod_{i\in I} S_i$. Then, by Proposition \[PRO11\], we can see that $L \approx_{IE_B} P_c({\mathbb S}_0(L)) \approx_{IE_B}
P_c(\prod_{i\in I} S_i) \approx_{IE_B} \prod_{i\in I} P_c(S_i)$. Thus $L$ admits a non trivial decomposition in direct products of $IE_B$-lattices.
Suppose that $L$ admits a non trivial decomposition in direct products of $IE_B$-lattices i.e. $L = \prod_{i\in I} L_i$. Then, by Proposition \[PRODG0\]-3, ${\mathbb S}_0(L) = {\mathbb
S}_0(\prod_{i\in I} L_i) \approx_{IE^*_B} \prod_{i\in I} {\mathbb
S}_0(L_i)$. Thus ${\mathbb S}_0(L)$ admits a non trivial decomposition in direct products of $IE^*_B$-semigroups.
------------------------------------------------------------------------
Let ${\mathcal A}_I$ be a variety of $IE_B$-lattices. We denote by ${\mathcal G}^*({\mathcal A}_I)$ the subvariety of ${\mathcal
A}_I^*$ generated by the class $\{{\mathbb S}_0(L): L \in {\mathcal
A}_I \}$. More precisely, $${\mathcal G}^*({\mathcal A}_I) = {\mathcal V}(\{{\mathbb S}_0(L): L \in {\mathcal
A}_I \})$$
We also introduce the following subclass of ${\mathcal
G}^*({\mathcal A}_I)$ $${\mathcal G}_D^*({\mathcal A}_I) =
\{{\mathbb S}_0(L): L \in {\mathcal D}({\mathcal A}_I) \}$$ where ${\mathcal D}({\mathcal A}_I)$ is the class of the directly indecomposable algebras of ${\mathcal A}_I$. By Proposition \[DIRECTINDB\], we can see that ${\mathcal G}_D^*({\mathcal A}_I)$ is a subclass of the directly indecomposable algebras of ${\mathcal A}_I^*$.
\[COMPLETENESSEXP\] Let ${\mathcal A}_I$ be a variety of $IE_B$-lattices. Then $${\mathcal G}^*({\mathcal A}_I) \models t= r \hspace{0.3cm}
\mathrm{iff} \hspace{0.3cm} {\mathcal G}_D^*({\mathcal A}_I) \models
t= r$$
As regards the non-trivial direction, assume that $${\mathcal G}_D^*({\mathcal A}_I) \models t(x_1, \ldots,x_n) = r(x_1, \ldots,x_n)$$ Let ${\mathbb S}_0(L) \in {\mathcal G}^*({\mathcal A}_I)$. By the subdirect representation theorem, there exists an $IE_B$-lattice embedding $\iota: L \hookrightarrow \prod_{i\in I} L_i $ where $(L_i)_{i\in I}$ is a family of subdirectly irreducible algebras in ${\mathcal A}_I$. Therefore, $L_i \in {\mathcal D}({\mathcal A}_I) $ and $ {\mathbb S}_0(L_i) \in {\mathcal G}_D^*({\mathcal A}_I)$ for each $i\in I$. By Proposition \[HOMBAER1\], there exist an $IE^*_B$-semigroup $F$ and an embedding $\iota_F: F \hookrightarrow {\mathbb S}_0(\prod_{i\in I} L_i)$ such that $L$ is $IE_B$-isomorphic to $P_c(F)$. By Proposition \[PRODG0\], we can assume that the $IE^*_B$-semigroup embedding $\iota_F$ is of the form $\iota_F: F \hookrightarrow \prod_{i\in I} {\mathbb S}_0(L_i)$. By Proposition \[HOMBAER3\], if we consider the sub $IE^*_B$-semigroup $F_0$ of $F$ generated by $P_c(F)$ then there exists a surjective $IE^*_B$-homomorphism $f:F_0 \rightarrow {\mathbb S}_0(L)$. The following diagram provides some intuition:
[Diagram: $F_0 \hookrightarrow F \stackrel{\iota_F}{\hookrightarrow} \prod_{i\in I} {\mathbb S}_0(L_i)$, together with the surjective $IE^*_B$-homomorphism $f: F_0 \rightarrow {\mathbb S}_0(L)$.]
Since $F_0$ can be embedded into a direct product $\prod_{i\in I}
{\mathbb S}_0(L_i)$, where $ {\mathbb S}_0(L_i) \in {\mathcal
G}_D^*({\mathcal A}_I)$ for each $i\in I$, by hypothesis, we have that: $$F_0 \models t(x_1, \ldots,x_n) = r(x_1, \ldots,x_n)$$ Let $\vec{a}
= (a_1, \ldots, a_n)$ be a sequence in ${\mathbb S}_0(L)$. Since $f$ is surjective, there exists a sequence $\vec{m} = (m_1,\ldots, m_n)$ in $F_0$ such that $f(\vec{m}) = (f(m_1),\ldots, f(m_n)) = \vec{a}$. Since $t^{F_0}(\vec{m}) = r^{F_0}(\vec{m})$ then $t^{{\mathbb S}_0(L)}(\vec{a}) = f(t^{F_0}(\vec{m})) = f(r^{F_0}(\vec{m})) = r^{{\mathbb S}_0(L)}(\vec{a})$. Hence ${\mathbb S}_0(L) \models
t(x_1, \ldots,x_n) = r(x_1, \ldots,x_n)$ and the equation holds in ${\mathcal G}^*({\mathcal A}_I)$.
------------------------------------------------------------------------
Even though the study of equations in ${\mathcal G}^*({\mathcal A}_I)$ is quite tractable thanks to the result obtained in Theorem \[COMPLETENESSEXP\], we do not have in general a full description of the equational system that defines the variety ${\mathcal G}^*({\mathcal A}_I)$. The following corollary provides an interesting property of ${\mathcal G}^*({\mathcal A}_I)$.
Let ${\mathcal A}_I$ be a variety of $IE_B$-lattices. Then $${\mathcal G}^*({\mathcal A}_I) \models s(x)\cdot y = y\cdot s(x).$$
Let $S$ be an algebra in ${\mathcal G}_D^*({\mathcal A}_I)$. Then for each $x\in S$, $s(x) \in \{0,1\}$ and $s(x)\cdot y = y\cdot
s(x)$. Hence by Theorem \[COMPLETENESSEXP\], ${\mathcal
G}^*({\mathcal A}_I) \models s(x)\cdot y = y\cdot s(x)$.
------------------------------------------------------------------------
Let ${\mathcal A}_I$ be a variety of $IE_B$-lattices. Note that the assignment $L \mapsto {\mathbb S}_0(L)$ defines a class operator of the form $${\mathbb S}_0:{\mathcal A}_I \rightarrow {\mathcal
G}^*({\mathcal A}_I) \subseteq {\mathcal A}^*_I$$ Taking into account Definition \[problem3\], by Proposition \[DIRECTINDB\] and Theorem \[COMPLETENESSEXP\] we can establish the following result:
\[DETER\] Let ${\mathcal A}$ be a class of $E_B$-lattices. Suppose that the variety ${\mathcal A}_I$ of $IE_B$-lattices equationally characterizes the class ${\mathcal A}$. Then the class ${\mathcal
A}$ determines the equational theory of ${\mathcal G}^*({\mathcal
A}_I)$.
------------------------------------------------------------------------
This last theorem provides a solution to the problem posed in Section \[APROBLEM\].
Final remarks
=============
We have developed an algebraic framework that allows us to extend families of two-valued states on orthomodular lattices to Baer $^*$-semigroups. To do so, we have explicitly enriched this variety with a unary operation that captures the concept of two-valued states on Baer $^*$-semigroups as an equational theory. Moreover, a decidable method to find the equational system is given. We have also applied this general approach to study the full class of two-valued states and the subclass of Jauch-Piron two-valued states on Baer $^*$-semigroups.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors wish to thank an anonymous referee for his/her careful reading of our manuscript and useful comments on an earlier draft. His/her remarks have substantially improved our paper. This work was partially supported by the following grants: PIP 112-201101-00636, Ubacyt 2011/2014 635, FWO project G.0405.08 and FWO-research community W0.030.06. CONICET RES. 4541-12 (2013-2014).\
[^1]: Fellow of the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
---
abstract: 'An interpretation of Einstein-Hilbert gravity equations as Lagrangian reduction of Palatini gravity is made. The main techniques involved in this task are those developed in a previous work [@Capriotti2019] for Routh reduction in classical field theory. As a byproduct of this approach, a novel set of conditions for the existence of a vielbein for a given metric is found.'
address: |
Departamento de Matemática, Universidad Nacional del Sur\
Av. Alem 1253, 8000 Bahía Blanca, Argentina\
Instituto de Matemática de Bahía Blanca (INMABB), CONICET
author:
- Santiago Capriotti
title: Routh reduction of Palatini gravity in vacuum
---
[^1]
Introduction {#sec:Intro}
============
The relationship between the Einstein-Hilbert and Palatini formulations of gravity has been studied in several places. Nevertheless, the main theoretical tool used in the discussion of the connection between these formulations appears to be some flavor of Hamiltonian reduction. For instance, [@2012GReGr..44.2337D] and [@Romano1993] use the ADM formalism [@citeulike:820116] in order to establish the connection; it has also been explored in [@Cattaneo2019; @Ibort:2016xoo], where the correspondence is set up by using a Hamiltonian structure on the set of fields at the boundary.
From this viewpoint, it becomes interesting to find a reduction scheme relating the Lagrangian formulations of Palatini and Einstein-Hilbert gravity directly, without the detour through the Hamiltonian formalism. So far there exist two ways to implement reduction at the Lagrangian level, namely *Lagrange-Poincaré reduction* [@lopez00:_reduc_princ_fiber_bundl; @2003CMaPh.236..223L; @ellis11:_lagran_poinc] and *Routh reduction* [@Marsden00reductiontheory; @CrampinMestdagRouth; @eprints21388; @GarciaToranoAndres2016291; @Capriotti201723; @Capriotti2019]. Moreover, there are physical considerations in support of this kind of reduction: these schemes deal not only with the reduction problem, but also with the reconstruction problem, and it is argued in [@Nawarajan:2016vzv] that reconstruction can be relevant from the physical point of view.
The problem with these approaches to Lagrangian reduction is that they work in the setting of *classical variational problems*, that is, variational problems where the velocity space is a jet space of the field bundle, and where the restrictions imposed on the fields are prescribed by the contact structure of the jet space. In the present work we want to study the reduction of a variational problem of a more general nature, namely, by using the formulation of Palatini gravity given in [@capriotti14:_differ_palat], where it was interpreted as an example of the so-called *Griffiths variational problems* [@book:852048; @hsu92:_calcul_variat_griff]. In this approach, the field bundle is the bundle of frames (whose sections are the *vielbein*), but the jet space is replaced by a submanifold of the jet space of the frame bundle, namely, by the submanifold corresponding to the torsion zero constraint; also, the contact structure is replaced by a set of differential constraints implementing the *metricity conditions*. Therefore, it is necessary to find a formulation of a Lagrangian reduction scheme taking these characteristics into consideration; because of its versatility, we will choose to work with Routh reduction as formulated in [@Capriotti2019].
These considerations set the purposes of the present article: on the one hand, to carry out a proof of concept for the generalization of Routh reduction to variational problems more general than those corresponding to first order field theory, generalizing the techniques employed in [@Capriotti2019]; on the other hand, to apply Routh reduction of field theory in the context of a meaningful example, namely, a formulation of gravity in terms of frames.
In order to be more specific about the nature of this generalization, let us briefly describe how Routh reduction in field theory is performed:
- First, a unified formulation along the lines of [@GotayCartan] is constructed, and its ability to represent the extremals of the original variational problem is proved. This procedure must be done both for the original variational problem and the reduced one. The set of differential forms encoding the restrictions to be imposed on the fields is used at this stage.
- The momentum map is defined in the unified setting, and its momentum level sets are determined. It should be proved that the equations of motion can be naturally restricted to these sets.
- A connection on the bundle obtained by quotienting out the symmetry must be provided. Because of the characteristics of the contact structure on a jet space, this connection allows us to split the fields of the unified formalism. The splitting induced by the chosen connection allows us to define the Routhian and the force term for the reduced system.
- It is necessary to set a common ground for the comparison of the extremals of the unreduced and reduced variational problems. This is done by considering an affine subbundle of the bundle of forms on a fibred product of bundles; the factors in this product are the bundles of the unreduced and the reduced system.
- The equivalence between the extremals of the original variational problem and the reduced variational problem is checked by a map involving the translation along the force term (in the space of forms associated to the unified formalism). In the reconstruction of the extremals of the unreduced system from the reduced dynamics, it is necessary to impose some integrability conditions.
The first two items can also be carried out for Griffiths variational problems; when trying to reproduce the third item in this generalized context, we have to face the problem that the splitting induced by the chosen connection strongly depends on features of the contact structure. Nevertheless, the metricity constraints can be formulated using forms belonging to the contact structure, and so the hopes of reproducing the third item in this context increase. A solution to this problem is provided by Lemma \[lem:cont-bundle-decomp\]. No difficulties arise from the last two items, as they are based on geometrical operations of a general nature; the main results of the article are Theorem \[thm:routh-reduct-palat\] and Theorem \[thm:Reconstruction\], which describe reduction and reconstruction respectively.
The paper is organized as follows. In Section \[sec:LFT\] we will review some geometrical tools necessary for the construction of the variational problem for Palatini gravity we will use in this article; the actual construction of this variational problem, as well as the associated unified problem, is done in Section \[sec:vari-probl-palat\]. The symmetry considerations necessary to carry out the reduction are discussed in Section \[sec:symmetry-momentum\]; Section \[sec:local-coord-expr\] is technical, and contains some calculations used in the reduction and reconstruction theorems. In Section \[sec:metr-cont-struct\] the results achieved in the previous section are employed in the search for identifications between geometrical structures present in both the reduced and unreduced spaces: a remarkable fact in this vein is that the metricity constraints correspond, after projection onto the quotient, to the contact structure of a jet bundle. Construction of the first order formalism for Einstein-Hilbert gravity (and its correspondence with the usual second order formalism) is delayed until Section \[sec:first-order-vari\]; a unified formalism for this variational problem is also discussed in that section. The choice of a connection induces a splitting in the contact structure on the jet space of the frame bundle; in Section \[sec:cont-bundle-decomp\] the effects of this splitting on the variational formulation of Palatini gravity are analyzed. The Routhian is constructed in Section \[sec:routhian\]: it is shown that the Routhian for Palatini gravity is the (first order) Einstein-Hilbert Lagrangian. Finally, in Section \[sec:einst-hilb-grav\] the reduction theorem and the reconstruction theorem are proved. The main result of this section is the notion of *flat condition for a metric*, which is a helpful hypothesis in the proof of the reconstruction theorem.
### Notations {#notations .unnumbered}
We are adopting here the notational conventions from [@saunders89:_geomet_jet_bundl] when dealing with bundles and their associated jet spaces. Also, if $Q$ is a manifold, $\Lambda^p (Q)=\wedge^p(T^*Q)$ denotes the $p$-th exterior power of the cotangent bundle of $Q$. Moreover, for $k\leq l$ the set of $k$-horizontal $l$-forms on a bundle $\pi:P\to N$ is $$\wedge^l_k\left(P\right):=\left\{\alpha\in\wedge^l\left(P\right):v_1\lrcorner\cdots v_k\lrcorner\alpha=0\text{ for any }v_1,\cdots,v_k\text{ }\pi\text{-vertical vectors}\right\}.$$ For the same bundle, the set of vectors tangent to $P$ in the kernel of $T\pi$ will be represented with the symbol $V\pi\subset TP$. In this regard, the set of vector fields which are vertical for a bundle map $\pi:P\to N$ will be indicated by $\mathfrak{X}^{V\pi}\left(P\right)$. The space of differential $p$-forms, sections of $\Lambda^p (Q)\to Q$, will be denoted by $\Omega^p(Q)$. We also write $\Lambda^\bullet(Q)=\bigoplus_{j=1}^{\dim Q}\Lambda^j(Q)$. If $f\colon P\to Q$ is a smooth map and $\alpha_{f(x)}$ is a $p$-covector on $Q$ at the point $f(x)$, we will sometimes use the notation $\alpha_{f(x)}\circ T_xf$ to denote its pullback $\left(f^*\alpha\right)_x$. If $P_1\to Q$ and $P_2\to Q$ are fiber bundles over the same base $Q$ we will write $P_1\times_Q P_2$ for their fibred product, or simply $P_1\times P_2$ if there is no risk of confusion. Unless explicitly stated, the canonical projections onto its factors will be indicated by $$\text{pr}_i:P_1\times P_2\to P_i,\qquad i=1,2.$$ Given a manifold $N$ and a Lie group $G$ acting on $N$, the symbol $\left[n\right]_G$ for $n\in N$ will indicate the $G$-orbit in $N$ containing $n$; the canonical projection onto its quotient will be denoted by $$p_G^N:N\to N/G.$$ Also, if $\mathfrak{g}$ is the Lie algebra for the group $G$, the symbol $\xi_N$ will represent the infinitesimal generator for the $G$-action associated to $\xi\in\mathfrak{g}$. Finally, the Einstein summation convention will be used everywhere.
Geometrical tools for Palatini gravity {#sec:LFT}
======================================
We will give a brief account of the construction carried out in [@doi:10.1142/S0219887818500445]. The basic bundle is the frame bundle $\tau:LM\rightarrow M$ on the spacetime manifold $M$ ($\text{dim}M=m$); because it is a principal bundle with structure group $GL\left(m\right)$, we can lift this action to the jet bundle $J^1\tau$, so that we obtain a commutative diagram $$\begin{tikzcd}[row sep=1.3cm,column sep=1.1cm]
&
J^1\tau
\arrow[swap]{dl}{\tau_{10}}
\arrow{dr}{p_{GL\left(m\right)}^{J^1\tau}}
\arrow{dd}{\tau_1}
&
\\
LM
\arrow[swap]{dr}{\tau}
&
&
C\left(LM\right)
\arrow{dl}{\overline{\tau}}
\\
&
M
&
\end{tikzcd}$$ where $C\left(LM\right):=J^1\tau/GL\left(m\right)$ is the so called *connection bundle of $LM$*, whose sections can be naturally identified with the principal connections of the bundle $\tau$ (for details, see [@springerlink:10.1007/PL00004852] and references therein). It is interesting to note that there exists an affine isomorphism $$F:J^1\tau\rightarrow LM\times_MC\left(LM\right):j_x^1s\mapsto\left(s\left(x\right),\left[j_x^1s\right]_{GL\left(m\right)}\right)$$ and under this correspondence, the $GL\left(m\right)$-action is isolated to the first factor in the product, namely $$F\left(j_x^1s\cdot g\right)=\left(s\left(x\right)\cdot g,\left[j_x^1s\right]_{GL\left(m\right)}\right).$$ It means that a section of the bundle $\tau_1$ is equivalent to a connection on $LM$ plus a moving frame $\left(X_1,\cdots, X_m\right)$ on $M$; although this moving frame has no direct physical interpretation, we can associate a metric to it, namely, in contravariant terms, $$g:=\eta^{ij}X_i\otimes X_j$$ for some nondegenerate symmetric matrix $\eta$ (see Equation below). It is the same to declare that the metric $g$ is the unique metric on $M$ making the moving frame $\left(X_1,\cdots,X_m\right)$ (pseudo)orthonormal, with the signature given by $\eta$.
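To fix ideas, the following numerical sketch (an illustration added here for concreteness; the frame components are arbitrary sample values, not objects defined in the text) builds the contravariant metric $g^{\mu\nu}=\eta^{kl}e_k^\mu e_l^\nu$ from a moving frame for $m=4$ and checks that the frame is indeed pseudo-orthonormal for it:

```python
import numpy as np

m = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # the matrix eta fixed below, of signature (-,+,+,+)

rng = np.random.default_rng(0)
E = rng.normal(size=(m, m))              # E[mu, k] = e_k^mu: components of the frame vector X_k
assert abs(np.linalg.det(E)) > 1e-8      # the X_k are linearly independent, so this is a frame

# contravariant metric g^{mu nu} = eta^{kl} e_k^mu e_l^nu (eta^{kl} equals eta_{kl} numerically here)
g_contra = E @ eta @ E.T
g_cov = np.linalg.inv(g_contra)          # covariant components g_{mu nu}

# the frame is (pseudo)orthonormal for g: g(X_k, X_l) = eta_{kl}
assert np.allclose(E.T @ g_cov @ E, eta)
print("the moving frame is eta-orthonormal for the metric it induces")
```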
The tautological form $\widetilde{\theta}\in\Omega^1\left(LM,{\mathbb{R}}^m\right)$ can be pulled back along $\tau_{10}$ to a $1$-form $\theta:=\tau_{10}^*\widetilde{\theta}$ on $J^1\tau$; moreover, the Cartan form $\widetilde{\omega}\in\Omega^1\left(J^1\tau,V\tau\right)$, given by the formula $$\left.\widetilde{\omega}\right|_{j_x^1s}:=T_{j_x^1s}\tau_{10}-T_xs\circ T_{j_x^1s}\tau_1,$$ gives rise to a $\mathfrak{gl}\left(m\right)$-valued $1$-form $\omega$ on $J^1\tau$, by using the identification $$V\tau\simeq LM\times\mathfrak{gl}\left(m\right).$$ By means of the canonical basis $\left\{e_i\right\}$ on ${\mathbb{R}}^m$ and $\left\{E^i_j\right\}$ on $\mathfrak{gl}\left(m\right)$, where $$\left(E_j^i\right)^q_p:=\delta^q_j\delta^i_p,$$ we can define the collection of $1$-forms $\left\{\theta^i,\omega^i_j\right\}$ on $J^1\tau$ such that $$\theta=\theta^ie_i,\qquad\omega=\omega ^j_iE^i_j.$$ We also have the formula $$\widetilde{\omega}=\omega ^j_i\left(E^i_j\right)_{J^1\tau},$$ where $A_{J^1\tau}\in\mathfrak{X}^{Vp_{GL\left(m\right)}^{J^1\tau}}\left(J^1\tau\right)$ is the infinitesimal generator associated to $A\in\mathfrak{gl}\left(m\right)$ for the lifted action. It can be proved that $\omega$ is a connection form for a principal connection on the bundle $$p_{GL\left(m\right)}^{J^1\tau}:J^1\tau\rightarrow C\left(LM\right).$$
Let us define $$\theta_0:=\theta^1\wedge\cdots\wedge\theta^m;$$ because every $u\in LM$ is a collection $u=\left(X_1,\cdots,X_m\right)$ of vectors on $\tau\left(u\right)\in M$, and $\theta^i$ is a $\tau_1$-horizontal $1$-form on $J^1\tau$, we can define the set of forms $$\left.\theta_{i_1\cdots i_k}\right|_{j_x^1s}:=X_{i_1}\lrcorner\cdots X_{i_k}\lrcorner\left.\theta_0\right|_{j_x^1s}$$ for $1\leq i_1,\cdots,i_k\leq m$, where $j_x^1s\in J^1\tau$ is any element such that $u=\tau_{10}\left(j_x^1s\right)$.
Additionally, let us fix a matrix $$\label{eq:EtaDefinition}
\eta:=
\begin{bmatrix}
-1&0&\cdots&0\\
0&1&&0\\
\vdots&&\ddots&\vdots\\
0&\cdots&0&1
\end{bmatrix}\in GL\left(m\right)$$ and let $\eta_{ij}$ its $\left(i,j\right)$-entry; we will represent with the symbol $\eta^{ij}$ the $\left(i,j\right)$-entry of its inverse. With these ingredients we can construct the *Palatini Lagrangian* $$\label{eq:PalatiniLagrangianInvariant}
{\mathcal{L}}_{PG}:=\eta^{ip}\theta_{ik}\wedge\Omega^k_p,$$ where $\Omega:=\Omega^i_jE^j_i$ is the curvature of the canonical connection $\omega$. This $m$-form will determine the dynamics of the vacuum gravity in this formulation.
Finally, let us describe a decomposition of $\mathfrak{gl}\left(m\right)$ induced by $\eta$. In fact, this matrix gives rise to a compact real form ${\mathfrak{u}}$ in $\mathfrak{gl}\left(m,{\mathbb{C}}\right)$, given by $${\mathfrak{u}}=\left\{\xi\in\mathfrak{gl}\left(m,{\mathbb{C}}\right):\xi^\dagger\eta+\eta\xi=0\right\}$$ and thus we have a Cartan decomposition $$\mathfrak{gl}\left(m,{\mathbb{C}}\right)={\mathfrak{u}}\oplus\mathfrak{s}.$$ Given the inclusion $$\mathfrak{gl}\left(m\right)\subset\mathfrak{gl}\left(m,{\mathbb{C}}\right),$$ we obtain the decomposition $$\mathfrak{gl}\left(m\right)={\mathfrak{k}}\oplus{\mathfrak{p}}.$$
Restrictions in Palatini gravity: Zero torsion submanifold and metricity forms {#sec:zero-tors-subm}
------------------------------------------------------------------------------
It is time to discuss the restrictions we must impose on the sections of $\tau_1$ in order to have a characterization of a gravity field in this description. Our aim is to describe a metric and a connection on the spacetime, and the restrictions to be considered will establish the relationship between them.
There are two types of conditions to be imposed on a section of $J^1\tau$, each of them motivated on physical grounds (which we will not discuss here):
1. The connection which is a solution for the field equations of Palatini gravity must be torsionless, and
2. this connection must be metric for the solution metric.
The canonical forms defined in the previous section allow us to set the *torsion form* $$T:=\left(d\theta^j+\omega^j_k\wedge\theta^k\right)\otimes e_j\in\Omega^2\left(J^1\tau,{\mathbb{R}}^m\right).$$ Now, every connection $\Gamma:M\rightarrow C\left(LM\right)$ gives rise to a section $\sigma_\Gamma:LM\rightarrow J^1\tau$ of the bundle $\tau_{10}:J^1\tau\rightarrow LM$, as the equivariance of the following diagram shows $$\begin{tikzcd}[row sep=1.3cm,column sep=1.1cm]
&
J^1\tau
\arrow{dl}{\tau_{10}}
\arrow{dr}{p_{GL\left(m\right)}^{J^1\tau}}
&
\\
LM
\arrow{dr}{\tau}
\arrow[dashed,bend left=45]{ur}{\sigma_\Gamma}
&
&
C\left(LM\right)
\arrow[swap]{dl}{\overline{\tau}}
\\
&
M
\arrow[dashed,bend right=45,swap]{ur}{\Gamma}
&
\end{tikzcd}$$ The interesting fact is that the pullback form $\sigma_\Gamma^*T$ coincides with the torsion of the connection $\Gamma$. Additionally, it can be proved that $T$ is a $1$-horizontal form on $\tau_1:J^1\tau\to M$, so that there exists a maximal (respect to the inclusion) submanifold $i_0:{\mathcal{T}}_0\hookrightarrow J^1\tau$ such that
1. ${\mathcal{T}}_0$ is transversal to the fibers of $\tau_1:J^1\tau\rightarrow M$ (namely, $T_{j^1_xs}\left({\mathcal{T}}_0\right)\oplus V_{j_x^1s}\tau_1=T_{j_x^1s}\left(J^1\tau\right)$), and
2. it annihilates the torsion, i.e. $$i_0^*T=0.$$
The transformation properties of the form $T$ allow us to conclude that ${\mathcal{T}}_0$ is $GL\left(m\right)$-invariant. The connections associated to sections of $J^1\tau$ taking values in ${\mathcal{T}}_0$ are torsionless, so that the zero torsion restriction can be achieved through the requirement that these sections would take values in this submanifold. Accordingly, we can use the affine isomorphism $F:J^1\tau\to LM\times C\left(LM\right)$, to define the *bundle of torsionless connections* as the bundle $\overline{\tau}':C_0\left(LM\right)\to M$ obtained by restricting $F$ to ${\mathcal{T}}_0$ $$C_0\left(LM\right):=\text{pr}_2\left(F\left({\mathcal{T}}_0\right)\right).$$ Moreover, the following lemma can be proved using standard facts about principal bundles [@KN1].
\[lem:torsion-zero-bundle\] The submanifold ${\mathcal{T}}_0\subset J^1\tau$ is a principal subbundle of the $GL\left(m\right)$-bundle $p_{GL\left(m\right)}^{J^1\tau}:J^1\tau\to C\left(LM\right)$, associated to the isomorphism $\text{id}:GL\left(m\right)\to GL\left(m\right)$.
These considerations give rise to the commutative diagram $$\begin{tikzcd}[row sep=1.3cm,column sep=1.1cm]
&
{\mathcal{T}}_0
\arrow[swap]{dl}{\tau_{10}'}
\arrow{dr}{p_{GL\left(m\right)}^{{\mathcal{T}}_0}=\left.p_{GL\left(m\right)}^{J^1\tau}\right|_{{\mathcal{T}}_0}}
&
\\
LM
\arrow{dr}{\tau'}
&
&
C_0\left(LM\right)
\arrow{dl}{\overline{\tau}'}
\\
&
M
&
\end{tikzcd}$$
Let $\left(x^\mu,e_k^\nu\right)$ be a set of adapted coordinates for $LM$ induced on $\tau^{-1}\left(U\right)$ by a set of coordinates $\left(x^\mu\right)$ on $U\subset M$; as usual, it induces coordinates $\left(x^\mu,e_k^\nu,e^\nu_{k\sigma}\right)$ on $\tau_1^{-1}\left(U\right)$. On this open set we have $$T=e_\sigma^i\left(e^k_\mu e_{k\nu}^\sigma dx^\mu\wedge dx^\nu\right)\otimes e_i$$ (where $\left(e^k_\mu\right)$ is the inverse matrix of $\left(e_k^\mu\right)$), so that the set ${\mathcal{T}}_0\cap\tau_1^{-1}\left(U\right)$ is described by the constraints $$e^k_\mu e_{k\nu}^\sigma=e^k_\nu e_{k\mu}^\sigma.$$
On the other hand, the metricity condition has a differential nature: as we mentioned before, the matrix $\eta$ determines a factorization of $\mathfrak{gl}\left(m\right)$ into a subalgebra ${\mathfrak{k}}$ (the subalgebra of $\eta$-Lorentz transformations) and an invariant subspace ${\mathfrak{p}}$. The explicit formulas for this decomposition are given by the projectors $$A_{\mathfrak{k}}:=\frac{1}{2}\left(A-\eta A^T\eta\right),\qquad A_{\mathfrak{p}}:=\frac{1}{2}\left(A+\eta A^T\eta\right)$$ for every $A\in\mathfrak{gl}\left(m\right)$. The metricity condition is imposed on a section $\Sigma:M\rightarrow J^1\tau$ by requiring that $$\Sigma^*\omega_{\mathfrak{p}}=0,$$ where $\omega_{\mathfrak{p}}$ is the ${\mathfrak{p}}$-component of the canonical connection $\omega$ with respect to this decomposition. Taking into account the affine isomorphism $F:J^1\tau\to LM\times_MC\left(LM\right)$, this constraint means that the parallel transport of the connection $\text{pr}_2\circ F\circ\Sigma:M\to C\left(LM\right)$ leaves invariant the metric associated to the vielbein $\text{pr}_1\circ F\circ\Sigma:M\to LM$ (see Equation below).
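A quick numerical sanity check of this decomposition (an illustrative sketch only, for $m=4$ and a randomly chosen matrix) confirms that the two projectors split any $A\in\mathfrak{gl}(m)$, that $A_{\mathfrak{k}}$ satisfies the $\eta$-Lorentz condition and that $A_{\mathfrak{p}}$ is $\eta$-symmetric:

```python
import numpy as np

m = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # eta squares to the identity, so it is its own inverse

rng = np.random.default_rng(1)
A = rng.normal(size=(m, m))

A_k = 0.5 * (A - eta @ A.T @ eta)        # projector onto the subalgebra k
A_p = 0.5 * (A + eta @ A.T @ eta)        # projector onto the invariant subspace p

assert np.allclose(A_k + A_p, A)                       # the splitting recovers A
assert np.allclose(A_k.T @ eta + eta @ A_k, 0)         # A_k is an eta-Lorentz generator
assert np.allclose(A_p.T @ eta, eta @ A_p)             # A_p is eta-symmetric

# dimension count for m = 4: dim k = 6, dim p = 10, and 6 + 10 = 16 = dim gl(4)
print("dim k + dim p =", m * (m - 1) // 2, "+", m * (m + 1) // 2, "=", m * m)
```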
Contact-like structure and admissible sections
----------------------------------------------
The scheme we will use for Routh reduction relies on the notion of unified formulation of a variational problem; it is a necessary step in order to avoid issues regarding the non regularity of the Lagrangian to be reduced [@GarciaToranoAndres2016291]. In this vein, we should mention that our approach to the unified formalism is strongly based on the groundbreaking work of Gotay [@GotayCartan]. Although the aim in the previously cited article is to extend the definition of the Poincaré-Cartan form to a generalized family of variational problems (the so called *Griffiths variational problems*, see [@book:852048; @hsu92:_calcul_variat_griff] and references), it can be readily seen that it can serve as a generalization of the unified formalism (as defined in [@1751-8121-40-40-005; @2004JMP....45..360E; @Prieto-Martinez2015203] and references therein) to these kind of variational problems, namely, when restricted to the particular case of the *classical variational problem* (the variational problem underlying the first order classical field theory, see [@Gotay:1997eg]), this construction reduces to the construction associated to the usual formulation of the unified formalism.
There are two crucial differences between a classical variational problem and a more general Griffiths variational problem:
- First of all, a classical variational problem (of first order) is formulated in a first order jet bundle, whereas a Griffiths variational problem can use in principle any bundle.
- More important (at least from the viewpoint of the present work) is the fact that the sections are integral for some characteristic set of forms. In the classical case, this set of forms is the set of contact forms, allowing us to restrict the sections to be varied to the holonomic sections of the jet bundle; the contact structure is replaced by another set of forms in the more general case.
\[rem:griffiths-notation\] As said in the previous paragraph, a Griffiths variational problem consists of three kinds of data: a bundle $p:W\to M$, whose sections will be the fields of the theory, a Lagrangian form $\lambda\in\Omega^m\left(W\right)$ setting the dynamics, and a set of forms $\mathcal{I}\subset\Omega^\bullet\left(W\right)$ (more precisely, an *exterior differential system*) describing the set of differential restrictions on the fields. Accordingly, we will often specify a variational problem of this kind with the symbol $$\left(p:W\to M,\lambda,\mathcal{I}\right).$$ The variational problem underlying such a triple consists of finding the extremals of the action $$S\left[s\right]:=\int_Ms^*\lambda$$ where the sections $s:M\to W$ of the bundle $p$ must be integral for the set of forms in $\mathcal{I}$, namely, $$s^*\alpha=0$$ for every $\alpha\in\mathcal{I}$.
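For orientation only (this particular triple is not used elsewhere in the article, and the notation is ours): the classical first order field theory over a fibred manifold $\pi_F:F\to M$, with Lagrangian density $L\,\mathrm{vol}_M$ for a fixed volume form $\mathrm{vol}_M$ on $M$, is recovered as the triple $\left(\left(\pi_F\right)_1:J^1\pi_F\to M,\,L\,\mathrm{vol}_M,\,\mathcal{I}_{\mathrm{con}}\right)$, where $\mathcal{I}_{\mathrm{con}}$ is the contact ideal; the condition $s^*\alpha=0$ for every $\alpha\in\mathcal{I}_{\mathrm{con}}$ simply selects the holonomic sections, in agreement with the discussion above.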
The variational problem we will consider here for the Palatini gravity is not a classical one; it will differ from a variational problem of this kind in both of the aspects mentioned above:
- The relevant bundle is not the first order jet $J^1\tau$; instead, it is the subset ${\mathcal{T}}_0$ consisting of the jets associated to torsionless connections. Due to this fact, we will consider the pullback of the canonical forms and the restriction of maps from $J^1\tau$ to ${\mathcal{T}}_0$; unless explicitly stated, the new forms and maps will be indicated with the same symbols. An exception to this rule will be the restriction of the bundle maps $\tau_{10}$ and $\tau_1$, which will be indicated as $$\tau_{10}':{\mathcal{T}}_0\to LM\qquad\text{and}\qquad\tau_1':{\mathcal{T}}_0\to M.$$
- The forms we will use for the restriction of the sections of $\tau_1':{\mathcal{T}}_0\rightarrow M$ are not the whole set of contact forms $\left\{\omega_j^i\right\}$, but a geometrically relevant subset, namely, the components of the metricity forms $\omega_{\mathfrak{p}}$.
In order to establish the unified version of the equations of motion for Palatini gravity, it will be necessary to define the *metricity subbundle* $I_{PG}^m$ on ${\mathcal{T}}_0$, $$I_{PG}^m:=\left\{\eta^{ik}\beta_{kp}\wedge\omega^p_i:\beta_{ij}\in\wedge^{m-1}_1\left(T^*{\mathcal{T}}_0\right),\beta_{ij}-\beta_{ji}=0\right\}\subset\wedge^m_2\left(T^*{\mathcal{T}}_0\right),$$ where $\wedge^{m-1}_1\left(T^*{\mathcal{T}}_0\right)$ indicates the set of $\tau_1'$-horizontal $\left(m-1\right)$-covectors on ${\mathcal{T}}_0$. With the metricity subbundle in mind, we can define the affine subbundle $$\label{eq:BundleOfFormsOnT0}
W_{PG}:={\mathcal{L}}_{PG}+I_{PG}^m\subset\wedge^m_2\left(T^*{\mathcal{T}}_0\right),$$ which comes with the projection $$\tau_{PG}:W_{PG}\rightarrow {\mathcal{T}}_0:\alpha\in\wedge^m_2\left(T^*_{j^1_xs}{\mathcal{T}}_0\right)\mapsto j_x^1s.$$ Because this is a subbundle in the set of $m$-forms on ${\mathcal{T}}_0$, it has a canonical $m$-form $\lambda_{PG}$ on it given by $$\left.\lambda_{PG}\right|_\alpha\left(w_1,\cdots,w_m\right):=\alpha\left(T_\alpha\tau_{PG}\left(w_1\right),\cdots,T_\alpha\tau_{PG}\left(w_m\right)\right),\qquad w_1,\cdots,w_m\in T_\alpha\left(W_{PG}\right).$$
The variational problem for Palatini gravity {#sec:vari-probl-palat}
============================================
The variational problem we will work with in the present article is the following.
\[def:VariationalProblemPalatini\] The variational problem for Palatini gravity is given by the action $$S\left[\sigma\right]:=\int_U\sigma^*{\mathcal{L}}_{PG},$$ where $\sigma:U\subset M\rightarrow {\mathcal{T}}_0$ is any section of $\tau_1'$ such that $\sigma^*\omega_{\mathfrak{p}}=0$. According to Remark \[rem:griffiths-notation\], it is described by the triple $\left(\tau_1':{\mathcal{T}}_0\to M,{\mathcal{L}}_{\text{PG}},\left<\omega_{\mathfrak{p}}\right>\right)$, where $\left<\cdot\right>$ indicates the exterior differential system generated by the set of forms enclosed in the brackets.
The relevance of the unified formalism in dealing with variational problems is guaranteed by the following result [@doi:10.1142/S0219887818500445].
\[prop:FieldTheoryEqsWL\] A section $s\colon U\subset M\rightarrow {\mathcal{T}}_0$ is critical for the variational problem established in Definition \[def:VariationalProblemPalatini\] if and only if there exists a section $\Gamma\colon U\subset M\rightarrow W_{PG}$ such that
1. $\Gamma$ covers $s$, i.e. $\tau_{PG}\circ\Gamma=s$, and
2. $\Gamma^*\left(X\lrcorner d\lambda_{PG}\right)=0$, for all $X\in\mathfrak{X}^{V\left(\tau_1\circ\tau_{PG}\right)}(W_{PG})$.
$\Gamma$ is called a *solution* of the Palatini gravity equations of motion.
Although the proof in [@doi:10.1142/S0219887818500445] refers to sections of $\tau_1:J^1\tau\to M$, it can also be readily adapted to cover this case; in this regard, see Appendix \[app:LiftToTorsionZero\].
The situation described by Proposition \[prop:FieldTheoryEqsWL\] is summarized in the following diagram: $$\begin{tikzcd}[row sep=1.3cm,column sep=1.1cm]
W_{PG}
\arrow{dr}{\tau_1'\circ\tau_{PG}}
\arrow{rr}{\tau_{PG}}
&
&
{\mathcal{T}}_0
\arrow[swap]{dl}{\tau_1'}
\\
&
M
\arrow[dashed,bend left=40]{ul}{\Gamma}
\arrow[dashed,bend right=40,swap]{ur}{s}
&
\end{tikzcd}$$ We will see below (Section \[sec:first-order-vari\]) that the same can be done for the (first order) Einstein-Hilbert variational problem; the reduction and reconstruction theorems (see Section \[sec:einst-hilb-grav\]) will be proved using these lifted systems.
Symmetry and momentum {#sec:symmetry-momentum}
=====================
We now discuss the presence of natural symmetries and their momentum maps for the unified formulation of Palatini gravity.
As we said above (see Lemma \[lem:torsion-zero-bundle\]), there exists a $GL\left(m\right)$-action on ${\mathcal{T}}_0$; nevertheless, the Lagrangian ${\mathcal{L}}_{PG}$ is preserved by the action of the subgroup $K\subset GL\left(m\right)$ composed of the linear transformations keeping invariant the matrix $\eta$, $$K:=\left\{A=\left(A_i^j\right):\eta_{ij}A^i_kA^j_l=\eta_{kl}\right\}.$$ We can lift the $GL\left(m\right)$-action to $\wedge^m\left({\mathcal{T}}_0\right)$; it results that the subbundle $I^m_{PG}$ is also preserved by the action of $K$, and so $$K\cdot W_{PG}\subset W_{PG}.$$ It is our aim to find a momentum map for this action, in the sense of the following definition.
A *momentum map* for the action of $K$ on $W_{PG}$ is a map $$J\colon W_{PG}\to \Lambda^{m-1}\left(T^*W_{PG}\right)\otimes{\mathfrak{k}}^*$$ over the identity in $W_{PG}$ such that $$\xi_{W_{PG}}\lrcorner d\lambda_{PG}=-dJ_\xi,$$ where $J_\xi$ is the $(m-1)$-form on $W_{PG}$ whose value at $\alpha\in W_{PG}$ is $J_\xi(\alpha)=\langle J(\alpha),\xi \rangle$.
A momentum map is $Ad^*$-*equivariant* if it satisfies $$\langle J(g\alpha),Ad_{g^{-1}}\xi \rangle=g \langle J(\alpha),\xi \rangle.$$
Thus, we obtain Noether’s theorem in this setting:
The momentum map $J$ is conserved along solutions of the Palatini gravity equations of motion.
Recall that $\Gamma\colon U\subset M\rightarrow W_{PG}$ is a solution for the Palatini gravity equations of motion if and only if $$\Gamma^*(Z\lrcorner d \lambda_{PG})=0$$ for any $\tau_1'\circ\tau_{PG}$-vertical vector field $Z$. Then for each $\xi\in{\mathfrak{k}}$ we have $$d(\Gamma^*J_\xi)=\Gamma^*(dJ_\xi)=\Gamma^*(-\xi_{W_{PG}}\lrcorner d \lambda_{PG})=0,$$ and therefore the momentum is conserved along solutions.
Accordingly, we think of a “momentum” $\widehat{\mu}$ as an element $\widehat{\mu}\in \Omega^{m-1}(W_{PG},\mathfrak{gl}\left(m\right)^*)$, i.e. as a $\mathfrak{gl}\left(m\right)^*$-valued $(m-1)$-form on $W_{PG}$; a conserved value $\widehat{\mu}$ of the momentum map is a closed one, i.e. $d\widehat{\mu}=0$.
The construction of a momentum map for the action on $W_{PG}$ is standard [@Gotay:1997eg]:
\[lem:mmap\] The map $J\colon W_{PG}\to \Lambda^{m-1}\left(T^*W_{PG}\right)\otimes {\mathfrak{k}}^*$ defined by $$\langle J(\alpha),\xi \rangle=\xi_{W_{PG}}(\alpha) \lrcorner \left.\lambda_{PG}\right|_\alpha,$$ for each $\xi\in{\mathfrak{k}}$, is an $Ad^*$-equivariant momentum map for the $K$-action on $W_{PG}$.
Now, because $$T\tau_{PG}\circ\xi_{W_{PG}}=\xi_{{\mathcal{T}}_0}\circ\tau_{PG},$$ it follows that for every $\alpha\in W_{PG}$ $$\begin{aligned}
\langle J(\alpha),\xi \rangle&=\xi_{W_{PG}}(\alpha) \lrcorner \left.\lambda_{PG}\right|_\alpha\\
&=\xi_{{\mathcal{T}}_0}\lrcorner\alpha\\
&=i_0^*\left[\xi_{J^1\tau}\lrcorner\left(\eta^{ip}\theta_{pk}\wedge\Omega^k_i+\eta^{ip}\beta_{pq}\wedge\omega^q_i\right)\right]\\
&=0\end{aligned}$$ for all $\xi\in{\mathfrak{k}}$. It means that the unique allowed momentum level set for this symmetry is $J=0$; accordingly, the isotropy group of this level set is $K$, and $$J^{-1}\left(0\right)=W_{PG}.$$
The other ingredient in Routh reduction is the factorization of the restriction bundle $I^m_{PG}$ induced by a connection $\omega_K$ on the underlying bundle $p_K^{LM}:LM\rightarrow\Sigma$, where $$\tau_\Sigma:\Sigma:=LM/K\to M$$ is the *bundle of metrics of signature $\eta$*. We will carry out this task in Section \[sec:cont-bundle-decomp\]; here we will construct this connection. To this end, consider the decomposition associated to the matrix $\eta$ (see Section \[sec:LFT\]). The connection $\omega_K$ on the bundle $p_K^{LM}:LM\rightarrow\Sigma$ is induced by this decomposition, namely $$\omega_K:=\pi_{\mathfrak{k}}\circ\omega_0,$$ where $\pi_{\mathfrak{k}}:\mathfrak{gl}\left(m\right)\to{\mathfrak{k}}$ is the canonical projector onto the ${\mathfrak{k}}$-factor in the Cartan decomposition and $\omega_0$ is a connection form on the principal bundle $\tau:LM\to M$. The $K$-invariance of the factor ${\mathfrak{p}}$, $$\text{Ad}_A{\mathfrak{p}}\subset{\mathfrak{p}}\qquad\forall A\in K$$ ensures us that it has the expected properties of a connection.
Finally, let us identify a candidate for the reduced bundle. In order to proceed, consider the adjoint bundle $\tau_{{\mathfrak{k}}}:\widetilde{{\mathfrak{k}}}\to\Sigma$; then, the following result holds.
The map $$\begin{aligned}
\Upsilon_{\omega}\colon J^1\tau&{\longrightarrow}\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\times_\Sigma{{\rm Lin}}\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)\right),\\
j_x^1s&\longmapsto \left(s\left(x\right),j^1_x\left[s\right]_K,\left[s\left(x\right),\omega_K\circ T_xs\right]_K\right).
\end{aligned}$$ is a bundle isomorphism.
The inverse of $\Upsilon_\omega$ is given by $$\label{eq:InverseUpsilon}
\Upsilon_\omega^{-1}\left(e,j_x^1\overline{s},\left[e,\widehat{\xi}\right]_K\right)=\left[v_x\in T_xM\xmapsto{\hspace{.7cm}}\left(T_x\overline{s}\left(v_x\right)\right)_e^H+\left(\widehat{\xi}\left(v_x\right)\right)_{LM}\left(e\right)\right],$$ where $\left(\cdot\right)^H_e,e\in LM,$ is the horizontal lift associated to $\omega_K$.
The map $\Upsilon_\omega$ enjoys a useful property: under this identification, the action of $K$ on $J^1\tau$ is simply $$g\cdot\big(e,j_x^1\overline{s},[e,\widehat{\xi}]_K\big)=\big(g\cdot e,j_x^1\overline{s},[e,\widehat{\xi}]_K\big).$$ This is a direct consequence of the equivariance of the principal connection $\omega_K$. As a result, we get the following corollary.
\[cor:identification\] There is an identification $$J^1\tau/K\simeq J^1\tau_\Sigma\times_{\Sigma}{{\rm Lin}}\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right).$$
The choice of a connection on the bundle $p_K^{LM}$ allows us to establish a relationship between the quotient space $J^1\tau/K$ and the jet bundle of the metric bundle, $J^1\tau_\Sigma$, the latter being the relevant bundle in the Einstein-Hilbert approach to relativity. This relationship will be studied in detail in Section \[sec:metr-cont-struct\].
Motivated by these considerations, we are in a position to define the Lagrangian quotient bundle for Palatini gravity.
The bundle $J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$ is the *quotient bundle for Palatini gravity*.
In the next sections we will explore a further simplification of this bundle, as well as a reduction of the Lagrangian responsible for the dynamics on these bundles.
Local coordinate expressions {#sec:local-coord-expr}
=============================
Here we will obtain some identities allowing us to write down the isomorphism $\Upsilon_\omega^{-1}$ in local terms. In order to proceed, we fix a coordinate chart on $M$, inducing coordinates $\left(x^\mu,e_k^\mu\right)$ on $LM$. As usual, we will indicate with $\left(x^\mu,e_k^\mu,e^\mu_{k\sigma}\right)$ the coordinates induced on $J^1\tau$. It can be proved that there exists a set of coordinates $\left(x^\mu,g^{\mu\nu},\Gamma^\sigma_{\mu\nu}\right)$ on $$J^1\tau/K=\Sigma\times C\left(LM\right)$$ and adapted to this decomposition, namely $$p_K^{LM}\left(x^\mu,e_k^\mu\right)=\left(x^\mu,\eta^{kl}e_k^\mu e^\nu_{l}\right).$$ In terms of these coordinates, we have $$p^{J^1\tau}_K\left(x^\mu,e_k^\mu,e^\mu_{k\sigma}\right)=\left(x^i,\eta^{ij}e_i^\mu e_j^\nu,-e^k_\mu e_{k\nu}^\sigma\right).$$ It means in particular that $$Tp_{K}^{LM}\left(\frac{\partial}{\partial x^\mu}\right)=\frac{\partial}{\partial x^\mu}$$ and $$\label{eq:E_Projection}
Tp_{K}^{LM}\left(\frac{\partial}{\partial e^\mu_k}\right)=\left(\eta^{kq}e^\rho_q\delta^\sigma_\mu+\eta^{kp}e^\sigma_p\delta_\mu^\rho\right)\frac{\partial}{\partial g^{\sigma\rho}}.$$ On the other hand, a principal connection on $LM$ can be written as $$\omega_0=-e^l_\mu\left(de^\mu_k-f^\mu_{k\sigma}dx^\sigma\right)E^k_l,$$ where $\left(f^\mu_{k\sigma}\right)$ is a collection of local functions on $M$; its Christoffel symbols will be $$\overline{\Gamma}^\sigma_{\rho\mu}=-e_\rho^kf_{k\mu}^\sigma.$$ Given our definition of the connection $\omega_K$ on the $K$-bundle $p_K^{LM}:LM\rightarrow\Sigma$, its components become $$\left[\left(\omega_0\right)_{\mathfrak{k}}\right]_k^l=-\eta_{kp}\left(\eta^{pq}e_\mu^l-\eta^{lq}e^p_\mu\right)\left(de_q^\mu-f^\mu_{q\sigma}dx^\sigma\right).$$ Now we will find the horizontal lift defined by $\omega_K$ for vector fields on $\Sigma$:
\[prop:Local\_Horizontal\_Lift\] The horizontal lift of vector fields on $\Sigma$ associated to the connection $\omega_K$ is locally given by $$\begin{aligned}
\left(\frac{\partial}{\partial x^\mu}\right)^H&=\frac{\partial}{\partial x^\mu}+\frac{1}{2}g_{\beta\rho}e^\rho_k\left(g^{\alpha\sigma}\overline{\Gamma}^\beta_{\alpha\mu}-g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}\right)\frac{\partial}{\partial e^\sigma_k}\\
\left(\frac{\partial}{\partial g^{\mu\nu}}\right)^H&=\frac{1}{4}g_{\rho\beta}e^\beta_k\left(\delta_\mu^\alpha\delta^\rho_\nu+\delta_\nu^\alpha\delta^\rho_\mu\right)\frac{\partial}{\partial e^\alpha_k}.
\end{aligned}$$
See Appendix \[sec:proof-prop-ref\].
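The local expressions in this section make repeated use of the projection formula for $Tp_{K}^{LM}\left(\partial/\partial e_k^\mu\right)$ given above; the following SymPy sketch (an independent verification for $m=2$, with ad hoc symbol names of our own) recovers it by differentiating $g^{\sigma\rho}=\eta^{kl}e_k^\sigma e_l^\rho$ with respect to $e_k^\mu$:

```python
import sympy as sp

m = 2
eta = sp.diag(-1, 1)                      # eta^{kl} for m = 2

# frame components e_k^mu as independent symbols: e[mu][k] = e_k^mu
e = [[sp.Symbol(f"e_{k}^{mu}") for k in range(m)] for mu in range(m)]

# contravariant metric g^{sigma rho} = eta^{kl} e_k^sigma e_l^rho
g = [[sum(eta[k, l] * e[s][k] * e[r][l] for k in range(m) for l in range(m))
      for r in range(m)] for s in range(m)]

delta = lambda a, b: 1 if a == b else 0

# check: d g^{sigma rho} / d e_k^mu = eta^{kq} e_q^rho delta^sigma_mu + eta^{kp} e_p^sigma delta^rho_mu
ok = all(sp.simplify(
            sp.diff(g[s][r], e[mu][k])
            - sum(eta[k, q] * e[r][q] for q in range(m)) * delta(s, mu)
            - sum(eta[k, p] * e[s][p] for p in range(m)) * delta(r, mu)) == 0
         for s in range(m) for r in range(m) for mu in range(m) for k in range(m))
print("projection formula verified:", ok)
```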
This proposition has the following consequence, which will be important when working with the reduction of the Palatini variational problem.
Let $\left(x^\mu,g^{\mu\nu},g^{\mu\nu}_\sigma\right)$ be the induced coordinates on $J^1\tau_\Sigma$. Then $$\begin{aligned}
&\left(\frac{\partial}{\partial x^\sigma}+g^{\mu\nu}_\sigma\frac{\partial}{\partial g^{\mu\nu}}\right)^H=\frac{\partial}{\partial x^\sigma}+\frac{1}{2}g_{\beta\rho}e^\rho_k\left[g^{\kappa\beta}_\sigma+\left(g^{\alpha\kappa}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\kappa_{\alpha\sigma}\right)\right]\frac{\partial}{\partial e^\kappa_k}.\label{eq:HorizontalLift}
\end{aligned}$$
According to Proposition \[prop:Local\_Horizontal\_Lift\], we have that $$\begin{aligned}
&\left(\frac{\partial}{\partial x^\sigma}+g^{\mu\nu}_\sigma\frac{\partial}{\partial g^{\mu\nu}}\right)^H=\cr
&=\frac{\partial}{\partial x^\sigma}+\frac{1}{2}g_{\beta\rho}e^\rho_k\left(g^{\alpha\kappa}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\kappa_{\alpha\sigma}\right)\frac{\partial}{\partial e^\kappa_k}+\frac{1}{4}g^{\mu\nu}_\sigma g_{\rho\beta}e^\rho_k\left(\delta_\mu^\alpha\delta^\beta_\nu+\delta_\nu^\alpha\delta^\beta_\mu\right)\frac{\partial}{\partial e^\alpha_k}\cr
&=\frac{\partial}{\partial x^\sigma}+\frac{1}{2}g_{\beta\rho}e^\rho_k\left[g^{\kappa\beta}_\sigma+\left(g^{\alpha\kappa}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\kappa_{\alpha\sigma}\right)\right]\frac{\partial}{\partial e^\kappa_k},
\end{aligned}$$ as required.
Let us now introduce coordinates on the vector bundle $\widetilde{{\mathfrak{k}}}$. In order to do this, let us suppose that $\left(\phi=\left(x^\mu\right),U\right)$ is a coordinate chart on $M$; then it is also a trivializing domain for the principal bundle $LM$, where $$t_U:\tau^{-1}\left(U\right)\to U\times GL\left(m\right):u=\left(X_1,\cdots,X_m\right)\mapsto\left(x^{\mu}\left(\tau\left(u\right)\right),e_k^\mu\left(u\right)\right)$$ if and only if $$X_k=e_k^\mu\left(u\right)\frac{\partial}{\partial x^\mu}.$$ Therefore we can define the coordinate chart $\left(\phi_{{\mathfrak{k}}},\tau_{{\mathfrak{k}}}^{-1}\left(U\right)\right)$ [@springerlink:10.1007/PL00004852]. In order to proceed, we use the correspondence between the space of sections of the adjoint bundle $\Gamma\tau_{\mathfrak{k}}$ and the set of $p_{K}^{LM}$-vertical $K$-invariant vector fields on $LM$.
Therefore, taking the basis $\left\{E_\sigma^\rho\right\}$ on $\mathfrak{gl}\left(m\right)$ such that $$\left(E_\sigma^\rho\right)_\alpha^\beta=\delta^\beta_\sigma\delta_\alpha^\rho,$$ we can define the set of $GL\left(m\right)$-invariant $\tau$-vertical vector fields $\widetilde{E}_\sigma^\rho$ whose flow $\Phi^{\widetilde{E}_\sigma^\rho}_t:\tau^{-1}\left(U\right)\to\tau^{-1}\left(U\right)$ is given by $$\Phi^{\widetilde{E}_\sigma^\rho}_t\left(u\right):=t_U^{-1}\left(\tau\left(u\right),\left[\exp{\left(tE_\sigma^\rho\right)}\right]^\alpha_\beta e_i^\beta\left(u\right)\right);$$ it means that, locally, these vector fields are such that $$\label{eq:InvarianVectorExpression}
T_{u}t_U\left(\widetilde{E}_\sigma^\rho\left(u\right)\right)=e^\rho_i\frac{\partial}{\partial e_i^\sigma}.$$ In the following we will adopt the usual convention according to which the map $Tt_U$ is not explicitly written, namely, where $$\frac{\partial}{\partial e_i^\sigma}\qquad\text{and}\qquad Tt_U^{-1}\left(\frac{\partial}{\partial e_i^\sigma}\right)$$ are identified. We can write down any $p_K^{LM}$-vertical $K$-invariant vector field $Z$ on $LM$ as $$Z=A^\rho_\sigma\widetilde{E}^\sigma_\rho;$$ then, using Equation , we obtain the following result.
The vector field on $\tau^{-1}\left(U\right)$ given by $$Z=A^\rho_\sigma\widetilde{E}^\sigma_\rho$$ is $p_K^{LM}$-vertical if and only if $$g^{\sigma\alpha}A_\alpha^\rho+g^{\rho\alpha}A_\alpha^\sigma=0.$$
In fact, we have that $$\begin{aligned}
0&=T_up_K^{LM}\left(A_\sigma^\rho\widetilde{E}_\rho^\sigma\left(u\right)\right)\\
&=A_\sigma^\rho T_up_K^{LM}\left(\widetilde{E}_\rho^\sigma\left(u\right)\right)\\
&=A_\sigma^\rho e_i^\sigma\left(u\right) T_up_K^{LM}\left(\frac{\partial}{\partial e_i^\rho}\right)\\
&=A_\sigma^\rho e_i^\sigma\left(u\right)\eta^{iq}\left[e^\alpha_q\left(u\right)\delta^\beta_\rho+e^\beta_q\left(u\right)\delta^\alpha_\rho\right]\frac{\partial}{\partial g^{\alpha\beta}},
\end{aligned}$$ and the identity follows.
Therefore, we will have that $$\phi_{{\mathfrak{k}}}\left(\left[u,B\right]_K\right):=\left(x^\mu\left(u\right),\eta^{kl}e_k^\mu\left(u\right)e_l^\nu\left(u\right),A_{\sigma}^\rho\left(\left[u,B\right]_K\right)\right)$$ if and only if $g^{\sigma\alpha}A_\alpha^\rho+g^{\rho\alpha}A_\alpha^\sigma=0$ and $$\left[u,B\right]_K=A_{\sigma}^\rho\widetilde{E}_\rho^\sigma\left(u\right).$$
In order to relate the coordinates $A_\sigma^\rho$ to the element $\left[u,B\right]_K$, we need to look closely at the identification between $\Gamma\widetilde{{\mathfrak{k}}}$ and the set of $p_{K}^{LM}$-vertical $K$-invariant vector fields on $LM$. It uses the correspondence $$V\tau\simeq LM\times\mathfrak{gl}\left(m\right)$$ given by $$\left(u,B\right)\mapsto\left.\frac{\text{d}}{\text{d}t}\right|_{t=0}\left(u\cdot\exp{\left(-tB\right)}\right).$$ In coordinates it reads $$\left(u=\left(X_1,\cdots,X_m\right),B=\left(B_i^j\right)\right)\mapsto-B_j^ie_i^\rho\frac{\partial}{\partial e_j^\rho},$$ and using Equation it becomes $$\left(u=\left(X_1,\cdots,X_m\right),B=\left(B_i^j\right)\right)\mapsto-B_j^ie_i^\rho e^j_\sigma\widetilde{E}^\sigma_\rho.$$ Therefore, it follows that $$\widehat{A}_\rho^\sigma\left(u,B\right)=-e_\rho^iB_i^je_j^\sigma$$ is a $GL\left(m\right)$-invariant function on $LM\times\mathfrak{gl}\left(m\right)$ when $GL\left(m\right)$ acts on $\mathfrak{gl}\left(m\right)$ by the adjoint action; therefore, it gives us the set of functions $A_\rho^\sigma$ on $\tau_{\mathfrak{k}}^{-1}\left(U\right)\subset\widetilde{{\mathfrak{k}}}$ that complete the coordinates $\phi_{\mathfrak{k}}$.
\[lem:Coords-On-KF\] The map $\phi_{\mathfrak{k}}:\tau_{\mathfrak{k}}^{-1}\left(U\right)\to U\times\mathbb{R}^{2m^2}$ given by $$\phi_{\mathfrak{k}}\left(\left[u,B\right]_K\right)=\left(x^\mu\left(u\right),\eta^{kl}e_k^\mu\left(u\right)e_l^\nu\left(u\right),-e_\rho^i\left(u\right)B_i^je_j^\sigma\left(u\right)\right)$$ defines a set of coordinates on $\tau_{\mathfrak{k}}^{-1}\left(U\right)$.
According to the previous discussion, it is only necessary to prove that for any $B\in{\mathfrak{k}}$, i.e. such that $$\eta^{ik}B_k^j+\eta^{jk}B_k^i=0,$$ the corresponding element on $T_uLM$, $$Z=-e_\rho^i\left(u\right)B_i^je_j^\sigma\left(u\right)\widetilde{E}^\rho_\sigma$$ verifies the constraint $$T_up_K^{LM}\left(Z\right)=0.$$ But it follows that $$\begin{aligned}
g^{\rho\alpha}A_\alpha^\sigma+g^{\sigma\alpha}A_\alpha^\rho&=-g^{\rho\alpha}e_\alpha^iB_i^je_j^\sigma-g^{\sigma\alpha}e_\alpha^iB_i^je_j^\rho\\
&=-\eta^{ik}B_i^j\left(e_k^\sigma e_j^\rho+e_k^\rho e_j^\sigma\right)\\
&=-\left(\eta^{ki}B_i^j+\eta^{ji}B_i^k\right)e_j^\rho e_k^\sigma\\
&=0,
\end{aligned}$$ as required.
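The following numerical sketch illustrates both facts at once (an illustration only: the matrix conventions in the comments are ours, chosen so that the component formulas above become matrix products): the functions $\widehat{A}^\sigma_\rho$ do not change along a $K$-orbit $\left(u,B\right)\mapsto\left(u\cdot g,\mathrm{Ad}_{g^{-1}}B\right)$, and for $B\in{\mathfrak{k}}$ the resulting coefficients satisfy the verticality relation $g^{\sigma\alpha}A^\rho_\alpha+g^{\rho\alpha}A^\sigma_\alpha=0$ used above.

```python
import numpy as np

m = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(2)

E  = rng.normal(size=(m, m))     # E[mu, k]  = e_k^mu   (the frame u); generically invertible
C  = rng.normal(size=(m, m))     # C[j, i]   = B_i^j    (an element B of gl(m))
Gg = rng.normal(size=(m, m))     # a generic element g of GL(m)

def A_hat(E, C):
    # \hat{A}^sigma_rho = -e^i_rho B_i^j e_j^sigma; in these conventions this is -(E C E^{-1})
    return -E @ C @ np.linalg.inv(E)

# invariance along the K-orbit: (E, C) -> (E g, g^{-1} C g) leaves \hat{A} unchanged
assert np.allclose(A_hat(E, C), A_hat(E @ Gg, np.linalg.inv(Gg) @ C @ Gg))

# for B in k the coefficients are p_K^{LM}-vertical:
#   g^{sigma alpha} A^rho_alpha + g^{rho alpha} A^sigma_alpha = 0
C_k = 0.5 * (C - eta @ C.T @ eta)        # project B onto k (its eta-antisymmetric part)
g_contra = E @ eta @ E.T                 # g^{mu nu} = eta^{kl} e_k^mu e_l^nu
A = A_hat(E, C_k)
assert np.allclose(g_contra @ A.T + A @ g_contra, 0)
print("invariance along the K-orbit and p_K-verticality verified")
```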
Metricity and contact structures on the quotient space {#sec:metr-cont-struct}
======================================================
In this section we will use the local expressions obtained in Section \[sec:local-coord-expr\] in order to study the relationship between the quotient bundle $$J^1\tau/K\simeq\Sigma\times C\left(LM\right)$$ and the bundle $J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$. So far, we have the diagram $$\label{eq:GOmegaDiagram}
\begin{tikzcd}[row sep=1.3cm,column sep=3.1cm]
J^1\tau
\arrow{r}{\Upsilon_\omega}
\arrow{d}{p_K^{J^1\tau}}
&
\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\right)
\arrow{d}{\text{pr}_{23}}
\\
J^1\tau/K
\arrow{r}{\overline{\Upsilon}_\omega}
\arrow{d}{\sim}
&
J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}
\arrow{dl}{G_\omega}
\\
\Sigma\times C\left(LM\right)
&
\end{tikzcd}$$ defining the diffeomorphism $G_\omega$; here $\overline{\Upsilon}_\omega$ is the map induced by $\Upsilon_\omega$. In short, we will prove that the introduction of a connection on the bundle $p_K^{LM}:LM\to\Sigma$ allows us to split a principal connection on $\tau:LM\to M$ into horizontal and vertical degrees of freedom. Moreover, this splitting will be powerful enough to relate the metricity forms $\omega_{\mathfrak{p}}$ and the contact structure on the quotient bundle $\tau_\Sigma:\Sigma\to M$.
First, let us stress that Lemma \[lem:Coords-On-KF\] allows us to set coordinates on the bundle $$\overline{p}:\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\to\Sigma.$$ In fact, any element $\left(g_x,\alpha\right)\in\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$ admits coordinates $\left(x^\mu,g^{\mu\nu},A_{\sigma\rho}^\mu\right)$ if and only if $\left(x^\mu,g^{\mu\nu}\right)$ are the corresponding coordinates for $g_x\in\Sigma$ and $$\alpha\left(\frac{\partial}{\partial x^\rho}\right)=A_{\sigma\rho}^\mu\widetilde{E}^\sigma_\mu\left(e_x\right),$$ where $e_x\in LM$ is any element in $\left(p_K^{LM}\right)^{-1}\left(g_x\right)$.
It is important to see the isomorphism $\Upsilon_\omega$ restricted to ${\mathcal{T}}_0$. In order to properly set this result, let us construct the pullback bundles $$\begin{tikzcd}[row sep=1.3cm,column sep=1.1cm]
\left(p_{K}^{LM}\right)^*\left(J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\right)
\arrow{r}{}
\arrow{d}{}
&
J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}
\arrow{d}{}
\\
LM
\arrow{r}{p_K^{LM}}
&
\Sigma
\end{tikzcd}$$ and $$\begin{tikzcd}[row sep=1.3cm,column sep=3.1cm]
\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\right)
\arrow[swap]{r}{\text{pr}_2}
\arrow{d}{\text{pr}_1}
&
J^1\tau_\Sigma
\arrow{d}{\left(\tau_\Sigma\right)_{10}}
\\
LM
\arrow{r}{p_K^{LM}}
&
\Sigma
\end{tikzcd}$$
The zero torsion submanifold ${\mathcal{T}}_0$ has some nice properties regarding the decomposition induced by the connection $\omega_K$.
The canonical projection $$\begin{aligned}
\text{pr}_\Sigma:\left(p_K^{LM}\right)^*&\left(J^1\tau_\Sigma\times_\Sigma{{\rm Lin}}\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)\right){\longrightarrow}\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\right)\\
&\left(e,j_x^1\overline{s},\left[e,\widehat{\xi}\right]_K\right)\xmapsto{\hspace{2.2cm}}\left(e,j_x^1\overline{s}\right)
\end{aligned}$$ restricted to the submanifold $${\mathcal{T}}_0':=\Upsilon_\omega\left({\mathcal{T}}_0\right)\subset\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\times_\Sigma{{\rm Lin}}\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)\right)$$ is a diffeomorphism between ${\mathcal{T}}_0'$ and $\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\right)$.
The proof of this proposition will be local. Using Equation and the coordinates introduced above, we have that $$\begin{aligned}
&\frac{\partial}{\partial x^\sigma}+e^{\mu}_{k\sigma}\frac{\partial}{\partial e^\mu_k}=\cr
&=\left(\frac{\partial}{\partial x^\sigma}+g^{\mu\nu}_\sigma\frac{\partial}{\partial g^{\mu\nu}}\right)^H+A_{\rho\sigma}^\mu\widetilde{E}^\rho_\mu\left(e_x\right)\cr
&=\frac{\partial}{\partial x^\sigma}+\frac{1}{2}g_{\beta\rho}e^\rho_k\left[g^{\mu\beta}_\sigma+\left(g^{\alpha\mu}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\mu_{\alpha\sigma}\right)\right]\frac{\partial}{\partial e^\mu_k}-A_{\rho\sigma}^\mu e^\rho_k\frac{\partial}{\partial e^\mu_k},
\end{aligned}$$ namely $$e^{\mu}_{k\sigma}=\frac{1}{2}g_{\beta\rho}e^\rho_k\left[g^{\mu\beta}_\sigma+\left(g^{\alpha\mu}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\mu_{\alpha\sigma}\right)\right]-A_{\rho\sigma}^\mu e^\rho_k.$$ Then it follows that, for the $K$-invariant functions $\Gamma^\mu_{\nu\sigma}$, $$\begin{aligned}
\Gamma_{\rho\sigma}^\mu&=-e^k_\rho e^{\mu}_{k\sigma}\cr
&=-\frac{1}{2}g_{\beta\rho}\left[g^{\mu\beta}_\sigma+\left(g^{\alpha\mu}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\mu_{\alpha\sigma}\right)\right]+A_{\rho\sigma}^\mu.\label{eq:GammaInTermsQuotient}
\end{aligned}$$ It means that the set ${\mathcal{T}}_0'$ is locally given by the equation $$\begin{gathered}
\frac{1}{2}g_{\beta\sigma}\left[g^{\mu\beta}_\rho+\left(g^{\alpha\mu}\overline{\Gamma}^\beta_{\alpha\rho}-g^{\alpha\beta}\overline{\Gamma}^\mu_{\alpha\rho}\right)\right]-\frac{1}{2}g_{\beta\rho}\left[g^{\mu\beta}_\sigma+\left(g^{\alpha\mu}\overline{\Gamma}^\beta_{\alpha\sigma}-g^{\alpha\beta}\overline{\Gamma}^\mu_{\alpha\sigma}\right)\right]+\\
+A_{\rho\sigma}^\mu-A_{\sigma\rho}^\mu=0.
\end{gathered}$$ Let us define the set of quantities $$A_{\mu\nu\sigma}:=g_{\mu\rho}A^\rho_{\nu\sigma};$$ then using this equation and the fact that $$A_{\mu\nu\sigma}+A_{\nu\mu\sigma}=g_{\mu\rho}A^\rho_{\nu\sigma}+g_{\nu\rho}A^\rho_{\mu\sigma}=0,$$ we can conclude, from Proposition \[Prop:UniqueSolution\], that the elements $A^\mu_{\nu\sigma}$ are uniquely determined by the fact that they belong to ${\mathcal{T}}_0'$. In other words, the set $$\left(\text{pr}_\Sigma\right)^{-1}\left(e,j_x^1\overline{s}\right)\cap{\mathcal{T}}_0'$$ consists of a single element.
Restricting $G_\omega$ to ${\mathcal{T}}_0$ (see Diagram ), we obtain the following result, which permits us to reconstruct the Levi-Civita connection from a section of the reduced bundle.
Let $$\sigma:M\rightarrow J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$$ be a section of the composite map $$J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}{{\longrightarrow}}\Sigma\stackrel{\tau_\Sigma}{{\longrightarrow}}M$$ such that $\text{pr}_1\circ\sigma:M\to J^1\tau_\Sigma$ is a holonomic section and $$\mathop{\text{Im}}{\sigma}\subset\text{pr}_{23}\left({\mathcal{T}}_0'\right).$$ Then $$\Gamma_\sigma:=\text{pr}_2\circ G_\omega\circ\sigma:M\rightarrow C_0\left(LM\right)$$ is the Levi-Civita connection associated to the metric $g_\sigma:=\text{pr}_1\circ G_\omega\circ\sigma$.
Locally, the map $G_\omega$ is given by Equation . Therefore, from the proof of the previous Proposition and using Proposition \[Prop:UniqueSolution\], we have that the elements $$\Gamma_{\mu\nu\sigma}:=g_{\mu\rho}\Gamma^\rho_{\nu\sigma}$$ are uniquely determined by the set of equations $$\begin{aligned}
&\Gamma_{\mu\rho\sigma}-\Gamma_{\mu\sigma\rho}=0\cr
&\Gamma_{\mu\rho\sigma}+\Gamma_{\rho\mu\sigma}=-g_{\mu\alpha}g_{\rho\beta}g^{\alpha\beta}_\sigma.\label{eq:GInTermsGamma}
\end{aligned}$$ It means that $$\Gamma_{\mu\rho\sigma}=-\frac{1}{2}\left(g_{\mu\alpha}g_{\rho\beta}g^{\alpha\beta}_\sigma+g_{\sigma\alpha}g_{\mu\beta}g^{\alpha\beta}_\rho-g_{\rho\alpha}g_{\sigma\beta}g^{\alpha\beta}_\mu\right).$$ Now, using the definition $$g_{\mu\nu,\sigma}:=-g_{\mu\alpha}g_{\nu\beta}g^{\alpha\beta}_\sigma$$ we obtain $$\label{eq:GammaInTermsQuotientJetVariables}
\Gamma_{\rho\sigma}^\mu=\frac{1}{2}g^{\mu\alpha}\left(g_{\alpha\rho,\sigma}+g_{\alpha\sigma,\rho}-g_{\rho\sigma,\alpha}\right).$$ Because $\text{pr}_1\circ\sigma$ is holonomic, we have that $$g^{\mu\nu}_\sigma=\frac{\partial g^{\mu\nu}}{\partial x^\sigma},$$ as required.
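As a consistency check (not needed for the proof, but useful for comparison with the classical formulas), note that for a holonomic section the quantities $g_{\mu\nu,\sigma}$ are just the partial derivatives of the covariant components of the metric: using $0=\partial\left(g_{\mu\alpha}g^{\alpha\beta}\right)/\partial x^\sigma$ we obtain $$g_{\mu\nu,\sigma}=-g_{\mu\alpha}g_{\nu\beta}\frac{\partial g^{\alpha\beta}}{\partial x^\sigma}=g_{\nu\beta}g^{\alpha\beta}\frac{\partial g_{\mu\alpha}}{\partial x^\sigma}=\frac{\partial g_{\mu\nu}}{\partial x^\sigma},$$ so the last displayed formula is precisely the classical expression for the Christoffel symbols of the metric $g_\sigma$.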
Let us define $${\mathcal{T}}_0'':=G_\omega^{-1}\left(\Sigma\times C_0\left(LM\right)\right)=\overline{\Upsilon}_\omega\left(p_K^{J^1\tau}\left({\mathcal{T}}_0\right)\right);$$ then, we need to draw our attention to the diagram in Figure \[fig:MapsInvolvedRouth\].
As a consequence of Formula , we obtain the following corollary; in short, it says that in the reduced bundle $J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$, the degrees of freedom associated to the factor $\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$ are superfluous.
The map $$\left.\text{pr}_1\right|_{{\mathcal{T}}_0''}:{\mathcal{T}}_0''\to J^1\tau_\Sigma$$ is a bundle isomorphism over the identity on $\Sigma$.
Locally, the composite map $\overline{\Upsilon}_\omega\circ p_K^{J^1\tau}$ is given by $$\overline{\Upsilon}_\omega\circ p_K^{J^1\tau}\left(\left[x^\mu,e_k^\nu,e^\sigma_{k\rho}\right]_K\right)=\left(x^\mu,\eta^{ij}e_i^\mu e_j^\nu,g^{\mu\nu}_\sigma\right),$$ where the coordinates $g_\sigma^{\mu\nu}$ are calculated using Equation .
For the last result of the Section, we need any of the composite maps $$\begin{tikzcd}[row sep=1.3cm,column sep=.7cm]
J^1\tau
\arrow{r}{\Upsilon_\omega}
\arrow{d}{\overline{\Upsilon}_\omega\circ p^{J^1\tau}_K}
&
\left(p_{K}^{LM}\right)^*\left(J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\right)
\arrow{r}{\text{pr}_\Sigma}
&
\left(p_K^{LM}\right)^*\left(J^1\tau_\Sigma\right)
\arrow{d}{\text{pr}_2}
\\
J^1\tau_\Sigma\times_\Sigma\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}
\arrow{rr}{\text{pr}_1}
&
&
J^1\tau_\Sigma.
\end{tikzcd}$$ As we mentioned above, the splitting induced by the connection form $\omega_K$ allows us to relate the metricity forms with a contact structure on the quotient bundle.
\[prop:Metricity-Horizontal\] The metricity forms are $\left(\text{pr}_2\circ\text{pr}_\Sigma\circ\Upsilon_\omega\right)$-horizontal (also $\left(\text{pr}_1\circ\overline{\Upsilon}_\omega\circ p^{J^1\tau}_K\right)$-horizontal). In fact, $$Tp_K^{LM}\circ\omega_{\mathfrak{p}}=\left(\text{pr}_2\circ\text{pr}_\Sigma\circ\Upsilon_\omega\right)^*\overline{\omega}=\left(\text{pr}_1\circ\overline{\Upsilon}_\omega\circ p^{J^1\tau}_K\right)^*\overline{\omega}$$ where $\overline{\omega}$ is the contact form on $J^1\tau_\Sigma$.
In local coordinates, we have that $$\left(\text{pr}_2\circ\text{pr}_\Sigma\circ\Upsilon_\omega\right)\left(x^\mu,e^\nu_k,e^\nu_{k\sigma}\right)=\left(x^\mu,g^{\mu\nu},g^{\mu\nu}_{\sigma}\right),$$ where $g^{\mu\nu}_{\sigma}$ is calculated using Equation . On the other hand, the metricity forms have the following local expression [@capriotti14:_differ_palat] $$\label{eq:GammaInLocalTerms}
\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i=e^i_\mu e^j_\nu\left[dg^{\mu\nu}+\left(g^{\mu\sigma}\Gamma_{\sigma\rho}^\nu+g^{\nu\sigma}\Gamma_{\sigma\rho}^\mu\right)dx^\rho\right].$$ Using Equation , it follows that $$\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i=e^i_\mu e^j_\nu\left(dg^{\mu\nu}-g^{\mu\nu}_\sigma dx^\sigma\right),$$ namely, the metricity condition is horizontal with respect to the projection $$\text{pr}_2\circ\text{pr}_\Sigma\circ\Upsilon_\omega:J^1\tau{\longrightarrow}J^1\tau_\Sigma,$$ and the form in the base manifold is nothing but the generator of the contact structure.
First order variational problem for Einstein-Hilbert gravity {#sec:first-order-vari}
============================================================
We have enough background to define a variational problem on $J^1\tau_\Sigma$ for Einstein-Hilbert gravity. The *Einstein-Hilbert Lagrangian form* will be defined as the unique $2$-horizontal $m$-form ${\mathcal{L}}_{EH}^{\left(1\right)}$ on $J^1\tau_\Sigma$ such that $$\left(\text{pr}_1\circ\overline{\Upsilon}_\omega\circ p_K^{J^1\tau}\right)^*{\mathcal{L}}_{EH}^{\left(1\right)}=i_0^*{\mathcal{L}}_{PG}.$$ Recall also that in local terms, Palatini Lagrangian can be written as $$\begin{gathered}
\label{eq:QuotientPalatiniLag}
{{\mathcal{L}}_{PG}}=\epsilon_{\mu_1\cdots\mu_{n-2}\gamma\kappa}\sqrt{\left|\det{g}\right|}g^{\kappa\phi} d x^{\mu_1}\wedge\cdots\wedge d x^{\mu_{n-2}}\wedge\left(d\Gamma^\gamma_{\rho\phi}\wedge d x^\rho+\Gamma^\sigma_{\delta\phi}\Gamma^\gamma_{\beta\sigma} d x^\beta\wedge d x^\delta\right).\end{gathered}$$
Our purpose here is to establish the equivalence between the classical variational problem associated to the Lagrangian density ${\mathcal{L}}_{EH}:J^2{\tau}_\Sigma\rightarrow\wedge^mM$ (see [@doi:10.1063/1.4998526]) and the variational problem $\left(J^1{\tau}_\Sigma,{\mathcal{L}}_{EH}^{\left(1\right)},\mathcal{I}^\Sigma_{\text{con}}\right)$. As we have said above, the main difference between these variational problems lies in the nature of the Lagrangian form: in the latter, the Lagrangian form is not a horizontal form on $J^1{\tau}_\Sigma$, whereas in the former the Lagrangian form on $J^2{\tau}_\Sigma$ is specified through a Lagrangian density, which gives rise to a horizontal form on this jet bundle. The following lemma tells us how these Lagrangians are related.
\[lem:DensityAndLagFormEH\] It results that $${\mathcal{L}}_{EH}=h{\mathcal{L}}_{EH}^{\left(1\right)}$$ where $h:\Omega^m\left(J^1{\tau}_\Sigma\right)\rightarrow\Omega^m\left(J^2{\tau}_\Sigma\right)$ is the horizontalization operator [@krupka15:_introd_global_variat_geomet_atlan].
Recall that the horizontalization operator is defined by the map $$h_{j_x^2s}:=T_xj^1s\circ T_{j^2_xs}\left({\tau}_\Sigma\right)_2,$$ where $\left(\tau_\Sigma\right)_2:J^2\tau_\Sigma\to M$ is the canonical projection of the $2$-jet bundle of the metric bundle onto $M$. In terms of the coordinates $\left(x^\mu,g^{\mu\nu},g^{\mu\nu}_{\alpha},g^{\mu\nu}_{\alpha\beta}\right)$ on $J^2{\tau}_\Sigma$, we have that $$hdg^{\mu\nu}=g^{\mu\nu}_\alpha dx^\alpha,\quad hdg^{\mu\nu}_\alpha=g^{\mu\nu}_{\alpha\beta}dx^\beta.$$ The result follows from a (rather lengthy) calculation, using expression and the formula for the Christoffel symbols .
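Let us also record a routine observation that makes the previous calculation more transparent: the operator $h$ acts on functions by pullback to $J^2\tau_\Sigma$, leaves the basic forms $dx^\mu$ unchanged, and is multiplicative with respect to the wedge product, so its action on any form is determined by the rules displayed above; for instance, $$h\left(dg^{\mu\nu}\wedge dx^{\alpha_2}\wedge\cdots\wedge dx^{\alpha_m}\right)=g^{\mu\nu}_{\alpha_1}\,dx^{\alpha_1}\wedge dx^{\alpha_2}\wedge\cdots\wedge dx^{\alpha_m}.$$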
The occurrence of the horizontalization operator in this lemma is crucial for our purposes, as the following proposition shows.
\[thm:IntegralsAndHorizontalization\] Let $\pi:E\rightarrow M$ be a bundle on a (compact) manifold $M$ of dimension $m$. For any $\alpha\in\Omega^m\left(J^k\pi\right)$ and any section $s\in\Gamma\pi$, we have that $$\int_M\left(j^ks\right)^*\alpha=\int_M\left(j^{k+1}s\right)^*h\alpha.$$
It follows from the formula $$T_xj^ks=T_xj^ks\circ T_{j^{k+1}_xs}\pi_{k+1}\circ T_xj^{k+1}s,$$ that holds for every $x\in M$ and $s\in\Gamma\pi$.
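Spelled out pointwise (merely unwinding the definitions, with $h_{j^{k+1}_xs}:=T_xj^ks\circ T_{j^{k+1}_xs}\pi_{k+1}$ as before), for every $v_1,\dots,v_m\in T_xM$ we have $$\left(\left(j^{k+1}s\right)^*h\alpha\right)\Big|_x\left(v_1,\dots,v_m\right)=\alpha\Big|_{j^k_xs}\left(T_xj^ks\left(v_1\right),\dots,T_xj^ks\left(v_m\right)\right)=\left(\left(j^ks\right)^*\alpha\right)\Big|_x\left(v_1,\dots,v_m\right),$$ because $h_{j^{k+1}_xs}\circ T_xj^{k+1}s=T_xj^ks$ by the formula above; integrating over $M$ gives the statement.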
It is immediate to prove the desired equivalence.
The classical variational problem specified by the Lagrangian density ${\mathcal{L}}_{EH}$ on $J^2{\tau}_\Sigma$ and the variational problem $\left(J^1{\tau}_\Sigma,{\mathcal{L}}_{EH}^{\left(1\right)},\mathcal{I}^\Sigma_{\text{con}}\right)$ have the same set of extremals.
From Lemma \[lem:DensityAndLagFormEH\] and using Theorem \[thm:IntegralsAndHorizontalization\], we see that $g:M\rightarrow\Sigma$ is an extremal for the action integral $$g\mapsto\int_M\left(j^2g\right)^*{\mathcal{L}}_{EH}$$ if and only if it is an extremal for the action integral $$g\mapsto\int_M\left(j^1g\right)^*{\mathcal{L}}_{EH}^{\left(1\right)},$$ as required.
As usual [@GotayCartan], the equations of motion of this variational problem can be lifted to a space of forms on $J^1\tau_\Sigma$. Let us define the affine subbundle $$W_{EH}:={\mathcal{L}}_{EH}^{\left(1\right)}+{I}^\Sigma_{{\rm con},2}\subset\wedge^m\left(J^1\tau_\Sigma\right).$$ Here, for every $j_x^1s\in J^1\tau_\Sigma$, $$\begin{aligned}
\left.I^\Sigma_{{\rm con},2}\right|_{j_x^1s}&=\mathcal{L}\,\Big\{{\alpha}_{s\left(x\right)}\circ(T_{j_x^1s}\left(\tau_\Sigma\right)_{10}-T_xs\circ T_{j_x^1s}\left(\tau_\Sigma\right)_1)\wedge\beta:\\
&\hskip10em{\alpha}_{s\left(x\right)}\in T_{s\left(x\right)}^*\Sigma,\beta\in\left(\Lambda^{m-1}_1\left(J^1\tau_\Sigma\right)\right)_{j_x^1s}\Big\},\end{aligned}$$ is the corresponding fiber for the contact subbundle on $J^1\tau_\Sigma$. The canonical map will be denoted by $$\tau_{EH}:W_{EH}\to J^1\tau_\Sigma.$$
We will indicate with $\lambda_{EH}$ the pullback of the canonical $m$-form on $\wedge^m\left(J^1\tau_\Sigma\right)$ to $W_{EH}$. Then we have a result analogous to Proposition \[prop:FieldTheoryEqsWL\] in the context of (first order) Einstein-Hilbert formulation.
\[prop:FieldTheoryEqsEinsteinHilbertCase\] A section $s\colon U\subset M\rightarrow J^1\tau_\Sigma$ is a critical holonomic section for the variational problem $\left(J^1{\tau}_\Sigma,{\mathcal{L}}_{EH}^{\left(1\right)},\mathcal{I}^\Sigma_{\text{con}}\right)$ if and only if there exists a section $\Gamma\colon U\subset M\rightarrow W_{EH}$ such that
1. $\Gamma$ covers $s$, i.e. $\tau_{EH}\circ\Gamma=s$, and
2. $\Gamma^*\left(X\lrcorner d\lambda_{EH}\right)=0$, for all $X\in\mathfrak{X}^{V\left(\left(\tau_\Sigma\right)_1\circ\tau_{EH}\right)}(W_{EH})$.
This proposition provides us with a unified formalism for Einstein-Hilbert gravity, based on the first order formulation. For the corresponding formalism associated to the second order formulation, see [@doi:10.1063/1.4998526].
Contact bundle decomposition for Palatini gravity {#sec:cont-bundle-decomp}
=================================================
Now we will recall some general facts regarding the decomposition induced by the connection $\omega_K$ [@Capriotti2019] on the bundle of forms $W_{PG}$ defined in Equation . The contact structure on $J^1\tau$ gives rise to the contact subbundle on ${\mathcal{T}}_0$ given by $$\begin{aligned}
\label{eq:ContactFields}
\left.I^m_{{\rm con},2}\right|_{j_x^1s}&=\mathcal{L}\,\Big\{{\alpha}_{s\left(x\right)}\circ(T_{j_x^1s}\tau_{10}'-T_xs\circ T_{j_x^1s}\tau_1')\wedge\beta:\cr
&\hskip10em{\alpha}_{s\left(x\right)}\in T_{s\left(x\right)}^*\left(LM\right),\beta\in\left(\Lambda^{m-1}_1\left({\mathcal{T}}_0\right)\right)_{j_x^1s}\Big\},\end{aligned}$$ where $\mathcal{L}$ indicates linear closure. There is a splitting of $I^m_{{\rm con},2}$ induced by the choice of a connection on the principal bundle $p^{LM}_K\colon LM\to \Sigma$. Its construction is as follows. We denote by $\omega_K\in\Omega^1(LM,{\mathfrak{k}})$ the chosen connection and consider the following splitting of the cotangent bundle: $$T^*\left(LM\right)=(p^{LM}_K)^*\big(T^*\Sigma\big) \oplus (LM\times {\mathfrak{k}}^*).$$ The identification is obtained as follows: $$\begin{aligned}
(p^{LM}_K)^*\big(T^*\Sigma\big) \oplus (LM\times {\mathfrak{k}}^*)&\to T^*\left(LM\right),\\
(e,\widehat{\alpha}_{[u]},\sigma)&\mapsto \alpha_u=\widehat{\alpha}_{[u]}\circ T_up_K^{LM}+\langle\sigma,\omega_K(\cdot)\rangle.\end{aligned}$$ Accordingly, we have a splitting of the contact bundle $$\label{eq:splittingcontact_LFT}
I^m_{{\rm con},2}=\widetilde{I^m_{{\rm con},2}}\oplus I^m_{{\mathfrak{k}}^*,2},$$ with $$\begin{aligned}
\left.\widetilde{I^m_{{\rm con},2}}\right|_{j_x^1s}&=\mathcal{L}\,\Big\{\widehat{\alpha}_{\left[s\left(x\right)\right]}\circ T_{s\left(x\right)}p_K^{LM}\circ(T_{j_x^1s}\tau_{10}'-T_xs\circ T_{j_x^1s}\tau_1')\wedge\beta:\\
&\hskip10em\widehat{\alpha}_{\left[s\left(x\right)\right]}\in T_{\left[s\left(x\right)\right]}^*\Sigma,\beta\in\left(\Lambda^{m-1}_1{\mathcal{T}}_0\right)_{j_x^1s}\Big\},\\
\left.I_{{\mathfrak{k}}^*,2}^m\right|_{j_x^1s}&=\left\{\langle\sigma\stackrel{\wedge}{,}\omega_K\circ(T_{j_x^1s}\tau_{10}'-T_xs\circ T_{j_x^1s}\tau_1')\rangle:\sigma\in\left(\Lambda^{m-1}_1{\mathcal{T}}_0\otimes{\mathfrak{k}}^*\right)_{j_x^1s}\right\}.\end{aligned}$$ The symbol $\langle\cdot\stackrel{\wedge}{,} \cdot\rangle$ denotes the natural contraction, defined as follows: For elements of the form $\alpha_1\otimes\nu$, $\alpha_2\otimes \eta$ with $\nu,\eta\in{\mathfrak{k}}$ and $\alpha_1,\alpha_2$ forms, we have $\langle\alpha_1\otimes\nu \stackrel{\wedge}{,} \alpha_2\otimes\eta \rangle=\langle\nu,\eta\rangle \alpha_1\wedge\alpha_2$. For a general element in the linear closure, extend linearly.
We can split our metricity subbundle $I^m_{PG}$ using the inclusion $$I^m_{PG}\subset I^m_{\text{con},2},$$ namely $$I^m_{PG}=\left(I^m_{PG}\cap\widetilde{I^m_{{\rm con},2}}\right)\oplus\left(I^m_{PG}\cap I^m_{{\mathfrak{k}}^*,2}\right).$$ But we have the following fact.
\[lem:cont-bundle-decomp\] For every $j_x^1s\in{\mathcal{T}}_0$ $$I^m_{PG}\subset\widetilde{I^m_{{\rm con},2}}.$$
Let us work in the coordinates considered above; therefore, we have Equation for the projector $Tp_K^{LM}$ and also $$T_{j_x^1s}\tau_{10}'-T_xs\circ T_{j_x^1s}\tau_1'=\frac{\partial}{\partial e^\mu_k}\otimes\left(de^\mu_k-e^\mu_{k\alpha}dx^\alpha\right).$$ Then $$\begin{aligned}
T_{s\left(x\right)}p_K^{LM}\circ&\left(T_{j_x^1s}\tau_{10}'-T_xs\circ T_{j_x^1s}\tau_1'\right)=\\
&=T_{s\left(x\right)}p_K^{LM}\left(\frac{\partial}{\partial e^\mu_k}\right)\otimes\left(de^\mu_k-e^\mu_{k\alpha}dx^\alpha\right)\\
&=\frac{\partial}{\partial g^{\rho\sigma}}\otimes\left[\eta^{kq}\left(e_q^\rho de_k^\sigma+e_q^\sigma de_k^\rho\right)-\eta^{kq}\left(e_q^\rho e_{k\alpha}^\sigma+e_q^\sigma e_{k\alpha}^\rho\right)dx^\alpha\right]\\
&=\frac{\partial}{\partial g^{\rho\sigma}}\otimes\left[dg^{\rho\sigma}-\left(g^{\rho\beta}e_\beta^ke^\sigma_{k\alpha}+g^{\sigma\beta}e_\beta^ke^\rho_{k\alpha}\right)dx^\alpha\right]\\
&=\frac{\partial}{\partial g^{\rho\sigma}}\otimes\left[dg^{\rho\sigma}+\left(g^{\rho\beta}\Gamma^\sigma_{\beta\alpha}+g^{\sigma\beta}\Gamma^\rho_{\beta\alpha}\right)dx^\alpha\right]
\end{aligned}$$ which gives rise to a set of generators of the bundle $I^m_{PG}$ (see Equation ).
This result is compatible with the fact that the whole subbundle $W_{PG}$ lies in the zero level set of the momentum map. We will return to this point below.
First order Einstein-Hilbert Lagrangian as Routhian {#sec:routhian}
===================================================
Because $J\equiv0$ the Routhian density will coincide with the Lagrangian ${\mathcal{L}}_{PG}$. This density will induce a density on the reduced bundle, which we will define next.
First, we write $\overline{p}: {\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\to\Sigma$ for the obvious projection. In principle, the bundle ${\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$ would be the field bundle for the reduced system; nevertheless, we will show next (see Lemma \[lem:routhian\_pr1\_horizontal\] below) that the Routhian, namely, the Lagrangian form for this reduced system, will be horizontal for the projection onto the jet space of the base bundle $\Sigma$.
In particular, one can consider the map: $$\begin{aligned}
q:J^1\left(\tau_\Sigma\circ\overline{p}\right)&{\longrightarrow}J^1\tau_\Sigma\times {\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)},\\
j_x^1\sigma&\longmapsto\left(j_x^1\left(\overline{p}\circ\sigma\right),\sigma(x)\right)\end{aligned}$$ projecting onto the quotient bundle for Palatini gravity. So, we can formulate the reduced system as a first order field theory by taking the bundle $\tau_\Sigma\circ\overline{p}:{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\to M$ as the basic field bundle. Nevertheless, there are some identifications that will permit us to simplify further this basic bundle.
In order to proceed, let us use the connection $\omega_K$ to define the maps fitting in the following diagram: $$\begin{tikzcd}[ampersand replacement=\&, column sep=1.2cm, row
sep=.9cm]
{{\mathcal{T}}_0} \arrow[swap]{d}{\left.p_{K}^{J^1\tau}\right|_{{\mathcal{T}}_0}} \arrow{r}{f_\omega} \&
{{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}} \&
{J^1\left(\tau_\Sigma\circ\overline{p}\right)}
\arrow[swap]{l}{\left(\tau_\Sigma\circ\overline{p}\right)_{10}}
\arrow{d}{q}
\\
{{\mathcal{T}}_0/K}
\arrow[swap]{rr}{g_\omega:=\left.\overline{\Upsilon}_\omega\right|_{{\mathcal{T}}_0/K}}
\&
\&
{J^1\tau_\Sigma\times{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}}
\end{tikzcd}$$ The definitions are as follows: $$\begin{aligned}
f_\omega:{\mathcal{T}}_0& {\longrightarrow}{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)},\\
j_x^1s&\longmapsto\left[s\left(x\right),\omega_K\circ T_xs\right]_K.\\
g_\omega:{\mathcal{T}}_0/{K}& {\longrightarrow}J^1\tau_\Sigma\times{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)},\\
\left[j_x^1s\right]_{K} &\longmapsto\big(j_x^1\left(p_K^{LM}\circ s\right),\left[s\left(x\right),\omega_K\circ T_xs\right]_K\big).\end{aligned}$$
The map $g_\omega$ is the identification from Corollary \[cor:identification\]. Since the Lagrangian density ${\mathcal{L}}_{PG}$ is invariant under $K$, it defines a reduced density on ${\mathcal{T}}_0/K$ which can be seen as a density on $J^1\tau_\Sigma\times\mathop{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$. We will denote it by $\overline{{\mathcal{L}}}_{PG}$: $$\big(g_\omega\circ p_{K}^{J^1\tau}\big)^*\overline{{\mathcal{L}}}_{PG}={\mathcal{L}}_{PG},\qquad \overline{{\mathcal{L}}}_{PG} \in \Omega_2^m\big(J^1\tau_\Sigma\times\mathop{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\big).$$
The $m$-form $\overline{{\mathcal{L}}}_{PG}\in\Omega^m\left(J^1\tau_\Sigma\times\mathop{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\right)$ is the *Routhian* for the variational problem $\left({\mathcal{T}}_0,{\mathcal{L}}_{PG},\mathcal{I}_{PG}^m\right)$.
Then, we are ready to prove a characteristic property for the Routhian associated to the reduction of Palatini gravity.
\[lem:routhian\_pr1\_horizontal\] The Routhian $\overline{{\mathcal{L}}}_{PG}$ is $\text{pr}_1$-horizontal, where $$\text{pr}_1:J^1\tau_\Sigma\times\mathop{\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}{\longrightarrow}J^1\tau_\Sigma$$ is the projection onto the first factor of the fibred product.
It follows from Equations and that $$\label{eq:EinsteinLagVsPalatiniLag}
\text{pr}_1^*{\mathcal{L}}_{EH}^{\left(1\right)}=\overline{{\mathcal{L}}}_{PG},$$ as required.
In short, the Routhian $\overline{{\mathcal{L}}}_{PG}$ does not depend on the fiber coordinates $A^\sigma_{\mu\rho}$ of the bundle $\overline{p}: {\rm Lin}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}\to\Sigma$; it is just the pullback along $\text{pr}_1$ of the first order Lagrangian for Einstein-Hilbert gravity.
In the usual Routh reduction, the reduced Routhian is an $m$-form on $J^1\left(\tau_\Sigma\circ\overline{p}\right)$; in this case, Lemma \[lem:routhian\_pr1\_horizontal\] allows us to consider the form ${\mathcal{L}}_{EH}^{\left(1\right)}$ on $J^1\tau_\Sigma$ as the Routhian. Therefore, we can forget about the degrees of freedom associated to the factor $\mathop{\text{Lin}}{\left(\tau_\Sigma^*TM,\widetilde{{\mathfrak{k}}}\right)}$ in the quotient bundle, and take as the quotient bundle for Palatini gravity the jet bundle $J^1\tau_\Sigma$; this is the way in which we will proceed from this point.
Einstein-Hilbert gravity as Routh reduction of Palatini gravity {#sec:einst-hilb-grav}
===============================================================
We will devote the present section to establishing the two main results of the article, namely, Theorem \[thm:routh-reduct-palat\], regarding reduction of Palatini gravity, and Theorem \[thm:conn-induc-pullb\], dealing with the reconstruction of metrics verifying the Einstein equations of gravity. The strategy, as we mentioned in the introduction, is to compare the equations of motion (lifted to the corresponding spaces of forms $W_{EH}$ and $W_{PG}$) in a bundle containing every relevant degree of freedom; this role is played below by the pullback bundle $F^*_\omega\left(W_{EH}\right)$. So, let us define $$F_\omega:=\text{pr}_1\circ g_\omega\circ p_K^{J^1\tau}:{\mathcal{T}}_0{\longrightarrow}J^1\tau_\Sigma,$$ namely $$F_\omega\left(j_x^1s\right)=j_x^1\left(p_K^{LM}\circ s\right)$$ for every $j_x^1s\in{\mathcal{T}}_0$. Then we have the diagram $$\label{eq:DiagramEHVsPG}
\begin{tikzcd}[row sep=1.3cm,column sep=1cm]
W_{PG}
\arrow[swap]{dr}{\pi_{PG}}
\arrow[hook]{r}{}
&
\wedge_2^m\left(T^*{\mathcal{T}}_0\right)
\arrow{d}{\overline{\tau}^m_{{\mathcal{T}}_0}}
&
F_\omega^*\left(W_{EH}\right)
\arrow{dl}{\text{pr}_1^\omega}
\arrow[swap]{l}{\widetilde{F_\omega}}
\arrow{r}{\text{pr}_2^\omega}
&
W_{EH}
\arrow{d}{\pi_{EH}}
\\
&
{\mathcal{T}}_0
\arrow[swap]{rr}{F_\omega}
&
&
J^1\tau_\Sigma
\end{tikzcd}$$ where $$\widetilde{F_\omega}:F_\omega^*\left(W_{EH}\right){\longrightarrow}\wedge_2^m\left(T^*{\mathcal{T}}_0\right):\left(j_x^1s,\rho\right)\mapsto\rho\circ T_{j_x^1s}F_\omega$$ and $$\begin{aligned}
&\text{pr}_1^\omega:F_\omega^*\left(W_{EH}\right){\longrightarrow}{\mathcal{T}}_0,\qquad\text{pr}_2^\omega:F_\omega^*\left(W_{EH}\right){\longrightarrow}W_{EH}\end{aligned}$$ are the canonical projections of the pullback bundle.
\[lem:FwVsWPG\] The bundle map $\widetilde{F_\omega}$ is an affine bundle isomorphism on ${\mathcal{T}}_0$ between $W_{PG}$ and $F_\omega^*\left(W_{EH}\right)$.
It is a consequence of Equation and Proposition \[prop:Metricity-Horizontal\].
We will use Diagram as a means to compare the equations of motion of Palatini gravity and Einstein-Hilbert gravity; the idea is to use Propositions \[prop:FieldTheoryEqsWL\] and \[prop:FieldTheoryEqsEinsteinHilbertCase\] in order to represent these equations in terms of the spaces of forms $W_{PG}$ and $W_{EH}$ respectively, and to pull them back to the common space $F_\omega^*\left(W_{EH}\right)$. Crucial to this strategy is the following result.
\[prop:RelationBetweenCanonicalForms\] It is true that $$\widetilde{F_\omega}^*\lambda_{PG}=\left(\text{pr}_2^\omega\right)^*\lambda_{EH}.$$
Let $\left(j_x^1s,\rho\right)\in F^*_\omega\left(W_{EH}\right)$ be an arbitrary element in this pullback bundle; then using Diagram we will have that $$\begin{aligned}
\left.\lambda_{PG}\right|_{\rho\circ T_{j_x^1s}F_\omega}\circ T_{\left(j_x^1s,\rho\right)}\widetilde{F_\omega}&=\left(\rho\circ T_{j_x^1s}F_\omega\right)\circ T_{\rho\circ T_{j_x^1s}F_\omega}\overline{\tau}^m_{{\mathcal{T}}_0}\circ T_{\left(j_x^1s,\rho\right)}\widetilde{F_\omega}\\
&=\left(\rho\circ T_{j_x^1s}F_\omega\right)\circ T_{\left(j_x^1s,\rho\right)}\text{pr}_1^\omega\\
&=\rho\circ T_\rho\pi_{EH}\circ T_{\left(j_x^1s,\rho\right)}\text{pr}_2^\omega\\
&=\left.\lambda_{EH}\right|_\rho\circ T_{\left(j_x^1s,\rho\right)}\text{pr}_2^\omega,
\end{aligned}$$ where it was used that $\pi_{EH}:W_{EH}\to J^1\tau_\Sigma$ is the restriction of the canonical projection $$\overline{\tau}^m_{J^1\tau_\Sigma}:\wedge^m_2\left(T^*J^1\tau_\Sigma\right){\longrightarrow}J^1\tau_\Sigma$$ to $W_{EH}$. This identity proves the Proposition.
Routh reduction of Palatini gravity {#sec:routh-reduct-palat}
-----------------------------------
We are now ready to prove the first result on Routh reduction of Palatini gravity.
\[thm:routh-reduct-palat\] Let $\widehat{Z}:U\subset M\to {\mathcal{T}}_0$ be a section that obeys the Palatini gravity equations of motion. Then the section $$F_\omega\circ\widehat{Z}:U\to J^1\tau_\Sigma$$ is holonomic and obeys the Einstein-Hilbert gravity equations of motion.
The idea of the proof is encoded in the following diagram $$\label{eq:EqsMotionEHPG}
\begin{tikzcd}[row sep=1.7cm,column sep=1.1cm]
W_{PG}
\arrow{d}{\pi_{PG}}
&
F_\omega^*\left(W_{EH}\right)
\arrow[swap]{l}{\widetilde{F_\omega}}
\arrow{dl}{\text{pr}_1^\omega}
\arrow{r}{\text{pr}_2^\omega}
&
W_{EH}
\arrow[swap]{d}{\pi_{EH}}
\\
{\mathcal{T}}_0
\arrow[near end]{rr}{F_\omega}
\arrow{dr}{\tau_1}
&
&
J^1\tau_\Sigma
\arrow{dl}{\left(\tau_\Sigma\right)_1}
\\
&
M
\arrow[dashed,bend left=25]{ul}{\widehat{Z}}
\arrow[dashed,bend left=75]{uul}{\Gamma}
\arrow[dashed,bend right=10,swap,near end,crossing over]{uu}{\Gamma'}
\arrow[dashed,bend right=70,swap]{uur}{\widetilde{\Gamma}}
&
\end{tikzcd}$$ Using Proposition \[prop:FieldTheoryEqsWL\], we construct $\Gamma:U\to W_{PG}$ out of $\widehat{Z}$; the Palatini gravity equations of motion will become $$\Gamma^*\left(Z\lrcorner d\lambda_{PG}\right)=0$$ for any $Z\in\mathfrak{X}^{V\left(\tau_1\circ\pi_{PG}\right)}\left(W_{PG}\right)$. Using Lemma \[lem:FwVsWPG\] we can define $$\Gamma':=\left(\widetilde{F}_\omega\right)^{-1}\circ\Gamma:U{\longrightarrow}F_\omega^*\left(W_{EH}\right);$$ then the Palatini equations of motion translate into $$\left({\Gamma}'\right)^*\left({Z}'\lrcorner d\widetilde{F_\omega}^*\lambda_{PG}\right)=0$$ for any ${Z}'\in\mathfrak{X}^{V\left(\tau_1\circ\text{pr}_1^{\omega}\right)}\left(F^*\left(W_{EH}\right)\right)$. Then using Proposition \[prop:RelationBetweenCanonicalForms\] and the fact that $\text{pr}_2^\omega:F^*_\omega\left(W_{EH}\right)\to W_{EH}$ is a submersion, we can conclude that the section $$\widetilde{\Gamma}:=\text{pr}_2^\omega\circ\Gamma':U\to W_{EH}$$ obeys the equations of motion $$\widetilde{\Gamma}^*\left(\widetilde{Z}\lrcorner d\lambda_{EH}\right)=0,$$ where $\widetilde{Z}\in\mathfrak{X}^{V\left(\left(\tau_\Sigma\right)_1\circ\pi_{EH}\right)}\left(W_{EH}\right)$ is an arbitrary vertical vector field on $W_{EH}$. Also, using Diagram , we have that $$\pi_{EH}\circ\widetilde{\Gamma}=\pi_{EH}\circ\text{pr}_2^\omega\circ\Gamma'=F_\omega\circ\text{pr}_1^\omega\circ\Gamma'=F_\omega\circ\widehat{Z}.$$ The theorem then follows from Proposition \[prop:FieldTheoryEqsEinsteinHilbertCase\].
...and reconstruction {#sec:...and-reconstr}
---------------------
It is time now to give a (somewhat partial) converse to Theorem \[thm:routh-reduct-palat\]. That is, given a section $\zeta:U\subset M\to\Sigma$ such that $j^1\zeta:U\to J^1\tau_\Sigma$ is extremal for the Einstein-Hilbert variational problem, find a section $$\widehat{Z}:U\to{\mathcal{T}}_0$$ such that $F_\omega\circ\widehat{Z}=j^1\zeta$ and $\widehat{Z}$ is an extremal for the Palatini variational problem. From Figure \[fig:MapsInvolvedRouth\] it is clear that we need to lift the section $j^1\zeta$ through the quotient map $\left.p_K^{J^1\tau}\right|_{{\mathcal{T}}_0}:{\mathcal{T}}_0\to{\mathcal{T}}_0/K$, which has the structure of a principal bundle on ${\mathcal{T}}_0/K$. It is clear that any principal bundle can be trivialized by a convenient restriction of the base space. As discussed in [@Capriotti2019], this is not the way in which this kind of reconstruction problem is solved. Rather, the problem of lifting sections along the projection in a principal bundle is reduced to the problem of deciding whether a certain connection is flat; moreover, it is expected that this connection is related to the connection used to define the Routhian. We will present in this section a theory of reconstruction along these lines. With this goal in mind, we will recall here some of the details developed in [@Capriotti2019]; for proofs we refer to the original article. We begin with a pair of diagrams:
$$\label{dia:covering}
\begin{tikzcd}[column sep=1cm, row
sep=1.4cm]
P \arrow[rr,"p^P_G"]\arrow[dr,swap,"\pi"] & & [1ex]P/G \arrow[dl,"\overline{\pi}"]\\
& M \arrow[ur,dashed,bend right=45,swap,"\zeta"]\arrow[ul,dashed,bend left=45,"s"]&
\end{tikzcd}\hspace{3em}
\begin{tikzcd}[column sep=1.8cm, row
sep=1.4cm]
\zeta^*P \arrow[r,"\text{pr}_2"]\arrow[d,swap,"\text{pr}_1"] &[1ex] P\arrow[d,"p^P_G"]\\
M\arrow[r,swap,"\zeta"] & P/G
\end{tikzcd}$$
Then we have the following result.
\[lem:TrivialToSections\] There exists a section $s:M\rightarrow P$ covering the section $\zeta\colon M\to P/G$ if and only if $\zeta^*P$ is a trivial bundle.
Since $\zeta^*P$ is a principal bundle, its triviality can be characterized in terms of a flat connection [@KN1]:
\[thm:conn-induc-pullb\] Let $\pi:P\rightarrow M$ be a $G$-principal bundle with $M$ simply connected. Then $P$ is trivial if and only if there exists a flat connection on $P$.
If $M$ is not simply connected, then one can ask for a flat connection with trivial holonomy and obtain a similar result. For the sake of simplicity, we will assume that $M$ is simply connected to apply Theorem \[thm:conn-induc-pullb\] when needed. For later use, we also observe that the section constructed in the proof of Theorem \[thm:conn-induc-pullb\] has horizontal image w.r.t. the given connection.
We now wish to apply the previous discussion to the case of the bundle $p_K^{{\mathcal{T}}_0}:=\left.p_K^{J^1\tau}\right|_{{\mathcal{T}}_0}:{\mathcal{T}}_0\to{\mathcal{T}}_0/K$. We have the situation depicted in Diagram (left): $Z\colon M\to{\mathcal{T}}_0/K$ is a given section and $\zeta\colon M\to \Sigma$ is the induced section. The basic question we want to address is the following: does there exist a section $\widehat{Z}:M\rightarrow {\mathcal{T}}_0$ such that $p_K^{{\mathcal{T}}_0}\circ\widehat{Z}=Z$? $$\label{dia:covering2}
\begin{tikzcd}[column sep=1cm, row
sep=1cm]
{\mathcal{T}}_0 \arrow[rr,"p^{{\mathcal{T}}_0}_K"]\arrow[d,"\tau_{10}'"] &&{\mathcal{T}}_0/K\arrow[d,swap,"\overline{\tau}'_{10}"]\\
LM\arrow[rr,"p^{LM}_K"]\arrow[dr,"\tau'"] & & \Sigma \arrow[dl,swap,"\tau_\Sigma"] \\
& M\arrow[ur,swap,dashed,"\zeta",bend right=25]\arrow[uul,dashed,"\widehat{Z}",bend left=85]\arrow[uur,swap,dashed,"Z",bend right=85]&
\end{tikzcd}
\hspace{1em}
\begin{tikzcd}[column sep=2cm, row
sep=1.8cm]
\zeta^*\left(LM\right) \arrow[r,"\text{pr}_2"]\arrow[d,swap,"\text{pr}_1"] &LM\arrow[d,"p_K^{LM}"]\\
M\arrow[r,swap,"\zeta"] & \Sigma
\end{tikzcd}$$ Now, using the fact that $${\mathcal{T}}_0/K\simeq\Sigma\times C_0\left(LM\right),\qquad{\mathcal{T}}_0\simeq LM\times C_0\left(LM\right)$$ we have that $Z=\left(\zeta,\Gamma\right)$ consists of the metric together with the Levi-Civita connection $\Gamma$; therefore, we have that $$\widehat{Z}=\left(\widehat{\zeta},\Gamma\right)$$ where $\widehat{\zeta}:M\to LM$ is some lift of the section $\zeta:M\to\Sigma$. We can then construct the pullback bundle $\zeta^*(LM)$ (Diagram , right) and particularize Lemma \[lem:TrivialToSections\] to conclude the following:
\[lem:FlatConnectionIffSectionJ1pi\] Assume that $M$ is simply connected. If $\zeta^*\left(LM\right)$ admits a flat connection then there exists a section $\widehat{Z}:M\rightarrow {\mathcal{T}}_0$ such that $$\left(p_K^{LM}\circ\tau_{10}'\right)\circ\widehat{Z}=\zeta$$ and $\widehat{Z}^*\omega_{\mathfrak{p}}=0$. Conversely, every such section gives rise to a flat connection on $\zeta^*\left(LM\right)$.
Because $\zeta^*\left(LM\right)$ is a $K$-principal bundle, Theorem \[thm:conn-induc-pullb\] and Lemma \[lem:TrivialToSections\] allow us to find a section $\widehat{\zeta}:M\to LM$ if and only if there exists a flat connection on it. Thus, given a flat connection on $\zeta^*\left(LM\right)$, we can construct a lift $\widehat{\zeta}:M\to LM$ for $\zeta$, and so $$\widehat{Z}=\left(\widehat{\zeta},\Gamma\right)$$ is the desired lift to ${\mathcal{T}}_0$, where $\Gamma:M\to C\left(LM\right)$ is the Levi-Civita connection for $\zeta$.
Conversely, let us suppose that we have a lift $$\widehat{\zeta}:=\tau_{10}'\circ\widehat{Z}:M\to LM$$ for the metric $\zeta:M\to\Sigma$. Recall that, for every $\left(x,u\right)\in\zeta^*\left(LM\right)$, $$T_{\left(x,u\right)}\zeta^*\left(LM\right)=\left\{\left(v_x,V_u\right):T_x\zeta\left(v_x\right)=T_up_K^{LM}\left(V_u\right)\right\}\subset T_xM\times T_u\left(LM\right).$$ Then we construct the following $K$-invariant distribution $H$ on $\zeta^*\left(LM\right)$: If $k\in K$ fulfils the condition $u=\widehat{\zeta}\left(x\right)\cdot k$, then $$H_{\left(x,u\right)}:=\left\{\left(v_x,T_{\widehat{\zeta}\left(x\right)}R_k\left(T_x\widehat{\zeta}\left(v_x\right)\right)\right):v_x\in T_xM\right\}.$$ It can be shown that it defines a flat connection on $\zeta^*\left(LM\right)$.
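Let us indicate, for completeness, why the distribution just constructed gives a flat connection (this only fills in the step behind the last sentence). By construction, $H_{\left(x,u\right)}$ is the tangent space at $\left(x,u\right)$ to the image of the section $y\mapsto\left(y,\widehat{\zeta}\left(y\right)\cdot k\right)$ of $\text{pr}_1:\zeta^*\left(LM\right)\to M$, for the unique $k\in K$ with $u=\widehat{\zeta}\left(x\right)\cdot k$; such a $k$ exists and is unique because $K$ acts freely and transitively on the fibers of $p_K^{LM}$. Hence $H$ is a $K$-invariant horizontal distribution whose integral manifolds are the images of these sections, and an integrable horizontal distribution has vanishing curvature, so the associated connection on $\zeta^*\left(LM\right)$ is flat.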
So, in order to find a lift for the section $\zeta$, it is sufficient to construct a flat connection on the $K$-principal bundle $\zeta^*\left(LM\right)$.
To this end, we will define $$\omega^\zeta:=\pi_{{\mathfrak{k}}}\circ\left(\text{pr}_2\right)^*\omega_0\in\Omega^1\left(\zeta^*\left(LM\right),{\mathfrak{k}}\right)$$ where $\omega_0\in\Omega^1\left(LM,\mathfrak{gl}\left(m\right)\right)$ is a principal connection on $LM$ and $\pi_{\mathfrak{k}}:\mathfrak{gl}\left(m\right)\to{\mathfrak{k}}$ is the canonical projection onto ${\mathfrak{k}}$. Lemma \[lem:FlatConnectionIffSectionJ1pi\] allows us to establish the following definition, inspired by the analogous concept from regular Routh reduction.
We will say that a metric $\zeta:M\to\Sigma$ satisfies the *flat condition with respect to the principal connection $\omega_0\in\Omega^1\left(LM,\mathfrak{gl}\left(m\right)\right)$* if and only if the associated connection $\omega^\zeta$ is flat.
This condition yields a relationship between the metric $\zeta:M\to\Sigma$ and the principal connection $\omega_0$; the physical relevance of this relationship remains unclear to the author. Mathematically, it means that, even when the bundle $p_K^{LM}:LM\to\Sigma$ is nontrivial, it can become trivial when restricted to the image of the section $\zeta$.
Also, it is necessary to establish the following result regarding the map $F_\omega$.
The following diagram commutes $$\label{eq:F_OmegaCommutative_Diagram}
\begin{tikzcd}[row sep=1.3cm,column sep=1.5cm]
{\mathcal{T}}_0
\arrow[swap]{d}{p_K^{{\mathcal{T}}_0}}
\arrow{r}{F_\omega}
&
J^1\tau_\Sigma
\arrow{d}{\left(\tau_\Sigma\right)_{10}}
\\
{\mathcal{T}}_0/K
\arrow{r}{\overline{\tau}'_{10}}
&
\Sigma
\end{tikzcd}$$
In fact, for $j_x^1s\in{\mathcal{T}}_0$ we have $$\begin{aligned}
\left(\tau_\Sigma\right)_{10}\left(F_\omega\left(j_x^1s\right)\right)&=\left(\tau_\Sigma\right)_{10}\left(j_x^1\left(p_K^{LM}\circ s\right)\right)\\
&=\left[s\left(x\right)\right]_K
\end{aligned}$$ and also $$\overline{\tau}'_{10}\left(p_K^{{\mathcal{T}}_0}\left(j_x^1s\right)\right)=\overline{\tau}'_{10}\left(\left[j_x^1s\right]_K\right)=\left[s\left(x\right)\right]_K,$$ and the lemma follows.
With this in mind, we are ready to formulate the reconstruction side of this version of Routh reduction for Palatini gravity.
\[thm:Reconstruction\] Let $\zeta:M\to\Sigma$ be a metric satisfying the flat condition and the Einstein-Hilbert equations of motion. Then there exists a section $$\widehat{Z}:M\to{\mathcal{T}}_0$$ that is an extremal of the Griffiths variational problem for Palatini gravity.
The holonomic lift $$j^1\zeta:M\to J^1\tau_\Sigma$$ is an extremal for the variational problem $\left(J^1\tau_\Sigma,{\mathcal{L}}_{EH}^{\left(1\right)},\mathcal{I}^\Sigma_{\text{con}}\right)$; then, by Proposition \[prop:FieldTheoryEqsEinsteinHilbertCase\], there exists a section $$\widetilde{\Gamma}:M\to W_{EH}$$ such that $\tau_{EH}\circ\widetilde{\Gamma}=j^1\zeta$ and $$\label{eq:WidetildeGammaLambdaEH}
\widetilde{\Gamma}^*\left(X\lrcorner d\lambda_{EH}\right)=0$$ for all $X\in\mathfrak{X}^{V\left(\left(\tau_\Sigma\right)_1\circ\tau_{EH}\right)}\left(W_{EH}\right)$.
On the other hand, by Lemma \[lem:FlatConnectionIffSectionJ1pi\] we have a lift $$\widehat{Z}:M\to{\mathcal{T}}_0$$ such that $$\overline{\tau}_{10}'\circ p_K^{{\mathcal{T}}_0}\circ\widehat{Z}=\zeta;$$ by Diagram we have that $$\label{eq:HolonomicSectionZHat}
\zeta=\overline{\tau}_{10}'\circ p_K^{{\mathcal{T}}_0}\circ\widehat{Z}=\left(\tau_\Sigma\right)_{10}\circ F_\omega\circ\widehat{Z}.$$ We will define the map $$\Gamma':=\left(\widehat{Z},\widetilde{\Gamma}\right):M\to{\mathcal{T}}_0\times W_{EH}$$ and show that it is a section of $\text{pr}_1^\omega:F^*_\omega\left(W_{EH}\right)\to{\mathcal{T}}_0$; namely, we have to show that $$F_\omega\circ\widehat{Z}=\pi_{EH}\circ\widetilde{\Gamma}.$$ It is important to this end to note that the conclusion of Proposition \[prop:Metricity-Horizontal\] can be translated to this context into $$Tp_K^{LM}\circ\omega_{\mathfrak{p}}=F_\omega^*\overline{\omega};$$ moreover, by Lemma \[lem:FlatConnectionIffSectionJ1pi\] we know that $\widehat{Z}^*\omega_{\mathfrak{p}}=0$, so $$\left(F_\omega\circ\widehat{Z}\right)^*\overline{\omega}=\widehat{Z}^*\left(F_\omega^*\overline{\omega}\right)=Tp_K^{LM}\circ\left(\widehat{Z}^*\omega_{\mathfrak{p}}\right)=0.$$ Then the section $$F_\omega\circ\widehat{Z}:M\to J^1\tau_\Sigma$$ is holonomic; finally, from Equation we must conclude that $$F_\omega\circ\widehat{Z}=j^1\zeta.$$ But $j^1\zeta=\pi_{EH}\circ\widetilde{\Gamma}$ by construction of the section $\widetilde{\Gamma}$; then $$F_\omega\circ\widehat{Z}=\pi_{EH}\circ\widetilde{\Gamma}$$ and $\Gamma'$ is a section of $F_\omega^*\left(W_{EH}\right)$, as required.
Now define the section $${\Gamma}:=\widetilde{F}_\omega\circ\Gamma'=\widetilde{F}_\omega\circ\left(\widehat{Z},\widetilde{\Gamma}\right):M\to W_{PG};$$ then, for any $Z\in\mathfrak{X}^{V\left(\tau_1\circ\pi_{PG}\right)}\left(W_{PG}\right)$ that is $\left(\text{pr}_2^\omega\circ\widetilde{F}_\omega^{-1}\right)$-projectable, we have that $$\begin{aligned}
\Gamma^*\left(Z\lrcorner d\lambda_{PG}\right)&=\left(\Gamma'\right)^*\left(\left(T\widetilde{F}_\omega^{-1}\circ Z\right)\lrcorner d\widetilde{F}_\omega^*\lambda_{PG}\right)\\
&=\left(\Gamma'\right)^*\left(\left(T\widetilde{F}_\omega^{-1}\circ Z\right)\lrcorner d\left(\text{pr}_2^\omega\right)^*\lambda_{EH}\right)\\
&=\left(\text{pr}_2^\omega\circ\Gamma'\right)^*\left(\left(T\text{pr}_2^\omega\circ T\widetilde{F}_\omega^{-1}\circ Z\right)\lrcorner d\lambda_{EH}\right)\\
&=\widetilde{\Gamma}^*\left(\left(T\text{pr}_2^\omega\circ T\widetilde{F}_\omega^{-1}\circ Z\right)\lrcorner d\lambda_{EH}\right)\\
&=0
\end{aligned}$$ because $\widetilde{\Gamma}$ obeys Equation .
Conclusions and outlook {#sec:conclusions-outlook}
=======================
We were able to adapt the Routh reduction scheme developed in [@Capriotti2019] to the case of affine gravity with vielbeins. This suggests that the formalism could be suited to dealing with Griffiths variational problems more general than the classical ones, at least in cases where the differential restrictions are a subset of those imposed by the contact structure. Extensions of this scheme to gravity interacting with matter fields will be studied elsewhere.
An important algebraic result
=============================
First we want to state the following algebraic proposition.
\[Prop:UniqueSolution\] Let $\left\{c_{ijk}\right\}$ be a set of real numbers such that $$\begin{cases}
c_{ijk}\mp c_{jik}=b_{ijk}\cr
c_{ijk}\pm c_{ikj}=a_{ijk}
\end{cases}$$ for some given sets of real numbers $\left\{a_{ijk}\right\}$ and $\left\{b_{ijk}\right\}$ such that $b_{ijk}\pm b_{jik}=0$ and $a_{ijk}\mp a_{ikj}=0$ (with the signs chosen consistently with the system above). Then $$c_{ijk}=\frac{1}{2}\left(a_{ijk}+a_{jki}-a_{kij}+b_{ijk}+b_{kij}-b_{jki}\right)$$ is the unique solution of this linear system.
From the first equation we see that $$\pm c_{jik}=c_{ijk}-b_{ijk}.$$ The trick now is to form the following combination $$\begin{aligned}
a_{ijk}+a_{jki}-a_{kij}&=c_{ijk}\pm c_{ikj}+c_{jki}\pm c_{jik}-\left(c_{kij}\pm c_{kji}\right)\cr
&=2c_{ijk}-b_{ijk}-b_{kij}+b_{jki}\end{aligned}$$ where the symmetry condition on the numbers $b_{ijk}$ was used when permuting indices.
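As an illustration (a standard special case, not needed elsewhere in the article), take the lower signs, $a_{ijk}=0$ and $b_{ijk}=\partial_kg_{ij}$ for a metric $g_{ij}$; the system then encodes the torsion-free and metricity conditions for $c_{ijk}=\Gamma_{ijk}$, and the formula above reduces to the Christoffel symbols of the first kind, $$\Gamma_{ijk}=\frac{1}{2}\left(\partial_kg_{ij}+\partial_jg_{ki}-\partial_ig_{jk}\right).$$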
Proof of Proposition \[prop:FieldTheoryEqsWL\] {#app:LiftToTorsionZero}
==============================================
In order to carry out this proof, it will be necessary to recall some facts from [@doi:10.1142/S0219887818500445]. First, we have the bundle isomorphism on ${\mathcal{T}}_0$ $$W_{PG}\simeq E_2$$ where $p_2':E_2\to{\mathcal{T}}_0$ is the vector bundle $$E_2:=\wedge^{m-1}_1\left({\mathcal{T}}_0\right)\otimes S^*\left(m\right),$$ with $S^*\left(m\right):=\left(\mathbb{R}^m\right)^*\odot\left(\mathbb{R}^m\right)^*$ the set of symmetric forms on $\mathbb{R}^m$, and $$\wedge^{m-1}_1\left({\mathcal{T}}_0\right):=\left\{\gamma\in\wedge^{m-1}\left({\mathcal{T}}_0\right):\gamma\text{ is horizontal with respect to the projection }\tau_1':{\mathcal{T}}_0\to M\right\}.$$ The bundle $E_2$ is a bundle of forms with values in a vector space; therefore, it has a canonical $\left(m-1\right)$-form $$\Theta:=\Theta_{ij}e^i\odot e^j.$$ Using the structure equations for the canonical connection on $J^1\tau$ (pulled back to ${\mathcal{T}}_0$), we have that the differential of the Lagrangian form $\lambda_{PG}$ is given by $$\begin{gathered}
\label{eq:FormulaFordLambda0}
\left.d\lambda_{PG}\right|_\rho=\left[2\eta^{kp}\left(\omega_{\mathfrak{p}}\right)_k^i\wedge\theta_{il}-\left(\omega_{\mathfrak{p}}\right)^s_s\wedge\eta^{kp}\theta_{kl}+\eta^{ip}\left.\Theta_{il}\right|_{\beta}\right]\wedge\Omega^l_p+\\
+\eta^{ik}\left[\left.d\Theta_{ij}\right|_{\beta}+\eta^{rq}\eta_{li}\left.\Theta_{rj}\right|_{\beta}\wedge\left(\omega_{\mathfrak{k}}\right)^l_q-\left.\Theta_{ip}\right|_{\beta}\wedge\left(\omega_{\mathfrak{k}}\right)^p_j\right]\wedge\left(\omega_{\mathfrak{p}}\right)^j_k.\end{gathered}$$ The equations of motion $$\label{eq:LiftedEqsOfMotion}
\Gamma^*\left(X\lrcorner d\lambda_{PG}\right)=0,\qquad X\in\mathfrak{X}^{V\left(\tau_1'\circ\tau_{PG}\right)}\left(W_{PG}\right)$$ are obtained by choosing a convenient set of vertical vector fields; because of the identification given above, it is sufficient to give a set of vertical vector fields on ${\mathcal{T}}_0$ and on $E_2$. It results that a global basis of vertical vector fields on ${\mathcal{T}}_0$ is $$B':=\left\{A_{J^1\tau},M_{rs}\left(\theta^r,\left(E^s_j\right)_{LM}\right)^V:A\in\mathfrak{gl}\left(m\right),M_{pq}-M_{qp}=0\right\};$$ in fact, the equation defining ${\mathcal{T}}_0$ $$e_\rho^ke_{k\sigma}^\mu=e_\sigma^ke_{k\rho}^\mu$$ is invariant under the $GL\left(m\right)$-action, and also $$\left(\theta^r,\left(E^s_j\right)_{LM}^V\right)\cdot\left(e_\rho^ke_{k\sigma}^\mu-e_\sigma^ke_{k\rho}^\mu\right)=e^\mu_j\left(e^s_\sigma e^r_\rho-e^r_\sigma e^s_\rho\right).$$ Given that $E_2$ is a vector bundle on ${\mathcal{T}}_0$, any section $\beta:{\mathcal{T}}_0\to E_2$ gives rise to a vertical vector field; the equations of motion associated to this kind of vector field are the metricity conditions $$\omega_{\mathfrak{p}}=0.$$ Therefore, fixing an Ehresmann connection on the bundle $p_2':E_2\to{\mathcal{T}}_0$, we can produce the set of vertical vector fields on $E_2$ $$\left(A_{{\mathcal{T}}_0}\right)^H,\qquad M:=M_{rs}\left(\theta^r,\left(E^s_j\right)_{LM}\right)^H;$$ the equations of motion associated to $M$ are $$\eta^{ks}\Gamma^*\left(M_{rs}\Theta_{kt}\wedge\theta^r\right)=0.$$ The unique solution of these equations is $\Theta_{kt}=0$. In fact, by writing $$\Gamma^*\Theta_{kt}=\eta^{lp}N_{ktp}\theta_l$$ and taking into account the symmetry properties of $M_{pq}$, we have that the set of quantities $N_{pqr}$ must satisfy $$N_{pqr}-N_{qpr}=0,\qquad N_{pqr}+N_{prq}=0;$$ by Proposition \[Prop:UniqueSolution\], it results that $N_{pqr}=0$, as desired. The rest of the equations of motion can be calculated in the same fashion as in the $J^1\tau$ case; therefore, the equations are equivalent to the equations for the extremals of the Palatini variational problem.
Proof of Proposition \[prop:Local\_Horizontal\_Lift\] {#sec:proof-prop-ref}
=====================================================
First, let us write down $$\left(\frac{\partial}{\partial x^\mu}\right)^H=M^\nu_\mu\frac{\partial}{\partial x^\nu}+N_{\mu k}^\nu\frac{\partial}{\partial e^\nu_k}.$$ Then from $$\label{eq:HorizontalLiftProjected}
Tp_K^{LM}\left(\left(\frac{\partial}{\partial x^\mu}\right)^H\right)=\frac{\partial}{\partial x^\mu}$$ it follows that $$M^\nu_\mu=\delta^\nu_\mu;$$ the condition $$\omega_K\left(\left(\frac{\partial}{\partial x^\mu}\right)^H\right)=0$$ implies $$N^\nu_{\mu k}\left(\eta^{pk}e_\nu^l-\eta^{lk}e_\nu^p\right)+\eta^{lq}e^p_\sigma f^\sigma_{q\mu}-\eta^{pq}e^l_\sigma f^\sigma_{q\mu}=0.$$ In order to understand this equation for the unknowns $N^\nu_{\mu k}$, let us change variables through the formula $$N^\nu_{\mu k}=g_{\sigma\rho}e^\rho_kN^{\nu\sigma}_\mu;$$ in terms of these new variables, and the Christoffel symbols of the connection $\omega_0$ $$\overline{\Gamma}^\sigma_{\alpha\mu}=-e_\alpha^kf_{k\mu}^\sigma,$$ the previous equations can be expressed as $$\begin{aligned}
0&=g_{\alpha\rho}e^\rho_kN^{\nu\alpha}_\mu\left(\eta^{pk}e_\nu^l-\eta^{lk}e_\nu^p\right)-\eta^{lq}e^p_\sigma e_q^\alpha \overline{\Gamma}^\sigma_{\alpha\mu}+\eta^{pq}e^l_\sigma e^\alpha_q \overline{\Gamma}^\sigma_{\alpha\mu}\\
&=N^{\nu\alpha}_\mu\left(g_{\alpha\rho}e^\rho_k\eta^{pk}e_\nu^l-g_{\alpha\rho}e^\rho_k\eta^{lk}e_\nu^p\right)+\left(\eta^{pq}e^\alpha_qe^l_\sigma-\eta^{lq}e^\alpha_qe^p_\sigma\right)\overline{\Gamma}^\sigma_{\alpha\mu}\\
&=N^{\nu\alpha}_\mu\left(g_{\alpha\rho}g^{\rho\beta}e_\beta^pe_\nu^l-g_{\alpha\rho}g^{\rho\beta}e_\beta^le_\nu^p\right)+\left(e^p_\beta e^l_\sigma-e^l_\beta e^p_\sigma\right)g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}\\
&=N^{\sigma\beta}_\mu\left(e_\beta^pe_\sigma^l-e_\beta^le_\sigma^p\right)+\left(e^p_\beta e^l_\sigma-e^l_\beta e^p_\sigma\right)g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}\\
&=\left(e_\beta^pe_\sigma^l-e_\beta^le_\sigma^p\right)\left(N^{\sigma\beta}_\mu+g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}\right).\end{aligned}$$ The operator on the left is essentially an antisymmetrizer, because of the formula $$e^\mu_pe^\nu_l\left(e_\beta^pe_\sigma^l-e_\beta^le_\sigma^p\right)=\delta_\beta^\mu \delta_\sigma^\nu-\delta_\beta^\nu \delta_\sigma^\mu;$$ therefore $$\label{eq:NFromGamma}
N^{\sigma\beta}_\mu+g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}=S^{\sigma\beta}_\mu,$$ where $$S^{\sigma\beta}_\mu-S^{\beta\sigma}_\mu=0.$$ Finally, from the condition we obtain $$N_{\mu k}^\sigma\left(\eta^{kq}e_q^\rho\delta_\sigma^\alpha+\eta^{kq}e_q^\alpha\delta_\sigma^\rho\right)=0$$ or, in terms of the variables $N^{\nu\sigma}_\mu$ $$\begin{aligned}
0&=g_{\nu\beta}e^\beta_kN_{\mu}^{\sigma\nu}\left(\eta^{kq}e_q^\rho\delta_\sigma^\alpha+\eta^{kq}e_q^\alpha\delta_\sigma^\rho\right)\\
&=N_{\mu}^{\sigma\nu}\left(g_{\nu\beta}e^\beta_k\eta^{kq}e_q^\rho\delta_\sigma^\alpha+g_{\nu\beta}e^\beta_k\eta^{kq}e_q^\alpha\delta_\sigma^\rho\right)\\
&=N_{\mu}^{\sigma\nu}\left(g_{\nu\beta}g^{\beta\rho}\delta_\sigma^\alpha+g_{\nu\beta}g^{\beta\alpha}\delta_\sigma^\rho\right)\\
&=N_{\mu}^{\sigma\nu}\left(\delta_{\nu}^{\rho}\delta_\sigma^\alpha+\delta_{\nu}^{\alpha}\delta_\sigma^\rho\right).\end{aligned}$$ From Equation it results that $$g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}+g^{\alpha\sigma}\overline{\Gamma}^\beta_{\alpha\mu}-2S^{\sigma\beta}_\mu=0,$$ or equivalently $$N^{\sigma\beta}_\mu=\frac{1}{2}\left(g^{\alpha\sigma}\overline{\Gamma}^\beta_{\alpha\mu}-g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}\right).$$ Therefore, we have $$\left(\frac{\partial}{\partial x^\mu}\right)^H=\frac{\partial}{\partial x^\mu}+\frac{1}{2}g_{\beta\rho}e^\rho_k\left(g^{\alpha\sigma}\overline{\Gamma}^\beta_{\alpha\mu}-g^{\alpha\beta}\overline{\Gamma}^\sigma_{\alpha\mu}\right)\frac{\partial}{\partial e^\sigma_k}.$$
Additionally, we need to construct the horizontal lifts $$\left(\frac{\partial}{\partial g^{\mu\nu}}\right)^H=P_{\mu\nu}^\sigma\frac{\partial}{\partial x^\sigma}+Q_{\mu\nu k}^\sigma\frac{\partial}{\partial e^\sigma_k}$$ with $P_{\mu\nu}^{\sigma}-P_{\nu\mu}^{\sigma}=0,Q_{\mu\nu k}^{\sigma}-Q_{\nu\mu k}^\sigma=0$. The equation $$\label{eq:HorizontalLiftE}
Tp_K^{LM}\left(\left(\frac{\partial}{\partial g^{\mu\nu}}\right)^H\right)=\frac{\partial}{\partial g^{\mu\nu}}$$ and the identity imply $$P_{\mu\nu}^{\sigma}\frac{\partial}{\partial x^\sigma}+Q_{\mu\nu k}^{\sigma}\left(\eta^{kq}e_q^\rho\delta_\sigma^\alpha+\eta^{kq}e_q^\alpha\delta_\sigma^\rho\right)\frac{\partial}{\partial g^{\alpha\rho}}=\frac{\partial}{\partial g^{\mu\nu}},$$ namely $$P_{\mu\nu}^{\sigma}=0$$ and (given the symmetry properties of $g^{\mu\nu}$) $$\begin{aligned}
\frac{1}{2}\left(\delta_\mu^\alpha\delta^\rho_\nu+\delta_\nu^\alpha\delta^\rho_\mu\right)&=Q_{\mu\nu k}^{\sigma}\left(\eta^{kq}e_q^\rho\delta_\sigma^\alpha+\eta^{kq}e_q^\alpha\delta_\sigma^\rho\right)\cr
&=\eta^{kq}e_q^\rho Q_{\mu\nu k}^\alpha+\eta^{kq}e_q^\alpha Q_{\mu\nu k}^{\rho}.\label{eq:Condition_One_E}\end{aligned}$$ The horizontality condition $$\omega_K\left(\left(\frac{\partial}{\partial g^{\mu\nu}}\right)^H\right)=0$$ will be equivalent to $$\left(\eta^{pk}e^l_\sigma-\eta^{lk}e^p_\sigma\right)Q_{\mu\nu k}^\sigma=0.
\label{eq:Condition_Two_E}$$ These conditions can be understood by introducing the variables $$Q_{\mu\nu}^{\alpha\rho}:=\eta^{kq}e_q^\rho Q_{\mu\nu k}^{\alpha};$$ then, Equation becomes $$Q_{\mu\nu}^{\alpha\rho}+Q_{\mu\nu}^{\rho\alpha}=\frac{1}{2}\left(\delta_\mu^\alpha\delta^\rho_\nu+\delta_\nu^\alpha\delta^\rho_\mu\right)$$ and Equation is equivalent to $$Q_{\mu\nu}^{\alpha\rho}-Q_{\mu\nu}^{\rho\alpha}=0.$$ Therefore $$\begin{aligned}
Q_{\mu\nu}^{\alpha\rho}+Q_{\mu\nu}^{\rho\alpha}&=\frac{1}{2}\left(Q_{\mu\nu}^{\alpha\rho}+Q_{\mu\nu}^{\rho\alpha}\right)+\frac{1}{2}\left(Q_{\mu\nu}^{\alpha\rho}-Q_{\mu\nu}^{\rho\alpha}\right)\\
&=\frac{1}{4}\left(\delta_\mu^\alpha\delta^\rho_\nu+\delta_\nu^\alpha\delta^\rho_\mu\right)\end{aligned}$$ and so $$\begin{aligned}
\left(\frac{\partial}{\partial g^{\mu\nu}}\right)^H&=Q_{\mu\nu k}^{\alpha}\frac{\partial}{\partial e^\alpha_k}\\
&=\frac{1}{4}\eta_{kl}e^l_\rho\left(\delta_\mu^\alpha\delta^\rho_\nu+\delta_\nu^\alpha\delta^\rho_\mu\right)\frac{\partial}{\partial e^\alpha_k}\\
&=\frac{1}{4}g_{\rho\beta}e^\beta_k\left(\delta_\mu^\alpha\delta^\rho_\nu+\delta_\nu^\alpha\delta^\rho_\mu\right)\frac{\partial}{\partial e^\alpha_k}.\end{aligned}$$
Arnowitt R., Deser S., Misner C.W., The Dynamics of General Relativity, *General Relativity and Gravitation* **40** (2004), 1997–2027, <http://dx.doi.org/10.1007/s10714-008-0661-1>.
Barbero-Liñán M., Echeverría-Enríquez A., de Diego D.M., Muñoz-Lecanda M.C., Román-Roy N., Skinner–Rusk unified formalism for optimal control systems and applications, *Journal of Physics A: Mathematical and Theoretical* **40** (2007), 12071, <http://stacks.iop.org/1751-8121/40/i=40/a=005>.
Capriotti S., Differential geometry, Palatini gravity and reduction, *Journal of Mathematical Physics* **55** (2014), 012902, <http://scitation.aip.org/content/aip/journal/jmp/55/1/10.1063/1.4862855>.
Capriotti S., Routh reduction and Cartan mechanics, *Journal of Geometry and Physics* **114** (2017), 23 – 64, <http://www.sciencedirect.com/science/article/pii/S0393044016302959>.
Capriotti S., Unified formalism for Palatini gravity, *International Journal of Geometric Methods in Modern Physics* (2017), 1850044, <http://www.worldscientific.com/doi/abs/10.1142/S0219887818500445>.
Capriotti S., Garc[í]{}a-Tora[ñ]{}o Andr[é]{}s E., Routh reduction for first-order Lagrangian field theories, *Letters in Mathematical Physics* **109** (2019), 1343–1376, <https://doi.org/10.1007/s11005-018-1140-6>.
Castrillón López M., Muñoz Masqué J., The geometry of the bundle of connections, *Mathematische Zeitschrift* **236** (2001), 797–811, <http://dx.doi.org/10.1007/PL00004852>.
Castrillón López M., Ratiu T.S., Reduction in Principal Bundles: Covariant Lagrange-Poincaré Equations, *Communications in Mathematical Physics* **236** (2003), 223–250.
Castrillón López M., Ratiu T.S., Shkoller S., Reduction in Principal Fiber Bundles: Covariant Euler-Poincaré Equations, *Proceedings of the American Mathematical Society* **128** (2000), 2155–2164, <http://www.jstor.org/stable/119711>.
Cattaneo A.S., Schiavina M., The Reduced Phase Space of Palatini–Cartan–Holst Theory, *Annales Henri Poincaré* **20** (2019), 445–480, <https://doi.org/10.1007/s00023-018-0733-z>.
Crampin M., Mestdag T., Routh’s procedure for non-Abelian symmetry groups, *Journal of Mathematical Physics* **49** (2008), 032901, <http://scitation.aip.org/content/aip/journal/jmp/49/3/10.1063/1.2885077>.
Dadhich N., Pons J.M., On the equivalence of the Einstein-Hilbert and the Einstein-Palatini formulations of general relativity for an arbitrary connection, *General Relativity and Gravitation* **44** (2012), 2337–2352.
Echeverría-Enríquez A., López C., Marín-Solano J., Muñoz-Lecanda M.C., Román-Roy N., Lagrangian-Hamiltonian unified formalism for field theory, *Journal of Mathematical Physics* **45** (2004), 360–380.
Ellis D.C.P., Gay-Balmaz F., Holm D.D., Ratiu T.S., Lagrange-Poincaré field equations, *Journal of Geometry and Physics* **61** (2011), 2120–2146.
García-Toraño Andrés E., Mestdag T., Yoshimura H., Implicit Lagrange–Routh equations and Dirac reduction, *Journal of Geometry and Physics* **104** (2016), 291–304, <http://www.sciencedirect.com/science/article/pii/S0393044016300365>.
Gaset J., Román-Roy N., Multisymplectic unified formalism for Einstein-Hilbert gravity, *Journal of Mathematical Physics* **59** (2018), 032502, <https://doi.org/10.1063/1.4998526>.
Gotay M., An exterior differential system approach to the Cartan form, in Symplectic geometry and mathematical physics. Actes du colloque de géométrie symplectique et physique mathématique en l’honneur de Jean-Marie Souriau, Aix-en-Provence, France, June 11-15, 1990, Editors P. Donato, C. Duval, J. Elhadad, G. Tuynman, Progress in Mathematics **99**, Birkhäuser, Boston, MA, 1991, 160–188.
Gotay M., Isenberg J., Marsden J., Momentum maps and classical relativistic fields. I: Covariant field theory, 1997.
Griffiths P., Exterior Differential Systems and the Calculus of Variations, Progress in Mathematics, Birkhäuser, 1982.
Hsu L., Calculus of Variations via the Griffiths formalism, *J. Diff. Geom.* **36** (1992), 551–589.
Ibort A., Spivak A., On A Covariant Hamiltonian Description of Palatini’s Gravity on Manifolds with Boundary, 2016.
Kobayashi S., Nomizu K., Foundations of Differential Geometry, Vol. 1, Wiley, 1963.
Krupka D., Introduction to Global Variational Geometry, Atlantis Studies in Variational Geometry, Atlantis Press, 2015.
Langerock B., Castrillón López M., Routh Reduction for Singular Lagrangians, *Int. J. Geom. Methods Mod. Phys.* **7** (2010), 1451–1489, <http://eprints.ucm.es/21388/>.
Marsden J.E., Ratiu T.S., Scheurle J., Reduction theory and the Lagrange-Routh Equations, *J. Math. Phys* **41** (2000), 3379–3429.
Prieto-Mart[í]{}nez P.D., Rom[á]{}n-Roy N., A new multisymplectic unified formalism for second order classical field theories, *Journal of Geometric Mechanics* **7** (2015), 203–253, <http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=11245>.
Rajan D., Visser M., Global properties of physically interesting Lorentzian spacetimes, *Int. J. Mod. Phys.* **D25** (2016), 1650106.
Romano J.D., Geometrodynamics vs. connection dynamics, *General Relativity and Gravitation* **25** (1993), 759–854, <https://doi.org/10.1007/BF00758384>.
Saunders D.J., The Geometry of Jet Bundles, Cambridge University Press, 1989.
[^1]: The author thanks the CONICET for financial support.
---
abstract: 'Ultracompact X-ray binaries (UCXBs) have orbital periods shorter than about 80 minutes and typically consist of a neutron star that accretes hydrogen-poor matter from a white dwarf companion. Angular momentum loss via gravitational wave radiation drives mass transfer via Roche-lobe overflow. The late-time evolution of UCXBs is poorly understood – all 13 known systems are relatively young and it is not clear why. One question is whether old UCXBs still exist, or whether they have been disrupted at some point. Alternatively, they may simply be too faint to see. To investigate this, we apply the theories of dynamical instability, the magnetic propeller effect, and evaporation of the donor to UCXB evolution. We find that both the propeller effect and evaporation are promising explanations for the absence of observed long-period UCXBs.'
---
Introduction
============
Ultracompact X-ray binaries (UCXBs) are a subclass of low-mass X-ray binaries and usually consist of a white dwarf transferring mass to a neutron star or black hole companion [@savonije1986 (Savonije et al. 1986)]. Today, about 13 systems are known, all Galactic. Apart from their short periods they distinguish themselves from other X-ray binaries by a very low optical to X-ray flux ratio. UCXBs are important objects to study because of the absence of hydrogen in the accretion disk and X-ray bursts. They are also excellent test objects for the common-envelope phase, which usually happens twice in the process of their formation. UCXBs are also likely progenitors of millisecond radio pulsars.
During their evolution, UCXBs transfer mass via Roche-lobe overflow because they lose angular momentum via the emission of gravitational waves. Because the mass ratio of the donor to the accretor decreases, their orbits expand. Within the age of the Universe they can reach orbital periods of about 80 min [@deloye2003 (Deloye & Bildsten 2003)]. At longer orbital periods, the mass transfer rates decrease and their orbits expand more slowly because gravitational wave emission becomes weaker. For this reason, an UCXB is expected to spend most of its life at a long orbital period, and we expect the majority of the population to have periods longer than about 60 min.
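The argument can be illustrated with a small numerical integration. The Python sketch below (ours, not taken from the cited works) follows the gravitational-wave-driven evolution of an idealized UCXB with a Roche-lobe-filling, fully degenerate donor obeying $R\propto M^{-1/3}$; the initial masses, the mass-radius normalization and the neglect of finite-entropy and Coulomb corrections are all simplifying assumptions, so the absolute periods are only indicative (detailed donor models are needed for the $\sim 80$ min figure quoted above), but the qualitative behaviour, rapid early expansion followed by extremely slow late-time growth, is robust.

```python
import numpy as np

# cgs constants
G, c = 6.674e-8, 2.998e10
Msun, Rsun, yr = 1.989e33, 6.957e10, 3.156e7

def donor_radius(m2):
    """Idealized fully degenerate donor, R ~ M^(-1/3) (normalization assumed)."""
    return 0.0128 * Rsun * (m2 / Msun) ** (-1.0 / 3.0)

def separation(m1, m2):
    """Separation of a semi-detached system (Paczynski Roche-lobe approximation)."""
    return donor_radius(m2) / (0.462 * (m2 / (m1 + m2)) ** (1.0 / 3.0))

def period_min(m1, m2):
    a = separation(m1, m2)
    return 2.0 * np.pi * np.sqrt(a ** 3 / (G * (m1 + m2))) / 60.0

def mdot2(m1, m2):
    """Donor mass-loss rate set by GW angular momentum loss (conservative transfer,
    donor mass-radius exponent -1/3); valid while q = m2/m1 < 2/3."""
    a, M, q = separation(m1, m2), m1 + m2, m2 / m1
    jdot_over_j = -(32.0 / 5.0) * G ** 3 / c ** 5 * m1 * m2 * M / a ** 4
    return m2 * jdot_over_j / (2.0 / 3.0 - q)

m1, m2, t = 1.4 * Msun, 0.10 * Msun, 0.0            # assumed initial NS + He donor
for t_mark in (1.0e9 * yr, 5.0e9 * yr, 13.7e9 * yr):
    while t < t_mark:
        md = mdot2(m1, m2)
        dt = min(1.0e-3 * abs(m2 / md), t_mark - t)  # small fractional mass step
        m2 += md * dt
        m1 -= md * dt                                # conservative mass transfer
        t += dt
    print("t = %5.1f Gyr: P_orb = %5.1f min, M_donor = %.4f Msun, |Mdot| = %.1e Msun/yr"
          % (t / yr / 1e9, period_min(m1, m2), m2 / Msun, -mdot2(m1, m2) / Msun * yr))
```

With these assumptions the period grows quickly at first and then changes only slowly over the following $\sim 10$ Gyr, which is why most of the population is expected to sit at long periods.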
Here we address the glaring discrepancy between this theoretical argument, and the fact that all known UCXBs have periods *shorter* than 60 min. At some point in their lives, UCXBs must become either invisible to our instruments, or become disrupted. The latter could be caused by a dynamical instability in the mass loss by the donor. Furthermore, radiation from the accretion disk and millisecond pulsar accretor can potentially evaporate the donor [@ruderman1989 (Ruderman et al. 1989)]. Finally, invisibility may be caused by the magnetosphere of the millisecond pulsar disrupting the inner accretion disk, source of most X-ray emission, and reducing the accretion rate.
Dynamical instability of the donor star
=======================================
The evolution of UCXBs and other very low mass ratio binaries is poorly understood. In particular, the dynamical behavior of a large accretion disk (relative to the orbit) is unclear. Along with matter, angular momentum is transported from the donor to the accretion disk. The evolution of an UCXB depends strongly on the capacity of the binary to return angular momentum from the disk to the orbit via a torque. It has been suggested by [@yungelson2006], among others, that the outward angular momentum transport could be hampered by gaps in the disk at radii that are resonant with the orbital period, or that the disk would no longer fit inside the Roche lobe of the accretor. At very low mass ratio, reduced feedback has a catastrophic effect on the donor; it cannot be contained in its Roche lobe anymore, with runaway mass loss as a result. By simulating accretion disks in extremely low mass ratio binaries such as the one in Fig. \[fig:accdisk\], we find that this feedback mechanism most likely remains effective even at very low mass ratio, and we expect UCXBs to avoid disruption by dynamical instability. This confirms a result by [@priedhorsky1988].
![SPH simulation of an accretion disk in dynamical equilibrium for mass ratio of $0.001$. The solid curve shows the equatorial Roche lobes. The dashed circles indicate the 3:1 (inner) and 2:1 (outer) resonances with the orbital period. Figure adapted from [@vanhaaften2012evo].[]{data-label="fig:accdisk"}](vanhaaftenl_fig1.eps){width="3.4in"}
Evaporation of the donor star
=============================
Last year [@bailes2011] discovered a (detached) companion orbiting the millisecond radio pulsar PSR J1719–1438. Remarkably, the system has an extremely low mass function, pointing to a companion mass close to $0.001\ M_{\odot}$, as well as a short orbital period of 131 min. The authors suggested that this system could be the descendant of an UCXB, provided it could become detached at some stage. However, the orbital period of 131 min is significantly longer than the $\sim 80$ min expected from gravitational-wave dominated evolution. Among other things, we investigated the effect of a fast, isotropic wind from the donor on the evolution of an UCXB. Because the donor star in an UCXB contains most of the system’s angular momentum in its orbit around the center of mass, this wind is effective in removing angular momentum from the system, in addition to the losses via gravitational wave emission. The evolution would speed up and longer orbital periods could be reached within the age of the Universe. In [@vanhaaften2012j1719] we studied the effect of such a wind on the time it takes for an UCXB to reach a period of 131 min. Even a low wind mass loss rate of a few times $10^{-13}\ M_{\odot}\ \mathrm{yr}^{-1}$ is sufficient, suggesting that UCXBs could indeed produce a system like PSR J1719–1438.
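A back-of-the-envelope comparison (ours; all numbers are assumptions chosen for illustration) of the two angular momentum sinks makes the point. For a fast isotropic wind that leaves the donor carrying the donor's specific orbital angular momentum, $\dot{J}_{\rm wind}/J=(\dot{m}_{\rm wind}/m_{2})\,m_{1}/(m_{1}+m_{2})$, which grows relative to the gravitational-wave term as the donor mass decreases:

```python
import numpy as np

# cgs constants
G, c = 6.674e-8, 2.998e10
Msun, Rsun, yr = 1.989e33, 6.957e10, 3.156e7

def separation(m1, m2):
    """Roche-lobe-filling degenerate donor, R ~ 0.0128 Rsun (M/Msun)^(-1/3) (assumed)."""
    r2 = 0.0128 * Rsun * (m2 / Msun) ** (-1.0 / 3.0)
    return r2 / (0.462 * (m2 / (m1 + m2)) ** (1.0 / 3.0))

def jdot_over_j_gw(m1, m2):
    a = separation(m1, m2)
    return -(32.0 / 5.0) * G ** 3 / c ** 5 * m1 * m2 * (m1 + m2) / a ** 4

def jdot_over_j_wind(m1, m2, mdot_wind):
    """Fast isotropic wind leaving the donor with the donor's specific
    orbital angular momentum: Jdot/J = (mdot_wind/m2) * m1/(m1+m2)."""
    return (mdot_wind / m2) * m1 / (m1 + m2)

m1 = 1.4 * Msun
mdot_wind = -3.0e-13 * Msun / yr           # assumed evaporative wind from the donor
for m2_msun in (0.02, 0.01, 0.005):
    m2 = m2_msun * Msun
    gw = abs(jdot_over_j_gw(m1, m2))
    wind = abs(jdot_over_j_wind(m1, m2, mdot_wind))
    print("M_donor = %.3f Msun: |Jdot/J|_GW = %.1e /yr, |Jdot/J|_wind = %.1e /yr "
          "(wind/GW = %.2g)" % (m2_msun, gw * yr, wind * yr, wind / gw))
```

With the assumed $3\times10^{-13}\ M_{\odot}\,\mathrm{yr}^{-1}$ wind, the extra term is negligible at the higher donor masses but dominates once the donor has shrunk to a few $0.001\ M_{\odot}$, which is exactly the regime where it can push the period beyond the purely gravitational-wave value.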
This hypothesis is supported by 16-year observations by the *Rossi XTE* All-Sky Monitor, summarized in Fig. \[fig:asm\]. Most of the UCXBs with orbital periods longer than about 40 min are much brighter than expected from a model with a degenerate donor star and evolution driven by gravitational wave emission. This behavior is consistent with additional angular momentum loss, for example by a donor wind.
![UCXB time-averaged luminosity against orbital period. The solid curve shows the evolution of an UCXB with a helium white dwarf donor. The dashed curve represents the evolution of an UCXB with initially a helium-burning donor [@nelemans2010 (Nelemans et al. 2010)]. The triangles represent the lower bounds on the luminosity from the *Rossi XTE* All-Sky Monitor. The stars above these represent a luminosity estimate based on an extrapolation of the light curves. The circles show the time-averaged luminosities. Filled symbols are confirmed UCXBs, open symbols are candidates. Symbols that correspond to the same source are connected by a gray line for clarity. Figure taken from [@vanhaaften2012asm].[]{data-label="fig:asm"}](vanhaaftenl_fig2.eps){width="3.4in"}
Very recently, after this conference, direct evidence of a wind from the donor has been presented with the discovery of PSR J1311–3430 [@pletsch2012 (Pletsch et al. 2012)], an UCXB with a 93.8 min orbital period [@romani2012; @kataoka2012 (Romani 2012, Kataoka et al. 2012)] and a helium donor [@romanietal2012 (Romani et al. 2012)] showing donor evaporation.
The magnetic propeller effect
=============================
If UCXBs survive up to old age and long orbital periods, their mass transfer rates decrease with increasing periods, and the magnetic fields of the neutron stars can more easily dominate the Keplerian flow in the inner accretion disk. For a fast spinning neutron star, matter in the disk can be accelerated by the field lines, leading to a reduction in accretion, or even to matter being ejected from the binary system [@davidson1973 (Davidson & Ostriker 1973)]. We modeled the spin up and spin down of the neutron star based on the mass transfer rate as function of time in an UCXB. Initially the neutron star spins up due to accretion at a high rate, followed by spin down as the propeller effect becomes effective (Fig. \[fig:spin\]). We found that enough rotational energy is stored in the neutron star during the spin-up phase for the propeller effect to prevent accretion completely for the remainder of the evolution. Therefore it is conceivable that the luminosity of an UCXB with a low mass transfer rate is affected by the propeller effect.
![Spin period of a neutron star accretor with a residual magnetic field of $10^{8.5}$ G versus orbital period, with the propeller effect (solid), or in the hypothetical case of continued accreting (dashed). All observed millisecond pulsar-UCXBs are shown by circles (white circles for candidate UCXBs). Figure adapted from [@vanhaaften2012evo].[]{data-label="fig:spin"}](vanhaaftenl_fig3.eps){width="3.4in"}
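The condition for the propeller regime can be estimated with the standard comparison of the magnetospheric (Alfvén-type) radius with the corotation radius; the snippet below is a generic textbook estimate, not the authors' detailed spin-evolution model, and the neutron-star parameters are assumed values.

```python
import numpy as np

# cgs constants and assumed neutron-star parameters
G = 6.674e-8
Msun = 1.989e33
M_ns = 1.4 * Msun          # neutron-star mass (assumed)
R_ns = 1.0e6               # neutron-star radius in cm (assumed)
B = 10 ** 8.5              # surface field in G, as in the figure
mu = B * R_ns ** 3         # magnetic dipole moment

def corotation_radius(P_spin):
    """Radius where the Keplerian angular velocity equals the stellar spin."""
    return (G * M_ns * P_spin ** 2 / (4.0 * np.pi ** 2)) ** (1.0 / 3.0)

def magnetospheric_radius(mdot, xi=0.5):
    """Alfven-type radius, r_m = xi * (mu^4 / (2 G M mdot^2))^(1/7)."""
    return xi * (mu ** 4 / (2.0 * G * M_ns * mdot ** 2)) ** (1.0 / 7.0)

# A millisecond-pulsar accretor with a low late-time transfer rate (assumed values)
P_spin = 3e-3                              # spin period in s
for mdot_msun_yr in (1e-10, 1e-12, 1e-14):
    mdot = mdot_msun_yr * Msun / 3.156e7   # g/s
    r_c = corotation_radius(P_spin)
    r_m = magnetospheric_radius(mdot)
    regime = "propeller (accretion inhibited)" if r_m > r_c else "accretion"
    print("Mdot = %.0e Msun/yr: r_m = %.1e cm, r_c = %.1e cm -> %s"
          % (mdot_msun_yr, r_m, r_c, regime))
```

For the assumed field and spin, the magnetospheric radius moves outside corotation once the transfer rate drops well below $\sim 10^{-10}\ M_{\odot}\,\mathrm{yr}^{-1}$, consistent with the propeller effect becoming important in old, faint systems.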
Conclusion
==========
The absence (until very recently) of observed UCXBs with orbital periods longer than one hour could be explained if their evolution is sped up by a donor wind, especially if this eventually leads to detachment of the donor. Furthermore, the propeller effect is capable of significantly reducing the X-ray luminosity of old UCXBs.
Bailes et al. 2011, *Science*, 333, 1717
Davidson & Ostriker 1973, *ApJ*, 179, 585
Deloye & Bildsten 2003, *ApJ*, 598, 1217
Kataoka et al. 2012, *ApJ*, 757, 176
Nelemans et al. 2010, *MNRAS*, 401, 1347
Pletsch et al. 2012, *Science*, doi:10.1126/science.1229054
Priedhorsky & Verbunt 1988, *ApJ*, 333, 895
Romani 2012, *ApJ*, 754, L25
Romani et al. 2012, *ArXiv e-prints*, 1210.6884v1
Ruderman et al. 1989, *ApJ*, 336, 507
Savonije et al. 1986, *A&A*, 155, 51
van Haaften et al. 2012a, *A&A*, 541, A22
van Haaften et al. 2012b, *A&A*, 537, A104
van Haaften et al. 2012c, *A&A*, 543, A121
Yungelson et al. 2006, *A&A*, 454, 559
|
---
abstract: 'Some new counterparts of Bessel’s inequality for orthonormal families in real or complex inner product spaces are pointed out. Applications for some Grüss type inequalities are also emphasized.'
address: |
School of Computer Science and Mathematics\
Victoria University of Technology\
PO Box 14428, MCMC 8001\
Victoria, Australia.
author:
- 'S.S. Dragomir'
date: 'May 19, 2003'
title: Some New Results Related to Bessel and Grüss Inequalities for Orthogonal Families in Inner Product Spaces
---
Introduction\[s1\]
==================
In [@SSDa], the author has proved the following result, which provides both a Grüss type inequality for orthogonal families of vectors in real or complex inner product spaces and, for $x=y,$ a counterpart of Bessel’s inequality.
\[t1.1\]Let $\left\{ e_{i}\right\} _{i\in I}$ be a family of orthonormal vectors in $H,$ i.e., $\left\langle e_{i},e_{j}\right\rangle =0$ if $i\neq j$ and $\left\Vert e_{i}\right\Vert =1,$ $i,j\in I,$ $F$ a finite part of $I,$ $\phi _{i},\gamma _{i},\Phi _{i},\Gamma _{i}\in \mathbb{R}$ $%
\left( i\in F\right) $, and $x,y\in H.$ If either$$\begin{aligned}
\func{Re}\left\langle \sum_{i=1}^{n}\Phi _{i}e_{i}-x,x-\sum_{i=1}^{n}\phi
_{i}e_{i}\right\rangle & \geq 0,\ \label{1.1} \\
\func{Re}\left\langle \sum_{i=1}^{n}\Gamma
_{i}e_{i}-y,y-\sum_{i=1}^{n}\gamma _{i}e_{i}\right\rangle & \geq 0, \notag\end{aligned}$$or, equivalently,$$\begin{aligned}
\left\Vert x-\sum_{i\in F}\frac{\Phi _{i}+\phi _{i}}{2}e_{i}\right\Vert &
\leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}, \label{1.2} \\
\left\Vert y-\sum_{i\in F}\frac{\Gamma _{i}+\gamma _{i}}{2}e_{i}\right\Vert
& \leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \Gamma _{i}-\gamma
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}, \notag\end{aligned}$$hold, then we have the inequality$$\begin{aligned}
0& \leq \left\vert \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right\vert
\label{1.3} \\
& \leq \frac{1}{4}\left( \sum_{i\in F}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\cdot \left( \sum_{i\in
F}\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}
\notag \\
& \ \ \ \ \ \ \ \ \ \ \ -\left[ \func{Re}\left\langle \sum_{i\in F}\Phi
_{i}e_{i}-x,x-\sum_{i\in F}\phi _{i}e_{i}\right\rangle \right] ^{\frac{1}{2}}
\notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \times \left[ \func{Re}\left\langle \sum_{i\in
F}\Gamma _{i}e_{i}-y,y-\sum_{i\in F}\gamma _{i}e_{i}\right\rangle \right] ^{%
\frac{1}{2}} \notag \\
& \leq \frac{1}{4}\left( \sum_{i\in F}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\cdot \left( \sum_{i\in
F}\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}.
\notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible in the sense that it cannot be replaced by a smaller constant.
In the follow-up paper [@SSDb], and by the use of a different technique, the author has pointed out the following result as well:
\[t2\]Let $\left\{ e_{i}\right\} _{i\in I}$, $F,$ $\phi _{i},\gamma
_{i},\Phi _{i},\Gamma _{i}$ and $x,y$ be as in Theorem \[t1.1\]. If either (\[1.1\]) or (\[1.2\]) holds, then we have the inequality$$\begin{aligned}
0& \leq \left\vert \left\langle x,y\right\rangle -\sum_{i=1}^{n}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right\vert
\label{1.4} \\
& \leq \frac{1}{4}\left( \sum_{i=1}^{n}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\cdot \left(
\sum_{i=1}^{n}\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{%
\frac{1}{2}} \notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\sum_{i\in F}\left\vert \frac{\Phi
_{i}+\phi _{i}}{2}-\left\langle x,e_{i}\right\rangle \right\vert \left\vert
\frac{\Gamma _{i}+\gamma _{i}}{2}-\left\langle y,e_{i}\right\rangle
\right\vert \notag \\
& \leq \frac{1}{4}\left( \sum_{i=1}^{n}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\cdot \left(
\sum_{i=1}^{n}\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{%
\frac{1}{2}}. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible in the sense that it cannot be replaced by a smaller constant.
It has also been shown that the bounds provided by the second inequality in (\[1.3\]) and the second inequality in (\[1.4\]) cannot be compared in general.
A New Counterpart of Bessel’s Inequality\[s2\]
==============================================
The following counterpart of Bessel’s inequality holds.
\[t2.1\]Let $\left\{ e_{i}\right\} _{i\in I}$ be a family of orthonormal vectors in $H,$ $F$ a finite part of $I,$ and $\phi _{i},\Phi
_{i}$ $\left( i\in F\right) ,$ real or complex numbers such that $\sum_{i\in
F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) >0.$ If $x\in H$ is such that either
1. $\func{Re}\left\langle \sum_{i\in F}\Phi _{i}e_{i}-x,x-\sum_{i\in
F}\phi _{i}e_{i}\right\rangle \geq 0;$or, equivalently,
2. $\left\Vert x-\sum_{i\in F}\frac{\phi _{i}+\Phi _{i}}{2}%
e_{i}\right\Vert \leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \Phi
_{i}-\phi _{i}\right\vert ^{2}\right) ^{\frac{1}{2}};$
holds, then one has the inequality $$\left\Vert x\right\Vert ^{2}\leq \frac{1}{4}\cdot \frac{\sum_{i\in F}\left(
\left\vert \Phi _{i}\right\vert +\left\vert \phi _{i}\right\vert \right) ^{2}%
}{\sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) }%
\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}.
\label{2.1}$$The constant $\frac{1}{4}$ is best possible in the sense that it cannot be replaced by a smaller constant.
Firstly, we observe that for $y,a,A\in H,$ the following are equivalent$$\func{Re}\left\langle A-y,y-a\right\rangle \geq 0 \label{2.2}$$and$$\left\Vert y-\frac{a+A}{2}\right\Vert \leq \frac{1}{2}\left\Vert
A-a\right\Vert . \label{2.3}$$Now, for $a=\sum_{i\in F}\phi _{i}e_{i},$ $A=\sum_{i\in F}\Phi _{i}e_{i},$ we have$$\begin{aligned}
\left\Vert A-a\right\Vert & =\left\Vert \sum_{i\in F}\left( \Phi _{i}-\phi
_{i}\right) e_{i}\right\Vert =\left[ \left\Vert \sum_{i\in F}\left( \Phi
_{i}-\phi _{i}\right) e_{i}\right\Vert ^{2}\right] ^{\frac{1}{2}} \\
& =\left( \sum_{i\in F}\left\vert \Phi _{i}-\phi _{i}\right\vert
^{2}\left\Vert e_{i}\right\Vert ^{2}\right) ^{\frac{1}{2}}=\left( \sum_{i\in
F}\left\vert \Phi _{i}-\phi _{i}\right\vert ^{2}\right) ^{\frac{1}{2}},\end{aligned}$$giving, for $y=x,$ the desired equivalence.
Now, observe that$$\begin{gathered}
\func{Re}\left\langle \sum_{i\in F}\Phi _{i}e_{i}-x,x-\sum_{i\in F}\phi
_{i}e_{i}\right\rangle \\
=\sum_{i\in F}\func{Re}\left[ \Phi _{i}\overline{\left\langle
x,e_{i}\right\rangle }+\overline{\phi _{i}}\left\langle x,e_{i}\right\rangle %
\right] -\left\Vert x\right\Vert ^{2}-\sum_{i\in F}\func{Re}\left( \Phi _{i}%
\overline{\phi _{i}}\right) ,\end{gathered}$$giving, from (i), that$$\left\Vert x\right\Vert ^{2}+\sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{%
\phi _{i}}\right) \leq \sum_{i\in F}\func{Re}\left[ \Phi _{i}\overline{%
\left\langle x,e_{i}\right\rangle }+\overline{\phi _{i}}\left\langle
x,e_{i}\right\rangle \right] . \label{2.4}$$
On the other hand, by the elementary inequality$$\alpha p^{2}+\frac{1}{\alpha }q^{2}\geq 2pq,\ \ \alpha >0,\ p,q\geq 0;$$we deduce$$2\left\Vert x\right\Vert \leq \frac{\left\Vert x\right\Vert ^{2}}{\left[
\sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) \right] ^{%
\frac{1}{2}}}+\left[ \sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi
_{i}}\right) \right] ^{\frac{1}{2}}. \label{2.5}$$Dividing (\[2.4\]) by $\left[ \sum_{i\in F}\func{Re}\left( \Phi _{i}%
\overline{\phi _{i}}\right) \right] ^{\frac{1}{2}}>0$ and using (\[2.5\]), we obtain$$\left\Vert x\right\Vert \leq \frac{1}{2}\frac{\sum_{i\in F}\func{Re}\left[
\Phi _{i}\overline{\left\langle x,e_{i}\right\rangle }+\overline{\phi _{i}}%
\left\langle x,e_{i}\right\rangle \right] }{\left[ \sum_{i\in F}\func{Re}%
\left( \Phi _{i}\overline{\phi _{i}}\right) \right] ^{\frac{1}{2}}},
\label{2.6}$$which is also an interesting inequality in itself.
Using the Cauchy-Bunyakovsky-Schwarz inequality for real numbers, we get$$\begin{aligned}
\sum_{i\in F}\func{Re}\left[ \Phi _{i}\overline{\left\langle
x,e_{i}\right\rangle }+\overline{\phi _{i}}\left\langle x,e_{i}\right\rangle %
\right] & \leq \sum_{i\in F}\left\vert \Phi _{i}\overline{\left\langle
x,e_{i}\right\rangle }+\overline{\phi _{i}}\left\langle x,e_{i}\right\rangle
\right\vert \label{2.7} \\
& \leq \sum_{i\in F}\left( \left\vert \Phi _{i}\right\vert +\left\vert \phi
_{i}\right\vert \right) \left\vert \left\langle x,e_{i}\right\rangle
\right\vert \notag \\
& \leq \left[ \sum_{i\in F}\left( \left\vert \Phi _{i}\right\vert
+\left\vert \phi _{i}\right\vert \right) ^{2}\right] ^{\frac{1}{2}}\left[
\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}%
\right] ^{\frac{1}{2}}. \notag\end{aligned}$$Making use of (\[2.6\]) and (\[2.7\]), we deduce the desired result (\[2.1\]).
To prove the sharpness of the constant $\frac{1}{4},$ let us assume that (\[2.1\]) holds with a constant $c>0,$ i.e., $$\left\Vert x\right\Vert ^{2}\leq c\cdot \frac{\sum_{i\in F}\left( \left\vert
\Phi _{i}\right\vert +\left\vert \phi _{i}\right\vert \right) ^{2}}{%
\sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) }%
\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2},
\label{2.8}$$provided $x,$ $\phi _{i},\Phi _{i},i\in F$ satisfies (i).
Choose $F=\left\{ 1\right\} ,$ $e_{1}=e,$ $\left\Vert e\right\Vert =1,$ $%
\phi _{i}=m,$ $\Phi _{i}=M$ with $m,M>0,$ then, by (\[2.8\]) we get$$\left\Vert x\right\Vert ^{2}\leq c\frac{\left( M+m\right) ^{2}}{mM}%
\left\vert \left\langle x,e\right\rangle \right\vert ^{2} \label{2.9}$$provided$$\func{Re}\left\langle Me-x,x-me\right\rangle \geq 0. \label{2.10}$$If $x=me,$ then obviously (\[2.10\]) holds, and by (\[2.9\]) we get$$m^{2}\leq c\frac{\left( M+m\right) ^{2}}{mM}m^{2}$$giving $mM\leq c\left( M+m\right) ^{2}$ for $m,M>0.$ Now, if in this inequality we choose $m=1-\varepsilon ,$ $M=1+\varepsilon $ $\left(
\varepsilon \in \left( 0,1\right) \right) ,$ then we get $1-\varepsilon
^{2}\leq 4c$ for $\varepsilon \in \left( 0,1\right) ,$ from where we deduce $%
c\geq \frac{1}{4}.$
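As a quick numerical sanity check (not part of the original argument), the following Python snippet verifies inequality (\[2.1\]) on randomly generated vectors of $\mathbb{R}^{8}$ that satisfy condition (ii) by construction; the dimension, the scalars $\phi _{i},\Phi _{i}$ and the sample size are arbitrary choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, k = 8, 3                                  # H = R^dim, F has k elements
E = np.eye(dim)[:k]                            # orthonormal family e_1,...,e_k
phi = rng.uniform(0.5, 1.0, k)                 # 0 < phi_i
Phi = phi + rng.uniform(0.5, 1.0, k)           # Phi_i > phi_i, so sum Re(Phi_i*phi_i) > 0

centre = ((Phi + phi) / 2.0) @ E               # sum_i (Phi_i + phi_i)/2 e_i
radius = 0.5 * np.linalg.norm(Phi - phi)       # right-hand side of condition (ii)

for _ in range(10000):
    u = rng.normal(size=dim)
    x = centre + rng.uniform(0.0, radius) * u / np.linalg.norm(u)   # satisfies (ii)
    fourier = E @ x                                                 # <x, e_i>, i in F
    lhs = x @ x
    rhs = 0.25 * np.sum((Phi + phi) ** 2) / np.sum(Phi * phi) * np.sum(fourier ** 2)
    assert lhs <= rhs + 1e-12
print("inequality (2.1) verified on 10000 vectors satisfying condition (ii)")
```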
\[r2.2\]By the use of (\[2.6\]), the second inequality in (\[2.7\]) and the Hölder inequality, we may state the following counterparts of Bessel’s inequality as well:$$\begin{gathered}
\left\Vert x\right\Vert ^{2}\leq \frac{1}{2}\cdot \frac{1}{\left[ \sum_{i\in
F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) \right] ^{\frac{1}{2}}%
} \label{2.11} \\
\times \left\{
\begin{array}{l}
\max\limits_{i\in F}\left\{ \left\vert \Phi _{i}\right\vert +\left\vert \phi
_{i}\right\vert \right\} \sum\limits_{i\in F}\left\vert \left\langle
x,e_{i}\right\rangle \right\vert \\
\\
\left[ \sum\limits_{i\in F}\left( \left\vert \Phi _{i}\right\vert
+\left\vert \phi _{i}\right\vert \right) ^{p}\right] ^{\frac{1}{p}}\left(
\sum\limits_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert
^{q}\right) ^{\frac{1}{q}},\text{ } \\
\hfill \text{for\ }p>1,\ \frac{1}{p}+\frac{1}{q}=1 \\
\\
\max_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert
\sum\limits_{i\in F}\left[ \left\vert \Phi _{i}\right\vert +\left\vert \phi
_{i}\right\vert \right] .%
\end{array}%
\right. \end{gathered}$$
The following corollary holds.
\[c2.3\]With the assumption of Theorem \[t2.1\] and if either (i) or (ii) holds, then$$0\leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle
x,e_{i}\right\rangle \right\vert ^{2}\leq \frac{1}{4}M^{2}\left( \mathbf{%
\Phi },\mathbf{\phi },F\right) \sum_{i\in F}\left\vert \left\langle
x,e_{i}\right\rangle \right\vert ^{2}, \label{2.12}$$where$$M\left( \mathbf{\Phi },\mathbf{\phi },F\right) :=\left[ \frac{\sum_{i\in
F}\left\{ \left( \left\vert \Phi _{i}\right\vert -\left\vert \phi
_{i}\right\vert \right) ^{2}+4\left[ \left\vert \Phi _{i}\overline{\phi _{i}}%
\right\vert -\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) \right]
\right\} }{\sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right)
}\right] ^{\frac{1}{2}}. \label{2.12.1}$$The constant $\frac{1}{4}$ is best possible.
The inequality (\[2.12\]) follows by (\[2.1\]) on subtracting the same quantity $\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle
\right\vert ^{2}$ from both sides.
To prove the sharpness of the constant $\frac{1}{4},$ assume that (\[2.12\]) holds with $c>0,$ i.e., $$0\leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle
x,e_{i}\right\rangle \right\vert ^{2}\leq cM^{2}\left( \mathbf{\Phi },%
\mathbf{\phi },F\right) \sum_{i\in F}\left\vert \left\langle
x,e_{i}\right\rangle \right\vert ^{2} \label{2.13}$$provided the condition (i) holds.
Choose $F=\left\{ 1\right\} ,$ $e_{1}=e,$ $\left\Vert e\right\Vert =1,$ $%
\phi _{i}=\phi ,$ $\Phi _{i}=\Phi ,$ $\phi ,\Phi >0$ in (\[2.13\]) to get$$0\leq \left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle
\right\vert ^{2}\leq c\frac{\left( \Phi -\phi \right) ^{2}}{\phi \Phi }%
\left\vert \left\langle x,e\right\rangle \right\vert ^{2}, \label{2.14}$$provided$$\left\langle \Phi e-x,x-\phi e\right\rangle \geq 0. \label{2.14.1}$$If $H=\mathbb{R}^{2},$ $x=\left( x_{1},x_{2}\right) \in \mathbb{R}^{2},$ $%
e=\left( \frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right) $ then we have$$\begin{aligned}
\left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle
\right\vert ^{2}& =x_{1}^{2}+x_{2}^{2}-\frac{\left( x_{1}+x_{2}\right) ^{2}}{%
2}=\frac{1}{2}\left( x_{1}-x_{2}\right) ^{2}, \\
\left\vert \left\langle x,e\right\rangle \right\vert ^{2}& =\frac{\left(
x_{1}+x_{2}\right) ^{2}}{2}\end{aligned}$$and by (\[2.14\]) we get$$\frac{\left( x_{1}-x_{2}\right) ^{2}}{2}\leq c\frac{\left( \Phi -\phi
\right) ^{2}}{\phi \Phi }\cdot \frac{\left( x_{1}+x_{2}\right) ^{2}}{2}.
\label{2.15}$$Now, if we let $x_{1}=\frac{\phi }{\sqrt{2}},$ $x_{2}=\frac{\Phi }{\sqrt{2}}$ $\left( \phi ,\Phi >0\right) $ then obviously$$\left\langle \Phi e-x,x-\phi e\right\rangle =\sum_{i=1}^{2}\left( \frac{\Phi
}{\sqrt{2}}-x_{i}\right) \left( x_{i}-\frac{\phi }{\sqrt{2}}\right) =0,$$which shows that (\[2.14.1\]) is fulfilled, and thus by (\[2.15\]) we obtain$$\frac{\left( \Phi -\phi \right) ^{2}}{4}\leq c\frac{\left( \Phi -\phi
\right) ^{2}}{\phi \Phi }\cdot \frac{\left( \Phi +\phi \right) ^{2}}{4}$$for any $\Phi >\phi >0.$ This implies$$c\left( \Phi +\phi \right) ^{2}\geq \phi \Phi \label{2.16}$$for any $\Phi >\phi >0.$
Finally, let $\phi =1-\varepsilon ,$ $\Phi =1+\varepsilon $, $\varepsilon
\in \left( 0,1\right) $. Then from (\[2.16\]) we get $4c\geq 1-\varepsilon
^{2}$ for any $\varepsilon \in \left( 0,1\right) $ which produces $c\geq
\frac{1}{4}.$
\[r2.4\]If $\left\{ e_{i}\right\} _{i\in I}$ is an orthonormal family in the real inner product space $\left( H;\left\langle \cdot ,\cdot \right\rangle
\right) $ and $M_{i},m_{i}\in \mathbb{R}$, $i\in F$ ($F$ is a finite part of $I$) and $x\in H$ are such that $M_{i},m_{i}\geq 0$ for $i\in F$ with $%
\sum_{i\in F}M_{i}m_{i}\geq 0$ and$$\left\langle \sum_{i\in F}M_{i}e_{i}-x,x-\sum_{i\in
F}m_{i}e_{i}\right\rangle \geq 0,$$then we have the inequality$$0\leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left[ \left\langle
x,e_{i}\right\rangle \right] ^{2}\leq \frac{1}{4}\cdot \frac{\sum_{i\in
F}\left( M_{i}-m_{i}\right) ^{2}}{\sum_{i\in F}M_{i}m_{i}}\cdot \sum_{i\in F}%
\left[ \left\langle x,e_{i}\right\rangle \right] ^{2}. \label{2.17}$$The constant $\frac{1}{4}$ is best possible.
The following counterpart of Schwarz’s inequality in inner product spaces holds.
\[c2.5\]Let $x,y\in H$ and $\delta ,\Delta \in \mathbb{K}$ $\left(
\mathbb{K}=\mathbb{C},\mathbb{R}\right) $ with the property that $\func{Re}%
\left( \Delta \overline{\delta }\right) >0.$ If either$$\func{Re}\left\langle \Delta y-x,x-\delta y\right\rangle \geq 0 \label{2.18}$$or, equivalently,$$\left\Vert x-\frac{\delta +\Delta }{2}\cdot y\right\Vert \leq \frac{1}{2}%
\left\vert \Delta -\delta \right\vert \left\Vert y\right\Vert \label{2.19}$$holds, then we have the inequalities$$\begin{aligned}
\left\Vert x\right\Vert \left\Vert y\right\Vert & \leq \frac{1}{2}\cdot
\frac{\func{Re}\left[ \Delta \overline{\left\langle x,y\right\rangle }+%
\overline{\delta }\left\langle x,y\right\rangle \right] }{\sqrt{\Delta
\overline{\delta }}} \label{2.20} \\
& \leq \frac{1}{2}\cdot \frac{\left\vert \Delta \right\vert +\left\vert
\delta \right\vert }{\sqrt{\Delta \overline{\delta }}}\left\vert
\left\langle x,y\right\rangle \right\vert , \notag\end{aligned}$$$$\begin{aligned}
0& \leq \left\Vert x\right\Vert \left\Vert y\right\Vert -\left\vert
\left\langle x,y\right\rangle \right\vert \label{2.21} \\
& \leq \frac{1}{2}\cdot \frac{\left( \sqrt{\left\vert \Delta \right\vert }-%
\sqrt{\left\vert \delta \right\vert }\right) ^{2}+2\left( \sqrt{\Delta
\overline{\delta }}-\sqrt{\func{Re}\left( \Delta \overline{\delta }\right) }%
\right) }{\sqrt{\Delta \overline{\delta }}}\left\vert \left\langle
x,y\right\rangle \right\vert , \notag\end{aligned}$$$$\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}\leq \frac{1}{4}%
\cdot \frac{\left( \left\vert \Delta \right\vert +\left\vert \delta
\right\vert \right) ^{2}}{\func{Re}\left( \Delta \overline{\delta }\right) }%
\left\vert \left\langle x,y\right\rangle \right\vert ^{2}, \label{2.22}$$and $$0\leq \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}-\left\vert
\left\langle x,y\right\rangle \right\vert ^{2}\leq \frac{1}{4}\cdot \frac{%
\left( \left\vert \Delta \right\vert +\left\vert \delta \right\vert \right)
^{2}+4\left( \left\vert \Delta \overline{\delta }\right\vert -\func{Re}%
\left( \Delta \overline{\delta }\right) \right) }{\func{Re}\left( \Delta
\overline{\delta }\right) }\left\vert \left\langle x,y\right\rangle
\right\vert ^{2}. \label{2.23}$$The constants $\frac{1}{2}$ and $\frac{1}{4}$ are best possible.
The inequality (\[2.20\]) follows from (\[2.6\]) on choosing $F=\left\{
1\right\} ,$ $e_{1}=e=\frac{y}{\left\Vert y\right\Vert },$ $\Phi _{1}=\Phi
=\Delta \left\Vert y\right\Vert ,$ $\phi _{1}=\phi =\delta \left\Vert
y\right\Vert $ $\left( y\neq 0\right) .$ The inequality (\[2.21\]) is equivalent with (\[2.20\]). The inequality (\[2.22\]) follows from (\[2.1\]) for $F=\left\{ 1\right\} $ and the same choices as above. Finally, (\[2.23\]) is obviously equivalent with (\[2.22\]).
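The particular case (\[2.22\]) is easy to test numerically as well; the snippet below (an illustration only, with arbitrarily chosen real $\delta ,\Delta $) builds pairs $x,y$ satisfying (\[2.19\]) and checks the bound.

```python
import numpy as np

rng = np.random.default_rng(1)
delta, Delta = 0.7, 2.3              # 0 < delta < Delta, so Re(Delta * conj(delta)) > 0

for _ in range(10000):
    y = rng.normal(size=5)
    # build x satisfying (2.19): ||x - (delta+Delta)/2 * y|| <= |Delta - delta|/2 * ||y||
    u = rng.normal(size=5)
    r = rng.uniform(0.0, 0.5 * (Delta - delta) * np.linalg.norm(y))
    x = 0.5 * (delta + Delta) * y + r * u / np.linalg.norm(u)
    lhs = (x @ x) * (y @ y)
    rhs = 0.25 * (Delta + delta) ** 2 / (Delta * delta) * (x @ y) ** 2
    assert lhs <= rhs * (1 + 1e-12) + 1e-12
print("inequality (2.22) holds for 10000 random pairs satisfying (2.19)")
```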
Some Grüss Type Inequalities\[s3\]
==================================
The following result holds.
\[t3.1\]Let $\left\{ e_{i}\right\} _{i\in I}$ be a family of orthonormal vectors in $H,$ $F$ a finite part of $I$, $\phi _{i},\Phi _{i},$ $\gamma _{i},\Gamma _{i}\in \mathbb{K},\ i\in F$ and $x,y\in H.$ If either$$\begin{aligned}
\func{Re}\left\langle \sum_{i\in F}\Phi _{i}e_{i}-x,x-\sum_{i\in F}\phi
_{i}e_{i}\right\rangle & \geq 0, \label{3.1} \\
\func{Re}\left\langle \sum_{i\in F}\Gamma _{i}e_{i}-y,y-\sum_{i\in F}\gamma
_{i}e_{i}\right\rangle & \geq 0, \notag\end{aligned}$$or, equivalently,$$\begin{aligned}
\left\Vert x-\sum_{i\in F}\frac{\Phi _{i}+\phi _{i}}{2}e_{i}\right\Vert &
\leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}, \label{3.2} \\
\left\Vert y-\sum_{i\in F}\frac{\Gamma _{i}+\gamma _{i}}{2}e_{i}\right\Vert
& \leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \Gamma _{i}-\gamma
_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}, \notag\end{aligned}$$hold, then we have the inequality$$\begin{aligned}
0& \leq \left\vert \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right\vert
\label{3.3} \\
& \leq \frac{1}{4}M\left( \mathbf{\Phi },\mathbf{\phi },F\right) M\left(
\mathbf{\Gamma },\mathbf{\gamma },F\right) \left( \sum_{i\in F}\left\vert
\left\langle x,e_{i}\right\rangle \right\vert ^{2}\right) ^{\frac{1}{2}%
}\left( \sum_{i\in F}\left\vert \left\langle y,e_{i}\right\rangle
\right\vert ^{2}\right) ^{\frac{1}{2}}, \notag\end{aligned}$$where $M\left( \mathbf{\Phi },\mathbf{\phi },F\right) $ is defined in ([2.12.1]{}).
The constant $\frac{1}{4}$ is best possible.
Using Schwarz’s inequality in the inner product space $\left(
H,\left\langle \cdot ,\cdot \right\rangle \right) $ one has$$\begin{gathered}
\left\vert \left\langle x-\sum_{i\in F}\left\langle x,e_{i}\right\rangle
e_{i},y-\sum_{i\in F}\left\langle y,e_{i}\right\rangle e_{i}\right\rangle
\right\vert ^{2} \label{3.4} \\
\leq \left\Vert x-\sum_{i\in F}\left\langle x,e_{i}\right\rangle
e_{i}\right\Vert ^{2}\left\Vert y-\sum_{i\in F}\left\langle
y,e_{i}\right\rangle e_{i}\right\Vert ^{2}\end{gathered}$$and since a simple calculation shows that $$\left\langle x-\sum_{i\in F}\left\langle x,e_{i}\right\rangle
e_{i},y-\sum_{i\in F}\left\langle y,e_{i}\right\rangle e_{i}\right\rangle
=\left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle$$and $$\left\Vert x-\sum_{i\in F}\left\langle x,e_{i}\right\rangle e_{i}\right\Vert
^{2}\leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle
x,e_{i}\right\rangle \right\vert ^{2}$$for any $x,y\in H,$ then by (\[3.4\]) and by the counterpart of Bessel’s inequality in Corollary \[c2.3\], we have$$\begin{aligned}
& \left\vert \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right\vert ^{2}
\label{3.5} \\
& \leq \left( \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert
\left\langle x,e_{i}\right\rangle \right\vert ^{2}\right) \left( \left\Vert
y\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle y,e_{i}\right\rangle
\right\vert ^{2}\right) \notag \\
& \leq \frac{1}{4}M^{2}\left( \mathbf{\Phi },\mathbf{\phi },F\right)
\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert
^{2}\cdot \frac{1}{4}M^{2}\left( \mathbf{\Gamma },\mathbf{\gamma },F\right)
\sum_{i\in F}\left\vert \left\langle y,e_{i}\right\rangle \right\vert ^{2}.
\notag\end{aligned}$$Taking the square root in (\[3.5\]), we deduce (\[3.3\]).
The fact that $\frac{1}{4}$ is the best possible constant follows by Corollary \[c2.3\] and we omit the details.
The following corollary for real inner product spaces holds.
\[c3.2\]Let $\left\{ e_{i}\right\} _{i\in I}$ be a family of orthonormal vectors in $H,$ $F$ a finite part of $I$, $M_{i},m_{i},$ $%
N_{i},n_{i}\geq 0,\ i\in F$ and $x,y\in H$ such that $\sum_{i\in
F}M_{i}m_{i}>0,$ $\sum_{i\in F}N_{i}n_{i}>0$ and$$\left\langle \sum_{i\in F}M_{i}e_{i}-x,x-\sum_{i\in
F}m_{i}e_{i}\right\rangle \geq 0,\ \ \ \ \left\langle \sum_{i\in
F}N_{i}e_{i}-y,y-\sum_{i\in F}n_{i}e_{i}\right\rangle \geq 0. \label{3.6}$$Then we have the inequality$$\begin{aligned}
0& \leq \left\vert \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle y,e_{i}\right\rangle \right\vert ^{2}
\label{3.7} \\
& \leq \frac{1}{16}\cdot \frac{\sum_{i\in F}\left( M_{i}-m_{i}\right)
^{2}\sum_{i\in F}\left( N_{i}-n_{i}\right) ^{2}\sum_{i\in F}\left\vert
\left\langle x,e_{i}\right\rangle \right\vert ^{2}\sum_{i\in F}\left\vert
\left\langle y,e_{i}\right\rangle \right\vert ^{2}}{\sum_{i\in
F}M_{i}m_{i}\sum_{i\in F}N_{i}n_{i}}. \notag\end{aligned}$$The constant $\frac{1}{16}$ is best possible.
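For illustration, the real-case bound (\[3.7\]) can be checked numerically in the same spirit as before; the choices of dimension and of the scalars $m_{i},M_{i},n_{i},N_{i}$ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, k = 10, 4
E = np.eye(dim)[:k]                                # orthonormal family e_1,...,e_k
m_ = rng.uniform(0.5, 1.0, k); M_ = m_ + rng.uniform(0.5, 1.0, k)
n_ = rng.uniform(0.5, 1.0, k); N_ = n_ + rng.uniform(0.5, 1.0, k)

def random_point(lo, hi):
    """A vector satisfying the ball form (3.2) of condition (3.6)."""
    centre = ((hi + lo) / 2.0) @ E
    radius = 0.5 * np.linalg.norm(hi - lo)
    u = rng.normal(size=dim)
    return centre + rng.uniform(0.0, radius) * u / np.linalg.norm(u)

for _ in range(10000):
    x, y = random_point(m_, M_), random_point(n_, N_)
    fx, fy = E @ x, E @ y                          # Fourier coefficients <x,e_i>, <y,e_i>
    lhs = (x @ y - fx @ fy) ** 2
    rhs = (np.sum((M_ - m_) ** 2) * np.sum((N_ - n_) ** 2)
           * np.sum(fx ** 2) * np.sum(fy ** 2)) / (16.0 * np.sum(M_ * m_) * np.sum(N_ * n_))
    assert lhs <= rhs * (1 + 1e-12) + 1e-12
print("Gruss-type bound (3.7) holds for 10000 random pairs satisfying (3.6)")
```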
In the case where the family $\left\{ e_{i}\right\} _{i\in I}$ reduces to a single vector, we may deduce from Theorem \[t3.1\] the following particular case first obtained in [@SSDc].
\[c3.3\]Let $e\in H,$ $\left\Vert e\right\Vert =1,$ $\phi ,\Phi ,\gamma
,\Gamma \in \mathbb{K}$ with $\func{Re}\left( \Phi \overline{\phi }\right)
,$ $\func{Re}\left( \Gamma \overline{\gamma }\right) >0$ and $x,y\in H$ such that either $$\func{Re}\left\langle \Phi e-x,x-\phi e\right\rangle \geq 0,\ \ \func{Re}%
\left\langle \Gamma e-y,y-\gamma e\right\rangle \geq 0, \label{3.8}$$or, equivalently,$$\left\Vert x-\frac{\phi +\Phi }{2}e\right\Vert \leq \frac{1}{2}\left\vert
\Phi -\phi \right\vert ,\ \ \ \ \ \left\Vert y-\frac{\gamma +\Gamma }{2}%
e\right\Vert \leq \frac{1}{2}\left\vert \Gamma -\gamma \right\vert
\label{3.9}$$holds, then$$0\leq \left\vert \left\langle x,y\right\rangle -\left\langle
x,e\right\rangle \left\langle e,y\right\rangle \right\vert \leq \frac{1}{4}%
M\left( \Phi ,\phi \right) M\left( \Gamma ,\gamma \right) \left\vert
\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert ,
\label{3.10}$$where$$M\left( \Phi ,\phi \right) :=\left[ \frac{\left( \left\vert \Phi \right\vert
-\left\vert \phi \right\vert \right) ^{2}+4\left[ \left\vert \phi \Phi
\right\vert -\func{Re}\left( \Phi \overline{\phi }\right) \right] }{\func{Re}%
\left( \Phi \overline{\phi }\right) }\right] ^{\frac{1}{2}}.$$The constant $\frac{1}{4}$ is best possible.
\[r3.4\]If $H$ is real, $e\in H,$ $\left\Vert e\right\Vert =1$ and $%
a,b,A,B\in \mathbb{R}$ are such that $A>a>0,$ $B>b>0$ and$$\left\Vert x-\frac{a+A}{2}e\right\Vert \leq \frac{1}{2}\left( A-a\right) ,\
\ \left\Vert y-\frac{b+B}{2}e\right\Vert \leq \frac{1}{2}\left( B-b\right) ,
\label{3.11}$$then$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle
\left\langle e,y\right\rangle \right\vert \leq \frac{1}{4}\cdot \frac{\left(
A-a\right) \left( B-b\right) }{\sqrt{abAB}}\left\vert \left\langle
x,e\right\rangle \left\langle e,y\right\rangle \right\vert . \label{3.12}$$The constant $\frac{1}{4}$ is best possible.
If $\left\langle x,e\right\rangle ,$ $\left\langle y,e\right\rangle \neq 0,$ then the following equivalent form of (\[3.12\]) also holds$$\left\vert \frac{\left\langle x,y\right\rangle }{\left\langle
x,e\right\rangle \left\langle e,y\right\rangle }-1\right\vert \leq \frac{1}{4%
}\cdot \frac{\left( A-a\right) \left( B-b\right) }{\sqrt{abAB}}.
\label{3.13}$$
Some Companion Inequalities\[s4\]
=================================
The following companion of the Grüss inequality also holds.
\[t4.1\]Let $\left\{ e_{i}\right\} _{i\in I}$ be a family of orthonormal vectors in $H,$ $F$ a finite part of $I$, $\phi _{i},\Phi
_{i}\in \mathbb{K},\ \left( i\in F\right) $, $x,y\in H$ and $\lambda \in
\left( 0,1\right) ,$ such that either$$\func{Re}\left\langle \sum_{i\in F}\Phi _{i}e_{i}-\left( \lambda x+\left(
1-\lambda \right) y\right) ,\lambda x+\left( 1-\lambda \right) y-\sum_{i\in
F}\phi _{i}e_{i}\right\rangle \geq 0 \label{4.1}$$or, equivalently,$$\left\Vert \lambda x+\left( 1-\lambda \right) y-\sum_{i\in F}\frac{\Phi
_{i}+\phi _{i}}{2}\cdot e_{i}\right\Vert \leq \frac{1}{2}\left( \sum_{i\in
F}\left\vert \Phi _{i}-\phi _{i}\right\vert ^{2}\right) ^{\frac{1}{2}},
\label{4.2}$$holds. Then we have the inequality$$\begin{gathered}
\func{Re}\left[ \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right] \label{4.3}
\\
\leq \frac{1}{16}\cdot \frac{1}{\lambda \left( 1-\lambda \right) }\sum_{i\in
F}M^{2}\left( \mathbf{\Phi },\mathbf{\phi },F\right) \sum_{i\in F}\left\vert
\left\langle \lambda x+\left( 1-\lambda \right) y,e_{i}\right\rangle
\right\vert ^{2}.\end{gathered}$$The constant $\frac{1}{16}$ is the best possible constant in (\[4.3\]) in the sense that it cannot be replaced by a smaller constant.
Using the known inequality$$\func{Re}\left\langle z,u\right\rangle \leq \frac{1}{4}\left\Vert
z+u\right\Vert ^{2}$$we may state that for any $a,b\in H$ and $\lambda \in \left( 0,1\right) $ $$\func{Re}\left\langle a,b\right\rangle \leq \frac{1}{4\lambda \left(
1-\lambda \right) }\left\Vert \lambda a+\left( 1-\lambda \right)
b\right\Vert ^{2}. \label{4.4}$$Since$$\left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle =\left\langle
x-\sum_{i\in F}\left\langle x,e_{i}\right\rangle e_{i},y-\sum_{i\in
F}\left\langle y,e_{i}\right\rangle e_{i}\right\rangle ,$$for any $x,y\in H,$ then, by (\[4.4\]), we get$$\begin{aligned}
& \func{Re}\left[ \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right] \label{4.5}
\\
& =\func{Re}\left[ \left\langle x-\sum_{i\in F}\left\langle
x,e_{i}\right\rangle e_{i},y-\sum_{i\in F}\left\langle y,e_{i}\right\rangle
e_{i}\right\rangle \right] \notag \\
& \leq \frac{1}{4\lambda \left( 1-\lambda \right) }\left\Vert \lambda \left(
x-\sum_{i\in F}\left\langle x,e_{i}\right\rangle e_{i}\right) +\left(
1-\lambda \right) \left( y-\sum_{i\in F}\left\langle y,e_{i}\right\rangle
e_{i}\right) \right\Vert ^{2} \notag \\
& =\frac{1}{4\lambda \left( 1-\lambda \right) }\left\Vert \lambda x+\left(
1-\lambda \right) y-\sum_{i\in F}\left\langle \lambda x+\left( 1-\lambda
\right) y,e_{i}\right\rangle e_{i}\right\Vert ^{2} \notag \\
& =\frac{1}{4\lambda \left( 1-\lambda \right) }\left[ \left\Vert \lambda
x+\left( 1-\lambda \right) y\right\Vert ^{2}-\sum_{i\in F}\left\vert
\left\langle \lambda x+\left( 1-\lambda \right) y,e_{i}\right\rangle
\right\vert ^{2}\right] . \notag\end{aligned}$$If we apply the counterpart of Bessel’s inequality from Corollary \[c2.3\] for $\lambda x+\left( 1-\lambda \right) y,$ we may state that$$\begin{gathered}
\left\Vert \lambda x+\left( 1-\lambda \right) y\right\Vert ^{2}-\sum_{i\in
F}\left\vert \left\langle \lambda x+\left( 1-\lambda \right)
y,e_{i}\right\rangle \right\vert ^{2} \label{4.6} \\
\leq \frac{1}{4}M^{2}\left( \mathbf{\Phi },\mathbf{\phi },F\right)
\sum_{i\in F}\left\vert \left\langle \lambda x+\left( 1-\lambda \right)
y,e_{i}\right\rangle \right\vert ^{2}.\end{gathered}$$Now, by making use of (\[4.5\]) and (\[4.6\]), we deduce (\[4.3\]).
The fact that $\frac{1}{16}$ is the best possible constant in (\[4.3\]) follows from the observation that, if we choose $x=y$ in (\[4.1\]), then it becomes condition (i) of Theorem \[t2.1\], and for $\lambda =\frac{1}{2}$ the inequality (\[4.3\]) reduces to (\[2.12\]), for which we have shown that $\frac{1}{4}$ is the best constant.
\[r4.2\]If in Theorem \[t4.1\], we choose $\lambda =\frac{1}{2},$ then we get$$\func{Re}\left[ \left\langle x,y\right\rangle -\sum_{i\in F}\left\langle
x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right] \leq \frac{1}{%
4}M^{2}\left( \mathbf{\Phi },\mathbf{\phi },F\right) \sum_{i\in F}\left\vert
\left\langle \frac{x+y}{2},e_{i}\right\rangle \right\vert ^{2}, \label{4.7}$$provided$$\func{Re}\left\langle \sum_{i\in F}\Phi _{i}e_{i}-\frac{x+y}{2},\frac{x+y}{2}%
-\sum_{i\in F}\phi _{i}e_{i}\right\rangle \geq 0$$or, equivalently,$$\left\Vert \frac{x+y}{2}-\sum_{i\in F}\frac{\Phi _{i}+\phi _{i}}{2}\cdot
e_{i}\right\Vert \leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \Phi
_{i}-\phi _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}. \label{4.8}$$
Integral Inequalities\[s5\]
===========================
Let $\left( \Omega ,\Sigma ,\mu \right) $ be a measure space consisting of a set $\Omega ,$ a $\sigma -$algebra of parts $\Sigma $ and a countably additive and positive measure $\mu $ on $\Sigma $ with values in $\mathbb{R}%
\cup \left\{ \infty \right\} .$ Let $\rho \geq 0$ be a $\mu -$measurable function on $\Omega .$ Denote by $L_{\rho }^{2}\left( \Omega ,\mathbb{K}%
\right) $ the Hilbert space of all real or complex valued functions defined on $\Omega $ and $2-\rho -$integrable on $\Omega ,$ i.e.,$$\int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert
^{2}d\mu \left( s\right) <\infty . \label{5.1}$$
Consider the family $\left\{ f_{i}\right\} _{i\in I}$ of functions in $%
L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) $ with the properties that$$\int_{\Omega }\rho \left( s\right) f_{i}\left( s\right) \overline{f_{j}}%
\left( s\right) d\mu \left( s\right) =\delta _{ij},\ \ \ i,j\in I,
\label{5.2}$$where $\delta _{ij}$ is $0$ if $i\neq j$ and $\delta _{ij}=1$ if $i=j.$ $%
\left\{ f_{i}\right\} _{i\in I}$ is an orthonormal family in $L_{\rho
}^{2}\left( \Omega ,\mathbb{K}\right) .$
The following proposition holds.
\[p5.1\]Let $\left\{ f_{i}\right\} _{i\in I}$ be an orthonormal family of functions in $L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) ,$ $F$ a finite subset of $I,$ $\phi _{i},\Phi _{i}\in \mathbb{K}$ $\left( i\in
F\right) $ such that $\sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi
_{i}}\right) >0$ and $f\in L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) ,$ so that either$$\int_{\Omega }\rho \left( s\right) \func{Re}\left[ \left( \sum_{i\in F}\Phi
_{i}f_{i}\left( s\right) -f\left( s\right) \right) \left( \overline{f}\left(
s\right) -\sum_{i\in F}\overline{\phi _{i}}\text{ }\overline{f_{i}}\left(
s\right) \right) \right] d\mu \left( s\right) \geq 0 \label{5.3}$$or, equivalently,$$\int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) -\sum_{i\in F}%
\frac{\Phi _{i}+\phi _{i}}{2}f_{i}\left( s\right) \right\vert ^{2}d\mu
\left( s\right) \leq \frac{1}{4}\sum_{i\in F}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}. \label{5.4}$$Then we have the inequality$$\left( \int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right)
\right\vert ^{2}d\mu \left( s\right) \right) ^{\frac{1}{2}}\leq \frac{1}{2}%
\cdot \frac{1}{\left[ \sum_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi
_{i}}\right) \right] ^{\frac{1}{2}}} \label{5.5}$$$$\times \left\{
\begin{array}{l}
\max\limits_{i\in F}\left\{ \left\vert \Phi _{i}\right\vert +\left\vert \phi
_{i}\right\vert \right\} \dsum\limits_{i\in F}\left\vert \dint_{\Omega }\rho
\left( s\right) f\left( s\right) \overline{f_{i}}\left( s\right) d\mu \left(
s\right) \right\vert \\
\\
\left[ \dsum\limits_{i\in F}\left( \left\vert \Phi _{i}\right\vert
+\left\vert \phi _{i}\right\vert \right) ^{p}\right] ^{\frac{1}{p}}\left(
\dsum\limits_{i\in F}\left\vert \dint_{\Omega }\rho \left( s\right) f\left(
s\right) \overline{f_{i}}\left( s\right) d\mu \left( s\right) \right\vert
^{q}\right) ^{\frac{1}{q}},\text{ } \\
\hfill \text{\ for \ }p>1,\ \frac{1}{p}+\frac{1}{q}=1 \\
\\
\max\limits_{i\in F}\left\vert \dint_{\Omega }\rho \left( s\right) f\left(
s\right) \overline{f_{i}}\left( s\right) d\mu \left( s\right) \right\vert
\dsum\limits_{i\in F}\left[ \left\vert \Phi _{i}\right\vert +\left\vert \phi
_{i}\right\vert \right] .%
\end{array}%
\right.$$In particular, we have$$\begin{gathered}
\int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert
^{2}d\mu \left( s\right) \label{5.6} \\
\leq \frac{1}{4}\cdot \frac{\sum_{i\in F}\left( \left\vert \Phi
_{i}\right\vert +\left\vert \phi _{i}\right\vert \right) ^{2}}{\sum_{i\in F}%
\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) }\sum\limits_{i\in
F}\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{%
f_{i}}\left( s\right) d\mu \left( s\right) \right\vert ^{2}.\end{gathered}$$The constant $\frac{1}{4}$ is best possible in both inequalities.
The proof is obvious by Theorem \[t2.1\] and Remark \[r2.2\]. We omit the details.
The following proposition also holds.
\[p5.2\]Assume that $f_{i},f,\phi _{i},\Phi _{i}$ and $F$ satisfy the assumptions of Proposition \[p5.1\]. Then we have the following counterpart of Bessel’s inequality:$$\begin{aligned}
0& \leq \int_{\Omega }\rho \left( s\right) f^{2}\left( s\right) d\mu \left(
s\right) -\sum\limits_{i\in F}\left\vert \int_{\Omega }\rho \left( s\right)
f\left( s\right) \overline{f_{i}}\left( s\right) d\mu \left( s\right)
\right\vert ^{2} \label{5.7} \\
& \leq \frac{1}{4}M^{2}\left( \mathbf{\Phi },\mathbf{\phi },F\right) \cdot
\sum\limits_{i\in F}\left\vert \int_{\Omega }\rho \left( s\right) f\left(
s\right) \overline{f_{i}}\left( s\right) d\mu \left( s\right) \right\vert
^{2}, \notag\end{aligned}$$where, as above,$$M\left( \mathbf{\Phi },\mathbf{\phi },F\right) :=\left[ \frac{%
\sum\limits_{i\in F}\left\{ \left( \left\vert \Phi _{i}\right\vert
-\left\vert \phi _{i}\right\vert \right) ^{2}+4\left[ \left\vert \phi
_{i}\Phi _{i}\right\vert -\func{Re}\left( \Phi _{i}\overline{\phi _{i}}%
\right) \right] \right\} }{\sum\limits_{i\in F}\func{Re}\left( \Phi _{i}\overline{\phi _{i}}%
\right) }\right] ^{\frac{1}{2}}. \label{5.8}$$The constant $\frac{1}{4}$ is the best possible.
The following Grüss type inequality also holds.
\[p5.3\]Let $\left\{ f_{i}\right\} _{i\in I}$ and $F$ be as in Proposition \[p5.1\]. If $\phi _{i},\Phi _{i},\gamma _{i},\Gamma _{i}\in
\mathbb{K}$ $\left( i\in F\right) $ and $f,g\in L_{\rho }^{2}\left( \Omega ,%
\mathbb{K}\right) $ so that either$$\begin{aligned}
\int_{\Omega }\rho \left( s\right) \func{Re}\left[ \left( \sum_{i\in F}\Phi
_{i}f_{i}\left( s\right) -f\left( s\right) \right) \left( \overline{f}\left(
s\right) -\sum_{i\in F}\overline{\phi _{i}}\text{ }\overline{f_{i}}\left(
s\right) \right) \right] d\mu \left( s\right) & \geq 0, \label{5.9} \\
\int_{\Omega }\rho \left( s\right) \func{Re}\left[ \left( \sum_{i\in
F}\Gamma _{i}f_{i}\left( s\right) -g\left( s\right) \right) \left( \overline{%
g}\left( s\right) -\sum_{i\in F}\overline{\gamma _{i}}\text{ }\overline{f_{i}%
}\left( s\right) \right) \right] d\mu \left( s\right) & \geq 0, \notag\end{aligned}$$or, equivalently,$$\begin{aligned}
\int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) -\sum_{i\in F}%
\frac{\Phi _{i}+\phi _{i}}{2}\cdot f_{i}\left( s\right) \right\vert ^{2}d\mu
\left( s\right) & \leq \frac{1}{4}\sum_{i\in F}\left\vert \Phi _{i}-\phi
_{i}\right\vert ^{2}, \label{5.10} \\
\int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) -\sum_{i\in F}%
\frac{\Gamma _{i}+\gamma _{i}}{2}\cdot f_{i}\left( s\right) \right\vert
^{2}d\mu \left( s\right) & \leq \frac{1}{4}\sum_{i\in F}\left\vert \Gamma
_{i}-\gamma _{i}\right\vert ^{2}, \notag\end{aligned}$$then we have the inequality$$\begin{gathered}
\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{%
g\left( s\right) }d\mu \left( s\right) \right. \label{5.11} \\
-\left. \sum_{i\in F}\int_{\Omega }\rho \left( s\right) f\left( s\right)
\overline{f_{i}}\left( s\right) d\mu \left( s\right) \int_{\Omega }\rho
\left( s\right) f_{i}\left( s\right) \overline{g\left( s\right) }d\mu \left(
s\right) \right\vert \\
\leq \frac{1}{4}M\left( \mathbf{\Phi },\mathbf{\phi },F\right) M\left(
\mathbf{\Gamma },\mathbf{\gamma },F\right) \left( \sum_{i\in F}\left\vert
\int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{f_{i}}\left(
s\right) d\mu \left( s\right) \right\vert ^{2}\right) ^{\frac{1}{2}} \\
\times \left( \sum_{i\in F}\left\vert \int_{\Omega }\rho \left( s\right) f_{i}\left(
s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert
^{2}\right) ^{\frac{1}{2}},\end{gathered}$$where $M\left( \mathbf{\Phi },\mathbf{\phi },F\right) $ is defined in ([5.8]{}).
The constant $\frac{1}{4}$ is the best possible.
The proof follows by Theorem \[t3.1\] and we omit the details.
In the case of real spaces, the following corollaries provide much simpler sufficient conditions for the counterpart of Bessel’s inequality (\[5.7\]) or for the Grüss type inequality (\[5.11\]) to hold.
\[c5.4\]Let $\left\{ f_{i}\right\} _{i\in I}$ be an orthonormal family of functions in the real Hilbert space $L_{\rho }^{2}\left( \Omega \right) ,$ $F$ a finite part of $I,$ $M_{i},m_{i}\geq 0$ $\left( i\in F\right) ,$ with $\sum_{i\in F}M_{i}m_{i}>0$ and $f\in L_{\rho }^{2}\left( \Omega
\right) $ so that$$\sum_{i\in F}m_{i}f_{i}\left( s\right) \leq f\left( s\right) \leq \sum_{i\in
F}M_{i}f_{i}\left( s\right) \text{ \ for \ }\mu -\text{a.e. }s\in \Omega .
\label{5.12}$$Then we have the inequality$$\begin{aligned}
0& \leq \int_{\Omega }\rho \left( s\right) f^{2}\left( s\right) d\mu \left(
s\right) -\sum_{i\in F}\left[ \int_{\Omega }\rho \left( s\right) f\left(
s\right) f_{i}\left( s\right) d\mu \left( s\right) \right] ^{2} \label{5.13}
\\
& \leq \frac{1}{4}\cdot \frac{\sum_{i\in F}\left( M_{i}-m_{i}\right) ^{2}}{%
\sum_{i\in F}M_{i}m_{i}}\cdot \sum_{i\in F}\left[ \int_{\Omega }\rho \left(
s\right) f\left( s\right) f_{i}\left( s\right) d\mu \left( s\right) \right]
^{2}. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible.
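A concrete instance of (\[5.13\]) can be checked numerically; in the snippet below we take $\Omega =[0,2\pi ]$, $\rho \equiv 1$, the single orthonormal function $f_{1}=1/\sqrt{2\pi }$ and an $f$ chosen so that (\[5.12\]) holds (all these choices are ours, for illustration only).

```python
import numpy as np

# Omega = [0, 2*pi], rho = 1, and a single orthonormal function f_1 = 1/sqrt(2*pi).
s = (np.arange(20000) + 0.5) * (2 * np.pi / 20000)     # midpoint grid
ds = 2 * np.pi / 20000
f1 = np.full_like(s, 1.0 / np.sqrt(2 * np.pi))
m1, M1 = 1.0, 2.0                                       # 0 <= m1 <= M1
f = (1.5 + 0.4 * np.sin(s)) / np.sqrt(2 * np.pi)        # m1*f1 <= f <= M1*f1 everywhere

assert np.all(m1 * f1 <= f) and np.all(f <= M1 * f1)    # condition (5.12)

norm2 = np.sum(f * f) * ds                  # int rho f^2 dmu
coeff = np.sum(f * f1) * ds                 # int rho f f_1 dmu
lhs = norm2 - coeff ** 2
rhs = 0.25 * (M1 - m1) ** 2 / (M1 * m1) * coeff ** 2
print("Bessel defect = %.4f  <=  bound = %.4f" % (lhs, rhs))
```

It prints a Bessel defect of about $0.08$ against a bound of about $0.28$, consistent with (\[5.13\]).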
\[c5.5\]Let $\left\{ f_{i}\right\} _{i\in I}$ and $F$ be as above. If $%
M_{i},m_{i},N_{i},n_{i}\geq 0$ $\left( i\in F\right) $ with $\sum_{i\in
F}M_{i}m_{i},\sum_{i\in F}N_{i}n_{i}>0$ and $f,g\in L_{\rho }^{2}\left(
\Omega \right) $ such that$$\sum_{i\in F}m_{i}f_{i}\left( s\right) \leq f\left( s\right) \leq \sum_{i\in
F}M_{i}f_{i}\left( s\right) \label{5.14}$$and$$\sum_{i\in F}n_{i}f_{i}\left( s\right) \leq g\left( s\right) \leq \sum_{i\in
F}N_{i}f_{i}\left( s\right) \text{ \ for \ }\mu -\text{a.e. }s\in \Omega ,$$then we have the inequality$$\begin{gathered}
\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) g\left(
s\right) d\mu \left( s\right) \right. \label{5.15} \\
-\left. \sum_{i\in F}\int_{\Omega }\rho \left( s\right) f\left( s\right)
f_{i}\left( s\right) d\mu \left( s\right) \int_{\Omega }\rho \left( s\right)
g\left( s\right) f_{i}\left( s\right) d\mu \left( s\right) \right\vert \\
\leq \frac{1}{4}\left( \frac{\sum_{i\in F}\left( M_{i}-m_{i}\right) ^{2}}{%
\sum_{i\in F}M_{i}m_{i}}\right) ^{\frac{1}{2}}\left( \frac{\sum_{i\in
F}\left( N_{i}-n_{i}\right) ^{2}}{\sum_{i\in F}N_{i}n_{i}}\right) ^{\frac{1}{%
2}} \\
\times \left[ \sum_{i\in F}\left( \int_{\Omega }\rho \left( s\right) f\left(
s\right) f_{i}\left( s\right) d\mu \left( s\right) \right) ^{2}\sum_{i\in
F}\left( \int_{\Omega }\rho \left( s\right) g\left( s\right) f_{i}\left(
s\right) d\mu \left( s\right) \right) ^{2}\right] ^{\frac{1}{2}}. \notag\end{gathered}$$
S.S. DRAGOMIR, A counterpart of Bessel’s inequality in inner product spaces and some Grüss type related results, *RGMIA Res. Rep. Coll.*, **6**(2003), Supplement, Article 10. \[ONLINE: `http://rgmia.vu.edu.au/v6(E).html`\]
S.S. DRAGOMIR, On Bessel and Grüss inequalities for orthonormal families in inner product spaces, *RGMIA Res. Rep. Coll.*, **6**(2003), Supplement. \[ONLINE: `http://rgmia.vu.edu.au/v6(E).html`\]
S.S. DRAGOMIR, Some Grüss’ type inequalities in inner product spaces, *J. Ineq. Pure & Appl. Math.*, to appear. \[ONLINE: `http://jipam.vu.edu.au`\]
|
---
abstract: 'Sensitivity to the supersymmetric scalar states $\phi$ at the future linear ${{e^+e^-}}$ and photon colliders is discussed. In particular, a search strategy for massive sgoldstinos, the supersymmetric partners of the goldstino, is illustrated.'
---
Introduction
=============
In the Supersymmetric extension of the Standard Model, once Supersymmetry is spontaneously broken the gravitino ${\rm{\tilde G}}$ can acquire a mass by absorbing the degrees of freedom of the goldstino. The mechanism is analogous to the spontaneous breaking of the electro-weak symmetry in the Standard Model, where the Z and W bosons acquire mass by absorbing the Goldstone bosons.
A very light gravitino ${\rm{\tilde G}}$, as predicted by supersymmetric models [@ref:GMSB], has been searched for at the LEP and Tevatron experiments [@ref:lepbound; @ref:cmsbound], and the sensitivity of an experiment at a future linear collider to its signatures has been studied [@ref:gravmio]. Limits on the ${\rm{\tilde G}}$ mass are related to the supersymmetry-breaking scale $\sqrt{F}$.
It has been pointed out [@ref:prz] that in such supersymmetric extensions of the Standard Model with a light gravitino, the effective theory at the weak scale must also contain the supersymmetric partner of the goldstino, called the sgoldstino. The production of this particle, which could be massive, may be relevant at LEP and Tevatron energies [@ref:przhad] if the supersymmetry-breaking scale and the sgoldstino mass are not too large. Two states are considered in [@ref:prz; @ref:przhad]: the CP-even S and the CP-odd P. Assuming R-parity conservation, it has to be noticed that, while the goldstino is R-odd, the sgoldstino is R-even and can therefore be produced together with Standard Model particles.
At LEP 2, sgoldstino signatures have been searched for by the DELPHI experiment [@ref:delphisgold], and preliminary results from CDF [@ref:cdfsgold] show the higher sensitivity of hadron colliders. Neither search found evidence for such states.
At an ${{e^+e^-}}$ collider one of the most interesting channels for the production of such scalars (from now on the symbol $\phi$ will be used to indicate a generic state) is the process ${e^+e^-\rightarrow \phi \gamma}$ which depends on the $\phi$ mass $m_{\phi}$ and on $\sqrt{F}$:
$$\frac{d \sigma} {dcos\theta} (e^+e^-\rightarrow \phi \gamma )
=\frac{\left|\Sigma\right|^2 s}{64 \pi F^2}
\left( 1- \frac{m_{\phi}^2}{s} \right)^3 (1+cos^2\theta)
\label{dsigma}$$
where $\theta$ is the scattering angle in the centre-of-mass and
$$\left|\Sigma\right|^2=\frac{e^2 M_{\gamma\gamma}^2}{2s}+
\frac{g_Z^2(v_e^2+a_e^2) M_{\gamma Z}^2 s}{2(s-m_Z^2)^2}+
\frac{e g_Z v_e M_{\gamma\gamma}M_{\gamma Z}}{s-m_Z^2}$$
with $v_e=sin^2 \theta_W -1/4$, $a_e=1/4$ and $g_Z=e/(sin \theta_W cos \theta_W)$. The parameters $M_{\gamma\gamma}$ and $M_{\gamma Z}$ are related to the diagonal mass term for the $U(1)_Y$ and $SU(2)_L$ gauginos $M_1$ and $M_2$:
$$M_{\gamma\gamma}= M_1 cos^2 \theta_W+ M_2 sin^2 \theta_W,~
M_{\gamma Z}= (M_2-M_1) sin \theta_W cos \theta_W.$$
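For orientation, the total cross section obtained by integrating (\[dsigma\]) over the scattering angle can be evaluated with a few lines of Python; the electroweak inputs ($\sin^2\theta_W$ and the value of the electromagnetic coupling) and the chosen $\sqrt{F}$ are assumptions made here for illustration, not values taken from the original study.

```python
import numpy as np

GEV2_TO_FB = 3.894e11          # 1 GeV^-2 = 0.3894 mb = 3.894e11 fb

def sigma_phi_gamma(m_phi, sqrt_s, sqrt_F, M1, M2, cos_max=1.0):
    """Cross section of e+e- -> phi gamma (fb) from Eqs. (1)-(2), all inputs in GeV,
    integrated over |cos(theta)| < cos_max."""
    s, F = sqrt_s ** 2, sqrt_F ** 2
    mz = 91.19
    sw2 = 0.2312                                   # sin^2(theta_W), assumed value
    sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
    e = np.sqrt(4.0 * np.pi / 128.0)               # electromagnetic coupling (assumed alpha ~ 1/128)
    gz = e / (sw * cw)
    ve, ae = sw2 - 0.25, 0.25
    Mgg = M1 * cw ** 2 + M2 * sw2
    MgZ = (M2 - M1) * sw * cw
    Sigma2 = (e ** 2 * Mgg ** 2 / (2.0 * s)
              + gz ** 2 * (ve ** 2 + ae ** 2) * MgZ ** 2 * s / (2.0 * (s - mz ** 2) ** 2)
              + e * gz * ve * Mgg * MgZ / (s - mz ** 2))
    angular = 2.0 * cos_max + 2.0 * cos_max ** 3 / 3.0   # integral of (1 + cos^2 theta)
    return Sigma2 * s / (64.0 * np.pi * F ** 2) * (1.0 - m_phi ** 2 / s) ** 3 \
           * angular * GEV2_TO_FB

# Parameter set 1) of Table 1, an assumed sqrt(F) of 2 TeV, sqrt(s) = 500 GeV
for m in (100.0, 300.0, 450.0):
    print("m_phi = %3.0f GeV: sigma = %.2f fb"
          % (m, sigma_phi_gamma(m, 500.0, 2000.0, 200.0, 300.0)))
```

The $1/F^{2}$ scaling makes the rate fall quickly with the supersymmetry-breaking scale, which is what ultimately limits the reach in $\sqrt{F}$.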
Other interesting processes are due to $\gamma \gamma$- or gg-fusion occurring, respectively, at ${{e^+e^-}}$ and hadron colliders. In both cases the production cross sections are proportional to the corresponding widths: $$\sigma({{e^+e^-}}\rightarrow {{e^+e^-}}\phi)\propto \sigma_0^{\gamma \gamma}=\frac{4 \pi^2}{ m^3_{\phi}}\Gamma(\phi \rightarrow \gamma \gamma),
\sigma(p\bar{p} \rightarrow \phi) \propto \sigma_0^{g g}=\frac{\pi^2}{8 m^3_{\phi}}\Gamma(\phi \rightarrow g g)$$ and they can be obtained, respectively, from the photon and gluon distribution functions.
The widths of the decay modes $\phi \rightarrow \gamma \gamma$ and $\phi \rightarrow gg$ are $$\Gamma(\phi\rightarrow \gamma \gamma)=\frac{m_{\phi}^3 M_{\gamma\gamma}^2}{32 \pi F^2}
\label{phitogam}$$ and $$\Gamma(\phi \rightarrow g g)= \frac{m_{\phi}^3 M_3^2}{4 \pi F^2}$$ where $M_3$ is the gluino mass. As noticed in [@ref:przhad] the production formulae are similar in form to those for a light SM Higgs production in Born approximation where $\Gamma(H\rightarrow\gamma \gamma)$ and $\Gamma(H\rightarrow g g)$ substitute the $\phi$ widths. It is straightforward to apply the same correspondence between these two different physical cases to the $\phi$ production on photon colliders. With a reverse substitution, an effective production cross section in the narrow-width approximation can be deduced from the studies of Higgs Physics at a $\gamma \gamma$ collider [@ref:Telnov]: $$\sigma^{eff}=\frac{dL_{\gamma \gamma}}{dW_{\gamma \gamma}}\frac{m_{\phi}}{L_{\gamma\gamma}} \times
\frac{4\pi^2 \Gamma(\phi\rightarrow \gamma \gamma)}{m_{\phi}^3}
\label{sggphi}$$ where $dL_{\gamma \gamma}/dW_{\gamma \gamma}$ is the luminosity spectrum in the two-photon centre-of-mass energy $W_{\gamma \gamma}$ and $L_{\gamma\gamma}$ is defined as the luminosity at the high-energy $\gamma \gamma$ peak. All the above formulae depend on model-dependent mass parameters. In [@ref:prz] two sets of these parameters are considered as numerical examples; they are reported in Table \[tab:param\].
$M_1$ $M_2$ $M_3$
---- ------- ------- -------
1) 200 300 400
2) 350 350 350
: Two choices for the gaugino mass parameters (in GeV) relevant for the sgoldstino production and decay. []{data-label="tab:param"}
Over a large region of the parameter space the total width is dominated by $\Gamma(\phi \rightarrow g g)$ and is narrow (below a few GeV), except in the region of small $\sqrt{F}$, where the production cross section is expected to be very large.
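The statement about the width can be made quantitative with the expressions above; the following sketch evaluates the two partial widths for the parameter sets of Table \[tab:param\] at an illustrative mass and two assumed values of $\sqrt{F}$ (other decay channels are neglected, since $gg$ dominates).

```python
import numpy as np

def widths(m_phi, sqrt_F, M1, M2, M3, sw2=0.2312):
    """Partial widths (GeV) of phi -> gamma gamma and phi -> g g from Eqs. (3)-(4)."""
    F = sqrt_F ** 2
    Mgg = M1 * (1.0 - sw2) + M2 * sw2
    gam_gam = m_phi ** 3 * Mgg ** 2 / (32.0 * np.pi * F ** 2)   # gamma gamma
    gam_glu = m_phi ** 3 * M3 ** 2 / (4.0 * np.pi * F ** 2)     # gluon gluon
    return gam_gam, gam_glu

for label, (M1, M2, M3) in {"set 1": (200., 300., 400.), "set 2": (350., 350., 350.)}.items():
    for sqrt_F in (500.0, 2000.0):
        ggam, gglu = widths(300.0, sqrt_F, M1, M2, M3)
        print("%s, sqrt(F) = %4.0f GeV: Gamma(gamma gamma) = %.2e GeV, "
              "Gamma(gg) = %.2e GeV, total ~ %.2e GeV"
              % (label, sqrt_F, ggam, gglu, ggam + gglu))
```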
In this note the sensitivity to these states of an experiment at an ${{e^+e^-}}$ linear collider with a centre-of-mass energy of 500 GeV, and of an experiment at a photon collider obtained from primary ${{e^+e^-}}$ beams of the same energy, is evaluated. An integrated luminosity of 500 fb$^{-1}$ is considered for the ${{e^+e^-}}$ collisions, with a reduction factor for the $\gamma \gamma$ interactions.
${{e^+e^-}}$ collider
=====================
The search for these scalars at a future linear collider can be an upgrade of the analysis performed at LEP, where the two decay channels $\phi \rightarrow \gamma \gamma$ and $\phi \rightarrow gg$ were considered [@ref:delphisgold]. For the present sensitivity evaluation only the dominant channel is considered. The $\phi \rightarrow g g$ decay gives rise to events with one photon and two jets. An irreducible background from $e^+ e^- \rightarrow q \bar{q} \gamma$ events is associated with this topology, and therefore the signal must be searched for as an excess of events over the background expectation for every mass hypothesis. To select $g g \gamma$ candidate events the following selection criteria can be defined:
- an electromagnetic energy cluster identified as photon with a polar angle $\theta>20^{\circ}$; the angle between the photon and the nearest jet must be greater than $10^{\circ}$;
- no electromagnetic cluster with $\theta< 5^{\circ}$;
- to remove $\gamma \gamma$ fusion events: the total multiplicity $>10$; the charged multiplicity $> 5$; the energy in transverse plane $> 0.12\cdot \sqrt{s}$; the sum of absolute values of track momentum along thrust axis $>0.20 \cdot\sqrt{s}$;
- to remove Bhabha background: reject the events with electromagnetic cluster with $E> 0.45 \cdot \sqrt{s}$ and low track multiplicity;
- to reduce ${{q\bar{q}}}\gamma$ events: $|\cos(\theta_p)|<0.995$, where $\theta_p$ is the polar angle of the missing momentum; the visible energy must be greater than $0.60 \cdot \sqrt{s}$; reject events with a c or b tag;
- to remove WW background, the events are reconstructed by forcing them into a two-jet topology, excluding from the jet clustering the tracks associated with the photon cluster. Events are removed if $y_{cut}>0.02$.
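The following Python fragment is a purely schematic rendering of the cuts above; it is not the analysis code of any experiment, and all field names of the toy event record, as well as the boolean low-track-multiplicity flag, are assumptions introduced only for the illustration.

```python
# Schematic implementation of the g g gamma selection (illustrative only).
def select_ggg_candidate(ev, sqrt_s=500.0):
    # isolated photon in the acceptance
    if ev["photon_theta_deg"] <= 20.0 or ev["photon_jet_angle_deg"] <= 10.0:
        return False
    # veto very forward electromagnetic activity
    if ev["min_em_cluster_theta_deg"] < 5.0:
        return False
    # anti gamma-gamma fusion cuts
    if (ev["n_total"] <= 10 or ev["n_charged"] <= 5
            or ev["transverse_energy"] <= 0.12 * sqrt_s
            or ev["sum_abs_p_thrust"] <= 0.20 * sqrt_s):
        return False
    # anti-Bhabha: very energetic electromagnetic cluster with low track multiplicity
    if ev["max_em_cluster_energy"] > 0.45 * sqrt_s and ev["low_track_multiplicity"]:
        return False
    # reduce q qbar gamma: missing-momentum direction, visible energy, no heavy-flavour tag
    if (abs(ev["cos_theta_missing"]) >= 0.995
            or ev["visible_energy"] <= 0.60 * sqrt_s
            or ev["b_tag"] or ev["c_tag"]):
        return False
    # reduce WW: two-jet reconstruction (photon tracks excluded) must not be too jet-like
    if ev["y_cut_2jet"] > 0.02:
        return False
    return True
```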
The polar angle acceptance for a $\phi \gamma$ signal produced as in (\[dsigma\]) is about $80 \%$. It has been evaluated by generating 4-vectors corresponding to the prompt photon and to the $\phi$ decay products. Considering the DELPHI results [@ref:delphisgold], the selection efficiency inside the acceptance region is assumed to be of the order of 50 $\%$.
The associated photon is monochromatic for a given center-of-mass energy (except in the region with small $\sqrt{F}$, where the width, like the production cross section, becomes very large). Therefore the signal can be detected as a peak in the photon energy distribution of the selected events. In addition, the photon energy could be determined very precisely by means of kinematic constraints if a three-body final-state topology is assumed. However, the presence of beamstrahlung ($2.8 \%$ mean beam-energy loss [@ref:brink]) induces a smearing of the photon energy which is comparable with or larger than the experimental resolution. On the other hand, the signal can be searched for directly in the jet-jet invariant mass distribution. Clearly the detector performance plays a crucial role in the optimal search strategy. Here a jet energy resolution following the $\sigma_E^{jet}/E=40\%/\sqrt{E} \oplus 2\%$ dependence and an error of about one degree on the jet angle reconstruction are assumed. With these assumptions the direct mass search is advantageous or at least comparable with respect to the recoil-photon search. The mass resolution is given in Fig. \[m\_res\].
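As an illustration of the size of this effect, the short sketch below propagates the assumed jet energy resolution and an assumed one-degree error on the jet opening angle to the di-jet invariant mass. The specific jet energies and opening angle are illustrative values chosen only to give a mass near 350 GeV, and the propagation neglects correlations and azimuthal-angle errors.

```python
# Schematic error propagation for the di-jet invariant mass (illustrative assumptions).
import math

def sigma_E(E):
    """Jet energy resolution in GeV for sigma_E/E = 40%/sqrt(E) (+) 2%."""
    return E * math.hypot(0.40 / math.sqrt(E), 0.02)

def dijet_mass_sigma(E1, E2, theta12, sigma_theta=math.radians(1.0)):
    """m^2 = 2 E1 E2 (1 - cos theta12); propagate energy and opening-angle errors."""
    m = math.sqrt(2.0 * E1 * E2 * (1.0 - math.cos(theta12)))
    rel2 = 0.25 * ((sigma_E(E1) / E1) ** 2 + (sigma_E(E2) / E2) ** 2
                   + (math.sin(theta12) * sigma_theta / (1.0 - math.cos(theta12))) ** 2)
    return m, m * math.sqrt(rel2)

if __name__ == "__main__":
    # illustrative symmetric configuration giving m close to 350 GeV
    m, sm = dijet_mass_sigma(E1=190.0, E2=190.0, theta12=math.radians(134.0))
    print("m = %.1f GeV, sigma_m = %.1f GeV" % (m, sm))
```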
The background rate depends on the considered $\phi$ mass hypothesis, as can be seen in Fig. \[ms\_lc\], where the reconstructed jet-jet invariant mass of ${{q\bar{q}}}\gamma$ events generated with PYTHIA [@ref:pythia] in the acceptance region is shown. The events are scaled in order to reproduce the number of events expected with an integrated ${{e^+e^-}}$ luminosity of 500 fb$^{-1}$; the statistical fluctuations, however, are not reproduced.
Given the background event distribution as a function of $m_{\phi}$ and the detection efficiency for any $\phi$ mass hypothesis, it is possible to estimate a $95\%$ Confidence Level upper limit on the $\phi$ production cross section. Only statistical fluctuations are considered here. The bin-to-bin fluctuations in the number of background events due to the limited Monte Carlo statistics are smoothed with a spline function.
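For orientation, the fragment below sketches how such a limit follows from simple event counting. The background yield, the overall efficiency, and the one-sided Gaussian, statistics-only approximation are assumptions made for the illustration and do not reproduce the actual limit curves of Fig. \[excl\].

```python
# Counting-experiment sketch of a statistics-only 95% CL cross-section limit.
import math

def sigma_limit_fb(n_bkg, efficiency, lumi_fb):
    """Smallest signal cross section (fb) exceeding a one-sided 95% CL background
    fluctuation, in the Gaussian approximation."""
    n_sig_95 = 1.645 * math.sqrt(n_bkg)
    return n_sig_95 / (efficiency * lumi_fb)

if __name__ == "__main__":
    # illustrative numbers: 200 q qbar gamma events expected in the mass window,
    # 0.8 (acceptance) x 0.5 (selection) overall efficiency, 500 fb^-1
    print("%.3f fb" % sigma_limit_fb(n_bkg=200.0, efficiency=0.8 * 0.5, lumi_fb=500.0))
```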
By comparing the experimental limits with the production cross section computed from (\[dsigma\]), it is possible to determine a $95 \%$ Confidence Level excluded region of the parameter space and a $5~\sigma$ discovery region. The beamstrahlung effects, which are more relevant than those of Initial State Radiation, are taken into account. The limit and the $5~\sigma$ regions are shown in Fig. \[excl\]. The $\phi$ width for all the considered $m_{\phi}$ values is smaller than the experimental resolution in all the points corresponding to the limit curves. Therefore the limit has been computed integrating the signal only over the experimental resolution. The region where the expected width is larger than the experimental resolution is indicated in Fig. \[excl\]. For $m_{\phi}<420$ GeV it is possible to cover this region of parameter space given the high cross section. This is no longer true for $m_{\phi}> 420$ GeV, where the decreasing cross section and the increasing width result in a drop of experimental sensitivity.
In the near future the Fermilab Tevatron Collider is expected to increase its luminosity by a factor $\sim 20$ [@ref:tdrcdf], and consequently an improvement of about a factor 1.5 in the corresponding $\sqrt{F}$ limits can be envisaged. The limits shown in Fig. \[excl\] are therefore competitive with the future improved Tevatron results.
At ${{e^+e^-}}$ colliders, additional information can be obtained by searching for the associated $\phi {{\rm Z}^0}$ production as described in [@ref:prz]. As far as the production cross section is concerned, competitive results are expected in the $m_{\phi}<\sqrt{s}-m_Z$ region. However, since this channel has a different final-state topology requiring a more sophisticated analysis, it is not considered here.
$\gamma \gamma$ collider
========================
The effective cross section given in eq. (\[sggphi\]) depends on the luminosity factor $f_L=\frac{dL_{\gamma \gamma}}{dW_{\gamma \gamma}}\frac{m_{\phi}}{L_{\gamma \gamma }}$. In the photon collider projects [@ref:gamgampro] there are several possible scenarios concerning the photon energy spectra. A photon energy distribution peaked as much as possible toward the primary electron/positron energy may be desirable. In [@ref:Telnov] $f_L=7$ is assumed and $L_{\gamma \gamma}$ is taken as the integrated luminosity for $z>z_{min}=0.65$, where $z=W_{\gamma \gamma}/2E_e$ and $E_e$ is the primary electron beam energy. The luminosity high-energy peak is expected to have a FWHM of $\sim 10-15 \%$ with a sharp edge at $z\sim0.8$. Therefore, in the unexcluded region of the $m_{\phi}-\sqrt{F}$ parameter space accessible at these machines with $2E_e=500$ GeV, the $\phi$ width is negligible.
The effective cross section obtained with $f_L=7$ is much higher (by several orders of magnitude, depending on $m_{\phi}$) than the ${{e^+e^-}}\rightarrow \phi \gamma$ cross section with the same parameters. Considering the photon and gluon decay channels, the signal would appear as a peak of two high-energy photons or jets with no missing transverse energy. The two-jet final state has to compete with a large Standard Model background, which can be suppressed using polarized photon beams with polarizations $\lambda_1, \lambda_2$: $\sigma({\gamma\gamma \rightarrow {{q\bar{q}}}}) \propto 1-\lambda_1 \lambda_2$ while $\sigma({\gamma\gamma \rightarrow \phi}) \propto 1+\lambda_1 \lambda_2$. However, taking into account QCD corrections [@ref:bord; @ref:jikia2], the ${{q\bar{q}}}g$ final state with an unresolved gluon jet gives rise to a sizeable background which may be hard to reject. Therefore, despite the smaller decay branching ratio, only the two-photon final state, which has a very small Standard Model background, is considered here.
The selection of events with two collinear, highly energetic photons is rather simple and the LEP experience can be exploited [@ref:lepgg]. An efficient way to select photons and to reject electrons is to require two energy clusters in the electromagnetic calorimeter not associated with hits in the vertex detector. Events with tracks detected in the other tracking devices in only one hemisphere can be accepted in order to recover photon conversions. The other requirements are:
- acollinearity between the e.m. clusters smaller than $30 ^{\circ}$;
- acoplanarity smaller than $5^{\circ}$;
- polar angle $\theta>30^{\circ}$;
- $E_{\gamma}>0.9 \cdot z_{min} \cdot E_e$.
The detection efficiency is very high ($> 90\%$) in the region $W_{\gamma \gamma}/2E_e> z_{min}$ and the acceptance for the decay of a scalar particle is $86\%$.
The irreducible Standard Model background of $\gamma \gamma \rightarrow \gamma \gamma$ events has been discussed in [@ref:Jikia; @ref:Gounaris]. In the $W_{\gamma \gamma}$ region above 200-250 GeV the cross section is in the range 8-14 fb for $\theta > 30 {^{\circ}}$; assuming $L_{\gamma \gamma}\sim0.15 \cdot L_{{{e^+e^-}}}$, the number of expected events is then of the order of 600-1000. As a consequence, any New Physics signal has to exceed the corresponding statistical error (of the order of 3 to 4 $\%$) and the systematic uncertainty, including the precision of the background calculation. For the present sensitivity study an overall background uncertainty of 5 $\%$ is assumed, leaving a more detailed analysis of the signal and background, including the comparison of their angular distributions, to a later stage. With these assumptions, the sensitivity to a scalar state decaying into two photons is given by the expected 95 $\%$ Confidence Level limit on the cross section times branching ratio, and it is $$\sigma ({\gamma \gamma \rightarrow \phi})
\times B.R.(\phi \rightarrow \gamma \gamma) < 1 ~\rm{fb}$$ at the 95 $\%$ Confidence Level for $m_{\phi} \sim 400$ GeV. This value is obtained under the hypothesis that the whole luminosity is collected at the maximal energy spectrum available with 250 GeV electron beams. The actual sensitivity for the various $\phi$ mass hypotheses depends on the machine running strategy, on the available energy spectrum and on the photon beam polarization. Nevertheless, taking the given limit just as an order-of-magnitude evaluation of the sensitivity, it is worth investigating the implications for the supersymmetry breaking scale from (\[phitogam\]) and (\[sggphi\]). In particular, defining as a reference cross-section times branching-ratio value $\sigma B$ the one obtained with $M_{\gamma \gamma} =350$ GeV and a 10$\%$ branching ratio into two photons, the limit on $\sqrt{F}$ and the $5~\sigma$ signal can be expressed in terms of the ratio $R=\sigma^{eff} \times B.R.(\phi \rightarrow \gamma \gamma)/\sigma B$; they are then proportional to $R^{\frac{1}{4}}$, as shown in Fig. \[lim\_gamgam\]. The sensitivity is clearly much larger than the one expected at the ${{e^+e^-}}$ machines.
Conclusions
===========
The sensitivity to the supersymmetric scalar $\phi$ at future linear ${{e^+e^-}}$ and photon colliders is such that unexplored regions of the parameter space can be investigated. The ${{e^+e^-}}$ machines with a center-of-mass energy of 500 GeV can set limits on the production of sgoldstino scalars with masses up to about 420 GeV. These limits are competitive with the expected future results from the Tevatron Run II. The sensitivity at a photon collider obtained from the same electron-positron beam energy is expected to be much higher for $m_{\phi}\sim 400$ GeV.
Acknowledgements {#acknowledgements .unnumbered}
----------------
We want to thank F. Zwirner for useful explanations of the theoretical framework and for suggestions on the experimental possibilities, A. Castro for discussions concerning the future CDF results, and M. Mazzucato for comments and for reading the manuscript.
[99]{}

P. Fayet, Phys. Lett. [**B86**]{} (1979) 272;\
J. Ellis, K. Enqvist and D. Nanopoulos, Phys. Lett. [**B151**]{} (1985) 357;\
D. Dicus, S. Nandi and J. Woodside, Phys. Rev. [**D43**]{} (1991) 2951;\
A. Brignole, F. Feruglio and F. Zwirner, Nucl. Phys. [**B516**]{} (1998) 13, and references therein.

L3 collab., O. Adriani et al., Phys. Lett. [**B297**]{} (1992) 469;\
ALEPH collab., R. Barate et al., CERN-PPE 97-122, subm. to Phys. Lett. B;\
DELPHI collab., P. Abreu et al., Eur. Phys. J. [**C17**]{} (2000) 53.

CDF collab., T. Affolder et al., preprint hep-ex/0003026, subm. to Phys. Rev. Lett.

P. Checchia, [*Sensitivity to the Gravitino mass from single-photon spectrum at TESLA Linear Collider*]{}, Proc. of Worldwide Study on Physics and Experiments with Future Linear ${{e^+e^-}}$ Colliders (1999) 376, hep-ph/9911208.

E. Perazzi, G. Ridolfi and F. Zwirner, Nucl. Phys. [**B574**]{} (2000) 3.

E. Perazzi, G. Ridolfi and F. Zwirner, Nucl. Phys. [**B590**]{} (2000) 287.

DELPHI collab., P. Abreu et al., Phys. Lett. [**B494**]{} (2000) 203.

CDF preliminary results available at\
$http://www-cdf.fnal.gov/physics/exotic/run\_1b\_sgoldstino/sgold\_public.html$.

V.I. Telnov, Int. J. Mod. Phys. [**A13**]{} (1998) 2399.

R. Brinkmann, [*The TESLA Linear Collider*]{}, Proc. of Worldwide Study on Physics and Experiments with Future Linear ${{e^+e^-}}$ Colliders (1999) 599.

T. Sjöstrand, Comp. Phys. Comm. [**39**]{} (1986) 347; LU TP 95-20, CERN TH 7112-93 (hep-ph/9508391).

CDF collab., [*The CDF II Detector Technical Design Report*]{}, FERMILAB-Pub-96/390-E.

V.I. Telnov, [*Gamma-gamma, gamma-electron Colliders: Physics, Luminosities, Backgrounds*]{}, Proc. of Worldwide Study on Physics and Experiments with Future Linear ${{e^+e^-}}$ Colliders (1999) 475;\
T. Takahashi, [*Gamma-gamma Collider*]{}, Proc. of the International Workshop on High Energy Photon Colliders, June 14-17, 2000, DESY Hamburg, Germany, to be published in Nucl. Inst. and Meth. A.

D.L. Borden et al., Phys. Rev. [**D50**]{} (1994) 4499.

G. Jikia and A. Tkabladze, Phys. Rev. [**D54**]{} (1996) 2030.

ALEPH collab., Phys. Lett. [**B429**]{} (1998) 201;\
DELPHI collab., P. Abreu et al., Phys. Lett. [**B491**]{} (2000) 67;\
OPAL collab., G. Abbiendi et al., Phys. Lett. [**B465**]{} (1999) 303;\
L3 collab., M. Acciari et al., Phys. Lett. [**B475**]{} (2000) 198.

G. Jikia and A. Tkabladze, Phys. Lett. [**B323**]{} (1994) 453.

G.J. Gounaris, [*The processes $\gamma \gamma \rightarrow \gamma \gamma,~\gamma Z,~ ZZ$*]{}, Proc. of the International Workshop on High Energy Photon Colliders, June 14-17, 2000, DESY Hamburg, Germany, hep-ph/0008170, to be published in Nucl. Inst. and Meth. A.
---
abstract: 'A graph is $(d_1, \ldots, d_k)$-colorable if its vertex set can be partitioned into $k$ nonempty subsets so that the subgraph induced by the $i$th part has maximum degree at most $d_i$ for each $i\in\{1, \ldots, k\}$. It is known that for each pair $(d_1, d_2)$, there exists a planar graph with girth $4$ that is not $(d_1, d_2)$-colorable. This sparked the interest in finding the pairs $(d_1, d_2)$ such that planar graphs with girth at least $5$ are $(d_1, d_2)$-colorable. Given $d_1\leq d_2$, it is known that planar graphs with girth at least $5$ are $(d_1, d_2)$-colorable if either $d_1\geq 2$ and $d_1+d_2\geq 8$ or $d_1=1$ and $d_2\geq 10$. We improve an aforementioned result by providing the first pair $(d_1, d_2)$ in the literature satisfying $d_1+d_2\leq 7$ where planar graphs with girth at least $5$ are $(d_1, d_2)$-colorable. Namely, we prove that planar graphs with girth at least $5$ are $(3, 4)$-colorable.'
address:
- '$^1$ Department of Mathematics, Hankuk University of Foreign Studies, Yongin-si, Gyeonggi-do, Republic of Korea, ilkyoo@hufs.ac.kr.'
- |
$^2$Department of Mathematics\
The College of William and Mary\
Williamsburg, VA, 23185, gyu@wm.edu.
- |
$^3$School of Mathematics and Statistics\
    Shandong Normal University\
Jinan, China, xiazhang@sdnu.edu.cn, pandarhz@sina.com
author:
- Ilkyoo Choi$^1$
- Gexin Yu$^2$
- 'Xia Zhang$^{3,*}$'
title: 'Planar graphs with girth at least 5 are $(3,4)$-colorable'
---
[*Keywords:*]{} [Improper coloring; planar graph; discharging method]{}
Introduction
============
All graphs in this paper are finite and simple, which means no loops and no multiple edges. For an integer $k$, let $[k]=\{1, \ldots, k\}$. Given a graph $G$, let $V(G)$ and $E(G)$ denote its vertex set and edge set, respectively. A graph is [*$(d_1, \ldots, d_k)$-colorable*]{} if its vertex set can be partitioned into $k$ nonempty subsets so that the subgraph induced by the $i$th part has maximum degree at most $d_i$ for each $i\in [k]$. This notion is known as [*improper coloring*]{}, or [*defective coloring*]{}, and has recently attracted much attention. Improper coloring is a relaxation of the traditional proper coloring, however, it also opens up an opportunity to gain refined information on partitioning the graph compared to the traditional proper coloring.
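For very small graphs the definition can be checked directly by exhaustive search; the following Python sketch (an illustration only, far too slow for anything beyond toy examples) does exactly that, including the requirement that the $k$ parts be nonempty.

```python
# Brute-force check of (d_1, ..., d_k)-colorability for a tiny graph.
from itertools import product

def is_defective_colorable(adj, degrees):
    """adj: dict vertex -> set of neighbours; degrees: tuple (d_1, ..., d_k)."""
    vertices = list(adj)
    k = len(degrees)
    for assignment in product(range(k), repeat=len(vertices)):
        if len(set(assignment)) < k:      # every colour class must be nonempty
            continue
        colour = dict(zip(vertices, assignment))
        if all(sum(1 for u in adj[v] if colour[u] == colour[v]) <= degrees[colour[v]]
               for v in vertices):
            return True
    return False

if __name__ == "__main__":
    # The 5-cycle C_5: a (0, 0)-coloring would be a proper 2-coloring, which no odd
    # cycle admits, while a (1, 0)-coloring exists.
    c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    print(is_defective_colorable(c5, (0, 0)))   # False
    print(is_defective_colorable(c5, (1, 0)))   # True
```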
The Four Color Theorem [@1977ApHa; @1977ApHaKo] states that the vertex set of a planar graph can be partitioned into four independent sets; this means that every planar graph is $(0, 0, 0, 0)$-colorable since an independent set induces a graph with maximum degree at most $0$. A natural question to ask is what happens when we try to partition the vertex set of a planar graph into fewer parts. Already in 1986, Cowen, Cowen, and Woodall [@1986CoCoWo] proved that a planar graph is $(2, 2, 2)$-colorable. The previous result is sharp since Eaton and Hull [@1999EaHu] and independently Škrekovski [@1999Sk] both showed the existence of a planar graph that is not $(1, h, l)$-colorable for any given $h$ and $l$; for an explicit construction see [@unpub_ChEs]. Hence, improper coloring of a planar graph with no restriction is completely solved for $k\geq 3$.
Since sparser graphs are easier to color, a natural direction of research is to consider sparse planar graphs, and a popular sparsity condition is imposing a restriction on girth. Grötzsch’s theorem [@1959Gr] states that a planar graph with girth at least $4$ is $(0, 0, 0)$-colorable. Therefore it only remains to consider partitioning the vertex set of a planar graph into two parts. Moreover, since there exists a planar graph with girth $4$ that is not $(d_1, d_2)$-colorable for each pair $(d_1, d_2)$ (see [@2015MoOc] for an explicit construction), there has been a considerable amount of research towards improper coloring planar graphs with girth at least $5$. For various results regarding improper coloring planar graphs with girth at least $6$ or other sparse graphs that are not necessarily planar, see [@2010BoIvMoOcRa; @2013BoKoYa; @2011BoKo; @2014BoKo; @2006HaSe; @2015MoOc]. Similar research has also been done for graphs on surfaces as well [@unpub_ChEs].
In this paper, we focus on planar graphs with girth at least $5$. Škrekovski [@2000Sk] showed that planar graphs with girth at least $5$ are $(4, 4)$-colorable and Borodin and Kostochka [@2014BoKo] proved a result that implies planar graphs with girth at least $5$ are $(2, 6)$-colorable. Answering a question by Raspaud, Choi and Raspaud [@2015ChRa] proved that planar graphs with girth at least $5$ are $(3, 5)$-colorable. Recently, Choi et al. [@2017ChChJeSu] proved that planar graphs with girth at least $5$ are $(1, 10)$-colorable, which answered a question by Montassier and Ochem [@2015MoOc] in the affirmative. By a construction of Borodin et al. [@2010BoIvMoOcRa], it is also known that planar graphs with girth at least $5$ (even $6$) are not necessarily $(0, d)$-colorable for an arbitrary $d$. Consequently, only finitely many pairs $(d_1, d_2)$ remain for which it is unknown whether planar graphs with girth at least $5$ are $(d_1, d_2)$-colorable. To sum up, all previous knowledge about improper coloring of planar graphs with girth at least $5$ is the following:
Given $d_1\leq d_2$, planar graphs with girth at least $5$ are $(d_1, d_2)$-colorable if
1. $d_1\geq 2$ and $d_1+d_2\geq 8$ [@2014BoKo; @2015ChRa; @2000Sk] or
2. $d_1=1$ and $d_2\geq 10$ [@2017ChChJeSu].
In this paper, we prove the following theorem, which reveals the first pair $(d_1, d_2)$ satisfying $d_1+d_2\leq 7$ where planar graphs with girth at least $5$ are $(d_1, d_2)$-colorable.
\[thm:main\] Planar graphs with girth at least $5$ are $(3,4)$-colorable.
The above theorem also improves the best known answer to the following question, which was explicitly stated in [@2015ChRa]:
What is the minimum $d^3_2$ such that planar graphs with girth at least $5$ are $(3, d^3_2)$-colorable?
Since Montassier and Ochem [@2015MoOc] constructed a planar graph with girth $5$ that is not $(3, 1)$-colorable, along with Theorem \[thm:main\], this shows that $d^3_2\in\{2, 3, 4\}$. Theorem \[thm:main\] is an improvement to the previously best known bound, which was by Choi and Raspaud [@2015ChRa]. It would be remarkable to determine the exact value of $d^3_2$.
Section 2 will reveal some structural properties of a minimum counterexample to Theorem \[thm:main\]. In Section 3, we will show that a minimum counterexample to Theorem \[thm:main\] cannot exist via discharging, hence proving the theorem.
We end the introduction with some definitions that will be used throughout the paper. A [*$d$-vertex*]{}, a [*$d^-$-vertex*]{}, and a [*$d^+$-vertex*]{} are vertices of degree exactly $d$, at most $d$, and at least $d$, respectively. A [*$d$-neighbor*]{} of a vertex is a neighbor that is a $d$-vertex. A $d$-vertex is a [*poor $d$-vertex*]{} (or [*$dp$-vertex*]{}) if it has exactly one $3^+$-neighbor, and a [*semi-poor $d$-vertex*]{} (or [*$ds$-vertex*]{}) if it has exactly two $3^+$-neighbors; otherwise, it is called a [*rich vertex*]{} (or [*$dr$-vertex*]{}). A [*$dr^+$-vertex*]{} is a rich $d^+$-vertex. A [*$ds^+$-vertex*]{} is a $d^+$-vertex with at least two $3^+$-neighbors. A [*$dp^-$-vertex*]{} is a $d^-$-vertex with at most one $3^+$-neighbor. An edge $uv$ is a [*heavy edge*]{} if both $u$ and $v$ are $5^+$-vertices, and neither $u$ nor $v$ is a $5p$-, $5s$-, or $6p$-vertex.
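The following small Python helper (an illustration only, not used anywhere in the proofs) restates these definitions operationally for a graph given as an adjacency list; vertex labels are assumed to be comparable, for instance integers.

```python
# Classify vertices as poor ("p"), semi-poor ("s"), or rich ("r") and list heavy edges.
def classify(adj):
    """adj: dict vertex -> set of neighbours."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    label = {}
    for v, nbrs in adj.items():
        big = sum(1 for u in nbrs if deg[u] >= 3)   # number of 3^+-neighbours
        kind = "p" if big == 1 else ("s" if big == 2 else "r")
        label[v] = "%d%s" % (deg[v], kind)
    heavy = [(u, v) for u in adj for v in adj[u] if u < v
             and deg[u] >= 5 and deg[v] >= 5
             and label[u] not in ("5p", "5s", "6p")
             and label[v] not in ("5p", "5s", "6p")]
    return label, heavy
```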
Throughout the paper, let $G$ be a counterexample to Theorem \[thm:main\] with the minimum number of $3^+$-vertices, and, subject to that, with $|V(G)|+|E(G)|$ minimized. It is easy to see that $G$ must be connected and there are no $1$-vertices in $G$. From now on, given a (partially) $(3, 4)$-colored graph, let $i$ be the color of the color class where maximum degree $i$ is allowed for $i\in\{3, 4\}$. We say a vertex with color $i$ is [*$i$-saturated*]{} if it already has $i$ neighbors of the same color. A vertex is [*saturated*]{} if it is either $3$-saturated or $4$-saturated.
Structural Lemmas
=================
In this section, we reveal useful structural properties of $G$.
\[lem:edge\] Every edge $xy$ of $G$ has an endpoint with degree at least $5$.
Suppose to the contrary that $x$ and $y$ are both $4^-$-vertices. Since $G-xy$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there is a $(3, 4)$-coloring $\varphi$ of $G-xy$. If either $\varphi(x)\neq\varphi(y)$ or $\varphi(x)=\varphi(y)=4$, then $\varphi$ is also a $(3, 4)$-coloring of $G$. Otherwise, $\varphi(x)=\varphi(y)=3$, and at least one of $x, y$ is $3$-saturated in $G-xy$. Pick a $3$-saturated vertex in $\{x, y\}$; since all of its neighbors in $G$ have color $3$, we may recolor it with the color $4$. In all cases we end up with a $(3,4)$-coloring of $G$, which is a contradiction.
\[lem:3vx\] There is no $3$-vertex in $G$.
Suppose to the contrary that $v$ is a $3$-vertex of $G$ with neighbors $v_1, v_2, v_3$. By Lemma \[lem:edge\], we know that $v_1, v_2, v_3$ are $5^+$-vertices. Obtain a graph $H$ from $G-v$ by adding paths $v_1u_1v_2, v_2u_2v_3, v_3u_3v_1$ of length two between the neighbors of $v$. See Figure \[fig:3vx\] for an illustration. Note that $H$ is planar and still has girth at least $5$ since the pairwise distance between $v_1, v_2, v_3$ did not change. Since $H$ has fewer $3^+$-vertices than $G$, there is a $(3, 4)$-coloring $\varphi$ of $H$.
![Obtaining $H$ from $G$ in Lemma \[lem:3vx\].[]{data-label="fig:3vx"}](fig-3-vx.pdf)
Without loss of generality, we may assume $\varphi(u_1)=\varphi(u_2)$. Since each of $v_1, v_2, v_3$ has a neighbor in $\{u_1, u_2\}$, using the color $\varphi(u_1)$ on $v$ gives a $(3, 4)$-coloring of $G$, which is a contradiction.
\[vertex-degree\] If $v$ is an $8^-$-vertex of $G$, then in every $(3, 4)$-coloring of $G-v$, $v$ has a saturated neighbor in $G-v$ that cannot be recolored. In particular,
(i) if $d(v)=2$, then for each $i\in\{3, 4\}$, $v$ has an $i$-saturated $(i+2)^+$-neighbor $u$ that cannot be recolored. Moreover, if $u$ is an $8^-$-vertex, then $u$ has a $j$-saturated $(j+2)^+$-neighbor where $\{i, j\}=\{3, 4\}$.
(ii) if $d(v)\in\{4, 5\}$, then $v$ has a $4$-saturated neighbor that is either a $9^+$-vertex or a $6s^+$-vertex.
(iii) if $d(v)\in\{6, 7, 8\}$, then $v$ has a saturated neighbor that is either a $9^+$-vertex or a $5s^+$-vertex.
Since $G-v$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there exists a $(3,4)$-coloring $\varphi$ of $G-v$. Note that for each $i\in\{3, 4\}$, since letting $\varphi(v)=i$ cannot be a $(3, 4)$-coloring of $G$, $v$ has either an $i$-saturated neighbor or $i+1$ neighbors with the color $i$. Since $v$ is an $8^-$-vertex, $v$ cannot have both four neighbors of color $3$ and five neighbors of color $4$. Let $j\in\{3, 4\}$ such that $v$ has at most $j$ neighbors with color $j$, one of which is $j$-saturated. If every $j$-saturated neighbor of $v$ can be recolored, then we can color $v$ with $j$, a contradiction. Hence, $v$ must have at least one $j$-saturated neighbor that cannot be recolored.
Let $u$ be a non-recolorable $j$-saturated neighbor of $v$ and let $\{i, j\}=\{3, 4\}$. We know that $u$ is a $(j+2)^+$-vertex, because it is adjacent to $v$, $j$ neighbors colored with $j$, and at least one neighbor $x$ colored with $i$ (since $u$ cannot be recolored with $i$). Moreover, if $d(u)\leq 8$, then $x$ must be $i$-saturated. In particular,
(i) if $d(v)=2$, then $v$ has both a non-recolorable $3$-saturated neighbor and a non-recolorable $4$-saturated neighbor. For $j\in \{3,4\}$, the $j$-saturated neighbor has degree at least $j+2$, and if its degree is at most $8$, then it has an $i$-saturated neighbor of degree at least $i+2$, where $\{i,j\}=\{3,4\}$.
(ii) if $d(v)\in\{4, 5\}$, then $v$ must have a non-recolorable $4$-saturated neighbor $u$. So $u$ is either a $9^+$-vertex or a $6s^+$-vertex.
(iii) if $d(v)\in\{6, 7, 8\}$, then $u$ must be either a $9^+$-vertex or a $5s^+$-vertex.
This finishes the proof of this lemma.
\[lem:6face\] Let $C$ be a $6$-face $u_1u_2u_3u_4u_5u_6$ of $G$.
(a) If $C$ contains three $2$-vertices and a $5$-vertex, then the other two vertices are $7^+$-vertices.
(b) If $C$ contains exactly two $2$-vertices, then $C$ contains at most two $5p$-vertices. Moreover,
    (b1) if $C$ contains exactly one $5p$-vertex, then it contains at most two of $5s$-vertices and $6p$-vertices;
    (b2) if $C$ contains two $5p$-vertices, then either $C=F_{6a}$ (see Figure \[figure-special-6-face\]) or it contains neither $5s$-vertices nor $6p$-vertices.
(c) If $C$ contains exactly one $2$-vertex, then it contains at most one $5p$-vertex. Moreover,
    (c1) if $C$ contains exactly one $5p$-vertex, then it contains at most two of $5s$-vertices and $6p$-vertices;
    (c2) if $C$ contains no $5p$-vertices, then it contains at most four of $5s$-vertices and $6p$-vertices.
(d) If $C$ contains no $2$-vertex, then it contains no poor vertices and at most four $5s$-vertices.
Note that by Lemma \[lem:edge\], no two $2$-vertices are adjacent to each other. We will show that if $C$ is not one of the above, then we can obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
[**(a)**]{}: Let $u_1, u_3, u_5$ be the $2$-vertices and let $u_4$ be a $5$-vertex of $C$. By Lemma \[vertex-degree\] $(i)$, both $u_2$ and $u_6$ are $6^+$-vertices, so without loss of generality, suppose to the contrary that $u_6$ is a $6$-vertex. Since $G-u_5$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices does not increase, there is a $(3, 4)$-coloring $\varphi$ of $G-u_5$. By Lemma \[vertex-degree\] $(i)$, we know $u_4$ is $3$-saturated and has a $4$-saturated $6^+$-neighbor and $u_6$ is $4$-saturated and has a $3$-saturated $5^+$-neighbor. Hence, $\varphi(u_3)=3$ and $\varphi(u_1)=4$.
If $\varphi(u_2)=3$, then recolor $u_3$ with $4$ and color $u_5$ with $3$ to obtain a $(3, 4)$-coloring of $G$. If $\varphi(u_2)=4$, then recolor $u_1$ with $3$ and color $u_5$ with $4$ to obtain a $(3, 4)$-coloring of $G$.

[**(b)**]{}: Note that each $5p$-vertex on $C$ must have a $2$-neighbor on $C$, and by Lemma \[vertex-degree\] $(i)$, each $2$-vertex has at most one $5p$-neighbor. So $C$ contains at most two $5p$-vertices because it has exactly two $2$-vertices.
[**(b1)**]{} Assume that $u_1$ is the unique $5p$-vertex on $C$. By Lemma \[vertex-degree\] $(ii)$, none of $u_2,u_6$ is a $5s$- or $6p$-vertex. If $u_4$ is neither a $5s$-vertex nor a $6p$-vertex, then $C$ contains at most two of $5s$-vertices and $6p$-vertices. If $u_4$ is a $6p$-vertex, then either $u_3$ or $u_5$ is a $2$-vertex, so again $C$ contains at most two of $5s$-vertices and $6p$-vertices. If $u_4$ is a $5s$-vertex, then by Lemma \[vertex-degree\] $(ii)$, one of $u_3$ and $u_5$ must be a $6s^+$-vertex, a $9^+$-vertex, or a $2$-vertex. Therefore, $C$ contains at most two of $5s$-vertices and $6p$-vertices.
[**(b2)**]{} Now assume that $C$ contains two $5p$-vertices. Observe that if $u_1, u_4$ are the two $5p$-vertices on $C$, then by Lemma \[vertex-degree\] $(ii)$, none of $u_2,u_3,u_5,u_6$ is a $5s$-vertex or a $6p$-vertex, as claimed. Therefore, we may assume that $u_1, u_3$ are the two $5p$-vertices on $C$.
Note that $u_2$ cannot be a $2$-vertex by Lemma \[vertex-degree\] $(i)$. So both $u_4$ and $u_6$ are $2$-vertices. By Lemma \[vertex-degree\] $(i)$ and $(ii)$, both $u_2$ and $u_5$ are $6^+$-vertices. We may assume that $u_5$ is a $6p$-vertex, for otherwise $C$ contains neither $5s$-vertices nor $6p$-vertices. Assume that $C$ is not a special $6$-face $F_{6a}$, which implies that $u_2$ is a $6$-vertex. By Lemma \[vertex-degree\] $(i)$, in a $(3,4)$-coloring $\varphi$ of $G-u_6$, we know $u_1$ is $3$-saturated and $u_5$ is $4$-saturated and both are non-recolorable. It follows that $u_2$ is $4$-saturated, $u_4$ is colored with $4$ and non-recolorable, and furthermore $u_3$ is $3$-saturated. Now we can recolor $u_4, u_3, u_2, u_1$ with $3, 4, 3, 4$ respectively, and color $u_6$ with $3$ to obtain a $(3, 4)$-coloring of $G$.

[**(c)**]{}: Let $u_1$ be the unique $2$-vertex on $C$. A $5p$-vertex must have a $2$-neighbor on $C$, and by Lemma \[vertex-degree\] $(i)$, a $2$-vertex has at most one $5p$-neighbor, so $C$ contains at most one $5p$-vertex.
[**(c1)**]{} Assume that $C$ has one $5p$-vertex $u_2$. By Lemma \[vertex-degree\] $(i)$ and $(ii)$, $u_6$ cannot be a $5$-vertex, and $u_3$ cannot be a $5s$-vertex or a $6p$-vertex. If $u_6$ is not a $6p$-vertex, then $C$ has at most two of $5s$-vertices and $6p$-vertices. If $u_6$ is a $6p$-vertex, then $u_4$ and $u_5$ cannot be both $5s$-vertices by Lemma \[vertex-degree\] $(ii)$. Note that neither $u_4$ nor $u_5$ can be a $6p$-vertex, since $C$ has only one $2$-vertex $u_1$.
[**(c2)**]{} Now assume that $C$ contains no $5p$-vertices. Consider three consecutive vertices $u_{i-1}, u_i, u_{i+1}$ on $C$. If $u_i$ is a $6p$-vertex, then either $u_{i-1}$ or $u_{i+1}$ must be a $2$-vertex. If $u_i$ is a $5s$-vertex, then by Lemma \[vertex-degree\] $(ii)$, either $u_{i-1}$ or $u_{i+1}$ is a $6s^+$-vertex, a $9^+$-vertex, or a $2$-vertex. Therefore, $C$ contains at most four of $5s$-vertices and $6p$-vertices.
[**(d)**]{}: If $C$ contains no 2-vertex, then it contains neither a $5p$-vertex nor a $6p$-vertex. By Lemma \[vertex-degree\] $(ii)$, a $5$-vertex must have a $6^+$-neighbor, so the two $3^+$-neighbors of a $5s$-vertex cannot be both $5s$-vertices. Therefore, $C$ contains at most four $5s$-vertices.
\[lem:F2\] If $F_{6b}$ is a $6$-face with three $2$-vertices and three $6p$-vertices (see Figure \[figure-special-6-face\]), then $F_{6b}$ cannot share an edge with a $5$-face with two $2$-vertices.
Let $C=u_1\ldots u_6$ be an $F_{6b}$ with three $2$-vertices $u_1, u_3, u_5$ and three $6p$-vertices. Note that two $2$-vertices cannot be adjacent to each other by Lemma \[lem:edge\]. Suppose to the contrary that a $5$-face $C'$ shares an edge with $C$. Then $C$ and $C'$ share exactly two edges and without loss of generality, assume that $C'=u_6u_1u_2v_1v_2$ and, by symmetry, we may assume that $v_1$ is a $2$-vertex.
Since $G-u_1$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there is a $(3, 4)$-coloring $\varphi$ of $G-u_1$. By Lemma \[vertex-degree\] $(i)$, both $u_2$ and $u_6$ are non-recolorable and one of $u_2$ and $u_6$ is $3$-saturated and the other is $4$-saturated.
First assume that $u_6$ is $3$-saturated and $u_2$ is $4$-saturated. Since $u_2$ is a $6$-vertex, by Lemma \[vertex-degree\] $(i)$, $u_2$ must have exactly one $3$-saturated neighbor and all other neighbors are colored with the color $4$. In particular, $\varphi(v_1)=4$. Also, by Lemma \[vertex-degree\] $(i)$, $u_6$ has a $4$-saturated neighbor, which must be $v_2$. Hence, we can recolor $v_1$ with the color $3$ and color $u_1$ with the color $4$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
Now assume that $u_6$ is $4$-saturated and $u_2$ is $3$-saturated. By Lemma \[vertex-degree\] $(i)$, $u_6$ must have a $3$-saturated neighbor, which must be $v_2$, and all other neighbors are colored with the color 4. In particular, $\varphi(u_5)=4$. Also, by Lemma \[vertex-degree\], we know that $u_2$ must have a $4$-saturated neighbor, which is neither $u_3$ nor $v_1$. If $\varphi(v_1)=3$, then we can recolor $v_1$ with the color $4$ and color $u_1$ with the color $3$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction. Therefore, $\varphi(v_1)=4$, which further implies that $\varphi(u_3)=3$. Now, if we can recolor $u_3$ with the color $4$, then we can color $u_1$ with the color $3$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction. Hence, $u_4$ must be $4$-saturated, and in particular $\varphi(u_4)=4$. Finally, we can recolor $u_5$ with the color $3$ and color $u_1$ with $4$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
\[lem:5-face-1-2vtx\] If $C$ is a $5$-face $u_1\ldots u_5$ with exactly one $2$-vertex $u_1$, then either
- $C$ contains at most two of $5p$-, $5s$-, and $6p$-vertices, or
- $C$ is a special $5$-face $F_{5c}$ or $F_{5d}$ in Figure \[figure-special-6-face\].
Assume that $C$ contains at least three $5p$-, $5s$-, and $6p$-vertices. By symmetry, we may assume that $u_3$ is a $5s$-vertex. Note that by Lemma \[vertex-degree\], $u_2$ is not a $5p$-vertex, and $u_3$ has a $6s^+$-neighbor or $9^+$-neighbor, which is either $u_2$ or $u_4$. If $u_2$ is a $6s^+$-vertex or $9^+$-vertex, then both $u_3$ and $u_4$ are $5s$-vertices, so by Lemma \[vertex-degree\], both $u_2$ and $u_5$ are $6s^+$-vertices or $9^+$-vertices, which is a contradiction. Therefore we may assume that $u_4$ is a $6s^+$-vertex or $9^+$-vertex. Now $u_2$ is a $5s$-vertex or $6p$-vertex, and $u_5$ is a $5p$-, $5s$-, or $6p$-vertex.
First assume that $u_2$ is a $5s$-vertex. Since $G-u_1$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there is a $(3, 4)$-coloring $\varphi$ of $G-u_1$. By Lemma \[vertex-degree\] $(i)$, $u_2$ must be $3$-saturated and $u_5$ must be a $4$-saturated $6p$-vertex. This further implies that $u_4$ is $3$-saturated. Note that $u_2$ must have a $4$-saturated neighbor and three neighbors of color $3$. Since $\varphi(u_4)=3$, we know $u_3$ cannot be the $4$-saturated neighbor of $u_2$, so $\varphi(u_3)=3$. Now, since $u_3$ has neither five neighbors colored with the color $4$ nor a $4$-saturated neighbor, $u_3$ can be recolored with $4$. Now, by recoloring $u_3$ with the color $4$ and coloring $u_1$ with the color $3$, we obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
Now assume that $u_2$ is a $6p$-vertex. Let $u$ be a 2-neighbor of $u_3$ that is not on $C$. Since $G-u$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there is a $(3, 4)$-coloring $\varphi$ of $G-u$. By Lemma \[vertex-degree\] $(ii)$, $u_3$ is $3$-saturated and $u_3$ has a $4$-saturated $6^+$-neighbor $x$. If $x=u_2$, then we can recolor $u_2$ with the color 3, and color $u_3$ and $u$ with the colors 4 and 3, respectively, to obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
Therefore $x=u_4$, which implies that $\varphi(u_4)=4$ and $\varphi(u_2)=3$. Since recoloring $u_2$ with the color $4$ must not be possible, we know that all neighbors of $u_2$, except $u_3$, are colored with the color $4$. In particular, $\varphi(u_1)=4$ and $u_1$ is non-recolorable. This further implies that $u_5$ is $3$-saturated and non-recolorable. Now, $u_4$ must be $4$-saturated and non-recolorable. That is to say, $u_4$ must have four neighbors colored with $4$. Moreover, $u_4$ must have either a $3$-saturated neighbor other than $u_3$, $u_5$, or at least four neighbors other than $u_3$ colored with $3$. Hence, $u_4$ is a $7r^+$-vertex or $9s^+$-vertex, that is, $C$ is either $F_{5c}$ or $F_{5d}$.
\[lem:7-face\] If $F$ is a $7$-face, then one of the following is true:
- $F$ has at most six $2$-, $5p$-, $5s$-, or $6p$-vertices;
- $F$ has at least two $5s$-vertices;
- $F$ is a special $7$-face $F_7$ (see Figure \[figure-special-6-face\]).
Note that two $2$-vertices cannot be adjacent to each other by Lemma \[lem:edge\]. Suppose to the contrary that $F$ contains seven of $2$-, $5p$-, $5s$-, and $6p$-vertices, and at most one $5s$-vertex. Denote the vertices around $F$ by $u_1,u_2,\ldots, u_7$ in order. Without loss of generality, we may assume that one vertex $u_1$ is a $5s$-vertex, for otherwise, two $6p^-$-vertices would be adjacent to each other, which contradicts Lemma \[vertex-degree\] $(ii)$ and $(iii)$. All other vertices of $F$ are $2$-vertices and $6p^-$-vertices.
Without loss of generality, we may assume that $u_2, u_4, u_6$ are $2$-vertices and $u_3, u_5, u_7$ are $6p^-$-vertices. Since $u_2$ is a $2$-vertex, by Lemma \[vertex-degree\] $(i)$, we know that $u_3$ is a $6p$-vertex. Since a $5p$-vertex cannot have a $5s$-neighbor by Lemma \[vertex-degree\] $(ii)$, we know that $u_7$ must be a $6p$-vertex. If $u_5$ is a $6p$-vertex, then $F$ is a special face $F_7$.
The only remaining case is when $u_5$ is a $5p$-vertex and $u_3,u_7$ are $6p$-vertices. Since $G-u_4$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there is a $(3, 4)$-coloring $\varphi$ of $G-u_4$. By Lemma \[vertex-degree\] $(i)$, $u_3$ and $u_5$ are $4$-saturated and $3$-saturated, respectively. In particular, $\varphi(u_3)=\varphi(u_2)=4$ and $\varphi(u_5)=\varphi(u_6)=3$. This further implies that $u_1$ is $3$-saturated and $u_7$ is $4$-saturated. Now, recoloring $u_2, u_1, u_7$ with the colors $3, 4, 3$, respectively, and coloring $u_4$ with the color $4$ gives a $(3, 4)$-coloring of $G$, which is a contradiction.
Discharging
===========
For each element $x\in V(G)\cup F(G)$, let $\mu(x)$ and $\mu^*(x)$ denote the [*initial charge*]{} and [*final charge*]{}, respectively, of $x$. Let $\mu(x)=d(x)-4$, so by Euler’s formula, $$\sum_{x\in V(G)\cup F(G)} \mu(x)=-8.$$
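As a quick sanity check of this identity (not part of the proof), one can evaluate the total initial charge of a concrete plane graph. For any connected plane graph, $\sum_{v}(d(v)-4)+\sum_{f}(d(f)-4)=(2|E|-4|V|)+(2|E|-4|F|)=4(|E|-|V|-|F|)=-8$ by Euler's formula; the dodecahedron, which is planar, $3$-regular, of girth $5$, and has twelve pentagonal faces, gives $20\cdot(3-4)+12\cdot(5-4)=-8$, and the short sketch below performs the same computation.

```python
# Sanity check of the initial-charge identity on the dodecahedron (illustrative only).
def total_charge(vertex_degrees, face_lengths):
    return sum(d - 4 for d in vertex_degrees) + sum(l - 4 for l in face_lengths)

if __name__ == "__main__":
    dodecahedron_vertices = [3] * 20      # 20 vertices of degree 3
    dodecahedron_faces = [5] * 12         # 12 faces of length 5
    print(total_charge(dodecahedron_vertices, dodecahedron_faces))   # -8
```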
![Special $7$-face, 6-faces, and 5-faces[]{data-label="figure-special-6-face"}](figure-special-7-face.pdf "fig:") ![Special $7$-face, 6-faces, and 5-faces[]{data-label="figure-special-6-face"}](figure-special-6-face.pdf "fig:")\
![Special $7$-face, 6-faces, and 5-faces[]{data-label="figure-special-6-face"}](figure-special-5-face.pdf "fig:") ![Special $7$-face, 6-faces, and 5-faces[]{data-label="figure-special-6-face"}](figure-special-5-face-Z.pdf "fig:")
Here are the discharging rules:
(R1) Let $v$ be a $5^+$-vertex. Then $v$ gives $\frac{1}{2}$ to each adjacent $2$-vertex; moreover,
    (R1a) if $d(v)\ge 8$, then $v$ gives $\frac{1}{2}$ to each adjacent $5p$-, $5s$-, $6p$-vertex and incident heavy edge;
    (R1b) if $d(v)=7$, then $v$ first gives $\frac{1}{2}$ to each adjacent $5p$-vertex and $6p$-vertex, then distributes its remaining charge evenly to adjacent $5s$-vertices and incident heavy edges;
    (R1c) if $d(v)\in \{5,6\}$, then $v$ distributes its remaining charge evenly to its adjacent $5p$-vertices and incident heavy edges.
(R2) A heavy edge distributes its charge evenly to the two incident faces.
(R3) Let $f$ be a $5^+$-face. Then $f$ gives $\frac{1}{2}$ to each encountered incident $2$-vertex on a boundary walk of $f$; moreover,
    (R3a) if $d(f)\ge 8$, then $f$ gives $\frac{1}{2}$ to each encountered incident $5p$-, $5s$-, and $6p$-vertex on a boundary walk of $f$. (This means that each cut-vertex on $f$ that is a $5p$-, $5s$-, or $6p$-vertex is given at least $1$ in total.)
    (R3b) if $d(f)=7$ and $f\not=F_7$, then $f$ first gives $\frac{1}{2}$ to each incident $5p$-vertex and $6p$-vertex, then distributes its remaining charge evenly to each incident $5s$-vertex (if any exist). If $f=F_7$, then $f$ gives $\frac{3}{8}$ to each incident $5s$-vertex and $6p$-vertex;
    (R3c) if $d(f)=6$ and $f\notin\{F_{6a}, F_{6b}\}$, then $f$ first gives $\frac{1}{2}$ to each incident $5p$-vertex and $\frac{1}{4}$ to each incident $5s$-vertex and $6p$-vertex, then distributes its remaining charge evenly to incident $5s$- and $6p$-vertices (if any exist); if $f=F_{6a}$, then $f$ gives $\frac{3}{8}$ to each incident $5p$-vertex and $\frac{1}{4}$ to the incident $6p$-vertex; if $f=F_{6b}$, then $f$ gives $\frac{1}{6}$ to each incident $6p$-vertex;
    (R3d) for the case $d(f)=5$, if $f$ is incident with two $2$-vertices, then it distributes its charge evenly to each incident $5p$-, $5s$-, and $6p$-vertex (if any exist); if $f$ has at most one $2$-vertex and $f\not\in \{F_{5a},F_{5b},F_{5c}, F_{5d}\}$, then $f$ first gives $\frac{1}{4}$ to each incident $5p$-, $5s$-, and $6p$-vertex, then it distributes its remaining charge evenly to each incident $5p$-, $5s$-, and $6p$-vertex (if any exist); if $f\in \{F_{5a}, F_{5b}\}$, then it gives $\frac{1}{2}$ to the incident $6p$-vertex and its remaining charge to the $5s$-vertex; if $f\in \{F_{5c}, F_{5d}\}$, then it gives $\frac{1}{4}$ to each incident $5p$-vertex and $6p$-vertex, and its remaining charge evenly to incident $5s$-vertices.
\[lem-face-charge\] If $f$ is a $5^+$-face, then $\mu^*(f)\geq 0$.
We show that each face has a nonnegative final charge after sending out the charges required by (R3).
If $f$ is a $5$-face, then $\mu(f)=1$ and $f$ is incident with at most two $2$-vertices. Clearly, $\mu^*(f)\geq 1-1=0$ by (R3d) and Lemma \[lem:5-face-1-2vtx\].
If $f$ is a $6$-face, then $\mu(f)=2$ and $f$ is incident with at most three $2$-vertices by Lemma \[lem:edge\].
Case 1: $f$ has at most one incident 2-vertex. By Lemma \[lem:6face\] (c) and (d), and (R3c), $\mu^*(f)\geq 2-\max\{\frac{1}{2}\cdot 2+\frac{1}{4}\cdot 4,\frac{1}{4}\cdot6\}= 0$.
Case 2: $f$ has two incident 2-vertices. By Lemma \[lem:6face\] (b), $f$ has at most two incident $5p$-vertices. If $f$ has no incident $5p$-vertex, then $\mu^*(f)\geq 2-\frac{1}{2}\cdot 2-\frac{1}{4}\cdot 4=0$. If $f$ has one incident $5p$-vertex, then $f$ has at most two of $5s$-vertices and $6p$-vertices by Lemma \[lem:6face\] (b), so $\mu^*(f)\geq 2-\frac{1}{2}\cdot 3-\frac{1}{4}\cdot 2=0$. If $f$ has two incident $5p$-vertices, then $f$ is either a special face $F_{6a}$ or has neither $5s$-vertices nor $6p$-vertices. Therefore, $\mu^*(f)\geq 2-\frac{1}{2}\cdot 2-\frac{3}{8}\cdot 2-\frac{1}{4}=0$ or $\mu^*(f)\geq 2-\frac{1}{2}\cdot 4=0$.
Case 3: $f$ has three incident 2-vertices. If $f$ is incident with a $5$-vertex, then the other two vertices on $f$ are $7^+$-vertices by Lemma \[lem:6face\] (a), so $\mu^*(f)\geq 2-\max\{\frac{1}{2}\cdot 4,\frac{1}{2}\cdot 3+\frac{1}{4}\}=0$. If $f$ is a special face $F_{6b}$ in Figure \[figure-special-6-face\], then $\mu^*(f)\geq 2-\frac{1}{2}\cdot 3-\frac{1}{6}\cdot 3=0$. If $f$ is not $F_{6b}$, then $\mu^*(f)\geq 2-\frac{1}{2}\cdot 3-\frac{1}{4}\cdot 2=0$.
If $f$ is a $7$-face, then $\mu(f)=3$. By (R3b) and Lemma \[lem:7-face\], $\mu^*(f)\geq 7-4-\max\{\frac{1}{2}\cdot 6, \frac{1}{2}\cdot 5, \frac{1}{2}\cdot 3+\frac{3}{8}\cdot 4\}=0$. If $f$ is an $8^+$-face, then $\mu^*(f)\ge d(f)-4-\frac{1}{2}d(f)\ge 0$ by (R3a).
Now we consider the final charge of an arbitrary vertex. Note that if a $5p$-, $5s$-, or $6p$-vertex is a cut vertex, then it must be visited more than once on a boundary walk of an $8^+$-face, thus it gets at least $1$ from the face by (R3). Therefore in the following three lemmas, we always assume that a $5p$-vertex or $5s$-vertex is in five different faces and a $6p$-vertex is in six different faces.
If $u$ is a $5p$-vertex, then $\mu^*(u)\geq 0$.
By (R1), $u$ gives out $4\cdot \frac{1}{2}=2$ to its adjacent $2$-vertices. To show $\mu^*(u)\ge 0$, we need to prove that $u$ receives at least $1$ by the discharging rules. Let $N(u)=\{u_0, v_1, v_2, v_3, v_4\}$ where $d(u_0)>2$ and $d(v_i)=2$ for $i\in[4]$. For $i\in[4]$, let $u_i$ be the neighbor of $v_i$ that is not $u$. We assume that the five faces incident with $u$ are $A,B,C,D,E$ as shown in Figure \[figure-5p\].
![A $5p$-vertex $u$ incident with five $5^+$-faces and a $6p$-vertex $u$ incident with six $5^+$-faces.[]{data-label="figure-5p"}](figure-5p.pdf "fig:") ![A $5p$-vertex $u$ incident with five $5^+$-faces and a $6p$-vertex $u$ incident with six $5^+$-faces.[]{data-label="figure-5p"}](figure-6p.pdf "fig:")
To get some idea regarding the degrees of the vertices on the five faces incident with $u$, we consider a $(3,4)$-coloring $\varphi$ of $G-u$, which exists since the number of edges decreased and the number of $3^+$-vertices did not increase. By Lemma \[vertex-degree\] $(ii)$, $u_0$ is a $4$-saturated $6^+$-vertex and the four $2$-neighbors of $u$ are colored with the color $3$. Since $u_0$ is non-recolorable, if $d(u_0)\leq 8$, then $u_0$ has a $3$-saturated neighbor and four neighbors of color $4$. Furthermore, since no neighbor of $u$ is recolorable, for $i\in[4]$, $u_i$ is a $4$-saturated $6^+$-neighbor and if $d(u_i)\leq 8$, then $u_i$ has a $3$-saturated neighbor.
[**Case 1.**]{} $u$ is incident with a special $6$-face $F_{6a}$.
By the ordering of the degrees of the vertices on $F_{6a}$, the special $6$-face must be either $A$ or $B$. Without loss of generality, assume that $A$ is a special $6$-face $F_{6a}$ so that $u_1$ is a $6p$-vertex and $u_0$ is a $7s^+$-vertex. As both $u_1$ and $u_2$ are $4$-saturated, and $u_1$ is adjacent to a $3$-saturated vertex, we conclude that $u_1$ cannot be adjacent to $u_2$. Otherwise, $u_1$ has two $3^+$-neighbors, which implies that $u_1$ is not a poor vertex. Hence, $E$ is a $6^+$-face. By (R3), $u$ gets $\frac{1}{2}$ from $E$ and $\frac{3}{8}$ from $A$, and by (R1), $u$ gets $\frac{1}{2}$ from $u_0$. So $u$ gets at least $1$ in total, as desired.
[**Case 2.**]{} $u$ is not incident with a special 6-face and either $A$ or $B$ is a non-special $6^+$-face.
Note that by (R3), $u$ receives at least $\frac{1}{2}$ from each of its incident $6^+$-faces that are not special. So we may assume that $u$ is incident with exactly one $6^+$-face and four $5$-faces. Without loss of generality, assume that $A$ is a $6^+$-face and let $B=uu_0w_2u_4v_4$. Note that $u_4$ is non-recolorable, which means that it has either a $3$-saturated neighbor or at least four neighbors colored with $3$. Since a $3^+$-neighbor $u_3$ of $u_4$ is $4$-saturated, we know that $u_4$ cannot be a $6p$-vertex. Therefore, $B$ is not a special $5$-face.
1. We may assume that $u_0$ is a $6$-vertex. For otherwise, $u$ also gets $\frac{1}{2}$ from $u_0$ by (R1), thus $u$ gets at least $1$ in total.
2. We may assume that $w_2$ is not a $2$-vertex. For otherwise, as $\varphi(u_0)=\varphi(u_4)=4$, $w_2$ is colored or can be recolored with $3$, and then $u_0$ must be a $7^+$-vertex, which contradicts (1).
3. We may assume that $w_2$ is a $5s$-vertex. For otherwise, none of $u_0,w_2,u_4$ is a $5p$-, $5s$-, or $6p$-vertex, so $u$ receives at least $\frac{1}{2}$ from $B$ by (R3d), thus $u$ get at least $1$ in total.
4. We may assume that each of $u_3$ and $u_2$ is either a $6r^+$-vertex or a $9^+$-vertex, and $u_1$ is either a $6s^+$-vertex or $9^+$-vertex. For $z\in\{u_3,u_2,u_1\}$, observe that each $z$ must have either a $3$-saturated neighbor (other than $u_i$s) or four neighbors colored with 3 (other than $v_i$s).
5. We may assume that $u_4$ is either a $8^+$-vertex or a $7r$-vertex. It must be that $\varphi(w_2)=3$, for otherwise, we can recolor $w_2$ with the color $3$ and color $u$ with $4$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction. If $d(u_4)\leq 7$, then $u_4$ must have a $3$-saturated neighbor that is not $w_2$, for otherwise, we could recolor $u_0, w_2, u_4$ with $3,4,3$, respectively, and then color $u$ with the color $4$, a contradiction. This implies that $u_4$ is a $7$-vertex and has at least three $5^+$-neighbors, i.e. $w_2, u_3$ and another $5s^+$- or $9^+$-neighbor, so $u_4$ is a $7r$-vertex.
Now, $u_4u_3, u_3u_2, u_2u_1$ are all heavy edges. Since each of $u_3$ and $u_2$ has at least two $5^+$-neighbors that are not $5p$-, $5s$- and $6p$-vertices, by (R1), the heavy edges $u_4u_3, u_3u_2, u_2u_1$ get at least $\frac{1}{3}+\frac{1}{6}$, $\frac{1}{6}\cdot 2$, $\frac{1}{6}$, respectively, from $u_4, u_3, u_2$. By (R2) and (R3d), $u$ receives at least $\frac{1}{2}(\frac{1}{3}+\frac{1}{6}+\frac{1}{6}\cdot 2+\frac{1}{6})=\frac{1}{2}$ from faces $C,D,E$, and thus a total of $1$, as desired.
[**Case 3.**]{} $u$ is not incident with a special $6$-face and both $A$ and $B$ are $5$-faces.
Let $A=uu_0w_1u_1v_1$ and $B=uu_0w_2u_4v_4$.
1. If $k$ is the number of vertices in $\{w_1, w_2\}$ that is either a $2$-vertex or a $5s$-vertex, then $d(u_0)\geq 6+k$. This is because if $w_i$ is either a $2$-vertex or a $5s$-vertex, then $\varphi(w_i)=3$, otherwise we can recolor $w_i$ with $3$ and color $u$ with $4$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction. The lower bound on $d(u_0)$ follows since $u_0$ is $4$-saturated and cannot be recolored with the color 3.
2. We may assume that $C,D,E$ are $5$-faces. For otherwise, $u$ gets at least $\frac{1}{2}$ from an incident $6^+$-face by (R3). Now, if $d(u_0)\ge 7$, then $u$ gets another $\frac{1}{2}$ from $u_0$ by (R1), for a total of $1$. If $d(u_0)= 6$, then each of $w_1$ and $w_2$ is neither $2$-vertex nor $5s$-vertex by (1), and thus each of $u_0,w_1,w_2$ is not a $2/5p/5s/6p$-vertex. By (R3d), $u$ gets at least $\frac{1}{4}\cdot 2$ from $A$ and $B$, for a total of $1$.
3. We observe each of $u_3$ and $u_2$ is either a $9^+$-vertex or a $6r^+$-vertex. This follows from the fact that each of $u_3$ and $u_2$ is $4$-saturated, has two $4$-saturated neighbors, and is not recolorable with $3$ (which implies either a $3$-saturated neighbor or at least four 3-colored neighbors other than $v_2$ and $v_3$).
4. Assume $d(u_1), d(u_4)\leq 8$. For $i\in[2]$, $u_{3i-2}$ is a $7s^+$-vertex if $d(w_i)=2$ and is a $6s^+$-vertex if $d(w_i)\geq 3$.
Now, $u_1u_2, u_2u_3, u_3u_4$ are all heavy edges. By (R1), $u_2$ sends at least $\min\left\{\frac{6-4-3\cdot\frac{1}{2}}{3}, \frac{7-4-5\cdot \frac{1}{2}}{2}, \frac{1}{2}\right\}= \frac{1}{6}$ to each of $u_1u_2$ and $u_2u_3$, and likewise, $u_3$ sends at least $\frac{1}{6}$ to each of $u_2u_3$ and $u_3u_4$.
- Assume that both $w_1$ and $w_2$ are $2$-vertices. Now, $u_0$ is an $8^+$-vertex and gives $\frac{1}{2}$ to $u$ by (R1a). Also, $u_1$ ($u_4$, respectively) is a $7s^+$-vertex and gives at least $\frac{7-4-5\cdot\frac{1}{2}}{2}=\frac{1}{4}$ to the heavy edge $u_1u_2$ ($u_3u_4$, respectively) by (R1). By (R2) and (R3), $C,D,E$ give at least $\frac{1}{2}(\frac{1}{6}\cdot 4+\frac{1}{4}\cdot 2)>\frac{1}{2}$ to $u$.
- Without loss of generality, assume that $w_1$ is a $2$-vertex and $w_2$ is a $3^+$-vertex. Now, $u_0$ is a $7^+$-vertex and gives $\frac{1}{2}$ to $u$ by (R1), and the $5$-face $B$ gives at least $\frac{1}{4}$ to $u$ by (R3). Then $u$ gets at least $\frac{1}{2}+\frac{1}{4}$ from $u_0$ and $B$, and at least $\frac{1}{2}(\frac{1}{6}\cdot 4)>\frac{1}{4}$ from $C,D,E$ by (R3).
- Finally, assume that neither of $w_1,w_2$ is a $2$-vertex. By (R3d), each of $A,B$ gives at least $\frac{1}{4}$ to $u$. Furthermore, each of $A,B$ gives at least $\frac{1}{2}$ to $u$ if neither $w_1$ nor $w_2$ is a $5s$-vertex. (In the case, each of $A,B$ cannot contain any $2/5p/5s/6p$-vertex other than $u,v_1,v_4$ because each of $u_0,u_1,u_4$ is $4$-saturated, non-recolorable and has at least two $3^+$-neighbors.) Now if either $w_1$ or $w_2$ is a $5s$-vertex, then $u_0$ is a $7^+$-vertex, and thus $u_0$ gives $\frac{1}{2}$ to $u$ so $u$ gets a total of $\frac{1}{4}\cdot2+\frac{1}{2}\geq 1$.
Hence, $u$ always gets at least $1$, as desired.
If $u$ is a $6p$-vertex, then $\mu^*(u)\geq 0$.
The initial charge of $u$ is $2$, and by (R1), $u$ gives out $\frac{1}{2}\cdot 5$ to its $2$-neighbors. To show $\mu^*(u)\ge 0$, we need to prove that $u$ receives at least $\frac{1}{2}$ by the discharging rules. Let $N(u)=\{u_0, v_i: i\in[5]\}$ where $d(u_0)>2$ and $d(v_i)=2$ for $i\in [5]$. For $i\in [5]$, let $u_i$ be the neighbor of $v_i$ that is not $u$. We assume that the six faces incident with $u$ are $A, B, C, D, E, F$ as shown in Figure \[figure-5p\].
Since $G-u$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there exists a $(3,4)$-coloring $\varphi$ of $G-u$. By Lemma \[vertex-degree\], either $\varphi(v_i)=4$ for $i\in[5]$ and $u_0$ is $3$-saturated and non-recolorable, or at least four of $v_i$’s are colored with $3$ and $u_0$ is $4$-saturated and non-recolorable. In the former case, $u_i$ with $i\in[5]$ are $3$-saturated and non-recolorable, and in the latter case, at least four of the $u_i$’s are $4$-saturated and non-recolorable.
1. \[i-u0\] We may assume that $d(u_0)\le 6$. By (R1), $u$ gets $\frac{1}{2}$ from $u_0$ if $d(u_0)\ge 7$.
2. Also, we may assume that $u$ is not incident with a special face $F_{6b}$.
If $u$ is incident with $F_{6b}$, then by Lemma \[lem:F2\], $u$ is also incident with two other faces where each face is not a $5$-face with two $2$-vertices. By (R3), each face that is not a 5-face with two $2$-vertices sends at least $\frac{1}{6}$ to $u$, plus $F_{6b}$ sends $\frac{1}{6}$ to $u$. Thus, $u$ gets a total of at least $\frac{1}{2}$.
3. We may assume that $A$ is a $5$-face with two $2$-vertices.
By (R3), each face that is neither a 5-face with two 2-vertices nor $F_{6b}$ gives at least $\frac{1}{4}$ to $u$, so we may assume that one of $A$ and $B$, say $A$, must be a $5$-face with two $2$-vertices.
4. $B$ is not a $5$-face with two $2$-vertices, and furthermore we may assume that $C,D,E,F$ are $5$-faces with two 2-vertices.
Suppose that $B$ is a $5$-face with two $2$-vertices, so that both $w_1$ and $w_2$ are $2$-vertices. If $u_0$ is 3-saturated, then both $u_1$ and $u_5$ are $3$-saturated. So $w_1,w_2$ are colored or can be recolored with $4$. This implies that $u_0$ is a $7^+$-vertex, which contradicts (\[i-u0\]). If $u_0$ is $4$-saturated, then either $u_1$ or $u_5$ is $4$-saturated. Without loss of generality assume that $u_1$ is $4$-saturated, so either $\varphi(w_1)=3$ or $w_1$ can be recolored with $3$. This implies that $u_0$ is a $7^+$-vertex, which contradicts (\[i-u0\]).
Now $u$ receives at least $\frac{1}{4}$ from $B$ by (R3). We may assume that $C,D,E,F$ are 5-faces with two 2-vertices, for otherwise, $u$ receives another $\frac{1}{4}$ to get a total of at least $\frac{1}{2}$.
5. $B$ is not a special face $F_7$.
Without loss of generality, assume that $B$ is a special face $F_7$, which sends $\frac{3}{8}$ to $u$. This implies that $u_0$ is a $5s$-vertex, which further implies that $u_0$ is 3-saturated and $u_i$ is 3-saturated for each $i\in[5]$. Note that $A$ is a $5$-face with two $2$-vertices. Since $u_0$ is non-recolorable, it has a $4$-saturated ($6s^+$- or $9^+$-)neighbor, which means that $\varphi(w_1)=3$. Recolor $w_1$ with $4$ and then color $u$ with $3$, we obtain a $(3, 4)$-coloring of $G$, a contradiction.
6. \[i-5Fab\] We may assume that $B$ is a $6^-$-face. Moreover, if $B$ is a 5-face, then it can be neither $F_{5a}$ nor $F_{5b}$. Otherwise, $u$ receives at least $\frac{1}{2}$ by (R3a), (R3b), (R3d).
7. \[i-6face\] $B$ must be a $6$-face.
Suppose otherwise. From above, assume that $B$ is a 5-face with at most one 2-vertex. Note that $B$ must have exactly one 2-vertex since $v_5$ is a 2-vertex. By (R3), $B$ gives $u$ at least $\frac{1}{2}$ if $u$ is the only $5p$-, $5s$-, or $6p$-vertex on $B$. So consider the case when $B$ is a $5$-face with one $2$-vertex $v_5$ and at least two $5p$-, $5s$-, or $6p$-vertices. Note that none of $u_0,w_2,u_5$ can be a $6p$- or $5p$-vertex.
Assume that $u_0$ is $3$-saturated. Then $\varphi(v_i)=4$ and $u_i$ is $3$-saturated for $i\in[5]$. The 2-vertex $w_1$ is colored or can be recolored with $4$. Therefore $u_0$ is a $6s^+$-vertex. Thus, either $u_5$ or $w_2$ is a $5s$-vertex. Since $B$ is not $F_{5a}$ or $F_{5b}$ by (\[i-5Fab\]), when one of $u_5$ and $w_2$ is a $5s$-vertex, the other one is a $6^-$-vertex. Now if $w_2$ is a $5s$-vertex, then $w_2$ is colored or can be recolored with $4$ without making $w_2$ $4$-saturated, so $u_0$ must have another $4$-saturated neighbor. Thus, $d(u_0)\ge 7$, which contradicts (\[i-u0\]). If $u_5$ is a $5s$-vertex, then $w_2$ must be the $4$-saturated neighbor of $u_5$ and $u_0$. Thus, we can recolor $u_0,u_5$ with $4$ and $w_2$ with $3$, and color $u$ with $3$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
Assume that $u_0$ is $4$-saturated. Then $u_0$ is a $6s^+$-vertex. Now, since $d(u_0)\leq 6$, the $2$-vertex $w_1$ cannot be colored or recolored with $3$. This implies that $\varphi(w_1)=4$ and $\varphi(u_1)=3$, and moreover, $\varphi(v_1)=4$. Furthermore, for $i\in[5]-\{1\}$, $\varphi(v_i)=3$ and $u_i$ is $4$-saturated. Since $u_5$ is $4$-saturated, it is a $6s^+$-vertex. So $w_2$ is a $5s$-vertex, and $\varphi(w_2)=3$ or $w_2$ can be recolored with $3$. Again, since $B$ is neither $F_{5a}$ nor $F_{5b}$, we know $d(u_5)\le 6$. Then $w_2$ is the only $3$-saturated neighbor of $u_0$ and $u_5$. So by recoloring $u_0,w_2, u_5$ with $3,4,3$, and coloring $u$ with $4$, we obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
From now on, denote $B=uu_0w_2w_2'u_5v_5$.
8. \[i-2vx\] Either $w_2$ or $w_2'$ is a $2$-vertex.
For otherwise, $B$ contains exactly one $2$-vertex $v_5$. Moreover, the only $6p^-$-vertex that $B$ contains is $u$. We may assume that $B$ contains at least three $5s$-vertices, for otherwise $u$ gets at least $\frac{1}{3}\cdot(6-4-\frac{1}{2})=\frac{1}{2}$ from $B$ by (R3c). Since no $5s$-vertex can be adjacent to two $5s$-vertices, by Lemma \[vertex-degree\], we know either $w_2$ or $w_2'$ is not a $5s$-vertex, and both $u_0$ and $u_5$ are $5s$-vertices. Now, neither $u_0$ nor $u_5$ can be $4$-saturated, thus they are both $3$-saturated. Moreover, $\varphi(v_i)=4$ and $\varphi(u_i)=3$ for $i\in [5]$. Now we can recolor $w_1$ with $4$ and color $u$ with $3$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
9. $u_0$ cannot be a $5p/5s/6p$-vertex.
As $u$ is a $6p$-vertex, $u_0$ must be a $5s^+$- or $9^+$-vertex by Lemma \[vertex-degree\] (iii). If $u_0$ is a $5s$-vertex, then $u_0$ is $3$-saturated, thus $\varphi(v_i)=4$ and $\varphi(u_i)=3$ for each $i\in [5]$. This means that the $2$-vertex $w_1$ is colored or can be recolored with $4$. Since $u_0$ is non-recolorable, it has a $4$-saturated neighbor and thus $u_0$ is a $6^+$-vertex, a contradiction.
10. $u_5$ cannot be a $5p$- or $6p$-vertex.
Suppose otherwise. Then $w_2'$ is a $2$-vertex and $u_4$ is the unique $3^+$-neighbor of $u_5$. Since $G-v_5$ has fewer edges than $G$, it has a $(3,4)$-coloring $\phi$. Then $u_5$ and $u$ are respectively $3$- and $4$-saturated and both are non-recolorable.
First let $u_5$ be $3$-saturated. Then $u_4$ is the unique $4$-saturated neighbor of $u_5$. So $v_4$ can be recolored with $3$. But $\phi(v_i)=\phi(u_0)=4$ for $i\in [3]$ in order to make $u$ $4$-saturated. So we can recolor $u$ with $3$, a contradiction.
Now assume that $u_5$ is $4$-saturated, which means that $u_5$ is a $6p$-vertex. Then $u_4$ is the unique $3$-saturated neighbor of $u_5$. Now $v_4$ can be recolored with $4$. So $u_0$ is $4$-saturated and $v_i$ for $i\in [3]$ is colored with $3$, which implies that $u_i$ for $i\in [3]$ is colored with $4$. Then $w_1$ can be recolored with $3$. On the other hand, since $u_5$ is a $4$-saturated $6p$-vertex, $w_2'$ must be a $2$-vertex colored with $4$. Thus $w_2$ must be colored with $3$. (For otherwise, $w_2'$ can be recolored with $3$, a contradiction.) As $u_0$ is $4$-saturated, $d(u_0)\ge 7$, a contradiction.
11. If $u_5$ is a $5s$-vertex, then $u_5$ and $u$ are the only $5p/5s/6p$-vertices on $B$.
Let $u_5$ be a $5s$-vertex. Again $G-v_5$ has a $(3,4)$-coloring $\phi$, in which $u_5$ is $3$-saturated and $u$ is $4$-saturated and both are non-recolorable, by Lemma \[vertex-degree\]. So $\phi(v_i)=4$ for $i\in [4]$ and $u_0$ is $3$-saturated. Then $\phi(u_i)=3$ for $i\in [4]$. So the $2$-vertex $w_1$ can be recolored with $4$.
If $w_2'$ is a $2$-vertex, then $\phi(w_2')=3$ and $w_2$ must be $4$-saturated. In this case, $w_2$ is the only $4$-saturated neighbor of $u_0$ (note that $d(u_0)\le 6$), so $w_2$ cannot be a $6p$- or $5s$-vertex. So $u_5$ and $u$ are the only $5p/5s/6p$-vertices on $B$, as desired.
So $w_2$ is a $2$-vertex and $w_2'$ is the $4$-saturated neighbor of $u_5$. We claim $\phi(w_2)=3$, for otherwise, $u_0$ has to be a $7^+$-vertex to be $3$-saturated and non-recolorable. Now $w_2'$ is non-recolorable and $4$-saturated, it must be a $6s^+$-vertex or an $8^+$-vertex. So again, $u_5$ and $u$ are the only $5p/5s/6p$-vertices on $B$, as desired.
By (9)-(11), $B$ contains at most two $5p/5s/6p$-vertices, thus $u$ receives at least $\frac{1}{2}$ from $B$ by (R3c), as desired.
If $u$ is a $5s$-vertex, then $\mu^*(u)\geq 0$.
The initial charge of $u$ is $1$, and by (R1c), $u$ gives out $\frac{1}{2}\cdot 3$ to its $2$-neighbors. To show $\mu^*(u)\ge 0$, we need to prove that $u$ receives $\frac{1}{2}$ by the discharging rules.
Let $N(u)=\{x,y,v_1,v_2,v_3\}$ with $d(x),d(y)>2$ and $d(v_i)=2$ and let $u_i$ be the other neighbor of $v_i$ for $i\in[3]$. Depending on whether $x,y,u$ are on the same face or not, we could have two different embeddings around $u$ (see Figure \[figure-5s\]).
![Two possible embeddings containing $5s$-vertex $u$ with five $5$-faces.[]{data-label="figure-5s"}](figure-5s.pdf)
Since $G-v_2$ is a graph with fewer edges than $G$ and the number of $3^+$-vertices did not increase, there exists a $(3,4)$-coloring $\varphi$ of $G-v_2$. By Lemma \[vertex-degree\], $u$ is $3$-saturated and $u_2$ is $4$-saturated, and both are non-recolorable. Without loss of generality, we may assume that $x$ is $4$-saturated and $\varphi(y)=\varphi(v_1)=\varphi(v_3)=3$. Also, $u_1$ and $u_3$ are $4$-saturated and non-recolorable.
We may assume $d(x),d(y)\le 7$, for otherwise, $u$ gets at least $\frac{1}{2}$ by (R1a). Moreover, by Lemma \[lem:7-face\] and (R3), $u$ receives at least $\frac{1}{4}$ from each incident $6^+$-face, so we assume that $u$ is incident with at most one $6^+$-face.
[**Case 1.**]{} $x,y,u$ are not on the same face (see Figure \[figure-5s\] (I) for an illustration). We let $w_1, w_2$ be the neighbors of $x$ on $A$ and $B$, respectively.
1. $u$ is not incident with a special $5$-face $F_{5b}$ or $F_{5c}$. This follows from the fact that $u$ is adjacent to a 2-vertex on each incident face.
2. We may assume that neither of $B,D$ is a special $5$-face $F_{5a}$ or $F_{5d}$. It follows that neither of $B,D$ is a special $5$-face.
By symmetry, let $B$ be a special 5-face $F_{5a}$ or $F_{5d}$. Then $x$ is a $7s^+$-vertex, $w_2$ is a $5s^+$-vertex, and $u_3$ is a $6p$-vertex. So $u_1u_3\not\in E(G)$, and $C$ must be a $6^+$-face. By (R1), $u$ receives at least $\frac{1}{4}$ from $x$, and by (R3), $u$ receives at least $\frac{1}{4}$ from $C$.
3. We may assume that $d(w_1)>2$ and $d(w_2)=2$.
Suppose otherwise that $d(w_1)=2$. When $A$ is a $6^+$-face, $u$ receives at least $\frac{1}{4}$ from $A$ by Lemma \[lem:7-face\] and (R3). Moreover, $B$ must be a $5$-face with two $2$-vertices, for otherwise, $u$ receives another $\frac{1}{4}$ from $B$ by (2) and (R3d). So $w_2$ is a $2$-vertex and is colored or can be recolored with $3$. Since $x$ is $4$-saturated and non-recolorable, $x$ must be a $7s^+$-vertex with another $3$-saturated ($5s^+$- or $9^+$-)neighbor other than $u$. Thus $u$ receives at least $\frac{1}{4}$ from $x$ by (R1), and then $\frac{1}{2}$ in total. When $A$ is a $5$-face, $w_1$ is colored or can be recolored with $3$. Again, $x$ is a $7s^+$-vertex and gives $u$ at least $\frac{1}{4}$. Also, we may assume that $B$ is a $5$-face with two $2$-vertices, for otherwise, $u$ can receive another $\frac{1}{4}$ from $B$ by (2) and (R3). So $w_2$ is colored or can be recolored with $3$. This means that $x$ is an $8^+$-vertex, a contradiction. Therefore, we assume that $d(w_1)>2$.
Now suppose that $d(w_2)>2$. By (R3), $u$ receives at least $\frac{1}{4}$ from $B$, since $B$ is a $6^+$-face or a $5$-face with only one $2$-vertex. Since $d(w_1)>2$, we may assume that $A$ is a special $5$-face, for otherwise, $u$ can receive another $\frac{1}{4}$ from $A$ by (R3). By (1), $A$ is an $F_{5a}$ or $F_{5d}$. Clearly, $x$ is a $7s^+$-vertex with at least two $5s^+$-neighbors. So $u$ receives $\frac{1}{4}$ from $x$ by (R1) as well. Thus we assume that $d(w_2)=2$.
4. We may assume that $A$ is a special $5$-face and $B,C,D,E$ are $5$-faces.
If $A$ is not a special $5$-face, then $u$ can receive at least $\frac{1}{4}$ from $A$ by (R3) since $A$ is a $6^+$-face or a $5$-face with only one $2$-vertex (note that $d(w_1)>2$). If $B$ is a $6^+$-face, $u$ receives another $\frac{1}{4}$ from $B$. Otherwise, $B$ is a $5$-face with two $2$-vertices because $d(w_2)=2$. So $w_2$ is colored or can be recolored with $3$. As $x$ is $4$-saturated and non-recolorable, $x$ must be a $7s^+$-vertex and can give $u$ at least $\frac{1}{4}$ by (R1).
By (1), $A$ is an $F_{5a}$ or $F_{5d}$. Therefore, $x$ is a $7s^+$-vertex with at least two $5s^+$-neighbors. Then $u$ receives at least $\frac{1}{4}$ from $x$ by (R1). If one of $B,C,D,E$ is a $6^+$-face, then $u$ can receive another $\frac{1}{4}$ by (R3). Therefore we may assume that $B, C, D,E$ are $5$-faces.
5. We may assume that $A$ is an $F_{5a}$.
For otherwise, by (1) and (4), $A$ must be $F_{5d}$. Then $d(x)=7$, $u_2$ is a $6p$-vertex and $w_1$ is a $5s$-vertex. So $w_1$ is the only $3$-saturated neighbor of the $6p$-vertex $u_2$. Since $x$ has at least two $5s^+$-neighbors, $u$ gets at least $\frac{1}{4}$ from $x$ by (R1). We may assume that $B$ is a $5$-face with two $2$-vertices, for otherwise, $u$ receives another $\frac{1}{4}$ from $B$. Note that $w_2$ is colored or can be recolored with $3$. Since $x$ is $4$-saturated, non-recolorable and $d(x)=7$, $w_1$ is the only $3$-saturated neighbor of $x$. Thus we can recolor $x, w_1, u_2, u$ with $3, 4, 3, 4$, respectively, then color $v_2$ with $4$ to obtain a $(3, 4)$-coloring of $G$, which is a contradiction.
By (5), $u_2,w_1,x$ are $6p$-, $6s^+$-, $7s^+$-vertices, respectively. Clearly, $x$ gives at least $\frac{1}{4}$ to $u$ and the heavy edge $xw_1$. By (2) and (4), both $B$ and $D$ are $5$-faces with two $2$-vertices, for otherwise, $u$ can receive another $\frac{1}{4}$ from $B$ or $D$. Then both of $w_2,w_3$ are $2$-vertices. Also, $C$ is a $5$-face. Note that $w_2$ is colored or can be recolored with $3$ and both of $u_1,u_3$ are $4$-saturated and non-recolorable. So each of $u_1,u_3$ has either a $3$-saturated neighbor or at least four neighbors colored with $3$. This implies that $u_1$ is a $6s^+$- or $8^+$-vertex, and $u_3$ is either an $8^+$-vertex or a $7s^+$-vertex with another $5s^+$- or $9^+$-neighbor distinct from $u_1$. So $u_3$ gives at least $\frac{1}{4}$ to the heavy edge $u_1u_3$. Then $u$ receives $\frac{1}{4}+\frac{1}{2}\cdot(\frac{1}{4}+\frac{1}{4})=\frac{1}{2}$ from $x$, $A$ and $C$ in total.
[**Case 2.**]{} $x,y,u$ are in the same face, denoted by $A$ (see for example Figure \[figure-5s\] (II)). Let $w_1$ be the neighbor of $y$ on $E$ and $w_2$ be the neighbor of $x$ on $B$.
1. $u$ must be incident with a special $5$-face $F_{5a}, F_{5b}, F_{5c}$ or $F_{5d}$.
Assume that $u$ has none of the special 5-faces. By (R3), each $6^+$-face or $5$-face with at most one $2$-vertex gives at least $\frac{1}{4}$ to $u$, in particular, $A$ gives $\frac{1}{4}$ to $u$. So all other faces are $5$-faces with two $2$-vertices. This implies that $d(w_1)=d(w_2)=2$.
Recall that $x$ and $u_3$ are $4$-saturated. Then $w_2$ is colored or can be recolored with $3$. Note that $d(x)\le 6$, for otherwise, $u$ gets $\frac{1}{4}$ from $x$. So $u,w_2$ are the only neighbors of $x$ of color $3$. Now recolor $x$ with $3$ and $u$ with $4$, and we can color $v_2$ with $3$, a contradiction.
2. None of $C,D,B,E$ is a special $5$-face.
Clearly $C,D$ cannot be special $5$-faces. If $B$ or $E$ is a special $5$-face, then it could only be in $\{F_{5a}, F_{5d}\}$. By symmetry, assume that $B$ is a special 5-face. Then $u_3, w_2, x$ are $6p$-, $5s^+$- and $7s^+$-vertices, respectively. Since $u_3$ is a $6p$-vertex, $u_3u_2\not\in E(G)$, so $C$ is a $6^+$-face, thus $u$ gets at least $\frac{1}{4}$ from $C$ by (R3c). So $u$ gets at least $\frac{1}{2}$ since $u$ gets at least $\frac{1}{4}$ from $x$ by (R1), a contradiction.
3. $A$ cannot be a special $5$-face.
Clearly, $A$ cannot be a special $5$-face $F_{5a}$. So we may assume that $A$ is a special $5$-face in $\{F_{5b}, F_{5c}, F_{5d}\}$. So it contains a $7^+$-vertex which is $x$ or $y$. By (R1), $u$ receives at least $\frac{1}{4}$ from the $7^+$-vertex. We may assume that $B,E$ are $5$-faces with two $2$-vertices (for otherwise, $u$ receives at least $\frac{1}{4}$ from them). Then $d(w_1)=d(w_2)=2$.
Note that $w_2$ is colored or can be recolored with $3$, since both $u_3$ and $x$ are $4$-saturated. Since $u_3$ is non-recolorable, it must have a $3$-saturated neighbor and four neighbors of color $4$, so $u_3$ must be a $7^+$-vertex. Note that $u_2$ must be a $6r^+$-vertex or an $8^+$-vertex and $u_1$ be a $6s^+$-vertex or an $8^+$-vertex, since they are all $4$-saturated and non-recolorable. By (R1), $u_2$ gives at least $\frac{6-4-3\cdot \frac{1}{2}}{3}=\frac{1}{6}$ to each of the heavy edges $u_2u_3$ and $u_2u_1$, and $u_3$ gives at least $\frac{7-4-5\cdot \frac{1}{2}}{2}=\frac{1}{4}$ to the heavy edge $u_2u_3$. So by (R2) and (R3), $u$ gets at least $\frac{1}{2}(\frac{1}{4}+\frac{1}{6}\cdot 2)>\frac{1}{4}$ from $C$ and $D$. So $u$ gets at least $\frac{1}{2}$.
Now (2) and (3) contradict (1).
Every vertex $u\in V(G)$ has $\mu^*(u)\geq 0$.
We consider the cases according to the degree of $u$. Clearly, $\mu^*(u)=2-4+4\cdot\frac{1}{2}=0$ if $d(u)=2$ by the rules. If $d(u)\ge 8$, then $\mu^*(u)\ge d(u)-4-d(u)\cdot \frac{1}{2}\ge 0$. If $d(u)=7$, then $u$ has at least one neighbor that is not a $2$-, $5p$-, or $6p$-vertex by Lemma \[vertex-degree\]. Thus by (R1b), $\mu^*(u)\ge 7-4- \frac{6}{2}\ge 0$ . For $d(u)\in \{5,6\}$, the lemmas have shown that $\mu^*(u)\ge 0$. Note that $d(u)\not=3$ and $4$-vertices have initial and final charges $4-4=0$.
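As a quick numerical sanity check of the charge arithmetic used above, the small script below (an illustrative sketch, not part of the proof) re-evaluates the bounds $2-4+4\cdot\frac{1}{2}$, $7-4-6\cdot\frac{1}{2}$ and $d-4-d\cdot\frac{1}{2}$ for $d\ge 8$ with exact rational arithmetic.

```python
from fractions import Fraction

half = Fraction(1, 2)

# d(u) = 2: the expression used in the first case above
assert 2 - 4 + 4 * half == 0

# d(u) = 7: at most six neighbors receive 1/2 each
assert 7 - 4 - 6 * half == 0

# d(u) >= 8: the charge stays non-negative even if every neighbor receives 1/2
for d in range(8, 100):
    assert d - 4 - d * half >= 0

print("final-charge bounds verified")
```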
[^1]: $*$Corresponding author.\
The first author’s research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07043049), and also by the Hankuk University of Foreign Studies Research Fund. The research of the second author is supported in part by the NSA grant H98230-16-1-0316 and the Natural Science Foundation of China (11728102). The last author’s research is supported by the Shandong Provincial Natural Science Foundation (No. ZR2019MA032, ZR2014JL001, ZR2016AQ01), the National Natural Science Foundation of China (No.11701342) and the Excellent Young Scholars Research Fund of Shandong Normal University of China.
---
abstract: 'The ultra-low density limit of the Saha ionization formula suggests that, in this limit, matter would prefer to remain ionized. This has a very important implication for the cosmic structures known as Voids. These are ultra-low density (much less than the average density of matter in the Universe) regions in galactic clusters and superclusters. The ionization formula implies that matter trapped in the Voids should be ionized. Therefore, we expect a very faint radiation glow from the Voids resulting from the motion of the charged particles.'
address: |
THEORETICAL PHYSICS DIVISION\
BHABHA ATOMIC RESEARCH CENTRE,CENTRAL COMPLEX\
TROMBAY, MUMBAI-400085, INDIA
author:
- MOFAZZAL AZAM
title: SAHA IONIZATION FORMULA AND THE VOIDS
---
.8 in
Introduction
============
The Saha ionization formula [@saha20] has played a very important role in the development of astrophysics. The ultra-low density limit of this formula has been known for a long time [@feynman63]. In this limit, the formula suggests that atoms in equilibrium prefer to remain in an ionized state. This ionization, arising just from “expansion” as the density goes down, has been listed as one of the surprises in theoretical physics by Peierls [@peierls79]. In this paper we point out that this ultra-low density limit of the Saha ionization formula is very relevant for the cosmic structures known as Voids. These ultra-low density (much less than the average density of matter in the Universe) regions dominate the volume of the Universe. The ionization formula implies that matter trapped in the Voids should be ionized. Therefore, we expect a very faint radiation glow from the Voids resulting from the motion of the charged particles.
Ultra-low density limit of the Saha ionization formula
=====================================================
The ionization formula is given by, $$\begin{aligned}
\frac{n_{e}n_{i}}{n_a}=\frac{1}{v_a}e^{-W/kT}
\end{aligned}$$ In the equation above, $n_e$, $n_i$, $n_a$ are the densities of electrons, ions and atoms (not ionized), respectively. $W$ is the ionization potential, $T$ is the temperature and $k$ is the Boltzmann constant. The volume occupied by a bound electron at temperature $T$ is represented by $v_a$. It is, essentially, the volume contained within a thermal de Broglie wavelength. $$\begin{aligned}
v_a=\lambda_{th}^{3}=\Big(\frac{2\pi \hbar^2}{m_{e}kT}\Big)^{3/2}
\end{aligned}$$ Let us consider a box of volume $V$ which, to start with, contains $N$ hydrogen atoms. Let a fraction $X$ of them be ionized. In this case, $n_e=\frac{N}{V}X=n_i$ and $n_a=(1-X)\frac{N}{V}$. Substituting these values in the ionization formula, we obtain $$\begin{aligned}
\frac{X^2}{1-X}~\frac{N}{V}~=~\frac{1}{v_a}e^{-W/kT}\end{aligned}$$ From the equation above, we see that the fraction of charged particles in equilibrium increases when we increase the volume (i.e., decrease the density). In the ultra-low density limit ($\frac{N}{V} \rightarrow 0$ or $\frac{V}{N} \rightarrow \infty$), atoms would prefer to remain ionized [@feynman63; @peierls79; @ghosh98]. Before we discuss the nature and consequences of the ultra-low density limit, let us introduce the cosmic structures known as Voids [@zeldovich82; @varun95], which are, essentially, underdense (less than the average density of matter in the Universe) regions in galactic clusters and superclusters. This requires a review of some aspects of standard cosmology and of the theory of structure formation at large scales. We review very briefly, in the next section, the material relevant for our discussion.
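To make this limit explicit, the quadratic equation above can be solved for the ionized fraction: writing $r = e^{-W/kT}/\left(v_a \frac{N}{V}\right)$, one has $X^2+rX-r=0$, i.e. $X = 2/\left(1+\sqrt{1+4/r}\right)$. The short script below is a minimal numerical illustration of this behaviour (the temperature and the density values are arbitrary choices for the example, not observational inputs): as the density $N/V$ decreases, $X$ approaches unity.

```python
import numpy as np

# physical constants (SI units)
hbar = 1.054571817e-34          # J s
k_B  = 1.380649e-23             # J / K
m_e  = 9.1093837015e-31         # kg
W    = 13.6 * 1.602176634e-19   # hydrogen ionization potential, J

def ionized_fraction(n, T):
    """Ionized fraction X from X^2/(1-X) * n = exp(-W/kT) / v_a (numerically stable form)."""
    v_a = (2.0 * np.pi * hbar**2 / (m_e * k_B * T)) ** 1.5
    r = np.exp(-W / (k_B * T)) / (v_a * n)
    return 2.0 / (1.0 + np.sqrt(1.0 + 4.0 / r))

T = 1.0e4   # illustrative temperature in K
for n in [1e24, 1e20, 1e15, 1e10, 1e5, 1.0]:   # number densities N/V in m^-3
    print(f"N/V = {n:8.1e} m^-3  ->  X = {ionized_fraction(n, T):.6f}")
```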
The Voids
=========
It is very well established through observation that at distance scales of the order of 200-300 megaparsecs, the Universe is isotropic and homogeneous. This means that if we pick a region of the Universe of dimension 200-300 megaparsecs at any distance and in any direction, it will contain the same amount of matter. Therefore, at this scale the density of matter can be considered to be constant. In Newtonian gravity, such a distribution of matter implies that at every point in space the potential and force are unbounded [@landau75]. This dilemma is resolved in the General Theory of Relativity. For an isotropic and homogeneous distribution of matter one assumes the Friedmann-Robertson-Walker metric, given by the line element $$\begin{aligned}
ds^{2}= dt^{2}-a^{2}(t)(\frac{dr^{2}}{1-kr^{2}}+r^{2}d\theta^{2}+r^{2}\sin^{2} \theta d\phi^{2})
\end{aligned}$$ in which the Einstein equation, $$\begin{aligned}
R_{\mu \nu}-\frac{1}{2} g_{\mu \nu}R=8\pi G\, T_{\mu \nu}
\end{aligned}$$ takes the simple form [@landau75], $$\begin{aligned}
\frac{\dot{a}^{2} +ka^{2}}{a^{4}} = \frac{8\pi G}{3} \rho_{0}
\end{aligned}$$ where $a(t)$ is the scale factor, $\rho_{0}$ is the averaged constant density, and $k=1, -1$ or $0$, respectively, for a closed, open and flat universe. This equation, along with the equation of state, describes the isotropic and homogeneous universe. The equations clearly show that the isotropic and homogeneous distribution of matter cannot be stable: the Universe expands. The constant density serves as a source term for the evolution of the scale factor. It does not give rise to an attractive gravitational force. Therefore, the question that arises is: what is the source of gravity at large scales? The answer is obtained as follows. When the mean free path of the particles is small, matter can be treated as an ideal fluid and Newton’s equations governing the motion of gravitating collisionless particles in an expanding Universe can be written in terms of ${\bf x}= {\bf r}/a$ (the comoving space coordinate), ${\bf v} ={\bf \dot{r}}-H{\bf r}=a{\bf \dot{x}}$ (the peculiar velocity field, $H$ being the Hubble constant), $\phi({\bf x},t)$ (the Newtonian gravitational potential) and $\rho({\bf x},t)$ (the matter density). This gives us the following set of equations [@varun95; @strauss95]. Firstly, the Euler equation, $$\begin{aligned}
\frac{\partial (a {\bf v})}{\partial t}+({\bf v}\cdot{\bf \nabla_{x}}){\bf v}=
-\frac{1}{\rho}{\bf \nabla_{x}} P-{\bf \nabla_{x}}\phi
\end{aligned}$$ Next the continuity equation $$\begin{aligned}
\frac{\partial \rho}{\partial t}+ 3H\rho +\frac{1}{a}{\bf \nabla_{x}}\cdot
(\rho {\bf v}) =0
\end{aligned}$$ And, finally the Poisson equation $$\begin{aligned}
{\bf \nabla_{x}}^{2} \phi =4\pi Ga^{2}(\rho-\rho_{0})
=4\pi Ga^{2} \rho_{0} \delta
\end{aligned}$$ where $\rho_{0}$ is the mean background density and $\delta=\rho/\rho_{0}-1$ is the density contrast.
Therefore, at large scales, the source of gravity is not the average density $\rho_{0}$ but the density fluctuations, $\delta \rho >0$. It is a subject of study in the theory of structure formation as to what kind of density fluctuations would grow in time and lead to the formation of galaxies, and of clusters and superclusters of galaxies [@Padmanabhan93; @Peebles80; @varun95; @strauss95]. It is important to remember that at scales of 200-300 megaparsecs the Universe is homogeneous and isotropic and acquires a constant density; therefore, if in some subregion $\delta \rho >0$, there must be some subregion where $\delta \rho <0$, so as to reproduce the constant density profile. These domains with $\delta \rho <0$ are known as [**Voids**]{} [@zeldovich82; @varun95]. Note that for Voids $\delta\rho/\rho_{0}$ is always bounded below by $-1$. Such regions of Voids dominate the volume of the Universe, giving rise to cellular structures with the clusters and superclusters of galaxies forming string-like walls around them. The existence of Voids is supported by direct observation as well as by numerical simulation of the hydrodynamic equations [@varun95; @ryden96; @hoyle01; @antonu00]. The observed Voids seem to have dimensions of several (tens of) megaparsecs.
Conclusion
==========
The Voids are the ultra-low density regions in the Universe, and these are the regions where one would expect to observe the consequences of the ultra-low density limit of the Saha ionization formula. As discussed before, in this limit, atoms would prefer to remain ionized. At this stage, the important question is: what is the source of the ionization energy? There is starlight, but at high redshift its intensity is very low. The most common source of ionization energy at high redshift is the light from quasars.
The second source, which may sound somewhat speculative, is the following. In the beginning the Voids expand faster than the Universe [@varun95]. However, in the radiation-dominated era the whole Universe is kept in a single equilibrium state by the radiation field. During the decoupling of radiation, this equilibrium is destroyed, and in the process some ions may remain trapped in the Voids.
The Saha ionization formula implies that, at ultra-low density, once ionization takes place there is hardly any chance for recombination [@feynman63; @peierls79; @ghosh98]. Therefore, the motion of the charged particles in the Voids should create a faint radiation glow.
[99]{} Saha,M.N. 1920,Phil.Mag.[**40**]{} 472 Feynman,R.P., Leighton,R. and Sands,M. 1963 The Feynman Lectures on Physics, Chapter [**42**]{}, Section [**42.3**]{},The Addison-Wisely Publishing Inc Peierls,R. 1979,Surprises in Theoretical Physics,Princeton University Press, Princeton, NJ, [**pp 52-5**]{}, Ghosh,K. and Ghosh,G. 1998, Eur.J.Phys. [**19**]{} 7 Zeldovich,Ya.B., Einasto,J. and Sandarin,S.F. 1982, Nature [**300**]{} 407 Sahni,V. and Coles,P. 1995, Physics Reports [**262**]{} 1 (Section 5) Landau,L.D. and Lifshitz,E.M. 1975, The Classical Theory of Fields, Pergamon Press Ltd., Oxford ,England,[**4th.Ed.**]{}, Chapter[**12**]{} Padmanabhan,T. 1993, Structure Formation in the Universe, Cambridge, Cambridge University Press Peebles,P.J.E. 1980, The Large Scale Structure in the Universe, Princeton University Press, Princeton Strauss,M.A. and Willick,J.A. 1995, Physics Reports [**261**]{} 271 Ryden,B.S. and Mellot,A.L. 1996, Astrophys.J [**470**]{} 160 Hoyle,F. and Vogeley,M.S. 2001, arXiv:astro-ph/0109357 Antonuccio-Delogu V. et.al. 2000, Mon.Not.R.Soc.[**000**]{} 1
---
abstract: 'Zipf’s law is a hallmark of several complex systems with a modular structure, such as books composed by words or genomes composed by genes. In these component systems, Zipf’s law describes the empirical power law distribution of component frequencies. Stochastic processes based on a sample-space-reducing (SSR) mechanism, in which the number of accessible states reduces as the system evolves, have been recently proposed as a simple explanation for the ubiquitous emergence of this law. However, many complex component systems are characterized by other statistical patterns beyond Zipf’s law, such as a sublinear growth of the component vocabulary with the system size, known as Heaps’ law, and a specific statistics of shared components. This work shows, with analytical calculations and simulations, that these statistical properties can emerge jointly from an SSR mechanism, thus making it an appropriate parameter-poor representation for component systems. Several alternative (and equally simple) models, for example based on the preferential attachment mechanism, can also reproduce Heaps’ and Zipf’s laws, suggesting that additional statistical properties should be taken into account to select the most likely generative process for a specific system. Along this line, we will show that the temporal component distribution predicted by the SSR model is markedly different from the one emerging from the popular rich-gets-richer mechanism. A comparison with empirical data from natural language indicates that the SSR process can be chosen as a better candidate model for text generation based on this statistical property. Finally, a limitation of the SSR model in reproducing the empirical “burstiness” of word appearances in texts will be pointed out, thus indicating a possible direction for extensions of the basic SSR process.'
author:
- Andrea Mazzolini
- Alberto Colliva
- Michele Caselle
- 'Matteo Osella[^1]'
bibliography:
- 'SSR.bib'
title: 'Heaps’ law, statistics of shared components and temporal patterns from a sample-space-reducing process'
---
Introduction
============
A large number of complex systems have a modular structure. For example, genomes can be viewed as an assembly of genes, written texts are composed of words, and several man-made systems such as softwares or LEGO toys are built starting from basic components. Systems with this modular structure can be described using the general framework of *component systems* [@Mazzolini2018]: an ensemble of realizations (e.g., genomes, books, LEGO toys) that are simply defined by the statistics of their elementary components (genes, words, LEGO bricks). One of the prominent and ubiquitous feature of these complex component systems is a high level of heterogeneity in the usage of components. Typically, the component abundances follow the famous Zipf’s law. This statistical law was first observed [@zipf1949human] and then extensively studied in the context of quantitative linguistics [@li2002zipf; @piantadosi2014zipf; @altmann2015statistical], and essentially refers to the empirical fact that word abundances in a written text scale as a power law of the word rank, i.e., the position in the list of words sorted by their abundances. Moreover, the exponent is usually close to -1. An analogous behaviour has been observed in a huge variety of other complex systems [@newman2005power; @mitzenmacher2004brief], from genome composition [@huynen1998frequency], to firm sizes [@axtell2001zipf]. Several possible theoretical explanations have been proposed for the “universal" emergence of Zipf’s law [@mitzenmacher2004brief; @newman2005power]. Stochastic growth with a *preferential attachment* mechanism, i.e., frequent components have higher probability to further increase their frequency, naturally leads to a power-law distribution of component abundances. This rich-gets-richer mechanism is at the basis of many stochastic models introduced to describe different component systems such as the Yule-Simon’s model [@yule1925mathematical; @simon1955class] and its different variants introduced in linguistics [@zanette2005dynamics; @gerlach2013stochastic], the Chinese Restaurant Process [@pitman1997two], different models based on the Polya’s urn scheme [@polya1930quelques; @johnson1977urn; @tria2014dynamics] or on a duplication-innovation dynamics [@rosanova2017].
Zipf’s law has also been interpreted as a sign of critical behaviour, leveraging on the general correspondence between the emergence of power-law statistics and criticality in statistical mechanics [@Mora2011]. Following this analogy, Zipf’s law can be identified with the probability distribution of the microstates, and a power law is expected if the system is close to a critical point. This critical state could also emerge as a dynamical consequence of local interactions and without the need of fine tuning as in the *self-organized-criticality* framework [@bak1987self; @Mora2011; @usher1995dynamic]. Without invoking criticality, the *random-group-formation* model instead tries to explain the widespread emergence of Zipf’s law from an entropy maximization principle in the general process of partitioning of elements into categories (or balls into boxes) [@baek2011zipf]. Zipf’s law can also emerge from more complex models based on the idea that components have specific networks of dependencies and that these relations determine their co-occurence in a realization [@Iacopini2018; @Mazzolini2018a].
More recently, an interesting and simple alternative route for the emergence of Zipf’s law has been proposed [@corominas2015understanding]. The candidate mechanism is based on a Sample-Space-Reducing (SSR) process in which the number of accessible states gets smaller as the process dynamics unfolds, defining a “history-dependent” random process. In the perspective of component systems, the SSR process translates into a stochastic growth process in which the number of possible components that can be added to a realization progressively reduces as the realization grows. The composition of a text of natural language has been used as an illustrative example [@corominas2015understanding; @thurner2015understanding]. Indeed, in the writing process the usage of a specific word limits the possible choices for the following word due to semantic and syntactic constraints. Therefore, the actual number of accessible components reduces with respect to the full vocabulary as a sentence is progressively formed. The SSR process provides a minimal (parameter-poor) description for all systems characterized by this reduction of the state space during evolution and can naturally and robustly generate Zipf’s law [@corominas2015understanding; @corominas2016extreme].
However, Zipf’s law is not the only statistical regularity that is ubiquitously found in empirical complex component systems. A realistic candidate generative model for these systems (e.g., for natural language) is expected to reproduce all these statistical patterns jointly. Therefore, it is necessary to fully characterize the theoretical predictions of the SSR mechanism with respect to these other statistical properties of component systems and compare them with the known empirical trends. A clear theoretical understanding of the model predictions can also make the SSR model an effective simple ”null model“ that can be used to disentangle in empirical datasets general statistical effects due to the state-space reduction (the main model assumption) from system-specific features due to functional or architectural constraints. The general purpose of this work is precisely to fully characterize the statistical properties beyond Zipf’s law emerging from the SSR mechanism. In particular, we will focus on the statistical features of empirical systems that are detailed below.
A statistical regularity that is often observed in component systems displaying a Zipf’s law is Heaps’ law. This law describes the sublinear growth of the number of different components (i.e. the observed vocabulary) with the system size (i.e., the total number of components), and has been observed in several empirical systems from linguistics to genomics [@herdan1964quantitative; @altmann2015statistical; @heaps1978information; @cattuto2007semiotic; @zhang2009discovering; @CosentinoLagomarsino2009]. In models based on equilibrium ensembles, such as the random-group-formation model [@baek2011zipf], the vocabulary is typically a fixed parameter, thus this scaling cannot be addressed straightforwardly. On the other hand, stochastic growth models based on preferential attachment can be easily extended by introducing a rate of arrival of new components conveniently chosen to capture the empirical Heaps’ law [@zanette2005dynamics; @CosentinoLagomarsino2009; @gerlach2013stochastic; @tria2014dynamics]. The first question we will address is whether Heaps’ law can naturally emerge from the SSR mechanism and what is its analytical form depending on the model parameters (Section \[sec:heaps\]).
Moreover, Zipf’s and Heaps’ law are not in general independent. In fact, models that build realizations using a simple random sampling of components with relative abundances given by the Zipf’s law naturally predict Heaps’ law, and a precise relation between the exponents of the two power-law behaviours [@van2005formal; @lu2010zipf; @eliazar2011growth; @font2014log]. A basic assumption of the random sampling procedure is the complete independence between components, and thus the absence of correlations. This assumption is in principle violated by the SSR process that could introduce temporal correlations between components due to the temporal evolution of the state-space. We will address the question of the relation between the Zipf’s and Heaps law obtained by the SSR model by analytical calculations and simulations, and test whether the effect of possible correlations can actually be observed in the Heaps-Zipf relation (Section \[sec:heaps\]).
In addition to Heaps’ and Zipf’s laws, a relevant statistical property of component systems is the distribution of shared components [@Mazzolini2018]. This statistics describes the number of components that are in common to a certain number of realizations, for example the number of words that are in common to a certain number of books. This system property is captured by the distribution of occurrences, defined as the fraction of realizations in which a component is present. A rare component (small occurrence) appears in a small fraction of realizations, while a common or core component (high occurrence) is present across essentially the whole ensemble of realizations. The distribution of occurrences is well studied in genomics, where the occurrence distribution of the basic components (genes or protein domains) has a peculiar U shape [@touchon2009organised; @koonin2008genomics; @pang2013universal]. This means that there is large number of core and very rare genes with respect to genes shared by an intermediate number of genomes. At the same time, the distribution behaviour for small occurrences is well captured by a power-law decay [@pang2013universal]. This pattern has gained large attention in the field because of its robustness and generality across taxonomic levels, giving rise to questions about the evolutionary mechanisms at the basis of its origin [@Haegeman2012; @Lobkovsky2013]. Recently, we have extended the analysis of this statistical property to component systems from linguistics and technological systems [@Mazzolini2018] and we showed how a random-sampling model that assumes Zipf’s law can capture several features of empirical occurrence distributions.
Here, we will study the occurrence distribution that can be obtained from an ensemble of realizations built with the SSR process (Section \[sec:U\]). In particular, we will show that the SSR model is a good candidate generative model for component systems as it can jointly reproduce Zipf’s law, Heaps’ law, and the statistics of shared components often found in empirical systems. Classic models based on preferential attachment, such as Simon’s model [@gerlach2013stochastic; @zanette2005dynamics] or the Chinese Restaurant process [@CosentinoLagomarsino2009], can reproduce Heaps’ and Zipf’s law, but cannot be used to study statistical properties across different realizations, such as the occurrence distribution. In fact, in these models the components in a realization are only characterized by their occupation number (their abundances) and are not labelled in any other way. Therefore, there is no natural way to compare the presence or absence of a specific component across independent realizations.
Moreover, there is another critical point of preferential attachment models, especially as models of text generation. In fact, by construction, the components that are selected at the beginning of a realization are expected to show a higher abundance with respect to the one selected at the end [@bern2010; @baek2011zipf]. This is a natural consequence of preferential attachment, since there is a higher probability of re-using components that are present for longer times. However, this bias is typically not observed in empirical texts [@bern2010], as we will further test with an illustrative example. The question is if the SSR model also suffers from an analogous bias or if it can represent a more realistic simple generative model for texts. To tackle this question, we will introduce a measure of asymmetry in the temporal (or positional) component distribution and use it to analyze how components of different frequencies are distributed along a realization in the SSR model in comparison with results from a model based on preferential attachment (Yule-Simon’s model), and with empirical data (Section \[sec:book\]).
Finally, a non-trivial correlation pattern in the word occurrences has been observed in natural language, and has been interpreted as an emergent consequence of the language communication purpose, in which complex ideas and concepts have to be projected into a one-dimensional sequence of words [@altmann2009beyond; @schenkel1993long; @alvarez2006hierarchical; @altmann2012origin]. This correlation pattern can be quantified by looking at the inter-occurrence distance between words, which is highly non-random in data [@altmann2009beyond; @font2014log]. We will analyze this quantity for realizations of the SSR model (Section \[sec:book\]) showing that, in this case, the model cannot reproduce the empirical trend. This discrepancy suggests a possible direction to extend the basic formulation of the SSR model in order to fully reproduce the complex correlation properties of natural language.
Methods
=======
Definition of the sample-space-reducing process (SSR) {#sec:SSR_def}
-----------------------------------------------------
![ [**Schematic representation of the sample-space-reducing (SSR) process.**]{} a) At the first step, all the $N$ labelled states are accessible, and $\mu$ balls (the $\mu=2$ red balls in the example) select among them with uniform probability. At the next step, each ball divides into $\mu$ new balls, which jump to states labelled with an index lower than the one of the starting state. In the illustrated example, the red ball in state $4$ splits into two green balls that can only jump to states $\lbrace 3,2,1 \rbrace$. When a ball reaches state $1$, it is removed from the process. Finally, all existing balls will reach this state, thus ending a “cascade”. The process then restarts with $\mu$ balls thrown over the sample space as in step $1$. The SSR process can be interpreted as a stochastic growth process in which the visited states represent components (e.g., words) that are progressively added to a realization (e.g, a book). Therefore, the component statistics of a realization of size $M$ corresponds to the statistics of visited states of the process $\phi_{M}^{\mu}$, as depicted in panel b. []{data-label="fig:sketch"}](Fig1.pdf){width="48.00000%"}
The basic sample-space-reducing (SSR) process [@corominas2015understanding] is defined as follows. A sample space $V$ is composed by $N$ possible states which are labelled and ordered $ \lbrace N, \ldots, 1 \rbrace$. A stochastic process is defined over this sample space. At the first time step, one of the states is randomly chosen, for instance the state $k$. At the following time step, only the last $k-1$ states are accessible, i.e. the sub-set $ \lbrace k-1, \ldots, 1 \rbrace$, and the process selects one of them with uniform probability. The procedure is iterated, while the sample space progressively reduces at each iteration, until the “cascade” ends with the obligated selection of state $1$. After hitting the final state $1$, the process can be re-started with again $N$ accessible states with equal visiting probability. We denote this process as $\phi_{M}$, where $M$ indicates final time step or equivalently the total number of visited states (with their multiplicity). During the growth process, the partial time is indicated with $m$, $m \in [1,M]$. Therefore, $\phi_{M}$ generates a realization $r$ of size M as a specific ordered sequence of visited states $r = (x_1, \ldots, x_M)$ with $x_i \in V$. To translate this general procedure into a concrete example of a component system, a realization $r$ can be visualized as a text of natural language. The SSR process composes the text by adding at each time step $m$ a word $x_m$ among the possible $N$ word types in the dictionary. An ensemble of $R$ realizations can be built as the result of $R$ independent runs of this stochastic process, specifying the final time steps/sizes $\lbrace M_1, \ldots, M_R \rbrace$.
The basic SSR assumption is that the choice of a word restricts the space of the possible words that can follow it for semantic or structural reasons [@thurner2015understanding], at least for the duration of a cascade. The definition of the SSR process implies a visiting probability of state $i$ that follows $P_i = i^{-1}$ [@corominas2015understanding]. This naturally translates for sufficiently long times $M$ (or equivalently for sufficiently large realization sizes) into an average occupation frequency $f_i$ described by the well-known Zipf’s law $f_i \propto i^{-1}$, where $f_i$ corresponds to the normalized number of times that component $i$ has been used in a realization of given size $M$, i.e., $f_i = \sum_{m=1}^{m=M} \delta_{x_m,i} / M$ for all possible states $i = 1, \ldots, N$.
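As a concrete illustration, the basic process $\phi_M$ can be simulated in a few lines; the sketch below (with arbitrary choices for $N$ and the number of cascades) iterates cascades and can be used to verify that the visiting frequencies decay approximately as $i^{-1}$.

```python
import random
from collections import Counter

def ssr_basic(N, n_cascades, seed=0):
    """Basic SSR process: repeated cascades over the states N, N-1, ..., 1."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_cascades):
        state = rng.randint(1, N)                 # restart: uniform over all N states
        sequence.append(state)
        while state > 1:
            state = rng.randint(1, state - 1)     # only lower-labelled states are accessible
            sequence.append(state)
    return sequence

seq = ssr_basic(N=10_000, n_cascades=50_000)
counts = Counter(seq)
for i in (1, 10, 100, 1000):
    print(i, counts[i] / len(seq))                # frequencies scale roughly as 1/i
```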
The SSR process can be generalized by adding a multiplicative process in order to obtain a visiting probability that follows a power law with arbitrary exponent [@corominas2017sample]. This generalization is schematically depicted in Figure \[fig:sketch\]. At the first iteration $\mu$ balls are randomly thrown over a sample space $V$ of $N$ possible states. Thus, $\mu$ states $\lbrace x_1, \ldots, x_\mu \rbrace$, with $x_k \in \lbrace N, \ldots, 1 \rbrace$, are independently selected with uniform probability among the $N$ possible ones. At the next time step, each of these $\mu$ balls generates again $\mu$ balls that can only fall into states with a lower label, following the SSR prescription. For example, a ball in state $k$ generates $\mu$ balls that can only bounce on the $k-1$ still accessible states. When a ball reaches the final state $1$, it is removed from the process. Eventually, all the generated balls will reach this final state, thus completing the cascade. The process can then restart with $\mu$ balls that can randomly choose among the $N$ states. We denote this generalized process as $\phi_{M}^{\mu}$, where $M$ is the number of visited states (or the realization size) and $\mu$ is the free parameter of the multiplicative process. In general, for large realizations $M\gg1$, the number of times the state $i$ is selected by $\phi_{M}^{\mu}$ is simply proportional to $i^{-\mu}$ [@corominas2017sample], thus generalizing the classic Zipf’s law.
The process is not only defined for integer values of $\mu$. In fact, in general, the number of new balls can be extracted from a distribution (that has to be defined) with average $\mu$. In the following, we will consider a Poisson distribution with average $\mu$. However, we checked numerically that the generalized Zipf’s law [@corominas2017sample], as well as the results we will present for the Heaps’ law and for the statistics of shared components do not change if a constant (i.e., no variance) is used for the case of integer $\mu$, or if the different prescription presented in ref. [@corominas2017sample] for non-integer $\mu$ values is adopted.
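A possible sketch of the generalized process $\phi_M^\mu$ with Poisson-distributed offspring (the prescription adopted here) is the following; the restart of a cascade is floored at one ball so that the simulation always makes progress, and all parameter values are purely illustrative. Sorting the resulting state counts should reproduce the generalized law $f_i \propto i^{-\mu}$.

```python
import numpy as np

def ssr_generalized(N, mu, target_size, seed=0):
    """Generalized SSR: a ball in state k>1 spawns Poisson(mu) balls on states k-1, ..., 1.
    Returns the ordered sequence of visited states (a 'realization') of length target_size."""
    rng = np.random.default_rng(seed)
    sequence = []
    while len(sequence) < target_size:
        # restart a cascade: throw Poisson(mu) balls (at least one) uniformly over all N states
        stack = list(rng.integers(1, N + 1, size=max(1, rng.poisson(mu))))
        while stack and len(sequence) < target_size:
            state = stack.pop()
            sequence.append(state)
            if state > 1:
                stack.extend(rng.integers(1, state, size=rng.poisson(mu)))
    return np.array(sequence[:target_size])

realization = ssr_generalized(N=10_000, mu=2.0, target_size=200_000)
freqs = np.bincount(realization, minlength=10_001)[1:] / len(realization)
print(freqs[:5])   # frequencies of states 1..5, expected to decay roughly as i^{-mu}
```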
Results
=======
The SSR process naturally generates a sublinear scaling of the number of different components with the realization size (Heaps’ law) {#sec:heaps}
------------------------------------------------------------------------------------------------------------------------------------
Every text of natural language presents a natural ordering of words defined by the reading/writing process from the first $m=1$ word to the end of the text at $m=M$. For all component systems whose realizations have this temporal ordering of components, it is possible to evaluate over a single realization how the number of different components $h$ grows with the realization size $m$. More in general, the same scaling can be analyzed for component systems even without a natural ordering of components (for example for genomes as composed by genes or for LEGO toys) if the sizes $M$ of the available realizations span a sufficiently large range. As discussed in the Introduction, in several empirical systems this quantity follows a sublinear and approximately power-law function $h(m) \propto m^{\nu}$ (with $\nu < 1$), known as Heaps’s law [@herdan1964quantitative; @altmann2015statistical; @heaps1978information; @cattuto2007semiotic; @zhang2009discovering; @CosentinoLagomarsino2009; @tria2014dynamics]. Each run of the SSR process also generates an ordered sequence of components (or visited states), and the question is what is the predicted scaling of $h(m)$ for this stochastic process.
Fig. \[fig:heaps\]a reports the rank plots of the component frequencies for different realizations of the SSR process $\phi_{m}^{\mu}$ with different values of the parameter $\mu$. As expected [@corominas2015understanding; @corominas2017sample], they all follow Zipf’s law
$$\label{eq:occ_prob}
p(i) = \frac{i^{-\mu}}{\alpha} \hspace{1cm} \alpha = \sum_{i=1}^{N} i^{-\mu},$$
with a power law exponent defined by $\mu$.
At the same time, Fig. \[fig:heaps\]b shows the corresponding scaling of $h(m)$, which increases sublinearly, with a steepness dependent on $\mu$, before saturating at the asymptotic value $h(m) = N$ defined by the total finite number of possible states. Therefore, the behaviour is qualitatively compatible with the empirical Heaps’ law suggesting that the SSR process can be a good generative model for both Zipf’s and Heaps’ laws.
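For a given ordered realization (for instance one generated by the `ssr_generalized` sketch in the Methods section), the curve $h(m)$ can be extracted directly; the fragment below is one simple way to do it.

```python
import numpy as np

def vocabulary_growth(sequence):
    """Number of distinct components h(m) seen among the first m elements of a realization."""
    seen = set()
    h = np.empty(len(sequence), dtype=int)
    for m, component in enumerate(sequence):
        seen.add(component)
        h[m] = len(seen)
    return h

h = vocabulary_growth(realization)    # 'realization' as produced by the SSR simulator sketch
print(h[99], h[999], h[9999])         # h(m) at m = 100, 1000, 10000
```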
![[**Zipf’s law and Heaps’ law from the SSR model.**]{} Panel (a) shows the rank plot of the component frequencies for four realizations of the SSR model with different values of $\mu$. Section \[sec:SSR\_def\] presents in detail the model definition. The simulations confirm the theoretical expectation in Eq. \[eq:occ\_prob\]: the power law exponent is simply defined by the value of $\mu$. Panel (b) shows that the number of different components $h(m)$ grows sublinearly with the realization size. For each parameter value, four independent trajectories are reported to give a qualitative idea of the small dispersion around the average. All trajectories saturate to the asymptotic value $h(m) = N$ (black dotted line), where $N$ is the second free parameter of the model setting the total number of possible components or the vocabulary size. Here, $N=10^4$ for all simulations. The black dashed lines represent the analytical expression in Eq. \[eq:H(m)\]. The good overlap with simulations indicates that this analytical approximation can well reproduce the Heaps’ law generated by a SSR process. []{data-label="fig:heaps"}](Fig2.pdf){width="80.00000%"}
In order to characterize analytically the sublinear growth of $h(m)$ and the precise relation between the two laws in Fig. \[fig:heaps\], we introduce an approximation that neglects the possible correlations that the SSR process can introduce. As an example of possible correlations inherent to the SSR process, consider the case of $\phi^{(\mu=1)}_M$, thus with just one ball in the scheme of Fig. \[fig:sketch\]. The sequence of components selected during a single cascade is strictly decreasing, implying that on the time scale of a cascade (which on average lasts approximately $\log N$ [@corominas2015understanding]) correlations between sites are present. Whether correlations in the SSR process are actually sufficiently strong to generate deviations from the behaviour of $h(m)$ that can be predicted by neglecting them will be evaluated a posteriori.
If correlations are neglected, a realization of the SSR process can be approximated by assuming that at each time step a component is independently drawn with an extraction probability defined by the visiting probability in Eq. \[eq:occ\_prob\]. This approximation defines a random sampling process with replacement from a fixed number of possible components $N$.
Similar approaches based on random sampling have been previously used to establish the statistical link between Zipf’s law and Heaps’ law in quantitative linguistics [@van2005formal; @lu2010zipf; @eliazar2011growth; @font2014log]. For example, a Poisson process with arrival rates of different components described by Zipf’s law has been used to compute the sublinear vocabulary growth [@eliazar2011growth]. Similarly, Heaps’ law has been computed from models based on independent component extractions with [@van2005formal] or without [@font2014log] replacement, where extraction probabilities were defined by a given (power-law) distribution. Our starting assumptions are analogous to the one presented in ref. [@van2005formal], i.e., a random extraction process with replacement from a power law Zipf’s law. However, we will consider the general case of a finite number of possible components $N$, as prescribed by the SSR process we want to approximate, while ref. [@van2005formal] focuses on the asymptotic behaviour of Heaps’ law in the limit of $N \rightarrow \infty$. In general, previous results will be recovered as specific limiting cases of our framework.
Using the assumption of independent extractions, we can write the probability of choosing for the first time the component $i$ at the step $m$ as $$\left(1 - p(i)\right)^{m-1} p(i).$$ This implies that the component $i$ is selected at least one time after $m$ steps with probability $$\label{eq:exist_prob}
q_m(i) = \sum_{l=1}^{m} \left(1-p(i)\right)^{l-1} p(i) =
1 - \left(1 - p(i)\right)^m .$$ Therefore, the average value of the number of different components, i.e., the expectation for the Heaps’ law, can be expressed as $$\label{eq:H(m)1}
\langle h(m) \rangle = \sum_{i=1}^N q_m(i) =
N - \sum_{i=1}^N \left(1 - \frac{i^{-\mu}}{\alpha} \right)^m .$$ The above implicit expression is equivalent to the one presented in [“Lemma 1”]{} of ref. [@van2005formal]. In order to get explicit and more intuitive predictions from Eq. \[eq:H(m)1\], some relevant limiting cases can be considered. We start by looking at the regime of large realization sizes $m \gg 1$. In this regime, the only relevant terms in the summation are those that satisfy $i^{-\mu}/\alpha \ll 1$. Under such a condition, we can take advantage of the logarithm first-order approximation: $$\left(1-\frac{i^{-\mu}}{\alpha}\right)^{m} =
\exp \left( m \log \left( 1-\frac{i^{-\mu}}{\alpha}\right) \right)
\approx \exp \left( -m \frac{i^{-\mu}}{\alpha} \right) .$$ Substituting all the addends with these exponential forms, and approximating the summation with an integral, one obtains $$\langle h(m) \rangle \approx N - \int_{1}^N \mathrm{d}i \; \exp \left(-m \frac{i^{-\mu}}{\alpha} \right) .$$ This last expression can be evaluated with the change of variables $z = m i^{-\mu}/\alpha$ and by making use of the definition of the upper incomplete gamma function $\Gamma(n,t) = \int_t^\infty e^{-x} x^{n-1} dx$, thus obtaining the expression: $$\label{eq:H(m)}
\langle h(m) \rangle \approx N - \frac{1}{\mu} \left( \frac{m}{\alpha} \right)^{1/\mu} \Gamma \left( -\frac{1}{\mu}, \frac{m}{N^\mu \alpha} \right) .$$ Note that, even though the special function is defined for a positive first argument, it can be extended to negative values by analytic continuation. Figure \[fig:heaps\]b shows an extremely good overlap between the prediction of Eq. \[eq:H(m)\] and simulations of the SSR process. This good agreement suggests that the Heaps-Zipf relation defined by a random sampling model with no correlations between components is also satisfied by the SSR model. Therefore, correlations present in the history-dependent process do not affect significantly the average behavior of the vocabulary growth with the realization size.
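The quality of this approximation can also be checked numerically against the exact sum of Eq. \[eq:H(m)1\]; the sketch below uses the mpmath library, whose incomplete gamma function accepts a negative first argument, and the parameter values are arbitrary.

```python
import mpmath as mp

def alpha(N, mu):
    return mp.fsum(mp.mpf(i) ** (-mu) for i in range(1, N + 1))

def h_exact(m, N, mu):
    """Independent-sampling expectation, Eq. (eq:H(m)1)."""
    a = alpha(N, mu)
    return N - mp.fsum((1 - i ** (-mu) / a) ** m for i in range(1, N + 1))

def h_gamma(m, N, mu):
    """Incomplete-gamma approximation, Eq. (eq:H(m))."""
    a = alpha(N, mu)
    return N - (1 / mu) * (m / a) ** (1 / mu) * mp.gammainc(-1 / mu, m / (N ** mu * a))

N, mu = 10_000, 1.5
for m in (10**2, 10**3, 10**4, 10**5):
    print(m, float(h_exact(m, N, mu)), float(h_gamma(m, N, mu)))
```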
A deeper characterization of how the SSR parameters can affect Heaps' law can be obtained by looking at some other limiting cases of Eq. \[eq:H(m)\]. First, we analyze the dynamical approach to saturation. Given that the total number of possible states $N$ is finite in the SSR process, in the long run ($m \rightarrow \infty$) all realizations will reach the horizontal asymptote $\langle h(m) \rangle = N$. However, the model-parameter values determine the dynamics of this approach to saturation. Indeed, Fig. \[fig:heaps\] (b) shows that by decreasing $\mu$ the system discovers new states more quickly, thus making the Heaps' law steeper. This observation can be made quantitative by approximating the gamma function in Eq. \[eq:H(m)\] with its asymptotic series for $m \gg \alpha N^\mu$. This approximation leads to the expression $$\label{eq:H_saturation}
\langle h(m) \rangle \approx N \left( 1 - \frac{1}{\mu} \frac{\alpha N^\mu}{m} \exp \left( -\frac{m}{\alpha N^\mu} \right) \right) .$$ This exponential form shows that the time scale (or equivalently the size scale) for saturation is defined by the quantity $$\label{eq:crit_size}
\tilde{m} = \alpha N^\mu.$$ When the realization size is larger than this scale, i.e., $m\gg\tilde{m}$, essentially all the different states have been visited and $\langle h(m) \rangle \approx N$. As expected, the time scale of saturation is defined by the total number of states, but the velocity of the exploration of those states depends on the exponent of the Zipf’s law.
The opposite regime of $m \ll \tilde{m}$ represents Heaps' law at the beginning of the growth process. Note that Eq. \[eq:H(m)\] was derived in the limit of $m \gg 1$, and therefore $N$ has to be chosen sufficiently large to satisfy both conditions. In this case, the gamma function can be reformulated using its recurrence relation $\Gamma(n+1,t) = n\Gamma(n,t) + t^n e^{-t}$. Moreover, the upper gamma can be expressed as the difference between the classical Euler gamma and the lower incomplete gamma function, thus obtaining the following expression $$\langle h(m) \rangle \approx N \left[ 1 - \exp \left( -\frac{m}{\tilde{m}} \right)
+ \left( \frac{m}{\tilde{m}} \right)^{1/\mu} \left( \Gamma \left( 1-\frac{1}{\mu} \right) - \gamma \left( 1-\frac{1}{\mu}, \frac{m}{\tilde{m}} \right) \right) \right] ,$$ where $\gamma(n,t) = \int_0^t e^{-x} x^{n-1} dx$. Applying the limit $\frac{m}{\tilde{m}} \rightarrow 0$ and approximating the exponential function and the lower gamma to the first order, i.e., $\gamma(n,t) \rightarrow t^n / n$, we have
$$\langle h(m) \rangle \approx N \left[ \; \frac{m}{\tilde{m}} \; \frac{1}{1-\mu} + \left( \frac{m}{\tilde{m}} \right)^{1/\mu} \Gamma \left( 1-\frac{1}{\mu} \right) \right] .$$
This expression indicates that the asymptotic behaviour for $m \ll \tilde{m}$ crucially depends on $\mu$. When $\mu < 1$, the second term is negligible with respect to the first one, while the opposite is true if $\mu > 1$. The case $\mu = 1$ is singular, but can be evaluated by integrating Eq. \[eq:H(m)\] by parts and using the definition of the exponential integral function, $E_1(z) = \int_z^\infty e^{-x} x^{-1} dx$, and its asymptotic expansion.
We can summarize the results for Heaps’ law in the far-from-saturation regime ($\frac{m}{\tilde{m}} \rightarrow 0$) as $$\langle h(m) \rangle \approx
\begin{cases}
\left( m (\mu - 1) \right)^{1/\mu} \Gamma \left( 1-\frac{1}{\mu}\right) & \hspace{0.3cm} \text{for } \mu > 1
\\[5pt]
\frac{m}{\ln{N}} \ln \left( \frac{\ln{N} N}{m} \right) & \hspace{0.3cm} \text{for } \mu = 1 ,
\\[5pt]
m & \hspace{0.3cm} \text{for } \mu < 1
\end{cases}
\label{eq:h(s)_approx}$$ Here, we used the explicit expressions for the normalization factor $\alpha$, which is present in the definition of the size scale $\tilde{m}$ (Eq. \[eq:crit\_size\]). Using an integral approximation of the sum in Eq. \[eq:occ\_prob\], this factor is $\alpha \approx 1/(\mu-1)$ for $\mu > 1$, $\alpha \approx \ln{N}$ for $\mu = 1$, and $\alpha \approx N^{1-\mu}/(1-\mu)$ for $\mu < 1$. The expression above fully characterizes the different growth regimes of the number of visited states for the SSR process when the realizations are much smaller than the sample space, which is often the case in empirical systems such as texts of natural language. The presence of a transition from a linear growth regime for $\mu < 1$ to a sublinear power-law behavior for $\mu > 1$ is in agreement with a derivation of Heaps’ law from a Poisson growth process assuming Zipf’s law [@eliazar2011growth].
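A quick numerical sanity check of these regimes is sketched below (a minimal illustration with parameter values of our own choosing); the agreement is only approximate at moderate $m/\tilde{m}$, partly because the finite sum defining $\alpha$ is replaced by its integral estimate.

```python
# Sketch: far-from-saturation regimes of Heaps' law, Eq. [eq:h(s)_approx],
# compared with the exact expectation. Illustrative parameters only.
import numpy as np
from math import gamma, log

def h_exact(m, N, mu):
    i = np.arange(1, N + 1)
    alpha = np.sum(i ** (-mu))
    return N - np.sum((1.0 - i ** (-mu) / alpha) ** m)

def h_limit(m, N, mu):
    """Limiting expressions for m << m_tilde (integral estimate of alpha)."""
    if mu > 1:
        return (m * (mu - 1)) ** (1.0 / mu) * gamma(1.0 - 1.0 / mu)
    if mu == 1:
        return m / log(N) * log(log(N) * N / m)
    return float(m)                        # mu < 1: linear growth

N = 10**6
for mu in (0.8, 1.5):
    for m in (10**2, 10**3, 10**4):        # all well below m_tilde for these parameters
        print(mu, m, round(h_exact(m, N, mu), 1), round(h_limit(m, N, mu), 1))
```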
In conclusion, the SSR model can jointly reproduce Heaps’ and Zipf’s law, and the link between the two laws can be safely calculated by neglecting correlations in the stochastic process.
The statistics of shared components from an ensemble of realizations of the SSR process {#sec:U}
---------------------------------------------------------------------------------------
As anticipated in the Introduction, the SSR process provides an ideal framework to investigate statistical patterns of components across different realizations. In fact, given a fixed sample space with $N$ states labelled from $N$ to $1$, it is possible to analyze how many states or components are shared by $R$ independent realizations of the process $\phi^{\mu}_M$. In other words, the occurrence distribution $p(o)$ can be computed, where the occurrence $o$ of a state (or component) is defined as the fraction of realizations in which the state has been selected. Three examples (for three $\mu$ values) of occurrence distributions obtained from an ensemble of SSR realizations are shown in Fig. \[fig:u\](a). All the three curves display the characteristic U shape often present in empirical data [@Lobkovsky2013; @Mazzolini2018; @pang2013universal] due to the presence of a peak at low occurrences and a second peak at $o=1$ defining the “core” components. Moreover, the log-log representation in the inset of Fig. \[fig:u\](a) shows that for low occurrences the trend is well approximated by a power-law decay.
![[**Component occurrence distribution from a SSR process.** ]{} The first panel (a) shows the component occurrence distribution for three ensembles of $R = 1000$ realizations of the SSR process. Each ensemble has a different value of $\mu$, while the other two parameters describing the size $M$ of realizations and the number $N$ of possible states are fixed ($N =M= 10^4$). The distributions obtained by numerical simulations are in good agreement with the analytical predictions of Eq. \[eq:plaw\_u\] (dashed black curves). The distribution left boundaries $o_{left}$ predicted by Eq. \[eq:occ\_boundaries\] are indicated with vertical dotted lines. The inset shows that the same distributions in double-logarithmic scale display a power law decay with an exponent well described by Eq. \[eq:plaw\_u\_lim\]. In panel (b) this exponent is estimated for ensembles generated by stochastic simulations of the SSR model with different parameter values: $\mu$ values are reported on the x-axis, while $M$ and $N$ values are indicated in the legend. Each dot is obtained through a least square fit of the occurrence distribution. The fitted region is defined as $[o_{left} + \epsilon_1; o_{right} - \epsilon_2]$, where $o_{left}$ and $o_{right}$ are the boundaries defined by Eqs. \[eq:occ\_boundaries\]. $\epsilon_1$ and $\epsilon_2$ are two positive arbitrary constants chosen to select the power-law part of the distribution, thus removing the finite-size cut-off for occurrences near $o_{left}$, and the increasing part on the right-tail of the distribution that defines the “core” components. The estimated exponents are compared with the analytical expectation (black dashed line) from Eq. \[eq:plaw\_u\_lim\], which is independent of $M$ and $N$, showing a good agreement.[]{data-label="fig:u"}](Fig3.pdf){width="80.00000%"}
Also in this case, it is possible to derive analytical expectations for the statistics of shared components by neglecting possible correlations in the SSR process. This approximation is again equivalent to a random sampling assumption, in which realizations are obtained by independent extractions of components with probabilities defined by the Zipf’s law generated by the SSR process (Eq. \[eq:occ\_prob\]).
Here, the observable we are interested in is the component occurrence, which is defined by the probability that component $i$ is present in a realization of size $M$ as described by Eq. \[eq:exist\_prob\]. Therefore, the average fraction $o_i$ of the $R$ realizations in which the component $i$ is present is given by the expression $$\label{eq:occ}
\langle o_i \rangle = \frac{1}{R} \sum_{j=1}^R q_M(i) = 1 - \left(1 - \frac{i^{-\mu}}{\alpha} \right)^M.$$ Note that here we are considering the simplest case, in which the probabilities $q_M(i)$ are identical for each realization, i.e., all realizations have the same $M$ and $\mu$. Therefore, $q_M(i)$ does not depend on the index $j$ and the summation is trivial. In general, the scenario in empirical systems can be much more complicated and better described by an ensemble of realizations with different sizes $\lbrace M_j \rbrace$, and coming from slightly different multiplicative processes, i.e., different $\lbrace \mu_j \rbrace$.
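A minimal sketch of this random-sampling approximation (the code and parameter values are illustrative, not the pipeline used for the figures) draws $R$ independent realizations of size $M$ from the Zipf law of Eq. \[eq:occ\_prob\] and compares the empirical occurrences with Eq. \[eq:occ\]:

```python
# Sketch: component occurrences under the random-sampling approximation.
# Each realization is M independent draws from p_i = i^{-mu}/alpha.
import numpy as np

rng = np.random.default_rng(0)
N, M, R, mu = 2000, 5000, 300, 1.2        # states, realization size, realizations, exponent
p = np.arange(1, N + 1) ** (-mu)
p /= p.sum()                              # Zipf law of Eq. [eq:occ_prob]

counts = np.zeros(N)
for _ in range(R):
    drawn = rng.choice(N, size=M, p=p)    # one realization
    counts[np.unique(drawn)] += 1         # component present at least once
o_empirical = counts / R

o_theory = 1.0 - (1.0 - p) ** M           # Eq. [eq:occ]
for idx in (0, 9, 99, 999):
    print(idx + 1, round(o_empirical[idx], 3), round(o_theory[idx], 3))
```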
The expression in Eq. \[eq:occ\] represents the expected occurrence values for a process of random extractions of components from a fixed power law abundance distribution with exponent $\mu$ [@Mazzolini2018]. Here, we want to test if this formulation can well approximate the results from a SSR process. More specifically, the analytical formula for the component occurrence distribution can be calculated [@Mazzolini2018] as $$\label{eq:plaw_u}
p(o) = \frac{ \left( 1 - o \right)^{1/M - 1}}{\langle h(M R) \rangle \; \mu M \alpha^{1/\mu}
\left( 1 - (1 - o)^{1/M} \right)^{1/\mu + 1}} ,$$ This distribution is defined in the interval $[o_{left}, o_{right}]$, where: $$\label{eq:occ_boundaries}
o_{left} = \langle o_N \rangle = 1 - \left(1 - \frac{\langle h(M R) \rangle^{-\mu}}{\alpha} \right)^M \hspace{1cm}
o_{right} = \langle o_1 \rangle = 1 - \left(1 - \frac{1}{\alpha} \right)^M .$$ Note that $\langle h(M R) \rangle$ is the expected number of observed different components in the ensemble, given by Eq. \[eq:H(m)\] using the total system size $M R$.
This analytical expression is compared with simulations of the SSR process in Fig. \[fig:u\](a), showing that also for the statistics of shared components the random sampling approximation can reproduce the model results well. The analytical expressions of the distribution boundaries in Eq. \[eq:occ\_boundaries\] (vertical dotted lines) are also accurate. For the sake of simplicity, the simulations were performed close to the saturation regime, i.e., with a total system size $M R$ much larger than the critical scale $\tilde{m}=N^\mu \alpha$. This allows us to simplify the expression $\langle h(M R) \rangle \approx N$ in Eq. \[eq:plaw\_u\], as discussed in the previous section (see Eq. \[eq:H\_saturation\]).
The size scale of saturation $\tilde{m}$ defined by Eq. \[eq:crit\_size\] plays an important role also in determining the global shape of the occurrence distribution. In fact, the left boundary in Eq. \[eq:occ\_boundaries\] close to the saturation regime (for example for a large number of realizations $R$, such that $M R \gg \tilde{m}$) can be simply expressed as $o_{left} \approx 1 - \exp \left( - M / \tilde{m} \right)$. Therefore, the minimal occurrence coincides with zero only when $M \ll \tilde{m}$. On the other hand, $o_{left}$ approaches the maximal occurrence $1$ if $M \gg \tilde{m}$, implying that all the components in the ensemble are present in all the realizations.
Another characteristic feature of the occurrence distribution is the power law decay for rare components reported in the inset of Fig. \[fig:u\](a). This behaviour can be understood by looking at the limit $o \ll 1$ and $M \gg 1$ in Eq. \[eq:plaw\_u\]. In fact, in this limit we have
$$\label{eq:plaw_u_lim}
p(o) \approx \frac{M^{1/\mu}}{\alpha^{1/\mu} \; \mu \; \langle h(M R) \rangle} \; o^{-1/\mu - 1} .$$
The expression above gives a very simple prediction for the exponent of the power law decay, which depends only on $\mu$. This prediction is verified in Fig. \[fig:u\](b), showing that the derived simple relation linking Zipf’s law and the statistics of shared components can be safely applied to the SSR process.
Temporal dependence of component distributions in the SSR model, in preferential attachment models, and in empirical texts of natural language {#sec:book}
-----------------------------------------------------------------------------------------------------------------------------------------------
We have shown that the SSR model can reproduce Zipf’s and Heaps’ law jointly (Section \[sec:heaps\]), making it a good candidate model for many component systems characterized by these laws such as texts of natural language. The two free model parameters $\mu$ and $N$ can be estimated directly from data since $N$ is the total number of components (e.g., the text vocabulary), while $\mu$ can be set to match the empirical Zipf’s law. This is shown for the illustrative example of Darwin’s book “On the origin of species” in Fig. \[fig:oos\]. The SSR model can produce a realization of the same size $M$ as the text under analysis with similar word abundance statistics (Fig. \[fig:oos\](a)) and, at the same time, makes a prediction for the vocabulary growth that can well approximate the empirical trend (Fig. \[fig:oos\](b)). Similar results can be obtained using stochastic growth processes based on a preferential attachment mechanism [@zanette2005dynamics; @gerlach2013stochastic; @tria2014dynamics; @CosentinoLagomarsino2009] inspired by the Yule-Simon model [@yule1925mathematical; @simon1955class], the Chinese Restaurant Process [@pitman1997two] or the Polya urn scheme [@polya1930quelques]. This makes it extremely hard to select the most-likely generative mechanism from these two average behaviours alone.
However, there is a peculiar feature of models based on preferential attachment that emerges by looking at the distribution of components along a realization, thus following its natural temporal order. Taking again the example of a text generated with a rich-gets-richer mechanism, rare words should be more densely present late in the book, while the opposite should be true for common words. In fact, components that are selected for the first time at the end of the process have a lower chance of being re-selected and of triggering the preferential attachment mechanism. To make this intuition more quantitative, we introduce a measure of local component density and use it to analyze to what extent this positional (or equivalently temporal) bias allows one to actually distinguish between the SSR model, models based on preferential attachment, and empirical data.
![[**Comparing the SSR model predictions with statistical patterns from Darwin’s book “On the origin of species” and from a model based on preferential attachment.**]{} The Zipf’s law for the word frequencies in “On the origin of species” (blue line) is compared with the corresponding distribution from a SSR process with $\mu = 1$ (green line-dot curve) in panel (a). The number of possible states $N$ and the realization size $M$ are fixed to match the book’s vocabulary $N = 9132$ and size $M = 178820$. With this parameter matching, the SSR model can also well approximate the empirical Heaps’ law, as panel (b) shows. Panel (c) focuses on the relative density of words (Eq. \[eq:density\]) $\rho$ for words of low and high frequencies. Here, $\rho$ is computed on a moving window of $10^4$ consecutive words. The model formulation presented in ref. [@zanette2005dynamics] is used as an implementation of a Yule-Simon process (orange dashed lines), with parameter values $\nu=0.8$ and $\alpha_0=0.5$ fixed to match the empirical Zipf’s and Heaps’ laws. For low frequency (in the interval $f \in [10^{-6}, 10^{-4}]$) words, the Simon model shows a specific increasing relative density along the book (left plot), while the density decreases for high frequency words ($f \in [10^{-3}, 10^{-1}]$). Finally, panel (d) displays the inter-occurrence distance distribution $p(\tau)$, with $\tau$ defined by Eq. \[eq:interocc\] for $k>1$, and evaluated for all words with total abundances between $2$ and $10^3$. The distribution obtained from Darwin’s book (blue line) is compared with the one measured on $20$ realizations of the SSR model (green line-dot), and with reshuffled realizations of the model (purple dashed line). The theoretical expectation for a random Poisson process is reported as a black dotted line.[]{data-label="fig:oos"}](Fig4.pdf){width="70.00000%"}
Using the notation introduced in Section \[sec:SSR\_def\], a realization/book $r$ is an ordered sequence of words/components $r = (x_1, \ldots, x_M)$, where each component instance belongs to the vocabulary, i.e., $x_m \in V = \lbrace 1, \ldots, N \rbrace$. A portion of given size of the book $s_{m,\Delta m} = (x_m, \ldots, x_{m + \Delta m})$ can be defined as the sub-sequence of consecutive words from position $m$ to $m + \Delta m$. By definition, we have $s_{ 1, M-1} = r$. We are interested in the density of words of a given frequency class at different positions $m$. Therefore, we can select the subset of words of the vocabulary $v_{f_1,f_2} \subset V$ whose frequencies are in the interval $[f_1, f_2]$. Finally, the relative density $\rho(m, \Delta m, f_1, f_2 )$ of components belonging to the frequency class $v_{f_1,f_2}$ within the portion of the realization $s_{m,\Delta m}$ is $$\label{eq:density}
\rho(m, \Delta m, f_1, f_2) = \frac{n(s_{m,\Delta m}, \; v_{f_1,f_2})}{\Delta m} - \frac{n(r, \; v_{f_1,f_2})}{M}.$$ $n(s, v)$ is the number of times that components belonging to class $v$ appear in $s$, i.e., $n(s, v) = \sum_{x \in s} \sum_{c \in v} \delta_{x,c}$. The relative density $\rho(m, \Delta m, f_1, f_2)$ measures the difference between the local density in $s_{m,\Delta m}$ and the average density across the whole realization $n(r, \; v_{f_1,f_2})/M$ for components of a given frequency. Therefore, this quantity is positive if components of the frequency class $v$ are enriched at a certain position $m$ of the realization. The two plots in Fig. \[fig:oos\](c) describe how the density of words with low ($f_1 = 10^{-6}$, $f_2 = 10^{-4}$) and high ($f_1 = 10^{-3}$, $f_2 = 10^{-1}$) frequency varies by moving the window $s_{m,\Delta m}$ from the beginning to the end of the book. The line associated with the Simon model (as defined in ref. [@zanette2005dynamics]) shows a clear increasing/decreasing trend for words of low/high frequencies. This trend quantifies the expected positional (or temporal) bias inherent in models based on preferential attachment. On the contrary, the SSR model does not have a specific positional bias for components of different frequencies. It simply predicts local fluctuations around the $\rho = 0$ line for all frequency classes. A similar trend of local word density is present in the empirical example from “On the origin of species”, confirming that words of different frequencies are equally spread across real texts [@bern2010; @baek2011zipf].
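The quantity in Eq. \[eq:density\] is straightforward to compute on any tokenized text; the sketch below is a minimal implementation of ours, in which the whitespace tokenization, the window size, and the file path are assumptions rather than the exact pipeline behind Fig. \[fig:oos\](c).

```python
# Sketch: relative density rho(m, window, f1, f2) of Eq. [eq:density] for words
# whose global frequency lies in [f1, f2], on a sliding window along the text.
from collections import Counter

def relative_density(tokens, f1, f2, window=10_000, step=1_000):
    """Return a list of (position m, rho) values along the token sequence."""
    M = len(tokens)
    freq = Counter(tokens)
    in_class = {w for w, c in freq.items() if f1 <= c / M <= f2}
    global_density = sum(freq[w] for w in in_class) / M
    rho = []
    for m in range(0, M - window, step):
        local = sum(1 for w in tokens[m:m + window] if w in in_class)
        rho.append((m, local / window - global_density))
    return rho

# Illustrative usage (the file path is a placeholder):
# tokens = open("origin_of_species.txt").read().lower().split()
# low  = relative_density(tokens, 1e-6, 1e-4)   # rare words, left panel of Fig. (c)
# high = relative_density(tokens, 1e-3, 1e-1)   # frequent words, right panel
```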
This marked qualitative difference between the SSR process and stochastic processes based on preferential attachment can be used for model selection, and suggests that the SSR mechanism is better suited than the often-invoked rich-gets-richer scenario to represent the text generation process.
The SSR model cannot reproduce the complex temporal correlation patterns of real texts
--------------------------------------------------------------------------------------
While the average local density of words does not display a specific temporal trend in empirical texts, it seems to show marked fluctuations (Fig. \[fig:oos\](c)). In fact, several studies showed the presence of non-trivial correlation patterns in the temporal distribution of words across texts [@altmann2015statistical; @altmann2009beyond; @altmann2012origin; @font2014log]. More specifically, the appearance of instances of the same word typically displays a bursty behavior [@altmann2009beyond], which essentially implies the presence of clusters of word instances. Intuitively, this behaviour can be traced back to semantic reasons. For example, if a character of a novel has a role only in a small part of the story-line, the appearance of his/her name will be localized in a corresponding relatively small region of the text.
This statistical pattern can be quantified by looking at the distribution of inter-occurrence distances between words [@altmann2009beyond; @font2014log]. Given a word $i \in V$ of frequency $f_i$, we can compute its $k$th inter-occurrence distance $\tau_i^{(k)}$ as the number of other words between its $(k-1)$th and $k$th appearances normalized by the average distance, which is simply given by the inverse frequency $1/f_i$. In other words, the relative $k$th inter-occurrence distance is defined as
$$\label{eq:interocc}
\tau_i^{(k)} = \left( l_i^{(k)} - l_{i}^{(k-1)} \right) f_i,$$
where $l_i^{(k)} \in \lbrace 1, \ldots, M \rbrace$ represents the position of the $k$th appearance of $i$, with the convention that $l_i^{(0)} = 0$. For a completely random distribution of words, the stochastic variable $\tau_i^{(k)}$ follows approximately an exponential distribution with average $1$ for any word frequency $f_i$ and any value of $k$ [@font2014log]. Therefore, there is a unique null expectation that can be compared to the empirical distribution $p(\tau)$ measured for words of different values of $f_i$ and for different $k$. This comparison is reported in Fig. \[fig:oos\](d) for one empirical example, and confirms that the word inter-occurrence distance has a marked excess of short distances with respect to the random expectation (dashed line) for $k>1$. Note that for $k=1$, $\tau_i^{(1)}$ simply defines the time of first appearance of words, thus it is closely connected to the vocabulary growth (Heaps’ law), and the distribution $p(\tau^{(k=1)})$ has been shown to be compatible with the random Poisson expectation [@font2014log].
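A minimal implementation of the normalized inter-occurrence distances of Eq. \[eq:interocc\] is sketched below; the abundance cuts follow the caption of Fig. \[fig:oos\](d) and are otherwise arbitrary choices of ours.

```python
# Sketch: normalized inter-occurrence distances tau_i^(k) of Eq. [eq:interocc],
# collected for all words within a given abundance range and for k > 1.
from collections import defaultdict

def inter_occurrence_distances(tokens, min_count=2, max_count=1000):
    """Return the list of tau = (l_i^(k) - l_i^(k-1)) * f_i for k >= 2."""
    M = len(tokens)
    positions = defaultdict(list)
    for pos, w in enumerate(tokens, start=1):
        positions[w].append(pos)          # l_i^(1), l_i^(2), ... for each word
    taus = []
    for w, pos_list in positions.items():
        n = len(pos_list)
        if not (min_count <= n <= max_count):
            continue
        f = n / M
        for k in range(1, n):             # consecutive appearances, skipping k = 1
            taus.append((pos_list[k] - pos_list[k - 1]) * f)
    return taus

# For randomly ordered words the histogram of taus is close to exp(-tau);
# an excess at small tau signals the burstiness discussed in the text.
```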
The SSR model cannot reproduce the empirical clustering of words, and in fact its prediction for the inter-occurrence distances is well approximated by the exponential random expectation (Fig. \[fig:oos\](d)). This means that, at this scale of observation, the ordering of components in SSR realizations is compatible with random ordering. In fact, the inter-occurrence distance distribution from a SSR realization is not significantly different from its reshuffled version (dashed purple line in Fig. \[fig:oos\](d)), in which the temporal order of components is randomized. The small deviation between the reshuffled realizations and the theoretical exponential expectation at small distances is due to the presence of a frequency-dependent lower bound in Eq. \[eq:interocc\], i.e., $\tau_k^{(min)}=f$. On a finer scale, this equivalence between the SSR model and the random ordering should be violated by the presence of cascades of decreasing order of selected states, during which, for example, the same component cannot be selected multiple times (for $\mu\simeq1$). However, this effect seems not strong enough to introduce substantial deviations from a random model. In conclusion, also for this observable, the correlation structure induced by the SSR model is negligible, and the model predicts that components are approximately homogeneously scattered across its realizations.
Discussion
==========
The presence of common or universal statistical patterns in complex component systems across different fields has attracted a lot of attention [@Mazzolini2018; @Holovatch2017; @altmann2015statistical; @koonin2011], and several alternative mechanisms have been proposed to be at the origin of these laws. Besides the inherent interest in understanding the generative processes at the basis of these emergent patterns, simple and parameter-poor models of these systems are also extremely useful as statistical “null” models that can be used to disentangle general statistical effects from system-specific patterns due to functional or architectural constraints [@Mazzolini2018]. This is particularly true in genomics, where one is typically interested in identifying the features that have been selected by evolution to perform specific biological functions [@koonin2011].
In this paper, we have shown that the SSR mechanism can be added to the list of the simple statistical models that can jointly reproduce Zipf’s and Heaps’ laws [@tria2014dynamics; @CosentinoLagomarsino2009; @gerlach2013stochastic; @Iacopini2018; @Mazzolini2018a; @zanette2005dynamics]. Moreover, the SSR model is an appropriate modelling framework to analyze properties of the statistics of shared components, which characterize the number of components in common to a given fraction of realizations. In particular, the SSR mechanism can naturally produce the U-shaped distribution of occurrences that has been observed and intensively studied in genomics [@pang2013universal; @Haegeman2012; @Collins2012; @Lobkovsky2013; @Baumdicker2012; @Mazzolini2018]. This model property marks a relevant difference with respect to commonly used models based on an innovation-duplication dynamics inspired by the classic Yule-Simon model [@yule1925mathematical; @simon1955class; @gerlach2013stochastic; @zanette2005dynamics], the Chinese Restaurant Process [@pitman1997two; @CosentinoLagomarsino2009] or the Polya urn scheme [@polya1930quelques; @johnson1977urn; @tria2014dynamics]. In fact, in these models the components (or states) can be distinguished only through their occupation numbers, while the SSR model, without adding much complexity, has an inherent labelling of the states that allows one to compare the component composition of independent process realizations.
The precise links between several features of these different statistical patterns generated by the SSR mechanisms are well approximated by analytical expressions that neglect possible correlation structures in the model. In other words, a random sampling framework that only assumes the component abundance distribution set by the model seems to capture other statistical model properties, such as the average number of states discovered in time. Similarly, the theoretical relation that is often used in linguistics to connect Zipf’s law and Heaps’ law is based on an equivalent random sampling framework [@eliazar2011growth; @van2005formal; @lu2010zipf; @font2014log]. Interestingly, also when these statistical patterns are generated with more complex models explicitly based on networks of component dependencies [@Iacopini2018; @Mazzolini2018a], thus with a strong intrinsic correlation structure, they do not significantly deviate from the random sampling prediction [@Mazzolini2018a]. This surprising phenomenology suggests that average statistical laws, such as Zipf’s and Heaps’ laws, do not contain enough information about the microscopic dynamics to clearly distinguish between alternative generative mechanisms. High-order statistical observables, such as two-point correlations between components [@Mazzolini2018a] or fluctuation scalings [@gerlach2014] could thus be necessary to actually select the more appropriate model for a given empirical component system.
Following this line of reasoning, we introduced a measure of local density of components along a temporally-ordered realization, focusing on the specific empirical example of texts of natural language. The temporal distribution of components of different frequencies can clearly distinguish realizations of the SSR process with respect to realizations built with a preferential attachment mechanism. In fact, the rich-gets-richer scenario leads to a high density of low-frequency words at the end of realizations. We showed that the SSR model does not introduce this bias, which is indeed not present in real texts. This result identifies the SSR model as a better representation of the text generation process with respect to models based on preferential attachment that are often used in this context [@zanette2005dynamics; @gerlach2013stochastic].
While the SSR mechanism seems remarkably effective in reproducing several average empirical trends despite its simplicity, it is reasonable to expect that its two-parameter formulation has to be extended to fully capture all statistical properties of complex systems such as language. To identify a possible direction for future model extensions, we analyzed the inter-occurrence distance distribution in the model and in empirical data. In real texts, this distribution deviates from the uncorrelated scenario of words randomly scattered along the text. In fact, it is characterized by an enrichment for short distances that is due to the tendency of instances of the same word to cluster. The presence of topic-dependent structures, epitomized by the subdivision in paragraphs and chapters, has been suggested as a possible origin of the temporal correlation patterns observed in texts [@alvarez2006hierarchical; @altmann2012origin]. The SSR process clearly does not encode any of these complex features, and consistently we showed that it cannot reproduce the empirical “burstiness” of word appearances.
This limitation of the model suggests a possible direction for future extensions of its basic formulation. One possibility would be to include a long-term memory in the state selection process in order to introduce temporal autocorrelations. A similar approach has been explored to extend the Yule-Simon model [@cattuto2006yule]. An alternative route could be to consider an underlying spatial organization of the sample space over which the dynamics unfolds. Along this line, a model specifically designed for text generation has been studied [@thurner2015understanding]. The model is inspired by the SSR mechanism, but it is essentially a random walk over empirical networks of words, in which a link is present if two words are found to be consecutive in the text at least once. While a relation between the network structure and the emergence of Zipf’s law has been found, other emergent statistical properties of the model and their direct comparison with data are still to be characterized. More general models based on the presence of a network of component dependencies have recently been studied [@Mazzolini2018a; @Iacopini2018], showing that they can reproduce Heaps’ and Zipf’s laws. Moreover, an edge-reinforced random walk on a complex dependency network can also generate non-random inter-occurrence distance distributions [@Iacopini2018]. Identifying the precise relations between these different network-based models, their key different predictions, the specific role of topology, and how these models are related to the general SSR principle are all interesting directions for future investigations.
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Marco Gherardi, Jacopo Grilli and Marco Cosentino Lagomarsino for useful discussions. This work was supported by the “Departments of Excellence 2018 - 2022” Grant awarded by the Italian Ministry of Education, University and Research (MIUR) (L. 232/2016)
[^1]: To whom correspondence should be addressed. Email: mosella@to.infn.it
|
---
abstract: 'In this paper we discuss a problem mentioned by Eliashberg in his paper [@Eliashberg]. He has asked if two completed Weinstein structures $(\hat{X},\lambda_0,\phi_0)$ and $(\hat{X},\lambda_1,\phi_1)$ on the same symplectic manifold $(\hat{X},\omega)$ can be homotoped through Weinstein structures. We discuss this problem and prove a weak partial result by assuming some additional conditions.'
address: |
Presidency University, Kolkata, India.\
e-mail:mukherjeesauvik@gmail.com\
author:
- Sauvik Mukherjee
title: Weinstein Homotopies
---
introduction {#1}
============
In this paper we discuss a problem mentioned by Eliashberg in his paper [@Eliashberg]. He has asked if two completed Weinstein structures $(\hat{X},\lambda_0,\phi_0)$ and $(\hat{X},\lambda_1,\phi_1)$ on the same symplectic manifold $(\hat{X},\omega)$ can be homotoped through Weinstein structures. We discuss this problem and prove a weak partial result by assuming some additional conditions.\
We begin with the basic definitions. Let $(X,\omega)$ be a $2n$-dimensional symplectic domain with boundary, with an exact symplectic form $\omega$ and a primitive form $\lambda$, i.e., $d\lambda=\omega$.\
A Liouville form is a choice of a primitive form $\lambda$ such that $\lambda_{\mid \partial X}$ is a contact form on $\partial X$ and the orientation on $\partial X$ by the form $\lambda \wedge d\lambda^{n-1}_{\mid \partial X}$ coincides with its orientation as the boundary of $(X,\omega)$. The $\omega$-dual vector field $Z$ of $\lambda$ is called the Liouville vector field. $Z$ satisfies $L_Z\omega=\omega$ and hence its flow is conformally symplectically expanding.\
Every Liouville domain $X$ can be completed in the following way. Set $$\hat{X}=X\cup(\partial X\times [0,\infty))$$ and extend $\lambda$ on $\hat{X}$ as $e^s(\lambda_{\mid \partial X})$ on the attached end. Given a Liouville domain $\mathcal{L}=(X,\omega,\lambda)$, consider the compact set $$Core(\mathcal{L})=\cap_{t>0}Z^{-t}(X)$$ It is called the core or the skeleton of the Liouville domain.\
Let $\lambda_0$ and $\lambda_1$ be two Liouville forms on a fixed symplectic manifold $(X,\omega)$, moreover let $Z_0$ and $Z_1$ be the respective Liouville vector fields. Then obviously $\lambda_1=\lambda_0+dh$ for some $h:X\to \mathbb{R}$ and $Z_1=Z_0+Z_h$ where $Z_h$ is the hamiltonian vector field for $h$.\
A Liouville cobordism $(W,\omega,Z)$ is a cobordism $W$ with an exact symplectic form $\omega$ such that the Liouville vector field $Z$ points inward along $\partial_{-}W$ and outward along $\partial_{+}W$.\
\[completed liouville\] On the infinite end of $\hat{X}$, the Liouville vector field is given by $\partial_s$ irrespective of the choice of the Liouville form $\lambda$ on $X$.
Now we shall define the Weinstein structures. For this we need to recall a few notions. A complete vector field is a vector field whose flow exists for all forward and backward time.\
Let $\phi$ be a Morse function. A vector field $X$ is called gradient-like for $\phi$ if it satisfies $$X.\phi\geq \delta (|X|^2+|d\phi|^2)$$ for some $\delta >0$, where $|X|$ is taken with respect to some Riemannian metric and $|d\phi|$ with respect to its dual metric.\
([@Kai]) \[Weinstein-def\] A Weinstein manifold $(X,\omega,Z,\phi)$ is a symplectic manifold $(X,\omega)$ with a complete Liouville vector field $Z$ which is gradient-like with respect to the exhausting Morse function $\phi$. A Weinstein cobordism $(W,\omega,Z,\phi)$ is a Liouville cobordism $(W,\omega,Z)$ whose Liouville vector field $Z$ is gradient-like with respect to a Morse function $\phi$ which is constant on the boundary. A Weinstein cobordism with $\partial_{-}W=\emptyset$ (empty) is called a Weinstein domain.
In [@Eliashberg] Eliashberg has asked the following question.\
[**Problem:**]{} Let $(\hat{X},\lambda_0,\phi_0)$ and $(\hat{X},\lambda_1,\phi_1)$ be two completed Weinstein structures on the same symplectic manifold $(\hat{X},\omega)$. Are they homotopic as Weinstein structures?\
Obviously, if $Z_0$ and $Z_1$ are the respective Liouville vector fields, then $Z_1=Z_0+Z_h$ for $h$ satisfying $\lambda_1=\lambda_0+dh$ and hence $$Z_t=Z_0+Z_{th}=Z_0+tZ_{h},\ t\in [0,1]$$ gives a homotopy of Liouville vector fields. However, $(X,\omega,Z_t)$ may not be a Liouville homotopy. We refer the reader to [@Kai] for a precise definition of Liouville homotopy.\
On Weinstein cobordisms a similar result has been proved in [@Kai], although the Weinstein structures need to be flexible. We refer the reader to [@Kai] for a precise definition of flexible Weinstein structures.
([@Kai]) \[flexible\] Let $(W,\omega_0,\lambda_0,\phi_0)$ and $(W,\omega_1,\lambda_1,\phi_1)$ be two flexible Weinstein structures on the same cobordism $W$ with dimension $2n>4$ which coincide on $Op(\partial_{-}W)$. Let $\eta_t$ be a homotopy rel $Op(\partial_{-}W)$ of non-degenerate two forms on $W$ connecting $\omega_0$ and $\omega_1$. Then there exists a homotopy of flexible Weinstein structures connecting the given ones.
Let us now return to the question asked by Eliashberg. We assume that all the zeros of $Z_t$ are non-degenerate for all $t\in [0,1]$. So the zeros of $Z_t$ trace out curves $\gamma_i(t),\ i=1,...,k$ (say). We consider $\tilde{X}=\hat{X}\times [0,1]$ and define the vector field $Z(x,t)=Z_t(x)$ on $\tilde{X}$. The curves $\gamma_i$’s define curves $\Gamma_i$’s on $\tilde{X}$ as follows $$\Gamma_i(t)=(\gamma_i(t),t)$$ Consider two tubular neighborhoods of $\Gamma_i$ as $\Gamma_i\subset N'_i\subset N''_i$. Let $\Psi_i:\tilde{X}\to \mathbb{R}$ be cutoff functions such that $\Psi_i=1\ on\ N'_i$ and $\Psi_i=0\ outside\ N''_i$. Define $\tilde{Z}$ on $\tilde{X}$ by canonically removing the zeros of $Z$ as follows. Define $\tilde{Z}$ close to $\Gamma_i$ as $$\tilde{Z}(x,t)=\Psi_i(x,t)\partial_t+(1-\Psi_i(x,t))Z(x,t)$$ Let $\tilde{\mathcal{F}}$ be the foliation defined by $\tilde{Z}$. Then $\tilde{\mathcal{F}}$ is a regular foliation.
\[Liou-Uni-Open\] We call the homotopy of the Liouville vector field $Z_t$ uniformly open if it satisfies
1. All the zeros of $Z_t$ are non-degenerate for all $t\in [0,1]$\
2. The foliation $\tilde{\mathcal{F}}\times \tilde{\mathcal{F}}$ on $\tilde{X}\times \tilde{X}$ is uniformly open\
Please see \[Uniform open\] below for the definition of a uniformly open foliation. Now we state the main theorem of this paper.
\[Main\] Let $(\hat{X},\lambda_0,\phi_0)$ and $(\hat{X},\lambda_1,\phi_1)$ be two completed Weinstein structures on the same symplectic manifold $(\hat{X},\omega)$ and let the homotopy of Liouville vector fields $Z_t$ be uniformly open (\[Liou-Uni-Open\]); moreover, assume that $Z_0$ and $Z_1$ do not have a common zero. Then $(\hat{X},\lambda_0,\phi_0)$ and $(\hat{X},\lambda_1,\phi_1)$ can be joined by a homotopy of Weinstein structures for which the underlying symplectic structure $\omega$ remains fixed.
In \[flexible\] the underlying symplectic structure is not fixed.
$h$-Principle
=============
This section does not have any new result, we just recall some facts from the theory of $h$-principle which we shall need in our proof.\
Let $X\to M$ be any fiber bundle, let $X^{(r)}$ be the space of $r$-jets of germs of sections of $X\to M$, and let $j^rf:M\to X^{(r)}$ be the $r$-jet extension map of the section $f:M\to X$. If $X=M\times N$ then $X^{(r)}$ is denoted by $J^r(M,N)$. A section $F:M\to X^{(r)}$ is called holonomic if there exists a section $f:M\to X$ such that $F=j^rf$. In the following we use the notation $Op(A)$ to denote an unspecified small open neighborhood of $A\subset M$.\
Let $\mathcal{R}$ be a subset of $X^{(r)}$. Then $\mathcal{R}$ is called a differential relation of order $r$. $\mathcal{R}$ is said to satisfy the $h$-principle if any section $F:M\to \mathcal{R}\subset X^{(r)}$ can be homotoped to a holonomic section $\tilde{F}:M\to \mathcal{R}\subset X^{(r)}$ through sections whose images are contained in $\mathcal{R}$. Put differently, if the space of sections of $X^{(r)}$ landing in $\mathcal{R}$ is denoted by $Sec \mathcal{R}$ and the space of holonomic sections of $X^{(r)}$ landing in $\mathcal{R}$ is denoted by $Hol \mathcal{R}$, then $\mathcal{R}$ satisfies the $h$-principle if the inclusion map $Hol \mathcal{R}\hookrightarrow Sec \mathcal{R}$ induces an epimorphism on the $0$-th homotopy group $\pi_0$. $\mathcal{R}$ satisfies the parametric $h$-principle if $\pi_k(Sec \mathcal{R}, Hol \mathcal{R})=0$ for all $k\geq 0$.\
Let $p:X\to M$ be a fiber bundle and by $Diff_MX$ we denote the fiber preserving diffeomorphisms $h_X:X\to X$, i.e, $h_X\in Diff_MX$ if and only if there exists diffeomorphism $h_M:M\to M$ such that the following diagram commutes
$$\xymatrix@=2pc@R=2pc{
& X\ar@{->}[rr]^{h_X} \ar@{->}[d]_{p} & & X \ar@{->}[d]^{p}\\
& M\ar@{->}[rr]^{h_M} & & M\\
}$$
Let $\pi:Diff_MX\to Diff M$ be the projection $h_X\mapsto h_M$. We call a fiber bundle $p:X\to M$ natural if there exists a homomorphism $j:Diff M\to Diff_MX$ such that $\pi \circ j=id$. For a natural fiber bundle $p:X\to M$ the associated jet bundle $X^{(r)}\to M$ is also natural. The lift is given by $$j^r:Diff M\to Diff_MX^{(r)},\ h\mapsto h_*$$ where $h_*(s)=J^r_{j(h)\circ \bar{s}}(h(m))$, $s\in X^{(r)}$, $m=p^r(s)\in M$ and $\bar{s}$ is a local section near $m$ which represents the $r$-jet $s$. Observe $(h^{-1})_*=(h_*)^{-1}$ and hence define $h^*=h_*^{-1}$.\
For a natural fiber bundle $X\to M$, a differential relation $\mathcal{R}\subset X^{(r)}$ is called $Diff M$-invariant if the action $s\mapsto h_*s,\ h\in Diff M$, leaves $\mathcal{R}$ invariant.\
([@Gromov]) \[gromov\] If a relation $\mathcal{R}$ is open and $Diff M$-invariant on an open manifold $M$ then it satisfies parametric $h$-principle.
Bertelson’s Uniformly Open Foliations
=====================================
In this section we recall some results from [@Bertelson] and [@Bertelson1].\
([@Bertelson]) \[Uniform open\] A foliated manifold $(M,\mathcal{F})$ is called uniformly open if there exists a function $f:M\to [0,\infty)$ such that
1. $f$ is proper,\
2. $f$ has no leafwise local maxima,\
3. $f$ is $\mathcal{F}$-generic.
\[\*\] Observe that if $dim \mathcal{F}=1$ then $(M,\mathcal{F})$ cannot be uniformly open, since on a one-dimensional manifold a critical point will be either a local maximum or a local minimum.
So let us explain the notion $\mathcal{F}$-generic. In order to do so we need to define the singularity set $\Sigma^{(i_1,i_2,...,i_k)}(f)$ for a map $f:M\to W$. $\Sigma^{i_1}(f)$ is the set $$\{p\in M:dim(ker(df)_p)=i_1\}$$ It was proved by Thom [@Thom] that for most maps $\Sigma^{i_1}(f)$ is a submanifold of $M$. So we can restrict $f$ to $\Sigma^{i_1}(f)$ and construct $\Sigma^{(i_1,i_2)}(f)$ and so on. In [@Thom] it has been proved that there exists $\Sigma^{(i_1,...,i_k)}\subset J^k(M,W)$ such that $(j^kf)^{-1}\Sigma^{(i_1,...,i_k)}=\Sigma^{(i_1,...,i_k)}(f)$.\
Let us set $W=\mathbb{R}$ as this is the only situation we need. Let $(M,\mathcal{F})$ be a foliated manifold with a leaf $F$. Define the restriction map $$r_F:J^k(M,\mathbb{R})\to J^k(F,\mathbb{R}):j^kf(x)\mapsto j^k(f_{\mid F})(x)$$ Define foliated analogue of the singularity set as $$\Sigma^{(i_1,i_2,...,i_k)}_{\mathcal{F}}:=\cup_{\{F\ leaf\ of\ \mathcal{F}\}} r_F^{-1}\Sigma^{(i_1,i_2,...,i_k)}$$
([@Bertelson]) A smooth real valued function $f:M\to \mathbb{R}$ is called $\mathcal{F}$-generic if the first jet $j^1f \pitchfork \Sigma^{(n)}_{\mathcal{F}}$ and the second jet $j^2f \pitchfork \Sigma^{(i_1,i_2)}_{\mathcal{F}}$ for all $(i_1,i_2)$.
([@Bertelson]) \[Foliated invariant\] An isotopy of the manifold $M$ is a family $\psi_t,\ t\in [0,1]$ of diffeomorphisms of $M$ such that the map $\psi:M\times [0,1]\to M\ :(x,t)\mapsto \psi_t(x)$ is smooth and $\psi_0=id_M$. Consider a foliation $\mathcal{F}$ on $M$. A foliated isotopy of $(M,\mathcal{F})$ is an isotopy $\psi_t$ of $M$ that preserves the foliation $\mathcal{F}$, that is, $(\psi_t)_*(T\mathcal{F})=T\mathcal{F}$ for all $t\in [0,1]$. A relation $\mathcal{R}$ is called foliated invariant on $(M,\mathcal{F})$ if the action by foliated isotopies leaves $\mathcal{R}$ invariant.
([@Bertelson]) \[bertelson\] On a uniformly open foliated manifold, any open, foliated invariant differential relation satisfies the parametric $h$-principle.
In [@Bertelson1] Bertelson has constructed counterexamples showing that without the uniformly open condition \[bertelson\] fails.\
Main Theorem
============
In this section we prove \[Main\]. Let us first set some notation. First of all we have the Liouville vector fields $Z_0$ and $Z_1=Z_0+Z_h$, and we let $Z_t=Z_0+tZ_h$ be the homotopy of Liouville vector fields, which we assume to be uniformly open. So we have
1. $Z_0.\phi_0\geq \delta (|Z_0|^2+|d\phi_0|^2)$\
2. $Z_0.\phi_1+Z_h.\phi_1\geq \delta' (|Z_0+Z_h|^2+|d\phi_1|^2)$\
Equality occurs in the above inequalities at the zeros of $Z_0$ and $Z_0+Z_h$. Define $$\phi_t=(1-t)\phi_0+t\phi_1$$ Then observe that $$Z_0.d\phi_t=(1-t)Z_0.d\phi_0+tZ_0.d\phi_1$$ Now consider $$\begin{array}{rcl}
Z_0.d\phi_t+Z_{th}.d\phi_1 &=& (1-t)Z_0.d\phi_0+t[Z_0.d\phi_1+Z_h.d\phi_1]\\
&\geq & (1-t)\delta (|Z_0|^2+|d\phi_0|^2)+t\delta' (|Z_0+Z_h|^2+|d\phi_1|^2)\\
&\geq & min(\delta,\delta')[(1-t)|Z_0|^2+t|Z_0+Z_h|^2+(1-t)|d\phi_0|^2+t|d\phi_1|^2]
\end{array}$$ So we get
$$\begin{array}{rcl}
\frac{(Z_0.d\phi_t+Z_{th}.d\phi_1)}{|Z_0+tZ_h|^2+|d\phi_t+d\phi_1|^2} &\geq & min(\delta,\delta')[\frac{(1-t)|Z_0|^2+t|Z_0+Z_h|^2}{(1-t)^2|Z_0|^2+t^2|Z_0+Z_h|^2+2t(1-t)|Z_0||Z_0+Z_h|+|d\phi_t+d\phi_1|^2}\\
& & +\frac{(1-t)|d\phi_0|^2+t|d\phi_1|^2}{(1-t)^2|Z_0|^2+t^2|Z_0+Z_h|^2+2t(1-t)|Z_0||Z_0+Z_h|+|d\phi_t+d\phi_1|^2}]
\end{array}$$
Recall that (according to \[completed liouville\]) on the infinite end of $\hat{X}$ the Liouville vector fields $Z_0$ and $Z_0+Z_h$ are equal to $\partial_s$. Moreover, since $Z_0$ and $Z_1$ do not have a common zero, $|d\phi_t+d\phi_1|>0$, and since the $\Gamma_i$’s are compact, $|d\phi_t+d\phi_1|$ is bounded below.\
So the right hand side of the above inequality is bounded below by a positive constant, which we denote by $\tilde{\delta}$. So we get $$(Z_0.d\phi_t+Z_{th}.d\phi_1)\geq \tilde{\delta} [|Z_0+tZ_h|^2+|d\phi_t+d\phi_1|^2]$$
Without loss of generality we assume that $Z_0\pitchfork Z_h$; otherwise we can use a relative version of the $h$-principle.\
We replace $t$ by a new parameter $t'=f(t)$ where $f:[0,1]\to [0,1]$ is such that $f=0$ on $[0,\epsilon]$ and $f=1$ on $[1-\epsilon,1]$. We can replace the parameter in the above inequality.\
Define one-forms $\alpha_{t'}$ and $\eta_{t'}$ as follows. First, $\alpha_{t'}$ is defined by $\alpha_{t'}(Z_0)=d\phi_{t'}(Z_0)$, $Z_{h}\in ker(\alpha_{t'}),\ for\ t\in [\epsilon,1-\epsilon],\ t'=f(t)$ and $\alpha_0=d\phi_0,\ \alpha_1=d\phi_1$. Similarly, $\eta_{t'}(Z_{t'h})=\eta_{t'}(t'Z_h)=d\phi_1(t'Z_h)$, $Z_0\in ker\eta_{t'}\ for\ t\in [\epsilon,1-\epsilon],\ t'=f(t)$ and $\eta_0=d\phi_1=\eta_1$. So we have $$(\alpha_{t'}+\eta_{t'})(Z_0+t'Z_h)=(\alpha_{t'}(Z_0)+\eta_{t'}(Z_{t'h}))\geq \tilde{\delta} [|Z_0+t'Z_h|^2+|\alpha_{t'}+\eta_{t'}|^2]$$ Now extending on $\tilde{X}$ and regularizing $Z_{t'}=Z_0+t'Z_h$ we get $\tilde{Z}$ as in \[1\]. We extend $\alpha_{t'}$ and $\eta_{t'}$ to $\tilde{X}$ as $\alpha'(x,t')=\alpha_{t'}(x)\ and\ \eta'(x,t')=\eta_{t'}(x)$. We adjust $\alpha'$ and $\eta'$ near the $\Gamma_i$’s (\[1\]) to $\tilde{\alpha}$ and $\tilde{\eta}$ so that $$(\tilde{\alpha}+\tilde{\eta})(\tilde{Z})>\tilde{\delta}[|\tilde{Z}|^2+|\tilde{\alpha}+\tilde{\eta}|^2]$$
Now we come to the $h$-principle part. Consider $M=\tilde{X}\times \tilde{X}$ and the trivial bundle $P\times P:M\times \mathbb{R}^2=\tilde{X}\times \mathbb{R}\times \tilde{X}\times \mathbb{R}\to M$ where $P:\tilde{X}\times \mathbb{R}\to \tilde{X}$ is the projection on the first factor. Observe that $$(M\times \mathbb{R}^2)^{(1)}=(\tilde{X}\times \mathbb{R})^{(1)}\times (\tilde{X}\times \mathbb{R})^{(1)}$$ Note that this does not happen in case of higher order jet extensions as there will be mixed derivatives.\
Observe that the section space $\Gamma(\tilde{X}\times \mathbb{R})=C^{\infty}(\tilde{X},\mathbb{R})$. There is a natural affine fibration $L:(\tilde{X}\times \mathbb{R})^{(1)}\to T^*(\tilde{X}\times \mathbb{R})$ given by $L(j^1f(x))=df_x$ where $f\in C^{\infty}(\tilde{X},\mathbb{R})$. Define the relation $\mathcal{R}\subset (M\times \mathbb{R}^2)^{(1)}$ as $$\mathcal{R}=\{(j^1f_0,j^1f_1)\in (M\times \mathbb{R}^2)^{(1)}:L(j^1f_i)(\tilde{Z})>\tilde{\delta}[|\tilde{Z}|^2+|L(j^1f_i)|^2]\ for\ i=0,1,\ for\ some\ \tilde{\delta}>0\}$$ Obviously $(\tilde{\alpha}+\tilde{\eta},\tilde{\alpha}+\tilde{\eta})\in Sec\mathcal{R}$. Next we shall show that $\mathcal{R}$ is open and invariant under $\tilde{\mathcal{F}}\times \tilde{\mathcal{F}}$-foliated isotopy. This will conclude the proof of \[Main\] in view of \[bertelson\]. The only thing one needs to do is the following. Let $(f_0,f_1)$ be a resulting solution. Choose either $f_0$ or $f_1$, say $f_0$. Then define $f_t$ as $$f_t(x)=f_0(x,t)$$ Now we have to re-introduce the singularities. Let $g_t$ be a family of Morse functions defined near $\Gamma_i$ with the same index as the index of $Z$ along $\Gamma_i$. Let $\beta$ be a cutoff function such that $\beta=1$ on a tubular neighborhood $\mathcal{N}_i \supset N''_i$ and $\beta=0$ outside $\mathcal{N}'_i\supset \mathcal{N}_i$. Let $\beta_t(x)=\beta(x,t)$. Observe $$Z_t(\beta_tg_t+(1-\beta_t)f_t)=[\beta_tZ_t(g_t)+(1-\beta_t)Z_t(f_t)]+g_tZ_t(\beta_t)-f_tZ_t(\beta_t)$$ Observe that $Z_t(\beta_t)$ has compact support and $g_t$ is of the form $$a+x_1^2+...+x_k^2-x_{k+1}^2+...+x_{2n}^2$$ So if we take $a$ large enough then $(g_tZ_t(\beta_t)-f_tZ_t(\beta_t))>0$, and it is obviously compactly supported. So we get the desired result.\
The relation $\mathcal{R}$ is open and invariant under the action of $\tilde{\mathcal{F}}\times \tilde{\mathcal{F}}$-foliated isotopies.
Openness of $\mathcal{R}$ follows directly from the definition of $\mathcal{R}$.\
For the second part we see that $\psi_s^*(df)(\tilde{Z})=df(d\psi_s(\tilde{Z}))\geq cdf(\tilde{Z})$, where $c$ is a positive real number; it is positive because $\psi_0=id$ and $M\times\mathbb{R}^2$ is connected. So $$\psi_s^*(df)(\tilde{Z})\geq cdf(\tilde{Z})>c\tilde{\delta}[|\tilde{Z}|^2+|df|^2]$$
[99]{} Bertelson, Mélanie. A $h$-principle for open relations invariant under foliated isotopies. J. Symplectic Geom. 1(2), 369–425 (2002).
Bertelson, Mélanie. Foliations associated to regular Poisson structures. Commun. Contemp. Math. 3 (2001), no. 3, 441–456.
Eliashberg, Yakov. Weinstein manifolds revisited. arXiv.
Gromov, M. Partial Differential Relations.
Eliashberg, Y.; Mishachev, N. Introduction to the h-principle. Graduate Studies in Mathematics, 48. American Mathematical Society, Providence, RI, 2002. xviii+206 pp. ISBN: 0-8218-3227-1.
R. Thom, Les singularités des applications différentiables, Ann. Inst. Fourier (Grenoble), 6 (1955–1956) 43–87.
Cieliebak, Kai; Eliashberg, Yakov. *From Stein to Weinstein and back*, American Mathematical Society Colloquium Publications, vol. 59, American Mathematical Society, Providence, RI, 2012, Symplectic geometry of affine complex manifolds. MR 3012475.
|
---
abstract: 'The reactions $\gamma \gamma \rightarrow \eta_c \eta_c$ and $\gamma \gamma \rightarrow \eta_c + X$ are discussed within the three gluon exchange model. We give predictions for the differential cross-sections and discuss feasibility of measuring these processes at LEP2 and TESLA. The total cross-sections were estimated to be approximately equal to 40 fb and 120 fb for $\gamma \gamma \rightarrow \eta_c \eta_c$ and $\gamma \gamma \rightarrow \eta_c + X$ respectively assuming exchange of elementary gluons that corresponds to the odderon with intercept equal to unity. These values can be enhanced by a factor equal to 1.9 and 2.1 for LEP2 and TESLA energies if the odderon intercept is equal to 1.07. The estimate of cross-sections $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c \eta_c) $ and $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c X) $ for untagged $e^+$ and $e^-$ is also given.'
---
[**Possible probe of the QCD odderon singularity\
through the quasidiffractive**]{} $\eta_c$ [**production in**]{} $\gamma \gamma$ [**collisions.**]{}\
[L. Motyka]{}$^a$, [J. Kwieciński]{}$^b$,\
$^a$[*Institute of Physics, Jagellonian University, Cracow, Poland*]{}\
$^b$[*Department of Theoretical Physics,\
H. Niewodniczański Institute of Nuclear Physics, Cracow, Poland*]{}
[TPJU-1/98]{}\
[February 1998]{}
The dominant contribution in the high-energy limit of perturbative QCD is given by the exchange of interacting gluons [@LIPATOV1; @LIPATOV2]. The fact that the gluons have spin equal to unity automatically implies that the cross-sections corresponding to the exchange of elementary gluons are independent of the incident energies, while the interaction between gluons leads to an increasing cross-section. Besides the pomeron [@LIPATOV1; @LIPATOV2; @LIPATOV3; @BFKL; @GLR] one also expects the presence of the so-called “odderon” singularity [@LIPATOV1; @LIPATOV2; @LIPATOV3; @BARTELS; @KP]. In the leading logarithmic approximation the pomeron is described by the BFKL equation which corresponds to the sum of ladder diagrams with reggeized gluon exchange along the ladder. The odderon is described by the three-gluon exchange. Unlike the pomeron, which corresponds to the vacuum quantum numbers and so to positive charge conjugation, the odderon is characterised by $C=-1$ (and I=0), i.e. it carries the same quantum numbers as the $\omega$ Regge pole. The (phenomenologically) determined intercept $\lambda_{\omega}$ of the $\omega$ Regge pole is approximately equal to $1/2$ [@DOLA]. The novel feature of the odderon singularity corresponding to the gluonic degrees of freedom is the potentially very high value of its intercept $\lambda_{odd}\gg\lambda_{\omega}$. The exchange of three (noninteracting) gluons alone generates a singularity with intercept equal to unity, while the interaction between gluons described in the leading logarithmic approximation by the BKP equation [@BARTELS; @KP; @JW] is even capable of boosting the odderon intercept above unity [@GLN] [^1]. The energy dependence of the amplitudes corresponding to $C=-1$ exchange becomes similar to the diffractive ones which are controlled by the pomeron exchange.
Possible tests of both the QCD perturbative pomeron and the odderon have to rely on (semi)hard processes where the presence of a hard scale can justify the use of perturbative QCD. A very useful measurement in this respect is the very high energy exclusive photo- (or electro-) production of heavy charmonia (i.e. $J/\Psi$ [@RYSKIN; @BRODSKY] or $\eta_c$ [@MSN; @CKMS; @IKS] etc. for probing the QCD pomeron or odderon respectively). The estimate of the odderon contribution to the photo- (or electro-) production of even charge conjugation mesons does however require model assumptions about the coupling of the three gluon system to a proton. It would therefore be useful to consider a process which could in principle be calculated entirely within perturbative QCD. The relevant measurement which fulfills those criteria is the exclusive quasidiffractive production of even $C$ charmonia in $\gamma \gamma$ collisions or, to be precise, the processes $\gamma \gamma \rightarrow \eta_c \eta_c$ and $\gamma \gamma \rightarrow \eta_c + X$ [@GINZBURG; @ODDRUS]. The main purpose of our paper is to present the theoretical and phenomenological description of the double $\eta_c$ production in high energy $\gamma \gamma$ collisions and of the process $\gamma \gamma \rightarrow \eta_c + X$ with a large rapidity gap between the $\eta_c$ and the hadronic system $X$, assuming the three gluon exchange mechanism. In our paper we shall follow the formalism developed in ref. [@ODDRUS] where the production of pseudoscalar mesons in $\gamma \gamma$ collisions within the three gluon exchange mechanism is discussed in great detail.
The kinematics of the three gluon exchange diagram to the processes: $\gamma^{*}(q_1) + \gamma^{*}(q_2) \rightarrow \eta_c \eta_c$ and $\gamma^{*}(q_1) + \gamma^{*}(q_2) \rightarrow \eta_c + X$ is illustrated in Fig. 1a and Fig. 1b. The amplitude $M^{ij}$ for the process $\gamma^{*}(q_1) + \gamma^{*}(q_2) \rightarrow \eta_c \eta_c$ which corresponds to the transverse polarisation of both photons can be written in the following way: $$M^{ij}=W^2 C_{qq} \int { d^2{\mbox{\boldmath $\delta$}}_{1} d^2{\mbox{\boldmath $\delta$}}_{2} \over
{\mbox{\boldmath $\delta$}}_{1}^2{\mbox{\boldmath $\delta$}}_{2}^2{\mbox{\boldmath $\delta$}}_{3}^2 }
\Phi^i_{\gamma \eta_c}(Q_1^2,{\mbox{\boldmath $\delta$}}_{1}, {\mbox{\boldmath $\delta$}}_{2}, {\mbox{\boldmath $\Delta$}} )
\Phi^j_{\gamma \eta_c}(Q_2^2,-{\mbox{\boldmath $\delta$}}_{1}, - {\mbox{\boldmath $\delta$}}_{2}, -{\mbox{\boldmath $\Delta$}} )
\label{mij}$$ where $i$ and $j$ are the polarisation indices and $C_{qq}$ is given by: $$C_{qq}={10\over 9 \pi N_c^2} \alpha_s^3(m_c^2)
\label{cqq}$$ and ${\mbox{\boldmath $\Delta$}} $ and ${\mbox{\boldmath $\delta$}}_{i}$ denote the transverse components of $\Delta$ and $\delta_i$ while $Q_i^2 = -q_i^2$.
The relevant diagrams describing the impact factor $\Phi^{i,j}_{\gamma \eta_c}$ are given in Fig. 2. Besides the diagrams presented in Fig. 2 one has also to include those with the reversed direction of the quark lines. In the nonrelativistic approximation one gets the following expression for $\Phi^{i}_{\gamma \eta_c}$: $$\Phi^{i}_{\gamma \eta_c}=
F_{\eta_c} \sum_{k=1}^2 \epsilon_{ik} \left[
{{\mbox{\boldmath $\Delta$}} ^k\over \bar M_1^2+ {\mbox{\boldmath $\Delta$}} ^2} +
\sum_{s=1}^3 {2{\mbox{\boldmath $\delta$}}_{s} ^k - {\mbox{\boldmath $\Delta$}}^k \over
\bar M_1^2+(2{\mbox{\boldmath $\delta$}}_{s} - {\mbox{\boldmath $\Delta$}} )^2}
\right]
\label{impfi}$$ where $$F_{\eta_c} =\sqrt{
{4m_{\eta_c} \Gamma_{\eta_c \rightarrow \gamma \gamma} \over
\alpha q_c^2}}
\label{fetac}$$ and $$\bar M_1^2=4m_c^2 + Q_1^2 .
\label{barm12}$$ In equations (\[impfi\],\[fetac\],\[barm12\]) $m_c, m_{\eta_c}, \Gamma_{\eta_c \rightarrow \gamma \gamma}$ and $q_c$ denote the charmed quark mass, the mass of the $\eta_c$, the $\eta_c$ radiative width and the charge of the charm quark respectively. The formula for the impact factor $\Phi^j_{\gamma \eta_c}$ corresponding to the lower vertex is given by equation (\[impfi\]) after changing the polarisation index $i$ into $j$ and after reversing the sign of ${\mbox{\boldmath $\delta$}}_{s}$ and of ${\mbox{\boldmath $\Delta$}} $ in this equation and after changing $\bar M_1^2$ into $\bar M_2^2$ given by equation (\[barm12\]) with $Q_2^2$ instead of $Q_1^2$. The approximations leading to the formula (\[impfi\]) are discussed in [@CKMS].
For $Q_1 ^2 = Q_2 ^2 \simeq 0$ it is convenient to represent the amplitude $M^{ij}$ in the following way: $$M^{ij}=W^2\left(M_1 g^{ij} +
M_2{{\mbox{\boldmath $\Delta$}} ^i {\mbox{\boldmath $\Delta$}} ^j\over {\mbox{\boldmath $\Delta$}} ^2}\right)
\label{mijdec}$$ The corresponding formula for the differential cross-section averaged over the transverse photons polarizations for the process $\gamma \gamma \rightarrow \eta_c \eta_c$ reads: $${d\sigma \over dt} = {1\over 64 \pi} [(M_1+M_2)^2 + M_1^2]
\label{dsdt}$$ For real $\gamma-s$ we have, of course, set $Q_{1,2}^2=0$.
The process $\gamma \gamma \rightarrow \eta_c + X$ is given by the diagram of Fig. 1b and the amplitude which corresponds to this diagram can be written as below:
$$M_{\gamma \gamma \rightarrow \eta_c + X} = W^2 \tilde C_{qq}
\int {d^2{\mbox{\boldmath $\delta$}}_{1}d^2{\mbox{\boldmath $\delta$}}_{2}\over
{\mbox{\boldmath $\delta$}}_{1}^2 {\mbox{\boldmath $\delta$}}_{2}^2 {\mbox{\boldmath $\delta$}}_{3}^2}
\Phi_{\gamma \eta_c}(Q_2^2,-{\mbox{\boldmath $\delta$}}_{1}, -{\mbox{\boldmath $\delta$}}_{2}, -{\mbox{\boldmath $\Delta$}} )
\Phi_{\gamma X}(Q_1^2,{\mbox{\boldmath $\delta$}}_{1}, {\mbox{\boldmath $\delta$}}_{2}, {\mbox{\boldmath $\Delta$}} ,p_1,p_2)
\label{metax}$$
where $$\Phi_{\gamma \eta_c}= \sum_{i,j=1}^2\epsilon_{2}^i
\epsilon_{ij}\Phi^j_{\gamma \eta_c}
\label{phirel}$$ and $$\tilde C_{qq} = {10 \over 9\pi N_c ^2}[\alpha_s (m_c ^2) \tilde\alpha_s]^{3/2}.
\label{cqqbar}$$ The coupling constant $\tilde\alpha_s$ is evaluated at the hard scale characteristic of the upper vertex in Fig. 1b. The impact factor $\Phi_{\gamma X}$ is given by the following equation: $$\Phi_{\gamma X}(Q_1^2,{\mbox{\boldmath $\delta$}}_{1}, {\mbox{\boldmath $\delta$}}_{2},
{\mbox{\boldmath $\Delta$}} , p_1, p_2)=
-ieq_f \bar u(p_1)[m_f R \epsilon_1\hspace{-0.7em}/\,
+ 2z\bar Q \epsilon_1 +
{\bar Q}\hspace{-0.7em}/\,
\epsilon_1\hspace{-0.7em}/\,]
q_2\hspace{-0.7em}/\; v(p_2)
\label{phietax}$$ where $${\mbox{\boldmath $\bar Q$}} = \left[{{\mbox{\boldmath $p$}}_{1}\over m_f^2 + {\mbox{\boldmath $p$}}_{1}^2} + \sum_{s=1}^3
{{\mbox{\boldmath $\delta$}}_{s}-{\mbox{\boldmath $p$}}_{1}\over m_f^2 +
({\mbox{\boldmath $\delta$}}_{s}- {\mbox{\boldmath $p$}}_{1})^2}\right]
+({\mbox{\boldmath $p$}}_{1} \rightarrow {\mbox{\boldmath $p$}}_{2})
\label{qbar}$$ and $$R = \left[{-1\over m_f^2 + {\mbox{\boldmath $p$}}_{1}^2} + \sum_{s=1}^3
{1\over m_f^2 +({\mbox{\boldmath $\delta$}}_{s}- {\mbox{\boldmath $p$}}_{1})^2}\right]
+({\mbox{\boldmath $p$}}_1 \rightarrow {\mbox{\boldmath $p$}}_2).
\label{r}$$ In equations (\[phietax\], \[qbar\],\[r\]) $\bar Q$ is a four-vector with transverse components ${\mbox{\boldmath $\bar Q$}}$ and vanishing longitudinal components, $m_f$ denotes the mass of the (light) quarks produced as the pair of $q \bar q$ jets, $q_f$ their charge, while $z$ is the fraction of the photon four-momentum $q_1$ carried by the quark jet. The four-momenta $p_1$ and $p_2$ are those of the quark and antiquark in the final state, and ${\mbox{\boldmath $p$}}_1$, ${\mbox{\boldmath $p$}}_2$ are their transverse parts, respectively. When calculating the cross-sections we shall make the “equivalent quark approximation” [@ODDRUS; @GINZVM], which amounts to setting $m_f$ equal to zero and retaining the dominant terms in ${\mbox{\boldmath $\bar Q$}}$, i.e.: $${\mbox{\boldmath $\bar Q$}} = {{\mbox{\boldmath $p$}}_{1}\over {\mbox{\boldmath $p$}}_{1}^2} +
{{\mbox{\boldmath $p$}}_{2}\over {\mbox{\boldmath $p$}}_{2}^2}.
\label{eqqa}$$
The remaining terms in (\[qbar\]) can also be large for ${\mbox{\boldmath $\delta$}}_s \simeq
{\mbox{\boldmath $p$}}_i$ but their dependence on ${\mbox{\boldmath $\delta$}}_s$ is such that after the integration over ${\mbox{\boldmath $\delta$}}_s$ performed in (\[metax\]) they are suppressed in comparison to the leading terms.
The differential cross-section for the process $\gamma\gamma \rightarrow \eta_c + X(q\bar q)$ averaged over photon polarisations is given by the following formula: $$d\sigma = {(2\pi)^4 \over 2 W^2}\; \sum_{\{ ... \} }
M^+ _{\gamma\gamma \rightarrow \eta_c + X}
M _{\gamma\gamma \rightarrow \eta_c + X} \;
dPS_3 (\gamma\gamma \rightarrow \eta_c + X(q\bar q))
\label{dsigetax}$$ where the sum $\{ \dots \}$ runs over the incident photon helicities and the outgoing light-quark colours, flavours and polarisations, and $dPS_3 (\gamma\gamma \rightarrow \eta_c + X(q\bar q))$ is the standard parametrisation of the Lorentz-invariant three-body phase space. The decomposition property $$dPS_3 (\gamma\gamma \rightarrow \eta_c + X(q\bar q)) =
(2\pi)^3\; dM_X ^2 \; dPS_2(\gamma\gamma \rightarrow \eta_c + X)
\; dPS_2 (X \rightarrow q\bar q)
\label{decompps}$$ is employed and the integration over the invariant mass squared $M_X ^2$ of the $q\bar q$ system is performed. In the applied approximation, the remaining integration over the two-body phase space $ dPS_2 (X \rightarrow q\bar q)$ cannot be extended to the regions ${\mbox{\boldmath $p$}}_i ^2 \simeq 0$, since the integral would then be divergent. We therefore introduce, following refs. [@ODDRUS; @GINZVM], a physical cut-off $\mu$ set by the constituent masses of the light quarks, and take $\mu = 0.3\,{\rm GeV}$. The integral is bounded from above by the condition ${\mbox{\boldmath $p$}}_i ^2 < {\mbox{\boldmath $\Delta$}} ^2$, which ensures the diffractive nature of the process. The integral $\int_{\mu ^2} ^{|t|} d{\mbox{\boldmath $p$}}_1 ^2 / {\mbox{\boldmath $p$}}_1 ^2$ then yields a logarithmic factor ${\rm ln\;} |t| / \mu ^2$. Finally, the differential cross-section reads $${d\sigma \over dt} = {1 \over 64\pi} \;
\left| \tilde C_{qq} \int {d^2{\mbox{\boldmath $\delta$}}_{1}d^2{\mbox{\boldmath $\delta$}}_{2}\over
{\mbox{\boldmath $\delta$}}_{1}^2 {\mbox{\boldmath $\delta$}}_{2}^2 {\mbox{\boldmath $\delta$}}_{3}^2}
\Phi_{\gamma \eta_c}(Q_2^2,-{\mbox{\boldmath $\delta$}}_{1}, -{\mbox{\boldmath $\delta$}}_{2}, -{\mbox{\boldmath $\Delta$}} )
\right|^2 \; {2\alpha \bar e ^2 \over 3\pi}
\; {\rm ln\;} {|t| \over \mu ^2}
\label{diffetax}$$ where $\bar e ^2 = N_c (e_u ^2 + e_d ^2 + e_s ^2) = 2$ and $e_i$ are the charges of the light quarks $u$, $d$ and $s$ respectively.
In Fig. 4a we show the differential cross-section of the process $\gamma \gamma \rightarrow \eta_c \eta_c$. Unlike in the photoproduction of $\eta_c$ on a nucleon target, this cross-section does not vanish at $t=0$. The result of the theoretical calculations can be conveniently represented for $|t| < 15 \,{\rm GeV}^2$ by the exponential form: $${d\sigma \over dt} =
6.6\;{\rm fb/GeV}^2 \,{\rm exp}\, (0.167 \,{\rm GeV}^{-2}\,t).
\label{exp1}$$ For the integrated cross-section we get $\sigma^{tot}_{\eta_c \eta_c} = 43\,{\rm fb}$. The differential cross-section for the process $\gamma \gamma \rightarrow \eta_c + X$ is presented in Fig. 4b. For $|t| < 15\,{\rm GeV}^2$ it can be fitted by the exponential form: $${d\sigma \over dt} =
64\;{\rm fb/GeV}^2\,{\rm exp}\, (0.25 \,{\rm GeV}^{-2}\,t).
\label{exp2}$$ The integrated cross-section is now $\sigma^{tot}_{\eta_c X}(|t|>3\,{\rm GeV}^2) = 120\,{\rm fb}$. In our calculation we set $\alpha_s (m_c^2) = 0.38$, $\tilde\alpha_s =
0.3$, $m_c = 1.4$ GeV, $m_{\eta_c}= 2.98$ GeV and $\Gamma_{\eta_c \rightarrow \gamma\gamma}=7$ keV.
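
As a quick numerical cross-check (not part of the original calculation), one may integrate the exponential fits (\[exp1\]) and (\[exp2\]) over their stated ranges of validity; they reproduce the quoted integrated cross-sections at the level expected for approximate fits, to within roughly fifteen per cent for the first and a few per cent for the second. A minimal Python sketch:

```python
import numpy as np
from scipy.integrate import quad

# Cross-check of the integrated cross-sections against the exponential fits
# (both fits are stated to hold for |t| < 15 GeV^2).  Quoted totals in the
# text: ~43 fb for eta_c eta_c and ~120 fb for eta_c X with |t| > 3 GeV^2.

def dsdt_etac_etac(t):          # fb/GeV^2, fit (exp1), t < 0
    return 6.6 * np.exp(0.167 * t)

def dsdt_etac_X(t):             # fb/GeV^2, fit (exp2), t < 0
    return 64.0 * np.exp(0.25 * t)

sig_etac_etac, _ = quad(dsdt_etac_etac, -15.0, 0.0)
sig_etac_X, _    = quad(dsdt_etac_X, -15.0, -3.0)
print(f"integral of fit (exp1), |t| < 15 GeV^2    : {sig_etac_etac:5.1f} fb")
print(f"integral of fit (exp2), 3 < |t| < 15 GeV^2: {sig_etac_X:5.1f} fb")
```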
The calculated cross-sections are energy-independent, since they correspond to the exchange of three elementary, non-interacting gluons. In this approximation the odderon singularity has its intercept $\lambda_{odd}$ equal to unity. Interactions between the gluons can raise this intercept above unity, and one can take this effect into account approximately by multiplying the cross-sections by the enhancement factor $$A_{enh}(W^2)=\bar x^{2(1-\lambda_{odd})}
\label{aenh}$$ where $$\bar x = {M_{\eta_c}^2\over W^2}.
\label{barx}$$ The $\gamma \gamma$ system in $e^+e^-$ collisions has a continuous spectrum. The C.M. energy squared $W^2$ of the $\gamma \gamma$ system is: $$W^2=z_1z_2s
\label{w2}$$ where $z_i$ are the fractions of the electron (positron) energies carried by the exchanged photons and $s$ denotes the C.M. energy squared of the $e^+e^-$ system. The distribution of these energy fractions is described by the standard flux factors $f_{\gamma/e}(z,Q_{min},Q_{max})$ of the virtual photons, which in the equivalent photon approximation are given by the following formula [@YELLOW]: $$f_{\gamma/e}(z,Q_{min},Q_{max})={\alpha\over 2 \pi}
\left[ {1 + (1-z)^2 \over z}\;{\rm ln}\,{Q_{max}^2\over Q_{min}^2} \right]
\label{flux}$$ where we have neglected a small term proportional to the electron mass squared $m_e^2$. $Q_{min}^2$ and $Q_{max}^2$ in eq. (\[flux\]) denote the minimal and maximal values of the photon virtuality. For untagged experiments the former is given by the kinematical limit: $$Q_{min}^2 = {m_e^2 z\over 1-z}
\label{pmin}$$ and the latter by the antitagging condition $\theta_{e^{\pm}} < \theta_{max}$ which gives $$Q_{max}^2=(1-z)E_{beam}^2 \theta_{max}^2
\label{pmax}$$ where $\theta_{e^{\pm}}$ denotes the scattering angle of the outgoing $e^{\pm}$. Following ref. [@YELLOW] we set $\theta_{max}=30\,{\rm mrad}$. The cross-section for the process $e^+e^- \rightarrow e^+e^- + Y$, which for untagged $e^{\pm}$ corresponds to the production of the hadronic state $Y$ in the collision of almost real photons, is given by the following convolution integral:
$$\sigma_{e^+e^- \rightarrow e^+e^- + Y} =$$ $$\int_0^1dz_1 \int_0^1dz_2 \Theta (W^2-W_{Y0}^2)
\sigma_{\gamma \gamma \rightarrow Y}(W^2)
f_{\gamma/e}(z_1,Q_{min},Q_{max})f_{\gamma/e}(z_2,Q_{min},Q_{max}).
\label{conv}$$ In order to estimate the effective enhancement factor due to $\lambda_{odd} > 1$ we have compared the convolution integrals: $$I(s,\lambda_{odd}) =$$ $$\int_0^1dz_1 \int_0^1dz_2 \Theta\left(z_{max}-
M_{\eta_c}^2/(z_1z_2s)\right)
f_{\gamma/e}(z_1,Q_{min},Q_{max})f_{\gamma/e}(z_2,Q_{min},Q_{max})
\bar x^{2(1-\lambda_{odd})}
\label{is}$$ for $\lambda_{odd}=1$ with that calculated for $\lambda_{odd}=1.07$. We set $z_{max}=0.05$ so that the $\gamma \gamma$ system is in the high energy (i.e. Regge) region. The ratio $A=I(s,\lambda_{odd}=1.07)/I(s,\lambda_{odd}=1)$ should give the expected enhancement factor for the given value of $s$. We get $A=1.9$ and $A=2.1$ for the LEP2 and TESLA energies, respectively. The relatively small change in $A$ with increasing $s$ reflects the fact that the convolution integral is dominated by small values of $z_i$.
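
The enhancement-factor estimate can be reproduced numerically. The sketch below (an editorial illustration; the variable names and integration set-up are ours, not the original's) evaluates $I(s,\lambda_{odd})$ of eq. (\[is\]) with the flux factor (\[flux\]) and the limits (\[pmin\])–(\[pmax\]), and forms the ratio $A$; with the parameters quoted above it should come out close to $A \approx 1.9$ at LEP2 and $A \approx 2.1$ at TESLA, the exact values depending somewhat on the treatment of the integration boundaries.

```python
import numpy as np
from scipy.integrate import dblquad

alpha_em  = 1.0 / 137.036
m_e       = 0.511e-3           # electron mass [GeV]
m_etac    = 2.98               # eta_c mass [GeV]
theta_max = 30e-3              # antitagging angle [rad]
z_max     = 0.05               # Regge-region cut on xbar

def flux(z, E_beam):
    """Equivalent-photon flux f_{gamma/e}(z, Q_min, Q_max) of eq. (flux)."""
    q2_min = m_e**2 * z / (1.0 - z)                  # eq. (pmin)
    q2_max = (1.0 - z) * E_beam**2 * theta_max**2    # eq. (pmax)
    return alpha_em / (2 * np.pi) * (1 + (1 - z)**2) / z * np.log(q2_max / q2_min)

def I(sqrt_s, lam_odd):
    """Convolution integral of eq. (is); the Theta-function becomes the lower limits."""
    s, E_beam = sqrt_s**2, sqrt_s / 2.0
    z_low  = m_etac**2 / (z_max * s)                 # z1*z2 >= z_low from Theta(z_max - xbar)
    z_high = 1.0 - 1e-6                              # guard against the z -> 1 endpoint
    def integrand(z2, z1):
        xbar = m_etac**2 / (z1 * z2 * s)
        return flux(z1, E_beam) * flux(z2, E_beam) * xbar**(2 * (1 - lam_odd))
    val, _ = dblquad(integrand, z_low, z_high,
                     lambda z1: min(z_low / z1, z_high), lambda z1: z_high)
    return val

for sqrt_s in (180.0, 500.0):                        # LEP2 and TESLA
    A = I(sqrt_s, 1.07) / I(sqrt_s, 1.0)
    print(f"sqrt(s) = {sqrt_s:3.0f} GeV :  A = {A:.2f}")
```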
The magnitudes of the estimated cross-sections $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c \eta_c)$ and $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c X)$ for untagged $e^+$ and $e^-$ in the final state are summarized in Table 1. We give the values of these cross-sections for two C.M. energies of the incident leptons, corresponding to LEP2 and TESLA, and for two values of the odderon intercept, $\lambda_{odd} = 1$ and $\lambda_{odd} = 1.07$. It may be seen from this table that the cross-sections are very small, so that it may in particular be difficult to measure them with the luminosity presently available at LEP2 [@YELLOW].\
Table 1: The estimated cross-sections $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c \eta_c)$ and $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c X)$.
$\sqrt{s}$ \[GeV\] $\lambda_{odd} - 1$ $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c \eta_c)$ \[fb\] $\sigma (e^+ e^- \rightarrow e^+ e^- \eta_c X)$ \[fb\]
-------------------- --------------------- ------------------------------------------------------------- --------------------------------------------------------
180 0 1.3 4
180 0.07 2.5 7
500 0 3.5 10
500 0.07 7.4 21
To sum up, we have applied the formalism developed in ref. [@ODDRUS] to the quantitative analysis of the quasidiffractive processes $\gamma \gamma \rightarrow \eta_c \eta_c$ and $\gamma \gamma \rightarrow \eta_c X(q\bar q)$ within the three-gluon exchange mechanism. The main merit of these processes is that the corresponding cross-sections can, in principle, be calculated within perturbative QCD. We have estimated the cross-sections for the processes $e^+e^- \rightarrow e^+e^- \eta_c \eta_c$ and $e^+e^- \rightarrow e^+e^- \eta_c X$ with untagged $e^+e^-$, which were found to lie in the range 1–20 fb, depending on the incident C.M. energy $\sqrt{s}$ and on the magnitude of the odderon intercept.
Acknowledgments {#acknowledgments .unnumbered}
===============
L.M. is grateful to J. Czyżewski for inspiring discussions and to DESY Theory Division for its hospitality. This research has been supported in part by the Polish Committee for Scientific Research, grants No. 2 P03B 89 13 and 2 P30B 044 14.
[99]{} L.N. Lipatov, in “Perturbative QCD", ed. A.H. Mueller, World Scientific, Singapore 1989, p. 441. L.N. Lipatov, Proceedings of the $2^{nd}$ Cracow Epiphany Conference on Proton Structure, ed. M. Jeżabek and J. Kwieciński, Acta. Phys. Polon. [**26**]{} (1996) 1245. L.N. Lipatov, preprint DESY 96 - 132, hep-ph/9610276. E.A. Kuraev, L.N. Lipatov, V.F. Fadin, Zh. Eksp. Teor. Fiz. [**72**]{} (1977) 373; Ya.Ya. Balitzkij, L.N. Lipatov, Yad. Fiz. [**28**]{} (1978) 822; J.B. Bronzan, R.L. Sugar, Phys. Rev. [**D17**]{} (1978) 585; T. Jaroszewicz, Acta. Phys. Polon. [**B11**]{} (1980) 965. L.V. Gribov, E.M. Levin, M.G. Ryskin, Phys. Rep. [**100**]{} (1983) 1. J. Bartels, Nucl. Phys. [**B175**]{} (1980) 365. J. Kwieciński, M. Praszałowicz, Phys. Lett. [**B94**]{} (1980) 413. A. Donnachie, P.V. Landshoff, Phys. Lett. [**B296**]{} (1992) 227. R. Janik, J. Wosiek, Phys.Rev.Lett.[**79**]{} (1997) 2935. P. Gauron, L.N. Lipatov, B. Nicolescu, Phys. Lett. [**B304**]{} (1993) 334; Z. Phys. [**C63**]{} (1994) 253. M.A. Braun, hep-ph/9801352. M.G. Ryskin, Z. Phys. [**C57**]{} (1993) 89. S. Brodsky et al., Phys. Rev. [**D50**]{} (1994) 3134. A. Schäfer, L. Mankiewicz, O. Nachtmann, Proceedings of the Workshop “Physics at HERA“, Hamburg, October 29-30, 1991, Edited by W. Buchmüller and G. Ingelman; W. Kilian and O. Nachtman, hep-ph/9712371. J. Czyżewski, J. Kwieciński, L. Motyka, M. Sadzikowski, Phys. Lett. [**B398**]{} (1997) 400. R. Engel, D.Yu. Ivanov, R. Kirschner, L. Szymanowski, hep-ph/9707362. I.F. Ginzburg, D.Yu. Ivanov, Nucl. Phys. (Proc. Suppl.) [**25B**]{} (1991) 224; I.F. Ginzburg Yad. Fiz. [**56**]{} (1993) 45. I.F. Ginzburg, D.Yu. Ivanov. Nucl. Phys. [**B388**]{}(1992) 376. I.F. Ginzburg, S.L. Panfil, V.G. Serbo, Nucl. Phys. [**B284**]{} (1987) 685 Report of the Working Group on ”$\gamma \gamma$ Physics“, P. Aurenche and G.A. Schuler (conveners), Proceedings of the Workshop on ”Physics at LEP2" , editors: G. Altarelli, T. Sjöstrand and P. Zwirner, preprint CERN 96-01.
[^1]: The results of ref. [@GLN] are, however, in conflict with the recent analysis of ref. [@BRAUN].
---
abstract: 'The aim of this paper is to construct many examples of rational surface automorphisms with positive entropy by means of the concept of orbit data. We show that if an orbit data satisfies some mild conditions, then there exists an automorphism realizing the orbit data. Applying this result, we describe the set of entropy values of the rational surface automorphisms in terms of Weyl groups.'
author:
- |
Takato Uehara\
\
Graduate School of Mathematics, Kyushu University\
744 Moto-oka, Nishi-ku, Fukuoka 819-0395 Japan[^1]
title: |
**Rational Surface Automorphisms\
with Positive Entropy [^2]**
---
Introduction {#sec:intro}
============
In this paper, we consider automorphisms on compact complex surfaces with positive entropy. According to a result of S. Cantat [@C], a surface admitting an automorphism with positive entropy must be either a K3 surface, an Enriques surface, a complex torus or a rational surface. For rational surfaces, rather few examples had been known (see [@C], Section 2). However, some rational surface automorphisms with invariant cuspidal anticanonical curves have been constructed recently. Bedford and Kim [@BK1; @BK2] found some examples of automorphisms by studying an explicit family of quadratic birational maps on $\mathbb{P}^2$, and then McMullen [@M] gave a synthetic construction of many examples. More recently, Diller [@D] sought automorphisms from quadratic maps that preserve a cubic curve by using the group law for the cubic curve. We stress the point that these automorphisms can all be obtained from quadratic birational maps. The aim of this paper is to construct yet more examples of rational surface automorphisms with positive entropy from general birational maps on $\mathbb{P}^2$ preserving a cuspidal cubic curve.
Let $F : X \to X$ be an automorphism on a rational surface $X$. From results of Gromov and Yomdin [@G; @Y], the [*topological entropy*]{} $h_{\mathrm{top}}(F)$ of $F$ is calculated as $h_{\mathrm{top}}(F)= \log \lambda(F^*)$, where $\lambda(F^*)$ is the spectral radius of the action $F^* : H^2(X,\mathbb{Z}) \to H^2(X,\mathbb{Z})$ on the cohomology group. Therefore, when handling the topological entropy of a map, we need to discuss its action on the cohomology group, which can be described as an element of a Weyl group acting on a Lorentz lattice. The [*Lorentz lattice $\mathbb{Z}^{1,N}$*]{} is the lattice with the Lorentz inner product given by $$\label{eqn:lorentz}
\mathbb{Z}^{1,N}= \bigoplus_{i=0}^{N} \mathbb{Z} \cdot e_{i}, \quad \quad
(e_i,e_j) =
\left\{
\begin{array}{ll}
1 & (i=j=0), \\[2mm]
-1 & (i=j=1,\dots, N), \\[2mm]
0 & (i \neq j).
\end{array}
\right.$$ For $N \ge 3$, the [*Weyl group $W_N \subset
O(\mathbb{Z}^{1,N})$*]{} is the group generated by $(\rho_i)_{i=0}^{N-1}$, where $\rho_{i} : \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$ is the reflection defined by $$\label{eqn;refl}
\rho_i(x)=x + (x,\alpha_i) \cdot \alpha_i, \quad \quad
\alpha_i:=
\left\{
\begin{array}{ll}
e_0-e_1-e_2-e_3 & (i=0), \\[2mm]
e_i-e_{i+1} & (i=1,\dots, N-1).
\end{array}
\right.$$ We call the $W_N$-translate $\Phi_N:= \bigcup_{i=0}^{N-1} W_N \cdot \alpha_i$ of the elements $(\alpha_0, \dots,\alpha_{N-1})$ the [*root system* ]{} of $W_N$, and each element of $\Phi_N$ a [*root*]{}. On the other hand, if $\lambda(F^*)>1$, then there is a blowup $\pi : X \to \mathbb{P}^2$ of $N$ points $(p_1, \dots, p_N)$ (see [@N1]), which gives an expression of the cohomology group : $H^2(X;\mathbb{Z})=\mathbb{Z} [H] \oplus \mathbb{Z} [E_1]
\oplus \cdots \oplus \mathbb{Z} [E_N]$, where $H$ is the total transform of a line in $\mathbb{P}^2$, and $E_i$ is the total transform over $p_i$. Moreover, there is a natural marking isomorphism $\phi_{\pi} : \mathbb{Z}^{1,N} \to H^2(X,\mathbb{Z})$, sending the basis as $\phi_{\pi}(e_0) = [H]$ and $\phi_{\pi}(e_i)=[E_i]$ for $i=1, \dots, N $. It is known (see [@N2]) that there is a unique element $w \in W_N$ such that the following diagram commutes: $$\label{eqn:diag}
~~~~~~~~~~~~~~~~~~~~~~~~~~~
\begin{CD}
\mathbb{Z}^{1,N} @> w >> \mathbb{Z}^{1,N} \\
@V \phi_{\pi} VV @VV \phi_{\pi} V \\
H^2(X,\mathbb{Z}) @> F^* >> H^2(X;\mathbb{Z}).
\end{CD}
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ Then $w$ is said to be [*realized*]{} by $(\pi,F)$ (see also [@M]). A question at this stage is whether a given element $w \in W_N$ is realized by some pair $(\pi,F)$.
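
Before proceeding, the following small numerical sketch (an editorial illustration, not part of the paper) may help to fix the conventions: it implements the reflections $\rho_i$ of (\[eqn;refl\]) on $\mathbb{Z}^{1,N}$, checks that each one is an involution preserving the Lorentz inner product, and computes the spectral radius of a sample product of all generators, which for $N=10$ already exceeds $1$ (about $1.176$).

```python
import numpy as np

# Reflections rho_i of the Weyl group W_N acting on the Lorentz lattice Z^{1,N}.
# All names below are ours.

def lorentz_form(N):
    return np.diag([1.0] + [-1.0] * N)

def simple_roots(N):
    """alpha_0 = e_0 - e_1 - e_2 - e_3, alpha_i = e_i - e_{i+1} (i = 1..N-1)."""
    roots = []
    a0 = np.zeros(N + 1); a0[[0, 1, 2, 3]] = [1, -1, -1, -1]
    roots.append(a0)
    for i in range(1, N):
        a = np.zeros(N + 1); a[i], a[i + 1] = 1.0, -1.0
        roots.append(a)
    return roots

def reflection_matrix(alpha, J):
    """rho(x) = x + (x, alpha) alpha, written as a matrix on column vectors."""
    return np.eye(len(alpha)) + np.outer(alpha, alpha) @ J

N = 10
J = lorentz_form(N)
rhos = [reflection_matrix(a, J) for a in simple_roots(N)]

# each rho_i is an involution and an isometry of the Lorentz inner product
for R in rhos:
    assert np.allclose(R @ R, np.eye(N + 1))
    assert np.allclose(R.T @ J @ R, J)

# spectral radius of the product of all generators (a Coxeter element);
# for N = 10 it comes out slightly above 1 (about 1.176)
w = np.linalg.multi_dot(rhos)
print("lambda(w) =", max(abs(np.linalg.eigvals(w))))
```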
Again let us consider a blowup $\pi : X \to\mathbb{P}^2$ and an automorphism $F : X \to X$. Through $\pi$, $F$ descends to a birational map $f : \mathbb{P}^2 \to \mathbb{P}^2$ on the projective plane $\mathbb{P}^2$, and it, in turn, is expressed as a composition $f=f_n \circ f_{n-1} \circ \cdots \circ f_1 : \mathbb{P}^2 \to \mathbb{P}^2$ of quadratic birational maps $f_i : \mathbb{P}_{i-1}^2 \to \mathbb{P}_i^2$ with $\mathbb{P}_i^2 = \mathbb{P}^2$ from Noether’s theorem. Since the inverse of any quadratic map is also a quadratic map and a quadratic map has three points of indeterminacy, we denote the indeterminacy sets of $f_i$ and of $f_i^{-1}$ by $I(f_i)=\{ p_{i,1}^{+},p_{i,2}^{+},p_{i,3}^{+} \} \subset \mathbb{P}_{i-1}^2$ and $I(f_i^{-1})=\{ p_{i,1}^{-},p_{i,2}^{-},p_{i,3}^{-} \}
\subset \mathbb{P}_{i}^2$ respectively. Write $p_{\overline{\iota}}^{\pm}=p_{i,\iota}^{\pm}$ with $\overline{\iota} \in \mathcal{K}(n)
:=\{ \overline{\iota}=(i,\iota) \, | \, i=1,2, \dots, n, \, \iota=1,2,3 \}$. Then, there is a unique permutation $\sigma$ of $\mathcal{K}(n)$ and a unique function $\mu : \mathcal{K}(n) \to \mathbb{Z}_{\ge 0}$ such that the following condition holds for any $\overline{\iota} \in \mathcal{K}(n)$: $$\label{eqn:orbit}
p_{\overline{\iota}}^{m} \neq p_{\overline{\iota}'}^+ \quad
(0 \le m < \mu(\overline{\iota}), \, \overline{\iota}' \in \mathcal{K}(n)),
\qquad p_{\overline{\iota}}^{\mu(\overline{\iota})}=
p_{\sigma(\overline{\iota})}^+,$$ where $p_{\overline{\iota}}^{0}:=p_{\overline{\iota}}^{-}$, and for $m \ge 1$, $p_{\overline{\iota}}^{m}$ is defined inductively by $p_{\overline{\iota}}^{m}:=f_r(p_{\overline{\iota}}^{m-1}) \in \mathbb{P}_r^2$ with $r \equiv i+m~\text{mod}~n$. Moreover, we denote by $\kappa(\overline{\iota})$ the number of points among $p_{\overline{\iota}}^0, p_{\overline{\iota}}^1,\dots,
p_{\overline{\iota}}^{\mu(\overline{\iota})}$ lying on $\mathbb{P}_n^2$ or, in other words, $\kappa(\overline{\iota})=(\mu(\overline{\iota})+i+1-i_1)/n$ with $(i_1,\iota_1):=\sigma(\overline{\iota})$. It is easy to see that $\kappa(\overline{\iota}) \ge 1$ provided $i_1 \le i$. This observation leads us to the following definition.
\[def:data\] An [*orbit data*]{} is a triplet $\tau=(n,\sigma,\kappa)$ consisting of
- a positive integer $n$,
- a permutation $\sigma$ of $\mathcal{K}(n):=\{ (i,\iota) \, | \, i=1,2, \dots, n, \, \iota=1,2,3 \}$, and
- a function $\kappa : \mathcal{K}(n) \to \mathbb{Z}_{\ge 0}$ such that $\kappa(\overline{\iota}) \ge 1$ provided $i_1 \le i$, where $(i_1,\iota_1) =\sigma(\overline{\iota})$.
Note that an orbit data $\tau$ recovers the function $\mu : \mathcal{K}(n) \to \mathbb{Z}_{\ge 0}$ given by $\mu(\overline{\iota})= \kappa(\overline{\iota}) \cdot n + i_1 -i-1$.
\[def:real’\] An $n$-tuple $\overline{f}=(f_1, \dots,f_n)$ of quadratic birational maps $f_i$ is called a [*realization*]{} of an orbit data $\tau$ if condition (\[eqn:orbit\]) holds for any $\overline{\iota} \in \mathcal{K}(n)$.
A question here is whether a given orbit data $\tau$ admits some realization $\overline{f}$.
To answer this, we consider a class of birational maps preserving a cuspidal cubic $C$ on $\mathbb{P}^2$. Let $\mathcal{Q}(C)$ be the set of quadratic birational maps $f : \mathbb{P}^2 \to \mathbb{P}^2$ satisfying $f(C)=C$ and $I(f) \subset C^*$, where $C^*$ is the smooth locus of $C$. The smooth locus $C^*$ is isomorphic to $\mathbb{C}$ and is preserved by any map $f \in \mathcal{Q}(C)$. Thus, the restriction $f|_{C^*}$ is an automorphism expressed as $$f|_{C^*} : C^* \to C^*, \quad t \mapsto \delta(f) \cdot t + k_f$$ for some $\delta(f) \in \mathbb{C}^\times$ and $k_f \in \mathbb{C}$. For an $n$-tuple $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$, we define the [*determinant*]{} of $\overline{f}$ by $\delta(\overline{f}):=\prod_{i=1}^n \delta(f_i)$. Moreover, to state our main theorems, we introduce the condition $$\label{eqn:condi}
~~~~~~~~~~~~~~~~~~~~~~~~~~~
w_{\tau}^{\ell_{\tau}}(\alpha) \neq \alpha
\quad \quad (\alpha \in \Gamma_{\tau}),
where $w_{\tau}$ is an element of $W_N$ with $N:=\sum_{\overline{\iota} \in \mathcal{K}(n)} \kappa(\overline{\iota})$, $\ell_{\tau}$ is a positive integer and $\Gamma_{\tau}$ is a [*finite*]{} subset of $\Phi_N$, all of which are canonically determined by $\tau$. These definitions will be given in Section \[sec:def\] (see Definitions \[def:latiso\], \[def:mpi\] and \[def:root\]). Condition (\[eqn:condi\]) is referred to as the [*realizability condition*]{}, for reasons that become clear in the following theorem.
\[thm:main1\] Assume that an orbit data $\tau$ satisfies $\lambda (w_{\tau}) > 1$ and the realizability condition (\[eqn:condi\]). Then, there is a unique realization $\overline{f}_{\tau}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$ of $\tau$ such that $\delta(\overline{f}_{\tau})=\lambda(w_{\tau})$. Moreover, $\tau$ determines a blowup $\pi_{\tau} : X_{\tau} \to \mathbb{P}^2$ of $N$ points on $C^*$ in a canonical way, which lifts $f_{\tau}:=f_n \circ \cdots \circ f_1$ to an automorphism $F_{\tau} : X_{\tau} \to X_{\tau}$: $$\begin{CD}
X_{\tau} @> F_{\tau} >> X_{\tau} \\
@V \pi_{\tau} VV @VV \pi_{\tau} V \\
\mathbb{P}^2 @> f_{\tau} >> ~\mathbb{P}^2.
\end{CD}$$ Finally, $(\pi_{\tau},F_{\tau})$ realizes $w_{\tau}$ and $F_{\tau}$ has positive entropy $h_{\mathrm{top}}(F_{\tau})= \log \lambda(w_{\tau}) > 0$.
As seen in Theorem \[thm:main3\], almost all orbit data satisfy the realizability condition (\[eqn:condi\]). Furthermore, even if an orbit data $\tau$ does not satisfy the realizability condition (\[eqn:condi\]), its sibling $\check{\tau}$ does satisfy the condition.
\[thm:main2\] For any orbit data $\tau$ with $\lambda(w_{\tau}) > 1$, there is an orbit data $\check{\tau}$ satisfying $\lambda(w_{\tau})=\lambda(w_{\check{\tau}}) > 1$ and the realizability condition (\[eqn:condi\]), and thus $\check{\tau}$ is realized by $\overline{f}_{\check{\tau}}$.
Moreover, we give a sufficient condition for (\[eqn:condi\]), which enables us to see clearly that almost all orbit data are realized, and to obtain an estimate for the entropy.
\[thm:main3\] Assume that an orbit data $\tau=(n,\sigma,\kappa)$ satisfies
1. $n \ge 2$,
2. $\kappa(\overline{\iota}) \ge 3$ for any $\overline{\iota} \in \mathcal{K}(n)$, and
3. if $\overline{\iota} \neq \overline{\iota}'$ satisfy $i_m=i_m'$ and $\kappa(\overline{\iota}_m)=\kappa(\overline{\iota}_m')$ for any $m \ge 0$, then $\overline{\iota}_m \neq \overline{\iota}'$ for any $m \ge 0$, where $\overline{\iota}_m=(i_m,\iota_m) := \sigma^m (\overline{\iota})$ and $\overline{\iota}_m'=(i_m',\iota_m') := \sigma^m (\overline{\iota}')$.
Then the orbit data $\tau$ satisfies $2^n-1 < \lambda(w_{\tau}) < 2^n$ and the realizability condition (\[eqn:condi\]). In particular, $F_{\tau}$ has positive entropy $\log (2^{n}-1) < h_{\mathrm{top}}(F_{\tau}) < \log 2^n$.
Diller [@D] constructs, by studying single quadratic maps preserving $C$, automorphisms with positive entropy realizing orbit data $\hat{\tau}=(1,\hat{\sigma},\hat{\kappa})$. As is seen in Example \[ex:auto\], there is an orbit data $\tau$ such that $F_{\tau}$ is not topologically conjugate to the iterates of $F_{\hat{\tau}}$ that Diller constructs for any $\hat{\tau}=(1,\hat{\sigma},\hat{\kappa})$.
Now, let us come back to consider an element $w$ of the Weyl group $W_N$. It is connected with orbit data by the fact that $w$ is expressed as $w=w_{\tau}$ for some orbit data $\tau$ (see Proposition \[pro:iden\]). Thus, Theorem \[thm:main1\] extends the result of McMullen [@M] which states that if $w$ has spectral radius $\lambda(w) > 1$ and no periodic roots in $\Phi_N$, that is, $w^{k}(\alpha) \neq \alpha$ for any $\alpha \in \Phi_N$ and $k \ge 1$, then $w$ is realized by a pair $(\pi,F)$. However, since the roots and the periods are infinite, it is rather difficult to see whether $w$ has no periodic roots. On the other hand, in condition (\[eqn:condi\]), the set $\Gamma_{\tau}$ is finite and the period $\ell_{\tau}$ is fixed. Thus, once an orbit data $\tau$ with $w=w_{\tau}$ is fixed, it is easier to check that $w$ satisfies condition (\[eqn:condi\]). In Example \[ex:auto\], we give an example of $w$ realized by a pair $(\pi,F)$ and admitting periodic roots.
In general, the topological entropy of any automorphism $F : X \to X$ is expressed as $h_{\mathrm{top}}(F)= \log \lambda(w)$ for some $w \in W_N$ (see Proposition \[pro:expent\]). Conversely, by Theorem \[thm:main3\], the logarithm of an arbitrary element in the set $$\label{eqn:PV}
\Lambda:=\{ \lambda(w) \ge 1 \, | \, w \in W_N, \, N \ge 3 \}$$ gives rise to the entropy of some automorphism.
\[cor:main\] For any $\lambda \in \Lambda$, there is an automorphism $F : X \to X$ of a rational surface $X$ such that $h_{\mathrm{top}}(F)= \log \lambda$. Moreover, we have $$\{h_{\mathrm{top}}(F) \, | \, F : X \to X
\text{ is a rational surface automorphism} \}=
\{\log \lambda \, | \, \lambda \in \Lambda\}.$$
The core of the proofs of these theorems is to find concretely the configuration of the points $\{p_{\overline{\iota}}^0, \cdots, p_{\overline{\iota}}^{\mu(\overline{\iota})}\}_{
\overline{\iota} \in \mathcal{K}(n)}$, which are blown up to yield an automorphism. Indeed, the configuration is determined by an eigenvector of $w_{\tau}$. Then, our investigations on the existence of a realization are divided into two steps. The first step is to check that $\tau$ admits a tentative realization (see Definition \[def:tenta\]). Tentative realization is a necessary condition for realization. Moreover, Proposition \[pro:tenta\] states that a tentative realization $\overline{f}$ of $\tau$ with $\delta(\overline{f})=\lambda(w_{\tau})$ exists if and only if $w_{\tau}$ has no periodic roots in a finite subset $\Gamma_{\tau}^{(1)}$ of $\Gamma_{\tau}$. The second step is to check that $\tau$ is compatible with the configuration as in Proposition \[pro:real\], or that the tentative realization $\overline{f}$ is indeed a realization of $\tau$. Proposition \[pro:ind\] shows that $\overline{f}$ is a realization of $\tau$ if and only if $w_{\tau}$ has no periodic roots in $\Gamma_{\tau}^{(2)}:=\Gamma_{\tau} \setminus \Gamma_{\tau}^{(1)}$. When $\tau$ does not pass these two inspections, its sibling $\check{\tau}$ with $\lambda(w_{\check{\tau}})=\lambda(w_{\tau})$, determining essentially the same configuration as $\tau$, satisfies the realizability condition (\[eqn:condi\]) and admits a realization. On the other hand, under the assumptions in Theorem \[thm:main3\], Proposition \[pro:det\] gives an estimate for the spectral radius $\lambda(w_{\tau})$ and shows the absence of periodic roots in $\Gamma_{\tau}^{(1)}$, and then Proposition \[pro:cri\] guarantees the absence of periodic roots in $\Gamma_{\tau}^{(2)}$, which proves Theorem \[thm:main3\].
This article is organized as follows. After defining the element $w_{\tau} \in W_N$, the integer $\ell_{\tau}$ and the finite subset $\Gamma_{\tau}$ in Section \[sec:def\], we describe eigenvectors of $w_{\tau}$ explicitly in Section \[sec:Weyl\]. Section \[sec:const\] is devoted to giving a method for constructing a rational surface automorphism from a realization of $\tau$. In Section \[sec:tenta\], we discuss the existence of a tentative realization of $\tau$, and in Section \[sec:real\], we investigate whether it is indeed a realization and prove Theorems \[thm:main1\]–\[thm:main3\] and Corollary \[cor:main\]. Finally, Propositions \[pro:det\] and \[pro:cri\] are proved in Section \[sec:proof\].
Definitions and Example {#sec:def}
=======================
As is mentioned in the Introduction, an orbit data $\tau$ canonically determines the element $w_{\tau} \in W_{N}$, the integer $\ell_{\tau}$, and the finite subset $\Gamma_{\tau}$ of $\Phi_N$, which appear in the realizability condition (\[eqn:condi\]). In this section, we give these definitions, and also give an example of an orbit data that admits a realization. Moreover, it is shown in the last part of this section (see Proposition \[pro:iden\]) that any element $w$ in $W_N$ is expressed as $w = w_{\tau}$ for some orbit data $\tau$.
First, we recall the Weyl group action on $\mathbb{Z}^{1,N}$ for $N \ge 3$, where $\mathbb{Z}^{1,N}$ is the Lorentz lattice with Lorentz inner product given in (\[eqn:lorentz\]). The [*Weyl group $W_N \subset O(\mathbb{Z}^{1,N})$*]{} is the group generated by the reflections $(\rho_i)_{i=0}^{N-1}$ given in (\[eqn;refl\]), which preserves the Lorentz inner product on $\mathbb{Z}^{1,N}$. The $W_N$-translate $\Phi_N:= \bigcup_{i=0}^{N-1} W_N \cdot \alpha_i$ of the elements $(\alpha_i)$ is called the [*root system*]{} of $W_N$, and each element of $\Phi_N$ is called a [*root*]{} of $W_N$.
For an orbit data $\tau=(n,\sigma,\kappa)$ (see Definition \[def:data\]), we consider the lattice $$L_{\tau}:= \mathbb{Z} e_0 \oplus \bigr( \oplus_{\overline{\iota} \in
\mathcal{K}(n)} \oplus_{k=1}^{\kappa(\overline{\iota})} \mathbb{Z}
e_{\overline{\iota}}^{k} \bigl) \cong \mathbb{Z}^{1,N} \qquad
(N=\sum_{\overline{\iota} \in \mathcal{K}(n)} \kappa(\overline{\iota}) ),$$ with the inner product given by $$\left\{
\begin{array}{ll}
(e_0,e_0)=1 & \\[2mm]
(e_{\overline{\iota}}^{k},e_{\overline{\iota}}^{k})=-1 \qquad &
(\overline{\iota} \in \mathcal{K}(n), \quad 1 \le k \le
\kappa(\overline{\iota})) \\[2mm]
(e_0,e_{\overline{\iota}}^{k})=
(e_{\overline{\iota}}^{k},e_{\overline{\iota}'}^{k'})=0 \qquad &
((\overline{\iota},k) \neq (\overline{\iota}',k')). \\
\end{array}
\right.$$ Then the automorphism $r_{\tau} : L_{\tau} \to L_{\tau}$ is defined by $$r_{\tau} : \left\{
\begin{array}{lll}
e_0 & \mapsto e_0 & ~ \\[2mm]
e_{\sigma_{\tau}(\overline{\iota})}^1 & \mapsto
e_{\overline{\iota}}^{\kappa(\overline{\iota})} \, & \\[2mm]
e_{\overline{\iota}}^k & \mapsto e_{\overline{\iota}}^{k-1} &
(2 \le k \le \kappa(\overline{\iota})),
\end{array}
\right.$$ where $\sigma_{\tau}(\overline{\iota})=\sigma^{m}(\overline{\iota})$ with $m \ge 1$ determined by the relations $\kappa(\sigma^{k}(\overline{\iota}))=0$ for $1 \le k <m$, and $\kappa(\sigma^{m}(\overline{\iota})) \ge 1$. Note that $\sigma_{\tau}$ becomes a permutation of $\{\overline{\iota} \in \mathcal{K}(n) \, | \,
\kappa(\overline{\iota}) \ge 1\}$, and so $e_{\sigma_{\tau}(\overline{\iota})}^1$ is well-defined. The automorphism $r_{\tau}$ is an element of the subgroup $\langle \rho_1,\dots,\rho_{N-1} \rangle \subset W_N$ generated by $\rho_1,\dots,\rho_{N-1}$. On the other hand, for $1 \le j \le n$, the automorphism $q_j : L_{\tau} \to L_{\tau}$ is defined by $$q_j : \left\{
\begin{array}{lll}
e_0 & \mapsto 2 e_0 - \sum_{\iota=1}^3 e_{(j,\iota)_{\tau}}^1 & ~ \\[2mm]
e_{(j,\iota^{(1)})_{\tau}}^1 & \mapsto e_0
- e_{(j,\iota^{(2)})_{\tau}}^1 - e_{(j,\iota^{(3)})_{\tau}}^1 \, &
(\{\iota^{(1)},\iota^{(2)},\iota^{(3)}\}=\{1,2,3\}) \\[2mm]
e_{\overline{\iota}}^k & \mapsto e_{\overline{\iota}}^k & (\text{otherwise}),
\end{array}
\right.$$ where $(j,\iota^{(\nu)})_{\tau}=\sigma^{m_{\nu}}(j,\iota^{(\nu)})$ with $m_{\nu} \ge 0$ determined by the relations $\kappa(\sigma^{k}(j,\iota^{(\nu)}))=0$ for $0 \le k <m_{\nu}$, and $\kappa(\sigma^{m_{\nu}}(j,\iota^{(\nu)})) \ge 1$. We notice that if $\iota^{(1)} \neq \iota^{(2)}$ then $(j,\iota^{(1)})_{\tau} \neq (j,\iota^{(2)})_{\tau}$. Indeed, assume the contrary that $\sigma^{m_1}(j,\iota^{(1)})=\sigma^{m_2}(j,\iota^{(2)})$ for $m_1 > m_2$, or $\sigma^{m}(j,\iota^{(1)})=(j,\iota^{(2)})$ for $m=m_1-m_2 > 0$. As $(j_k,(\iota^{(1)})_k):=\sigma^{k}(j,\iota^{(1)})$ satisfies $\kappa(j_k,(\iota^{(1)})_k)=0$ for $0 \le k \le m-1 \le m_1-1$, one has $j=j_0 < j_1 < \cdots < j_{m}=j$, which is a contradiction. The automorphism $q_j$ is conjugate to $\rho_0$ under the action of $\langle \rho_1,\dots,\rho_{N-1} \rangle$.
Now we define the lattice automorphism $w_{\tau} : \mathbb{Z}^{1,N} \to
\mathbb{Z}^{1,N}$.
\[def:latiso\] For an orbit data $\tau=(n,\sigma,\kappa)$, we define the lattice automorphism $w_{\tau} : L_{\tau} \to L_{\tau}$ by $$~~~~~~~~~~~~~~~~~~~~~~~~~~~
w_{\tau} := r_{\tau} \circ q_1 \circ
\cdots \circ q_n : L_{\tau} \to L_{\tau}.
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ We sometimes write $w_{\tau} : \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$.
Indeed, it is easily seen that $w_{\tau}$ is an element of $W_N$. In terms of the automorphism $w_{\tau} : \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$, the integer $\ell_{\tau}$ is defined in the following manner.
\[def:mpi\] The positive integer $\ell_{\tau}$ is defined as the minimal positive integer satisfying $d^{\ell_{\tau}}=1$ for any eigenvalue $d$ of $w_{\tau}$ that is a root of unity.
Before determining the finite set $\Gamma_{\tau}$, we define a set $\mathcal{T}(\tau)$ of $n$-tuples $(\prec_{i})_{i=1}^n$ of total orders $\prec_i$ on the subset $\mathcal{K}(n)_{i}:=\{(i,1),(i,2),(i,3)\}$ of $\mathcal{K}(n)$ (see also Remark \[rem:inf2\]). Recall that the orbit data $\tau=(n,\sigma,\kappa)$ defines the function $\mu : \mathcal{K}(n) \to \mathbb{Z}_{\ge 0}$ by $$\label{eqn:nu'}
\mu(\overline{\iota}):=
\kappa(\overline{\iota}) \cdot n+i_1-i-1=
\theta_{i,i_1-1}(\kappa(\overline{\iota})),$$ where $\overline{\iota}_m=(i_m,\iota_m) =\sigma^m(\overline{\iota})$ for $\overline{\iota}=(i,\iota) \in \mathcal{K}(n)$, and $$\label{eqn:vartheta}
\theta_{i,i'}(k) :=
k \cdot n + i'-i. \\[2mm]$$ Moreover, let $G_{\tau} : \{1, \dots, n\} \to \{1,2,3\}$ be the function defined by $$G_{\tau}(i):=
\left\{
\begin{array}{ll}
1 \quad & (\mu(\overline{\iota}) \neq \mu(\overline{\iota}') \text{ for }
\overline{\iota} \neq \overline{\iota}' \in \mathcal{K}(n)_i) \\[2mm]
2 \quad & (\mu(\overline{\iota})=\mu(\overline{\iota}') \neq
\mu(\overline{\iota}'')
\text{ for } \{ \overline{\iota}, \overline{\iota}', \overline{\iota}''\} =
\mathcal{K}(n)_i) \\[2mm]
3 \quad & (\mu(\overline{\iota})=\mu(\overline{\iota}') =
\mu(\overline{\iota}'')
\text{ for } \{ \overline{\iota}, \overline{\iota}', \overline{\iota}''\} =
\mathcal{K}(n)_i). \\[2mm]
\end{array}
\right.$$
\[def:order\] Let $\mathcal{T}(\tau)$ be the set of all $n$-tuples $(\prec_{i})_{i=1}^n$ of total orders $\prec_i$ on $\mathcal{K}(n)_{i}$ inductively satisfying the following conditions. First, for the minimal integer $i^{(1)} \in \mathcal{N}(0):=\{1, \dots, n\}$ such that $G_{\tau}(i^{(1)})=\min \{G_{\tau}(i) \,|\,i\in \mathcal{N}(0) \}$, the total order $\prec_{i^{(1)}}$ satisfies $\overline{\iota}
\prec_{i^{(1)}} \overline{\iota}'$ provided $\mu(\overline{\iota}) < \mu(\overline{\iota}')$. Moreover, for $1 \le m \le n-1$, suppose that $i^{(m)} \in
\mathcal{N}(m-1)$ is given. Put $\mathcal{N}(m):=\mathcal{N}(m-1) \setminus \{i^{(m)}\}$. If there are distinct elements $\overline{\iota}, \overline{\iota}' \in \mathcal{K}(n)_{i^{(m+1)}}$ for some $i^{(m+1)} \in \mathcal{N}(m)$ such that $i^{(m)}=i_1=i_1'$ and $\mu(\overline{\iota}) = \mu(\overline{\iota}')$, then $\prec_{i^{(m+1)}}$ satisfies $\overline{\iota} \prec_{i^{(m+1)}} \overline{\iota}'$ provided that either $\mu(\overline{\iota}) = \mu(\overline{\iota}')$ and $\overline{\iota}_1 \prec_{i^{(m)}} \overline{\iota}_1'$, or $\mu(\overline{\iota}) < \mu(\overline{\iota}')$. Otherwise, for the minimal integer $i^{(m+1)} \in \mathcal{N}(m)$ such that $G_{\tau}(i^{(m+1)})=\min \{G_{\tau}(i) \,|\,i\in \mathcal{N}(m) \}$, the total order $\prec_{i^{(m+1)}}$ satisfies $\overline{\iota} \prec_{i^{(m+1)}} \overline{\iota}'$ provided $\mu(\overline{\iota}) < \mu(\overline{\iota}')$.
We define the finite subsets of the root system by $$\begin{aligned}
\Gamma_{\tau}^{(1)} & := & \big\{ \alpha_{j}^c \, \big| \, j=1, \dots,n
\big\} \subset \Phi_N,
\label{eqn:roots1} \\[2mm]
\Gamma_{\tau}^{(2)} & := & \overline{\Gamma}_{\tau}^{(2)} \setminus
\check{\Gamma}_{\tau}^{(2)} \subset \Phi_N,
\label{eqn:roots2} \\[2mm]
\overline{\Gamma}_{\tau}^{(2)} & := &
\big\{ \alpha_{\overline{\iota},\overline{\iota}'}^k \, \big|
\, \overline{\iota}=(i,\iota),\overline{\iota}'=(i',\iota') \in
\mathcal{K}(n), 0 \le
\theta_{i,i'}(k) \le \mu(\overline{\iota}) \big\} \subset \Phi_N,
\nonumber\end{aligned}$$ where $\alpha_{j}^c$ and $\alpha_{\overline{\iota},\overline{\iota}'}^k$ are the roots given by $$\begin{aligned}
\alpha_{j}^c & := & q_n \circ \cdots \circ q_{j+1}
(e_0 -e_{(j,1)_{\tau}}^1-e_{(j,2)_{\tau}}^1-e_{(j,3)_{\tau}}^1),
\label{eqn:root1} \\[2mm]
\alpha_{\overline{\iota},\overline{\iota}'}^k & := &
q_n \circ \cdots \circ q_{i'+1} ( e_{\overline{\iota}_{\tau}}^{k+1} -
e_{\overline{\iota}_{\tau}'}^1).
\label{eqn:root2}\end{aligned}$$ Moreover, $\check{\Gamma}_{\tau}^{(2)}$ is the set of roots $\alpha_{\overline{\iota},\overline{\iota}'}^k$ in $\overline{\Gamma}_{\tau}^{(2)}$ satisfying the following conditions for a given $(\prec_{i}) \in \mathcal{T}(\tau)$:
1. If $\theta_{i,i'}(k)>0$, then either $\mu(\overline{\iota}) = \mu(\overline{\iota}')+\theta_{i,i'}(k)$ and $\overline{\iota}_1' \prec_{i_1} \overline{\iota}_1$, or $\mu(\overline{\iota}) > \mu(\overline{\iota}')+\theta_{i,i'}(k)$.
2. When $\theta_{i,i'}(k)=0$, then $\overline{\iota}' \prec_{i} \overline{\iota}$ if and only if either $\mu(\overline{\iota}) = \mu(\overline{\iota}')$ and $\overline{\iota}_1' \prec_{i_1} \overline{\iota}_1$, or $\mu(\overline{\iota}) > \mu(\overline{\iota}')$.
Indeed, the definitions of $\check{\Gamma}_{\tau}^{(2)}$ and $\Gamma_{\tau}^{(2)}$ are independent of the choice of $(\prec_i) \in \mathcal{T}(\tau)$. Moreover, it should be noted that if $\overline{\iota}$ and $\overline{\iota}'$ satisfy $\theta_{i,i'}(k)=0$ then they are elements of $\mathcal{K}(n)_i$ and satisfy either $\overline{\iota}' \prec_{i} \overline{\iota}$ or $\overline{\iota} \prec_{i} \overline{\iota}'$. Furthermore, if they satisfy $\mu(\overline{\iota}')+\theta_{i,i'}(k) = \mu(\overline{\iota})$ then $\overline{\iota}_1$ and $\overline{\iota}_1'$ are elements of $\mathcal{K}(n)_{i_1}$ and satisfy either $\overline{\iota}_1' \prec_{i_1} \overline{\iota}_1$ or $\overline{\iota}_1 \prec_{i_1} \overline{\iota}_1'$. Now, we define the finite set $\Gamma_{\tau}$.
\[def:root\] The finite subset $\Gamma_{\tau}$ of the root system $\Phi_{N}$ is defined by $$\Gamma_{\tau} :=\Gamma_{\tau}^{(1)} \cup \Gamma_{\tau}^{(2)} \subset \Phi_N.$$
\[ex:auto\] Now we consider the orbit data $\tau=(n,\sigma,\kappa)$, where $n=2$, $\sigma=\text{id}$, $\kappa(1,\iota)=3$ and $\kappa(2,\iota)=4$ for any $\iota=1,2,3$. Then $\tau$ satisfies the assumptions in Theorem \[thm:main3\], and thus $w_{\tau}$ is realized by the pair $(\pi_{\tau},F_{\tau})$, where $\pi_{\tau} : X_{\tau} \to \mathbb{P}^2$ is a blowup of $21$ points. A little calculation shows that the entropy of $F_{\tau}$ is given by $h_{\mathrm{top}}(F_{\tau})=\log \lambda(F_{\tau}^*) \approx 1.35442759$, where $\lambda(F_{\tau}^*) \approx 3.87454251$ is a root of the equation $t^6-4t^5+t^4-2t^3+t^2-4t+1=0$. Moreover, the element $w_{\tau} \in W_{21}$ admits periodic roots $\alpha_{\overline{\iota},\overline{\iota}'}^0$ with $i=i' \in \{ 1,2 \}$ and $\iota \neq \iota' \in \{1,2,3\}$, which are not contained in $\Gamma_{\tau}$. Therefore, the automorphism $F_{\tau}$ does not appear in the paper of McMullen [@M]. On the other hand, for any data $\hat{\tau}=(1,\hat{\sigma},\hat{\kappa})$, let $F_{\hat{\tau}} : X_{\hat{\tau}} \to X_{\hat{\tau}}$ be an automorphism that Diller in [@D] constructs from a single quadratic map preserving a cuspidal cubic. We claim that, for any $m \ge 1$, $F_{\tau}$ is not topologically conjugate to $F_{\hat{\tau}}^m$. Indeed, assume the contrary that $F_{\tau}$ is topologically conjugate to $F_{\hat{\tau}}^m$ for some data $\hat{\tau}$ and $m \ge 1$. Since $X_{\tau}$ is obtained by blowing up $21$ points, so is $X_{\hat{\tau}}$, which means that $\sum_{\iota=1}^3 \hat{\kappa}(1,\iota)=21$. Thus, there are $570$ possibilities for $\hat{\tau}$. Moreover, one has $\lambda(F_{\tau}^*)=\lambda(F_{\hat{\tau}}^*)^m$. However, with the help of a computer, it may be easily seen that there are no data $\hat{\tau}$ and $m \ge 1$ satisfying the conditions $\sum_{\iota=1}^3 \hat{\kappa}(1,\iota)=21$ and $\lambda(F_{\tau}^*)=\lambda(F_{\hat{\tau}}^*)^m$. Therefore our claim is proved.
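
The numerical values quoted in this example are easy to check. The following short computation (an editorial check, not in the original) recovers $\lambda(F_{\tau}^*) \approx 3.87454251$ and $h_{\mathrm{top}}(F_{\tau}) \approx 1.35442759$ from the stated polynomial and verifies that the root lies in the interval $(2^n-1,\,2^n)=(3,4)$ predicted by Theorem \[thm:main3\].

```python
import numpy as np

# Numerical check of the values quoted in Example ex:auto.
coeffs = [1, -4, 1, -2, 1, -4, 1]        # t^6 - 4 t^5 + t^4 - 2 t^3 + t^2 - 4 t + 1
lam = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)

print(f"lambda(F_tau^*) = {lam:.8f}")          # ~ 3.87454251
print(f"h_top(F_tau)    = {np.log(lam):.8f}")  # ~ 1.35442759

# consistency with Theorem thm:main3 for n = 2: 2^n - 1 < lambda < 2^n
assert 2**2 - 1 < lam < 2**2
```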
We conclude this section by establishing the following proposition.
\[pro:iden\] For any $w \in W_N$, there is an orbit data $\tau=(n,\sigma,\kappa)$ with $\sum_{\overline{\iota} \in \mathcal{K}(n)} \kappa(\overline{\iota})=N$ such that $w= w_{\tau}: \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$ under some identification $\{e_j \, | \, j=1,\dots,N\}=\{e_{\overline{\iota}}^k \, | \,
\overline{\iota} \in \mathcal{K}(n), \, k=1,\dots,\kappa(\overline{\iota})\}$.
[*Proof*]{}. Since $w$ is an element of $W_N$, it can be expressed as $$w= \wp_0 \cdot \rho_0 \cdot \wp_1 \cdots \rho_0 \cdot
\wp_{m-1} \cdot \rho_0 \cdot \wp_{m},$$ where $\wp_k$ is a permutation of $(e_j)_{j=1}^N$. The expression can be written as $$w = \big(\wp_0 \cdots \wp_{m} \big) \cdot
\big\{ (\wp_1 \cdots \wp_{m})^{-1} \cdot \rho_0 \cdot
(\wp_1 \cdots \wp_{m}) \big\} \cdots
\big\{ (\wp_{m-1} \cdot \wp_{m})^{-1} \cdot \rho_0 \cdot (\wp_{m-1} \cdot
\wp_{m}) \big\} \cdot \big\{ \wp_{m}^{-1} \cdot \rho_0 \cdot \wp_{m} \big\}.$$ Let $\wp:=\wp_0 \cdots \wp_{m}$ be the permutation on the basis elements $(e_j)_{j=1}^N$, and let $\hat{m} \ge 0$ be the number of orbits $\{\wp^k(e_j) \, | \, k \ge 0\}$ not containing $(\wp_i \cdots \wp_{m})^{-1}(e_{\iota})$ for any $i=1,\dots,m$ and $\iota=1,2,3$. Then put $q_{k+2 \hat{m}}:= (\wp_k \cdots \wp_{m})^{-1}
\cdot \rho_0 \cdot (\wp_k \cdots \wp_{m})$ and $e_{2\hat{m}+i,\iota}^1:=(\wp_i \cdots \wp_{m})^{-1}(e_{\iota})$ for $\iota=1,2,3$. Moreover, there are functions $\hat{\kappa} : \{1, \dots, \hat{m} \} \to \mathbb{Z}_{\ge 1}$ and $\check{\kappa} : \mathcal{K}(m;\hat{m}):= \{(2\hat{m}+i,\iota) \, | \,
i=1,\dots,m,\, \iota=1,2,3\} \to \mathbb{Z}_{\ge 0}$, and a permutation $\hat{\sigma}$ of $\mathcal{K}(m;\hat{m})$, such that the following relations hold:
- if $e_{2\hat{m}+i,\iota}^1=e_{2\hat{m}+i'',\iota''}^1$ for some $i < i''$, then $\check{\kappa}(2\hat{m}+i,\iota)=0$ and $\hat{\sigma}(2\hat{m}+i,\iota)=(2\hat{m}+i',\iota')$, where $(2\hat{m}+i',\iota')$ is determined by the relations $e_{2\hat{m}+i,\iota}^1=e_{2\hat{m}+i',\iota'}^1$ and $i'= \min \{i'' > i \, | \, e_{2\hat{m}+i,\iota}^1=
e_{2\hat{m}+i'',\iota''}^1 \}$,
- after reordering $(e_j)_{j=1}^N$, the permutation $\wp$ is expressed as $$\wp : \left\{
\begin{array}{lll}
e_{2j,3}^k & \mapsto e_{2j,3}^{k-1} & (j=1, \dots, \hat{m},
\quad k \in \mathbb{Z}/\hat{\kappa}(j) \mathbb{Z}) \\[2mm]
e_{\hat{\sigma}(2\hat{m}+i,\iota)}^1 & \mapsto
e_{2\hat{m}+i,\iota}^{\check{\kappa}(2\hat{m}+i,\iota)} & (i=1,\dots,m, \quad
\check{\kappa}(2\hat{m}+i,\iota) \ge 1) \\[2mm]
e_{2\hat{m}+i,\iota}^k & \mapsto e_{2\hat{m}+i,\iota}^{k-1} &
(2 \le k \le \check{\kappa}(2\hat{m}+i,\iota), \, i=1,\dots,m).
\end{array}
\right.$$
Then the data $\tau=(n,\sigma,\kappa)$ is defined by $n:=m+2\hat{m}$ and $$\sigma(i,\iota) :=
\left\{
\begin{array}{ll}
(i+1,\iota) & (\text{either } \iota=1,2 \text{ and } i=1,\dots, 2\hat{m},
\text{ or } \iota=3 \text{ and } i=1,3,\dots,2\hat{m}-1 ) \\[2mm]
(i-1,\iota) & (\iota=3, \, i=2,4,\dots,2\hat{m} ) \\[2mm]
(1,\iota') & (\hat{\sigma}(i,\iota)=
(2\hat{m}+1,\iota'), \, \iota'=1,2) \\[2mm]
\hat{\sigma}(i,\iota) & (\text{otherwise}),
\end{array}
\right.$$ $$\kappa(i,\iota) :=
\left\{
\begin{array}{ll}
0 & (\text{either } \iota=1,2 \text{ and } i=1,\dots, 2\hat{m},
\text{ or } \iota=3 \text{ and } i=1,3,\dots,2\hat{m}-1 ) \\[2mm]
\hat{\kappa}(i/2) & (\iota=3, \, i=2,4,\dots,2\hat{m} ) \\[2mm]
\check{\kappa}(i,\iota) & (\text{otherwise}),
\end{array}
\right.$$ which gives expressions $\wp=r_{\tau} \circ q_1 \circ \cdots \circ q_{2 \hat{m}}$ and $w=w_{\tau}=r_{\tau} \circ q_1 \circ \cdots \circ q_n$. Thus the proposition is established. $\Box$
The Weyl Group Action {#sec:Weyl}
=====================
Let us consider the eigenvalues of an automorphism $w : \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$ in $W_N$. If the spectral radius $\lambda(w)$ of $w$ is strictly greater than $1$, in other words $w$ admits an eigenvalue $d$ that is not a root of unity, then the eigenvector $\overline{v}_{d}$ of $w$ corresponding to $d$ determines whether $z \in \mathbb{Z}^{1,N}$ is a periodic vector of $w$, as is stated in Lemma \[lem:per\]. Moreover, we find the coefficients of $\overline{v}_{d}$ by expressing $w$ as $w=w_{\tau}$ for an orbit data $\tau$.
Let $w : \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$ be a lattice automorphism in $W_N$. It is known that the characteristic polynomial $\chi_{w}(t)$ of $w$ can be expressed as $$\chi_{w}(t) =
\left\{
\begin{array}{ll}
R_w(t) \quad & (\lambda(w)=1) \\[2mm]
R_w(t) S_w(t) \quad & (\lambda(w)>1),
\end{array}
\right.$$ where $R_w(t)$ is a product of cyclotomic polynomials, and $S_w(t)$ is a Salem polynomial, namely, the minimal polynomial of a Salem number. Here, a [*Salem number*]{} is an algebraic integer $\delta > 1$ whose conjugates other than $\delta$ satisfy $|\delta'| \le 1$ and include $1/\delta < 1$. Therefore, if $w \in W_N$ satisfies $\lambda(w) >1$, then its unique eigenvalue $\delta$ with $|\delta| > 1$ is a Salem number $\delta = \lambda(w) > 1$.
Now assume that $\lambda(w) >1$. Then there is a direct sum decomposition of the real vector space $\mathbb{R}^{1,N}:=\mathbb{Z}^{1,N} \otimes_{\mathbb{Z}} \mathbb{R}$ as $$~~~~~~~~~~~~~~~~~~~~~~~~~~~
\mathbb{R}^{1,N}= V_w \oplus V_w^c,
such that the decomposition is preserved by $w$, and $S_w(t)$ and $R_w(t)$ are the characteristic polynomials of $w|_{V_w}$ and $w|_{V_w^c}$, respectively. We notice that $V_w^c$ is the orthogonal complement of $V_w$ with respect to the Lorentz inner product. Moreover, let $\ell_{w}$ be the minimal positive integer satisfying $d^{\ell_{w}}=1$ for any root $d$ of the equation $R_{w}(t)=0$. Then we have the following lemma.
\[lem:per\] Assume that $\delta= \lambda(w) > 1$, and let $\check{\delta}$ be an eigenvalue of $w$ that is not a root of unity. Then, for a vector $z \in \mathbb{Z}^{1,N}$, the following are equivalent.
1. $(z, \overline{v}_{\check{\delta}})=0$, where $\overline{v}_{d}$ is the eigenvector of $w$ corresponding to an eigenvalue $d$.
2. $z \in V_w^c \cap \mathbb{Z}^{1,N}$.
3. $z$ is a periodic vector of $w$ with period $\ell_{w}$.
[*Proof*]{}. $(1) \Longrightarrow (2)$. First, we notice that $\overline{v}_d$ can be chosen so that $\overline{v}_d \in \mathbb{Z}^{1,N} \otimes_{\mathbb{Z}} \mathbb{Z}[d]$. Thus, the coefficient of $e_i$ in $\overline{v}_{\check{\delta}}$, and thus that in $\overline{v}_{\delta'}$ for any conjugate $\delta'$, are expressed as $(\overline{v}_{\check{\delta}})_i=\upsilon_i(\check{\delta})$ and $(\overline{v}_{\delta'})_i=\upsilon_i(\delta')$ for some $\upsilon_i(x) \in \mathbb{Z}[x]$. Since $z \in \mathbb{Z}^{1,N}$, we have $\sum z_i \cdot \upsilon_i(x) \in \mathbb{Z}[x]$, and so $(z, \overline{v}_{\delta'})=\sum z_i \cdot \upsilon_i(\delta')=0$ from the relation $(z, \overline{v}_{\check{\delta}})=
\sum z_i \cdot \upsilon_i(\check{\delta})=0$. Indeed, since $S_w(t)$ is the minimal polynomial of $\check{\delta}$, any polynomial in $\mathbb{Z}[x]$ vanishing at $\check{\delta}$ is divisible by $S_w(t)$ and hence vanishes at every conjugate $\delta'$. Thus it follows that $z \in V_w^c \cap \mathbb{Z}^{1,N}$.
$(2) \Longrightarrow (3)$. For any eigenvalues $d, d'$, we have $$(\overline{v}_d,\overline{v}_{d'})=(w(\overline{v}_d),w(\overline{v}_{d'}))
=d \cdot d' \cdot (\overline{v}_d,\overline{v}_{d'}),$$ which means that $(\overline{v}_d,\overline{v}_{d'})=0$ if $d \cdot d' \neq 1$. In particular, one has $(\overline{v}_{\delta},\overline{v}_{\delta})=
(\overline{v}_{1/\delta},\overline{v}_{1/\delta})=0$. Moreover, since $\overline{v}_{\delta}, \overline{v}_{1/\delta} \in \mathbb{R}^{1,N}$ are linearly independent over $\mathbb{R}$, $(\overline{v}_{\delta}, \overline{v}_{1/\delta})$ is nonzero, and thus either $(\overline{v}_{\delta}+\overline{v}_{1/\delta},
\overline{v}_{\delta}+\overline{v}_{1/\delta})$ or $(\overline{v}_{\delta}-\overline{v}_{1/\delta},
\overline{v}_{\delta}-\overline{v}_{1/\delta})$ is positive. As $\mathbb{R}^{1,N}$ has signature $(1,N)$ and $V_w$ has signature $(1,s)$ for some $s \ge 1$, $V_w^c$ is negative definite. This shows that $w|_{V_w^c}$ has finite order. Since any eigenvalue $d$ of $w|_{V_w^c}$ satisfies $d^{\ell_w}=1$, we have $w^{\ell_w}(z)=z$.
$(3) \Longrightarrow (1)$. Assume that $w^{\ell_{w}}(z)=z$. We express $z$ as $z=z' + z''$ for some $z' \in V_w$ and $z'' \in V_w^c$, and then express $z'$ as $z'=\sum_{S_w(d)=0} z_d \cdot \overline{v}_d$ for some $z_d \in \mathbb{C}$. Under the assumption that $w^{\ell_w}(z)=z$, one has $\sum_{S_w(d)=0} z_d \cdot \overline{v}_d= z'=w^{\ell_{w}} (z')=
\sum_{S_w(d)=0} d^{\ell_w} \cdot z_d \cdot \overline{v}_d$. This means that $z_d=d^{\ell_w} \cdot z_d$ for any $d$ with $S_w(d)=0$. Since $d$ is not a root of unity, $z_d$ is zero for any $d$. Therefore, we have $z'=0$ and $z=z'' \in V_w^c$, and the assertion is established. $\Box$
\[rem:ev\] For an orbit data $\tau$, the positive integer $\ell_{\tau}$ (see Definition \[def:mpi\]) satisfies $$\ell_{\tau}=\ell_{w_{\tau}}.$$
Next, we describe eigenvectors of the lattice automorphism $w_{\tau} : \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$ for a given orbit data $\tau=(n,\sigma,\kappa)$. To this end, we consider the system of equations $$\begin{aligned}
v_{i,1}+v_{i,2}+v_{i,3} & = & - \sum_{k=1}^{i-1} s_k
+(d-2) \cdot s_i - d \sum_{k=i+1}^n s_k, \quad (1 \le i \le n),
\label{eqn:SB} \\[2mm]
v_{\overline{\iota}_1} & = & d^{\kappa(\overline{\iota})} \cdot
v_{\overline{\iota}} +(d-1)
\cdot s_{i_1} \qquad (\overline{\iota} \in \mathcal{K}(n)),
\label{eqn:expab}\end{aligned}$$ where $d \in \mathbb{C} \setminus \{0\}$, $v=(v_{\overline{\iota}})_{\overline{\iota} \in \mathcal{K}(n)} \in
\mathbb{C}^{3n}$, $s=(s_{r})_{r=1}^n \in \mathbb{C}^n$, and $\overline{\iota}_m=(i_m,\iota_m)=\sigma^m(\overline{\iota})$ for $\overline{\iota}=(i,\iota) \in \mathcal{K}(n)$.
\[pro:eigen\] Let $\tau$ be an orbit data, and $\overline{v}$ be a vector in $L_{\tau}
\otimes_{\mathbb{Z}} \mathbb{C}$ expressed as $$\overline{v} = v_0 \cdot e_0 +
\sum v_{\overline{\iota}}^{k} \cdot e_{\overline{\iota}}^k \in L_{\tau}
\otimes_{\mathbb{Z}} \mathbb{C}.$$ If $\overline{v}$ is an eigenvector $\overline{v}=\overline{v}_d$ of $w_{\tau}$ corresponding to an eigenvalue $d$ different from $1$, then there is a unique pair $(v,s) \in (\mathbb{C}^{3n} \setminus \{0\}) \times \mathbb{C}^{n}$ such that the following conditions hold:
1. $(d,v,s)$ satisfies equations (\[eqn:SB\]) and (\[eqn:expab\]).
2. $v_{\overline{\iota}}^k=d^{k-1} \cdot v_{\overline{\iota}}$ for any $\overline{\iota} \in \mathcal{K}(n)$ and $1 \le k \le \kappa(\overline{\iota})$.
3. $ v_0 = k(s)$, where $k(s)$ is given by $$\label{eqn:fix}
~~~~~~~~~~~~~~~~~~~~~~~~~~~
k(s) := \sum_{k=1}^{n} s_k.
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$
Conversely, if $\overline{v}$ satisfies conditions (1)–(3) for some triplet $(d,v,s) \in (\mathbb{C} \setminus \{0,1\}) \times
(\mathbb{C}^{3n} \setminus \{0\}) \times \mathbb{C}^{n}$, then $\overline{v}$ is an eigenvector $\overline{v}=\overline{v}_{d}$ of $w_{\tau}$ corresponding to an eigenvalue $d$.
[*Proof*]{}. Assume that $\overline{v}$ is an eigenvector corresponding to $d \neq 1$. For any $1 \le k \le \kappa(\overline{\iota})-1$, the coefficient of $e_{\overline{\iota}}^{k}$ in $w_{\tau}(\overline{v})$ is $v_{\overline{\iota}}^{k+1}$. Hence, one has $v_{\overline{\iota}}^{k+1}= d \cdot v_{\overline{\iota}}^k$, and $v_{\overline{\iota}}^k=d^{k-1} \cdot v_{\overline{\iota}}^1$. Moreover, we put $v_{\overline{\iota}}:=v_{\overline{\iota}}^1$ and choose $s_1, \dots, s_n$ so that $v_0=\sum_{i=1}^n s_i$ and $v_{j}=- \sum_{i=1}^{j-1} s_i +(d-2) \cdot s_j - d \sum_{i=j+1}^n s_i$ for $2 \le j \le n$, where $v_{j}:=v_{j,1}+v_{j,2}+v_{j,3}$, and $v_{\overline{\iota}}:=v_{\overline{\iota}_1}-(d-1) \cdot s_{i_1}$ if $\kappa(\overline{\iota})=0$. Note that $s_1, \dots, s_n$ are determined uniquely since $d \neq 1$. Now we claim that $v_{1}=(d-2) \cdot s_1 - d \sum_{i=2}^n s_i$, and that the following relation holds for $1 \le j \le n+1$: $$\label{eqn:wexp}
q_j \circ \cdots \circ q_n (\overline{v}) =
v^j \cdot e_0 + \hspace{-3mm}
\sum_{\substack{\ i < j \\ \kappa(\overline{\iota}) \ge 1}} \hspace{-2mm}
v_{\overline{\iota}} \cdot e_{\overline{\iota}}^1
+ \hspace{-3mm}
\sum_{\substack{\ i < j \le i_1 \\ \kappa(\overline{\iota})=0}} \hspace{-2mm}
v_{\overline{\iota}} \cdot e_{\overline{\iota}_{\tau}}^1
+ \hspace{-3mm}
\sum_{\substack{\ j \le i \\ \kappa(\overline{\iota}_{-1}) \ge 1}}
\hspace{-3mm} \Big\{ v_{\overline{\iota}} - (d-1) \cdot s_i \Big\} \cdot
e_{\overline{\iota}_{\tau}}^1
+ \sum_{k \ge 2} d^{k-1} \cdot v_{\overline{\iota}}
\cdot e_{\overline{\iota}}^k,$$ where $v^j:=\sum_{i=1}^{j-1} s_i +d \sum_{i=j}^{n} s_i$. Indeed, if $j=n+1$, the relation is trivial. Assume that the relation holds when $j+1 \ge 2$. Then the automorphism $q_j$ changes only the coefficients $v^{j+1}$ and $v_{j,\iota}$ in $q_{j+1} \circ \cdots \circ q_n (v)$ as follows: $$\displaystyle q_j \Big( v^{j+1} \cdot e_0 +
\sum_{\iota=1}^3 v_{j,\iota} \cdot e_{(j,\iota)_{\tau}}^1 \Big) =
\big( 2 v^{j+1} + v_j \big) \cdot e_0
- \sum_{\iota=1}^3 \big(v^{j+1} + v_{j,\iota'} +
v_{j,\iota''} \big) \cdot e_{(j,\iota)_{\tau}}^1,$$ where $\{\iota,\iota',\iota''\}=\{1,2,3\}$. Therefore, when $j \ge 2$, equation (\[eqn:wexp\]) holds from the facts that $2 v^{j+1} + v_j= v^{j}$, $v^{j+1} + v_{j,\iota'} + v_{j,\iota''}=
(d -1) \cdot s_j- v_{j,\iota}$, and $\big\{ v_{j,\iota} - (d-1) \cdot s_{j} \big\} \cdot e_{(j,\iota)_{\tau}}^1=
v_{j_{-1},\iota_{-1}} \cdot e_{(j_{-1},\iota_{-1})_{\tau}}^1$ if $\kappa(j_{-1},\iota_{-1})=0$. Moreover, since the coefficient of $e_0$ in $w_{\tau} (\overline{v})$ is $d \cdot v_0$ and $r_{\tau}$ fixes $e_0$, the coefficient of $e_0$ in $q_1 \circ \cdots \circ q_n (\overline{v})$ is expressed as $2 v^2 + v_1=d \cdot v_0=v^1$, which yields $v_{1}=(d-2) \cdot s_1 - d \sum_{i=2}^n s_i$. Thus, the coefficient of $e_{1,\iota}$ in $q_1 \circ \cdots \circ q_n (\overline{v})$ is given by $v_{1,\iota}-(d-1) \cdot s_1$ and equation (\[eqn:wexp\]) holds when $j=1$. The claim follows from these observations. In particular, $(d,v,s)$ satisfies equation (\[eqn:SB\]).
By the above claim, we have $$q_1 \circ \cdots \circ q_n (\overline{v}) =
v^1 \cdot e_0 + \sum_{\kappa(\overline{\iota}) \ge 1}
\Big\{ v_{\overline{\iota}_1} - (d-1) \cdot s_{i_1} \Big\} \cdot
e_{\sigma_{\tau}(\overline{\iota})}^1
+ \sum_{k \ge 2} d^{k-1} \cdot v_{\overline{\iota}}
\cdot e_{\overline{\iota}}^k,$$ as $(\overline{\iota}_1)_{\tau}=\sigma_{\tau}(\overline{\iota})$. Thus, the coefficient of $e_{0}$ in $w_{\tau}(\overline{v})$ is $v^1= d \cdot k(s)$. Similarly, the coefficient of $e_{\overline{\iota}}^{\kappa(\overline{\iota})}$ in $w_{\tau}(\overline{v})$ is given by $v_{\overline{\iota}_1}-(d-1) \cdot s_{i_1}$. This means that $v_{\overline{\iota}_1}-(d-1) \cdot s_{i_1}=
d \cdot v_{\overline{\iota}}^{\kappa(\overline{\iota})}=
d^{\kappa(\overline{\iota})} \cdot v_{\overline{\iota}}$ and that $(d,v,s)$ satisfies equation (\[eqn:expab\]). Moreover we have $v \neq 0$, since if $v = 0$, then one has $s=0$ from (\[eqn:expab\]), and so $\overline{v}=\overline{v}_d=0$, which is a contradiction.
The rest of the statement in the proposition immediately follows from the above discussion, and the proof is complete. $\Box$
Now assume that $d$ is not a root of unity. Then equation (\[eqn:expab\]) is equivalent to the expression $$\label{eqn:expre}
v_{\overline{\iota}} = v_{\overline{\iota}}(d) = -
\frac{ d^{ \varepsilon_{|\overline{\iota}|}}
\cdot (d-1)}{d^{ \varepsilon_{|\overline{\iota}|}}-1}
\bigl( d^{-\varepsilon_{1}} \cdot s_{i_1} +
d^{-\varepsilon_{2}} \cdot s_{i_2} + \cdots +
d^{-\varepsilon_{|\overline{\iota}|}} \cdot s_{i_{|\overline{\iota}|}} \bigr),$$ where $|\overline{\iota}|:=\# \{\overline{\iota}_m \, | \, m \ge 0 \}$ and $\varepsilon_r:=\varepsilon_r(\overline{\iota})=\sum_{k=0}^{r-1}
\kappa(\overline{\iota}_k)$. Let $\overline{c}_{\overline{\iota},j}(d)$ and $c_{i,j}(d)$ be the polynomials of $d$ defined by $$\begin{aligned}
v_{\overline{\iota}}(d) & = &
\sum_{j=1}^n \overline{c}_{\overline{\iota},j}(d) \cdot s_j,
\label{eqn:c1} \\
v_i(d) &:= &
v_{i,1}(d)+ v_{i,2}(d)+v_{i,3}(d)
=- \sum_{j=1}^n c_{i,j}(d) \cdot s_j,
\label{eqn:c2}\end{aligned}$$ and let $\mathcal{A}_n(d,x)$ be the $n \times n$ matrix having the $(i,j)$-th entry: $$\label{eqn:matA}
\mathcal{A}_n(d,x)_{i,j} =
\left\{
\begin{array}{ll}
d-2 + x_{i,i} & (i=j) \\[2mm]
-1 + x_{i,j} & (i>j) \\[2mm]
-d + x_{i,j} & (i<j)
\end{array}
\right.$$ with $x=(x_1, \dots, x_n)=(x_{ij}) \in M_n(\mathbb{R})$. Then equations (\[eqn:SB\]) and (\[eqn:expre\]) yield $$\label{eqn:mat}
\mathcal{A}_{\tau}(d)\, s=0, \qquad \qquad
s = \left(
\begin{array}{c}
s_1 \\[-1mm]
\vdots \\[-1mm]
s_n
\end{array}
\right),$$ where $\mathcal{A}_{\tau}(d) := \mathcal{A}_n(d,c(d))$ with $c(d):=(c_{i,j}(d))$. Finally, let $\chi_{\tau}(d)$ be the determinant $|\mathcal{A}_{\tau}(d)|$ of the matrix $\mathcal{A}_{\tau}(d)$.
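For illustration only, consider the simplest case $n=1$ with $\sigma=\mathrm{id}$ and $\kappa(1,\iota)=\kappa_{\iota} \ge 1$; this example is not used in what follows. Each orbit of $\sigma$ has length $|\overline{\iota}|=1$ and $\varepsilon_{1}(\overline{\iota})=\kappa_{\iota}$, so that (\[eqn:expre\]) reduces to $$v_{1,\iota}(d)=-\frac{d-1}{d^{\kappa_{\iota}}-1} \cdot s_1, \qquad c_{1,1}(d)=\sum_{\iota=1}^3 \frac{d-1}{d^{\kappa_{\iota}}-1},$$ and the $1 \times 1$ matrix $\mathcal{A}_{\tau}(d)$ yields $$\chi_{\tau}(d)=d-2+\sum_{\iota=1}^3 \frac{d-1}{d^{\kappa_{\iota}}-1}$$ for $d$ not a root of unity.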
\[cor:sol\] Assume that $d$ is not a root of unity. Then,
1. $d$ is a root of $\chi_{\tau}(t)=0$ if and only if $d$ is a root of $S_{w_{\tau}}(t)=0$.
2. If $d$ is a root of $S_{w_{\tau}}(t)=0$, then there is a unique solution $(v,s) \in (\mathbb{C}^{3n} \setminus \{0\}) \times
(\mathbb{C}^{n} \setminus \{0\})$ of (\[eqn:SB\]) and (\[eqn:expab\]), up to a constant multiple. Conversely, if there is a solution $(v,s) \neq (0,0) \in \mathbb{C}^{3n} \times \mathbb{C}^{n}$ of (\[eqn:SB\]) and (\[eqn:expab\]), then $d$ is a root of $S_{w_{\tau}}(t)=0$.
3. If $(v,s) \neq (0,0) \in \mathbb{C}^{3n} \times \mathbb{C}^{n}$ satisfies (\[eqn:SB\]) and (\[eqn:expab\]), then $v$ and $s$ are nonzero, and $s$ is a unique solution of (\[eqn:mat\]). Conversely, if $s \neq 0$ is a solution of (\[eqn:mat\]), then $(v,s)$ satisfies (\[eqn:SB\]) and (\[eqn:expab\]), where $v \neq 0$ is given in (\[eqn:expre\]).
[*Proof*]{}. First, we notice that if $(v,s) \neq (0,0) \in \mathbb{C}^{3n} \times \mathbb{C}^{n}$ satisfies (\[eqn:SB\]) and (\[eqn:expab\]), then we have $v \neq 0$ and $s \neq 0$. Indeed, if $v=0$ then $s=0$ from (\[eqn:expab\]), and if $s=0$ then $v=0$ from (\[eqn:expre\]). Now assume that $d$ is a root of $\chi_{\tau}(t)=0$ that is not a root of unity. Then there is a solution $s \neq 0$ of (\[eqn:mat\]). Moreover, $(v,s)$ satisfies (\[eqn:SB\]) and (\[eqn:expab\]), where $v$ is given in (\[eqn:expre\]), and thus is nonzero. By Proposition \[pro:eigen\], there is an eigenvector $\overline{v}_d$ of $w_{\tau}$, which shows that $S_{w_{\tau}}(d)=0$. Conversely, assume that $d$ is a root of $S_{w_{\tau}}(t)=0$. Since the eigenvector $\overline{v}_{d}$ is unique, there is a unique solution $(v,s) \in (\mathbb{C}^{3n} \setminus \{0\}) \times \mathbb{C}^{n}$ of (\[eqn:SB\]) and (\[eqn:expab\]), which yields $s \neq 0$. Moreover, $s$ is a unique solution of (\[eqn:mat\]), up to a constant multiple. Indeed, if $s \neq s'$ are solutions of (\[eqn:mat\]), then there are solutions $(v,s) \neq (v',s')$ of (\[eqn:SB\]) and (\[eqn:expab\]), which is a contradiction. Thus $d$ is a root of $\chi_{\tau}(t)=0$. $\Box$
Construction of Rational Surface Automorphisms {#sec:const}
==============================================
In this section, we develop a method for constructing a rational surface automorphism from a composition $f=f_n \circ \cdots \circ f_1 : Y \to Y$ of general birational maps $f_i : Y_{i-1} \to Y_i$ between smooth rational surfaces and a generalized orbit data $\tau$. If the data $\tau$ is compatible with the maps $\overline{f}=(f_1,\dots,f_n)$, $f$ lifts to an automorphism $F: X \to X$ through a blowup $\pi : X \to Y$. Moreover, in the special case where $\overline{f}$ are quadratic birational maps on $\mathbb{P}^2$ and $\tau$ is an original orbit data, we calculate the action $F^* : H^2(X;\mathbb{Z}) \to H^2(X;\mathbb{Z})$ of the automorphism $F$, which shows that $w_{\tau}$ is realized by $(\pi,F)$.
First we collect some terminology about complex surfaces (see also [@A]). Let $Y$ be a smooth projective irreducible surface, and $\pi_y : Y_y \to Y$ be the blowup of a point $y$ on $Y$ with the exceptional divisor $E_y$ in $Y_y$. Then each point on $E_y$ is called a [*point in the first infinitesimal neighbourhood of $y$ on $Y$*]{}. Moreover, for $i > 0$, we inductively define a [*point in the $i$-th infinitesimal neighbourhood of $y$ on $Y$*]{} as a point in the first infinitesimal neighbourhood of some point in the $(i-1)$-th infinitesimal neighbourhood of $y$ on $Y$, where a point in the $0$-th infinitesimal neighbourhood of $y$ is interpreted as $y$ itself. A point in the $i$-th infinitesimal neighbourhood of $y$ on $Y$ for some $i > 0$ is called [*a point infinitely near to $y$ on $Y$*]{}, or [*an infinitely near point on $Y$*]{}. We sometimes call the points on $Y$ [*the proper points*]{} to distinguish them from the infinitely near points. In what follows, a point $y' \in Y$ means that either it is proper on $Y$ or it is infinitely near to some proper point on $Y$, and $y_1=y_2$ means that they are both in the same infinitesimal neighbourhood of a proper point and are equal. Moreover, through the blowup $\pi_y : Y_y \to Y$, any point $y' \in Y_y$ is identified with $\pi_y(y')$, and $\pi_y(y')$ is also denoted by $y' \in Y$. Then, a point in the $i$-th infinitesimal neighbourhood of $y'$ on $Y_y$ is in the $(i-1)$-th infinitesimal neighbourhood of $y$ on $Y$ or in the $i$-th infinitesimal neighbourhood of $y'$ on $Y$, according to whether $y' \in E_y$ or $y' \notin E_y$. For two points $y_1, y_2$ of $Y$, we write $y_1 < y_2$ if $y_2$ is infinitely near to $y_1$, and write $y_1 \approx y_2$ if either $y_1=y_2$ or $y_1<y_2$ or $y_1 > y_2$. A [*cluster $I \subset Y$*]{} is a finite set of proper or infinitely near points on $Y$ such that if $y \in I$ and $y' < y$, then $y' \in I$. From the cluster $I=\{y_1, \dots,y_N \}$, one can construct the blowup $\pi_{I} : \widetilde{Y} \to Y$ of the points in $I$, that is, the composition $$\label{eqn:blowup}
\pi_I : \widetilde{Y}=Y_N \to Y_{N-1} \to \cdots \to Y_0 =Y$$ of blowups $\pi_i : Y_i \to Y_{i-1}$ of a point $y_{k_i} \in I$ such that if $y_{k_i} < y_{k_j}$ then $i < j$. Note that the surface $\widetilde{Y}$ is determined uniquely by the cluster $I$, namely, if $\pi_I' : \widetilde{Y}' \to Y$ is constructed from another ordering $(y_{k_1'},\dots,y_{k_N'})$ of $I$, then there is a unique isomorphism $g : \widetilde{Y} \to \widetilde{Y}'$ such that $\pi_I = \pi_I' \circ g : \widetilde{Y} \to Y$.
A basic example of a cluster is the indeterminacy set of a birational surface map. Let $Y_+$ and $Y_-$ be smooth surfaces, and $f : Y_+ \to Y_-$ be a birational map with its inverse $f^{-1} : Y_- \to Y_+$. In general, $f^{\pm 1}$ may admit clusters $I(f^{\pm 1})$ in $Y_{\pm}$ on which $f^{\pm 1}$ are not defined. The clusters $I(f^{\pm 1})$ are called the [*indeterminacy sets of $f^{\pm 1}$*]{}. Moreover, any blowup $\pi_{+} : \widetilde{Y}_{+} \to Y_{+}$ of a cluster $I \subset Y_{+}$ uniquely lifts $f : Y_+ \to Y_-$ to $\widetilde{f}=f \circ \pi_{+} : \widetilde{Y}_+ \to Y_-$, which determines the point $\widetilde{f}(y)$ for any proper point $y \in \widetilde{Y}_+ \setminus I(\widetilde{f})$. When regarding $y$ as a point on $Y_+$, we write $f(y)=\widetilde{f}(y)$. In this setting, the following properties hold:
- $I(\widetilde{f})=I(f) \setminus I$.
- If $y < y' \in Y_+$ and $y \notin I(f)$, then $y' \notin I(f)$ and $f(y) < f(y')$.
- If $y \notin I(f)$ and $f(y) \approx y' \in I(f^{-1})$, then $y' < f(y)$.
- If a proper point $y \notin I(f)$ satisfies $f(y) \approx \hspace{-.99em}/\hspace{.70em} y'$ for any $y' \in I(f^{-1})$, then $f(y)$ is also a proper point on $Y_-$.
Now we consider a smooth rational surface $X$, that is, a surface birationally equivalent to $\mathbb{P}^2$, and an automorphism $F : X \to X$ of $X$. By theorems of Gromov and Yomdin, the topological entropy of $F$ is given by $h_{\mathrm{top}}(F)= \log \lambda(F^*)$, where $\lambda(F^*)$ is the spectral radius of the action $F^* : H^2(X;\mathbb{Z}) \to H^2(X;\mathbb{Z})$ on the cohomology group. In this paper, we are interested in the case where $F : X \to X$ has positive entropy $h_{\mathrm{top}}(F)>0$ or, in other words, $\lambda(F^*)>1$. Then, the surface $X$ is characterized as follows (see [@H2; @N1]).
\[pro:H1\] If $X$ admits an automorphism $F : X \to X$ with $\lambda(F^*) > 1$, then there is a birational morphism $\pi : X \to \mathbb{P}^2$.
It is known that any birational morphism $\pi : X \to \mathbb{P}^2$ is expressed as $\pi=\pi_I$ for some cluster $I=\{x_1,\dots,x_N \} \subset \mathbb{P}^2$, where $\pi_I$ is the blowup of $I$ given in (\[eqn:blowup\]) with $Y=\mathbb{P}^2$ and $\widetilde{Y}=X$. Then $\pi : X \to \mathbb{P}^2$ determines an expression of the cohomology group: $H^2(X;\mathbb{Z})=\mathbb{Z} [H] \oplus \mathbb{Z} [E_1]
\oplus \cdots \oplus \mathbb{Z} [E_N]$, where $H$ is the total transform of a line in $\mathbb{P}^2$, and $E_i$ is the total transform of the point $x_i$. The intersection form on $H^2(X;\mathbb{Z})$ is given by $$\left\{
\begin{array}{ll}
([H],[H])=1, & ~ \\[2mm]
([E_i],[E_j])=-\delta_{i,j}, & (i,j=1,\dots, N), \\[2mm]
([H],[E_i])=0, & (i=1,\dots, N).
\end{array}
\right.$$ Therefore $H^2(X;\mathbb{Z})$ is isometric to the Lorentz lattice $\mathbb{Z}^{1,N}$ given in (\[eqn:lorentz\]). Namely, there is a natural marking isomorphism $\phi_{\pi} : \mathbb{Z}^{1,N} \to H^2(X,\mathbb{Z})$, sending the basis as $$\phi_{\pi}(e_0) = [H], \quad \quad
\phi_{\pi}(e_i)=[E_i] \quad (i=1, \dots, N).$$ The marking $\phi_{\pi}$ is isometric and determined uniquely by $\pi : X \to \mathbb{P}^2$ in the sense that if $\phi_{\pi}$ and $\phi_{\pi}'$ are markings determined by $\pi$, then there is an element $\wp \in \langle \rho_1,\dots,\rho_{N-1} \rangle$, acting by a permutation on the basis elements $(e_1,\dots,e_N)$, such that $\phi_{\pi}= \phi_{\pi}' \circ \wp$. The following proposition indicates a role of the Weyl group $W_N$ (see [@DO; @H1; @N2]).
\[pro:H2\] For any birational morphism $\pi : X \to \mathbb{P}^2$ and any automorphism $F: X \to X$, there is a unique element $w \in W_N$ such that diagram (\[eqn:diag\]) commutes.
Thus, a pair $(\pi,F)$ determines $w$ uniquely, up to conjugacy by an element of $\langle \rho_1,\dots,\rho_{N-1} \rangle$. In this case, the element $w$ is said to be [*realized*]{} by $(\pi,F)$, and the entropy of $F$ is expressed as $h_{\mathrm{top}}(F)= \log \lambda (w)$. Summing up these discussions, we have the following proposition.
\[pro:expent\] The entropy of any automorphism $F: X \to X$ on a rational surface $X$ is given by $h_{\mathrm{top}}(F)= \log \lambda$ for some $\lambda \in \Lambda$, where $\Lambda$ is given in (\[eqn:PV\]).
Indeed, when $F : X \to X$ satisfies $\lambda(F^*)=1$, the entropy of $F$ is expressed as $h_{\mathrm{top}}(F)= \log \lambda(e)$ with the unit element $e \in W_N$.
\[rem:zeroent\] If $\pi : X \to \mathbb{P}^2$ is a blowup of $N$ points with $N \le 9$, and $F : X \to X$ is an automorphism, then it follows that $h_{\mathrm{top}}(F)=0$ (see e.g. [@M]).
Next we turn our attention to a method for constructing rational surface automorphisms. Let $Y_1, \dots,Y_n$ be general smooth rational surfaces, and $\overline{f}:=(f_1,\dots,f_n)$ be an $n$-tuple of birational maps $f_r : Y_{r-1} \to Y_r$ with $Y_0:=Y_n$, and let $I(f_r)=\{ p_{r,1}^{+},\dots,p_{r,j_+(r)}^{+} \} \subset Y_{r-1}$ and $I(f_r^{-1})=\{ p_{r,1}^{-},\dots,p_{r,j_-(r)}^{-} \} \subset Y_{r}$ be the indeterminacy sets of $f_r$ and of $f_r^{-1}$, respectively. Put $\mathcal{K}_{\pm}(\overline{f}) :=
\{\overline{\iota}=(i,\iota)\,|\,i=1,\dots,n, \, \iota=1,\dots, j_{\pm}(i)\}$ and $\mathcal{K}(\overline{f}):=\mathcal{K}_{-}(\overline{f})$. Then it turns out that the cardinalities of the sets $\mathcal{K}_{\pm}(\overline{f})$ are the same, that is, $\sum_{r=1}^n j_+(r)=\sum_{r=1}^n j_-(r)$, since $Y_n=Y_0$. Moreover, for $m \ge 0$ and $\overline{\iota} \in \mathcal{K}(\overline{f})$, we inductively put $$\label{eqn:p}
p_{\overline{\iota}}^0 := p_{\overline{\iota}}^- \in Y_i, \qquad \qquad
p_{\overline{\iota}}^m := f_{r} (p_{\overline{\iota}}^{m-1}) \in Y_r \quad
(r \equiv i+m~\text{mod}~n).$$ Note that a point $p_{\overline{\iota}}^m$ is well-defined if $p_{\overline{\iota}}^{m'} \notin I(f_{r'+1})$ for any $0 \le m' < m$, and that $p_{\overline{\iota}}^{m}$ is a point of $Y_r$ if and only if $m=\theta_{i,r}(k) \ge 0$ for some $k \ge 0$, where $\theta_{i,r}(k)$ is given in (\[eqn:vartheta\]).
To define the concept of realization, let us introduce a [*generalized orbit data $\tau=(n,\sigma,\kappa)$ for $\overline{f}$*]{} consisting of the integer $n \ge 1$, a bijection $\sigma : \mathcal{K}(\overline{f}) \to \mathcal{K}_+(\overline{f})$ and a function $\kappa : \mathcal{K}(\overline{f}) \to \mathbb{Z}_{\ge 0}$ such that $\kappa(\overline{\iota}) \ge 1$ provided $i_1 \le i$, where $(i_1,\iota_1) =\sigma(\overline{\iota})$.
\[def:real\] Let $\overline{f}$ be an $n$-tuple of birational maps and $\tau=(n,\sigma,\kappa)$ be a generalized orbit data for $\overline{f}$. Then $\overline{f}$ is called a [*realization*]{} of $\tau$ if the following condition holds for any $\overline{\iota} \in \mathcal{K}(\overline{f})$: $$\label{eqn:orbit1}
p_{\overline{\iota}}^{m} \neq p_{\overline{\iota}'}^+ \quad
(0 \le m < \mu(\overline{\iota}), \, \overline{\iota}' \in
\mathcal{K}_+(\overline{f})), \qquad
p_{\overline{\iota}}^{\mu(\overline{\iota})}=p_{\sigma(\overline{\iota})}^+.$$
We should notice that in condition (\[eqn:orbit1\]), two points $p_{\overline{\iota}}^{m}$ and $p_{\overline{\iota}'}^+$ may satisfy $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}'}^+$ but $p_{\overline{\iota}}^{m} \neq p_{\overline{\iota}'}^+$. From a realization $\overline{f}$ of $\tau$, we construct an automorphism. To this end, we define
\[def:pp\] A pair $(p_{\overline{\iota}}^{-}, p_{\overline{\iota}'}^{+})$ of indeterminacy points of $f_{i}^{-1}$ and $f_{i'}$ with $\overline{\iota}=(i,\iota) \in \mathcal{K}(\overline{f})$ and $\overline{\iota}'=(i',\iota') \in \mathcal{K}_+(\overline{f})$ is called a [*proper pair of $\overline{f}$ with length $\mu$* ]{} if the following three conditions hold:
1. $p_{\overline{\iota}}^{-}$ and $p_{\overline{\iota}'}^{+}$ are proper points on $Y_{i}$ and $Y_{i'-1}$, respectively.
2. $p_{\overline{\iota}}^{m} \approx \hspace{-.99em}/\hspace{.70em}
p_{r+1,j}^+$ for any $0 \le m=\theta_{i,r}(k) < \mu$, and $p_{\overline{\iota}}^{m} \approx \hspace{-.99em}/\hspace{.70em} p_{r,j}^-$ for any $0 < m=\theta_{i,r}(k) \le \mu$.
3. $p_{\overline{\iota}'}^+=p_{\overline{\iota}}^{\mu}$.
Assume that $(p_{\overline{\iota}}^{-}, p_{\overline{\iota}'}^{+})$ is a proper pair. Then one has $p_{\overline{\iota}}^{m} \approx \hspace{-.99em}/\hspace{.70em}
p_{\overline{\iota}}^{m'}$ when $m \neq m'$. Indeed, if $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}}^{m+\hat{m}}$ for some $\hat{m} > 0$, then one has $p_{\overline{\iota}}^{\mu-\hat{m}} \approx
p_{\overline{\iota}}^{\mu} =p_{\overline{\iota}'}^+$, which is a contradiction. Let $\pi_r : Y_r' \to Y_r$ be the blowup of distinct proper points $(p_{\overline{\iota}}^{m})$ with $0 \le m=\theta_{i,r}(k) \le \mu$. Then the blowups $\overline{\pi}:=(\pi_r)$ lift $f_r : Y_{r-1} \to Y_r$ to $f_r':= \pi_r^{-1} \circ f_r \circ \pi_{r-1} : Y_{r-1}' \to Y_r'$ (see Figure \[fig:blowup\]). We say that [*$\overline{f'}:=(f_1',\dots,f_n')$ is obtained from $\overline{f}$ by the proper pair $(p_{\overline{\iota}}^{-}, p_{\overline{\iota}'}^{+})$*]{} through the blowups $\overline{\pi}$. In this case, one has $$\label{eqn:elim}
I(f_r')=
\left\{
\begin{array}{ll}
I(f_{i'}) \setminus \{ p_{\overline{\iota}'}^+ \} & (r=i') \\[2mm]
I(f_r) & (r \neq i'),
\end{array}
\right. \quad
I((f_r')^{-1})=
\left\{
\begin{array}{ll}
I(f_i^{-1}) \setminus \{ p_{\overline{\iota}}^- \} & (r=i) \\[2mm]
I(f_r^{-1}) & (r \neq i),
\end{array}
\right.$$ and also $\mathcal{K}_+(\overline{f'})=\mathcal{K}_+(\overline{f})
\setminus \{ \overline{\iota}' \}$ and $\mathcal{K}(\overline{f'})=\mathcal{K}(\overline{f})
\setminus \{ \overline{\iota} \}$.
\[lem:real\] An $n$-tuple $\overline{f}$ of birational maps is a realization of a generalized orbit data $\tau=(n,\sigma,\kappa)$ if and only if there is a total order $\overline{\iota}^{(1)} \prec \overline{\iota}^{(2)}
\prec \cdots \prec \overline{\iota}^{(\nu)}$ on $\mathcal{K}(\overline{f})$, with $\nu
:= \sum_{r=1}^n j_+(r)=\sum_{r=1}^n j_-(r)$, such that $(p_{\overline{\iota}^{(j)}}^-,p_{\sigma(\overline{\iota}^{(j)})}^+)$ is a proper pair of $\overline{g}_{j-1}$ with length $\mu(\overline{\iota}^{(j)})$ for any $j=1,\dots, \nu$, where $\overline{g}_{0}:=\overline{f}$ and $\overline{g}_{j}$ is inductively obtained from $\overline{g}_{j-1}$ by the proper pair $(p_{\overline{\iota}^{(j)}}^-,p_{\sigma(\overline{\iota}^{(j)})}^+)$. Moreover, $\mu(\overline{\iota})$ is defined by $$
\mu(\overline{\iota}):=\theta_{i,i_1-1}(\kappa(\overline{\iota})),
$$ where $\overline{\iota}_1=(i_1,\iota_1)=\sigma(\overline{\iota})$.
[*Proof*]{}. Assume that $\overline{f}$ is a realization of $\tau$ and there are proper pairs $(p_{\overline{\iota}^{(\ell)}}^-,p_{\sigma(\overline{\iota}^{(\ell)})}^+)$ of $\overline{g}_{\ell-1}=(g_{\ell-1,r} :
Z_{\ell-1,r-1} \to Z_{\ell-1,r})_{r=1}^n$ with length $\mu(\overline{\iota}^{(\ell)})$ for $\ell=1, \dots,j$, where $\overline{g}_{\ell}=(g_{\ell,r} : Z_{\ell,r-1} \to Z_{\ell,r})_{r=1}^n$ is obtained from $\overline{g}_{\ell-1}$ by the proper pair $(p_{\overline{\iota}^{(\ell)}}^-,p_{\sigma(\overline{\iota}^{(\ell)})}^+)$ through the blowups $\overline{\pi}_{\ell}=(\pi_{\ell,r} : Z_{\ell,r} \to Z_{\ell-1,r})_{r=1}^n$. Take an element $\overline{\iota}^{(j+1)}=(i^{(j+1)},\iota^{(j+1)}) \in
\mathcal{K}(\overline{g}_{j})=
\mathcal{K}(\overline{f}) \setminus \{ \overline{\iota}^{(1)},
\dots, \overline{\iota}^{(j)} \}$ such that $p_{\overline{\iota}^{(j+1)}}^-$ is a proper point of $Z_{j,i^{(j+1)}}$ and $$\mu(\overline{\iota}^{(j+1)}) = \min \{ \mu(\overline{\iota}) \, | \,
\overline{\iota} \in \mathcal{K}(\overline{g}_j) \text{ and }
p_{\overline{\iota}}^- \text{ is a proper point on } Z_{j,i} \}.$$ It is enough to show that $(p_{\overline{\iota}^{(j+1)}}^-,p_{\sigma(\overline{\iota}^{(j+1)})}^+)$ is a proper pair of $\overline{g}_{j}$ with length $\mu(\overline{\iota}^{(j+1)})$. First, assume the contrary that $p_{\overline{\iota}^{(j+1)}}^{m} \approx p_{\overline{\iota}'}^-$ for some $0 < m \le \mu(\overline{\iota}^{(j+1)})$ and $\overline{\iota}' \in \mathcal{K}(\overline{g}_{j})$. Then we have $p_{\overline{\iota}^{(j+1)}}^{m} > p_{\overline{\iota}'}^-$. The minimality of $\mu(\overline{\iota}^{(j+1)})$ yields $p_{\overline{\iota}'}^{m'} \notin I(g_{j,i''})$ for any $0 \le m' < \mu(\overline{\iota}^{(j+1)}) - m$, and so $p_{\overline{\iota}^{(j+1)}}^{\mu(\overline{\iota}^{(j+1)})} >
p_{\overline{\iota}'}^{\mu(\overline{\iota}^{(j+1)}) - m}$. Since $p_{\overline{\iota}^{(j+1)}}^{\mu(\overline{\iota}^{(j+1)})}=
p_{\sigma(\overline{\iota}^{(j+1)})}^+ \in I(g_{j,i_1^{(j+1)}})$ and $I(g_{j,i_1^{(j+1)}})$ is a cluster, $p_{\overline{\iota}'}^{\mu(\overline{\iota}^{(j+1)}) - m}$ is also an element of $I(g_{j,i_1^{(j+1)}})$ and thus is equal to $p_{\sigma(\overline{\iota}')}^+$. This means that $\mu(\overline{\iota}')=\mu(\overline{\iota}^{(j+1)})-m <
\mu(\overline{\iota}^{(j+1)})$, which contradicts the assumption that $\mu(\overline{\iota}^{(j+1)})$ is minimal. Thus, we have $p_{\overline{\iota}^{(j+1)}}^{m} \approx \hspace{-1.00em}/\hspace{.50em}
p_{\overline{\iota}'}^-$ for any $0 < m \le \mu(\overline{\iota}^{(j+1)})$ and $\overline{\iota}' \in \mathcal{K}(\overline{g}_{j})$. In particular, $p_{\overline{\iota}^{(j+1)}}^{m}$ is proper on $Z_{j,r}$ with $r \equiv i^{(j+1)}+m \text{~mod~} n$. Since $p_{\overline{\iota}^{(j+1)}}^{m} \neq p_{\overline{\iota}'}^+$ for any $0 \le m < \mu(\overline{\iota})$ and $I(g_{j,i'})$ is a cluster, one has $p_{\overline{\iota}^{(j+1)}}^{m}
\approx \hspace{-.99em}/\hspace{.70em} p_{\overline{\iota}'}^+$ for any $0 \le m < \mu(\overline{\iota})$. Moreover, as $p_{\overline{\iota}^{(j+1)}}^{\mu(\overline{\iota}^{(j+1)})}=
p_{\sigma(\overline{\iota}^{(j+1)})}^{+}$ and $p_{\overline{\iota}^{(j+1)}}^{\mu(\overline{\iota}^{(j+1)})}$ is proper, the point $p_{\sigma(\overline{\iota}^{(j+1)})}^{+}$ is also proper. Therefore, $(p_{\overline{\iota}^{(j+1)}}^-,p_{\sigma(\overline{\iota}^{(j+1)})}^+)$ is a proper pair of $\overline{g}_{j}$ with length $\mu(\overline{\iota}^{(j+1)})$.
Conversely, assume that there is a total order $\overline{\iota}^{(1)} \prec \overline{\iota}^{(2)}
\prec \cdots \prec \overline{\iota}^{(\nu)}$ on $\mathcal{K}(\overline{f})$ such that $(p_{\overline{\iota}^{(j)}}^-,p_{\sigma(\overline{\iota}^{(j)})}^+)$ is a proper pair of $\overline{g}_{j-1}$ with length $\mu(\overline{\iota}^{(j)})$ for each $j=1,\dots, \nu$. Note that $I(f_{r+1}) \subset
I((\pi_{1,r} \circ \cdots \circ \pi_{j-1,r})^{-1}) \cup I(g_{j-1,r+1})$. Therefore we have $p_{\overline{\iota}^{(j)}}^m \notin I(f_{r+1})$ for any $0 \le m < \mu(\overline{\iota}^{(j)})$, since $p_{\overline{\iota}^{(j)}}^m \notin I(g_{j-1,r+1})$ and $p_{\overline{\iota}^{(j)}}^m > p \in
I((\pi_{1,r} \circ \cdots \circ \pi_{j-1,r})^{-1})$ if $p_{\overline{\iota}^{(j)}}^m \approx p$. As $p_{\overline{\iota}^{(j)}}^{\mu(\overline{\iota}^{(j)})}=
p_{\sigma(\overline{\iota}^{(j)})}^{+}$, condition (\[eqn:orbit1\]) holds for any $\overline{\iota}^{(j)} \in \mathcal{K}(\overline{f})$, completing the proof. $\Box$
Assume that $\overline{f}$ is a realization of $\tau$. Then it turns out that the compositions $\pi_r:=\pi_{1,r} \circ \cdots \circ \pi_{\nu,r} : X_r \to Y_r$ are blowups of $N=\sum_{\overline{\iota} \in \mathcal{K}(\overline{f})}
\kappa(r,\overline{\iota})=
\sum_{\overline{\iota} \in \mathcal{K}(\overline{f})} \kappa(\overline{\iota})$ points $\{ p_{\overline{\iota}}^m \, | \, \overline{\iota} \in
\mathcal{K}(\overline{f}), \, 0 \le m \le
\mu(\overline{\iota}), \, i+m \equiv r~\text{mod}~n \}$, where $\kappa(r,\overline{\iota})$ is given by $$\label{eqn:nuk}
\kappa(r,\overline{\iota}) :=
\left\{
\begin{array}{ll}
\kappa(\overline{\iota})+1 & (\text{if } i<i_1 \text{ and } i \le r \le i_1-1)
\\[2mm]
\kappa(\overline{\iota})-1 & (\text{if } i_1 \le i \text{ and } i_1-1 < r < i)
\\[2mm]
\kappa(\overline{\iota}) & (\text{if otherwise}). \\[2mm]
\end{array}
\right. \\[2mm]$$ From (\[eqn:elim\]), the blowups $\pi_r : X_r \to Y_r$ lift $f_r : Y_{r-1} \to Y_{r}$ to a biholomorphism $F_r : X_{r-1} \to X_{r}$: $$\begin{CD}
X_r @> F_{r+1} >> X_{r+1} \\
@V \pi_r VV @VV \pi_{r+1} V \\
Y_r @> f_{r+1} >> Y_{r+1},
\end{CD}$$ and $\pi_{\tau}:=\pi_{n} : X_{\tau} \to Y$ also lifts $f:=f_n \circ \cdots \circ f_1 : Y \to Y$ to the automorphism $F_{\tau}:=F_n \circ \cdots \circ F_1 : X_{\tau} \to X_{\tau}$, where $X_{\tau}:=X_0=X_n$ and $Y:=Y_0=Y_n$.
[Figure \[fig:blowup\]: the orbit segment $p_{\overline{\iota}}^0, p_{\overline{\iota}}^1, \dots, p_{\overline{\iota}}^{\mu}=p_{\overline{\iota}'}^+$ carried by $f_i, f_{i+1}, \dots, f_{i'}$, and the lifted maps $f_i', \dots, f_{i'}'$ acting on the exceptional curves $E_{p_{\overline{\iota}}^0}, E_{p_{\overline{\iota}}^1}, \dots, E_{p_{\overline{\iota}}^{\mu}}$ after the blowup.]
We now restrict our attention to the case where each component of $\overline{f}=(f_1,\dots,f_n)$ is a quadratic birational map with $Y_r=\mathbb{P}_r^2 = \mathbb{P}^2$ and $\tau$ is an original orbit data, and calculate the action of $F_{\tau} : X_{\tau} \to X_{\tau}$ on the cohomology group when $\overline{f}$ is a realization of $\tau$.
To this end, we recall some properties of quadratic maps. Let $f : \mathbb{P}^2 \to \mathbb{P}^2$ be a quadratic birational map on $\mathbb{P}^2$. It is known that $f$ can be expressed as $f= \iota_- \circ \psi_{\ell} \circ \iota_+^{-1}$ for some $\ell \in \{1,2,3\}$, where $\iota_+, \iota_- : \mathbb{P}^2 \to \mathbb{P}^2$ are linear transformations, and $\psi_{\ell} : \mathbb{P}^2 \to \mathbb{P}^2$ are the quadratic birational maps given by
1. $\psi_1 : [x:y:z] \mapsto [yz:zx:xy]$ with $\psi_1^{-1}=\psi_1$,
2. $\psi_2 : [x:y:z] \mapsto [xz:yz:x^2]$ with $\psi_2^{-1}=\psi_2$,
3. $\psi_3 : [x:y:z] \mapsto [x^2:xy:y^2+xz]$ with $\psi_3^{-1} : [x:y:z] \mapsto [x^2:xy:-y^2+xz]$.
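As a quick check of the last formula, one computes $$\psi_3^{-1} \circ \psi_3([x:y:z])=[x^4:x^3 y:-(xy)^2+x^2(y^2+xz)]=[x^4:x^3y:x^3z]=[x:y:z],$$ and similar one-line computations confirm that $\psi_1$ and $\psi_2$ are involutions.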
The indeterminacy sets of $\psi_{\ell}^{\pm 1}$ are expressed as $I(\psi_{\ell}^{\pm 1})=\{p_{\ell,1},p_{\ell,2},p_{\ell,3}\}$, where $$p_{1,1}=[1:0:0], \quad p_{2,2}>p_{2,1}=p_{1,2}=[0:1:0], \quad
p_{3,3}>p_{3,2}>p_{3,1}=p_{2,3}=p_{1,3}=[0:0:1].$$ Then the geometry of the simple quadratic maps $\psi_{\ell} : \mathbb{P}^2 \to \mathbb{P}^2$ is described as follows. Let $\pi_{\ell} : X_{\ell} \to \mathbb{P}^2$ be the blowup of the cluster $\{ p_{\ell,1}, p_{\ell,2}, p_{\ell,3} \}$, and let $H_{\ell}$ be the total transform of a line in $\mathbb{P}^2$, $H_{\ell,1}$, $H_{\ell,2}$, $H_{\ell,3}$ be the strict transforms of the lines ${x=0}$, ${y=0}$, ${z=0}$, respectively, and $E_{\ell,i}$ be the total transform of the point $p_{\ell,i}$ for $i=1,2,3$. Then $E_{\ell,i}$ is linearly equivalent to $H_{\ell} - E_{\ell,j} - E_{\ell,k}$ for $\{i,j,k\}=\{1,2,3\}$. The birational map $\psi_{\ell}$ lifts to an automorphism $\widetilde{\psi}_{\ell} : X_{\ell} \to X_{\ell}$, which sends irreducible rational curves $E_{\ell,i}$ to $H_{\ell,i}$ for $(\ell,i)=(1,1),(1,2),(1,3),(2,2),(2,3),(3,3)$, and sends irreducible rational curves $E_{\ell,j}-E_{\ell,k}$ to themselves for $(\ell,j,k)=(2,1,2),(3,1,2),(3,2,3)$ (see Figures \[fig:quadratic1\]-\[fig:quadratic3\]). Moreover, $\psi_{\ell}$ sends a generic line to a conic passing through the three points $p_{\ell,1},p_{\ell,2},p_{\ell,3}$. Therefore, the action $\widetilde{\psi}_{\ell}^* : H^2(X_{\ell};\mathbb{Z})
\to H^2(X_{\ell};\mathbb{Z})$ on the cohomology group $H^2(X_{\ell};\mathbb{Z}) \cong \mathrm{Pic}(X_{\ell}) =
\mathbb{Z} [H_{\ell}] \oplus \mathbb{Z} [E_{\ell,1}] \oplus
\mathbb{Z} [E_{\ell,2}] \oplus \mathbb{Z} [E_{\ell,3}]$ is given by $$\label{eqn:relelem}
\widetilde{\psi}_{\ell}^* : \left\{
\begin{array}{lll}
[H_{\ell}] & \mapsto 2 [H_{\ell}] - \sum_{i=1}^3 [E_{\ell,i}] & ~ \\[2mm]
[E_{\ell,i}] & \mapsto [H_{\ell}] - [E_{\ell,j}] - [E_{\ell,k}] \, &
(\{i,j,k\}=\{1,2,3\}).
\end{array}
\right.$$
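In terms of the ordered basis $([H_{\ell}],[E_{\ell,1}],[E_{\ell,2}],[E_{\ell,3}])$, the action (\[eqn:relelem\]) is represented by the matrix $$\widetilde{\psi}_{\ell}^* =
\left(
\begin{array}{rrrr}
2 & 1 & 1 & 1 \\
-1 & 0 & -1 & -1 \\
-1 & -1 & 0 & -1 \\
-1 & -1 & -1 & 0
\end{array}
\right),$$ and one checks directly that this matrix squares to the identity and preserves the intersection form $\mathrm{diag}(1,-1,-1,-1)$.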
[Figure \[fig:quadratic1\]: the map $\psi_1$ and its lift $\widetilde{\psi}_1$, with the indeterminacy points $p_{1,1}$, $p_{1,2}$, $p_{1,3}$, the exceptional curves $E_{1,1}$, $E_{1,2}$, $E_{1,3}$, and the lines $\{x=0\}$, $\{y=0\}$, $\{z=0\}$.]
[Figure \[fig:quadratic2\]: the map $\psi_2$ and its lift $\widetilde{\psi}_2$, with the points $p_{2,1}<p_{2,2}$ and $p_{2,3}$, the curves $E_{2,2}$, $E_{2,3}$, $E_{2,1}-E_{2,2}$, and the lines $\{x=0\}$, $\{z=0\}$.]
[Figure \[fig:quadratic3\]: the map $\psi_3$ and its lift $\widetilde{\psi}_3$, with the points $p_{3,1}<p_{3,2}<p_{3,3}$, the curves $E_{3,3}$, $E_{3,1}-E_{3,2}$, $E_{3,2}-E_{3,3}$, and the line $\{x=0\}$.]
Next, we consider a general quadratic birational map $f=\iota_- \circ \psi_{\ell} \circ \iota_+^{-1} :
\mathbb{P}^2 \to \mathbb{P}^2$ with its inverse $f^{-1}=\iota_+ \circ \psi_{\ell}^{-1} \circ \iota_-^{-1} :
\mathbb{P}^2 \to \mathbb{P}^2$. Given $\{i,j,k\}=\{1,2,3\}$, put $$\label{eqn:label}
p_i^{\pm}= \iota_{\pm}(p_{\ell,1}), \quad
p_j^{\pm}= \iota_{\pm}(p_{\ell,2}), \quad
p_k^{\pm}= \iota_{\pm}(p_{\ell,3}).$$ Then the indeterminacy points of $f^{\pm 1}$ are labeled as $$\label{eqn:indquad}
I(f^{\pm 1})=\{p_1^{\pm}, p_2^{\pm}, p_3^{\pm} \}.$$ Let $\pi^{\pm} : X^{\pm} \to \mathbb{P}^2$ be blowups of the clusters $\{ p_1^{\pm},p_2^{\pm},p_3^{\pm} \}$, and let $H^{\pm} \subset X^{\pm}$ be the total transforms of a line in $\mathbb{P}^2$ under $\pi^{\pm}$, and $E_i^{\pm} \subset X^{\pm}$ be the exceptional divisors over the points $p_i^{\pm}$. Then the birational map $f : \mathbb{P}^2 \to \mathbb{P}^2$ lifts to an isomorphism $\widetilde{f} : X^+ \to X^-$. From (\[eqn:relelem\]), the cohomological action $\widetilde{f}^* : H^2(X^-;\mathbb{Z}) \to H^2(X^+;\mathbb{Z})$ is given by $$\label{eqn:relquad}
\widetilde{f}^* : \left\{
\begin{array}{lll}
[H^-] & \mapsto 2 [H^+] - \sum_{i=1}^3 [E_i^+] & ~ \\[2mm]
[E_i^-] & \mapsto [H^+] - [E_{j}^+] - [E_{k}^+] \, &
(\{i,j,k\}=\{1,2,3\}).
\end{array}
\right.$$ Conversely, if a birational quadratic map $f=\iota_- \circ \psi_{\ell} \circ \iota_+^{-1} :
\mathbb{P}^2 \to \mathbb{P}^2$ with the indeterminacy sets given in (\[eqn:indquad\]) lifts to $\widetilde{f}: X^+ \to X^-$ satisfying (\[eqn:relquad\]), then the points $p_{i}^{\pm}$ are expressed as (\[eqn:label\]) for some $\{i,j,k\}=\{1,2,3\}$. From here on, we assume that $f : \mathbb{P}^2 \to \mathbb{P}^2$ lifts to $\widetilde{f}: X^+ \to X^-$ satisfying (\[eqn:relquad\]). Then labeling $I(f)=\{p_1^+, p_2^+,p_3^+\}$ determines $I(f^{-1})=\{p_1^-, p_2^-,p_3^- \}$ and vice versa. In particular, it follows that $p_i^+ < p_j^+$ if and only if $p_i^- < p_j^-$.
Under these settings, we come back to consider an $n$-tuple $\overline{f}=(f_1,\dots,f_n)$ of quadratic birational maps $f_r : \mathbb{P}_{r-1}^2 \to \mathbb{P}_{r}^2$ with the indeterminacy sets $I(f_r^{\pm 1})=\{ p_{r,1}^{\pm},p_{r,2}^{\pm},p_{r,3}^{\pm} \}$. Since $\mathcal{K}(n)=\mathcal{K}(\overline{f})=
\mathcal{K}_+(\overline{f})=\{ \overline{\iota}=(i,\iota) \, | \,
i=1,2, \dots, n, \, \iota=1,2,3 \}$, a generalized orbit data $\tau$ for $\overline{f}$ becomes an original orbit data according to Definition \[def:data\]. Moreover, $\overline{f}$ can be called a realization of $\tau$ if the conditions in Definition \[def:real'\] hold.
\[rem:triple\] The value $\mu(\overline{\iota})$ stands for the length of the orbit segment $p_{\overline{\iota}}^0,p_{\overline{\iota}}^1,\dots,
p_{\overline{\iota}}^{\mu(\overline{\iota})}$, while the value $\kappa(\overline{\iota})$ stands for the number of points in the orbit segment that lie on $\mathbb{P}_n^2$. Moreover, the definition given in (\[eqn:nu'\]) of $\mu(\overline{\iota})$ yields a function $\mu :\mathcal{K}(n) \to \mathbb{Z}_{\ge 0}$ such that $\mu(\overline{\iota}) - i_1 +i + 1 \in n \cdot \mathbb{Z}$ for any $\overline{\iota} \in \mathcal{K}(n)$. Thus, there is a one-to-one correspondence between the data $(n,\sigma,\kappa)$ and $(n,\sigma,\mu)$ through equation (\[eqn:nu'\]). In what follows, we identify the orbit data $\tau=(n,\sigma,\kappa)$ with $(n,\sigma,\mu)$, and write $\tau=(n,\sigma,\kappa)=(n,\sigma,\mu)$.
When $\overline{f}$ is a realization of $\tau$, the blowups $\pi_r : X_r \to Y_r=\mathbb{P}_r^2$ lift $f_r : \mathbb{P}_{r-1}^2 \to \mathbb{P}_{r}^2$ to biholomorphisms $F_r : X_{r-1} \to X_{r}$ and $\pi_{\tau} : X_{\tau} \to \mathbb{P}^2$ lifts $f : \mathbb{P}^2 \to \mathbb{P}^2$ to the automorphism $F_{\tau}=F_n \circ \cdots \circ F_1 : X_{\tau} \to X_{\tau}$. Let $H_r \subset X_r$ and $E_{\overline{\iota},r}^k \subset X_r$ be the total transforms of a line in $\mathbb{P}_r^2$ and the point $p_{\overline{\iota}}^{m}$ with $k \ge 0$ and $$m=
\left\{
\begin{array}{ll}
\theta_{i,r}(k) \quad & (i \le r) \\[2mm]
\theta_{i,r}(k+1) \quad & (i > r).
\end{array}
\right.$$ Then the cohomology group of $X_r$ is $H^2(X_r;\mathbb{Z})= \mathbb{Z} [H_r] \oplus
\bigr( \oplus_{\overline{\iota} \in \mathcal{K}(n)}
\oplus_{k=1}^{\kappa(r,\overline{\iota})} \mathbb{Z}
[E_{\overline{\iota},r}^{k-1}]\bigl)$, where $\kappa(r,\overline{\iota})$ is given in (\[eqn:nuk\]). Moreover, from (\[eqn:relquad\]), the action $F_{r}^* : H^2(X_{r};\mathbb{Z}) \to H^2(X_{r-1};\mathbb{Z})$ is given by $$F_{r}^* : \left\{
\begin{array}{lll}
[H_{r}] & \mapsto 2 [H_{r-1}] - \sum_{\ell=1}^3
[E_{\sigma^{-1}(r,\ell),r-1}^{\kappa(r-1,\sigma^{-1}(r,\ell))-1}] & ~ \\[2mm]
[E_{(r,\ell_1),r}^0] \hspace{-3mm} & \mapsto [H_{r-1}]
- [E_{\sigma^{-1}(r,\ell_2),r-1}^{\kappa(r-1,\sigma^{-1}(r,\ell_2))-1}]
- [E_{\sigma^{-1}(r,\ell_3),r-1}^{\kappa(r-1,\sigma^{-1}(r,\ell_3))-1}] &
(\{\ell_1,\ell_2,\ell_3\}=\{1,2,3\}) \\[2mm]
[E_{\overline{\iota},r}^k] & \mapsto [E_{\overline{\iota},r-1}^k] &
(i \neq r) \\[2mm]
[E_{(r,l),r}^k] & \mapsto [E_{(r,l),r-1}^{k-1}]. & ~ \\[2mm]
\end{array}
\right.$$ The composition $F_{\tau}^*=F_n^* \circ \cdots \circ F_1^*$ acts on the cohomology group $H^2(X_{\tau};\mathbb{Z})= \mathbb{Z} [H] \oplus
\bigr( \oplus_{\overline{\iota} \in \mathcal{K}(n)}
\oplus_{k=1}^{\kappa(\overline{\iota})} [E_{\overline{\iota}}^{k-1}]\bigl)$, where $H:=H_0=H_n$ and $E_{\overline{\iota}}^k:=E_{\overline{\iota},0}^k=E_{\overline{\iota},n}^k$.
The above observation leads us to Definition \[def:latiso\]. Namely, let $\phi_{\pi_{\tau}} : \mathbb{Z}^{1,N} \cong
L_{\tau} \to H^2(X_{\tau}:\mathbb{Z})$ be the isomorphism defined by $\phi_{\pi_{\tau}}(e_0)=[H]$ and $\phi_{\pi_{\tau}}(e_{\overline{\iota}}^k)=E_{\overline{\iota}}^{k-1}$. Then it is easily seen that the automorphism $w_{\tau}: \mathbb{Z}^{1,N} \to \mathbb{Z}^{1,N}$ is realized by $(\pi_{\tau},F_{\tau})$, that is, $\phi_{\pi_{\tau}} \circ w_{\tau}= F_{\tau}^* \circ \phi_{\pi_{\tau}}
: \mathbb{Z}^{1,N} \to H^2(X_{\tau};\mathbb{Z})$. Summing up these discussions, we have the following proposition.
\[pro:auto\] Assume that $\overline{f}$ is a realization of $\tau$. Then the blowup $\pi_{\tau} : X_{\tau} \to \mathbb{P}^2$ of $N=\sum_{\overline{\iota} \in \mathcal{K}(n)} \kappa(\overline{\iota})$ points $\{ p_{\overline{\iota}}^m \, | \, \overline{\iota} \in \mathcal{K}(n), \,
m=\theta_{i,0}(k), 1 \le k \le \kappa(\overline{\iota}) \}$ lifts $f=f_n \circ \cdots \circ f_1$ to the automorphism $F_{\tau} : X_{\tau} \to X_{\tau}$. Moreover, $(\pi_{\tau},F_{\tau})$ realizes $w_{\tau}$ and $F_{\tau}$ has positive entropy $h_{\mathrm{top}}(F_{\tau})= \log \lambda(w_{\tau}) > 0$.
Tentative Realizability {#sec:tenta}
=======================
By restricting our attention to quadratic birational maps preserving a cuspidal cubic, we define a concept of tentative realization of orbit data. As is mentioned below, when such a realization exists, it is uniquely determined in some sense by the orbit data $\tau$. From the characterization of composition of quadratic birational maps mentioned in Proposition \[pro:birat\], the existence of a tentative realization is investigated under the condition $$\label{eqn:TR}
\Gamma_{\tau}^{(1)} \cap P(\tau) = \emptyset,$$ where $\Gamma_{\tau}^{(1)}$ is given in (\[eqn:roots1\]), and $P(\tau)$ is the set of periodic roots with period $\ell_{\tau}$, that is, $$\label{eqn:per}
P(\tau):=\big\{ \alpha \in \Phi_{N} \, \big| \, w_{\tau}^{\ell_{\tau}}(\alpha)
= \alpha \big\}.$$
First, we introduce some terminology used below. Let $X$ be a smooth surface, $C$ be a curve in $X$, and $x$ be a proper point of the smooth locus $C^*$ of $C$. Moreover, put $(X_0,C_0^*,x_0):=(X,C^*,x)$, and for $m > 0$, inductively determine $(X_m,C_m^*,x_m)$ from the blowup $\pi_m : X_m \to X_{m-1}$ of $x_{m-1} \in C_{m-1}^*$, the strict transform $C_m^*$ of $C_{m-1}^*$ under $\pi_m$, and a unique point $x_{m} \in C_m^* \cap E_m$, where $E_m$ stands for the exceptional curve of $\pi_m$. Then, $x_m$ is called the [*point in the $m$-th infinitesimal neighbourhood of $x$ on $C^*$*]{}, or an [*infinitely near point on $C^*$*]{}. Thus, a point in the $m$-th infinitesimal neighbourhood of $x$ on $C^*$ is uniquely determined. Moreover, if a cluster $I$ consists of proper or infinitely near points on $C^*$, then we say that $I$ is [*a cluster in $C^*$*]{}.
Now let $C$ be a cubic curve on $\mathbb{P}^2$ with a cusp singularity. In what follows, a coordinate on $\mathbb{P}^2$ is chosen so that $C=\{[x:y:z] \in \mathbb{P}^2 \, | \, y z^2=x^3\} \subset \mathbb{P}^2$ with a cusp $[0:1:0]$. Then the smooth locus $C^*=C \setminus \{[0:1:0]\}$ is parametrized as $\mathbb{C} \ni t \mapsto [t:t^3:1] \in C^*$. We denote by $\mathcal{B}(C)$ the set of birational self-maps $f$ of $\mathbb{P}^2$ such that $f(C):=\overline{f(C \setminus I(f))}=C$ and $[0:1:0] \notin I(f^{\pm1})$, and denote by $\mathcal{Q}(C) \subset \mathcal{B}(C)$ and $\mathcal{L}(C) \subset \mathcal{B}(C)$ the subsets consisting of the quadratic maps in $\mathcal{B}(C)$ and of the linear maps in $\mathcal{B}(C)$, respectively. Any map $f \in \mathcal{B}(C)$ restricted to $C^*$ is an automorphism of $C^*$ expressed as $$\label{eqn:linear}
f|_{C^*} : C^* \ni [t:t^3:1] \mapsto
[\delta(f) \cdot t+ k_f: (\delta(f) \cdot t+k_f)^3:1] \in C^*,$$ for some $\delta(f) \in \mathbb{C}^{\times}$ and $k_f \in \mathbb{C}$. The value $\delta(f)$ is called the [*determinant*]{} of $f$. It is independent of the choice of the coordinate. Moreover, when $f \in \mathcal{Q}(C)$, it turns out that the indeterminacy sets $I(f^{\pm 1})$ are clusters in $C^*$ (see Lemma \[lem:quad\]).
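Note, for instance, that the point $[t:t^3:1]$ indeed lies on $C$, since $y z^2=t^3 \cdot 1^2=x^3$, and that composing two maps of the form (\[eqn:linear\]) gives $$(g \circ f)|_{C^*} : t \mapsto \delta(g) \delta(f) \cdot t+\delta(g) \cdot k_f+k_g,$$ so that the determinant is multiplicative, $\delta(g \circ f)=\delta(g) \cdot \delta(f)$, whenever the composition again belongs to $\mathcal{B}(C)$; this is consistent with the definition of $\delta(\overline{f})$ given below.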
We give the following definition for an $n$-tuple $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$ of quadratic birational maps $f_i$ preserving $C$.
\[def:tenta\] An $n$-tuple $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$ is called a [*tentative realization*]{} of an orbit data $\tau=(n,\sigma,\mu)$ if $p_{\overline{\iota}}^{\mu(\overline{\iota})}
\approx p_{\sigma(\overline{\iota})}^+$ for any $\overline{\iota} \in \mathcal{K}(n)$, where $p_{\overline{\iota}}^m$ is given in (\[eqn:p\]) with $f_r$ restricted to $C$ and thus is well-defined.
We should note that a realization $\overline{f}$ of $\tau$ is of course a tentative realization of $\tau$, and thus the existence of a tentative realization is of interest to us.
Now, we describe a quadratic birational map $f \in \mathcal{Q}(C)$ in terms of the behavior of $f|_{C^*}$. The following proposition states that the configuration of $I(f^{-1})$ on $C^*$ and the determinant $\delta(f)$ of $f$ determine the map $f \in \mathcal{Q}(C)$ uniquely (see [@D]).
\[lem:quad\] A birational map $f$ belongs to $\mathcal{Q}(C)$ if and only if there exist $d \in \mathbb{C}^{\times}$ and $b=(b_{\iota})_{\iota=1}^3 \in \mathbb{C}^3$ with $b_1+b_2+b_3 \neq 0$ such that $f$ can be expressed as $f=f_{d,b}$, where $f_{d,b} \in \mathcal{Q}(C)$ is a unique map determined by the following properties.
1. $\delta(f_{d,b})=d$.
2. $p_{\iota}^{-} \approx [b_{\iota}:b_{\iota}^3:1] \in C^*$ for a suitable labeling $I(f_{d,b}^{-1})=\{p_1^{-},p_2^{-},p_3^{-}\}$.
Moreover, the map $f_{d,b} \in \mathcal{Q}(C)$ satisfies the following.
1. $p_{\iota}^{+} \approx [a_{\iota}:a_{\iota}^3:1] \in C^*$ for $I(f_{d,b})=\{p_1^{+},p_2^{+},p_3^{+}\}$, where $\displaystyle a_{\iota}:=\frac{1}{d} \big\{b_{\iota}-\frac{2}{3}
(b_1+b_2+b_3) \big\}$.
2. $k_{f_{d,b}} = - \frac{1}{3} (b_1+b_2+b_3)\in \mathbb{C}^{\times}$, where $k_{f}$ is given in (\[eqn:linear\]).
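To illustrate the lemma with a concrete (and otherwise arbitrary) choice, take $d=2$ and $b=(1,1,1)$, so that $b_1+b_2+b_3=3 \neq 0$. Then $$a_{\iota}=\frac{1}{2} \Big\{1-\frac{2}{3} \cdot 3 \Big\}=-\frac{1}{2}, \qquad k_{f_{2,b}}=-1,$$ so the three points of $I(f_{2,b}^{-1})$ all lie over the proper point $[1:1:1] \in C^*$, the three points of $I(f_{2,b})$ all lie over $[-\frac{1}{2}:-\frac{1}{8}:1] \in C^*$, and $f_{2,b}|_{C^*}$ is the affine map $t \mapsto 2t-1$.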
In a similar manner, any linear map $f \in \mathcal{L}(C)$ is determined uniquely by the determinant $\delta(f)$ of $f$ (see [@D]).
\[lem:linear\] For any $d \in \mathbb{C}^{\times}$, there is a unique linear map $f \in \mathcal{L}(C)$ such that $\delta(f)=d$. In particular, the map $f \in \mathcal{L}(C)$ with $\delta(f)=1$ is the identity. Moreover, for any $f \in \mathcal{L}(C)$, the automorphism $f|_{C^*}$ restricted to $C^*$ is given by $$f|_{C^*} : [t:t^3:1] \mapsto [\delta(f) \cdot t : (\delta(f) \cdot t )^3:1].$$
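In the coordinates fixed above, this map can be written down explicitly: the linear transformation $$[x:y:z] \mapsto [d \cdot x:d^3 \cdot y:z]$$ preserves the curve $\{y z^2=x^3\}$, fixes the cusp $[0:1:0]$, and restricts to $t \mapsto d \cdot t$ on $C^*$, so by uniqueness it is the map $f \in \mathcal{L}(C)$ with $\delta(f)=d$.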
Next, let us consider the composition $f=f_n \circ f_{n-1} \circ \cdots \circ f_1 : \mathbb{P}^2 \to \mathbb{P}^2$ of quadratic birational maps $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$. A labeling $I(f_i^{-1})=\{ p_{i,1}^{-},p_{i,2}^{-},p_{i,3}^{-} \}$ determines $I(f_i)=\{ p_{i,1}^{+},p_{i,2}^{+},p_{i,3}^{+} \}$ and the points: $$\label{eqn:Indp}
\check{p}_{i,\iota}^{+}:=f_1^{-1}|_{C} \circ \cdots \circ
f_{i-1}^{-1}|_{C} (p_{i,\iota}^{+}), \qquad
\check{p}_{i,\iota}^{-}:=f_n|_{C} \circ \cdots \circ f_{i+1}|_{C}
(p_{i,\iota}^{-})$$ (see Figure \[fig:comp\]). Then it is easy to see that $I(f^{\pm 1}) \subset \{\check{p}_{i,\iota}^{\pm} \, | \,
(i,\iota) \in \mathcal{K}(n) \}$. Moreover, let $\delta(\overline{f})$ be the [*determinant*]{} of $\overline{f}$ defined by $\delta(\overline{f})=\prod_{j=1}^n \delta(f_j)$ or, in other words, $\delta(\overline{f})=\delta(f)$.
\[pro:birat\] Let $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$ be an $n$-tuple of quadratic birational maps in $\mathcal{Q}(C)$ with $d=\delta(\overline{f}) \neq 1$, and let $\check{p}_{i,\iota}^{\pm}$ be the points given in (\[eqn:Indp\]) for a labeling $I(f_i^{-1})=\{ p_{i,1}^{-},p_{i,2}^{-},p_{i,3}^{-} \}$. Then there is a unique pair $(v,s)$ of values $v=(v_{\overline{\iota}})_{\overline{\iota} \in
\mathcal{K}(n)} \in \mathbb{C}^{3n}$ and $s=(s_i)_{i=1}^n \in (\mathbb{C}^{\times})^n$ such that $(d,v,s)$ satisfies equation (\[eqn:SB\]) and the composition $f=f_n \circ \cdots \circ f_1$ satisfies
1. $f|_{C^*} : C^* \ni [t+ \frac{1}{3} k(s):
(t+ \frac{1}{3} k(s))^3:1] \mapsto [d \cdot t+ \frac{1}{3} k(s)
:(d \cdot t+ \frac{1}{3} k(s))^3:1] \in C^*$, where $k(s)$ is given in (\[eqn:fix\]),
2. $\check{p}_{i,\iota}^{-} \approx [v_{i,\iota}+\frac{1}{3} k(s):
(v_{i,\iota}+ \frac{1}{3} k(s))^3:1] \in C^*$,
3. $\check{p}_{i,\iota}^{+} \approx [u_{i,\iota}+\frac{1}{3} k(s):
(u_{i,\iota}+\frac{1}{3} k(s))^3:1] \in C^*$, where $$\label{eqn:uv}
u_{i,\iota}:= \frac{1}{d} \big\{ v_{i,\iota}
- (d-1) \cdot s_i \big\}.$$
Conversely, for any $d \in \mathbb{C} \setminus \{ 0,1\}$, $v \in \mathbb{C}^{3n}$ and $s \in (\mathbb{C}^{\times})^n$ satisfying equation (\[eqn:SB\]), there exists an $n$-tuple $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$ such that, for a suitable labeling $I(f_i^{-1})=\{ p_{i,1}^{-},p_{i,2}^{-},p_{i,3}^{-} \}$, the composition $f=f_n \circ \cdots \circ f_1$ satisfies conditions (1)–(3). Moreover, the $n$-tuple $\overline{f}$ is determined uniquely by $(d,v,s)$ in the sense that if $\overline{f}=(f_1,\dots,f_n)$ and $\overline{f}'=(f_1',\dots,f_n')$ are determined by $(d,v,s)$, then there are linear maps $g_1, \dots, g_{n-1} \in \mathcal{L}(C)$ such that $f_j=g_{j-1} \circ f_j' \circ g_{j}$ for any $j=1, \dots,n$, where $g_0=g_n=\text{id}$.
[*Proof*]{}. From Lemma \[lem:quad\], each map $f_i \in \mathcal{Q}(C)$ is given by $f_i=f_{d_i,(b_{i,\iota})}$ for some $d_i \in \mathbb{C}^{\times}$ and $(b_{i,\iota})_{\iota=1}^{3} \in \mathbb{C}^3$ with $b_{i}:=b_{i,1}+b_{i,2}+b_{i,3} \neq 0$. Then the maps $f_i$ and $f$ restricted to $C^*$ can be expressed as $f_i |_{C^*}( [t:t^3:1]) = [y_i(t):y_i(t)^3:1]$ and $f |_{C^*}([t:t^3:1]) = [y(t):y(t)^3:1]$, respectively, where $y_i, \, y : \mathbb{C} \to \mathbb{C}$ are the maps given by $$y_i(t)=d_i \cdot t -\frac{1}{3} b_{i},$$ and $y:=y_n \circ y_{n-1} \circ \cdots \circ y_1$. Now we put $\check{d}_i:=d_{i+1} \cdot d_{i+2} \cdots d_n$, $a_{i,\iota}:=( b_{i,\iota}-\frac{2}{3} b_i)/d_i$ and $$\check{a}_{i,\iota}:=y_1^{-1} \circ \cdots \circ y_{i-1}^{-1} (a_{i,\iota}),
\qquad
\check{b}_{i,\iota}:= y_n \circ \cdots \circ y_{i+1} (b_{i,\iota}), \qquad
\check{b}_{i}:= \check{b}_{i,1}+\check{b}_{i,2}+\check{b}_{i,3}.$$ Then it follows that $\check{p}_{i,\iota}^{+} \approx
[\check{a}_{i,\iota}:\check{a}_{i,\iota}^3:1]$, $\check{p}_{i,\iota}^{-} \approx
[\check{b}_{i,\iota}:\check{b}_{i,\iota}^3:1]$ and $d=\check{d}_0$. A little calculation shows that $$\begin{array}{rl}
y(t) & \displaystyle =
d \cdot t- \frac{1}{3} \sum_{r=1}^n 2^{r-1} \cdot \check{b}_r, \\[2mm]
\check{b}_{i,\iota} & = \displaystyle
\check{d}_{i} \cdot b_{i,\iota} -\frac{1}{3} \sum_{r=i+1}^n
\check{d}_{r} \cdot b_r = \check{d}_{i} \cdot b_{i,\iota}
-\frac{d-1}{3} \sum_{r=i+1}^n s_r, \\[2mm]
\check{a}_{i,\iota} & \displaystyle =
\frac{1}{d} \Bigl( \check{d}_i \cdot b_{i,\iota} - \check{d}_i \cdot b_i
+ \frac{1}{3} \sum_{r=1}^i \check{d}_r \cdot b_r \Bigr)
=\frac{1}{d} \Big\{ \check{d}_i \cdot b_{i,\iota} - (d-1) \cdot s_i
+ \frac{d-1}{3} \sum_{r=1}^i s_r \Big\}, \\[2mm]
\end{array}$$ where $s_i := \check{d}_i \cdot b_i/(d-1) \neq0$. If we put $$v_{i,\iota} :=
\check{d}_i \cdot b_{i,\iota} - \frac{1}{3}
\Big( \sum_{r=1}^i s_r + d \cdot \sum_{r=i+1}^n s_r \Big),$$ then we have $$v_{i,1}+v_{i,2}+v_{i,3} = \displaystyle
\check{d}_i \cdot b_{i} -
\Big( \sum_{r=1}^i s_r + d \cdot \sum_{r=i+1}^n
s_r \Big) = \displaystyle - \sum_{r=1}^{i-1} s_r
+(d-2) s_i - d \sum_{r=i+1}^n s_r,$$ which shows that equation (\[eqn:SB\]) holds. Moreover, since $\check{b}_i = \check{d}_{i} \cdot b_{i} - (d-1) \cdot \sum_{r=i+1}^n
s_r = (d-1) \cdot \{s_{i}-\sum_{r=i+1}^n s_{r} \}$ and thus $\sum_{r=1}^n 2^{r-1} \cdot \check{b}_r = (d-1) \cdot k(s)$, the map $y(t)= d \cdot t - (d-1) \cdot k(s)/3$ has the unique fixed point $k(s)/3$ under the assumption that $d \neq 1$. Finally, we have $$\begin{array}{rl}
\check{b}_{i,\iota} = & \displaystyle
\check{d}_{i} \cdot b_{i,\iota}
-\frac{d-1}{3} \hspace{-1.5mm} \sum_{r=i+1}^n \hspace{-0.5mm} s_r =
v_{i,\iota} + \frac{1}{3}
\Big( \sum_{r=1}^i \hspace{-0.5mm} s_r + d \cdot \hspace{-1mm}
\sum_{r=i+1}^n \hspace{-0.5mm} s_r \Big)
- \frac{d-1}{3} \hspace{-1.5mm} \sum_{r=i+1}^n \hspace{-0.5mm}
s_r = v_{i,\iota} + \frac{1}{3} k(s), \\[2mm]
d \cdot \check{a}_{i,\iota} = &
\displaystyle \check{d}_{i} \cdot b_{i,\iota}
- (d-1) \cdot s_i + \frac{d-1}{3} \sum_{r=1}^i s_r \\[2mm]
= & \displaystyle v_{i,\iota} + \frac{1}{3}
\Big( \sum_{r=1}^i s_r + d \cdot \sum_{r=i+1}^n s_r \Big)
- (d-1) \cdot s_i + \frac{d-1}{3} \sum_{r=1}^i s_r \\[2mm]
= & \displaystyle v_{i,\iota} - (d-1) \cdot s_i + \frac{d}{3} k(s).
\end{array}$$ Thus conditions (1)–(3) hold.
Conversely, for any $d \neq 1$, $(s_{i})$ and $(v_{i,\iota})$ satisfying (\[eqn:SB\]), the maps $(f_i)=(f_{d_i,(b_{i,\iota})})$ with $$\begin{array}{l}
d_i=
\left\{
\begin{array}{ll}
1, & (i \neq n) \\[2mm]
d, & (i = n)
\end{array}
\right.
\\[2mm]
\displaystyle b_{i,\iota} = \frac{1}{d} \Big\{ v_{i,\iota} +
\frac{1}{3} \Big(\sum_{r=1}^i s_r + d \cdot \sum_{r=i+1}^n s_r \Big) \Big\}
\end{array}$$ give the birational map $f=f_n \circ \cdots \circ f_1$ satisfying conditions (1)–(3). Moreover, assume that there are two $n$-tuples $\overline{f}=(f_1, \dots,f_n)$ and $\overline{f}'=(f_1', \dots,f_n')$ in $\mathcal{Q}(C)^n$ such that $f=f_n \circ \cdots \circ f_1$ and $f'=f_n' \circ \cdots \circ f_1'$ satisfy conditions (1)–(3) for $(d,v,s)$. Put $g_j:=f_{j+1}' \circ \cdots \circ f_{n}' \circ f_n^{-1} \circ
\cdots \circ f_{j+1}' : \mathbb{P}^2 \to \mathbb{P}^2$. Then one has $f_j=g_{j-1} \circ f_j' \circ g_{j}$, where $g_n=\text{id}$. It follows from condition (2) that $g_j$ is a linear map in $\mathcal{L}(C)$, and from condition (1) that the determinant of $g_1$ is given by $\delta(g_1)=\delta(f') \cdot \delta(f)^{-1}=1$, which means that $g_1=\text{id}$ (see Lemma \[lem:linear\]). This completes the proof. $\Box$
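As a consistency check in the simplest case $n=1$ (not treated separately above), one has $\check{d}_1=1$, $s_1=b_1/(d-1)$ with $b_1=b_{1,1}+b_{1,2}+b_{1,3}$, and $\check{b}_1=(d-1) \cdot s_1$, so that $k(s)=s_1$. Condition (2) then becomes $v_{1,\iota}+\frac{1}{3} k(s)=b_{1,\iota}$, while substituting $v_{1,\iota}=b_{1,\iota}-\frac{1}{3} s_1$ and $s_1=b_1/(d-1)$ into (\[eqn:uv\]) gives $$u_{1,\iota}+\frac{1}{3} k(s)=\frac{1}{d} \Big\{ b_{1,\iota}-\frac{2}{3} b_1 \Big\},$$ so conditions (2) and (3) recover exactly the description of $I(f_{d,b}^{\pm 1})$ in Lemma \[lem:quad\].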
[Figure \[fig:comp\]: the indeterminacy points $p_{i,\iota}^{\pm}$ of $f_i^{\pm 1}$ and their transports $\check{p}_{i,\iota}^{+}$ under $f_1^{-1} \circ \cdots \circ f_{i-1}^{-1}$ and $\check{p}_{i,\iota}^{-}$ under $f_{n} \circ \cdots \circ f_{i+1}$.]
\[cor:tenta\] Let $\tau$ be an orbit data with $\lambda(w_{\tau})>1$, $d$ be a root of $S_{w_{\tau}}(t)=0$ and $s \neq 0$ be a unique solution of equation (\[eqn:mat\]) (see Corollary \[cor:sol\]). Then $s$ satisfies $s_j \neq 0$ for any $1 \le j \le n$ if and only if there is a tentative realization $\overline{f}$ of $\tau$ with $\delta(\overline{f})=d$. Moreover, the tentative realization $\overline{f}$ of $\tau$ is uniquely determined in the sense that if there are two tentative realizations $\overline{f}=(f_1,\dots,f_n)$ and $\overline{f}'=(f_1',\dots,f_n')$ of $\tau$ with $\delta(\overline{f})=\delta(\overline{f}')=d$, then there are linear maps $g_1, \dots, g_n \in \mathcal{L}(C)$ such that $f_j=g_{j-1} \circ f_j' \circ g_{j}$ for any $j=1, \dots,n$, where $g_0:=g_n$.
[*Proof*]{}. First, assume that there is a tentative realization $\overline{f}$ of $\tau$ with $\delta(\overline{f})=d$. Then we notice that it is unique. Indeed, $\overline{f}$ satisfies $p_{\overline{\iota}}^{\mu(\overline{\iota})}
\approx p_{\sigma(\overline{\iota})}^+$, or equivalently, $f|_C^{\kappa(\overline{\iota})-1}(\check{p}_{\overline{\iota}}^-)
\approx \check{p}_{\sigma(\overline{\iota})}^+$ and thus $d^{\kappa(\overline{\iota})-1} \cdot
v_{\overline{\iota}}=u_{\sigma(\overline{\iota})}$ for any $\overline{\iota} \in \mathcal{K}(n)$. Therefore, from (\[eqn:uv\]), the pair $(v,s) \in \mathbb{C}^{3n} \times (\mathbb{C}^{\times})^{n}$ given in Proposition \[pro:birat\] satisfies (\[eqn:SB\]) and (\[eqn:expab\]). Since a solution of (\[eqn:SB\]) and (\[eqn:expab\]) is unique, up to a constant multiple (see Corollary \[cor:sol\]), the map $f$ is unique, up to conjugacy by a linear map in $\mathcal{L}(C)$, and so is $\overline{f}$ (see Lemma \[lem:linear\]). Moreover, $s$ satisfies $s_j \neq 0$ for any $1 \le j \le n$.
Conversely, assume that $s$ satisfies $s_j \neq 0$ for any $1 \le j \le n$. From Corollary \[cor:sol\], there is a solution $(v,s)$ of (\[eqn:SB\]) and (\[eqn:expab\]). Hence, Proposition \[pro:birat\] gives an $n$-tuple $\overline{f}=(f_1,\dots,f_n) \in \mathcal{Q}(C)^n$ such that $f=f_n \circ \cdots \circ f_1$ satisfies conditions $(1)$–$(3)$ in Proposition \[pro:birat\] and thus $\delta(\overline{f})=d$. In view of (\[eqn:expab\]) and (\[eqn:uv\]), one has $d^{\kappa(\overline{\iota})-1} \cdot v_{\overline{\iota}}
=u_{\sigma(\overline{\iota})}$ and so $p_{\overline{\iota}}^{\mu(\overline{\iota})}
\approx p_{\sigma(\overline{\iota})}^+$ for any $\overline{\iota} \in \mathcal{K}(n)$, which means that $\overline{f}$ is the tentative realization of $\tau$ with $\delta(\overline{f})=d$. $\Box$
Now we fix an orbit data $\tau$ with $\lambda(w_{\tau}) > 1$. Then from Corollary \[cor:sol\], $\lambda(w_{\tau})$ is a root of $\chi_{\tau}(t)=0$ and there is a unique solution $s_{\tau} \neq 0 \in \mathbb{C}^n$ of the equation (\[eqn:mat\]) with $d=\lambda(w_{\tau})$.
\[lem:vanish\] For each $1 \le j \le n$, $\alpha_{j}^c$ belongs to $P(\tau)$ if and only if $(s_{\tau})_j=0$, where $\alpha_{j}^c$ is given in (\[eqn:root1\]) and $(s_{\tau})_j$ is the $j$-th component of $s_{\tau}$.
[*Proof*]{}. Assume that $\alpha_{j}^c \in P(\tau)$, which is equivalent to saying that $(\alpha_{j}^c,\overline{v}_{\delta})=0$ from Lemma \[lem:per\], where $\delta:=\lambda(w_{\tau})$. By (\[eqn:wexp\]), we have $$\begin{array}{ll}
(\alpha_{j}^c,\overline{v}_{\delta}) &
= (e_0-e_{(j,1)_{\tau}}^1-e_{(j,2)_{\tau}}^1
-e_{(j,3)_{\tau}}^1, q_{j+1} \circ \cdots
\circ q_n (\overline{v}_{\delta})) \\[2mm]
~&= \displaystyle
\Big( \sum_{m=1}^{j} (s_{\tau})_m +\delta \sum_{m=j+1}^{n} (s_{\tau})_m \Big)
+ \sum_{\iota=1}^3 v_{j,\iota} \\[2mm]
~ &= \displaystyle
\Big( \sum_{m=1}^{j} (s_{\tau})_m +\delta \sum_{m=j+1}^{n} (s_{\tau})_m \Big)
+ \Big(-\sum_{m=1}^{j-1} (s_{\tau})_m +(\delta-2) (s_{\tau})_j -
\delta \sum_{m=j+1}^{n} (s_{\tau})_m \Big) \\[2mm]
~&=\displaystyle (\delta-1) (s_{\tau})_{j}.
\end{array}$$ Thus the equation $(\alpha_{j}^c,\overline{v}_{\delta})=0$ is equivalent to saying that $(s_{\tau})_{j}=0$, since $\delta >1$. $\Box$
Propositions \[pro:tenta\], \[pro:nonzero\] and \[pro:det\] mentioned below run parallel with Theorems \[thm:main1\]–\[thm:main3\]. Namely, Proposition \[pro:tenta\] states that there is a tentative realization of $\tau$ under condition (\[eqn:TR\]), Proposition \[pro:nonzero\] states that the sibling $\check{\tau}$ of any orbit data satisfies condition (\[eqn:TR\]), and finally Proposition \[pro:det\] gives a sufficient condition for (\[eqn:TR\]).
\[pro:tenta\] Assume that an orbit data $\tau$ satisfies $\lambda(w_{\tau}) > 1$ and condition (\[eqn:TR\]). Then there is a unique tentative realization $\overline{f}$ of $\tau$ such that $\delta(\overline{f}) = \lambda(w_{\tau}) >1$. Conversely, if there is a tentative realization $\overline{f}$ of $\tau$ such that $\delta(\overline{f})=\lambda(w_{\tau}) >1$, then $\tau$ satisfies condition (\[eqn:TR\]).
[*Proof*]{}. This proposition follows easily from Corollary \[cor:tenta\] and Lemma \[lem:vanish\]. $\Box$
\[pro:nonzero\] There is a data $\check{\tau}=(\check{n}, \check{\sigma},
\check{\kappa})$ with $\check{n} \le n$ such that $\delta=\lambda(w_{\tau})=\lambda(w_{\check{\tau}})$ and $(s_{\check{\tau}})_{j} \neq 0$ for any $1 \le j \le \check{n}$. In particular, $\check{\tau}$ satisfies condition (\[eqn:TR\]).
[*Proof*]{}. Let $(v,s_{\tau}) \in (\mathbb{C}^{3n} \setminus \{0\})
\times (\mathbb{C}^{n} \setminus \{0\})$ be the unique solution of (\[eqn:SB\]) and (\[eqn:expab\]) as in Corollary \[cor:sol\], and assume that $(s_{\tau})_{j} = 0$. Then we put $\check{n}:=n-1$, and for any $\overline{\iota}=(i,\iota) \in \mathcal{K}(\check{n}) \cong \{(i,\iota)
\in \mathcal{K}(n) \, | \, i \neq j\}$, choose $k(\overline{\iota})$ so that $i_1=\cdots =i_{k(\overline{\iota})-1} =j$ but $i_{k(\overline{\iota})} \neq j$. The new orbit data $\check{\tau}=(\check{n}, \check{\sigma}, \check{\kappa})$ is defined by $\check{\sigma}(\overline{\iota})
:=\sigma^{k(\overline{\iota})}(\overline{\iota})$ and $\check{\kappa}(\overline{\iota})
:=\sum_{k=0}^{k(\overline{\iota})-1} \kappa(\sigma^k(\overline{\iota}))$ for any $\overline{\iota} \in \mathcal{K}(\check{n})$. Then, since $v_{\sigma(\overline{\iota})}=\delta^{\kappa(\overline{\iota})-1}
\cdot v_{\overline{\iota}} + (\delta-1) \cdot (s_{\tau})_{i_1}$ and $(s_{\tau})_j=0$, we have $v_{\overline{\iota}_1^{\check{\sigma}}}
=\delta^{\check{\kappa}(\overline{\iota})-1}
\cdot v_{\overline{\iota}} + (\delta-1)
\cdot (s_{\tau})_{i_1^{\check{\sigma}}}$ for any $\overline{\iota} \in \mathcal{K}(\check{n})$, where $\overline{\iota}_m^{\check{\sigma}}=(i_m^{\check{\sigma}},
\iota_m^{\check{\sigma}}):=\check{\sigma}^m(\overline{\iota})$. Moreover, as $v$ satisfies (\[eqn:SB\]) with $d=\delta$ and with $s=s_{\tau}$, $(v_{\overline{\iota}})_{\overline{\iota} \in \mathcal{K}(\check{n})}$ satisfies (\[eqn:SB\]) with $n=\check{n}$, $d=\delta$ and $s=s_{\check{\tau}}=((s_{\tau})_1,\dots,(s_{\tau})_{j-1}, (s_{\tau})_{j+1},
\dots, (s_{\tau})_n) \neq 0$. Hence, we have $\mathcal{A}_{\check{\tau}}(\delta) \, s_{\check{\tau}}=0$ and $\delta=\lambda(w_{\check{\tau}})=\lambda(w_{\tau})$.
Therefore, either $(s_{\check{\tau}})_{\hat{j}} \neq 0$ for any $\hat{j}$, or we can repeat the above argument to eliminate $(s_{\check{\tau}})_{\hat{j}}=0$ from $s_{\check{\tau}}$. Since each step reduces $n$ by $1$, $\check{\tau}$ satisfies $(s_{\check{\tau}})_{\hat{j}} \neq 0$ for any $\hat{j}$ after finitely many steps. $\Box$
\[pro:det\] For any orbit data $\tau$ satisfying conditions $(1)$ and $(2)$ in Theorem \[thm:main3\], there is a real number $\delta$ with $2^n-1 < \delta < 2^n$ such that $\chi_{\tau}(\delta)=0$, and thus $\lambda(w_{\tau})=\delta>1$. Moreover, $\tau$ satisfies condition (\[eqn:TR\]).
The proof of this proposition is given in Section \[sec:proof\].
\[rem:inf2\] As is mentioned in Proposition \[pro:tenta\], the tentative realization $\overline{f}$ of $\tau$ with $\delta(\overline{f})=\lambda(w_{\tau})$ is unique. However, when $p_{\overline{\iota}}^- \approx p_{\overline{\iota}'}^-$ for some $\overline{\iota} \neq \overline{\iota}' \in
\mathcal{K}(n)_i=\{(i,1),(i,2),(i,3)\}$, there remains an ambiguity about how to label the indeterminacy points. Namely, either $p_{\overline{\iota}}^{\pm} < p_{\overline{\iota}'}^{\pm}$ or $p_{\overline{\iota}}^{\pm} > p_{\overline{\iota}'}^{\pm}$ holds. In this case, for a fixed $n$-tuple $(\prec_{i}) \in \mathcal{T}(\tau)$ of total orders (see Definition \[def:order\]), we choose the labeling so that if $p_{\overline{\iota}}^- \approx p_{\overline{\iota}'}^-$ and $\overline{\iota}' \prec_i \overline{\iota}$, then $p_{\overline{\iota}'}^- < p_{\overline{\iota}}^-$ and $p_{\overline{\iota}'}^+ < p_{\overline{\iota}}^+$.
Realizability {#sec:real}
=============
Under condition (\[eqn:TR\]), we study the tentative realization $\overline{f}$ of $\tau$ given in Proposition \[pro:tenta\], and show that $\overline{f}$ becomes a realization of $\tau$ when $\tau$ satisfies the condition $$\label{eqn:R}
\Gamma_{\tau}^{(2)} \cap P(\tau) = \emptyset,$$ where $\Gamma_{\tau}^{(2)}$ and $P(\tau)$ are given in (\[eqn:roots2\]) and (\[eqn:per\]), respectively. In the last part of this section, the main theorems of this paper are established.
First, we prove the following lemma.
\[lem:indA\] Assume that $\tau$ satisfies condition (\[eqn:TR\]). Then, for the tentative realization $\overline{f}$ of $\tau$ mentioned in Proposition \[pro:tenta\], the following hold.
1. $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}'}^-$ if and only if $\alpha_{\overline{\iota},\overline{\iota}'}^k \in
\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$, where $m=\theta_{i,i'}(k) \ge 0$.
2. $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}_1'}^+$ with $\mu(\overline{\iota}') \le m$ if and only if $\alpha_{\overline{\iota},\overline{\iota}'}^{k} \in
\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$, where $k$ is determined by $\theta_{i,i'}(k)+\mu(\overline{\iota}')= m$.
3. $p_{\overline{\iota}}^{m} \approx
p_{\overline{\iota}_1'}^+$ with $m \le \mu(\overline{\iota}')$ if and only if $\alpha_{\overline{\iota}',\overline{\iota}}^{k} \in
\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$, where $k$ is determined by $\theta_{i',i}(k)+ m=\mu(\overline{\iota}')$.
4. $p_{\overline{\iota}}^- \approx p_{\overline{\iota}'}^-$ if and only if $\alpha_{\overline{\iota},\overline{\iota}'}^0 \in
\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$ with $i=i'$.
5. $p_{\overline{\iota}_1}^+ \approx p_{\overline{\iota}_1'}^+$ with $\mu(\overline{\iota}) \ge \mu(\overline{\iota}')$ if and only if $\alpha_{\overline{\iota},\overline{\iota}'}^k \in
\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$, where $k$ is determined by $\theta_{i,i'}(k)+ \mu(\overline{\iota}')=\mu(\overline{\iota})$.
[*Proof*]{}. We only prove assertion (1) as the remaining statements can be treated in a similar manner. Assume that $\alpha_{\overline{\iota},\overline{\iota}'}^k \in P(\tau)$, which is equivalent to saying that $(\alpha_{\overline{\iota},\overline{\iota}'}^k,\overline{v}_{\delta})=0$ by Lemma \[lem:per\]. From (\[eqn:wexp\]), this means that $$\label{eqn:vv}
0=(\alpha_{\overline{\iota},\overline{\iota}'}^k,\overline{v}_{\delta})
=(e_{\overline{\iota}_{\tau}}^{k+1}-e_{\overline{\iota}_{\tau}'}^1,q_{i'+1}
\circ \cdots \circ q_n (\overline{v}_{\delta}))=\delta^{k} \cdot
v_{\overline{\iota}}-v_{\overline{\iota}'},$$ where the last equality follows from the fact that the coefficients of $e_{\overline{\iota}_{\tau}}^{k+1}$ and $e_{\overline{\iota}_{\tau}'}^1$ in $q_{i'+1} \circ \cdots \circ q_n (\overline{v}_{\delta})$ are $\delta^{k} \cdot v_{\overline{\iota}}$ and $v_{\overline{\iota}'}$ respectively, since $\theta_{i,i'}(k) \ge 0$. Thus we have $\check{p}_{\overline{\iota}'}^- \approx
f|_{C}^{k}(\check{p}_{\overline{\iota}}^-)$ and $$\begin{array}{rl}
p_{\overline{\iota}'}^- = & f_{i'+1}^{-1}|_{C} \circ \cdots \circ
f_{n}^{-1}|_{C} (\check{p}_{\overline{\iota}'}^-) \approx
f_{i'+1}^{-1}|_{C} \circ \cdots \circ f_{n}^{-1}|_{C}
( f|_{C}^k(\check{p}_{\overline{\iota}}^-)) \\[2mm]
= & f_{i'+1}^{-1}|_{C} \circ \cdots \circ f_{n}^{-1}|_{C} \circ
( f|_{C})^k \circ f_n|_C \circ \cdots \circ
f_{i+1}|_{C} (p_{\overline{\iota}}^-) =
p_{\overline{\iota}}^{\theta_{i,i'}(k)}.
\end{array}$$ Conversely, if $p_{\overline{\iota}'}^- \approx p_{\overline{\iota}}^{\theta_{i,i'}(k)}$, then it follows from the above arguments that $\alpha_{\overline{\iota},\overline{\iota}'}^k \in P(\tau)$. Therefore, assertion (1) of the lemma is established. $\Box$
In order to see whether the tentative realization $\overline{f}$ becomes a realization, we restate Lemma \[lem:real\] as follows.
\[pro:real\] Assume that $\overline{f}$ is a tentative realization of $\tau$. Then $\overline{f}$ is a realization of $\tau$ if and only if there is a total order $\prec$ on $\mathcal{K}(n)$ such that the following conditions hold:
1. If $i=i'$ and $p_{\overline{\iota}'}^- < p_{\overline{\iota}}^-$, then $\overline{\iota}' \precneqq \overline{\iota}$.
2. If $i_1=i_1'$ and $p_{\overline{\iota}_1'}^+ < p_{\overline{\iota}_1}^+$, then $\overline{\iota}' \precneqq \overline{\iota}$.
3. If $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}'}^-$ for $0 < m \le \mu(\overline{\iota})$, then $\overline{\iota}' \precneqq \overline{\iota}$.
4. If $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}_1'}^+$ for $0 \le m < \mu(\overline{\iota})$, then $\overline{\iota}' \precneqq \overline{\iota}$.
[*Proof*]{}. Assume that there is a total order $\prec$ on $\mathcal{K}(n)$ satisfying conditions (1)–(4). Under the notation of Lemma \[lem:real\], consider the sequence $\overline{\iota}^{(1)} \prec \cdots \prec \overline{\iota}^{(3n)}$, and assume that $(p_{\overline{\iota}^{(\ell)}}^-,p_{\sigma(\overline{\iota}^{(\ell)})}^+)$ is a proper pair of $\overline{g}_{\ell-1}=(g_{\ell-1,1},\dots,g_{\ell-1,n})$ with length $\mu(\overline{\iota}^{(\ell)})$ for any $\ell=1, \dots,j-1$. Then from (\[eqn:elim\]), we have $$I(g_{j-1,r}^{-1})=\{p_{\overline{\iota}}^- \, | \, \overline{\iota}^{(j)}
\prec \overline{\iota}, \, i=r \}, \qquad
I(g_{j-1,r})=\{p_{\sigma(\overline{\iota})}^+ \, | \, \overline{\iota}^{(j)}
\prec \overline{\iota}, \, i_1=r \}.$$ Thus, $p_{\overline{\iota}^{(j)}}^-$ and $p_{\sigma(\overline{\iota}^{(j)})}^+$ are proper points from conditions (1) and (2) respectively. Moreover, one has $p_{\overline{\iota}^{(j)}}^{m}
\approx \hspace{-.99em}/\hspace{.70em} p_{\overline{\iota}_1'}^+$ for any $0 \le m < \mu(\overline{\iota})$ and $\overline{\iota}' \succneqq \overline{\iota}^{(j)}$, and $p_{\overline{\iota}^{(j)}}^{m}
\approx \hspace{-.99em}/\hspace{.70em} p_{\overline{\iota}'}^-$ for any $0 < m \le \mu(\overline{\iota})$ and $\overline{\iota}' \succneqq \overline{\iota}^{(j)}$ from conditions (4) and (3), respectively. Since $p_{\overline{\iota}^{(j)}}^{\mu(\overline{\iota}^{(j)})} \approx
p_{\sigma(\overline{\iota}^{(j)})}^+$ and $p_{\overline{\iota}^{(j)}}^{\mu(\overline{\iota}^{(j)})}$ is also a proper point, we have $p_{\overline{\iota}^{(j)}}^{\mu(\overline{\iota}^{(j)})}
= p_{\sigma(\overline{\iota}^{(j)})}^+$. Therefore, $(p_{\overline{\iota}^{(j)}}^-,p_{\sigma(\overline{\iota}^{(j)})}^+)$ is a proper pair of $\overline{g}_{j-1}$ with length $\mu(\overline{\iota}^{(j)})$, and $\overline{f}$ is a realization of $\tau$ by Lemma \[lem:real\].
Similarly, if $\overline{f}$ is a realization of $\tau$, it is easy to see that the total order $\prec$ mentioned in Lemma \[lem:real\] satisfies conditions (1)–(4), and so the proof is complete. $\Box$
From the results mentioned above, we have the following three propositions, which also run parallel with Theorems \[thm:main1\]–\[thm:main3\] in a similar way to Propositions \[pro:tenta\], \[pro:nonzero\] and \[pro:det\].
\[pro:ind\] Let $\tau$ be an orbit data satisfying $\lambda(w_{\tau}) > 1$ and condition (\[eqn:TR\]), and $\overline{f}$ be the tentative realization mentioned in Proposition \[pro:tenta\]. Then, $\tau$ satisfies condition (\[eqn:R\]) if and only if $\overline{f}$ is a realization of $\tau$.
[*Proof*]{}. In view of Proposition \[pro:real\], it is enough to show that $\tau$ satisfies condition (\[eqn:R\]) if and only if there is a total order $\prec$ on $\mathcal{K}(n)$ satisfying conditions $(1)$–$(4)$ in Proposition \[pro:real\].
First, we assume that $\tau$ satisfies condition (\[eqn:R\]). For a fixed $(\prec_{i}) \in \mathcal{T}(\tau)$ (see Definition \[def:order\]), let $\widehat{P}(\tau;(\prec_{i}))$ be the set of elements $\alpha_{\overline{\iota},\overline{\iota}'}^k$ in $P(\tau)$ satisfying either $\theta_{i,i'}(k) = 0$ and $\overline{\iota}' \prec_i \overline{\iota}$ or $\theta_{i,i'}(k) > 0$. Moreover, we fix a total order $\prec$ on $\mathcal{K}(n)$ such that if $\alpha_{\overline{\iota},\overline{\iota}'}^k \in
\widehat{P}(\tau;(\prec_{i}))$, then $\overline{\iota}' \prec \overline{\iota}$. This total order is well-defined. To see this, we show that if $\alpha_{\overline{\iota}^{(\ell)},
\overline{\iota}^{(\ell+1)}}^{k_{\ell}} \in
\widehat{P}(\tau;(\prec_{i}))$ with $\overline{\iota}^{(1)}
=\overline{\iota}^{(j+1)}$ for some $j$, then $\overline{\iota}^{(1)}=\overline{\iota}^{(2)}=\cdots
= \overline{\iota}^{(j)}$. Indeed, since $\alpha_{\overline{\iota}^{(\ell)},\overline{\iota}^{(\ell+1)}}^{k_{\ell}}
\notin \Gamma_{\tau}^{(2)}$, one has $0 \le \theta_{i^{(\ell)},i^{(\ell+1)}}(k_{\ell}) +
\mu(\overline{\iota}^{(\ell+1)}) \le \mu(\overline{\iota}^{(\ell)})$, which yields $\sum_{\ell=1}^j \theta_{i^{(\ell)},i^{(\ell+1)}}(k_{\ell})=0$ and $\theta_{i^{(\ell)},i^{(\ell+1)}}(k_{\ell})=0$ for any $\ell$. As $\alpha_{\overline{\iota}^{(\ell)},
\overline{\iota}^{(\ell+1)}}^{k_{\ell}} \in
\widehat{P}(\tau;(\prec_{i}))$ with $\theta_{i^{(\ell)},i^{(\ell+1)}}(k_{\ell})=0$, we have $\overline{\iota}^{(\ell+1)} \prec_{i^{(\ell)}} \overline{\iota}^{(\ell)}$ for any $\ell$, and $\overline{\iota}^{(1)}=\overline{\iota}^{(2)}
=\cdots = \overline{\iota}^{(j)}$. Then it is easy to see from Lemma \[lem:indA\] that the total order $\prec$ satisfies conditions (1) and (3) in Proposition \[pro:real\] (see Remark \[rem:inf2\]). Thus, we need to prove that this total order satisfies the remaining conditions.
To prove condition (2), assume that $i_1=i_1'$ and $p_{\overline{\iota}_1'}^+<p_{\overline{\iota}_1}^+$. Then we have $p_{\overline{\iota}_1'}^-<p_{\overline{\iota}_1}^-$ and thus $\overline{\iota}_1' \prec_{i_1} \overline{\iota}_1$. It follows from assertion (4) of Lemma \[lem:indA\] that $\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$ contains either $\alpha_{\overline{\iota},\overline{\iota}'}^k$ with $\mu(\overline{\iota}')+\theta_{i,i'}(k) = \mu(\overline{\iota})$ and with $\theta_{i,i'}(k) \ge 0$, or $\alpha_{\overline{\iota}',\overline{\iota}}^k$ with $\mu(\overline{\iota})+\theta_{i',i}(k) = \mu(\overline{\iota}')$ and with $\theta_{i',i}(k) > 0$. However, the latter case does not occur, since $\alpha_{\overline{\iota}',\overline{\iota}}^k \in \Gamma_{\tau}^{(2)}$. In the former case, if $\theta_{i,i'}(k) > 0$, then one has $\overline{\iota}' \prec \overline{\iota}$. Similarly, if $\theta_{i,i'}(k) = 0$ then we have $\overline{\iota}' \prec_{i} \overline{\iota}$, and thus $\overline{\iota}' \prec \overline{\iota}$, since $\overline{\iota}_1' \prec_{i_1} \overline{\iota}_1$ and $\alpha_{\overline{\iota},\overline{\iota}'}^k \notin \Gamma_{\tau}^{(2)}$. Therefore, condition (2) is proved.
On the other hand, to prove condition (4), assume that $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}_1'}^+$ for $0 \le m < \mu(\overline{\iota})$. Then it follows from assertion (2) of Lemma \[lem:indA\] that $\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$ contains either $\alpha_{\overline{\iota},\overline{\iota}'}^{k}$ with $\theta_{i,i'}(k)+\mu(\overline{\iota}')=m$ and with $\theta_{i,i'}(k) \ge 0$, or $\alpha_{\overline{\iota}',\overline{\iota}}^{k}$ with $\theta_{i',i}(k)+ m= \mu(\overline{\iota}')$ and with $\theta_{i',i}(k)>0$. In the latter case, we have $\theta_{i',i}(k)+\mu(\overline{\iota}) >
\theta_{i',i}(k)+m=\mu(\overline{\iota}')$, and thus $\alpha_{\overline{\iota}',\overline{\iota}}^{k} \in \Gamma_{\tau}^{(2)}$, which is a contradiction. In the former case, one has $\theta_{i,i'}(k)+\mu(\overline{\iota}')=m< \mu(\overline{\iota})$. If $\theta_{i,i'}(k)>0$, $\overline{\iota}'$ and $\overline{\iota}$ satisfy $\overline{\iota}' \prec \overline{\iota}$. Similarly, if $\theta_{i,i'}(k)=0$, they also satisfy $\overline{\iota}' \prec \overline{\iota}$, since the inequality $\mu(\overline{\iota}')=m< \mu(\overline{\iota})$ and the assumption $\alpha_{\overline{\iota},\overline{\iota}'}^k \notin \Gamma_{\tau}^{(2)}$ together yield $\overline{\iota}' \prec_{i} \overline{\iota}$. Therefore, condition (4) is proved, and the total order $\prec$ satisfies conditions (1)–(4) in Proposition \[pro:real\].
Conversely, assume that there is a total order $\prec$ on $\mathcal{K}(n)$ satisfying conditions $(1)$–$(4)$, or $\overline{f}$ is a realization of $\tau$. We claim that there is an $n$-tuple $(\prec_{i}) \in \mathcal{T}(\tau)$ such that if $p_{\overline{\iota}'}^- < p_{\overline{\iota}}^-$, and thus $\overline{\iota}' \precneqq \overline{\iota}$, then $\overline{\iota}' \prec_{i} \overline{\iota}$. In order to prove the claim, it is enough to show that for any $p_{\overline{\iota}'}^- < p_{\overline{\iota}}^-$, either $\mu(\overline{\iota}') = \mu(\overline{\iota})$ and $\overline{\iota}_1' \prec \overline{\iota}_1$, or $\mu(\overline{\iota}') < \mu(\overline{\iota})$ holds. First, we assume the contrary that $\mu(\overline{\iota}') > \mu(\overline{\iota})$. As $p_{\overline{\iota}'}^- \approx p_{\overline{\iota}}^-$, we have $p_{\overline{\iota}'}^{\mu(\overline{\iota})}
\approx p_{\overline{\iota}_1}^+$ and hence $\overline{\iota} \precneqq \overline{\iota}'$ from condition $(4)$, which is a contradiction. On the other hand, if $\mu(\overline{\iota}') = \mu(\overline{\iota})$, then it follows that $p_{\overline{\iota}_1'}^+ \approx p_{\overline{\iota}_1}^+$. From the relation $\overline{\iota}' \precneqq \overline{\iota}$ and condition (2), one has $p_{\overline{\iota}_1'}^+ < p_{\overline{\iota}_1}^+$ and $p_{\overline{\iota}_1'}^- < p_{\overline{\iota}_1}^-$, which yields $\overline{\iota}_1' \prec \overline{\iota}_1$. Therefore, we establish the claim and, in what follows, fix $(\prec_{i}) \in \mathcal{T}(\tau)$ mentioned in the claim.
Next, we assume the contrary that there is a periodic root $\alpha_{\overline{\iota},\overline{\iota}'}^k
\in \Gamma_{\tau}^{(2)} \cap P(\tau)$, which means that $p_{\overline{\iota}}^m \approx p_{\overline{\iota}'}^-$ with $m=\theta_{i,i'}(k) \le \mu(\overline{\iota})$. Note that if $\theta_{i,i'}(k) > 0$ then the relation $\overline{\iota}' \precneqq \overline{\iota}$ holds from condition $(3)$, but if $\mu(\overline{\iota}) < \mu(\overline{\iota}')+ \theta_{i,i'}(k)$ then the relation $\overline{\iota} \precneqq \overline{\iota}'$ holds, since $p_{\overline{\iota}'}^{m'} \approx p_{\overline{\iota}_1}^+$ for $m'+\theta_{i,i'}(k)=\mu(\overline{\iota})$ and thus for $m' < \mu(\overline{\iota}')$. Now, we assume that $\theta_{i,i'}(k) > 0$ and so $\overline{\iota}' \precneqq \overline{\iota}$. Since $\alpha_{\overline{\iota},\overline{\iota}'}^k
\notin \check{\Gamma}_{\tau}^{(2)}$ and $\mu(\overline{\iota}) \ge \mu(\overline{\iota}')+ \theta_{i,i'}(k)$, one has $\mu(\overline{\iota}) = \mu(\overline{\iota}')+ \theta_{i,i'}(k)$ and $\overline{\iota}_1 \prec_{i} \overline{\iota}_1'$. This means that the relations $p_{\overline{\iota}_1}^-<p_{\overline{\iota}_1'}^-$ and $p_{\overline{\iota}_1}^+<p_{\overline{\iota}_1'}^+$ hold. Therefore, one has $\overline{\iota} \precneqq \overline{\iota}'$ from condition $(2)$, which is a contradiction. On the other hand, assume that $\theta_{i,i'}(k) = 0$ and $\mu(\overline{\iota}) < \mu(\overline{\iota}')$, which leads to $\overline{\iota}' \prec_i \overline{\iota}$ as $\alpha_{\overline{\iota},\overline{\iota}'}^k
\notin \check{\Gamma}_{\tau}^{(2)}$. By the first equality, one has $p_{\overline{\iota}}^- \approx p_{\overline{\iota}'}^-$, and by the second inequality and condition (3), one has $p_{\overline{\iota}'}^{\mu(\overline{\iota})}
\approx p_{\overline{\iota}_1}^+$ and $\overline{\iota} \precneqq \overline{\iota}'$, which is a contradiction. Finally, assume that $\theta_{i,i'}(k) = 0$, $\mu(\overline{\iota}) =\mu(\overline{\iota}')$ and $\overline{\iota}' \prec_i \overline{\iota}$. In this case, we have $p_{\overline{\iota}}^- \approx p_{\overline{\iota}'}^-$ and $p_{\overline{\iota}_1}^+ \approx p_{\overline{\iota}_1'}^+$. Since $\alpha_{\overline{\iota},\overline{\iota}'}^{k}
\notin \check{\Gamma}_{\tau}^{(2)}$, one has $\overline{\iota}_1 \prec_{i_1} \overline{\iota}_1'$, $p_{\overline{\iota}_1}^-<p_{\overline{\iota}_1'}^-$ and $p_{\overline{\iota}_1}^+<p_{\overline{\iota}_1'}^+$. This shows that $\overline{\iota} \precneqq \overline{\iota}'$ from condition (2), which is a contradiction. Summing up this discussion, $\tau$ satisfies condition (\[eqn:R\]) and the proposition is established. $\Box$
\[pro:vari\] Let $\tau$ be an orbit data satisfying $\lambda(w_{\tau}) > 1$ and condition (\[eqn:TR\]), and $\overline{f}$ be the tentative realization mentioned in Proposition \[pro:tenta\]. Then, there is an orbit data $\check{\tau}$ such that $\delta=\lambda(w_{\tau})=\lambda(w_{\check{\tau}})$ and $\overline{f}$ is a realization of $\check{\tau}$. In particular, $\check{\tau}$ satisfies condition (\[eqn:R\]).
[*Proof*]{}. Under the notation of Lemma \[lem:real\], assume that there is a sequence $\overline{\iota}^{(1)} \prec \overline{\iota}^{(2)}
\prec \cdots \prec \overline{\iota}^{(j)} \in \mathcal{K}(n)$ such that $(p_{\overline{\iota}^{(\ell)}}^-,p_{\sigma(\overline{\iota}^{(\ell)})}^+)$ is a proper pair of $\overline{g}_{\ell-1}$ with length $\mu(\overline{\iota}^{(\ell)})$ for any $\ell=1, \dots,j$, and $(p_{\overline{\iota}}^-,p_{\sigma(\overline{\iota})}^+)$ are not proper pairs of $\overline{g}_{j}$ with length $\mu(\overline{\iota})$ for any $\overline{\iota} \in \mathcal{K}(\overline{g}_{j}) = \mathcal{K}(n)
\setminus \{\overline{\iota}^{(1)} \cdots \overline{\iota}^{(j)} \}$. If $j=3n$, then $\overline{f}$ is a realization of $\tau$ and the proposition is already proved. Otherwise, there is a pair $(\overline{\iota}',\overline{\iota}'')
\in \mathcal{K}(\overline{g}_{j}) \times
\mathcal{K}(\overline{g}_{j})$ such that
1. $p_{\overline{\iota}''}^{m'} \approx p_{\overline{\iota}'}^-$ with $(\overline{\iota}',\overline{\iota}'',m') \neq
(\overline{\iota}',\overline{\iota}',0)$ and $\mu(\overline{\iota}'')- m'
= \min \big\{ \mu(\overline{\iota})- m \, \big| \,
p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}'''}^-, \,
\overline{\iota},\overline{\iota}''' \in \mathcal{K}(\overline{g}_{j}), \,
(\overline{\iota},\overline{\iota}''',m)
\neq (\overline{\iota},\overline{\iota},0) \big\}$,
2. if $p_{\overline{\iota}''}^{m} \approx p_{\overline{\iota}}^-$ satisfies $\mu(\overline{\iota}'')- m'=
\mu(\overline{\iota}'')- m$ and thus $i'=i$, then $p_{\overline{\iota}'}^- < p_{\overline{\iota}}^-$,

3. if $p_{\overline{\iota}}^{m} \approx p_{\overline{\iota}'}^-$ satisfies $\mu(\overline{\iota}'')- m'=
\mu(\overline{\iota})- m$ and thus $i_1''=i_1$, then $p_{\overline{\iota}_1''}^+ < p_{\overline{\iota}_1}^+$.
Let $(v,s_{\tau}) \in (\mathbb{C}^{3n} \setminus \{0\})
\times (\mathbb{C}^n \setminus \{0\})$ be the unique solution of (\[eqn:SB\]) and (\[eqn:expab\]) as in Corollary \[cor:sol\], and denote by $u_{\overline{\iota}}$ the value given in (\[eqn:uv\]) with $d=\delta$ and $s_j=(s_{\tau})_j$.
If $\overline{\iota}' \neq \overline{\iota}''$, then put $\check{\tau}=(n,\check{\sigma},\check{\mu})$, where $\check{\sigma} : \mathcal{K}(n) \to \mathcal{K}(n)$ and $\check{\mu} : \mathcal{K}(n) \to \mathbb{Z}_{\ge 0}$ are given by $$\check{\sigma}(\overline{\iota}):=
\left\{
\begin{array}{ll}
\sigma(\overline{\iota}'') & (\overline{\iota}=\overline{\iota}') \\[2mm]
\sigma(\overline{\iota}') & (\overline{\iota}=\overline{\iota}'') \\[2mm]
\sigma(\overline{\iota}) & (\text{otherwise}), \\[2mm]
\end{array}
\right. \quad
\check{\mu}(\overline{\iota}):=
\left\{
\begin{array}{ll}
\mu(\overline{\iota}'')-m' & (\overline{\iota}=\overline{\iota}') \\[2mm]
\mu(\overline{\iota}')+m' & (\overline{\iota}=\overline{\iota}'') \\[2mm]
\mu(\overline{\iota}) & (\text{otherwise}). \\[2mm]
\end{array}
\right.$$ Since $p_{\overline{\iota}''}^{m'} \approx p_{\overline{\iota}'}^-$, one has $p_{\overline{\iota}''}^{\check{\mu}(\overline{\iota}'')} =
p_{\overline{\iota}''}^{\mu(\overline{\iota}')+m'} \approx
p_{\overline{\iota}'}^{\mu(\overline{\iota}')}
\approx p_{\sigma(\overline{\iota}')}^+
=p_{\check{\sigma}(\overline{\iota}'')}^+$ and $p_{\overline{\iota}'}^{\check{\mu}(\overline{\iota}')} =
p_{\overline{\iota}'}^{\mu(\overline{\iota}'')-m'} \approx
p_{\overline{\iota}''}^{\mu(\overline{\iota}'')}
\approx p_{\sigma(\overline{\iota}'')}^+
=p_{\check{\sigma}(\overline{\iota}')}^+$, which yield $\delta^{\check{\kappa}(\overline{\iota}')-1} \cdot v_{\overline{\iota}'}=
u_{\check{\sigma}(\overline{\iota}')}$ and $\delta^{\check{\kappa}(\overline{\iota}'')-1} \cdot
v_{\overline{\iota}''}=u_{\check{\sigma}(\overline{\iota}'')}$. For $\overline{\iota} \neq \overline{\iota}',\overline{\iota}''$, the equation $\delta^{\kappa(\overline{\iota})-1} \cdot v_{\overline{\iota}}
= u_{\sigma(\overline{\iota})}$ leads to $\delta^{\check{\kappa}(\overline{\iota})-1} \cdot v_{\overline{\iota}}
=u_{\check{\sigma}(\overline{\iota})}$. This means that $v_{\overline{\iota}_1^{\check{\sigma}}}
= \delta^{\check{\kappa}(\overline{\iota})}
\cdot v_{\overline{\iota}} +(\delta-1)
\cdot (s_{\tau})_{i_1^{\check{\sigma}}}$ for any $\overline{\iota} \in \mathcal{K}(n)$, where $\overline{\iota}_k^{\check{\sigma}}
=(i_k^{\check{\sigma}},\iota_k^{\check{\sigma}})
:=\check{\sigma}^k(\overline{\iota})$. As $(\delta,v,s_{\tau})$ satisfies (\[eqn:SB\]), we have $\mathcal{A}_{\check{\tau}}(\delta) \, s_{\tau}=0$, and thus $\delta=\lambda(w_{\tau})=\lambda(w_{\check{\tau}})$. Moreover, $(p_{\overline{\iota}'}^-,p_{\check{\sigma}(\overline{\iota}')}^+)$ is a proper pair of $\overline{g}_{j}$ with length $\check{\mu}(\overline{\iota}')$. Indeed, from conditions $(2)$ and $(3)$, $p_{\overline{\iota}'}^-$ and $p_{\check{\sigma}(\overline{\iota}')}^+$ are proper points, and from the minimality of the number $\mu(\overline{\iota}'')- m'$, $p_{\overline{\iota}'}^{m}$ satisfies $p_{\overline{\iota}'}^{m}
\approx \hspace{-.99em}/\hspace{.70em} p_{\overline{\iota}}^+$ for any $0 \le m < \mu(\overline{\iota}')$ and $\overline{\iota} \in \mathcal{K}(\overline{g}_{j})$, and also satisfies $p_{\overline{\iota}'}^{m}
\approx \hspace{-.99em}/\hspace{.70em} p_{\overline{\iota}}^-$ for any $0 < m \le \mu(\overline{\iota}')$ and $\overline{\iota} \in \mathcal{K}(\overline{g}_{j})$. Furthermore, $(p_{\overline{\iota}^{(\ell)}}^-,
p_{\check{\sigma}(\overline{\iota}^{(\ell)})}^+)$ remains a proper pair of $\overline{g}_{\ell-1}$ with length $\check{\mu}(\overline{\iota}^{(\ell)})$ for $\ell=1, \dots,j$, since $\check{\sigma}(\overline{\iota}^{(\ell)})=\sigma(\overline{\iota}^{(\ell)})$, $\check{\mu}(\overline{\iota}^{(\ell)})=\mu(\overline{\iota}^{(\ell)})$, and the indeterminacy points of $\overline{g}_{\ell-1}$ are invariant under the change of orbit data.
On the other hand, if $\overline{\iota}''=\overline{\iota}'$ and $m' > 0$, then the new orbit data $\check{\tau}=(n,\sigma,\check{\mu})$ is defined by $\check{\mu}(\overline{\iota}'):=\mu(\overline{\iota}')-m'$, and $\check{\mu}(\overline{\iota}):=\mu(\overline{\iota})$ if $\overline{\iota} \neq \overline{\iota}'$. Note that for $m'>0$, the relation $p_{\overline{\iota}'}^{m'}=p_{\overline{\iota}'}^-$ yields $\delta^{k'} \cdot v_{\overline{\iota}'}=v_{\overline{\iota}'}$ for some $k' > 0$. Since $\delta$ is not a root of unity, we have $v_{\overline{\iota}'}=0$. Therefore, $v$ satisfies (\[eqn:SB\]) and (\[eqn:expab\]) with $d=\delta$ and with the orbit data $\check{\tau}$. This means that $\delta=\lambda(w_{\tau})=\lambda(w_{\check{\tau}})$. Similarly, $(p_{\overline{\iota}'}^-,p_{\check{\sigma}(\overline{\iota}')}^+)$ is a proper pair of $\overline{g}_{j}$ with length $\check{\mu}(\overline{\iota}')$, and $(p_{\overline{\iota}^{(\ell)}}^-,
p_{\check{\sigma}(\overline{\iota}^{(\ell)})}^+)$ remains a proper pair of $\overline{g}_{\ell-1}$ with length $\check{\mu}(\overline{\iota}^{(\ell)})$ for $\ell=1, \dots, j$.
Thus, either $\overline{f}$ is a realization of $\check{\tau}$, or we can repeat the above argument to construct a realization. When $\check{\tau}$ admits the realization $\overline{f}$, it follows from Proposition \[pro:ind\] that $\check{\tau}$ satisfies condition (\[eqn:R\]), and so the proposition is established. $\Box$
\[pro:cri\] Let $\tau$ be an orbit data satisfying conditions $(1)$ and $(2)$ in Theorem \[thm:main3\]. Then we have $\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau) = \{
\alpha_{\overline{\iota},\overline{\iota}'}^0 \, | \,
i_m=i_m', \, \kappa(\overline{\iota}_m)=\kappa(\overline{\iota}_m'), \,
m \ge 0 \}$. In addition, if $\tau$ satisfies condition $(3)$ in Theorem \[thm:main3\], then it also satisfies condition (\[eqn:R\]).
The proof of this proposition is given in Section \[sec:proof\]. We are now in a position to establish the main theorems.\
[*Proofs of Theorems \[thm:main1\]–\[thm:main3\]*]{}. Theorem \[thm:main1\] is an immediate consequence of Propositions \[pro:auto\], \[pro:tenta\] and \[pro:ind\]. We notice that the points $\{ p_{\overline{\iota}}^m \, | \, \overline{\iota} \in \mathcal{K}(n),
m=\theta_{i,0}(k), 1 \le k \le \kappa(\overline{\iota}) \}$, which are blown up by $\pi_{\tau}$, lie on $C^*$. Moreover, Theorem \[thm:main2\] follows from Propositions \[pro:nonzero\] and \[pro:vari\], and Theorem \[thm:main3\] follows from Propositions \[pro:det\] and \[pro:cri\]. $\Box$\
[*Proof of Corollary \[cor:main\]*]{}. For any value $\lambda \neq 1 \in \Lambda$, Theorem \[thm:main2\] and Proposition \[pro:iden\] show that there is an orbit data $\tau$ such that $\lambda=\lambda(w_{\tau})$ and $\tau$ satisfies the realizability condition (\[eqn:condi\]). In particular, the automorphism $F_{\tau}$ mentioned in Theorem \[thm:main1\] has entropy $h_{\mathrm{top}}(F_{\tau})= \log \lambda >0$. Note that when $\lambda=1 \in \Lambda$, the automorphism $\mathrm{id}_{\mathbb{P}^2} : \mathbb{P}^2 \to \mathbb{P}^2$ satisfies $\lambda(\mathrm{id}_{\mathbb{P}^2}^*)=\lambda=1$ and $h_{\mathrm{top}}(\mathrm{id}_{\mathbb{P}^2})=0$. On the other hand, from Proposition \[pro:expent\], the entropy of any automorphism $F: X \to X$ is given by $h_{\mathrm{top}}(F)= \log \lambda$ for some $\lambda \in \Lambda$. Therefore, Corollary \[cor:main\] is proved. $\Box$
Proof of Realizability with Estimates {#sec:proof}
=====================================
As is seen in Section \[sec:real\], Propositions \[pro:det\] and \[pro:cri\] prove Theorem \[thm:main3\], or the realizability of orbit data. In this section, we prove these propositions by applying some estimates mentioned below. To this end, we fix an orbit data $\tau$ satisfying conditions $(1)$ and $(2)$ in Theorem \[thm:main3\].
If $d > 1$, then for any $j \in \{1,\dots , n\}$ and $\overline{\iota} \in \mathcal{K}(n)$, we have $$-\frac{1}{d^2+d+1} \le \overline{c}_{\overline{\iota},j}(d) \le 0,$$ where $\overline{c}_{\overline{\iota},j}(d)$ is given by (\[eqn:c1\]).
[*Proof*]{}. In view of equation (\[eqn:expre\]), $\overline{c}_{\overline{\iota},j}(d)$ may be expressed as either $\overline{c}_{\overline{\iota},j}(d)=0$, or $\overline{c}_{\overline{\iota},j}(d)=- (d-1) \cdot
d^{\eta_1}/(d^{\eta}-1)$ with $\eta_1+3 \le \eta$, or $\overline{c}_{\overline{\iota},j}(d)=-(d-1) \cdot
(d^{\eta_1}+d^{\eta_2}) /(d^{\eta}-1)$ with $\eta_1+3 \le \eta_2$ and $\eta_2+3 \le \eta$, or $\overline{c}_{\overline{\iota},j}(d)=-(d-1) \cdot
(d^{\eta_1}+d^{\eta_2}+d^{\eta_3})/(d^{\eta}-1)$ with $\eta_1+3 \le \eta_2$, $\eta_2+3 \le \eta_3$ and $\eta_3+3 \le \eta$, since $\# \{ m \, | \, i_m=j \} \le \#\{(j,1), (j,2), (j,3)\}=3$. We only consider the case $\overline{c}_{\overline{\iota},j}(d)=-(d-1) \cdot
(d^{\eta_1}+d^{\eta_2}) /(d^{\eta}-1)$ as the remaining cases can be treated in the same manner. Since $d > 1$, the inequality $\overline{c}_{\overline{\iota},j}(d) < 0$ is trivial. Moreover, one has $$\frac{\overline{c}_{\overline{\iota},j}(d)}{d-1}=- \frac{d^{\eta_1}+
d^{\eta_2}}{d^{\eta}-1}
\ge -\frac{d^{\eta_2-3}+d^{\eta_2}}{d^{\eta}-1}
\ge -\frac{d^{\eta-6}+d^{\eta-3}}{d^{\eta}-1}
= -\Big(1+\frac{1}{d^{\eta}-1}\Big) (d^{-6}+d^{-3})
\ge -\frac{1}{d^3-1},$$ where the last inequality follows from $\eta \ge \eta_2+3 \ge \eta_1+6 \ge 6$. Thus the lemma is established. $\Box$
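The three shapes of $\overline{c}_{\overline{\iota},j}(d)$ listed above also make the estimate easy to test numerically. The following sketch (an illustration only, not part of the proof; the grid of values of $d$ and the exponent ranges are chosen arbitrarily) checks $-1/(d^2+d+1) \le \overline{c}_{\overline{\iota},j}(d) \le 0$ for each of the three shapes, with consecutive exponents (including $\eta$) separated by at least $3$:

```python
# Numerical spot-check of -1/(d^2+d+1) <= c_bar <= 0 for the three shapes of
# c_bar appearing in the proof; illustration only, not part of the argument.
from itertools import product

def c_bar(d, etas, eta):
    # c_bar(d) = -(d-1) * (d^{eta_1} + ... + d^{eta_r}) / (d^{eta} - 1)
    return -(d - 1) * sum(d**e for e in etas) / (d**eta - 1)

violations = []
for d in (1.1, 1.5, 2.0, 3.0, 7.0, 15.0):
    lower = -1.0 / (d**2 + d + 1)
    for e1, g1, g2, g3 in product(range(0, 4), range(3, 7), range(3, 7), range(3, 7)):
        e2, e3 = e1 + g1, e1 + g1 + g2
        cases = [([e1], e2),                # one term,    eta_1 + 3 <= eta
                 ([e1, e2], e2 + g2),       # two terms,   eta_2 + 3 <= eta
                 ([e1, e2, e3], e3 + g3)]   # three terms, eta_3 + 3 <= eta
        for etas, eta in cases:
            val = c_bar(d, etas, eta)
            if not (lower - 1e-12 <= val <= 0.0):
                violations.append((d, etas, eta, val))

print("violations:", violations)   # expected: []
```

The lower bound is attained in the one-term case with $\eta_1=0$ and $\eta=3$, where $\overline{c}_{\overline{\iota},j}(d)=-(d-1)/(d^3-1)=-1/(d^2+d+1)$ exactly; the small tolerance in the comparison only guards against floating-point rounding in that boundary case.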
Since $c_{i,j}(d)= - \sum_{\iota=1}^3 \overline{c}_{(i,\iota),j}(d)$ from (\[eqn:c2\]), the above lemma leads to the inequality $$0 \le c_{i,j}(d) \le \gamma_d, \qquad \gamma_d:=\frac{3}{1+d+d^2}.$$ Note that for any $d \ge 2$ and any $0 \le x_{i,j} \le \gamma_d$, each diagonal entry $\mathcal{A}_n(d,x)_{i,i}$ of $\mathcal{A}_n(d,x)$ is positive and each non-diagonal entry $\mathcal{A}_n(d,x)_{i,j}$ with $i \neq j$ is negative, where $\mathcal{A}_n(d,x)$ is the matrix given in (\[eqn:matA\]). Let $\overline{\mathcal{A}}_n(d,x)_{i,j}$ be the $(i,j)$-cofactor of the matrix $\mathcal{A}_n(d,x)$. Then, the relation $|\mathcal{A}_n(d,x)|=\sum_{i=1}^n \overline{\mathcal{A}}_n(d,x)_{i,j} \cdot
\mathcal{A}_n(d,x)_{i,j}$ holds for any $j=1,\dots,n$, where $|\mathcal{A}_n(d,x)|$ is the determinant of the matrix $\mathcal{A}_n(d,x)$.
\[lem:esti1\] For any $n \ge 2$, the following inequalities hold: $$\left\{
\begin{array}{ll}
\overline{\mathcal{A}}_n(d,x)_{i,j} > 0, &
(d > 2^n-1, \, 0 \le x_{i,j} \le \gamma_d) \\[2mm]
|\mathcal{A}_n(d,x)| > 0, &
(d > 2^n, \, 0 \le x_{i,j} \le \gamma_d) \\[2mm]
|\mathcal{A}_n(2^n-1,x)| < 0, &
(0 \le x_{i,j} \le \gamma_d).
\end{array}
\right.$$
[*Proof*]{}. We prove the inequalities by induction on $n$. For $n=2$, the first inequality holds since $$\overline{\mathcal{A}}_2(d,x)_{i,j}=
\left\{
\begin{array}{ll}
- \mathcal{A}_2(d,x)_{j,i} > 0 & (i \neq j) \\[2mm]
\mathcal{A}_2(d,x)_{i+1,i+1} > 0 & (i = j \in \mathbb{Z}/2\mathbb{Z} ) .
\end{array}
\right.$$ As $\gamma_d \le \frac{3}{13}$ when $d \ge 3$, the remaining inequalities follow from the estimates $$\left\{
\begin{array}{l}
|\mathcal{A}_2(3,x)|=(1+x_{1,1}) (1+x_{2,2}) -
(1-x_{2,1}) (3 - x_{1,2}) < (1+\frac{3}{13})^2 -
(1- \frac{3}{13}) (3 - \frac{3}{13}) < 0 \\[2mm]
|\mathcal{A}_2(d,x)|=(d-2+x_{1,1}) (d-2+x_{2,2}) -
(1-x_{2,1}) (d - x_{1,2}) > 2^2 - 1 \cdot 4 = 0.
\end{array}
\right.$$ Therefore, the lemma is proved when $n=2$. Assume that the inequalities hold when $n=l-1$. A little calculation shows that $\overline{\mathcal{A}}_{i,j} := \overline{\mathcal{A}}_l(d,x)_{i,j}$ can be expressed as $$\overline{\mathcal{A}}_{i,j}=
\left\{
\begin{array}{ll}
\displaystyle
- \Bigl\{ \sum_{k=1}^{i-1} \overline{\mathcal{A}}_{l-1}(d,x^i)_{k,j-1} \cdot
\mathcal{A}_{l}(d,x)_{k,i}
+ \sum_{k=i+1}^{l} \overline{\mathcal{A}}_{l-1}(d,x^i)_{k-1,j-1} \cdot
\mathcal{A}_{l}(d,x)_{k,i} \Bigr\} & (i < j) \\[2mm]
\displaystyle
- \Bigl\{ \sum_{k=1}^{i-1} \overline{\mathcal{A}}_{l-1}(d,x^i)_{k,j} \cdot
\mathcal{A}_{l}(d,x)_{k,i}
+ \sum_{k=i+1}^{l} \overline{\mathcal{A}}_{l-1}(d,x^i)_{k-1,j} \cdot
\mathcal{A}_{l}(d,x)_{k,i} \Bigr\} & (i > j) \\[8mm]
~~~ |\mathcal{A}_{l-1}(d,x^i)| & (i=j),
\end{array}
\right.$$ where $x^i$ is the $(l-1,l-1)$-matrix obtained from $x$ by removing the $i$-th row and column vectors. Therefore, the first assertion follows from the induction hypothesis. Moreover, since $|\mathcal{A}_l(d,x)|=\sum_{i=1}^l \overline{\mathcal{A}}_l(d,x)_{i,j}
\cdot \mathcal{A}_l(d,x)_{i,j}$, the bounds $$|\mathcal{A}_l(d,(x_1,\dots,x_{j-1},0,x_{j+1},\dots,x_l))|
\le |\mathcal{A}_l(d,x)| \le
|\mathcal{A}_l(d,(x_1,\dots,x_{j-1},\overline{\gamma}_d,x_{j+1},\dots,x_l))|$$ hold for any $j$, where $\overline{\gamma}_d$ is the column vector having each component equal to $\gamma_d$. Thus, we have $$~~~~~~~~~~~~~~~~~~~~~~~~~~~
\begin{array}{l}
|\mathcal{A}_l(d,x)| \ge |\mathcal{A}_l(d,(0,\dots,0))| =
(d-1)^{l-1} (d-2^l) > 0,
\qquad (d > 2^l), \\[2mm]
\displaystyle |\mathcal{A}_l(2^l-1,x)| \le
|\mathcal{A}_l(2^l-1,(\overline{\gamma}_{2^l-1},\dots,
\overline{\gamma}_{2^l-1}))| = - \frac{(2^l-2)^{l+1}}{2^{2l}-2^l+1} < 0,
\end{array}
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ which show that the assertions are verified when $n=l$. Therefore, the induction is complete, and the lemma is established. $\Box$
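The two closed-form determinant values appearing in the last display can be confirmed independently for small $n$. The following sketch (exact rational arithmetic, an illustration only; the range of $n$ and the sample values of $d$ are chosen arbitrarily) verifies $|\mathcal{A}_n(d,(0,\dots,0))|=(d-1)^{n-1}(d-2^n)$ and the stated value of $|\mathcal{A}_n(2^n-1,(\overline{\gamma}_{2^n-1},\dots,\overline{\gamma}_{2^n-1}))|$, with the entries of $\mathcal{A}_n(d,x)$ taken to be $d-2+x_{i,i}$ on the diagonal, $-1+x_{i,j}$ below it and $-d+x_{i,j}$ above it, as in the case $n=2$ treated above:

```python
# Exact spot-check (illustration only) of the determinant identities used above.
from fractions import Fraction

def det(M):
    # determinant by Laplace expansion along the first row (fine for small n)
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def A(n, d, x):
    # A_n(d, x): d-2+x on the diagonal, -1+x below it, -d+x above it
    return [[(d - 2 if i == j else (-1 if i > j else -d)) + x[i][j]
             for j in range(n)] for i in range(n)]

for n in range(2, 6):
    zero = [[Fraction(0)] * n for _ in range(n)]
    for d in (Fraction(5, 2), Fraction(2**n - 1), Fraction(2**n + 3)):
        assert det(A(n, d, zero)) == (d - 1) ** (n - 1) * (d - 2**n)
    d = Fraction(2**n - 1)
    gam = Fraction(3) / (1 + d + d * d)
    full = [[gam] * n for _ in range(n)]
    assert det(A(n, d, full)) == Fraction(-(2**n - 2) ** (n + 1), 2 ** (2 * n) - 2**n + 1)
print("determinant identities verified for n = 2, ..., 5")
```

Exact rational arithmetic avoids any floating-point ambiguity in the comparison with the closed-form values.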
Let $s_{\tau}$ be the unique solution of equation (\[eqn:mat\]) with $d=\delta$.
\[lem:esti2\] For any $1 \le i \le n-1$, the ratio $(s_{\tau})_{i+1}/(s_{\tau})_{i}$ satisfies $$z_1(n) < \frac{(s_{\tau})_{i+1}}{(s_{\tau})_{i}} < z_2(n),$$ where $$~~~~~~~~~~~~~~~~~~~~~~~~~~~
z_1(n) := \frac{2^{n-1} (2^n+2)}{2^{2 n} + 2^{n+1}+6}, \quad
z_2(n) := \frac{2^{2 n-1} + 2^{n}+3}{2^{2 n} + 2^{n+1}+3}.
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$
[*Proof*]{}. For each $k_1, k_2 \ge 0$ with $k_1 + k_2 \le n-2$, let $\mathcal{A}^{k_1, k_2}_n(\delta)$ be the $n \times n$ matrix defined inductively as follows. First, put $\mathcal{A}_n^{0, 0}(\delta):=\mathcal{A}_{\tau}(\delta)$. Next, let $\mathcal{A}_n^{k_1, 0}(\delta)$ be the matrix obtained from $\mathcal{A}_n^{k_1-1, 0}(\delta)$ by replacing the $i$-th row of $\mathcal{A}_n^{k_1-1, 0}(\delta)$ with the sum of the $i$-th row and the $k_1$-th row multiplied by $-\mathcal{A}_n^{k_1-1, 0}(\delta)_{i,k_1} /
\mathcal{A}_n^{k_1-1, 0}(\delta)_{k_1,k_1}$, where $i$ runs from $k_1+1$ to $n$. Finally, let $\mathcal{A}_{n}^{k_1, k_2}(\delta)$ be the matrix obtained from $\mathcal{A}_n^{k_1, k_2-1}(\delta)$ by replacing the $i$-th row of $\mathcal{A}_n^{k_1, k_2-1}(\delta)$ with the sum of the $i$-th row and the $(n-k_2+1)$-th row multiplied by $-\mathcal{A}_n^{k_1,k_2-1}(\delta)_{i,n-k_2+1} /
\mathcal{A}_n^{k_1,k_2-1}(\delta)_{n-k_2+1,n-k_2+1}$, where $i$ runs from $k_1+1$ to $n-k_2$. Therefore, each entry of $\mathcal{A}^{k_1, k_2}_n(\delta)$ may be expressed as $$\mathcal{A}^{k_1, k_2}_n(\delta)_{i,j}=
\left\{
\begin{array}{ll}
\delta_{i,j} + \xi_{i,j}^{i-1} & (i \le k_1 \text{~and~} i \le j) \\[2mm]
\delta_{i,j} + \xi_{i,j}^{k_1+k_2} & (k_1+1 \le i, j \le n-k_2) \\[2mm]
\delta_{i,j} + \xi_{i,j}^{k_1+n-i} & (n-k_2+1 \le i \text{~and~}
k_1+1 \le j \le i) \\[2mm]
0 & (\text{otherwise}),
\end{array}
\right.$$ where $$~~~~~~~~~~~~~~~~~~~~~~~~~~~
\delta_{i,j} =
\left\{
\begin{array}{ll}
\delta-2 & (i=j) \\[2mm]
-1 & (i>j) \\[2mm]
-\delta & (i<j),
\end{array}
\right.
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ and $\xi_{i,j}^k$ is given inductively by $$\xi_{i,j}^{0}=c_{i,j}(\delta), \quad
\xi_{i,j}^{k+1}=
\left\{
\begin{array}{ll}
\displaystyle \xi_{i,j}^{k}-\frac{(1-\xi_{i,k}^k)
(\delta - \xi_{k,j}^k)}{\delta-2 + \xi_{k,k}^{k}} & (k < k_1) \\[2mm]
\displaystyle \xi_{i,j}^{k}-\frac{(\delta - \xi_{i,n-k+k_1}^k)
(1 - \xi_{n-k+k_1,j}^k)}
{\delta-2 + \xi_{n-k+k_1,n-k+k_1}^{k}} & (k \ge k_1).
\end{array}
\right.$$ Moreover, it is seen that $\xi_{i,j}^{k}$ satisfies the estimates $$-\frac{(2^k-1) \delta}{\delta -2^k} \le \xi_{i,j}^{k} \le
-\overline{\xi}_k, \qquad
\overline{\xi}_k:=\frac{(2^k-1) \delta - (2^k \delta -1) \gamma_{\delta}}
{(\delta-2^k)+(2^k-1) \gamma_{\delta}}.$$ Note that $s_{\tau}$ satisfies $\mathcal{A}^{k_1, k_2}_n(\delta) \, s_{\tau}=0$ for any $k_1, k_2 \ge 0$. In particular, one has $\mathcal{A}^{i-1, n-i-1}_n(\delta) \, s_{\tau}=0$, the $i$-th and $(i+1)$-th components of which are given by $$\left\{
\begin{array}{l}
(\delta -2 + \xi_{i,i}^{n-2}) (s_{\tau})_i +
(-\delta + \xi_{i,i+1}^{n-2}) (s_{\tau})_{i+1}=0, \\[2mm]
(-1 + \xi_{i+1,i}^{n-2}) (s_{\tau})_i +
(\delta -2 + \xi_{i+1,i+1}^{n-2}) (s_{\tau})_{i+1}=0.
\end{array}
\right.$$ Therefore, we have $$\frac{(s_{\tau})_{i+1}}{(s_{\tau})_i} =
\frac{\delta -2 + \xi_{i,i}^{n-2}}{\delta - \xi_{i,i+1}^{n-2}} <
\frac{\delta-2-\overline{\xi}_{n-2}}{\delta+\overline{\xi}_{n-2}} =
\frac{2 \delta^2 - (2^n-4) \delta - (2^{n+1}-6)}{2 (\delta^2 +2 \delta +3)},$$ the righthand side of which is monotone increasing with respect to $\delta$, and thus is less than $z_2(n)$ since $\delta < 2^n$. In a similar manner, we have $$\frac{(s_{\tau})_{i+1}}{(s_{\tau})_i} =
\frac{1 - \xi_{i+1,i}^{n-2}}{\delta -2 + \xi_{i+1,i+1}^{n-2}} >
\frac{1+\overline{\xi}_{n-2}}{\delta-2-\overline{\xi}_{n-2}} =
\frac{2^{n-2} (1-\gamma_{\delta})}{\delta-2^{n-1} + (2^{n-1}-1)
\gamma_{\delta}} > z_1(n).$$ Thus, the lemma is established. $\Box$
We remark that the functions $z_1(n)$ and $z_2(n)$ satisfy $$0 < z_1(n) < \frac{1}{2} < z_2(n) < 1.$$\
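Both bounds in this remark can be verified exactly; a short check (illustration only, over an arbitrary range of $n$) is:

```python
# Check 0 < z1(n) < 1/2 < z2(n) < 1 with exact arithmetic (illustration only).
from fractions import Fraction as F

def z1(n): return F(2 ** (n - 1) * (2**n + 2), 2 ** (2 * n) + 2 ** (n + 1) + 6)
def z2(n): return F(2 ** (2 * n - 1) + 2**n + 3, 2 ** (2 * n) + 2 ** (n + 1) + 3)

assert all(0 < z1(n) < F(1, 2) < z2(n) < 1 for n in range(2, 100))
print(float(z1(3)), float(z2(3)))   # roughly 0.4651 and 0.5181
```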
[*Proof of Proposition \[pro:det\]*]{}. Recall that $\chi_{\tau}(d)=|\mathcal{A}_{\tau}(d)|$. From Lemma \[lem:esti1\], one has $\chi_{\tau}(2^n-1) < 0$ and $\chi_{\tau}(2^n) > 0$. Therefore, there is a real number $\delta$ such that $\chi_{\tau}(\delta) = 0$ and $2^n-1 < \delta < 2^n$. Moreover, it follows from Lemma \[lem:esti2\] that $(s_{\tau})_j \neq 0$ for any $j$. Thus Lemma \[lem:vanish\] yields $\Gamma_{\tau}^{(1)} \cap P(\tau) = \emptyset$. $\Box$
Next we prove Proposition \[pro:cri\].
\[lem:g\] For any $n \ge 2$, we have the following two inequalities:
1. $g_1(n) < 0$, where $\displaystyle g_1(n):= \frac{1}{\delta^3-1}+1-\delta \cdot z_1(n)^{n-1}$,
2. $g_2(n) > 0$, where $\displaystyle g_2(n):= z_1(n)^{n-2}-z_2(n)^{n-1}-\frac{1}{\delta^3-1}$.
[*Proof*]{}. First, we claim that the following inequality holds: $$\label{eqn:ineqxi}
z_1(n)^{n-1} > \frac{1}{2^{n-1}} -(n-1) \Bigl(\frac{1}{2^{3 n -4}} +
\frac{1}{2^{4 n -3}} \Bigr).$$ Indeed, since $$\Bigl(1- \frac{1}{2^{n-1}} - \frac{1}{2^{2n-1}} \Bigr)
\Bigl(1+ \frac{1}{2^{n-1}} + \frac{6}{2^{2n}} \Bigr)
= 1 - \frac{1}{2^{4 n-2}} (2^{n+2}+3) \le 1,$$ one has $$z_1(n) \ge \frac{1}{2} \Bigl(1 + \frac{1}{2^{n-1}} \Bigr)
\Bigl(1 - \frac{1}{2^{n-1}} - \frac{1}{2^{2n-1}} \Bigr)
= \frac{1}{2} \Bigl\{ 1-\bigl( \frac{3}{2^{2n-1}} + \frac{1}{2^{3n-2}} \bigr)
\Bigr\} > \frac{1}{2}
\Bigl\{ 1-\bigl( \frac{1}{2^{2n-3}} + \frac{1}{2^{3n-2}} \bigr) \Bigr\}.$$ Therefore, the claim holds from the Bernoulli inequality, namely, $(1+x)^n \ge 1+nx$ for any $x \ge -1$. By using inequality (\[eqn:ineqxi\]), we prove the two inequalities in the lemma.
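Inequality (\[eqn:ineqxi\]) itself can also be spot-checked exactly for small $n$ (an illustration only; it is of course a consequence of the argument just given, and the range of $n$ below is arbitrary):

```python
# Exact spot-check of the claimed inequality
#   z1(n)^(n-1) > 1/2^(n-1) - (n-1) * (1/2^(3n-4) + 1/2^(4n-3));
# illustration only.
from fractions import Fraction as F

def z1(n): return F(2 ** (n - 1) * (2**n + 2), 2 ** (2 * n) + 2 ** (n + 1) + 6)

for n in range(2, 30):
    rhs = F(1, 2 ** (n - 1)) - (n - 1) * (F(1, 2 ** (3 * n - 4)) + F(1, 2 ** (4 * n - 3)))
    assert z1(n) ** (n - 1) > rhs
print("inequality verified for n = 2, ..., 29")
```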
In order to prove assertion (1), we consider the function of $n$: $$\check{g}_1(n) := \frac{1}{(2^n-1)^3-1}+1- (2^n-1) \cdot z_1(n)^{n-1}.$$ Then the inequality $g_1(n) < \check{g}_1(n)$ holds since $\delta > 2^n-1$. Moreover, as $\check{g}_1(2) < 0$, one has $g_1(2) <0$. On the other hand, when $n \ge 3$, inequality (\[eqn:ineqxi\]) yields $$\begin{array}{rl}
\check{g}_1(n) < & \displaystyle
\frac{(2^n-1)^3}{(2^n-1)^3-1} - \frac{2^n-1}{2^{n-1}} \Bigl\{ 1- (n-1)
\bigl( \frac{1}{2^{2n-3}} + \frac{1}{2^{3n-2}} \bigr) \Bigr\} \\[2mm]
~ < & \displaystyle \frac{2^n-1}{2^{n-1}} \Bigl( -1+
\frac{2^{n-1} (2^n-1)^2}{(2^n-1)^3-1} + \frac{n-1}{2^{2n-3}}
+ \frac{n-1}{2^{3n-2}} \Bigr).
\end{array}$$ Since the terms $\frac{2^{n-1} (2^n-1)^2}{(2^n-1)^3-1}$, $\frac{n-1}{2^{2n-3}}$ and $\frac{n-1}{2^{3n-2}}$ are monotone decreasing with respect to $n$, the function $-1 + \frac{2^{n-1} (2^n-1)^2}{(2^n-1)^3-1} +
\frac{n-1}{2^{2n-3}} + \frac{n-1}{2^{3n-2}}$ is maximized when $n=3$, which is negative. Therefore, we have $\check{g}_1(n) < 0$, and thus $g_1(n) < 0$.
Finally, in order to prove assertion (2), we consider the function of $n$: $$\check{g}_2(n) := z_1(n)^{n-2} - z_2(n)^{n-1} -\frac{1}{(2^n-1)^3-1}.$$ Then the inequality $g_2(n) > \check{g}_2(n)$ holds since $\delta > 2^n-1$. Moreover, as $\check{g}_2(2),\check{g}_2(3)> 0$, one has $g_2(2), g_2(3) >0$. On the other hand, when $n \ge 4$, $\check{g}_2(n)$ can be estimated as $$\begin{array}{ll}
\check{g}_2(n) & \displaystyle
= z_1(n)^{n-2} \bigl( 1-z_1(n) \bigr) -
\bigl( z_2(n)^{n-1}-z_1(n)^{n-1} \bigr)
-\frac{1}{(2^n-1)^3-1} \\[2mm]
& \displaystyle
\ge z_1(n)^{n-2} \bigl( 1-z_1(n) \bigr) - (n-1)
\bigl( z_2(n)-z_1(n) \bigr)
z_2(n)^{n-2} -\frac{1}{(2^n-1)^3-1},
\end{array}$$ where the last inequality follows from the general inequality $x^n-y^n \le n(x-y) x^{n-1}$ for any $x \ge y \ge 0$. Since $z_2(n)-z_1(n)=\frac{9}{2} \frac{2^{2n}+2^{n+1}+4}
{(2^{2n}+2^{n+1}+3)(2^{2n}+2^{n+1}+6)}<
\frac{9}{2}\frac{1}{2^{2n}+2^{n+1}+3} < \frac{9}{8} \frac{1}{2^{2 (n-1)}}$, and $z_2(n)=\frac{1}{2}+\frac{3}{2}\frac{1}{2^{2n}+2^{n+1}+3}$ is monotone decreasing with respect to $n$, and thus is less than $\frac{13}{24}$, we have $$~~~~~~~~~~~~~~~~~~~~~~~~~~~
(n-1) \bigl( z_2(n)-z_1(n) \bigr) z_2(n)^{n-2}
< (n-1) \frac{9}{8} \Bigl( \frac{13}{24} \Bigr)^{n-2}
\frac{1}{2^{2 (n-1)}} < \frac{1}{2^{2 (n-1)}},
~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ where we use the fact that the function $(n-1) \frac{9}{8}
\Bigl( \frac{13}{24} \Bigr)^{n-2}$ is monotone decreasing and is less than $1$. Moreover, as $1-z_1(n) > z_1(n)$, one has $$\begin{array}{rl}
\check{g}_2(n) > & \displaystyle
z_1(n)^{n-1} - \frac{1}{2^{2 (n-1)}} -\frac{1}{(2^n-1)^3-1}
\\[2mm]
> & \displaystyle
\frac{1}{2^{n-1}} \Bigl\{ 1 -(n-1) \bigl(\frac{1}{2^{2 n -3}} + \frac{1}
{2^{3 n -2}} \bigr) \Bigr\} - \frac{1}{2^{2 (n-1)}} -\frac{1}{(2^n-1)^3-1}
\\[2mm]
= & \displaystyle
\frac{1}{2^{n-1}} \Bigl( 1 - \frac{n-1}{2^{2 n -3}} - \frac{n-1}
{2^{3 n -2}} - \frac{1}{2^{n-1}} - \frac{2^{n-1}}{(2^n-1)^3-1} \Bigr).
\end{array}$$ Since the terms $\frac{n-1}{2^{2n-3}}$, $\frac{n-1}{2^{3n-2}}$, $\frac{1}{2^{n-1}}$ and $\frac{2^{n-1}}{(2^n-1)^3-1}$ are monotone decreasing with respect to $n$, the function $1 - \frac{n-1}{2^{2 n -3}} - \frac{n-1}
{2^{3 n -2}} - \frac{1}{2^{n-1}} - \frac{2^{n-1}}{(2^n-1)^3-1}$ is minimized when $n=4$, which is positive. Therefore, we have $\check{g}_2(n) > 0$ and thus $g_2(n) > 0$, and so the proof is complete. $\Box$
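Since the proof shows $g_1(n) < \check{g}_1(n)$ and $g_2(n) > \check{g}_2(n)$ for every admissible $\delta > 2^n-1$, the lemma can also be spot-checked by evaluating $\check{g}_1$ and $\check{g}_2$ exactly for small $n$ (an illustration only; the range of $n$ below is arbitrary):

```python
# Exact evaluation of the worst-case quantities from the proof (illustration only):
#   g1_check(n) = 1/((2^n-1)^3-1) + 1 - (2^n-1)*z1(n)^(n-1)    (should be < 0),
#   g2_check(n) = z1(n)^(n-2) - z2(n)^(n-1) - 1/((2^n-1)^3-1)  (should be > 0).
from fractions import Fraction as F

def z1(n): return F(2 ** (n - 1) * (2**n + 2), 2 ** (2 * n) + 2 ** (n + 1) + 6)
def z2(n): return F(2 ** (2 * n - 1) + 2**n + 3, 2 ** (2 * n) + 2 ** (n + 1) + 3)

def g1_check(n):
    d = 2**n - 1
    return F(1, d**3 - 1) + 1 - d * z1(n) ** (n - 1)

def g2_check(n):
    d = 2**n - 1
    return z1(n) ** (n - 2) - z2(n) ** (n - 1) - F(1, d**3 - 1)

for n in range(2, 15):
    assert g1_check(n) < 0 < g2_check(n)
print("g1_check(n) < 0 < g2_check(n) for n = 2, ..., 14")
```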
\[lem:coeffi\] Assume that $v_{\overline{\iota}}(\delta)=\delta^k \cdot
v_{\overline{\iota}'}(\delta)$. Then we have $\kappa(\overline{\iota})=\kappa(\overline{\iota}')-k$, $i_1=i_1'$ and $\overline{c}_{\overline{\iota},j}(\delta)=\delta^k \cdot
\overline{c}_{\overline{\iota}',j}(\delta)$ for any $j \in \mathbb{N}$.
[*Proof*]{}. Put $s:=s_{\tau}$. Viewing $v_{\overline{\iota}}(\delta)/(\delta-1)$ and $\delta^k\cdot v_{\overline{\iota}'}(\delta)/(\delta-1)$ as functions of $\delta$ (see (\[eqn:expre\])), we expand them into Taylor series around infinity: $$\begin{aligned}
\frac{v_{\overline{\iota}}(\delta)}{\delta-1} & = &
- s_{i_1} \cdot \delta^{-\varepsilon_1(\overline{\iota})} -
s_{i_2} \cdot \delta^{-\varepsilon_2(\overline{\iota})} - \cdots -
s_{i_{|\overline{\iota}|}} \cdot
\delta^{-\varepsilon_{|\overline{\iota}|}(\overline{\iota})} -
s_{i_{|\overline{\iota}|+1}} \cdot
\delta^{-\varepsilon_{|\overline{\iota}|+1}(\overline{\iota})} -
\cdots, \label{eqn:expand1}
\\[2mm]
\frac{\delta^k \cdot v_{\overline{\iota}'}(\delta)}{\delta-1} & = &
- s_{i_1'} \cdot \delta^{-\varepsilon_1(\overline{\iota}')+k} - \cdots -
s_{i_{|\overline{\iota}'|}'} \cdot
\delta^{-\varepsilon_{|\overline{\iota}'|}(\overline{\iota}')+k} -
s_{i_{|\overline{\iota}'|+1}'} \cdot
\delta^{-\varepsilon_{|\overline{\iota}'|+1}(\overline{\iota}')+k}
- \cdots. \label{eqn:expand2}\end{aligned}$$ In view of these expressions, the coefficient of $\delta^{-l}$ is either $-s_{\bullet}$ or $0$. Now assume the contrary that $v_{\overline{\iota}}(\delta)/(\delta-1)$ and $\delta^k\cdot v_{\overline{\iota}'}(\delta)/(\delta-1)$ have different coefficients. Let $l_1$ be the minimal integer such that the coefficient $-s_{m_1}$ of $\delta^{-l_1}$ in $v_{\overline{\iota}}(\delta)/(\delta-1)$ differs from the coefficient of $\delta^{-l_1}$ in $\delta^k\cdot v_{\overline{\iota}'}(\delta)/(\delta-1)$, and let $l_2$ be the minimal integer such that the coefficient $-s_{m_2}$ of $\delta^{-l_2}$ in $\delta^k\cdot v_{\overline{\iota}'}(\delta)/(\delta-1)$ differs from the coefficient of $\delta^{-l_2}$ in $v_{\overline{\iota}}(\delta)/(\delta-1)$, where $1 \le m_1, m_2 \le n$. Note that $s_1 > s_2 > \cdots > s_n$ and $\varepsilon_{m+1}(\overline{\iota})- \varepsilon_m(\overline{\iota}) \ge 3$ for any $\overline{\iota}$ and $m$. Thus, $v_{\overline{\iota}}(\delta)/(\delta-1) -
\delta^k\cdot v_{\overline{\iota}'}(\delta)/(\delta-1)=0$ satisfies the estimates $$s_{m_1} \delta^{-l_1}-s_{m_2} \delta^{-l_2} - s_{1}
\frac{\delta^{-l_2}}{\delta^3-1} <
\frac{v_{\overline{\iota}}(\delta)}{\delta-1} -
\frac{\delta^k\cdot v_{\overline{\iota}'}(\delta)}{\delta-1}
< s_{m_1} \delta^{-l_1}-s_{m_2} \delta^{-l_2} + s_{1}
\frac{\delta^{-l_1}}{\delta^3-1}.$$ If $l_1 > l_2$, then it follows that $$0 < s_{m_1} \delta^{-l_1}-s_{m_2} \delta^{-l_2} + s_{1}
\frac{\delta^{-l_1}}{\delta^3-1} <
s_{1} \delta^{-l_1}-s_{1} z_1(n)^{n-1} \delta^{-l_1+1} + s_{1}
\frac{\delta^{-l_1}}{\delta^3-1} < s_{1} \delta^{-l_1} g_1(n),$$ which contradicts Lemma \[lem:g\]. On the other hand, if $l_1 = l_2$ and $m_1 > m_2$, then we have $$0 < s_{m_1} \delta^{-l_1}-s_{m_2} \delta^{-l_1} + s_{1}
\frac{\delta^{-l_1}}{\delta^3-1} <
s_{1} z_2(n)^{m_1-1} \delta^{-l_1}-s_{1} z_1(n)^{m_1-2}
\delta^{-l_1} + s_{1} \frac{\delta^{-l_1}}{\delta^3-1} <
- s_{1} \delta^{-l_1} g_2(n),$$ where the last inequality is a consequence of the fact that $z_2(n)^{m_1-1}-z_1(n)^{m_1-2}= - z_2(n)^{m_1-2}$ $\bigl((\frac{z_1(n)}{z_2(n)})^{m_1-2}- z_2(n) \bigr)$ is monotone increasing with respect to $m_1$ since $0 < z_2(n),\frac{z_1(n)}{z_2(n)} < 1$ and $(\frac{z_1(n)}{z_2(n)})^{m_1-2}- z_2(n) > \frac{g_2(n)}{z_2(n)^{m_1-2}} >0$. This contradicts Lemma \[lem:g\]. In a similar manner, if $l_1 < l_2$, then it follows that $$0 > s_{m_1} \delta^{-l_1}-s_{m_2} \delta^{-l_2} - s_{1}
\frac{\delta^{-l_2}}{\delta^3-1} >
s_{1} z_1(n)^{n-1} \delta^{-l_2+1}-s_{1} \delta^{-l_2} - s_{1}
\frac{\delta^{-l_2}}{\delta^3-1} > -s_{1} \delta^{-l_2} g_1(n),$$ which is a contradiction. On the other hand, if $l_1 = l_2$ and $m_1 < m_2$, then we have $$0 > s_{m_1} \delta^{-l_2}-s_{m_2} \delta^{-l_2} - s_{1}
\frac{\delta^{-l_2}}{\delta^3-1} >
s_{1} z_1(n)^{m_2-2} \delta^{-l_2}-s_{1} z_2(n)^{m_2-1}
\delta^{-l_2} - s_{1}
\frac{\delta^{-l_2}}{\delta^3-1} > s_{1} \delta^{-l_2} g_2(n),$$ which is a contradiction. Thus, $v_{\overline{\iota}}(\delta)/(\delta-1)$ and $\delta^k\cdot v_{\overline{\iota}'}(\delta)/(\delta-1)$ have the same coefficients. In particular, we have $i_1=i_1'$ and $\kappa(\overline{\iota})=\varepsilon_1(\overline{\iota})=
\varepsilon_1(\overline{\iota}')-k=\kappa(\overline{\iota}')-k$. Moreover, $\overline{c}_{\overline{\iota},j}(\delta)=
\delta^k \cdot \overline{c}_{\overline{\iota}',j}(\delta)$ holds since $\overline{c}_{\overline{\iota},j}(\delta)/(\delta-1)$ and $\delta^k \cdot \overline{c}_{\overline{\iota}',j}(\delta)/(\delta-1)$ are the sums of the terms $\delta^l$ in (\[eqn:expand1\]) and (\[eqn:expand2\]) whose coefficients are equal to $s_{j}$, respectively. Therefore, the lemma is established. $\Box$
Recall that if $\overline{c}_{\overline{\iota},j}(d) \neq 0$, then it may be expressed as $$\overline{c}_{\overline{\iota},j}(d) =
- \frac{(d-1) \cdot (d^{\eta_j}+\epsilon_{j,1} d^{\eta_{j,1}} +
\epsilon_{j,2} d^{\eta_{j,2}})}{d^{\varepsilon_{|\overline{\iota}|}}-1}$$ for some $\eta_j:=\eta_j(\overline{\iota}) < \eta_{j,1}< \eta_{j,2}$ and $\epsilon_{j,k}:= \epsilon_{j,k}(\overline{\iota}) \in \{0,1\}$. In view of this expression, one has $$\eta_{i}(\overline{\iota})=0, \qquad
\eta_{i_{|\overline{\iota}|-1}}(\overline{\iota})=
\kappa(\overline{\iota}_{|\overline{\iota}|-1}), \qquad
\eta_{j}(\overline{\iota}) > 0 \quad (j \neq i).$$
\[lem:K\] For a given $d > 2$, we put $$\mathcal{F}_1(m_1;k):=\frac{d^{m_{1}}}{d^{k}-1}, \quad
\mathcal{F}_2(m_1,m_2;k):=\frac{d^{m_{1}}+d^{m_{2}}}{d^{k}-1},
\quad
\mathcal{F}_3(m_1,m_2,m_3;k):=\frac{d^{m_{1}}+
d^{m_{2}}+d^{m_{3}}}{d^{k}-1},$$ where $m_{1} < m_{2} < m_{3} \in \mathbb{Z}$ and $k \in \mathbb{Z}_{>0}$. Then,
1. if $\mathcal{F}_j(m_{1,1},\dots,m_{1,j};k_1)=
\mathcal{F}_j(m_{2,1},\dots,m_{2,j};k_2)$ for $j=1,2,3$, then we have $(m_{1,1},\dots,$ $m_{1,j},k_1)=
(m_{2,1},\dots,m_{2,j},k_2)$,
2. if $\mathcal{F}_2(m_{1,1},m_{1,2};k_1)=
\mathcal{F}_1(m_{2,1};k_2)$, then we have $(m_{1,1},m_{1,2},k_1)=(m,m+k,2 k)$ and $(m_{2,1},k_2)=(m,k)$ for some $m \in \mathbb{Z}$ and $k \in \mathbb{Z}_{\ge 0}$,
3. if $\mathcal{F}_3(m_{1,1},m_{1,2},m_{1,3};k_1)=
\mathcal{F}_1(m_{2,1};k_2)$, then we have $(m_{1,1},m_{1,2},m_{1,3},k_1)=(m,m+k,m+2 k,3 k)$ and $(m_{2,1},k_2)=(m,k)$ for some $m \in \mathbb{Z}$ and $k \in \mathbb{Z}_{\ge 0}$,
4. if $\mathcal{F}_3(m_{1,1},m_{1,2},m_{1,3};k_1)=
\mathcal{F}_2(m_{2,1},m_{2,2};k_2)$, then we have $(m_{1,1},m_{1,2},m_{1,3},k_1)=(m,m+k,m+2 k,3 k)$ and $(m_{2,1},m_{2,2},k_2)=(m,m+k,2 k)$ for some $m \in \mathbb{Z}$ and $k \in \mathbb{Z}_{\ge 0}$.
In particular, if $\mathcal{F}_{j_1}(m_{1,1},\dots,m_{1,j_1};k_1)=
\mathcal{F}_{j_2}(m_{2,1},\dots,m_{2,j_2};k_2)$ for some $j_1,j_2=1,2,3$, then we have $m_{1,1}=m_{2,1}$.
[*Proof*]{}. We only discuss the case $\mathcal{F}_1(m_{1,1};k_1)=\mathcal{F}_1(m_{2,1};k_2)$ as the remaining cases can be treated in a similar manner. Moreover, multiplying the both sides by $d^{- m_{1,1}}$, we may assume the relation $\mathcal{F}_1(0;k_1)=\mathcal{F}_1(m_{2,1};k_2)$, which yields $$d^{k_1+m_{2,1}} + 1= d^{k_2}+d^{m_{2,1}}.$$ If $k_1+m_{2,1} > k_2$, then one has $d^{k_1+m_{2,1}} + 1 > d^{k_2}+d^{m_{2,1}}$ from the assumption that $d > 2$. Similarly, if $k_1+m_{2,1} < k_2$, then one has $d^{k_1+m_{2,1}} + 1 < d^{k_2}+d^{m_{2,1}}$. This means that $k_1+m_{2,1} = k_2$, and thus $m_{2,1}=0$ since $d^{m_{2,1}}=1$. Therefore, we have $(0,k_1)=(m_{2,1},k_2)$ and establish the lemma. $\Box$
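The final assertion, that the value of $\mathcal{F}_{j}$ determines the minimal exponent $m_{\ast,1}$, can also be confirmed by a brute-force search over a finite window of exponents for a sample value of $d$ (an illustration only; the choice $d=3$ and the exponent ranges below are arbitrary):

```python
# Brute-force check (illustration only): within the window below, any two of the
# quantities F_1, F_2, F_3 that take the same value have the same minimal exponent.
from fractions import Fraction

d = Fraction(3)                       # any d > 2 would do
values = {}                           # value -> set of minimal exponents producing it

def record(val, m_min):
    values.setdefault(val, set()).add(m_min)

for k in range(1, 6):
    denom = d**k - 1
    for m1 in range(-3, 7):
        record(d**m1 / denom, m1)                                   # F_1
        for m2 in range(m1 + 1, m1 + 8):
            record((d**m1 + d**m2) / denom, m1)                     # F_2
            for m3 in range(m2 + 1, m2 + 8):
                record((d**m1 + d**m2 + d**m3) / denom, m1)         # F_3

clashes = [v for v, mins in values.items() if len(mins) > 1]
print("values with ambiguous minimal exponent:", clashes)           # expected: []
```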
\[lem:eta\] Assume that $\overline{c}_{\overline{\iota},j}(d)=
d^k \cdot \overline{c}_{\overline{\iota}',j}(d) \neq 0$ for some $j \in \{1, \dots n\}$ and $d > 2$. Then we have $\eta_{j}(\overline{\iota})= k+ \eta_{j}(\overline{\iota}')$.
[*Proof*]{}. Using the notation of Lemma \[lem:K\], the relation $\overline{c}_{\overline{\iota},j}(d)=
d^k \cdot \overline{c}_{\overline{\iota}',j}(d)$ yields $\mathcal{F}_{j_1}(\eta_{j}(\overline{\iota}),\dots)=
\mathcal{F}_{j_2}(k+ \eta_{j}(\overline{\iota}'),\dots)$ for some $j_1, j_2 \in \{1,2,3\}$, and thus $\eta_{j}(\overline{\iota})= k+ \eta_{j}(\overline{\iota}')$, which establishes the lemma. $\Box$
\[cor:ind1\] Assume that $v_{\overline{\iota}}(\delta)=
\delta^k \cdot v_{\overline{\iota}'}(\delta)$. Then we have $k=0$ and $i=i'$.
[*Proof*]{}. By Lemma \[lem:coeffi\], one has $\overline{c}_{\overline{\iota},j}(\delta)=
\delta^k \cdot \overline{c}_{\overline{\iota}',j}(\delta)$ for any $j \in \{1, \dots n\}$. In particular, it follows that $\overline{c}_{\overline{\iota},i'}(\delta)=
\delta^k \cdot \overline{c}_{\overline{\iota}',i'}(\delta) \neq 0$ and $\overline{c}_{\overline{\iota},i}(\delta)=
\delta^k \cdot \overline{c}_{\overline{\iota}',i}(\delta) \neq 0$. Moreover, since $\eta_{i'}(\overline{\iota})= k+ \eta_{i'}(\overline{\iota}')=k$, $0=\eta_{i}(\overline{\iota})= k+ \eta_{i}(\overline{\iota}')$ by Lemma \[lem:eta\], and $\eta_{i'}(\overline{\iota}), \eta_{i}(\overline{\iota}') \ge 0$, we have $\eta_{i'}(\overline{\iota})=\eta_{i}(\overline{\iota}')=0$, which yields $i=i'$ and thus $k=0$. $\Box$
\[cor:ind2\] Assume that $v_{\overline{\iota}}(\delta)=v_{\overline{\iota}'}(\delta)$. Then we have $v_{\overline{\iota}_m}(\delta)=v_{\overline{\iota}_m'}(\delta)$, $\kappa(\overline{\iota}_m)=\kappa(\overline{\iota}_m')$ and $i_m=i_m'$ for any $m \ge 0$.
[*Proof*]{}. Let us prove the corollary by induction on $m$. For $m=0$, the statement immediately follows from Lemma \[lem:coeffi\] and Corollary \[cor:ind1\]. Assume that the statement holds for some $m$. Then Corollary \[cor:ind1\] and Lemma \[lem:coeffi\] show that $i_m=i_m'$ and $i_{m+1}=i_{m+1}'$. Moreover, since $u_{\overline{\iota}_{m+1}}=
\delta^{\kappa(\overline{\iota}_m)-1} \cdot v_{\overline{\iota}_m}=
\delta^{\kappa(\overline{\iota}_m')-1} \cdot v_{\overline{\iota}_m'}=
u_{\overline{\iota}_{m+1}'}$, we have $v_{\overline{\iota}_{m+1}}=\delta \cdot
u_{\overline{\iota}_{m+1}}+ s_{i_{m+1}}=
\delta \cdot u_{\overline{\iota}_{m+1}'}+ s_{i_{m+1}'}=
v_{\overline{\iota}_{m+1}'}$, and thus $\kappa(\overline{\iota}_{m+1})=\kappa(\overline{\iota}_{m+1}')$ by Lemma \[lem:coeffi\]. Therefore, the statement is verified for $m+1$ and the induction is complete. $\Box$\
[*Proof of Proposition \[pro:cri\]*]{}. From Corollaries \[cor:ind1\] and \[cor:ind2\], if the relation $\delta^k \cdot v_{\overline{\iota}}(\delta)=v_{\overline{\iota}'}(\delta)$ holds, then one has $k=0$, $i_m=i_m'$ and $\kappa(\overline{\iota}_m)=\kappa(\overline{\iota}_m')$ for any $m \ge 0$. Conversely, it is easily seen that if $i_m=i_m'$ and $\kappa(\overline{\iota}_m)=\kappa(\overline{\iota}_m')$ for any $m \ge 0$ then $v_{\overline{\iota}}(\delta)=v_{\overline{\iota}'}(\delta)$ holds. In particular, we have $\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau) = \{
\alpha_{\overline{\iota},\overline{\iota}'}^0 \, | \,
i_m=i_m', \, \kappa(\overline{\iota}_m)=
\kappa(\overline{\iota}_m'), \, m \ge 0 \}$ from (\[eqn:vv\]) and Lemma \[lem:per\]. Moreover, assume that $\tau$ satisfies condition $(3)$ in Theorem \[thm:main3\]. For $\alpha_{\overline{\iota},\overline{\iota}'}^k \in
\overline{\Gamma}_{\tau}^{(2)} \cap P(\tau)$, it follows that $\overline{\iota} \prec_{i} \overline{\iota}'$ if and only if $\overline{\iota}_{1} \prec_{i_1} \overline{\iota}_{1}'$ for a fixed $(\prec_{i}) \in \mathcal{T}(\tau)$ (see also Definition \[def:order\]). Since $\theta_{i,i'}(k)=0$ and $\mu(\overline{\iota})=\mu(\overline{\iota}')$, we have $\alpha_{\overline{\iota},\overline{\iota}'}^k \notin \Gamma_{\tau}^{(2)}$ and $\Gamma_{\tau}^{(2)} \cap P(\tau) = \emptyset$, which establishes Proposition \[pro:cri\]. $\Box$
[**Acknowledgment**]{}. This is the author’s doctoral dissertation presented to Kyushu University on 22 February 2010. He thanks Professor Katsunori Iwasaki for numerous comments and kind encouragement. He is grateful to Professor Charles Favre, Professor Yutaka Ishii, Professor Eiichi Sato and Professor Mitsuhiro Shishikura for useful discussions and advice.
[^1]: E-mail addresses: [t-uehara@math.kyushu-u.ac.jp]{}
[^2]: Mathematics Subject Classification: 14E07, 14J50, 37F99.
---
author:
- 'K. Hess[^1]'
- 'K. Michielsen[^2]'
- 'H. De Raedt[^3]'
bibliography:
- '../epr.bib'
title: 'Possible Experience: from Boole to Bell'
---
Introduction
============
We discuss models of Einstein-Podolsky-Rosen-Bohm type [@EPR35; @BOHM57] of experiments as used by John Bell [@BELL64] when presenting his celebrated inequalities. These experiments result in outcomes of two spin-values $\pm 1$ (in units of $\hbar/2$) that in turn depend on certain magnet settings ${\bf a}, {\bf b}, {\bf c}...$ and have been linked to two-valued functions $A_{\bf a}(\cdot), A_{\bf b}(\cdot), A_{\bf c}(\cdot) =
\pm 1$ by Bell and followers. Here $(\cdot)$ stands for the dependency on some element of a set of mathematical representations of elements of reality that do not depend on the magnet settings ${\bf a}, {\bf b}, {\bf c}...$. This latter fact of independence from magnet settings was deduced by Bell from considerations of Einstein locality and the (physically unjustified) assumption that the elements of reality emanate exclusively from a distant source and not from the measurement equipment (including the magnets). There are numerous inequalities delineated in the physics literature that are related to Bell’s functions $A_{\bf a}(\cdot), \ldots$. These inequalities were first derived by Boole [@BO1862] in a much more general context. Here we discuss mainly a variation of the inequalities as published by Leggett and Garg [@LEGG85], for which we also have developed a transparent counterexample. More complex counterexamples have been developed in the past for the more elaborate inequalities [@HESS01] but have remained largely unappreciated because of their lack of transparency.
The Leggett-Garg inequality reads: $$A_{\bf a}(\cdot)A_{\bf b}(\cdot) + A_{\bf a}(\cdot)A_{\bf c}(\cdot)
+ A_{\bf b}(\cdot)A_{\bf c}(\cdot) \geq -1
.
\label{hla27n1}$$ Inserting all possible values of $\pm 1$ for the functions $A(.)$ shows the correctness of this inequality. Because measurement outcomes of Einstein-Podolsky-Rosen (EPR) experiments [@ASPE82a] (that are closely related to such two-valued functions $A(.)$) do violate this inequality, it is commonly concluded that either $(\cdot)$ can not stand for any element of reality and one must therefore abandon realism or if it stands for an element of reality it must depend on the magnet settings and thus violate Einstein locality. There are, however, two important questions that have never been answered satisfactorily. If $(\cdot)$ stands for an element of reality, why does it have to appear identically for the three magnet setting pairs? If, on the other hand $(\cdot)$ is just seen as a random variable, why do the functions $A$ not also depend on a measurement time label, as they are introduced in the theory of stochastic processes [@BREU02]?
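The lower bound can also be checked mechanically. The following minimal Python sketch (an illustration added here, not part of the original argument) enumerates all eight assignments of $\pm 1$ to the three functions and confirms that the left-hand side of Eq. (\[hla27n1\]) never drops below $-1$.

```python
from itertools import product

# Exhaustively enumerate the eight possible +/-1 assignments to A_a, A_b, A_c
# and record the smallest value of A_a*A_b + A_a*A_c + A_b*A_c.
lhs_min = min(a * b + a * c + b * c for a, b, c in product((+1, -1), repeat=3))
print(lhs_min)  # prints -1: the combination is bounded below by -1
```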
The probability theory of Boole and its generalization and perfection by Kolmogorov reduce the actual experiments to logical abstractions and establish a one to one correspondence between the experiments and these abstractions. For the case that interests us we have only two possible experimental outcomes denoted by $\pm 1$ (or equivalently $0, 1$ or $true$ and $false$). “Probability" is defined by Boole and Kolmogorov by imposing a measure (a real number in the interval $[0, 1]$) onto these elements that is consistent with the experimental factors related both to the single logical abstractions and to the whole set of these abstractions. This is the hallmark of modern probability theory and emphasizes the relation to set theory.
The one to one correspondence of mathematical abstractions to actual experiments and a measure on the set of these abstractions are both necessary to give meaning to the word probability in a set-theoretic sense. The less familiar reader is encouraged to look at these definitions in the original work of Boole [@BO1862] or, for the Kolmogorov framework, in textbooks such as [@FELL68]. For such a model to make general sense in all experimental situations, we must assume that (1) a given and well defined logical element representing an experimental outcome or, in the language of Kolmogorov, an elementary event will occur with the same probability measure throughout all experiments and that (2) the physical characterization of the logical elements of Boole (elementary events of Kolmogorov) is consistent and complete throughout the experimental sequence.
This requirement for the description of experiments by mathematical and logical abstractions that represent a “truth” throughout an experimental sequence brings us back to the fundamental statement of Plato’s logic: “aut $P$ aut non $P$ tertium non datur” and goes to the heart of discussions related to questions such as “does the moon shine when I am not looking?". The sentence “The moon shines” is, in general, too ill defined to be identified with a logical variable, say $B$, that assumes a value $+1$ if the moon shines and $-1$ if it does not. Throughout any reasonably general experimental sequence that lasts for a certain duration, the moon may or may not shine at certain different places and $B$ will therefore assume a variety of values at these different places. Correspondingly, a certain outcome of $B$ cannot stand for the same mathematical abstraction that describes facts at different locations. If we wish to associate with $B$ a certain truth or logical expression that is valid everywhere and throughout the experimental sequence, we need to introduce some generalized coordinates and formulate a more precise statement such as “the moon was shining in Monte Carlo at a certain date and time". In connection with general science experiments we need to note that a statement about experimental outcomes often may make no sense whatsoever without the introduction of a coordinate system.
Therefore, we propose the use of the space-time of special relativity to complete the characterization of Boole’s logical elements and Kolmogorov’s elementary events. We assume that only this completion can lead to $true$-$false$ or other binary statements that are always and everywhere valid even in very complex one to one correspondences of mathematical abstractions with actual experiments.
We can, as a simple example, have a number of coins and measure the outcome of coin-tosses at certain given space-time coordinates. The coins may contain some magnetic material and there may be hidden magnets with settings ${\bf a}, {\bf b}, {\bf c}$ that co-determine a probability to measure heads or tails for the given coins at the given space-time coordinates. For given magnet settings and space-time coordinates of the coins we then have certain outcomes that form a sample space and certain probabilities for the outcomes that, together with the sample space, form a probability space [@FELL68]. If we do not label the coins by their correct space-time coordinates then we may have, for example, different magnet settings applying to the same coin and therefore may have different probabilities for the outcomes of the coin toss, which may lead to confusion and contradictions.
Quantum theory uses a variation of probability theory by invoking a wave function $\psi$ that does not have a direct physical interpretation but does correspond to a certain experimental procedure of preparation. The settings of the macroscopic measurement equipment can be chosen at will and the measurements may be performed involving detection of particles that involve a space-time description through the many-body Hamiltonian and wave function $\psi$. The “probability" to measure a particle by the given equipment with a given setting is then related by Born’s interpretation to the absolute square (a positive number) of the wave function, which thus assigns a positive number to an event once the actual type of measurement is chosen. This assignment, however, cannot yet be regarded as a probability measure in the spirit of Boole or in terms of Kolmogorov’s definitions because there is no assignment made at this point for a sample space, i.e. a space of all possible outcomes and corresponding elementary events or logical elements. The Born rule thus appears as a pre-measure that may be expanded to a full Kolmogorov probability measure only after all experiments of a sequence are chosen, i.e. once all macroscopic equipment configurations of measurements and all possible outcomes (data) are fully determined. If we desire to create a Kolmogorov frame model based on Born’s rule, then the actual choice of random variables may also necessitate the introduction of one or more stochastic processes in order to include time coordinates that are otherwise not included in the Kolmogorov framework. Even this advanced procedure, as described e.g. in [@BREU02], leaves us with the vexing problem of determining which mathematical abstractions (elementary events of Kolmogorov or logical elements of Boole) correspond to the different actual experiments.
For example, assume that one measures correlated pairs of spin $1/2$ particles with magnet settings ${\bf a}, {\bf b}$ and characterizes the dichotomic outcomes for the ${\bf a}, {\bf b}$ settings by the variables $A_{\bf a}, A_{\bf b}$. Further assume that in another set of measurements we measure with magnet settings ${\bf a}, {\bf c}$. Can we then denote the corresponding variables for the outcomes by $A_{\bf a}, A_{\bf c}$? Recall that, in this second case, we measure the “$A_{\bf a}$” outcomes corresponding to the $\bf c$ setting (in the other wing of the experiment) at different space-time coordinates and with different correlated pairs as compared to the first case “$A_{\bf a}$” outcomes that correspond to the original $\bf b$ setting. Is it then permitted to use the same dichotomic variable or logical element as used for the $\bf b$ setting?
Because a sample space and single outcomes are not included in the considerations of quantum theory, this theory does not answer the above question. The Born rule per se therefore does not provide probabilities in the sense of Boole or Kolmogorov but can only lead to a probability once a one to one assignment of mathematical elements and experimental outcomes is made and a measure for the whole space of possible outcomes, the whole sample space, is introduced. This cannot be accomplished by normalizing a given wave function because that normalization refers only to a single preparation and measurement of a much more elaborate sequence of experiments. However, it is clear that for measurements with a given macroscopic setting and a fixed method of preparation, sample spaces can always be created and that such a sample space of measurement outcomes together with the probabilities from Born’s rule then also forms a probability space à la Kolmogorov for a given setting, as outlined in texts such as [@BREU02]. Nevertheless, for different and particularly for incompatible experiments and for a given characterization of functions or random variables, e.g. by magnet settings only, such a probability space may not exist.
As we will see in our counterexample this non-existence depends crucially on the one-to-one correspondence of the experimental outcomes to their mathematical idealizations be they elements of Boolean logic or elementary events in the framework of Kolmogorov.
Many mathematical papers on probability theory simply start with the phrase “given a Kolmogorov probability space...”. It is, however, well known and has been particularly well pointed out by Vorob’ev [@VORO62] that there are cases in which a Kolmogorov probability space does not exist. In particular, there exist numerous classical experiments that, subject to certain characterizations by simple settings, cannot be described on one probability space in a logically consistent way. Take, for example, certain physical experiments that can be described by stochastic processes. Examples are Brownian motion or stock market and exchange rate fluctuations. It is plausible that such different processes may not be describable by a single stochastic process but are described rather by different ones. It is less known but has been shown in great detail that even very slight changes in experiments may require the use of different stochastic processes for their description and that this is true also for EPR-type experiments. It is the purpose of this paper to show that Born’s rule defines a pre-probability measure that can be turned into a Kolmogorov (or Boole) probability only if a logically consistent one to one correspondence between experimental outcomes and mathematical abstractions is or can be made. We also show that such a one to one correspondence can always be made for the known EPR experiments by completing the characterization of the mathematical symbols describing the functions $A$ of Bell by use of space-time indices that relativity theory provides us with. Indices related to influences at a distance would also accomplish the same goal of obtaining a consistent probability measure à la Kolmogorov from Born’s rule but do not appear to be necessary.
Games with symptoms and patients: From Boole to Bell
====================================================
As mentioned, the early definitions of probability by Boole were related to a one to one correspondence that Boole established between actual experiments and idealizations of them through elements of logic with two possible outcomes. His view gave the concept of probability precision in its relation to sets of experiments and this precision is expressed by Boole’s discussion of probabilities as related to possible experience. These discussions can be best explained by an example that also shows the role of space-time coordinates in the characterization of variables related to probability theory. We discuss first this example that has its origins in the works of Boole and also Vorob’ev and relates to the work of Bell inasmuch as it can be used as a counterexample to Bell’s conclusions related to non-locality. Then we return to the more general discussions of probability in quantum theory.
Consider a certain disease that strikes persons in different ways depending on circumstances such as place of birth and place of residence, etc. Assume that we deal with patients who are born in Africa (subscript $\bf a$), in Asia (subscript $\bf b$) and in Europe (subscript $\bf c$). Assume further that doctors are assembling information about the disease altogether in the three cities Lille, Lyon and Paris, all in France. The doctors are careful and perform the investigations on randomly chosen but identical dates. The patients are denoted by the symbol $A_{\bf
o}^l(n)$ where ${\bf o} = {\bf a}, {\bf b}, {\bf c}$ depending on the birthplace of the patient, $l = 1, 2, 3$ depending on where the doctor gathered information $1$ designating Lille, $2$ Lyon and $3$ Paris respectively, and $n = 1, 2, 3,\ldots,N$ denotes just a given random day of the examination. The doctors assign a value $A = \pm
1$ to each patient; $A = +1$ if the patients show a certain symptom and $A = -1$ if they do not.
The first variation of this investigation of the disease is performed as follows. The doctor in Lille examines patients of type $\bf a$, the doctor in Lyon patients of type $\bf b$ and the doctor in Paris patients of type $\bf c$. On any given day of examination (of precisely one patient for each doctor and day) they write down their diagnosis and then, after many exams, concatenate the results and form the following sum of pair-products of exam outcomes at a given date described by $n$: $$\Gamma(n) = A_{\bf a}^1(n)A_{\bf b}^2(n) + A_{\bf a}^1(n)A_{\bf
c}^3(n) + A_{\bf b}^2(n)A_{\bf c}^3(n)
.
\label{hla23n1}$$ Boole noted now that $$\Gamma(n) \geq -1
,
\label{hla23n2}$$ which can be found by inserting all possible values for the patient outcomes summed in Eq. (\[hla23n1\]). For the average (denoted by $\langle . \rangle$) over all examinations we have then also: $$\Gamma= \langle\Gamma(n)\rangle=\frac{1}{N}\sum_{n=1}^N \Gamma(n)
\geq -1
.
\label{hla23n3}$$ This equation gives conditions for the product averages and therefore for the frequencies of the concurrence of certain values of $A_{\bf a}^1(n), A_{\bf b}^2(n)$ etc. e.g. for $A_{\bf a}^1(n)
=+1, A_{\bf b}^2(n) = -1$. These latter frequencies must therefore obey these conditions. Thus we obtain rules or non-trivial inequalities for the frequencies of concurrence of the patients’ symptoms. Boole calls these rules “conditions of possible experience". In case of a violation, Boole states that then the “evidence is contradictory”.
In the opinion of the authors, the term “possible experience” is somewhat of a misnomer. The experimental outcomes have been determined from an experimental procedure in a scientific way and are therefore possible. What may not be possible is the one to one correspondence of Boole’s logical elements or variables to the experimental outcomes that the scientist or statistician has chosen. In order to judge precisely where the contradictions arise from, we need to advance 100 years to the work of Vorob’ev on the one side and go back to the meaning of Plato’s logic and his rule “aut $P$ aut non $P$ tertium non datur" on the other.
Before doing so, however, we note the following. In this example, we may indeed regard the various $A_{\bf o}^l(n) = \pm1$ with given indices as the elements of Boole’s logic to which the actual experiments can be mapped. As shown by Boole, this is a sufficient condition for the inequality of Eq. (\[hla23n3\]) to be valid. We may in this case also omit all the indices except for those designating the birth place and still will obtain a valid equation that can never be violated: $$\langle A_{\bf a}A_{\bf b}\rangle + \langle A_{\bf a}A_{\bf c}\rangle + \langle A_{\bf
b}A_{\bf c}\rangle \geq -1
.
\label{hla23n3b}$$ The reason is simply that three arbitrary dichotomic variables, i.e. variables that assume only two values ($\pm 1$ in our case), must always fulfill Eq. (\[hla23n3b\]) no matter what their logical connection to experiments is, because we deduce the three products of Eq. (\[hla23n3b\]) from sequences of triples of measurement outcomes. Note that Eq. (\[hla23n3b\]) contains six factors, with each birthplace appearing twice and then representing the identical result. Below we will discuss a slightly modified experiment that is much more general and contains six measurement results for the six factors. Before discussing this more general experiment, which more closely resembles EPR experiments, we now turn to the findings of Vorob’ev regarding this type of inequality and Boole’s conditions of possible experience.
Obviously the inequality of Eq. (\[hla23n2\]) is non-trivial because, based on the fact that the value of all products must be $\pm 1$, one could only conclude that $$\Gamma(n) \geq -3
.
\label{hla23n4}$$ The nontrivial result has the following reason. Boole included into Eq. (\[hla23n1\]) a cyclicity: the outcomes of the first two products determine the outcomes in the third product. Because all outcomes can only be $\pm 1$ the cyclicity gives rise to Eq. (\[hla23n2\]). Vorob’ev showed precisely 100 years after Boole’s original work in a very general way that it is always a combinatorial-topological cyclicity that gives rise to non-trivial inequalities for the mathematical abstractions of experimental outcomes. Boole pointed to the fact that Eq. (\[hla23n2\]) can not be violated. However, in order to come to that conclusion, the $A_{\bf o}^l(n)$ need, in the first place, to be in a one to one correspondence to Boole’s elements of logic that follow the law “aut $A = +1$ aut $A = -1$ tertium non datur". As discussed in the introduction, eternally valid statements about physical experience such as “aut $A = +1$ aut $A = -1$ tertium non datur" can usually not be made when describing the physical world without the use of some coordinates. In the example above these coordinates where the places of birth, the places of examination and the numbering of the exams that were randomly taken. All these coordinates when added need to still allow for a cyclicity in order to make Boole’s inequality non-trivial. Therefore, if we have a violation of a non-trivial Boole inequality, then we must conclude that we have not achieved a one to one correspondence of our variables to the elementary eternally true logical variables of Boole and that we need further “coordinates” that will then remove the cyclicity. In order to illustrate all this by a simple example, we consider the following second different statistical investigation of the same disease.
We now let only two doctors, one in Lille and one in Lyon, perform the examinations. The doctor in Lille randomly examines patients of types $\bf a$ and $\bf b$ and the one in Lyon patients of types $\bf b$ and $\bf c$, each examining one patient at a randomly chosen date. The doctors are convinced that neither the date of examination nor the location (Lille or Lyon) has any influence and therefore denote the patients only by their place of birth. After a lengthy period of examination they find: $$\Gamma = \langle A_{\bf a}A_{\bf b}\rangle + \langle A_{\bf a}A_{\bf c}\rangle + \langle A_{\bf
b}A_{\bf c}\rangle = -3 .
\label{hla23n5}$$ They further notice that the single outcomes of $A_{\bf a}, A_{\bf
b}$ and $A_{\bf c}$ are randomly equal to $\pm 1$. This latter fact completely baffles them. How can the single outcomes be entirely random while the products are not random at all, and how can a Boole inequality be violated, hinting that we are not dealing with a possible experience? After lengthy discussions they conclude that there must be some influence at a distance going on and that the outcomes depend on the exams in both Lille and Lyon, such that a single outcome manifests itself randomly in one city and the outcome in the other city is then always of opposite sign. Naturally, in that way they have removed the Vorob’ev cyclicity and we have only the trivial inequality Eq. (\[hla23n4\]) to obey.
However, there are also other ways that remove the cyclicity, ways that do not need to take recourse to influences at a distance. For example we can have a time dependence and a city dependence of the illness as follows. On even dates we have $A_{\bf a} = +1$ and $A_{\bf c} = -1$ in both cities while $A_{\bf b} = +1$ in Lille and $A_{\bf b} = -1$ in Lyon. On odd days all signs are reversed. Obviously for measurements on random dates we have then the outcome that $A_{\bf
a}, A_{\bf b}$ and $A_{\bf c}$ are randomly equal to $\pm 1$ while at the same time $\Gamma(n) = -3$ and therefore $\Gamma = -3$. We need no deviation from conventional thinking to arrive at this result because now, in order to deal with Boole’s elements of logic, we need to add the coordinates of the cities to obtain $\Gamma = \langle A_{\bf a}^1 A_{\bf b}^2\rangle + \langle A_{\bf a}^1 A_{\bf c}^2\rangle + \langle A_{\bf b}^1 A_{\bf c}^2\rangle \geq -3$, and the inequality is of the trivial kind because the cyclicity is removed. The date index does not matter for the products since both signs are reversed, leaving the products unchanged. However, in actual fact, this index might also have to be included and could be a reason to remove the cyclicity, e.g. $
\Gamma = \langle A_{\bf a}^1(d_1) A_{\bf b}^2(d_1)\rangle + \langle A_{\bf a}^1(d_2)
A_{\bf c}^2(d_2)\rangle
+ \langle A_{\bf b}^1(d_3) A_{\bf c}^2(d_3)\rangle \geq -3
$, where we now have included the fact that the exams of pairs are performed at different dates $d_1, d_2, d_3$.
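A short simulation makes the above counterexample explicit. The Python sketch below (ours, added for illustration; the city and date dependence is exactly the one described in the preceding paragraphs) produces single outcomes that are randomly $\pm 1$, while each product average equals $-1$ and therefore $\Gamma = -3$, without any influence at a distance.

```python
import random

def outcome(birthplace, city, date):
    """Result of one exam: depends on birthplace, examining city and date parity."""
    sign = +1 if date % 2 == 0 else -1          # all signs are reversed on odd dates
    if birthplace == 'a':
        return sign
    if birthplace == 'c':
        return -sign
    return sign if city == 'Lille' else -sign   # type 'b' differs between the cities

N = 10000
averages = []
for left, right in [('a', 'b'), ('a', 'c'), ('b', 'c')]:
    total = 0
    for _ in range(N):
        date = random.randrange(10**6)          # randomly chosen examination date
        total += outcome(left, 'Lille', date) * outcome(right, 'Lyon', date)
    averages.append(total / N)

print(averages, sum(averages))                  # each average is -1, hence Gamma = -3
```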
We note that in connection with EPR experiments and questions relating to interpretations of quantum theory, Eqs. (\[hla27n1\]) and (\[hla23n2\]) are called Leggett-Garg inequalities and are of the Bell type. It is often claimed that a violation of such inequalities implies that either realism or Einstein locality should be abandoned. As we saw in our counterexample, which is both Einstein local and realistic in the common sense of the word, it is the one to one correspondence of the variables to the logical elements of Boole that matters when we determine a possible experience, but not necessarily the choice between realism and Einstein locality. Phrased differently, the question “does the moon shine when we are not looking" is simply too imprecise. Had we given a space-time coordinate for the event that the moon shines, we would have expressed an eternal truth of a measurement.
Realism plays a role in the arguments of Bell and followers because they introduce a variable $\lambda$ representing an element of reality and then write $$\Gamma = \langle A_{\bf a}(\lambda) A_{\bf b}(\lambda)\rangle + \langle A_{\bf
a}(\lambda) A_{\bf c}(\lambda)\rangle + \langle A_{\bf b}(\lambda) A_{\bf
c}(\lambda)\rangle \geq -1
.
\label{hla23n7}$$ Because no $\lambda$ exists that would lead to a violation except a $\lambda$ that depends on the index pairs ($\bf a$, $\bf b$), ($\bf
a$, $\bf c$) and ($\bf b$, $\bf c$), the simplistic conclusion is that either elements of reality do not exist or they are non-local. The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs. This assumption implies the existence of the combinatorial-topological cyclicity that in turn implies the validity of a non-trivial inequality, but it has no physical basis. Why should the elements of reality not all be different? Why should they, for example, not include the time of measurement? There is furthermore no reason why there should be no parameter of the equipment involved. Thus the equipment could involve time and setting dependent parameters such as $\lambda_{\bf
a}(t), \lambda_{\bf b}(t), \lambda_{\bf c}(t)$ and the functions $A$ might depend on these parameters as well.
Bell revisited from the view of quantum theory
==============================================
Consider three spin 1/2 particles that are measured by macroscopic equipment involving three Stern-Gerlach magnets. The wave function of the three particles is not nearer specified and denoted by $\psi_3$. If we denote the measurement outcomes at measurement time $n$ for the three particles with the three respective magnet settings by $A_{\bf a}^n(\psi_3), A_{\bf
b}^n(\psi_3), A_{\bf c}^n(\psi_3)$, then it is easy to show by the laws of quantum theory that the Boole (Bell) inequality [@RAED09a]: $$\begin{aligned}
\langle A_{\bf a}^n(\psi_3)A_{\bf b}^n(\psi_3)\rangle &+& \langle A_{\bf
a}^n(\psi_3)A_{\bf c}^n(\psi_3)\rangle
\nonumber \\
&+& \langle A_{\bf b}^n(\psi_3)A_{\bf
c}^n(\psi_3)\rangle \geq -1 \label{hlm2n1}
,\end{aligned}$$ is fulfilled and we can conclude that we have dealt with the logical elements of Boole and well defined probabilities.
If we consider instead six measurements of pairs of particles that are described by the singlet state $\psi_S$ then we need three different measurement station pairs or one pair of measurement-stations at three different measurement times. For simplicity consider three different measurement-station pairs that we label with indices $n,
m, l$. Correspondingly we also introduce for the measurement outcomes the symbols $A_{\bf a}^n(\psi_S), A_{\bf b}^n(\psi_S);
A_{\bf a}^m(\psi_S), A_{\bf c}^m(\psi_S); A_{\bf b}^l(\psi_S),
A_{\bf c}^l(\psi_S)$. Then quantum theory tells us that for certain magnet settings we may have: $$\begin{aligned}
\langle A_{\bf a}^n(\psi_S)A_{\bf b}^n(\psi_S)\rangle
&+& \langle A_{\bf a}^m(\psi_S)A_{\bf c}^m(\psi_S)\rangle
\nonumber \\
&+& \langle A_{\bf b}^l(\psi_S)A_{\bf c}^l(\psi_S)\rangle < -1 \label{hlm2n2}
,\end{aligned}$$ and we have a violation of an inequality that resembles the Bell type. In this case, however, this does not surprise us because, as long as we have no cyclicity in the expressions of Eq. (\[hlm2n2\]), we obtain only a trivial Boole inequality and, as far as Boole’s or Kolmogorov’s probability is concerned, the right hand side of Eq. (\[hlm2n2\]) might as well be $-3$. Note that the attachment of space-time indices to the variables, which provides a characterization of the experiments in addition to observations such as the magnet settings, always permits the removal of any cyclicity. Quantum theory does not have any concerns about the indices $n,
m, l$ because quantum theory is careful not to assign any meaning to the single outcomes and therefore does not rely on or need a sample space or probability space.
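For concreteness, one may insert the standard quantum-mechanical prediction for singlet pairs, $\langle A_{\bf a}A_{\bf b}\rangle=-\,{\bf a}\cdot{\bf b}$, into the left-hand side of Eq. (\[hlm2n2\]); the short numerical sketch below (an illustration added here, not part of the original text) shows that for three nearly parallel magnet settings the sum approaches $-3$.

```python
import numpy as np

def E(theta_1, theta_2):
    """Singlet correlation for coplanar settings at angles theta_1 and theta_2."""
    return -np.cos(theta_1 - theta_2)

a, b, c = 0.0, 0.05, 0.10      # three almost identical settings (in radians)
lhs = E(a, b) + E(a, c) + E(b, c)
print(lhs)                     # about -2.99, well below the bound -1
```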
A probability as in the frameworks of Boole or Kolmogorov is thus not defined in quantum theory because quantum theory does not define any relations of its framework to single logical elements or elementary events and therefore also cannot provide a measure on general sets or subsets of such elements or events. What is defined in quantum theory are long term averages and these may be related in a variety of ways to the actual logical elements of a theory. The probability amplitude just carries with it all the possibilities that may actually be realized in a set of data, that is, all the possibilities that may be realized as a sample space. For an actual sample space to be realized, other choices must be made that, in principle, have nothing to do with the quantum particles that are measured but only with the macroscopic equipment that is brought into a certain setting for the purpose of measurement. These other choices may again involve sample spaces and probability spaces that, together with the measurement outcomes related to quantum particles, may form complex stochastic processes.
Quantum theory predicts the long term averages of these stochastic processes but does not attempt to unify these processes into one common stochastic process. The Born rule thus attaches positive values to measurement outcomes that are related to certain measurements and preparations and defines in this way what one could call a pre-measure. For all well defined macroscopic equipment arrangements this pre-measure can be turned into a probability measure, with different experimental sequences corresponding, in principle, to different probability measures. Whether or not these different measures and sample spaces can be unified is a matter of characterization. If no unification is possible, as would be indicated by a violation of a Boole (Bell) inequality, then one needs further detail in the characterization of variables in order to remove the cyclicity. That may be achieved either in an Einstein local way or in a non-local fashion. As we saw above, EPR experiments always permit extended characterization by Einstein’s space-time and corresponding avoidance of cyclicity. Nonlocal characterizations that avoid cyclicity are also always possible but not necessary. The only alternative to the above is to abandon realism (whatever we mean by this word) altogether. The examples (counterexamples) with the patient-investigations and the relation of these examples to EPR experiments prove, at least in the opinion of these authors, that neither realism nor Einstein locality need be abandoned because of a violation of Bell’s inequalities.
[^1]: E-mail: k-hess@illinois.edu
[^2]: E-mail: k.michielsen@fz-juelich.de
[^3]: E-mail: h.a.de.raedt@rug.nl
---
abstract: 'The present article is devoted to the construction of a unified formalism for Palatini and unimodular gravity. The basic idea is to employ a relationship between the unified formalism for a Griffiths variational problem and its classical Lepage-equivalent variational problem. As a way to understand from an intuitive viewpoint the Griffiths variational problem approach considered here, we may say that the variations of the Palatini Lagrangian are performed in such a way that the so called *metricity condition*, i.e. (part of) the condition ensuring that the connection is the Levi-Civita connection for the metric specified by the vielbein, is preserved. From the same perspective, the classical Lepage-equivalent problem is a geometrical implementation of the Lagrange multipliers trick, so that the metricity condition is incorporated directly into the Palatini Lagrangian. The main geometrical tools involved in these constructions are canonical forms living on the first jet of the frame bundle for the spacetime manifold $M$. These forms play an essential rôle in providing a global version of the Palatini Lagrangian and expressing the metricity condition in an invariant form; with their help, it was possible to formulate a unified formalism for Palatini gravity in a geometrical fashion. Moreover, we were also able to find the associated equations of motion in invariant terms and, by using previous results from the literature, to prove their involutivity. As a bonus, we showed how this construction can be used to provide a unified formalism for the so called *unimodular gravity* by employing a reduction of the structure group of the principal bundle $LM$ to the special linear group $SL\left(m\right),m=\mathop{\text{dim}}{M}$.'
address: |
Departamento de Matemática\
Universidad Nacional del Sur, CONICET\
8000 Bah[í]{}a Blanca\
Argentina
author:
- 'S. Capriotti'
title: Unified formalism for Palatini gravity
---
[^1]
Introduction
============
The search for a Hamiltonian setting for General Relativity has a long and distinguished history. Among the first works dealing with this problem, we can mention [@PhysRev.79.986; @PhysRev.87.452; @PhysRev.83.1018] and the works of Dirac [@10.2307/100496; @Dirac1958]. A coordinate-free formulation appeared in 1962 with the groundbreaking work of Arnowitt, Deser and Misner (a reprinting of this article can be found in [@citeulike:820116]). For a modern account, see for example [@bojowald2010canonical; @9780521830911; @Thiemann:2007zz] and references therein. However, as these pictures require a $3+1$-decomposition of space-time, they tend to hide the full covariance of the theory.
From a general viewpoint, the so called *multisymplectic description of field theory* was developed as a way to preserve this covariance (see [@dedecker1953calcul; @Kijowski1973; @kijowski79:_sympl_framew_field_theor; @Carinena1991345; @Gotay:2004ib; @helein04:_hamil; @9789810215873; @helein:hal-00599691] and references therein). The formulation of general relativity from this geometrical viewpoint, both in the Lagrangian and the Hamiltonian realm, was carried out in several places (we can mention, for example, [@0264-9381-22-19-016; @Vignolo2006; @doi:10.1063/1.4890555; @Ibort:2016xoo]); a purely multisymplectic formulation was given in [@0264-9381-32-9-095005].
Nevertheless, the singular nature of the Lagrangians associated to field theory requires special techniques in order to successfully construct the Hamiltonian counterpart of a variational problem. A way to overcome these problems is by means of the so called *Skinner-Rusk or unified formalism* [@DeLeon:2002fc; @2004JMP....45..360E; @2014arXiv1402.4087P; @375178]. Relevant features of this formulation are:
- In it, both the velocities and the momenta are present as degrees of freedom, and
- in the underlying variational problem, the variations of the velocities are performed independently of the variations of the fields.
The present work continues the exploration of geometrical formulations of the Lagrangian and Hamiltonian versions of General Relativity (GR from now on). More specifically, it deals with a unified formalism [@1751-8121-40-40-005; @2004JMP....45..360E; @Vitagliano2010857] for the Palatini version of GR, pursuing the work initiated in [@Gaset:2017ahy] for the Einstein-Hilbert action. Also, a novel unified version of *unimodular gravity* is obtained as a by-product of the geometrical tools employed in the article.
Our starting point will be the Griffiths variational problem considered in [@capriotti14:_differ_palat] for Palatini gravity; this variational problem can be considered as an alternative Lagrangian version for the formulation of vacuum GR equations in terms of exterior differential systems, as it is given for example in [@PhysRevD.71.044004]. The method chosen for the construction of the unified formalism comes from the work of Gotay [@GotayCartan; @Gotay1991203], and it consists in the employment of a *classical Lepage-equivalent variational problem* (in the sense defined by Gotay in these works) as a replacement for the Griffiths variational problem at hand. The new problem becomes a particular instance of the $m$-phase space theory in the sense of Kijowski [@Kijowski1973].
Roughly speaking, a Griffiths variational problem is a variational problem in which the variations are selected to preserve a specific relationship between the involved fields; in the particular case of Palatini gravity, the degrees of freedom are a moving frame and a connection, and they can be interpreted as forming an element in the jet bundle of the frame bundle (see Equation and Theorem \[thm:metr-constr-interpretation\] below). In this setting our variations will be performed in such a way that the connection is always the Levi-Civita connection for the metric associated to the vielbein. It will be achieved here by assuming the *metricity condition* as the basic constraint; this condition is equivalent to the annihilation of the so called *nonmetricity tensor*, as it appeared in [@Friedric-1978] (for a more explicit explanation on this, see Section \[sec:null-trace-constr\]). It should be stressed that this condition is weaker than the most common restriction found when working with variational problems on a jet space, namely, the set of constraints imposed on the fields by the *contact structure* (see Section \[sec:geom-prel\]). On the other hand, the classical Lepage-equivalent is a geometrical construction equivalent to the well-known “Lagrange multiplier trick”, where the constraints on the fields are incorporated as terms into the Lagrangian of the theory. For articles employing this trick, see [@Ray1975; @Safko1976; @Friedric-1978], although in these references the components of the metric tensor instead of vielbeins are used as degrees of freedom besides connection variables. Through it, we will be able to find a unified formulation for Palatini and unimodular GR.
In more geometrical terms, the definition of *classical Lepage-equivalent problem* goes as follows: We begin with a bundle $p:F\rightarrow M$ with $m:=\dim{M}$, an $m$-form $\lambda$ on $F$ and a set of restrictions encoded as an exterior differential system ${{\mathcal I}}$ on $F$ [@nkamran2000; @BryantNine; @CartanBeginners; @BCG]; furthermore, it is necessary to admit that ${{\mathcal I}}$ is locally generated by sections of a subbundle $I\subset\wedge^\bullet\left(T^*F\right)$. Under these assumptions, Gotay showed in the previously cited works how to construct another variational problem, its *classical Lepage-equivalent problem*, whose underlying bundle is the affine subbundle of forms $$W_\lambda:=\left(\lambda+I\right)\cap\wedge^m\left(T^*F\right),$$ and where the new Lagrangian form is calculated as the pullback of the canonical $m$-form on $\wedge^m\left(T^*F\right)$ to $W_\lambda$. The degrees of freedom along the fibers of $I$ play the rôle of Lagrange multipliers, and the Lagrangian of the theory is recovered through the pullback construction described above. It is possible to show that in this new variational problem, any extremal of the Lepage-equivalent problem projects onto extremal sections of the original variational problem; in the language used in [@GotayCartan], it is said that the classical Lepage-equivalent problem is *covariant*[^2]. However, there is no general proof of the *contravariance* of a Lepage-equivalent variational problem, namely, the fact that every critical section of the original variational problem can be lifted to a critical section of the Lepage-equivalent, and so it must be proved in each case separately.
A key feature of the previous scheme is that, when $F=J^1\pi$ for some bundle $\pi:E\rightarrow M$, $\lambda$ is a Lagrangian density and the exterior differential system ${{\mathcal I}}$ is the contact structure in $J^1\pi$, the classical Lepage-equivalent problem yields the unified formalism as it is described in [@2004JMP....45..360E].
Thus, in order to set up a suitable unified formalism for Palatini gravity, we will apply the Gotay scheme; this brings us to the problem of constructing a classical Lepage-equivalent problem for the Griffiths variational problem considered in [@capriotti14:_differ_palat], and of showing its contravariance. Part of the present article is devoted to this task. In the final part we will show how to obtain the equations of motion from the formalism, and how the existence of a fundamental volume form gives rise to a unified formalism for unimodular gravity.
The organization of the paper is the following: In Section \[sec:geom-prel\] the geometrical tools and conventions to be employed throughout the article are introduced, as well as the relevant definitions regarding Griffiths variational problems and their classical Lepage-equivalents. The canonical forms on the jet space of the bundle of frames are also introduced in this section. The unified formalism for Palatini gravity is described in Section \[sec:griff-vari-probl\], where a discussion about the metricity constraint is carried out. Working with the equations of motion of the unified formalism requires the choice of a basis for the vertical vector fields of the underlying bundle. Section \[sec:convenient-basis\] is devoted to this end: the selection of a connection on $LM$ allows us to construct a basis on $LM$, and its elements are lifted to the jet space through the canonical lifts. In the first part of Section \[sec:veloc-mult-space\] the existence of a direct product structure on the bundle of forms belonging to the basic bundle of the unified formalism is used to find a basis of vector fields on this bundle, suitable for working with the equations of motion. The equations of motion are explicitly written in the final part of this section, and their involutivity is analysed. Finally, a unified formalism for unimodular gravity is discussed in Section \[sec:mult-unim-grav\].
Geometrical preliminaries {#sec:geom-prel}
=========================
The spacetime manifold will be indicated with $M$, and it will have dimension $m$. Throughout the article, lower case latin indices $i,j,k,l,m,\cdots$ will refer to coordinates in the tensor products of the vector space ${\mathbb{R}}^m$ and its dual; they will run from $1$ to $m$. With this convention in mind, the canonical basis in ${\mathbb{R}}^m$ will be indicated as $\left\{e_i\right\}$, and its dual with $\left\{e^j\right\}$. Greek indices, on the other hand, will refer to indices associated to local coordinates in the spacetime manifold $M$; for this reason, they also will run from $1$ to $m$. Finally, upper case latin indices will be used for the representation of general fiber coordinates. Einstein convention regarding sum over repeated indices will be adopted.
The matrix $$\eta:=
\begin{bmatrix}
1&\cdots&0&0\\
\vdots&\ddots&\vdots&\vdots\\
0&\cdots&1&0\\
0&\cdots&0&-1
\end{bmatrix}\in GL\left(m\right)$$ will set the metric on ${\mathbb{R}}^m$. It should be stressed that there is nothing special in the signature chosen for $\eta$, and that the results reached in the article will work for any other signature.
The matrix $\eta$ determines a Lie algebra $$\mathfrak{u}\left(m-1,1\right):=\left\{A\in\mathfrak{gl}\left(m,{\mathbb{C}}\right):A^\dagger\eta+\eta A=0\right\},$$ which is a compact real form for the complexification $\mathfrak{gl}\left(m,{\mathbb{C}}\right)=\mathfrak{gl}\left(m\right)\otimes_{\mathbb{R}}{\mathbb{C}}$. Another way to define this compact form is through the involution $$F:\mathfrak{gl}\left(m,{\mathbb{C}}\right)\rightarrow\mathfrak{gl}\left(m,{\mathbb{C}}\right):A\mapsto-\eta A^\dagger\eta;$$ the eigenspaces of $F$, associated to the eigenvalues $\pm1$, induce the decomposition $$\mathfrak{gl}\left(m,{\mathbb{C}}\right)=\mathfrak{u}\left(m-1,1\right)\oplus\mathfrak{s}\left(m-1,1\right).$$ This decomposition is the *Cartan decomposition of $\mathfrak{gl}\left(m,{\mathbb{C}}\right)$ associated to the compact real form $\mathfrak{u}\left(m-1,1\right)$*, and descends to $\mathfrak{gl}\left(m\right)\subset\mathfrak{gl}\left(m,{\mathbb{C}}\right)$, namely $$\mathfrak{gl}\left(m\right)={\mathfrak{k}}\oplus{\mathfrak{p}},$$ where $${\mathfrak{k}}:=\mathfrak{u}\left(m-1,1\right)\cap\mathfrak{gl}\left(m\right),\qquad{\mathfrak{p}}:=\mathfrak{s}\left(m-1,1\right)\cap\mathfrak{gl}\left(m\right).$$ Denoting $f:=F_{|\mathfrak{gl}\left(m\right)}$, we have that ${\mathfrak{k}}$ (resp. ${\mathfrak{p}}$) is the eigenspace corresponding to the eigenvalue $+1$ (resp. $-1$) for $f$. The projectors onto each of these eigenspaces become $$\pi_{\mathfrak{k}}\left(A\right):=\frac{1}{2}\left(A-\eta A^T\eta\right),\qquad\pi_{\mathfrak{p}}\left(A\right):=\frac{1}{2}\left(A+\eta A^T\eta\right).$$
There exist some facts related to this decomposition that could be useful when dealing with $\mathfrak{gl}\left(m\right)$-valued forms. First of all, given a manifold $N$ and $\gamma\in\Omega^p\left(N,\mathfrak{gl}\left(m\right)\right)$, we will define $$\gamma_{\mathfrak{k}}:=\pi_{\mathfrak{k}}\circ\gamma,\qquad\gamma_{\mathfrak{p}}:=\pi_{\mathfrak{p}}\circ\gamma;$$ if $\gamma=\gamma^i_jE^j_i$ is the expression of $\gamma$ in terms of the canonical basis of $\mathfrak{gl}\left(m\right)$, then we will have that $$\begin{aligned}
&\left(\gamma_{\mathfrak{k}}\right)^i_j=\frac{1}{2}\left(\gamma^i_j-\eta_{jp}\gamma^p_q\eta^{qi}\right)\\
&\left(\gamma_{\mathfrak{p}}\right)^i_j=\frac{1}{2}\left(\gamma^i_j+\eta_{jp}\gamma^p_q\eta^{qi}\right).\end{aligned}$$ Additional properties for this decomposition can be found in Appendix \[sec:cart-decomp-forms\].
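As a quick sanity check of the projectors $\pi_{\mathfrak{k}}$ and $\pi_{\mathfrak{p}}$ introduced above, the following NumPy sketch (an illustration for $m=4$ with the signature of $\eta$ chosen above; it is not part of the construction itself) verifies the decomposition on a random matrix.

```python
import numpy as np

m = 4
eta = np.diag([1.0, 1.0, 1.0, -1.0])       # the matrix eta for m = 4

def pi_k(A):
    return 0.5 * (A - eta @ A.T @ eta)     # projector onto the +1 eigenspace of f

def pi_p(A):
    return 0.5 * (A + eta @ A.T @ eta)     # projector onto the -1 eigenspace of f

A = np.random.randn(m, m)
K, P = pi_k(A), pi_p(A)

assert np.allclose(K + P, A)               # the decomposition is complete
assert np.allclose(-eta @ K.T @ eta, K)    # f(K) = +K, i.e. K lies in k
assert np.allclose(-eta @ P.T @ eta, -P)   # f(P) = -P, i.e. P lies in p
print("Cartan decomposition verified on a random matrix")
```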
We will make extensive use of jet bundle theory throughout the article, as it is presented in [@saunders89:_geomet_jet_bundl]. Thus, associated to every bundle $\pi:E\rightarrow M$ there exists an affine bundle $J^1\pi$ and maps $\pi_1:J^1\pi\rightarrow M,\pi_{10}:J^1\pi\rightarrow E$ fitting in the following commutative diagram
$$\begin{diagram}
\node{J^1\pi}\arrow[2]{e,t}{\pi_{10}}\arrow{se,b}{\pi_1}\node[2]{E}\arrow{sw,b}{\pi}\\
\node[2]{M}
\end{diagram}$$
The elements of $J^1\pi$ are regarded as linear maps $$j_x^1s:T_xM\rightarrow T_eE$$ such that $T_e\pi\circ j_x^1s=\text{id}_{T_xM}$. Every section $s:M\rightarrow E$ can be lifted to a section $j^1s:M\rightarrow J^1\pi$ through the formula $$j^1s\left(x\right):=T_xs.$$ Sections of $\pi_1$ arising as lifts of sections of $\pi$ are called *holonomic sections*.
The *contact structure on $J^1\pi$* is the ideal in $\Omega^\bullet\left(J^1\pi\right)$ locally generated by the contact forms $$du^A-u^A_\mu dx^\mu$$ and their differentials. According to the next result, it fully characterizes the holonomic sections of $\pi_1$.
\[prop:HolonomicSections\] A section $\sigma$ of $\pi_1$ is holonomic if and only if $$\sigma^*\left(du^A-u^A_\mu dx^\mu\right)=0.$$
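Proposition \[prop:HolonomicSections\] can be illustrated with a small symbolic computation. The sketch below (a toy example with base ${\mathbb{R}}^2$ and a single field, added here only for illustration) checks that the contact form pulls back to zero along the lift $j^1s$ of an arbitrary section.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
s = sp.sin(x1) * sp.exp(x2)                  # an arbitrary section u = s(x1, x2)

# components of the lifted section j^1 s: (u, u_1, u_2)
u, u_1, u_2 = s, sp.diff(s, x1), sp.diff(s, x2)

# pullback of the contact form du - u_1 dx1 - u_2 dx2 along j^1 s,
# written out in its dx1 and dx2 components
comp_dx1 = sp.simplify(sp.diff(u, x1) - u_1)
comp_dx2 = sp.simplify(sp.diff(u, x2) - u_2)
print(comp_dx1, comp_dx2)                    # both components vanish identically
```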
In the construction of the classical Lepage-equivalent problem associated to a given Griffiths variational problem it will be also necessary to have at our disposal some facts regarding spaces of forms on a fiber bundle $\pi:E\rightarrow M$. For every $k<l$, the set of $k$-horizontal $l$-forms on $E$ is defined by $$\left.\wedge^l_k\left(T^*E\right)\right|_e:=\left\{\alpha\in\wedge^l\left(T^*_eE\right):V_1\lrcorner\cdots\lrcorner V_k \lrcorner\alpha=0\quad\forall V_1,\cdots,V_k\in V_e\pi\right\}.$$ We will indicate with $$i_k^l:\wedge^l_k\left(T^*E\right)\hookrightarrow\wedge^l\left(T^*E\right)$$ the canonical immersions. The canonical $m$-form $\Theta$ on $\wedge^m\left(T^*E\right)$ is defined by the expression $$\left.\Theta\right|_{\alpha_e}\left(Z_1,\cdots,Z_m\right):=\alpha_e\left(T_{\alpha_e}\overline{\tau}\left(Z_1\right),\cdots,T_{\alpha_e}\overline{\tau}\left(Z_m\right)\right)$$ Further properties of this form can be found in Appendix \[sec:canonical-k-form\]. We will also set $$\Theta_2:=\left(i_2^m\right)^*\Theta,\qquad\Omega_2:=-d\Theta_2.$$
In order to find local expressions let us choose adapted local coordinates $\left(x^\mu,u^A\right)$ on $U\subseteq E$. Then we have local coordinates $\left(x^\mu,u^A,p,p_A^\mu\right)$ on $\left(\overline{\tau}^m_E\right)^{-1}\left(U\right) \subseteq\wedge^m_2\left(T^*E\right)$ given by the condition $\gamma\in\left(\overline{\tau}^m_E\right)^{-1}\left(U\right)$ if and only if $$\gamma=pd^mx+p_A^\mu du^A\wedge d^{m-1}x_\mu$$ where $$d^mx:=dx^1\wedge\cdots\wedge dx^m,\qquad d^{m-1}x_\mu:=\partial_\mu\lrcorner d^mx,\qquad\partial_\mu\equiv\frac{\partial}{\partial x^\mu}.$$ In terms of these coordinates one has $$\begin{aligned}
\Theta_2 \left(x^\mu,u^A,p,p_A^\mu\right) &=pd^mx+p_A^\mu du^A\wedge d^{m-1}x_\mu,\\
\Omega_2 \left(x^\mu,u^A,p,p_A^\mu\right)&=-dp\wedge d^mx-dp_A^\mu\wedge du^A\wedge d^{m-1}x_\mu.\end{aligned}$$
Griffiths variational problems {#sec:teoria-lagrangiana}
------------------------------
These kinds of variational problems were considered by Griffiths in [@book:852048], and have been employed in geometry [@hsu92:_calcul_variat_griff; @sabau_shibuya_2016; @10.2307/2374654] and mathematical physics [@0264-9381-24-22-005; @makhmali16:_differ]. The essential data for the construction of this version of variational theory are the following:
- A fiber bundle $p:F\rightarrow M$.
- An $m$-form $\lambda\in\Omega^m\left(F\right)$, the *Lagrangian form*.
- An exterior differential system (EDS from now on) ${{\mathcal I}}\in\Omega^\bullet\left(F\right)$, that is, an ideal in the exterior algebra of $F$ that is closed by exterior differentiation.
For first order Lagrangian field theory, the bundle $F$ is set to be the jet bundle $J^1\pi$ associated to a bundle $\pi:E\rightarrow M$, the Lagrangian form is in general a horizontal form $\lambda=L\pi_1^*\nu$, where $L\in C^\infty\left(J^1\pi\right)$ and $\nu$ is a volume form on $M$. In this case, the EDS becomes the contact structure of $J^1\pi$.
The *Griffiths variational problem associated to the data $\left(F,\lambda,{{\mathcal I}}\right)$* consists in finding a section $\sigma:M\rightarrow F$ stationary for the action $$S\left[\sigma\right]:=\int_M\sigma^*\lambda$$ which is an integral section of ${{\mathcal I}}$, namely, such that $$\sigma^*\alpha=0$$ for every $\alpha\in{{\mathcal I}}$. A section fulfilling these requirements for a given variational problem is also called *critical*.
We will take the existence of these integrals for granted.
For first order Lagrangian field theory, Proposition \[prop:HolonomicSections\] implies that a section of $J^1\pi$ will be integral for the contact structure if and only if it is holonomic. Thus the associated variational problem will translate into finding the sections $s:M\rightarrow E$ such that their lifts $j^1s:M\rightarrow J^1\pi$ are stationary for the action integral $$S\left[s\right]=\int_M\left(j^1s\right)^*\left(L\pi_1^*\nu\right)=\int_ML\left(j^1s\right)\nu,$$ which is Hamilton’s principle [@Gotay:2004ib; @campos10:_geomet_method_class_field_theor_contin_media] for this kind of field theory.
Unified formalism for a Griffiths variational problem {#sec:form-hamilt-restr}
-----------------------------------------------------
In order to construct a unified formalism for a given Griffiths variational problem $\left(F,\lambda,{{\mathcal I}}\right)$, we need to assume that the EDS ${{\mathcal I}}$ is locally generated by the set of sections of a vector subbundle $I\subset\wedge^\bullet\left(T^*F\right)$; it means that we can find an open cover $\left\{U_\alpha\right\}$ of $F$ such that the pullback of ${{\mathcal I}}$ to each of the elements $U_\alpha$ of the cover is generated by sections of $I_{|U_\alpha}$. The following definitions are quoted from [@GotayCartan].
Let us fix some integer $k$ such that $$\left.\lambda\right|_p\in\wedge^m_k\left(T^*_pF\right),\qquad I\subset\wedge^m_k\left(T^*F\right).$$ We define the affine subbundle $W_\lambda\subset\wedge^m\left(T^*F\right)$ with the formula $$\left.W_\lambda\right|_p:=\left.\lambda\right|_p+I^m_p,$$ where $I^m_p:=I\cap\wedge_k^m\left(T^*_pF\right)$ is the fiber composed of the $k$-horizontal $m$-forms of $I$ at $p\in F$. Also, we define the $m$-form $\Theta_\lambda$ as the pullback of the canonical $m$-form $\Theta\in\Omega^m\left(\wedge^m\left(T^*F\right)\right)$ to $W_\lambda$, and $$\Omega_\lambda:=d\Theta_\lambda.$$
We will indicate with $\overline{\tau}_\lambda:W_\lambda\rightarrow F$ the canonical projection of this subbundle of forms.
The *classical Lepage-equivalent variational problem associated to $\left(F,\lambda,{{\mathcal I}}\right)$* is the variational problem $\left(W_\lambda,\Theta_\lambda,0\right)$.
The following theorem allows us to write down the equations of motion for a Lepage-equivalent variational problem.
\[Thm:HamJac\] For a variational problem $\left(W_\lambda,\Theta_\lambda,0\right)$ the following statements are equivalent.
1. \[Thm1\] $\sigma:M\rightarrow W_\lambda$ is critical section for the action $$\widetilde{S}\left[\sigma\right]:=\int_M\sigma^*\Theta_\lambda.$$
2. \[Thm2\] $\displaystyle\sigma^*\left(Z\lrcorner\Omega_\lambda\right)=0$ for every $Z\in\mathfrak{X}^{V\left(p\circ\overline{\tau}_\lambda\right)}\left(W_\lambda\right).$
3. \[Thm3\] $\displaystyle\sigma^*\left(Z\lrcorner\Omega_\lambda\right)=0$ for every $Z\in\mathfrak{X}\left(W_\lambda\right).$
It should be noted that the equations of motion of a classical Lepage-equivalent variational problem are easier to write down than the equations of motion of the original problem, because the latter involves the EDS ${{\mathcal I}}$, which restricts the allowed sections in a nontrivial manner, whereas the former is a variational problem with this EDS set to $0$. Nevertheless, there is in principle no relationship between the critical sections of these variational problems. The following result partially fills this gap.
Any critical section for $\left(W_\lambda,\Theta_\lambda,0\right)$ projects onto a critical section of $\left(F,\lambda,{{\mathcal I}}\right)$.
For a proof, see [@GotayCartan].
So, it remains to determine whether every critical section of the original variational problem $\left(F,\lambda,{{\mathcal I}}\right)$ can be lifted to a critical section of $\left(W_\lambda,\Theta_\lambda,0\right)$; there is no general result regarding this problem, so it is necessary to prove it in each case separately. This fact deserves a proper name.
\[def:contravariant-Var-prob\] A Lepage-equivalent problem is *contravariant* if every critical section of the original variational problem has a lift to a critical section.
The classical Lepage-equivalent variational problem associated to the variational problem of first order field theory $\left(J^1\pi,L\nu,{{\mathcal I}}_{\text{con}}\right)$, where ${{\mathcal I}}_{\text{con}}$ is the contact structure on $J^1\pi$, gives us a unified formalism for first order field theory. In fact, setting $k=2$ in the above formalism, we will have in this case that $\rho\in W_{L\eta}$ if and only if $$\rho=L\nu+p_A^\mu\left(du^A-u_\nu^A dx^\nu\right)\wedge\nu_\mu,$$ for some numbers $p_A^\mu$, where $\nu_\mu:=\partial_\mu\lrcorner\nu$; therefore, the map $$\rho\mapsto\left(x^\mu,u^A,p,p_A^\mu\right)$$ induces coordinates on $W_{L\eta}$. Furthermore, it can be proved that $$W_{L\eta}\simeq J^1\pi\times_{E}\wedge^m_2\left(T^*E\right)$$ as bundles on $E$; this is the velocity-multimomentum bundle involved in the usual unified formalism for first order field theories [@2004JMP....45..360E]. In this context Theorem \[Thm:HamJac\] provides a variational formulation for the unified formalism, in the same way as [@doi:10.1142/S0219887815600191] does for the case of second order field theories. This formulation has been successfully applied to the symmetry reduction of differential equations associated to variational problems, from the EDS viewpoint [@1751-8121-45-6-065202; @Morando2012].
Geometric structures on the jet space of the frame bundle {#sec:geom-struct-jet}
---------------------------------------------------------
Before introducing our version of the unified formalism, it is necessary to point out some interesting geometric structures associated to the jet bundle of the frame bundle of a given manifold $M$ [@springerlink:10.1007/PL00004852; @MR0315624; @brajercic04:_variat].
In the case of the Griffiths variational problem describing Palatini gravity, the underlying bundle is $J^1\tau$, shown in the following diagram $$\label{eq:DiagramJetConnectionBundles}
\begin{diagram}
\node[2]{J^1\tau}\arrow[2]{s,l}{\tau_1}\arrow{se,t}{\tau_{10}}\arrow{sw,t}{p^{J^1\tau}_{GL\left(m\right)}}\\
\node{C\left(LM\right)}\arrow{se,b}{\overline\tau}\node[2]{LM}\arrow{sw,b}{\tau}\\
\node[2]{M}
\end{diagram}$$ The novelty in our approach rests in the fact that it uses canonical structures of the $GL\left(m\right)$-principal bundle $$p^{J^1\tau}_{GL\left(m\right)}:J^1\tau\rightarrow J^1\tau/GL\left(m\right)=:C\left(LM\right)$$ on the bundle of connections $C\left(LM\right)$ for the formulation of the underlying variational problem, and thus for the description of the unified formalism. To this end, it is convenient to recall the identification $$\label{eq:JetToConnectionPlusFrame}
J^1\tau\simeq C\left(LM\right)\times_M LM$$ induced by the map $$j_x^1s\mapsto\left(\left[j_x^1s\right],s\left(x\right)\right).$$ Under this diffeomorphism, the projection $\tau_{10}$ reads $$\tau_{10}\left(\Gamma,u\right)=u$$ and the $GL\left(m\right)$-action becomes $$\left(\Gamma,u\right)\cdot g=\left(\Gamma,u\cdot g\right).$$ According to [@springerlink:10.1007/PL00004852], the projection onto the first factor $$\text{pr}_1:\left(\Gamma,u\right)\mapsto\Gamma$$ gives $J^1\tau$ a $GL\left(m\right)$-principal bundle structure, and there exists a canonical[^3] form $\omega\in\Omega^1\left(J^1\tau,\mathfrak{gl}\left(m\right)\right)$, which becomes a connection form in this bundle. In fact, we have that $$\label{eq:CanonicalConnection}
\left.\omega\right|_{j_x^1s}=\left[T_{j_x^1s}\tau_{10}-T_xs\circ T_{j_x^1s}\tau_1\right]_{\mathfrak{gl}\left(m\right)},$$ where $\left[\cdot\right]_{\mathfrak{gl}\left(m\right)}$ denotes the identification $V\tau\sim LM\times\mathfrak{gl}\left(m\right)$.
Using the canonical basis $\left\{E_k^l\right\}$ of $\mathfrak{gl}\left(m\right)$, determined by $$\left(E^k_l\right)^i_j=\delta^i_l\delta_j^k,$$ there exists a set of $1$-forms $\left(\omega^i_j\right)$ on $J^1\tau$ such that $$\omega=\omega^i_jE^j_i.$$
Also, we can pull back the $1$-form $\theta=\theta^ie_i\in\Omega^1\left(LM,{\mathbb{R}}^m\right)$ along $\tau_{10}:J^1\tau\rightarrow LM$, obtaining an ${\mathbb{R}}^m$-valued $1$-form on $J^1\tau$, which will be indicated with the same symbol $\theta$. This form allows us to define the *canonical or universal torsion* on $J^1\tau$, according to the usual formula $$T^i:=d\theta^i+\omega^i_k\wedge\theta^k$$ for the exterior covariant derivative of a tensorial form with respect to the canonical connection.
For every principal connection $\Gamma$, considered as a section $\Gamma:M\rightarrow C\left(LM\right)$ of the bundle of connections, we can associate an equivariant section $\sigma_\Gamma:LM\rightarrow J^1\tau$ by means of the Diagram (see also Appendix \[App:LocalExpressions\] for the explicit definition.) It allows us to construct the forms $$\omega_\Gamma:=\sigma_\Gamma^*\omega,\qquad T_\Gamma:=\sigma_\Gamma^*T$$ on $LM$, having immediate geometrical interpretation.
([@springerlink:10.1007/PL00004852]) The forms $\omega_\Gamma\in\Omega^1\left(LM,\mathfrak{gl}\left(m\right)\right)$ and $T_\Gamma\in\Omega^2\left(LM,{\mathbb{R}}^m\right)$ are the connection form and the torsion form, respectively, of the principal connection $\Gamma$.
The unified formalism for Palatini gravity {#sec:griff-vari-probl}
==========================================
We are now ready to define a unified formalism for vacuum GR with vielbein. From the discussion of Section \[sec:form-hamilt-restr\], we know that a way to do it involves a two-step procedure, namely:
1. To set a Griffiths variational problem for this theory, and
2. to construct the classical Lepage-equivalent variational problem for this variational problem, proving also that it is a contravariant Lepage-equivalent in the sense of Definition \[def:contravariant-Var-prob\].
As we know, a Griffiths variational problem describing vacuum Palatini gravity exists [@capriotti14:_differ_palat]. In short, it is the variational problem specified by the triple $$\left(J^1\tau\rightarrow M,{{\mathcal L}}_{PG},{{\mathcal I}}_{PG}\right)$$ where ${{\mathcal L}}_{PG}=\eta^{kl}\theta_{kp}\wedge\left(d\omega^p_l+\omega^p_q\wedge\omega^q_l\right)$ and ${{\mathcal I}}_{PG}$ is generated by the set of forms $$\label{eq:MetricityGenerators}
\mathcal{G}:=\left\{\eta^{ip}\omega^j_p+\eta^{jp}\omega^i_p\right\}.$$
By taking the subbundle $G_{PG}$ in $\wedge^m_2\left(J^1\tau\right)$ such that $$\left.G_{PG}\right|_{j_x^1s}:={\mathbb{R}}\left<\left.\alpha\right|_{j_x^1s}:\alpha\in\mathcal{G}\right>,$$ we see that the assignment $$\left.W_{{\mathcal L}}\right|_{j_x^1s}:=\left.{{\mathcal L}}_{PG}\right|_{j_x^1s}+\left.G_{PG}\right|_{j_x^1s},$$ for every $j_x^1s\in J^1\tau$, defines the bundle we were looking for; the canonical projection will be denoted by $\overline{\tau}_{{\mathcal L}}:W_{{\mathcal L}}\rightarrow J^1\tau$.
Strictly speaking, the Griffiths variational problem we are dealing with here is not the variational problem considered in [@capriotti14:_differ_palat], because we are discarding the torsion constraints in the definition of the EDS ${{\mathcal I}}_{PG}$ adopted here. The reason for doing so is that we will ultimately find that the new variational problem also reproduces the vacuum GR equations of motion (see Equation below), and in this way we avoid dealing with a constraint that does not live in the exterior algebra (when considered on $J^1\tau$, these constraints are equivalent to the annihilation of a set of functions on $J^1\tau$; see Lemma \[lem:metricity-and-torsion\] below).
Finally, we will define the $m$-form $$\lambda_{PG}\in\Omega^m\left(W_{{\mathcal L}}\right)$$ by pulling back the canonical $m$-form $\Theta_2\in\Omega^m\left(\wedge^m_2\left(J^1\tau\right)\right)$ to this subbundle.
The *unified formalism for Palatini gravity* is the variational problem associated to the set of data $$\left(W_{{\mathcal L}},\lambda_{PG},0\right).$$
Therefore, from Theorem \[Thm:HamJac\] it results that a section $\sigma:M\rightarrow W_{{\mathcal L}}$ is a *solution for this unified formalism* if and only if $$\sigma^*\left(Z\lrcorner d\lambda_{PG}\right)=0$$ for any vertical vector field $Z\in\mathfrak{X}^{V\left(\tau_1\circ\overline{\tau}_{{\mathcal L}}\right)}\left(W_{{\mathcal L}}\right)$; it tells us that it becomes crucial to construct a basis for this set of vertical vector fields in order to find the corresponding equations of motion. This question will be considered in Section \[sec:convenient-basis\].
The metricity constraints {#sec:null-trace-constr}
-------------------------
We want to give an interpretation of the main constraints we are imposing on a solution of the equations of motion in terms of a coordinate chart $\tau_1^{-1}\left(U\right)$ with coordinates $\left(x^\mu,e^\rho_k,e^\mu_{k\nu}\right)$. Some additional properties related to these coordinates are listed in Appendix \[App:LocalExpressions\].
First, it is necessary to recall that $C\left(LM\right)$ is an affine bundle modelled on $T^*M\otimes\text{ad}\left(LM\right)$, where $\text{ad}\left(LM\right)$ is the adjoint bundle of $LM$, which is the bundle associated to the principal bundle $LM$ through the adjoint action of $GL\left(m\right)$ on itself. Thus if $\Gamma_1,\Gamma_2$ are connections at the same point $x\in M$, then we have that $$\Gamma_1-\Gamma_2\in T^*M\otimes\text{ad}\left(LM\right).$$ On a coordinate open set $U\subset M$, the set of vector fields $$\left(x^\mu,e^\nu_k\right)\mapsto W_\nu^\mu\left(x^\mu,e^\nu_k\right):=e^\mu_ke^l_\nu\left(E^k_l\right)_{LM}\left(x^\mu,e^\nu_k\right)=-e^l_\mu\frac{\partial}{\partial e_l^\nu}$$ generates $\text{gau}\left(LU\right)$, the set of $\tau$-vertical $GL\left(m\right)$-invariant vector fields.
Additionally, we have that the set of local sections $\Gamma\left(U,\text{ad}\left(LM\right)\right)$ can be identified with $\text{gau}\left(LU\right)$; then, every $\Gamma\in\left(C\left(LM\right)\right)_x,x\in U$ can be written as $$\Gamma:\frac{\partial}{\partial x^\mu}\mapsto\frac{\partial}{\partial x^\mu}+A_{\sigma\mu}^\rho\left(\Gamma\right)W_\rho^\sigma$$ for a uniquely determined set of real numbers $\left\{A_{\sigma\mu}^\rho\left(\Gamma\right)\right\}$. The maps $$A_{\sigma\mu}^\rho:\Gamma\mapsto A_{\sigma\mu}^\rho\left(\Gamma\right)$$ induce a set of coordinates on the fibres of $\left.C\left(LM\right)\right|_U\rightarrow U$.
On the other hand, the identification $\Gamma\left(U,\text{ad}\left(LM\right)\right)\simeq\text{gau}\left(LU\right)$ implies that the canonical projection $p^{TLM}_{GL\left(m\right)}:TLM\rightarrow TLM/GL\left(m\right)$ is such that $$p^{TLM}_{GL\left(m\right)}\circ W_\mu^\nu=W_\mu^\nu.$$
Then if $j_x^1s=\left(x^\mu,e^\nu_k,e^\sigma_{l\rho}\right)$ in the adapted coordinates, we have that $$j_x^1s:\frac{\partial}{\partial x^\mu}\mapsto\frac{\partial}{\partial x^\mu}+e^\sigma_{k\mu}\frac{\partial}{\partial e^\sigma_k}=\frac{\partial}{\partial x^\mu}-e^k_\nu e^\sigma_{k\mu}W^\nu_\sigma$$ and so $$\left[j_x^1s\right]:\frac{\partial}{\partial x^\mu}\mapsto\frac{\partial}{\partial x^\mu}-e^k_\nu e^\sigma_{k\mu}W^\nu_\sigma.$$ Therefore, the projection $p^{J^1\tau}_{GL\left(m\right)}:J^1\tau\rightarrow C\left(LM\right)$ reads $$p_{GL\left(m\right)}^{J^1\tau}\left(x^\mu,e^\nu_k,e^\sigma_{l\rho}\right)=\left(x^\mu,-e^k_\mu e^\sigma_{k\rho}\right).$$
We will indicate the functions $\left(p^{J^1\tau}_{GL\left(m\right)}\right)^*A^\mu_{\nu\sigma}$ on $J^1\tau$ with the same symbol $A^\sigma_{\mu\nu}$.
Let us denote by $\left(E_k^l\right)_{J^1\tau}\in\mathfrak{X}\left(J^1\tau\right)$ the fundamental vector field associated to $E_k^l\in\mathfrak{gl}\left(m\right)$. Then $$\left(E_k^l\right)_{J^1\tau}=\left(\left(E_k^l\right)_{LM}\right)^1=-e_k^\mu\frac{\partial}{\partial e_l^\mu}-e^\sigma_{k\mu}\frac{\partial}{\partial e^\sigma_{l\mu}}$$ where $\left(\cdot\right)^1$ indicates the complete lift of a vector field to $J^1\tau$.
Let $K\subset GL\left(m\right)$ be the compact group whose Lie algebra is ${\mathfrak{k}}$. The previous identification allows us to establish the bundle isomorphism $$\label{eq:QuotientIdentifyMetrics}
J^1\tau/K\simeq C\left(LM\right)\times_M LM/K=C\left(LM\right)\times_M\Sigma,$$ where $\left[\tau\right]:\Sigma\rightarrow M$ is the *bundle of metrics of signature $\left(m-1,1\right)$ on $M$*; therefore, we recall the following result from [@capriotti14:_differ_palat].
\[lem:metricity-and-torsion\] The functions $$\widetilde{g}^{\mu\nu}:=\eta^{ij}e_i^\mu e_j^\nu$$ are $K$-invariant, and together with the functions $A^\mu_{\nu\sigma}$ induce coordinates $\left(x^\mu,g^{\mu\nu},A^\sigma_{\mu\nu}\right)$ on $J^1\tau/K$. In term of them, the metricity condition becomes $$\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i=e^i_\mu e^j_\nu\left[d\widetilde{g}^{\mu\nu}+\left(\widetilde{g}^{\mu\sigma}A^\nu_{\gamma\sigma}+\widetilde{g}^{\nu\sigma}A^\mu_{\gamma\sigma}\right)dx^\gamma\right],$$ and the universal torsion reads $$T^i=e^i_\sigma A^\sigma_{\mu\nu}dx^\mu\wedge dx^\nu.$$
This result permits us to give sense to the metricity condition.
\[thm:metr-constr-interpretation\] Let $s:M\rightarrow J^1\tau$ be a section of the jet bundle projection $\tau_1:J^1\tau\rightarrow M$ such that it annihilates the forms $\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i$ and $T^i$. Then its quotient section $\left[s\right]:M\rightarrow J^1\tau/K$, viewed through the identification , consists of a metric of signature $\left(1,m-1\right)$ together with the corresponding Levi-Civita connection.
\[rem:Signature-independence\] The validity of the previous theorem is independent of the signature of the matrix $\eta$; the signature in the statement was chosen for its relationship with the usual signature found in general relativity.
The metricity form $Q^{ij}:=\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i$ corresponds, in our approach, to what was called the *nonmetricity tensor* in [@Friedric-1978].
Some consequences of the structure equations {#sec:structure-equations}
--------------------------------------------
We will gather some identities from [@capriotti14:_differ_palat], useful when dealing with differentials of the canonical forms we work with here. First, we have the *structure equations* $$\begin{aligned}
&d\omega^i_j+\omega^i_k\wedge\omega^k_j=\Omega^i_j\\
&d\theta^i+\omega^i_k\wedge\theta^k=T^i,\end{aligned}$$ and its differential consequences, the so called *Bianchi identities* $$\begin{aligned}
&d\Omega^p_l=\Omega^p_r\wedge\omega^r_l-\omega^p_r\wedge\Omega^r_l\\
&d T^k=\Omega^k_l\wedge\theta^l-\omega^k_l\wedge T^l.\end{aligned}$$ Additionally, $$\begin{aligned}
&d\theta_i=\omega^l_i\wedge\theta_l-\omega^p_p\wedge\theta_i+T^l\wedge\theta_{li}\\
&d\theta_{kp}=\omega^q_k\wedge\theta_{qp}-\omega^q_p\wedge\theta_{qk}-\omega^s_s\wedge\theta_{kp}+T^q\wedge\theta_{kpq}\\
&d\theta_{ipq}=\omega^k_i \wedge\theta_{kpq}+\omega^k_p \wedge\theta_{kqi}+\omega^k_q \wedge\theta_{kip}-\omega_s^s \wedge\theta_{ipq}+T^k \wedge\theta_{ipqk}.\end{aligned}$$
We will use these structure equations in order to calculate the differential of the Palatini Lagrangian ${{\mathcal L}}_{PG}$. In fact, we have that $$\begin{aligned}
\label{eq:PalatiniLagrangianDiff}
d{{\mathcal L}}_{PG}&=d\left(\eta^{kl}\theta_{kp}\wedge\Omega^p_l\right)\cr
&=\eta^{kl}d\theta_{kp}\wedge\Omega^p_l+\left(-1\right)^m\eta^{kl}\theta_{kp}\wedge d\Omega^p_l\cr
&=\eta^{kl}\left(\omega^q_k\wedge\theta_{qp}-\omega^q_p\wedge\theta_{qk}-\omega^s_s\wedge\theta_{kp}+T^q\wedge\theta_{kpq}\right)\wedge\Omega^p_l+\left(-1\right)^m\eta^{kl}\theta_{kp}\wedge\left(\Omega^p_r\wedge\omega^r_l-\omega^p_r\wedge\Omega^r_l\right)\cr
&=\big(\eta^{kl}\omega^q_k\wedge\theta_{qp}-\eta^{kl}\omega^q_p\wedge\theta_{qk}-\eta^{kl}\omega^s_s\wedge\theta_{kp}+\eta^{kl}T^q\wedge\theta_{kpq}\big)\wedge\Omega^p_l+\cr
&\qquad\qquad\qquad\qquad\qquad\qquad+\left(-1\right)^m\left(\eta^{kr}\theta_{kp}\wedge\omega^l_r\wedge\Omega^p_l-\eta^{kl}\theta_{kr}\wedge\omega^r_p\wedge\Omega^p_l\right)\cr
&=\big(\eta^{kl}\omega^q_k\wedge\theta_{qp}-\eta^{kl}\omega^q_p\wedge\theta_{qk}-\eta^{kl}\omega^s_s\wedge\theta_{kp}+\eta^{kl}T^q\wedge\theta_{kpq}\big)\wedge\Omega^p_l+\cr
&\qquad\qquad\qquad\qquad\qquad\qquad+\left(\eta^{qk}\omega^l_k\wedge\theta_{qp}\wedge\Omega^p_l-\eta^{kl}\omega^q_p\wedge\theta_{kq}\wedge\Omega^p_l\right)\cr
&=\big[\left(\eta^{kl}\omega^q_k+\eta^{qk}\omega^l_k\right)\wedge\theta_{qp}-\eta^{kl}\omega^s_s\wedge\theta_{kp}+\eta^{kl}T^q\wedge\theta_{kpq}\big]\wedge\Omega^p_l\end{aligned}$$
Also, there is a consequence of the first Bianchi identity that will become important later.
For the canonical torsion and the curvature of the canonical connection on $J^1\tau$, the following identity $$\left(d T^k+\omega^k_l\wedge T^l\right)\wedge\theta_{ikp}=\Omega^k_p\wedge\theta_{ik}-\Omega^k_i\wedge\theta_{pk}-\Omega^l_l\wedge\theta_{ip}$$ holds.
From the structure equation $$d T^k+\omega^k_l\wedge T^l=\Omega^k_l\wedge\theta^l$$ and multiplying both sides by $\theta_{ikp}$, we obtain $$\left(d T^k+\omega^k_l\wedge
T^l\right)\wedge\theta_{ikp}=\Omega^k_l\wedge\theta^l\wedge\theta_{ikp}.$$ Now, we can use that $\theta^l\wedge\theta_{ikp}=\delta^l_p\theta_{ik}-\delta^l_k\theta_{ip}+\delta^l_i\theta_{kp}$ and so $$\begin{aligned}
\left(d T^k+\omega^k_l\wedge T^l\right)\wedge\theta_{ikp}&=\Omega^k_l\wedge\theta^l\wedge\theta_{ikp}\cr
&=\Omega^k_l\wedge\left(\delta^l_p\theta_{ik}-\delta^l_k\theta_{ip}+\delta^l_i\theta_{kp}\right)\cr
&=\Omega^k_p\wedge\theta_{ik}-\Omega^k_i\wedge\theta_{pk}-\Omega^l_l\wedge\theta_{ip}.\qedhere
\end{aligned}$$
The following consequence will be useful when dealing with the involutivity of the equations of motion (see Section \[sec:constraint-algorithm\]).
\[cor:FirstBianchiConsequence\] If $T=0$ and $\Omega$ is ${\mathfrak{k}}$-valued, then $$\Omega^k_p\wedge\theta_{ik}=\Omega^k_i\wedge\theta_{pk}.$$
A convenient basis {#sec:convenient-basis}
==================
According to Theorem \[Thm:HamJac\], in order to be able to write down the equations of motion for the unified formalism just constructed, it is necessary to contract arbitrary vertical vector fields with the differential of the form $\lambda_{PG}$ on $W_{{\mathcal L}}$. In this section we will define a basis for $TJ^1\tau$ that will allow us, in Section \[sec:equations-motion\], to carry out this procedure. The existence of this basis is tied to the choice of a connection on the frame bundle $LM$; nevertheless, since only vertical vector fields enter the calculation of the equations, the resulting equations are independent of this choice.
The construction of this basis goes as follows: The chosen connection is employed in the definition of a basis on $LM$. After that, using the canonical lifts and other tools at our disposal in any jet bundle [@saunders89:_geomet_jet_bundl], we will be able to find a basis on $J^1\tau$. Finally, and using an identification of the velocity-multimomentum bundle with a product bundle involving $J^1\tau$ (see Section \[sec:veloc-mult-space-1\]), it will be possible to use this basis in the calculation of the equations of motion.
A word of caution should be said here: as we said before, the equations of motion for the unified formalism can be written using only the vertical part of the basis to be built in the present section. However, we feel that the construction of a full basis on $J^1\tau$ could be of interest to the reader, even if it makes the article a little longer and harder to read.
Basis on $LM$ {#sec:basis-lm}
-------------
Let us fix a linear connection $\sigma\in\Omega^1\left(LM,\mathfrak{gl}\left(m\right)\right)$; we must also recall the construction of the *standard horizontal vector field corresponding to* $\xi\in{\mathbb{R}}^m$ [@KN1]. This vector field $B\left(\xi\right)$ is characterized by two properties:
- It is a horizontal vector field, and
- $T_u\tau\left(B\left(\xi\right)\right)=u\left(\xi\right)$ for every $u\in LM$.
Moreover, let $\left\{E_i^j:i,j=1,\cdots,m\right\}$ be a basis for $\mathfrak{gl}\left(m\right)$ given by the formula $$\left(E_k^l\right)_i^j=\delta^l_i\delta^j_k;$$ as we said above, the symbol $\left(E^j_i\right)_{LM}\in\mathfrak{X}\left(LM\right)$ refers to the infinitesimal generator associated to the element $E^j_i$. The set $\left\{B\left(e_i\right),\left(E_i^j\right)_{LM}\right\}$ is a basis of vector fields on $LM$.
Adapted coordinates calculations {#sec:adapt-coord-calc}
--------------------------------
Let us calculate the previous vector fields in an adapted coordinate chart $\left(x^\mu,e^\mu_i\right)$ on $LM$ [@KN1; @brajercic04:_variat]. The connection form in these coordinates becomes $$\sigma=e^i_\mu\left(de^\mu_j+\Xi^\mu_{\sigma\nu}e^\nu_jdx^\sigma\right)E_i^j$$ for some functions $\left\{\Xi^\mu_{\sigma\nu}\right\}$ defined on the domain of the coordinates $\left(x^\mu\right)$. Therefore, if $$B\left(e_i\right):=B^\mu_i\frac{\partial}{\partial x^\mu}+B^\mu_{ik}\frac{\partial}{\partial e^\mu_k},$$ the projectability condition, namely $$T_u\tau\left(B\left(e_i\right)\right)=u\left(e_i\right),$$ reduces to $$B^\mu_i=e^\mu_i$$ for all $\mu,i=1,\cdots,m$. On the other hand, the horizontality condition $\sigma\left(B\left(e_i\right)\right)=0$ becomes $$\begin{aligned}
0&=\sigma\left(B\left(e_i\right)\right)\\
&=e_\mu^l\left(B_{ij}^\mu+\Xi^\mu_{\sigma\nu}e^\nu_je^\sigma_i\right);\end{aligned}$$ it means that the standard horizontal vector fields have the local expressions $$B\left(e_i\right)=e_i^\mu\left(\frac{\partial}{\partial x^\mu}-e^\nu_j\Xi^\sigma_{\mu\nu}\frac{\partial}{\partial e^\sigma_j}\right).$$
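The cancellation responsible for the horizontality of this local expression can be checked mechanically. The following sketch (an illustration, not part of the text; random arrays stand in for the frame components $e^\mu_i$ and the coefficients $\Xi^\mu_{\sigma\nu}$) contracts the connection form $\sigma$ with the components of $B\left(e_i\right)$ written above and verifies that the result vanishes.

```python
# Illustrative numerical check (assumed setup, not from the paper): the local
# expression for B(e_i) is annihilated by the connection form
#   sigma = e^i_mu ( de^mu_j + Xi^mu_{sigma nu} e^nu_j dx^sigma ) E_i^j.
# Random matrices stand in for the frame e^mu_i and the coefficients Xi^mu_{sigma nu}.
import numpy as np

rng = np.random.default_rng(0)
m = 4
e = rng.normal(size=(m, m))        # e[mu, i] = e^mu_i
Xi = rng.normal(size=(m, m, m))    # Xi[mu, sigma, nu] = Xi^mu_{sigma nu}
e_inv = np.linalg.inv(e)           # e^i_mu

for i in range(m):
    dx_B = e[:, i]                                       # dx^sigma(B(e_i)) = e^sigma_i
    de_B = -np.einsum('msn,nj,s->mj', Xi, e, e[:, i])    # de^mu_j(B(e_i)) = -e^sigma_i e^nu_j Xi^mu_{sigma nu}
    sigma_B = e_inv @ (de_B + np.einsum('msn,nj,s->mj', Xi, e, dx_B))
    assert np.allclose(sigma_B, 0.0)                     # horizontality: sigma(B(e_i)) = 0
print("sigma(B(e_i)) = 0 for every i")
```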
Also, in the above coordinates every element $M:=\left(M_k^l\right)\in GL\left(m\right)$ acts according to the formula $$\left(x^\mu,e^\rho_k\right)\cdot M=\left(x^\mu,e^\rho_kM^k_l\right).$$ Therefore, if $E_k^l\in\mathfrak{gl}\left(m\right)$ is such that $$\left(E_k^l\right)_i^j=\delta^l_i\delta^j_k,$$ then its associated fundamental vector field reads $$\begin{aligned}
\left(E_k^l\right)_{LM}\left(x^\mu,e^\rho_k\right)&=\left.\frac{\vec{\text{d}}}{\text{d}t}\right|_{t=0}\left(x^\mu,e^\rho_i\left(\exp{\left(-tE_k^l\right)}\right)^i_j\right)\\
&=-e^\rho_i\left(E_k^l\right)^i_j\frac{\partial}{\partial e^\rho_j}\\
&=-e^\rho_k\frac{\partial}{\partial e^\rho_l}.\end{aligned}$$
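The same computation can be reproduced symbolically. The sketch below is an illustration only: it fixes $m=3$ and uses the entry convention in which $E_k^l$ has its single nonzero entry, equal to $1$, in row $k$ and column $l$ (the convention under which the displayed contraction comes out); it then differentiates the curve $t\mapsto e\cdot\exp\left(-tE_k^l\right)$ at $t=0$ and compares the result with $-e^\rho_k\,\partial/\partial e^\rho_l$.

```python
# SymPy sketch (illustrative assumptions: m = 3 and the entry convention for E_k^l
# described above).  For k != l the matrix E is nilpotent, so exp(-t E) = I - t E.
import sympy as sp

m, k, l = 3, 0, 1
t = sp.symbols('t')
e = sp.Matrix(m, m, lambda rho, i: sp.Symbol(f'e_{rho}{i}'))   # e[rho, i] = e^rho_i

E = sp.zeros(m, m)
E[k, l] = 1                                  # single 1 in row k, column l

curve = e * (sp.eye(m) - t * E)              # e . exp(-t E), using E**2 = 0
generator = curve.diff(t).subs(t, 0)         # components of the fundamental vector field

expected = sp.zeros(m, m)
expected[:, l] = -e[:, k]                    # -e^rho_k in the d/d e^rho_l slot
assert sp.simplify(generator - expected) == sp.zeros(m, m)
print("(E_k^l)_{LM} = -e^rho_k d/d e^rho_l, as in the text")
```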
Lifts to $J^1\tau$ {#sec:lifts-j1tau}
------------------
Given a vector field $X$ on $LM$, we can construct a vector field $X^1$ on $J^1\tau$, called its *prolongation* [@saunders89:_geomet_jet_bundl], by differentiating the jet lift of the flow $\phi^X_t:LM\rightarrow LM,t\in\left(-\epsilon,\epsilon\right)$ associated to $X$, namely $$X^1\left(j_x^1s\right):=\left.\frac{\vec{\text{d}}}{\text{d}t}\right|_{t=0}\left[j^1_x\left(\phi^X_t\circ s\right)\right].$$ In adapted coordinates $\left(x^\mu,e^\mu_k,e^\mu_{k\nu}\right)$, we have that if $$X=X^\mu\frac{\partial}{\partial x^\mu}+X^\mu_k\frac{\partial}{\partial e^\mu_k}$$ then $$\label{eq:ProlongationVectorFormula}
X^1=X^\mu\frac{\partial}{\partial x^\mu}+X^\mu_k\frac{\partial}{\partial e^\mu_k}+\left(D_\kappa X^\nu_j-e_{j\sigma}^\nu D_\kappa X^\sigma\right)\frac{\partial}{\partial e_{j\kappa}^\nu},$$ where $$D_\kappa F:=\frac{\partial F}{\partial x^\kappa}+e_{k\kappa}^\mu\frac{\partial F}{\partial e^\mu_k}$$ is the total derivative operator on $J^1\tau$.
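To fix ideas, in the familiar toy case of one independent and one dependent variable, with coordinates $\left(t,q,\dot q\right)$ playing the role of $\left(x^\mu,e^\mu_k,e^\mu_{k\nu}\right)$, the formula above reduces to the classical prolongation $$X^1=X^t\frac{\partial}{\partial t}+X^q\frac{\partial}{\partial q}+\left(D_tX^q-\dot q\,D_tX^t\right)\frac{\partial}{\partial\dot q},\qquad D_t=\frac{\partial}{\partial t}+\dot q\frac{\partial}{\partial q}.$$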
It is also necessary to consider *vertical lifts* for a vertical vector field $X\in\mathfrak{X}^{V\tau}\left(LM\right)$; nevertheless, in order to construct such a lift we must have at our disposal a form $\alpha\in\Omega^1\left(M\right)$. Then we use the affine structure of $J^1\tau$, which is modelled on the vector bundle $\tau^*T^*M\otimes V\tau$, in order to define the vertical lift according to the formula $$\left(\alpha,X\right)^V\left(j_x^1s\right):=\left.\frac{\vec{\text{d}}}{\text{d}t}\right|_{t=0}\left[j_x^1s+t\alpha_x\otimes X\left(s\left(x\right)\right)\right].$$ In adapted coordinates, if $\alpha:=\alpha_\nu dx^\nu$ and $X$ is as before, then $$\left(\alpha,X\right)^V=\alpha_\nu X^\mu_k\frac{\partial}{\partial e^\mu_{k\nu}}.$$
Now, a consequence of working in $LM$ is that for every $j_x^1s\in J^1\tau$ there exists an ${\mathbb{R}}^m$-valued $1$-form $\theta=\theta^ie_i$; therefore, we can generalise the previous vertical lift construction by using these forms, namely $$\left(\theta^i,X\right)^V\left(j_x^1s\right):=\left.\frac{\vec{\text{d}}}{\text{d}t}\right|_{t=0}\left[j_x^1s+t\left.\theta^i\right|_{j_x^1s}\otimes X\left(s\left(x\right)\right)\right].$$ In local terms it becomes $$\label{eq:VerticalLiftTheta}
\left(\theta^i,X\right)^V=e_\nu^iX^\mu_k\frac{\partial}{\partial e^\mu_{k\nu}}.$$
The brackets between these lifts have a particular structure.
\[lem:LiftsBrackets\] Let $\pi:E\rightarrow M$ be a bundle and $X,Y\in\mathfrak{X}\left(E\right)$ a pair of vector fields on the bundle; let $\alpha,\beta\in\Omega^1\left(M\right)$ be $1$-forms on $M$. Then
- $\displaystyle\left[X^1,Y^1\right]=\left(\left[X,Y\right]\right)^1$.
- $\displaystyle\left[X^1,\left(\alpha,Y\right)^V\right]=\left(\alpha,\left[X,Y\right]\right)^V$.
- $\displaystyle\left[\left(\alpha,X\right)^V,\left(\beta,Y\right)^V\right]=0$.
Local expressions for the lifts of a particular basis {#sec:local-expr-lift}
-----------------------------------------------------
Now, we will proceed to calculate the local expressions for the lifts associated to the basis $\left\{B\left(e_i\right),\left(E_j^k\right)_{LM}\right\}$ on $LM$. Using the calculations made in Section \[sec:adapt-coord-calc\], we will have that $$\begin{aligned}
D_\kappa\left(B\left(e_i\right)\right)^\mu&=\frac{\partial\left(B\left(e_i\right)\right)^\mu}{\partial x^\kappa}+e_{l\kappa}^\sigma\frac{\partial\left(B\left(e_i\right)\right)^\mu}{\partial e^\sigma_l}\\
&=e^\mu_{i\kappa}\end{aligned}$$ and $$\begin{aligned}
D_\kappa\left(B\left(e_i\right)^\mu_j\right)&=\frac{\partial}{\partial x^\kappa}\left(-\Xi^\mu_{\sigma\nu}e^\sigma_ie^\nu_j\right)+e^\rho_{l\kappa}\frac{\partial}{\partial e^\rho_l}\left(-\Xi^\mu_{\sigma\nu}e^\sigma_ie^\nu_j\right)\\
&=-e^\sigma_ie^\nu_j\frac{\partial\Xi^\mu_{\sigma\nu}}{\partial x^\kappa}-\Xi^\mu_{\sigma\nu}\left(e^\sigma_{i\kappa}e^\nu_j+e^\sigma_ie^\nu_{j\kappa}\right);\end{aligned}$$ therefore $$\left(B\left(e_i\right)\right)^1=e_i^\mu\left(\frac{\partial}{\partial x^\mu}-e^\nu_j\Xi^\sigma_{\mu\nu}\frac{\partial}{\partial e^\sigma_j}\right)-\left[e^\sigma_ie^\nu_j\frac{\partial\Xi^\mu_{\sigma\nu}}{\partial x^\kappa}+\Xi^\mu_{\sigma\nu}\left(e^\sigma_{i\kappa}e^\nu_j+e^\sigma_ie^\nu_{j\kappa}\right)+e^\mu_{j\sigma}e^\sigma_{i\kappa}\right]\frac{\partial}{\partial e^\mu_{j\kappa}}.$$ In the same vein $$\left(\left(E_k^l\right)_{LM}\right)^1=-e^\rho_k\frac{\partial}{\partial e^\rho_l}-e^\rho_{k\kappa}\frac{\partial}{\partial e^\rho_{l\kappa}}=\left(E_k^l\right)_{J^1\tau}.$$
For the vertical lifts of the vertical vector fields $\left(E^l_k\right)_{LM}$, we use formula , obtaining $$\left(\theta^i,\left(E^l_k\right)_{LM}\right)^V=-e^i_\nu e^\mu_k\frac{\partial}{\partial e^\mu_{l\nu}}.$$
Basis on $J^1\tau$ {#sec:basis-j1tau}
------------------
According to the discussion carried out in the previous paragraphs, we will use as a basis for writing out the equations of motion of Palatini gravity the following set of (global!) vector fields $$B:=\left\{\left(B\left(e_i\right)\right)^1,\left(E^k_l\right)_{J^1\tau},\left(\theta^i,E^j_k\right)^V:i,j,k,l=1,\cdots,m\right\}.$$ Also, we will need the dual basis $B^*$ corresponding to $B$; in order to find it, recall that $\theta,\omega$ are semibasic with respect to the projection $\tau_{10}:J^1\tau\rightarrow LM$, so $$\left(\theta^i,E^j_k\right)^V\lrcorner\theta^l=0=\left(\theta^i,E^j_k\right)^V\lrcorner\omega^p_q.$$ Also, $\theta$ is semibasic with respect to the projection $\tau_1:J^1\tau\rightarrow M$, so we will have that $$\left(E^k_l\right)_{J^1\tau}\lrcorner\theta^i=0.$$ The infinitesimal generators for the $GL\left(m\right)$-action also have the useful property $$\left(E^k_l\right)_{J^1\tau}\lrcorner\omega^p_q=\delta^p_l\delta^k_q.$$ To this list of niceties, we would add Lemma \[lem:CVerticalDuality\], which uses the difference map $C$ defined in Appendix \[sec:diff-tens-c\_i\], and tells us that the exact forms $$\Psi^k_{jl}:=dC^k_{jl}$$ could be used in the search for the dual directions corresponding to the set of vertical vector fields $\left\{\left(\theta^i,\left(E^k_l\right)_{LM}\right)^V\right\}$.
Please note that the contraction of these forms with the infinitesimal generators of the $GL\left(m\right)$-action on $J^1\tau$ can be calculated using Corollary \[cor:VerticalDerivativesFunctonDifference\]; in fact, from there we conclude that $$\begin{aligned}
\left(A_{LM}\right)^1\cdot C^k_{li}&=\left(A_{J^1\tau}\cdot C\right)^k_{li}\cr
&=\left(-\left[A,C\right]+A^*C\right)^k_{li}\cr
&=-A^p_lC^k_{pi}+C^p_{li}A_p^k+A^p_iC_{lp}^k,\label{eq:InfGenOnC}\end{aligned}$$ or, using the basis $\left\{E_q^p\right\}$ for the Lie algebra $\mathfrak{gl}\left(m\right)$, $$\left(E^p_q\right)_{J^1\tau}\cdot C^k_{li}=-\delta^p_lC^k_{qi}+\delta^k_qC^p_{li}+\delta^p_iC^k_{lq}=:F^{pk}_{qli}.$$
On the contrary, the “horizontal” vector fields $\left(B\left(e_i\right)\right)^1$ do not share these characteristics with the rest of the vector fields in $B$; namely, the contraction with $\omega$ gives rise to the difference function $C$ (see Appendix \[sec:diff-tens-c\_i\]) $$\left(B\left(e_i\right)\right)^1\lrcorner\omega_k^l=C_{ki}^l$$ and the contraction with the forms $\Psi^k_{jl}$ yields a new set of functions on $J^1\tau$, namely $$D^l_{jki}:=\left(B\left(e_i\right)\right)^1\cdot C^l_{jk}.$$
The fact that these functions cannot be described in terms of $C$ is a consequence of the proposition we will formulate below; in short, the functions $D$ encode information about the covariant derivative of $C$ with respect to the connection $\sigma$, if we consider $C$ as a section $\phi_C:C\left(LM\right)\rightarrow E$ of the bundle $\pi:E\rightarrow C\left(LM\right)$ associated to $J^1\tau\rightarrow C\left(LM\right)$ through $\left(\mathfrak{gl}\left(m\right)\otimes\left({\mathbb{R}}^m\right)^*,\rho\right)$ (see Remark \[rem:CAsASectionOfE\]). Now, let $\pi':E'\rightarrow M$ be the fibre bundle associated to $LM$ through the same representation, and consider the product bundle $$F:=C\left(LM\right)\times_M E'.$$ Recall the isomorphism $$J^1\tau\simeq C\left(LM\right)\times_M LM$$ given by $j_x^1s\mapsto\left(\left[j_x^1s\right],s\left(x\right)\right)$, and consider the bundle $E$ as a bundle on $M$ through the composite map $$\overline{\pi}:=\overline{\tau}\circ\pi:E\rightarrow M;$$ then there exists a bundle map isomorphism $f:E\rightarrow F$ over the identity on $M$.
It is given by the formula $$f:E\rightarrow
F:\left[j_x^1s,A\otimes\alpha\right]\mapsto\left(\left[j_x^1s\right],\left[s\left(x\right),A\otimes\alpha\right]\right);$$ thus, a connection $\Gamma$ on $LM$, considered as a section $\sigma_\Gamma:M\rightarrow C\left(LM\right)$, can be used together with the section $\phi_C:C\left(LM\right)\rightarrow E$ in order to define a new section $$\phi'_C:M\rightarrow F:x\mapsto
f\left(\phi_C\left(\sigma_\Gamma\left(x\right)\right)\right).$$
Let $s:M\rightarrow LM$ be a section of the bundle of frames; let $$X_i\left(x\right):=s\left(x\right)\left(e_i\right)$$ be the basis of vector fields on $M$ determined by the section $s$ and the basis $\left\{e_i\right\}\subset{\mathbb{R}}^m$. Then $$\nabla_{X_i}\phi'_C=D^l_{jki}E^j_l\otimes e^k,$$ where $\nabla$ is the covariant derivative on $F$ associated to the connection $\sigma$.
An almost dual basis on $J^1\tau$ {#sec:almost-dual-basis}
---------------------------------
Now, we can define the following $1$-forms on $J^1\tau$ $$\begin{aligned}
\Psi^i_{jk}&:=dC^i_{jk}-F^{pi}_{qjk}\omega^q_p-\left(D^i_{jkp}-F^{si}_{rjk}C^r_{sp}\right)\theta^p\\
\rho^k_l&:=\omega^k_l-C^k_{lp}\theta^p;\end{aligned}$$ recalling the calculations performed in Appendix \[sec:contr-elem-b\], the basis $$B^*:=\left\{\theta^i,\rho^k_l,\Psi^i_{jk}\right\}\subset\Omega^1\left(J^1\tau\right)$$ becomes the dual basis for $B$. With these elements we will be able to write down the contraction of $\Omega$ and $T$ with the elements of $B$.
The contractions of $\Omega$ and $T$ with the elements of $B$ become $$\begin{aligned}
&\left(B\left(e_i\right)\right)^1\lrcorner\Omega^k_l=\left[D^k_{lij}-D^k_{lji}+R^k_l\left(B\left(e_j\right),B\left(e_i\right)\right)+C^k_{pj}C^p_{li}-C^k_{pi}C^p_{lj}\right]\theta^j+\Psi^k_{li},\\
&A_{J^1\tau}\lrcorner\Omega^k_l=0,\\
&\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\lrcorner\Omega^q_p=-\delta_p^k\delta_l^q\theta^r,\\
&\left(B\left(e_i\right)\right)^1\lrcorner T^k=\left(C_{ij}^k-C_{ji}^k\right)\theta^j,\\
&A_{J^1\tau}\lrcorner T^k=0,\\
&\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\lrcorner T^q=0.
\end{aligned}$$ for any $A\in\mathfrak{gl}\left(m\right)$.
The velocity-multimomentum space $W_{{\mathcal L}}$ and its canonical form {#sec:veloc-mult-space}
==========================================================================
As we explained at the beginning of the previous section, it is time to write down the equations of motion associated to our unified formulation of Palatini gravity, as it was formulated in Section \[sec:griff-vari-probl\]. Before that, it will be necessary to simplify somewhat the bundle $W_{{\mathcal L}}$; this will be achieved by taking into account that the restriction EDS ${{\mathcal I}}_{\text{PG}}$ has global generators. As a consequence, $W_{{\mathcal L}}$ will become isomorphic to a product bundle on $J^1\tau$, and so it will be possible to lift the elements of the basis $B$ in order to be part of a basis of $W_{{\mathcal L}}$.
The velocity-multimomentum space {#sec:veloc-mult-space-1}
--------------------------------
Recall that any element $\rho\in\overline{\tau}_{{\mathcal L}}^{-1}\left(j_x^1s\right)\subset W_{{\mathcal L}}$ can be written as $$\rho=\eta^{kl}\theta_{kp}\wedge\left(d\omega^p_l+\omega^p_q\wedge\omega^q_l\right)+\eta^{ql}\beta_{pq}\wedge\omega^p_l,$$ where $\beta_{kl}\in\wedge^{m-1}\left(T_x^*M\right)$ and $\beta_{kl}-\beta_{lk}=0$.
Nevertheless, it would be preferable to keep the basic symmetries of the bundles involved in this description; for that reason, we will take $\beta_{ij}$ as elements of a bundle whose sections have special transformation properties with respect to the $GL\left(m\right)$-action on $LM$. Namely, let us consider the natural representation of $GL\left(m\right)$ on $S^*\left(m\right):=\left({\mathbb{R}}^m\right)^*\odot\left({\mathbb{R}}^m\right)^*$, where $\odot$ indicates the symmetrised tensor product. Then the bundle
$$E_2:=\wedge^{m-1}_1\left(J^1\tau\right)\otimes S^*\left(m\right)\longrightarrow J^1\tau,$$
where $$\wedge^k_1\left(J^1\tau\right):=\left\{\gamma\in\wedge^k\left(J^1\tau\right):\gamma\text{ is horizontal with respect to the projection }\tau_1:J^1\tau\rightarrow M\right\},$$ will provide the forms $\beta_{ij}$. The considerations made in Appendix \[sec:canonical-k-form\] apply to this bundle: we must take $P=J^1\tau, N=C\left(LM\right)$ and $G=GL\left(m\right)$. Then it is a $GL\left(m\right)$-space through the action $$\Phi_g\left(\gamma\right):=g\cdot\left(\gamma\circ\left(T\phi_{g}^1\times\cdots\times T\phi_{g}^1\right)\right)$$ where $\phi^1_g:J^1\tau\rightarrow J^1\tau$ is the lift of the action $R_{g^{-1}}:LM\rightarrow LM$ for every $g\in GL\left(m\right)$. In particular, the canonical $\left(m-1\right)$-form on $E_2$ is a $GL\left(m\right)$-equivariant form.
\[lem:WLIsomorphism\] The map $$\Gamma:\rho\mapsto\left(j_x^1s,\beta_{ij}e^i\odot e^j\right)$$ induces an isomorphism of bundles $$W_{{\mathcal L}}\simeq J^1\tau\times_ME_2$$ over $J^1\tau$, i.e. it intertwines the projection $\overline{\tau}_{{\mathcal L}}$ with the projection of $J^1\tau\times_ME_2$ onto its first factor.
Moreover, this map is equivariant with respect to the $GL\left(m\right)$-action in each of these spaces.
The canonical form {#sec:canonical-form}
------------------
From now on we will introduce the notation $$\widehat{W_{{\mathcal L}}}:=J^1\tau\times_ME_2.$$ With this result in mind, we can give a formula for the canonical $m$-form on $W_{{\mathcal L}}$; the idea is to lift the forms $\omega,\theta,T,\dots$ from $J^1\tau$ to $\widehat{W_{{\mathcal L}}}$ through $\text{pr}_0:\widehat{W_{{\mathcal L}}}\rightarrow J^1\tau$, and to do the same to the canonical form on $E_2$, using the projection $$\text{pr}_2:\widehat{W_{{\mathcal L}}}\longrightarrow E_2.$$ Whenever possible, we will indicate the forms lifted from $J^1\tau$ with the same symbols as the forms being lifted (i.e. $\overline{\tau}_{{\mathcal L}}^*\omega^k_l\rightsquigarrow\omega^k_l$ and so on) and the components[^4] of the canonical $\left(m-1\right)$-form, both the original and the lifted one, will be indicated by $\Theta_{ij}$. With these notational conventions the pullback $\lambda_{{\mathcal L}}$ of the canonical $m$-form from $\wedge^m\left(J^1\tau\right)$ to $W_{{\mathcal L}}$ at the point $$\rho=\eta^{kl}\theta_{kp}\wedge\left(d\omega^p_l+\omega^p_q\wedge\omega^q_l\right)+\eta^{ql}\beta_{pq}\wedge\omega^p_l$$ will read $$\label{eq:LambdaInTermsCanForms0}
\left.\lambda_{{{\mathcal L}}}\right|_\rho=\eta^{kl}\theta_{kp}\wedge\left(d\omega^p_l+\omega^p_q\wedge\omega^q_l\right)+\eta^{ql}\left.\Theta_{pq}\right|_{\beta}\wedge\omega^p_l.$$ Using Equation from Section \[sec:structure-equations\], we have that $$\begin{gathered}
\left.d\lambda_{{\mathcal L}}\right|_\rho=\left[\left(\eta^{kp}\omega_k^i+\eta^{ki}\omega_k^p\right)\wedge\theta_{il}-\omega^s_s\wedge\eta^{kp}\theta_{kl}+\eta^{kp}T^i\wedge\theta_{kli}+\eta^{ip}\left.\Theta_{il}\right|_{\beta}\right]\wedge\Omega^l_p+\\
+\left.d\Theta_{ij}\right|_{\beta}\wedge\left(\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i\right)-\eta^{ik}\left.\Theta_{ij}\right|_{\beta}\wedge\omega^j_l\wedge\omega_k^l.\end{gathered}$$ Now, recalling the discussion of Appendix \[sec:cart-decomp-forms\], we have that $$\eta^{ik}\omega_k^j+\eta^{jk}\omega_k^i=\eta^{ik}\left(\omega^j_k+\eta_{kr}\omega^r_l\eta^{lj}\right)=2\eta^{ik}\left(\omega_{\mathfrak{p}}\right)_k^j,\qquad\omega^s_s=\left(\omega_{\mathfrak{p}}\right)^s_s.$$ Also we have that $$\eta^{ik}\Theta_{kj}E_i^j\in{\mathfrak{p}},$$ because $\Theta_{ij}=\Theta_{ji}$; therefore, from Proposition \[prop:TraceWedge\], it results that $$\eta^{ik}\left.\Theta_{ij}\right|_{\beta}\wedge\omega^j_l\wedge\omega_k^l=\eta^{ik}\left.\Theta_{ij}\right|_{\beta}\wedge\left[\left(\omega\wedge\omega\right)_{\mathfrak{p}}\right]_j^i,$$ and so, using Proposition \[prop:DecompSquare\], we obtain $$\eta^{ik}\left.\Theta_{ij}\right|_{\beta}\wedge\omega^j_l\wedge\omega_k^l=\eta^{ik}\left.\Theta_{ij}\right|_{\beta}\wedge\left[\left(\omega_{\mathfrak{p}}\right)^j_p\wedge\left(\omega_{\mathfrak{k}}\right)^p_k+\left(\omega_{\mathfrak{k}}\right)^j_p\wedge\left(\omega_{\mathfrak{p}}\right)^p_k\right].$$ Replacing these identities in the previous expression for $d\lambda_{{\mathcal L}}$, it becomes $$\begin{gathered}
\left.d\lambda_{{\mathcal L}}\right|_\rho=\left[2\eta^{kp}\left(\omega_{\mathfrak{p}}\right)_k^i\wedge\theta_{il}-\left(\omega_{\mathfrak{p}}\right)^s_s\wedge\eta^{kp}\theta_{kl}+\eta^{kp}T^i\wedge\theta_{kli}+\eta^{ip}\left.\Theta_{il}\right|_{\beta}\right]\wedge\Omega^l_p+\\
+\eta^{ik}\left.d\Theta_{ij}\right|_{\beta}\wedge\left(\omega_{\mathfrak{p}}\right)_k^j-\eta^{ik}\left.\Theta_{ij}\right|_{\beta}\wedge\left[\left(\omega_{\mathfrak{p}}\right)^j_p\wedge\left(\omega_{\mathfrak{k}}\right)^p_k+\left(\omega_{\mathfrak{k}}\right)^j_p\wedge\left(\omega_{\mathfrak{p}}\right)^p_k\right].\end{gathered}$$
It is interesting to rearrange some terms, and put them in the following form $$\begin{gathered}
\label{eq:FormulaFordLambda0}
\left.d\lambda_{{\mathcal L}}\right|_\rho=\left[2\eta^{kp}\left(\omega_{\mathfrak{p}}\right)_k^i\wedge\theta_{il}-\left(\omega_{\mathfrak{p}}\right)^s_s\wedge\eta^{kp}\theta_{kl}+\eta^{kp}T^i\wedge\theta_{kli}+\eta^{ip}\left.\Theta_{il}\right|_{\beta}\right]\wedge\Omega^l_p+\\
+\eta^{ik}\left[\left.d\Theta_{ij}\right|_{\beta}+\eta^{rq}\eta_{li}\left.\Theta_{rj}\right|_{\beta}\wedge\left(\omega_{\mathfrak{k}}\right)^l_q-\left.\Theta_{ip}\right|_{\beta}\wedge\left(\omega_{\mathfrak{k}}\right)^p_j\right]\wedge\left(\omega_{\mathfrak{p}}\right)^j_k.\end{gathered}$$
We will note that the canonical form on $E_2$ is $S^*\left(m\right)$-valued, and that it is also $GL\left(m\right)$-equivariant; thus let us define $$\begin{aligned}
D\Theta^{m-1}&:=\left(d\Theta_{ij}+\eta^{rk}\eta_{iq}\Theta_{rj}\wedge\omega^q_k-\Theta_{ip}\wedge\omega^p_j\right)e^i\odot e^j.
\end{aligned}$$ Then the tautological property of the canonical forms allows us to set the following result.
For $\beta\in\Omega^{m-1}\left(E_2\right)$, we have that $$D\beta=\beta^*\left(D\Theta^{m-1}\right)$$ is the exterior covariant differential of the form $\beta$ with respect to the canonical connection $\omega$.
This lemma gives us a hint on the interpretation of some terms present in Equation for $d\lambda_{{\mathcal L}}$.
Equations of motion {#sec:equations-motion}
-------------------
From Theorem \[Thm:HamJac\] we know that the equations of motion arise from $$Z\lrcorner d\lambda_{{\mathcal L}}=0$$ for $Z\in\mathfrak{X}^{V\left(\tau_1\circ\overline{\tau}_{{\mathcal L}}\right)}\left(W_{{\mathcal L}}\right)$. On the other hand, Lemma \[lem:WLIsomorphism\] allows us to conclude that $$V{\left(\tau_1\circ\overline{\tau}_{{\mathcal L}}\right)}\simeq V\tau_1\oplus V\left(\tau_1\circ p_2\right);$$ it means that a set of generators for $\mathfrak{X}^{V\left(\tau_1\circ\overline{\tau}_{{\mathcal L}}\right)}\left(\widehat{W}_{{\mathcal L}}\right)$ can be constructed with vector fields such as $$X+0,\qquad 0+Y$$ where $X\in\mathfrak{X}^{V\tau_1}\left(J^1\tau\right),Y\in\mathfrak{X}^{Vp_2}\left(E_2\right)$. Now, from Section \[sec:basis-j1tau\] we know that $$\left\{\left(E^k_l\right)_{J^1\tau},\left(\theta^i,E^j_k\right)^V:i,j,k,l=1,\cdots,m\right\}$$ is a set of generators for the set of $\tau_1$-vertical vector fields, and any section $\beta\in\Gamma\left(E_2\right)$ gives rise to a $p_2$-vertical vector field $$\delta\beta\left(\alpha_{j_x^1s}\right):=\left.\frac{d}{dt}\right|_{t=0}\left[\alpha_{j_x^1s}+t\beta\left(j_x^1s\right)\right],\qquad\alpha_{j_x^1s}\in E_2$$ on $E_2$, and the collection of this sort of vector fields is a set of generators for $p_2$-vertical vector fields.
Thus Equation will yield the equations of motion when we contract this form with the set of vector fields $$A_{J^1\tau}+0,\qquad\left(\theta^i,B_{LM}\right)^V+0,\qquad 0+\delta\beta$$ for $A,B\in\mathfrak{gl}\left(m\right)$ and $\beta\in\Gamma\left(E_2\right)$. In fact:
- Contraction with elements of the form $0+\delta\beta$ gives us the equations $$\eta^{ik}\delta\beta_{ij}\left(\omega_{{\mathfrak{p}}}\right)^j_k=0;$$ it means that $$\label{eq:MetricConditionEq}
\left(\omega_{{\mathfrak{p}}}\right)^j_k=0,$$ that is, the connection must be metric.
- Contraction with vector fields of the form $\left(\theta^r,\left(E^s_t\right)_{LM}\right)^V$ will give us $$\begin{aligned}
0&=\left[2\eta^{kp}\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\theta_{il}-\left(\omega_{\mathfrak{p}}\right)^i_i\eta^{kp}\theta_{kl}+\eta^{kp}T^i\wedge\theta_{kli}+\eta^{ip}\Theta_{il}\right]\wedge\delta^s_p\delta^l_t\theta^r\\
&=\eta^{ks}\left(T^i\wedge\theta_{kti}+\Theta_{kt}\right)\wedge\theta^r,
\end{aligned}$$ where in the last step Equation was used. It is equivalent to equation $$\label{eq:EqTorsionTheta}
T^i\wedge\theta_{kti}+\Theta_{kt}=0,$$ and this in turn means that $$\Theta_{kt}=0=T^i\wedge\theta_{kti},$$ given the symmetry properties of $\Theta_{ij}$ and $\theta_{ijk}$. Finally, we have the following result.
Let $\nu:=\nu^ie_i$ be a ${\mathbb{R}}^m$-valued $2$-form on $J^1\tau$, horizontal respect to $\tau_1$. Suppose further that[^5] $m>3$ and $$\nu^i\wedge\theta_{ijk}=0$$ for all $j,k=1,\cdots,m$. Then $\nu=0$.
Let us write $\nu$ in the following form $$\nu=\nu^{i}_{jk}\theta^j\wedge\theta^k\otimes e_i;$$ then $$\label{eq:ConditionCoordinatesNu}
0=\nu^i\wedge\theta_{ikl}=2\left(\nu^{i}_{kl}\theta_i+\nu^{i}_{li}\theta_k-\nu^{i}_{ki}\theta_l\right).$$ From this equation it results that $$\theta^k\wedge\nu^i\wedge\theta_{ikl}=2\left(\nu^{k}_{kl}+m\nu^{i}_{li}-\nu^{i}_{li}\right)\sigma_0=2\left(m-2\right)\nu^i_{li}\sigma_0,$$ so, because $m>2$, we have that $\nu^i_{li}=0$, and plugging it back in Equation , we obtain $\nu^i_{jk}=0$, as required.
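The linear algebra behind this proposition can also be verified directly in a fixed dimension. The following sketch (an illustration for $m=4$, not part of the original argument; it encodes the coefficient condition of Equation above, up to the overall factor $2$) assembles the linear map $\nu\mapsto\left(\nu^j{}_{kl}+\delta_{jk}\,\nu^i{}_{li}-\delta_{jl}\,\nu^i{}_{ki}\right)_{j,k,l}$ on ${\mathbb{R}}^m$-valued $2$-forms and checks that it is injective.

```python
# Rank check (illustrative, m = 4 assumed) of the map nu |-> nu^i wedge theta_{ikl}:
# in components (up to a factor 2) it sends nu to
#   out[k, l, j] = nu[j, k, l] + delta_{jk} * tr[l] - delta_{jl} * tr[k],
# with tr[l] = nu^i_{li}.  Full rank means the kernel is trivial, so nu = 0 is forced.
import numpy as np

m = 4
basis = []                      # basis of R^m-valued 2-forms nu[i, j, k], antisymmetric in (j, k)
for i in range(m):
    for j in range(m):
        for k in range(j + 1, m):
            nu = np.zeros((m, m, m))
            nu[i, j, k], nu[i, k, j] = 1.0, -1.0
            basis.append(nu)

def image(nu):
    tr = np.einsum('ili->l', nu)        # tr[l] = nu^i_{li}
    out = np.zeros((m, m, m))
    for k in range(m):
        for l in range(m):
            for j in range(m):
                out[k, l, j] = nu[j, k, l] \
                    + (tr[l] if j == k else 0.0) - (tr[k] if j == l else 0.0)
    return out.ravel()

A = np.array([image(nu) for nu in basis]).T
assert np.linalg.matrix_rank(A) == len(basis)    # injective
print("the map nu -> nu^i wedge theta_{ikl} has trivial kernel for m =", m)
```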
As a consequence, the equations of motion associated to these vertical vector fields will be $$\label{eq:TorsionMomentaVanish}
T^i=0=\Theta_{kl}.$$ As we promised, the torsion constraint is recovered from a variational problem involving only the metricity condition. Moreover, the fact that the multimomenta $\Theta_{ij}$ vanish means in particular that the Lepage-equivalent problem constructed in Section \[sec:griff-vari-probl\] for the Griffiths variational problem $$\left(J^1\tau,{{\mathcal L}}_{PG},{{\mathcal I}}_{PG}\right)$$ is necessarily contravariant.
- Let us consider now contractions of the differential $d\lambda_{{\mathcal L}}$ with vector fields of the form $A_{J^1\tau}+0$, for $A\in{\mathfrak{k}}$. We have $$\left(-1\right)^m\eta^{ik}\left(\eta^{rq}\eta_{li}\left.\Theta_{rj}\right|_{\beta}A^l_q-\left.\Theta_{ip}\right|_{\beta}A^p_j\right)\wedge\left(\omega_{\mathfrak{p}}\right)^j_k=0,$$ which does not give rise to additional restrictions. This was expected, because ${{\mathcal L}}_{PG}$ and the forms that define the metricity condition are invariant under the ${\mathfrak{k}}$-action.
- Finally, we have to consider contractions with elements $A_{J^1\tau}+0$ for $A\in{\mathfrak{p}}$; we obtain in this case $$\begin{gathered}
0=\left[2\eta^{kp}A_k^i\theta_{il}-\left(\mathop{\text{tr}}{A}\right)\eta^{kp}\theta_{kl}\right]\wedge\Omega^l_p+\\
+\left(-1\right)^{m-1}\eta^{ik}\left[d\Theta_{ij}+\eta^{rq}\eta_{li}\left.\Theta_{rj}\right|_{\beta}\wedge\left(\omega_{\mathfrak{k}}\right)^l_q-\left.\Theta_{ip}\right|_{\beta}\wedge\left(\omega_{\mathfrak{k}}\right)^p_j\right]A^j_k,
\end{gathered}$$ namely $$\label{eq:EinsteinEqFirst}
A_k^i\left(\eta^{kp}\theta_{il}-\frac{1}{2}\delta_i^k\eta^{qp}\theta_{ql}\right)\wedge\Omega^l_p=0$$ for every $A\in{\mathfrak{p}}$. Equivalently, we have that $$\theta_{il}\wedge\Omega^l_k+\theta_{kl}\wedge\Omega^l_i-\eta_{ik}\left(\eta^{pq}\theta_{ql}\wedge\Omega^l_p\right)=0.$$
Let us show that this system is equivalent to Einstein’s equations in vacuum; for this, it is necessary to consider that, on a solution, we can write down $$\Omega^l_p=\Omega^l_{pab}\theta^a\wedge\theta^b$$ for some functions $\Omega^l_{pab}$ such that $\Omega^l_{pab}+\Omega^l_{pba}=0$. Then we have that $$\left(\eta^{kp}\theta_{il}-\frac{1}{2}\delta_i^k\eta^{qp}\theta_{ql}\right)\wedge\Omega^l_p=-2\eta^{kp}\left(\Omega_{pi}-\frac{1}{2}\eta_{pi}\Omega\right)\sigma_0$$ where the standard notation $\Omega_{ij}:=\Omega^l_{ilj},\Omega:=\eta^{ij}\Omega_{ij}$ was employed. Now, from Equation and taking into account that $\Omega_k^l$ is the Riemann tensor of a Levi-Civita connection, it results that $\Omega_{ij}=\Omega_{ji}$; therefore, we must conclude that $$\Omega_{ij}-\frac{1}{2}\eta_{ij}\Omega=0,$$ as required.
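For completeness, we spell out the standard trace-reversal step implicit in the last assertion: contracting $\Omega_{ij}-\frac{1}{2}\eta_{ij}\Omega=0$ with $\eta^{ij}$ gives $$\left(1-\frac{m}{2}\right)\Omega=0,$$ so for $m\neq2$ the trace $\Omega$ vanishes and the system above is equivalent to the Ricci-flatness condition $\Omega_{ij}=0$, the usual form of the vacuum Einstein equations.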
Constraint algorithm {#sec:constraint-algorithm}
--------------------
Successful field theory formulations should provide not only the equations of motion, but also the set of conditions ensuring that these equations are involutive. The additional procedures intended to extract this set of conditions (“constraints”, as they are usually called) are unsurprisingly dubbed “constraint algorithms” [@de1996geometrical; @zbMATH02233555]. This is the purpose, for example, of Proposition $2$ in [@Gaset:2017ahy] in the realm of the unified formalism for Einstein-Hilbert gravity. The constraint algorithm we will follow here is the one formulated in [@2013arXiv1309.4080C], where it was referred to as the *Gotay algorithm* (see also [@Estabrook:2014hfa]). In short, this is essentially the Cartan algorithm [@Hartley:1997:IAN:2274723.2275278; @CartanBeginners] for the first prolongation of the EDS ${{\mathcal J}}$ shown below (see Equation ); the set of constraints is thus obtained by annihilating the torsion of the successive prolongations. This means in particular that if the first prolongation of ${{\mathcal J}}$ is involutive, no further constraints will arise and the algorithm terminates.
As we mentioned above, the equations of motion for the premultisymplectic system are represented by the exterior differential system $$\label{eq:EinsteinPalatiniEDS}
{{\mathcal J}}:=\left<\Theta,\omega_{\mathfrak{p}},T,\Omega^k_l\wedge\theta^l,\theta_{il}\wedge\Omega^l_k+\theta_{kl}\wedge\Omega^l_i-\eta_{ik}\left(\eta^{pq}\theta_{ql}\wedge\Omega^l_p\right)\right>.$$ The form $\Omega^k_l\wedge\theta^l$ comes from the first Bianchi identity; also, from constraint $\omega_{\mathfrak{p}}=0$ it results that $\Omega$ takes values in ${\mathfrak{k}}$.
The involutivity of this system would imply that every $m$-plane $Z$ on $W_{{\mathcal L}}$ annihilating ${{\mathcal J}}$ and such that $Z\lrcorner\sigma_0\not=0$ is the tangent space for some solution of the field equations, and so no further constraint would appear. We know that Einstein’s equations, at least in dimension $4$, are involutive (see [@doi:10.1063/1.3305321; @PhysRevD.71.044004] and references therein). Although this does not necessarily imply that the EDS ${{\mathcal J}}$ is involutive, it does imply that its first prolongation ${{\mathcal J}}^{\left(1\right)}$ is. In order to prove it, we need to recall Corollary \[cor:FirstBianchiConsequence\], which implies $$\Omega^l_i\wedge\theta_{lp}-\Omega^l_p\wedge\theta_{li}\equiv0\mod{T}$$ for the ${\mathfrak{k}}$-valued form $\Omega$. Thus $$\label{eq:SameEquationsLagHam}
\theta_{il}\wedge\Omega^l_k+\theta_{kl}\wedge\Omega^l_i-\eta_{ik}\left(\eta^{pq}\theta_{ql}\wedge\Omega^l_p\right)\equiv\frac{1}{2}\eta_{ip}\theta^p\wedge\eta^{qs}\theta_{kqr}\wedge\Omega^r_s\mod{T}.$$ Now, the EDS considered in [@PhysRevD.71.044004] becomes in our notation $${{\mathcal I}}_{\text{PG}}=\left<\omega_{\mathfrak{p}},T,\Omega^k_l\wedge\theta^l,\eta^{qk}\theta_{iql}\wedge\Omega^l_k\right>,$$ meaning in particular that $$V_4\left({{\mathcal I}}_{\text{PG}}\right)=V_4\left({{\mathcal J}}\right)$$ as subsets of $G_4\left(TW_{{\mathcal L}},\nu\right)$. Therefore, if we use the constraint algorithm as it is presented in [@2013arXiv1309.4080C], we have no additional constraints, and the constraint algorithm must stop.
Equation is another proof of the fact that the equations of motion obtained in this article are equivalent to those in [@capriotti14:_differ_palat].
Unified formalism for unimodular gravity {#sec:mult-unim-grav}
========================================
It is interesting to note that the same scheme is useful when dealing with unimodular gravity [@doi:10.1119/1.1986321; @PhysRevD.92.024036]. Recall that in this formulation, the space-time is endowed with a volume form, dubbed the *fundamental form*. In this case, and using the fact that in this formulation of relativity the fundamental volume form is conserved, we must consider a reduction of $LM$ to a subbundle $UM$ with structure group $SL\left(m\right)$ [@KN1]. This reduction consists of the elements of $LM$ on which the fundamental form takes a constant value, and its structure group becomes $SL\left(m\right)$. Because $\mathfrak{sl}\left(m\right)$ consists of the traceless elements of $\mathfrak{gl}\left(m\right)$, the decomposition carried out in Section \[sec:cart-decomp-forms\] induces a decomposition $$\mathfrak{sl}\left(m\right)={\mathfrak{k}}\oplus{\mathfrak{p}}',$$ where ${\mathfrak{p}}'$ is the set of traceless elements in ${\mathfrak{p}}$. Thus we have an embedding $$i_{UM}:UM\hookrightarrow LM$$ that can be lifted to an embedding $j^1i_{UM}:J^1\tau_{UM}\rightarrow J^1\tau$, where $\tau_{UM}:UM\rightarrow M$ is the restriction of $\tau$ to $UM$. Using this map it is immediate to pull back the canonical form and the canonical torsion from $J^1\tau$ to $J^1\tau_{UM}$, and thus we can formulate a Griffiths variational problem for unimodular Palatini gravity. This variational problem should be compared with the first order variational problem for unimodular gravity as it is found in [@PhysRevD.92.024036]: in our case the degrees of freedom we are working with admit in principle arbitrary connections, not restricted to be Lorentz, and this restriction is implemented *a posteriori* through the metricity condition.
Moreover, following the constructions above, we can find a unified formalism for unimodular gravity and deduce the same equations of motion when contracting the differential of the Lagrangian form for the Lepage-equivalent problem with the vertical vector fields $0+\delta\beta,\left(\theta^r,A_{UM}\right)^V+0$ and $A_{J^1\tau_{UM}}+0$ for $A\in{\mathfrak{k}}$. Now, besides the metricity condition, we have the *equiaffinity condition* [@9780521441773] $$\omega^s_s=0$$ on the connection.
The main changes regarding these equations of motion are twofold:
1. Equation is replaced by $$T^i\wedge\theta_{kli}+\Theta_{kl}=\mu\eta_{kl}$$ for some unknown $m-1$-form $\mu$. From here it results again $T^i=0$ and $$\Theta_{kl}=\eta_{kl}\mu.$$
2. Although the form $\Theta$ is no longer zero, we will also end up with condition for this new problem, because this condition concerns a *traceless* symmetric matrix $A$ in the unimodular case. So the equation of motion will involve an unknown function $\lambda$, due to the fact that the trace of the element $$\eta^{kp}\theta_{il}\wedge\Omega^l_p$$ is no longer determined. Thus the associated equation of motion becomes $$\Omega_{ij}=\lambda\eta_{ij}.$$ These equations are equivalent to the equations of unimodular gravity found in the literature (see for example [@PhysRevD.92.024036]); in order to see this, take traces on both sides, so that $$\lambda=\frac{1}{4}\Omega$$ and thus $$\Omega_{ij}-\frac{1}{4}\Omega\eta_{ij}=0.$$
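The factor $\frac{1}{4}$ in the last display presupposes the physically relevant case $m=4$; for a general dimension $m$ the same trace argument gives $$\lambda=\frac{1}{m}\Omega,\qquad\text{and hence}\qquad\Omega_{ij}-\frac{1}{m}\Omega\,\eta_{ij}=0,$$ that is, the vanishing of the trace-free part of $\Omega_{ij}$.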
Forms on $J^1\tau$ {#sec:forms-j1tau}
==================
We will establish in this appendix some basic properties regarding forms on the jet space $J^1\tau$ and some associated spaces. As a preliminary step, let us introduce the set of forms $$\begin{aligned}
\theta_{i_1\cdots i_p}&:=\frac{1}{\left(m-p\right)!}\epsilon_{i_1\cdots i_pi_{p+1}\cdots i_m}\theta^{i_{p+1}}\wedge\cdots\wedge\theta^{i_m}\cr
&=X_{i_p}\lrcorner\cdots\lrcorner X_{i_1}\lrcorner\sigma_0\end{aligned}$$ associated to the canonical form $\theta$ on $J^1\tau$; here $\sigma_0:=\theta^1\wedge\cdots\wedge\theta^m$.
Some identities involving canonical forms {#App:SomeIdent}
-----------------------------------------
Let us use the algebraic properties of the vielbeins in order to settle the identities used in the present article. Recall that for $\alpha\in\Omega^p\left(X\right)$ we have that $$X\lrcorner\left(\alpha\wedge\beta\right)=\left(X\lrcorner\alpha\right)\wedge\beta+\left(-1\right)^p\alpha\wedge\left(X\lrcorner\beta\right).$$ Then for $$\theta_{ij}=X_j\lrcorner X_i\lrcorner\sigma_0$$ we obtain $$\begin{aligned}
\theta^m\wedge\theta_{ij}&=\theta^m\wedge\left(X_j\lrcorner X_i\lrcorner\sigma_0\right)\\
&=-X_j\lrcorner\left(\theta^m\wedge\left(X_i\lrcorner\sigma_0\right)\right)+\left(X_j\lrcorner\theta^m\right)\wedge\left(X_i\lrcorner\sigma_0\right)\\
&=-X_j\lrcorner\left(-X_i\lrcorner\left(\theta^m\wedge\sigma_0\right)+\left(X_i\lrcorner\theta^m\right)\sigma_0\right)+\delta^m_j\theta_i\\
&=\delta^m_j\theta_i-\delta^m_i\theta_j.\end{aligned}$$ Additionally $$\begin{aligned}
\theta^m\wedge\theta_{ijk}&=\theta^m\wedge\left(X_k\lrcorner X_j\lrcorner X_i\lrcorner\sigma_0\right)\\
&=-X_k\lrcorner\left(\theta^m\wedge\left(X_j\lrcorner X_i\lrcorner\sigma_0\right)\right)+\left(X_k\lrcorner\theta^m\right)\left(X_j\lrcorner X_i\lrcorner\sigma_0\right)\\
&=-X_k\lrcorner\left(\delta^m_j\theta_i-\delta^m_i\theta_j\right)+\delta^m_k\theta_{ij}\\
&=\delta^m_k\theta_{ij}-\delta^m_j\theta_{ik}+\delta^m_i\theta_{jk}\end{aligned}$$ and in general $$\theta^m\wedge\theta_{i_1\cdots i_p}=\sum_{k=1}^p\left(-1\right)^{k+1}\delta^m_{i_k}\theta_{i_1\cdots\widehat{i_k}\cdots i_p}.$$
Cartan decomposition and forms {#sec:cart-decomp-forms}
------------------------------
The decomposition of $\mathfrak{gl}\left(m\right)$ associated to $\eta$ (see Section \[sec:geom-prel\]) has useful properties. We will mention some of them in the present section of the Appendix.
\[lem:SymAntiSymForms\] $\gamma\in{\mathfrak{k}}$ (resp. $\gamma\in{\mathfrak{p}}$) if and only if $\gamma\eta:=\gamma_p^i\eta^{pj}E_{ij}$ (resp. $\eta\gamma:=\eta_{ip}\gamma_j^pE^{ij}$) takes values in the set of antisymmetric (resp. symmetric) matrices.
From this lemma we can deduce the following useful fact.
\[prop:TraceWedge\] Let $\gamma\in\Omega^p\left(N,{\mathfrak{k}}\right),\rho\in\Omega^q\left(N,{\mathfrak{p}}\right)$ be a pair of forms on $N$. Then $$\mathop{\text{Tr}}{\left(\gamma\wedge\rho\right)}=\gamma^k_p\wedge\rho^p_k=0.$$
In fact, $$\begin{aligned}
\mathop{\text{Tr}}{\left(\gamma\wedge\rho\right)}&=\gamma^k_p\wedge\rho^p_k\\
&=\eta_{kr}\gamma^r_p\wedge\eta^{ks}\rho_s^p,
\end{aligned}$$ so we have that $$\mathop{\text{Tr}}{\left(\gamma\wedge\rho\right)}=\mu_{kl}\wedge\nu^{kl}$$ where by Lemma \[lem:SymAntiSymForms\], $\mu:=\mu_{ij}E^{ij}$ takes values in the set of antisymmetric matrices and $\nu:=\nu^{ij}E_{ij}$ takes values in the set of symmetric matrices; so $\mathop{\text{Tr}}{\left(\gamma\wedge\rho\right)}=\mu_{kl}\wedge\nu^{kl}=0$, as required.
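The mechanism is already visible in the degree-zero case, where the forms are just matrix-valued functions. The following sketch (an illustration with randomly generated data and $\eta=\mathrm{diag}\left(-1,1,\dots,1\right)$; it is not part of the proof) checks numerically that the trace of such a product vanishes.

```python
# Degree-zero illustration (assumed data, not from the paper): if gamma^i_p eta^{pj}
# is antisymmetric (gamma in k) and eta_{ip} gamma'^p_j is symmetric (gamma' in p),
# then tr(gamma gamma') = 0.  Here eta = diag(-1, 1, ..., 1), its own inverse.
import numpy as np

rng = np.random.default_rng(1)
m = 4
eta = np.diag([-1.0] + [1.0] * (m - 1))

A = rng.normal(size=(m, m)); A = A - A.T     # antisymmetric matrix
S = rng.normal(size=(m, m)); S = S + S.T     # symmetric matrix
gamma_k = A @ eta                            # gamma^i_p eta^{pj} = A^{ij} antisymmetric
gamma_p = eta @ S                            # eta_{ip} gamma'^p_j = S_{ij} symmetric

assert np.isclose(np.trace(gamma_k @ gamma_p), 0.0)
print("Tr(gamma gamma') = 0 for gamma in k and gamma' in p")
```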
\[prop:DecompSquare\] Let $\omega\in\Omega^n\left(N,\mathfrak{gl}\left(m\right)\right)$ be a $\mathfrak{gl}\left(m\right)$-valued $n$-form on $N$. Then $$\begin{aligned}
\left[\left(\omega\wedge\omega\right)_{\mathfrak{k}}\right]^i_j&=
\begin{cases}
\left(\omega_{\mathfrak{p}}\right)^i_p\wedge\left(\omega_{\mathfrak{p}}\right)^p_j+\left(\omega_{\mathfrak{k}}\right)^i_p\wedge\left(\omega_{\mathfrak{k}}\right)^p_j&n\text{ odd,}\cr
\left(\omega_{\mathfrak{p}}\right)^i_p\wedge\left(\omega_{\mathfrak{k}}\right)^p_j+\left(\omega_{\mathfrak{k}}\right)^i_p\wedge\left(\omega_{\mathfrak{p}}\right)^p_j&n\text{ even,}
\end{cases}\\
\left[\left(\omega\wedge\omega\right)_{\mathfrak{p}}\right]^i_j&=
\begin{cases}
\left(\omega_{\mathfrak{p}}\right)^i_p\wedge\left(\omega_{\mathfrak{k}}\right)^p_j+\left(\omega_{\mathfrak{k}}\right)^i_p\wedge\left(\omega_{\mathfrak{p}}\right)^p_j&n\text{ odd,}\cr
\left(\omega_{\mathfrak{p}}\right)^i_p\wedge\left(\omega_{\mathfrak{p}}\right)^p_j+\left(\omega_{\mathfrak{k}}\right)^i_p\wedge\left(\omega_{\mathfrak{k}}\right)^p_j&n\text{ even.}
\end{cases}
\end{aligned}$$
We have that $$\begin{aligned}
\eta_{jp}\eta^{iq}\omega^p_r\wedge\omega^r_q&=\eta_{jp}\omega^p_r\eta^{rk}\wedge\eta_{ks}\omega^s_q\eta^{iq}\\
&=\left[\left(\omega_{\mathfrak{p}}\right)^k_j-\left(\omega_{\mathfrak{k}}\right)^k_j\right]\wedge\left[\left(\omega_{\mathfrak{p}}\right)^i_k-\left(\omega_{\mathfrak{k}}\right)^i_k\right]\\
&=\left(-1\right)^n\left[\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j-\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j-\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j\right],
\end{aligned}$$ and also $$\begin{aligned}
\omega^i_k\wedge\omega^k_j&=\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j+\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j.
\end{aligned}$$ Therefore, we obtain $$\begin{aligned}
\left[\left(\omega\wedge\omega\right)_{\mathfrak{k}}\right]^i_j&=\frac{1}{2}\left(\omega^i_p\wedge\omega^p_j-\eta_{jp}\eta^{iq}\omega^p_r\wedge\omega^r_q\right)\\
&=\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j
\end{aligned}$$ for the ${\mathfrak{k}}$-projection, and $$\begin{aligned}
\left[\left(\omega\wedge\omega\right)_{\mathfrak{p}}\right]^i_j&=\frac{1}{2}\left(\omega^i_p\wedge\omega^p_j+\eta_{jp}\eta^{iq}\omega^p_r\wedge\omega^r_q\right)\\
&=\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j,
\end{aligned}$$ for the ${\mathfrak{p}}$-projection in the $n$ odd case, and $$\begin{aligned}
\left[\left(\omega\wedge\omega\right)_{\mathfrak{k}}\right]^i_j&=\frac{1}{2}\left(\omega^i_p\wedge\omega^p_j-\eta_{jp}\eta^{iq}\omega^p_r\wedge\omega^r_q\right)\\
&=\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j\\
\left[\left(\omega\wedge\omega\right)_{\mathfrak{p}}\right]^i_j&=\frac{1}{2}\left(\omega^i_p\wedge\omega^p_j+\eta_{jp}\eta^{iq}\omega^p_r\wedge\omega^r_q\right)\\
&=\left(\omega_{\mathfrak{p}}\right)^i_k\wedge\left(\omega_{\mathfrak{p}}\right)^k_j+\left(\omega_{\mathfrak{k}}\right)^i_k\wedge\left(\omega_{\mathfrak{k}}\right)^k_j
\end{aligned}$$ when $n$ is even, as required.
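The even case of the proposition (the wedge becoming the matrix product when $n=0$) can be checked numerically in the same spirit; again this is only an illustrative sketch under the same assumptions on $\eta$ and on the projections:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
eta = np.diag([1.0, -1.0, -1.0, -1.0])          # assumed signature; eta squared is the identity

k_part = lambda A: 0.5 * (A - eta @ A.T @ eta)  # assumed projection onto k
p_part = lambda A: 0.5 * (A + eta @ A.T @ eta)  # assumed projection onto p

omega = rng.normal(size=(m, m))
ok, op = k_part(omega), p_part(omega)
sq = omega @ omega                              # n = 0 is the "n even" case

# k-projection of omega^2 is op*ok + ok*op, p-projection is op*op + ok*ok
assert np.allclose(k_part(sq), op @ ok + ok @ op)
assert np.allclose(p_part(sq), op @ op + ok @ ok)
print("decomposition of omega^2 verified")
```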
As we mentioned in Remark \[rem:Signature-independence\], there is nothing special about the chosen signature in the results of this section: everything could be proved for a more general signature $\left(p,q\right)$ with $p+q=m$.
The canonical $k$-form on a bundle of forms {#sec:canonical-k-form}
-------------------------------------------
In this section we study the behaviour of a canonical form on a principal bundle with respect to the lifted action. Namely, let $\pi:P\rightarrow N$ be a principal bundle with structure group $G$, $\left(V,\rho\right)$ a $G$-representation, and suppose further that there exists a pair of surjective submersions $q:P\rightarrow M,p:N\rightarrow M$, such that the following diagram is commutative
$$\begin{diagram}
\node{P}\arrow{e,t}{\pi}\arrow{s,l}{q}\node{N}\arrow{sw,t}{p}\\
\node{M}
\end{diagram}$$
Consider now the bundle $$\overline{\tau}^k_{n,q}:\wedge^k_{n,q}\left(T^*P\right)\otimes V\rightarrow P$$ of $V$-valued $k$-forms on $P$ which are $n$-vertical with respect to $q$; this means that $\alpha\in\wedge^k_{n,q}\left(T^*P\right)\otimes V$ if and only if $$\alpha\left(X_1,\cdots,X_k\right)=0$$ whenever $n$ of the vectors $X_1,\cdots,X_k$ belong to $\mathop{\text{ker}}{Tq}$.
This bundle has a canonical $V$-valued $k$-form $\Theta^k_{n,q}$ defined through the formula $$\left.\Theta^k_{n,q}\right|_{\alpha}\left(Z_1,\cdots,Z_k\right):=\alpha\left(T_\alpha\overline{\tau}^k_{n,q}\left(Z_1\right),\cdots,T_\alpha\overline{\tau}^k_{n,q}\left(Z_k\right)\right).$$ The bundle $\wedge^k_{n,q}\left(T^*P\right)\otimes V$ is a $G$-space, with action of an element $g\in G$ defined via $$\Phi^k_g\left(\alpha\right)\left(X_1,\cdots,X_k\right):=\rho\left(g\right)\cdot\left(\alpha\left(T_{u\cdot g}R_{g^{-1}}X_1,\cdots,T_{u\cdot g}R_{g^{-1}}X_k\right)\right),$$ for $\alpha\in\wedge^k_{n,q}\left(T^*_uP\right)\otimes V$ and $X_1,\cdots,X_k\in T_{u\cdot g}P$. The canonical form has special properties regarding this action.
The canonical $k$-form $\Theta^k_{n,q}$ is $G$-equivariant.
It is necessary to recall the commutative diagram
$$\begin{diagram}
\node{T\left(\wedge^k_{n,q}\left(T^*P\right)\otimes V\right)}\arrow{e,t}{T\Phi^k_g}\arrow{s,l}{T\overline{\tau}^k_{n,q}}\node{T\left(\wedge^k_{n,q}\left(T^*P\right)\otimes V\right)}\arrow{s,r}{T\overline{\tau}^k_{n,q}}\\
\node{TP}\arrow{e,b}{TR_g}\node{TP}
\end{diagram}$$
for every $g\in G$. Then $$\begin{aligned}
\left(\Phi_g^k\right)^*\left(\left.\Theta^k_{n,q}\right|_{\Phi^k_g\left(\alpha\right)}\right)&\left(Z_1,\cdots,Z_k\right)\\
&=\left.\Theta^k_{n,q}\right|_{\Phi^k_g\left(\alpha\right)}\left(T_\alpha \Phi^k_gZ_1,\cdots,T_\alpha\Phi^k_gZ_k\right)\\
&=\Phi^k_g\left(\alpha\right)\left(\left(T_{\Phi^k_g\left(\alpha\right)}\overline{\tau}^k_{n,q}\circ T_\alpha \Phi^k_g\right)Z_1,\cdots,\left(T_{\Phi^k_g\left(\alpha\right)}\overline{\tau}^k_{n,q}\circ T_\alpha\Phi^k_g\right)Z_k\right)\\
&=\Phi^k_g\left(\alpha\right)\left(\left(T_{u} R_{g}\circ T_{\alpha}\overline{\tau}^k_{n,q}\right)Z_1,\cdots,\left(T_{u} R_{g}\circ T_{\alpha}\overline{\tau}^k_{n,q}\right)Z_k\right)\\
&=\rho\left(g\right)\cdot\left(\alpha\left(T_{u\cdot g}R_{g^{-1}}\left(T_{u} R_{g}\circ T_{\alpha}\overline{\tau}^k_{n,q}\right)Z_1,\cdots,T_{u\cdot g}R_{g^{-1}}\left(T_{u} R_{g}\circ T_{\alpha}\overline{\tau}^k_{n,q}\right)Z_k\right)\right)\\
&=\rho\left(g\right)\cdot\left(\alpha\left(T_{\alpha}\overline{\tau}^k_{n,q}Z_1,\cdots,T_{\alpha}\overline{\tau}^k_{n,q}Z_k\right)\right)\\
&=\rho\left(g\right)\cdot\left(\left.\Theta^k_{n,q}\right|_{\alpha}\left(Z_1,\cdots,Z_k\right)\right),
\end{aligned}$$ as required.
Local expressions on $J^1\tau$ {#App:LocalExpressions}
==============================
Let us recall some local expressions regarding canonical coordinates on $J^1\tau$; we quote almost word for word Appendix $B.3.4$ of [@capriotti14:_differ_palat].
Let $U\subset M$ be a coordinate neighborhood and $\tau:LM\rightarrow M$ the canonical projection of the frame bundle; on $\tau^{-1}\left(U\right)$ one can define the coordinate functions $$u\in \tau^{-1}\left(U\right)\mapsto\left(x^\mu\left(u\right),e^\nu_k\left(u\right)\right)$$ where $x^\mu\equiv x^\mu\circ \tau$ and $$u=\left\{e_1^\mu\left.\frac{\partial}{\partial x^\mu}\right|_{\tau\left(u\right)},\cdots,e_n^\mu\left.\frac{\partial}{\partial x^\mu}\right|_{\tau\left(u\right)}\right\}.$$ If $\bar{U}\subset M$ is another coordinate neighborhood such that $U\cap\bar{U}\not=\emptyset$ and $u\in \tau^{-1}\left(U\cap\bar{U}\right)$, then $$u=\left\{\bar{e}_1^\mu\left.\frac{\partial}{\partial \bar{x}^\mu}\right|_{\tau\left(u\right)},\cdots,\bar{e}_n^\mu\left.\frac{\partial}{\partial \bar{x}^\mu}\right|_{\tau\left(u\right)}\right\},$$ and the coordinate change on $\tau^{-1}\left(U\right)\cap\tau^{-1}\left(\bar{U}\right)\subset LM$ is given by $$\begin{aligned}
\bar{x}^\mu&=\bar{x}^\mu\left(x^1,\cdots,x^n\right)\\
\bar{e}_k^\mu&=\frac{\partial\bar{x}^\mu}{\partial x^\nu}e_k^\nu.\end{aligned}$$ On the jet space $J^1p$ of any bundle $p:E\rightarrow M$, the change of adapted coordinates given by the rule $\left(x^\mu,u^A\right)\mapsto\left(\bar{x}^\nu\left(x\right),\bar{u}^B\left(x,u\right)\right)$ on $E$ transforms the induced coordinates on $J^1p$ according to [@saunders89:_geomet_jet_bundl]\
$$\bar{u}^A_\mu=\left(\frac{\partial\bar{u}^A}{\partial u^B}u^B_\nu+\frac{\partial\bar{u}^A}{\partial{x}^\nu}\right)\frac{\partial x^\nu}{\partial\bar{x}^\mu}.$$ By supposing that the induced coordinates on $J^1\tau$ are in the present case $\left(x^\mu,e^\mu_k,e^\mu_{k\nu}\right)$ and $\left(\bar{x}^\mu,\bar{e}^\mu_k,\bar{e}^\mu_{k\nu}\right)$, we will have that $$\bar{e}^\mu_{k\nu}=\left(\frac{\partial\bar{x}^\mu}{\partial x^\sigma}e^\sigma_{k\rho}+\frac{\partial^2\bar{x}^\mu}{\partial{x}^\rho\partial{x}^\sigma}{e}^\sigma_k\right)\frac{\partial x^\rho}{\partial\bar{x}^\nu}.$$ Note that the functions $$A_{\mu\nu}^\sigma:=-e^\sigma_{k\nu}e^k_\mu,$$ where the quantities $e^k_\mu$ are uniquely determined by the conditions $$e_\mu^ke_k^\nu=\delta^\nu_\mu,$$ transform according to $$\bar{A}^\mu_{\rho\gamma}=-\frac{\partial\bar{x}^\mu}{\partial x^\nu}\frac{\partial x^\sigma}{\partial\bar{x}^\alpha}e^\alpha_{k\sigma}\bar{e}^k_\rho-\frac{\partial^2\bar{x}^\mu}{\partial{x}^\rho\partial{x}^\alpha}\frac{\partial x^\alpha}{\partial\bar{x}^\gamma}.$$ But by using the previous definition, we can find the way in which $e^k_\mu$ and $\bar{e}^k_\mu$ are related, namely $$\bar{e}^k_\mu=\frac{\partial x^\gamma}{\partial \bar{x}^\mu}e_\gamma^k$$ and therefore $$\bar{A}^\mu_{\delta\nu}=\frac{\partial\bar{x}^\mu}{\partial x^\sigma}\frac{\partial x^\rho}{\partial\bar{x}^\nu}\frac{\partial x^\gamma}{\partial\bar{x}^\delta}A^\sigma_{\gamma\rho}-\frac{\partial^2\bar{x}^\mu}{\partial{x}^\rho\partial{x}^\gamma}\frac{\partial x^\rho}{\partial \bar{x}^\nu}\frac{\partial x^\gamma}{\partial \bar{x}^\delta},$$ which is the transformation rule for the Christoffel symbols, provided the identity $$\frac{\partial^2\bar{x}^\sigma}{\partial x^\rho\partial x^\gamma}\frac{\partial x^\rho}{\partial \bar{x}^\mu}\frac{\partial x^\gamma}{\partial \bar{x}^\nu}=-\frac{\partial^2 x^\rho}{\partial\bar{x}^\mu \partial\bar{x}^\nu}\frac{\partial \bar{x}^\sigma}{\partial x^\rho}$$ is used. We are now ready to calculate local expressions for the previously introduced canonical forms. First we have that $$\theta^k=e^k_\mu dx^\mu$$ determines the components of the tautological form on $J^1\tau$, and the canonical connection form results in $$\omega^k_l=e^k_\mu\left(d e^\mu_l-e^\mu_{l\sigma}d x^\sigma\right).$$ It is immediate to show that $$\bar{\theta}^k=\theta^k,$$ and moreover $$\begin{aligned}
\bar{\omega}^k_l&=\bar{e}^k_\mu\left(d \bar{e}^\mu_l-\bar{e}^\mu_{l\nu}d \bar{x}^\nu\right)\\
&=\frac{\partial x^\gamma}{\partial\bar{x}^\mu}e^k_\gamma\left[d\left(\frac{\partial\bar{x}^\mu}{\partial x^\gamma}e^\gamma_l\right)-\left(\frac{\partial\bar{x}^\mu}{\partial x^\sigma}e^\sigma_{l\rho}+\frac{\partial^2\bar{x}^\mu}{\partial{x}^\rho\partial{x}^\sigma}{e}^\sigma_l\right)\frac{\partial x^\rho}{\partial\bar{x}^\nu}d\bar{x}^\nu\right]\\
&=\frac{\partial x^\gamma}{\partial\bar{x}^\mu}e^k_\gamma\left(\frac{\partial\bar{x}^\mu}{\partial x^\gamma}d e^\gamma_l-\frac{\partial\bar{x}^\mu}{\partial x^\sigma}e^\sigma_{l\rho}d{x}^\rho\right)\\
&=e^k_\gamma\left(d e^\gamma_l-e^\gamma_{l\rho}d{x}^\rho\right)\\
&=\omega^k_l.\end{aligned}$$ The associated curvature form can be calculated according to the formula $$\begin{aligned}
\Omega^k_l&:=d\omega^k_l+\omega^k_p\wedge\omega^p_l\\
&=d\left[e^k_\gamma\left(d e^\gamma_l-e^\gamma_{l\rho}d{x}^\rho\right)\right]+e^k_\gamma\left(d e^\gamma_p-e^\gamma_{p\sigma}d{x}^\sigma\right)\wedge\left[e^p_\sigma\left(d e^\sigma_l-e^\sigma_{l\rho}d{x}^\rho\right)\right]\\
&=d e^k_\gamma\wedge\left(d e^\gamma_l-e^\gamma_{l\rho}d{x}^\rho\right)-e^k_\gamma d e^\gamma_{l\rho}\wedge d{x}^\rho+\\
&\qquad+e^k_\gamma e^p_\sigma\left[d e^\gamma_p\wedge d e^\sigma_l+\left(e^\gamma_{p\beta}d e^\sigma_l\wedge d x^\beta-e^\sigma_{l\beta}d e^\gamma_p\wedge d x^\beta\right)+e^\gamma_{p\beta}e^\sigma_{l\delta}d x^\beta\wedge d x^\delta\right]\\
&=-e^\gamma_{l\rho}d e^k_\gamma\wedge d{x}^\rho-e^k_\gamma d e^\gamma_{l\rho}\wedge d{x}^\rho+\\
&\qquad+e^k_\gamma e^p_\sigma\left[\left(e^\gamma_{p\beta}d e^\sigma_l\wedge d x^\beta-e^\sigma_{l\beta}d e^\gamma_p\wedge d x^\beta\right)+e^\gamma_{p\beta}e^\sigma_{l\delta}d x^\beta\wedge d x^\delta\right]\end{aligned}$$ where, in the passage from the third to the fourth line, we used the identity $$d e^k_\gamma\wedge d e^\gamma_l+e^k_\gamma e^p_\sigma d e^\gamma_p\wedge d e^\sigma_l=0.$$ Because of the identity $$e_\gamma^kd e^\gamma_p=-e^\gamma_pd e_\gamma^k$$ we can further reduce the expression for $\Omega^k_l$: $$\Omega^k_l =e^k_\gamma\left[-d e^\gamma_{l\rho}\wedge d{x}^\rho+e^p_\sigma\left(e^\gamma_{p\beta}d e^\sigma_l\wedge d x^\beta+e^\gamma_{p\beta}e^\sigma_{l\delta}d x^\beta\wedge d x^\delta\right)\right].$$ Note that $$\label{Eq:CurvaturaIntermedia}
e^l_\mu\Omega^k_l=e^k_\gamma\left(d A^\gamma_{\mu\rho}\wedge d{x}^\rho+A^\gamma_{\sigma\beta}A^\sigma_{\mu\delta}d x^\beta\wedge d x^\delta\right),$$ so that if we fix a connection $\Gamma$ through its Christoffel symbols $\left(\Gamma^\mu_{\nu\sigma}\right)$ in the canonical basis $\left\{\partial/\partial x^\mu\right\}$, then we will have that $e^\gamma_k=\delta^\gamma_k$ and this formula reduces to $$\Omega^\mu_\nu:=e_k^\mu e^l_\nu\Omega^k_l=d\Gamma^\mu_{\nu\rho}\wedge d{x}^\rho+\Gamma^\mu_{\sigma\beta}\Gamma^\sigma_{\nu\delta}d x^\beta\wedge d x^\delta,$$ providing us with the usual local coordinate formula for the curvature of the connection.
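As a concrete sanity check of this coordinate formula (not part of the original derivation), one can take the flat plane in polar coordinates, whose nonvanishing Christoffel symbols are $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=\Gamma^\theta_{\theta r}=1/r$, and verify symbolically that the curvature two-form vanishes; the sketch below assumes the convention $dx^\beta\wedge dx^\delta\left(\partial_a,\partial_b\right)=\delta^\beta_a\delta^\delta_b-\delta^\beta_b\delta^\delta_a$ and uses sympy:

```python
import sympy as sp

# Flat plane in polar coordinates (r, th): the curvature two-form must vanish identically.
r, th = sp.symbols('r th', positive=True)
x = [r, th]
n = 2

# Christoffel symbols Gamma[mu][nu][rho] = Gamma^mu_{nu rho} of the metric dr^2 + r^2 dth^2
Gamma = [[[sp.Integer(0)]*n for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = -r                      # Gamma^r_{th th}
Gamma[1][0][1] = Gamma[1][1][0] = 1/r    # Gamma^th_{r th} = Gamma^th_{th r}

def Omega(mu, nu, a, b):
    """Component Omega^mu_nu(d_a, d_b) of
       d Gamma^mu_{nu rho} ^ dx^rho + Gamma^mu_{sigma beta} Gamma^sigma_{nu delta} dx^beta ^ dx^delta."""
    dpart = sp.diff(Gamma[mu][nu][b], x[a]) - sp.diff(Gamma[mu][nu][a], x[b])
    qpart = sum(Gamma[mu][s][a]*Gamma[s][nu][b] - Gamma[mu][s][b]*Gamma[s][nu][a] for s in range(n))
    return sp.simplify(dpart + qpart)

print([Omega(mu, nu, 0, 1) for mu in range(n) for nu in range(n)])   # [0, 0, 0, 0]
```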
Next we can provide a local expression for the map $\tilde\sigma_\Gamma:LM\rightarrow J^1\tau$. First we realize that a connection $\Gamma$ is locally described by a map $$\Gamma:x^\mu\mapsto\left(x^\mu,\Gamma^\sigma_{\mu\nu}\left(x\right)\right);$$ in these terms, the map $\tilde\sigma_\Gamma$ is given by $$\tilde\sigma_\Gamma:\left(x^\mu,e^\nu_k\right)\mapsto\left(x^\mu,e^\nu_k,-e^\mu_k\Gamma_{\mu\nu}^\sigma\left(x\right)\right).$$ It is convenient to stress an abuse of language committed here: we denote by the same symbol $\tilde\sigma_\Gamma$ both the map itself and its local version. Nevertheless, we obtain the following local expression for the connection form associated to $\Gamma$, namely $$\left(\tilde\sigma_\Gamma^*\omega\right)_l^k=e^k_\mu\left(d e^\mu_l+e^\sigma_l\Gamma^\mu_{\sigma\rho}\left(x\right)d x^\rho\right).$$ In our approach this equation is equivalent to the so-called *tetrad postulate*, which relates the components *of the same connection* in the two representations provided by the theory developed here: as a section $\Gamma$ of the bundle of connections, and as an equivariant map $\tilde\sigma_\Gamma:LM\rightarrow J^1\tau$ such that the following diagram commutes $$\begin{diagram}
\node{LM}\arrow{e,t}{\tilde\sigma_\Gamma}\arrow{s,l}{\tau}\node{J^1\tau}\arrow{s,r}{p_{GL\left(m\right)}}\\
\node{M}\arrow{e,b}{\Gamma}\node{C\left(LM\right)}
\end{diagram}$$ According to the previous discussion, the pullback of these forms along the section $s:x^\mu\mapsto\left(x^\mu,e_k^\nu\left(x\right)\right)$ provides us with the expression for the connection forms associated to the underlying moving frame $$e_k\left(x\right):=e^\nu_k\left(x\right)\frac{\partial}{\partial x^\nu};$$ in fact, given another such section $\bar{s}:x^\mu\mapsto\left(x^\mu,\bar{e}_k^\nu\left(x\right)\right)$, there exists a map $g:x^\mu\mapsto\left(g^k_l\left(x\right)\right)\in GL\left(m\right)$ relating them, namely $$\bar{e}^\mu_k\left(x\right)=g^l_k\left(x\right)e_l^\mu\left(x\right)$$ and so $$\bar{s}^*\left(\tilde\sigma_\Gamma^*\omega\right)_l^k=h^k_pg^q_ls^*\left(\tilde\sigma_\Gamma^*\omega\right)_q^p+h^k_pd g^p_l,$$ where $h^k_p$ denotes the inverse of the matrix $g^k_p$, that is, $h^k_pg^p_l=\delta^k_l$. This allows us to answer the concerns raised in the introduction: the Palatini Lagrangian is a global form on $J^1\tau$, but this is no longer true for its pullback along a local section. Namely, its global description requires the inclusion of information about the $1$-jet of the vielbein involved in the local representation of the connection.
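The local expression $\left(\tilde\sigma_\Gamma^*\omega\right)^k_l=e^k_\mu\left(de^\mu_l+e^\sigma_l\Gamma^\mu_{\sigma\rho}dx^\rho\right)$ can be illustrated on the same flat example as above, now with the orthonormal moving frame $e_1=\partial_r$, $e_2=\left(1/r\right)\partial_\theta$; the following sympy sketch (an illustration only, with the same Christoffel symbols as in the previous snippet) recovers the familiar $\mathfrak{so}\left(2\right)$-valued connection form $\pm d\theta$:

```python
import sympy as sp

r, th = sp.symbols('r th', positive=True)
x = [r, th]

# Christoffel symbols of the flat plane in polar coordinates (same example as above)
Gamma = [[[sp.Integer(0)]*2 for _ in range(2)] for _ in range(2)]
Gamma[0][1][1] = -r                       # Gamma^r_{th th}
Gamma[1][0][1] = Gamma[1][1][0] = 1/r     # Gamma^th_{r th} = Gamma^th_{th r}

# moving frame e_1 = d/dr, e_2 = (1/r) d/dth;  E[mu, l] = e^mu_l, Einv[k, mu] = e^k_mu
E = sp.Matrix([[1, 0], [0, 1/r]])
Einv = E.inv()

def omega_kl(k, l):
    """[dr, dth] components of (sigma_Gamma^* omega)^k_l = e^k_mu (d e^mu_l + e^sigma_l Gamma^mu_{sigma rho} dx^rho)."""
    comp = [sp.Integer(0), sp.Integer(0)]
    for mu in range(2):
        for rho in range(2):
            term = sp.diff(E[mu, l], x[rho]) + sum(E[s, l]*Gamma[mu][s][rho] for s in range(2))
            comp[rho] += Einv[k, mu]*term
    return [sp.simplify(c) for c in comp]

print(omega_kl(0, 1), omega_kl(1, 0))   # [0, -1] and [0, 1]: the so(2)-valued form -dth, +dth
```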
The $\mathfrak{gl}\left(m\right)\otimes\left({\mathbb{R}}^m\right)^*$-valued difference function $C$ {#sec:diff-tens-c_i}
====================================================================================================
In this section we will introduce a $\mathfrak{gl}\left(m\right)\otimes\left({\mathbb{R}}^m\right)^*$-valued function $C$ on $J^1\tau$, associated to a *torsionless* connection $\sigma\in\Omega^1\left(LM,\mathfrak{gl}\left(m\right)\right)$. In fact, we define $$C\left(\xi\right):=\omega\left(\left(B\left(\xi\right)\right)^1\right)$$ for every $\xi\in{\mathbb{R}}^m$; in this formula $B\left(\xi\right)\in\mathfrak{X}\left(LM\right)$ is the standard horizontal vector field determined by $\xi$ and $\sigma$.
As we will see below, $C\left(j_x^1s\right)$ gives us the difference between the connection $\sigma$ evaluated at $u=s\left(x\right)$ and the connection at a point $u$ corresponding to $j_x^1s$. In our interpretation of Palatini gravity as a Griffiths variational problem on $J^1\tau$, this function corresponds to the variables determined by the well-known trick [@AshtekarNoPerturbative p. 44] of subtracting a connection with zero torsion from the connection variables.
We can give a coordinate version of these functions. In fact, because of the formula , $$\left.\omega\right|_{j_x^1s}=\left[T_{j_x^1s}\tau_{10}-T_xs\circ T_{j_x^1s}\tau_1\right]_{\mathfrak{gl}\left(m\right)},$$ we have that $$\left.\omega\right|_{j_x^1s}\left(\left(B\left(e_i\right)\right)^1\right)=\left[B\left(e_i\right)-T_xs\left(s\left(x\right)\left(e_i\right)\right)\right]_{\mathfrak{gl}\left(m\right)}.$$ Using the local expressions calculated in Section \[sec:local-expr-lift\], we have that $$\begin{aligned}
T_xs\left(s\left(x\right)\left(e_i\right)\right)&=T_xs\left(e_i^\mu\frac{\partial}{\partial x^\mu}\right)\\
&=e^\mu_i\left(\frac{\partial}{\partial x^\mu}+e^\nu_{j\mu}\frac{\partial}{\partial e^\nu_j}\right),\end{aligned}$$ and so $$\left.\omega\right|_{j_x^1s}\left(\left(B\left(e_i\right)\right)^1\right)=\left[e^\mu_i\left(e^\nu_{j\mu}-e^\sigma_j\Xi^\nu_{\mu\sigma}\right)\frac{\partial}{\partial e^\nu_j}\right]_{\mathfrak{gl}\left(m\right)}=\left[e^\mu_ie_\nu^k\left(e^\nu_{j\mu}-e^\sigma_j\Xi^\nu_{\mu\sigma}\right)\left(E_k^j\right)_{LM}\right]_{\mathfrak{gl}\left(m\right)};$$ therefore $$C=e^\mu_ie_\nu^k\left(e^\nu_{j\mu}-e^\sigma_j\Xi^\nu_{\mu\sigma}\right)E_k^j\otimes e^i,$$ for $\left\{e^1,\cdots,e^m\right\}\subset\left({\mathbb{R}}^m\right)^*$ the dual basis of $\left\{e_1,\cdots,e_m\right\}$.
\[lem:CVerticalDuality\] Let $C^k_{ij}$ be the coordinate functions of $C$ with respect to the basis $\left\{E^k_i\otimes e^j\right\}$, where $\left\{e^j\right\}\subset\left({\mathbb{R}}^m\right)^*$ is the dual basis to $\left\{e_i\right\}$. Then $$\left(\theta^i,\left(E^j_l\right)_{LM}\right)^V\cdot C^p_{qr}=\delta^i_r\delta^j_q\delta_l^p.$$
It follows from the local expressions of these objects.
In order to formulate the following result, let us denote by $\rho$ the product representation of the adjoint and the transpose action of $GL\left(m\right)$ on $\mathfrak{gl}\left(m\right)\otimes\left({\mathbb{R}}^m\right)^*$. Given a representation $\left(V,\rho\right)$ of a Lie group $G$ on the vector space $V$, a $V$-valued function $f$ on a $G$-principal bundle $P$ is said to be *of type $\rho$* if and only if $$f\left(u\cdot g\right)=\rho\left(g^{-1}\right)\cdot f\left(u\right)$$ for every $u\in P$.
The map $C$ is a $\mathfrak{gl}\left(m\right)\otimes\left({\mathbb{R}}^m\right)^*$-valued function of type $\rho$.
Let $\xi\in{\mathbb{R}}^m$, $j_x^1s\in J^1\tau,u=s\left(x\right)\in LM$ and $g\in GL\left(m\right)$; recalling that the lift $j^1\Phi_t$ of the flow $\Phi_t:LM\rightarrow LM$ of the vector field $B\left(\xi\right)$ to $J^1\tau$ is the flow of $\left(B\left(\xi\right)\right)^1$, we have that $$\left(R_{g*}B\left(\xi\right)\right)^1=R_{g*}\left(B\left(\xi\right)\right)^1.$$ Also, it must be remembered that $$R_{g*}\left(B\left(\xi\right)\right)=B\left(g^{-1}\cdot\xi\right).$$ Therefore $$\begin{aligned}
C_{\left(j_x^1s\right)\cdot g}\left(\xi\right)&=\left.\omega\right|_{\left(j_x^1s\right)\cdot g}\left(\left.\left(B\left(\xi\right)\right)\right|^1_{\left(j_x^1s\right)\cdot g}\right)\\
&=\left[\left(T_{\left(j_x^1s\right)\cdot g}R_{g}\right)^*\left.\omega\right|_{\left(j_x^1s\right)\cdot g}\right]\left(T_{\left(j_x^1s\right)\cdot g}R_{g^{-1}}\left(\left.\left(B\left(\xi\right)\right)^1\right|_{\left(j_x^1s\right)\cdot g}\right)\right)\\
&=\left(\mathop{\text{Ad}_{g^{-1}}}{\left.\omega\right|_{j_x^1s}}\right)\left(\left(T_{u\cdot g}R_{g^{-1}}\left(\left.B\left(\xi\right)\right|_{u\cdot g}\right)\right)^1\right)\\
&=\left(\mathop{\text{Ad}_{g^{-1}}}{\left.\omega\right|_{j_x^1s}}\right)\left(\left(\left.B\left(g\cdot\xi\right)\right|_{u}\right)^1\right),
\end{aligned}$$ namely $$C_{\left(j_x^1s\right)\cdot g}\left(\xi\right)=\mathop{\text{Ad}_{g^{-1}}}{\left(C_{j_x^1s}\left(g\cdot\xi\right)\right)}$$ for any $j_x^1s\in J^1\tau,\xi\in{\mathbb{R}}^m$.
\[rem:CAsASectionOfE\] As pointed out in [@KN1 p. 76], the previous lemma means in particular that $C$ can be seen as a section of the bundle $E$ associated to the principal bundle $p^{J^1\tau}_{GL\left(m\right)}:J^1\tau\rightarrow C\left(LM\right)$ through the representation $\left(\mathfrak{gl}\left(m\right)\otimes\left({\mathbb{R}}^m\right)^*,\rho\right)$.
Now, for any $A\in\mathfrak{gl}\left(m\right)$ and $j_x^1s\in J^1\tau$ we have that $$\begin{aligned}
\left.A_{J^1\tau}\right|_{j_x^1s}\cdot C&=\left.\frac{d}{dt}\right|_{t=0}\left[C_{\left(j_x^1s\right)\cdot\left(\exp{\left(-tA\right)}\right)}\right]\\
&=\left.\frac{d}{dt}\right|_{t=0}\left[\rho\left(\exp{\left(-tA\right)}\cdot C_{j_x^1s}\right)\right]\\
&=-\left[A,C_{j_x^1s}\right]+A^*C_{j_x^1s}.\end{aligned}$$ We can use this formula to establish the following result, which is a consequence of Lemma 1 in [@KN1 p. 97]. It will be relevant in the evaluation of the curvature form $\Omega^j_k$ on the vector fields $\left(B\left(e_i\right)\right)^1$.
\[cor:VerticalDerivativesFunctonDifference\] Let $Z\in\mathfrak{X}\left(J^1\tau\right)$ be an arbitrary vector field, and consider $j_x^1s\in J^1\tau$; further, let $v_\omega\left(Z\right)$ be the vertical part of $Z$ with respect to the connection $\omega$ on the bundle $J^1\tau\rightarrow C\left(LM\right)$. Then
1. It results that $$\left.v_\omega\left(Z\right)\right|_{j_x^1s}\cdot C=-\left[\left.\omega\right|_{j_x^1s}\left(Z\right),C_{j_x^1s}\right]+\left(\left.\omega\right|_{j_x^1s}\left(Z\right)\right)^*C_{j_x^1s}.$$
2. For any pair $Z_1,Z_2$ of horizontal vector fields for $\omega$ on $J^1\tau$, we have that $$\left.v_\omega\left(\left[Z_1,Z_2\right]\right)\right|_{j_x^1s}\cdot C=-2\left[\left.\Omega\right|_{j_x^1s}\left(Z_1,Z_2\right),C_{j_x^1s}\right]+2\left(\left.\Omega\right|_{j_x^1s}\left(Z_1,Z_2\right)\right)^*C_{j_x^1s}.$$
The contraction of elements of $B$ with canonical forms on $W_{{\mathcal L}}$ {#sec:contr-elem-b}
=============================================================================
We will need to know how the elements of the basis $B$ contract with the forms $\Omega,T,\theta,\omega$ in the calculation of the equations of motion associated to the multisymplectic version of Palatini gravity. The present section is devoted to these computations.
Contraction of the curvature $\Omega$ with elements of $B$
----------------------------------------------------------
Now, let us evaluate vectors $\left(B\left(e_i\right)\right)^1$ on the curvature $\Omega$; we have that $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)=d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)+C^k_{pi}C^p_{lj}-C^k_{pj}C^p_{li}$$ where $C^i_{jk}$ are the components of $C$ in the basis $\left\{E^i_j\otimes e^k\right\}$. Now, the differential of the form $\omega^k_l$ reads $$d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)=
\left(B\left(e_i\right)\right)^1\cdot C^k_{lj}-\left(B\left(e_j\right)\right)^1\cdot C^k_{li}-\omega^k_l\left(\left[\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right]\right).$$ Furthermore, using the identity $$\left[Y^1,Z^1\right]=\left(\left[Y,Z\right]\right)^1$$ for any pair $Y,Z\in\mathfrak{X}\left(LM\right)$, we can rewrite the last term as follows $$\omega^k_l\left(\left[\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right]\right)=\omega^k_l\left(\left(\left[B\left(e_i\right),B\left(e_j\right)\right]\right)^1\right);$$ recalling that $\sigma$ is torsionless, we have that $\left[B\left(e_i\right),B\left(e_j\right)\right]$ is vertical, and so [@KN1 Cor. (5.3)] $$\left[B\left(e_i\right),B\left(e_j\right)\right]=-\left(R\left(B\left(e_i\right),B\left(e_j\right)\right)\right)_{LM},$$ for $R$ the curvature form of $\sigma$, so that $$\omega^k_l\left(\left[\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right]\right)=-\omega^k_l\left(\left(R\left(B\left(e_i\right),B\left(e_j\right)\right)\right)_{J^1\tau}\right)=-R^k_l\left(B\left(e_i\right),B\left(e_j\right)\right)$$ where the identity $\left(A_{LM}\right)^1=A_{J^1\tau}$, valid for any $A\in\mathfrak{gl}\left(m\right)$, was used. Finally $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)=\left(B\left(e_i\right)\right)^1\cdot C^k_{lj}-\left(B\left(e_j\right)\right)^1\cdot C^k_{li}+R^k_l\left(B\left(e_i\right),B\left(e_j\right)\right)+C^k_{pi}C^p_{lj}-C^k_{pj}C^p_{li}.$$ This formula is the analogue of Equation $\left(2\right)$ in [@AshtekarNoPerturbative p. 44].
Additionally, we have that $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)+C_{pi}^kA^p_l-A^k_pC^p_{li}$$ for any $A\in\mathfrak{gl}\left(m\right)$. Again $$d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=-\left(A_{LM}\right)^1\cdot C^k_{li}-\omega^k_l\left(\left[\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right]\right),$$ and from $$\left[\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right]=\left(\left[B\left(e_i\right),A_{LM}\right]\right)^1=-\left(B\left(Ae_i\right)\right)^1=-A_i^p\left(B\left(e_p\right)\right)^1$$ we can conclude with the formula $$d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=-\left(A_{LM}\right)^1\cdot C^k_{li}+A^p_iC^k_{lp},$$ namely $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=-\left(A_{LM}\right)^1\cdot C^k_{li}+A^p_iC^k_{lp}+C_{pi}^kA^p_l-A^k_pC^p_{li}.$$ But, from Equation we have that $$\left(A_{LM}\right)^1\cdot C^k_{li}=-A^p_lC^k_{pi}+C^p_{li}A_p^k+A^p_iC_{lp}^k$$ and so $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=0.$$ It can also be proved in a more direct fashion by realizing that $\left(A_{LM}\right)^1=A_{J^1\tau}$ and $\Omega$ is a covariant derivative (and so it vanishes on vertical vector fields for the projection $p_{GL\left(m\right)}^{J^1\tau}:J^1\tau\rightarrow J^1\tau/GL\left(m\right)=:C\left(LM\right)$).
Finally, we will compute the contraction of $\Omega^k_l$ with a pair consisting of the vector field $\left(B\left(e_i\right)\right)^1$ and $\left(\theta^j,A_{LM}\right)^V$. Recalling that the forms $\theta,\omega$ are semibasic with respect to the projection $\tau_{10}:J^1\tau\rightarrow LM$, we have that $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right).$$ Now $$d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=-\left(\theta^j,A_{LM}\right)^V\cdot C^k_{li}-\omega^k_l\left(\left[\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right]\right),$$ but from Lemma \[lem:LiftsBrackets\], we obtain that $$\left[\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right]\in\ker{T\tau_{10}},$$ so for $A=E^p_q$, $$d\omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=-\left(\theta^j,\left(E^p_q\right)_{LM}\right)^V\cdot C^k_{li}=-\delta^j_i\delta^p_l\delta^k_q;$$ then we conclude that $$\Omega^k_l\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,\left(E^p_q\right)_{LM}\right)^V\right)=-\delta^j_i\delta^p_l\delta^k_q.$$
Also, we will have that $$\Omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right)=d\omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right)$$ because $\omega^q_r\left(\left(E^j_k\right)_{J^1\tau}\right)=\delta^q_k\delta^j_r$; additionally $$d\omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right)=-\omega^q_p\left(\left[\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right]\right).$$ Now from the fact that $$\left[E^i_j,E^k_l\right]=\delta^k_jE^i_l-\delta^i_lE^k_j,$$ we obtain $$\left[\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right]=-\left(\delta^k_jE^i_l-\delta^i_lE^k_j\right)_{J^1\tau}=\left(\delta^i_lE^k_j-\delta^k_jE^i_l\right)_{J^1\tau},$$ namely $$\Omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right)=\left(\delta^k_jE^i_l-\delta^i_lE^k_j\right)^q_p=\delta^k_j\delta^i_p\delta^q_l-\delta^i_l\delta^k_p\delta^q_j.$$
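The commutation relations $\left[E^i_j,E^k_l\right]=\delta^k_jE^i_l-\delta^i_lE^k_j$ used above can be checked directly for the elementary matrices $\left(E^i_j\right)^a_{\ b}=\delta^{ia}\delta_{jb}$; a small numpy verification (an illustration only) is:

```python
import numpy as np

m = 4
def E(i, j):
    """Elementary matrix E^i_j: 1 in row i, column j (0-based), zero elsewhere."""
    M = np.zeros((m, m)); M[i, j] = 1.0
    return M

d = lambda a, b: 1.0 if a == b else 0.0
for i in range(m):
    for j in range(m):
        for k in range(m):
            for l in range(m):
                lhs = E(i, j) @ E(k, l) - E(k, l) @ E(i, j)
                rhs = d(k, j)*E(i, l) - d(i, l)*E(k, j)
                assert np.array_equal(lhs, rhs)
print("commutator relations of gl(%d) verified" % m)
```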
Another possible contraction is an infinitesimal generator with a vertical vector for $\tau_{10}$; it becomes $$\Omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right)=d\omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right).$$ On the other hand, $$d\omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right)=-\omega^q_p\left(\left[\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right]\right)=0$$ because $$\left[\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right]=\left(\theta^r,\left[\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right]\right)^V$$ as a consequence of Lemma \[lem:LiftsBrackets\]; so $$\Omega^q_p\left(\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right)=0.$$
Contraction of the torsion $T$ with elements of $B$
---------------------------------------------------
Let us calculate the contraction of the elements of $B$ with the universal torsion $T$. We have that $$T^k\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)=d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)+C^k_{pi}\delta^p_j-C^k_{pj}\delta^p_i,$$ because $\left(B\left(e_i\right)\right)^1\lrcorner\theta^k=\delta^k_i$. Additionally, $$d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)=
-\theta^k\left(\left[\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right]\right)=0$$ using the fact that, because $\sigma$ is torsionless, the bracket $$\left[\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right]=\left(\left[B\left(e_i\right),B\left(e_j\right)\right]\right)^1=\left(R\left(B\left(e_i\right),B\left(e_j\right)\right)\right)_{J^1\tau}$$ is a vector field tangent to the orbits of the action of $GL\left(m\right)$ on $J^1\tau$. Then $$T^k\left(\left(B\left(e_i\right)\right)^1,\left(B\left(e_j\right)\right)^1\right)=C^k_{ji}-C^k_{ij}.$$
On the other hand, we can calculate $$\begin{aligned}
T^k\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)&=d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)-A^k_p\delta^p_i\\
&=d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)-A^k_i.\end{aligned}$$ The differential becomes $$d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=-\theta^k\left(\left[\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right]\right)$$ where we used that $\theta^k\left(A_{J^1\tau}\right)=0$.
Therefore, from $$\left[\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right]=\left(\left[B\left(e_i\right),A_{LM}\right]\right)^1=-\left(B\left(Ae_i\right)\right)^1=-A_i^p\left(B\left(e_p\right)\right)^1$$ we can conclude that $$d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=A^k_i,$$ and so $$T^k\left(\left(B\left(e_i\right)\right)^1,\left(A_{LM}\right)^1\right)=0.$$ As before, this is only a check of the fact that $T^k$ is an exterior covariant derivative [@KN1] and, as such, must vanish on infinitesimal generators of the $GL\left(m\right)$-action on $J^1\tau$.
Now let us calculate the contraction with elements tangent to the fibers of $\tau_{10}$. It results that $$T^k\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right),$$ as before. Moreover, $$d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=-\theta^k\left(\left[\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right]\right),$$ but from Lemma \[lem:LiftsBrackets\], we obtain that $$\left[\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right]\in\ker{T\tau_{10}},$$ so $$d\theta^k\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=0,$$ and consequently $$T^k\left(\left(B\left(e_i\right)\right)^1,\left(\theta^j,A_{LM}\right)^V\right)=0.$$
In a similar fashion it can be proved that $$T^q\left(\left(E^i_j\right)_{J^1\tau},\left(E^k_l\right)_{J^1\tau}\right)=0=T^q\left(\left(E^i_j\right)_{J^1\tau},\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\right)$$ and, from here, that $$\left(\theta^r,\left(E^k_l\right)_{LM}\right)^V\lrcorner T^q=0.$$
J. L. Anderson and P. G. Bergmann. Constraints in covariant field theories. , 83:1018–1025, Sep 1951.
R. Arnowitt, S. Deser, and C. W. Misner. . , 40(9):1997–2027, May 2004.
J. L. Anderson and D. Finkelstein. Cosmological constant and fundamental length. , 39(8):901–904, 1971.
E. Álvarez and S. González-Martín. First order formulation of unimodular gravity. , 92:024036, Jul 2015.
A. Ashtekar. . Number 6 in Advanced Series in Astrophysics and Cosmology. World Scientific Pub Co Inc, 1991.
R. L. Bryant, S. S. Chern, R. B. Gardner, H. L. Goldschmidt, and P. A. Griffiths. . Springer-Verlag, 1991.
R. Bryant and P. Griffiths. Reduction for constrained variational problems and $\int\frac{\kappa^2}{2}ds$. , 108(3):525–570, 1986.
J. Brajerčík and D. Krupka. Variational principles on the frame bundles. , 5:1–14, 2004.
M. Barbero-Li[ñ]{}án, A. Echeverr[í]{}a-Enr[í]{}quez, D. Mart[í]{}n de Diego, M. C Mu[ñ]{}oz-Lecanda, and N. Román-Roy. Skinner–Rusk unified formalism for optimal control systems and applications. , 40(40):12071, 2007.
M. Bojowald. . Cambridge University Press, 2010.
R. Bryant. Nine lectures on exterior differential systems. 2011.
C. M. Campos. . PhD thesis, Departamento de Matem[á]{}ticas, Facultad de Ciencias, Universidad Aut[ó]{}noma de Madrid, 2010.
S. Capriotti. Differential geometry, palatini gravity and reduction. , 55(1):012902, 2014.
H. [Cendra]{} and S. [Capriotti]{}. . , September 2013.
J. F. Cari[ñ]{}ena, M. Crampin, and L. A. Ibort. On the multisymplectic formalism for first order field theories. , 1(4):345 – 374, 1991.
M. Castrillón López and J. Muñoz Masqué. The geometry of the bundle of connections. , 236:797–811, 2001. 10.1007/PL00004852.
F. Cantrijn and J. Vankerschaver. The Skinner-Rusk approach for vakonomic and nonholonomic field theories. In [*Differential Geometric Methods in Mechanics and Field Theory*]{}, pages 1–14. Academia Press, 2007.
P. Dedecker. Calcul des variations, formes diff[é]{}rentielles et champs g[é]{}od[é]{}siques. , 52:17, 1953.
P. A. M. Dirac. Generalized hamiltonian dynamics. , 246(1246):326–332, 1958.
P. A. M. Dirac. The theory of gravitation in Hamiltonian form. , 246(1246):333–343, aug 1958.
M. de León, J. C. Marrero, and D. [Martin de Diego]{}. . , 59:189–209, 2003.
M de León, J Marín-Solano, and J C Marrero. A geometrical approach to classical field theories: a constraint algorithm for singular theories. , 350:291–312, 1996.
M. de León, J. Marín-Solano, J. C. Marrero, M. C. Muñoz Lecanda, and N. Román-Roy. , 2(5):839–871, 2005.
A. [Echeverr[í]{}a-Enr[í]{}quez]{}, C. [L[ó]{}pez]{}, J. [Mar[í]{}n-Solano]{}, M. C. [Mu[ñ]{}oz-Lecanda]{}, and N. [Rom[á]{}n-Roy]{}. . , 45:360–380, January 2004.
F. B. Estabrook. Mathematical structure of tetrad equations for vacuum relativity. , 71:044004, Feb 2005.
——. . 2014.
P. L. Garc[í]{}a. Connections and [$1$]{}-jet fiber bundles. , 47:227–242, 1972.
M. J. Gotay, J. Isenberg, and J. E. Marsden. . 2004.
G. Giachetta, L. Mangiarotti, and G. A. Sardanashvily. . World Scientific Publishing Company, 1997.
M. J. Gotay. An exterior differential system approach to the [C]{}artan form. In P. Donato, C. Duval, J. Elhadad, and G.M. Tuynman, editors, [ *Symplectic geometry and mathematical physics. Actes du colloque de géométrie symplectique et physique mathématique en l’honneur de Jean-Marie Souriau, Aix-en-Provence, France, June 11-15, 1990.*]{}, pages 160–188. Progress in Mathematics. 99. Boston, MA, Birkhäuser, 1991.
——. A multisymplectic framework for classical field theory and the calculus of variations i: Covariant hamiltonian formalism. , pages 203–235, 1991.
P. Griffiths. . Progress in Mathematics. Birkhauser, 1982.
J. Gaset and N. Román-Roy. . 2017.
D. Hartley. Involution analysis for nonlinear exterior differential systems. , 25(8-9):51–62, April 1997.
F. H[é]{}lein. . In [*[Variational Problems in Differential Geometry]{}*]{}, Leeds, Royaume-Uni, April 2009.
F. Hélein and J. Kouneiher. The notion of observable in the covariant hamiltonian formalism for the calculus of variations with several variables. , 8:735–777, 2004.
L. Hsu. Calculus of variations via the [G]{}riffiths formalism. , 36:551–589, 1992.
T. A. Ivey and J. M. Landsberg. . Graduate Texts in Mathematics. American Mathematical Society, 2003.
A. Ibort and A. Spivak. . 2016.
N. Kamran. An elementary introduction to exterior differential systems. In [*Geometric approaches to differential equations ([C]{}anberra, 1995)*]{}, volume 15 of [*Austral. Math. Soc. Lect. Ser.*]{}, pages 100–115. Cambridge Univ. Press, Cambridge, 2000.
F. W. Hehl and G. D. Kerlick. Metric-affine variational principles in general relativity. I. Riemannian space-time. , 9, 1978.
J. Kijowski. A finite-dimensional canonical formalism in the classical field theory. , 30(2):99–128, jun 1973.
S. Kobayashi and K. Nomizu. , volume 1. Wiley, 1963.
B. Kruglikov. Involutivity of field equations. , 51(3):032502, 2010.
J. Kijowski and W. M. Tulczyjew, editors. . Springer Berlin Heidelberg, 1979.
M. Castrill[ó]{}n L[ó]{}pez, J. Mu[ñ]{}oz Masqu[é]{}, and E. Rosado Mar[í]{}a. First-order equivalent to Einstein-Hilbert Lagrangian. , 55(8):082501, 2014.
O. Makhmali. . PhD thesis, Department of Mathematics and Statistics, McGill University, 2016.
E. Musso and L. Nicolodi. Closed trajectories of a particle model on null curves in anti-de Sitter 3-space. , 24(22):5401, 2007.
P. Morando and S. Sammarco. Reduction of exterior differential systems for ordinary variational problems. , 45(6):065202, 2012.
——. Variational problems with symmetries: A Pfaffian system approach. , 120(1):255–274, Aug 2012.
K. Nomizu and T. Sasaki. . Cambridge University Press, 1995.
P. D. Prieto-Mart[í]{}nez and N. Román-Roy. . , February 2014.
——. Variational principles for multisymplectic second-order classical field theories. , 0(0):1560019, 2015.
E. Poisson. . Cambridge University Press, 2004.
F. A. E. Pirani and A. Schild. On the quantization of Einstein’s gravitational field equations. , 79:986–991, Sep 1950.
F. A. E. Pirani, A. Schild, and R. Skinner. Quantization of Einstein’s gravitational field equations. ii. , 87:452–454, Aug 1952.
J. R. Ray. Palatini variational principle. , 25(2):706–710, Feb 1975.
D. J. Saunders. . Cambridge University Press, 1989.
J. L. Safko and F. Elston. Lagrange multipliers and gravitational theory. , 17(8):1531–1537, aug 1976.
S. V. Sabau and K. Shibuya. A variational problem for curves on Finsler surfaces. , 101(3):418–430, 2016.
T. Thiemann. . Cambridge University Press, 2008.
S. Vignolo, R. Cianci, and D. Bruno. A first-order purely frame-formulation of general relativity. , 22(19):4063, 2005.
——. . , 3:1493–1500, 2006.
D. Vey. Multisymplectic formulation of vielbein gravity: [I]{}. [D]{}e [D]{}onder-[W]{}eyl formulation, [H]{}amiltonian $\left(n-1\right)$-forms. , 32(9):095005, 2015.
L. Vitagliano. The Lagrangian-Hamiltonian formalism for higher order field theories. , 60(6–8):857 – 873, 2010.
[^1]: The author thanks CONICET for financial support, and as a member of research projects PIP 11220090101018 and PICT 2010-2746 of the ANPCyT.
[^2]: We cannot avoid here the accidental lack of uniqueness in the terminology; the covariance property of a Lepage-equivalent problem does not have direct relationship with the invariance of the underlying variational problems respect to general changes of coordinates.
[^3]: This form is essentially the contact structure of $J^1\tau$.
[^4]: With respect to the canonical basis of $S^*\left(m\right)$.
[^5]: Otherwise, $\theta_{ijk}=0$.
---
author:
- 'P. Gnaciński , J.K. Sikorski , G.A. Galazutdinov'
title: Electron density and carriers of the diffuse interstellar bands
---
Introduction
============
The Diffuse Interstellar Bands (DIBs) are broad absorption features seen in the interstellar medium. There are almost 300 DIBs known in the optical and NIR spectrum ([@Gazinur]). Despite over 80 years of investigation, their nature is still unknown (for a review, see [@Herbig]). Many candidates have been proposed as carriers of DIBs, e.g. solid particles, simple molecules, negative atomic ions, carbon chains and fullerenes.
In 1985 [@Zwet] proposed polycyclic aromatic hydrocarbons (PAHs) as the source of DIBs. Since then it has remained a popular hypothesis. There are, however, problems with obtaining gas-phase laboratory spectra of dehydrogenated and/or ionised PAHs to verify this hypothesis. Recently [@Cox] presented simulations of the PAH charge state distribution in various environments. In clouds with different irradiation and density the fractional abundances of PAH cations, neutrals and anions change dramatically.
We have used the ionisation equilibrium equation to obtain the electron densities in individual clouds. The electron density has been compared to the equivalent widths of DIBs and of the CH/CH+ lines. The equivalent width of the CH+ line drops with rising $n_e$, while no changes of the DIB equivalent widths are observed.
Column densities and equivalent widths
======================================
The aim of this paper is to check the dependences between the electron density and the carriers of DIBs. In order to determine the electron density we had to measure the column densities of two elements in two adjacent ionisation stages. Our targets were stars fulfilling the following criteria:
- reddened stars of spectral type O or B
- high resolution Hubble Space Telescope (HST) spectra for at least MgI, MgII lines are available (both, GHRS and STIS spectra were used)
- hydrogen column densities are available
- equivalent widths of the chosen DIBs are available
Column densities of Mg I, Mg II, Si I, Si II, C II, C II\* were calculated from high-resolution HST spectra. The spectra from the ultraviolet spectral range were downloaded from the HST Data Archive. The GHRS spectra taken in the FP-SPLIT mode were processed with the IRAF tasks [*poffsets*]{} and [*specalign*]{} to obtain the final spectrum. The column densities were derived using the profile fitting technique. The absorption lines were fitted by Voigt profiles. The transitions for which the natural damping constant ($\Gamma$) is not known (Mg II 1240 Å doublet, Mg I 1828 Å) were fitted with a Gaussian function. The cloud velocities (v), Doppler broadening parameters (b) and column densities (N) for multiple absorption components were simultaneously fitted to the observed spectrum. Both lines of the magnesium doublet (at 1240 Å) were also fitted simultaneously: v, b and N were common for both lines in the doublet. The wavelengths, oscillator strengths (f) and natural damping constants ($\Gamma$) were adopted from [@Morton]. A convolution with a point spread function (PSF) was also performed. The PSF for the GHRS spectrograph consists of two Gaussian components. The “core” Gaussian has a FWHM=1.05 diodes, while the “halo” component has FWHM=5.0 diodes ([@Spitzer]). The relative contribution of the “core” and “halo” components to the PSF is wavelength-dependent and was interpolated from the table given by [@Cardelli]. The Gaussian PSF for the STIS spectrograph depends on the wavelength, the slit and the observing mode. The tables with FWHM for the combination of mode and slit can be found in the “STIS Instrument Handbook” ([@Kim]). The FWHM of the Gaussian PSF was wavelength-interpolated from these tables.
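As an illustration of the profile-fitting step (a schematic sketch only, not the pipeline actually used in this work: the atomic data, PSF width and noise level below are invented placeholders, and scipy is used instead of dedicated fitting software), one absorption component convolved with a Gaussian PSF can be fitted for $N$, $b$ and $v$ as follows:

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

# hypothetical UV line: all atomic data here are illustrative placeholders
lam0, f_osc, Gam = 1240.0e-8, 6.32e-4, 1.4e6       # rest wavelength [cm], oscillator strength, damping [1/s]
C_TAU = 8.85e-13                                   # pi e^2 / (m_e c^2) [cm]
CKMS = 2.9979e5

lam = np.linspace(1239.6e-8, 1240.3e-8, 400)       # wavelength grid [cm]
psf_sigma_pix = 2.0                                # assumed Gaussian PSF width [pixels]

def model(lam, logN, b_kms, v_kms):
    """Normalised flux of one Voigt component convolved with a Gaussian PSF."""
    lc = lam0 * (1.0 + v_kms / CKMS)                       # shifted line centre
    sigma = lc * np.abs(b_kms) / (CKMS * np.sqrt(2.0))     # Doppler sigma [cm]
    gamma = Gam * lc**2 / (4.0 * np.pi * 2.9979e10)        # Lorentzian HWHM [cm]
    phi = voigt_profile(lam - lc, sigma, gamma)            # unit-area profile in wavelength
    tau = C_TAU * 10.0**logN * f_osc * lam0**2 * phi
    return gaussian_filter1d(np.exp(-tau), psf_sigma_pix)

# synthetic "observation" and simultaneous fit of (N, b, v)
rng = np.random.default_rng(2)
flux = model(lam, 15.2, 4.0, 5.0) + rng.normal(0.0, 0.01, lam.size)
popt, _ = curve_fit(model, lam, flux, p0=(14.5, 5.0, 0.0))
print("log N = %.2f, b = %.1f km/s, v = %.1f km/s" % tuple(popt))
```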
The derived column densities, used to calculate the electron density, are presented in Table 1. The hydrogen (HI) column densities were adopted from [@Dip]. Molecular hydrogen (H$_2$) column densities come from [@Rach] and [@Sav]. The equivalent widths of DIBs and of the CH/CH+ lines were kindly provided by Jacek Krełowski.
Electron density
================
The electron density ($n_e$ in $cm^{-3}$) was calculated from the equations of ionisation equilibrium for two elements. The first element was Mg, because it is easily observed in two ionisation stages. The Mg II column density was determined from the 1240 Å doublet, the Mg I column density was determined from the 2026 Å, 2852 Å or 1827 Å line.
The steep rise of the dielectronic recombination coefficient for Mg II with temperature causes the electron density inferred from MgI/MgII to decrease with temperature (Fig. \[Intersection\]). Such behaviour enables the calculation of the electron density, because the curve $n_e(T_{e})$ from MgI/MgII intersects the curve $n_e(T_{e})$ obtained from another element. The equation of ionisation equilibrium for Mg is the following: $$\frac{n_{e}N(Mg\: II)}{N(Mg\: I)}=\frac{\Gamma(Mg_{12})+n_{e}C(Mg_{12})}{\alpha_{rad}(Mg_{21})+\alpha_{die}(Mg_{21})}$$ where $N(MgII)$ and $N(MgI)\ [cm^{-2}]$ are the column densities of ionised and neutral Mg; $\alpha_{rad}(Mg_{21})\ [cm^3/s]$ is the radiative recombination coefficient; $\alpha_{die}(Mg_{21})\ [cm^3/s]$ is the dielectronic recombination rate; $\Gamma(Mg_{12})\ [1/s]$ is the ionisation rate of MgI by UV photons; $C(Mg_{12})\ [cm^3/s]$ is the collisional ionisation rate.
Because the coefficients $\alpha_{rad}$, $\alpha_{die}$ and $C$ depend on the electron temperature ($T_e$), we need an analogous equation for a second element to obtain $n_e$ and $T_e$ simultaneously. For the stars HD 24534, HD 203374, HD 206267, HD 209339 and HD 210839 the second element was Si. The column density of Si I was calculated from the 1845 Å line, and the column density of Si II from the 1808 Å one. From the intersection of the $n_e(T)$ curves from Mg and Si (Fig. \[Intersection\]) we have obtained the electron density.
The $\Gamma$ coefficients for Mg and Si were adopted from the WJ2 model ([@Boer]). The recombination coefficients ($\alpha_{rad}$ and $\alpha_{die}$) and the collisional ionisation rate coefficient ($C$) (see Table 2) were adopted from [@Shull].
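A minimal numerical sketch of the intersection procedure, using the analytic coefficients of Table 2 and the HD 24534 column densities of Table 1 (scipy's brentq root finder is assumed; the result should reproduce, within rounding, the $n_e$ and $n_e^{MAX}$ values listed for this star in Table \[tabela\]):

```python
import numpy as np
from scipy.optimize import brentq

# ionisation-equilibrium coefficients of Table 2 (recombination/collision rates from Shull & van Steenberg, WJ2 photo-rates)
def mg(T):
    gamma = 8.1e-11
    coll  = 8.9e-11*np.sqrt(T)/(1 + 0.1*T/88700)*np.exp(-88700/T)
    alpha = 1.4e-13*(T/1e4)**-0.855 \
          + 4.49e-4*T**-1.5*np.exp(-50100/T)*(1 + 0.021*np.exp(-28100/T))
    return gamma, coll, alpha

def si(T):
    gamma = 3.8e-9
    coll  = 3.92e-10*np.sqrt(T)/(1 + 0.1*T/94600)*np.exp(-94600/T)
    alpha = 5.9e-13*(T/1e4)**-0.601 + 1.1e-3*T**-1.5*np.exp(-77000/T)
    return gamma, coll, alpha

def ne_curve(T, ratio, coeffs):
    """Solve  n_e * N(X II)/N(X I) = (Gamma + n_e*C) / alpha  for n_e at temperature T."""
    gamma, coll, alpha = coeffs(T)
    return gamma/(ratio*alpha - coll)

# HD 24534 column densities from Table 1
r_mg = 2.9e15/7.2e13
r_si = 1.8e15/9.3e11

T_int = brentq(lambda T: ne_curve(T, r_mg, mg) - ne_curve(T, r_si, si), 3000.0, 9000.0)
print("T_e ~ %.0f K,  n_e ~ %.2f cm^-3" % (T_int, ne_curve(T_int, r_mg, mg)))

# the maximum of the Mg curve alone gives n_e^MAX (cf. the Results section)
Tgrid = np.linspace(1000.0, 12000.0, 2000)
print("n_e^MAX ~ %.2f cm^-3" % ne_curve(Tgrid, r_mg, mg).max())
```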
For the stars HD 202904 and HD 160578 the ionisation equilibrium was calculated from MgI/MgII and CII/CII\*. The column density of carbon was calculated from the C II 1335 Å and C II\* 1336 Å lines, using the profile fitting technique (see [@Gna] for details). The equilibrium between the collisional excitation and radiative de-excitation of ionised carbon is described by the equation ([@Wood]): $$\frac{N(C\: II^*)}{N(C\: II)}=\frac{n_{e}C(C_{12})}{\alpha_{rad}(C_{21})}$$ The radiative de-excitation $\alpha_{rad}(C_{21})$ was adopted from [@NS]. The collision rate coefficient $C(C_{12})$ was adopted from [@Wood] and [@Hayes].
We have also tried to use Ca as the second element for obtaining $n_e$. Unfortunately, the CaI/CaII ionisation equilibrium curve (eq. 1) for the stars HD 74455 and HD 149757 does not intersect the ionisation equilibrium curve for Mg I/II. The problem may be caused by the change of $n(CaII)/n(CaI)$ between the edge and the centre of the cloud. [@Lepp] have found in their numerical simulations that $n(CaII)/n(CaI)$ decreases from 4800 at the edge to 160 at the centre of the $\zeta$ Persei cloud.
Results and Discussion
======================
For many stars Mg is the only element with observed absorption lines in two ionisation stages. From the column densities of neutral and ionised Mg we can only calculate the maximal possible electron density $n_{e}^{MAX}$. It is simply the maximum of the $n_{e}(T_{e})$ curve. Fortunately this maximum ($n_{e}^{MAX}$) is very well correlated with the electron density $n_{e}$ (Fig. \[FigNeMAX\]). The correlation coefficient is R=0.89, and the linear relation is the following: $n_{e}^{MAX}=2.84 \cdot n_{e}$. This relation was derived for $n_e=0.01\div 2.5$ cm$^{-3}$ and may not hold for denser or more tenuous environments. The linear relation $n_{e}^{MAX}=2.84 \cdot n_{e}$ probably reflects the fact that most of the clouds for which we can calculate $n_e$ have an electron temperature $T_e\sim 7500$ K. For the stars HD 24912, HD 74455, HD 91316, HD 141637, HD 147165, HD 147933, HD 149757 the electron density was calculated using the $n_{e}^{MAX}$ value and this formula. All derived electron densities are presented in Table \[tabela\].
Figure \[CH\] presents the relation between the electron density and the equivalent widths of the CH and CH+ molecules normalised to the total hydrogen column density. The equivalent widths of CH and CH+ include all Doppler components. The CH abundance does not change between clouds with various electron densities. In contrast to CH, the CH+ abundance is lower in clouds with large electron densities (more recombinations). Such behaviour is also illustrated in Fig. \[Mg\], where the theoretical relation between the column densities of Mg I and Mg II is presented versus the electron density.
We have checked that the changes in the normalised CH+ equivalent width with $n_e$ are statistically significant. The points in Fig. \[CH\] that correspond to the same star (connected with a straight line) were replaced by the average $n_e$ value of the two extreme points. The sample of stars was divided into two sets: one with $n_e<0.4$ cm$^{-3}$ and the second one for directions with $n_e>0.4$ cm$^{-3}$. We have calculated the average W(CH)/H$_{tot}$ and W(CH+)/H$_{tot}$ and their standard deviations for stars in both sets. The Student's t-variable was calculated in order to check the agreement between the averages for directions with $n_e<0.4$ cm$^{-3}$ and $n_e>0.4$ cm$^{-3}$. The averages of normalised CH for directions with low and high $n_e$ agree at a significance level of 0.7. The average W(CH+)/H$_{tot}$ for directions with $n_e<0.4$ cm$^{-3}$ and $n_e>0.4$ cm$^{-3}$ differs substantially (significance level 0.009).
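The significance test can be reproduced with scipy; the arrays below are invented placeholders (the individual normalised equivalent widths are not tabulated here), and Welch's variant of the two-sample t-test is used for illustration:

```python
import numpy as np
from scipy import stats

# hypothetical normalised equivalent widths W(CH+)/N(H_tot) [arbitrary units];
# the actual per-sightline values are not reproduced in this sketch
w_low_ne  = np.array([3.1, 2.7, 2.4, 3.5, 2.9])   # directions with n_e < 0.4 cm^-3
w_high_ne = np.array([1.1, 0.9, 1.4, 0.8])        # directions with n_e > 0.4 cm^-3

t_stat, p_val = stats.ttest_ind(w_low_ne, w_high_ne, equal_var=False)
print("t = %.2f, two-sided p = %.3f" % (t_stat, p_val))
# a small p-value indicates the two means differ (the paper quotes 0.009 for CH+);
# the analogous test on the CH data yields agreement at the 0.7 level
```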
Figures \[Diby\], \[Diby2\] and \[Diby3\] present equivalent widths of DIBs normalised to the total hydrogen column density (N(H$_{tot}$)=N(HI)+2N(H$_2$)) plotted versus the electron density ($n_e$ in cm$^{-3}$). One could expect a drop of the DIB equivalent width with $n_e$, as seen in Figure \[CH\] for CH+. Unfortunately, none of the DIB bands shows a relationship with the electron density. There are two possible explanations for the lack of a relationship between DIBs and $n_e$. The first explanation is that the carriers of the analysed DIBs may be observed only in one ionisation stage. In Figure \[Mg\] we can see such behaviour for Mg II. In a wide range of observed electron densities ($n_e$=0.009-2.5 cm$^{-3}$) the column density of Mg II does not change by a considerable amount. Such behaviour is also seen for CH in Figure \[CH\].
The second explanation is that DIBs arise in parts of interstellar clouds where we observe only one stage of ionisation of Mg and other elements. The DIBs can arise in dense cores of interstellar clouds, where ionised atoms are hardly observed. DIBs may also arise in outer (ionised) parts of the interstellar clouds, where neutral elements are absent. In both cases we cannot calculate the electron density. The hypothesis that DIB carriers are formed in outer regions of interstellar clouds was already formulated by [@Snow]. They observed that the 4430Å and 5780Å DIBs are shallower than expected in dense molecular clouds. This result was confirmed by observations of the Taurus dark clouds made by [@Adamson]. High UV flux at the cloud surface may be responsible for ionising the DIB carriers, while the cloud cores are shielded by extinction on dust grains.
Conclusions
===========
The results can be recapitulated as follows:
1. The electron density in the interstellar clouds was determined for 13 lines of sight.
2. A linear correlation between the electron density and the maximum possible value of the electron density from MgI/MgII was found: $n_{e}^{MAX}=2.84 \cdot n_{e}$.
3. The normalised equivalent width of the CH+ line drops with rising electron density as expected from the ionisation equilibrium.
4. The normalised equivalent widths of the 5780Å, 5797Å, 6614Å, 5850Å, 5844Å, 6203Å, 6270Å, 6284Å, 6376Å, 6379Å, 6660Å, 6196Å DIBs do not change with electron density varying in the range $n_e=0.01\div 2.5$ cm$^{-3}$ (diffuse gas).
We are very grateful to Jacek Krełowski for the equivalent widths of the DIBs and the CH/CH+ lines. This publication is based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
Adamson A.J., Whittet D.C.B. & Duley W.W., 1991, MNRAS, 252, 234
de Boer K.S., Koppenaal K. & Pottasch S.R., 1973, A&A, 28, 145
Bohlin R.C., Savage B.D., Drake J.F., 1978, ApJ, 224, 132
Cami J., Salama F., Jiménez-Vicente J., Galazutdinov G.A., Krełowski J., 2004, ApJ, 611, L116
Cardelli J.A., Ebbets D.C., Savage B.D., 1990, ApJ, 365, 789
Chlewicki G., van der Zwet G.P., van Ijzendoorn L.J., Greenberg J.M., Alvarez P.P., 1986, ApJ, 305, 455
Cox N.L.J., Spaans M., 2006, A&A, 451, 973
Diplas A., Savage B.D., 1994, ApJS, 93, 211
Frisch P.C., York D.G., Fowler J.R., 1987, ApJ, 320, 842
Galazutdinov G.A., Moutou C., Musaev F.A., Krełowski J., 2002, A&A, 384, 215
Galazutdinov G.A., Musaev F.A., Krełowski J., Walker G.A.H., 2000, PASP, 112, 648
Gnaciński P., 2000, Acta Astron., 50, 133
Hayes M.A. & Nussbaumer H., 1984, A&A, 134, 193
Herbig G.H., 1995, ARA&A, 33, 19
Jenkins E.B., Savage B.D., Spitzer L., 1986, ApJ, 301, 355
Kim Quijano J. et al., 2003, “STIS Instrument Handbook” (Baltimore: STScI), available at: http://www.stsci.edu/hst/stis/documents
Krełowski J., et al., 1999, A&A, 347, 235
Krełowski J., 1989, Astron. Nachr., 310, 255
Lepp S., Dalgarno A., van Dishoeck E.F., Black J.H., 1988, ApJ, 329, 418
Lacour S., André M.K., Sonnentrucker P., et al., 2005, A&A, 430, 967
Morton D.C., 2003, ApJS, 149, 205
Moutou C., Krełowski J., d’Hendecourt L., Jamroszczak J., 1999, A&A, 351, 680
Nussbaumer H., Storey P.J., 1981, A&A, 96, 91
Omont A., 1986, A&A, 164, 159
Rachford B.L., et al., 2002, ApJ, 577, 221
Savage B.D., Bohlin R.C., Drake J.F., Budich W., 1977, ApJ, 216, 291
Shull J.M. & van Steenberg M., 1982, ApJS, 48, 95
Snow T.P. Jr. & Cohen J.G., 1974, ApJ, 194, 313
Sonnentrucker P., Cami J., Ehrenfreund P., Foing B.H., 1997, A&A, 327, 1215
Spitzer L. Jr., Fitzpatrick E.L., 1993, ApJ, 409, 299
Weselak T., Fulara J., Schmidt M.R., Krełowski J., 2001, A&A, 377, 677
Wood B.E., Linsky J.L., 1997, ApJ, 474, L39
van der Zwet G.P., Allamandola L.J., 1985, A&A, 146, 76
Star v \[$km/s$\] Mg I Mg II Si I Si II C II C II\*
----------- -------------- ------------------ ------------------ ------------------ ------------------ ------------------ ------------------
HD 24534 18 $ 7.2\pm0.4e13 $ $ 2.9\pm0.1e15 $ $ 9.3\pm0.6e11 $ $ 1.8\pm0.1e15 $
HD 24912 14 $ 4.9\pm0.6e12 $ $ 5.5\pm0.4e15 $ $ $ $ $ $ $ $ $
HD 74455 25 $ 8.6\pm0.3e12 $ $ 3.1\pm0.1e15 $ $ $ $ $ $ $ $ $
HD 74455 5 $ 1.7\pm0.1e12 $ $ 5.5\pm1.1e13 $ $ $ $ $ $ $ $ $
HD 74455 -160 $ 4.2\pm0.2e11 $ $ 1.7\pm0.1e13 $ $ $ $ $ $ $ $ $
HD 91316 -8 $ 4.2\pm0.6e12 $ $ 2.4\pm0.1e15 $ $ $ $ $ $ $ $ $
HD 91316 17 $ 1.8\pm0.4e12 $ $ 1.0\pm0.1e15 $ $ $ $ $ $ $ $ $
HD 141637 -7 $ 7.7\pm0.1e12 $ $ 6.8\pm0.2e15 $ $ $ $ $ $ $ $ $
HD 141637 -12 $ 2.8\pm0.1e12 $ $ 2.3\pm0.4e15 $ $ $ $ $ $ $ $ $
HD 147165 -9 $ 1.4\pm0.2e13 $ $ 1.1\pm0.1e15 $ $ $ $ $ $ $ $ $
HD 149757 -17 $ 4.3\pm0.1e12 $ $ 2.1\pm0.1e15 $ $ $ $ $ $ $ $ $
HD 149757 -29 $ 3.9\pm1.0e11 $ $ 8.1\pm0.2e14 $ $ $ $ $ $ $ $ $
HD 160578 -27 $ 1.1\pm0.3e11 $ $ 2.3\pm0.3e14 $ $ $ $ $ $ 1.7\pm1.1e16 $ $ 8.0\pm1.3e12 $
HD 202904 -22 $ 1.3\pm0.1e12 $ $ 3.6\pm1.9e14 $ $ $ $ $ $ 8.4\pm7.4e15 $ $ 2.4\pm0.1e13 $
HD 202904 -13 $ 1.2\pm0.1e12 $ $ 1.2\pm0.1e15 $ $ $ $ $ $ 1.8\pm0.5e16 $ $ 6.4\pm0.1e13 $
HD 203374 -18 $ 1.2\pm0.1e14 $ $ 1.2\pm0.2e16 $ $ 1.8\pm0.2e12 $ $ 1.4\pm0.2e16 $ $ $ $ $
HD 206267 -13 $ 1.7\pm0.1e14 $ $ 1.1\pm0.1e16 $ $ 2.3\pm0.2e12 $ $ 1.0\pm0.1e16 $ $ $ $ $
HD 209339 -15 $ 1.2\pm0.1e14 $ $ 1.2\pm0.1e16 $ $ 1.2\pm0.3e12 $ $ 9.0\pm2.1e15 $ $ $ $ $
HD 210839 -31 $ 2.9\pm0.3e13 $ $ 1.9\pm0.1e15 $ $ 4.9\pm1.5e11 $ $ 1.4\pm0.1e15 $ $ $ $ $
HD 210839 -13 $ 8.5\pm0.1e13 $ $ 1.0\pm0.1e16 $ $ 1.3\pm0.1e12 $ $ 9.0\pm0.3e15 $ $ $ $ $
              $\Gamma$ $[1/s]$       $C(T)$ $[cm^3/s]$                                                                               $\alpha_{rad}(T)$ $[cm^3/s]$               $\alpha_{die}(T)$ $[cm^3/s]$
  ----------- --------------------- ---------------------------------------------------------------------------------------------- ---------------------------------------- ----------------------------------------------------------------------------
  Mg I/II     $8.1\cdot 10^{-11}$   $8.9\cdot 10^{-11}\,\sqrt{T}\,(1+0.1\cdot T/88700)^{-1}\exp(-88700/T)$                           $1.4\cdot 10^{-13}\,(T/10000)^{-0.855}$  $4.49\cdot 10^{-4}\,T^{-3/2}\exp(-50100/T)\,(1+0.021\cdot\exp(-28100/T))$
  Si I/II     $3.8\cdot 10^{-9}$    $3.92\cdot 10^{-10}\,\sqrt{T}\,(1+0.1\cdot T/94600)^{-1}\exp(-94600/T)$                          $5.9\cdot 10^{-13}\,(T/10000)^{-0.601}$  $1.1\cdot 10^{-3}\,T^{-3/2}\exp(-77000/T)$
  Ca I/II     $3.8\cdot 10^{-10}$   $2.09\cdot 10^{-10}\,\sqrt{T}\,(1+0.1\cdot T/70900)^{-1}\exp(-70900/T)$                          $1.12\cdot 10^{-13}\,(T/10000)^{-0.9}$   $3.28\cdot 10^{-4}\,T^{-3/2}\exp(-34600/T)\,(1+0.0907\cdot\exp(-16400/T))$
  C II/II\*   –                     $8.63\cdot 10^{-6}\,(2\sqrt{T})^{-1}\,\Omega_{12}(T)\exp\left(-\frac{1.31\cdot 10^{-14}\,erg}{kT}\right)$   $2.29\cdot 10^{-6}$ $[1/s]$   –
  ----------- --------------------- ---------------------------------------------------------------------------------------------- ---------------------------------------- ----------------------------------------------------------------------------
[|l|r|r|r|rr|rr|r|r|r|]{} Star & v & n$_e$ & n$_{e}^{MAX}$ & HI & ref & H$_2$ & ref & CH$^+$ & CH\
&\[$km/s$\] & \[$cm^{-3}$\] & \[$cm^{-3}$\] & & & \[mÅ\] & \[mÅ\]\
HD 24534 & 18 & $2.5 ^{+0.2} _{-0.2}$ & $6.7 ^{+0.6} _{-0.6}$ & 20.73 & 3 & 20.92 & 1 & $3.2 \pm 0.4$ & $24.1 \pm 0.5$\
HD 160578 & -27 & $0.009 ^{+0.02}_{-0.004}$ & $0.14 ^{+0.06} _{-0.05}$ & 20.19 & 3 & & & &\
HD 202904 & -22 & $0.06 ^{+0.38} _{-0.03}$ & $0.9 ^{+1.3} _{-0.4}$ & 20.68 & 4 & 19.15 & 4 & &\
HD 202904 & -13 & $0.06 ^{+0.02} _{-0.01}$ & $0.26 ^{+0.04} _{-0.04}$ & & & & & &\
HD 203374 & -18 & $0.66 ^{+0.22} _{-0.16}$ & $2.7 ^{+0.5} _{-0.4}$ & & & & & &\
HD 206267 & -13 & $1.1 ^{+0.3} _{-0.2}$ & $4.3 ^{+0.2} _{-0.2}$ & 21.30 & 5 & 20.86 & 1 & $11.3 \pm 0.8$ & $21.7 \pm 0.9$\
HD 209339 & -15 & $0.7 ^{+0.4} _{-0.3}$ & $2.7 ^{+0.2} _{-0.2}$ & & & & & &\
HD 210839 & -31 & $1.7 ^{+0.5} _{-0.5}$ & $4.1 ^{+0.6} _{-0.5}$ & 21.15 & 3 & 20.84 & 1 & $11.3 \pm 0.8$ & $22.3 \pm 0.4$\
HD 210839 & -13 & $0.72 ^{+0.1} _{-0.09}$ & $2.24 ^{+0.08} _{-0.07}$ & & & & & &\
\
HD 24912 & 14 & $0.09 ^{+0.02} _{-0.02}$ & $0.24 ^{+0.05} _{-0.04}$ & 21.05 & 3 & 20.53 & 2 & $21.13 \pm 0.19$ & $10.1 \pm 0.3$\
HD 74455 & 25 & $0.27 ^{+0.02} _{-0.02}$ & $0.76 ^{+0.05} _{-0.04}$ & 20.73 & 3 & 19.74 & 6 & $1.0 \pm 0.3$ & $1.9 \pm 0.5$\
HD 74455 & 5 & $3.0 ^{+0.9} _{-0.6}$ & $8.6 ^{+2.6} _{-1.7}$ & & & & & &\
HD 74455 & -160 & $2.4 ^{+0.09} _{-0.08}$ & $6.7 ^{+0.3} _{-0.2}$ & & & & & &\
HD 91316 & -8 & $0.17 ^{+0.02} _{-0.02}$ & $0.48 ^{+0.07} _{-0.07}$ & 20.44 & 3 & 15.61 & 7 & &\
HD 91316 & 17 & $0.17 ^{+0.04} _{-0.04}$ & $0.5 ^{+0.1} _{-0.1}$ & & & & & &\
HD 141637 & -7 & $0.109 ^{+0.005} _{-0.004}$ & $0.31 ^{+0.01} _{-0.01}$ & 21.18 & 3 & 19.23 & 4 & &\
HD 141637 & -12 & $0.12 ^{+0.02} _{-0.02}$ & $0.33 ^{+0.07} _{-0.05}$ & & & & & &\
HD 147165 & -9 & $1.2 ^{+0.3} _{-0.2}$ & $3.5 ^{+0.7} _{-0.6}$ & 21.38 & 3 & 19.79 & 2 & 4.5 & 2.9\
HD 149757 & -17 & $0.192 ^{+0.005}_{-0.006}$ & $0.54 ^{+0.02} _{-0.02}$ & 20.69 & 3 & 20.65 & 2 & 22.4 & 18.0\
HD 149757 & -29 & $0.046 ^{+0.013}_{-0.012}$ & $0.13 ^{+0.04} _{-0.04}$ & & & & & &\
Star 5797 5780 5850 5844 6196 6203 6270 6284 6376 6379 6614 6660
----------- ---------------- ----------------- ---------------- ------ ---------------- ---------------- ---------------- ----------------- ---------------- ---------------- ----------------- ----------------
HD 24534 62.5 $\pm$ 3.1 96.2 $\pm$ 9.5 27.7 $\pm$ 2.3 16.9 $\pm$ 1.9 32.2 $\pm$ 4.8 33.6 $\pm$ 6.9 73.3 $\pm$ 11 30.6 $\pm$ 4.6 50.6 $\pm$ 3.8 65.7 $\pm$ 3.6 12.7 $\pm$ 1.7
HD 202904 8 $\pm$ 1.5 44 $\pm$ 3.5 6 $\pm$ 0.5 11 $\pm$ 1.3 10 $\pm$ 2 96 $\pm$ 15 19 $\pm$ 2.5
HD 206267 89.8 $\pm$ 1.6 222.7 $\pm$ 3.6 44.9 $\pm$ 2.8 27.3 $\pm$ 0.5 44.2 $\pm$ 1.5 72.9 $\pm$ 4.1 199.2 $\pm$ 4.6 25.5 $\pm$ 1.5 36.3 $\pm$ 0.9 117 $\pm$ 1.1 20.3 $\pm$ 0.8
HD 210839 71.2 $\pm$ 0.9 253.1 $\pm$ 2.7 60.9 $\pm$ 2.4 30.7 $\pm$ 1.1 53.8 $\pm$ 2.5 90.2 $\pm$ 3.3 482 $\pm$ 26 23.9 $\pm$ 2 55.9 $\pm$ 1.3 147.2 $\pm$ 2.5 25.5 $\pm$ 0.9
HD 24912 36.1 $\pm$ 0.6 200.3 $\pm$ 2.4 29.1 $\pm$ 1 37.1 20.7 $\pm$ 0.4 22.9 $\pm$ 1 23 $\pm$ 1.1 197 $\pm$ 3.5 12 $\pm$ 1.6 26 $\pm$ 1.4 77.6 $\pm$ 2.5 16 $\pm$ 1
HD 74455 13 $\pm$ 3 31 $\pm$ 5 4.5 $\pm$ 1 6.5 $\pm$ 1.5 12 $\pm$ 3 105 $\pm$ 13 2.3 $\pm$ 1 17.5 $\pm$ 2 1.5 $\pm$ 0.5
HD 91316 17 $\pm$ 3 32 $\pm$ 5 4 $\pm$ 1 15 $\pm$ 3 19 $\pm$ 5 50 $\pm$ 8
HD 141637 8.1 $\pm$ 0.8 78 $\pm$ 3 7.3 $\pm$ 1 11 $\pm$ 1.5 11 $\pm$ 3 220 $\pm$ 14 3.5 $\pm$ 1 16.5 $\pm$ 2
HD 147165 26.3 $\pm$ 4.9 243.3 $\pm$ 3.1 9.9 $\pm$ 0.5 16.5 $\pm$ 0.5 18.9 $\pm$ 0.8 14 $\pm$ 1.1 142.6 $\pm$ 2.1 9.5 $\pm$ 0.5 20.1 $\pm$ 0.3 60.9 $\pm$ 1.2 8.1 $\pm$ 0.5
HD 147933 50.8 $\pm$ 2.4 208 $\pm$ 12.5 28.1 $\pm$ 0.8 20.3 16.6 $\pm$ 0.7 22.8 $\pm$ 1.3 20 $\pm$ 1.3 176.4 $\pm$ 2.8 11.6 $\pm$ 0.8 25.9 $\pm$ 1 64.6 $\pm$ 1.7 11.8 $\pm$ 0.9
HD 149757 30.5 $\pm$ 1.5 66.4 $\pm$ 1.9 15.7 $\pm$ 1.5 10.8 11 $\pm$ 0.5 14.5 $\pm$ 0.8 13.1 $\pm$ 1 68.2 $\pm$ 2 3.5 $\pm$ 0.3 18.7 $\pm$ 0.5 40.5 $\pm$ 2 4.2 $\pm$ 0.4
![image](Fig1.eps){width="\textwidth"}
![image](Fig2.eps){width="\textwidth"}
![image](Fig3.eps){width="\textwidth"}
![image](Fig4.eps){width="\textwidth"}
![image](Fig5.eps){width="\textwidth"}
![image](Fig6.eps){width="\textwidth"}
![image](Fig7.eps){width="\textwidth"}
---
abstract: 'In this paper we study the stability of a global Hölderian error bound of the sublevel set $[f \le t]$ under perturbation of $t$, where $f$ is a polynomial function in $n$ real variables. Firstly, we give two formulas which compute the set $$H(f) := \{ t \in \mathbb{R}: [f \le t]\ \text{has a global H\"{o}lderian error bound}\}$$ via some special Fedoryuk values of $f$. Then, based on these formulas, we can determine the stability type of a global Hölderian error bound of $[f \le t]$ for any value $t \in \mathbb{R}$.'
address:
- '$^{\dag}$Thang Long Institute of Mathematics and Applied Sciences, Nghiem Xuan Yem Road, Hoang Mai, District, Hanoi, Vietnam'
- '$^{\ddag}$Department of Mathematics - Faculty of Fundamental Sciences, Laboratory of Applied Mathematics and Computing, Posts and Telecommunications Institute of Technology, Km10 Nguyen Trai Road, Ha Dong District, Hanoi, Vietnam'
author:
- 'HUY-VUI HÀ$^{\dag}$'
- 'PHI-DŨNG HOÀNG$^{\ddag}$'
title: Fedoryuk values and stability of global Hölderian error bounds for polynomial functions
---
Introduction
============
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a polynomial function. For $t \in \mathbb{R}$, put $$[f \le t]:=\{x \in \mathbb{R}^n | f(x) \le t\}$$ and $[a]_+ := \max\{0, a\}$.
[@Ha] We say that the nonempty set $[f \le t]$ has a global Hölderian error bound (GHEB for short) if there exist $\alpha, \beta, c >0$ such that $$\label{Eqn1}
[f(x) - t]_+^\alpha + [f(x) - t]_+^\beta \ge c{\operatornamewithlimits{dist}}(x, [ f \le t])\ \text{for all}\ x \in \mathbb{R}^n.$$
Note that, if $\alpha = \beta = 1$, then (\[Eqn1\]) becomes a global Lipschitzian error bound for $[f \le t]$.
The existence of error bounds has many important applications, including sensitivity analysis, convergence analysis of optimization problems, variational inequalities, etc. After the earliest work by Hoffman ([@Hoff]) and the subsequent paper of Robinson ([@Ro]), the study of error bounds has received increasing attention in the mathematical programming literature in recent years; see [@LL; @WP; @LS; @Y; @LiG1; @LiG2; @Ha; @Ng; @LMP; @DHP] (for the case of polynomial functions) and [@Hoff; @Ro; @M; @AC; @LiW; @K; @KL; @P; @LP; @Luo; @Jo; @NZ; @CM; @LTW; @I; @BNPS; @DL] (for non-polynomial cases). The reader is referred to the survey papers [@LP; @P; @Az; @I] and the references therein for the theory and applications of error bounds.
Studying the stability of error bounds under perturbation is a fundamental and difficult problem. It has been investigated recently in the works of Daniel, Luo-Tseng, Deng, Ngai-Kruger-Théra, Kruger-Ngai-Théra and Kruger-López-Théra, among others (see [@Da; @LT; @D; @NKT; @KNT; @KLT]).
In this paper, we study the stability of a global Hölderian error bound for the set $[f \le t]$ under a perturbation of $t$, i.e. a perturbation of $f$ by a constant term. The following questions arise:
1. Suppose that $[f \le t]$ has a GHEB, when does there exist an open interval $I(t) \subset \mathbb{R}, t \in I(t)$, such that for any $t' \in I(t)$, $[f \le t']$ has also a GHEB?
2. Suppose that $[f \le t]$ does not have GHEB, when does there exist an open interval $I(t) \subset \mathbb{R}, t \in I(t)$, such that for any $t' \in I(t)$, $[f \le t']$ also does not have GHEB?
3. Are there other types of stability which are different from types in questions 1 and 2?
To classify the stability types of GHEB, our idea is to compute the set $$H(f) := \{ t \in \mathbb{R}: [f \le t]\ \text{has a global H\"{o}lderian error bound}\}.$$ It turns out that the set $H(f)$ can be determined via some special values of the Fedoryuk set of $f$.
According to [@KOS], the Fedoryuk set $F(f)$ of a polynomial $f$ is defined by $$F(f):=\{t \in \mathbb{R} : \exists \{x^k\} \subset \mathbb{R}^n, \|x^k\| \to \infty, \|\nabla f(x^k)\| \to 0, f(x^k) \to t\}.$$ We will show that there exist a value $h(f) \in F(f) \cup \{\pm \infty\}$, which will be called the [*threshold*]{} of global Hölderian error bounds of $f$, and a subset $F^1(f)$ of $F(f)$, such that $$\text{Either}\ H(f) = [h(f), +\infty) \setminus F^1(f)\ \text{or}\ H(f) = (h(f), +\infty) \setminus F^1(f).$$ Since $F^1(f)$ is a semialgebraic subset of $\mathbb{R}$, this formula allows us to answer questions 1 and 2. Moreover, we can discover some other types of stability which are different from the types in questions 1-2 and give the list of all possible types of stability.
The paper is organized as follows. In Section 2, we give two different formulas for computing the set $H(f)$. The first formula is based on a criterion for the existence of a GHEB for $[f \le t]$, given in [@Ha]. The second formula follows from a new criterion for the existence of global Hölderian error bounds. In Section 3, the relationship between $H(f)$ and the set of Fedoryuk values of $f$ will be established. In Section 4, we use the formulas of $H(f)$ and the relationship between $H(f)$ and $F(f)$ to study our problems. It turns out that $F(f)$ is a semialgebraic subset of $\mathbb{R}$, hence $F(f)$ is either empty, or a finite set, or a disjoint union of a finite number of points and intervals. Therefore, it is convenient to consider each of these cases separately.
In Subsection 4.1, we consider the case $F(f) = \emptyset$. In this case, $H(f) = (\inf f, +\infty)$ or $H(f) = [\inf f, +\infty)$ (Theorem \[thm41\]). Therefore, there are two stability types of GHEB if $H(f) = [\inf f, +\infty)$. Namely, any point $t$ of $(\inf f, +\infty)$ is [*y-stable*]{}, by this we mean that $t \in H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset H(f)$. Besides, $t = \inf f$ is [*y-right stable*]{}, by this we mean that $t \in H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset H(f)$ and $(t - \epsilon, t ) \cap H(f) = \emptyset$. Note that, for almost every polynomial $f$, $F(f) = \emptyset$. Hence, $H(f) = (\inf f, +\infty)$ or $[\inf f, +\infty)$ if $f$ is [*generic*]{} (Remark \[remark41\]).
In Subsection 4.2, we consider the case when $F(f)$ is a non-empty finite set. In this case, we show that
- $H(f) \ne \emptyset$ (Proposition \[Prop3.1\]);
- Besides the y-stable and y-right stable types, there are at most four other stability types of GHEB. We have
Case A
: If $h(f) = -\infty$, then there are 2 types
1. $t$ is y-stable.
    2. $t$ is an [*n-isolated point*]{}: $t \in \mathbb{R}\setminus H(f)$ and for $\epsilon > 0$ sufficiently small, $(t - \epsilon, t) \cup (t, t + \epsilon) \subset H(f)$.
Case B
:   If $h(f)$ is a finite value, then every $t \in [\inf f, +\infty)$ belongs to one of the following types
1. $t$ is y-stable;
2. $t$ is y-right stable;
3. $t$ is [*n-stable*]{}: $t \in [\inf f, +\infty)\setminus H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset [\inf f, +\infty) \setminus H(f)$;
4. $t$ is [*n-right stable*]{}: $t \in [\inf f, +\infty) \setminus H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset [\inf f, +\infty)\setminus H(f)$ and $(t - \epsilon, t ) \cap ([\inf f, +\infty) \setminus H(f)) = \emptyset$;
5. $t$ is [*n-left stable*]{}: $t \in [\inf f, +\infty) \setminus H(f)$ and there exists $\epsilon > 0$ such that $(t - \epsilon, t] \subset [\inf f, +\infty) \setminus H(f)$ and $(t, t + \epsilon) \cap H(f) \ne \emptyset$;
    6. $t$ is an n-isolated point;
Note that:
- If $t$ is y-right stable or $t$ is n-left stable, then necessarily $t = h(f)$;
- If $t$ is n-right stable, then necessarily $t = \inf f < h(f)$ and $f^{-1}(\inf f) \ne \emptyset$.
- We can determine the type of stability of any $t \in [\inf f, +\infty)$ (Theorem \[thm43\]);
- We give an estimation of the number of connected components of $H(f)$ (Theorem \[thm44\]);
In Subsection 4.3, we consider the case when $\# F(f) = +\infty$. In this case
- Any value $t$ of $[\inf f, +\infty)$ belongs to one of the following types
1. $t$ is y-stable;
2. $t$ is y-right stable;
3. $t$ is [*y-left stable*]{}: $t \in H(f)$ and there exists $\epsilon > 0$ such that $(t - \epsilon, t] \subset H(f)$ and $(t, t + \epsilon) \cap H(f) = \emptyset$;
4. $t$ is an [*y-isolated point*]{}: $t \in H(f)$ and for $\epsilon > 0$ sufficiently small, $(t - \epsilon, t) \cup (t, t + \epsilon) \subset (\inf f, +\infty) \setminus H(f)$;
5. $t$ is n-stable;
6. $t$ is n-right stable;
7. $t$ is n-left stable;
8. $t$ is an n-isolated point.
- We can determine the type of stability of any $t \in [\inf f, +\infty)$ (Theorem \[thm45\]).
We conclude with some examples which illustrate some types of stability.
The set $H(f)$
==============
The first formula of $H(f)$
---------------------------
\
Let $f: \mathbb{R}^n \to \mathbb{R}$ be a polynomial function and $t \in \mathbb{R}$.
We say that
1. A sequence $\{x^k\} \subset \mathbb{R}^n$ is of the first type of $[f \le t]$ if $$\begin{aligned}
\|x^k\|&\to \infty,\\
f(x^k) > t, f(x^k)&\to t,\\
\exists \delta > 0 \ \text{s.t.}\ {\operatornamewithlimits{dist}}(x^k, [f \le t])& \ge \delta.\end{aligned}$$
2. A sequence $\{x^k\} \subset \mathbb{R}^n$ is of the second type of $[f \le t]$ if $$\begin{aligned}
\|x^k\|&\to \infty,\\
\exists M \in \mathbb{R}: t < f(x^k)& \le M < +\infty,\\
{\operatornamewithlimits{dist}}(x^k, [f \le t])& \to +\infty.\end{aligned}$$
\[Thm2.1\] The following statements are equivalent:
1. There are no sequences of the first or second types of $[f \le t]$.
2. $[f \le t]$ has a GHEB, i.e. there exist $\alpha, \beta, c > 0$ such that $$[f(x) - t]_+^\alpha + [f(x) - t]_+^\beta \ge c {\operatornamewithlimits{dist}}(x, [ f \le t])\ \text{for all}\ x \in \mathbb{R}^n.$$
Put $$\begin{aligned}
F^1(f) &= \{t \in \mathbb{R}:\exists \{x^k\} \subset \mathbb{R}^n,\{x^k\}\ \text{is a sequence of the first type of}\ [f \le t]\},\\
F^2(f) &= \{t \in \mathbb{R}:\exists \{x^k\} \subset \mathbb{R}^n, \{x^k\}\ \text{is a sequence of the second type of}\ [f \le t]\}.\end{aligned}$$
Put $$h(f) = \begin{cases}
\inf f\ \text{ if }\ F^2(f) = \{\inf f\}\ \text{or}\ F^2(f) = \emptyset,\\
+\infty\ \text{ if }\ F^2(f) = \mathbb{R},\\
\sup\{t \in \mathbb{R}: t \in F^2(f) \}\ \text{ if }\ F^2(f) \ne \emptyset\ \text{and}\ F^2(f) \ne \mathbb{R}.
\end{cases}$$ We call $h(f)$ the [*threshold*]{} of global Hölderian error bounds of $f$.
\[Main\] We have
1. If $h(f) = \inf f$, then $H(f) = [\inf f, +\infty) \setminus F^1(f)$;
2. If $h(f) = +\infty$, then $H(f) = \emptyset$;
3. If $h(f) \in F^2(f)$, then $H(f) = (h(f), +\infty) \setminus F^1(f)$;
4. If $h(f) \notin F^2(f)$, then $H(f) = [h(f), +\infty) \setminus F^1(f)$.
Clearly, if $t \in F^2(f)$ and $\inf f \le t' \le t$, then $t' \in F^2(f)$. Hence, $$\begin{aligned}
\text{either}\ F^2(f) &= \emptyset,\\
\text{or}\ F^2(f) &= \mathbb{R},\\
\text{or}\ F^2(f) &= (\inf f, h(f)]\ \text{if}\ h(f) \in F^2(f),\\
\text{or}\ F^2(f) &= (\inf f, h(f))\ \text{if}\ h(f) \notin F^2(f).
\end{aligned}$$ Therefore, Theorem \[Main\] follows from Theorem \[Thm2.1\].
A new criterion of the existence of a GHEB of $[f \le t]$ and the second formula of $H(f)$
------------------------------------------------------------------------------------------
\
Let $d$ be the degree of a polynomial $f$. By a linear change of coordinates, we can put $f$ in the form $$f(x_1, \dots, x_n) = a_0x_n^d + a_1(x_1, \dots, x_{n-1})x_n^{d-1} + \dots + a_d(x_1, \dots, x_{n-1})\ (*),$$ where $a_0 \ne 0$ and $a_i(x_1, \dots, x_{n-1})$ are polynomials in $(x_1, \dots, x_{n-1})$ with $\deg a_i\le i, i =1, \dots, d$.
Put $V_1 = \{x \in \mathbb{R}^n: \dfrac{\partial f}{\partial x_n}(x) = 0\}$.
\[Def2.3\] We say that
1. A sequence $\{x^k\}$ is of the first type of $[f \le t]$ w.r.t $V_1$ if $$\begin{aligned}
\|x^k\|& \to \infty,\\
f(x^k) > t,& f(x^k) \to t,\\
{\operatornamewithlimits{dist}}(x^k, [f \le t]) &\ge \delta > 0,\\
\text{and}\ \{x^k\} &\subset V_1.
\end{aligned}$$
2. A sequence $\{x^k\}$ is of the second type of $[f \le t]$ w.r.t $V_1$ if $$\begin{aligned}
\|x^k\|& \to \infty,\\
t< f(x^k)&\le M < +\infty,\\
{\operatornamewithlimits{dist}}(x^k, [f \le t]) &\to \infty,\\
\text{and}\ \{x^k\} &\subset V_1.
\end{aligned}$$
Let $f$ be of the form $(*)$. Put $$\begin{aligned}
P^1(f) &= \{t \in \mathbb{R}: [f \le t]\ \text{has a sequence of the first type w.r.t. $V_1$}\};\\
P^2(f) &= \{t \in \mathbb{R}: [f \le t]\ \text{has a sequence of the second type w.r.t. $V_1$}\};\\
P(f) &= \{t \in \mathbb{R}: \exists \{x^k\} \subset\mathbb{R}^n, \|x^k\|\to\infty, \frac{\partial f}{\partial x_n}(x^k) = 0, f(x^k) \to t \}.\end{aligned}$$
\[Thm2.5\] Let $f$ be of the form $(*)$. Then the following statements are equivalent
1. There are no sequences of the first or second types of $[f \le t]$ w.r.t $V_1$;
2. $\exists \alpha_1, \beta_1, c_1 > 0$ such that $$[f(x) - t]_+^{\alpha_1} + [f(x) - t]_+^{\beta_1} \ge c_1 {\operatornamewithlimits{dist}}(x, [ f \le t]),$$ for all $x \in V_1$;
3. $\exists \alpha_1, \beta_1, c > 0$ such that $$[f(x) - t]_+^{\alpha_1} + [f(x) - t]_+^{\beta_1} + [f(x) - t]_+^{\frac{1}{d}} \ge c {\operatornamewithlimits{dist}}(x, [ f \le t]),$$ for all $x \in \mathbb{R}^n$;
4. $[f \le t]$ has a global Hölderian error bound.
\
We will prove that $(i) \Rightarrow (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (i)$.\
Proof of $(i) \Rightarrow (ii):$\
For $\tau > 0$, put $$\psi(\tau) := \begin{cases}
0 &\text{if}\ \{x \in V_1: [f(x) - t]_+=\tau\}\ \text{is empty},\\
\sup\limits_{[f(x) - t]_+ = \tau, x \in V_1} {\operatornamewithlimits{dist}}(x, [f \le t]) &\text{otherwise}
\end{cases}.$$ By (i), $\psi(\tau)$ is well defined on $[0, +\infty)$. Moreover, it follows from the Tarski-Seidenberg theorem (see, for example, [@BCR; @C; @HP]) that $\psi(\tau)$ is a semialgebraic function.
To prove (ii), it is important to know the behavior of $\psi(\tau)$, as $\tau \to 0$ or $\tau \to +\infty$. We distinguish 4 possibilities
1. $\psi(\tau) \equiv 0$ for $\tau$ sufficiently small and $\psi(\tau) \equiv 0$ for $\tau$ sufficiently large;
2. $\psi(\tau) \equiv 0$ for $\tau$ sufficiently small and $\psi(\tau) \not\equiv 0$ for $\tau$ sufficiently large;
3. $\psi(\tau) \not\equiv 0$ for $\tau$ sufficiently small and $\psi(\tau) \equiv 0$ for $\tau$ sufficiently large;
4. $\psi(\tau) \not\equiv 0$ both for $\tau$ sufficiently small and $\tau$ sufficiently large.
We will prove (i) $\Rightarrow$ (ii) for the case (d) because the proofs of other cases are similar.
In this case, since $\psi(\tau)$ is semialgebraic and $\psi(\tau) \not\equiv 0$ both for $\tau$ sufficiently small and for $\tau$ sufficiently large, we have $$\label{Eq1}
\psi(\tau) = a_0\tau^{\tilde{\alpha}} + o(\tau^{\tilde{\alpha}})\ \text{as}\ \tau \to 0,\ \text{where}\ a_0 > 0.$$ and $$\label{Eq2}
\psi(\tau) = b_0\tau^{\tilde{\beta}} + o(\tau^{\tilde{\beta}})\ \text{as}\ \tau \to +\infty,\ \text{where}\ b_0 > 0.$$ Clearly, $\tilde{\alpha} > 0$. It follows from (\[Eq1\]) that there exists $\delta > 0$ such that $$\label{Eq3}
[f(x) - t]_+^{\frac{1}{\tilde{\alpha}}} \ge \frac{a_0}{2}{\operatornamewithlimits{dist}}(x, [f \le t]),$$ for $x \in \{x \in V_1: [f(x) - t]_+ \le \delta\}$.
It follows from (\[Eq2\]) that there exists $\Delta > 0$ sufficiently large such that for any $x \in \{x \in V_1: [f(x) - t]_+ \ge \Delta\}$ we have $$\label{Eq5}
[f(x) - t]_+ \ge \frac{b_0}{2}{\operatornamewithlimits{dist}}(x, [f \le t])$$ if $\tilde{\beta} \le 0$ and $$\label{Eq4}
[f(x) - t]_+^{\frac{1}{\tilde{\beta}}} \ge \frac{b_0}{2}{\operatornamewithlimits{dist}}(x, [f \le t]),$$ if $\tilde{\beta} >0$.
Since, by (i), there are no sequences of the second type, the function ${\operatornamewithlimits{dist}}(x, [f \le t])$ is bounded on the set $$\{x \in V_1: \delta \le [f(x) - t]_+ \le \Delta\}.$$
This fact, together with (\[Eq3\]), (\[Eq5\]) and (\[Eq4\]), gives the proof of (i) $\Rightarrow$ (ii).
Proof of (ii) $\Rightarrow$ (iii):\
The proof is based on the following classical result (a version of the van der Corput lemma)
Let $u(\tau)$ be a real valued $C^d$-function, $d \in \mathbb{N}$, that satisfies $|u^{(d)}(\tau)| \ge 1$ for all $\tau \in \mathbb{R}$. Then the following estimate is valid for all $\epsilon > 0$: $$mes\{\tau \in \mathbb{R}: |u(\tau)| \le \epsilon\} \le (2e)( (d+1)! )^{1/d}\epsilon^{1/d}.$$
Suppose that we have (ii). Then
- If $x \in [f \le t]$, then ${\operatornamewithlimits{dist}}(x, [f \le t]) = 0$ and (iii) holds automatically.
- If $x \in V_1$, then (iii) follows from (ii).
Assume that $x \notin [f \le t] \cup V_1$.
Clearly
- \(ii) holds if and only if there exists $c > 0$ such that $$\label{Eq7}
[f(x) - t]_+ \ge c\min\{{\operatornamewithlimits{dist}}(x, [f \le t])^{\frac{1}{\alpha_1}}, {\operatornamewithlimits{dist}}(x, [f \le t])^{\frac{1}{\beta_1}}\}$$ for all $x \in V_1$.
- \(iii) holds if and only if there exists $c > 0$ such that $$\label{Eq8}
[f(x) - t]_+ \ge c\min\{{\operatornamewithlimits{dist}}(x, [f \le t])^{\frac{1}{\alpha_1}}, {\operatornamewithlimits{dist}}(x, [f \le t])^{\frac{1}{\beta_1}}, {\operatornamewithlimits{dist}}(x, [f \le t])^{{d}}\}.$$ for all $x \in \mathbb{R}^n$.
Let $x = (x', x_n) \in \mathbb{R}^{n-1}\times \mathbb{R}, x' = (x_1, \dots, x_{n-1})$. We put $$u_{x'}(\tau) = \frac{f(x', \tau) - t}{a_0 d!}, \tau \in \mathbb{R}$$ and $$\Sigma(x') = \{\tau \in \mathbb{R}: |u_{x'}(\tau)| \le \frac{f(x) - t}{|a_0|d!} \}.$$ Since $u_{x'}^{(d)}(\tau) = 1$, it follows from the van der Corput Lemma that there exists a constant $c > 0$, independent of $x$ such that $$\label{Eq9}
mes\Sigma(x') \le c(f(x) - t)^{1/d}.$$ Clearly, $\Sigma(x') \ne \emptyset$ and $\Sigma(x') \ne \mathbb{R}$. Since $\Sigma(x')$ is a closed semi-algebraic subset of $\mathbb{R}$, we have $$\Sigma(x') = \cup_{i=1}^m [a_i, b_i] \bigcup \cup_{j=1}^s \{c_j\},$$ where $a_i, b_i, c_j \in \mathbb{R}, i=1, \dots, m; j = 1, \dots,s$, and $$|u(a_i)| = |u(b_i)| = |u(c_j)| = \dfrac{f(x) - t}{|a_0|d!} .$$ Firstly, we see that $x_n \ne c_j, \forall j = 1, \dots, s$. In fact, since $c_j$ is an isolated point of $\Sigma(x')$, $c_j$ is a local extremum of $u_{x'}(\tau)$. Hence, $$\frac{d u_{x'}}{d\tau}(c_j) = 0$$ or $\dfrac{\partial f}{\partial x_n}(x', c_j) = 0$ i.e. $(x', c_j) \in V_1$, while by assumption, $x = (x', x_n) \notin V_1$. Thus, $x_n \in \{a_i, b_i; i = 1, \dots, m\}$.
Without loss of generality, we may assume that $x_n = a_1$. Since $|u_{x'}(a_1)| = |u_{x'}(b_1)|$, we distinguish two cases
- If $u_{x'}(a_1) = -u_{x'}(b_1)$, then there exists $\tau_1 \in [a_1, b_1]$ such that $u_{x'}(\tau_1) = 0$, which means that $f(x', \tau_1) = t$ or $(x', \tau_1) \in f^{-1}(t) \subset [f \le t]$. Hence $${\operatornamewithlimits{dist}}(x, [f \le t]) \le {\operatornamewithlimits{dist}}(x, (x', \tau_1)) = |x_n - \tau_1| \le |a_1 - \tau_1| \le mes\Sigma(x').$$ Then, by (\[Eq9\]), (iii) holds.
- If $u_{x'}(a_1) = u_{x'}(b_1)$, then, by Rolle’s Theorem, there exists $\tau_2 \in [a_1, b_1]$ such that $$\dfrac{d u_{x'}}{d\tau}(\tau_2) = 0,$$ which means that $(x', \tau_2) \in V_1$. Applying (\[Eq7\]), there exists $c_1 > 0$ such that $$[f(x', \tau_2) - t]_+ \ge c_1 \min\{{\operatornamewithlimits{dist}}((x', \tau_2), [f \le t] )^{1/\alpha_1}, {\operatornamewithlimits{dist}}((x', \tau_2), [f \le t] )^{1/\beta_1}\}.$$ Moreover, since $\tau_2 \in \Sigma(x')$, we have $$\label{Eq12}
\begin{aligned}
f(x) - t &\ge [f(x', \tau_2) - t]_+ \\
&\ge c_1 \min\{{\operatornamewithlimits{dist}}((x', \tau_2), [f \le t] )^{1/\alpha_1}, {\operatornamewithlimits{dist}}((x', \tau_2), [f \le t] )^{1/\beta_1}\}.
\end{aligned}$$ Let $P(x', \tau_2)$ be the point of $[f \le t]$ such that $${\operatornamewithlimits{dist}}((x',\tau_2), [f \le t] ) = {\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2)).$$ We have $$\begin{aligned}
{\operatornamewithlimits{dist}}(x, [f \le t]) &\le {\operatornamewithlimits{dist}}(x, P(x', \tau_2))\\
& \le {\operatornamewithlimits{dist}}(x, (x', \tau_2)) + {\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2))\\
& \le 2\max\{{\operatornamewithlimits{dist}}(x, (x', \tau_2)), {\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2))\}.
\end{aligned}$$ Now:
- If $\max\{{\operatornamewithlimits{dist}}(x, (x', \tau_2)), {\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2))\} = {\operatornamewithlimits{dist}}(x, (x', \tau_2))$, then $${\operatornamewithlimits{dist}}(x, [f\le t]) \le 2{\operatornamewithlimits{dist}}((x', \tau_2),x) \le 2mes\Sigma(x') \le 2c(f(x) - t)^{1/d}.$$
- If $\max\{{\operatornamewithlimits{dist}}(x, (x', \tau_2)), {\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2))\} = {\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2))$, then $${\operatornamewithlimits{dist}}(x, [f \le t]) \le 2{\operatornamewithlimits{dist}}((x', \tau_2), P(x', \tau_2)) \le 2{\operatornamewithlimits{dist}}((x', \tau_2), [f \le t]).$$ Then (iii) follows from (\[Eq12\]).
Hence, the implication (ii) $\Rightarrow$ (iii) is proved.
Proof of (iii) $\Rightarrow$ (iv):\
Clearly, if (iii) holds, then there are no sequences of the first or second types of $[f \le t]$. Hence, by Theorem \[Thm2.1\], (iv) holds.
The proof of (iv) $\Rightarrow$ (i) is straightforward.
\[Changevar\] Let $f: \mathbb{R}^n \to \mathbb{R}$ be a polynomial function and $A : \mathbb{R}^n \to \mathbb{R}^n$ be a linear isomorphism. Then we have $$H(f \circ A) = H(f).$$
Let $y = Ax$ and put $g = f \circ A$.
Firstly, we prove that $t_0 \in H(g) \Rightarrow t_0 \in H(f)$.
We have $f(y) = f(A\circ A^{-1}(y)) = g(A^{-1}(y))$. This implies that $$\label{fact15}
[f(y) - t_0]_+^\alpha + [f(y) - t_0]_+^\beta = [g(A^{-1}(y)) - t_0]_+^\alpha + [g(A^{-1}(y)) - t_0]_+^\beta.$$
Since $t_0 \in H(g)$, there exist $\alpha, \beta, c > 0$ such that $$\label{fact16}
[g(A^{-1}(y)) - t_0]_+^\alpha + [g(A^{-1}(y)) - t_0]_+^\beta \ge c{\operatornamewithlimits{dist}}(A^{-1}(y), [g \le t_0]).$$ Suppose that ${\operatornamewithlimits{dist}}(A^{-1}(y), [g \le t_0]) = \|A^{-1}(y) - x_0\|$, where $g(x_0) = t_0$ or $f(A(x_0)) = t_0$. Since $y_0 = Ax_0$ and $A$ is a linear isomorphism, we have $f(y_0) = t_0$ and there exists $c' > 0$ such that $$c'\|y - y_0\| \ge \|A^{-1}(y) - A^{-1}(y_0)\| \ge \frac{1}{c'}\|y - y_0\|.$$ It follows that $${\operatornamewithlimits{dist}}(A^{-1}(y), [g \le t_0]) = \|A^{-1}(y) - A^{-1}(y_0)\| \ge \frac{1}{c'}\|y - y_0\| \ge \frac{1}{c'}{\operatornamewithlimits{dist}}(y, [f \le t_0]).$$ Combining (\[fact15\]), (\[fact16\]) and above fact, we have $$[f(y) - t_0]_+^\alpha + [f(y) - t_0]_+^\beta \ge \frac{c}{c'}{\operatornamewithlimits{dist}}(y, [f \le t_0]), \forall y \in \mathbb{R}^n,$$ i.e., $t_0 \in H(f)$. The claim $t_0 \in H(f) \Rightarrow t_0 \in H(g)$ is proved similarly.
We have the following theorem
\[second\] Let $f$ be a polynomial of the form $(*)$. Then we have
1. $h(f) = \sup\{t \in \mathbb{R}: t \in P^2(f)\}$;
2. If $h(f) = \inf f$, then $H(f) = [\inf f, +\infty) \setminus P^1(f)$;
3. If $h(f) = +\infty$, then $H(f) = \emptyset$;
4. If $h(f) \in \mathbb{R}$ and $h(f) \in P^2(f)$, then $H(f) = (h(f), +\infty) \setminus P^1(f)$;
5. If $h(f) \in \mathbb{R}$ and $h(f) \notin P^2(f)$, then $H(f) = [h(f), +\infty) \setminus P^1(f)$.
The relationship between $H(f)$ and Fedoryuk values
===================================================
The relationship between Fedoryuk values and the existence of global Hölderian error bounds is well-known and has been explored in many previous works, see, for example, [@Az; @CM; @LP; @Ha; @I]. In this section, we will establish this relationship by proving that $h(f) \in F(f) \cup \{\pm \infty\}$ and $F^1(f) \subset F(f)$. We recall
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a polynomial function. The set [*of Fedoryuk values*]{} of $f$ is defined by $$F(f):=\{t \in \mathbb{R} : \exists \{x^k\} \subset \mathbb{R}^n, \|x^k\| \to \infty, \|\nabla f(x^k)\| \to 0, f(x^k) \to t\}.$$
Moreover, we have
\[Lemma3.1\] $F(f)$ is a semialgebraic subset of $\mathbb{R}$.
It follows from Lemma \[Lemma3.1\] that either $F(f)$ is empty, or $F(f)$ is a finite set, or $F(f)$ is a union of finitely many points and intervals.
Note that $F(f)$ can be an infinite set, for example (see [@Par]), if $f(x,y,z) = x + x^2y + x^4yz$, then $F(f) = \mathbb{R}$ and $F(f^2) = (0, +\infty)$ (see also [@KOS] and [@Sch]).
To prove the lemma, it is more convenient to use the logical formulation of the Tarski-Seidenberg Theorem. Let us recall it.
A [*first-order formula*]{} is obtained as follows recursively (see, for example, [@BCR; @C; @HP])
1. If $f \in \mathbb{R}[X_1, \dots, X_n]$, then $f=0$ and $f > 0$ are first-order formulas (with free variables $X=(X_1,\dots,X_n)$) and $\{x \in \mathbb{R}^n|f(x) = 0\}$ and $\{x \in \mathbb{R}^n|f(x) > 0\}$ are respectively the subsets of $\mathbb{R}^n$ such that the formulas $f = 0$ and $f>0$ hold.
2. If $\Phi$ and $\Psi$ are first-order formulas, then $\Phi \vee \Psi$ (disjunction), $\Phi \wedge \Psi$ (conjunction) and $\lnot\Phi$ (negation) are also first-order formulas.
3. If $\Phi$ is a formula and $X$ is a variable ranging over $\mathbb{R}$, then $ \exists X \Phi $ and $\forall X \Phi$ are first-order formulas.
\[Tarski\] If $\Phi(X_1, \dots, X_n)$ is a first-order formula, then the set $$\{(x_1, \dots, x_n) \in \mathbb{R}^n :\Phi(x_1, \dots,x_n)\ \text{holds} \}$$ is semialgebraic.
We have $$\begin{aligned}
F(f) = \{t \in \mathbb{R}| \forall \epsilon > 0, \forall \delta >0, \forall & R> 0, \exists x \in\mathbb{R}^n: \|x\|^2 \ge R^2,\\ & \|\nabla f(x)\|^2 \le \delta^2, |f(x) - t| \le \epsilon\}.
\end{aligned}$$ It follows from above that the set $F(f)$ can be determined by a first-order formula, hence by the Tarski-Seidenberg Theorem, it is a semialgebraic subset of $\mathbb{R}$.
The following proposition is contained implicitly in [@Ha Proof of Theorem B].
\[Prop2.2\] $F^1(f) \subset F(f)$.
Put $X = \{x \in \mathbb{R}^n : f(x) \ge t\}$. By the metric induced from that of $\mathbb{R}^n$, $X$ is a complete metric space and the function $f: X \to \mathbb{R}$ is bounded from below. Let $t \in F^1(f)$ and $\{x^k\}$ be a sequence of the first type of $[f \le t]$: $$\begin{aligned}
\|x^k\|&\to \infty,\\
f(x^k)& > t,\\
f(x^k)&\to t,\\
\exists \delta > 0 \ \text{s.t.}\ {\operatornamewithlimits{dist}}(x^k, [f \le t])& \ge \delta.
\end{aligned}$$ Let $\epsilon_k = f(x^k) - t$. Then $\epsilon_k > 0$ and $\epsilon_k \to 0$ as $k \to +\infty$. Set $\lambda_k = \sqrt{\epsilon_k}$. By Ekeland’s Variational Principle ([@E]), there exists a sequence $\{y^k\} \subset X$ such that $$\begin{aligned}
f(y^k)&\le t + \epsilon_k = f(x^k),\\
{\operatornamewithlimits{dist}}(y^k, x^k)&\le \lambda_k
\end{aligned}$$ and for any $x \in X, x \ne y^k$, we have $$\label{Eq13}
f(x) \ge f(y^k) - \dfrac{\epsilon_k}{\lambda_k}d(x,y^k), \forall x \in X.$$ Since ${\operatornamewithlimits{dist}}(y^k, x^k) \le \lambda_k = \sqrt{\epsilon_k} \to 0$ and ${\operatornamewithlimits{dist}}(x^k, [f \le t]) \ge \delta > 0$, the ball $B(y^k, \delta/2) = \{x \in \mathbb{R}^n: {\operatornamewithlimits{dist}}(y^k, x) \le \delta/2\}$ is contained in $X$. Then, inequality (\[Eq13\]) implies that $$\dfrac{f(y^k + \tau u) - f(y^k)}{\tau} \ge -\sqrt{\epsilon_k}$$ holds true for every $u \in \mathbb{R}^n, \|u\| = 1$ and $\tau \in [0, \delta/2)$. This gives us $$\langle \nabla f(y^k), u \rangle \ge - \sqrt{\epsilon_k}.$$ Putting $u = -\dfrac{\nabla f(y^k)}{\|\nabla f(y^k)\|}$, we get $\|\nabla f(y^k)\| \le \sqrt{\epsilon_k} \to 0$.
Clearly $f(y^k) \to t$. Therefore $t \in F(f)$.
\[Prop2.3\] If there is a sequence of the second type of $[f \le t]$: $$\begin{aligned}
\|x^k\|&\to \infty,\\
t < f(x^k)& \le M < +\infty,\\
{\operatornamewithlimits{dist}}(x^k, [f \le t])& \to +\infty.
\end{aligned}$$ then there exists a sequence $\{y^k\}$ of the second type of $[f \le t]$: $$\begin{aligned}
\|y^k\|&\to \infty,\\
t \le f(y^k)& \le M < +\infty,\\
{\operatornamewithlimits{dist}}(y^k, [f \le t])& \to +\infty.
\end{aligned}$$ with additional conditions $$\begin{aligned}
\|\nabla f(y^k)\|&\to 0,\\
\text{and}\ \lim_{k \to \infty}f(y^k)&\in F(f).
\end{aligned}$$ In particular, the segment $[t, M]$ contains at least one point of F(f).
Put $X = \{x \in \mathbb{R}^n: f(x) \ge t\}, \epsilon_k = f(x^k) - t$ and $\lambda_k = \dfrac{1}{2}{\operatornamewithlimits{dist}}(x^k, [f \le t])$.
As in the proof of Proposition \[Prop2.2\], we can find a sequence $\{y^k\} \subset X$ such that $$\begin{aligned}
\|y^k\|&\to \infty,\\
t \le f(y^k)& \le t + \epsilon_k = f(x^k) \le M < +\infty,\\
\lim_{k \to \infty}f(y^k) &\in F(f),\\
\|\nabla f(y^k)\|&\to 0,\\
{\operatornamewithlimits{dist}}(y^k, x^k)& \le \lambda_k.
\end{aligned}$$ Since $$\begin{aligned}
{\operatornamewithlimits{dist}}(y^k, [f \le t]) &\ge {\operatornamewithlimits{dist}}(x^k, [f \le t]) - {\operatornamewithlimits{dist}}(y^k, x^k)\\
&\ge {\operatornamewithlimits{dist}}(x^k, [f \le t]) - \lambda_k = \dfrac{1}{2}{\operatornamewithlimits{dist}}(x^k, [f \le t]),
\end{aligned}$$ we have ${\operatornamewithlimits{dist}}(y^k, [f \le t]) \to +\infty$. The proposition is proved.
\[Prop2.4\] If $h(f) \ne -\infty$ and $\# F(f) < +\infty$, then $h(f) \in F(f)$.
Assume that $h(f) \ne -\infty$. By contradiction, suppose that $h(f) \notin F(f)$. Hence, either $F(f) = \emptyset$ or $F(f)$ is a non-empty finite set.
By the definition of $h(f)$, for $\epsilon > 0$ sufficiently small, $[f \le h(f) - \epsilon]$ has a sequence of the second type. Hence, it follows from Proposition \[Prop2.3\] that $F(f) \ne \emptyset$. Thus, $F(f)$ is a non-empty finite set. Then, for any $\epsilon > 0$ sufficiently small, we have $[h(f) - \epsilon, h(f)] \cap F(f) = \emptyset$ and $h(f) - \epsilon \in F^2(f)$.
Let $\{x^k\}$ be a sequence of the second type of $[f \le h(f) - \epsilon]$: $$h(f)-\epsilon \le f(x^k) \le M, \|x^k\| \to \infty\ \text{and}\ {\operatornamewithlimits{dist}}(x^k, [f \le h(f) - \epsilon])\to \infty.$$ By Proposition \[Prop2.3\], we may assume that $\|\nabla f(x^k)\| \to 0$ and there exists $t_1 \in F(f) \cap [h(f)-\epsilon, M]$ and $ t_1 = \lim\limits_{k\to\infty}f(x^k)$.
Let $\delta_1 > 0$ such that $t_1 - \delta_1 \notin F(f)$ and $t_1 - \delta_1 > h(f)$. Since $f(x^k) \to t_1$, we can assume that $f(x^k) > t_1 - \delta_1$ for all $k$. Let $y^k$ be the point of $[f \le t_1 - \delta_1]$ such that ${\operatornamewithlimits{dist}}(x^k, [f \le t_1 - \delta_1]) = \|x^k - y^k\|$. Clearly, $y^k \in f^{-1}(t_1 - \delta_1)$.\
[**Claim:**]{} $\{y^k\}$ is a sequence of second type of $[f \le h(f) - \epsilon]$.
Since $t_1 - \delta_1 > h(f), t_1 - \delta_1 \notin F^2(f)$. Hence, for some $A > 0$, we have $\|x^k - y^k\| \le A < +\infty$ for all $k$.
Let $z^k$ be the point of $[f \le h(f) - \epsilon]$ such that ${\operatornamewithlimits{dist}}(y^k, [f \le h(f) - \epsilon]) = \|y^k - z^k\|$. We have $$\begin{aligned}
{\operatornamewithlimits{dist}}(y^k, [f \le h(f) - \epsilon]) &\ge {\operatornamewithlimits{dist}}(x^k, [f \le h(f) - \epsilon]) - \|x^k - y^k\|\\
&\ge {\operatornamewithlimits{dist}}(x^k, [f \le h(f) - \epsilon]) - A.
\end{aligned}$$ This shows that ${\operatornamewithlimits{dist}}(y^k, [f \le h(f) - \epsilon]) \to +\infty$ and the claim is proved.
Since $\{y^k\}$ is a sequence of the second type of $[f \le h(f) - \epsilon]$ and $f(y^k) = t_1 - \delta_1 \notin F(f)$, by Proposition \[Prop2.3\], there exists $t_2 \in [h(f)-\epsilon, t_1 - \delta_1] \cap F(f)$. Choose $\delta_2$ such that $t_1 - \delta_2 > h(f)$ and $t_2 - \delta_2 \notin F(f)$. Similarly as in the proof of Claim, we can find a sequence of the second type $\{y'^k\}$ of $[f \le h(f)-\epsilon]$ such that $f(y'^k) = t_2 -\delta_2$ and $t_3 \in F(f)$ such that $h(f)-\epsilon < t_3 < t_2$.
Iterating this process, we see that the interval $[h(f) - \epsilon, M]$ contains an infinite number of points of $F(f)$, which is a contradiction.
Types of stability of global Hölderian error bounds
===================================================
We will distinguish 3 cases.
Case 1 - $F(f) = \emptyset$
---------------------------
\[thm41\] If $F(f) = \emptyset$ then $H(f) = (\inf f, +\infty)$ or $H(f) = [\inf f, +\infty)$.
Assume that $F(f) = \emptyset$. Then by Proposition \[Prop2.2\], $F^1(f) = \emptyset$. Moreover, it follows from Proposition \[Prop2.3\] that $F^2(f)$ is also empty.
Hence, by Theorem \[Thm2.1\], $H(f) = (\inf f, +\infty)\setminus(F^1(f) \cup F^2(f)) = (\inf f, +\infty)$ or $H(f) = [\inf f, +\infty)\setminus(F^1(f) \cup F^2(f)) = [\inf f, +\infty)$.
Let $t \in \mathbb{R}$.
1. $t$ is called [*y-stable*]{} if $t \in H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset H(f)$;
2. $t$ is called [*y-right stable*]{} if $t \in H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset H(f)$ and $(t - \epsilon, t ) \cap H(f) = \emptyset$.
\[cor4.1\] If $F(f) = \emptyset$, then we have two cases
1. If $H(f) = (\inf f, +\infty)$, then there is only one type of stability of GHEB. Namely, for all $t \in (\inf f, +\infty)$, $t$ is y-stable.
2. If $H(f) = [\inf f, +\infty)$, then there are two stability types of GHEB. Namely, for all $t \in (\inf f, +\infty)$, $t$ is y-stable, and for $t = \inf f$, $t$ is y-right stable.
\[remark41\]
We recall here results of [@Ha] about the role that Newton polyhedron plays in studying GHEB’s.
Let $f(x) = \sum a_\alpha x^\alpha$ be a polynomial in $n$ variables. Put $supp(f) = \{\alpha \in (\mathbb{N} \cup \{0\})^n: a_\alpha \ne 0\}$ and denote $\Gamma_f$ the convex hull in $\mathbb{R}^n$ of the set $\{(0,0, \dots, 0)\} \cup supp(f)$. Following [@Kou] we call $\Gamma_f$ [*the Newton polyhedron at infinity of $f$*]{}.
Let $\Delta$ be a face (of any dimension) of $\Gamma_f$, set: $$f_\Delta(x):= \sum\limits_{\alpha \in \Delta}a_\alpha x^\alpha.$$
[We say that a polynomial $f$ is nondegenerate with respect to its Newton boundary at infinity (nondegenerate for short), if for every face $\Delta$ of $\Gamma_{f}$ not containing the origin, the system $$x_i\dfrac{\partial f_\Delta}{\partial x_i} = 0, i = 1, \dots, n.$$ has no solution in $(\mathbb{R}\setminus \{0\})^n$.]{}
[A polynomial $f(x)=\sum a_\alpha x^\alpha$ in $n$ variables is said to be convenient if for every $i$, there exists a monomial of $f$ of the form $x_i^{\alpha_i}, \alpha_i > 0$, with a non-zero coefficient.]{}
\[thmHa\] If $f$ is convenient and nondegenerate w.r.t. its Newton polyhedron at infinity, then there exist $r, \delta > 0$ such that $$\|\nabla f(x)\| \ge \delta\ \text{for}\ \|x\| \ge r \gg 1.$$ In particular, $F(f) = \emptyset$.
Let $\mathbb{R}[x_1, \dots, x_n]$ denote the ring of polynomials in $n$ variables over $\mathbb{R}$.
For $g \in \mathbb{R}[x_1, \dots, x_n]$, as before, $\Gamma_g$ denotes the Newton polyhedron at infinity of $g$. Let $f \in \mathbb{R}[x_1, \dots, x_n]$ be a convenient polynomial.
Put $\Gamma:=\Gamma_f$ and $$\mathcal{A}_\Gamma = \{g \in \mathbb{R}[x_1, \dots, x_n]: \Gamma_g \subset \Gamma \}.$$ The set $\mathcal{A}_\Gamma$ can be identified to the space $\mathbb{R}^m$, where $m$ is the number of integer points of $\Gamma$.
Put $\mathcal{B}_\Gamma = \{h \in \mathcal{A}_\Gamma: \Gamma_h = \Gamma\ \text{and $h$ is nondegenerate} \}$. According to [@Kou], $\mathcal{B}_\Gamma$ is an open and dense subset of $\mathcal{A}_\Gamma$. Hence, Theorems \[thm41\] and \[thmHa\] show that if $f$ is a [*generic*]{} polynomial, then $H(f) = (\inf f, +\infty)$ or $H(f) = [\inf f, +\infty)$. By Corollary \[cor4.1\], any value $t \in (\inf f, +\infty)$ is y-stable, and $t = \inf f$ is y-right stable when $H(f) = [\inf f, +\infty)$.
Case 2 - $F(f)$ is non-empty finite set
---------------------------------------
\[Prop3.1\] If $\# F(f) < +\infty$, then $H(f) \ne \emptyset$.
By contradiction, assume that $H(f) = \emptyset$. Since $\# F(f) < +\infty$, we have $\# F^1(f) < +\infty$ (Proposition \[Prop2.2\]). Then, it follows from the first formula that $H(f) = \emptyset$ if and only if $h(f) = +\infty$, but the latter is impossible, since we have
[**Claim:**]{} If $h(f) = +\infty$, then $\# F(f) = +\infty$.
Take $t_1 \in \mathbb{R}$, since $h(f) = +\infty$, $[f \le t_1]$ has a sequence of the second type. By Proposition \[Prop2.3\], there exists $M_1 > t_1$ and $a_1 \in [t_1, M_1] \cap F(f)$. Take $t_2$ such that $M_1 < t_2$, then $[f \le t_2]$ has a sequence of the second type. Hence, there exists $M_2 > t_2$ and $a_2$ such that $a_2 \in [t_2, M_2] \cap F(f)$. Continuing this way, we find an infinite sequence $a_1, a_2, a_3, \dots$ of $F(f)$. Therefore, $\# F(f) = +\infty$.
Now, we classify the stability types of GHEB in the case when $F(f)$ is a non-empty finite set.
\[def42\] Let $t \in \mathbb{R}$.
1. Recall that $t$ is called y-stable if $t \in H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset H(f)$;
2. Recall that $t$ is called y-right stable if $t \in H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset H(f)$ and $(t - \epsilon, t ) \cap H(f) = \emptyset$;
3. $t$ is called [*n-stable*]{} if $t \in [\inf f, +\infty)\setminus H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset \mathbb{R} \setminus H(f)$;
4. $t$ is called [*n-right stable*]{} if $t \in [\inf f, +\infty) \setminus H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset [\inf f, +\infty)\setminus H(f)$ and $(t - \epsilon, t ) \cap ([\inf f, +\infty) \setminus H(f)) = \emptyset$;
5. $t$ is called [*n-left stable*]{} if $t \in [\inf f, +\infty) \setminus H(f)$ and there exists $\epsilon > 0$ such that $(t - \epsilon, t] \subset [\inf f, +\infty) \setminus H(f)$ and $(t, t + \epsilon) \cap H(f) \ne \emptyset$;
6. $t$ is called [*n-isolated*]{} if $t \in \mathbb{R}\setminus H(f)$ and for $\epsilon > 0$ sufficiently small, $(t - \epsilon, t) \cup (t, t + \epsilon) \subset H(f)$.
It follows from the first formula that
\[thm43\] Let $F(f)$ be a non-empty finite set and $t \in [\inf f, +\infty)$. Then, $t$ is one of the following types
Case A
: If $h(f) = -\infty$, then
1. $t$ is y-stable if and only if $t \notin F^1(f)$.
    2. $t$ is an n-isolated point if and only if $t \in F^1(f)$.
Case B
: If $h(f)$ is a finite value, then
1. $t$ is y-stable if and only if $t > h(f)$ and $t\notin F^1(f)$;
2. $t$ is y-right stable if and only if $t = h(f)$ and $h(f) \in H(f)$;
3. $t$ is n-stable if and only if $\inf f < t < h(f)$;
4. $t$ is n-right stable if and only if $t = \inf f < h(f)$ and $f^{-1}(\inf f) \ne \emptyset$;
5. $t$ is n-left stable if and only if $t = h(f)$ and $h(f) \notin H(f)$;
    6. $t$ is an n-isolated point if and only if $t > h(f)$ and $t \in F^1(f)$.
[ Here, if we have item 2, then we do not have item 5, and vice versa. ]{}
Now, to complete this subsection, we add an estimation of the number of connected components of $H(f)$ for the case $\# F(f) < +\infty$.
Let us denote by $C(S)$ the number of connected components of $S \subset \mathbb{R}^n$; we have the following result
\[thm44\] Let $f: \mathbb{R}^n \to \mathbb{R}$ be any polynomial of degree $d$. Then, if $\# F(f) < +\infty$, we have $$C(H(f)) \le (d-1)^{n-1} + 1.$$
Put $$F_{\mathbb{C}}(f):=\{t \in \mathbb{C} : \exists \{x^k\} \subset \mathbb{C}^n, \|x^k\| \to \infty, \|\nabla f(x^k)\| \to 0, f(x^k) \to t\}.$$
Since $\# F(f) < +\infty $, we have $ \# F_{\mathbb{C}}(f) < +\infty$. Then, according to Theorem 1.1 of [@Je], we have $$\# F(f) \le \# F_{\mathbb{C}}(f) \le (d-1)^{n-1}.$$ Hence, it follows from the first formula that $$C(H(f)) \le (d-1)^{n-1} + 1.$$
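For instance, for $n = 2$ this bound reads $C(H(f)) \le (d-1) + 1 = d$: whenever $F(f)$ is finite, the set $H(f)$ of a degree-$d$ polynomial in two variables has at most $d$ connected components.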
Case 3 - $F(f)$ is an infinite set
----------------------------------
\
In this case, the following lemma tells us that the set $H(f)$ still has a very simple structure
\[Lemma2.1\] $H(f)$ is a semialgebraic subset of $\mathbb{R}$.
Using the first formula for $H(f)$ (Theorem \[Main\]), it is enough to show that $F^1(f)$ is semialgebraic.
We have $$\begin{aligned}
F^1(f) = \{t \in \mathbb{R}| \exists \delta > 0,\forall R> 0: \forall &\epsilon >0, \exists x \in\mathbb{R}^n: \|x\|^2 \ge R^2,\\ &0 < f(x) - t < \epsilon, {\operatornamewithlimits{dist}}(x, [f \le t]) \ge \delta\},\quad\ \text{(a)}
\end{aligned}$$ $$\{x \in \mathbb{R}^n: {\operatornamewithlimits{dist}}(x, [f \le t]) \ge \delta \} = \{x \in \mathbb{R}^n: \forall x_0 \in [f \le t], \|x - x_0\|^2 \ge \delta^2\}.\quad \text{(b)}$$ It follows from (a) and (b) that the set $F^1(f)$ can be determined by a first-order formula, hence it is a semialgebraic subset of $\mathbb{R}$.
Since $H(f)$ is a semialgebraic subset of $\mathbb{R}$, we have
\[cor2.1\] If $H(f) \ne \emptyset$ and $H(f) \ne \mathbb{R}$, then it is a union of finitely many points and intervals.
By Corollary \[cor2.1\], we have to consider three cases
1. $H(f) = \mathbb{R}$;
2. $H(f) = \emptyset$;
3. $H(f)$ is a non-empty proper semialgebraic subset of $\mathbb{R}$.
- In the case (a), we have only one stability type: $t$ is y-stable for all $t \in \mathbb{R}$;
- In the case (b), we have only one stability type: $t$ is n-stable for all $t \in \mathbb{R}$;
- In the case (c), $H(f)$ is a disjoint union of the sets of the following types: $$I_{(a^1_i, a^2_i)}, I_{[b^1_j, b^2_j)}, I_{(c^1_k, c^2_k]}, I_{[d^1_l, d^2_l]}, A(m), I_{-\infty}, I_{+\infty}.$$ Where
1. $I_{(a^1_i, a^2_i)} = \emptyset$ or $I_{(a^1_i, a^2_i)} = {(a^1_i, a^2_i)}, i = 1, \dots, p$;
2. $I_{[b^1_j, b^2_j)} = \emptyset$ or $I_{[b^1_j, b^2_j)} = {[b^1_j, b^2_j)}, j = 1, \dots, q$;
3. $I_{(c^1_k, c^2_k]} = \emptyset$ or $I_{(c^1_k, c^2_k]} = {(c^1_k, c^2_k]}, k = 1, \dots, r$;
4. $I_{[d^1_l, d^2_l]}=\emptyset$ or $I_{[d^1_l, d^2_l]} = {[d^1_l, d^2_l]}, l = 1, \dots, s$;
5. $A(m) = \emptyset$ or $A(m)=\{e_1, \dots, e_m\}$, where $e_1, \dots, e_m$ are isolated points;
6. $I_{-\infty} = \emptyset$ or $I_{-\infty}=(-\infty, a]$ or $I_{-\infty}=(-\infty, a)$, where $a \in \mathbb{R}$;
7. $I_{+\infty} = \emptyset$ or $I_{+\infty} = [b, +\infty)$ or $I_{+\infty} = (b, +\infty)$, where $b \in \mathbb{R}$.
Similarly, $\mathbb{R} \setminus H(f)$ is a disjoint union of the sets of the following types: $$I_{(a'^1_i, a'^2_i)}, I_{[b'^1_j, b'^2_j)}, I_{(c'^1_k, c'^2_k]}, I_{[d'^1_l, d'^2_l]}, A'(m'), I'_{-\infty}, I'_{+\infty}.$$
We have the following definition
\[def43\] Let $t \in \mathbb{R}$.
1. Recall that $t$ is said to be y-stable if $t \in H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset H(f)$;
2. Recall that $t$ is said to be y-right stable if $t \in H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset H(f)$ and $(t - \epsilon, t ) \cap H(f) = \emptyset$;
3. $t$ is said to be [*y-left stable*]{} if $t \in H(f)$ and there exists $\epsilon > 0$ such that $(t - \epsilon, t] \subset H(f)$ and $(t, t + \epsilon) \cap H(f) = \emptyset$;
4. $t$ is said to be [*y-isolated*]{} if $t \in H(f)$ and for $\epsilon > 0$ sufficiently small, $(t - \epsilon, t) \cup (t, t + \epsilon) \subset \mathbb{R} \setminus H(f)$;
5. Recall that $t$ is called n-stable if $t \in [\inf f, +\infty)\setminus H(f)$ and there exists an open interval $I(t)$ such that $t \in I(t) \subset [\inf f, +\infty) \setminus H(f)$;
6. Recall that $t$ is called n-right stable if $t \in [\inf f, +\infty) \setminus H(f)$ and there exists $\epsilon > 0$ such that $[t, t + \epsilon) \subset [\inf f, +\infty)\setminus H(f)$ and $(t - \epsilon, t ) \cap ([\inf f, +\infty) \setminus H(f)) = \emptyset$;
7. Recall that $t$ is called n-left stable if $t \in [\inf f, +\infty) \setminus H(f)$ and there exists $\epsilon > 0$ such that $(t - \epsilon, t] \subset [\inf f, +\infty) \setminus H(f)$ and $(t, t + \epsilon) \cap H(f) \ne \emptyset$;
8. Recall that $t$ is called n-isolated if $t \in [\inf f, +\infty) \setminus H(f)$ and for $\epsilon > 0$ sufficiently small, $(t - \epsilon, t) \cup (t, t + \epsilon) \subset H(f)$.
Using the first formula of $H(f)$, we have
\[thm45\] Let $H(f)$ be of the form (c) and $t \in [\inf f, +\infty)$. Then we have
1. $t$ is y-stable if and only if $t$ is an interior point of the sets $$I_{-\infty} \bigcup\cup_{i=1}^p I_{(a^1_i, a^2_i)} \bigcup \cup_{j=1}^q I_{[b^1_j, b^2_j)} \bigcup \cup_{k=1}^r I_{(c^1_k, c^2_k]} \bigcup\cup_{l=1}^s I_{[d^1_l, d^2_l]} \bigcup I_{+\infty};$$
2. $t$ is y-right stable if and only if we have $t = b^1_j$ or $t = d^1_l$ or $t = b$ (where $I_{+\infty} = [b, +\infty)$);
3. $t$ is y-left stable if and only if we have $t = c^2_k$ or $t = d^2_l$ or $t = a$ (where $I_{-\infty} = (-\infty, a]$);
4. $t$ is an y-isolated point if and only if $t \in A(m)$.
5. $t$ is n-stable if and only if $t$ is an interior point of the set: $$I'_{-\infty} \bigcup\cup_{i=1}^{p'} I_{(a'^1_i, a'^2_i)} \bigcup \cup_{j=1}^{q'} I_{[b'^1_j, b'^2_j)} \bigcup \cup_{k=1}^{r'} I_{(c'^1_k, c'^2_k]}\bigcup \cup_{l=1}^{s'} I_{[d'^1_l, d'^2_l]} \bigcup I'_{+\infty};$$
6. $t$ is n-right stable if and only if we have $t = b'^1_j$ or $t = d'^1_l$ or $t = b'$ (where $I'_{+\infty} = [b', +\infty)$);
7. $t$ is n-left stable if and only if we have $t = c'^2_k$ or $t = d'^2_l$ or $t = a'$ (where $I'_{-\infty} = (-\infty, a']$);
8. $t$ is an n-isolated point if and only if $t \in A'(m')$.
[ In the above list, we collect all types of stability that could theoretically exist. The problem of deciding when this or that type really appears seems to be very difficult. ]{}
We finish our paper by considering the following simple example
\[Ex1\]
Let $f(x,y) = (y^2-1)^2 + (xy - 1)^2$ ([@HT]). Clearly, $f$ is of the form $(*)$.
We have $\dfrac{\partial f}{\partial y} = 4y^3 + 2x^2y - 4y - 2x = 2(2y^3 + x^2y - 2y - x)$. Hence, solving $\dfrac{\partial f}{\partial y} = 0$ for $x$ gives the two branches $$\begin{aligned}
x_1(y) &= \dfrac{1 + \sqrt{-8y^4 + 8y^2 + 1}}{2y}\ \text{with}\ \lim_{y \to 0^+}x_1(y) = +\infty\ \text{and}\ \lim_{y \to 0^-}x_1(y) = -\infty,\\
x_2(y) &= \dfrac{1 - \sqrt{-8y^4 + 8y^2 + 1}}{2y}\ \text{with}\ \lim_{y \to 0}x_2(y) = 0.
\end{aligned}$$ Thus the unbounded part of $V_1$ lies on the branch $x = x_1(y)$ and escapes to infinity only as $y \to 0$. We have $$\begin{aligned}
\lim_{y \to 0}\dfrac{\partial f}{\partial x}(x_1(y), y) = 0 &\Rightarrow \lim\limits_{(x,y) \in V_1,\|(x,y)\|\to\infty}\|\nabla f(x,y)\| = 0;\\
\lim\limits_{y \to 0}f(x_1(y), y) = 1 &\Rightarrow \lim\limits_{(x,y) \in V_1,\|(x,y)\|\to\infty}f(x,y) = 1.
\end{aligned}$$ Hence $P(f) = \{1\}$.
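These limits can be checked symbolically. The following is a minimal sketch using SymPy (assuming SymPy is available); it verifies that the branch $x_1(y)$ lies on $V_1$, escapes to infinity as $y \to 0$, and that $\partial f/\partial x \to 0$ and $f \to 1$ along it.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (y**2 - 1)**2 + (x*y - 1)**2
fy = sp.diff(f, y)                        # V_1 is the zero set of df/dy

# the unbounded branch of V_1 computed above
x1 = (1 + sp.sqrt(-8*y**4 + 8*y**2 + 1)) / (2*y)

print(sp.simplify(fy.subs(x, x1)))                # 0: the branch lies on V_1
print(sp.limit(x1, y, 0, dir='+'),                # +oo
      sp.limit(x1, y, 0, dir='-'))                # -oo
print(sp.limit(sp.diff(f, x).subs(x, x1), y, 0))  # 0: df/dx -> 0 along the branch
print(sp.limit(f.subs(x, x1), y, 0))              # 1: f -> 1 along the branch
```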
It is not difficult to show that
- $F^2(f) = [0, 1)$, hence $h(f) = 1$;
- and $F^1(f) = \emptyset$.
Therefore, by the second formula, $H(f) = [1, +\infty)$. In this example, for any $t \in [0, +\infty)$:
- If $t \in (1, + \infty)$, then $t$ is y-stable;
- If $t = 1$, then $t$ is y-right stable;
- If $t \in (0,1)$, then $t$ is n-stable;
- If $t = 0$, then $t$ is n-right stable.
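As a complementary numerical illustration (a small sketch assuming NumPy is available), the points $x^k = (k, 1/k)$ form a sequence of the second type for $[f \le 0]$: the values $f(x^k) = (1 - 1/k^2)^2$ stay in $(0, 1)$, while the distance to $[f \le 0] = f^{-1}(0) = \{(1,1), (-1,-1)\}$ grows without bound. This exhibits $0 \in F^2(f)$, in agreement with $t = 0$ being n-right stable.

```python
import numpy as np

f = lambda x, y: (y**2 - 1)**2 + (x*y - 1)**2
level_set = np.array([[1.0, 1.0], [-1.0, -1.0]])   # [f <= 0] = f^{-1}(0)

for k in [10, 100, 1000, 10000]:
    p = np.array([k, 1.0 / k])
    dist = np.min(np.linalg.norm(level_set - p, axis=1))
    print(k, f(*p), dist)   # f(p) stays below 1 while dist grows roughly like k
```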
Acknowledgments {#acknowledgments .unnumbered}
---------------
This research was partially supported by National Foundation for Science and Technology Development (NAFOSTED), Vietnam; Grant numbers 101.04-2017.12 of the first author and 101.04-2019.302 of the second author.
[AA]{} A. Auslender and Crouzeix, *Global regularity theorem*, Math. Oper. Res., 13 (1988), 243-253.
D. Azé, [*A survey on error bounds for lower semicontinuous functions*]{}, Proceedings of 2003 MODE-SMAI Conference of ESAIM Proceedings, EDP Sci., Les Ulis, vol. 13, (2003), 1-17.
J. Bochnak, M. Coste and M. F. Roy, *Real algebraic geometry*, Springer, 1998.
J. Bolte, T. P. Nguyen, J. Peypouquet and B. W. Suter, [*From error bounds to the complexity of first-order descent methods for convex functions*]{}, Math. Program., Ser. A, vol. 165 (2017), 2, 471-507.
J. N. Corvellec, V. V. Motreanu, *Nonlinear error bounds for lower semi-continuous functions on metric spaces*, Math. Program., Ser. A, vol. 114 (2008), 2, 291-319.
M. Coste, [*An introduction to Semialgebraic Geometry*]{}, Dottorato di ricerca in matematica/ Universita di Pisa, Dipartimento di Matematica, 2002.
J. W. Daniel, [*On perturbations in systems of linear inequalities*]{}, SIAM J. Numer. Anal., 10 (1973), pp. 299-307.
S. Deng, [*Perturbation analysis of a condition number for convex inequality systems and global error bounds for analytic systems*]{}, Math. Program., vol. 83 (1998), 263-276.
S. T. Dinh, H. V. Ha and Thao N. T., *Łojasiewicz inequality for polynomial functions on non-compact domains*, Int. J. of Math., 23 (2012), 1250033 (28 pages).
S. T. Dinh, H. V. Ha and T. S. Pham, [*Hölder-Type Global Error Bounds for Non-degenerate Polynomial Systems*]{}, Acta Mathematica Vietnamica, 42 (2017), 563–585.
D. Drusvyatskiy and A. S. Lewis, [*Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods*]{}, Math. Ope. Res., vol. 43 (2018), No. 3, 919-948.
I. Ekeland, *Nonconvex minimization problems*, Bull. A.M.S., No.1 (1974), 443-474.
L. Grafakos, [*Classical Fourier Analysis*]{}, Spinger, 2008.
H. V. Ha, [*Global Hölderian error bound for non-degenerate polynomials,*]{} SIAM J. Optim., 23 (2013), No. 2, 917-933.
H. V. Ha and V. D. Dang, [*On the global Lojasiewicz inequality for polynomial functions* ]{}, Ann. Polon. Math. 112 (2019), 21-47.
H. V. Ha and T. S. Pham, [*Genericity in polynomial optimization*]{}, World Scientific Publishing, 2017.
H. V. Ha and Thao. N. T., [*Newton polygon and distribution of integer points in sublevel sets*]{}, to appear in Math. Z.
A. J. Hoffman, *On approximate solutions of linear inequalities*, Journal of Research of the National Bureau of Standards, 49 (1952), 263-265.
A. Ioffe, [*Metric regularity - A survey, part I and part II*]{}, J. Aust. Math. Soc., 101 (2016), 188-243 and 376-417.
Z. Jelonek, [*On bifurcation points of a complex polynomial*]{}, Proc. Amer. Math. Soc., Vol. 131, no. 5 (2002), 1361-1367.
A. Jourani, [*Hoffman’s error bound, local controllability, and sensitivity analysis*]{}, SIAM J. Control Optim. 38(3) (2000), 947–970.
A. G. Kouchnirenko, *Polyèdres de Newton et nombres de Milnor*, Invent. Math., 32 (1976), 1-31.
D. Klatte, *Hoffman’s error bound for systems of convex inequalities*, Mathematical Programming with data perturbations, 185-199, Lecture Notes in Pure and Appl. Math., 195, Dekker New York, 1998.
D. Klatte and W. Li, *Asymptotic constraint qualifications and global error bounds for convex inequalities*, Math. Program., 84 (1999), 137-140.
K. Kurdyka, P. Orro and S. Simon, [*Semialgebraic Sard theorem for generalized critical values*]{}, J. Diff. Geom. 56 (2000), 67-92.
A. Kruger, M. A. López and M. Théra, [*Perturbation of error bounds*]{}, Math. Program,. Ser. B, Vol. 168 (2018), Issue 1-2, 533-554.
A. Kruger, H. V. Ngai and M. Théra, [*Stability of error bounds for convex constraint systems in Banach spaces*]{}, SIAM J. Optim., 20 (2010), No. 6, 3280–3296.
G. Li, *On the asymptotic well behaved functions and global error bound for convex polynomials*, SIAM J. Optim., 20 (2010), No.4, 1923-1943.
G. Li, *Global error bounds for piecewise convex polynomials*, Math. Program., Ser.A, vol. 137 (2013), Issue 1-2, 37-64.
W. Li, *Error bounds for piecewise convex quadratic programs and applications*, SIAM J. on Control and Optimization, 33 (1995), 1511-1529.
G. Li, B.S. Mordukhovich and T. S. Pham, [*New error bounds for polynomial systems with applications to Holderian stability in optimization and spectral theory of tensors,*]{} Math. Program. 153 (2015), No.2, 333–362.
G. Li, C. Tang and Z. X. Wei, *Error bound results for generalized D-gap functions of nonsmooth variational inequality problems*, J. Comp. Appl. Math. 233 (2010), no. 11, 2795-2806.
A. S. Lewis and J. S. Pang, *Error bounds for convex inequality systems, Generalized Convexity, Generalized Monotonicity*, J. P. Crouzeix, J. E. Martinez-Legaz and M.Volle (eds) (1998), 75-110.
Z. Q. Luo, *New error bounds and their applications to convergence analysis of iterative algorithms*, Math. Program., Ser. B, 88 (2000), no. 2, 341-355.
X. D. Luo and Z. Q. Luo, *Extensions of Hoffman’s Error bound to polynomial systems*, SIAM J. Optim., 4 (1994), 383-392.
Z. Q. Luo and J.F. Sturm, *Error bound for quadratic systems, in High Performance Optimization*, H. Frenk, K. Roos, T. Terlaky, and Zhang, eds., Kluwer, Dordrecht, The Netherlands, (2000), 383-404.
Z. Q. Luo and P. Tseng, [*Perturbation Analysis of a Condition Number for Linear Systems*]{}, SIAM J. Matrix Anal. App., 15 (1994), 636-660.
O. L. Mangasarian, *A condition number for differentiable convex inequalities*, Math. Oper. Res., 10 (1985), 175-179.
H. V. Ngai, [*Global error bounds for systems of convex polynomials over polyhedral constraints*]{}, SIAM J. Optim., 25 (2015), No. 1, 521-539.
H. V. Ngai, A. Kruger and M. Théra, [*Stability of error bounds for semi-infinite convex constraint systems*]{}, SIAM J. Optim., 20 (2010), No. 4, 2080–2096.
K. F. Ng and X. Y. Zheng, [*Global error bounds with fractional exponents*]{}, Math. Program. Ser. B, 88 (2000), 357-370.
J. S. Pang, *Error bounds in Mathematical Programming*, Math. Program., Ser.B, 79 (1997), 299-332.
A. Parusinski, [*A note on singularities at infinity of complex polynomials*]{}, Banach Center Publication, 39 (1997), 131–141.
S. Robinson, *Regularity and stability of convex multivalued functions*, Math. Oper. Res., 1 (1975), no. 2, 130-143.
M. Schweighofer, [*Global optimization of polynomials using gradient tentacles and sums of squares*]{}, SIAM J. Optim., 17 (2006), No. 3, 920-942.
T. Wang, J.S. Pang, [*Global error bounds for convex quadratic inequality systems*]{}, Optimization 31 (1994), 1-12.
W. H. Yang, *Error bounds for convex polynomials*, SIAM J. Optim., 19 (2009), 1633-1647.
|
---
abstract: 'This work considers coordination and bargaining between two selfish users over a Gaussian interference channel. The usual information theoretic approach assumes full cooperation among users for codebook and rate selection. In the scenario investigated here, each user is willing to coordinate its actions only when an incentive exists and benefits of cooperation are fairly allocated. The users are first allowed to negotiate for the use of a simple Han-Kobayashi type scheme with fixed power split. Conditions for which users have incentives to cooperate are identified. Then, two different approaches are used to solve the associated bargaining problem. First, the Nash Bargaining Solution (NBS) is used as a tool to get fair information rates and the operating point is obtained as a result of an optimization problem. Next, a dynamic alternating-offer bargaining game (AOBG) from bargaining theory is introduced to model the bargaining process and the rates resulting from negotiation are characterized. The relationship between the NBS and the equilibrium outcome of the AOBG is studied and factors that may affect the bargaining outcome are discussed. Finally, under certain high signal-to-noise ratio regimes, the bargaining problem for the generalized degrees of freedom is studied.'
author:
- 'Xi Liu, and Elza Erkip, [^1]'
bibliography:
- 'IEEEabrv.bib'
- 'references.bib'
nocite: '[@*]'
title: 'A Game-Theoretic View of the Interference Channel: Impact of Coordination and Bargaining'
---
Gaussian interference channel, selfish user, coordination, bargaining
Introduction
============
The interference channel (IC) is a fundamental model in information theory for studying interference in communication systems. In this model, multiple senders transmit independent messages to their corresponding receivers via a common channel. The capacity region or the sum-rate capacity for the two-user Gaussian IC is only known in special cases such as the strong interference case [@refereces:Sato81][@references:Han81] or the noisy interference case [@references:Shang09]; the characterization of the capacity region for the general case remains an open problem. Recently, it has been shown in [@references:Etkin08] that a simplified version of a scheme due to Han and Kobayashi [@references:Han81] results in an achievable rate region that is within one bit of the capacity region of the complex Gaussian IC for all values of channel parameters. However, any type of Han-Kobayashi (H-K) scheme requires full cooperation[^2] between the two users through the choice of transmission strategy. In practice, users are selfish in the sense that they choose a transmission strategy to maximize their own rates. They may not have an incentive to comply with a certain rule as in the H-K scheme and therefore not all rate pairs in an achievable rate region are actually attainable. When there is no coordination among the users, interference is usually treated as noise, which is information-theoretically suboptimal in most cases.
In this paper, we study a scenario where two users operating over a Gaussian IC are selfish but willing to coordinate and bargain to get fair information rates. When users have conflicting interests, the problem of achieving efficiency and fairness could be formulated as a game-theoretic problem. The Gaussian IC was studied using noncooperative game theory in [@references:Yu00][@references:Etkin07][@references:Larsson08], where it was assumed that the receivers treat the interference as Gaussian noise. For the related Gaussian multiple-access channel (MAC), it was shown in [@references:Gajic08] that in a noncooperative rate game with two selfish users choosing their transmission rates independently, all points on the dominant face of the capacity region are pure strategy Nash Equilibria (NE). However, no single NE is superior to the others, making it impossible to single out one particular NE to operate at. The authors resorted to a mixed strategy which is inefficient in performance. Noncooperative information theoretic games were considered by Berry and Tse in [@references:Berry08] assuming that each user can select any encoding and decoding strategy to maximize its own rate and a Nash equilibrium region was characterized for a class of deterministic IC’s. Extensions were made to a symmetric Gaussian IC in [@references:Berry09].
Another game theoretic approach for studying interfering links is through cooperative game theory. Coalitional games were studied in [@references:La04] for a Gaussian MAC and in [@references:Mathur_Sankar_Mandayam06][@references:Mathur_Sankar_Mandayam08] for Gaussian IC’s. In [@references:Mathur_Sankar_Mandayam06], the Nash Bargaining Solution (NBS) is considered for a Gaussian IC under the assumption of receiver cooperation, effectively translating the channel to a MAC. In [@references:Han05], the NBS was used as a tool to develop a fair resource allocation algorithm for uplink multi-user OFDMA systems. References [@references:Leshem06][@references:Leshem08] analyzed the NBS for the flat and frequency selective fading IC under the assumption of time or frequency division multiplexing (TDM/FDM). The emphasis there was on the weak interference case[^3]. However, as we will show later, for the strong and mixed interference regimes, the NBS based on TDM/FDM may not perform very well, due to the suboptimality of TDM/FDM in those regimes. Game theoretic solutions for the MISO and MIMO IC based on bargaining have been investigated in [@references:Jorswieck][@references:Larsson08][@references:zengmao09], where two or more users negotiate for an agreement on the choice of beamforming vectors or source covariance matrices whereas single-user detection is employed at the receivers.
In this paper, unlike the above literature, we allow for the use of H-K type schemes thereby resulting in a larger rate region and let the two users bargain on choices of codebook and rate to improve their achieved rates or generalized degrees of freedom compared with the uncoordinated case. We propose a two-phase mechanism for coordination between users. In the first phase, the two users negotiate and only if certain incentive conditions are satisfied they agree to use a simple H-K type scheme with a fixed power split that gives the optimal or close to optimal set of achievable rates[@references:Etkin08]. For different types of IC’s, we study the incentive conditions for users to coordinate their transmissions. In the second phase, provided that negotiation in the first phase is successful, the users bargain for rates over the H-K achievable rate region to find an acceptable operating point. Our primary contribution is the application of two different bargaining ideas from game theory to address the bargaining problem in the second phase: the cooperative bargaining approach using NBS and the noncooperative bargaining approach using alternating-offer bargaining games (AOBG). The advantage of the NBS is that it not only provides a Pareto optimal operating point from the point of view of the entire system, but is also consistent with the fairness axioms of game theory. However, one of the assumptions upon which cooperative bargaining is built is that the users are committed to the agreement reached in bargaining when the time comes for it to be implemented [@references:Binmore98]. In this sense, the NBS may not necessarily be the agreement reached in practice. Before the NBS can be used as the operating point, some form of centralized coordination is still needed to ensure that all the parties involved jointly agree to operate at such a point. In an unregulated environment, a centralized authority may be lacking and in such cases more realistic bargaining between users through communication over a side channel may become necessary. Besides, in most works that designate the NBS as a desired solution, each user’s cost of delay in bargaining is not taken into account and little is known regarding how bargaining proceeds. Motivated by all these, we will also study the bargaining problem under the noncooperative bargaining model AOBG [@references:Binmore98][@references:Martin] over the IC. This approach is different from the NBS in that it models the bargaining process between users explicitly as a non-cooperative multi-stage game in which the users alternate making offers until one is accepted. The equilibrium of such a game describes what bargaining strategies would be adopted by the users and thus provides a nice prediction to the result of noncooperative bargaining. To the best of our knowledge, our work provides the first application of dynamic AOBG from bargaining theory to network information theory.
Under the cooperative bargaining approach, the computation of the NBS over the H-K rate region is formulated as a convex optimization problem. Results show that the NBS exhibits significant rate improvements for both users compared with the uncoordinated case. Under the noncooperative bargaining approach, the two-user IC bargaining problem is considered in an uncoordinated environment where the ongoing bargaining may be interrupted, for example, by other users wishing to access the channel. Each user’s cost of delay in bargaining is derived from an exogenous probability which characterizes the risk of breakdown of bargaining due to some outside intervention. The AOBG with risk of breakdown is introduced to model the bargaining process and the negotiation outcome in terms of achievable rates is analyzed. We show that the equilibrium outcome of the AOBG lies on the individual rational efficient frontier of the rate region with its exact location depending on the exogenous probabilities of breakdown. When the breakdown probabilities are very small, it is shown that the equilibrium outcome approaches the Nash solution.
The remainder of this paper is organized as follows. In Section II, we present the channel model, describe the achievable region of a simple H-K type scheme using Gaussian codebooks and review the concept of the NBS and that of AOBG from game theory. We first illustrate how two selfish users bargain over the Gaussian MAC to get higher rates for both in Section III and then present the mechanism of coordination and bargaining for the two users over the Gaussian IC in Section IV. In Section V we consider the bargaining problem in certain high SNR regimes when the utility of each selfish user is measured by achieved generalized degree of freedom (g.d.o.f.) instead of allocated rate, and finally we draw conclusions in Section VI.
Before we proceed to the next section, we introduce some notation that will be used in this paper.
- Italic letters (e.g. $x$, $X$) denote scalars; and bold letters $\mathbf{x}$ and $\mathbf{X}$ denote column vectors or matrices.
- $\mathbf{0}$ denotes the all-zero vector.
- $\mathbf{X}^t$ and $\mathbf{X}^{-1}$ denote the transpose and inverse of the matrix $\mathbf{X}$ respectively.
- For any two vectors $\mathbf{u}$ and $\mathbf{v}$, we denote $\mathbf{u} \geq \mathbf{v}$ if and only if $u_i \geq v_i$ for all $i$. $\mathbf{u} \leq \mathbf{v}$, $\mathbf{u} > \mathbf{v}$ and $\mathbf{u} <\mathbf{v}$ are defined similarly.
- $C(\cdot)$ is defined as $C(x) = \frac{1}{2}\log_2(1+x)$.
- $(\cdot)^+$ means $\max(\cdot,0)$.
- $\mathbb{R}$ denotes the set of real numbers.
System Model and Preliminaries
==============================
Channel Model
-------------
In this paper, we focus on the two-user standard Gaussian IC [@refereces:Sason04] as shown in Fig. 1 $$\begin{aligned}
Y_{1,t} = X_{1,t} + \sqrt{a}X_{2,t}+Z_{1,t}\\
Y_{2,t} = \sqrt{b}X_{1,t} + X_{2,t} + Z_{2,t}\end{aligned}$$ where $X_{i,t}$ and $Y_{i,t}$, $t = 1,...,n$ represent the input and output at transmitter and receiver $i \in \{1,2\}$ at time $t$, respectively, and $Z_{1,t}$ and $Z_{2,t}$ are i.i.d. Gaussian with zero mean and unit variance. Receiver $i$ is only interested in the message sent by transmitter $i$. For a given block length $n$, user $i$ sends a message $W_i \in \{1,2,..,2^{nR_i}\}$ by encoding it to a codeword $\mathbf{X_i} = (X_{i,1},X_{i,2},...,X_{i,n})$. The codewords $\mathbf{X_1}$ and $\mathbf{X_2}$ satisfy the average power constraints given by $$\frac{1}{n}\sum_{t=1}^n X_{1,t}^2 \leq P_1, \quad
\frac{1}{n}\sum_{t=1}^n X_{2,t}^2 \leq P_2 \label{eqn:powerconst}$$ Receiver $i$ observes the channel output $\mathbf{Y_i} = (Y_{i,1},Y_{i,2},...,Y_{i,n})$ and uses a decoding function $f_i: \mathbb{R}^n \rightarrow \{1,..,2^{nR_i}\}$ to get the estimate $\hat{W}_i$ of the transmitted message $W_i$. We define the average probabilities of error by the expressions $$\begin{aligned}
p_{e,1}^n = \text{P}\{f_1(\mathbf{Y_1})\neq W_1\}\\
p_{e,2}^n = \text{P}\{f_2(\mathbf{Y_2})\neq W_2\}\end{aligned}$$ and $$p_{e}^n = \max \{p_{e,1}^n,p_{e,2}^n\}.$$ A rate pair $(R_1, R_2)$ is said to be achievable if there is a sequence of $(2^{nR_1},2^{nR_2},n)$ codes with $p_e^n\rightarrow 0$ as $n\rightarrow \infty$. The capacity region of the interference channel is the closure of the set of all achievable rate pairs.
Constants $\sqrt{a}$ and $\sqrt{b}$ represent the real-valued channel gains of the interfering links. Depending on the values of $a$ and $b$, the two-user Gaussian IC can be classified as strong, weak and mixed. If $a \geq 1$ and $b \geq 1$, the channel is a [*strong*]{} Gaussian IC; if $0<a<1$ and $0<b<1$, the channel is a [*weak*]{} Gaussian IC; if either $0<a<1$ and $b\geq 1$, or $0<b<1$ and $a\geq 1$, the channel is a [*mixed*]{} Gaussian IC. We let $\text{SNR}_i = P_i$ be the signal-to-noise ratio (SNR) of user $i$, and $\text{INR}_1 = aP_2$ ($\text{INR}_2 = bP_1$) be the interference-to-noise ratio (INR) of user 1 (user 2).
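As a quick numerical illustration of these definitions (not part of the original development), the short Python snippet below classifies the interference regime and evaluates the SNR and INR values for a sample set of parameters; the function name and sample numbers are purely illustrative.

```python
def classify_ic(a, b):
    """Classify a two-user Gaussian IC by its cross-link power gains a and b."""
    if a >= 1 and b >= 1:
        return "strong"
    if a < 1 and b < 1:
        return "weak"
    return "mixed"

# Illustrative parameters: direct-link SNRs of 20 dB and 15 dB, cross gains a and b.
P1, P2 = 10 ** (20 / 10), 10 ** (15 / 10)
a, b = 0.2, 1.2
SNR1, SNR2 = P1, P2            # SNR_i = P_i (unit-variance noise)
INR1, INR2 = a * P2, b * P1    # INR_1 = a P_2, INR_2 = b P_1
print(classify_ic(a, b), SNR1, SNR2, INR1, INR2)
```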
The Han-Kobayashi Rate Region
-----------------------------
The best known inner bound for the two-user Gaussian IC is given by the full H-K achievable region [@references:Han81]. Even when the input distributions in the H-K scheme are restricted to be Gaussian, computation of the full H-K region by taking the union of all power splits into common and private messages and time sharing remains difficult due to numerous degrees of freedom involved in the problem [@references:Khandani09]. Therefore for the purpose of evaluating and computing bargaining solutions, we assume users employ Gaussian codebooks with equal length codewords and consider a simplified H-K type scheme with fixed power split and no time-sharing as in [@references:Etkin08]. Let $\alpha \in [0,1]$ and $\beta \in [0,1]$ denote the fractions of power allocated to the private messages (messages only to be decoded at intended receivers) of user 1 and user 2 respectively. We define $\mathcal{F}$ as the collection of all rate pairs $(R_1,R_2)\in \mathbb{R}^2_{+}$ satisfying $$R_1 \leq \phi_1 = C\left(\frac{P_1}{1+a\beta P_2}\right)\label{eqn:reg1}$$ $$R_2 \leq \phi_2 = C\left(\frac{P_2}{1+b\alpha P_1}\right)\label{eqn:reg2}$$ $$R_1 + R_2 \leq \phi_3 = \min\{\phi_{31},\phi_{32},\phi_{33}\}\label{eqn:reg3}$$ with $$\phi_{31} = C\left(\frac{P_1+a(1-\beta)P_2}{1+a\beta P_2}\right) + C\left(\frac{\beta P_2}{1+b\alpha P_1}\right)$$ $$\phi_{32} = C\left(\frac{\alpha P_1}{1+a\beta P_2}\right) + C\left(\frac{P_2+b(1-\alpha)P_1}{1+b\alpha P_1}\right)$$ $$\phi_{33} = C\left(\frac{\alpha P_1+a(1-\beta)P_2}{1+a\beta P_2}\right) + C\left(\frac{\beta P_2+b(1-\alpha)P_1}{1+b\alpha P_1}\right)$$ and $$\begin{array}{l l}
2R_1+R_2 \leq \phi_4 = &\displaystyle C\left(\frac{P_1+a(1-\beta)P_2}{1+a\beta P_2}\right) + C\left(\frac{\alpha P_1}{1+a\beta P_2}\right)\\
&\displaystyle + C\left(\frac{\beta P_2+b(1-\alpha)P_1}{1+b\alpha P_1}\right)
\end{array}\label{eqn:reg4}$$ $$\begin{array}{l l}
R_1+2R_2 \leq \phi_5 = &\displaystyle C\left(\frac{P_2+b(1-\alpha)P_1}{1+b\alpha P_1}\right) + C\left(\frac{\beta P_2}{1+b\alpha P_1}\right)\\
&\displaystyle + C\left(\frac{\alpha P_1+a(1-\beta)P_2}{1+a\beta P_2}\right)
\end{array}\label{eqn:reg5}$$
The region $\mathcal{F}$ is a polytope and a function of $\alpha$ and $\beta$. We denote the H-K scheme that achieves the rate region $\mathcal{F}$ by $\text{HK}(\alpha,\beta)$. For convenience, we also represent $\mathcal{F}$ in a matrix form as $\mathcal{F} = \{\mathbf{R}|\mathbf{R} \geq \mathbf{0},\: \mathbf{R} \leq \mathbf{R}^1,\: \text{and}\:\mathbf{A}_0\mathbf{R} \leq \mathbf{B}_0\}$, where $\mathbf{R} = (R_1\:R_2)^t$, $\mathbf{R}^1 = (\phi_1\:\phi_2)^t$, $\mathbf{B}_0 = (\phi_3\:\phi_4\:\phi_5)^t$, and $$\mathbf{A}_0 = \left(
\begin{array}{c c c}
1 & 2 & 1\\
1 & 1 & 2
\end{array}
\right)^t$$
In the strong interference regime $a\geq1$ and $b\geq1$, the capacity region is known [@refereces:Sato81][@references:Han81] and is achieved by $\text{HK}(0,0)$, i.e., both users send common messages only to be decoded at both destinations. This capacity region is the collection of all rate pairs $(R_1,R_2)$ satisfying $$\begin{aligned}
&R_1 \leq C(P_1),\nonumber\\
&R_2 \leq C(P_2),\nonumber\\
&R_1 + R_2 \leq \phi_6 = \min\{C(P_1+aP_2), C(bP_1+P_2)\}\label{eqn:cap_strong}\end{aligned}$$ Note that $\phi_6 = \phi_3$ for $\alpha = \beta = 0$.
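The bounds $\phi_1,\ldots,\phi_5$ are closed-form expressions, so membership of a rate pair in $\mathcal{F}$ can be checked directly. The following Python sketch (function names are illustrative and not from the paper) evaluates the constraints of $\text{HK}(\alpha,\beta)$ for given channel gains and powers.

```python
import numpy as np

def C(x):
    """C(x) = 1/2 * log2(1 + x)."""
    return 0.5 * np.log2(1.0 + x)

def hk_region_constraints(P1, P2, a, b, alpha, beta):
    """Return (phi1, ..., phi5) bounding the simplified H-K region HK(alpha, beta)."""
    phi1 = C(P1 / (1 + a * beta * P2))
    phi2 = C(P2 / (1 + b * alpha * P1))
    phi31 = C((P1 + a * (1 - beta) * P2) / (1 + a * beta * P2)) + C(beta * P2 / (1 + b * alpha * P1))
    phi32 = C(alpha * P1 / (1 + a * beta * P2)) + C((P2 + b * (1 - alpha) * P1) / (1 + b * alpha * P1))
    phi33 = (C((alpha * P1 + a * (1 - beta) * P2) / (1 + a * beta * P2))
             + C((beta * P2 + b * (1 - alpha) * P1) / (1 + b * alpha * P1)))
    phi3 = min(phi31, phi32, phi33)
    phi4 = (C((P1 + a * (1 - beta) * P2) / (1 + a * beta * P2))
            + C(alpha * P1 / (1 + a * beta * P2))
            + C((beta * P2 + b * (1 - alpha) * P1) / (1 + b * alpha * P1)))
    phi5 = (C((P2 + b * (1 - alpha) * P1) / (1 + b * alpha * P1))
            + C(beta * P2 / (1 + b * alpha * P1))
            + C((alpha * P1 + a * (1 - beta) * P2) / (1 + a * beta * P2)))
    return phi1, phi2, phi3, phi4, phi5

def in_hk_region(R1, R2, phis):
    """Check whether (R1, R2) satisfies all constraints defining F."""
    phi1, phi2, phi3, phi4, phi5 = phis
    return (0 <= R1 <= phi1 and 0 <= R2 <= phi2
            and R1 + R2 <= phi3 and 2 * R1 + R2 <= phi4 and R1 + 2 * R2 <= phi5)

# Strong interference example: HK(0, 0) with SNR_1 = SNR_2 = 20 dB, a = 1.5, b = 2.
print(hk_region_constraints(100.0, 100.0, a=1.5, b=2.0, alpha=0.0, beta=0.0))
```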
Overview of Bargaining Games
----------------------------
A two-player bargaining problem consists of a pair $(\mathcal{G},\mathbf{g}^0)$ where $\mathcal{G}$ is a closed convex subset of $\mathbb{R}^2$, $\mathbf{g}^0 = (g_1^0\; g_2^0)^t$ is a vector in $\mathbb{R}^2$, and the set $\mathcal{G}\cap \{\mathbf{g}|\mathbf{g}\geq \mathbf{g}^0\}$ is nonempty and bounded. Here $\mathcal{G}$ is the set of all possible payoff allocations or agreements that the two players can jointly achieve, and $\mathbf{g}^0 \in \mathcal{G}$ is the payoff allocation that results if players fail to agree. We refer to $\mathcal{G}$ as the *feasible set* and to $\mathbf{g}^0$ as the *disagreement point*. The set $\mathcal{G}\cap \{\mathbf{g}|\mathbf{g}\geq \mathbf{g}^0\}$ is a subset of $\mathcal{G}$ which contains all payoff allocations no worse than $\mathbf{g}^0$. We refer to it as the *individual rational feasible set*. We say the bargaining problem $(\mathcal{G},\mathbf{g}^0)$ is [*essential*]{} iff there exists at least one allocation $\mathbf{g}'$ in $\mathcal{G}$ that is strictly better for both players than $\mathbf{g}^0$, i.e., the set $\mathcal{G} \cap \{\mathbf{g}|\mathbf{g}>\mathbf{g}^0\}$ is nonempty; we say $(\mathcal{G},\mathbf{g}^0)$ is *regular* iff $\mathcal{G}$ is essential and for any payoff allocation $\mathbf{g}$ in $\mathcal{G}$, $$\text{if } g_1>g_1^0, \text{ then } \exists \check{\mathbf{g}} \in \mathcal{G} \text{ such that } g_1^0\leq \check{g}_1<g_1 \text{ and } \check{g}_2>g_2,\label{eqn:regular1}$$ $$\text{if } g_2>g_2^0, \text{ then } \exists \hat{\mathbf{g}} \in \mathcal{G} \text{ such that } g_2^0\leq \hat{g}_2<g_2 \text{ and } \hat{g}_1>g_1,\label{eqn:regular2}$$ Here (\[eqn:regular1\]) and (\[eqn:regular2\]) state that whenever a player gets a strictly higher payoff than at the disagreement point, then there exists another allocation such that the payoff of the player is reduced while the other player’s payoff is strictly increased. An agreement $\mathbf{g}$ is said to be *efficient* iff there is no agreement in the feasible set $\mathcal{G}$ that makes every player strictly better off. It is said to be *strongly efficient* or *Pareto optimal* iff there is no other agreement that makes every player at least as well off and at least one player strictly better off. We refer to the set of all efficient agreements as the *efficient frontier* of $\mathcal{G}$. In addition, we refer to the efficient frontier of the individual rational feasible set $\mathcal{G} \cap \{\mathbf{g}|\mathbf{g}\geq \mathbf{g}^0\}$ as the *individual rational efficient frontier*. Given that $\mathcal{G}$ is closed and convex, the regularity conditions in (\[eqn:regular1\]) and (\[eqn:regular2\]) hold iff the individual rational efficient frontier is strictly monotone, i.e., it contains no horizontal or vertical line segments. An example illustrating the concepts defined above is shown in Fig. \[fig:frontier\]. The bargaining problem described in Fig. \[fig:frontier\] is regular. We next describe two different bargaining approaches to solving the bargaining problem: NBS and AOBG.
### Nash Bargaining Solution
This bargaining problem is approached axiomatically by Nash [@references:Myerson91]. In this approach, $\mathbf{g}^* = \mathbf{\Phi}(\mathcal{G},\mathbf{g}^0)$ is said to be an NBS in $\mathcal{G}$ for $\mathbf{g}^0$, if the following axioms are satisfied.
1. Individual Rationality: $\Phi_i(\mathcal{G},\textbf{g}^0) \geq g^0_i, \forall i$
2. Feasibility: $\mathbf{\Phi}(\mathcal{G},\mathbf{g}^0)\in \mathcal{G}$
3. Pareto Optimality: $\mathbf{\Phi}(\mathcal{G},\mathbf{g}^0)$ is Pareto optimal.
4. Independence of Irrelevant Alternatives: For any closed convex set $\mathcal{G}'$, if $\mathcal{G}' \subseteq \mathcal{G}$ and $\mathbf{\Phi}(\mathcal{G},\mathbf{g}^0) \in \mathcal{G}'$, then $\mathbf{\Phi}(\mathcal{G}',\mathbf{g}^0) = \mathbf{\Phi}(\mathcal{G},\mathbf{g}^0)$.
5. Scale Invariance: For any numbers $\lambda_1, \lambda_2,\gamma_1$ and $\gamma_2$, such that $\lambda_1 > 0$ and $\lambda_2>0$, if $\mathcal{G}' = \{(\lambda_1 g_1+ \gamma_1,\lambda_2 g_2 + \gamma_2)|(g_1,g_2)\in \mathcal{G}\}$ and $\mathbf{\omega} =(\lambda_1 g^0_1+ \gamma_1,\lambda_2 g^0_2 + \gamma_2) $, then $\mathbf{\Phi}(\mathcal{G}',\mathbf{\omega}) = (\lambda_1 \Phi_1(\mathcal{G},\mathbf{g}^0)+ \gamma_1,\lambda_2 \Phi_2(\mathcal{G},\mathbf{g}^0) + \gamma_2)$.
6. Symmetry: If $g^0_1 = g^0_2$, and $\{(g_2,g_1)|(g_1,g_2)\in \mathcal{G}\} = \mathcal{G}$, then $\Phi_1(\mathcal{G},\mathbf{g}^0) = \Phi_2(\mathcal{G},\mathbf{g}^0)$.
Axioms (4)-(6) are also called [*axioms of fairness*]{}.
[[@references:Myerson91] There is a unique solution $\mathbf{g}^* = \mathbf{\Phi} (\mathcal{G}, \mathbf{g}^0)$ that satisfies all of the above six axioms. This solution is given by, $$\mathbf{\Phi} (\mathcal{G}, \mathbf{g}^0) = \arg \max _{\mathbf{g} \in \mathcal{G}, \mathbf{g} \geq \mathbf{g}^0}\prod _{i =1}^2 (g_i - g^0_i)\label{eqn:nashproduct}$$ ]{} The NBS selects the unique allocation that maximizes the Nash product in (\[eqn:nashproduct\]) over all feasible individual rational allocations in $\mathcal{G}\cap \{\mathbf{g}|\mathbf{g}\geq \mathbf{g}^0\}$. Note that for any essential bargaining problem, the Nash point should always satisfy $g^*_i > g^0_i, \forall i$.
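Because the logarithm of the Nash product is strictly concave over the individual rational feasible set, the NBS can be computed with a generic convex solver. The Python/SciPy sketch below assumes, for concreteness, a two-player feasible set described by per-user upper bounds and linear constraints (the form used later in this paper); the function name, solver choice and tolerances are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np
from scipy.optimize import minimize

def nbs(A, B, g_max, g0):
    """Maximize the Nash product (g1 - g1^0)(g2 - g2^0) over
    {g >= 0, g <= g_max, A g <= B}; assumes the problem is essential."""
    def neg_log_nash_product(g):
        return -(np.log(g[0] - g0[0]) + np.log(g[1] - g0[1]))
    cons = [{"type": "ineq", "fun": lambda g: B - A @ g}]        # enforces A g <= B
    bounds = [(g0[0] + 1e-9, g_max[0]), (g0[1] + 1e-9, g_max[1])]
    res = minimize(neg_log_nash_product, x0=np.asarray(g0) + 1e-3,
                   bounds=bounds, constraints=cons, method="SLSQP")
    return res.x
```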
### The Bargaining Game of Alternating Offers
In the cooperative approach to the bargaining problem $(\mathcal{G}, \mathbf{g}^0)$, the NBS is the solution that satisfies a list of properties such as Pareto optimality and fairness. However, using this approach, most information concerning the bargaining environment and procedure is abstracted away, and each user’s cost of delay in bargaining is not taken into account. A dynamic noncooperative model of bargaining called the *alternating-offer bargaining game*, on the other hand, provides a detailed description of the bargaining process. In the AOBG, two users take turns making proposals of payoff allocations in $\mathcal{G}$ until one is accepted or negotiation breaks down.
An important issue in modeling the AOBG is the players’ cost of delay in bargaining, as it is directly related to the users’ motives to settle on an agreement rather than insist indefinitely on incompatible demands. Two common motivations are their sensitivity to the time of delay in bargaining and their fear of the risk of breakdown of negotiation [@references:Binmore86]. In the bargaining game we consider in this paper, we derive users’ cost of delay in bargaining from an exogenous risk of breakdown; i.e., after each round, the bargaining process may terminate permanently in disagreement with an exogenous positive probability if the proposal made in that round gets rejected. In a wireless network, this probability could correspond to the event that other users present in the environment intervene and snatch the opportunity of negotiation on transmission strategies between a pair of users. For example, consider an uncoordinated environment in which multiple users operate over a common channel. By default each user’s receiver only decodes the intended message from its transmitter and treats the other users’ signals as noise. However, groups of users are allowed to coordinate their transmission strategies to improve their respective rates. In the case of a two-user group, if one user’s proposal gets rejected by the other user in any bargaining round, it is reasonable to assume that it may terminate the bargaining process and turn to a third user for negotiation. The succeeding analysis for the AOBG with risk of breakdown is based on an extensive game with perfect information and chance moves from game theory [@references:Martin]. For completeness, a review of the related concepts from game theory is given in Appendix A.
Consider a regular bargaining problem $(\mathcal{G},\mathbf{g}^0)$ and the two players involved play a dynamic noncooperative game to determine an outcome. Let $p_1$ and $p_2$ be the probabilities of breakdown that satisfy $0<p_1<1$ and $0<p_2<1$. These probabilities of breakdown reflect the users’ cost of delay in bargaining and are assumed to be known by both users. The bargaining procedure of this game is as follows. Player 1 and player 2 alternate making an offer in every odd-numbered round and every even-numbered round respectively. An offer made in each round can be any agreement in the feasible set $\mathcal{G}$. Within each round, after the player whose turn it is to offer announces the proposal, the other player can either accept or reject. In any odd-numbered round, if player 2 rejects the offer made by player 1, there is a probability $p_1$ that the bargaining will end in the disagreement $\mathbf{g}^0$. Similarly, in any even-numbered round, if player 1 rejects the offer made by player 2, there is a probability $p_2$ that the bargaining will end in the disagreement $\mathbf{g}^0$. This process begins from round 1 and continues until some offer is accepted or the game ends in disagreement. When an offer is accepted, an agreement is applied and thus the users get the payoffs specified in the accepted offer. Note in the game described above, the two players only get payoffs at a single round in this game, which is the round at which the bargaining ends in either agreement or disagreement. A formal description of the above process in the context of an extensive game with perfect information and chance moves introduced in Appendix A is as follows. The player set is $N = \{1,2\}$. Let $T = \{1,2,3,...\}$ denote the index set of bargaining rounds. There is no limit on the number of bargaining rounds. We denote the offer made at round $t$ as $o(t)$. The set of histories $H$ is the set of all sequences of one of the following types:
1. $\emptyset$, or $(o(1),Re,Cn,o(2),Re,Cn,...,o(t),Re,Cn)$
2. $(o(1),Re,Cn,o(2),Re,Cn,...,o(t))$
3. $(o(1),Re,Cn,o(2),Re,Cn,...,o(t),Ac)$
4. $(o(1),Re,Cn,o(2),Re,Cn,...,o(t),Re)$
5. $(o(1),Re,Cn,o(2),Re,Cn,...,o(t),Re,Br)$
6. $(o(1),Re,Cn,o(2),Re,Cn,...)$
where $t\in T$, $o(t) \in \mathcal{G}$ for all $t$, $Ac$ means “accept”, $Re$ means “reject”, $Cn$ means bargaining continues and $Br$ means “breakdown”. Histories of Type III, type V and type VI are terminal and those of type VI are infinite. Given a nonterminal history $h$, the player whose turn it is to take an action chooses an agreement in $\mathcal{G}$ as a proposal after a history of type I, chooses a member of $\{Ac,Re\}$ after a history of type II and chooses a member of $\{Cn,Br\}$ after a history of type IV. The player function specifying which player takes an action after a history $h$ is given by: $P(h) = 1$ if $h$ is of either type I or type II and $t$ is even or if $h$ is empty; $P(h) = 2$ if $h$ is of either type I or type II and $t$ is odd; $P(h) = c$ (it is “chance”’s turn to move) if $h$ is of type IV. For each $h\in H$ with $P(h) = c$, the probability measure $f_c(\cdot|h)$ is given by: $f_c(Br|h) = p_1$ and $f_c(Cn|h) = 1- p_1$ if $h$ is of type IV and $t$ is odd; $f_c(Br|h) = p_2$ and $f_c(Cn|h) = 1- p_2$ if $h$ is of type IV and $t$ is even. Player $i$’s strategy $s_i$ in the game specifies its action to take at any stage of the game when it is its turn to move. When chance moves are present, we need to specify the players’ preferences $(\succeq_i)$ over the set of lotteries[^4] over terminal histories. We assume these preferences depend only on the final agreements[^5] reached in the terminal histories of lotteries and not on the path of rejected agreements that preceded them. Moreover, player $i$’s preference relation $\succeq_i$ over the set of all feasible agreements $\mathcal{G}$ can be represented by its payoff $g_i$ where $\mathbf{g} \in \mathcal{G}$. [For any regular two-player bargaining problem $(\mathcal{G},\mathbf{g}^0)$, the corresponding AOBG described above has a unique subgame perfect equilibrium (SPE). Let $(\bar{\mathbf{g}},\tilde{\mathbf{g}})$ be the unique pair of efficient agreements in $\mathcal{G}$ which satisfy $$\tilde{g}_1 = (1-p_2)(\bar{g}_1-g^0_1) + g^0_1 \label{eqn:aobg1}$$ $$\bar{g}_2 = (1-p_1)(\tilde{g}_2-g^0_2) + g^0_2\label{eqn:aobg2}$$ Let $o_i(t)$ denote user $i$’s payoff in the offer made in round $t$. In the subgame perfect equilibrium, the strategy of player 1 is given by $$s_1(h) =
\begin{cases}
\bar{\mathbf{g}}\quad & \text{if }h = \emptyset\text{, or } h \text{ is of type I and } t \text{ is even}\\
Ac \quad & \text{if }h \text{ is of type II, } t \text{ is even, and } o_1(t)\geq \tilde{g}_1\\
Re \quad & \text{if }h \text{ is of type II, } t \text{ is even, and } o_1(t)< \tilde{g}_1\\
\end{cases}$$ and that of player 2 is given by $$s_2(h) =
\begin{cases}
\tilde{\mathbf{g}}\quad & \text{if }h \text{ is of type I and } t \text{ is odd}\\
Ac \quad & \text{if }h \text{ is of type II, } t \text{ is odd, and } o_2(t)\geq \bar{g}_2\\
Re \quad & \text{if }h \text{ is of type II, } t \text{ is odd, and } o_2(t)< \bar{g}_2\\
\end{cases}$$ That is, player 1 always proposes an offer $\bar{\mathbf{g}}$ and accepts any offer $\mathbf{g}$ with $g_1 \geq \tilde{g}_1$; user 2 always proposes an offer $\tilde{\mathbf{g}}$ and accepts any offer $\mathbf{g}$ with $g_2 \geq \bar{g}_2$. Using these strategies, the outcome of the game is simply a single terminal history $(\bar{\mathbf{g}},Ac)$. Therefore, in equilibrium, the game will end in an agreement on $\bar{\mathbf{g}}$ at round 1. ]{}
The proof of this theorem is similar to that of Theorem 8.3 in [@references:Myerson91] with the disagreement outcome fixed to $\mathbf{g}^0$ after the breakdown in any round. Regularity of the bargaining problem is essential for the proof of the uniqueness of the subgame perfect equilibrium.
In [@references:Binmore86], it is found that as $p_1$ and $p_2$ approach zero, the equilibrium outcome of the AOBG converges to the NBS. In other words, if there are no external forces to terminate the bargaining process, the equilibrium outcome of the dynamic game approaches the NBS. More discussion on how the probabilities of breakdown $p_1$ and $p_2$ affect the equilibrium outcome of the bargaining game will be given in later sections.
For convenience, Table \[table:notations\] summarizes various notations used in this subsection.
**Notations** **Meanings**
------------------------------------------------------------ ------------------------------------------------------------
$\mathcal{G}$ feasible set
$g_i$ user $i$’s payoff in agreement $\mathbf{g}$
$\mathbf{g}^0$ disagreement point
$\mathbf{g}^*$, $\mathbf{\Phi}(\mathcal{G}, \mathbf{g}^0)$ NBS of $(\mathcal{G}, \mathbf{g}^0)$
$p_i$ probability of breakdown when user $i$’s offer is rejected
$o(t)$ offer made at round $t$
$H$ set of histories (defined in Appendix A)
$h$ a history (defined in Appendix A)
$P(h)$ player function (defined in Appendix A)
$f_c(\cdot|h)$ probability measure (defined in Appendix A)
$s_i(h)$ player $i$’s strategy (defined in Appendix A)
$Ac$ accept
$Re$ reject
$Br$ breakdown
$\bar{\mathbf{g}}$ offer of user $1$ in subgame perfect equilibrium
$\tilde{\mathbf{g}}$ offer of user $2$ in subgame perfect equilibrium
: Notations used in Section II-C.[]{data-label="table:notations"}
Bargaining over the Two-User Gaussian MAC
=========================================
Before we move to the Gaussian IC, we first illustrate the bargaining framework for a Gaussian MAC in which two users send information to one common receiver. Cooperative bargaining using the NBS has been discussed before for the MAC in [@references:Mathur_Sankar_Mandayam06]. In this section, we reconsider the bargaining problem in the two-user case and provide a closed-form solution for the NBS. Besides, we also study the bargaining outcome when a noncooperative bargaining approach is used. The results here also form the foundation for the solution of the strong IC, which will be studied later. The channel is $$Y_t = X_{1,t} + X_{2,t} + Z_t$$ where $X_{i,t}$ is the input of user $i$, $Y_{t}$ is the output and $Z_t$ is i.i.d. Gaussian noise with zero mean and unit variance at time $t= 1,2,...,n$. Each user has an individual average input power constraint $P_i$ given by (\[eqn:powerconst\]). The capacity region $\mathcal{C}_0$ is the set of all rate pairs $(R_1, R_2)$ such that $$\begin{aligned}
R_i \leq C(P_i), \: i \in \{1,2\}\\
R_1 + R_2 \leq C(P_1 + P_2) = \phi_0\end{aligned}$$ If the two users fully cooperate in codebook and rate selection, any point in $\mathcal{C}_0$ is achievable. When there is no coordination between users, in the worst case, one user’s signal can be treated as noise in the decoding of the other user, leading to rate $R_i^0 = C(\frac{P_i}{1+P_{3-i}})$ for user $i$. In [@references:Gajic08], $R_i^0$ is also called user $i$’s “safe rate”. If the two users are selfish but willing to coordinate for mutual benefits, they may bargain over $\mathcal{C}_0$ to obtain a preferred operating point with $\mathbf{R}^0$ serving as a disagreement point. In the following, we focus on how to find the solution to the bargaining problem $(\mathcal{C}_0,\mathbf{R}^0)$ using both the NBS approach and the AOBG approach respectively.
The NBS Approach
----------------
It can be easily observed that the feasible set $\mathcal{C}_0$ in the MAC case is bounded by only three linear constraints on $R_1$ and $R_2$. Before we move to determine the NBS in the MAC case, we first solve the NBS to the bargaining problem with a more general feasible set $\mathcal{G}$ and a particular disagreement point $\mathbf{g}^0$, the results of which will also be useful for the IC case in Section IV and Section V. We assume the feasible set $\mathcal{G}$ has the following general form: $$\mathcal{G} = \{\mathbf{g}\in \mathbb{R}^2 |\mathbf{g} \geq \mathbf{0},\;\mathbf{g}\leq \mathbf{g}^1\; \text{and}\; \mathbf{A}\mathbf{g}\leq \mathbf{B}\}$$ where $\mathbf{g}^1$ is a $2\times 1$ vector that contains the maximum possible payoff for each user, the $J\times 2$ matrix $\mathbf{A} = (A_{ji})$ and the $J\times 1$ vector $\mathbf{B}$ are related to the $J$ linear constraints.
Assuming that $\mathbf{g}^0<\mathbf{g}^1\:\text{and} \:\mathbf{A}\mathbf{g}^0<\mathbf{B}$, there exists a unique NBS $\mathbf{g}^*$ for the bargaining problem $(\mathcal{G},\mathbf{g}^0)$, which is given by
$$g_i^* = \min \left\{ g_i^1; g_i^0 +\frac{1}{\sum_{j = 1}^J \mu_j A_{ji}}\right\},\quad i\in\{1,2\}$$
where the Lagrange multipliers $\mu_j\geq 0$ ($j\in\{1,...,J\}$) are found by solving $(\mathbf{Ag}^*-\mathbf{B})_j \mu_j = 0$ and $\mathbf{A}\mathbf{g}^* \leq \mathbf{B}$.
Maximizing the Nash product in (\[eqn:nashproduct\]) is equivalent to maximizing its logarithm. Define $m(\mathbf{g}) = \ln(g_1 - g_1^0) + \ln(g_2 - g_2^0)$, then $m(\cdot): \mathcal{G}\cap \{\mathbf{g}|\mathbf{g} > \mathbf{g}^0\} \rightarrow \mathbb{R}$ is a strictly concave function of $\mathbf{g}$. Also note that the constraints $(Ag)_j \leq B_j,\; j \in \{1,2,...,J\}$ are linear in $g_1$ and $g_2$. So the first-order Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality [@references:Bertsekas]. Let $L(\mathbf{g},\mathbf{\lambda},\mathbf{\nu},\mathbf{\mu})$ denote the Lagrangian function and $\lambda_i \geq 0,\: i = 1,2$, $\nu_i \geq 0,\: i = 1,2$ and $\mu_j \geq 0,\: j = 1,2,...,J$ denote the Lagrange multipliers associated with the constraints, then we have $$\begin{array}{l}
L(\mathbf{g},\mathbf{\lambda},\mathbf{\nu},\mathbf{\mu}) = m(\mathbf{g}) + \sum_{i = 1}^2 \lambda_i(g_i - g_i^0) +\sum_{i = 1}^2 \nu_i(g_i^1-g_i) + \sum_{j=1}^J \mu_j (B_j - (Ag)_j)
\end{array}$$ The first-order necessary and sufficient conditions yield $$1 + \left(\lambda_i - \nu_i - \sum_{j=1}^J \mu_j A_{ji}\right)(g_i^* - g_i^0) = 0; \: i = 1,2$$ and $$\begin{aligned}
(g_i^* - g_i^0)\lambda_i = 0; \quad \lambda_i \geq 0; \: i = 1,2\\
(g_i^1-g^*_i)\nu_i = 0; \quad \nu_i \geq 0; \: i = 1,2\\
((Ag^*)_j-B_j)\mu_j = 0; \quad \mu_j \geq 0;\: j = 1,2,...,J\end{aligned}$$ Since $g_i^* > g_i^0$ must hold, we have $\lambda_i = 0$ for $i = 1,2$. In addition, if $g_i^*<g_i^1$, then $\nu_i = 0$; otherwise $g_i^* = g_i^1$. Thus, the results in Proposition 1 follow.
In the MAC case, we have $\mathcal{G} = \mathcal{C}_0$, $\mathbf{g}^0 = (C(\frac{P_1}{1+P_2})\;C(\frac{P_2}{1+P_1}))^t$, $\mathbf{g}^1 = (C(P_1)\;C(P_2))^t$, $A = (1\; 1)$ and $B = \phi_0$ in Proposition 1. Note the conditions $\mathbf{g}^0<\mathbf{g}^1\:\text{and} \:\mathbf{A}\mathbf{g}^0<\mathbf{B}$ always hold; i.e., both users operating over the MAC always have incentives to cooperate. Since the only linear constraint is always active (i.e., $\mu_1>0$), the optimization problem can be solved fully and has a closed-form solution as summarized in the following proposition.
There exists a unique NBS for the two-user Gaussian MAC bargaining problem $(\mathcal{C}_0, \mathbf{R}^0)$, given by
$$\mathbf{R}^* = (R_1^0+\frac{1}{\mu_1}\;\: R_2^0+\frac{1}{\mu_1})^t$$
where $\mu_1 = \frac{2}{\phi_0-R_1^0-R_2^0}$.
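A minimal numerical sketch of this closed form is given below (illustrative function names; unit-variance noise so that $\text{SNR}_i = P_i$). Each user simply adds half of the sum-rate surplus $\phi_0 - R_1^0 - R_2^0$ to its safe rate.

```python
import numpy as np

def C(x):
    return 0.5 * np.log2(1 + x)

def mac_nbs(P1, P2):
    """Closed-form NBS for the two-user Gaussian MAC bargaining problem (C0, R0)."""
    R0 = np.array([C(P1 / (1 + P2)), C(P2 / (1 + P1))])   # disagreement ("safe") rates
    phi0 = C(P1 + P2)                                      # sum-rate bound of C0
    mu1 = 2.0 / (phi0 - R0[0] - R0[1])
    return R0 + 1.0 / mu1                                  # R_i^* = R_i^0 + 1/mu_1

print(mac_nbs(10 ** 2.0, 10 ** 1.5))   # SNR_1 = 20 dB, SNR_2 = 15 dB
```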
The AOBG Approach
-----------------
In this subsection, we apply the AOBG framework to the case of two-user MAC and analyze the negotiation results.
For the two-user MAC bargaining problem $(\mathcal{C}_0,\mathbf{R}^0)$, the individual rational efficient frontier is strictly monotone and thus the regularity conditions in Section II always hold. Hence, using Theorem 2, we have the following proposition. [For the two-user MAC bargaining problem $(\mathcal{C}_0,\mathbf{R}^0)$, the unique pair of agreements $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ in the subgame perfect equilibrium of the AOBG is given by $$(\bar{R}_1\;\bar{R}_2\;\tilde{R}_1\;\tilde{R}_2)^t = M^{-1}(-p_2R_1^0\:\:p_1 R_2^0\:\: \phi_0\:\: \phi_0)^t\label{eqn:spemac}$$ where $$M = \begin{pmatrix}
1-p_2 & 0 & -1 & 0 \\
0 & 1 & 0 & -(1-p_1) \\
1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1
\end{pmatrix}$$ In equilibrium, the game will end in an agreement on $\mathbf{\bar{R}}$ at round 1. ]{}
From (\[eqn:aobg1\]) and (\[eqn:aobg2\]) in Theorem 2, it follows that the unique pair of agreements $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ in the subgame perfect equilibrium must satisfy $$\tilde{R}_1 = (1-p_2)(\bar{R}_1-R^0_1) + R^0_1 \label{eqn:aobgmac1}$$ $$\bar{R}_2 = (1-p_1)(\tilde{R}_2-R^0_2) + R^0_2\label{eqn:aobgmac2}$$ In addition, since $\bar{\mathbf{R}}$ and $\tilde{\mathbf{R}}$ need to be efficient agreements, we have $$\bar{R}_1 + \bar{R}_2 = \phi_0 \label{eqn:aobgmac3}$$ $$\tilde{R}_1 + \tilde{R}_2 = \phi_0 \label{eqn:aobgmac4}$$ Solving (\[eqn:aobgmac1\]), (\[eqn:aobgmac2\]), (\[eqn:aobgmac3\]) and (\[eqn:aobgmac4\]), we obtain the unique pair of agreements $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ as in the proposition.
Clearly, if user 2 makes an offer during the first round instead, the equilibrium outcome would be $\tilde{\mathbf{R}}$. It is not hard to see from (\[eqn:aobgmac1\]), (\[eqn:aobgmac2\]) that if $p_1 = p_2 = 0$, then we have $\tilde{\mathbf{R}} = \bar{\mathbf{R}}$.
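The linear system in Proposition 3 is straightforward to solve numerically. The sketch below (illustrative names, assuming Python/NumPy) returns the pair $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ for given powers and breakdown probabilities.

```python
import numpy as np

def C(x):
    return 0.5 * np.log2(1 + x)

def mac_aobg_spe(P1, P2, p1, p2):
    """SPE agreements (Rbar, Rtilde) of the AOBG over the two-user Gaussian MAC."""
    R0 = np.array([C(P1 / (1 + P2)), C(P2 / (1 + P1))])
    phi0 = C(P1 + P2)
    M = np.array([[1 - p2, 0.0, -1.0, 0.0],
                  [0.0, 1.0, 0.0, -(1 - p1)],
                  [1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
    rhs = np.array([-p2 * R0[0], p1 * R0[1], phi0, phi0])
    Rbar1, Rbar2, Rtil1, Rtil2 = np.linalg.solve(M, rhs)
    return (Rbar1, Rbar2), (Rtil1, Rtil2)

# Smaller breakdown probabilities move both agreements toward the NBS.
print(mac_aobg_spe(10 ** 2.0, 10 ** 1.5, p1=0.3, p2=0.3))
```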
In Fig. \[fig:macnbs\], the capacity region, the disagreement point and the NBS obtained using Proposition 2 are illustrated for $\text{SNR}_1 = 20$dB and $\text{SNR}_2 = 15$dB. Recall that the mixed strategy NE in [@references:Gajic08] has an average performance equal to the safe rates in $\mathbf{R}^0$. The NBS point, which is the unique fair Pareto-optimal point in $\mathcal{C}_0$, is component-wise superior. This shows that bargaining can improve the rates for both selfish users in a MAC. Also included are the unique pairs of agreements $(\bar{\mathbf{R}},\tilde{\mathbf{R}})$ in Proposition 3 for two different choices of $p_1$ and $p_2$. Recall that the offer of user 1 in the subgame perfect equilibrium, $\bar{\mathbf{R}}$, corresponds to the equilibrium outcome of the AOBG since we assume user 1 makes an offer first. If user 2 is the first mover instead, the offer of user 2 in the subgame perfect equilibrium, $\tilde{\mathbf{R}}$, becomes the equilibrium outcome of the game. For a fixed pair of $p_1$ and $p_2$, each user’s rate in the equilibrium outcome is higher when it is the first mover than when it is not. Such a phenomenon is referred to as “first mover advantage” in [@references:Martin]. Finally, as shown in the figure, when $p_1$ and $p_2$ become smaller, both $\tilde{\mathbf{R}}$ and $\bar{\mathbf{R}}$ are closer to the Nash solution.
Two-User Gaussian IC
====================
For a general Gaussian IC, the capacity region is not known. While the full H-K rate region [@references:Han81] gives the largest known achievable rate region, as discussed in Section II-B, taking into account all possible power splits and different time-sharing strategies makes its computation infeasible. For tractability, we consider a simple H-K type scheme with fixed power split and no time-sharing. For the strong interference case, we set $\alpha = \beta = 0$, which is known to be optimal [@refereces:Sato81]. For the weak and mixed interference cases, we choose the near-optimal power splits of [@references:Etkin08]. For weak interference $a<1$ and $b<1$, we set $\alpha = \min(1/(bP_1),1)$ and $\beta = \min(1/(aP_2),1)$; for mixed interference $a<1$ and $b\geq 1$, we set $\alpha = 0$ and $\beta = \min(1/(aP_2),1)$. In the uncoordinated case, each receiver treats the interfering signal as noise, leading to the rates in the disagreement point $\mathbf{R}^0= (C(\frac{P_1}{1+aP_2})\;C(\frac{P_2}{1+bP_1}))^t$.
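For reference, the regime-dependent power splits described above can be collected in a small helper. This is a sketch: the symmetric mixed case ($b < 1 \leq a$) is filled in by symmetry as an assumption, since the text states the split only for $a < 1$ and $b \geq 1$.

```python
def hk_power_split(a, b, P1, P2):
    """Fixed power split (alpha, beta) of the simple H-K scheme, per regime (sketch)."""
    if a >= 1 and b >= 1:                                   # strong: common messages only
        return 0.0, 0.0
    if a < 1 and b < 1:                                     # weak
        return min(1.0 / (b * P1), 1.0), min(1.0 / (a * P2), 1.0)
    if a < 1 and b >= 1:                                    # mixed, as stated in the text
        return 0.0, min(1.0 / (a * P2), 1.0)
    return min(1.0 / (b * P1), 1.0), 0.0                    # mixed (b < 1 <= a), by symmetry
```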
The simple H-K scheme discussed above requires each user to split its rate for the benefit of both users. However, it is not always true that the simple H-K scheme allows each user to improve its rate over the disagreement point $\mathbf{R}^0$, i.e., that the resulting bargaining problem is essential as defined in Section II-C. In order to ensure that both selfish users have motives to employ H-K coding, a pre-bargaining phase is added before the actual bargaining phase. We refer to this pre-bargaining phase as phase 1 and the bargaining phase that follows as phase 2.
In phase 1, users check whether the simple H-K scheme improves both users’ individual rates over those at the disagreement point $\mathbf{R}^0$. If there is no improvement for at least one user, then that user does not have the incentive to cooperate and negotiation breaks down. In such a scenario, users operate at the disagreement point $\mathbf{R}^0$. Otherwise, they reach an agreement on the use of the simple H-K scheme with the chosen power split and proceed to phase 2. In phase 2, the users bargain for a rate pair to operate at over the achievable rate region of the H-K scheme they agreed on earlier. The second phase can then be formulated as a two-user bargaining problem with the feasible set $\mathcal{F}$ defined in Section II-B and disagreement point $\mathbf{R}^0$. Once a particular rate pair is determined as the solution of the second phase bargaining problem, related codebook information is shared between the users so that one user’s receiver can decode the other user’s common message as required by the adopted H-K scheme. If negotiation breaks down in phase 2, the receivers are not provided with the interfering user’s codebook.
Phase 1: the Pre-bargaining Phase
---------------------------------
In this subsection, we discuss the pre-bargaining phase and study conditions under which both users have incentives to engage in the use of the simple H-K scheme discussed above.
For the two-user Gaussian IC, the pre-bargaining phase is successful and both users have incentives to employ an H-K scheme provided one of the following conditions holds. The conditions also specify the H-K scheme employed by the users.
- Strong interference ($a \geq 1$ and $b \geq 1$): Users always employ HK(0,0);
- Weak interference ($a <1$ and $b<1$): Users employ HK($1/(bP_1)$,$1/(aP_2)$) iff $aP_2>1$ and $bP_1 > 1$ and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty when $\alpha = 1/(bP_1)$ and $\beta = 1/(aP_2)$;
- Mixed interference ($a < 1$ and $b \geq 1$): Users employ HK($0$,$1/(aP_2)$) iff $aP_2>1$ and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty when $\alpha = 0$ and $\beta = 1/(aP_2)$.
\[thm:incentive\]
See Appendix B.
Note that in the weak and mixed interference cases, when both $\text{SNR}$’s are high, the conditions $aP_2 > 1$ and $bP_1 > 1$ are satisfied for most channel gains and it only remains to check whether $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty. This implies that in the interference limited regimes, it is very likely that both users would have incentives to cooperate.
Phase 2: the Bargaining Phase
-----------------------------
### Nash Bargaining Solution over IC
After the users agree on an H-K scheme, in phase 2, if bargaining is cooperative, the NBS over the corresponding rate region $\mathcal{F}$ is employed as the operating point. Since the pre-bargaining in phase 1 is successful, we concentrate on the case when $\mathbf{R}^0<\mathbf{R}^1\:\text{and} \:\mathbf{A}_0\mathbf{R}^0<\mathbf{B}_0$ for the chosen HK($\alpha$, $\beta$) scheme and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty. Applying Proposition 1 with the feasible set $\mathcal{F}$ and the disagreement point $\mathbf{R}^0$, we have the following result. [Provided the pre-bargaining phase is successful, there exists a unique NBS for the bargaining problem $(\mathcal{F}, \mathbf{R}^0)$ in phase 2, which is characterized in Proposition 1 with $\mathcal{G} = \mathcal{F}$, $\mathbf{g}^0 = \mathbf{R}^0 = (C(\frac{P_1}{1+aP_2})\;C(\frac{P_2}{1+bP_1}))^t$, $\mathbf{g}^1 = (C(P_1)\;C(P_2))^t$, $\mathbf{A} = \mathbf{A}_0$ and $\mathbf{B} = \mathbf{B}_0$. ]{}
We will elaborate on the NBS in Section IV-C.
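Putting the pieces together, the phase-2 NBS over $\mathcal{F}$ can be computed by assembling $\mathbf{A}_0$, $\mathbf{B}_0$, $\mathbf{R}^0$ and $\mathbf{R}^1$ and maximizing the Nash product. The sketch below assumes the helper functions `C`, `hk_region_constraints`, `hk_power_split` and `nbs` from the earlier sketches are available in the same session; names and sample parameters are illustrative.

```python
import numpy as np

def ic_hk_nbs(a, b, P1, P2):
    """NBS of the phase-2 bargaining problem (F, R0) for the simple H-K scheme."""
    alpha, beta = hk_power_split(a, b, P1, P2)
    phi1, phi2, phi3, phi4, phi5 = hk_region_constraints(P1, P2, a, b, alpha, beta)
    A0 = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])     # R1+R2, 2R1+R2, R1+2R2
    B0 = np.array([phi3, phi4, phi5])
    R0 = np.array([C(P1 / (1 + a * P2)), C(P2 / (1 + b * P1))])   # disagreement point
    R1_max = np.array([phi1, phi2])                               # single-user bounds of F
    return nbs(A0, B0, R1_max, R0)

# Strong interference example (alpha = beta = 0): SNR_1 = SNR_2 = 20 dB, a = 1.5, b = 2.
print(ic_hk_nbs(a=1.5, b=2.0, P1=100.0, P2=100.0))
```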
### Alternating-Offer Bargaining Games over IC
If bargaining is noncooperative in phase 2, the analysis of the AOBG over the IC is similar to that over the MAC in Section III; however, unlike in the MAC case, the associated bargaining problem over the IC is not always regular. If it is non-regular, the AOBG may have more than one subgame perfect equilibrium, resulting in distinct bargaining outcomes, which puts any single subgame perfect equilibrium and the corresponding outcome in doubt [@references:Myerson91]. Hence the non-regular case is not treated here. In the following, we discuss the regularity of the associated bargaining problem in different interference regimes and characterize the unique subgame perfect equilibrium of the AOBG when the bargaining problem is regular.
Provided the pre-bargaining phase is successful, in phase 2, the two-user Gaussian IC bargaining problem $(\mathcal{F}, \mathbf{R}^0)$ is regular iff one of the following conditions hold:
- Strong interference: $a = b = 1$;
- Weak interference: $R_1^0 \geq (\phi_5-2\phi_2)^+$ and $R_2^0 \geq (\phi_4-2\phi_1)^+$;
- Mixed interference: $R_1^0 \geq (\min(\phi_5-2\phi_2,\phi_3-\phi_2))^+$ and $R_2^0 \geq (\min(\phi_4-2\phi_1,\phi_3-\phi_1))^+$;
where $\phi_i, i=1,...,5$ are defined in (\[eqn:reg1\])-(\[eqn:reg5\]). \[thm:regularity\]
See Appendix C.
Fig. \[fig:regular\_range\] shows the set of cross-link power gains $(a,b)$ for which the associated bargaining problem is regular. Note the conditions for regularity not only include those in Proposition \[thm:regularity\] but also those in Proposition \[thm:incentive\] as well since we assume the pre-bargaining phase has been successful. In Fig. \[fig:regular\_range1\], we have $\text{SNR}_1 = \text{SNR}_2 = 20$dB. We observe that $(\mathcal{F}, \mathbf{R}^0)$ is regular for a large range of power gains in the weak interference regime. In Fig. \[fig:regular\_range2\], we set $\text{SNR}_1 = 20$dB and $\text{SNR}_2 = 30$dB, and observe that, in addition to part of the weak interference regime, $(\mathcal{F}, \mathbf{R}^0)$ is also regular for a range of power gains in the mixed interference regime. Besides, in both scenarios, the bargaining problem is regular for the special case of strong interference $a=b=1$. Finally, note that in the noisy interference regime when $a$, $b$, $P_1$ and $P_2$ satisfy $\sqrt{a}(bP_1+1)+\sqrt{b}(aP_2+1)\leq 1$ [@references:Shang09], since treating interference as noise is optimal, users never employ the H-K scheme and the pre-bargaining phase always fails.
When pre-bargaining in phase 1 is successful and the Gaussian IC bargaining problem $(\mathcal{F},\mathbf{R}^0)$ is regular, using Theorem 2, we have the following result. [For any regular bargaining problem $(\mathcal{F},\mathbf{R}^0)$ over the two-user Gaussian IC, the unique pair of agreements $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ in the subgame perfect equilibrium of the AOBG both lie on the individual rational efficient frontier of $\mathcal{F}$ and satisfy (\[eqn:aobg1\]) and (\[eqn:aobg2\]) with $\mathbf{R}^0 = (C(\frac{P_1}{1+aP_2})\;C(\frac{P_2}{1+bP_1}))^t$. \[thm:spe\_ic\] ]{}
In the strong interference case $a = b = 1$, the unique pair of agreements $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ in the subgame perfect equilibrium can be obtained using (\[eqn:spemac\]) in Proposition 3 with $\phi_0$ replaced by $\phi_6$. For the weak and mixed interference cases, since the shape of the H-K rate region and the relative location of the disagreement point vary as parameters $a$, $b$, $P_1$ and $P_2$ change, it is difficult to obtain a general expression for $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$. However, when all the parameters are given and the corresponding power split parameters $\alpha$ and $\beta$ are fixed, the H-K rate region and the disagreement point $\mathbf{R}^0$ can be determined accordingly. Since $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ both lie on the individual rational efficient frontier of $\mathcal{F}$ which is piecewise linear, we can compute $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ by solving linear equations.
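One way to compute $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ numerically in the regular case is a one-dimensional root search along the efficient frontier of $\mathcal{F}$: express the frontier as $R_2 = f(R_1)$, substitute (\[eqn:aobg1\]) into (\[eqn:aobg2\]) and solve for $\bar{R}_1$. The sketch below assumes `C` and `hk_region_constraints` from the earlier sketches and that $(\mathcal{F}, \mathbf{R}^0)$ is regular; function names are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def frontier(R1, phis):
    """Largest R2 with (R1, R2) in F, i.e., the upper boundary of F at R1."""
    phi1, phi2, phi3, phi4, phi5 = phis
    return min(phi2, phi3 - R1, phi4 - 2 * R1, (phi5 - R1) / 2.0)

def ic_aobg_spe(a, b, P1, P2, alpha, beta, p1, p2):
    """SPE agreements (Rbar, Rtilde) of the AOBG over the IC (regular case sketch)."""
    phis = hk_region_constraints(P1, P2, a, b, alpha, beta)
    R0 = np.array([C(P1 / (1 + a * P2)), C(P2 / (1 + b * P1))])

    def residual(Rbar1):
        # Rtilde1 follows from the first SPE condition; the second condition then
        # pins down Rbar1 once both points are forced onto the frontier.
        Rtil1 = (1 - p2) * (Rbar1 - R0[0]) + R0[0]
        return frontier(Rbar1, phis) - ((1 - p1) * (frontier(Rtil1, phis) - R0[1]) + R0[1])

    # Right end of the individual rational frontier: f(R1_hi) = R2^0.
    R1_hi = brentq(lambda x: frontier(x, phis) - R0[1], R0[0], phis[0])
    Rbar1 = brentq(residual, R0[0], R1_hi)
    Rtil1 = (1 - p2) * (Rbar1 - R0[0]) + R0[0]
    return (Rbar1, frontier(Rbar1, phis)), (Rtil1, frontier(Rtil1, phis))
```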
Illustration of Results
-----------------------
The achievable rate region of the H-K scheme with the optimal or near-optimal power split discussed earlier and the corresponding NBS (we refer to it as H-K NBS) together with disagreement points are plotted for different values of channel parameters in Fig. \[fig:hkvstdm\]. For comparison, we also include the TDM regions and the corresponding NBS (we refer to it as TDM NBS). When TDM is employed, user $i$ transmits a fraction $\rho_i (0\leq \rho_i\leq 1)$ of the time under the constraint $\rho_1 + \rho_2 \leq 1$. For a given vector $\mathbf{\rho} = (\rho_1\;\rho_2)^t$, the rate obtained by user $i$ is given by $R_i(\mathbf{\rho}) = R_i(\rho_i) = \rho_iC(\frac{P_i}{\rho_i})$. Hence, the TDM rate region is given by $\mathcal{R}_{\text{TDM}} = \{\mathbf{R}|\mathbf{R} = (R_1(\rho_1)\;R_2(\rho_2))^t,\; \rho_1+\rho_2\leq 1\}$ and the TDM NBS is computed as the solution to the bargaining problem $(\mathcal{R}_{\text{TDM}},\mathbf{R}^0)$. The NBS based on TDM was also investigated for a Gaussian interference game in [@references:Leshem08] using the unique competitive solution studied there as the disagreement point. Note that for TDM Proposition 5 applies and since the efficient frontier of the TDM rate region is strictly monotone, the associated bargaining problem is regular as long as it is essential.
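For comparison, the TDM NBS used in these plots can be obtained by a one-dimensional search over the time-sharing fraction $\rho_1$ (with $\rho_2 = 1 - \rho_1$ on the efficient frontier). A minimal sketch, with illustrative names and assuming SciPy, is given below.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def C(x):
    return 0.5 * np.log2(1 + x)

def tdm_nbs(a, b, P1, P2):
    """NBS over the TDM rate region with disagreement point R0 (sketch)."""
    R0 = np.array([C(P1 / (1 + a * P2)), C(P2 / (1 + b * P1))])

    def neg_nash_product(rho1):
        R1 = rho1 * C(P1 / rho1)
        R2 = (1 - rho1) * C(P2 / (1 - rho1))
        if R1 <= R0[0] or R2 <= R0[1]:
            return 0.0                       # outside the individual rational set
        return -(R1 - R0[0]) * (R2 - R0[1])

    res = minimize_scalar(neg_nash_product, bounds=(1e-6, 1 - 1e-6), method="bounded")
    rho1 = res.x
    return rho1 * C(P1 / rho1), (1 - rho1) * C(P2 / (1 - rho1))

print(tdm_nbs(a=0.2, b=0.5, P1=100.0, P2=100.0))   # weak interference example
```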
Since interference-limited regimes are of more interest here, in these plots we assume the signal-to-noise ratios of both users’ direct links are high, i.e., $\text{SNR}_1 = \text{SNR}_2 = 20$dB. In each case, the channel parameters are chosen according to Proposition \[thm:incentive\] so that the pre-bargaining phase is successful. In Fig. \[fig:subfig1\], both interfering links are strong and $\text{HK}(0,0)$ is employed. The H-K NBS strictly dominates the TDM one. Fig. \[fig:subfig2\] shows an example for mixed interference with $a = 0.1$ and $b = 3$. Since $aP_2 = 10 > 1$, $\text{HK}(0,0.1)$ is employed. In this example, although TDM results in some rate pairs that are outside the H-K rate region, the H-K NBS remains component-wise better than the TDM one. The weak interference case when $a = 0.2$ and $b = 0.5$ is plotted in Fig. \[fig:subfig3\]. For these parameters, we have $aP_2 = 20>1$ and $bP_1 = 50>1$, therefore $\text{HK}(0.02,0.05)$ is used. The H-K NBS in this case, though still much better than $\mathbf{R}^0$, is slightly worse than the TDM one. This is because the TDM rate region contains the H-K rate region due to the suboptimality of the simple H-K scheme in the weak regime. Finally, recall that while the TDM rate region does not depend on $a$ and $b$, since $\mathbf{R}^0$ does, the TDM NBS depends on $a$ and $b$ as well.
We compute the H-K NBS for different ranges of the channel parameters in Fig. \[fig:ratesvsb\]. We assume $\text{SNR}_1 = \text{SNR}_2 = 20$dB, $a = 1.5$ and $b$ varies from 0 to 3. The improvement of each user’s rate in $\mathbf{R}^*$ over the one in $\mathbf{R}^0$ increases as $b$ grows. When $b< a$, user 1’s rate in the NBS is less than user 2’s; however, as $b$ grows beyond $a$, user 1’s rate in the NBS surpasses user 2’s, which is due to the fairness property of the NBS. Alternatively we say a strong interfering link can give user 1 an advantage in bargaining.
In Fig. \[fig:icaobg\], the unique pair of agreements $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ in the subgame perfect equilibrium of the AOBG is shown for mixed interference with $a = 0.2$, $b = 1.2$, $\text{SNR}_1 = 10$dB and $\text{SNR}_2 = 20$dB for three different choices of the pair of probabilities of breakdown $p_1$ and $p_2$. According to Proposition \[thm:incentive\], in phase 1, the two users decide to cooperate using $\text{HK}(0,0.05)$. Furthermore, by Proposition \[thm:regularity\], the bargaining problem in phase 2 is regular. As in the MAC case, user 1’s offer in subgame perfect equilibrium $\bar{\mathbf{R}}$ corresponds to the equilibrium outcome of the AOBG since we assume user 1 makes an offer first. If user 2 moves first instead, user 2’s offer in subgame perfect equilibrium $\tilde{\mathbf{R}}$ would become the equilibrium outcome of the game. We can see that as $p_1$ and $p_2$ change, $\bar{\mathbf{R}}$ and $\tilde{\mathbf{R}}$ move along the individual rational efficient frontier of $\mathcal{F}$. When $p_1 = 0.5$ and $p_2 = 0.5$, user 1’s rate in $\bar{\mathbf{R}}$ is greater than that in the NBS; but when $p_1 = 0.1$ and $p_2 = 0.5$, its rate in $\bar{\mathbf{R}}$ is smaller than that in the NBS. As $p_1$ and $p_2$ decrease to $0.1$, both $\bar{\mathbf{R}}$ and $\tilde{\mathbf{R}}$ become closer to the Nash solution. The rate of each user in the perfect equilibrium outcome $\bar{\mathbf{R}}$ as a function of breakdown probability $p_1$ is plotted in Fig. \[fig:speratevsp1\] when $p_2$ is fixed to 0.5 under the above channel parameters. As $p_1$ gets larger, user 1’s rate increases while user 2’s decreases. The larger $p_1$ becomes, the more likely that bargaining may permanently terminate in disagreement when user 1’s offer is rejected by user 2. This demonstrates that if user 1 fears less about bargaining breakdown, it can be more advantageous in bargaining. It should also be emphasized that due to regularity the equilibrium is unique and agreement is reached in round 1 in equilibrium. In this sense, the bargaining mechanism of AOBG is highly efficient.
Fig. \[fig:aobgtdm\] illustrates the perfect equilibrium outcomes of the AOBG when the H-K and TDM cooperating schemes are used respectively for an example of mixed interference with $a = 0.2$, $b = 1.2$, $\text{SNR}_1 = 20$dB and $\text{SNR}_2 = 30$dB. By Propositions \[thm:incentive\] and \[thm:regularity\] and Fig. \[fig:regular\_range2\], the incentive conditions in phase 1 are satisfied and the bargaining problem is regular. The equilibrium outcomes of the AOBG in the TDM case are obtained by applying Proposition \[thm:spe\_ic\]. Since the boundary of the TDM rate region is not linear, we compute the unique pair of $(\bar{\mathbf{R}}, \tilde{\mathbf{R}})$ in TDM numerically. The probabilities of breakdown are set to $p_1 = p_2 = 0.5$. The NBS’s in both cases are also plotted for reference. We observe that the individual rational efficient frontiers for the H-K and TDM schemes intersect. Also, while user 2 gets higher rates in all the bargaining outcomes in TDM than in H-K, user 1’s rates in H-K are superior to those in TDM. Hence, we can conclude that, depending on the channel parameters and power constraints, the two users may have different preferences over which transmission scheme to employ.
Bargaining for the Generalized Degrees of Freedom
=================================================
In the previous section, we have studied the bargaining problem in which the two selfish users communicating over a Gaussian IC bargain for a fair rate pair over the rate region achieved by the simple H-K scheme. However, for fixed channel parameters $a$, $b$ and power constraints $P_1$ and $P_2$, the employed H-K scheme is a suboptimal one, as it can only achieve within one bit of the capacity region in the weak and mixed regimes. In this section, we focus our attention on certain high $\text{SNR}$ regimes in which the simple H-K scheme becomes asymptotically optimal and employ the g.d.o.f. as a performance measure for each user. As the g.d.o.f. approximates interference-limited performance well at high SNR’s for all interference regimes, the results in this section help us understand what bargaining solution we would obtain if bargaining were performed over the entire capacity region. Before dealing with the bargaining problem, we first briefly review the concept of g.d.o.f.
Let $\mathcal{C}(\text{SNR}_1,\text{SNR}_2,\text{INR}_1,\text{INR}_2)$ denote the capacity region of a real Gaussian IC with parameters $\text{SNR}_1$, $\text{SNR}_2$, $\text{INR}_1$ and $\text{INR}_2$ defined in Section II, and let $$\begin{aligned}
\theta_1 = \frac{\log \text{SNR}_2}{\log \text{SNR}_1}\\
\theta_2 = \frac{\log \text{INR}_1}{\log \text{SNR}_1}\\
\theta_3 = \frac{\log \text{INR}_2}{\log \text{SNR}_1}\end{aligned}$$ Note that for the g.d.o.f. analysis, $\theta_1$, $\theta_2$ and $\theta_3$ are fixed[^6]. In this section, we focus only on the nontrivial cases when $\theta_i > 0$, $i = 1,2,3$.
The generalized degrees of freedom region is defined as [@references:Etkin08] $$\begin{aligned}
\mathcal{D}(\theta_1,\theta_2,\theta_3) = \lim_{\substack{\text{SNR}_1,\text{SNR}_2,\text{INR}_1,\text{INR}_2\rightarrow \infty\\\theta_1,\theta_2,\theta_3\: \text{fixed}}}\{(d_1,d_2)|\nonumber\\
(\frac{d_1}{2}\log \text{SNR}_1,\frac{d_2}{2}\log \text{SNR}_2)\in \mathcal{C}(\text{SNR}_1,\text{SNR}_2,\text{INR}_1,\text{INR}_2)\}\end{aligned}$$ The generalized degrees of freedom $d_1$ and $d_2$ reflect to what extent interference affects communications. When the interference is absent, each user can achieve a rate $R_i = 1/2\log \text{SNR}_i$; as a result of interference, the single-user capacity is scaled by a factor $d_i$. The greater $d_i$ is, the less user $i$ is affected by interference. The following theorem from [@references:Etkin08] describes the optimal g.d.o.f. region of a two-user Gaussian IC.
In the strong interference regime $\text{INR}_1 \geq \text{SNR}_2$ and $\text{INR}_2 \geq \text{SNR}_1$ ($\theta_2 \geq \theta_1$ and $\theta_3 \geq 1$), the g.d.o.f. region $\mathcal{D}_1$ is given by $$d_1\leq 1
\label{eqn:gdof1}$$ $$d_2\leq 1
\label{eqn:gdof2}$$ $$d_1+\theta_1 d_2 \leq \varphi_1 = \min(\max(1,\theta_2),\max(\theta_1,\theta_3))$$ and it is achieved by $\text{HK}(0,0)$.
In the weak interference regime $\text{INR}_1 < \text{SNR}_2$ and $\text{INR}_2 < \text{SNR}_1$ ($\theta_2 < \theta_1$ and $\theta_3 < 1$), the g.d.o.f. region $\mathcal{D}_2$ is given by (\[eqn:gdof1\]), (\[eqn:gdof2\]) and $$\begin{array}{l l}
d_1+\theta_1 d_2 \leq \varphi_2 = & \min(1+(\theta_1-\theta_3)^+,\theta_1+(1-\theta_2)^+,\\
& \max(\theta_2,1-\theta_3)+\max(\theta_3,\theta_1-\theta_2))\label{eqn:weakdof1}
\end{array}$$ $$2d_1+\theta_1 d_2\leq \varphi_3 = \max (1,\theta_2) + \max(\theta_3,\theta_1-\theta_2) + 1-\theta_3\label{eqn:weakdof2}$$ $$d_1+2\theta_1 d_2\leq \varphi_4 = \max (\theta_1,\theta_3) + \max(\theta_2,1-\theta_3) + \theta_1-\theta_2\label{eqn:weakdof3}$$ and it is achieved by $\text{HK}(1/\text{INR}_2,1/\text{INR}_1)$.
In the mixed interference regime $\text{INR}_1 \geq \text{SNR}_2$ and $\text{INR}_2 < \text{SNR}_1$ ($\theta_2 \geq \theta_1$ and $\theta_3 < 1$), the g.d.o.f. region $\mathcal{D}_3$ is given by (\[eqn:gdof1\]), (\[eqn:gdof2\]) and $$d_1+\theta_1 d_2 \leq \varphi_5 = \min(1+(\theta_1-\theta_3)^+,\max(1,\theta_2))$$ $$d_1+2\theta_1 d_2 \leq \varphi_6 = \max(\theta_1,\theta_3) + \max(\theta_2,1-\theta_3)$$ and is achieved by $\text{HK}(1/\text{INR}_2,0)$.
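
For reference, the bound constants appearing in the theorem above can be transcribed directly into a few lines of Python. This is only a bookkeeping aid (the function names are ours), but it makes the numerical checks later in this section easy to reproduce.

    # Bookkeeping sketch (names are ours): the bound constants of the
    # g.d.o.f. theorem above as functions of (theta1, theta2, theta3).
    def pos(x):
        return max(x, 0.0)           # (x)^+

    def gdof_bounds(t1, t2, t3):
        phi1 = min(max(1, t2), max(t1, t3))
        phi2 = min(1 + pos(t1 - t3),
                   t1 + pos(1 - t2),
                   max(t2, 1 - t3) + max(t3, t1 - t2))
        phi3 = max(1, t2) + max(t3, t1 - t2) + 1 - t3
        phi4 = max(t1, t3) + max(t2, 1 - t3) + t1 - t2
        phi5 = min(1 + pos(t1 - t3), max(1, t2))
        phi6 = max(t1, t3) + max(t2, 1 - t3)
        return phi1, phi2, phi3, phi4, phi5, phi6
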
Each selfish user aims merely to increase its own g.d.o.f. If the two users do not coordinate, each user treats the other user’s signal as noise. In the uncoordinated case, the pair of rates in disagreement is given by $$\begin{aligned}
R_1^0 = \frac{1}{2}\log \left(1+\frac{\text{SNR}_1}{1+\text{INR}_1}\right)\\
R_2^0 = \frac{1}{2}\log \left(1+\frac{\text{SNR}_2}{1+\text{INR}_2}\right)
\end{aligned}$$ and thus the corresponding disagreement g.d.o.f. pair $\mathbf{d}^0 = (d_1^0\;d_2^0)^t$ can be obtained as $$
d_1^0 = \lim_{\substack{\text{SNR}_1,\text{SNR}_2,\text{INR}_1,\text{INR}_2\rightarrow \infty\\\theta_1,\theta_2,\theta_3\: \text{fixed}}} \frac{R_1^0}{\frac{1}{2}\log \text{SNR}_1}= (1-\theta_2)^+$$ and $$d_2^0 = \lim_{\substack{\text{SNR}_1,\text{SNR}_2,\text{INR}_1,\text{INR}_2\rightarrow \infty\\\theta_1,\theta_2,\theta_3\: \text{fixed}}} \frac{R_2^0}{\frac{1}{2}\log \text{SNR}_2}= (1-\frac{\theta_3}{\theta_1})^+$$
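
As a companion to the sketch above, the disagreement pair just derived can be evaluated in the same way (again, the function name is ours and the snippet is purely illustrative).

    # Disagreement g.d.o.f. pair obtained above by treating the other
    # user's signal as noise.
    def disagreement_gdof(t1, t2, t3):
        d1_0 = max(1 - t2, 0.0)          # (1 - theta2)^+
        d2_0 = max(1 - t3 / t1, 0.0)     # (1 - theta3/theta1)^+
        return d1_0, d2_0
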
The problem of obtaining a fair pair of g.d.o.f. can be formulated as a bargaining problem with the feasible set being $\mathcal{D}(\theta_1,\theta_2,\theta_3)$ and the disagreement point being $\mathbf{d}^0$. The two-phase mechanism of coordination proposed in Section IV can also be applied here. In the following, Proposition \[thm:gdof1\] determines whether the two users have incentives to coordinate in phase 1 and Proposition \[thm:gdof2\] then solves the bargaining problem in the second phase by selecting the NBS as the desired operating point. A dynamic AOBG can also be formulated for the associated bargaining problem (if the regularity condition holds) but will be omitted here.
For the two-user Gaussian IC, the pre-bargaining phase is successful and both users have incentives to employ an H-K scheme provided one of the following conditions holds. Each condition also specifies the H-K scheme employed by the users.
- Strong interference ($\theta_2 \geq \theta_1$ and $\theta_3 \geq 1$): Users always employ HK(0,0);
- Weak interference ($\theta_2 < \theta_1$ and $\theta_3 < 1$): Users employ HK($1/\text{INR}_2$,$1/\text{INR}_1$) iff $(\theta_1,\theta_2,\theta_3)$ are such that $(d_1^0,d_2^0)$ satisfy (\[eqn:weakdof1\])-(\[eqn:weakdof3\]) all with strict inequality;
- Mixed interference ($\theta_2 \geq \theta_1$ and $\theta_3 < 1$): Users always employ HK($1/\text{INR}_2$,0).
\[thm:gdof1\]
Unlike in the strong and mixed interference regimes, in the weak interference regime the two users do not necessarily both have incentives to cooperate. For instance, if $\theta_1 = 1$, $0<\theta_2 \leq \frac{1}{2}$ and $0<\theta_3 \leq \frac{1}{2}$, then $d_1^0+d_2^0 = \varphi_2 = 2-\theta_2-\theta_3$. In this case, $\mathbf{d}^0$ lies on the boundary of $\mathcal{D}_2$ and is Pareto optimal, so there is no bargaining outcome that can improve one user’s g.d.o.f. without decreasing the other’s. Also recall that in Section IV, in the mixed interference regime with $a\geq1$ and $b<1$, for finite power constraints $P_1$ and $P_2$, even when $\text{INR}_2 = bP_1>1$, the disagreement point $\mathbf{R}^0$ may not lie strictly inside the rate region achieved by $\text{HK}(1/(bP_1),0)$, and thus pre-bargaining in phase 1 could fail. However, by Proposition \[thm:gdof1\], at high SNR’s the pre-bargaining phase is always successful and both users have incentives to employ the simple H-K scheme.
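
The weak-interference caveat can be turned into an explicit test. The self-contained Python sketch below (ours, with the formulas inlined from the theorem) checks whether $(d_1^0,d_2^0)$ satisfies (\[eqn:weakdof1\])-(\[eqn:weakdof3\]) with strict inequality, which by Proposition \[thm:gdof1\] is exactly the condition for the pre-bargaining phase to succeed in the weak regime; the example reproduces the boundary case mentioned above.

    # Phase-1 test in the weak regime (theta2 < theta1, theta3 < 1): both
    # users have incentives to cooperate iff the disagreement pair lies
    # strictly inside D_2, i.e., all three sum constraints hold strictly.
    def weak_incentive(t1, t2, t3):
        d1 = max(1 - t2, 0.0)
        d2 = max(1 - t3 / t1, 0.0)
        phi2 = min(1 + max(t1 - t3, 0.0), t1 + max(1 - t2, 0.0),
                   max(t2, 1 - t3) + max(t3, t1 - t2))
        phi3 = max(1, t2) + max(t3, t1 - t2) + 1 - t3
        phi4 = max(t1, t3) + max(t2, 1 - t3) + t1 - t2
        return (d1 + t1 * d2 < phi2 and
                2 * d1 + t1 * d2 < phi3 and
                d1 + 2 * t1 * d2 < phi4)

    # Boundary example from the remark above (theta1 = 1, theta2 = theta3 = 1/2):
    print(weak_incentive(1.0, 0.5, 0.5))    # -> False: d^0 is Pareto optimal
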
Provided that the pre-bargaining in phase 1 is successful, the NBS in phase 2 can be characterized as follows:
- Strong interference ($\theta_2 \geq \theta_1$ and $\theta_3 \geq 1$): there exists a unique NBS for the bargaining problem $(\mathcal{D}_1,\mathbf{d}^0)$, which is characterized in Proposition 1 with $\mathcal{G} = \mathcal{D}_1$, $\mathbf{g}^0 = \mathbf{d}^0$, $\mathbf{g}^1 = (1\;1)^t$, $\mathbf{A} = (1\; \theta_1)$, $\mathbf{B} = \varphi_1$;
- Weak interference ($\theta_2 < \theta_1$ and $\theta_3 < 1$): if the bargaining problem $(\mathcal{D}_2,\mathbf{d}^0)$ is essential, there exists a unique NBS which is characterized in Proposition 1 with $\mathcal{G} = \mathcal{D}_2$, $\mathbf{g}^0 = \mathbf{d}^0$, $\mathbf{g}^1 = (1\;1)^t$, $\mathbf{B} = (\varphi_2\;\varphi_3\;\varphi_4)^t$ and $$\mathbf{A} = \left(
\begin{array}{c c c}
1 & 2 & 1\\
\theta_1 & \theta_1 & 2\theta_1
\end{array}
\right)^t;$$
- Mixed interference ($\theta_2 \geq \theta_1$ and $\theta_3 < 1$): there exists a unique NBS for the bargaining problem $(\mathcal{D}_3,\mathbf{d}^0)$ which is characterized in Proposition 1 with $\mathcal{G} = \mathcal{D}_3$, $\mathbf{g}^0 = \mathbf{d}^0$, $\mathbf{g}^1 = (1\;1)^t$, $\mathbf{B} = (\varphi_5\;\varphi_6)^t$ and $$\mathbf{A} = \left(
\begin{array}{c c}
1 & 1\\
\theta_1 & 2\theta_1
\end{array}
\right)^t.$$
\[thm:gdof2\]
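
Although Proposition 1 gives a closed-form characterization, the NBS can also be located numerically as the maximizer of the Nash product over the g.d.o.f. region. The short Python sketch below does this for the mixed regime by a brute-force grid search; it is our own illustrative code with hypothetical parameter values, intended only as a cross-check of the closed form.

    # Sketch: the NBS as the maximizer of the Nash product
    # (d1 - d1_0)*(d2 - d2_0) over the mixed-regime region D_3.
    import numpy as np

    def nbs_mixed(t1, t2, t3, n=1001):
        # Region D_3: d_i <= 1, d1 + t1*d2 <= phi5, d1 + 2*t1*d2 <= phi6;
        # disagreement pair from the limits derived above.
        phi5 = min(1 + max(t1 - t3, 0.0), max(1, t2))
        phi6 = max(t1, t3) + max(t2, 1 - t3)
        d1_0 = max(1 - t2, 0.0)
        d2_0 = max(1 - t3 / t1, 0.0)
        d1, d2 = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        feasible = ((d1 + t1 * d2 <= phi5) & (d1 + 2 * t1 * d2 <= phi6) &
                    (d1 >= d1_0) & (d2 >= d2_0))
        nash_product = np.where(feasible, (d1 - d1_0) * (d2 - d2_0), -np.inf)
        i = np.unravel_index(np.argmax(nash_product), nash_product.shape)
        return d1[i], d2[i]

    # Hypothetical mixed-regime parameters (theta2 >= theta1, theta3 < 1):
    print(nbs_mixed(t1=1.0, t2=1.2, t3=0.5))   # close to (0.35, 0.85)
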
The optimal g.d.o.f. region, the disagreement point and the NBS obtained are illustrated in Fig. \[fig:dofregions\] for an example in the mixed interference regime. For comparison, we also included the g.d.o.f. region that can be achieved when TDM is used and the corresponding NBS. The g.d.o.f. region in the TDM case is given by $\mathcal{D}_4 = \{\mathbf{d}|\mathbf{d}\geq 0,\; d_1+d_2 \leq 1\}$, which is strictly suboptimal except for some special cases such as the strong interference case with $\theta_1 = \theta_2 = \theta_3 = 1$ and the weak interference case with $\theta_1 = 1$ and $\theta_2 = \theta_3 = \frac{1}{2}$. The TDM NBS is computed as the solution to the bargaining problem $(\mathcal{D}_4, \mathbf{d}^0)$. It can be observed in Fig. \[fig:dofregions\] that the H-K NBS strictly dominates the TDM NBS. This implies that unlike Fig. \[fig:aobgtdm\] in Section IV, in certain high SNR regimes, both users would prefer to cooperate using the H-K scheme, rather than the TDM scheme.
Conclusions
===========
In this paper, we investigated the two-user Gaussian IC, under the assumption that the two users are selfish and interested in coordinating their transmission strategies only when they have incentives to do so. We proposed a two-phase mechanism for the users to coordinate, which consists of choosing a simple H-K type scheme with Gaussian codebooks and fixed power split in phase 1 and bargaining over the achievable rate region (or g.d.o.f. region) to obtain a fair operating point in phase 2. Both the NBS and the dynamic AOBG are considered to solve the bargaining problem in phase 2. As a problem of independent interest, and also as a tool for developing the optimal solution in the strong interference regime, we first studied the MAC before moving on to the IC. We showed that the proposed mechanism can yield substantial rate improvements for both users compared with the uncoordinated case. The results from the dynamic AOBG show that the bargaining game has a unique perfect equilibrium and the agreement is reached immediately in the first bargaining round provided that the associated bargaining problem is regular. The exogenous probabilities of breakdown and which user makes a proposal first also play important roles in the final outcome. When the selfish users’ costs of delay in bargaining are not negligible, that is, when the exogenous probabilities of breakdown are high, the equilibrium outcome deviates from the NBS. We conclude that when we consider coordination and bargaining over the IC, factors such as the users’ cost of delay in bargaining and the environment in which bargaining takes place should also be taken into consideration.
In this paper, we derived the cost of delay in bargaining from an exogenous probability of breakdown, motivated by the fact that other users in the environment may randomly interrupt the process and the bargaining between a pair of users may terminate in disagreement if no offer is accepted after a round. It would also be interesting to model the users’ cost of delay in bargaining under other assumptions, for example, by discounting each user’s payoff by a factor of $\delta$ after each round [@references:Rubinstein82][@references:Binmore86] or by accounting for the amount of communication overhead incurred. Finally, the bargaining framework in this paper can be extended to the two-user MIMO IC using the results of [@references:Vishwanath04][@references:Shang09_archive][@references:Hsiang08][@references:Telatar07].
The Extensive Game with Perfect Information and Chance Moves
============================================================
An extensive game with perfect information $\Gamma = \langle N,H,P,(\succeq_i)\rangle$ has the following components [@references:Martin]:
- A player set $N$.
- A history set $H$. Each history in $H$ is a sequence of the form $(e_1,...,e_K)$, where $e_k\;(k=1,...,K)$ is an action taken by a player. If $K$ is $\infty$, the history is infinite. A history $(e_1,...,e_K)$ is *terminal* if it is infinite or if there is no $e_{K+1}$ such that $(e_1,...,e_{K+1})\in H$. The set of terminal histories and that of nonterminal histories are denoted $Q$ and $H\setminus Q$ respectively.
- A player function $P(h)$ that assigns to each nonterminal history $h\in H\setminus Q$ a member of $N$.
- For each player $i\in N$ a preference relation $\succeq_i$ on $Q$.
Let $h$ be a history of length $K$ and $e$ be an action. We denote by $(h,e)$ the history of length $K+1$ consisting of $h$ followed by $e$. After any nonterminal history $h\in H\setminus Q$, player $P(h)$ chooses an action from the set $E(h) = \{e|(h,e)\in H\}$. [A *strategy* of player $i\in N$ in the extensive game $\Gamma = \langle N,H,P,(\succeq_i)\rangle$ is a function $s_i$ that assigns an action in $E(h)$ to each nonterminal history $h\in H\setminus Q$ for which $P(h) = i$. ]{}
Let $s = (s_i)_{i\in N}$ be the strategy profile and $s_{-i}$ be the list of strategies $(s_j)_{j\in N\setminus\{i\}}$ for all players except $i$. Given a list $(s_j)_{j\in N\setminus\{i\}}$ and a strategy $s_i$, we also denote by $(s_{-i},s_i)$ the strategy profile. For each strategy profile $s$, we define the *outcome* $O(s)$ of $s$ to be the terminal history that results when each player $i\in N$ follows the precepts of $s_i$. [A *Nash equilibrium* of the extensive game $\Gamma = \langle N,H,P,(\succeq_i)\rangle$ is a strategy profile $s^*$ such that for every player $i\in N$ and for every strategy $s_i$ of player $i$, we have $$O(s^*_{-i},s^*_i)\succeq_i O(s^*_{-i},s_i)$$ ]{} [The *subgame* of the extensive game $\Gamma = \langle N,H,P,(\succeq_i)\rangle$ that follows the history $h$ is the extensive game $\Gamma(h) = \langle N,H|_h,P|_h,(\succeq_i|_h)\rangle$, where $H|_h$ is the set of sequences $h'$ of actions for which $(h,h')\in H$, $P|_h$ is defined by $P|_h(h') = P(h,h')$ for each $h'\in H|_h$, and $\succeq_i|_h$ is defined by $h'\succeq_i|_h h''$ if and only if $(h,h')\succeq_i (h,h'')$. ]{}
Given a strategy $s_i$ of player $i$ and a history $h$ in the extensive game $\Gamma$, denote by $s_i|_h$ the strategy that $s_i$ induces in the subgame $\Gamma(h)$ (i.e., $s_i|_h(h') = s_i(h,h')$ for each $h'\in H|_h)$.
[A *subgame perfect equilibrium* of an extensive game $\Gamma = \langle N,H,P,(\succeq_i)\rangle$ is a strategy profile $s^*$ in $\Gamma$ such that for any history $h$, the strategy profile $s^*|_h$ is a Nash equilibrium of the subgame $\Gamma(h)$. ]{}
A subgame perfect equilibrium is a Nash equilibrium of the whole game with the additional property that the equilibrium strategies induce a Nash equilibrium in every subgame as well.
If there is some *exogenous uncertainty*, the game becomes one with *chance moves* and we denote it by $\langle N,H,P,f_c, (\succeq_i)\rangle$. Under such an extension, $P$ is a function from the nonterminal histories in $H$ to $N \cup \{c\}$ (if $P(h) = c$, then chance determines the action taken after history $h$); for each $h\in H$ with $P(h) = c$, $f_c(\cdot|h)$ is a probability measure on the set $E(h)$ after history $h$; for each player $i\in N$, $\succeq_i$ is a preference relation on lotteries over the set of terminal histories. The outcome of a strategy profile is a probability distribution over terminal histories, and the definition of a subgame perfect equilibrium remains the same as before.
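
To make the backward-induction flavor of these definitions concrete, the following generic Python sketch (ours; the toy game is hypothetical and unrelated to the AOBG of Section IV) computes subgame perfect equilibrium payoffs of a small finite extensive game with perfect information and a chance move.

    # Generic sketch: subgame perfect equilibrium payoffs of a finite
    # two-player extensive game with perfect information and chance moves,
    # computed by backward induction.  A node is ('terminal', (u1, u2)),
    # ('player', i, {action: child}), or ('chance', [(prob, child), ...]).
    def spe_payoffs(node):
        kind = node[0]
        if kind == 'terminal':
            return node[1]
        if kind == 'chance':
            values = [spe_payoffs(child) for _, child in node[1]]
            probs = [p for p, _ in node[1]]
            return tuple(sum(p * v[k] for p, v in zip(probs, values))
                         for k in range(2))
        _, i, children = node
        # the mover picks the action maximizing its own continuation payoff
        return max((spe_payoffs(child) for child in children.values()),
                   key=lambda v: v[i])

    # Toy one-round offer game: player 0 proposes a split; player 1 accepts,
    # or rejects, after which bargaining breaks down with probability 0.5.
    breakdown = ('chance', [(0.5, ('terminal', (0.0, 0.0))),
                            (0.5, ('terminal', (0.4, 0.6)))])
    game = ('player', 0, {
        'greedy offer': ('player', 1, {'accept': ('terminal', (0.8, 0.2)),
                                       'reject': breakdown}),
        'fair offer': ('player', 1, {'accept': ('terminal', (0.5, 0.5)),
                                     'reject': breakdown})})
    print(spe_payoffs(game))   # -> (0.5, 0.5): the greedy offer is rejected
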
Proof of Proposition \[thm:incentive\]
======================================
- In the strong interference case $a \geq 1$ and $b \geq 1$, we choose optimal $\alpha = \beta = 0$. Treating interference as noise is suboptimal and $\mathbf{R}^0$ always lies inside $\mathcal{F}$. The bargaining problem $(\mathcal{F},\mathbf{R^0})$ is essential and hence both users always have incentives to cooperate.
- In the weak interference case $a <1$ and $b<1$, we choose the near-optimal power splits $\alpha = \min(1/(bP_1),1)$ and $\beta = \min(1/(aP_2),1)$. If $bP_1 \leq 1$, the scheme $\text{HK}(1,\beta)$ will not improve user 2’s rate over $\mathbf{R}^0$ and hence user 2 does not have an incentive to cooperate using such a scheme. The same occurs for user 1 if $aP_2 \leq 1$ and $\text{HK}(\alpha,1)$ is employed. However, if $aP_2>1$ and $bP_1 > 1$ and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty when $\alpha = 1/(bP_1)$ and $\beta = 1/(aP_2)$, both users’ rates can be improved compared with those in $\mathbf{R}^0$.
- In the mixed interference case with $a < 1$ and $b \geq 1$, we choose the near-optimal power splits $\alpha = 0$ and $\beta = \min(1/(aP_2),1)$. As in the weak case, it is possible to improve both users’ rates relative to those in $\mathbf{R}^0$ only if $aP_2>1$ and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty when $\alpha = 0$ and $\beta = 1/(aP_2)$. Otherwise, at least one user does not have an incentive to cooperate and coordination breaks down.
Proof of Proposition \[thm:regularity\]
=======================================
- In the strong interference case, at phase 1, the users choose the optimal $\alpha = \beta = 0$. The resulting capacity region is shown in Fig. \[fig:strreg\]. Note that only two extreme points of the region are in the first quadrant and they are $r_1 = (\phi_6-C(P_2),C(P_2))$ and $r_2 = (C(P_1),\phi_6-C(P_1))$. It is easy to show that $R_1^0\leq \phi_6-C(P_2)$ and $R_2^0 \leq \phi_6-C(P_1)$ with equalities holding only when $a = b = 1$. In order for the individual rational efficient frontier to be strictly monotone, it must contain no horizontal or vertical line segments, which requires $R_1^0\geq \phi_6-C(P_2)$ and $R_2^0 \geq \phi_6-C(P_1)$. Hence, the associated bargaining problem is regular iff $a = b = 1$.
- In the weak interference case $a<1$ and $b<1$, by Proposition \[thm:incentive\], in phase 1, both users have incentives to cooperate using $\text{HK}(1/(bP_1),1/(aP_2))$ if $aP_2>1$, $bP_1 > 1$ and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty when $\alpha = 1/(bP_1)$ and $\beta = 1/(aP_2)$. The shape of the achievable rate region is shown in Fig. \[fig:weakreg\]. It has been proved in [@references:Khandani09] that the points $r_i'\notin\mathcal{F}$ for $i\in \{1,2,...,6\}$. Therefore there are at most[^7] four extreme points in the first quadrant of Fig. \[fig:weakreg\], given by $$\begin{aligned}
&r_1 = (\phi_1,\phi_4-2\phi_1)\\
&r_2 = (\phi_4-\phi_3,2\phi_3-\phi_4)\\
&r_3 = (2\phi_3-\phi_5,\phi_5-\phi_3)\\
&r_4 = (\phi_5-2\phi_2,\phi_2)\end{aligned}$$ where $\phi_i,\: i\in\{1,2,...,5\}$ are given in (\[eqn:reg1\])-(\[eqn:reg5\]) with $\alpha = 1/(bP_1)$ and $\beta = 1/(aP_2)$. In order for the individual rational efficient frontier to be strictly monotone, it must contain no horizontal or vertical line segments. If $r_1$ is in the first quadrant, $R_2^0\geq \phi_4-2\phi_1$ must hold and similarly if $r_4$ is in the first quadrant, $R_1^0\geq \phi_5-2\phi_2$ must hold. Hence, the associated bargaining problem in the weak interference case is regular iff two additional conditions $R_1^0 \geq (\phi_5-2\phi_2)^+$ and $R_2^0 \geq (\phi_4-2\phi_1)^+$ are satisfied.
- In the mixed interference case $a<1$ and $b\geq 1$, by Proposition \[thm:incentive\], in phase 1, both users cooperate using $\text{HK}(0,1/(aP_2))$ if $aP_2>1$ and $\mathcal{F}\cap \{\mathbf{R}>\mathbf{R}^0\}$ is nonempty when $\alpha = 0$ and $\beta = 1/(aP_2)$. Similar to the weak interference case, there are at most four extreme points in the first quadrant of Fig. \[fig:weakreg\] except that $r_1' = (\phi_1,\phi_3-\phi_1)$ or $r_5'= (\phi_3-\phi_2,\phi_2)$ may become an extreme point of $\mathcal{F}$, depending on whether the constraint (\[eqn:reg4\]) or (\[eqn:reg5\]) is redundant or not respectively. In order for the individual rational efficient frontier to be strictly monotone, it must contain no horizontal or vertical line segments. If $r_1$ and $r_1'$ are both in the first quadrant, $R_2^0\geq \min(\phi_4-2\phi_1,\phi_3-\phi_1)$ must hold and if $r_4$ and $r_5'$ are both in the first quadrant, $R_1^0\geq \min(\phi_5-2\phi_2,\phi_3-\phi_2)$ must hold. Hence, the associated bargaining problem in the mixed interference case $a<1$ and $b\geq 1$ is regular iff two additional conditions $R_1^0 \geq (\min(\phi_5-2\phi_2,\phi_3-\phi_2))^+$ and $R_2^0 \geq (\min(\phi_4-2\phi_1,\phi_3-\phi_1))^+$ are satisfied.
Proof of Proposition \[thm:gdof1\]
==================================
For all interference regimes, since we assume $\theta_i>0$ for $i = 1,2,3$, it immediately follows that $d_1^0< 1$ and $d_2^0< 1$.
- In the strong interference regime, we have $\theta_2 \geq \theta_1$ and $\theta_3 \geq 1$. Depending on the values of $\theta_1$, $\theta_2$ and $\theta_3$, we have the following four cases:
- Case 1: $\theta_2 \geq 1$ and $\theta_3 \geq \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 = 0$ and $\varphi_1 = \min(\theta_2,\theta_3)>0$. Therefore, $d_1^0 + \theta_1 d_2^0 < \varphi_1$ holds.
- Case 2: $\theta_2 < 1$ and $\theta_3 \geq \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 = 1-\theta_2$ and $\varphi_1 = \min(1,\theta_3) = 1> 1-\theta_2$. Therefore, $d_1^0 + \theta_1 d_2^0 < \varphi_1$ holds.
- Case 3: $\theta_2 \geq 1$ and $\theta_3 < \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 = \theta_1-\theta_3$ and $\varphi_1 = \min(\theta_2,\theta_1) = \theta_1>\theta_1-\theta_3$. Therefore, $d_1^0 + \theta_1 d_2^0 < \varphi_1$ holds.
- Case 4: $\theta_2 < 1$ and $\theta_3 < \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 = 1-\theta_2+\theta_1-\theta_3$ and $\varphi_1 = \min(1,\theta_1)$. Since $1-\theta_2+\theta_1-\theta_3\leq 1-\theta_3<1$ and $1-\theta_2+\theta_1-\theta_3 \leq -\theta_2+\theta_1< \theta_1$, it follows that $1-\theta_2+\theta_1-\theta_3 < \min(1,\theta_1)$. Therefore, $d_1^0 + \theta_1 d_2^0 < \varphi_1$ holds.
Hence, $d_1^0 + \theta_1 d_2^0 < \varphi_1$ holds for all the values of the parameters in the range, and we can conclude that $\mathbf{d}^0$ is strictly inside of $\mathcal{D}_1$ and the bargaining problem $(\mathcal{D}_1,\mathbf{d}^0)$ is always essential.
- In the weak interference regime, we have $\theta_2 < \theta_1$ and $\theta_3 < 1$. In order for both users to have incentives to cooperate using HK($1/\text{INR}_2$,$1/\text{INR}_1$), $\mathbf{d}^0$ needs to lie strictly inside of $\mathcal{D}_2$, which is not true for all parameters of $\theta_1$,$\theta_2$,$\theta_3$. It happens only when $(\theta_1,\theta_2,\theta_3)$ are such that $(d_1^0,d_2^0)$ satisfy (\[eqn:weakdof1\])-(\[eqn:weakdof3\]) all with strict inequality.
- In the mixed interference regime, we have $\theta_2 \geq \theta_1$ and $\theta_3 < 1$. Depending on the values of $\theta_1$, $\theta_2$ and $\theta_3$, we have the following four cases:
- Case 1: $\theta_2 \leq 1$ and $\theta_3 \geq \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 =d_1^0 + 2\theta_1 d_2^0 = 1-\theta_2$ and $\varphi_5 = 1>1-\theta_2$. Therefore, $d_1^0 + \theta_1 d_2^0 < \varphi_5$ holds. Note that $\varphi_6 = \max(\theta_2+\theta_3,1)\geq 1>1-\theta_2$, hence $d_1^0 + 2\theta_1 d_2^0 < \varphi_6$ also holds.
- Case 2: $\theta_2 > 1$ and $\theta_3 \geq \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 =d_1^0 + 2\theta_1 d_2^0 = 0$ and $\varphi_5 = \theta_2>0$. Therefore, $d_1^0 + \theta_1 d_2^0 < \varphi_5$ holds. Also $\varphi_6 = \theta_2+\theta_3> 0$ and $d_1^0 + 2\theta_1 d_2^0 < \varphi_6$ holds.
- Case 3: $\theta_2 > 1$ and $\theta_3 < \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 = \theta_1-\theta_3$ and $\varphi_5 = \min(1+\theta_1-\theta_3,\theta_2)$. Since $\theta_1-\theta_3 < \theta_1 \leq \theta_2$, it follows that $d_1^0 + \theta_1 d_2^0 < \varphi_5$. $d_1^0 + 2\theta_1 d_2^0 = 2(\theta_1-\theta_3)$ and $\varphi_6 = \theta_1+\theta_2 \geq 2\theta_1 >2(\theta_1-\theta_3)$. Therefore $d_1^0 + 2\theta_1 d_2^0 < \varphi_6$ also holds.
- Case 4: $\theta_2 \leq 1$ and $\theta_3 < \theta_1$. In this case, $d_1^0 + \theta_1 d_2^0 = 1-\theta_2+\theta_1-\theta_3\leq 1-\theta_3<1$ and $\varphi_5 = 1$. It follows that $d_1^0 + \theta_1 d_2^0 < \varphi_5$. $d_1^0 + 2\theta_1 d_2^0 = 1-\theta_2+2(\theta_1-\theta_3)$ and $\varphi_6 = \theta_1+\max(\theta_2,1-\theta_3)$. If $\theta_2\leq 1-\theta_3$, then $\varphi_6 = \theta_1+1-\theta_3$. $d_1^0 + 2\theta_1 d_2^0 - \varphi_6 = -\theta_2+\theta_1-\theta_3\leq -\theta_3<0$. Otherwise if $\theta_2> 1-\theta_3$, then $\varphi_6 = \theta_1+\theta_2$. $d_1^0 + 2\theta_1 d_2^0 - \varphi_6 = 1+\theta_1-2\theta_2-2\theta_3 = -\theta_3 +(1-\theta_3-\theta_2)+(\theta_1-\theta_2)<0$. Therefore $d_1^0 + 2\theta_1 d_2^0 < \varphi_6$ also holds.
Hence, $d_1^0 + \theta_1 d_2^0 < \varphi_5$ and $d_1^0 + 2\theta_1 d_2^0 < \varphi_6$ hold for all values of the parameters in the range, and we can conclude that $\mathbf{d}^0$ is strictly inside of $\mathcal{D}_3$ and the bargaining problem $(\mathcal{D}_3,\mathbf{d}^0)$ is always essential.
[^1]: This material is based upon work partially supported by NSF Grant No. 0635177, and by the Center for Advanced Technology in Telecommunications (CATT) of Polytechnic Institute of NYU.
[^2]: Throughout the paper, “cooperation” means cooperation for the choice of transmission strategy including codebook and rate selection, which is different from cooperation in information transmission as in cooperative communications[@references:Sendonaris03].
[^3]: In the NBS discussed in [@references:Leshem06][@references:Leshem08], it is assumed that a unique NE of a Gaussian interference game defined there exists and is selected as the disagreement point. Typically, the NE is unique only when both interferences are weaker than the desired signals.
[^4]: Recall that, from Appendix A, when there are chance moves, the outcome of a strategy profile $s = (s_i)_{i\in N}$ is a probability distribution (or a lottery) over a set of terminal histories instead of a single terminal history.
[^5]: If the terminal history $h$ is of type III, the agreement is the last offer $o(t)$ in $h$; if $h$ is of type V instead, the agreement is the disagreement point $\mathbf{g}^0$. Also note that terminal histories of type VI do not occur with positive probability.
[^6]: Note that, to guarantee this, the channel parameters $a$ and $b$ need to change with power $P_1$ and $P_2$.
[^7]: In [@references:Khandani09], the authors concluded that there should be exactly four extreme points in the first quadrant, but we find that under some parameters one or two of the four points may actually not lie in the first quadrant. For instance, it is possible that $\phi_5-2\phi_2<0$, in which case $r_4$ is not in the first quadrant.
---
abstract: |
ACL2(r) is a variant of ACL2 that supports the irrational real and complex numbers. Its logical foundation is based on internal set theory (IST), an axiomatic formalization of non-standard analysis (NSA). Familiar ideas from analysis, such as continuity, differentiability, and integrability, are defined quite differently in NSA—some would argue the NSA definitions are more intuitive. In previous work, we have adopted the NSA definitions in ACL2(r), and simply taken for granted that these are equivalent to the traditional analysis notions, e.g., to the familiar $\epsilon$-$\delta$ definitions. However, we argue in this paper that there are circumstances when the more traditional definitions are advantageous in the setting of ACL2(r), precisely because the traditional notions are classical, so they are unencumbered by IST limitations on inference rules such as induction or the use of pseudo-lambda terms in functional instantiation. To address this concern, we describe a formal proof in ACL2(r) of the equivalence of the traditional and non-standard definitions of these notions.
[Keywords:]{} ACL2(r), non-standard analysis, real analysis.
author:
- John Cowles
- Ruben Gamboa
bibliography:
- 'rag.bib'
title: 'Equivalence of the Traditional and Non-Standard Definitions of Concepts from Real Analysis'
---
Introduction {#intro}
============
ACL2(r) is a variant of ACL2 that has support for reasoning about the irrational numbers. The logical basis for ACL2(r) is *non-standard analysis* (NSA), and in particular, the axiomatic treatment of NSA developed as *internal set theory* (IST) [@Nel:nsa]. Traditional notions from analysis, such as limits, continuity, and derivatives, have counterparts in NSA.
Previous formalizations of NSA typically prove that these definitions are equivalent early on. We resisted this in the development of ACL2(r), preferring simply to state that the NSA notions were the “official” notions in ACL2(r), and that the equivalence to the usual notions was a “well-known fact” outside the purview of ACL2(r). In this paper, we retract that statement for three reasons.
First, the traditional notions from real analysis require the use of quantifiers. For instance, we say that a function $f$ has limit $L$ as $x$ approaches $a$ iff $$\forall\epsilon>0, \exists \delta>0 \text{ such that }
|x-a|<\delta \Rightarrow |f(x)-L|<\epsilon.$$ While ACL2(r) has only limited support for quantifiers, this support is, in fact, sufficient to carry out the equivalence proofs. However, it should be noted that the support depends on recent enhancements to ACL2 that allow the introduction of Skolem functions with non-classical bodies. So, in fact, it is ACL2’s improved but still modest support for quantifiers that is sufficient. That story is interesting in and of itself.
Second, the benefit of formalization in general applies to this case, as the following anecdote illustrates. While trying to update the proof of the Fundamental Theorem of Calculus, we were struggling to formalize the notion of *continuously differentiable*, i.e., that $f$ is differentiable and $f'$ is continuous. To talk about the class of differentiable functions in ACL2(r), we use an `encapsulate` event to introduce an arbitrary differentiable function. It would be very convenient to use the existing `encapsulate` for differentiable functions, and prove as a theorem that the derivative was continuous. That is to say, it would be very convenient if all derivative functions were continuous. Note: we mean “derivative” functions, not “differentiable” functions. The latter statement had previously been proved in ACL2(r).
Encouraged by Theorem 5.6 of [@Nel:nsa], one of us set out to prove that, indeed, all derivatives of functions are continuous.
Let $f:I\rightarrow\mathbb{R}$ where $I$ is an interval. If $f$ is differentiable on $I$, then $f'$ is continuous on $I$.
Nelson’s proof of this theorem begins with the following statement:
> We know that $$\label{eqn-deriv}
> \forall^\text{st}x \forall x_1 \forall x_2 \left\{ x_1 \approx x
> \wedge x_2 \approx x \wedge x_1 \ne x_2 \Rightarrow
> \frac{f(x_2)-f(x_1)}{x_2-x_1} \approx f'(x)\right\}.$$
This is, in fact, plausible from the definition of continuity, which is similar but with $x$ taking the place of $x_2$. The remainder of the proof was “trivially” (using the mathematician’s sense of the word) carried out in ACL2(r), so only the proof of this known fact remained. The hand proof for this fact was tortuous, but eminently plausible. Unfortunately, the last step in the proof failed, because it required that $y\cdot y_1 \approx y\cdot y_2$ whenever $y_1
\approx y_2$—but this is true only when $y$ is known to be limited.
The other of us was not fooled by Theorem 5.6: What about the function $x^2 \sin(1/x)$? The discrepancy was soon resolved. Nelson’s definition of derivative in [@Nel:nsa] is precisely Equation \[eqn-deriv\]. No wonder this was a known fact! And the problem is that Equation \[eqn-deriv\] is equivalent to the notion of continuously differentiable, and *not equivalent* to the usual notion of differentiability. But in that case, how are we to know whether theorems in ACL2(r) correspond to the “usual” theorems in analysis? I.e., what if we had chosen Equation \[eqn-deriv\] as the definition of derivative in ACL2(r)? Preventing this situation from recurring is the second motivator for proving the equivalence of the definitions in ACL2(r) once and for all.
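
A quick numerical illustration (in Python, outside ACL2(r)) shows why Equation \[eqn-deriv\] fails for $x^2\sin(1/x)$: the one-point difference quotient at $0$ is close to $f'(0)=0$, but the two-point quotient for nearby $x_1\approx x_2\approx 0$ can stay near $-1$. The particular points below are chosen only for illustration.

    from math import sin, pi

    def f(x):
        return x * x * sin(1.0 / x) if x != 0 else 0.0

    k = 1000
    x1 = 1.0 / (2 * pi * k - 0.01)
    x2 = 1.0 / (2 * pi * k + 0.01)
    print(f(x1) / x1)                      # about -1.6e-6: close to f'(0) = 0
    print((f(x1) - f(x2)) / (x1 - x2))     # about -1: far from f'(0)
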
Third, the NSA definitions are non-classical; i.e., they use notions such as “infinitely close” and “standard.” Indeed, it is these non-classical properties that make NSA such a good fit for the equational reasoning of ACL2(r). However, non-classical functions are severely limited in ACL2(r): Induction can be used to prove theorems using non-classical functions only up to standard values of the free variables, and function symbols may not map to pseudo-lambda expressions in a functional instantiation [@GC:acl2r-theory]. As a practical consequence of these restrictions, it is impossible to prove that $\frac{d(x^n)}{dx} = n \cdot x^{n-1}$ by using the product rule and induction in ACL2(r). In [@Gam:dissertation], for example, this is shown only for standard values of $n$. However, using the traditional notion of differentiability, the result does follow from induction. This, too, would have been reason enough to undertake this work.
It should be emphasized that the main contribution of this paper is the formalization in ACL2(r) of the results described in this paper. The actual mathematical results are already well-known in the non-standard analysis community. Moreover, some of these equivalence results were formalized mechanically as early as [@BaBl:nsa]. The novelty here is the formalization in ACL2(r), which complicates things somewhat because of the poor support for (even first-order) set theory.
The rest of this paper is organized as follows. In Section \[series\], we discuss equivalent definitions regarding convergence of series[^1]. Section \[limits\] considers the limit of a function at a point. The results in this section are used in Section \[continuity\] to show that the notions of continuity at a point are also equivalent. This leads into the discussion of differentiability in Section \[differentiability\]. Finally, Section \[integrability\] deals with the equivalent definitions of Riemann integration.
Convergence of Series {#series}
=====================
In this section, we show that several definitions of convergence are in fact equivalent. In particular, we will consider the traditional definitions, e.g., as found in [@Rudin:analysis], and the corresponding concepts using non-standard analysis, e.g., as found in [@Robinson:nsa].
We start with the constrained function `Ser1`, which represents an arbitrary sequence; i.e., it is a fixed but arbitrary function that maps the natural numbers to the reals. Moreover, `Ser1` is assumed to be a classical function—otherwise, some of the equivalences do not hold. Similarly, the function `sumSer1-upto-n` defines the partial sum of `Ser1`, i.e., the sum of the values of `Ser1` from $0$ to `n`.
The first definition of convergence is the traditional one due to Weierstrass: $$(\exists L) (\forall \epsilon) (\exists M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon).$$ In ACL2, we can write the innermost quantified subformula of this definition as follows:
(defun-sk All-n-abs-sumSer1-upto-n-L<eps (L eps M)
(forall n (implies (and (standardp n)
(integerp n)
(> n M))
(< (abs (- (sumSer1-upto-n n) L))
eps))))
This version of the definition restricts `n` to be a standard integer, which makes it a non-classical formula. A different version omits this requirement, and it is a more direct translation of Weierstrass’s criterion.
(defun-sk Classical-All-n-abs-sumSer1-upto-n-L<eps (L eps M)
(forall n (implies (and (integerp n)
(> n M))
(< (abs (- (sumSer1-upto-n n) L))
eps))))
ACL2 can verify that these two conditions are equal to each other, but only when the parameters `L`, `eps`, and `M` are standard. This follows because `defchoose` is guaranteed to choose a standard witness for classical formulas and standard parameters. More precisely, the witness function of a classical formula is also classical, and all classical functions return standard values for standard inputs [@GC:acl2r-theory]. Once this basic equivalence is proved, it follows that both the classical and non-classical versions of Weierstrass’s criterion are equivalent. It is only necessary to add each of the remaining quantifiers one by one.
We note in passing that the two versions of Weierstrass’s criterion are *not* equivalent for non-standard values of the parameters `L`, `eps`, and `M`. Consider, for example, the case when `eps` is infinitesimally small. It is straightforward to define the sequence $\{a_n\}$ such that the partial sums are given by $\sum_{i=1}^{n}{a_i} = 1/n$, clearly converging to $0$. Indeed, for any $\epsilon>0$ there is an $N$ such that for all $m > N$, $\sum_{i=1}^{m}{a_i} = 1/m < 1/N < \epsilon$. However, for infinitesimally small $\epsilon$, the resulting $N$ is infinitesimally large. This is fine using the second (classical) version of Weierstrass’s criterion, but not according to the first, since for all standard $N$, $1/N > \epsilon$, so no standard $N$ can satisfy the criterion. However, the two criteria are equivalent when written as sentences, i.e., when they have no free variables.
Note that the only difference between the two versions of Weierstrass’s criterion is that one of them features only standard variables, whereas the other features arbitrary values for all quantified variables. Using the shorthand $\forall^\text{st}$ and $\exists^\text{st}$ to introduce quantifiers for standard variables, the two versions of the criteria can be written as follows:
- $(\exists^\text{st} L) (\forall^\text{st} \epsilon) (\exists^\text{st} M) (\forall^\text{st} n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
- $(\exists L) (\forall \epsilon) (\exists M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
It is obvious that these two statements are extreme variants, and that there are other possibilities mixing the two types of quantifiers. Indeed, we verified with ACL2 that the following versions are also equivalent to the above:
- $(\exists^\text{st} L) (\forall^\text{st} \epsilon) (\exists^\text{st} M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
- $(\exists^\text{st} L) (\forall^\text{st} \epsilon) (\exists M) (\forall n)
(n > M \Rightarrow | \sum_{i=0}^{n}{a_i} - L| < \epsilon)$
The last two versions of Weierstrass’s criterion are useful, because they are easier to show equivalent to the typical non-standard criterion for convergence: $(\exists L) (\forall n) (large(n) \Rightarrow \sum_{i=0}^{n}{a_i} \approx L)$, i.e., for large values of $n$, $\sum_{i=0}^{n}{a_i}$ is infinitely close to $L$. This is the convergence criterion used in [@Gam:dissertation], for example, where power series are used to introduce functions such as $e^x$.
There is another statement of the non-standard convergence criterion that appears weaker: $$(\exists L) (\exists M) (large(M) \wedge (\forall n) (n > M
\Rightarrow \sum_{i=0}^{n}{a_i} \approx L)).$$ This version does not require that $\sum_{i=0}^{n}{a_i} $ is close to $L$ for all large $n$, only that this is true for $n$ larger than some large $M$. We have shown in ACL2 that these statements are in fact equivalent to Weierstrass’s criterion for convergence. In fact, since $\{a_n\}$ is a classical sequence, the value of $L$ is guaranteed to be standard, so we can replace $(\exists L)$ with $(\exists^\text{st} L)$ in both of the non-classical convergence criteria given above and still retain equivalence.
When the sequence is composed of non-negative numbers, we can make even stronger guarantees. Let $\{b_n\}$ be such a sequence, which we introduce into ACL2 as the constrained function `Ser1a`. All the previous results about `Ser1`—i.e., about $\{a_n\}$—apply to `Ser1a`, and we can carry over these proofs in ACL2 by using functional instantiation.
Using the non-standard criterion for convergence, we can easily see that if $\sum_{i=0}^{\infty}b_n$ converges, then $\sum_{i=0}^{N}{b_i}$ is not infinitely large, where $N$ is a fixed but arbitrary large integer[^2]. This simply follows from the facts that $\sum_{i=0}^{N}{b_i}\approx L$ and $L$ is standard.
The converse of this fact is also true: if $\sum_{i=0}^{N}{b_i}$ is not infinitely large, then $\sum_{i=0}^{\infty}b_n$ converges. This is harder to prove formally. The key idea is as follows. Since $\sum_{i=0}^{N}{b_i}$ is not infinitely large, $\sum_{i=0}^{N}{b_i}$ must be close to a unique standard real number, i.e., $\sum_{i=0}^{N}{b_i} \approx L$ for some standard $L$. $\sum b_i$ is monotonic, so for any standard $n$, $\sum_{i=0}^{n}{b_i} \le \sum_{i=0}^{N}{b_i}$. And since $L$ is the unique standard real number that is close to $\sum_{i=0}^{N}{b_i}$, we can conclude that $\sum_{i=0}^{n}{b_i} \le L$ for all standard $n$. Using the non-standard transfer principle, this is sufficient to conclude that $\sum_{i=0}^{n}{b_i} \le L$ for all $n$, not just the standard ones. Using monotonicity once more, it follows that whenever $n>N$, $\sum_{i=0}^{n}{b_i} \approx L$, which is precisely the (weak) non-standard convergence criterion above. Thus, the series $\sum_{i=0}^{\infty}{b_i}$ converges, according to any of the criteria above.
Similar results hold for divergence to positive infinity. Let $\{c_n\}$ be an arbitrary sequence. Weierstrass’s criterion is given by $(\forall^\text{st} B) (\exists^\text{st} M) (\forall^\text{st} n) (n > M \Rightarrow \sum_{i=0}^{n}{c_i} > B)$. As before, for classical $\{c_n\}$ this is equivalent to a criterion with quantifiers over all reals, not just the standard ones: $(\forall B) (\exists M) (\forall n) (n > M \Rightarrow \sum_{i=0}^{n}{c_i} > B)$. And just as before, other variants (with $B$ and $M$ standard or just $B$ standard) are also equivalent. Moreover, these are equivalent to the non-standard criterion for divergence to positive infinity, namely that $(\forall n) (large(n) \Rightarrow
large(\sum_{i=0}^{n}{c_i}))$. A seemingly weaker version of this criterion is also equivalent, where it is only necessary that $c_n$ is large for all $n$ beyond a given large integer: $(\exists M) (large(M) \wedge (\forall n) (n > M \Rightarrow large(\sum_{i=0}^{n}{c_i})))$. Finally, if the sequence $\{c_n\}$ consists of non-negative reals, then it is even easier to show divergence. It is only necessary to test whether $large(\sum_{i=0}^{N}{c_i})$ where $N$ is an arbitrary large integer, and as before we choose the ACL2 constant `i-large-integer` for this purpose.
Limits of Functions {#limits}
===================
In this section, we consider the notion of limits. In particular, we show that the following three notions are equivalent (for standard functions and parameters):
- The non-standard definition (for standard parameters $a$ and $L$): $$\lim_{x \rightarrow a} f(x) = L \Leftrightarrow
\left((\forall x) (x \approx a \wedge x \ne a \Rightarrow f(x) \approx L)\right).$$
- The traditional definition over the classical reals: $$\lim_{x \rightarrow a} f(x) = L \Leftrightarrow
\left((\forall^\text{st} \epsilon > 0) (\exists^\text{st} \delta>0) (\forall x) (0<|x-a|<\delta
\Rightarrow | f(x) - L| < \epsilon)\right).$$
- The traditional definition over the hyperreals: $$\lim_{x \rightarrow a} f(x) = L \Leftrightarrow
\left((\forall \epsilon > 0) (\exists \delta>0) (\forall x) (0<|x-a|<\delta
\Rightarrow | f(x) - L| < \epsilon)\right).$$
We begin by assuming the non-standard definition, which can be introduced in ACL2(r) by encapsulating the function $f$, its domain, and the limit function $L$, so that $\lim_{x \rightarrow a} f(x) =
L(a)$. The first step is to observe that $a\approx b$ is a shorthand notation for the condition that $|a-b|$ is infinitesimally small. Moreover, if $\epsilon>0$ is standard, then it must be (by definition) larger than any infinitesimally small number. Thus, we can prove that $$(\forall^\text{st} \epsilon > 0) \left((\forall x) (x \approx a \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)).$$ Similarly, if $\delta>0$ is infinitesimally small, then $|x - a| <
\delta$ implies that $x \approx a$. It follows then that $$(\forall^\text{st} \epsilon > 0)
(\forall \delta>0)
\left(small(\delta) \Rightarrow(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ It is an axiom of ACL2(r) that there exists a positive infinitesimal, namely `(/ (i-large-integer))`. Consequently, we can specialize the previous theorem with the constant $\delta_0$ (i.e., `(/ (i-large-integer))`). $$(\forall^\text{st} \epsilon > 0)
\left(0<\delta_0 \wedge small(\delta_0) \wedge (\forall x) \left(0< |x - a| < \delta_0 \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ Using ACL2 terminology, the specific number $\delta_0$ can be generalized to yield the following theorem: $$(\forall^\text{st} \epsilon > 0)
(\exists \delta>0)
\left(
(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ Note that the statement inside the $\forall^\text{st}$ is classical; i.e., it does not use any of the notions from NSA, such as standard, infinitesimally close, infinitesimally small, etc. Consequently, we can use the transfer principle so that the quantifier ranges over all reals instead of just the standard reals. This results in the traditional definition of limits over the hyperreals: $$(\forall \epsilon > 0)
(\exists \delta>0)
\left(
(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$ The transfer can also be used in the other direction. The introduction of the existential quantifier is done via `defun-sk`, and ACL2(r) introduces such quantifiers by creating a Skolem choice function $\delta(a,\epsilon)$ using `defchoose`. Since the criteria used to define this Skolem function are classical, `defchoose` introduces the Skolem function itself as classical. That means that when $a$ and $\epsilon$ are standard, so is $\delta(a, \epsilon)$. This observation is sufficient to show that $\lim_{x\rightarrow a} f(x) = L(a)$, using the traditional definition over the classical reals: $$(\forall^\text{st} \epsilon > 0)
(\exists^\text{st} \delta>0)
\left(
(\forall x) \left(0< |x - a| < \delta \wedge x \ne a
\Rightarrow |f(x) - L(a)| < \epsilon\right)\right).$$
It is worth noting that this last theorem is not obviously weaker or stronger than the previous one, where the quantifiers range over all reals, not just the standard ones. The reason is that the $\forall$ quantifier ranges over more values than $\forall^\text{st}$, so it would appear that using $\forall$ instead of $\forall^\text{st}$ yields a stronger result. However, this advantage is lost when one considers the $\exists$ quantifier, since $\exists^\text{st}$ gives an apparently stronger guarantee. In actual fact, the two statements are equivalent, since the transfer principle can be used to guarantee that the value guaranteed by $\exists$ can be safely assumed to be standard.
To complete the proof, we need to show that if $\lim_{x\rightarrow a}
f(x) = L(a)$, using the traditional definition over the standard reals, then $\lim_{x\rightarrow a} f(x) = L(a)$ using the non-standard definition. To do this, we introduce a new `encapsulate` where $f$ is constrained to have a limit using the traditional definition over the standard reals. We then proceed as follows. First, fix $\epsilon$ so that it is positive and standard. From the (standard real) definition of limit, it follows that $$(\exists^\text{st} \delta > 0) (\forall x) \left(0<|x-a|<\delta
\Rightarrow | f(x) - L(a)| < \epsilon\right).$$ Now suppose that $\delta_0$ is a positive, infinitesimally small number. It follows that $\delta_0 < \delta$ for any positive, standard $\delta$. In particular, this means that $$0 < \delta_0 \wedge (\forall x) \left(0<|x-a|<\delta_0
\Rightarrow | f(x) - L(a)| < \epsilon\right).$$ Since $\delta_0$ is an arbitrary positive infinitesimal, we can generalize it as follows: $$(\forall \delta > 0) \left(small(\delta) \Rightarrow (\forall x) \left(0<|x-a|<\delta
\Rightarrow | f(x) - L(a)| < \epsilon\right)\right).$$ Next, we remove the universal quantifier on $x$. This step does not have a dramatic impact on the mathematical statement, but it is more dramatic in ACL2(r), since it opens up a function introduced with `defun-sk`: $$(\forall \delta > 0) \left(small(\delta) \Rightarrow \left(0<|x-a|<\delta
\Rightarrow | f(x) - L(a)| < \epsilon\right)\right).$$ Recall that $x \approx a$ is a shorthand for $|x-a|$ is infinitesimally small. Thus, the theorem implies the following $$(\forall \delta > 0) \left(small(\delta) \Rightarrow \left(x \approx
a \wedge x \ne a \Rightarrow | f(x) - L(a)| <
\epsilon\right)\right).$$ At this point, the variable $\delta$ is unnecessary, so we are left with the following: $$x \approx a \wedge x \ne a \Rightarrow | f(x) - L(a)| <
\epsilon.$$ Now, recall that we fixed $\epsilon$ to be an arbitrary, positive, standard real. This means that what we have shown is actually the following: $$(\forall^\text{st} \epsilon) \left(x \approx a \wedge x \ne a \Rightarrow |f(x) - L(a)| <
\epsilon\right).$$ To complete the proof, it is only necessary to observe that if $|x-y|<\epsilon$ for all standard $\epsilon$, then $x \approx y$. We prove this in ACL2(r) by finding an explicit standard $\epsilon_0$ such that if $x \not\approx y$, then $|x-y| > \epsilon_0$. The details of that proof are tedious and not very elucidating, so we omit them from this discussion[^3]. Once that lemma is proved, however, it follows that $\lim_{x\rightarrow a} f(x) = L(a)$ using the non-standard definition: $$x \approx a \wedge x \ne a \Rightarrow f(x) \approx
L(a).$$
These results show that the three definitions of limit are indeed equivalent, at least when $f$ and $L$ are classical, and $a$ is standard.
Continuity of Functions {#continuity}
=======================
Now we consider the notion of continuity. The function $f$ is said to be continuous at $a$ if $\lim_{x \rightarrow a} f(x) = f(a)$. Since this uses the notion of limit, it is no surprise that there are three different characterizations which are equivalent (for standard functions and parameters):
- The non-standard definition (for standard parameter $a$): $$f \text{ is continuous at } a \Leftrightarrow
\left((\forall x) (x \approx a \wedge x \ne a \Rightarrow f(x) \approx f(a))\right).$$
- The traditional definition over the classical reals: $$f \text{ is continuous at } a \Leftrightarrow
\left((\forall^\text{st} \epsilon > 0) (\exists^\text{st} \delta>0) (\forall x) (0<|x-a|<\delta
\Rightarrow | f(x) - f(a)| < \epsilon)\right).$$
- The traditional definition over the hyperreals: $$f \text{ is continuous at } a \Leftrightarrow
\left((\forall \epsilon > 0) (\exists \delta>0) (\forall x) (0<|x-a|<\delta
\Rightarrow | f(x) - f(a)| < \epsilon)\right).$$
What this means is that the notion of continuity can be completely reduced to the notion of limits. In particular, the results from Section \[limits\] can be functionally instantiated to derive the results for continuity. It is only necessary to instantiate both functions $f(x)$ and $L(x)$ to the same function $f(x)$.
Differentiability of Functions {#differentiability}
==============================
Next, we consider differentiability. At first sight, it appears that we can also define differentiability in terms of limits. After all, $f'$ is the derivative of $f$ iff $$\lim_{\epsilon \rightarrow 0} \frac{f(x+\epsilon) - f(x)}{\epsilon} = f'(x).$$ The problem, however, is that the difference quotient on the left of the equation is a function of both $x$ and $\epsilon$, and having free variables complicates functional instantiation when non-classical functions are under consideration. So we chose to prove this result essentially from scratch, although the pattern is very similar to the equivalence of limits.
Before proceeding, however, it is worth noting one other equivalence of interest. The non-standard definition of differentiability is as follows: $$\begin{gathered}
standard(a) \wedge x_1 \approx a \wedge x_1 \ne a \wedge x_2 \approx a \wedge x_2 \ne a \Rightarrow \\
\qquad\left(\neg large\left(\frac{f(x_1) - f(a)}{x_1 - a}\right) \wedge
\frac{f(x_1) - f(a)}{x_1 - a} \approx \frac{f(x_2) - f(a)}{x_2 - a}\right).\end{gathered}$$ The form of this definition was chosen because it does not have a dependency on $f'$, so it can be applied to functions even when their derivative is unknown. However, when $f'$ is known, a simpler definition can be used: $$standard(a) \wedge x \approx a \wedge x \ne a \Rightarrow \left(\frac{f(x) - f(a)}{x - a} \approx f'(a)\right).$$ In fact, this latter form is the definition of differentiability that was used in [@ReGa:automatic-differentiator]. In that context, ACL2(r) was able to automatically define $f'$ from the definition of $f$, so $f'$ was always known and the simpler definition was appropriate.
So the first result we show is to relate the definitions of differentiable and derivative. To do so, we can begin with a differentiable function $f$ and define $f'$ (for standard $a$) as follows: $$f'(a) \equiv standard\text{ }part\left(\frac{f(a+\epsilon) - f(a)}{\epsilon}\right)$$ where $\epsilon$ is a fixed but arbitrary, positive, small real, e.g., `(/ (i-large-integer))`. By assumption, the difference quotient at $a$ is not large for $x_1 = a +
\epsilon$. Since $f'(a)$ is defined as the standard part of the difference quotient, it follows that it really is close to the difference quotient, so $f'$ really is the derivative of $f$.
Conversely, suppose $f'$ is the derivative of $f$. Since $f'$ is classical and $a$ is standard, it follows that $f'(a)$ is standard, and in particular it is not large. Therefore, for any $x_1$ such that $x_1 \approx a$ and $x_1 \ne a$, the difference quotient at $x_1$ must be close to $f'(a)$ (by definition of derivative). It follows then that the difference quotient at $x_1$ is not large, since it’s close to something that is not large. Moreover, since $\approx$ is transitive, if $x_2$ is also such that $x_2 \approx a$ and $x_2 \ne a$, then the difference quotients at $x_1$ and $x_2$ are both close to $f'(a)$, so they must also be close to each other. Thus, $f$ is differentiable according to the non-standard criterion. This simple argument is sufficient to combine the results of differentiability in ACL2(r) with the automatic differentiator described in [@ReGa:automatic-differentiator], making the automatic differentiator much more useful, since the notion of differentiability it uses is now consistent with the main definition in ACL2(r).
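
The definition of $f'$ as the standard part of a difference quotient has a familiar floating-point analogue, sketched below in Python (outside ACL2(r)); a small but finite `h` plays the role of the infinitesimal $\epsilon$, and the function shown is only an example.

    # Floating-point analogue of the definition above: with a small finite h
    # standing in for the infinitesimal epsilon, the difference quotient is
    # close to the derivative at ordinary (standard) points.
    def diff_quotient(f, a, h=1e-8):
        return (f(a + h) - f(a)) / h

    print(diff_quotient(lambda x: x**3, 2.0))   # close to 12.0 = 3*2.0**2
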
Next, we show that the non-standard definition of derivative is equivalent to the traditional definition (both for the hyperreals and for the standard reals). The proof is nearly identical to the corresponding proof about limits, so we omit it here.
Discussion {#discussion .unnumbered}
----------
There is a possible misconception that needs to be corrected. We have shown that the three different notions of differentiability are equivalent in principle. However, this is far from sufficient in practice.
To understand the problem, consider a function such as $x^n$, which may be represented in ACL2(r) as `(expt x n)`. In a real application of analysis, we may want to show that $f(x) = x-x^{2n}$ achieves its maximum value at $x=1/\sqrt[2n-1]{2n}$. ACL2(r) has the basic lemmas that are needed to do this:
- $\frac{d(x^n)}{dx} = n \cdot x^{n-1}$ (at least for standard $n$)
- Chain rule
- Extreme value theorem (EVT)
- Mean value theorem (MVT)
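Taken together (an informal sketch only, not the formal ACL2(r) development), these lemmas would support the intended calculation: by the power rule and linearity, $$f'(x) = 1 - 2n\,x^{2n-1},$$ which vanishes exactly at $x = (2n)^{-1/(2n-1)} = 1/\sqrt[2n-1]{2n}$; since $f(0) = f(1) = 0$ while $f(x) > 0$ on $(0,1)$, the maximum guaranteed by the EVT on $[0,1]$ is attained in the interior, and Rolle/MVT-style reasoning places it at that unique critical point.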
But these lemmas cannot be used directly. Consider the chain rule, for example. Its conclusion is about the differentiability of $f
\circ g$, and the notion of differentiability is the non-standard definition. What this means is that the functions $f$ and $g$ cannot be instantiated with pseudo-lambda expressions, so $f$ and $g$ must be unary, and that rules out $x^n$ which is formally a binary function, even if we think of it as unary because $n$ is fixed.
Moreover, suppose that we have a stronger theorem, namely that $$\frac{d(x^n)}{dx} = n \cdot x^{n-1}$$ for all $n$, not just the standard ones. It’s possible to prove this using induction and the hyperreal definition of differentiability (since it’s a purely classical definition, so induction can be used over all the naturals, not just the standard ones). Suppose we want to invoke the MVT on $x^n$ over some interval $[a,b]$. It is not possible to use the equivalence of the hyperreal and non-standard definitions. The reason, again, is that the non-standard definition is non-classical, so we cannot use pseudo-lambdas in functional instantiations. Even though the two definitions of differentiability are equivalent for arbitrary (unary) $f(x)$, they are not equivalent for the function $x^n$ (which is binary).
It may seem that this is an unnecessary limitation on the part of ACL2(r). But actually, it is simply a consequence of the definition. The non-standard definition says that the difference quotient of $f$ is close to $f'$ at standard points $x$. It says nothing about non-standard points. But when a binary function is considered, e.g., $x^n$, what should happen when $x$ is standard but $n$ is not? In general, the difference quotient need *not* be close to the derivative.
This fact can be seen quite vividly by fixing $x=2$ and $N$ an arbitrary (for now), large natural number. Is the derivative with respect to $x$ of $x^n$ close to the difference quotient when $x=2$ and $n=N$? The answer can be no, as the following derivation shows: $$\begin{aligned}
\frac{(2+\epsilon)^N - 2^N}{\epsilon} &= \frac{(2^N + N \epsilon
2^{N-1} + {N \choose 2} \epsilon^2 2^{N-2} + \cdots + \epsilon^N) - 2^N}{\epsilon}\\
& = \frac{N \epsilon 2^{N-1} + {N \choose 2} \epsilon^2 2^{N-2} + \cdots + \epsilon^N}{\epsilon}\\
& = \frac{\epsilon\left(N 2^{N-1} + {N \choose 2} \epsilon 2^{N-2} + \cdots + \epsilon^{N-1}\right)}{\epsilon}\\
& = N 2^{N-1} + {N \choose 2} \epsilon 2^{N-2} + \cdots + \epsilon^{N-1}\end{aligned}$$ All terms except the first have a factor of $\epsilon$, so if $N$ were limited, those terms would be infinitesimally small, and thus the derivative would be close to the difference quotient. But if $N$ is large, ${N \choose 2} = \frac{N(N-1)}{2}$ is also large. And if $N =
{\lceil 1/\epsilon \rceil}$, then ${N\choose2}\epsilon$ is roughly $N/2$, which is large. So the difference between the difference quotient and the derivative is arbitrarily large!
This shows that it is not reasonable to expect that we can convert from the traditional to the non-standard definition of derivative in all cases. Therefore, we cannot use previously proved results, such as the MVT, directly.
A little subterfuge resolves the practical problem. What must be done is to prove a new version of the MVT (and other useful theorems about differentiability) for functions that are differentiable according to the $\epsilon$-$\delta$ criterion for reals or hyperreals, as desired. Of course, the proofs follow directly from the earlier proofs. For instance, suppose that $f(x)$ is differentiable according to the hyperreal criterion. Then, we can use the equivalence theorems to show that $f(x)$ is differentiable according to the non-standard criterion. In turn, this means that we can prove the MVT for $f(x)$ using functional instantiation. Now, the MVT is a classical statement, so we instantiate it functionally with pseudo-lambda expressions. E.g., we can now use the MVT on $f(x) \rightarrow
(\lambda (x) x^n)$. So even though we cannot say that $x^n$ satisfies the non-standard criterion for differentiability, we can still use the practical results of differentiability, but only after proving analogues of these theorems (e.g., IVT, MVT, etc.) for the classical versions of differentiability. The proof of these theorems is a straightforward functional instantiation of the original theorems. We have done this for the key lemmas about differentiation (e.g., MVT, EVT, Rolle’s Theorem, derivative composition rules, chain rule, derivative of inverse functions). We have also done this for some of the other equivalences, e.g., the Intermediate Value Theorem for continuous functions.
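For instance, once a classical version of the MVT has been functionally instantiated with the pseudo-lambda $(\lambda (x) x^n)$, one obtains, for every natural $n \ge 1$ (standard or not) and all reals $a < b$, some $c \in (a,b)$ with $$\frac{b^n - a^n}{b - a} = n\,c^{\,n-1},$$ a conclusion that the non-standard differentiability criterion alone does not deliver when $n$ is non-standard.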
Integrability of Functions {#integrability}
==========================
The theory of integration in ACL2(r) was first developed in [@Kau:ftc], which describes a proof of a version of the Fundamental Theorem of Calculus (FTC). The version of the FTC presented there is sometimes called the First Fundamental Theorem of Calculus, and it states that if $f$ is integrable, then a function $g$ can be defined as $g(x) = \int_{0}^{x}{f(t) dt}$, and that $g'(x) =
f(x)$. As part of this proof effort, we redid the proof in [@Kau:ftc], and generalized the result to what is sometimes called the Second Fundamental Theorem of Calculus. This more familiar form says that if $f'(x)$ is continuous on $[a,b]$, then $\int_{a}^{b}{f'(x) dx} = f(b) - f(a)$.
The integral formalized in [@Kau:ftc] is the Riemann integral, and the non-standard version of integrability is as follows: $$\int_{a}^{b}{f(x) dx} = L \Leftrightarrow (\forall P)
\left(P \text{ is a partition of } [a, b] \wedge small(||P||) \Rightarrow \Sigma_{x_i \in P}
\left(f(x_i) (x_{i} - x_{i-1})\right) \approx L \right)$$ $P$ is a monotonically increasing partition of $[a,b]$ if $P$ is given by a list $P = [ x_1,
x_2, \dots, x_n]$ with $x_1=a$, $x_n=b$, and $x_i < x_{i+1}$ for each $i$. The term $||P||$ denotes the maximum value of $x_{i+1} - x_{i}$ in the partition $P$, i.e., its mesh.
The traditional definition uses limits instead of the notion of infinitesimally close. It can be written as follows: $$\int_{a}^{b}{f(x) dx} = L \Leftrightarrow
\lim_{||P|| \rightarrow 0}
\left(\Sigma_{x_i \in P}
\left(f(x_i) (x_{i} - x_{i-1})\right) \right) = L.$$ The notion of limit is unusual here, because what approaches 0 is $||P||$. Many partitions can have the same value of $||P||$, so this limit ranges over all such partitions at the same time.
Opening up the definition of limits, integrals can be expressed as follows: $$\begin{gathered}
\int_{a}^{b}{f(x) dx} = L \Leftrightarrow \\
\qquad (\forall \epsilon>0)(\exists
\delta>0)(\forall P) \\
\qquad\qquad
\left(P \text{ is a partition of } [a, b] \wedge ||P|| < \delta \Rightarrow \left|\Sigma_{x_i \in P}
\left(f(x_i) (x_{i} - x_{i-1})\right) - L\right| < \epsilon\right).\end{gathered}$$ Once integrals are viewed in this way, the remainder of the proof is clear. Specifically, it follows the same line of reasoning as in Section \[limits\]. First, the $\delta$ that exists depends on $a$, $b$, and $\epsilon$, so it is standard when those are standard. Second, since there is a standard $\delta$ that is sufficient, any infinitesimal can take the place of $\delta$, and then the condition $||P||<\delta$ can be recast as $small(||P||)$. Finally, since the Riemann sum is within $\epsilon$ of $L$, for an arbitrary, positive, standard $\epsilon$, it must be that the Riemann sum is infinitesimally close to $L$. So the two definitions are, in fact, equivalent.
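As a toy illustration of both criteria (not part of the formal development, and restricted purely for simplicity to uniform partitions), take $f(x) = x$ on $[0,1]$ and the partition $x_i = i/n$, so that $||P|| = 1/n$. The Riemann sum is $$\Sigma_{x_i \in P}\, f(x_i)(x_i - x_{i-1}) = \sum_{i=1}^{n} \frac{i}{n}\cdot\frac 1n = \frac{n+1}{2n} = \frac 12 + \frac 1{2n},$$ which is within any standard $\epsilon>0$ of $\frac 12$ as soon as $||P|| < \delta = \epsilon$, and which is infinitesimally close to $\frac 12$ whenever $||P||$ is small; so both definitions assign $\int_0^1 x\, dx = \frac 12$.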
Conclusions
===========
In this paper, we showed how the non-standard definitions of traditional concepts from analysis are in fact equivalent to the traditional $\epsilon$-$\delta$ definitions. The results are especially important in ACL2(r) because the non-standard definitions feature non-classical notions, such as “infinitely close” and “infinitely small.” Consequently, they are limited in the use of induction and functional instantiation. However, the traditional notions are (by definition) classical, so they are unencumbered by such limitations.
This presents an interesting dilemma. In our experience, analysis style proofs are much easier to do and automate using non-standard analysis. However, *using* those results in subsequent proof attempts is much easier to do with the traditional (i.e., classical) statements. The distinction we’re making is between *proving* the correctness of Taylor’s Theorem, say, and actually *using* Taylor’s Theorem in a larger verification effort. For example, the formalization of Taylor’s Theorem in [@SaGa:sqrt] took extreme care to push free variables (including what were really summation indexes for the series) all the way into the original `encapsulate` introducing the function to be approximated. However, now that the equivalences are proved, a more elegant approach can be followed: First, prove a “clean” version of Taylor’s Theorem using NSA, then use that result to show that Taylor’s Theorem also holds using the traditional definition of derivative. The “traditional” version of Taylor’s Theorem would then be used with no restrictions during functional instantiation, so free variables would no longer present a problem. We plan to pursue this idea for Taylor’s Theorem in the near future, as part of a comprehensive verification effort into the implementation of hardware algorithms for square root and various trigonometric and exponential functions.
[^1]: Readers who attended the ACL2 Workshop in 2013 will recognize many of the results in this section, because they were presented in a Rump Session there.
[^2]: The ACL2 constant `(i-large-integer)` is often used to denote an otherwise unspecified large integer, and that is what we use in this case.
[^3]: The interested reader can consult the definition of `standard-lower-bound-of-diff` which produces the constant $\epsilon_0$ mentioned above, and the lemmas `standards-are-in-order-2`, `standards-are-in-order`, `rlfn-classic-has-limits-step-3`, and the more trivial lemmas leading up to the main theorem `rlfn-classical-has-a-limit-using-nonstandard-criterion`.
---
abstract: 'In this paper, we construct an adiabatic invariant for a large 1–$d$ lattice of particles, the so-called Klein–Gordon lattice. The time evolution of such a quantity is bounded by a stretched exponential as the perturbation parameters tend to zero. At variance with the results available in the literature, our result holds uniformly in the thermodynamic limit. The proof consists of two steps: first, one uses techniques of Hamiltonian perturbation theory to construct a formal adiabatic invariant; second, one uses probabilistic methods to show that, with large probability, the adiabatic invariant is approximately constant. As a corollary, we can give a lower bound on the relaxation time for the considered system, through estimates on the autocorrelation of the adiabatic invariant.'
author:
- 'Andrea Carati[^1]'
- Alberto Mario Maiocchi
title: Exponentially long stability times for a nonlinear lattice in the thermodynamic limit
---
Introduction
============
One of the open problems of Hamiltonian perturbation theory is how to extend to infinite dimensional systems, at a finite specific energy (or temperature), the results known for systems with a finite number of degrees of freedom. Indeed there exist results both for infinite systems such as partial differential equations (see, for example, [@pde1; @pde2]) and for infinite lattice systems (see [@froehlich; @enfinita]), but only for a finite total energy of the system, i.e., at zero temperature.
In the present paper we provide perturbation estimates on the so called Klein Gordon lattice in the thermodynamic limit, at a finite temperature, by controlling, in place of the usual $L^\infty$ norm, the $L^2$ norm relative to the Gibbs measure. If we denote by $H$ the Hamiltonian of the system and by $\mathcal{M}$ the corresponding phase space, the Gibbs measure is defined by $$\label{eq:gibbs}
\mu({\mathrm{d}}x){\buildrel {\rm def} \over {=} }\frac{\exp(-\beta H(x))}{\mathcal{Z}(\beta)}{\mathrm{d}}x\ ,$$ where $ \mathcal{Z}(\beta) {\buildrel {\rm def} \over {=} }\int_\mathcal{M} \exp(-\beta H(x)){\mathrm{d}}x $ is the partition function, and $\beta>0$ the inverse temperature.
We construct an adiabatic invariant whose time derivative has an $L^2$ norm exponentially small in the perturbation parameters (see Theorem \[teor:main\]). The construction of the adiabatic invariant is standard (see [@giorgilli]), but the estimate of its time derivative in the $L^2$ norm involves some probabilistic techniques, which have been developed in the frame of statistical mechanics. In fact, since $L^2$ is not a Banach algebra, the usual scheme of perturbation estimates cannot be implemented. So the use of the algebra property is replaced here by a control of the decay of spatial correlations between the sites of the lattice, making use of techniques introduced by Dobrushin (see [@do1]). In particular, we are able to show that, for lattices in any dimension with finite range interaction (i.e., in which each particle interacts only with a finite number of neighbouring ones), the spatial correlations decay exponentially fast with the distance. This also requires an estimate on the marginal probability densities induced by the measure $\mu$ on subsystems of finite size: this is done by adapting to lattices the techniques introduced by Bogolyubov et al. (see [@bogoljubov]) in interacting gas theory.
The paper is organized as follows. The main result on the considered model, namely the construction of an adiabatic invariant in the thermodynamic limit, is stated in Section \[sez:stab\] (Theorem \[teor:main\]), together with two corollaries concerning a control on the time evolution of the adiabatic invariant and a lower bound on its time autocorrelation. Then, in Section \[sez:schema\], we present the scheme of the proof of Theorem \[teor:main\], whereas the fundamental ingredients of the proof are separately given in the subsequent three sections. The first one (Section \[sez:telchi\]) concerns perturbation techniques and deals with the formal construction of the adiabatic invariant. The other two sections have a probabilistic nature: the estimate of the marginal probability is given in Section \[sez:marginale\] together with the estimate of the norm of the time derivative of the adiabatic invariant. In Section \[sez:condizionata\], we state Theorem \[teor:correlazioni\_generico\], in which the estimate of the spatial correlations is given, which enables us to give an estimate on the variance of the adiabatic invariant. The proof of Theorem \[teor:correlazioni\_generico\] requires the application of a technique due to Dobrushin and Pechersky (see [@do2]), and is reported in Appendix \[app:dim\_correlazioni\]. In Section \[sez:definizione\] we discuss how a lower bound on the time autocorrelation provides information on the relaxation time to equilibrium. The conclusions follow in Section \[sez:conclusione\]. Most of the proofs of a more technical character are given in two appendices.
Stability estimate in the Klein–Gordon lattice {#sez:stab}
=============================================
The so-called Klein–Gordon lattice is studied in the literature as a prototype of several models (see [@parisi1]–[@cgs]). From a physical point of view, it mimics a chain of particles, each free to move about a site of a lattice, subjected both to an on–site nonlinear restoring force and to a linear coupling with its nearest neighbours. It can also be seen as a discretization of the one–dimensional $\Phi^4$ model, which plays a major role in field theory.
The Hamiltonian of such a system, in suitably rescaled variables, can be written as $H= H_0+H_1$, in which $$\label{eq:ham}
H_0{\buildrel {\rm def} \over {=} }\sum_{i=1}^N \omega\left(\frac{p_i^2}2+\frac{q_i^2}2\right)
\quad \mbox{and }H_1{\buildrel {\rm def} \over {=} }\ \eps\sum_{i=1}^{N-1}
\frac{q_iq_{i+1}}{\omega} +\sum_{i=1}^N\frac{q_i^4}{4\omega^2}\ ,$$ where $p=(p_1,\ldots,p_N)$ and $q=(q_1,\ldots,q_N)$ are canonically conjugated variables in the phase space $\mathcal M$, and $\eps$ is a positive parameter, while $\omega$ is defined by $\omega{\buildrel {\rm def} \over {=} }\sqrt{1+2\eps}$. Since we do not want to face the problem of small divisors, which typically arises in perturbation theory, we confine ourselves in this paper to the case of small $\eps$, i.e., of small coupling between the sites.
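For later orientation (this is just a routine application of Hamilton's equations to (\[eq:ham\])), the equations of motion read, for the interior sites, $$\dot q_i = \omega p_i\ , \qquad \dot p_i = -\omega q_i - \frac{\eps}{\omega}\left(q_{i-1}+q_{i+1}\right) - \frac{q_i^3}{\omega^2}\ ,$$ with the obvious modifications at the chain ends $i=1$ and $i=N$.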
We aim at showing that, for small enough $\eps$ and sufficiently large $\beta$, there exists an adiabatic invariant for $H$ (see Theorem \[teor:main\] below). To come to a precise statement, we need some preliminaries.
As usual, $\langle X \rangle$ will denote the mean value of a dynamical variable $X$ with respect to the Gibbs measure $\mu$ relative to the given Hamiltonian $H$ at a given $\beta$, i.e., $$\langle X \rangle {\buildrel {\rm def} \over {=} }\int_{\mathcal M} X(x) \mu({\mathrm{d}}x)\ .$$ The $L^2(\mathcal M,\mu)$ norm of $X$ is then $\left\|X\right\|{\buildrel {\rm def} \over {=} }\sqrt{\langle X^2\rangle}$ and its variance $\sigma^2_X$ is defined according to $\sigma_X^2{\buildrel {\rm def} \over {=} }\langle X^2\rangle-\left\langle
X\right \rangle^2$. Finally, we also recall that the correlation coefficient of two dynamical variables $X$ and $Y$ is $$\label{eq:coeff_correlazione}
\rho_{X,Y}{\buildrel {\rm def} \over {=} }\frac{\langle XY\rangle -\langle X\rangle \langle
Y\rangle} {\sigma_X \sigma_Y}\ ,$$ and that $X$ and $Y$ are said to be *uncorrelated* if $\rho_{X,Y}=0$.
We can now state our main theorem, in which $[\cdot,\cdot]$ denotes Poisson bracket,
\[teor:main\] There exist positive constants $\eps^*$, $\kappa$, independent of $N$, such that if $\eps<\eps^*$ and $\beta>\eps^{-1}$, then there exists a polynomial function $\bar X$ uncorrelated with $H$ such that $$\label{eq:tesi}
\frac {\|\, [\bar X,H]\, \|}{\sigma_{\bar X}} \le
\exp\left[ -\left( \frac
1{\kappa \left(\eps+\beta^{-1}\right)}\right)^{1/4}\right] {\buildrel {\rm def} \over {=} }\frac 1{\bar t}\ .$$
[1ex ]{}**Remark.** We require $\bar X$ to be uncorrelated with $H$ in order that our adiabatic invariant be sufficiently different from the Hamiltonian, which is obviously a constant of motion.
[1ex ]{}Before the proof, we point out immediately that this theorem has two relevant (and strictly related) consequences on the time evolution of the dynamical variable $\bar X$. They will make clear in which sense $\bar t$ at the r.h.s. of (\[eq:tesi\]) can be seen as a stability time. The first consequence (Corollary \[cor:prob\]) concerns the probability $\mathbf P$ that the value of the variable $\bar X$ changes significantly from its original value. Indeed, it entails that the probability of such a change is practically negligible if $t<\bar t$. The second consequence (Corollary \[cor:autocorr\]) is a lower bound on the time autocorrelation of $\bar X$. We take here as definition of time autocorrelation of a dynamical variable the following one: $$C_X(t){\buildrel {\rm def} \over {=} }\rho_{X_t,X}\ ,$$ where $X_t(x){\buildrel {\rm def} \over {=} }X(\Phi^t x)$, $\Phi^t$ is the flow generated by $H$ and $\rho$ the correlation coefficient defined by (\[eq:coeff\_correlazione\]). We have chosen to rescale the usual definition, dividing it by $\sigma^2_X$, because the variance of $X$ is the natural scale of its autocorrelation, since $C_X(0)=1$ and the inequality $|C_X(t)|\le 1$ holds for any $t$.
We report here both results, which follow from Theorem \[teor:main\] and from the simple estimate $\| X_t - X\|^2 \le t^2 \|
[X,H] \|^2 $. The latter can be found in the proof of Theorem 1 of paper [@carati], and is nevertheless reported in Section \[sez:definizione\] in order to make that section self-contained.
\[cor:prob\] In the hypotheses of Theorem \[teor:main\], for any $\lambda>0$ one has $$\mathbf P\left(\left|\bar X_t -\bar X\right|\ge \lambda\,
\sigma_{\bar X}\right) \le \frac 1{\lambda^2}\left(\frac t{\bar t}
\right)^2\ .$$
\[cor:autocorr\] In the hypotheses of Theorem \[teor:main\], one has $$C_{\bar X}(t) \ge 1-\frac 12 \left(\frac
t{\bar t}\right)^2\ .$$
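For the reader's convenience, here is a sketch of how both corollaries follow; it uses nothing beyond the Chebyshev inequality, the invariance of $\mu$ under the flow $\Phi^t$, and the estimate $\| \bar X_t - \bar X\|^2 \le t^2 \|[\bar X,H]\|^2$ quoted above. For Corollary \[cor:prob\] one has $$\mathbf P\left(\left|\bar X_t -\bar X\right|\ge \lambda\, \sigma_{\bar X}\right) \le \frac{\|\bar X_t-\bar X\|^2}{\lambda^2\sigma^2_{\bar X}} \le \frac{t^2\,\|[\bar X,H]\|^2}{\lambda^2\sigma^2_{\bar X}} \le \frac 1{\lambda^2}\left(\frac t{\bar t}\right)^2\ ,$$ while for Corollary \[cor:autocorr\] the invariance of $\mu$ gives $\langle \bar X_t^2\rangle = \langle \bar X^2\rangle$ and $\langle \bar X_t\rangle = \langle \bar X\rangle$, so that $$1 - C_{\bar X}(t) = \frac{\|\bar X_t - \bar X\|^2}{2\,\sigma^2_{\bar X}} \le \frac 12 \left(\frac t{\bar t}\right)^2\ .$$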
[1ex ]{}**Remark.** We observe that the notion of stability time for dynamical systems is not unambiguously defined. In Section \[sez:definizione\] we will provide a definition of “relaxation time” in terms of time autocorrelation of dynamical variables, which seems to us significant from a physical point of view. With such a definition, Theorem \[teor:main\] turns out to mean that the “relaxation time” is exponentially long in the perturbation parameters.
Scheme of the proof of Theorem \[teor:main\] {#sez:schema}
============================================
First we use a variant of the classical construction scheme of approximate integrals of motion (see [@cherry]) in order to perform the construction of the adiabatic invariant as a formal power series. Precisely, we use the scheme developed by Giorgilli and Galgani for a direct construction of integrals of motion (see [@giorgilli] and Section \[sez:telchi\] for the actual implementation). It is well known that the series thus obtained are, in general, divergent, so that the standard procedure consists in using as approximate integral of motion a truncation of the series. Denoting by $Y_n$ the series truncated at order $2n+2$, it turns out that it has the form $$\label{eq:intprim}
Y_n{\buildrel {\rm def} \over {=} }H_0+ \sum_{j=1}^n P_j(p,q)\ ,$$ where $P_j$ are suitable polynomials. In order to make such a quantity uncorrelated with $H$, it is convenient to consider $X_n{\buildrel {\rm def} \over {=} }Y_n-H$ instead of $Y_n$ itself.
In order to make the construction rigorous, one has to add rigorous estimates of the variance $\sigma^2_{X_n}$ of $X_n$, and of the $L^2$ norm of $[X_n,H]$. The first step to get such estimates consists in controlling the structure of the polynomials $P_j$ (which, in particular, contain only finite range couplings) and the size of their coefficients. This is done recursively, by a variant of the technique of the paper [@giorg], which is implemented in Section \[sez:telchi\] (see Lemma \[lemma:coeff\_nostro\_caso\]). We emphasize that, at variance with the original paper, we obtain here estimates independent of the number of degrees of freedom.
Then, due to the structure of the polynomials $P_j$, to get the needed $L^2$ estimates one has to compute the $L^2$ norm with respect to the Gibbs measure of the monomials appearing in $P_j$. The key step for this computation consists in giving an upper bound independent of $N$ to the marginal probabilities of the Gibbs measure. Such an estimate is obtained by adapting techniques developed by Bogolyubov and Ruelle (see [@bogoljubov] and [@rue]) and is reported in Lemma \[lemma:marginale\] of Section \[sez:marginale\]. One thus obtains the following bound $$\label{eq:prima}
\left\| \dot X_n \right\| \le \sqrt{N}\left(\sqrt 2\beta\right )^{-1}
\left(n!\right)^4
\left(\beta^{-1}+\eps\right)^n \kappa_1^n \ ,$$ which is valid for a suitable constant $\kappa_1>0$, provided $\eps$ is small enough and $\beta$ large enough (see Lemma \[lemma:stima\_P\_punto\] of Section \[sez:marginale\]).
We emphasize the presence of the factor $\sqrt N$ and that $\kappa_1$ is independent of $N$. It will be shown that actually the l.h.s. of (\[eq:prima\]) is of order $\sqrt N$ even though it is the square root of a sum of $O(N^2)$ terms. This is due to the fact that most of the terms have zero mean, because the measure is even in $p$ and furthermore the $p$’s are independent variables.
To get the Theorem, one also needs an estimate of $\sigma_{X_n}$ from below. This is obtained in two steps, which are based on the remark that $\sigma_{X_n}\ge \sigma_{X_1} - \sigma_{\mathcal R}$, where $\mathcal
R{\buildrel {\rm def} \over {=} }X_n-X_1$ is a remainder.
First we compute explicitly $X_1$ and estimate from below $\sigma_{X_1}$, obtaining a bound proportional to $\sqrt N$. Then, we estimate from above $\sigma_{\mathcal
R}$. Precisely, we use techniques introduced by Dobrushin in papers [@do1; @do2] to show that $\sigma_{\mathcal
R}$ behaves as $\sqrt N$ (see Lemma \[lemma:stima\_P\] of Section \[sez:condizionata\]). We remark that this is the analogue of the law of large numbers. We recall that Dobrushin’s techniques enable us to show that spatial correlations between variables pertaining to different lattice sites decrease exponentially with the distance between the sites, so that the monomials appearing in $P_j$ are essentially independent, and the variance of $P_n$ is essentially the sum of the variances of each monomial. This leads to Lemma \[lemma:stima\_correlazione\] of Section \[sez:condizionata\], which shows that, for small enough $\eps$ and large enough $\beta$, for $n<\kappa_2^{-1/4}(\eps+\beta^{-1})^{-1/4}$ there holds $$\label{eq:seconda}
\sigma_{X_n}\ge \sqrt N(\eps+\beta^{-1})/(8 \beta)\ ,$$ where again $\kappa_2$ is a positive constant.
Then one finds the optimal $n$, call it $\bar n$, such that the ratio $\|[X_{\bar n},H]\|/\sigma_{X_{\bar n}}$ takes the minimal value. Notice that, as $n$ belongs to a bounded domain, the minimum can be attained at the boundary. The optimization is immediately done, once the estimates are given both for the $L^2$ norm $\|[X_n,H]\|$ of the time–derivative of the quasi integral of motion $X_n$, and for its variance $\sigma_{X_n}^2$. Then, the function $\bar X$ satisfying (\[eq:tesi\]) of Theorem \[teor:main\] is simply given by $\bar X{\buildrel {\rm def} \over {=} }X_{\bar n}-H\rho_{X_{\bar
n},H}\,\sigma_{X_{\bar n}}/\sigma_H$. The identity $\sigma^2_{\bar X} =(1-\rho^2_{X_{\bar
n},H}) \sigma^2_{X_{\bar n}}$, together with the upper bound to $\rho_{X_{\bar
n},H}$ given by Lemma \[lemma:stima\_correlazione\], enables us to extend all conclusions from $X_{\bar n}$ to $\bar X$.
Construction of the adiabatic invariant {#sez:telchi}
=======================================
Following [@giorgilli], we construct the formal integral of motion by looking for a sequence of polynomials $\chi=
\left\{\chi_s\right\}_{s\ge 1}$ such that $$\label{eq:formale}
\left[H,T_\chi H_0\right]=0\quad\mbox{at any order,}$$ where $T_\chi$ is a linear operator, whose action on a polynomial function $f$ is formally defined by[^2] $$\label{eq:telchi}
T_\chi f{\buildrel {\rm def} \over {=} }\sum_{s\ge 0} \left(T_\chi f\right)_s\ , \!\quad\! \mbox{with }
\left(T_\chi f \right)_0{\buildrel {\rm def} \over {=} }f\ , \!\quad\! \left(T_\chi f\right)_s{\buildrel {\rm def} \over {=} }\sum_{j=1}^s \frac js [\chi_j,\left(T_\chi f\right)_{s-j}]\ .$$ Inserting the expansion of $T_{\chi} H_0$ and $H$ in (\[eq:formale\]) and equating terms of equal order one gets the system $$\label{eq:formale1}
\Theta_0=H_0\ ,\quad \Theta_s - L_0\chi_s=\Psi_s\quad\mbox{for }s>0\ ,$$ where $$\label{eq:determinazione_Psi}
\begin{split}
\Psi_1&{\buildrel {\rm def} \over {=} }H_1\ , \\
\Psi_s&{\buildrel {\rm def} \over {=} }- \sum_{l=1}^{s-1}\frac ls \left[\chi_l, \left(T_\chi
H_0\right)_{s-l}\right] -
\sum_{l=1}^{s-1} \left(T_\chi {\Theta}_l\right)_{s-l}\quad \mbox{for }s\ge
2\ ,
\end{split}$$ $L_0{\buildrel {\rm def} \over {=} }[H_0,\cdot]$ is the homological operator and (\[eq:formale1\]) has to be read as an equation for the unknowns $\chi_s$, $\Theta_s$, which have to belong, respectively, to the range and to the kernel of the operator $L_0$. By defining the projections $\Pi_\mathcal{N}$, $\Pi_\mathcal{R}$, respectively on the kernel $\mathcal{N}$ and on the range $\mathcal{R}$ of $L_0$, one thus determines recursively $$\label{eq:determinazione_chi}
\chi_s=-L_0^{-1}\Pi_\mathcal{R}\Psi_s\ , \quad {\Theta}_s=\Pi_\mathcal{N}
\Psi_s\quad \mbox{for } s\ge 1\ .$$ The approximate integral of motion is then obtained by truncating the sequence $T_\chi H_0$ at a suitable order.
We have to estimate the action of the operator $T_\chi$ on the class of functions $f(p,q)$ we are interested in, in a norm which is well suited for our problem. Such a norm is defined as follows. Let $\mathcal{H}^{r,i}_s$ denote the class of monomials[^3] $p^kq^l$ of degree $s$, i.e., with $|k|+|l|=s$, which furthermore depend on sites that are at most $r$ lattice steps away from $i$, namely such that $k_j=l_j=0$ if $|i-j|\ge r$. We denote by $\mathcal{P}_{s,r}$ the set of all homogeneous polynomials of degree $s$ that can be decomposed as $$\label{eq:tipo_di_funzioni}
f=\sum_{i=1}^N\sum_{j=1}^{\left|\mathcal{H}^{r,i}_s\right|}
c_{ij}f_{ij}\ ,$$ with $f_{ij}\in \mathcal{H}^{r,i}_s$, where $\left|\mathcal{H}^{r,i}_s\right|$ is the cardinality of $\mathcal{H}^{r,i}_s$. To $f\in \mathcal P_{s,r}$ we associate a norm,[^4] defined by $$\label{eq:norma_coeff}
\left\|f\right\|_+{\buildrel {\rm def} \over {=} }\min\left\{ \max_{i\in\{1,\ldots,N\}} \sum_{j=1}^{
\left|\mathcal{H}^{r,i}_s\right|} |c_{ij}|\right\} \ ,$$ where the minimum is taken over all possible decompositions of $f$.
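For instance (purely as an illustration of the definitions), the homogeneous polynomial $f = \sum_{i=1}^{N-1} q_i q_{i+1}$, which up to the factor $\eps/\omega$ is the coupling part of $H_1$, belongs to $\mathcal P_{2,2}$: each monomial $q_i q_{i+1}$ lies in $\mathcal{H}^{2,i}_2$, and the decomposition with a single coefficient $c_{i1}=1$ per site shows that $\left\|f\right\|_+\le 1$.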
Now, we can estimate the action of $T_\chi$ on any function $f\in
\mathcal{P}_{s,r}$ according to the following Lemma, which is proved in Appendix \[app:coeff\].
\[lemma:coeff\] Let $T_\chi$ be the operator defined by (\[eq:telchi\]), relative to the sequence $\chi=\{\chi_s\}_{s \ge 0}$ which solves the system of equations (\[eq:determinazione\_chi\]–\[eq:determinazione\_Psi\]) for the Hamiltonian (\[eq:ham\]). Then, for any $f(p,q)\in
\mathcal{P}_{2s+2,r}$, one has $(T_\chi f)_n= \sum_{l=0}^n f_n^{(s+l)} ,
$ where $f_n^{(s+l)}\in \mathcal{P}_{2s+2l+2,r+n-l}$ and $$\label{eq:induzione_T_chi_2}
\left\|f_n^{(s+l)}\right\|_+\le 2^{6n} 2^{5(n-1)}2^{2s+l+2}
n!\frac{(n+r)!}{r!}\frac{(n+s)!}{s!}
\frac{n!}{l!(n-l)!}\eps^{n-l}\left\|f\right\|_+\ .$$
Lemma \[lemma:coeff\_nostro\_caso\] below will give bounds to the adiabatic invariant obtained by truncating at a finite order the formal power series which defines $T_\chi H_0$. In particular, the adiabatic invariant will simply be $Y_n= \sum_{s=0}^n \left( T_\chi H_0 \right)_s ,$ so that the polynomials $P_j$ appearing at the r.h.s. of (\[eq:intprim\]) of Theorem \[teor:main\] are $$\label{eq:definizione_P_n}
P_j {\buildrel {\rm def} \over {=} }\left(T_\chi H_0\right)_j
\ ,$$ while the quantity we will focus on will be $$\label{eq:definizione_X_n}
X_n{\buildrel {\rm def} \over {=} }Y_n-H=-{\Theta}_1+\sum_{j=2}^n P_j\ .$$ The time derivative of $X_n$ is then given by $$\label{eq:derivata_troncata}
\dot X_n{\buildrel {\rm def} \over {=} }\left[X_n,H\right]=
\left[P_n,H_1\right]\ ,$$ which is a polynomial of degree $2n+4$. In order eventually to obtain the $L^2$–norm estimates, it is useful to take into account the parity properties of the operator $T_\chi$ with respect to the canonical coordinates $p$. So we define $\mathcal{P}^+$ as the space of polynomials of even order in $p$, and $\mathcal{P}^-$ as the space of those of odd order in $p$.
Finally, we can state
\[lemma:coeff\_nostro\_caso\] For the adiabatic invariant constructed through $T_\chi H_0$ (see (\[eq:definizione\_P\_n\])) one can write $$\label{eq:scomposizione}
P_n=\sum_{l=0}^n\frac{n!}{l!(n-l)!}\eps^{n-l} P_n^{(l)}\ ,$$ where $ P_n^{(l)}\in\mathcal {P}^+ \cap\mathcal{P}_{2l+2,n-l}$ and $$\label{eq:norma_coeff_P}
\left\|P_n^{(l)}\right\|_+\le \mathcal{D}_n\ ,\quad \mbox{with }
\mathcal{D}_n{\buildrel {\rm def} \over {=} }2^{12n}
\left(n!\right)^3\ .$$ Furthermore, one has $$\left[X_n,H\right]=\sum_{l=0}^{n+1}\frac{(n+1)!}{l!(n+1-l)!}\eps^{n+1-l} \dot
X_n^{(l)}\ ,$$ with $\dot X_n^{(l)}\in\mathcal{P}^-\cap \mathcal{P}_{2l+2,n+1-l}$ and $$\label{eq:norma_coeff_P_punto}
\left\|\dot X_n^{(l)}\right\|_+\le \mathcal{C}_n\ , \quad\mbox{with }
\mathcal{C}_n {\buildrel {\rm def} \over {=} }48\cdot 2^{12n}
n!\left((n+1)!\right)^2 \ .$$
[1ex ]{}**Proof.** The proof of the upper bounds is mainly based on the application of Lemma \[lemma:coeff\] to the function $H_0\in\mathcal P_{2,0}$, together with the simple bound $\left\| H_0\right\|_+= \omega\le 2$, which holds for small enough $\eps$. This proves equations (\[eq:scomposizione\]), (\[eq:norma\_coeff\_P\]). Then, we use the fact that $[X_n,H]=[P_n,H_1]$ and the upper bound to the norm of the Poisson brackets of two variables provided by Lemma \[lemma:par\_Poisson\] of Appendix \[app:coeff\]. This gives equation (\[eq:norma\_coeff\_P\_punto\]).
The parity properties are obtained by observing that $[\mathcal
P^\pm,\mathcal P^\pm]\subset \mathcal P^+$ and $[\mathcal
P^\pm,\mathcal P^\mp]\subset \mathcal P^-$, as well as $\Pi_\mathcal{N}(\mathcal{P}^+)\subset \mathcal{P}^+$ and $\Pi_\mathcal{N}(\mathcal{P}^-)\subset\mathcal{P}^-$ and that analogous inclusions hold for $\Pi_\mathcal{R}$, and then working recursively.
Q.E.D.
[1ex ]{}
Marginal probability estimates {#sez:marginale}
==============================
The aim of this section is to prove the bound on the norm of $\dot
X_n$ given by the following
\[lemma:stima\_P\_punto\] There exist constants $\bar\beta>0$, $\bar\eps>0$, $\kappa_1>0$ such that, for any $\beta>\bar\beta$ and for any $\eps<\bar\eps$, for $\dot X_n$ defined by (\[eq:derivata\_troncata\]) of Section \[sez:telchi\] one has $$\label{eq:stima_P_punto}
\left\| \dot X_n \right\| \le \sqrt{N}\left(\sqrt 2\beta\right )^{-1}
\left(n!\right)^4
\left(\beta^{-1}+\eps\right)^n \kappa_1^n \ .
$$
The key tool of the proof is an estimate of the probability that the coordinates of a finite number $s$ of sites are near some fixed values. Such an estimate is given in the following Subsection \[sottosez:stima\_marginale\], whereas the proof of Lemma \[lemma:stima\_P\_punto\] is given in Subsection \[sottosez:stima\_X\_punto\].
Estimates on the marginal probability {#sottosez:stima_marginale}
-------------------------------------
Everything is trivial for the $p$ coordinates, for which the measure can be decomposed as a product: from a probabilistic point of view, this means that every $p_j$ is independent of the $q$ and of any $p_i$, for $i\neq j$. We focus, instead, on the $q$ coordinates, which are independent of the $p$, but depend on each other. Then, we must study the relevant part of the density, which is given by $$\label{eq:definizione_D_N}
D_N(q_1,\ldots,q_N){\buildrel {\rm def} \over {=} }\frac{1}{Z_N}\exp\left[-\beta U_N(
q_1,\ldots,q_N)\right]\ ,$$ where $Z_N$ is the “spatial” partition function $$\label{eq:definizione_Z_N}
Z_N{\buildrel {\rm def} \over {=} }\int_{-\infty}^{+\infty}{\mathrm{d}}q_1\ldots \int_{-\infty}^{+\infty}{\mathrm{d}}q_N
\,\exp\left[-\beta U_N(q_1,\ldots, q_N)\right]$$ and $U_N$ the part of Hamiltonian (\[eq:ham\]) which depends on $q$, namely, the potential $$U_N\left(q_1\ldots,q_N\right){\buildrel {\rm def} \over {=} }\sum_{i=1}^N\left(\omega\frac{
q_i^2}{2}+\frac{q_i^4}{4\omega^2}\right) +
\eps \sum_{i=1}^{N-1}\frac{ q_iq_{i+1}}{\omega}\ .$$
The main point is then to estimate the marginal probability $F^{(N)}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots, q_{i_s})$ that we are going to define. Given a set of indices $i_1<i_2<\ldots<i_s$ we say that they form a connected block if $i_{j+1}=i_j+1$ for each $j$, i.e., if they label a “connected” chain. We say that a sequence of indices $i_1<i_2<\ldots<i_s$ forms ${{\mathfrak{x}}}$ blocks if the set $\{i_j\}_{j=1}^s$ can be decomposed into ${{\mathfrak{x}}}$ connected blocks, which furthermore are not connected to each other. Given a set of indices $i_1<i_2<\ldots<i_s$ we define $$\label{eq:definizione_F_N_s}
F^{(N)}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots, q_{i_s}){\buildrel {\rm def} \over {=} }\int_{-\infty}^{+\infty}{\mathrm{d}}q_{i_{s+1}}\ldots \int_{-\infty}^{+\infty}{\mathrm{d}}q_{i_N}\, D_N( q_1,\ldots,
q_N)\ ,$$ where ${{\mathfrak{x}}}$ is the number of blocks in the set $\{i_j\}_{j=1}^s$. We remark here that such a quantity depends on the number of particles, $N$, but we will find for it an upper bound independent of $N$. In fact, the estimate will depend only on $s$ and ${{\mathfrak{x}}}$, but not on the precise choice of the sites.
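For instance (a purely illustrative choice of sites), with $s=3$ and indices $i_1=2$, $i_2=3$, $i_3=7$, the set $\{2,3,7\}$ decomposes into the two connected blocks $\{2,3\}$ and $\{7\}$, which are not connected to each other, so that ${{\mathfrak{x}}}=2$; the corresponding marginal is $F^{(N)}_{3,2}(q_2,q_3,q_7)$.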
Define the two functions $$\label{eq:def_n_s}
\begin{split}
n_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s}) &\!{\buildrel {\rm def} \over {=} }\!
\exp\!\left[\!-\beta\!\!\left(\! \sum_{k=1}^s \!
\left( \frac{q_{i_k}^2}{2\omega} + \frac{q_{i_k}^4}{4
\omega^2}\right)\!+\! \eps\!\!\sum_{k,l=1}^s\!
\delta_{i_l,i_k+1}\!\frac{(q_{i_k}\!\!-q_{i_l})^2}{2\omega}\!\right)\!\right]\\
&\le \exp\!\left(-\beta \sum_{k=1}^s\frac{q_{i_k}^2}{2\omega}\right) \ ,
\end{split}$$ $$\label{eq:def_n_tilde_s}
\tilde{n}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s}) \!{\buildrel {\rm def} \over {=} }\!
\exp\!\left[\!-\beta\!\left( \sum_{k=1}^s
\left( \frac{\omega q_{i_k}^2}{2} + \frac{q_{i_k}^4}{4
\omega^2}\right)\!+ \eps\!\!\sum_{k,l=1}^s
\delta_{i_l,i_k+1}\frac{q_{i_k}q_{i_l}}{\omega}\!\right) \!\right]\ ,$$ where $\delta_{i,j}$ is the Krönecker delta. [1ex ]{}**Remark.** Notice that $n_{s,{{\mathfrak{x}}}}$ is the configurational part of the Gibbs measure of the system with variables $q_{i_1},\ldots,q_{i_s}$ and free boundary conditions, apart from the absence of the normalization factor (i.e., the partition function), whereas $\tilde{n}_{s,{{\mathfrak{x}}}}$ is the analogous quantity for the same system, but with fixed boundary conditions. Thus, they differ only because of the different dependence on the coordinates at the sites lying on the boundary of the blocks, the number of which, $\gamma$, satisfies ${{\mathfrak{x}}}\le\gamma\le2{{\mathfrak{x}}}$. If we denote by $m_1,\ldots,m_{\gamma}$ the indices of these sites, we can write the identity $$\label{eq:rapporto_n_n_tilde}
\frac{n_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s})}{\tilde{n}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s})}
= \prod_{j=1}^\gamma\exp \left(\frac{\beta\eps}{\omega} \alpha_{m_j} q_{m_j}^2
\right)\le \prod_{j=1}^\gamma\exp \left(\frac{\beta\eps}{\omega} q_{m_j}^2
\right)\ ,$$ where the factor $\alpha_{m_j}$ is equal to 1 or $1/2$ according to whether the site $m_j$ is isolated (i.e., the block is composed of only that site) or not. [1ex ]{}
Then the following lemma, which is the main result of the present subsection, holds
\[lemma:marginale\] There exist constants $\bar\beta>0$, $\bar\eps>0$, $K>0$ and a sequence $\mathfrak{C}_{{\mathfrak{x}}}>0$ such that, for any $\beta >\bar\beta$ and for any $\eps<\bar\eps$, one has the inequalities $$\label{eq:maggiorazione_F_N_s}
F^{(N)}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s}) \le \mathfrak{C}_{{\mathfrak{x}}}K^s
\left( \frac{\beta}{2
\pi\omega}\right)^{s/2}n_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s})$$ and $$\label{eq:minorazione_F_N_s}
F^{(N)}_{s,{{\mathfrak{x}}}}(q_{i_1}\!,\ldots,\!q_{i_s}\!) \!\ge\!
\frac{1}{\mathfrak{C}_{{\mathfrak{x}}}} \!\left(\!\! \frac{\beta}{2
\pi\omega}\!\!\right)^{s/2}\!\!\!\tilde{n}_{s,{{\mathfrak{x}}}}(q_{i_1}\!,\ldots,\!q_{i_s}\!)
\exp \!\!\left(
\!\!-8\eps{{\mathfrak{x}}}\sqrt
\frac{\beta}{2\omega}\sum_{j=1}^\gamma \!\left|q_{m_j}
\right|\!\!\right)\ .$$
The proof of such a lemma is based on the techniques of paper [@bogoljubov], which apply quite simply to the case of periodic boundary conditions (see Lemma \[lemma:periodico\] below), on account of the translational invariance. Thus, it is also useful to introduce the density $\tilde{D}_N$ relative to the periodic system, defined by $$\label{eq:definizione_D_tilde_N}
\tilde{D}_N(q_1,\ldots,q_N){\buildrel {\rm def} \over {=} }\frac 1{Q_N} \exp\left[-\beta
U_N(q_1,\ldots,q_N)+\beta \eps q_1q_N\right]\ .$$ In this definition there appears the partition function for the periodic case $$\label{eq:definizione_Q_N}
Q_N{\buildrel {\rm def} \over {=} }\int_{-\infty}^{+\infty}{\mathrm{d}}q_1\ldots \int_{-\infty}^{+\infty}{\mathrm{d}}q_N
\,\exp\left[-\beta U_N(q_1,\ldots, q_N)+\beta \eps
q_1 q_N\right]\ .$$
For the periodic system it is simple to estimate two relevant quantities. The former is the ratio between the partition function for $N-1$ particles and that for $N$ particles, i.e., the ratio $Q_{N-1}/Q_N$. The relation between $Q_N$ and $Z_N$ is then obtained as a particular case of Lemma \[lemma:rapporto\_Z\_Q\], which will be stated later on. The latter is the probability, evaluated with respect to the density for $N$ particles, that the coordinates of $r$ particles have an absolute value smaller than $\Theta\sqrt{2\omega/\beta}$, for a given $\Theta$. In other terms, we need an estimate of the following quantity $$\label{definizione_P_N}
\begin{split}
\mbox{\bf{P}}_N\!\!\left(|q_1|<\Theta\sqrt{\frac{2\omega}{\beta}}\wedge
\ldots\wedge
|q_r|<\right.& \left.\Theta\sqrt{\frac{2\omega}{\beta}} \right) {\buildrel {\rm def} \over {=} }\int_{-\infty}^{+\infty}\!\!\!{\mathrm{d}}q_1\ldots\! \int_{-\infty}^{+\infty}\!\!\!{\mathrm{d}}q_N
{\bf{1}}_{|q_1|<\Theta\sqrt{2\omega/\beta}} \\
& \times\ldots \times{\bf{1}}_{|q_r|<\Theta\sqrt{2\omega/\beta}}\, \tilde{D}_N(q_1,
\ldots,q_N) \ ,
\end{split}$$ in which ${\bf 1}_A$ is the indicator function of the set $A$. We can now give the mentioned estimates by the following lemma, whose proof is deferred to Appendix \[app:dim\_lemma\_periodico\].
\[lemma:periodico\] There exist constants $\beta_0>0$, $\eps_0>0$, $K_0>2$ such that, for any $\beta >\beta_0$ and $\eps<\eps_0$, one has $$\label{eq:maggiorazione_frazione_Q_N}
\frac{Q_{N-1}}{Q_N}\le K_0\sqrt{\frac{\beta}{2\pi\omega}}\ .$$ Furthermore, if $\Theta\ge2\sqrt{r\log (4r K_0)}$, one has $$\label{eq:minorazione_P_N}
\mbox{\bf{P}}_N\left(|q_1|<\Theta\sqrt{\frac{2\omega}\beta}\wedge \ldots\wedge
|q_r|<\Theta\sqrt{\frac{2\omega}\beta} \right)\ge \frac 12\ .$$
This result enables us to give the proof of Lemma \[lemma:marginale\].
[1ex ]{}**Proof of Lemma \[lemma:marginale\]** First write $$D_N\left( q_1,\ldots,q_N\right)=n_{N-s,{{\mathfrak{x}}}'}\left(q_{i_{s+1}},\ldots ,
q_{i_N}\right) n_{s,{{\mathfrak{x}}}}\left(q_{i_1},\ldots,q_{i_s}\right)
I\left(q_1, \ldots,q_N\right)\ ,$$ for a suitable ${{\mathfrak{x}}}'$, with ${{\mathfrak{x}}}-1\le {{\mathfrak{x}}}'\le {{\mathfrak{x}}}+1$ (the lower and the upper bound are attained, respectively, if both $1$ and $N$ are contained in $i_1,\ldots,i_s$ or none of them), where $I$ contains the terms of interaction between the “internal” and the external part of the system. Remarking that $I\le1$, one gets $$\begin{aligned}
\label{prima_maggiorazione}
F^{(N)}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s})&\le&
\frac{1}{Z_N}\left(
\int_{-\infty}^{+\infty}{\mathrm{d}}q_{i_{s+1}} \ldots
\int_{-\infty}^{+\infty}{\mathrm{d}}q_{i_N}\times\right. \nonumber\\
&&\left.\frac{}{}\times n_{N-s,{{\mathfrak{x}}}'}(q_{i_{s+1}},\ldots ,
q_{i_N}) \right) n_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s})\ .\end{aligned}$$
We now have to estimate the integral appearing in (\[prima\_maggiorazione\]). More generally, in the course of the proof we will need to estimate integrals of a similar type. This will be done in Lemma \[lemma:rapporto\_Z\_Q\], which will be stated shortly. Introduce the quantities: $$\label{eq:definizione_Q_corsivo}
\begin{split}
\bar{\mathcal{Q}}^{{\mathfrak{x}}}_M &{\buildrel {\rm def} \over {=} }\inf_{B(M,{{\mathfrak{x}}})}\int_{-\infty}^{+\infty}{\mathrm{d}}q_1 \ldots
\int_{-\infty}^{+\infty}{\mathrm{d}}q_M n_{M,{{\mathfrak{x}}}}(q_1,\ldots,q_M)\ ,\\
\mathcal{Q}^{{\mathfrak{x}}}_M &{\buildrel {\rm def} \over {=} }\sup_{B(M,{{\mathfrak{x}}})}\int_{-\infty}^{+\infty}{\mathrm{d}}q_1 \ldots
\int_{-\infty}^{+\infty}{\mathrm{d}}q_M n_{M,{{\mathfrak{x}}}}(q_1,\ldots,q_M)\ ,
\end{split}$$ where $B(M,{{\mathfrak{x}}})$ denotes the collection of all possible partitions of $M$ indices in ${{\mathfrak{x}}}$ blocks. It is also convenient to consider the quantities defined in a similar way, by integrating $\tilde{n}_{M,{{\mathfrak{x}}}}$ in place of $n_{M,{{\mathfrak{x}}}}$, namely $$\label{eq:definizione_Z}
\begin{split}
\bar{Z}^{{\mathfrak{x}}}_M &{\buildrel {\rm def} \over {=} }\inf_{B(M,{{\mathfrak{x}}})}\int_{-\infty}^{+\infty}{\mathrm{d}}q_1 \ldots
\int_{-\infty}^{+\infty}{\mathrm{d}}q_M \tilde{n}_{M,{{\mathfrak{x}}}}(q_1,\ldots,q_M)\ ,\\
Z^{{\mathfrak{x}}}_M &{\buildrel {\rm def} \over {=} }\sup_{B(M,{{\mathfrak{x}}})}\int_{-\infty}^{+\infty}{\mathrm{d}}q_1 \ldots
\int_{-\infty}^{+\infty}{\mathrm{d}}q_M \tilde{n}_{M,{{\mathfrak{x}}}}(q_1,\ldots,q_M)\ .
\end{split}$$ It is easily shown that for ${{\mathfrak{x}}}=1$ one has $\bar{Z}^1_M=Z^1_M=Z_M$. In order to link these quantities to $Q_M$ and to each other, we use the following lemma, the proof of which is deferred to Appendix \[app:dim\_lemma\_rapporto\].
\[lemma:rapporto\_Z\_Q\] Let $\beta_0>0$, $\eps_0>0$ and $K_0>2$ be constants such that Lemma \[lemma:periodico\] holds. Then, for any $\beta
>\beta_0$ and any $\eps<\eps_0$, the inequalities $$\label{eq:minorazione_rapporto_Z_Q}
\frac{\bar{\mathcal{Q}}^{{\mathfrak{x}}}_M}{Q_M}\ge1\ ,\quad\frac{\bar Z^{{\mathfrak{x}}}_M}{Q_M}\ge \frac
12 \left(8{{\mathfrak{x}}}K_0\right)^{-32\eps_0 {{\mathfrak{x}}}^2}$$ hold. Furthermore, the chain of inequalities $$\label{eq:maggiorazione_rapporto_Z_Q}
\frac{Z^{{\mathfrak{x}}}_M}{Q_M}\le\frac{\mathcal{Q}^{{\mathfrak{x}}}_M}{Q_M}\le 2 K_0^{{\mathfrak{x}}}\exp\left(4{{\mathfrak{x}}}\eps_0\bar\kappa({{\mathfrak{x}}},K_0)\right) \ ,$$ holds, where $\bar \kappa({{\mathfrak{x}}},K_0)$ is the solution of the equation $$\label{eq:determinazione_kappa_barra}
K_0^{2{{\mathfrak{x}}}} \Gamma({{\mathfrak{x}}},\bar \kappa)=\frac 12\ ,$$ $\Gamma(s,x)$ being the upper regularized Gamma function $$\label{eq:gamma_reg}
\Gamma(s,x) {\buildrel {\rm def} \over {=} }\frac 1{(s-1)!}\int_{x}^{+\infty} t^{s-1} e^{-t} {\mathrm{d}}t \ .$$
The previous lemma enables one to see that $Z_N^{-1}\le 2(8K_0)^{32\eps_0}/Q_N,
$ while the integral appearing in (\[prima\_maggiorazione\]) is estimated by $$\mathcal{Q}^{{{\mathfrak{x}}}'}_{N-s}\le
2K_0^{{{\mathfrak{x}}}+1}\exp\left[4({{\mathfrak{x}}}+1)\eps_0\bar\kappa({{\mathfrak{x}}}+1,K_0)\right] Q_{N-s}\ .$$ Thus, due to relation (\[eq:maggiorazione\_frazione\_Q\_N\]) of Lemma \[lemma:periodico\], one easily sees that $$\label{maggiorazione_periodico}
\frac{Q_{N-s}}{Q_N}= \prod_{i=1}^s\frac{Q_{N-i}}{Q_{N-i+1}}\le
K_0^s\left(\frac{\beta}{2\pi\omega}\right)^{s/2}\ ,$$ so that (\[eq:maggiorazione\_F\_N\_s\]) is proved, taking $$\label{eq:prima_maggiorazione_gotico}
\mathfrak{C}_{{\mathfrak{x}}}\ge
2^{96\eps_0+2}K_0^{{{\mathfrak{x}}}+1+32\eps_0}\exp\left[4({{\mathfrak{x}}}+1)\eps_0 \bar\kappa
({{\mathfrak{x}}}+1,K_0)\right]\ .$$
We come now to the proof of (\[eq:minorazione\_F\_N\_s\]). To this end, we write $$\nonumber
\begin{split}
D_N(q_1,\ldots,q_N)&=\frac {Q_{N-s}}{Z_N} \tilde{D}_{N-s}(q_{i_{s+1}},\ldots,q_{i_N}) \tilde{n}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,
q_{i_s}) \\
&\times G(q_{m_1},\ldots,q_{m_\gamma},q_{l_1},\ldots,q_{{l_{\gamma'}}})
\ ,
\end{split}$$ where the sites $l_i$ are the ones contiguous to the blocks, but not contained in them, taken keeping their relative order. Furthermore, due to the periodicity there appear factors depending on $q_1$ and $q_N$, if the sites 1 and $N$ are not contained in $i_1,\ldots,i_s$. In this case, we put $q_{l_1}=q_1$ and $q_{l_{\gamma'}}=q_N$. We denote by $\gamma'$, with $\gamma'\le2{{\mathfrak{x}}}+2$, the number of such indices. The explicit expression of the function $G$ is complicated. In plain terms, it is the product of the factors $\exp(-\sum_j\beta\eps
q_{m_j}q_{l_i}/\omega)$ among all sites $l_i$ contiguous to $m_j$, and just the factor $\exp(\beta\eps
q_{l_i}q_{l_{i+1}}/\omega)$, when $l_i$ and $l_{i+1}$ belong to different blocks.[^5] In any case, a lower bound to $G$ in the region $$\mathcal{A} {\buildrel {\rm def} \over {=} }\left\{|q_{l_1}|\!<\!2{{\mathfrak{x}}}\sqrt{\log(4 K_0)}
\sqrt{2\omega /\beta}
\;\wedge\;\ldots\;\wedge\; |q_{l_{\gamma'}}|\!<\! 2{{\mathfrak{x}}}\sqrt{\log(4 K_0)}
\sqrt{2\omega /\beta}
\right\}$$ is given by $$G \ge
\exp\left( -4\eps{{\mathfrak{x}}}\sqrt \frac{\beta}{2\omega}\sqrt{\log(4
K_0)}\sum_{j=1}^\gamma \left|q_{m_j}
\right|\right) (4K_0)^{-8\eps ({{\mathfrak{x}}}^2+{{\mathfrak{x}}})} \ .$$ So, we can write $$\label{eq:minorazione_intermedia_F_N_s}
\begin{split}
F^{(N)}_{s,{{\mathfrak{x}}}}(q_{i_1},\ldots,q_{i_s}) &\ge
\tilde{n}_s(q_{i_1},\ldots,q_{i_s})(4K_0)^{-8\eps ({{\mathfrak{x}}}^2+{{\mathfrak{x}}})}\frac{Q_{N-s}}{Z_N} \,
\mbox{\bf{P}}_{N-s}\left(\mathcal{A}\right) \\
&\times \exp\left( -4\eps{{\mathfrak{x}}}\sqrt
\frac{\beta}{2\omega}\sqrt{\log(4 K_0)}\sum_{j=1}^\gamma \left|q_{m_j}
\right|\right)\ ,
\end{split}$$ where the (positive) contribution of the integral over $\mathcal A^c$ was neglected.
The term with $\mbox{\bf{P}}_{N-s}(\mathcal A)$ in (\[eq:minorazione\_intermedia\_F\_N\_s\]) is bounded from below by relation (\[eq:minorazione\_P\_N\]) of Lemma \[lemma:periodico\]. As for the fraction, by Lemma \[lemma:rapporto\_Z\_Q\] we obtain $$Q_{N-s}\ge \frac{1}{2K_0\exp(8\eps_0\bar\kappa(1,K_0))}\mathcal{Q}_{N-s}^1\ .$$ Now, operating as in the deduction of formula (\[maggiorazione\_periodico\]), it is sufficient to observe that $\mathcal{Q}_{N-1}^1\ge \sqrt{\beta/(2\pi\omega)} \mathcal{Q}_N^1
$ to obtain $\mathcal{Q}_{N-s}^1\ge \left(\beta/(2\pi\omega)\right)^{s/2}
\mathcal{Q}_N^1$. Then, choosing $\bar \eps$, $\bar \beta$ such that $K_0\le e^4/4$ and observing that $ \mathcal{Q}_N^1\ge Z_N$, one gets (\[eq:minorazione\_F\_N\_s\]) with $$\label{eq:seconda_maggiorazione_gotico}
\mathfrak{C}_{{\mathfrak{x}}}\ge
\left(4K_0\right)^{8\eps_0({{\mathfrak{x}}}^2+{{\mathfrak{x}}})+1}\exp\left(8\eps_0\bar\kappa(1,K_0)\right)
\ .$$ Finally, $\mathfrak{C}_{{\mathfrak{x}}}$ can be chosen as the maximum of the r.h.s. of (\[eq:prima\_maggiorazione\_gotico\]) and of (\[eq:seconda\_maggiorazione\_gotico\]). This concludes the proof.
Q.E.D.
[1ex ]{}
Estimate of $\|\dot X_n\|$ {#sottosez:stima_X_punto}
--------------------------
We apply directly inequality (\[eq:maggiorazione\_F\_N\_s\]) to get the proof of Lemma \[lemma:stima\_P\_punto\], using the fact that such a quantity is a sum of polynomials depending at most on $2n+3$ sites, as can be seen by Lemma \[lemma:coeff\_nostro\_caso\] of Section \[sez:telchi\].
[1ex ]{}**Proof of Lemma \[lemma:stima\_P\_punto\]** The key ingredient of the proof is, as stated in Section \[sez:telchi\], that the polynomials $P_n$ are even in the $p$ coordinates, so that the $\dot X_n$’s are odd in the $p$. On account of that, $\dot X_n^2$ is a sum in which the terms coming from the product of two monomials depending on separated groups of sites contain at least one $p_i$ to an odd power. Since the measure is even with respect to any $p$, these terms have a vanishing integral.
We formalize this way of reasoning by decomposing $\dot X_n$ as $
\dot X_n= \sum_{i=1}^N f_i,
$ where the $f_i$’s are polynomials depending at most on the sites between $i-n-1$ and $i+n+1$. Then, the $L^2$–norm of $\dot X_n$ is expressed according to $
\left\|\dot X_n\right\|^2=\sum_{i,j=1}^N \langle f_i f_j\rangle\ .
$ In this sum, all the terms with $|i-j|>2n+2$ vanish, while the other ones are estimated in terms of $\left\|\dot X_n\right\|_+$ in the following way.
On account of Lemma \[lemma:coeff\_nostro\_caso\], we can write $$f_i=\sum_{l=0}^{n+1}\frac{(n+1)!}{l!(n+1-l)!}\eps^{n+1-l}\sum_{s=1}^{
|\mathcal{H}^{n+1-l,i}_{2l+2}|}
c_{is,l} f_{is}^{(l)}\ ,$$ in which $f_{is}^{(l)}$ is a monomial in $\mathcal{H}^{n+1-l,i}_{2l+2}$ and the decomposition in these monomials is performed in such a way that $\sup_{i,l}\sum_s |c_{is,l}|\le \mathcal C_n$. Then, we sum on $j$ and obtain that $$\left\|\dot X_n\right\|^2\le(4n+5)\, \mathcal
C_n^2\sum_{i=1}^N\sum_{l=0}^{n+1}\frac{(2n+2)!}{l!(2n+2-
l)!}\eps^{2n+2- l}\sup_{g\in\mathcal{H}^{2n+2-l,i}_{2l+4}}
\langle g\rangle\ ,$$ where we used the fact that the only nonvanishing contributions to the integral come from the product of $f_{js}^{(l-r)}\in
\mathcal{H}^{n+1-l+r,j}_{2l-2r+2}$ and $f_{km}^{(r)}\in
\mathcal{H}^{n+1-r,k}_{2r+2}$, for $|j-k|\le 2n+2-l$, so that $g{\buildrel {\rm def} \over {=} }f_{js}^{(l-r)} f_{km}^{(r)}\in \mathcal{H}^{2n+2-l,i}_{2l+4}$, for a suitable $i$ between $j$ and $k$.
Then, we make use of (\[eq:maggiorazione\_F\_N\_s\]) together with the estimate (\[eq:def\_n\_s\]) for $n_{s,{{\mathfrak{x}}}}$ to bound the mean value of any function in $\mathcal{H}^{2n+2-l,i}_{2l+4}$. In fact, one has $$\begin{split}
\sup_{g\in\mathcal{H}^{2n+2-l,i}_{2l+4}} \langle g\rangle &\le
\mathfrak{C}_1K^{4n-2l+5}\sqrt\frac{\beta}{2\pi\omega}
\int_{-\infty}^{\infty} x^{2l+4} \exp\left(-\frac{\beta
}{2\omega} x^2\right){\mathrm{d}}x\\
&= \mathfrak{C}_1 K^{4n-2l+5} \left(
\frac{2\omega}{\beta} \right)^{l+2} \frac{(2l+3)!!}{2^{l+2}}\ .
\end{split}$$ So, the inequality $$\left\|\dot X_n\right\|^2 \le \mathfrak{C}_1K^{4n+5} (4n+5) (2n+4)!\,
(2\omega)^{2n+4} \beta^{-2}\left( \eps+\frac 1{\beta}\right)^{2n+2}N
\mathcal{C}^2_n$$ holds. Thus, choosing a suitable $\kappa_1>0$ and using the value of $\mathcal{C}_n$ given by (\[eq:norma\_coeff\_P\_punto\]) of Lemma \[lemma:coeff\_nostro\_caso\], inequality (\[eq:stima\_P\_punto\]) is satisfied.
Q.E.D.
[1ex ]{}
Estimate of the variance of the adiabatic invariant {#sez:condizionata}
===================================================
In the present Section we prove the following Lemma \[lemma:stima\_correlazione\], which was used in the proof of Theorem \[teor:main\]. The lemma concerns estimates on the variance $\sigma^2_{X_n}$ and on the correlation $\rho_{X_n,H}$ of the adiabatic invariant and reads
\[lemma:stima\_correlazione\] There exist positive constants $\tilde{\eps}>0$, $\kappa_2>0$, $\kappa_3>1$, such that, for any $\eps<\tilde \eps$, for any $\beta>\eps^{-1}$ and for $n<\kappa_2^{-1/4}(\eps+\beta^{-1})^{-1/4}$, with $X_n$ defined by (\[eq:definizione\_X\_n\]), the following inequalities hold: $$\label{eq:minorazione_sigma_X_n}
\sigma_{X_n}\ge \sqrt N\, \frac{\eps+\beta^{-1}}{8 \beta}$$ and $$\label{eq:stima_correlazione}
\left|\rho_{X_n,H}\right|\le
\left(1+\frac 1{\kappa_3}\frac{\eps^2}{\left(\eps+ \beta^{-1}
\right)^2} \right)^{-1/2}\ .$$
The proof of this lemma requires the study of the spatial correlations between quantities depending on two separate blocks. The study of these properties has to be performed within the general frame of Gibbsian fields and conditional probabilities. In the present Section we provide the necessary notions and give a proposition of a general character concerning the decay of spatial correlations for lattices with finite range of interaction, i.e., Theorem \[teor:correlazioni\_generico\] of Subsection \[sottosez:corrspaziali\], from which it will be possible to finally come to the proof of Lemma \[lemma:stima\_correlazione\], which will be given in Subsection \[sottosez:stima\_sigma\_X\].
Our treatment of conditional probabilities is inspired in particular by the work of Dobrushin (see [@do1]). More precisely, we will make reference to paper [@do1] for the main ideas, and to the beautiful but underestimated subsequent paper [@do2], by Dobrushin and Pechersky, for a more direct relation to our problem. As a matter of fact, most of the ideas of this section are already contained in works [@do1] and [@do2], but the explicit result on the spatial correlations given here required some additional work. We recall that, since Gibbsian fields and the related techniques were introduced in order to deal with infinite lattices, our result holds even if the number of sites tends to infinity.
The present section is structured as follows: in Subsection \[sottosez:corr\_e\_cond\] the link between spatial correlations and conditional probabilities is shown, and in Subsection \[sottosez:corrspaziali\] we state Theorem \[teor:correlazioni\_generico\], whose proof is deferred to Appendix \[app:dim\_correlazioni\]. Such a result is used in order to obtain an upper bound on $\sigma_{P_n}$, stated in Lemma \[lemma:stima\_P\] of Subsection \[sottosez:stima\_sigma\_P\], whence the proof of Lemma \[lemma:stima\_correlazione\] easily follows, as shown in Subsection \[sottosez:stima\_sigma\_X\].
Link between spatial correlations and conditional probability {#sottosez:corr_e_cond}
-------------------------------------------------------------
In order to prove Lemma \[lemma:stima\_correlazione\] we have to estimate quantities such as $
\langle fg\rangle-\langle f\rangle \langle g\rangle
$, relative to the Gibbs measure $\mu$, where $f$ is a function which depends on sites belonging to a set $\tilde V$, while $g$ depends only on sites in $V$, with $V\cap \tilde V =\emptyset$. Our aim is to show that such correlations decrease as the distance between $\tilde V$ and $V$ increases, where the distance $d(V,\tilde V)$ is defined for example as $d(V,\tilde V){\buildrel {\rm def} \over {=} }\inf_{{i}\in V,{j}\in \tilde V}
|{i}- {j}|$.
We start by showing the relation between the spatial correlations and the conditional probability in a setting more general than ours. We consider as given a measure $\mu$ on $\mathbb
R^{|{T}|}$, with ${T}\subset \mathbb Z^\nu$, which induces on the measurable set $A\subset \mathbb R^{|\tilde V|}$ the probability $$P_{\tilde V}(A){\buildrel {\rm def} \over {=} }\int_{\mathbb R^{|{T}|}} {\mathrm{d}}\mu(x)
{\bf 1}_{A\times {T}\backslash \tilde V}(x)\ ,$$ where ${\bf 1}_A$ is the indicator function of the set $A$. One can express the quantity we are interested in as $$\label{eq:correlazioni_condizionato}
\langle fg\rangle\!-\!\langle f\rangle \langle g\rangle\! =\!\! \int_{\mathbb
R^{|\tilde V|}} \!\!f({\mathbf{x}}) P_{\tilde V}({\mathrm{d}}{\mathbf{x}}) \!\left(
\!\int_{\mathbb R^{|V|}}\!\! g({\mathbf{y}}) P_V ({\mathrm{d}}{\mathbf{y}}|{\mathrm{d}}{\mathbf{x}}) -
\!\!\int_{\mathbb R^{| V|}}\!\! g({\mathbf{y}}) P_V ({\mathrm{d}}{\mathbf{y}})\right)\ ,$$ where $P_V(B|A)$ represents the conditional probability of the measurable set $B\subset \mathbb R^{|V|}$, once $A$ is given. So, in order to estimate the correlation between two functions, it is sufficient to estimate the difference enclosed in brackets at the r.h.s. of (\[eq:correlazioni\_condizionato\]). Now, we notice that for any pair of probabilities $P$ and $\tilde P$ on $\mathbb R^{|V|}$, one has $$\label{eq:diff_misure_1}
\begin{split}
\left|\int_{\mathbb R^{| V|}} g({\mathbf{x}}) P({\mathrm{d}}{\mathbf{x}})-\int_{\mathbb R^{| V|}} g({\mathbf{y}}) \tilde P({\mathrm{d}}{\mathbf{y}})\right| \le
\int_{\mathbb R^{|V|}\times \mathbb R^{|V|}} & \left( |g({\mathbf{x}})|
+ |g({\mathbf{y}})| \right) \\
&\times {\bf 1}_{{\mathbf{x}}\neq {\mathbf{y}}}\, Q({\mathrm{d}}{\mathbf{x}},{\mathrm{d}}{\mathbf{y}})\ ,
\end{split}$$ in which $Q$ is *any* probability on $\mathbb R^{|V|}\times \mathbb R^{|V|}$ such that $P$ and $\tilde P$ are its marginal probabilities. In other terms $Q$ is a joint probability of $P$ and $\tilde P$, i.e., for any measurable $B\subset \mathbb R^{|V|}$ one has $$\label{eq:congiunta}
P(B)=Q\left(B\times \mathbb R^{|V|}\right) \quad \mbox{and } \tilde
P(B)=Q \left(\mathbb R^{|V|}\times B\right)\ .$$ Remark here that $Q$ is not unique; indeed, this freedom also provides a way to define a distance between two probabilities defined on the same set $V$ of indices, by $$\label{eq:def_distanza_prob}
D(P,\tilde P){\buildrel {\rm def} \over {=} }\inf_Q \int_{\mathbb R^{|V|}\times\mathbb R^{|V|}}
{\bf 1}_{{\mathbf{x}}\neq{\mathbf{y}}}\, Q({\mathrm{d}}{\mathbf{x}},{\mathrm{d}}{\mathbf{y}})\ .$$ We stress that the infimum is attained, i.e., there exists a probability measure $\bar Q({\mathrm{d}}{\mathbf{x}},{\mathrm{d}}{\mathbf{y}})$ such that $D(P,\tilde P)= \int
{\bf 1}_{{\mathbf{x}}\neq{\mathbf{y}}} \bar Q({\mathrm{d}}{\mathbf{x}},{\mathrm{d}}{\mathbf{y}})$ (see Lemma 1 of paper [@do2]). For the following, we suppose that it is possible to find a compact function $h$,[^6] with domain in $\mathbb R$, such that $$\label{eq:maggiorazione_g_h}
|g({\mathbf{x}})|\le \sum_{{i}\in V}h(x_{i})\ ,$$ as is the case for the monomials we are dealing with. The bounds will then be given in terms of $h$. Now, observing that ${\bf 1}_{{\mathbf{x}}\neq {\mathbf{y}}}\le
\sum_{{i}\in V} {\bf 1}_{x_{i}\neq y_{i}}$, we can rewrite (\[eq:diff\_misure\_1\]) as $$\label{eq:diff_misure_2}
\begin{split}
\left|\int_{\mathbb R^{| V|}} g({\mathbf{x}}) P({\mathrm{d}}{\mathbf{x}})-\int_{\mathbb R^{| V|}} g({\mathbf{y}}) \tilde P({\mathrm{d}}{\mathbf{y}})\right|
\le
\sum_{{i},{j}\in V}\int_{\mathbb R^2} &\left( h(x_{j})
+ h(y_{j}) \right) \\
&\times {\bf 1}_{x_{i}\neq y_{i}} Q({\mathrm{d}}{\mathbf{x}},{\mathrm{d}}{\mathbf{y}})\ .
\end{split}$$ This way, we can make a direct connection with paper [@do2], in which the problem of estimating the r.h.s. of the above expression is dealt with. We summarize here the results and the methods we need.
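Before proceeding, we recall the standard fact (not explicitly used below, but perhaps useful to the reader) that, for probabilities on a common space, the infimum in (\[eq:def\_distanza\_prob\]) coincides with the total variation distance. The following minimal sketch, in Python, with illustrative discrete distributions and a function name of our own choosing, exhibits a coupling attaining the infimum.

```python
import numpy as np

def optimal_coupling(p, q):
    """Build a coupling Q of two discrete distributions p and q, i.e. a joint
    probability with marginals p and q, which minimizes Q(x != y); the minimum
    equals the total variation distance between p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    common = np.minimum(p, q)              # mass that can be kept on the diagonal
    tv = 1.0 - common.sum()                # total variation distance
    Q = np.diag(common)
    if tv > 0:
        # spread the excess of p against the deficit of q off the diagonal
        Q += np.outer(p - common, q - common) / tv
    return Q, tv

p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
Q, D = optimal_coupling(p, q)
print(Q.sum(axis=1), Q.sum(axis=0))        # the marginals are p and q
print(D, 1.0 - np.trace(Q))                # Q(x != y) attains the infimum D
```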
Main argument and theorem on correlations {#sottosez:corrspaziali}
-----------------------------------------
In the quoted work [@do2], the framework is more general than ours, because it deals with the problem of defining a “probability” for the configuration of an actually infinite $\nu$-dimensional lattice of particles, in terms of the set of conditional probabilities on each site, which is called the *specification* $\Gamma$. Now, our case is in principle different, because our lattice is finite, and the probability is defined through the Gibbs measure. In particular, the specification too is assigned by such a measure.
However, as proved in [@do2], under suitable assumptions the assignment of the specification uniquely determines the probability, i.e., the Gibbsian field, which, in our case, turns out to be precisely the Gibbs measure. So, in our case, it is equivalent to speak in terms of the specification or in terms of the measure. Indeed, in this subsection we will speak in terms of specifications, and in the following one we will show that the specification determined by the Gibbs measure (\[eq:gibbs\]) with the Hamiltonian (\[eq:ham\]) satisfies the assumptions of [@do2] (i.e., Conditions \[cond:compattezza\] and \[cond:contratt\] below).
We notice that the r.h.s. of (\[eq:diff\_misure\_2\]) can be bounded from above if one estimates the quantity $$\label{eq:def_lambda}
\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}}) {\buildrel {\rm def} \over {=} }\max \left\{ \mbox{\bf E} \left[
{\bf 1}_{\xi_{{\mathbf{{j}}}}^1\neq \xi_{{\mathbf{{j}}}}^2} h\left( \xi^1_{{\mathbf{{i}}}}\right)\right], \mbox{\bf E} \left[
{\bf 1}_{\xi_{{\mathbf{{j}}}}^1\neq \xi_{{\mathbf{{j}}}}^2} h\left( \xi^2_{{\mathbf{{i}}}}\right)\right]
\right\}\ ,
$$ where $\xi^1$ and $\xi^2$ are two Gibbsian fields which assign, respectively, the probabilities $P$ and $\tilde P$ appearing in (\[eq:diff\_misure\_2\]) and the expectations are obtained by integrating over a joint probability $Q$ of the two fields. Indeed, in [@do2] an upper bound just to $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$ is given, by requiring that two suitable conditions are satisfied.
So, by adopting the same techniques of [@do2] we bound from above the r.h.s. of (\[eq:diff\_misure\_2\]) and, thus, the l.h.s. of (\[eq:correlazioni\_condizionato\]). Such a bound is given in Theorem \[teor:correlazioni\_generico\] below. In order to state it, we recall the main notations of [@do2].
First, we consider a lattice of sites contained in ${T}\subset \mathbb
Z^\nu$, and a finite–range specification with a radius of interaction $r$ (this means that the conditional probability at site ${{\mathbf{{i}}}}$ does not depend on the conditioning at sites ${{\mathbf{{j}}}}$ for $|{{\mathbf{{i}}}}-{{\mathbf{{j}}}}|>r$). Then, for a vector ${\mathbf{x}}\in\mathbb R^{|{T}|}$ we denote by $P_{{{\mathbf{{i}}}},{\mathbf{x}}}({\mathrm{d}}x)$ the probability distribution conditioned to ${\mathbf{x}}$ everywhere but at site ${{\mathbf{{i}}}}$. The specification $\Gamma$ is defined by $$\Gamma {\buildrel {\rm def} \over {=} }\left\{ P_{{{\mathbf{{i}}}},{\mathbf{x}}} : {{\mathbf{{i}}}}\in {T}, {\mathbf{x}} \in \mathbb{R}^{|T|} \right\} \ .$$ Furthermore, we will say that a continuous positive function $h$ on a metric space $\mathfrak{X}$ is compact if, for any $k\ge 0$, the set $\{x\in \mathfrak{X}: h(x)\le k\}$ is compact.
For a fixed integer ${r}$, let $
\partial_{r}V{\buildrel {\rm def} \over {=} }\left\{{{\mathbf{{j}}}}\in {T}: {{\mathbf{{j}}}}\not \in V,
\min_{{{\mathbf{k}}}\in V}|{{\mathbf{{j}}}}-{{\mathbf{k}}}|\le
{r}\right\}
$ be the boundary of thickness ${r}$ of a set $V\subset {T}$. We call $a$ the number of indices such that $|{{\mathbf{{i}}}}|\le r$, ${{\mathbf{{i}}}}\neq 0$, where ${r}$ is the range of interaction.
If $Z_0$ is a maximal subgroup of $\mathbb{Z}^\nu$ satisfying the condition $|{{\mathbf{{j}}}}-{{\mathbf{k}}}|>{r}$, for ${{\mathbf{{j}}}},{{\mathbf{k}}}\in Z_0$, we denote by $b$ the number of elements in the factor group $\mathbb{Z}^\nu\backslash Z_0$.
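For instance, in the one–dimensional nearest–neighbour case relevant for our model ($\nu=1$, ${r}=1$; see Subsection \[sottosez:stima\_sigma\_P\]) one has $a=2$, while one can take $Z_0=2\mathbb{Z}$, so that $b=2$.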
The conditions of paper [@do2] (which are hypotheses on the specification $\Gamma$, once the compact function $h$ is given) are the following
\[cond:compattezza\] Let $h$ be a compact function on $\mathbb{R}$ and let $C\ge 0$ and $c_{{\mathbf{{j}}}}\ge 0$, for $|{{\mathbf{{j}}}}|\le {r}$, ${{\mathbf{{j}}}}\neq 0$, be some constants. We suppose that
1. $\displaystyle\delta{\buildrel {\rm def} \over {=} }\sum_{|{{\mathbf{{j}}}}|\le{r}\,,\,{{\mathbf{{j}}}}\neq 0}c_{{\mathbf{{j}}}}<
\frac{1}{a^b}$ ;
2. for any ${{\mathbf{{i}}}}\in {T}$ and any ${\mathbf{x}}\in \mathbb R^{|{T}|}$ one has $$\int_{\mathbb R} h(x)P_{{{\mathbf{{i}}}},{\mathbf{x}}}({\mathrm{d}}x)\le C+\sum_{{{\mathbf{{j}}}}\in\partial_{r}\{{{\mathbf{{i}}}}\}}
c_{{{\mathbf{{j}}}}-{{\mathbf{{i}}}}} h\left(x_{{\mathbf{{j}}}}\right) \ .$$
\[cond:contratt\] Let $\bar{K}\ge 0$ and $k_{{\mathbf{{j}}}}=k_{{\mathbf{{j}}}}(\bar{K})\ge 0$, for $|{{\mathbf{{j}}}}|\le{r}$, ${{\mathbf{{j}}}}\neq 0$, be constants and $h$ be a compact function. We suppose that
1. $\displaystyle\alpha{\buildrel {\rm def} \over {=} }\sum_{|{{\mathbf{{j}}}}|\le{r}\,,\,{{\mathbf{{j}}}}\neq 0}k_{{\mathbf{{j}}}}< 1$ ;
2. for any ${{\mathbf{{i}}}}\in {T}$ and any pair of configurations ${\mathbf{x}}^1,{\mathbf{x}}^2\in \mathbb R^{|{T}|}$ such that $$\max_{{{\mathbf{{j}}}}\in \partial_{r}\{{{\mathbf{{i}}}}\}} \max \left\{
h\left(x^1_{{\mathbf{{j}}}}\right), h\left(x^2_{{\mathbf{{j}}}}\right) \right\}\le \bar{K}\ ,$$ one has the inequality $$D\left(P_{{{\mathbf{{i}}}},{\mathbf{x}}^1},P_{{{\mathbf{{i}}}},{\mathbf{x}}^2}\right)\le \sum_{{{\mathbf{{j}}}}\in\partial_{r}\{{{\mathbf{{i}}}}\}}
k_{{{\mathbf{{j}}}}-{{\mathbf{{i}}}}} {\bf 1}_{x^1_{{\mathbf{{j}}}}\neq x^2_{{\mathbf{{j}}}}} \ ,$$ where $D(\cdot,\cdot)$ is the distance defined by (\[eq:def\_distanza\_prob\]).
The set of specifications (i.e., the sets of conditional probabilities on every site) which satisfy Condition \[cond:compattezza\] for the constants $C,\delta$ and the compact function $h$ will be denoted by $\Theta(h,C,\delta)$. We will instead denote by $\Delta(h,\bar{K},\alpha)$ the set of specifications which satisfy Condition \[cond:contratt\] for the constants $\bar{K},\alpha$ and the compact function $h$.
If the specification satisfies Conditions \[cond:compattezza\] and \[cond:contratt\], Dobrushin and Pechersky show that the specification uniquely determines the probability (see Theorem 1 of [@do2]). In particular, also the marginal probability $P_{V}$ on $V$, and the probability $P_V(\cdot|{\mathrm{d}}{\mathbf{x}})$ conditioned to a vector ${\mathbf{x}}$ in $\tilde V$ are determined by $\Gamma$, and the maximum value taken by $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$ of (\[eq:def\_lambda\]) is bounded from above.
As a matter of fact, in [@do2] the authors do not investigate the explicit dependence of $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$ on $|{{\mathbf{{i}}}}-{{\mathbf{{j}}}}|$, which is what we need, but such a dependence can be obtained by a slight modification of their way of reasoning, which leads to the proof of Theorem \[teor:correlazioni\_generico\] that appears in Appendix \[app:dim\_correlazioni\]. We also need to introduce an upper bound to the mean value of $h$ with respect to the probability $P_V$ on $V$ and to the probability $P_V(\cdot|{\mathrm{d}}{\mathbf{x}})$, i.e., $$\label{eq:definizione_h_x}
\langle h\rangle_{{\mathbf{x}}} {\buildrel {\rm def} \over {=} }\sup_{{{\mathbf{{i}}}}\in V} \max\left\{ \int_{\mathbb
R} h(y) P_{{\mathbf{{i}}}}({\mathrm{d}}y|{\mathrm{d}}{\mathbf{x}}),
\int_{\mathbb R} h(y) P_{{\mathbf{{i}}}}({\mathrm{d}}y),C
\right\}\ ,$$ where ${\mathbf{x}}\in \tilde V$, while $C$ is the constant entering Condition \[cond:compattezza\]. So we can state
\[teor:correlazioni\_generico\] Let $f$ be a measurable function from $\mathbb R^{|\tilde V|}$ to $\mathbb R$, depending on the sites lying in the set $\tilde
V$. Let $g$ be a measurable function from $\mathbb R^{|V|}$ to $\mathbb R$, depending on the sites contained in the set $V$, with $V\cap \tilde V=\emptyset$, and $h$ be a compact function such that inequality (\[eq:maggiorazione\_g\_h\]) is satisfied.
Let $\Gamma\in \Theta(h,C,\delta)$. Then, there exists a constant $\bar{K}_0$, depending on $h,C,a,b$ only, such that, if $\Gamma\in
\Delta(h,\bar{K}_0C,\alpha)\cap \Theta(h,C,\delta)$ with any $\alpha$, one can find constants $D,
c>0$, for which one has $$\label{eq:correlazioni_generico}
\left|\langle fg\rangle -\langle f\rangle\langle g\rangle \right|
\le D\left|V\right|^2 \left| \int_{\mathbb R^{|\tilde
V|}} f({\mathbf{x}})\langle h\rangle_{{\mathbf{x}}} P_{\tilde V} ({\mathrm{d}}{\mathbf{x}})\right| \exp\left(-c d(V,\tilde V)\right)\ .$$ The constant $D$ depends on $a,b,\alpha,\delta$ only, while one has $$\label{eq:lunghezza_correlazione}
c{\buildrel {\rm def} \over {=} }-\frac{1}{b{r}}\log\left[\frac 12\left(\max\{\alpha,\delta
a^b\}+1\right) \right] \ .$$
Estimate of the variance of $P_n$ {#sottosez:stima_sigma_P}
---------------------------------
The previous way of proceeding can be adapted to our case by choosing as specification the one given by the Gibbs measure relative to $H$. As $f$ and $g$, we choose polynomials in $p$ and $q$, depending on two disjoint sets of sites. In fact, on account of Lemma \[lemma:coeff\_nostro\_caso\], we know that the $P_n$ are constituted by a sum of such terms. This way we can study $\sigma_{P_n}$ and state that it is bounded from above as in the following
\[lemma:stima\_P\] There exist constants $\bar\beta>0$, $\bar\eps>0$, $k'>0$ such that, for any $\beta>\bar\beta$ and any $\eps<\bar\eps$, one has, for $n<1/\eps$, $$\label{eq:stima_P}
\sigma_{P_n} \le \sqrt N\,n!\left(k'\right)^n \mathcal{D}_n\left(
\sqrt 2\beta\right)^{-1}
\left(\eps+ \beta^{-1} \right)^n \ ,$$ where the polynomials $P_n$ are defined by (\[eq:definizione\_P\_n\]) and $\mathcal{D}_n$ are given in Lemma \[lemma:coeff\_nostro\_caso\].
[1ex ]{}**Proof.** Lemma \[lemma:coeff\_nostro\_caso\] provides the necessary estimates for the coefficients which appear in the sum defining $P_n$ (see (\[eq:scomposizione\]–\[eq:norma\_coeff\_P\])): we can write $
P_n=\sum_{i=1}^N f_i ,
$ where $f_i$ are polynomials depending at most on the sites between $i-n$ and $i+n$. The variance can be expressed as $
\sigma^2_{P_n}=\sum_{i,j=1}^N\left(\langle f_if_j\rangle-\langle
f_i\rangle\langle f_j\rangle\right).
$ We then consider the set $\mathcal{S}_1{\buildrel {\rm def} \over {=} }\{(i,j) : |j-i| \le
2n\}$ and $\mathcal{S}_2= \mathcal S_1^c$. The proof goes on by finding an upper bound separately for the contributions coming from these two sets: for the latter, we use the methods developed in the present section, while the terms of the former group are estimated in a way similar to that of Lemma \[lemma:stima\_P\_punto\].
We start from $\mathcal S_1$. Firstly, we observe that, in general, one has $$\left|\langle
f_if_j\rangle -\langle f_i\rangle \langle
f_j\rangle\right|\le \sigma_{f_i}\sigma_{f_j}
\le \max \left\{\langle f_i^2\rangle, \langle f_j^2\rangle\right\}\ ;$$ so that it suffices to evaluate $\sup_i\langle f_i^2 \rangle$ in a way similar to Lemma \[lemma:stima\_P\_punto\] and to sum over the $j$ with $|j-i|\le 2n$ to see that the contribution due to the terms in this set is smaller than $$\sum_{i,j\in \mathcal{S}_1}\left|\langle
f_if_j\rangle -\langle f_i\rangle \langle
f_j\rangle\right|\le\mathfrak{C}_1 K^{4n+1} (4n+1) (2n+1)!
(2\omega)^{2n} \beta^{-2}\left( \eps+\frac 1{\beta}\right)^{2n}N
\mathcal{D}^2_n\ .$$
We come now to $\mathcal S_2$. We will show below that the specification coming from the Gibbs measure satisfies the hypotheses of Theorem \[teor:correlazioni\_generico\], and we make use of it in estimating the terms in the set $\mathcal{S}_2$ in the following way. We separate the terms of a definite degree by writing $$f_i=\sum_{l=0}^n\frac{n!}{l!(n-l)!}\eps^{n-l}\sum_{s=1}^{|\mathcal{H}^{n-l,i}_{2l+2}|}
c_{is,l} f_{is}^{(l)}\ ,$$ in which $f_{is}^{(l)}$ is a monomial in $\mathcal{H}^{n-l,i}_{2l+2}$ and $\sup_{i,l}\sum_s |c_{is,l}|\le \mathcal D_n$. We fix an index $i$ and use Theorem \[teor:correlazioni\_generico\] with $f=f_i$ and $g=f_{js}^{(l)}$, for every $j\neq i$; then, we sum over $l$, $s$, $j$ and $i$, subsequently. For each $l$ we choose the compact function $h_l(x)$ as $|x|^{2l+2}$, which satisfies (\[eq:maggiorazione\_g\_h\]) for any $f_{js}^{(l)}$. We will show that, for $\beta$ large enough and $\eps$ small enough, one has $$\label{eq:somma_h}
\sum_{l=0}^{n} \langle
h_l\rangle_{{\mathbf{x}}} \le k^n n! \frac 1\beta\left(\eps+\frac
1\beta\right)^n\exp\left(\sum_{{j}=1}^{|\tilde
V|}\left(\frac {\beta\eps}{\omega} x_{j}^2+ 8\eps\sqrt {\frac
{\beta}{2\omega}} \left|x_{j}\right|
\right)\right)\ ,$$ for a suitable constant $k$. Then the sum on $s$ brings in a factor $\mathcal{D}_n$. As regards the integration in $\tilde V$, we observe that we can write $$\beta \frac {1-2\eps}{2} x_{j}^2- 8\eps\sqrt {\frac
{\beta}{2\omega}} \left|x_{j}\right|\ge \frac \beta4 x_{j}^2-1\ ,$$ provided $\eps$ is small enough. Thus, there exists $\bar k>0$ such that $$\left|\langle
f_if_j\rangle -\langle f_i\rangle \langle
f_j\rangle\right|\le \bar k^n (n!)^2 \mathcal D_n^2\frac 1{\beta^2}\left( \eps
+\frac 1\beta\right)^{2n} \exp\left[
-c(|i-j|-2n-1) \right]\ ,$$ where the constant $c$ is defined by (\[eq:lunghezza\_correlazione\]), and is, in the present case, equal to $\log(4/3)/2$. Since the sum over $j$ of such terms converges as $N\to \infty$, the proof will be concluded if we show that (\[eq:somma\_h\]) and the hypotheses of Theorem \[teor:correlazioni\_generico\] are satisfied.
We proceed as in the proof of Theorem 2 of paper [@do2], starting from the explicit form of the conditional probability distribution given by the Gibbs measure: one has $$P_{{i},{\mathbf{x}}}(x)= \frac 1{Z_{{\mathbf{x}}}} \exp\left[-\beta\left(
\frac{\omega x^2}{2}+\frac{x^4}{4\omega^2}+\frac \eps\omega x
\cdot x_{{i}-1} + \frac \eps\omega x \cdot x_{{i}+1}
\right)\right] \ ,$$ in which there appears the conditional partition function $$Z_{{\mathbf{x}}}{\buildrel {\rm def} \over {=} }\int_{\mathbb R} \exp\left[-\beta\left(
\frac{\omega x^2}{2}+\frac{x^4}{4\omega^2}+\frac \eps\omega x
\cdot x_{{i}-1} + \frac \eps\omega x \cdot x_{{i}+1}
\right)\right] {\mathrm{d}}x \ .$$
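As an aside, the single–site structure of the conditional measure can easily be explored numerically. The following minimal sketch, in Python, with purely illustrative values of $\beta$, $\eps$, $\omega$ and of the conditioning variables (none of them taken from the theorems), evaluates $Z_{{\mathbf{x}}}$ and the moments entering item 2 of Condition \[cond:compattezza\].

```python
import numpy as np
from scipy.integrate import quad

beta, eps, omega = 10.0, 0.01, 1.0       # illustrative parameters only
x_prev, x_next = 0.3, -0.2               # conditioning values q_{i-1}, q_{i+1}

def weight(x):
    # un-normalized conditional density prescribed by the Gibbs measure
    return np.exp(-beta * (omega * x**2 / 2 + x**4 / (4 * omega**2)
                           + eps / omega * x * (x_prev + x_next)))

Z_x, _ = quad(weight, -np.inf, np.inf)   # conditional partition function

def moment(l):
    # the integral of h_l(x) = |x|^{2l+2} with respect to P_{i,x}
    num, _ = quad(lambda x: abs(x)**(2 * l + 2) * weight(x), -np.inf, np.inf)
    return num / Z_x

for l in range(3):
    print(l, moment(l))                  # decays roughly like (const/beta)**(l+1)
```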
As regards Condition \[cond:compattezza\], we consider $\eps<2^{-5}$ fixed and define $$\hat
y({\mathbf{x}}){\buildrel {\rm def} \over {=} }\max \left\{\left| x_{{i}-1}\right|,
\left| x_{{i}+1} \right|\right\}\quad \mbox{and } y({\mathbf{x}}){\buildrel {\rm def} \over {=} }\min\left\{1/ \hat y({\mathbf{x}}),\sqrt{\beta}\right\}.$$ So, it is easily proved that, for $\beta>1$, inequality $Z_{{\mathbf{x}}}\ge \bar c y({\mathbf{x}})/\beta$, holds for some $\bar c>0$ indipendent of ${\mathbf{x}}$, $\beta$ and $\eps$. To prove it, it is sufficient to observe that the integrand of $Z_{{\mathbf{x}}}$ is bounded away from zero if $|x|\le
y({\mathbf{x}})/\beta$, and then to integrate over such an interval. We now show item 2 of Condition \[cond:compattezza\] for $h(x)=|x|^{2l+2}$. We note that $$\begin{aligned}
\int_{|x|\ge \hat y({\mathbf{x}})/4} h_l(x) P_{{i},{\mathbf{x}}}(x){\mathrm{d}}x &\le& \frac
2{Z_{{\mathbf{x}}}} \int_{1/(4 y({\mathbf{x}}))}^{+\infty} x^{2l+2}
\exp\left(-\beta\frac{ x^2}{4}\right){\mathrm{d}}x\\
&\le& \frac{2\sqrt \beta }{\bar c y({\mathbf{x}})}\frac
1{\beta^{l+1}}\int_{\sqrt\beta/(4y({\mathbf{x}}))}^{+\infty} x^{2l+2} e^{-x^2/4}{\mathrm{d}}x
\ ,\end{aligned}$$ which, in turn, is smaller than $(l+1)!(B/\beta)^{l+1} $ for a suitable constant $B$, independent of $\eps$, $\beta$ and $l$, since the integral decreases as an exponential function of $\sqrt{\beta}/y({\mathbf{x}})$. Here, we have chosen $(l+1)!$ so that the previous relation is satisfied independently of $l$. Furthermore, one has $$\int_{|x|\le \hat y({\mathbf{x}})/4} h_l(x) P_{{i},{\mathbf{x}}}(x){\mathrm{d}}x \le 2^{-4l-4}
\left(\hat y({\mathbf{x}})\right)^{2l+2}
$$ and $(h_l(x_{{i}-1})+h_l(x_{{i}+1}))/h_l(\hat y(x))\ge 1$, for any ${\mathbf{x}}$: this implies that $$\int_{\mathbb R} h_l(x)P_{{i},{\mathbf{x}}}({\mathrm{d}}x)\le
(l+1)!\left(\frac{B}{\beta}\right) ^{l+1}+
\frac 1{16} h_l\left(x_{{i}-1}\right)+\frac 1{16} h_l\left(
x_{{i}+1}\right) \ .$$ So, item 2 of Condition \[cond:compattezza\] holds with $ C= (l+1)!\left(B/\beta\right) ^{l+1}
$ and $c_1=c_{-1}=1/16$. Since we have $a=2$, $b=2$, ${r}=1$, item 1 of Condition \[cond:compattezza\] holds with $\delta
a^b=1/2$.
Condition \[cond:contratt\] is proved by computing two limits, first letting $\beta$ tend to infinity and then letting $\eps\to 0$. In fact, let ${\mathbf{x}}^{(m)}$, for $m=1,2$, be two different configurations such that $|x^{(m)}_{{i}-1}|\le
\tilde K/e \sqrt{Bl/(e\beta)}$ and $| x^{(m)}_{{i}+1}|\le\tilde
K/e\sqrt{Bl/(e\beta)}$. Then it is easily checked that $$\lim_{\beta\to \infty}\int_{\mathbb R}\left|P_{{i},{\mathbf{x}}^{(1)}}(x)-P_{{i},{\mathbf{x}}^{(2)}}(x)\right|{\mathrm{d}}x =\int_{\mathbb{R}}
{\mathrm{d}}z\, e^{-z^2} \left|f(\eps,z,{\mathbf{z}}^{(1)})-f(\eps,z,{\mathbf{z}}^{(2)})\right|\ ,$$ where $$f(\eps,z,{\mathbf{z}}^{(m)}){\buildrel {\rm def} \over {=} }\exp\left(-\frac{\eps}{\omega}z\left( z^{(m)}_{i-1}
+z^{(m)}_{i+1}\right) +\frac{\eps^2}{2\omega^2} \left(z^{(m)}_{i-1}+
z^{(m)}_{i+1}\right)^2 \right)$$ and $z=x\sqrt\beta$, $z^{(m)}_j=x^{(m)}_j\sqrt{\beta}$. Now, by the dominated convergence theorem, one has that the limit for $\eps\to 0$ of $f(\eps,z,{\mathbf{z}}^{(m)})$ is equal to 1. Here, use is made of the fact that $\eps |z^{(m)}_j|$ can be bounded from above by $\eps \tilde K/e
\sqrt{Bl/e}\le \tilde K/e \sqrt{B\eps/e}$, because $l\le n$, and $n$ is smaller than $1/\eps$, by hypothesis. So, for $\beta$ sufficiently large and $\eps$ small enough one has $$\int_{\mathbb R}\left|P_{{i},{\mathbf{x}}^1}(x)-P_{{i},{\mathbf{x}}^2}(x)\right|{\mathrm{d}}x \le \frac 14 \ .$$ We have chosen a bound to ${\mathbf{x}}^{(m)}$ of this particular form, because the constant $\bar{K}$ in Condition \[cond:contratt\] turns out to be smaller than $\tilde K^{2l+2}$, so that it is independent of $\beta$. So Condition \[cond:contratt\] holds with $\bar K=\tilde K^{2l+2}$, $k_{-1}=k_1=1/4$ and $\alpha=1/2$. Thus, Theorem \[teor:correlazioni\_generico\] holds.
It still remains to estimate $\langle h_l\rangle_{{\mathbf{x}}}$ in our case. By looking at its definition (\[eq:definizione\_h\_x\]), we notice that we have to estimate the integrals $\int h(y)
P_{{\mathbf{{i}}}}({\mathrm{d}}y|{\mathrm{d}}{\mathbf{x}})$ and $\int h(y) P_{{\mathbf{{i}}}}({\mathrm{d}}y)$. Now, on account of Lemma \[lemma:marginale\] and relation (\[eq:rapporto\_n\_n\_tilde\]), the densities of $P_{{\mathbf{{i}}}}({\mathrm{d}}y|{\mathrm{d}}{\mathbf{x}})$ and $P_{{\mathbf{{i}}}}({\mathrm{d}}y)$ can be bounded by $$ K^{2n+2}\mathfrak{C}_2\mathfrak{C}_1\sqrt{\frac{\beta}{2\pi\omega}}
\exp\left(\sum_{{j}=1}^{|\tilde
V|} \left(\frac {\beta\eps}{\omega} x_{j}^2+ 8\eps\sqrt {\frac
{\beta}{2\omega}} \left|x_{j}\right|\right)\!\right)\!e^{-\beta
y^2/(2\omega)}\ .$$ Then, we use the bound to $C$ previously found, together with the fact that $$\sqrt{\frac{\beta}{2\pi\omega}}\int_{\mathbb{R}} y^{2l+2}e^{-\beta
y^2/(2\omega)}{\mathrm{d}}y= \left(\frac{2\omega}{\beta}\right)^{l+1}
\frac{(2l+1)!!} {2^{l+1}},$$ and we get (\[eq:somma\_h\]). This concludes the proof.
Q.E.D.
Estimate of the variance of $X_n$ {#sottosez:stima_sigma_X}
---------------------------------
Lemma \[lemma:stima\_P\] of Subsection \[sottosez:stima\_sigma\_P\] enables us to bound from below the variance of $X_n$ defined by (\[eq:definizione\_X\_n\]) and to estimate the correlation coefficient $\rho_{X_n,H}$, as stated in Lemma \[lemma:stima\_correlazione\], which we prove here.
[1ex ]{}**Proof of Lemma \[lemma:stima\_correlazione\]** We start by recalling that, on account of (\[eq:definizione\_X\_n\]), one has $X_n=
-{\Theta}_1+ \sum_{j=2}^n P_j$, with ${\Theta}_1$ defined by equations (\[eq:determinazione\_chi\]–\[eq:determinazione\_Psi\]) of Section \[sez:telchi\]. It is easily seen that ${\Theta}_1=F+G+\mathcal{R}_1$, in which $$F{\buildrel {\rm def} \over {=} }-\frac{\eps
}{2 \omega} \sum_{i=1}^{N-1} p_ip_{i+1}\quad \mbox{and }G{\buildrel {\rm def} \over {=} }\frac 3{32
\omega^2} \sum_{i=1}^N p_i^4 \ ,$$ and $\mathcal{R}_1$ is the remainder. Then, we study the properties of $F$ and $G$, for which the mean value, the variance and the correlation with $H$ can be computed almost exactly, and we extend such properties to ${\Theta}_1$, and to the whole $X_n$, by observing that, in some sense, ${\Theta}_1$ is the term of first order in $\eps+\beta^{-1}$.
As regards formula (\[eq:minorazione\_sigma\_X\_n\]), we notice that, since $F$ is odd in the momenta, while $G$, $\mathcal{R}_1$ and the measure are even, $F$ is uncorrelated both with $G$ and with $\mathcal{R}_1$. Furthermore, one can observe that $$\left\langle
G\,R_1\right\rangle- \left\langle
G\right\rangle\left\langle R_1\right\rangle =
\frac{9}{2^{10} \omega^4} \sum_{i=1}^N \langle q_i^2\rangle \left(\langle
p_i^6\rangle-\langle p_i^4\rangle \langle p_i^2\rangle\right)\ ,$$ and use the estimates of Lemma \[lemma:marginale\] to bound from above $\langle q_i^2\rangle$, in order to prove that $ \sigma^2_{{\Theta}_1}\ge \sigma^2_F + \sigma^2_G +2C_{G,\mathcal R_1} \ge
N(\eps^2+\beta^{-2})/(8\beta^2)$, where the second inequality holds for $\eps$ and $\beta^{-1}$ small enough. On the other hand, making use of (\[eq:stima\_P\]) of Lemma \[lemma:stima\_P\] together with the estimate for $\mathcal{D}_n$ given by (\[eq:norma\_coeff\_P\]) of Lemma \[lemma:coeff\_nostro\_caso\], one has $$\sigma_{X_n} \ge \sigma_{{\Theta}_1} - \sum_{j=2}^n \sigma_{P_j} \ge
\sqrt{N}\,\frac {\eps+\beta^{-1}}{4\beta} \left(1-
\sum_{j=2}^n (j!)^4 \left(\eps+\beta^{-1}\right)^{j-1}
\kappa_2^j \right) \ ,$$ for a suitable constant $\kappa_2$, if $\beta^{-1}$ and $\eps$ are sufficiently small. Now, for $n<\kappa_2^{-1/4}(\eps+\beta^{-1})^{-1/4}$, the sum is smaller than a constant multiplied by $\eps+\beta^{-1}$ and this proves (\[eq:minorazione\_sigma\_X\_n\]).
As for (\[eq:stima\_correlazione\]), we observe that, since $H$ is even in the momenta, $F$ and $H$ are uncorrelated, so that, using $\left|\rho_{X,Y}\right|\le 1$, one gets $$\left|\rho_{X_n,H}\right| \le \frac
1{\sigma_{X_n}\sigma_H}\left(\left|C_{{\Theta}_1 -F,H}
\right|+\sum_{j=2}^{n} \left|C_{P_j,H} \right|\right) \le
\frac{\sigma_{{\Theta}_1-F}}{\sigma_{{\Theta}_1}}\frac{\sigma_{{\Theta}_1}}{\sigma_{X_n}} +
\frac{\sum_{j=2}^n\sigma_{P_j}}{\sigma_{X_n}}\ .$$ As we have just shown, for $n<\kappa_2^{-1/4}(\eps+\beta^{-1})^{-1/4}$ the last term at the r.h.s. tends to zero as $\eps +\beta^{-1}$, and $\sigma_{{\Theta}_1}/\sigma_{X_n}-1$ behaves in the same way. So, we limit ourselves to studying $\sigma_{{\Theta}_1-F}/\sigma_{{\Theta}_1}=
1/\sqrt{1+\sigma^2_F/\sigma^2_{{\Theta}_1-F}}$. By computing explicitly $\sigma^2_F$ and applying the upper bound (\[eq:stima\_P\]) to $\sigma^2_{{\Theta}_1}\ge
\sigma^2_{{\Theta}_1-F}$, we get that there exists a constant $\bar
\kappa\ge 1$ such that $$\frac{\sigma_{{\Theta}_1-F}}{\sigma_{{\Theta}_1}}\le \left(1+\frac
1{\kappa}\frac{\eps^2}{\left(\eps+ \beta^{-1} \right)^2}
\right)^{-1/2}\ .$$ Since the r.h.s. differs from 1 by a quantity larger than $\eps^2\beta^2$, the corrections given by the other terms can be neglected if $\beta\ge \eps^{-1}$. This completes the proof.
Q.E.D.
Relation between stability estimates and relaxation times {#sez:definizione}
=========================================================
In the present section we discuss which implications the existence of an adiabatic invariant has in the framework of ergodic theory. The main point is that it can provide a lower bound to the relaxation time to equilibrium. Since there is no agreement in the literature on the definition of relaxation time, we will give here a mathematically clear form to such a concept. To this end, we need a preliminary discussion of an a priori bound to the time autocorrelations (see Theorem \[teor:mix\]). This is provided in Subsection \[sottosez:rilassamento\]. Then, in Subsection \[sottosez:mescolamento\], we define the concept of relaxation time.
Relaxation times and time correlations {#sottosez:rilassamento}
--------------------------------------
One of the open problems in statistical mechanics is that of thermalization, i.e., to establish whether a system, starting from a given microscopic state, does attain thermodynamic equilibrium, and, if this is the case, to estimate the time scale needed to reach it. Such a time scale is usually called the *relaxation time*. From a physical point of view the situation is complicated, because certain systems, for example gases, reach equilibrium on a very short time scale, while others, for example glasses, are believed to reach equilibrium on geological time scales.
Linear response theory (see [@green; @kubo]) shows that susceptibilities can be expressed in terms of the time autocorrelations of suitable dynamical variables (namely, those conjugated to the perturbing field). In particular, the susceptibilities assume the equilibrium values only for measurements which last a time large enough, i.e., larger than the time needed by the time autocorrelations to become negligible.
Now, the time correlations between pairs of dynamical variables are widely studied in the case of chaotic systems (see, for example, [@liverani; @keller-liv] or the monograph [@chernov]). For such systems, the correlations are known to tend to zero, as $t\to \infty$, and one of the problems is to estimate the decay rate, for long times, of the time autocorrelations of all dynamical variables. This however amounts to giving an upper bound to the time autocorrelations. From the standpoint of linear response theory, it is also significant to bound from below the time autocorrelations of suitably chosen dynamical variables, because this leads to a lower bound to the relaxation time. Corollary \[cor:autocorr\] of Theorem \[teor:main\] gives an estimate of such a kind, showing that, for the system here considered, the relaxation time is larger than a constant $\bar t$, which is exponentially large in the perturbation parameters (and moreover does not depend on the number of degrees of freedom of the system). Results analogous to Corollary \[cor:autocorr\] are of a general type. In fact one has
\[teor:mix\] Suppose that, for a dynamical variable $X$, there exists a constant $\eta > 0$ such that $$\label{eq:ipt}
\left\| [X,H] \right\| \le \eta \sigma_X \ ;$$ then one has $$\label{eq:mixst}
C_X(t) \ge 1- \frac 12 \eta^2 t^2\ .$$
[1ex ]{}**Remark.** This theorem is a slight modification of Theorem 1 of [@carati] and is proved in the same way. On the other hand, we think that the decision to focus on the time autocorrelation, which we make here at variance with paper [@carati], is crucial, if one aims at obtaining significant estimates in the thermodynamic limit. [1ex ]{}
[1ex ]{}**Proof.** Introduce the difference $\delta {\buildrel {\rm def} \over {=} }X_t - X$. As $X_t$ satisfies the Liouville equation and $X$ is time–independent, one has $\partial_t \delta =$ $ \partial_t X_t = - [H,X_t]$, which in terms of $\delta$ takes the form $$\label{eq3}
\partial_t \delta = - [H,\delta] + Y \ ,$$ with $Y {\buildrel {\rm def} \over {=} }- [H,X]$. It is well known that, $\mu$ being invariant, the solutions of the Liouville equation are generated by a one–parameter group $\hat U(t)$ of unitary operators in the sense that $X_t = \hat
U(t) X $. As $\delta(0)=0$, the solution of equation (\[eq3\]) is given by $$\delta = \int_0^t \hat U(t-s) Y {\mathrm{d}}s \ ,$$ so that, $\hat U$ being unitary, one gets the estimate $$\left\|\delta \right\|\le \int_0^t \left\|\hat U(t-s) Y\right\| {\mathrm{d}}s = t
\left\|Y\right\| \le \eta t \sigma_X\ .$$ Then, one gets the thesis by using the simple identity $$C_X(t) =1 - \frac{\left\| X_t - X\right\|^2}{2\sigma^2_X}\ .$$
Q.E.D.
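As a simple illustration of Theorem \[teor:mix\], which plays no role in the sequel, consider a single harmonic oscillator with Hamiltonian $H=\omega(p^2+q^2)/2$, endowed with the corresponding Gibbs measure, and take $X=q$. Then $[q,H]=\omega p$, whose norm is $\omega\sigma_p=\omega\sigma_q$, so that one can take $\eta=\omega$; on the other hand, the flow gives $q_t=q\cos\omega t+p\sin\omega t$, whence $$C_q(t)=\cos\omega t\ge 1-\frac{\omega^2 t^2}2\ ,$$ so that in this case the bound (\[eq:mixst\]) is sharp for small times.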
Definition and evaluation of relaxation times {#sottosez:mescolamento}
---------------------------------------------
Taking into account the relation between susceptibilities and time autocorrelations, it is meaningful to introduce a parameter $a=a(t)$, with values in $[0,1]$, which estimates how close the system is to equilibrium after a finite time $t$. So, for the set $\mathcal{B}\subset L^2\cap \mathcal{C}^{\infty}$ of the dynamical variables uncorrelated with $H$, we define
The correlation level $a(t)$ at time $t$ is defined as $
a(t){\buildrel {\rm def} \over {=} }\sup_{X\in \mathcal{B}}\left|C_X(t) \right|
$.
[1ex ]{}**Remark.** One can limit oneself to the smooth observables, because these are the physically relevant ones.[^7] [1ex ]{}
In the chaotic case $a$ tends to 0 as $t\to \infty$: thus, looking for the decay to zero of the correlations is equivalent to looking at the behaviour of $t(a)$, the inverse function of $a(t)$, as $a$ tends to zero.[^8] On the other hand, according to linear response theory, it is more meaningful to look at the time after which the correlations are below a certain threshold. So, we introduce the following notion
\[def:2\] The relaxation time relative to level $a$ is defined as $t(a){\buildrel {\rm def} \over {=} }\inf t^*(a) ,$ where $t^*(a)$ is such that $$\sup_{X\in\mathcal{B}}\left|C_{X}(t) \right|
\le a \quad \mbox{for all } t\ge t^*(a)\ .$$
[1ex ]{}**Remark.** In order to provide a significant lower bound to the relaxation time $t(a)$ at level $a$, as previously defined, it is clearly sufficient to find the time at which the autocorrelation of at least one dynamical variable $X$ uncorrelated with the Hamiltonian is certainly larger than $a$. Now, the following corollary on the relaxation time descends immediately from Theorem \[teor:mix\]:
\[cor:mix\] Suppose there exists a dynamical variable $X\in\mathcal{B}$ and a constant $\eta > 0$ such that $\left\| [X,H] \right\| \le \eta \sigma_X$; then one has $$t(a)\ge \sqrt{2(1-a)} \,\frac 1 \eta\ .$$
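Indeed, on account of (\[eq:mixst\]), one has $C_X(t)\ge 1-\eta^2t^2/2>a$ for every $t<\sqrt{2(1-a)}\,/\eta$, so that, by Definition \[def:2\], no admissible $t^*(a)$ can be smaller than $\sqrt{2(1-a)}\,/\eta$.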
The point is that many Hamiltonian systems of interest for Solid State Physics reduce to integrable ones in some limit, while, on the other hand, for integrable systems one has $t(a)=+\infty$ for any $a<1$, since their integrals of motion remain correlated for all times. The question is then what the behaviour of such systems is when the perturbation is small, i.e., one is led to study the ergodic properties of slightly perturbed (or nearly integrable) Hamiltonian systems. It is natural to think that there exists a sort of continuity as the perturbation diminishes. Continuity can in fact be recovered in terms of the time needed for the system to reach thermalization (i.e., a sufficiently low correlation level). This is indeed the case in the system we have considered, because we can say that, as a consequence of Theorem \[teor:main\], one has the lower bound $$t(a) \ge \sqrt{2(1-a)}\,\frac{\eps}{\kappa} \exp\left[\left(\frac
1{\kappa \left(\eps+\beta^{-1}\right)}\right)^{1/4}\right]\ ,$$ which goes to infinity as both $\eps$ and $\beta^{-1}\to 0$.
Conclusions {#sez:conclusione}
===========
In this paper, we have constructed, for the Klein Gordon lattice, an adiabatic invariant, i.e., a dynamical variable whose time derivative is small as a stretched exponential in the perturbation parameters. Thus our result is similar to those which are known in Hamiltonian perturbation theory in the case of a finite number of degrees of freedom or in the case of an infinite number of them, but at a fixed total energy (see [@partfinite; @enfinita]). The new feature of the present work is that our theorem remains valid in the thermodynamic limit, because the given bound turns out to be independent of the number of particles, and depends only on intensive quantities. As a corollary, we bound from below the stability time of such a model.
We now add some comments. The first one concerns the fact that in our model we have two perturbation parameters, $\eps$ and $1/\beta$. We believe however that the only really relevant parameter is $1/\beta$. Indeed, at least formally the parameter $\eps$ can be arbitrarily decreased by performing a suitable normal form change of coordinates ([@giorg2]), and it seems to us that such a normal form does not alter any fundamental feature of the perturbation $H_1$ (i.e., its local character). At the moment, however, we are unable to say anything definite on this point.
In any case, while the estimate of Section \[sez:condizionata\] could presumably be applied to models more general than that studied in this paper, the estimates on the marginal probabilities of Section \[sez:marginale\] are especially adapted to our model. It would be important to improve our method, making it more flexible in order to cover more general situations.
In our opinion, the major open problem that remains is the construction of adiabatic invariants for problems in which small denominators appear. This problem has been overcome in particular cases (see, for example, paper [@carati]), but no precise strategy exists yet for the general case. We plan to tackle this problem soon.
### Acknowledgements {#acknowledgements .unnumbered}
We warmly thank Prof. A. Giorgilli for suggestions regarding the construction of the adiabatic invariant by making use of the operator $T_\chi$, and the definition of the norms of Section \[sez:telchi\].
We also thank Prof. L. Galgani and D. Bambusi for a careful reading of the manuscript, useful comments and lucid discussions.
Estimates for the construction of the adiabatic invariant {#app:coeff}
=========================================================
Here we intend to prove Lemma \[lemma:coeff\] of Section \[sez:telchi\]. In order to do that, we need to recall the usual algebraic properties used in perturbation theory (see [@giorg]), adapted to our norm $\|\cdot\|_+$, defined by (\[eq:norma\_coeff\]). Such properties are stated in Lemmas \[lemma:passaggio\_complesse\]–\[lemma:proiezioni\] below; then the proof of Lemma \[lemma:coeff\] is briefly sketched.
In order to develop the perturbation theory, a primary role is played by the action of the operator $L_0$ and by the projections on its kernel and its range (see Section \[sez:telchi\]), and these are more easily discussed in terms of the complex variables which diagonalize $L_0$. These are implicitly defined by $$\label{eq:coord_complesse}
q_l=\frac{1}{\sqrt{2}}(\xi_l+i\eta_l)\ , \quad p_l=\frac{1}{
\sqrt{2} }(\xi_l-i\eta_l)\ ,\quad 1\le l\le N$$ and in such variables one has $$\label{eq:autovalori}
L_0\xi^j\eta^k=i\omega(|k|-|j|)\xi^j\eta^k\ .$$
We must, however, take into account the fact that the norm $\|\cdot\|_+$ is not invariant under such a change of coordinates. In fact, such a norm is formally well defined also for polynomials depending on the variables $(\xi,\eta)$ if, in the definition of $\mathcal{H}^{r,i}_s$ and $\mathcal{P}_{s,r}$, we simply substitute for $(p,q)$ the pair $(\xi,\eta)$. In that case, denoting by $f'$ the transform of $f$ via (\[eq:coord\_complesse\]), one will have, in general, $\|f\|_+\neq \|f'\|_+$. On the other hand, the following lemma, whose proof is identical to that of Lemma A.1 of paper [@giorg], enables one to estimate the difference between the norms of the two functions.
\[lemma:passaggio\_complesse\] Let $f(q,p)$ be in $\mathcal{P}_{s,r}$ and let $f'(\xi,\eta)$ be the transform of $f$ via (\[eq:coord\_complesse\]). Then, one has $f'\in
\mathcal{P}_{s,r}$ and $ \left\|f'\right\|_+\le 2^\frac s2 \left\|f\right\|_+.$ Moreover, let $g'(\xi,\eta)$ be in $ \mathcal{P}_{s,r}$ and let $g(q,p)$ be the transform of $g$ via the inverse of (\[eq:coord\_complesse\]). Then, one has $g\in
\mathcal{P}_{s,r}$ and $
\left\|g\right\|_+\le 2^\frac s2 \left\|g'\right\|_+.
$
We need also the following lemmas
\[lemma:par\_Poisson\] Let $f$ be in $\mathcal{P}_{s,r}$ and $g$ in $\mathcal{P}_{s',r'}$. Then, $[f,g]\in\mathcal{P}_{s+s'-2,r+r'}$ and one has, both in real and in complex variables, the inequality $$\left\|[f,g]\right\|_+\le (2r+2r'+1)s s'\left\|f\right\|_+ \left\|g
\right\|_+\ .$$
[1ex ]{}**Proof.** See Lemma A.2 of [@giorg], noticing that, for any fixed $i$, each term of $f$ contained in $\mathcal{H}^{r,i}_s$ has Poisson bracket different from 0 only with the monomials of $\mathcal{H}^{r',k}_{s'}$ such that $|i-k|\le r+r'$. The number of such monomials appearing in the decomposition of $g$ is smaller than $2r+2r'+1$.
Q.E.D.
\[lemma:proiezioni\] Let $f\in\mathcal{P}_{s,r}$ be a polynomial in complex variables. Then $\Pi_\mathcal{N}f$, $\Pi_\mathcal{R}f$ and $L_0^{-1}\Pi_\mathcal{R}f$ belong to $\mathcal{P}_{s,r}$ and the following inequalities hold: $$\left\|\Pi_\mathcal{N}f\right\|_+ \le
\left\|f\right\|_+\ , \quad
\left\|\Pi_\mathcal{R}f\right\|_+ \le \left\|f\right\|_+
\ ,\quad
\left\|L_0^{-1}\Pi_\mathcal{R} f\right\|_+ \le \left\|
f\right\|_+ \ .$$
[1ex ]{}**Proof.** The fact that $L_0^{-1}f$ belongs to $\mathcal{P}_{s,r}$ comes directly from Lemma \[lemma:par\_Poisson\], as $H_0$ is in $\mathcal{P}_{2,0}$. The remaining statements are a consequence of the fact that $L_0$ is diagonal in complex coordinates and that the smallest eigenvalue of $L_0$ on $\mathcal{R}$ has modulus $\omega \ge 1$, in virtue of (\[eq:autovalori\]).
Q.E.D.
[1ex ]{}**Proof of Lemma \[lemma:coeff\]** We pass to complex variables via Lemma \[lemma:passaggio\_complesse\] and proceed by induction on $n$, also checking at each step the two following supplementary inductive hypotheses:
- $\Psi_n$ can be decomposed as $
\Psi_n=\sum_{l=0}^n \Psi_n^{(l)} ,$ where $\Psi_n^{(l)}\in \mathcal{P}_{2l+2,n-l}$;
- the following bound holds $$\left\|\Psi_n^{(l)}\right\|_+\le 2^n2^{10(n-1)}
\left(n!\right)^2(n-1)!\frac{n!}{l!(n-l)!}\eps^{n-l}\ .$$
As a matter of fact, on account of Lemma \[lemma:proiezioni\], such an estimate enables one to control the contributions due to $\chi_n$ and ${\Theta}_n$, which appear in the recurrent procedure that determines $\chi_s$, for $s\ge n$. Then, we come back to real variables via Lemma \[lemma:passaggio\_complesse\] again.
Q.E.D.
Technical proofs
================
Proof of Lemma \[lemma:periodico\] {#app:dim_lemma_periodico}
----------------------------------
We start by proving formula (\[eq:maggiorazione\_frazione\_Q\_N\]). On account of the symmetry of the periodic system, one can pass from a system with $N-1$ particles to one with $N$ by inserting one more particle after the $i$–th site, for $i=1,\ldots,N-1$. The potential energy of the corresponding system is given by $$U_N(q_1,\ldots,q_N,q)= U_{N-1}-\frac
\eps{2\omega}\left(q_{i+1}-q_i\right)^2 + \frac \eps{2\omega}
\left(q-q_i\right)^2+ \frac \eps{2\omega}\left(q- q_{i+1}\right)^2+
\frac {q^2}{2\omega}+ \frac{q^4}{4\omega^2}\ .$$ Neglecting the second term at the r.h.s. (which gives a contribution to the partition function that can be bounded from below by 1), and averaging over $i$ in order to get a translationally invariant system, one gets $$\label{minorazione_1}
\begin{split}
\frac{Q_N}{Q_{N-1}} \ge& \frac{1}{N-1}\sum_{i=1}^{N-1}
\int_{-\infty}^{+\infty}{\mathrm{d}}q_1\ldots
\int_{-\infty}^{+\infty} {\mathrm{d}}q_{N-1}\,\tilde{D}_{N-1}(q_1,\ldots,q_{N-1})
\times \\
&\times \int_{-\infty}^{+\infty} \!\!\!{\mathrm{d}}q
\exp\!\left[\!-\frac{\beta}{2\omega} \left(\!
q^2+\frac{q^4}{2\omega}+ \eps(q-q_i)^2+\eps(q-q_{i+1})^2\right)\!\right]\ .
\end{split}$$ Here we have put $q_N=q_1$. Then, we introduce the function $\varphi_{q_i}(q){\buildrel {\rm def} \over {=} }1-\exp\left[-\beta\eps(q-q_i)^2/(2\omega)\right], $ for which the inequality $$\exp\left[-\frac{\beta\eps}{2\omega}(q-q_i)^2-\frac{\beta\eps}{2\omega}
(q-q_{i+1})^2\right]\ge 1
-\varphi_{q_i}(q) -\varphi_{q_{i+1}}(q)$$ holds. We will now show that $\varphi_{q_i}(q)$ is small except for a set of small measure. Making use of the previous inequality, relation (\[minorazione\_1\]) becomes $$\label{minorazione_2}
\begin{split}
\frac{Q_N}{Q_{N-1}} \ge a(\beta,\eps)- \int_{-\infty}^{+\infty}{\mathrm{d}}q_1\ldots \int_{-\infty}^{+\infty} {\mathrm{d}}q_{N-1}\,\tilde D_{N-1}(q_1,\ldots,q_{N-1})\times\\ \times
\int_{-\infty}^{+\infty} {\mathrm{d}}q\, \frac{2}{N-1}\sum_{i=1}^{N-1}
\varphi_{q_i}(q)\exp\left[-\frac{\beta}{2\omega}\left(q^2+\frac{q^4}{2
\omega}\right)\right]
\ ,
\end{split}$$ in which the function $a(\beta,\eps)$ is defined by[^9] $$a(\beta,\eps){\buildrel {\rm def} \over {=} }\int_{-\infty}^{+\infty}{\mathrm{d}}q\exp\left[-\frac{\beta}{2\omega}\left(
q^2+\frac{q^4}{2\omega} \right)\right] =\frac{\sqrt{2\omega}
\, e^\frac{\beta}{8}} {2}
K_{\frac{1}{4}} \left(\frac{ \beta}{8} \right)\ ,$$ where $K_\alpha (x)$ is the modified Bessel function of the second kind. The well known properties of $K_\alpha (x)$ imply that $a(\beta,\eps)$ can be written as $a(\beta,\eps) = G(\beta,\eps)\sqrt{2\pi\omega/\beta},$ where $G$ is a function always smaller than 1, approaching 1, at fixed $\eps$, as $\beta\to+\infty$. We go on by dealing with the integral in (\[minorazione\_2\]), first giving an upper bound for the innermost integral over $q$. We estimate it by splitting the phase space of the periodic system of $N-1$ particles into two sets: we will fix $\kappa>0$ and consider $\Omega(N-1,\kappa)$, which is defined by $$\label{eq:definizione_Omega}
\Omega(N-1,\kappa){\buildrel {\rm def} \over {=} }\left\{(q_1,\ldots,q_{N-1}) \mbox{ such that }
\sum_{i=1}^{N-1}q_i^2< \frac{2\omega}{\beta}\kappa(N-1)\right\} \ ,$$ and its complement. In the latter set, the integral is simply bounded from above by $2a(\beta,\eps)$. On the other hand, in order to estimate the integral in the set $\Omega(N-1,\kappa)$, we observe that, for any $\kappa_1$, the number of particles for which $\left|q_i\right|\ge \sqrt{2\kappa_1\kappa\omega/\beta}$ holds cannot exceed $(N-1)/\kappa_1$. For these particles the integral is estimated again by $2a(\beta,\eps)$. For the purpose of estimating the contribution of the remaining particles, we introduce the function $$I(\beta,\eps,\kappa,\kappa_1) {\buildrel {\rm def} \over {=} }\frac{1}{a(\beta,\eps)}
\sup_{\left|y\right|<\sqrt{2\kappa_1\kappa\omega/\beta}}
\int_{-\infty}^{+\infty}
\varphi_y(q)\exp\left(-\frac{\beta}{2\omega}q^2\right)\, {\mathrm{d}}q\ .$$ We point out that $I(\beta,\eps,\kappa,\kappa_1)$ tends to 0 as $\eps$ tends to 0, for $\beta,\kappa,\kappa_1$ fixed. Then, in the region $\Omega(N-1,\kappa)$, for any $\kappa_1> 1$, one has the bound $$\int_{-\infty}^{+\infty} \!\!{\mathrm{d}}q\, \frac{2}{N-1}\!\sum_{i=1}^{N-1}
\varphi_{q_i}(q)\exp\left[-\frac{\beta}{2\omega}\!\left(q^2+\frac{q^4}{2
\omega}\right)\right] \!
\le\!
\left [\frac 2{\kappa_1} + 2 I(\beta,\eps,\kappa,\kappa_1)\right]\!
a(\beta,\eps) \ .$$ We notice that we have provided estimates independent of $q_i$, so the integrals over $q_1,\ldots,q_{N-1}$ appearing in (\[minorazione\_2\]) can simply be estimated as the product of these upper bounds times the measures of the sets in which the bounds hold. Now, we observe that the measure of $\Omega^c(N-1,\kappa)$ is estimated by $$\int_{\Omega^c(N-1,\kappa)}{\mathrm{d}}q_1\ldots {\mathrm{d}}q_{N-1}\,\tilde
D_{N-1}(q_1,\ldots,q_{N-1}) \le
\frac {R_{N-1}(\beta,\kappa)}{Q_{N-1}}$$ where the function $R_{N-1}(\beta,\kappa)$ is defined by $$\label{eq:definizione_resto}
\begin{split}
R_{N-1}(\beta,\kappa) &{\buildrel {\rm def} \over {=} }\int_{\Omega^c(N-1,\kappa)} {\mathrm{d}}q_1\ldots
{\mathrm{d}}q_{N-1}\, \exp \left(
-\frac{\beta}{2\omega}\sum_{i=1}^{N-1} q_i^2\right)\\
&= \left(\frac{2\pi\omega}{\beta} \right)^\frac{N-1}{2} \Gamma\left(
\frac{N-1}{2},\kappa (N-1)\right)
\end{split}$$ and $\Gamma(s,x)$ is defined by (\[eq:gamma\_reg\]). This way one obtains, finally, $$\label{minorazione_frazione_finale}
\frac{Q_N}{Q_{N-1}}\ge \left(1-\frac 2{\kappa_1}-2I(\beta,\eps,\kappa,\kappa_1)-
2\frac{R_{N-1}(\beta,\kappa)}{Q_{N-1}}\right) a(\beta,\eps)
\ .$$ From this expression one can prove (\[eq:maggiorazione\_frazione\_Q\_N\]) by induction on $N$.
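As a side remark, the closed form for $a(\beta,\eps)$ used above can be checked numerically; the following minimal Python sketch, with illustrative values of $\beta$ and $\omega$ (not needed for the proof), compares the integral, the Bessel expression and the Gaussian limit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

omega, beta = 1.0, 10.0                  # illustrative values only
integral, _ = quad(lambda q: np.exp(-beta / (2 * omega)
                                    * (q**2 + q**4 / (2 * omega))),
                   -np.inf, np.inf)
closed_form = np.sqrt(2 * omega) * np.exp(beta / 8) * kv(0.25, beta / 8) / 2
gaussian_limit = np.sqrt(2 * np.pi * omega / beta)   # large-beta behaviour
print(integral, closed_form, gaussian_limit)         # first two agree, third is close
```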
We now come to the proof of (\[eq:minorazione\_P\_N\]). We make use of the trivial inequality $
\mbox{\bf{P}}(A_1\cap \ldots \cap A_r)\ge 1-\sum_{i=1}^r
\mbox{\bf{P}}(A_i^c)
$, which holds for any probability and any collection of sets $A_1,\ldots,A_r$. Consequently, we obtain $$\mbox{\bf{P}}_N\left(|q_1|\!<\Theta\sqrt{\frac{2\omega}\beta}
\;\wedge\;\ldots\;\wedge\; |q_r|\!<\! \Theta\sqrt{\frac{2\omega}
\beta} \right) \ge 1-r\cdot \mbox{\bf{P}}_N \left(|q_1| \!\ge\!
\Theta\sqrt{\frac{2\omega} \beta}\right)\ ,$$ because, due to the translation invariance of the periodic system, every set has the same measure. Recall that ${\bf{P}}_N \Big (|q_1| \!\ge\!
\Theta\sqrt{2\omega/\beta}\Big)$ is just the integral of $\tilde D_N$ times ${\bf{1}}_{|q_i|\ge\Theta \sqrt{2\omega/\beta}}$. A bound to this integral can be found proceeding as above, i.e., by symmetrizing on $q_i$, fixing $\kappa>0$ and integrating separately over $\Omega(N,\kappa)$ and its complement (recall that $\Omega(N,\kappa)$ is defined by (\[eq:definizione\_Omega\])). This way we get $$\begin{aligned}
\mbox{\bf{P}}_N\left(|q_1| \ge \Theta\sqrt{\frac{2\omega}
\beta}\right) &\le& \frac 1{NQ_N}\sum_{i=1}^N \int_{\Omega(N,\kappa)}
{\bf{1}}_{|q_i|\ge\Theta \sqrt{2\omega/\beta}} \tilde
D_N(q_1,\ldots,q_N)\\
& &
+ \frac 1{Q_N}R_N(\beta,\kappa)\ ,\end{aligned}$$ where $R_N$ is defined by (\[eq:definizione\_resto\]), and we bound ${\bf{1}}_{|q_i|\ge\Theta \sqrt{2\omega/\beta}}$ by 1 in $\Omega^c(N,\kappa)$. It is straightforward to notice that the number of sites for which $|q_i|\ge \Theta\sqrt{2\omega/\beta}$, in the interior of $\Omega(N,\kappa)$, cannot exceed $N \kappa/\Theta^2$. Therefore, the former term at the r.h.s. of the previous formula is smaller than $1/(4r)$ if $\Theta\ge 2\sqrt{\kappa r}$. As far as the latter is concerned, we can choose $\kappa$ such that $R_N(\beta,\kappa)/Q_N\le 1/(4r),
$ as we have shown above. For example, we can fix $\kappa = \log
(4rK_0)$. This suffices to infer that, for $\Theta\ge 2\sqrt{r \log (4r K_0)}$, (\[eq:minorazione\_P\_N\]) is valid.
Proof of Lemma \[lemma:rapporto\_Z\_Q\] {#app:dim_lemma_rapporto}
---------------------------------------
The first inequality in (\[eq:minorazione\_rapporto\_Z\_Q\]) comes directly from the fact that the integrand appearing in the definition of $Q_M$ is smaller than the function $n_{M,{{\mathfrak{x}}}}$, i.e., the integrand in the definition of $\bar{\mathcal{Q}}^{{\mathfrak{x}}}_M$.
As regards the second inequality in (\[eq:minorazione\_rapporto\_Z\_Q\]), we note that the integrand of $Q_M$ is equal to the one of $\bar Z^{{\mathfrak{x}}}_M$ multiplied by ${{\mathfrak{x}}}$ terms of the form $\exp(-\beta\eps q_{m_i} q_{m_{i+1}}/\omega)$ at the sites, in number $2{{\mathfrak{x}}}$, on the boundary of the blocks, which we denote by $m_1,\ldots,m_{2{{\mathfrak{x}}}}$, with the convention that $m_{2{{\mathfrak{x}}}+1}=m_1$. Then, we integrate only in the region in which the $q$ coordinate of each of these sites is smaller than $\Theta\sqrt{2\omega/\beta}$, with $\Theta=2\sqrt{2{{\mathfrak{x}}}\log (8{{\mathfrak{x}}}K_0)}$, and we observe that $$\begin{aligned}
\bar Z^{{\mathfrak{x}}}_M &\ge& \exp\left(-4{{\mathfrak{x}}}\eps \Theta^2\right) Q_M
\mbox{\bf{P}}_M\left(|q_{m_1}|\!<\!\Theta\sqrt{2\omega/\beta}
\wedge\ldots\wedge |q_{m_{2{{\mathfrak{x}}}}}|\!<\!
\Theta\sqrt{2\omega/\beta} \right) \\
&\ge& \frac{Q_M}{2}\left(8{{\mathfrak{x}}}K_0\right)^{-32 \eps_0 {{\mathfrak{x}}}^2}\ .\end{aligned}$$ Here, $Q_M$ comes from the normalization of the probability, and in the second line use is made of Lemma \[lemma:periodico\].
We come now to inequalities (\[eq:maggiorazione\_rapporto\_Z\_Q\]) and observe at once that the first one is trivial, because, on account of the identity in (\[eq:rapporto\_n\_n\_tilde\]), one has $\tilde n_{M,{{\mathfrak{x}}}}\le
n_{M,{{\mathfrak{x}}}}$. The second one is more complicated: we begin by proving it in the case in which each block is constituted by an even number of elements.
In order to estimate $\mathcal{Q}^{{\mathfrak{x}}}_M$, we divide again the phase space of the system in the region $\tilde\Omega$ in which $|q_{m_1}|<\sqrt{2\omega\kappa/
\beta},\ldots,|q_{m_{2{{\mathfrak{x}}}}}| < \sqrt{2\omega\kappa/
\beta}$, where $\kappa>0$ is a constant to be determined, and in its complement $\tilde\Omega^c$. The integral over $\tilde\Omega$ is smaller than $ Q_M\cdot\exp\left(4{{\mathfrak{x}}}\eps\kappa \right),$ while, as regards the complement, we notice that it is contained in the set in which $\sum_{i=1}^{2{{\mathfrak{x}}}} q_i^2\ge
2\kappa\omega/\beta$. Thus, the integral over such a region is bounded from above by $
\mathcal{Q}^{{{\mathfrak{x}}}_1}_{M-2{{\mathfrak{x}}}}\left(2\pi\omega/\beta\right)^{{\mathfrak{x}}}\Gamma({{\mathfrak{x}}},\kappa)$, with ${{\mathfrak{x}}}_1\le{{\mathfrak{x}}},$ where we have dropped some positive term in the potentials, then we have integrated first over $q_{m_1},\ldots,q_{m_{2{{\mathfrak{x}}}}}$ (which gives the term $\left( 2\pi\omega/\beta\right)^{{\mathfrak{x}}}\Gamma({{\mathfrak{x}}},\kappa)$, with $\Gamma({{\mathfrak{x}}},\kappa)$ defined by (\[eq:gamma\_reg\])); then the blocks made of just 2 particles disappear, so that the integration over the remaining positions gives the term $ \mathcal{Q}^{{{\mathfrak{x}}}_1}_{M-2{{\mathfrak{x}}}}$. This way we get $$\mathcal{Q}_M^{{\mathfrak{x}}}\le Q_M\cdot\exp\left(4{{\mathfrak{x}}}\eps
\kappa\right) +\left( \frac {2\pi\omega}\beta\right)^{{\mathfrak{x}}}\Gamma({{\mathfrak{x}}},
\kappa) \cdot\mathcal{Q}_{M-2{{\mathfrak{x}}}}^{{{\mathfrak{x}}}_1}\ ,\quad\mbox{with}\quad
{{\mathfrak{x}}}_1\le{{\mathfrak{x}}}\ .$$
Now, we apply the previous inequality to the function $\mathcal{Q}_{M-2{{\mathfrak{x}}}}^{{{\mathfrak{x}}}_1}$ at the r.h.s, and we end up with a relation similar to the previous one, in which however there appears the function $\mathcal{Q}_{M-2{{\mathfrak{x}}}-2{{\mathfrak{x}}}_1}^{{{\mathfrak{x}}}_2}$, with ${{\mathfrak{x}}}_2\le
{{\mathfrak{x}}}_1$. So, we can iterate this procedure, observing that $\Gamma({{\mathfrak{x}}},\kappa)$ is an increasing function of ${{\mathfrak{x}}}$, and we get $$\mathcal{Q}_M^{{\mathfrak{x}}}\le \exp\left(4{{\mathfrak{x}}}\eps\kappa\right)
\sum_{j=0}^J\left( \frac {2\pi\omega}\beta\right)^{\sigma_j} Q_{M-2\sigma_j}
\left(\Gamma({{\mathfrak{x}}},\kappa)\right)^j\ , \quad\mbox{with}
\quad Q_0{\buildrel {\rm def} \over {=} }1\ ,$$ where we define $\sigma_j=\sum_{k=0}^j
{{\mathfrak{x}}}_k$, with ${{\mathfrak{x}}}_0={{\mathfrak{x}}}$, and $J$ represents the integer such that $\sigma_J=M/2$. We make use of inequality (\[eq:maggiorazione\_frazione\_Q\_N\]) of Lemma \[lemma:periodico\] and finally get, if the series converges, $
\mathcal{Q}_M^{{\mathfrak{x}}}\le \exp\left(4{{\mathfrak{x}}}\eps\kappa\right)
Q_M \sum_{j=0}^\infty (K_0^{2{{\mathfrak{x}}}} \Gamma({{\mathfrak{x}}},\kappa) )^j
$. We point out that the common ratio of this geometric series is a decreasing function of $\kappa$, which tends to 0 as $\kappa\to+\infty$: thus, we choose $\bar \kappa=\bar\kappa({{\mathfrak{x}}},K_0)$ so as to satisfy (\[eq:determinazione\_kappa\_barra\]), and obtain the relation $$\label{eq:caso_pari}
\mathcal{Q}_M^{{\mathfrak{x}}}\le 2\exp\left(4{{\mathfrak{x}}}\eps\bar\kappa({{\mathfrak{x}}},K_0)\right)
Q_M\ .$$
If a number $\lambda\le{{\mathfrak{x}}}$ of blocks is constituted by an odd number of elements, we integrate over one of the sites on the boundary of each of these blocks, so that each of the blocks of the resulting lattice, in number ${{\mathfrak{x}}}'$, contains an even number of elements. By dropping some suitably chosen interaction terms in the potential, one gets $
\mathcal{Q}^{{\mathfrak{x}}}_M\le
\left(2\pi\omega/\beta\right)^{\lambda/2}\mathcal{Q}^{{{\mathfrak{x}}}'}_{M-\lambda},$ with ${{\mathfrak{x}}}' \le {{\mathfrak{x}}}$, where the blocks made of just one particle disappear. Now we can use (\[eq:caso\_pari\]) with $Q_{M-\lambda}$ instead of $Q_M$. Then, making use of Lemma \[lemma:periodico\] to express $Q_{M-\lambda}$ in terms of $Q_M$, we get (\[eq:maggiorazione\_rapporto\_Z\_Q\]).
Proof of Theorem \[teor:correlazioni\_generico\] {#app:dim_correlazioni}
------------------------------------------------
As already said, the proof is performed by bounding from above every term at the r.h.s. of (\[eq:diff\_misure\_2\]), i.e., $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$, for ${{\mathbf{{j}}}},{{\mathbf{{i}}}}\in V$.
We point out that the expectations in the definition of $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$ depend on the choice of $Q$, which is not completely fixed by its marginal probabilities (see comments on relations (\[eq:congiunta\])). In fact, the main part of paper [@do2] consists in introducing a suitable reconstruction operator (on the space of the joint probabilities) which enables one to find a joint probability distribution that minimizes $\lambda$, starting from an initially chosen one. We adopt the same technique, with the only difference that we apply it not at all sites of the lattice, but only at those lying on the complement of a fixed set $\bar V$ (we will call it $\bar
{T}{\buildrel {\rm def} \over {=} }{T}\backslash\bar V$).
We also need to control, together with $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$, the auxiliary quantity $$\gamma({{\mathbf{{i}}}}) {\buildrel {\rm def} \over {=} }\mbox{\bf E} \left[
{\bf 1}_{\xi_{{\mathbf{{i}}}}^1\neq \xi_{{\mathbf{{i}}}}^2} \right]\ ,$$ where $\xi^1$ and $\xi^2$ are the same Gibbsian fields entering in (\[eq:def\_lambda\]).
We introduce, then, the main tool of the proof, i.e., the reconstruction operator $U_{{\mathbf{{i}}}}$, with ${{\mathbf{{i}}}}\in {T}$, which will enable us to construct the joint probabilities $Q({\mathrm{d}}{\mathbf{x}},{\mathrm{d}}{\mathbf{y}})$ on $V$ (see formula (\[eq:diff\_misure\_2\])). This operator is defined on a couple of fields $(\xi^1,\xi^2)$ having the same conditional probability at ${{\mathbf{{i}}}}$, as follows. For each pair of configurations ${\mathbf{x}}^1,{\mathbf{x}}^2\in \mathbb R^{|T|}$, we denote by $P^{{\mathbf{{i}}}}_{{\mathbf{x}}^1,{\mathbf{x}}^2}$ the measure on $\mathbb R^2$ for which the minimum of the distance between $P_{{{\mathbf{{i}}}},{\mathbf{x}}^1}$ and $P_{{{\mathbf{{i}}}},{\mathbf{x}}^2}$ is attained, i.e., such that, for any measurable $B\subset \mathbb R$, one has $$\begin{split}
&P^{{\mathbf{{i}}}}_{{\mathbf{x}}^1,{\mathbf{x}}^2}(\mathbb R\times B) = P_{{{\mathbf{{i}}}},{\mathbf{x}}^1}(B)\ ,\quad
P^{{\mathbf{{i}}}}_{{\mathbf{x}}^1,{\mathbf{x}}^2}(B\times \mathbb R) = P_{{{\mathbf{{i}}}},{\mathbf{x}}^2}(B)\ ,\\
&\mbox{and }\int_{\mathbb R^2}{\bf 1}_{x\neq y} P^{{\mathbf{{i}}}}_{{\mathbf{x}}^1,{\mathbf{x}}^2}({\mathrm{d}}x,{\mathrm{d}}y)=
D\left(P_{{{\mathbf{{i}}}},{\mathbf{x}}^1},P_{{{\mathbf{{i}}}},{\mathbf{x}}^2}\right)\ .
\end{split}$$ Such a definition enables us to describe the action of $U_{{\mathbf{{i}}}}$, because this operator maps the couple $(\xi^1,\xi^2)$ into $(\hat\xi^1,\hat\xi^2)$ such that, for any measurable $C\subset \mathbb R^2$, $$P\left(\left(\hat\xi^1_{{\mathbf{{i}}}},\hat\xi^2_{{\mathbf{{i}}}}\right)\in C \,|\,
\hat\xi^1_{{T}\backslash \{{{\mathbf{{i}}}}\}}={\mathbf{x}}^1_{{T}\backslash\{{{\mathbf{{i}}}}\}}, \hat
\xi^2_{{T}\backslash \{{{\mathbf{{i}}}}\}}={\mathbf{x}}^2_{{T}\backslash
\{{{\mathbf{{i}}}}\}}\right)=P^{{\mathbf{{i}}}}_{{\mathbf{x}}^1,{\mathbf{x}}^2}(C)\ ,$$ and, for any finite $V\subset {T}$ not containing ${{\mathbf{{i}}}}$, the joint probability distribution of $(\hat\xi^1_V,\hat\xi^2_V)$ coincides with that of $(\xi^1_V,\xi^2_V)$.
The effect of $U_{{\mathbf{{i}}}}$ on $\gamma({{\mathbf{{i}}}})$ and $\lambda(u,{{\mathbf{{i}}}})$ is described in detail in Lemmas 2, 3 and 4 of the work [@do2]. Following that paper, we adopt the convention that the quantities relative to the reconstructed couple $(\hat\xi^1,\hat\xi^2)$ are distinguished from the corresponding ones relative to $(\xi^1,\xi^2)$ by adding a hat. For every set $S\subset T$ we define the operator $$\label{eq:definizione_U_S}
U_S{\buildrel {\rm def} \over {=} }U_{{{\mathbf{{i}}}}_1}\circ U_{{{\mathbf{{i}}}}_2}\circ\ldots\circ U_{{{\mathbf{{i}}}}_m}\ ,$$ where the order of the points ${{\mathbf{{i}}}}_1,\ldots,{{\mathbf{{i}}}}_m$, contained in $S\cup \partial_{b{r}} S$ is chosen in a suitable way. This is described in full detail in the proof of the following Lemma \[lemma:ricostruzione\]. If we define $
\gamma_S{\buildrel {\rm def} \over {=} }\sup_{{{\mathbf{{i}}}}\in S} \gamma({{\mathbf{{i}}}})$ and $
\lambda_S{\buildrel {\rm def} \over {=} }\sup_{{{\mathbf{{j}}}},{{\mathbf{{i}}}}\in S}\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}}),
$ we can describe the action of $U_S$ on a couple of fields having the same conditional probability on $S\cup \partial_{b{r}} S$ accordingly to the following lemma, which is proved in Appendix \[app:condiz\].
\[lemma:ricostruzione\] Let $(\xi^1,\xi^2)$ be a couple of fields having the same conditional probability on $S\cup \partial_{b{r}} S$, given by a specification $\Gamma\in \Theta(h,C,\delta)\cap
\Delta(h,\bar{K}C,\alpha)$, and $(\hat\xi^1,\hat\xi^2)=U_S(\xi^1,\xi^2)$. Then, one has $$\left(\begin{array}{c}
\hat\gamma_S\\
\hat \lambda_S
\end{array}\right) \le
A\left(\begin{array}{c}
\gamma_{S\cup \partial_{b{r}} S}\\
\lambda_{S\cup \partial_{b{r}} S}
\end{array}\right)\ ,$$ in which the matrix $A$ is defined by $$A{\buildrel {\rm def} \over {=} }\left(\begin{array}{cc}
\alpha +N\bar{K}^{-1}& C^{-1}M\bar{K}^{-1}\\
C\left(R+N\bar{K}^{-1}\right) & \delta a^b+M\bar{K}^{-1}
\end{array}
\right)\ ,$$ where $N,M$ and $R$ are constants depending on $a$ and $b$ only.
We remark that, if the eigenvalues of $A$ are smaller than 1, the reconstructed quantities are smaller than the initial ones. So, we want to iterate the reconstruction procedure as much as possible. It turns out that we can iterate the procedure at most a number of times proportional to the distance between $V$ and $\tilde V$. The reason is the following.
In our case, $\xi^1$ is the field relative to the equilibrium Gibbs measure and $\xi^2$ that relative to the probability conditioned to the configuration ${\mathbf{x}}$ on $\tilde V$, which we consider as fixed. It is apparent that such fields have the same conditional probability on every set which does not intersect $\tilde
V$, but not on the whole ${T}$; by hypothesis, this conditional probability is that given by $\Gamma\in \Theta(h,C,\delta)\cap
\Delta(h,\bar{K}C,\alpha)$. Since the reconstruction procedure shrinks the set $S$ on which we can control $\gamma$ and $\lambda$, we can iterate it until $V\subset S$. So, the maximum number of iterations is attained if we start by reconstructing on $V\cup \partial_{n b {r}}
V$, where $n$ is the largest number such that $ \partial_{(n+1) b {r}}
V\cap \tilde V=\emptyset$. We use Lemma \[lemma:ricostruzione\] as the first step of a recurrent scheme, by applying each time $U_{V_m}$, where $V_{m+1}=V_m\cup\partial_{b{r}}
V_m$, $V_0=V$. By virtue of Lemma \[lemma:ricostruzione\], after the application of $U_{V_m}$, one has $$\left(\begin{array}{c}
\hat\gamma_{V_m}\\
\hat \lambda_{V_m}
\end{array}\right) \le
A\left(\begin{array}{c}
\gamma_{V_{m+1}}\\
\lambda_{V_{m+1}}
\end{array}\right)\ .$$ Thus we get that the final values of $\gamma_V$ and $\lambda_V$ are smaller than the result of the application of the matrix $A^n$ to the vector with components $\gamma$, $\lambda$. Moreover, we observe that we can write $A=J^{-1}\tilde{A}
J$, where $\tilde A$ and $J$ are defined by $$\tilde A{\buildrel {\rm def} \over {=} }\left(\begin{array}{cc}
\alpha +N\bar{K}^{-1}& M\bar{K}^{-1}\\
R+N\bar{K}^{-1} & \delta a^b+M\bar{K}^{-1}
\end{array}
\right) \quad\mbox{and }
J{\buildrel {\rm def} \over {=} }\left(\begin{array}{cc}
C& 0\\
0 & 1
\end{array}
\right)\ .$$ This way we get $A^n=J^{-1}\tilde A^nJ$. As the component $\lambda_V$, which is the one we are interested in, is not affected by the action of $J^{-1}$, we can write that it is the second component of the matrix product $$\left.\tilde A\right.^n \left(\begin{array}{c}
C\gamma\\ \lambda\end{array}\right)\ .$$ Since the eigenvalues of $\tilde A$ are smaller than $
G{\buildrel {\rm def} \over {=} }\max\{(\alpha+1)/2,(\delta a^b+1)/2\}<1,
$ if $\bar{K}$ is large enough, there exists $\bar{K}_0$ such that $$\lambda_V\le (D/2)\max\{\lambda,C \gamma\}
G^n\ ,$$ where $D$ is a constant depending on $a,b,\alpha,\delta$ only. On the other hand, $n=d(V,\tilde V)/(b{r})$ and $\gamma\le
1$, from which there follows $$\lambda_V\le \frac D2\langle h\rangle_{{\mathbf{x}}} \exp\left(-c\,d(V,\tilde V)\right)\ ,$$ where $c$ is defined in (\[eq:lunghezza\_correlazione\]).
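The similarity relation $A=J^{-1}\tilde A J$ used above can also be verified symbolically; the following small sketch is purely illustrative (the symbol $K$ stands for $\bar K$, and the values of the constants are immaterial).

```python
# A small symbolic check (illustrative only) of the similarity relation
# A = J^{-1} * Atilde * J used above; K stands for Kbar.
import sympy as sp

alpha, delta, a, b, C, K, N, M, R = sp.symbols('alpha delta a b C K N M R', positive=True)

A = sp.Matrix([[alpha + N/K,              M/(C*K)],
               [C*(R + N/K),  delta*a**b + M/K]])
Atilde = sp.Matrix([[alpha + N/K,              M/K],
                    [R + N/K,      delta*a**b + M/K]])
J = sp.Matrix([[C, 0],
               [0, 1]])

# J^{-1} * Atilde * J reproduces A, hence A^n = J^{-1} * Atilde^n * J.
assert sp.simplify(J.inv() * Atilde * J - A) == sp.zeros(2, 2)
```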
In order to show (\[eq:correlazioni\_generico\]), we need only the use of (\[eq:diff\_misure\_2\]) in estimating the term in brackets of (\[eq:correlazioni\_condizionato\]). We then observe that the r.h.s. of (\[eq:diff\_misure\_2\]) is smaller than $2 |V|^2
\lambda_V$, for the joint probability we have just found, and this concludes the proof.
Proof of Lemma \[lemma:ricostruzione\] {#app:condiz}
--------------------------------------
Lemma 5 of work [@do2] shows the result of the application of $U_{{\mathbf{{i}}}}$, in a suitably chosen order, to every site of ${T}$ in sequence: one obtains that, for the couple of fields $(\xi^1,\xi^2)$, with the same specification $\Gamma\in \Theta(h,C,\delta)\cap
\Delta(h,\bar{K}C,\alpha)$ and $\bar{K}\ge 1$, and for the reconstructed couple $(\hat\xi^1,\hat\xi^2)$ the following matrix relation holds[^10] $$\label{eq:matrice_completo}
\left(\begin{array}{c}
\hat\gamma_T\\
\hat \lambda_T
\end{array}\right) \le
A
\left(\begin{array}{c}
\gamma_T\\
\lambda_T
\end{array}\right)\ .$$
In the proof of Lemma 5 of [@do2], the order of the $U_{{\mathbf{{i}}}}$’s is chosen in the following way: the lattice is partitioned into $b$ disjoint sublattices, $Z_0,\ldots, Z_{b-1}$, which are the cosets in $T$ of $\mathbb{Z}^\nu$ with respect to $Z_0$. Then the reconstruction is applied in sequence to each sublattice, and use is made of the fact that, if ${{\mathbf{{i}}}}\in Z_l$, there exists a bound on $\gamma({{\mathbf{{j}}}})$, $\lambda({{\mathbf{{j}}}},{{\mathbf{k}}})$ and $\lambda({{\mathbf{k}}},{{\mathbf{{j}}}})$ for ${{\mathbf{{j}}}}\in
Z_l\backslash \{{{\mathbf{{i}}}}\}$ and ${{\mathbf{k}}}\in T\backslash \{{{\mathbf{{i}}}}\}$ which does not change after the application of $U_{{\mathbf{{i}}}}$. In particular, this implies that the bounds do not change for the already reconstructed sites in $Z_l$. Neither does the reconstruction at site ${{\mathbf{{i}}}}$ change, on account of Lemma 4 of paper [@do2], the value of $\lambda({{\mathbf{{i}}}},{{\mathbf{{j}}}})$ and $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$, for $|{{\mathbf{{j}}}}-{{\mathbf{{i}}}}|>{r}$. In this sense the reconstruction is local.
So, the values of $\hat \gamma_{V}$ and $\hat\lambda_{V}$, after one application of $U_{{\mathbf{{i}}}}$, depend at most on the values of $\gamma({{\mathbf{{i}}}})$ and $\lambda({{\mathbf{{j}}}},{{\mathbf{{i}}}})$ in $V\cup\partial_{r}V$. It is thus apparent that we can control the values of the reconstructed quantities only in a set $V$ smaller than the set $V'$ on which we control $\gamma$ and $\lambda$ initially. In particular, $V$ can be chosen so that $V'=V\cup \partial_{r}V$. Therefore, for any $V\subset S$, we define for $l=0,\ldots,b-1$ a nested sequence of sets $V_{l+1}{\buildrel {\rm def} \over {=} }V_l\cup \partial_{r}V_l$, with $V_0{\buildrel {\rm def} \over {=} }V$ and the operator $U_V$, as $U_V=U_{V_0\cap Z_0}\circ \ldots\circ
U_{V_{b-1}\cap Z_{b-1}}$, and notice that (see the above remarks) the order in which the sites in $V_l\cap Z_l$ are chosen does not matter. Then, after the application of $U_V$, one has that $$\left(\begin{array}{c}
\hat\gamma_V\\
\hat \lambda_V
\end{array}\right) \le
A
\left(\begin{array}{c}
\gamma_{V_b}\\
\lambda_{V_b}
\end{array}\right)\ ,$$ for the same matrix $A$ appearing in (\[eq:matrice\_completo\]). This concludes the proof.
S. B. Kuksin, *Analysis of Hamiltonian PDEs* (Oxford University Press, Oxford 2000).
D. Bambusi, B. Grébert, Duke Math. J. **135**, (2006) 507-567.
J. Fröhlich, T. Spencer, C. E. Wayne, J. Stat. Phys. **42**, (1986) 247-274.
D. Bambusi, A. Giorgilli, J. Stat. Phys. **71**, (1993) 569-606.
A. Giorgilli, L. Galgani, Cel. Mech. **17**, (1978) 267-280.
R. L. Dobrushin, Theory Probab. Appl. **13**, (1968) 197-224.
N. N. Bogolyubov, B. I. Khatset, D. Ya. Petrina, Ukr. J. Phys. **53**, Special Issue, (2008) 168-184, available at `http://www.ujp.bitp.kiev.ua/files/file/papers/53/special_issue/53SI34p.pdf`; Russian original in Teor. Mat. Fiz. **1:2**, (1969) 251-274.
R. L. Dobrushin, E. A. Pechersky, in J.V. Prokhorov, K. Ito (eds.), *Probability Theory and Mathematical Statistics*, (Springer, Berlin 1983), 97-110.
E. Fucito, F. Marchesoni, E. Marinari, G. Parisi, L. Peliti, S. Ruffo, A. Vulpiani, J. Phys.–Paris **43**, (1982) 707-714.
G. Parisi, Europhys. Lett. **40**, (1997) 357-362.
S. Flach, Phys. Rev. E **58**, (1998) R4116-R4119.
D. Bambusi, D. Muraro, T. Penati, Phys. Lett. A **372**, (2008) 2039-2042.
D. Bambusi, A. Carati, T. Penati, Nonlinearity **22**, (2009) 923-946.
A. Carati, L. Galgani, F. Santolini, Chaos **19**, (2009) 023123.
A. Carati, J. Stat. Phys. **128**, (2007) 1057-1077.
T.M. Cherry, Proc. Camb. Phil. Soc. **22**, (1924) 325-349.
A. Giorgilli, Ann. Inst. Henri Poincaré **48**, (1988) 423-439.
D. Ruelle, Commun. Math. Phys. **50**, (1976) 189-194.
M. S. Green, J. Chem. Phys. **22**, (1954) 398-413.
R. Kubo, J. Phys. Soc. Jpn. **12**, (1957) 570-586.
C. Liverani, Ann. Math. **159**, (2004) 1275-1312.
G. Keller, C. Liverani, Commun. Math. Phys. **262**, (2006) 33-50.
N. Chernov, R. Markarian, *Chaotic billiards* (AMS, Providence 2006).
J. D. Crawford, J. R. Cary, Physica D **6**, (1983) 223-232.
N. N. Nekhoroshev, Russ. Math. Surv. **32:6**, (1977) 1-65.
A. Giorgilli, S. Paleari, T. Penati, *Extensive adiabatic invariants for nonlinear chains*, preprint.
[^1]: Università di Milano, Dipartimento di Matematica, Via Saldini 50, 20133 Milano, Italy. E-mail: `andrea.carati@unimi.it`
[^2]: Notice that in paper [@giorgilli] the $\chi_s$ were required to be homogeneous polynomials of degree $s+2$. However, there is no problem in considering the present, more general case.
[^3]: We adopt here the multi–index notation: $k=k_1,\ldots, k_N$ and $l=l_1,\ldots,l_N$ are vectors of integers, with $|k|=|k_1|+\ldots+|k_N|$. So, $p^kq^l=
p_1^{k_1}\cdot \ldots \cdot p_N^{k_N}q_1^{l_1}\cdot \ldots \cdot
q_N^{l_N}$.
[^4]: One can check that this is indeed a norm.
[^5]: Notice that the index $i$ lies in $\{1,\ldots,\gamma'\}$. But if in the expression $\exp(\beta\eps
q_{l_i}q_{l_{i+1}}/\omega)$ there appears $q_{l_{\gamma'+1}}$, then $l_{\gamma'+1}$ is simply to be understood as $l_1$.
[^6]: See later for the definition of a compact function, according to the convention of paper [@do2].
[^7]: Notice that one can find “pathological” functions for which the decay is arbitrarily slow (see [@crawford]) even for strongly chaotic systems. For this reason, the control is usually restricted to a fixed continuity class.
[^8]: As a matter of fact, we cannot guarantee that $a(t)$ is invertible, but we will give below a meaningful univocal definition of $t(a)$ (see definition \[def:2\]).
[^9]: Remark that the function $a(\beta,\eps)$ depends on $\eps$ only via the term $\omega=\sqrt{1+2\eps}$.
[^10]: As a matter of fact, $A$ is not the same matrix which appears in [@do2], since we needed to make the dependence on $C$ explicit. Our statement can be proved by checking, in proving the induction (11)-(14) of [@do2], that the constants $N(\cdot,\cdot)$ are proportional to $C^2$, the constants $N(\cdot)$ and $M(\cdot,\cdot)$ are proportional to $C$ and the constants $M(\cdot)$ are independent of $C$.
---
abstract: 'A decision maker starts from a judgmental decision and moves to the closest boundary of the confidence interval. This statistical decision rule is admissible and does not perform worse than the judgmental decision with a probability equal to the confidence level, which is interpreted as a *coefficient of statistical risk aversion*. The confidence level is related to the decision maker’s aversion to uncertainty and can be elicited with laboratory experiments using urns à la Ellsberg. The decision rule is applied to a problem of asset allocation for an investor whose judgmental decision is to keep all her wealth in cash.'
author:
- 'Simone Manganelli[^1]'
date: 'March, 2019'
title: Deciding with Judgment
---
[**Keywords**: Statistical Decision Theory; Hypothesis Testing; Confidence Intervals; Statistical Risk Aversion; Portfolio Selection.]{}
[**JEL Codes**: C1; C11; C12; C13; D81.]{}
Introduction
============
Most people take decisions in an uncertain environment without resorting to formal statistical analysis (Tversky and Kahneman, 1974). I refer to these decisions as *judgmental decisions*. Statistical decision theory uses data to prescribe optimal choices under a set of assumptions (Wald, 1950), but has no explicit role for judgmental decisions. This paper is concerned with the following questions: Is a given judgmental decision optimal in the light of empirical evidence? If not, how can it be improved?
The answer to the first question is obtained by testing whether, for a given loss function, the first derivative evaluated at the judgmental decision is equal to zero. The answer to the second question is derived from the closest boundary of the confidence interval. The decision rule incorporating judgment is admissible and does not perform worse than the judgmental decision with a probability equal to the confidence level. The implication is that abandoning a judgmental decision to follow a statistical procedure always carries the risk of choosing an action worse than the original judgmental decision. This may happen with a probability bounded above by the confidence level.
For concreteness, consider an investor who is about to take the judgmental decision $\tilde{a}$, say, to hold all her assets in cash. She asks an econometrician for advice on whether she should invest some of her money in a stock market index. The best prediction of the econometrician depends on an estimated parameter $\hat{\theta}$, which is affected by estimation risk. For a given utility function provided by the investor, the econometrician can construct a loss function $L(\theta,\tilde{a})$, the loss experienced by the investor if the decision $\tilde{a}$ is taken and the true parameter is $\theta$. Suppose the econometrician is able to recover the distribution of the gradient $\nabla_a L(\hat{\theta}, \tilde{a})$ around the true, but unknown $\theta$. It is possible to test whether the investor’s decision $\tilde{a}$ is optimal by testing the null hypothesis that $\nabla_a L(\theta, \tilde{a})$ is equal to zero. If the null hypothesis is not rejected, the econometrician cannot recommend any deviation from $\tilde{a}$. If the null hypothesis is rejected, statistical evidence suggests that marginal deviations from $\tilde{a}$ decrease the loss function relative to $L(\theta,\tilde{a})$.
![Statistical Decision Tree[]{data-label="DecisionTree"}](Figure1)
Denote by $\alpha$ the confidence level used to implement the hypothesis test. The investor is facing the decision problem depicted in figure \[DecisionTree\]. The investor has two possible choices. She can hold on to her judgmental decision $\tilde{a}$, denoted by the action $J$, incurring the loss $L(\theta, \tilde{a})$. Alternatively, she can follow the econometrician’s advice, which is equivalent to accepting the bet $\mathcal{L}_{\alpha}$. In this case, she does not know whether she is facing the upper part of the decision tree, denoted by the node $H_0$, or the lower part, denoted by $H_1$. $H_0$ is the unfavorable scenario, in which the null hypothesis is true, so that any deviation from the judgmental decision $\tilde{a}$ results in a higher loss. A marginal $\varepsilon >0$ move away from $\tilde{a}$ results in the loss $L(\theta,\tilde{a}) + |\nabla_aL(\theta,\tilde{a})|\varepsilon$. $H_1$ is the favorable scenario, as one correctly rejects the null hypothesis that $\tilde{a}$ is optimal, producing decisions with lower loss. In this case, a marginal $\varepsilon$ move away from $\tilde{a}$ results in the loss $L(\theta,\tilde{a}) - |\nabla_aL(\theta,\tilde{a})|\varepsilon$. The dashed line connecting the two nodes represents true uncertainty for the decision maker, in the sense that it is not possible to attach any probability to being in $H_0$ or in $H_1$. The decision maker can choose the confidence level $\alpha$, which puts an upper bound on the probability that the null is wrongly rejected when it is true. Notice that $\alpha$ also represents the lower bound on the probability of correctly rejecting $H_0$ when it is false.
In case of rejection, the investor faces a new, but identical decision problem, except that $\tilde{a}$ is replaced by $\tilde{a} \pm \varepsilon$ (the sign depends on the sign of the empirical gradient). This new action will be rejected if $\nabla_a L(\hat{\theta}, \tilde{a} \pm \varepsilon)$ also falls in the rejection region. Iterating this argument forward, the preferred decision of the investor is the action $\tilde{a}\pm \hat{\Delta}$ which lies at the boundary of the $(1-\alpha)$-confidence interval of $\nabla_a L(\hat{\theta}, \tilde{a} \pm \hat{\Delta})$, the point where the null hypothesis that the decision $\tilde{a}\pm \hat{\Delta}$ is optimal can no longer be rejected. This decision is characterized by the fact that it will produce a loss higher than the original judgmental decision $\tilde{a}$ with probability at most $\alpha$.
The contribution of this paper lies at the intersection between statistics and decision theory. Statistical decision theory emerged as a discipline in the 1950’s with the works of Wald (1950) and Savage (1954). Recent contributions in decision theory focus on modeling behavior when beliefs cannot be quantified by a unique Bayesian prior (Gilboa and Marinacci, 2013) and on models of heuristics describing how people arrive at judgmental decisions (Gennaioli and Shleifer, 2010). This paper, however, is not concerned with the axiomatic foundations of decision theory, but rather with how data can be used to help decision makers improve their judgmental decisions. It falls within Clive Granger’s tradition that *‘to obtain any kind of best value for a point forecast, one requires a criterion against which various alternatives can be judged’* (Granger and Newbold, 1986, p. 121; see also Granger and Machina, 2006). Recent contributions within this tradition are Patton and Timmermann (2012) and Elliott and Timmermann (2016). Other contributions include Chamberlain (2000) and Geweke and Whiteman (2006), who deal with forecasting using Bayesian statistical decision theory, and Manski (2013 and the references therein), who uses statistical decision theory in the presence of ambiguity for partial identification of treatment response.
The paper is structured as follows. Section 2 sets up the decision environment and introduces the concept of judgment in frequentist statistics. Judgment is defined as a pair formed by a judgmental decision $\tilde{a}$ and a confidence level $\alpha$. Judgment is used to set up the hypothesis to test whether the action $\tilde{a}$ is optimal. Two key results of this section are that the decision rule incorporating judgment is admissible, and that it is either the judgmental decision itself or is at the boundary of the confidence interval of the sample gradient of the loss function.
Section 3 discusses the choice of the confidence level $\alpha$. As illustrated in figure \[DecisionTree\], the confidence level $\alpha$ puts an upper bound to the probability that the statistical decision rule performs worse than the judgmental decision. The confidence level can therefore be interpreted as the willingness of the decision maker to take statistical risk and is referred to as the *coefficient of statistical risk aversion*. This concept is closely linked to the idea of ambiguity aversion. The section also discusses how the confidence level $\alpha$ can be elicited with a simple experiment involving urns à la Ellsberg.
Section 4 uses an asset allocation problem as a working example to illustrate the empirical performance of various decision rules. Section 5 concludes.
Statistical Decision Rules with Judgment {#Decision with judgment}
========================================
This section introduces the concept of judgment and shows how hypothesis testing can be used to arrive at optimal decisions. For concreteness, I solve a simple asset allocation problem, but the example can be easily generalized.
Consider an investor holding cash, yielding zero nominal returns. The objective is to minimize a loss function, by deciding what fraction $a \in \mathbb{R}$ to invest in a stock market index, yielding the uncertain return $X$. The decision environment is formally defined as follows.
\[Decision Environment\] Let $\Phi(x)$ denote the cdf of the standard normal distribution. The decision environment is defined by:
1. $X - \theta \sim \Phi(x)$, where $\theta \in \mathbb{R}$ is unknown.
2. One sample realization $x \in \mathbb{R}$ is observed.[^2]
3. $a \in \mathbb{R}$ denotes the action of the decision maker.
4. The decision maker minimizes the loss function $L(\theta,a) = -a\theta + 0.5 a^2$.
**Remark: General case —** This decision environment can be generalized to cover any continuously differentiable and strictly convex loss function, at the cost of more cumbersome notation. The intuition is the following. Since the main object of interest is the first derivative of the loss function evaluated at $a$ and at the maximum likelihood estimator $\hat{\theta}$, an approximation of the first order conditions around the population parameter $\theta$ gives: $$\nabla_{a}L(\hat{\theta}, a) \approx \nabla_{a}L(\theta, a) + \nabla_{a\theta}L(\hat{\theta}, a)(\hat{\theta}-\theta)$$ The statistical properties of the gradient can therefore be deduced from the statistical properties of $\hat{\theta}$. The strict convexity of the loss function guarantees that there is a one to one mapping between $a$ and the gradient (although not linear as in the decision environment above). $\Box$
Consider the following standard definition of a decision rule (Wald, 1950):
\[**Decision Rule**\] $\delta(X): \mathbb{R} \rightarrow \mathbb{R}$ is a decision rule, such that if $X=x$ is the sample realization, $\delta(x)$ is the action that will be taken.
Classical statistics as developed by Neyman and Fisher has no explicit role for *epistemic* uncertainty (as defined by Marinacci, 2015), as it was motivated by the desire for objectivity. Non sample information is, nevertheless, implicitly introduced in various forms, in particular in the choice of the confidence level and the choice of the hypothesis to be tested.
Judgment
--------
I introduce the following definition of judgment.
\[**Judgment**\] **Judgment** is the pair $A \equiv \{\tilde{a}, \alpha\}$. $\tilde{a} \in \mathbb{R}$ is the **judgmental decision**. $\alpha \in [0,1]$ is the **confidence level**.
Judgment is routinely used in hypothesis testing, for instance when testing whether a regression coefficient is statistically different from zero (with zero in this case playing the role of the judgmental decision), for a given confidence level (usually 1%, 5% or 10%). I say nothing about how the judgmental decision is formed. This question is explored by Tversky and Kahneman (1974) and subsequent research. The choice of the confidence level is discussed in section \[section\_confidenceLevel\]. For the purpose of this paper, judgment is a primitive to the decision problem, like the loss function.
Hypothesis Testing
------------------
The decision maker can test whether $\tilde{a}$ is optimal by testing if the gradient $\nabla_a L(\theta,\tilde{a}) = -\theta + \tilde{a}$ is equal to zero. A test statistic for the gradient can be obtained by replacing $\theta$ with its maximum likelihood estimator $X$.
The novel insight of this paper stems from the realization that the hypothesis to be tested should be conditional on the sample realization $x$. Having observed, say, a negative sample gradient $\nabla_a L(x,\tilde{a})$, one can conclude that values of $a$ higher than $\tilde{a}$ decrease the empirical loss function. The decision maker is interested, however, in the population value of the loss function. If the population gradient is positive, higher values of $a$ would increase the loss function, rather than decrease it. Analogous, but opposite, considerations hold if the sample gradient is positive. The null hypothesis to be tested is therefore that the population gradient has opposite sign relative to the sample gradient. For a discussion of the importance of conditioning in statistics, see chapter 10 of Lehmann and Romano (2005) or section 1.6.3 of Berger (1985) and the references therein.
To formalize, partition the sample space according to the sign taken by the sample gradient as follows: $$\begin{aligned}
C_{-} \equiv \{x \in \mathbb{R}: -x+\tilde{a} \le 0\} \\
C_{+} \equiv \{x \in \mathbb{R}: -x+\tilde{a} > 0\} \end{aligned}$$
Two cases are possible:
i\) $x \in C_-$, implying that the null hypothesis to be tested is: $$\label{Null1}
H_0: -\theta + \tilde{a} \ge 0 \quad \text{vs} \quad H_1: -\theta + \tilde{a} < 0$$
ii\) $x \in C_+$, implying that the null hypothesis to be tested is: $$\label{Null2}
H_0: -\theta + \tilde{a} \le 0 \quad \text{vs} \quad H_1: -\theta + \tilde{a} > 0$$
Decision
--------
In a hypothesis testing decision problem, only two actions are possible: The null hypothesis is either accepted or rejected. Let $0\le\gamma\le1$ and $\Phi(c_{\alpha})=\alpha$, and consider again the two cases, conditional on the sample realization $x$. Given the judgment $A=\{\tilde{a},\alpha\},$ define the test functions $\psi_i^A(x), i \in \{-,+\}$ associated with the hypotheses (\[Null1\])-(\[Null2\]):
i\) $x \in C_-$
$$\label{TestFunction1}
\psi_-^A(x) = \begin{cases}
0 \quad \text{if }c_{\alpha/2}<-x+\tilde{a}\le0\\
\gamma \quad \text{if }-x+\tilde{a}=c_{\alpha/2}\\
1 \quad \text{if } -x+\tilde{a}<c_{\alpha/2}
\end{cases}$$
ii\) $x \in C_+$
$$\label{TestFunction2}
\psi_+^A(x) = \begin{cases}
0 \quad \text{if }0 < -x+\tilde{a} < c_{1-\alpha/2}\\
\gamma \quad \text{if }-x+\tilde{a}=c_{1-\alpha/2}\\
1 \quad \text{if } -x+\tilde{a}>c_{1-\alpha/2}
\end{cases}$$
The following theorem derives the decision compatible with judgment:
[**(Decision with judgment)**]{}\[Frequentist decision\] Consider the decision environment of Definition \[Decision Environment\]. A decision maker with judgment $A=\{\tilde{a},\alpha\}$ selects the following decision rule: $$\label{Frequentist decision rule}
\delta^A(X) = I(-X+\tilde{a}\le0) \delta_-^A(X) + I(-X+\tilde{a}> 0) \delta_+^A(X)$$ where $I(\cdot)$ is an indicator function which takes value 1 if its argument is true and 0 otherwise, $$\begin{aligned}
\delta_-^A(X) & = \tilde{a}(1-\psi_-^A(X))+(x+c_{\alpha/2})\psi_-^A(X), \\
\delta_+^A(X) & = \tilde{a}(1-\psi_+^A(X))+(x+c_{1-\alpha/2})\psi_+^A(X),\end{aligned}$$ $c_{\alpha}\equiv \Phi^{-1}(\alpha)$, and $\psi_i^A(X), i\in\{-,+\}$ are the test functions defined in (\[TestFunction1\])-(\[TestFunction2\]).
**Proof** — See Appendix.
The decision rule depends not only on the random variable $X$, but also on the sample realization $x$. To understand the intuition, consider the case i) $x \in C_-$ and the associated null hypothesis $H_0: -\theta+\tilde{a} \ge 0$. The null hypothesis is a statement about the population gradient evaluated at the judgmental decision $\tilde{a}$. It says that *marginally* higher values of $\tilde{a}$ do not decrease the loss function. If it is not rejected at the given confidence level $\alpha$, the chosen action must be $\tilde{a}$. Rejection of the null hypothesis, on the other hand, implies accepting the alternative, which states that *marginally* higher values of $\tilde{a}$ decrease the loss function. Denote the new action marginally away from $\tilde{a}$ with $a_{\varepsilon} = \tilde{a} + \varepsilon$, for $\varepsilon > 0$ and sufficiently small. Notice that $a_{\varepsilon} $ is not random and it is possible to test whether it is optimal, by testing again whether additional marginal moves from $a_{\varepsilon} $ increase the loss function. This reasoning holds for all null hypotheses $H_0: -\theta+a \ge 0$ for any $a \in [\tilde{a}, x+c_{\alpha/2})$. The first null hypothesis which is not rejected is $H_0:-\theta+\hat{a}\ge 0$, where $\hat{a} = x+c_{\alpha/2}$.
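As a minimal sketch, the rule can also be written out directly: the following code computes $\delta^A(x)$ for the environment of Definition \[Decision Environment\], following Theorem \[Frequentist decision\] and ignoring the measure-zero randomized case $-x+\tilde{a}=c_{\alpha/2}$ (respectively $c_{1-\alpha/2}$); the function name is chosen here for illustration only.

```python
# Minimal sketch of the decision rule of Theorem [Frequentist decision]:
# X ~ N(theta, 1), one observation x, loss L(theta, a) = -a*theta + 0.5*a**2,
# judgment A = (a_tilde, alpha).
from scipy.stats import norm

def decision_with_judgment(x, a_tilde, alpha):
    """Return delta^A(x) given the sample realization x and the judgment (a_tilde, alpha)."""
    if alpha <= 0.0:                        # extreme statistical risk aversion: never move
        return a_tilde
    c_lo = norm.ppf(alpha / 2.0)            # c_{alpha/2}   (non-positive)
    c_hi = norm.ppf(1.0 - alpha / 2.0)      # c_{1-alpha/2} (non-negative)
    grad = -x + a_tilde                     # sample gradient at a_tilde
    if grad <= 0.0:                         # case i):  test H0: -theta + a_tilde >= 0
        return x + c_lo if grad < c_lo else a_tilde
    else:                                   # case ii): test H0: -theta + a_tilde <= 0
        return x + c_hi if grad > c_hi else a_tilde
```

Equivalently, the rule returns $\tilde{a}$ whenever it lies inside the $(1-\alpha)$-confidence interval $[x+c_{\alpha/2},\,x+c_{1-\alpha/2}]$, and the closest boundary of that interval otherwise; with $\alpha=1$ the interval collapses to the maximum likelihood decision $x$.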
The next theorem shows that this decision cannot be improved.
[**(Admissibility)**]{}\[Admissibility\] The decision $\delta^A(X)$ of Theorem \[Frequentist decision\] is admissible.
**Proof** — See Appendix.
The admissibility result is a direct consequence of the Karlin-Rubin theorem applied to the test functions (\[TestFunction1\])-(\[TestFunction2\]). It follows from the fact that the randomness of the decision rule stems from the indicator functions determining the sign of the gradient and from the (conditional) test functions $\psi_i^A(X)$, $i\in\{-,+\}$. The actions to be taken in case of rejection ($x+c_{\alpha/2}$ or $x+c_{1-\alpha/2}$) or non-rejection ($\tilde{a}$) are not random.
Choosing the Confidence Level {#section_confidenceLevel}
=============================
The confidence level $\alpha$ determines the willingness of the decision maker to take statistical risk and therefore I equivalently refer to it as the *coefficient of statistical risk aversion*. The intuition follows from the decision tree of figure \[DecisionTree\]. A decision maker facing a statistical decision problem is about to take the judgmental decision $\tilde{a}$. The econometrician suggests a statistical decision rule, which by its random nature may perform worse than $\tilde{a}$. The choice of $\alpha$ puts an upper bound to the probability that the statistical decision rule may perform worse than $\tilde{a}$.
This intuition is formalized by the following theorem.
\[Interpretation\] Consider the decision environment of Definition \[Decision Environment\] and assume the decision maker has judgment $A \equiv \{\tilde{a},\alpha\}$. The decision rule $\delta^{A}(X)$ in (\[Frequentist decision rule\]) performs worse than the judgmental decision $\tilde{a}$ with probability not greater than $\alpha$: $$P_{\theta}(L(\theta, \delta^{A}(X)) > L(\theta, \tilde{a})) \le \alpha$$
**Proof** — See Appendix.
An extremely statistical risk averse decision maker chooses $\alpha = 0$. A zero confidence level results in a degenerate confidence interval which coincides with the entire real line. As a consequence, it is never possible to reject the null hypothesis that the judgmental decision $\tilde{a}$ is optimal. At the other extreme, a statistical risk loving decision maker chooses $\alpha=1$. When $\alpha=1$ the confidence interval degenerates into a single point, which coincides with the *maximum likelihood* decision. In this case, the null hypothesis that $\tilde{a}$ is optimal is always rejected and the decision maker is fully exposed to the possibility that the statistical decision rule will perform worse than the judgmental decision. An intermediate case of statistical risk aversion is represented by the *subjective classical* estimator of Manganelli (2009), which sets $\alpha \in (0,1)$.
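The bound of Theorem \[Interpretation\] can be illustrated by simulation. The following sketch reuses the function `decision_with_judgment` defined in the sketch above; the values of $\theta$, $\tilde{a}$, $\alpha$ and the number of replications are purely illustrative.

```python
# Monte Carlo illustration of Theorem [Interpretation]:
# P( L(theta, delta^A(X)) > L(theta, a_tilde) ) <= alpha for any true theta.
# Requires decision_with_judgment() from the sketch above; all values below are illustrative.
import numpy as np

def loss(theta, a):
    return -a * theta + 0.5 * a**2

rng = np.random.default_rng(0)
theta, a_tilde, alpha, n_sim = 0.7, 0.0, 0.10, 200_000

x = rng.normal(theta, 1.0, size=n_sim)
d = np.array([decision_with_judgment(xi, a_tilde, alpha) for xi in x])
print(np.mean(loss(theta, d) > loss(theta, a_tilde)))   # should not exceed alpha
```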
The degree of statistical risk aversion $\alpha$ can be elicited with an experiment à la Ellsberg (1961) where the decision maker has to choose among different couples of urns. Accepting the advice of an econometrician is like accepting a bet with Nature where the probabilities of the payoff are only partially specified.
Consider two urns with 100 balls each. Urn 1 contains only white and black balls, Urn 2 contains white and red balls. If the black ball is extracted, the respondent loses €100. If the red ball is extracted, the respondent wins an amount in euros which gives an increase in utility equivalent to the reduction in utility produced by the loss of €100. If the white ball is extracted, nothing happens. The respondent can choose among the composition of the urns described in table \[Elicitation\]. By accepting one of the bets from 1 to 99, she can control the upper bound probability of losing in case balls are drawn from Urn 1. By choosing this upper bound probability, she automatically chooses the lower bound probability of winning in case the ball is drawn from Urn 2.
----- ---------- ---------- ---------- ----------
Bet White Black White Red
0 100 0 100 0
1 $\ge 99$ $\le 1$ $\le 99$ $\ge 1$
2 $\ge 98$ $\le 2$ $\le 98$ $\ge 2$
... ... ... ... ...
98 $\ge 2$ $\le 98$ $\le 2$ $\ge 98$
99 $\ge 1$ $\le 99$ $\le 1$ $\ge 99$
100 0 100 0 100
----- ---------- ---------- ---------- ----------
: Experiment to elicit the confidence level $\alpha$[]{data-label="Elicitation"}
To understand the link with the statistical decision problem, consider again figure \[DecisionTree\]. Urn 1 corresponds to node $H_0$ in the upper part of the decision tree in figure \[DecisionTree\]. Urn 2 corresponds to node $H_1$ in the lower part of the decision tree. The worst case scenario is when the null hypothesis is true, as in this case deviations from $\tilde{a}$ increase the loss. However, even in this case, according to the decision rule there is still the possibility that the null hypothesis is not rejected, in which case the chosen action is $\tilde{a}$. The choice of the confidence level $\alpha$ controls the probability of wrongly rejecting the null. When the null hypothesis is true, it is like having the ball extracted from Urn 1, and choosing $\alpha$ is like choosing the maximum number of black balls contained in Urn 1. The favorable scenario is when the conditional null hypothesis is false. In this case, rejection of the null leads to the choice of a better action, in the sense that it produces a lower loss. When the null hypothesis is false, it is like having the ball extracted from Urn 2. The probability of correctly rejecting the null depends on the power of the test, but is in any case greater than the chosen confidence level $\alpha$. Choosing $\alpha$ is like choosing the minimum number of red balls contained in Urn 2.
In real world situations, one does not know whether the null hypothesis is true or not, which represents genuine uncertainty and is indicated by the dashed line in figure 1. This is like saying to the participants in the experiment that it is unknown from which urn the ball will be extracted. An extremely statistical risk averse player would always choose not to participate to the bet and retain the judgmental decision $\tilde{a}$, a choice corresponding to bet 0 in the table. A statistical risk loving player would choose bet 100. In general, players with higher degrees of statistical risk aversion would choose bets with lower numbers.
An Asset Allocation Example
===========================
This section implements the decision with judgment, solving a standard portfolio allocation problem.
The empirical implementation of the mean-variance asset allocation model introduced by Markowitz (1952) has puzzled economists for a long time. Despite its theoretical success, it is well-known that plug-in and Bayesian estimators of the portfolio weights produce volatile asset allocations which usually perform poorly out of sample, due to estimation errors (Jobson and Korkie 1981, Brandt 2007). This paper takes a different perspective on this problem. The decision with judgment provides an asset allocation which does not perform worse than any given judgmental allocation with a probability equal to the confidence level.
To implement the statistical decision rules, I take a monthly series of closing prices for the EuroStoxx50 index, from January 1999 until December 2015. EuroStoxx50 covers the 50 leading Blue-chip stocks for the Eurozone. The data is taken from Bloomberg. The closing prices are converted into period log returns. Table \[table:1\] reports summary statistics.
  ----- --------- ------------ --------- ---------- ---------
  Obs   Mean      Std. Dev.    Median    Min        Max
  206   -0.06%    5.57%        0.66%     -20.62%    13.70%
  ----- --------- ------------ --------- ---------- ---------
: Summary statistics
\[table:1\]
The exercise consists of forecasting the next-period optimal investment in the EuroStoxx50 index for a person who holds €100 in cash. I take the first 7 years of data as pre-sample observations, to estimate the optimal investment for January 2006. The estimation window then expands by one observation at a time, the new allocation is estimated, and the whole exercise is repeated until the end of the sample.
To directly apply the decision with judgment as discussed in section \[Decision with judgment\], which assumes the variance to be known, I transform the data as follows. I first divide the return series of each window by the full sample standard deviation, and next multiply them by the square root of the number of observations in the estimation sample. Denoting by $\{\tilde{x}_t\}_{t=1}^{n}$ the original time series of log returns, let $\sigma$ be the full sample standard deviation and $n_1<n$ the size of the first estimation sample. Then, for each $n_1+s$, $s=0,1,2,...,n-n_1-1$, define: $$\{x_t\}_{t=1}^{n_1+s} \equiv \{\sqrt{(n_1+s)}\tilde{x}_t/\sigma\}_{t=1}^{n_1+s} \quad \textrm{and} \quad \bar{x}_{n_1+s} \equiv (n_1+s)^{-1}\sum_{t=1}^{n_1+s}x_t$$
I simplify the estimation by taking the full sample standard deviation as given, so that the only parameter to be estimated is the mean return. Under the assumption that the full sample standard deviation is the population value, by the central limit theorem $\bar{x}_{n_1+s}$ is normally distributed with variance equal to one and unknown mean. I can therefore implement the decision rule with judgment, using the single observation $\bar{x}_{n_1+s}$ for each period $n_1+s$.
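As an illustrative sketch of the expanding-window exercise (data loading omitted; `decision_with_judgment` is the function sketched in Section \[Decision with judgment\]; setting $n_1=84$, i.e. seven years of monthly observations, is one reading of the text, and details may differ from the actual implementation):

```python
# Sketch of the expanding-window allocation exercise described above.
# r: array of monthly log returns of the EuroStoxx50 (loading omitted).
import numpy as np

def allocation_path(r, a_tilde=0.0, alpha=0.01, n1=84):
    """Sequence of decisions taken at the end of each estimation window."""
    sigma = np.std(r, ddof=1)          # full-sample standard deviation, treated as known
    path = []
    for s in range(len(r) - n1):
        x_bar = np.sqrt(n1 + s) * np.mean(r[: n1 + s]) / sigma   # standardized sample mean
        path.append(decision_with_judgment(x_bar, a_tilde, alpha))
    return np.array(path)              # alpha = 1 recovers the maximum likelihood path
```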
![Evolution of portfolio values[]{data-label="Values"}](Figure6bis)
Figure \[Values\] reports the portfolio values associated with different decision rules. For comparison, I have also included a Bayesian decision with a standard normal prior. Suppose the starting value of the portfolio in January 2006 is €100. By the end of the sample, after 10 years, an investor using the maximum likelihood decision rule would have lost one quarter of the value of her portfolio. The situation is slightly better with the Bayesian decision rule, as it delivers a loss of around 12%. The decision with judgment at a 1% confidence level does not lose anything because it never predicts deviating from the judgmental allocation of holding all the money in cash.
The point of this discussion is not to evaluate whether one decision rule is better than the other, as the decision rules differ only with respect to the choice of the confidence level and the prior, which are both a subjective choice. The purpose is rather to illustrate the implications of choosing different confidence levels. By choosing the maximum likelihood and Bayesian estimators, one has no control on the statistical risk she is going to bear. The decision with judgment, instead, allows the investor to choose a constant probability of underperforming the judgmental allocation: She can be confident that the resulting asset allocation is not worse than the judgmental allocation with the chosen probability. The case of the EuroStoxx50, however, represents only one possible draw, which turned out to be particularly adverse to the maximum likelihood and Bayesian estimators. Had the resulting allocation implied positive returns by the end of the sample, maximum likelihood and Bayesian estimators would have outperformed the decisions with judgment. There is no free lunch: Decision rules with higher statistical risk aversion produce allocations with greater protection to underperformance relative to the judgmental allocation, but also have lower upside potential. In statistical jargon, lower confidence levels protect the decision maker from Type I errors, but imply higher probabilities of Type II errors.
Conclusion
==========
Judgment plays an important role not just for individuals, but also in policy institutions. Most policy decisions are shaped by the judgment of policy makers. When advising a policy maker, the econometrician can test whether the preferred judgmental decision is supported by data. If not, the decision incorporating judgment is always at the closest boundary of the confidence interval. The probability of obtaining higher losses than those implied by the judgmental decision is bounded by the given confidence level.
The confidence level reflects the attitude of the decision maker towards statistical uncertainty. I have referred to it as the coefficient of statistical risk aversion. It can be elicited with experiments involving urns à la Ellsberg. Decision makers characterized by an extreme form of statistical risk aversion (a confidence level equal to 0) always follow their own judgmental decision and ignore the advice of the econometrician. At the other extreme, statistical risk loving decision makers (with a confidence level equal to 1) ignore their judgment and always follow the econometrician’s advice, which in this special case corresponds to the maximum likelihood decision. Policy makers are likely characterized by high, but not extreme, coefficients of statistical risk aversion. The framework provided in this paper to measure it may have profound policy implications.
**Appendix**
**Proof of Theorem \[Frequentist decision\] —** Consider only the case i) $I(-x+\tilde{a} \le 0) =1$. The other case can be proven in a similar way. If $\psi_-^A(x)=0$, the null hypothesis $H_0:-\theta+\tilde{a}\ge0$ is not rejected at the given confidence level $\alpha$. $\tilde{a}$ is therefore retained as the chosen action.
If $\psi_-^A(x)=1$, the null hypothesis is rejected. Rejection of the null implies acceptance of the alternative hypothesis $H_1:-\theta+\tilde{a} < 0$, that is, marginal moves away from $\tilde{a}$ by a sufficiently small amount $\Delta>0$ decrease the loss function.
Consider now the family of null hypotheses $H_0^{\Delta}: -\theta+\tilde{a} +\Delta \ge 0$ for ${\Delta>0}$. Define also the family of rejection regions $R_{\Delta}\equiv\{y \in \mathbb{R} :-y+\tilde{a}+\Delta< c_{\alpha/2}\}$. Clearly, $x \notin R_{\Delta}$ for any $\Delta \ge \hat{\Delta}\equiv c_{\alpha/2}+x-\tilde{a}$, that is the null hypothesis $H_0^{\Delta}$ is not rejected at the confidence level $\alpha$ for any $\Delta \ge \hat{\Delta}$.
Denote with $\hat{a}$ the chosen action and suppose that $\hat{a} \ne \tilde{a} + \hat{\Delta}$. If $\hat{a} = \tilde{a} + \Delta < \tilde{a} + \hat{\Delta}$, this implies that $x \in R_{\Delta}$, that is $H_0^{\Delta}: -\theta+\tilde{a} + \Delta\ge0$ is rejected. Therefore this decision cannot be optimal.
If $\hat{a} = \tilde{a} + \Delta > \tilde{a} + \hat{\Delta}$, continuity implies that there exists $\varepsilon>0$ such that the null $H_0^{\Delta-\varepsilon}: -\theta+(\tilde{a} + \Delta - \varepsilon) \ge 0$ was rejected at the given confidence level $\alpha$, even though $x \notin R_{\Delta-\varepsilon}$, which is a contradiction.
The chosen action must therefore be $\hat{a} =\tilde{a} +\hat{\Delta} =c_{\alpha/2}+x$. $\Box$
**Proof of Theorem \[Admissibility\] —** The risk function of a generic decision $\delta^*$ is: $$\begin{aligned}
R(\theta, \delta^*) = E_{\theta}(I(-X+\tilde{a} \le 0)R_-(\theta, \delta^*) + I(-X+\tilde{a} > 0) R_+(\theta, \delta^*))\end{aligned}$$ where $$\begin{aligned}
\label{Risk1}
R_-(\theta, \delta^*) & \equiv E_{\theta|-X+\tilde{a}\le 0}(L(\theta, \delta^*(X))) \\
R_+(\theta, \delta^*) &\equiv E_{\theta|-X+\tilde{a}>0}(L(\theta, \delta^*(X))) \label{Risk2}\end{aligned}$$ and the expectations are taken with respect to the truncated normal distribution.
Consider equation . The case for equation can be proven similarly. I prove that the decision $\delta_-^A(X)$ is admissible with respect to the truncated normal distribution. This implies that $R_-(\theta, \delta_-^A) \le R_-(\theta, \delta^*)$ for all $\theta$. Since the same holds for $\delta_+^A(X)$, these two results together imply that $R(\theta,\delta^A) \le R(\theta,\delta^*)$ and therefore that $\delta^A(X)$ is admissible.
To prove that $\delta_-^A(X)$ is admissible with respect to the truncated normal distribution, I verify that the conditions of theorem 4 of Karlin and Rubin (1956) hold. First, note that the truncated normal distribution belongs to the exponential family and therefore possesses a monotone likelihood ratio (see section 1 of Karlin and Rubin, 1956). Second, conditional on observing $-X+\tilde{a} \le 0$, the decision rule of theorem \[Frequentist decision\] foresees two actions: either the null hypothesis is accepted or rejected. Denote these actions with $a_1$ and $a_2$, respectively. Define the corresponding losses from the original loss function: $$\begin{aligned}
L_1(\theta) &\equiv -\tilde{a}\theta + 0.5\tilde{a}^2 \\
L_2(\theta) &\equiv -(x +c_{\alpha/2})\theta + 0.5 (x +c_{\alpha/2})^2\end{aligned}$$ and note that $$\begin{aligned}
L_1(\theta)-L_2(\theta) = -\theta(\tilde{a}-x-c_{\alpha/2})+0.5(\tilde{a}^2-(x+c_{\alpha/2})^2)\end{aligned}$$ This function is linear in $\theta$ and therefore changes sign only once as a function of $\theta$, specifically at the finite value $\theta=0.5(\tilde{a}+x+c_{\alpha/2})$. Since $\psi_-^A(x)$ is a monotone procedure, the conditions of theorem 4 of Karlin and Rubin (1956) are satisfied and $\delta_-^A(X)$ is admissible. $\Box$
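The single sign change invoked above can also be checked symbolically; in the following small sketch, $c$ stands for $c_{\alpha/2}$ and the check is purely illustrative.

```python
# Symbolic check: L1(theta) - L2(theta) is linear in theta and vanishes
# at theta = (a_tilde + x + c)/2, with c standing for c_{alpha/2}.
import sympy as sp

theta, a_tilde, x, c = sp.symbols('theta a_tilde x c', real=True)
L1 = -a_tilde * theta + a_tilde**2 / 2
L2 = -(x + c) * theta + (x + c)**2 / 2
root = sp.solve(sp.Eq(L1, L2), theta)
assert len(root) == 1
assert sp.simplify(root[0] - (a_tilde + x + c) / 2) == 0
```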
**Proof of Theorem \[Interpretation\]** — Partitioning the sample space with respect to the gradient $-X+\tilde{a}$: $$\begin{aligned}
I(L(\theta, \delta^{A}(X)) > L(\theta,\tilde{a})) &= I(-X+\tilde{a} \le 0) I(L(\theta, \delta^{A}_-(X)) > L(\theta,\tilde{a})) \\
&+I(-X+\tilde{a} > 0) I(L(\theta, \delta^{A}_+(X)) > L(\theta,\tilde{a}))\end{aligned}$$ Consider again only the case i) $I(-X+\tilde{a} \le 0) = 1$, as the other one is similar.
Let us find out first the values of $a$ for which $L(\theta, a) > L(\theta,\tilde{a})$. This is equivalent to finding out when the function $-a\theta + 0.5a^2 + \tilde{a}\theta - 0.5\tilde{a}^2$ is positive, which it is for $a<\theta - |-\theta + \tilde{a}|$ and $a>\theta + |-\theta + \tilde{a}|$. Therefore: $$\begin{aligned}
\nonumber
I&(-X+\tilde{a} \le 0) I(L(\theta, \delta^{A}_-(X)) > L(\theta,\tilde{a})) = \\
&= I(-X+\tilde{a} \le 0) (I(\delta^{A}_-(X)<\theta - |-\theta + \tilde{a}|) + \label{P1} \\
& \qquad \qquad \qquad \quad + I(\delta^{A}_-(X)>\theta + |-\theta + \tilde{a}|)) \label{P2}\end{aligned}$$ and note also that $\delta^{A}_-(X) = \tilde{a} +(x+c_{\alpha/2}-\tilde{a})\psi_-^A(X)$.
Suppose first that $-\theta + \tilde{a}>0$. Substituting the decision rule and rearranging the terms, the term (\[P1\]) is equal to: $$\begin{aligned}
I(\delta^{A}_-(X)<\theta - |-\theta + \tilde{a}|)&=I(\delta^{A}_-(X)<2\theta-\tilde{a})\\
&= I((x+c_{\alpha/2}-\tilde{a})\psi_-^A(X) < 2\theta - 2\tilde{a}) \\
&=0\end{aligned}$$ because $(x+c_{\alpha/2}-\tilde{a})\psi_-^A(X) \ge 0$, and the term (\[P2\]) is equal to: $$\begin{aligned}
I&(-X+\tilde{a}\le 0)I(\delta^{A}_-(X)>\tilde{a})\\
&\quad = I(-X+\tilde{a}\le 0)I((x+c_{\alpha/2}-\tilde{a})\psi_-^A(X) > 0) \\
&\quad = I(-X+\tilde{a}\le 0) I(-X+\tilde{a}<c_{\alpha/2})\\
&\quad =I(-X+\theta<c_{\alpha/2}+\theta-\tilde{a}) \\
&\quad \le I(-X+\theta<c_{\alpha/2})\end{aligned}$$ where the inequality follows from the fact that the case currently analyzed is $-\theta+\tilde{a}>0$.
If $-\theta + \tilde{a}<0$, similar reasoning can be used to show that the terms (\[P1\]) and (\[P2\]) satisfy: $$\begin{aligned}
I(\delta^{A}_-(X)<\tilde{a})&= 0 \\
I(-X+\tilde{a}\le 0)I(\delta^{A}_-(X)>2\theta - \tilde{a}) &\le I(-X+\theta<c_{\alpha/2})\end{aligned}$$
Combining all these results gives: $$\begin{aligned}
E_{\theta}(I(L(\theta, \delta^{A}(X)) > L(\theta,\tilde{a}))) &\le E_{\theta} (I(-X+\theta<c_{\alpha/2}) + I(-X+\theta>c_{1-\alpha/2})) \\
&=\alpha \quad \Box\end{aligned}$$
**References**

Berger, J. O. (1985), *Statistical Decision Theory and Bayesian Analysis* (2nd ed.), New York: Springer-Verlag.
Brandt, M.W. (2009), Portfolio Choice Problems, in Y. Ait-Sahalia and L. P. Hansen (eds.), *Handbook of Financial Econometrics*, North Holland.
Chamberlain, G., (2000), Econometrics and decision theory, *Journal of Econometrics*, 95 (2), 255-283.
G. Elliott and A. Timmermann (2016), *Economic Forecasting*, Princeton University Press.
Ellsberg, D. (1961), Risk, Ambiguity, and the Savage Axioms, *The Quarterly Journal of Economics*, 75 (4), 643–669.
Gennaioli, N. and A. Shleifer (2010), What Comes to Mind, *The Quarterly Journal of Economics*, 125 (4), 1399–1433.
Gilboa, I. and M. Marinacci (2013), Ambiguity and the Bayesian Paradigm, in *Advances in Economics and Econometrics: Theory and Applications*, Tenth World Congress of the Econometric Society, D. Acemoglu, M. Arellano and E. Dekel (Eds.), New York, Cambridge University Press.
Geweke, J. and C. Whiteman (2006), Bayesian Forecasting, in *Handbook of Economic Forecasting, Volume I*, edited by G. Elliott, C. W. J. Granger and A. Timmermann, Elsevier.
Granger, C.W.J. and M.J. Machina (2006), Forecasting and decision theory, in G. Elliott, C. Granger and A. Timmermann (eds.), *Handbook of Economic Forecasting*, vol.1, 81-98, Elsevier.
Granger, C.W.J. and P. Newbold (1986), *Forecasting Economic Time Series*, Academic Press.
Jobson, J.D. and B. Korkie (1981), Estimation for Markowitz Efficient Portfolios, *Journal of the American Statistical Association*, 75, 544-554.
Karlin, S. and H. Rubin (1956), The Theory of Decision Procedures for Distributions with Monotone Likelihood Ratio, *The Annals of Mathematical Statistics*, 27, 272-299.
Lehmann, E.L. and J.P. Romano (2005), *Testing Statistical Hypothesis*, Springer.
Manganelli, S. (2009), Forecasting with Judgment, *Journal of Business and Economic Statistics*, 27 (4), 553-563.
Manski, C.F. (2013), *Public policy in an uncertain world: analysis and decisions*, Harvard University Press.
Markowitz, H.M. (1952), Portfolio Selection, *Journal of Finance*, 7, 77-91.
Marinacci, M. (2015), Model Uncertainty, *Journal of European Economic Association*, 13, 998-1076.
Savage, L.J. (1954), *The Foundations of Statistics*, New York, John Wiley & Sons.
Patton, A.J. and A. Timmermann (2012), Forecast Rationality Tests Based on Multi-Horizon Bounds, *Journal of Business and Economic Statistics*, 30 (1), 1-17.
Tversky, A. and D. Kahneman (1974), Judgment under Uncertainty: Heuristics and Biases, *Science*, 185, 1124-1131.
Wald, A. (1950), *Statistical Decision Functions*, New York, John Wiley & Sons.
[^1]: European Central Bank, simone.manganelli@ecb.int. I would like to thank for useful comments and suggestions Andrew Patton, Joel Sobel, Harald Uhlig, as well as participants at the NBER Summer Institute, the Duke/UNC conference on New Developments in Measuring and Forecasting Financial Volatility, the Southampton Finance and Econometrics Workshop, the NBER-NSF Seminar on Bayesian Inference in Econometrics and Statistics in St. Louis, the Berlin conference Modern Econometrics Faces Machine Learning, the $10^{th}$ SoFiE conference in New York, the HeiKaMEtrics Network on Financial Econometrics in Heidelberg, the German Statistical Association in Rostock, as well as seminar participants at the ECB, Humboldt University Berlin, Technische Universitaet Dresden, the University of Bern, the UZH Finance Seminar in Zurich, Tinbergen Institute’s Econometrics seminar in Amsterdam, the Finance Seminar Series at the Goethe University in Frankfurt, and the de Finetti Risk Seminars at Bocconi University.
[^2]: I denote random variables with upper case letters ($X$) and their realization with lower case letters ($x$).
---
abstract: 'Spin chains have long been considered as candidates for quantum channels to facilitate quantum communication. We consider the transfer of a single excitation along a spin-1/2 chain governed by Heisenberg-type interactions. We build on the work of Balachandran and Gong [@Balachandran2008], and show that by applying optimal control to an external parabolic magnetic field, one can drastically increase the propagation rate by two orders of magnitude. In particular, we show that the theoretical maximum propagation rate can be reached, where the propagation of the excitation takes the form of a dispersed wave. We conclude that optimal control is not only a useful tool for experimental application, but also for theoretical enquiry into the physical limits and dynamics of many-body quantum systems.'
author:
- Michael Murphy
- Simone Montangero
- Vittorio Giovannetti
- Tommaso Calarco
bibliography:
- 'reduced.bib'
title: Communication at the quantum speed limit along a spin chain
---
Introduction
============
Quantum computers promise to allow efficient simulation of large dynamic and complex systems and deliver performance advantages over their classical counterparts. One of the central considerations for the construction of a quantum computer is an infrastructure that can rapidly and robustly transport qubit states between sites where qubit operations can be performed. The components for this infrastructure may be thought of as quantum channels for quantum information transfer. One of the technologies under investigation to constitute such a channel is the one-dimensional spin-chain [@Bose2007; @Balachandran2008; @Romero-isart2007; @Christandl2004; @Subrahmanyam2004; @Bose2003; @Kay2006; @Burgarth2009; @Burgarth2007; @Giovannetti2006], which consists of a string of particles coupled via their spin degrees of freedom, each acting as an effective two-level quantum system. As is customary in quantum information processing, proper engineering of the control parameters of the system is essential to achieve the high fidelity necessary for robust quantum computation. This can be obtained, for instance, by employing a numerical optimisation method which, for the specific settings of the problem, seeks the optimal control pulses that allow one to implement the desired operation [@Calarco2004; @Montangero2007; @DeChiara2008; @Schulte-Herbruggen2005; @Khaneja2005; @Grace2007; @Nebendahl2009; @Sporl2007; @Schirmer2009a; @Nigmatullin2009; @Tesch2002]. In this paper, we apply such a method, known in the literature as the Krotov method [@Sklarz2002; @Tannor1992; @Somloi1993; @Palao2002; @Palao2003], to the case of quantum state transfer along a one-dimensional spin chain. The specific system we use was introduced by Balachandran and Gong [@Balachandran2008], but here we show that by designing the external driving parameters with optimal control methods, one can obtain a significant increase in fidelity, even over short time scales [@Schirmer2009].
These high-fidelity, high-speed transmissions exhibit interesting characteristics. If one ignores the effects taking place near the boundaries, the evolution of the excitation is that of a dispersed wave, moving with almost constant velocity along the chain. This velocity is independent of the chain length, and furthermore has an upper bound, indicating the presence of a fundamental limit on the rate of transmission. Through a closer analysis, we show that this limit can be directly related to the theoretical maximum speed of the state transfer allowable by the laws of quantum mechanics [@Yung2006; @Giovannetti2003; @Giovannetti2004; @Pfeifer1993; @Lloyd2000; @Caneva2009].
Producing time-optimal gates has already been explored in the literature [@Carlini2006; @Carlini2007; @Khaneja2001; @Khaneja2002; @Reiss2003; @Fisher2009], where the authors considered geodesics on the Bloch sphere for systems with a low number of dimensions. Unfortunately, extension of these methods to many-body systems (such as the system we consider here) is prohibitively difficult. Conversely, the numerical optimisation methods that we employ have little difficulty in finding sets of optimal solutions, even at this limit. In effect, we demonstrate that through application of optimal control, we can not only transmit the excitation with a high fidelity, but also at the fastest possible speed. One can even reverse the problem, implying that optimal control can be used to probe such fundamental dynamical limits on many-body quantum systems. Such tools will be invaluable as the ambition of quantum science leads it towards investigations of systems of greater complexity which are less tractable analytically.
The paper is arranged as follows. In Section \[sec:sc\], we describe the system used for information transfer in more detail and the precise scheme which we will use for propagating quantum information in the system. Section \[sec:oct\] discusses the application of optimal control to the transfer scheme, and shows that optimal control can effect significant gains in the transfer speed. We then discuss the fundamental limit of these improvements in Section \[sec:conn\], and show that optimal control in fact allows us to reach this limit, thus allowing us to transfer the spin state in the fastest possible time allowable by the laws of quantum mechanics. Finally, we present the conclusions in Section \[sec:conc\].
Spin chains as quantum channels {#sec:sc}
===============================
Overview
--------
Using spin chains as quantum channels for communication between two parties was first proposed by Bose in 2003 [@Bose2003] and later developed in a series of papers (we refer the reader to Ref. [@Bose2007] for a review). The idea is relatively simple: Alice (the sender) has a quantum state she wants to relay to Bob (the receiver). Between them is a one-dimensional chain of $N$ spin-1/2 particles which are coupled via nearest-neighbour interactions. Alice has access to the first spin in this chain, and can prepare its spin state as she chooses. Bob has access to the final site (the terms ‘spin’ and ‘site’ will be used interchangeably), whose state he can read out. Following [@Balachandran2008], we apply an external parabolic magnetic field, which Alice can control. The procedure for sending quantum information along the chain is as follows.
1. The spin chain is prepared in its ground state with respect to the external magnetic field.
2. Alice prepares the initial spin state to be the state she wishes to transfer.
3. By manipulating the magnetic field, Alice controls the propagation of the spin along the chain, which takes place due to the coupling between the spin degrees of freedom.
4. After some prescribed time when the state has been transferred to the final site, Bob reads out the state of this site.
The Hilbert space and Hamiltonian
---------------------------------
The model we consider is sketched in Fig. \[fig:sc1\]. It is composed of a one-dimensional spin-1/2 chain with $N$ sites, where distances are measured by the variable $x$ (although this may not be a physical distance). We will consider uniform Heisenberg nearest-neighbour couplings characterised by the same coupling strength $J$, and the presence of a parabolic external magnetic field in the $z$-direction, normal to the direction $x$. Consequently, the field will act on the $n$th site as $$B_n(t) = C(t)\bigl(x_n - d(t)\bigr)^2\:,$$ where $d(t)$ is the position of the field minimum along $x$ at time $t$, and $C(t)$ is a measure of the global field strength.
![(Colour online) The one-dimensional spin chain used for information transfer. The (blue) filled circles represent sites along the chain, with the applied magnetic field depicted. The effective couplings are indicated operating between the sites.[]{data-label="fig:sc1"}](fig1)
The Hamiltonian then takes the form $$\label{eq:ham}
H(t) = -\frac{J}{2}\sum_{n=1}^{N-1}\vec{\sigma}_n\cdot\vec{\sigma}_{n+1} +
\sum_{n=1}^{N}B_n(t)\sigma_n^z\:,$$ where $n$ labels the spin sites, with $n = 1$ and $n = N$ referring to the first and last spins, respectively, and $\vec{\sigma}_n =
(\sigma^x_n, \sigma^y_n, \sigma^z_n)$ are the Pauli spin operators for the $n$th spin. For convenience, all system parameters are scaled to make them dimensionless, and the coupling strength is set to $J =
1$.
The dynamics are governed by the interplay between the nearest-neighbour interactions and the interaction of each site with the external parabolic magnetic field. When sites are far from the field minimum, the local field strength dominates over the nearest-neighbour interactions, effectively ‘switching off’ the coupling between sites. For sites near the minimum where the field is weak, the nearest-neighbour coupling dominates, and the neighbouring sites interact with each other. These two processes control the propagation of spin states along the chain.
Communicating quantum information
---------------------------------
We identify the computational basis for our system with the quantised states of each spin, such that ${\ensuremath{\lvert 0 \rangle}} = {\ensuremath{\lvert \downarrow \rangle}}$ (spin down with respect to $z$) and ${\ensuremath{\lvert 1 \rangle}} = {\ensuremath{\lvert \uparrow \rangle}}$ (spin up). Assume that Alice prepares the chain in the initial state [$\lvert \Psi(0) \rangle$]{}. We can write this state as $$\label{eq:psi0}
{\ensuremath{\lvert \Psi(t = 0) \rangle}} = {\ensuremath{\lvert \varphi_1 \rangle}} \equiv {\ensuremath{\lvert 1 \rangle}} \otimes {\ensuremath{\lvert 0 \rangle}} \otimes {\ensuremath{\lvert 0 \rangle}} \otimes
\dots \otimes {\ensuremath{\lvert 0 \rangle}}\:,$$ with the first spin site in the state [$\lvert 1 \rangle$]{} and all other sites in their ground state [$\lvert 0 \rangle$]{}. The states ${\ensuremath{\lvert \varphi_n \rangle}}$ are defined as $$\label{eq:phin}
{\ensuremath{\lvert \varphi_n \rangle}} \equiv \bigotimes_{m=1}^{N}
{\ensuremath{\lvert \delta_{mn} \rangle}}\:, \quad n = 1, \dots, N,$$ where $\delta_{mn}$ is the Kronecker delta. Alice’s goal is to manipulate the magnetic field parameters $C(t)$ and $d(t)$ such that at the final time $T$ the final state obeys $$\label{eq:final_cond}
|{\ensuremath{\langle \Psi(T) \vert \varphi_N \rangle}}|^2 = 1\:.$$ The protocol for transferring the state is based on that described in Ref. [@Balachandran2008], which we outline in Figure \[fig:sc2\]. The transfer begins with the state [$\lvert \Psi(0) \rangle$]{} and with the potential minimum centred at $x = 0$. At first, the interaction between the first two sites dominates over the interaction with the locally weak magnetic field, and so the sites interact and the spin state migrates from the first site to the second. As the field minimum moves along the $x$-axis, nearest-neighbour interactions are effectively switched on for pairs of spins closest to the minimum, and switched off for spin pairs that are distant. By correctly moving the field minimum and adjusting the field strength, the spin state is able to traverse the chain.
![(Colour online) The transfer begins with the state [$\lvert \Psi(0) \rangle$]{} and with the potential minimum centred at $x = 0$. (a) The excitation is localised at the first site. (b) The field minimum moves along the chain during the evolution. (c) The spin state has been completely transferred to the final site in the chain.[]{data-label="fig:sc2"}](fig2)
The condition in Eq. (\[eq:final\_cond\]) means that we do not preserve the phase of the initial state; to achieve this (as also discussed in Ref. [@Balachandran2008]) one can use dual-rail encoding [@Burgarth2005; @Burgarth2005a; @Burgarth2005b], whereby one encodes the qubit in the entanglement phase of a pair of spin chains. In what follows, we shall only consider the phase-insensitive transfer of a single excitation.
Since each site of the chain has two internal spin states, the size of the Hilbert space $\mathcal{H}$ scales exponentially with the number of sites, so that for $N (\geq 1)$ sites, $\mathrm{dim}\:\mathcal{H} =
2^N$. However, since $[H(t),\sum_{n=1}^{N} \sigma^z_n] = 0$, the state of the system only evolves within the subspace $\mathcal{U} \subset
\mathcal{H}$, spanned by the $N$ basis states ${\ensuremath{\lvert \varphi_n \rangle}}$ [@Bose2007]. The reduced size of the effective Hilbert space is particularly beneficial when one wants to numerically simulate the evolution efficiently. We do this by solving the associated Schrödinger equation $$\label{eq:schrod}
i \frac{\partial}{\partial t}{\ensuremath{\lvert \Psi(t) \rangle}} = \hat{H}(t) {\ensuremath{\lvert \Psi(t) \rangle}}\:,$$ where $\hat{H}$ is the matrix form of the Hamiltonian that acts only on the subspace $\mathcal{U}$ $$\label{eq:ham2}
\hat{H}(t) \equiv \hat{H}_0 + \hat{H}_1(t)\:,$$ with $$\hat{H}_0 =-2J + J\begin{pmatrix}
1&1&0&\cdots&0&0\\
1&0&1&&0&0\\ \vspace{-0.05in}
0&1&0&&0&0\\
\vdots&&&\ddots&&\vdots\\
0&0&0&&0&1\\
0&0&0&\cdots&1&1
\end{pmatrix}$$ and $$\hat{H}_1(t) = \mathrm{diag}\left(f_0(t),f_1(t),\dots,f_{N-1}(t)\right)\:,$$ where $f_n(t) = C(t)\left(x_n - d(t)\right)^2$ (note that we have rescaled the energy so that spins pointing down do not contribute to the total energy). The Schrödinger equation is integrated numerically using the Crank-Nicolson method [@Crank1996].
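As a concrete illustration of this reduced description, the following minimal Python sketch builds the $N\times N$ matrix $\hat{H}(t)=\hat{H}_0+\hat{H}_1(t)$ quoted above and advances a state by one Crank-Nicolson step. It is not the code used for the results reported here; the choice of site positions $x_n = 0,1,\dots,N-1$, the time step, and the field values are illustrative assumptions, with $J=1$ and $\hbar=1$ as in the text.

```python
import numpy as np

def hamiltonian(N, C, d, J=1.0):
    """Effective Hamiltonian in the N-dimensional single-excitation subspace.

    H0 holds the nearest-neighbour couplings (with the boundary entries of the
    rescaled matrix quoted above); H1 is the diagonal parabolic-field term.
    Site positions x_n = 0, 1, ..., N-1 are an illustrative choice.
    """
    x = np.arange(N, dtype=float)
    H0 = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # off-diagonal couplings
    H0[0, 0] = H0[-1, -1] = 1.0                                    # boundary diagonal entries
    H0 = J * H0 - 2.0 * J * np.eye(N)                              # overall energy shift
    H1 = np.diag(C * (x - d) ** 2)                                 # parabolic field
    return H0 + H1

def crank_nicolson_step(psi, H, dt):
    """One Crank-Nicolson step: (1 + i H dt/2) psi(t+dt) = (1 - i H dt/2) psi(t)."""
    N = len(psi)
    A = np.eye(N) + 0.5j * dt * H
    B = np.eye(N) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

N = 101
psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                       # excitation localised on the first site
psi = crank_nicolson_step(psi, hamiltonian(N, C=1.0, d=0.0), dt=0.01)
print(abs(psi[0]) ** 2)            # probability remaining on site 1 after one step
```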
This scheme was first considered by Balachandran and Gong [@Balachandran2008], who showed that by choosing $d(t) = st$ and $C(t) = k$, where $s$ and $k$ are constant, one is able to adiabatically transfer the state across the entire chain with relatively good fidelities. However, the transfer rates here are very slow, with transfer times typically on the order of $10^4 J^{-1}$ for fidelities greater than 99%.
In many proposed implementations of quantum computers, it is likely that transport processes will take up a significant amount of the total operating time. It therefore seems clear that one should seek to minimise the time required for these processes. However, according to quantum mechanics there is some fundamental limit which restricts the speed at which we can communicate with our spin chain, referred to in the literature as the ‘quantum speed limit’ (QSL) [@Pfeifer1993; @Lloyd2000; @Giovannetti2003; @Yung2006; @Levitin2009; @Caneva2009]. The goal is to come as close as possible to this limit, effectively communicating at the highest possible speed allowable by quantum mechanics. We shall see in the next section that optimal control can help us in this endeavour.
Optimal dynamics {#sec:oct}
================
We can state our problem in the following way: we start with an initial state, and want to control the system to produce the desired final state. In our case, the initial state is ${\ensuremath{\lvert \varphi_1 \rangle}}$, and we want to achieve the final state ${\ensuremath{\lvert \varphi_N \rangle}}$ (up to a global phase). We can control the evolution of the system using the external magnetic field, in particular the time-dependent controls $d(t)$ and $C(t)$. (Although in principle we could also control the inter-spin coupling $J$, this is much more difficult to achieve experimentally.) Optimal control theory provides us with a set of tools to search for the optimal way to control the system, often referred to as the set of optimal *controls*.
Here, we implement an optimal control algorithm most commonly known as the Krotov method. In outline, the method works as follows.
1. We solve the Schrödinger equation (\[eq:schrod\]) to find [$\lvert \Psi(T) \rangle$]{}, where $T$ is the total evolution time.
2. \[item:loop\] We define the co-state ${\ensuremath{\lvert \chi(T) \rangle}} =
    {\ensuremath{\lvert \varphi_N \rangle}}{\ensuremath{\langle \varphi_N \vert \Psi(T) \rangle}}$. This state is propagated backwards to the initial time.
3. The initial state is then propagated forward again through time, but at each time step we calculate the matrix elements $${\ensuremath{\langle \chi(t) \rvert}}\frac{\partial H(u_n(t); t)}{\partial u_n(t)}{\ensuremath{\lvert \Psi(t) \rangle}}$$ for the two controls $u_1(t) = d(t)$ and $u_2(t) = C(t)$. The matrix elements are then used to update the control functions, which are then used to propagate $\Psi(t)$ to the next time step.
4. We can then calculate the fidelity of the transport $$F \equiv |{\ensuremath{\langle \Psi(T) \vert \varphi_N \rangle}}|^2\:,$$ which tells us how close we were to achieving our goal. (Note that we will often refer not to the fidelity, but to the infidelity $I \equiv
1 - F$.) If we achieve fidelity $F = 1$ (up to a given threshold), we stop the optimisation, otherwise we begin again at step \[item:loop\].
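A schematic Python version of this update loop is given below, reusing the `hamiltonian` and `crank_nicolson_step` helpers from the earlier sketch. It only illustrates the structure of steps 1–4: the finite-difference gradients of the Hamiltonian, the update strength `lam`, and the use of the imaginary part of the matrix element in the control update are illustrative choices, not the implementation used for the results in this paper.

```python
import numpy as np

def propagate(psi, controls, dt, ham):
    """Forward propagation through all time steps; returns the full trajectory."""
    traj = [psi.copy()]
    for (C, d) in controls:
        psi = crank_nicolson_step(psi, ham(len(psi), C, d), dt)
        traj.append(psi.copy())
    return traj

def krotov_iteration(psi0, target, controls, dt, ham, lam=0.1, eps=1e-6):
    """One sweep of the update scheme outlined in steps 1-4 (schematic only)."""
    # step 1: forward propagation with the current controls
    traj = propagate(psi0, controls, dt, ham)
    # step 2: co-state at the final time, propagated backwards
    chi = target * np.vdot(target, traj[-1])
    chis = [chi.copy()]
    for (C, d) in reversed(controls):
        chi = crank_nicolson_step(chi, ham(len(chi), C, d), -dt)
        chis.append(chi.copy())
    chis = chis[::-1]
    # step 3: forward propagation, updating the controls at every time step
    psi = psi0.copy()
    new_controls = []
    for k, (C, d) in enumerate(controls):
        N = len(psi)
        H = ham(N, C, d)
        dH_dC = (ham(N, C + eps, d) - H) / eps      # finite-difference gradients
        dH_dd = (ham(N, C, d + eps) - H) / eps
        C = C + lam * np.imag(np.vdot(chis[k], dH_dC @ psi))
        d = d + lam * np.imag(np.vdot(chis[k], dH_dd @ psi))
        new_controls.append((C, d))
        psi = crank_nicolson_step(psi, ham(N, C, d), dt)
    # step 4: fidelity of the updated evolution
    fidelity = abs(np.vdot(target, psi)) ** 2
    return new_controls, fidelity
```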
There are several aspects in implementing the algorithm which are described in more detail in Ref. [@Sklarz2002]. Figure \[fig:nocontpop\] shows the non-adiabatic transfer of a spin excitation across a chain of $N = 101$ spins without applying optimal control. One sees that during propagation, much of the spin excitation has been left behind. One way to correct this would be to lower the field strength: this will allow neighbouring sites to interact for longer, so that more of the excitation can be transmitted. However, this causes the excitation to spread out, which can be seen in Fig. \[fig:nocontpop2\]. In comparison with Fig. \[fig:nocontpop\], we see that although we have not left as much of the excitation behind, we have spread it over more sites.
![(Color online) Excitation probability plotted against $x$ for a spin chain with $N = 101$ sites, at times (a) 0, (b) 100 and (c) 200, in units of $J^{-1}$. Here, $d(t) = 0.5 t$ and $C(t) = 1$. The final fidelity is only around 15%.[]{data-label="fig:nocontpop"}](fig3)
![(Color online) Excitation probability plotted against $x$ for a spin chain with $N = 101$ sites, at times (a) 0, (b) 100 and (c) 200, in units of $J^{-1}$. Here, $d(t) = 0.5 t$ and $C(t) =
0.1$. The final fidelity here is around 5%.[]{data-label="fig:nocontpop2"}](fig4)
![(Color online) Excitation probability plotted against $x$ for a spin chain with $N = 101$ sites, at times (a) 0, (b) 100 and (c) 200, in units of $J^{-1}$. Here, $d(t)$ and $C(t)$ were optimally controlled. The final fidelity is $> 99\%$.[]{data-label="fig:contpop"}](fig5)
After applying optimal control (300 iterations of the update procedure), we arrive at the evolution shown in Fig. \[fig:contpop\]. Here we see that we no longer leave excitation behind at the initial spin sites, and although we spread out the excitation during transport, we successfully recover the highly localised final state, giving a final fidelity $F$ that differs from unity by $<10^{-4}$. The pulses required to achieve this result are shown in Fig. \[fig:contpulse\].
![(Color online) The optimal control pulses for (a) $C(t)$ and (b) $d(t) - d_0(t)$, where $d_0(t) = 0.5 t$. The main features here are large perturbations at the initial and final time due to the boundaries, and slower modulations for the intermediate stage of the transport.[]{data-label="fig:contpulse"}](fig6)
Typical features of these pulses are large modulations at the boundaries, necessary for ‘accelerating’ (it will be useful here to imagine an excitation wave) the excitation at the initial time, and then ‘decelerating’ it near the final time. Small modulations are required at intermediate times in order to prevent the excitation from spreading over too many sites. It is also worth noting that the speed achieved here is at least two orders of magnitude faster than is possible in the adiabatic case for comparable fidelities [@Balachandran2008].
If we decrease $T$, we find for all times $T$ shorter than a particular time ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ that even after applying the optimisation algorithm we are still unable to achieve high-fidelity state transfer. In other words, there is a minimum time required to perform the transfer [@Caneva2009]. The lower-bound on the value of ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ is set by the quantum speed limit (QSL); no transfer can take place faster than the QSL allows. Fig. \[fig:oct\_at\_sl\] shows the same transfer of excitation as in Fig. \[fig:contpop\], but in this case we have set the total allowed time $T = {\ensuremath{T_{\mathrm{QSL}}^\ast}}$ (how we determined ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ is shown later). One sees clearly that the evolution of the system is that of a wave of excitation, moving with an almost constant velocity along the chain.
![(Color online) The probability density of the wavefunction along the chain at different times: (a) $t = 0$, (b) $t = T/4$, (c) $t = T/2$, (d) $t = 3T/4$, and (e) $t = T$, where $T = {\ensuremath{T_{\mathrm{QSL}}^\ast}}=
56.50J^{-1}$. Both $d(t)$ and $C(t)$ were found after 100,000 iterations of the optimal control algorithm.[]{data-label="fig:oct_at_sl"}](fig7)
![(Color online) The probability density of the wavefunction along the chain at different times: (a) $t = 0$, (b) $t = T/4$, (c) $t = T/2$, (d) $t = 3T/4$, and (e) $t = T$, where $T =
53.30J^{-1} < {\ensuremath{T_{\mathrm{QSL}}^\ast}}$. Both $d(t)$ and $C(t)$ were found after 100,000 iterations of the optimal control algorithm.[]{data-label="fig:oct_past_sl"}](fig8)
When we choose time $T < {\ensuremath{T_{\mathrm{QSL}}^\ast}}$, we find accordingly that the optimal control algorithm is unable to find an optimal solution, even after many thousands of iterations. This is a strong indication that we have gone beyond the quantum speed limit, and there is no solution by which we can transfer the excitation across the chain in the given time. The evolution of the system in this case is shown in Fig. \[fig:oct\_past\_sl\]. In comparison to Fig. \[fig:oct\_at\_sl\], one sees that the evolution looks much the same. However, if one compares the excitation profile at $T/2$ for both evolutions, one sees that while the evolution at the QSL has the excitation wave centred at the 51st site (i.e. the halfway point), the evolution for a time $T < {\ensuremath{T_{\mathrm{QSL}}^\ast}}$ falls short of the halfway point after $T/2$. This is an indication that we are indeed beyond the QSL, since if we cannot reach the halfway point before half the time has elapsed, we might well guess that we cannot reach the final site in the remaining half of the time.
We can see this failure of the optimisation algorithm more clearly in Fig. \[fig:oct\_error\].
![(Color online) The decrease in infidelity of a transfer across a chain with 101 sites against the iterations of the control algorithm. The solid (red) line is the convergence for a transfer time $T = 70.92J^{-1} > {\ensuremath{T_{\mathrm{QSL}}^\ast}}$, the dashed (green) line for a transfer time $T = {\ensuremath{T_{\mathrm{QSL}}^\ast}}= 56.50J^{-1}$, and the dotted (blue) line for a transfer time of $T = 53.33J^{-1} < {\ensuremath{T_{\mathrm{QSL}}^\ast}}$.[]{data-label="fig:oct_error"}](fig9)
For times $T > {\ensuremath{T_{\mathrm{QSL}}^\ast}}$, the infidelity converges almost exponentially towards zero. For times $T < {\ensuremath{T_{\mathrm{QSL}}^\ast}}$, the decrease in infidelity saturates after several hundred iterations.
Another indication that the QSL has been reached can be found by examining the average “velocity” of the excitation wave as it moves across the chain. Given a total time $T$ for the propagation, the average rate at which the excitation *should* be transmitted is given by $v_a = (N - 1)/T$. Examining the dynamics, we can see that for much of the propagation time, the excitation moves along the chain with an (approximately) constant velocity. We can quantify this velocity as $$\label{eqn:actual_speed}
v_d = \frac{4}{T^2} \int_{\frac{T}{4}}^{\frac{3T}{4}}
\langle x \rangle {\ensuremath{\:\mathrm{d}}}t\:,$$ where $\langle x \rangle = {\ensuremath{\langle \Psi(t) \rvert}} x {\ensuremath{\lvert \Psi(t) \rangle}}$ is the expectation value of the position of the excitation along the chain. In other words, we take the average position of the excitation in the time interval $[T/4,3T/4]$ (to avoid effects at the ends of the chain) and divide by the average time taken to reach that position, $T/2$.
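From a stored trajectory of single-excitation amplitudes, this average velocity can be estimated as in the short sketch below; the uniform time grid, the choice $x_n = 0,1,\dots,N-1$ and the trapezoidal quadrature are assumptions of the example, not of the analysis above.

```python
import numpy as np

def wave_velocity(traj, T):
    """Estimate v_d = (4 / T^2) * integral over [T/4, 3T/4] of <x>(t) dt.

    traj: array of shape (n_times, N) holding the excitation amplitudes at
    equally spaced times in [0, T]; site positions x_n = 0, 1, ..., N-1 assumed.
    """
    traj = np.asarray(traj)
    n_times, N = traj.shape
    times = np.linspace(0.0, T, n_times)
    x_mean = np.sum(np.abs(traj) ** 2 * np.arange(N), axis=1)   # <x>(t)
    mask = (times >= T / 4) & (times <= 3 * T / 4)
    return 4.0 * np.trapz(x_mean[mask], times[mask]) / T ** 2
```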
In the ideal case, we would have $v_a = v_d$, in which case the optimal solution would be the transit of the excitation along the chain at exactly the average rate required to reach the other end. However, as we cross the threshold set by the QSL, we should find that $v_d$ reaches a maximum, which is the maximum speed at which the excitation can propagate. This is exactly what is seen in Fig. \[fig:oct\_wavespeed\].
![(Color online) The average speed of the excitation wave $v_d$ versus $v_a$. The solid (red) line shows the result for a chain length of 41 sites, the long-dashed (green) line shows the same for 61 sites, the short-dashed (dark blue) line for 81 sites, the dotted (pink) line for 101 sites, and the dashed-dotted (light blue) line for 121 sites. The black dotted line is the line $v_a = v_d$.[]{data-label="fig:oct_wavespeed"}](fig10)
The last issue we want to address is robustness. In essence, how much information in the control pulses given in Fig. \[fig:contpulse\] (and indeed in all of the control pulses at the QSL) can be discarded without detriment to the transfer fidelity? Figure \[fig:ftrans\]
![(Color online) The solid (red) line shows the Fourier transform of the pulse $d(t) - d_0(t)$ for a chain length of 101 spins with a total time $T = 56.50J^{-1}$.[]{data-label="fig:ftrans"}](fig11)
shows an example spectrum of a pulse for $d(t)$ for a transfer along a chain of 101 spins at the QSL, and Fig. \[fig:oct\_fourier\]
![(Color online) The infidelity of the transfer related to the maximum frequency component of the controls retained after filtering. For key, see Fig. \[fig:oct\_wavespeed\].[]{data-label="fig:oct_fourier"}](fig12)
shows the effect on the fidelity after filtering the optimised pulses. The filter applied is a simple frequency cutoff: the pulse (in frequency space) is multiplied by the window function $$\label{eq:conv_func}
\gamma(\nu; \nu_{\text{max}}) = \begin{cases}
1& \text{if}\ |\nu| \leq \nu_{\text{max}}, \\
0&\text{otherwise},
\end{cases}$$ where $\nu_{\text{max}}$ is the maximum allowed frequency in the pulse. We see that not all of the frequencies in the control pulses need be retained; on average, we only need frequencies up to around $4 J$ in order to maintain a high fidelity. Note that this is independent of the chain length $N$. Figure \[fig:smoothed\_pulses\] shows a set of pulses that transport the excitation along a chain of 101 spins with an infidelity $I<10^{-4}$, where the maximum frequency component is $\sim 4J$.
![(Color online) The optimal control pulses for (a) $C(t)$ and (b) $d(t) - d_0(t)$, where $d_0(t) = 1.77 t$. The maximum frequency component is $\sim4 J$.[]{data-label="fig:smoothed_pulses"}](fig13)
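The cutoff filter described above amounts to zeroing the high-frequency part of the pulse spectrum; a minimal sketch is given below. The discrete Fourier transform, the sampling step `dt`, and the use of ordinary rather than angular frequency (factors of $2\pi$ are glossed over) are illustrative choices of the example.

```python
import numpy as np

def lowpass_filter(pulse, dt, nu_max):
    """Remove all frequency components above nu_max from a sampled control pulse."""
    spectrum = np.fft.rfft(pulse)
    freqs = np.fft.rfftfreq(len(pulse), d=dt)   # frequencies in units of 1/time
    spectrum[freqs > nu_max] = 0.0              # hard frequency cutoff
    return np.fft.irfft(spectrum, n=len(pulse))
```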
Theoretical limits of non-relativistic quantum theory {#sec:conn}
=====================================================
Does the limit ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ discussed in the previous section have a physical origin, or is it simply a numerical constraint, stemming from the construction of the optimisation routine itself? If, in fact, we *are* able to reach the physical limit by application of optimal control routines, then it would appear that optimal control can not only be used to improve the operation of experimental implementations, but indeed to probe a system’s dynamics and physical limits. This connection was already investigated in Ref. [@Caneva2009]; here, we elucidate further the methods that were applied and draw more specific conclusions.
The physical limits on quantum systems (and hence any physical system) have been investigated theoretically for several years; such considerations led Lloyd in 2000 to calculate the maximum rate at which any machine can process information [@Lloyd2000]. In particular, the notion of a “quantum speed limit” has been reported by several authors. We briefly recount this theory and its particular application to our problem.
The quantum speed limit {#sec:qsl}
-----------------------
What is the absolute maximum speed at which we can transfer information along our chain? This amounts to finding the minimum time it takes for the given initial state ${\ensuremath{\lvert \Psi(0) \rangle}}$ to evolve to the goal state ${\ensuremath{\lvert \varphi_N \rangle}}$. A possible route for finding this minimum time was explored by Carlini *et al.* [@Carlini2006], where it was shown that one may derive the time-optimal Hamiltonian for a given state evolution by minimising the quantum action $S$ of the system, by which the problem may be interpreted as a quantum analogue of the classical brachistochrone. In principle, the same procedure could be performed in our case, but the complexity of the calculation is prohibitive for a many-body system like ours. Hence we ask a somewhat simpler question, as in Giovannetti *et al.* [@Giovannetti2003]: how fast can a quantum system under a time-independent Hamiltonian evolve in time?
The notions of energy and time are inseparable, an idea that presents itself in the enigmatic time-energy uncertainty relation [@Busch2008]. Hence the minimum time in which we can perform some given evolution must be connected to the related energy scales. This minimum time is referred to as the quantum speed limit (QSL). For the case where the evolution is from an initial state to an orthogonal state for a time-independent Hamiltonian, this relation can be written explicitly as [@Giovannetti2004] $$\label{eq:tqsl_static}
{\ensuremath{\tau_{\mathrm{QSL}}}}\equiv \max\left(\frac{\pi \hbar}{2
E},\frac{\pi \hbar}{2 \Delta E}\right)\:,$$ with $$\begin{gathered}
\label{eq:del_e}
\Delta E \equiv \sqrt{{\ensuremath{\langle \psi(0) \rvert}}[\hat{H}(t) - E
(t)]^2{\ensuremath{\lvert \psi(0) \rangle}}}\:,\\
\label{eq:ela}
E \equiv {\ensuremath{\langle \psi(0) \rvert}}\hat{H}(t){\ensuremath{\lvert \psi(0) \rangle}}\:.\end{gathered}$$ As pointed out, this is only valid when the time evolution is governed by a time-independent Hamiltonian: $E$ and $\Delta E$ are a measure of the energy resources available in the system only at the initial time, which for time-independent Hamiltonians defines a fixed energy scale. In our case, the methodology must be slightly modified, by considering instead the *mean* energy spread of our system as it evolves under our time-dependent Hamiltonian, which we find by averaging the instantaneous energy spread of the system over the time interval $[0,T]$. By integrating over time, we effectively apply the bound to infinitesimal time steps ${\ensuremath{\:\mathrm{d}}}t$ where the Hamiltonian is approximately constant. We modify the definition in Eq. (\[eq:tqsl\_static\]) to read [@Caneva2009] $$\label{eq:tqsl_av2}
{\ensuremath{\tau_{\mathrm{QSL}}}}\equiv \max\left\{\frac{\pi\hbar}{2J}, \frac{\pi \hbar}{2 \Delta\mathcal{E}}\right\}\:,$$ where $$\Delta\mathcal{E} = \frac{1}{T}\int_0^T \Delta E (t) {\ensuremath{\:\mathrm{d}}}t\:,$$ with $$\begin{gathered}
\label{eq:del_elam}
\Delta E (t) \equiv \sqrt{{\ensuremath{\langle \Psi(t) \rvert}}[\hat{H}(t) - E
(t)]^2{\ensuremath{\lvert \Psi(t) \rangle}}}\:,\\
\label{eq:elam}
E (t) \equiv {\ensuremath{\langle \Psi(t) \rvert}}\hat{H}(t){\ensuremath{\lvert \Psi(t) \rangle}}\:.\end{gathered}$$ As was already pointed out, this speed limit defines the time it takes to rotate from the initial state to an orthogonal state. Since the initial and final sites are not directly coupled, we cannot immediately rotate from the initial state to our goal state. Due to this condition, we postulate that the speed limit must be interpreted as an effective *time-per-site*; the total time it takes to traverse the chain is this time-per-site multiplied by the number of sites (minus one) in the chain, or equivalently, the number of edges we have between the initial state vertex and the final state vertex when one views the spin chain as a connected graph.
Equation (\[eq:tqsl\_av2\]) effectively states that the minimum time it takes to rotate from the current system state to an orthogonal state is bounded from below by $\pi \hbar / (2 J)$ (we shall see later that for the evolutions we consider, the second term in Eq. (\[eq:tqsl\_av2\]) is always less than this term, so that we can neglect it). By considering the speed limit of a simple two-spin system with a coupling strength $J$, we can associate this bound with the time it takes to swap an excitation between only two sites, given that for the initial state the excitation is completely localised on one of the two sites. Using the reasoning above, we see that the quantum speed limit theory predicts that the minimum time to traverse the chain is given simply by the time it takes to perform a swap between two neighbouring sites (which we shall henceforth refer to as ‘orthogonal swaps’) multiplied by the number of sites in the chain (minus one). However, in our particular system, at some intermediate time it may be (as we have already seen from the results in Section \[sec:oct\]) that the excitation does not perform repeated swap operations, but rather moves along the chain as a dispersed “wave”. If one now imagines the picture of the excitation wave moving from site to site, we note that two excitation waves centred at neighbouring sites are not orthogonal, unlike when we have the excitation fully localised on a single site. This means that we can expect the actual propagation time to be *shorter* than the one calculated from simply doing repeated orthogonal swap operations. The optimised system performs a controlled excitation-wave propagation, which we can view as a cascade of *effective* swap operations, each shorter in duration than that given by the orthogonal swap. We are then motivated to write the total time to traverse the chain as $$\label{eq:fullqsl}
T_{\mathrm{QSL}} = \gamma (N - 1) {\ensuremath{\tau_{\mathrm{QSL}}}}\:,$$ where $\gamma$ is a dimensionless constant that quantifies the effective swap duration in terms of the orthogonal swap. As a side remark, we note that one can also imagine mapping the full chain with the effective swaps onto a shorter chain with orthogonal swaps, which is analogous to a reduction of the transmission length of the chain. Similar ideas have already been explored for long range interactions in Ref. [@Gualdi2008].
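Given a stored trajectory and the corresponding Hamiltonians, the bound of Eq. (\[eq:fullqsl\]) can be evaluated numerically as sketched below (with $\hbar=1$, a uniform time grid, and $\gamma$ supplied as an input, since it is only fixed empirically in the next section); this is an illustrative sketch rather than the procedure used for the figures.

```python
import numpy as np

def energy_spread(psi, H):
    """Instantaneous energy spread Delta E(t) = sqrt(<H^2> - <H>^2)."""
    e = np.real(np.vdot(psi, H @ psi))
    e2 = np.real(np.vdot(psi, H @ (H @ psi)))
    return np.sqrt(max(e2 - e ** 2, 0.0))

def qsl_time(traj, hams, T, N, J=1.0, gamma=0.34):
    """Estimate T_QSL = gamma * (N - 1) * tau_QSL with hbar = 1.

    traj: list of states psi(t) on a uniform grid in [0, T];
    hams: matching list of Hamiltonian matrices H(t).
    """
    spreads = [energy_spread(p, H) for p, H in zip(traj, hams)]
    mean_spread = np.trapz(spreads, np.linspace(0.0, T, len(spreads))) / T
    tau = max(np.pi / (2 * J), np.pi / (2 * mean_spread))
    return gamma * (N - 1) * tau
```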
Comparing limits {#sec:comp}
----------------
As already alluded to in Section \[sec:oct\], there comes a point where the optimal control algorithm is no longer able to reach an optimal solution. We aim to show that this limit on the evolution time (which we denoted by ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$) corresponds to the quantum speed limit for the system ${\ensuremath{T_{\mathrm{QSL}}}}$ discussed in Section \[sec:qsl\].
![(Colour online.) The infidelity reached after $R = 100,000$ iterations for different chain lengths $N$ and effective time-per-site $(T - b)/(N - 1)$, where $b = 3.65$.[]{data-label="fig:qsl_data"}](fig14)
The procedure for determining ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ is as follows. We select a chain length $N$, and set some initial evolution time $T$ which we assume to be longer than the corresponding ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$. We perform optimal control on the system for a fixed number of iterations $R$. We then repeat this for shorter and shorter times $T$. The results of the simulations are shown in Fig. \[fig:qsl\_data\]. Note that we plot the effective time-per-site $(T - b)/(N - 1)$ in order to make comparisons between chains of differing lengths easier. One sees clearly that for longer times, we are able to complete the state transfers with high fidelities. As we reduce the time, we begin to see that the final value of the infidelity does not converge to zero, even after many thousands of iterations of the control algorithm. Somewhere in between these two extremes lies the limit of the optimal control algorithm. We quantify this by setting a threshold $\varepsilon$ for the infidelity; the time ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ for each $N$ is defined as the smallest value of the time $T$ for which the infidelity $I < \varepsilon$ after $R$ iterations. The resulting ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ obeys a linear relation: $${\ensuremath{T_{\mathrm{QSL}}^\ast}}\approx a (N-1) + b$$ with $a = 0.34$ and $b = 3.65$. Note that this is *a posteriori* the same $b$ used in the effective time-per-site $(T - b)/(N - 1)$ for Fig. \[fig:qsl\_data\]. The introduction of the constant $b$ describes additional effects due to the boundaries of the chain, where the excitation wave is generated at the beginning of the evolution, and then collapsed into a localised excitation at the end. Additionally, $b$ is not dependent on $N$ (unless the chain length is of the order of the width of the spin-wave).
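This scan and the linear fit can be organised as in the sketch below. The routine `optimize_transfer`, which should return the infidelity reached after $R$ iterations for a given $N$ and $T$, is a hypothetical placeholder for the full optimisation run; the grid of trial times and the least-squares fit are illustrative choices.

```python
import numpy as np

def find_tqsl_star(N, trial_times, epsilon, optimize_transfer, R=100_000):
    """Smallest trial time whose infidelity after R iterations is below epsilon.

    optimize_transfer(N, T, R) -> infidelity is a placeholder for the full
    optimal-control run; it is assumed to exist and is not defined here.
    """
    good = [T for T in sorted(trial_times) if optimize_transfer(N, T, R) < epsilon]
    return min(good) if good else None

def fit_linear(Ns, tqsl_star_values):
    """Least-squares fit of T_QSL*(N) = a * (N - 1) + b."""
    a, b = np.polyfit(np.asarray(Ns) - 1, tqsl_star_values, deg=1)
    return a, b
```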
![(Colour online.) A comparison of the quantum-speed-limit-time ${\ensuremath{T_{\mathrm{QSL}}}}$ with the optimal control limit ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$. The solid (red) line is ${\ensuremath{T_{\mathrm{QSL}}}}$ with $\gamma = 1$, which is the repeated orthogonal swaps. The (blue) crosses are ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$ for different $N$ in the range 21–131 with $\varepsilon = 5\cdot 10^{-5}$. The dashed (green) line is ${\ensuremath{T_{\mathrm{QSL}}}}+ b$ with $\gamma = 0.34$.[]{data-label="fig:qsl_oct_comp"}](fig15)
We now compare the results from the quantum speed limit ${\ensuremath{T_{\mathrm{QSL}}}}$ with ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$, which is shown in Fig. \[fig:qsl\_oct\_comp\]. In order to evaluate Eq. (\[eq:tqsl\_av2\]) for each value of $N$, we must numerically calculate the second term in the bracket, since it depends upon the time-evolved state ${\ensuremath{\lvert \psi(t) \rangle}}$. For all points at our defined threshold, this comes out to be less than the first term in the bracket in Eq. (\[eq:tqsl\_av2\]), so that the speed limit is given simply by the effective swap time. One finds that optimal control outperforms what can be achieved through applying repeated swap operations between adjacent spins. Furthermore, by ignoring boundary effects for ${\ensuremath{T_{\mathrm{QSL}}^\ast}}$, we find that our model for the quantum speed limit fits the data with a value of $\gamma = 0.34$. This means that the speed limit achieved with the optimal control can be described (ignoring the ends of the chain) as a cascade of effective swaps.
Conclusion {#sec:conc}
==========
We have shown that we can successfully apply optimal control to the system given in Eq. (\[eq:ham\]) to produce fast transfers of excitations along spin chains; two orders of magnitude faster, in fact, than was reported in Ref. [@Balachandran2008] for comparable fidelities. This has application in the fast transport of quantum states over short distances. Furthermore, we have found a fundamental limit for optimal control beyond which optimisation is not possible, and identified it as a speed limit on the dynamics of the system, which is manifested by the dynamics as the propagation of an excitation wave with constant velocity. We compare this with the standard formulation of the quantum speed limit, and show that for our many-body problem, the quantum speed limit implies that the optimal strategy for transport is characterised by effective swaps along the chain. We confirm this through a comparison with the numerical results.
It is interesting to note that aside from the theory on the quantum speed limit, there is a large body of work concerned with a similar bound specifically for spin systems, namely the Lieb-Robinson bound [@Lieb1972; @Robinson1976; @Bravyi2006; @Hamma2009]. It would be interesting to investigate the connection between this bound and the QSL, although it is likely difficult to quantify this explicitly.
We have shown that not only is optimal control a useful tool for the optimisation of tasks relevant for quantum information processing (specifically transmission of quantum information along a spin chain), but also as a means to probe the limits of many-body quantum systems where the theoretical methods become unwieldy. We expect that given the generality of the method, it should be able to probe fundamental limits of many quantum systems that can be efficiently simulated. Indeed, we used the same technique to prove a bound on the duration of a unitary <span style="font-variant:small-caps;">swap</span> operation on a spin chain, showing that it was achievable in a time that scaled only polynomially with the number of sites [@Burgarth2009] (although it was not shown that this was a fundamental limit). We will continue with such investigations in future work.
We would like to thank L. Viola for valuable discussions. We acknowledge financial support by the EU under the contracts MRTN-CT-2006-035369 (EMALI), IP-EUROSQIP, IP-SCALA, and IP-AQUTE, and from the German SFB TRR21. We thank the bwGRID for computational resources.
|
---
abstract: 'A systematic field theory is presented for charged systems. The one-loop level corresponds to the classical Debye-Hückel (DH) theory, and exhibits the full hierarchy of multi-body correlations determined by pair-distribution functions given by the screened DH potential. Higher-loop corrections can lead to attractive pair interactions between colloids in asymmetric ionic environments. The free energy follows as a loop-wise expansion in half-integer powers of the density; the resulting two-phase demixing region shows pronounced deviations from DH theory for strongly charged colloids.'
address:
- '$^{\$}$Max-Planck-Institut für Kolloid- und Grenzflächenforschung, Kantstr. 55, 14513 Teltow, Germany'
- '$^*$Service de Physique Théorique, CEA-Saclay, 91191 Gif sur Yvette, France'
author:
- 'Roland R. Netz$^{\$}$ and Henri Orland$^*$'
title: Field theory for charged fluids and colloids
---
Since the early work of Debye and Hückel (DH) it is known that electrostatic interactions in a mixture of positively and negatively charged particles produce a net attraction[@DH]. This is due to charge screening: Each charge is (on average) predominantly surrounded by oppositely charged particles, which thus leads to an overall attraction between the particles. The resulting DH free energy contribution has been theoretically demonstrated to lead to phase separation in the context of ionic fluids[@Fisher1] and colloidal mixtures[@Roij], in agreement with experimental[@Sengers] and numerical[@Valleau] work on ionic fluids and experiments on colloidal mixtures[@Tata]. The exact nature and origin of the DH term has remained somewhat unclear, and several improvements have been devised based on series expansions and liquid-state theory[@Stell], explicit incorporation of dipole pairs[@Fisher1], and density-functional theory[@Roij].
A second intensely debated question concerns the possible existence of attractive interactions between similarly charged objects in electrolyte solution[@Jensen]. Experimentally, such an attraction has been seen for DNA[@Bloomfield] and strongly charged microspheres which are confined between charged plates[@Crocker]. Clearly, the phase separation observed for colloidal mixtures[@Tata] is a priori [*not*]{} an indication for such an attractive interaction, because the dense phase is induced by attractions between oppositely charged particles, as becomes explicit within DH theory (also, see the discussion in [@Roij]).
In this article we present a systematic field theory for charged systems, and calculate both the free energy and the effective interactions between charged particles immersed in an electrolyte solution. At the one-loop level, we recover the classical DH theory, the nature of which transpires in an especially lucid fashion within our framework: we find the full hierarchy of multi-body correlations to be present, with all pair-distribution functions given by the screened DH interaction. This means in specific that triplet correlations are already included at the DH level (in contrast to implicit assumptions in recent theories[@Fisher1]), and that effective interactions between similarly charged particles are repulsive. At higher order in our theory (which corresponds to including multi-loop diagrams), non-trivial multi-body interactions appear, and, consequently, the multibody correlations acquire contributions which [*cannot*]{} be described as superposition of pair correlations. Also, the effective pair interaction receives corrections which can be attractive if i) the electrolyte is asymmetric and consists of multivalent counterions and monovalent coions, or if ii) the colloidal charge is overcompensated by salt ions. The latter situation is realized in experiments on charged microspheres, where a strong attraction is only found in the vicinity of a charged wall[@Crocker]. The free energy of an ionic solution, expanded in the number of loops, follows to be a series in half-integer powers of the density, and thus constitutes a systematic low-density expansion[@McQuarrie]. The effects of higher-loop contributions on the demixing transition become increasingly important for highly charged colloids and lead to pronounced deviations from DH theory.
To proceed, we consider the partition function of $N$ charged, fixed test particles, immersed in a multi-component electrolyte solution with (in general) $M$ different types of ions, $$\begin{aligned}
\label{part1}
Z[\{R_N\}] &=&
\prod_{j=1}^M \left[ \frac{1}{n_j !}
\prod_{k=1}^{n_j}
\int \frac{{\rm d} {\bf r}_k^{(j)}}{\lambda^3} \right]
\nonumber \\ && \exp \left\{ -
\frac{1}{2} \int {\rm d} {\bf r}
{\rm d} {\bf r}' \hat{\rho}_c({\bf r})
v({\bf r}-{\bf r}') \hat{\rho}_c({\bf r}') \right\},\end{aligned}$$ where $v({\bf r}) = \ell_B /r$ is the Coulomb operator and the charge density operator $\hat{\rho}_c$ is defined by $$\hat{\rho}_c({\bf r}) \equiv \sum_{i=1}^N Q_i\delta({\bf r} -{\bf R}_i)
+\sum_{j=1}^M \sum_{k=1}^{n_j} q_j\delta({\bf r} -{\bf r}_k^{(j)} )$$ with $Q_i$ and $q_j$ being the charges (in units of the elementary charge $e$) of the test particles and the ions, respectively. The length $\lambda$ is an arbitrary constant, and the Bjerrum length $\ell_B \equiv e^2/4 \pi \epsilon k_B T$ defines the length at which two unit charges interact with thermal energy $k_BT$. Electroneutrality of course requires $\sum_{i=1}^N Q_i +\sum_{j=1}^M n_j q_j =0$.
Noting that the inverse Coulomb operator can be explicitly written as $v^{-1}({\bf r}) = - \nabla^2 \delta({\bf r}) / 4 \pi \ell_B $, after a Hubbard-Stratonovich transformation, the partition function is given by $$\label{part1b}
Z[\{R_N\}] = \int \frac{{\cal D}\phi}{Z_0}
\exp \left\{-\frac{1}{8 \pi \ell_B}
\int {\rm d}{\bf r}(\nabla \phi)^2 - i
\sum_{i=1}^N Q_i \phi({\bf R}_i) +
\sum_{j=1}^M n_j \log \left[\int \frac{{\rm d}{\bf r} }{V}
{\rm e}^{- i q_j \phi({\bf r}) } \right] +{\cal S} \right\},$$ where $Z_0$ is the partition function of the inverse Coulomb operator, $Z_0 \sim \det v$, and the entropy of ideal mixing is ${\cal S} \equiv - \sum_j n_j \ln (\lambda^3 c_j)$ with $c_j \equiv n_j/V$ denoting the concentration of ion species $j$. Performing a cumulant expansion of (\[part1b\]) in powers of $\phi$, we can rewrite the partition function as $$\label{part2}
Z[\{R_N\}] = \int \frac{{\cal D}\phi}{Z_0} \exp \left\{-\frac{1}{2 }
\int {\rm d}{\bf r}{\rm d}{\bf r}'
\phi({\bf r}) v^{-1}_{\rm DH}({\bf r}-{\bf r}') \phi({\bf r}')
- i \sum_{i=1}^N Q_i \phi({\bf R}_i) +
W[\phi] +{\cal S} \right\},$$ where $v_{\rm DH}$ is determined via the inverse operator equation (the so-called Dyson equation in field theory) $$\label{defDH}
v^{-1}_{\rm DH}({\bf r}) \equiv v^{-1} ({\bf r}) + I_2 \delta ({\bf r})$$ which is solved by the well-known DH interaction $ v_{\rm DH}({\bf r}) = \ell_B e^{-r \kappa}/r $ with the screening length $\kappa^{-1}$ defined by $\kappa^2 \equiv 4 \pi \ell_B I_2$. All anharmonic terms are contained in the non-local potential $W$, which is up to eighth order given by $$\begin{aligned}
\label{W}
W[\phi] &=& \frac{i I_3 V}{3!} \overline{\phi^3} +
\frac{I_4 V}{4!} \left(\overline{\phi^4}-3\overline{\phi^2}^2 \right)
-\frac{i I_5 V}{5!} \left(\overline{\phi^5}-10 \overline{\phi^2}
\; \overline{\phi^3} \right) -
\frac{I_6 V}{6!} \left(\overline{\phi^6}-15 \overline{\phi^4} \;
\overline{\phi^2}-10 \overline{\phi^3}^2+30\overline{\phi^2}^3\right)
\nonumber \\ && +
\frac{I_8 V}{8!} \left(\overline{\phi^8}-28 \overline{\phi^6}\;
\overline{\phi^2}-56 \overline{\phi^5}\;\overline{\phi^3}-35
\overline{\phi^4}^2+420\overline{\phi^4}\; \overline{\phi^2}^2+
560\overline{\phi^2}\; \overline{\phi^3}^2-630
\overline{\phi^2}^4\right).\end{aligned}$$ We have introduced the generalized ionic strength $I_n$, which is defined as $ I_n \equiv \sum_{j=1} q_j^n c_j $ and can take both positive and negative values. In these equations $\overline{\phi^n}$ denotes moments of the field, $
\overline{\phi^n } \equiv \int {\rm d}{\bf r} \phi^n({\bf r})/V$. The action is invariant with respect to a change of the gauge field $\overline{\phi}$. We therefore set $\overline{\phi}=0$. The linear term in $\phi$ in Eq.(\[part2\]) can be removed by a shift of the fluctuating field $\phi$, and the partition function then takes the form $$\label{part3}
Z[\{R_N\}] = \exp \left\{ {\cal S}- \frac{1}{2 } \sum_{i,j} Q_i Q_j
v_{\rm DH}({\bf R}_i-{\bf R}_j) \right\}
\int \frac{{\cal D}\phi}{Z_0} \exp \left\{-\frac{1}{2 }
\int {\rm d}{\bf r}{\rm d}{\bf r}'
\phi({\bf r}) v^{-1}_{\rm DH}({\bf r}-{\bf r}') \phi({\bf r}')
+ W[\tilde{\phi} ] \right\}$$
where $\tilde{\phi}({\bf r}) \equiv \phi({\bf r})
-i \sum_{i} Q_i v_{\rm DH}({\bf r}-{\bf R}_i)$. Up to this point, our calculations are (in principle) [*exact*]{}: keeping terms of all powers in $W$ in (\[part3\]) leads to a model equivalent to the original partition function (\[part1\]). They are also [*systematic*]{}, in that keeping terms of higher and higher order of $W$ should make the resulting theory a more and more faithful representation of the underlying physical model. In fact, we will demonstrate that the lowest approximation, where the potential $W$ and thus anharmonic terms in $\phi$ are neglected altogether, is equivalent to the classical DH theory. In this case, it follows from (\[part3\]) that the dimensionless pair interaction $U_2$ between two test particles is just the DH potential, $$\label{U2}
U_2({\bf R}_1-{\bf R}_2) = Q_1 Q_2 v_{\rm DH}({\bf R}_1-{\bf R}_2)$$ and the two-point correlation function is $g_2({\bf R}_1-{\bf R}_2) \propto e^{-U_2({\bf R}_1-{\bf R}_2)} $ with a proportionality constant such that it is normalized[@comment1]. Neglecting $W$, there are no multibody interactions between test particles in (\[part3\]), and higher-order correlation functions are therefore given by products of the pair correlation function, $
g_3({\bf R}_1,{\bf R}_2,{\bf R}_3) \propto g_2({\bf R}_1-{\bf R}_2)
g_2({\bf R}_2-{\bf R}_3) g_2({\bf R}_1-{\bf R}_3)$, and so on. This is the superposition principle, known as a postulate from liquid state theory; it is exactly obeyed in DH theory. To connect to liquid state theory, we note that from Eq.(\[defDH\]) one obtains by inversion the integral equation (which in fact holds for [*any*]{} pair interaction $v$) $$v({\bf r}) = v_{\rm DH}({\bf r}) + I_2 \int {\rm d} {\bf r}'
v_{\rm DH}({\bf r}') v({\bf r}-{\bf r}'),$$ the field-theoretic version of the Ornstein-Zernicke equation. The DH interaction is the exact solution of this integral equation[@comment2]. The DH theory therefore contains correlations of all orders, and there is no need to explicitly add higher-order correlations (compare [@Fisher1]). Improvements can only come from adding non-trivial higher-body effective interactions, i.e., from violations of the superposition principle. This is the effect of the potential $W$, as we will demonstrate in the following.
Expanding $W[\phi]$ in the exponential of Eq.(\[part3\]), the first correction comes from the cubic term, $$\label{Z3}
Z[\{R_N\}] \propto \exp \left\{ {\cal S}- \frac{1}{2 } \sum_{i,j} Q_i Q_j
v_{\rm DH}({\bf R}_i-{\bf R}_j)
-{ I_3 \over 6} \sum_{i,j,k} Q_i Q_j Q_k
\Omega_3({\bf R}_i,{\bf R}_j,{\bf R}_k)
\right\}$$ with the three-point vertex given by $
\Omega_3({\bf R}_1,{\bf R}_2,{\bf R}_3) \equiv
\int{\rm d} {\bf r} \; v_{\rm DH}({\bf r}-{\bf R}_1)
v_{\rm DH}({\bf r}-{\bf R}_2) v_{\rm DH}({\bf r}-{\bf R}_3).
$
The summation over $(i,j,k)$ in (\[Z3\]) is unrestricted. By considering the case where two of the three indices are equal, one obtains a correction to the pair interaction, which reads $$\label{DU2}
\Delta U_2({\bf R}) =\frac{I_3 \ell_B^3 (Q_1^2 Q_2+Q_1 Q_2^2)}{6}
\; \Xi(R \kappa),$$ where the function $\Xi$ (which is positive) is determined by the integral $\Xi(x) \equiv \int {\rm d}{\bf r} e^{-2r}
e^{-|{\bf r}-{\bf x}|}/ r^2
|{\bf r}-{\bf x}| =
2\pi\left\{ e^{-x}\left( \ln 3-\Gamma[0,x]\right)
+e^x \Gamma[0,3x] \right\}/x. $ The asymptotic behavior of $\Xi$ is $$\Xi(x) \simeq
\left\{ \begin{array}{llll}
& -4 \pi \ln x
& {\rm for} & x \ll 1 \\
& 2\pi \ln 3 \frac{\displaystyle e^{-x}}{\displaystyle x}
& {\rm for } & x \gg 1, \\
\end{array} \right.$$ and thus shows the same asymptotic behavior as the DH repulsion for large separations. When do we expect attractive interactions between like-charged particles? In other words, when does the prefactor in Eq.(\[DU2\]) become negative? If we assume the test particles to be positively charged, the condition for attraction is that the third-order ionic strength $I_3$ is negative. Assuming a homogeneous [*solution*]{} of positive macroions with charge $Z$, concentration $c$, and counter ions of valency $z$, the third-order ionic strength is $I_3 = c Z(Z^2-z^2)$ and clearly always positive: similarly charged particles at finite concentration do not attract each other, in agreement with experiments. On the other hand, considering two [*single*]{} charged macroions, it is easy to see that $I_3$ is negative if the salt solution is asymmetric: for positive macroions, attraction is therefore possible i) if one has an electrolyte consisting of negative ions with a higher valency than the positive ions or ii) if there are more negative than positive ions in the local environment (as in experiments between two charged walls[@Crocker]). Comparing the strength of the DH repulsion Eq.(\[U2\]) and the attraction Eq.(\[DU2\]) at large separation we find the attraction to dominate for $c \ell_B^3 Z^2>9(m^2+1)/(m^3-1)^2 \pi \ln^23$ for the case of an $m:1$ electrolyte[@comment3]. Experimentally, it is well-known that asymmetric salts like calcium chloride induce the precipitation of negatively charged macroions and negatively charged polymers[@Bloomfield].
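For reference, the closed form of $\Xi$ quoted above is easy to evaluate numerically, since $\Gamma[0,x]$ is the exponential integral $E_1(x)$; the sketch below simply transcribes that expression and compares it with the two leading asymptotic forms (the sample values of $x$ are arbitrary).

```python
import numpy as np
from scipy.special import exp1   # exp1(x) = Gamma[0, x]

def xi(x):
    """Xi(x) = 2*pi*( e^{-x}*(ln 3 - Gamma[0,x]) + e^{x}*Gamma[0,3x] ) / x."""
    return 2 * np.pi * (np.exp(-x) * (np.log(3.0) - exp1(x))
                        + np.exp(x) * exp1(3 * x)) / x

# compare with the leading asymptotic forms quoted above
for x in (0.01, 10.0):
    asymptotic = -4 * np.pi * np.log(x) if x < 1 else 2 * np.pi * np.log(3.0) * np.exp(-x) / x
    print(x, xi(x), asymptotic)
```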
The phase behavior of charged colloidal mixtures follows from the free energy with all particle coordinates integrated over, $$\label{free1}
{\cal F} = -\ln Z = -{\cal S} -\ln \left[ \frac{Z_2}{Z_0} \right]
- \ln \left\langle e^{W[\phi]}\right\rangle.$$ The DH partition function is $Z_2 \sim \det v_{\rm DH} $, whereas higher-order correlations are contained in $W$. The expectation value in (\[free1\]) is evaluated with the DH propagator. We first evaluate the DH free energy, $$f_{\rm DH} \equiv
-\frac{a^3}{V} \log \left[ \frac{Z_2}{Z_0} \right] =
-\left(\frac{a}{2 \pi}\right)^3 \int {\rm d} {\bf q} \log \sqrt{
\frac{ q^2}{q^2 + \kappa^2}}$$ where the momentum integral goes over a cube of length $2 \pi/a$. Since the integrand is isotropic we distort the integration volume to a sphere and obtain after a straightforward integration $$f_{\rm DH} = -
\frac{a^3 \kappa^3}{6 \pi^2 } \arctan\left[\frac{\pi }{a \kappa}\right] +
\frac{a^2 \kappa^2}{6 \pi} +
\frac{ \pi}{12} \log\left [1+\frac{a^2 \kappa^2}{\pi^2}\right].$$ In the limit $a \rightarrow 0$ one obtains the well-known result $f_{\rm DH} \simeq -a^3\kappa^3 /12 \pi$ (plus corrections which scale linearly in $\kappa^2$ and thus correspond to an unimportant shift in the chemical potential). Our cut-off-dependent corrections take the finite ion sizes into account in an approximate fashion. The free energy contribution $\Delta f \equiv -\frac{a^3}{V} \ln \left\langle e^{W[\phi]}\right\rangle$ is, using Eq.(\[W\]), given by $$\Delta f
= \frac{a^3 I_3^2}{12}\left[ \chi_3 + \frac{3}{2} \langle \phi^2 \rangle^2
\chi_1\right] -\frac{a^3 I_4^2}{48} \chi_4 +
\frac{ a^3 I_5 I_3}{32} \langle \phi^2 \rangle^3 \chi_1.$$ The expectation value $\langle \phi^2 \rangle$ is $$\langle \phi^2 \rangle = \frac{\ell_B}{2 \pi^2} \int
\frac{{\rm d}{\bf q}}{q^2+\kappa^2}=\frac{2 \ell_B}{a}\left[
1-\frac{a \kappa}{\pi} \arctan\left(\frac{\pi}{a \kappa}\right)\right]$$ and the generalized susceptibility $\chi_n$ is defined as $$\chi_n \equiv \int {\rm d}{\bf r} \langle \phi_0 \phi_r \rangle^n =
4 \pi \ell_B^n (n \kappa)^{n-3} \Gamma[3-n, an \kappa].$$ Naive scaling predicts the generalized susceptibilities in $\Delta f$ to scale like $\chi_n \sim c^{(n-3)/2}$ as a function of the ion density $c$. Since the dominant terms in $\Delta f$ scale as $I_n^2 \chi_n$, one would thus obtain a systematic free-energy expansion in half-integer powers of the density, starting with the DH term $f_{\rm DH}$, which asymptotically scales as $c^{3/2}$[@McQuarrie]. In practice, the integrals in $\chi_n$ diverge in the ultraviolet and thus depend on the ion radius $a$ in a crucial way, which leads to changes from the naive scaling picture. We now present results for the simplified case of colloids of charge $Z$ and concentration $c$ with counterions of valency $z$. Introducing the energy scale $\epsilon \equiv zZ \ell_B/a$ and the total ion volume fraction $\tilde{c} \equiv a^3 c(1+Z/z)$ the DH theory (which amounts to neglecting the term $\Delta f$) predicts a critical point at $\epsilon \simeq 5.63$ and $\tilde{c} \simeq
0.0418$, independent of the colloid charge $Z$. Including higher-order terms contained in $\Delta f$ the critical interaction strength $ \epsilon$ and volume fraction $\tilde{c}$ depend on $Z/z$, as shown in Fig. 1. The limiting values for $Z/z=1$ are $\epsilon \simeq 5.61$ and $\tilde{c} \simeq
0.0416$ and are thus very similar to the DH case[@comment4]. For $Z/z>1$ the deviations from DH theory are pronounced. In Fig.2 we show coexistence curves for $Z/z=1$, $2$, and $10$ (solid, broken, and dotted lines, respectively). The coexistence curve as predicted by DH theory would be indistinguishable from the solid line.
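As a small numerical check (not part of the original analysis), the one-loop free-energy density $f_{\rm DH}$ quoted above can be evaluated directly from its closed form and compared with the small-cutoff limit $-a^3\kappa^3/12\pi$; the sample values of $a\kappa$ below are arbitrary.

```python
import numpy as np

def f_dh(kappa, a):
    """Debye-Hueckel free-energy density from the closed form quoted above."""
    x = a * kappa
    return (-(x ** 3) / (6 * np.pi ** 2) * np.arctan(np.pi / x)
            + x ** 2 / (6 * np.pi)
            + np.pi / 12 * np.log(1 + x ** 2 / np.pi ** 2))

# the arctan term reproduces -(a*kappa)^3 / (12*pi) as a*kappa -> 0; the remaining
# terms scale as kappa^2 (the chemical-potential shift mentioned in the text)
for x in (0.01, 0.1):
    leading = -(x ** 3) / (6 * np.pi ** 2) * np.arctan(np.pi / x)
    print(x, f_dh(kappa=x, a=1.0), leading, -(x ** 3) / (12 * np.pi))
```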
In summary, we introduced a systematic field theory to describe charged colloidal suspensions, including fluctuations of and correlations between charged particles. Attractive colloidal interactions are predicted for asymmetric electrolyte solutions, as realized for multivalent $m:1$ salt solutions and in the neighborhood of charged walls. A critical point of demixing occurs even in the absence of attractive colloidal interactions, whose location in the temperature-density plane depends strongly on the colloidal charge. The hard-core repulsion between colloids and ions has been incorporated by a small-distance cut-off, similar to recent lattice theories[@Borukhov]. This is a rather poor description of colloidal systems, because here a large size difference between colloids and ions exists. We hope to treat size asymmetries more accurately in the future using perturbative treatments of the hard-core interactions between all particle pairs.
P.W. Debye and E. Hückel, Z. Phys. [**24**]{}, 185 (1923); see also Ref.\[2\] in [@Fisher1].
M.E. Fisher and Y. Levin, Phys. Rev. Lett. [**71**]{}, 3826 (1993).
R. van Roij and J.-P. Hansen, Phys. Rev. Lett. [**79**]{}, 3082 (1997).
J.M.H. Levelt Sengers and J.A. Given, Mol. Phys. [**80**]{}, 899 (1993).
J.P. Valleau, J. Chem. Phys. [**95**]{}, 584 (1991).
B.V.R. Tata, M. Rajalakshmi, and A.K. Arora, Phys. Rev. Lett. [**69**]{}, 3778 (1992).
G.R. Stell, K.C. Wu, and B. Larsen, Phys. Rev. Lett. [**37**]{}, 1369 (1976).
N. Grønbech-Jensen, R.J. Mashl, R.F. Bruinsma, and W.M. Gelbart, Phys. Rev. Lett. [**78**]{}, 2477 (1997); B.-Y. Ha and A.J. Liu, [*ibid.*]{} [**79**]{}, 1289 (1997); K.S. Schmitz, Langmuir [**13**]{}, 5849 (1997).
V.A. Bloomfield, Biopolymers [**31**]{}, 1471 (1991).
G.M. Kepler and S. Fraden, Phys. Rev. Lett. [**73**]{}, 356 (1994); J.C. Crocker and D.G. Grier, [*ibid.*]{} [**77**]{}, 1897 (1996); A.E. Larsen and D.G. Grier, Nature [**385**]{}, 231 (1997).
Note that the ordinary virial expansion method fails for ionic fluids, as shown in D.A. McQuarrie, [*Statistical Mechanics*]{} (Harper & Row, New York, 1976), Chap. 15.
It is easily seen that a small-distance cutoff $a$ is needed in order to make the correlation function normalizable. We interpret this cutoff as an effective hard-core radius.
A second interpretation of the DH interaction is obtained by rewriting Eq.(\[defDH\]) in matrix notation as $ v_{\rm DH} = v \left[ \delta + I_2 v \right]^{-1}
\simeq v - I_2 v^2 + I_2^2 v^3 - I_2^3 v^4 + \cdots $. The DH interaction thus is the renormalized interaction which results from all possible combinations of indirect interaction terms mediated by salt ions.
An effective attractive interaction does not necessarily mean coagulation of particles, since the centrifugal potential (which is logarithmic) has to be overcome.
MC estimates of critical parameters for $Z/z=1$ range from $\tilde{c} \simeq 0.07$, $\epsilon \simeq 14.3$[@Valleau] to $\tilde{c} \simeq 0.03$, $\epsilon \simeq 17.5$[@Fisher1]; we attribute the discrepancy with our results, especially in $\epsilon$, to our approximate way of including the hard-core repulsion. The improved DH theory yields $\tilde{c} \simeq 0.028$ and $\epsilon \simeq 17.5$ [@Fisher1].
I. Borukhov, D. Andelman, and H. Orland, Phys. Rev. Lett. [**79**]{}, 435 (1997).
|
---
author:
- Yun Soo Myung
- and Taeyoon Moon
title: 'Cosmological singleton gravity theory and dS/LCFT correspondence'
---
Introduction
============
The singleton theory is quite interesting because it provides two coupled scalar equations which combine to yield a degenerate fourth-order equation, the same equation as for the degenerate Pais-Uhlenbeck oscillator [@Pais:1950za]. The Dirac quantization of the Pais-Uhlenbeck oscillator was carried out in [@Mannheim:2004qz; @Smilga:2008pr]. In the anti-de Sitter (AdS) literature, this theory describes a dipole pair field (singleton) of the AdS group [@Flato:1986uh]. Later on, this theory was used widely to derive the AdS/logarithmic conformal field theory (LCFT) correspondence [@Ghezelbash:1998rj; @Kogan:1999bn; @Myung:1999nd; @Grumiller:2013at] and the de Sitter (dS)/LCFT correspondence [@Kehagias:2012pd]. In other words, the singleton action on the AdS/dS background is a bulk action from which one derives the LCFT [@Gurarie:1993xq; @Flohr:2001zs] on its boundary. Explicitly, a dipole pair ($\varphi_1,\varphi_2$) on AdS/dS space is dual to the rank-2 LCFT with two operators ($\sigma_1,\sigma_2$).
On the other hand, the detection of primordial gravitational waves by BICEP2 [@Ade:2014xna] has indicated that the cosmic inflation occurred at a high scale of $10^{16}$ GeV. A single scalar field (inflaton) is still known to be a promising model for describing the slow-roll (dS-like) inflation [@Baumann:2008aq; @Baumann:2009ds]. An important issue to be resolved is that the tensor-to-scalar ratio is given by $r=0.2^{+0.07}_{-0.05}$ (considering the dust reduction, it reduces to $r=0.16^{+0.06}_{-0.05}$), which is outside of the $95\%$ confidence level of the Planck measurement [@Ade:2013uln]. Accordingly, many works have provided plausible ways to reduce the tension between the BICEP2 and Planck measurements [@Hertzberg:2014aha; @Choudhury:2014kma; @Gong:2014cqa; @Kim:2014dba; @Anchordoqui:2014bga; @Bhattacharya:2014gva; @Li:2014kla]. Also, it is meaningful to mention recent claims that the entire signal may be due to polarized dust emission [@Mortonson:2014bja; @Flauger:2014qra; @Adam:2014oea].
The dS/CFT correspondence has predicted the form of the three-point correlator of the operator which is dual to the inflaton perturbation generated during slow-roll inflation [@Maldacena:2002vr]. This dual correlator was related closely to the three-point correlator of the curvature perturbation generated during slow-roll inflation. Importantly, this correspondence has provided the first derivation of the non-Gaussianity from the single field inflation.
Hence, it is quite interesting to compute the power spectrum of the singleton (rather than the inflaton) generated during dS inflation because its equation is a degenerate fourth-order equation. In order to compute the power spectrum, one needs to choose the Bunch-Davies vacuum in the subhorizon limit of $z\to \infty$. Therefore, one has to quantize the singleton canonically as we do for the inflaton. Also, it is important to see whether the dS/LCFT correspondence plays a crucial role in computing the power spectrum in the superhorizon limit of $z\to
0$ [@Kehagias:2012pd]. As far as we know, there is no direct evidence for the dS/LCFT correspondence. We will show that the momentum LCFT-correlators $\langle \sigma_a(k)\sigma_b(-k)\rangle$ obtained from the extrapolation approach take the same form as the power spectra ${\cal P}_{{ab},0}(k,-1)\times k^{-3}$. This shows that the dS/LCFT correspondence works well for obtaining the power spectra in the superhorizon limit.
Singleton gravity theory
=========================
Let us first consider the singleton gravity theory where a dipole pair $\phi_1$ and $\phi_2$ are coupled minimally to Einstein gravity. The action is given by $$\label{SGA}
S_{\rm SG}=S_{\rm E}+S_{\rm S}=\int d^4x
\sqrt{-g}\Big[\Big(\frac{R}{2\kappa}-\Lambda\Big)-\Big(\partial_\mu\phi_1\partial^\mu\phi_2+m^2\phi_1\phi_2+\frac{\mu^2}{2}\phi_1^2
\Big)\Big],$$ where the first two terms are introduced to provide the de Sitter background with $\Lambda>0$ and the last three terms ($S_{\rm S}$) represent the singleton theory composed of two scalars $\phi_1$ and $\phi_2$ [@Ghezelbash:1998rj; @Kogan:1999bn; @Myung:1999nd]. Here $\kappa=8\pi G=1/M^2_{\rm P}$, with $M_{\rm P}$ the reduced Planck mass, and $m^2$ is the degenerate mass squared of the singleton. We stress that $S_{\rm SG}$ denotes the action for the singleton gravity theory, whereas $S_{\rm S}$ is the action for the singleton theory itself.
The Einstein equation takes the form $$\label{ein-eq}
G_{\mu\nu} +\kappa \Lambda g_{\mu\nu}=\kappa T_{\mu\nu}$$ with the energy-momentum tensor $$T_{\mu\nu}=2\partial_{\mu}\phi_1\partial_\nu
\phi_2-g_{\mu\nu}\Big(\partial_\rho\phi_1\partial^\rho\phi_2+m^2\phi_1\phi_2+\frac{\mu^2}{2}\phi_1^2\Big).$$ On the other hand, the two scalar field equations are coupled as $$\label{b-eq1}
(\nabla^2-m^2)\phi_1=0,~~(\nabla^2-m^2)\phi_2=\mu^2\phi_1$$ which are combined to give a degenerate fourth-order equation $$\label{b-eq2}
(\nabla^2-m^2)^2\phi_2=0.$$ This reveals the nature of the singleton theory as $S_{\rm S}$ takes the following form upon using (\[b-eq1\]) to eliminate the auxiliary field $\phi_1$ [@Rivelles:2003jd; @Kim:2013waf]: $$S_{\rm S}=\frac{1}{2\mu^2} \int
d^4x\sqrt{-g}(\nabla^2-m^2)\phi_2(\nabla^2-m^2)\phi_2.$$ The dS spacetime solution comes out when one chooses vanishing scalars $$\bar{R}=4\kappa \Lambda,~~\bar{\phi}_1=\bar{\phi}_2=0.$$ Explicitly, the curvature quantities are given by $$\bar{R}_{\mu\nu\rho\sigma}=H^2(\bar{g}_{\mu\rho}\bar{g}_{\nu\sigma}-\bar{g}_{\mu\sigma}\bar{g}_{\nu\rho}),~~\bar{R}_{\mu\nu}=3H^2\bar{g}_{\mu\nu}$$ with a constant Hubble parameter $H^2=\kappa \Lambda/3$. We specify the dS background explicitly by introducing a conformal time $\eta$ $$\begin{aligned}
\label{frw}
ds^2_{\rm dS}=\bar{g}_{\mu\nu}dx^\mu
dx^\nu=a(\eta)^2\Big[-d\eta^2+\delta_{ij}dx^idx^j\Big],\end{aligned}$$ where the conformal scale factor is $$\begin{aligned}
a(\eta)=-\frac{1}{H\eta}\to a(t)=e^{Ht}.\end{aligned}$$ Here the latter denotes the scale factor with respect to cosmic time $t$. During the dS stage, $a$ goes from a small to a very large value, like $a_f/a_i\simeq 10^{30}$, which implies that the conformal time $\eta=-1/(aH)$ (with $z=-k\eta$) runs from $-\infty$ ($z=\infty$), the infinite past, to $0^-$ ($z=0$), the infinite future. The two boundaries (${\rm
\partial dS}_{\infty/0}$) of dS space are located at $\eta=-\infty$ together with a point $\eta=0^-$ which make the boundary compact [@Maldacena:2002vr]. It is worth noting that the Bunch-Davies vacuum will be chosen at $\eta=-\infty$, while the dual (L)CFT can be thought of as living on a spatial slice at $\eta=0^-$.
We choose the Newtonian gauge of $B=E=0 $ and $\bar{E}_i=0$ for cosmological perturbation around the dS background (\[frw\]). In this case, the cosmologically perturbed metric can be simplified to be $$\begin{aligned}
\label{so3-met}
ds^2=a(\eta)^2\Big[-(1+2\Psi)d\eta^2+2\Psi_i d\eta
dx^{i}+\Big\{(1+2\Phi)\delta_{ij}+h_{ij}\Big\}dx^idx^j\Big]\end{aligned}$$ with transverse-traceless tensor $\partial_ih^{ij}=h=0$. Also, one has the scalar perturbations $$\phi_1=
\bar{\phi}_1+\varphi_1,~~\phi_2= \bar{\phi}_2+\varphi_2.$$ In order to get the cosmologically perturbed equations, one linearizes the Einstein equation (\[ein-eq\]) directly around the dS background $$\begin{aligned}
\delta R_{\mu\nu}(h)-3H^2h_{\mu\nu}=0 \to
\bar{\nabla}^2h_{ij}=0.\label{heq}\end{aligned}$$ We would like to mention briefly the two metric scalars $\Psi$ and $\Phi$, and the vector $\Psi_i$. The linearized Einstein equation requires $\Psi=-\Phi$, which was used to define the comoving curvature perturbation in the slow-roll inflation, and thus they are not physically propagating modes. In the dS inflation, there is no coupling between $\{\Psi,\Phi\}$ and $\{\varphi_1,\varphi_2\}$ because of $\bar{\phi}_1=\bar{\phi}_2=0$. The vector is also a non-propagating mode in the singleton gravity theory because it has no kinetic term. The linearized scalar equations are given by $$\begin{aligned}
&&(\bar{\nabla}^2-m^2)\varphi_1=0,\nonumber\\
&&\label{sing-eq1}(\bar{\nabla}^2-m^2)\varphi_2=\mu^2\varphi_1.\end{aligned}$$ These are combined to provide a degenerate fourth-order scalar equation $$\label{sing-eq2}
(\bar{\nabla}^2-m^2)^2\varphi_2=0,$$ which is our main equation to be solved for cosmological purpose.
dS/LCFT correspondence in the superhorizon
==========================================
First of all, we briefly review the similarities and differences between the AdS/CFT and dS/CFT dictionaries. The first version of the AdS/CFT dictionary was stated in terms of an equivalence between bulk and boundary partition functions in the presence of deformations: $$Z_{\rm bulk}[\phi_0,{\cal M}]=Z_{\rm CFT}[\phi_0,{\cal O},\partial
{\cal M}],$$ where on the bulk side $\phi_0$ specifies the boundary conditions of the bulk field $\phi$ propagating on ${\cal M}$, whereas on the boundary CFT $\phi_0$ denotes the sources of operators ${\cal O}$ on the boundary $\partial {\cal M}$. Correlators of the dual CFT can be computed by differentiating the partition function with respect to the sources and then setting them to zero: $$\label{diff-c}
\langle {\cal O}({\bf x}){\cal O}({\bf y})\rangle_{\rm
d}=\frac{\delta^2 Z_{\rm CFT}}{\delta\phi_0({\bf
x})\delta\phi_0({\bf
y})}\Big|_{\phi_0=0}.$$ This is called the “differentiate" (GKPW) dictionary [@Banks:1998dd]. The second version consists of computing bulk-to-boundary propagators first and pulling CFT correlators to the boundary as $$\label{extra-c}
\langle {\cal O}({\bf x}){\cal O}({\bf y})\rangle_{\rm
e}=\lim_{z\to
0}z^{-2\Delta}\langle \phi({\bf x},z)\phi({\bf y},z)\rangle.$$ This version was used in [@Susskind:1998dq] and was referred to as the “extrapolate" (BDHM) dictionary [@Polchinski:1999ry].
Concerning correlation functions of a free massive scalar in AdS and dS, the following three statements are important [@Harlow:2011ke]:\
(a) In Euclidean AdS$_{d+1}$ with $\ell^2_{\rm AdS}=1$, either the differentiation of the partition function with respect to sources or extrapolation of the bulk operators to the boundary produce CFT correlators of an operator with dimension $\Delta=\frac{d}{2}+\frac{\sqrt{d^2+4m^2}}{2}$.\
(b) In Lorentzian dS$_{d+1}$ with $\ell^2_{\rm dS}=1$, the extrapolated bulk correlators are a sum of two contributions. One is the leading behavior of a CFT correlator of an operator with dimension $d-\delta=\frac{d}{2}-\frac{\sqrt{d^2-4m^2}}{2}$, whereas the other comes from the leading behavior of a CFT correlator of an operator with dimension $\delta=\frac{d}{2}+\frac{\sqrt{d^2-4m^2}}{2}$.\
(c) In Lorentzian dS$_{d+1}$ with $\ell^2_{\rm dS}=1$, functional derivatives of late-time Schrödinger wave-function produce CFT correlators with dimension $\delta$ only.\
The dominant term in (b) was computed by Witten for a particular scalar [@Witten:2001kn], while a massless version of statement (c) was first made by Maldacena [@Maldacena:2002vr]. This implies that the dS/CFT “extrapolate" and “differentiate" dictionaries are inequivalent to each other. In particular, the dimension of the CFT operators associated with a massive scalar is different: $\triangle_+(=\delta)=\frac{3}{2}+\sqrt{\frac{9}{4}-\frac{m^2}{H^2}}$ for the “differentiate" dictionary and both $\triangle_{\pm}=\frac{3}{2}\pm
\sqrt{\frac{9}{4}-\frac{m^2}{H^2}}(\triangle_-=w)$ for the “extrapolate" dictionary in four-dimensional dS space. Accordingly, following (c), the cosmological correlator of a massive scalar in momentum space is inversely proportional to the CFT correlator with dimension $\Delta_+$: $$\langle\phi(k)\phi(-k)\rangle \propto \frac{1}{2{\rm Re}\langle {\cal O}(k){\cal
O}(-k)\rangle}_{\rm d}\propto \frac{1}{k^{-3+2\triangle_+}}=k^{2w-3},$$ which leads to the power spectrum for a massive scalar in the superhorizon limit. If one employs (c) to derive the dS/LCFT correspondence, the approach (c) may break down for deriving LCFT correlators because all LCFT correlators in AdS$_{d+1}$ were derived based on the extrapolation approach (b) [@Ghezelbash:1998rj; @Kogan:1999bn; @Myung:1999nd; @Grumiller:2013at]. Hence, we wish to use the extrapolation approach (b) to derive the LCFT correlators from the bulk correlators. In this case, the cosmological correlator is directly proportional to the CFT correlator with different dimension $\triangle_-$ $$\label{dir-rel}
\langle\phi(k)\phi(-k)\rangle \propto
\langle\sigma(k)\sigma(-k)\rangle_{\rm e} \propto k^{2w-3}$$ as was shown in (\[extra-c\]).
To develop the dS/LCFT correspondence [@Kehagias:2012pd], we first solve Eqs.(\[sing-eq1\]) and (\[sing-eq2\]) for the singleton gravity theory in the superhorizon limit of $\eta\to 0^-$. Their solutions are given by $$\varphi_{1,0} \sim \eta^w,~~\varphi_{2,0} \sim \eta^w\ln[-\eta]$$ with $$w=\frac{3}{2}\Bigg(1-\sqrt{1-\frac{4m^2}{9H^2}}\Bigg).$$ The scaling of $\varphi_{a,0}$ with $a=1,2$ is not conventional as they transform under $$\label{trans}
\varphi_{1,0} \to \lambda^w\varphi_{1,0},~~\varphi_{2,0} \to
\lambda^w\Big[\varphi_{2,0}+\ln (\lambda)\varphi_{1,0}\Big].$$ A pair of dipole fields $(\varphi_1,\varphi_2)$ is coupled to $(\sigma_1,\sigma_2)$-operators on the boundary (${\rm \partial dS}$) of $\eta\to 0^-$. The explicit connection between $\varphi_{a,0}$ and $\sigma_a$ is encoded by [@Seery:2006tq] $$\begin{aligned}
\label{ds-lcft}
&&Z_{S}[{\varphi_{a,0}}]=Z_{\rm LCFT}[{\varphi_{a,0}}],\\
\label{ds-lcft1}&&Z_{S}[{\varphi_{a,0}}]=e^{-\delta S_{\rm
S}[\{\varphi_{a,0}\}]},\\
\label{ds-lcft2}&&Z_{\rm LCFT}[\varphi_{a,0}]= \langle
e^{-\int_{{\rm
\partial dS}_0} d^3x\varphi_{a,0}({\bf x})\sigma_a({\bf
x})}\rangle,\end{aligned}$$ where the expectation value $\langle \cdots \rangle$ is taken in the LCFT with the boundary fields $\varphi_{a,0}$ as sources. Eq.(\[ds-lcft\]) is a statement of the dS/LCFT correspondence. Here the bulk action is given by $$\delta S_{\rm S}[\{\varphi_a\}]=-\int_{\rm dS}
d^4x\sqrt{-\bar{g}}\Big[\partial_\mu\varphi_1\partial^\mu\varphi_2+m^2\varphi_1\varphi_2+\frac{\mu^2}{2}\varphi_1^2\Big].$$ The bulk transformation (\[trans\]) indicates that the two operators $\sigma_a$ of conformal dimension $w$ transform under dilations as $$i[D,\sigma_a]=\Big(x^i\partial_i \delta^b_a+\Delta^b_a\Big)\sigma_b,$$ where the dimension matrix $\Delta^b_a$ is brought to the Jordan cell form $$\label{b-jcell}
\Delta^b_a=
\left(
\begin{array}{cc}
w & 0 \\
1 & w \\
\end{array}
\right).$$ This implies that $\sigma_a$ transform under dilations of ${\bf x}
\to \lambda {\bf x}$ as $$\sigma_a({\bf x}) \to \sigma'_a(\lambda {\bf x})=\Big(e^{\Delta \ln
\lambda}\Big)^b_a\sigma_b(\lambda{\bf x}).$$
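As a short check (ours, for completeness), the matrix exponential in the last relation can be evaluated explicitly: writing $\Delta=w\,\mathbb{1}+N$, where the nilpotent part $N$ (with $N^2=0$) carries the off-diagonal unit entry of (\[b-jcell\]), one finds $$e^{\Delta \ln
\lambda}=\lambda^{w}\left(
\begin{array}{cc}
1 & 0 \\
\ln \lambda & 1 \\
\end{array}
\right),$$ so that $\sigma_1\to\lambda^{w}\sigma_1$ and $\sigma_2\to\lambda^{w}\big[\sigma_2+\ln(\lambda)\sigma_1\big]$, mirroring the bulk transformation (\[trans\]).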
In order to find the LCFT correlators $\langle \sigma_a({\bf
x})\sigma_b({\bf y})\rangle$, one might use the Ward identities for scale and special conformal transformations [@Kehagias:2012pd]. In this work, we wish to rederive them by using the extrapolation approach (b) (see the Appendix for detailed computations). The two-point functions of $\sigma_1$ and $\sigma_2$ are determined by $$\begin{aligned}
&& \label{cc0} _{\rm C}\langle \sigma_1({\bf x})\sigma_1({\bf
y})\rangle_{\rm C}=0,\\
\label{cc1} && _{\rm C}\langle \sigma_1({\bf x})\sigma_2({\bf
y})\rangle_{\rm C}=~_{\rm C}\langle \sigma_2({\bf x})\sigma_1({\bf
y})\rangle_{\rm C}=\frac{A}{|{\bf x}-{\bf
y}|^{2w}}, \\
&& \label{cc2}_{\rm C}\langle \sigma_2({\bf x})\sigma_2({\bf
y})\rangle_{\rm C}=\frac{A}{|{\bf x}-{\bf y}|^{2w}}\Big(-2\ln|{\bf
x}-{\bf y}|+D\Big).\end{aligned}$$ Here $w$ is a degenerate dimension of $\sigma_1$ and $\sigma_2$. The coefficient $A=w(2w-3)$ is determined by the normalization of $\sigma_1$ and $\sigma_2$. However, $D$ is arbitrary. The CFT vacuum $|0\rangle_{\rm C}$ is defined by three Virasoro operators $L_n|0\rangle_{\rm C}=0$ for $n=0,\pm1$. The highest-weight state $|\sigma_a\rangle_{\rm C}=\sigma_a(0)|0\rangle_{\rm C}$ for two primary fields $\sigma_a$ of conformal weight $h=w/2$ is defined by $$\label{cft-jcell}
L_0|\sigma_1\rangle_{\rm C}=h|\sigma_1\rangle_{\rm C},~~
L_0|\sigma_2\rangle_{\rm C}=|\sigma_1\rangle_{\rm C}+
h|\sigma_2\rangle_{\rm C},~~L_n|\sigma_a\rangle_{\rm C}=0 ~{\rm
for}~n>0.$$ This implies that for any pair of degenerate operators $\sigma_1$ and $\sigma_2$ (a logarithmic pair), the Hamiltonian ($L_0$) becomes non-diagonalizable, which marks a crucial difference from an ordinary CFT. Actually, Eq.(\[cft-jcell\]) represents the CFT version of the bulk transformation (\[trans\]). Eqs.(\[cc0\])-(\[cc2\]) are summarized as $$\label{lcft-mat}
_{\rm C}\langle \sigma_a({\bf x})\sigma_b({\bf y})\rangle_{\rm C}
=\left(
\begin{array}{cc}
0 & {\rm CFT} \\
{\rm CFT} & {\rm LCFT} \\
\end{array}
\right),$$ where CFT and LCFT represent their correlators in (\[cc1\]) and (\[cc2\]), respectively.
In order to derive the relevant correlators in momentum space, one has to use the relation $$\frac{1}{|{\bf x}-{\bf y}|^{2w}}=\frac{\Gamma(\frac{3}{2}-w)}{4^w
\pi^{3/2}\Gamma(w)}\int d^3{\bf k}|{\bf k}|^{2w-3}e^{i{\bf k}\cdot
({\bf x}-{\bf y})},$$ where we observe an inverse-relation of exponent $2w$ between $|{\bf x}|$-space and $k=|{\bf k}|$-space. Finally, the correlators in momentum space are easily evaluated as [@Kehagias:2012pd] $$\begin{aligned}
\label{m0} \langle \sigma_1({\bf k}_1)\sigma_1({\bf
k}_2)\rangle'&=&0,\\
\label{m1} \langle \sigma_1({\bf k}_1)\sigma_2({\bf
k}_2)\rangle'&=&\frac{A_0(w)}{k_1^{3-2w}}, \\
\langle \sigma_2({\bf k}_1)\sigma_2({\bf k}_2)\rangle'&=&D\langle
\sigma_1({\bf k}_1)\sigma_2({\bf
k}_2)\rangle'+\frac{\partial}{\partial w}\langle \sigma_1({\bf
k}_1)\sigma_2({\bf k}_2)\rangle' \nonumber \\
\label{m2}&=&\frac{A_0(w)}{k_1^{3-2w}}\Bigg(2\ln[k_1]+D+\frac{A_{0,w}}{A_0(w)}\Bigg),\end{aligned}$$ where the prime ($'$) denotes correlators without the factor $(2\pi)^3\delta^3(\Sigma_i{\bf k}_i)$ and $A_{0,w}=4w-3$ denotes the derivative of $A_0(w)=w(2w-3)$ with respect to $w$. These correlators will be compared to the power spectra in the superhorizon limit of $z\to 0$.
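For completeness, the logarithm in (\[m2\]) simply reflects the $w$-derivative of the mixed correlator (\[m1\]), since $$\frac{\partial}{\partial w}\Big[\frac{A_0(w)}{k_1^{3-2w}}\Big]
=\frac{A_{0,w}}{k_1^{3-2w}}+\frac{2\ln[k_1]\,A_0(w)}{k_1^{3-2w}}\ ,$$ which is how the combination $2\ln[k_1]+A_{0,w}/A_0(w)$ arises.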
Singleton propagation in dS spacetime
=====================================
In order to compute the power spectrum, we have to know the solution to singleton equations Eqs.(\[sing-eq1\]) and (\[sing-eq2\]) in the whole range of $\eta(z)$. For this purpose, the scalars $\varphi_{i}$ can be expanded in Fourier modes $\phi^{i}_{\bf
k}(\eta)$ $$\begin{aligned}
\label{scafou}
\varphi_{i}(\eta,{\bf x})=\frac{1}{(2\pi)^{\frac{3}{2}}}\int
d^3k~\phi^{i}_{\bf k}(\eta)e^{i{\bf k}\cdot{\bf x}}.\end{aligned}$$ The first equation of (\[sing-eq1\]) leads to $$\begin{aligned}
\label{scalar-eq2}
\Bigg[\frac{d^2}{d \eta^2}-\frac{2}{\eta}\frac{d}{d
\eta}+k^2+\frac{m^2}{H^2}\frac{1}{\eta^2}\Bigg]\phi^1_{\bf
k}(\eta)=0,\end{aligned}$$ which can be further transformed into $$\begin{aligned}
\label{scalar-eq3}
\Bigg[\frac{d^2}{d\eta^2}+k^2-\frac{2}{\eta^2}+\frac{m^2}{H^2}\frac{1}{\eta^2}\Bigg]\tilde{\phi}^1_{\bf
k}(\eta)=0\end{aligned}$$ for $\tilde{\phi}^1_{\bf k}=a\phi^1_{\bf k}=-\phi^1_{\bf
k}/(H\eta)=\frac{k}{Hz}\phi^1_{\bf k}$. Expressing (\[scalar-eq3\]) in terms of $z=-k\eta$ leads to $$\begin{aligned}
\label{scalars-eq4}
\Bigg[\frac{d^2}{dz^2}+1-\Big(2-\frac{m^2}{H^2}\Big)\frac{1}{z^2}\Bigg]\tilde{\phi}^1_{\bf
k}(z)=0.\end{aligned}$$ Introducing $\tilde{\phi}^1_{\bf
k}=\sqrt{z}\tilde{\tilde{\phi}}^1_{\bf k}$ further, one arrives at Bessel’s equation $$\begin{aligned}
\label{scalar-eq4}
\Bigg[\frac{d^2}{dz^2}+\frac{1}{z}\frac{d}{dz}+1-\frac{\nu^2}{z^2}\Bigg]\tilde{\tilde{\phi}}^1_{\bf
k}(z)=0\end{aligned}$$ with the index $$\nu=\sqrt{\frac{9}{4}-\frac{m^2}{H^2}}.$$ The solution to (\[scalar-eq4\]) is given by the Hankel function $H^{(1)}_\nu$. Accordingly, one has the solution to (\[scalar-eq2\]) $$\label{scalar-eq5}
\phi^1_{\bf k}(z)={\cal
C}\frac{\sqrt{z}}{a}\tilde{\tilde{\phi}}^1_{\bf k}={\cal
C}\frac{H}{k}z^{3/2}H^{(1)}_{\nu}(z)$$ with ${\cal C}$ an undetermined constant. In the subhorizon limit of $z\to \infty$, Eq.(\[scalar-eq2\]) reduces to $$\label{scalar-eq6}
\Big[\frac{d^2}{dz^2}-\frac{2}{z}\frac{d}{dz}+1\Big]\phi^{1}_{{\bf
k},\infty}(z)=0,$$ which leads to the positive-frequency solution with the normalization $1/\sqrt{2k}$ $$\label{scalar-eq7}
\phi^{1}_{{\bf k},\infty}(z)=\frac{H}{\sqrt{2k^3}}(i+z)e^{iz}.$$ This is a typical mode solution of a massless scalar propagating on dS spacetime. Inspired by (\[scalar-eq7\]) and the asymptotic form of $H^{(1)}_\nu$, $\phi^1_{\bf k}(z)$ is fixed by $$\label{scalar-eq10}
\phi^1_{\bf k}(z)=\frac{H}{\sqrt{2k^3}}
\sqrt{\frac{\pi}{2}}e^{i(\frac{\pi\nu}{2}+\frac{\pi}{4})}z^{3/2}H^{(1)}_{\nu}(z).$$
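As a consistency check (ours, not part of the original derivation; the values of $H$ and $k$ below are arbitrary), one can verify numerically that (\[scalar-eq10\]) reduces to the massless solution (\[scalar-eq7\]) for $\nu=3/2$:

```python
# Illustrative check only: for nu = 3/2 (m = 0), the mode function of
# Eq. (scalar-eq10) should coincide with (H/sqrt(2 k^3)) (i + z) e^{iz}.
import numpy as np
from scipy.special import hankel1

H, k, nu = 1.0, 1.0, 1.5      # arbitrary overall scales

def phi1(z, nu):
    pref = (H / np.sqrt(2 * k**3)) * np.sqrt(np.pi / 2) \
           * np.exp(1j * (np.pi * nu / 2 + np.pi / 4))
    return pref * z**1.5 * hankel1(nu, z)

def phi1_massless(z):
    return (H / np.sqrt(2 * k**3)) * (1j + z) * np.exp(1j * z)

z = np.linspace(0.1, 20.0, 200)
assert np.allclose(phi1(z, nu), phi1_massless(z), rtol=1e-8)
```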
In the superhorizon limit of $z\to0$, Eq.(\[scalar-eq2\]) takes the form $$\label{scalar-eq11}
\Bigg[\frac{d^2}{dz^2}-\frac{2}{z}\frac{d}{dz}+\frac{m^2}{H^2}\frac{1}{z^2}\Bigg]\phi^1_{{\bf
k},0}(z)=0,$$ whose solution is $$\phi^1_{{\bf k},0}(z)=\frac{H}{\sqrt{2k^3}}z^{w}$$ with $$w=\frac{3}{2}-\nu.$$
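For clarity, the exponent follows from the indicial equation of (\[scalar-eq11\]): substituting $\phi^1_{{\bf k},0}\propto z^{w}$ gives $$w(w-1)-2w+\frac{m^2}{H^2}=0\quad\Longrightarrow\quad w_{\pm}=\frac{3}{2}\pm\nu\ ,$$ and the branch $w=w_-=\frac{3}{2}-\nu$ is the one selected by the small-$z$ behavior $H^{(1)}_{\nu}(z)\sim z^{-\nu}$ of the full solution (\[scalar-eq10\]).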
On the other hand, plugging (\[scafou\]) into (\[sing-eq2\]) leads to the degenerate fourth-order differential equation $$\begin{aligned}
\Bigg[\eta^2\frac{d^2}{d\eta^2}-2\eta\frac{d}{d\eta}+k^2\eta^2+\frac{m^2}{H^2}\Bigg]^2\phi^2_{\bf
k}(\eta)=0\label{s2-eq2}\end{aligned}$$ which seems difficult to solve directly. However, we may solve Eq.(\[s2-eq2\]) in the subhorizon and superhorizon limits. In the subhorizon limit of $z\to \infty$, Eq.(\[s2-eq2\]) takes the form $$\label{sub-eq1}
\Bigg[\frac{d^4}{dz^4}+2\Big(1-\frac{1}{z^2}\Big)\frac{d^2}{dz^2}+\frac{4}{z^3}\frac{d}{dz}+\Big(1-\frac{2}{z^2}\Big)\Bigg]\phi^2_{{\bf
k},\infty}=0,$$ whose direct solution is given by $$\begin{aligned}
\label{sub-sol}
\phi^{2,d}_{{\bf
k},\infty}=\Big[\tilde{c}_2(i+z)+\tilde{c}_1\Big(2i+(z-i)e^{-2iz}{\rm
Ei}(2iz)\Big)\Big]e^{iz}\end{aligned}$$ with two coefficients $\tilde{c}_1$ and $\tilde{c}_2$. The c.c. of $\phi^{2,d}_{{\bf k},\infty}$ is a solution to (\[sub-eq1\]) too. Here ${\rm Ei}(2iz)$ is the exponential integral function defined by [@AS] $$\begin{aligned}
{\rm Ei}(2iz)={\rm Ci}(2z)+i{\rm Si}(2z)+i\frac{\pi}{2},\end{aligned}$$ where the cosine-integral and sine-integral functions are given by $$\begin{aligned}
{\rm Ci}(2z)=\int^{2z}_{0}\frac{{\rm cos} t}{t}dt,~~{\rm
Si}(2z)=\int^{2z}_{0}\frac{{\rm sin} t}{t}dt.\end{aligned}$$ We note that ${\rm Ei}(2iz)$ satisfies the fourth-order equation $$\begin{aligned}
(z-i)z^3\frac{d^4{\rm Ei}}{dz^4}&-&4iz^4 \frac{d^3{\rm Ei}}{dz^3}+2z(i-z-4iz^2-2z^3)\frac{d^2{\rm Ei}}{dz^2}\nonumber
\\
&-&4(i-z-iz^2+2z^3)\frac{d{\rm Ei}}{dz}-8e^{2iz}=0.\end{aligned}$$ However, we wish to point out that the direct solution (\[sub-sol\]) is not suitable for choosing the Bunch-Davies vacuum to give quantum fluctuations. In order to find an appropriate solution, we note that $(\bar{\nabla}^2-m^2)\varphi_2=\mu^2\varphi_1$ in (\[sing-eq1\]) reduces in the subhorizon limit to $$\label{phi2t}
\Big[\frac{d^2}{dz^2}-\frac{2}{z}\frac{d}{dz}+1\Big]\phi^{2}_{{\bf
k},\infty}(z)=0,$$ whose solution is $$\label{phi2s-sol} \phi^{2}_{{\bf
k},\infty}(z)=\tilde{c}_2(i+z)e^{iz}.$$ We note that $ \phi^{2}_{{\bf k},\infty}(z)$ is included as the first term of (\[sub-sol\]) \[as a solution to the fourth-order equation (\[sub-eq1\])\].
On the other hand, Eq.(\[s2-eq2\]) takes the form in the superhorizon limit of $z\to 0$ as $$\begin{aligned}
\Bigg[z^2\frac{d^2}{dz^2}-2z\frac{d}{dz}+\frac{m^2}{H^2}\Bigg]^2\phi^2_{{\bf
k},0}(z)=0\label{super-eq2}\end{aligned}$$ whose solution is given by $$\label{super-phi2}
\phi^2_{{\bf k},0}(z)\propto z^w\ln z.$$ This also satisfies $$\begin{aligned}
(-H^2)\Bigg[z^2\frac{d^2}{dz^2}-2z\frac{d}{dz}+\frac{m^2}{H^2}\Bigg]\phi^2_{{\bf
k},0}(z)=\mu^2\phi^1_{{\bf k},0}(z)\label{super-eq3}\end{aligned}$$ for $\mu^2=(3-2w)H^2$, which is the superhorizon limit of Eq.(\[sing-eq1\]). The presence of “$\ln z$" implies that (\[super-phi2\]) is a solution to the fourth-order equation (\[super-eq2\]).
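One can also check the quoted value of $\mu^2$ directly (a step left implicit above): acting with the second-order operator on $z^{w}\ln z$ and using the indicial relation $w^2-3w+m^2/H^2=0$, $$\Big[z^2\frac{d^2}{dz^2}-2z\frac{d}{dz}+\frac{m^2}{H^2}\Big]z^{w}\ln z=(2w-3)\,z^{w}\ ,$$ so that (\[super-eq3\]) holds with $\mu^2=(3-2w)H^2$ once $\phi^1_{{\bf k},0}$ and $\phi^2_{{\bf k},0}$ carry the same normalization.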
Finally, the trick used in [@Kogan:1999bn] implies that one may solve (\[s2-eq2\]) directly by differentiating $(\bar{\nabla}^2-m^2)\varphi_1=0$ with respect to $m^2$. The explicit steps are given by $$\begin{aligned}
\frac{d}{dm^2}&\times&\left(-z^2H^2\frac{d^2}{dz^2}+2z
H^2\frac{d}{dz}-z^2H^2-m^2\right)\phi_{\bf k}^1(z)
=0\\
&\rightarrow&\left(-z^2H^2\frac{d^2}{dz^2}+2z
H^2\frac{d}{dz}-z^2H^2-m^2\right)\frac{d}{dm^2}\phi_{\bf
k}^1(z) =\phi_{\bf k}^1(z)\\
&\leftrightarrow&\left(-z^2H^2\frac{d^2}{dz^2}+2z
H^2\frac{d}{dz}-z^2H^2-m^2\right)\phi_{\bf k}^2(z) =\mu^2\phi_{\bf
k}^1(z)\label{phi2e1}\end{aligned}$$ which provides a way to obtain $\phi_{\bf k}^2(z)$ from $\phi_{\bf k}^1(z)$ as $$\phi_{\bf k}^2(z)=\mu^2\frac{d}{dm^2}\phi_{\bf
k}^1(z)\label{phi2e}.$$ We note that (\[s2-eq2\]) can be obtained by acting $(\bar{\nabla}^2-m^2)$ on (\[phi2e1\]). Explicitly, $\frac{d}{dm^2}\phi_{\bf k}^1(z)$ is computed to be $$\begin{aligned}
\frac{d}{dm^2}\phi_{\bf k}^1(z)&=&-\frac{1}{2\nu
H\sqrt{2k^3}}\sqrt{\frac{\pi}{2}}e^{i\left(\frac{\pi\nu}{2}+\frac{\pi}{4}\right)}z^{3/2}
\Bigg\{\pi\Big(\frac{i}{2}-\cot[\nu\pi]\Big)H_{\nu}^{(1)}
+i\csc[\nu\pi]\times\nonumber\\
&&\hspace*{10em}\Big(e^{-\nu\pi
i}\frac{\partial}{\partial\nu}J_{\nu}-\frac{\partial}{\partial\nu}J_{-\nu}
-\pi i e^{-\nu\pi i}J_{\nu}\Big)\Bigg\},\label{phi1e}\end{aligned}$$ where $$\begin{aligned}
\frac{\partial}{\partial\nu}J_{\nu}(z)=J_{\nu}\ln\Big[\frac{z}{2}\Big]
-\Big(\frac{z}{2}\Big)^{\nu}\sum_{k=0}^{\infty}(-1)^{k}
\frac{\psi(\nu+k+1)}{\Gamma(\nu+k+1)}\frac{(\frac{z^2}{4})^k}{k!}\end{aligned}$$ with the digamma function $\psi(x)=\partial\ln[\Gamma(x)]/\partial
x$. Here we observe the appearance of the $\ln[z]$-term. It turns out that, upon using $J_{\pm\nu} \to \Gamma(\pm\nu+1)^{-1}(z/2)^{\pm\nu}$ in the superhorizon limit of $z\to0$, $\phi_{\bf k}^2(z)$ takes the form $$\begin{aligned}
\phi_{\bf k}^2(z)\sim z^{w}\ln[z],\end{aligned}$$ which recovers (\[super-phi2\]). We mention that $\frac{\partial}{\partial\nu}J_{-\nu}$ in (\[phi1e\]) is dominant because it behaves as $z^{-\nu}\ln[z]$ in the superhorizon limit of $z\to 0$. However, we do not recover its asymptotic form (\[phi2s-sol\]) in the subhorizon limit of $z\to\infty$. Hence, it is not easy to obtain a full solution $\phi_{\bf k}^2(z)$ to (\[s2-eq2\]) by the trick used in [@Kogan:1999bn]. Fortunately, its superhorizon-limit solution (\[super-phi2\]) could be found by this trick.
Power spectra
=============
The power spectrum is defined by the two-point function which could be computed when one chooses the Bunch-Davies (BD) vacuum state $|0\rangle_{\rm BD}$ in the subhorizon limit (${\rm \partial
dS}_\infty$) of $\eta\to -\infty(z\to
\infty)$ [@Baumann:2009ds]. The defining relation is given by $$_{\rm BD}\langle0|{\cal F}(\eta,\bold{x}){\cal
F}(\eta,\bold{y})|0\rangle_{\rm BD}=\int d^3k \frac{{\cal P}_{\cal
F}}{4\pi k^3}e^{i \bold{k}\cdot (\bold{x}-\bold{y})},$$ where ${\cal F}$ represents the singleton or the tensor and $k=\sqrt{\bold{k}\cdot \bold{k}}$ is the comoving wave number. Quantum fluctuations were created on all length scales with wave number $k$. Cosmologically relevant fluctuations start their lives inside the Hubble radius, which defines the subhorizon region: $k\gg aH$. Later on, the comoving Hubble radius $1/(aH)$ shrinks during inflation while the wavenumber $k$ stays constant. Eventually, all fluctuations exit the comoving Hubble radius and reside in the superhorizon region $k\ll aH$ after horizon crossing.
In general, one may compute the power spectrum of scalar and tensor by taking the BD vacuum. In the dS inflation, we choose the subhorizon limit of $z\to \infty$ to define the BD vacuum. This implies that in the infinite past of $\eta\to -\infty(z\to \infty)$, all observable modes had time-independent frequencies $\omega=k$ and the Mukhanov-Sasaki equation reduces to ${\cal F}''_{{\bf
k},\infty}+k^2{\cal F}_{{\bf k},\infty}\approx0$ whose positive solution is given by ${\cal F}_{{\bf
k},\infty}=e^{-ik\eta}/\sqrt{2k}=e^{iz}/\sqrt{2k}$. This defines a preferable set of mode functions and a unique physical vacuum, the BD vacuum $|0\rangle_{\rm BD}$.
On the other hand, we choose the superhorizon region of $z \ll 1$ to get a finite form of the power spectrum which survives after horizon crossing. For example, fluctuations of a massless scalar ($\bar{\nabla}^2\delta \phi=0$) and tensor ($\bar{\nabla}^2h_{ij}=0$) with different normalizations originate on subhorizon scales and propagate for a long time on superhorizon scales. This can be checked by computing their power spectra given by $$\begin{aligned}
\label{powerst}
{\cal P}_{\rm \delta\phi}&=&\frac{H^2}{(2\pi)^2}[1+z^2],\\
\label{powerst1}{\cal P}_{\rm h}&=&2\times \Big(\frac{2}{M_{\rm
P}}\Big)^2\times{\cal P}_{\rm \delta\phi}= \frac{2H^2}{\pi^2M^2_{\rm
P}}[1+z^2].\end{aligned}$$ In the limit of $z\to 0$, they are finite as $$\label{fpowerst} {\cal P}_{{\rm
\delta \phi},0}=\frac{H^2}{(2\pi)^2},~~{\cal P}_{{\rm h},0}=
\frac{2H^2}{\pi^2M^2_{\rm P}}.$$ Accordingly, it would be very interesting to check what happens when one computes the power spectra for the dipole pair (singleton) generated during the dS inflation in the framework of the singleton gravity theory.
To compute the power spectrum, we have to know the commutation relations and the Wronskian conditions. The canonical conjugate momenta are given by $$\pi_1=a^2\frac{d\varphi_2}{d\eta},~~\pi_2=a^2\frac{d\varphi_1}{d\eta}.$$ The canonical quantization is accomplished by imposing equal-time commutation relations: $$\begin{aligned}
\label{comm}
[\hat{\varphi}_{1}(\eta,{\bf x}),\hat{\pi}_{1}(\eta,{\bf
y})]=i\delta^3({\bf x}-{\bf y}),~~[\hat{\varphi}_2(\eta,{\bf
x}),\hat{\pi}_{2}(\eta,{\bf y})]=i\delta^3({\bf x}-{\bf y}).\end{aligned}$$ The two operators $\hat{\varphi}_{1}$ and $\hat{\varphi}_{2}$ are expanded in terms of Fourier modes as [@Rivelles:2003jd; @Jimenez:2012ak; @Kim:2013waf] $$\begin{aligned}
\label{hex1}
\hat{\varphi}_{1}(z,{\bf x})&=&\frac{1}{(2\pi)^{\frac{3}{2}}}\int
d^3kN\Bigg[\Big(i\hat{c}_1({\bf k})\phi^1_{\bf k}(z)e^{i{\bf
k}\cdot{\bf
x}}\Big)+{\rm h.c.}\Bigg], \\
\label{hex2} \hat{\varphi}_2(z,{\bf
x})&=&\frac{1}{(2\pi)^{\frac{3}{2}}}\int
d^3k\tilde{N}\Bigg[\Big(\hat{c}_2({\bf k})\phi^1_{\rm
k}(z)+\hat{c}_1({\bf k})\phi^2_{\rm k}(z)\Big)e^{i{\bf k}\cdot{\bf
x}}+{\rm h.c.}\Bigg]\end{aligned}$$ with $N$ and $\tilde{N}$ the normalization constants. Plugging (\[hex1\]) and (\[hex2\]) into (\[comm\]) determines the relation of normalization constants as $N\tilde{N}=1/2k $ and commutation relations between $\hat{c}_a({\bf k})$ and $\hat{c}^{\dagger}_b({\bf k}')$ as $$\label{scft}
[\hat{c}_a({\bf k}), \hat{c}^{\dagger}_b({\bf k}')]= 2k
\left(
\begin{array}{cc}
0 & -i \\
i & 1 \\
\end{array}
\right)\delta^3({\bf k}-{\bf k}')$$ which reflects the quantization of singleton. Here, the commutation relation of $[\hat{c}_2({\bf k}),
\hat{c}^{\dagger}_2({\bf k}')]$ is implemented by the following Wronskian condition with (\[scalar-eq7\]) and $\tilde{c}_2=-iH/(2\sqrt{2k^3})$ in (\[phi2s-sol\]): $$\begin{aligned}
a^2\Big(\phi^1_{{\bf k},\infty}\frac{d\phi^{2*}_{{\bf
k},\infty}}{dz}-\phi^{2*}_{{\bf k},\infty}\frac{d\phi^{1}_{{\rm
k},\infty}}{dz}+\phi^{1*}_{{\bf k},\infty}\frac{d\phi^{2}_{{\bf
k},\infty}}{dz}-\phi^{2}_{{\bf k},\infty}\frac{d\phi^{1*}_{{\rm
k},\infty}}{dz}\Big)=\frac{1}{k}.\end{aligned}$$ It is important to note that the commutation relations (\[scft\]) were used to derive the power spectra of conformal gravity [@Myung:2014cra]. On the other hand, if one uses the solution $\phi^{1}_{{\bf
k},\infty}$ (\[scalar-eq7\]) and $\phi^{2,d}_{{\bf k},\infty}$ (\[sub-sol\]), the Wronskian condition leads to $$\begin{aligned}
&&a^2\Big(\phi^1_{{\bf k},\infty}\frac{d\phi^{2,d*}_{{\bf
k},\infty}}{dz}-\phi^{2,d*}_{{\bf k},\infty}\frac{d\phi^{1}_{{\rm
k},\infty}}{dz}+\phi^{1*}_{{\bf k},\infty}\frac{d\phi^{2,d}_{{\bf
k},\infty}}{dz}-\phi^{2,d}_{{\bf k},\infty}\frac{d\phi^{1*}_{{\rm
k},\infty}}{dz}\Big)\nonumber
\\
&&=-\sqrt{\frac{k}{2}}\frac{1}{H}\Bigg[2i(-\tilde{c}_2+\tilde{c}^*_2)+(\tilde{c}_1+\tilde{c}_1^*)\Big(\frac{1}{z^3}+\frac{3}{z}\Big)\Bigg]\end{aligned}$$ which cannot be independent of $z$ unless $\tilde{c}_1=\tilde{c}_1^*=0$. This explains why the direct solution $\phi^{2,d}_{{\bf k},\infty}$ (\[sub-sol\]) is not suitable for choosing the Bunch-Davies vacuum in the subhorizon limit. At this stage, we wish to mention when the fluctuations of the singleton become classical. The fields in (\[comm\]) effectively commute in the superhorizon region of $z<1$ after horizon crossing.
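A quick numerical cross-check (ours, with arbitrary illustrative values of $H$ and $k$) of the first Wronskian condition above, using $\phi^1_{{\bf k},\infty}$ from (\[scalar-eq7\]) and $\phi^2_{{\bf k},\infty}$ from (\[phi2s-sol\]) with $\tilde{c}_2=-iH/(2\sqrt{2k^3})$ and $a^2=k^2/(H^2z^2)$:

```python
# Sketch only: the Wronskian combination should equal 1/k for all z.
import numpy as np

H, k = 1.3, 0.7                       # arbitrary illustrative values
A  = H / np.sqrt(2 * k**3)
c2 = -1j * H / (2 * np.sqrt(2 * k**3))

phi1 = lambda z: A  * (1j + z) * np.exp(1j * z)
phi2 = lambda z: c2 * (1j + z) * np.exp(1j * z)

def d(f, z, h=1e-6):                  # central finite difference in z
    return (f(z + h) - f(z - h)) / (2 * h)

z  = np.linspace(1.0, 50.0, 500)
a2 = k**2 / (H**2 * z**2)
W  = a2 * (phi1(z) * d(lambda s: np.conj(phi2(s)), z)
           - np.conj(phi2(z)) * d(phi1, z)
           + np.conj(phi1(z)) * d(phi2, z)
           - phi2(z) * d(lambda s: np.conj(phi1(s)), z))
assert np.allclose(W, 1.0 / k, atol=1e-6)
```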
We are ready to compute the power spectrum of the dipole pair defined by $$\begin{aligned}
\label{power}
_{\rm BD}\langle0|\hat{\varphi}_{a}(\eta,{\bf
x})\hat{\varphi}_{b}(\eta,{\bf y})|0\rangle_{\rm BD}=\int
d^3k\frac{{\cal P}_{\rm ab}}{4\pi k^3}e^{i{\bf k}\cdot({\bf x}-{\bf
y})}.\end{aligned}$$ Here we choose the BD vacuum $|0\rangle_{\rm BD}$ by imposing $\hat{c}_a({\bf k})|0\rangle_{\rm BD}=0$. On the other hand, the cosmological correlators defined in momentum space are related to the power spectra as [@Baumann:2009ds] $$\label{mom-corr}
\langle\phi^a_{\bf k}\phi^b_{{\bf k}'}\rangle =(2\pi)^3\delta^3({\bf
k}+{\bf k}')\frac{2\pi^2}{k^3}P_{ab}(k).$$ Since the singleton theory is quite different from a theory of two free scalars, we explain what the BD vacuum is. For this purpose, we remind the reader that the Gupta-Bleuler condition $B^+({\bf x})|$phys$\rangle=0$, where $B$ is the conjugate momentum of the scalar photon $A_0$, was introduced to extract the physical states of the transverse photons $A_1$ and $A_2$ by confining the scalar photon $A_0$ and the longitudinal photon $A_3$ as members of a quartet [@AI; @Kugo:1979gm]. Similarly, we note that the dipole pair ($\varphi_1,\varphi_2$) is turned into the zero-norm state by making use of the BRST transformation in Minkowski spacetime [@Kim:2013mfa]. We suggest that if the dS/LCFT correspondence works, the boundary logarithmic operator $\sigma_2$ is related to the negative-norm state of $\varphi_2$. In order to remove the negative-norm state, we impose the subsidiary condition $\varphi_1^+({\bf
x})|$phys$\rangle=0$ where $\varphi_1^+({\bf x})$ is the positive-frequency part of the field operator. Then, the physical space ($|$phys$\rangle$) will not include any $\varphi_2$-particle state. This corresponds to the dipole mechanism to cancel the negative-norm state. Here, the subsidiary condition of $\varphi_1^+({\bf x})|$phys$\rangle=0$ is translated into $\hat{c}_1({\bf k})|$phys$\rangle=0$ which shares a property of the BD vacuum $|0\rangle_{\rm BD}$ defined by $\hat{c}_1({\bf
k})|0\rangle_{\rm BD}=0$, in addition to $\hat{c}_2({\bf
k})|0\rangle_{\rm BD}=0$.
The power spectrum for $\varphi_1$ is given by $$\begin{aligned}
{\cal P}_{\rm 11}=0\end{aligned}$$ when one uses the unconventional commutation relation $[\hat{c}_1({\bf k}), \hat{c}^{\dagger}_1({\bf k}')]=0$.
On the other hand, it turns out that the power spectrum of $\varphi_{2}$ is defined by $$\begin{aligned}
\label{pw22}
{\cal P}_{\rm 22}&\equiv& {\cal P}_{\rm 22}^{(1)}+{\cal P}_{\rm
22}^{(2)}\nonumber\\
&=&\frac{k^3}{2\pi^2}\Bigg(\Big|\phi_{\bf k}^1\Big|^2+i(\phi_{\bf
k}^1\phi_{\bf k}^{2*}-\phi_{\bf k}^2\phi_{\bf k}^{1*})\Bigg),\end{aligned}$$ where ${\cal P}_{\rm 22}^{(1,2)}$ denote the (first, second) term in (\[pw22\]) and we fixed $\tilde{N}=1/\sqrt{2k}$. Note that ${\cal
P}_{\rm 22}^{(1)}$ can be written as $$\begin{aligned}
{\cal P}_{\rm 22}^{(1)}=\frac{k^3}{2\pi^2}\Big|\phi_{\bf k}^1\Big|^2
=\frac{H^2}{8\pi}z^3|e^{i(\frac{\pi
\nu}{2}+\frac{\pi}{4})}H_{\nu}^{(1)}(z)|^2.\end{aligned}$$ In the superhorizon limit of $z\to 0$, the power spectrum takes the form $${\cal P}_{\rm 22}^{(1)}\Big|_{z\to
0}=\Big(\frac{H}{2\pi}\Big)^2\Big(\frac{\Gamma(\nu)}{\Gamma(3/2)}\Big)^2\Big(\frac{z}{2}\Big)^{2w}\equiv
\xi^2z^{2w},~\xi^2=\frac{1}{2^{2w}}\Big(\frac{H}{2\pi}\Big)^2\Big(\frac{\Gamma(\nu)}{\Gamma(3/2)}\Big)^2,$$ which implies that ${\cal P}_{\rm 22}^{(1)}$ approaches zero as $z\to 0$. In the massless case of $m^2=0~(\nu=3/2,w=0)$, ${\cal
P}_{\rm 22}^{(1)}$ leads to the power spectrum ${\cal P}_{\rm \delta
\phi}=(H/2\pi)^2$ in (\[powerst\]) for a massless scalar.
It is important to note that in the superhorizon limit of $z\to 0$, ${\cal P}_{\rm 22}^{(2)}$ is given by $$\begin{aligned}
\label{pw222}
{\cal P}_{\rm 22}^{(2)}\sim 2\xi^2 z^{2w}\ln[z],\end{aligned}$$ which implies that ${\cal P}_{\rm 22}^{(2)}$ approaches zero as $z\to 0$. In deriving (\[pw222\]), $\xi$ denotes a real quantity given by $\phi_{\bf k}^1= -i\xi z^{w}$ and $\phi_{\bf k}^2\sim \xi
z^{w}\ln[z]$. We mention that the remaining power spectra ${\cal P}_{\rm
12}$ and ${\cal P}_{\rm 21}$ take the same form as ${\cal P}_{\rm
22}^{(1)}$ $$\begin{aligned}
{\cal P}_{\rm 12}~=~{\cal P}_{\rm
21}&=&\frac{k^3}{2\pi^2}\Big|\phi_{\bf k}^1\Big|^2\nonumber\\
&=&{\cal P}_{\rm 22}^{(1)},\end{aligned}$$ where we fixed $N=1/\sqrt{2k}$.
Finally, we obtain the power spectra of the singleton in the superhorizon limit of $z\to 0$ $$\begin{aligned}
\label{ps-mat1}
{\cal P}_{{ab},0}(z)&\sim&\xi^2 \left(
\begin{array}{cc}
0 & z^{2w} \\
z^{2w} & z^{2w}(1+2\ln [z])\\
\end{array}
\right).
\end{aligned}$$ Its explicit form is given by $$\begin{aligned}
\label{ps-mat2}
{\cal P}_{{ab},0}(k,\eta)&\sim&\xi^2 \left(
\begin{array}{cc}
0 & (-k\eta)^{2w} \\
(-k\eta)^{2w} & (-k\eta)^{2w}(1+2\ln [-k\eta])\\
\end{array}
\right).
\end{aligned}$$ For $\eta=-\epsilon(0<\epsilon\ll1)$ near $\eta=0^-$ [@Larsen:2003pf], (\[ps-mat2\]) takes the form $$\label{ps-mat3}{\cal P}_{{ab},0}(k,-\epsilon)\sim \xi^2 \left(
\begin{array}{cc}
0 & (\epsilon k)^{2w} \\
(\epsilon k)^{2w} & ( \epsilon k)^{2w}(1+2\ln [\epsilon k])\\
\end{array}
\right).$$ Interestingly, $k^{-3}{\cal P}_{{ab},0}(k,-1)$ has the same form as the momentum correlators of LCFT $\langle
\sigma_a(k)\sigma_b(-k)\rangle$ with $D=(2w-1)(w-3)/(w(2w-3))$ in (\[m0\])-(\[m2\]). This may show how the dS/LCFT correspondence works for deriving the power spectra in the superhorizon limit. For a light singleton with $m^2 \ll H^2$, one has $w\simeq
\frac{m^2}{3H^2}$. Hence, these power spectra are given by $$\label{ps-light}
{\cal P}_{{ab},0}|_{\frac{m^2}{ H^2}\ll1} (k,-\epsilon)\propto\left(
\begin{array}{cc}
0 & (\epsilon k)^{\frac{2m^2}{3H^2}} \\
(\epsilon k)^{\frac{2m^2}{3H^2}} & (\epsilon k)^{\frac{2m^2}{3H^2}}(1+2\ln[\epsilon k])\\
\end{array}
\right)$$ whose spectral indices are given by $$\label{sp-light}
n_{{ab},0}|_{\frac{m^2}{ H^2}\ll1}(k,-\epsilon)-1 =\frac{d\ln{\cal
P}_{{ab},0}|_{\frac{m^2}{ H^2}\ll1}(k,-\epsilon)}{d\ln k}= \left(
\begin{array}{cc}
0 & \frac{2m^2}{3H^2} \\
\frac{2m^2}{3H^2} & \frac{2m^2}{3H^2}+\frac{2}{(1+2\ln [\epsilon k])}\\
\end{array}
\right).$$ We observe here that $n_{{ab},0}|_{\frac{m^2}{ H^2}\ll1}$ gets a new contribution $\frac{2}{(1+2\ln [\epsilon k])}$ due to the logarithmic short-distance singularity. Also, we observe that ${\cal P}_{{22},0}|_{\frac{m^2}{ H^2}\ll1} (k,-\epsilon) <0$ for $\epsilon k<0.607$, since $1+2\ln[\epsilon k]<0$ precisely when $\epsilon k<e^{-1/2}\simeq 0.607$. There is no such condition for a massive scalar propagating on the dS spacetime.
At this stage, we briefly mention how to resolve the $\epsilon$-dependence. To compute the power spectra and spectral indices correctly, one has to choose a proper slice near $\eta=0^-$. This may be done by taking $\eta=-\epsilon$ first and letting $\epsilon \to 0$ afterwards. We note that the $\epsilon$-dependence appears in the power spectra (\[ps-mat3\]) and spectral indices (\[sp-light\]). As was shown in the dS/CFT correspondence [@Larsen:2003pf], the cut-off $\epsilon$ acts like a renormalization scale, which is well known from UV CFT renormalization theory. The cosmic evolution can be seen as a reversed renormalization group flow, from the IR fixed point (big bang) of the dual CFT to the UV fixed point (late times) of the dual CFT theory [@Schalm:2012pi]. Inflation occurs at a certain intermediate stage during the renormalization group flow. This is called dS holography. Accordingly, in order to obtain $\epsilon$-independent power spectra and spectral indices, we should introduce proper counterterms to renormalize them.
In the massless singleton of $m^2=0(\nu=3/2,w=0)$, the corresponding power spectra take the form $$\label{ps-massless}
{\cal P}_{{ab},0}\Big|_{m^2\to 0} =\Big(\frac{H}{2\pi}\Big)^2\left(
\begin{array}{cc}
0 & 1 \\
1 & 1+2\ln [z]\\
\end{array}
\right)$$ in the superhorizon limit. Here, we note that ${\cal P}_{{12},0}|_{m^2\to 0}$ is just the power spectrum of a massless scalar ${\cal P}_{\delta \phi,0}$ (\[fpowerst\]) in the superhorizon limit.
Discussions
===========
In this work, we have obtained the power spectra of the singleton generated during the dS inflation. Even though we did not know a complete solution $\phi^{2}_{\bf k}$ of the degenerate fourth-order equation (\[s2-eq2\]) in the whole region, we have obtained the power spectra, which show that the dS/LCFT correspondence plays an important role in determining the power spectra in the superhorizon limit. Considering (\[mom-corr\]) and (\[ps-mat2\]), one has $k^{-3}{\cal P}_{{ab},0}(k,-1)\propto
\langle \phi_{\bf k}^a\phi_{-{\bf k}}^b\rangle$. Hence, the cosmological correlators $\langle \phi_{\bf k}^a\phi_{-{\bf
k}}^b\rangle$ are directly proportional to the momentum LCFT-correlators $\langle \sigma_a(k)\sigma_b(-k)\rangle$ in (\[m0\])-(\[m2\]). Here we note that the LCFT correlators were derived from the “extrapolate" dictionary (b). This is in contrast to the “differentiate" dictionary where, as noted before (\[dir-rel\]), the cosmological correlator is inversely proportional to the CFT correlator [@Maldacena:2002vr]. Furthermore, we have computed the spectral indices (\[sp-light\]) for a light singleton, which contain a logarithmic correction as compared to the massive scalar.
In computing the power spectra, we have used two vacua located at $z=\infty$ (${\rm \partial dS}_\infty$) and $z=0$ (${\rm \partial
dS}_0$): the BD vacuum $|0\rangle_{\rm BD}$ in the subhorizon limit of $z\to \infty(\eta\to -\infty)$ and the CFT vacuum $|0\rangle_{\rm
C}$ to define the correlators of operators $\sigma_a$ in the superhorizon limit of $z\to0(\eta \to 0^-)$. The BD vacuum $|0\rangle_{\rm BD}$ is annihilated by the two lowering operators as $c_a({\bf k})|0\rangle_{\rm BD}=0$, and it is related to the physical state $|{\rm phys}\rangle$ from which the negative-norm states are removed, as in quantum electrodynamics. This is because the singleton theory is not a theory of two free scalars. In addition, the commutation relations (\[scft\]) designed for the singleton quantization played an important role in deriving the power spectra in the superhorizon limit. On the other hand, the CFT vacuum $|0\rangle_{\rm C}$ was defined by imposing the Virasoro operators $L_n|0\rangle_{\rm C}=0$ for $n=0,\pm1$. The highest-weight state $|\Phi\rangle_{\rm
C}=\Phi(0)|0\rangle_{\rm C}$ for any primary field $\Phi$ of conformal weight $h$ is defined by $L_0|\Phi\rangle_{\rm
C}=h|\Phi\rangle_{\rm C}$ and $L_n|\Phi\rangle_{\rm C}=0$ for $n>0$.
Consequently, we have derived the power spectra and spectral indices of the singleton in the superhorizon limit by using two boundary conditions at the infinite past ($\eta=-\infty$) and the infinite future ($\eta=0^-$), where the BD vacuum was taken at the former time while the CFT vacuum was employed at the latter time. The dS/LCFT correspondence was first realized here through the computation of the singleton power spectra. Since the LCFT dual to the singleton suffers from non-unitarity (for example, ${\cal P}_{{22},0}|_{\frac{m^2}{
H^2}\ll1} (k,-\epsilon) <0$ for $\epsilon k<0.607$), a truncation mechanism will be introduced to cure the non-unitarity in dS spacetime [@Bergshoeff:2012sc; @Grumiller:2013at; @Kim:2013mfa]. However, there remains nothing ($\sigma_{11}=0$) of the rank-2 LCFT dual to the singleton after truncating (\[lcft-mat\]). If one considers a three-coupled scalar theory instead of the singleton, its dual correlators will not be a $2\times
2$ matrix (\[lcft-mat\]) but a $3\times 3$ matrix of $$\label{3by3}\tilde{\sigma}_{ab} \propto \left(
\begin{array}{ccc}
0 & 0& {\rm CFT} \\
0 & {\rm CFT}& {\rm LCFT}\\
{\rm CFT}&{\rm LCFT}&{\rm LCFT}^2 \\
\end{array}
\right).$$ The truncation process is carried out by throwing away all terms which generate the third column and row of (\[3by3\]). Actually, this corresponds to finding a unitary CFT. We point out that the unitary CFT ($\tilde{\sigma}_{22}$) obtained after truncation is nothing but an ordinary CFT.
Finally, let us ask how this scenario could account for cosmological observables like the amplitude of the power spectrum and the tensor-to-scalar ratio in the cosmic microwave background. In this work, we have chosen the dS inflation with $\dot{\phi_1}=\dot{\phi_2}=0$ instead of the slow-roll (dS-like) inflation for simplicity. If we choose the slow-roll inflation, then the Einstein equation takes the form of $G_{\mu\nu}=T_{\mu\nu}/M^2_{\rm P}$, which provides the energy density $\rho=\dot{\phi_1}\dot{\phi_2}+(m^2\phi_1\phi_2+\mu^2\phi_1^2/2)$ and the pressure $p=\dot{\phi_1}\dot{\phi_2}-(m^2\phi_1\phi_2+\mu^2\phi_1^2/2)$. The first and second Friedmann equations are given by $H^2=\frac{\rho}{3M^2_{\rm P}}$ and $\dot{H}=-\frac{\rho+p}{2M^2_{\rm P}}$. Also, the scalar equations are given by $\ddot{\phi}_1+3H\dot{\phi}_1+m^2\phi_1=0$ and $\ddot{\phi}_2+3H\dot{\phi}_2+m^2\phi_2=-\mu^2\phi_1$, which are combined to give $(\frac{d^2}{dt^2}+3H\frac{d}{dt}+m^2)^2\phi_2=0$. However, performing the cosmological perturbations around the slow-roll inflation instead of the dS inflation is a formidable task. Hence, we wish to leave “cosmological perturbations of the singleton" as future work, in which we will answer the question of how this theory could account for the observed cosmological parameters in the cosmic microwave background.
On the other hand, one may consider the holographic inflation and thus, the dS/CFT correspondence determines the tensor central charge. If one accepts holographic inflation such that the dS inflation era of our universe is approximately described by a dual CFT$_3$ living on the spatial slice at the end of inflation, the BICEP2 results might determine the central charge $c_{\rm
T}=1.2\times 10^{9}$ of the CFT$_3$ [@Larsen:2014wpa]. This is because every CFT$_3$ has a transverse-traceless tensor $T_{ij}$ with two DOF which satisfies $\langle T_{ij}({\bf x})T_{kl}({\bf
0})\rangle=\frac{c_{\rm T}}{|{\bf x}|^6} I_{ij,kl}({\bf x})$. Since a single complex scalar $\psi$ represents two polarization modes of the graviton, its tensor correlator in momentum space is defined by $\langle\psi_{\bf k}\psi_{{\bf k}'}\rangle=(2\pi)^3\delta^3({\bf
k}+{\bf k}')\frac{2\pi^2}{k^3}\frac{{\cal P}_{\rm T}}{2}$ which determines the tensor power spectrum ${\cal P}_{\rm T}=2\Big(\frac{H
t_{\rm P}}{\pi}\Big)^2={\cal P}_{{\rm h},0}$ in (\[fpowerst\]). This was determined to be $5\times 10^{-10}$ by BICEP2 [@Ade:2014xna]. Also, its improvement of energy-momentum tensor was reported in [@Kawai:2014vxa] by including a curvature coupling of $\zeta \phi^2 R$. As a result, if one uses the critical gravity including curvature squared terms to describe the holographic inflation, the dS/LCFT picture for tensor modes would play a role in determining other cosmological observables.
Appendix: LCFT correlators from “extrapolate" dictionary {#appendix-lcft-correlators-from-extrapolate-dictionary .unnumbered}
========================================================
In this appendix, we derive the LCFT correlators by making use of the extrapolation approach (b) in the superhorizon limit. For this purpose, we consider the Green’s function for a massive scalar propagating on dS spacetime $$\label{green}
G_0(\eta,{\bf x};\eta',{\bf
y})=\frac{H^2}{16\pi}\Gamma(\triangle_+)\Gamma(\triangle_-)~_2F_1(\triangle_+,\triangle_-,2;1-\frac{\xi}{4})$$ with $\xi=\frac{-(\eta-\eta')^2+|{\bf x}-{\bf y}|^2}{\eta \eta'}$. Using a transformation formula for the hypergeometric function, $$\begin{aligned}
_2F_1(\triangle_+,\triangle_-,2;1-\frac{\xi}{4})=
\Big(\frac{4}{\xi}\Big)^{\triangle_-}~
_2F_1\Big(\triangle_-,2-\triangle_+,2;\frac{1-\frac{\xi}{4}}{-\frac{\xi}{4}}\Big),\end{aligned}$$ we obtain the asymptotic form for $\triangle_-=w$ $$\label{g0e}
\lim_{\eta,\eta'\to 0}(\eta\eta')^{-w}G_0(\eta,{\bf x};\eta',{\bf
y})\propto\frac{1}{|{\bf x}-{\bf y}|^{2w}},$$ which corresponds to LCFT correlators $_{\rm e}\langle {\cal O}_{1}({\bf x}){\cal O}_{2}({\bf y})\rangle_{\rm e}
=_{\rm e}\langle {\cal O}_{2}({\bf x}){\cal
O}_{1}({\bf y})\rangle_{\rm e}$. Furthermore, the Green’s function $G_1$ is derived by taking derivative with respect to $w$ as $$\begin{aligned}
G_1=\frac{d}{dw}G_0=\Big(\frac{4}{\xi}\Big)^{w}\Big(-\ln\Big[\frac{\xi}{4}\Big]+\frac{1}{F}\frac{\partial F}{\partial w}\Big)F,\end{aligned}$$ where $F$ denotes $F=H^2\Gamma(3-w)\Gamma(w)_2F_1(w,w-1,2;1-4/\xi)/(16\pi)$. It turns out that its asymptotic form is given by $$\begin{aligned}
\lim_{\eta,\eta'\to 0}(\eta\eta')^{-w}G_1(\eta,{\bf
x};\eta',{\bf y})\propto\frac{1}{|{\bf x}-{\bf
y}|^{2w}}\Big(-2\ln|{\bf x}-{\bf
y}|+\zeta_1\Big),\label{g1e}\end{aligned}$$ where $\zeta_1$ is some constant and (\[g1e\]) corresponds to $_{\rm e}\langle {\cal O}_{2}({\bf x}){\cal O}_{2}({\bf
y})\rangle_{\rm e}$.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No.2012-R1A1A2A10040499).
[99]{} A. Pais and G. E. Uhlenbeck, Phys. Rev. [**79**]{}, 145 (1950). P. D. Mannheim and A. Davidson, Phys. Rev. A [**71**]{}, 042110 (2005) \[hep-th/0408104\]. A. V. Smilga, SIGMA [**5**]{}, 017 (2009) \[arXiv:0808.0139 \[quant-ph\]\].
M. Flato and C. Fronsdal, Commun. Math. Phys. [**108**]{}, 469 (1987). A. M. Ghezelbash, M. Khorrami and A. Aghamohammadi, Int. J. Mod. Phys. A [**14**]{}, 2581 (1999) \[hep-th/9807034\].
I. I. Kogan, Phys. Lett. B [**458**]{}, 66 (1999) \[hep-th/9903162\]. Y. S. Myung and H. W. Lee, JHEP [**9910**]{}, 009 (1999) \[hep-th/9904056\]. D. Grumiller, W. Riedler, J. Rosseel and T. Zojer, J. Phys. A [**46**]{}, 494002 (2013) \[arXiv:1302.0280 \[hep-th\]\].
A. Kehagias and A. Riotto, Nucl. Phys. B [**864**]{}, 492 (2012) \[arXiv:1205.1523 \[hep-th\]\]. V. Gurarie, Nucl. Phys. B [**410**]{}, 535 (1993) \[hep-th/9303160\]. M. Flohr, Int. J. Mod. Phys. A [**18**]{}, 4497 (2003) \[hep-th/0111228\]. P. A. R. Ade [*et al.*]{} \[BICEP2 Collaboration\], Phys. Rev. Lett. [**112**]{}, 241101 (2014) \[arXiv:1403.3985 \[astro-ph.CO\]\]. D. Baumann [*et al.*]{} \[CMBPol Study Team Collaboration\], AIP Conf. Proc. [**1141**]{}, 10 (2009) \[arXiv:0811.3919 \[astro-ph\]\].
D. Baumann, arXiv:0907.5424 \[hep-th\].
P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1303.5082 \[astro-ph.CO\]. M. P. Hertzberg, arXiv:1403.5253 \[hep-th\].
S. Choudhury and A. Mazumdar, arXiv:1403.5549 \[hep-th\]. Y. Gong and Y. Gong, Phys. Lett. B [**734**]{}, 41 (2014) \[arXiv:1403.5716 \[gr-qc\]\]. J. E. Kim, arXiv:1404.4022 \[hep-ph\].
L. A. Anchordoqui, arXiv:1407.8105 \[astro-ph.CO\]. K. Bhattacharya, J. Chakrabortty, S. Das and T. Mondal, arXiv:1408.3966 \[hep-ph\]. Y. C. Li, F. Q. Wu, Y. J. Lu and X. L. Chen, arXiv:1409.0294 \[astro-ph.CO\]. M. J. Mortonson and U. Seljak, arXiv:1405.5857 \[astro-ph.CO\].
R. Flauger, J. C. Hill and D. N. Spergel, JCAP [**1408**]{}, 039 (2014) \[arXiv:1405.7351 \[astro-ph.CO\]\]. R. Adam [*et al.*]{} \[ Planck Collaboration\], arXiv:1409.5738 \[astro-ph.CO\].
J. M. Maldacena, JHEP [**0305**]{}, 013 (2003) \[astro-ph/0210603\].
V. O. Rivelles, Phys. Lett. B [**577**]{}, 137 (2003) \[hep-th/0304073\].
Y. -W. Kim, Y. S. Myung and Y. -J. Park, Mod. Phys. Lett. A [**28**]{}, 1350182 (2013) \[arXiv:1305.7312 \[hep-th\]\]. T. Banks, M. R. Douglas, G. T. Horowitz and E. J. Martinec, hep-th/9808016. L. Susskind and E. Witten, hep-th/9805114. J. Polchinski, hep-th/9901076. D. Harlow and D. Stanford, arXiv:1104.2621 \[hep-th\]. E. Witten, hep-th/0106109.
D. Seery and J. E. Lidsey, JCAP [**0606**]{}, 001 (2006) \[astro-ph/0604209\].
M. Abramowitz and A. Stegun, Handbook of Mathematical functions, (Dover publications, New York, 1970).
J. B. Jimenez, E. Dio and R. Durrer, JHEP [**1304**]{}, 030 (2013) \[arXiv:1211.0441 \[hep-th\]\].
Y. S. Myung and T. Moon, arXiv:1407.0441 \[gr-qc\].
I. J. R. Aitchison, An informal introduction to gauge field theories, (Cambridge Univ. Press, London, 1982).
T. Kugo and I. Ojima, Prog. Theor. Phys. Suppl. [**66**]{}, 1 (1979). Y. -W. Kim, Y. S. Myung and Y. -J. Park, Phys. Rev. D [**88**]{}, 085032 (2013) \[arXiv:1307.6932\].
F. Larsen and R. McNees, JHEP [**0307**]{}, 051 (2003) \[hep-th/0307026\]. K. Schalm, G. Shiu and T. van der Aalst, JCAP [**1303**]{}, 005 (2013) \[arXiv:1211.2157 \[hep-th\]\].
E. A. Bergshoeff, S. de Haan, W. Merbis, M. Porrati and J. Rosseel, JHEP [**1204**]{}, 134 (2012) \[arXiv:1201.0449 \[hep-th\]\]. F. Larsen and A. Strominger, arXiv:1405.1762 \[hep-th\]. S. Kawai and Y. Nakayama, arXiv:1403.6220 \[hep-th\].
|
\[attack\_classification\]
Black-Box Adversarial Attacks, classified by approach:

Gradient Estimation: $\bullet$ Chen et al.[@chen_2017] $\bullet$ Ilyas et al.[@ilyas_2018] $\bullet$ Cheng et al.[@cheng] $\bullet$ Bhagoji et al.[@bhagoji] $\bullet$ Du et al.[@du] $\bullet$ Tu et al.[@tu] $\bullet$ Ilyas et al.[@ilyas_2019]

Transferability: $\bullet$ Papernot et al.[@papernot] $\bullet$ Shi et al.[@shi] $\bullet$ Dong et al.[@dong]

Local Search: $\bullet$ Narodytska et al.[@narodytska_2016] $\bullet$ Narodytska et al.[@narodytska_2017] $\bullet$ Brendel et al.[@brendel] $\bullet$ Chen et al.[@chen_2019] $\bullet$ Brunner et al.[@brunner] $\bullet$ Li et al.[@li] $\bullet$ Alzantot et al.[@alzantot] $\bullet$ Guo et al.[@guo]

Combinatorics: $\bullet$ Moon et al.[@moon]
|
---
abstract: 'We consider redundant analogues of the $f$- and $h$-vectors of simplicial complexes and present bases of $\mathbb{R}^{m+1}$ related to these “long” $f$- and $h$-vectors describing the face systems $\Phi\subseteq\mathbf{2}^{\{1,\ldots,m\}}$; we list the corresponding change of basis matrices. The representations of the long $f$- and $h$-vectors of a face system with respect to various bases are expressed based on partitions of the system into Boolean intervals.'
address: 'Data-Center Co., RU-620034, Ekaterinburg, P.O. Box 5, Russian Federation'
author:
- 'Andrey O. Matveev'
title: 'Faces and Bases: Boolean Intervals'
---
[^1]
Introduction and preliminaries
==============================
Let $V$ be a finite set and let $\mathbf{2}^{V}$ denote the [*simplex*]{} $\{F:\ F\subseteq V\}$. A family $\Delta\subseteq\mathbf{2}^{V}$ is called an [*abstract simplicial complex*]{} (or a [*complex*]{}) on the [*vertex*]{} set $V$ if, given subsets $A$ and $B$ of $V$, the inclusions $A\subseteq B\in\Delta$ imply $A\in\Delta$, and if $\{v\}\in\Delta$, for any $v\in V$; see, e.g., [@BB; @B; @BH; @BP; @Hibi; @MS; @St1; @Z]. If $\Gamma$ is a complex such that $\Gamma\subset\Delta$ (that is, $\Gamma$ is a [*subcomplex*]{} of $\Delta$) then the family $\Delta-\Gamma$ is called a [*relative simplicial complex*]{}, see [@St1 §III.7].
If $\Psi$ is a relative complex then the sets $F\in\Psi$ are called the [*faces*]{} of $\Psi$. The [*dimension*]{} $\dim(F)$ of a face $F$ by definition equals $|F|-1$; the cardinality $|F|$ is called the [*size*]{} of $F$. Let $\#$ denote the number of sets in a family. If $\#\Psi>0$ then the [*size*]{} $\operatorname{\mathit{d}}(\Psi)$ of $\Psi$ is defined by $\operatorname{\mathit{d}}(\Psi):=\max_{F\in\Psi}|F|$, and the [*dimension*]{} $\dim(\Psi)$ of $\Psi$ by definition is $\operatorname{\mathit{d}}(\Psi)-1$.
The row vector $\pmb{f}(\Psi):=\bigl(f_0(\Psi),f_1(\Psi),\ldots,f_{\dim(\Psi)}(\Psi)\bigr)
\in\mathbb{N}^{\operatorname{\mathit{d}}(\Psi)}$, where $f_i(\Psi):=\#\{F\in\Psi:\
|F|=i+1\}$, is called the [*$f$-vector*]{} of $\Psi$. The row [*$h$-vector*]{} $\pmb{h}(\Psi):=\bigl(h_0(\Psi),h_1(\Psi),\ldots,h_{\operatorname{\mathit{d}}(\Psi)}(\Psi)\bigr)
\in\mathbb{Z}^{\operatorname{\mathit{d}}(\Psi)+1}$ of $\Psi$ is defined by $$\sum_{i=0}^{\operatorname{\mathit{d}}(\Psi)}h_i(\Psi)\cdot\mathrm{y}^{\operatorname{\mathit{d}}(\Psi)-i}:=
\sum_{i=0}^{\operatorname{\mathit{d}}(\Psi)}f_{i-1}(\Psi)\cdot(\mathrm{y}-1)^{\operatorname{\mathit{d}}(\Psi)-i}\
.$$
In this note we consider redundant analogues $\pmb{f}(\Phi;|V|)\in\mathbb{N}^{|V|+1}$ and $\pmb{h}(\Phi;|V|)\in\mathbb{Z}^{|V|+1}$ of the $f$- and $h$-vectors that can be used in some situations for describing the combinatorial properties of arbitrary [*face systems*]{} $\Phi\subseteq\mathbf{2}^{V}$.
For a positive integer $m$, let $[m]$ denote the set $\{1,2,\ldots,m\}$. We relate to a face system $\Phi\subseteq\mathbf{2}^{[m]}$ the row vectors $$\begin{aligned}
\label{eq:6}
\pmb{f}(\Phi;m):&=\bigl(f_0(\Phi;m),f_1(\Phi;m),\ldots,f_m(\Phi;m)\bigr)
\in\mathbb{N}^{m+1}\ ,\\ \label{eq:7}
\pmb{h}(\Phi;m):&=\bigl(h_0(\Phi;m),h_1(\Phi;m),\ldots,h_m(\Phi;m)\bigr)
\in\mathbb{Z}^{m+1}\ ,\end{aligned}$$ where $f_i(\Phi;m):=\#\{F\in\Phi:\ |F|=i\}$, for $0\leq i\leq m$, and the vector $\pmb{h}(\Phi;m)$ is defined by $$\sum_{i=0}^m h_i(\Phi;m)\cdot\mathrm{y}^{m-i}:=\sum_{i=0}^m
f_i(\Phi;m)\cdot(\mathrm{y}-1)^{m-i}\ .$$
Note that if $\Psi\subset\mathbf{2}^{[m]}$ is a relative complex then we set $f_0(\Psi;m):=f_{-1}(\Psi):=\#\{F\in\Psi:\
|F|=0\}\in\{0,1\}$, $f_i(\Psi;m):=f_{i-1}(\Psi)$, for $1\leq
i\leq\operatorname{\mathit{d}}(\Psi)$ and, finally, $f_i(\Psi;m):=0$, for $\operatorname{\mathit{d}}(\Psi)+1\leq i\leq m$.
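To make definitions (\[eq:6\]) and (\[eq:7\]) concrete, the following short Python sketch (illustrative only; all function and variable names are ours) computes the long $f$- and $h$-vectors of a face system given as a collection of subsets of $[m]$, using the expansion $h_l(\Phi;m)=\sum_{k=0}^l(-1)^{l-k}\binom{m-k}{l-k}f_k(\Phi;m)$ of the defining identity.

```python
from itertools import combinations
from math import comb

def long_f_vector(faces, m):
    # f_i(Phi; m) counts the faces of size i, for 0 <= i <= m.
    f = [0] * (m + 1)
    for F in faces:
        f[len(F)] += 1
    return f

def long_h_vector(faces, m):
    # h_l(Phi; m) = sum_{k <= l} (-1)^(l-k) * C(m-k, l-k) * f_k(Phi; m).
    f = long_f_vector(faces, m)
    return [sum((-1) ** (l - k) * comb(m - k, l - k) * f[k] for k in range(l + 1))
            for l in range(m + 1)]

# Example: the full simplex 2^[3] (all subsets of {1, 2, 3}).
m = 3
simplex = [frozenset(c) for i in range(m + 1) for c in combinations(range(1, m + 1), i)]
print(long_f_vector(simplex, m))   # [1, 3, 3, 1]
print(long_h_vector(simplex, m))   # [1, 0, 0, 0]
```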
Vectors (\[eq:6\]) and (\[eq:7\]) go back to analogous constructions that appear, e.g., in [@McMSh; @McMWa]. In some situations, these “long” $f$- and $h$-vectors either can be used as an intermediate description of face systems or they can independently be involved in combinatorial problems and computations, see, e.g., [@M2]. Since the maps $\Phi\mapsto\pmb{f}(\Phi;m)$ and $\Phi\mapsto\pmb{h}(\Phi;m)$ from the Boolean lattice $\mathcal{D}(m)$ of all face systems (ordered by inclusion) to $\mathbb{Z}^{m+1}$ are [*valuations*]{} on $\mathcal{D}(m)$, the long $f$- and $h$-vectors can also be used in the study of decomposition problems; here, a basic construction is a [*Boolean interval*]{}, that is, the family $[A,C]:=\{B\in\mathbf{2}^{[m]}:\ A\subseteq B\subseteq C\}$, for some faces $A\subseteq C\subseteq[m]$.
We consider the vectors $\pmb{f}(\Phi;m)$ and $\pmb{h}(\Phi;m)$ as elements from the real Euclidean space $\mathbb{R}^{m+1}$ of row vectors. We present several bases of $\mathbb{R}^{m+1}$ related to face systems and list the corresponding change of basis matrices.
See, e.g., [@Aigner §IV.4] on valuations, [@MS Chapter 5] on Alexander duality, [@BarvinokConv §VI.6], [@BR Chapter 5], [@BH §II.5], [@BP §§1.2, 3.6, 8.6], [@Hibi §III.11], [@McMSh §5.1], [@St1 §§II.3, II.6, III.6], [@St2 §3.14], [@Z §8.3] on the Dehn-Sommerville relations, and [@HJ] on matrix analysis.
Notation
========
Throughout this note, $m$ means a positive integer; all vectors are of dimension $(m+1)$, and all matrices are $(m+1)\times(m+1)$ matrices. The components of vectors as well as the rows and columns of matrices are indexed starting with zero. For a vector $\pmb{w}$, $\pmb{w}^{\top}$ denotes its transpose.
If $\Phi$ is a face system, $\#\Phi>0$, then its [*size*]{} $\operatorname{\mathit{d}}(\Phi)$ is defined by $\operatorname{\mathit{d}}(\Phi):=\max_{F\in\Phi}|F|$.
We denote the empty set by $\hat{0}$, and we use the notation $\emptyset$ to denote the family containing no sets. We have $\#\emptyset=0$, $\#\{\hat{0}\}=1$, and $$\begin{aligned}
\pmb{f}(\emptyset;m)&=\pmb{h}(\emptyset;m)=(0,0,\ldots,0)\ ,\\
\pmb{f}(\{\hat{0}\};m)&=\pmb{h}(\mathbf{2}^{[m]};m)=(1,0,\ldots,0)\
.\end{aligned}$$
$\boldsymbol{\iota}(m):=(1,1,\ldots,1)$; $\boldsymbol{\tau}(m):=(2^m,2^{m-1},\ldots,1)$.
$\mathbf{I}(m)$ is the [*identity matrix*]{}.
$\mathbf{U}(m)$ is the [*backward identity matrix*]{} whose $(i,j)$th entry is the Kronecker delta $\delta_{i+j,m}$.
$\mathbf{T}(m)$ is the [*forward shift matrix*]{} whose $(i,j)$th entry is $\delta_{j-i,1}$.
If $\boldsymbol{\mathfrak{B}}:=(\pmb{b}_0,\ldots,\pmb{b}_m)$ is a basis of $\mathbb{R}^{m+1}$ then, given a vector $\pmb{w}\in\mathbb{R}^{m+1}$, we denote by $[\pmb{w}]_{\boldsymbol{\mathfrak{B}}}:=\bigl(
\kappa_0(\pmb{w},\boldsymbol{\mathfrak{B}}),\ldots,
\kappa_m(\pmb{w},\boldsymbol{\mathfrak{B}})\bigr)\in\mathbb{R}^{m+1}$ the $(m+1)$-tuple satisfying the equality $\sum_{i=0}^m\kappa_i(\pmb{w},\boldsymbol{\mathfrak{B}})\cdot\pmb{b}_i=\pmb{w}$.
The long $f$- and $h$-vectors
=============================
We recall the properties of vectors (\[eq:6\]) and (\[eq:7\]) described in [@M1].
- The maps $\Phi\mapsto\pmb{f}(\Phi;m)$ and $\Phi\mapsto\pmb{h}(\Phi;m)$ are valuations $\mathcal{D}(m)\to\mathbb{Z}^{m+1}$ on the Boolean lattice $\mathcal{D}(m)$ of all face systems (ordered by inclusion) contained in $\mathbf{2}^{[m]}$.
- Let $\Psi\subseteq\mathbf{2}^{[m]}$ be a relative complex. $$\begin{aligned}
h_l(\Psi)&=\sum_{k=0}^l\binom{m-\operatorname{\mathit{d}}(\Psi)-1+l-k}{l-k}h_k(\Psi;m)\
,\ \ \ 0\leq l\leq\operatorname{\mathit{d}}(\Psi)\ ;\\ h_l(\Psi;m)&=(-1)^l\sum_{k=0}^l
(-1)^k\binom{m-\operatorname{\mathit{d}}(\Psi)}{l-k}h_k(\Psi)\ ,\ \ \ 0\leq l\leq m\ .\end{aligned}$$
- Let $\Phi\subseteq\mathbf{2}^{[m]}$.
- $$\begin{aligned}
h_l(\Phi;m)&=(-1)^l\sum_{k=0}^l(-1)^k\binom{m-k}{l-k}f_k(\Phi;m)\
,\\ f_l(\Phi;m)&=\sum_{k=0}^l\binom{m-k}{l-k}h_k(\Phi;m)\ ,\ \ \
0\leq l\leq m\ .\end{aligned}$$
- $$\begin{aligned}
h_0(\Phi;m)&=f_0(\Phi;m)\ ,\\
h_1(\Phi;m)&=f_1(\Phi;m)-mf_0(\Phi;m)\ ,\\
h_m(\Phi;m)&=(-1)^m\sum_{k=0}^m(-1)^k f_k(\Phi;m)\ ,\\
\pmb{h}(\Phi;m)\cdot\boldsymbol{\iota}(m)^{\top}&=f_m(\Phi;m)\ .\end{aligned}$$
- $$\pmb{h}(\Phi;m)\cdot\boldsymbol{\tau}(m)^{\top}
=\pmb{f}(\Phi;m)\cdot\boldsymbol{\iota}(m)^{\top}=\#\Phi\ .$$
- Consider the face system $$\Phi^{\star}:=\{[m]-F:\ F\in\mathbf{2}^{[m]},\ F\not\in\Phi\}$$ “dual” to the system $\Phi$.
$$\begin{aligned}
h_l(\Phi;m)+(-1)^l\sum_{k=l}^m\binom{k}{l}h_k(\Phi^{\star};m)&=\delta_{l,0}\
,\ \ \ 0\leq l\leq m\ ;\\
h_m(\Phi;m)&=(-1)^{m+1}h_m(\Phi^{\star};m)\ .\end{aligned}$$
If $\Delta$ is a complex on the vertex set $[m]$ then the complex $\Delta^{\star}$ is called its [*Alexander dual*]{}. If $\#\Delta>0$ and $\#\Delta^{\star}>0$ then $$\begin{aligned}
h_l(\Delta;m)&=0\ ,\ \ \ 1\leq l\leq m-\operatorname{\mathit{d}}(\Delta^{\star})-1\
,\\
h_{m-\operatorname{\mathit{d}}(\Delta^{\star})}(\Delta;m)&=-f_{\operatorname{\mathit{d}}(\Delta^{\star})}
(\Delta^{\star};m)\ .\end{aligned}$$
Bases, and change of basis matrices
===================================
We relate to the simplex $\mathbf{2}^{[m]}$ three pairs of bases of the space $\mathbb{R}^{m+1}$. Let $\{F_0,\ldots,
F_m\}\subset\mathbf{2}^{[m]}$ be a face system such that $|F_k|=k$, for $0\leq k\leq m$; here, $F_0:=\hat{0}$ and $F_m:=[m]$.
The first pair consists of the bases $\bigl(\
\pmb{f}(\{F_0\};m),\pmb{f}(\{F_1\};m),\ldots,$ $\pmb{f}(\{F_m\};m)\ \bigr)$ and $\bigl(\
\pmb{h}(\{F_0\};m),\pmb{h}(\{F_1\};m),\ldots,\pmb{h}(\{F_m\};m)\
\bigr)$.
The bases $\bigl(\
\pmb{f}([F_0,F_0];m),\pmb{f}([F_0,F_1];m),\ldots,\pmb{f}([F_0,F_m];m)\
\bigr)$ and $\bigl(\
\pmb{h}([F_0,F_0];m),\pmb{h}([F_0,F_1];m),\ldots,
\pmb{h}([F_0,F_m];m)\ \bigr)$ compose the second pair.
The third pair consists of the bases $\bigl(\
\pmb{f}([F_m,F_m];m),\pmb{f}([F_{m-1},F_m];m),$ $\ldots,\pmb{f}([F_0,F_m];$ $m)\ \bigr)$ and $\bigl(\
\pmb{h}([F_m,F_m];m),\pmb{h}([F_{m-1},F_m];m),\ldots,\pmb{h}([F_0,$ $F_m];m)\ \bigr)$:
- We use the notation $\operatorname{\boldsymbol{\mathfrak{S}}}_{m}$ to denote the [*standard basis*]{} $\bigl(\boldsymbol{\sigma}(i;m):\ 0\leq i\leq m\bigr)$ of $\mathbb{R}^{m+1}$, where $$\boldsymbol{\sigma}(i;m):=(1,0,\ldots,0)\cdot\mathbf{T}(m)^i\ .$$
We define a basis $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m:=\bigl(\boldsymbol{\vartheta}^{\bullet}(i;m):\
0\leq i\leq m\bigr)$ of $\mathbb{R}^{m+1}$, where $$\boldsymbol{\vartheta}^{\bullet}(i;m):=
\bigl(\vartheta^{\bullet}_0(i;m),\vartheta^{\bullet}_1(i;m),\ldots,
\vartheta^{\bullet}_m(i;m)\bigr)\in\mathbb{Z}^{m+1}\ ,$$ by $$\vartheta^{\bullet}_j(i;m):=(-1)^{j-i}\tbinom{m-i}{j-i}\ ,\ \ \
0\leq j\leq m\ .$$
- Bases $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m:=
\bigl(\boldsymbol{\varphi}^{\blacktriangle}(i;m):\ 0\leq i\leq
m\bigr)$ and $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m:=
\bigl(\boldsymbol{\vartheta}^{\blacktriangle}(i;m):\ 0\leq i\leq
m\bigr)$ of $\mathbb{R}^{m+1}$ are defined in the following way: $$\boldsymbol{\varphi}^{\blacktriangle}(i;m):=\bigl(\varphi^{\blacktriangle}_0(i;m),
\varphi^{\blacktriangle}_1(i;m),\ldots,\varphi^{\blacktriangle}_m(i;m)\bigr)
\in\mathbb{N}^{m+1}\ ,$$ where $$\varphi^{\blacktriangle}_j(i;m):=\tbinom{i}{j}\ ,\ \ \ 0\leq j\leq
m\ ,$$ and $$\boldsymbol{\vartheta}^{\blacktriangle}(i;m):=
\bigl(\vartheta^{\blacktriangle}_0(i;m),
\vartheta^{\blacktriangle}_1(i;m),\ldots,\vartheta^{\blacktriangle}_m(i;m)\bigr)
\in\mathbb{Z}^{m+1}\ ,$$ where $$\vartheta^{\blacktriangle}_j(i;m):=(-1)^{j}\tbinom{m-i}{j}\ ,\ \ \
0\leq j\leq m\ .$$
The notations $\boldsymbol{\varphi}(i;m)$ and $\boldsymbol{\vartheta}(i;m)$ were used in [@M1] instead of $\boldsymbol{\varphi}^{\blacktriangle}(i;m)$ and $\boldsymbol{\vartheta}^{\blacktriangle}(i;m)$, respectively.
- The third pair consists of bases $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m:=
\bigl(\boldsymbol{\varphi}^{\blacktriangledown}(i;m):\ 0\leq i\leq
m\bigr)$ and $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m:=
\bigl(\boldsymbol{\vartheta}^{\blacktriangledown}(i;m):\ 0\leq
i\leq m\bigr)$ of $\mathbb{R}^{m+1}$ defined as follows: $$\boldsymbol{\varphi}^{\blacktriangledown}(i;m):=
\bigl(\varphi^{\blacktriangledown}_0(i;m),
\varphi^{\blacktriangledown}_1(i;m),\ldots,
\varphi^{\blacktriangledown}_m(i;m)\bigr)\in\mathbb{N}^{m+1}\ ,$$ where $$\varphi^{\blacktriangledown}_j(i;m):=\tbinom{i}{m-j}\ ,\ \ \ 0\leq
j\leq m\ ,$$ and $$\boldsymbol{\vartheta}^{\blacktriangledown}(i;m):=
\bigl(\vartheta^{\blacktriangledown}_0(i;m),
\vartheta^{\blacktriangledown}_1(i;m),\ldots,
\vartheta^{\blacktriangledown}_m(i;m)\bigr)\in\mathbb{Z}^{m+1}\ ,$$ where $$\vartheta^{\blacktriangledown}_j(i;m):=\delta_{m-i,j}\ ,\ \ \
0\leq j\leq m\ .$$ Note that $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ is up to rearrangement the standard basis $\operatorname{\boldsymbol{\mathfrak{S}}}_{m}$.
Let $\mathbf{S}(m)$ be the change of basis matrix from $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$: [$$\mathbf{S}(m):=\begin{pmatrix}
\boldsymbol{\vartheta}^{\bullet}(0;m)\\ \vdots\\
\boldsymbol{\vartheta}^{\bullet}(m;m)
\end{pmatrix}\ ;$$ ]{} the $(i,j)$th entry of the inverse matrix $\mathbf{S}(m)^{-1}$ is $\tbinom{m-i}{j-i}$.
For any $i\in\mathbb{N}$, $i\leq m$, we have $$\begin{aligned}
\boldsymbol{\vartheta}^{\bullet}(i;m)&=
\boldsymbol{\sigma}(i;m)\cdot\mathbf{S}(m)\ ,\\
\boldsymbol{\vartheta}^{\blacktriangle}(i;m)&=
\boldsymbol{\varphi}^{\blacktriangle}(i;m)\cdot\mathbf{S}(m)\ ,\\
\boldsymbol{\vartheta}^{\blacktriangledown}(i;m)&=
\boldsymbol{\varphi}^{\blacktriangledown}(i;m)\cdot\mathbf{S}(m)\
.\end{aligned}$$
For any face system $\Phi\subseteq\mathbf{2}^{[m]}$, we have $$\begin{aligned}
\label{eq:5} \pmb{h}(\Phi;m)&= \pmb{f}(\Phi;m)\cdot\mathbf{S}(m)
=\sum_{l=0}^m
f_l(\Phi;m)\cdot\boldsymbol{\vartheta}^{\bullet}(l;m) \ ,\\
\pmb{f}(\Phi;m)&= \pmb{h}(\Phi;m)\cdot\mathbf{S}(m)^{-1}\ .\end{aligned}$$
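As an illustration (a sketch of ours, not taken from the paper), the matrix $\mathbf{S}(m)$ and its inverse can be assembled directly from the binomial entries above and used to pass between $\pmb{f}(\Phi;m)$ and $\pmb{h}(\Phi;m)$; for $m=3$ this reproduces the matrices listed in the tables below.

```python
import numpy as np
from math import comb

def S_matrix(m):
    # (i, j)th entry of S(m): (-1)^(j-i) * C(m-i, j-i), zero when j < i.
    S = np.zeros((m + 1, m + 1), dtype=int)
    for i in range(m + 1):
        for j in range(i, m + 1):
            S[i, j] = (-1) ** (j - i) * comb(m - i, j - i)
    return S

def S_inverse(m):
    # (i, j)th entry of S(m)^{-1}: C(m-i, j-i).
    Sinv = np.zeros((m + 1, m + 1), dtype=int)
    for i in range(m + 1):
        for j in range(i, m + 1):
            Sinv[i, j] = comb(m - i, j - i)
    return Sinv

m = 3
S, Sinv = S_matrix(m), S_inverse(m)
assert (S @ Sinv == np.eye(m + 1, dtype=int)).all()

# h(Phi; m) = f(Phi; m) . S(m) for the full simplex 2^[3]:
f = np.array([1, 3, 3, 1])
print(f @ S)            # [1 0 0 0]
print((f @ S) @ Sinv)   # recovers [1 3 3 1]
```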
The change of basis matrices corresponding to the bases defined above are collected in Table \[table:1\].
Representations of the long $f$- and $h$-vectors with respect to some bases
===========================================================================
If $\Phi\subseteq\mathbf{2}^{[m]}$ then we by (\[eq:5\]) have $$\pmb{f}(\Phi;m)=[\pmb{h}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m}\ ,$$ and several observations follow: $$\begin{aligned}
[\pmb{h}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m}&=
[\pmb{f}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m}\ ;\\
[\pmb{h}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m}&=
[\pmb{f}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m}\\ \nonumber
&=\pmb{h}(\Phi;m)\cdot \mathbf{U}(m)\ ;\\
[\pmb{f}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m}&=\pmb{f}(\Phi;m)\cdot
\mathbf{U}(m)\\ \nonumber
&=[\pmb{h}(\Phi;m)]_{\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m}\cdot \mathbf{U}(m)\ .\end{aligned}$$
Partitions of face systems into Boolean intervals, and the long $f$- and $h$-vectors
====================================================================================
If $$\label{eq:9}
\Phi=[A_1,B_1]\dot\cup\cdots\dot\cup[A_{\theta},B_{\theta}]$$ is a partition of a face system $\Phi\subseteq\mathbf{2}^{[m]}$, $\#\Phi>0$, into Boolean intervals $[A_k,B_k]$, $1\leq
k\leq\theta$, then we call the collection $\mathsf{P}$ of positive integers $\mathsf{p}_{ij}$ defined by $$\mathsf{p}_{ij}:=\#\{[A_k,B_k]:\ |B_k-A_k|=i,\ |A_k|=j\}>0$$ the [*profile*]{} of partition (\[eq:9\]). If $\theta=\#\Phi$ then $\mathsf{p}_{0l}=f_l(\Phi;m)$ whenever $f_l(\Phi;m)>0$. Table \[table:2\] collects the representations of the vectors $\pmb{f}(\Phi;m)$ and $\pmb{h}(\Phi;m)$ with respect to various bases.
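As a small sanity check on the profile formalism (our own sketch, assuming faces are encoded as Python sets), each interval $[A,B]$ with $|A|=j$ and $|B-A|=i$ contributes $\binom{i}{l-j}$ faces of size $l$, which is exactly the first row of Table \[table:2\]:

```python
from math import comb

def profile(intervals):
    # intervals: list of pairs (A, B) of sets with A <= B; returns {(i, j): p_ij}.
    P = {}
    for A, B in intervals:
        key = (len(B - A), len(A))
        P[key] = P.get(key, 0) + 1
    return P

def f_from_profile(P, m):
    # f_l(Phi; m) = sum_{i,j} p_ij * C(i, l - j)   (first row of Table 2).
    return [sum(p * comb(i, l - j) for (i, j), p in P.items() if 0 <= l - j <= i)
            for l in range(m + 1)]

# Partition of the simplex 2^[2] into the intervals [{}, {1}] and [{2}, {1,2}]:
intervals = [(set(), {1}), ({2}, {1, 2})]
print(f_from_profile(profile(intervals), 2))   # [1, 2, 1]
```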
Appendix: Dehn-Sommerville type relations
=========================================
The $h$-vector of a complex $\Delta$ satisfies the [*Dehn-Sommerville relations*]{} if it holds $$h_l(\Delta)=h_{\operatorname{\mathit{d}}(\Delta)-l}(\Delta)\ ,\ \ \ 0\leq l\leq
\operatorname{\mathit{d}}(\Delta)$$ or, equivalently (see, e.g., [@McMSh p. 171]), $$h_l(\Delta;m)=(-1)^{m-\operatorname{\mathit{d}}(\Delta)}h_{m-l}(\Delta;m)\ ,\ \ \
0\leq l\leq m\ .$$
We say, for brevity, that a face system $\Phi\subset\mathbf{2}^{[m]}$ is a [*DS-system*]{} if the Dehn-Sommerville type relations $$\label{eq:2} h_l(\Phi;m)=(-1)^{m-\operatorname{\mathit{d}}(\Phi)}h_{m-l}(\Phi;m)\ ,\ \
\ 0\leq l\leq m$$ hold. The systems $\emptyset$ and $\{\hat{0}\}$ are DS-systems.
If $\#\Phi>0$, then define the integer $$\label{eq:3} \eta(\Phi):=\begin{cases}|\bigcup_{F\in\Phi}F|,
&\text{if $|\bigcup_{F\in\Phi}F|\equiv \operatorname{\mathit{d}}(\Phi)\pmod{2}$,}\\
|\bigcup_{F\in\Phi}F|+1, &\text{if
$|\bigcup_{F\in\Phi}F|\not\equiv \operatorname{\mathit{d}}(\Phi)\pmod{2}$.}
\end{cases}$$ Note that, given a complex $\Delta$ with $v$ vertices, $v>0$, we have $$\eta(\Delta)=\begin{cases}v, &\text{if $v\equiv
\operatorname{\mathit{d}}(\Delta)\pmod{2}$,}\\ v+1, &\text{if $v\not\equiv
\operatorname{\mathit{d}}(\Delta)\pmod{2}$.}
\end{cases}$$
Equality (\[eq:2\]) and definition (\[eq:3\]) lead to the following observation: A face system $\Phi$ with $\#\Phi>0$ is a DS-system if and only if for any $n\in\mathbb{P}$ such that $$\label{eq:8}
\begin{split}
\eta(\Phi)&\leq n\ ,\\ n&\equiv\operatorname{\mathit{d}}(\Phi)\pmod{2}\ ,
\end{split}$$ it holds $$h_l(\Phi;n)=h_{n-l}(\Phi;n)\ ,\ \ \ 0\leq l\leq n\ ,$$ or, equivalently, $$\label{eq:11} \pmb{h}(\Phi;n)=\pmb{h}(\Phi;n)\cdot\mathbf{U}(n)\ ,$$ that is, $\pmb{h}(\Phi;n)$ is a [*left eigenvector*]{} of the $(n+1)\times(n+1)$ backward identity matrix corresponding to the eigenvalue $1$.
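In practice, condition (\[eq:11\]) amounts to checking that the long $h$-vector is a palindrome; below is a self-contained sketch of ours (repeating the small helper from the earlier snippet) applied to the boundary of the $2$-simplex, for which $\operatorname{\mathit{d}}(\Phi)=2$, $\eta(\Phi)=4$, so $n=4$ is admissible.

```python
from itertools import combinations
from math import comb

def long_h(faces, n):
    f = [0] * (n + 1)
    for F in faces:
        f[len(F)] += 1
    return [sum((-1) ** (l - k) * comb(n - k, l - k) * f[k] for k in range(l + 1))
            for l in range(n + 1)]

def is_DS(faces, n):
    # Condition (11): h(Phi; n) equals its reversal, i.e. it is a left eigenvector
    # of the backward identity matrix U(n) with eigenvalue 1.
    h = long_h(faces, n)
    return h == h[::-1]

# Boundary of the 2-simplex on {1, 2, 3}: all subsets except {1, 2, 3} itself.
boundary = [frozenset(c) for i in range(3) for c in combinations(range(1, 4), i)]
print(long_h(boundary, 4))   # [1, -1, 0, -1, 1]
print(is_DS(boundary, 4))    # True
```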
We come to the following conclusion:
Let $\Phi$ be a DS-system with $\#\Phi>0$, and let $n$ be a positive integer satisfying conditions (\[eq:8\]). Let $l\in\mathbb{N}$, $l\leq n$.
- $$\begin{aligned}
\nonumber \kappa_l\bigl(\pmb{h}(\Phi;n),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_n\bigr)&=\kappa_l\bigl(\pmb{f}(\Phi;n),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_n\bigr)\\ &=(-1)^{n-l}f_l(\Phi;n)\ ;\\
\nonumber \kappa_l\bigl(\pmb{h}(\Phi;n),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_n\bigr)&=\kappa_l\bigl(\pmb{f}(\Phi;n),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_n\bigr)\\ &=h_l(\Phi;n)=h_{n-l}(\Phi;n)\
;\\ \kappa_l\bigl(\pmb{h}(\Phi;n),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_n\bigr)&=\kappa_l\bigl(\pmb{h}(\Phi;n),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_n\bigr)\ .\end{aligned}$$
- If $\mathsf{P}$ is the profile of a partition of $\Phi$ into Boolean intervals then the following equalities hold: $$\begin{gathered}
\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^{i+j}
\binom{j}{l-i}=(-1)^n\sum_{i,j}{\mathsf
p}_{ij}\cdot\binom{i}{l-j}\ ;\\ \sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{n-i-j}{l-i}=(-1)^n\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{n-i-j}{l-j}\ ;\\ \nonumber
\sum_s\binom{s}{l}\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^j
\binom{n-i-j}{s-j}
\\ =(-1)^n\sum_s\binom{n-s}{l}\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^j
\binom{n-i-j}{s-j}\ .\end{gathered}$$
[|c|c|c|c|]{} *Change of basis matrix & *$(i,j)$th entry & *Notation & *Case $m=3$\
****
from $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ & & $\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangle}(m;m)\end{smallmatrix}\right)$ &\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ & $\binom{i}{j}$ & & $\left(\begin{smallmatrix}1&0&0&0\\ 1&1&0&0\\
1&2&1&0\\ 1&3&3&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ & & $\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangle}(m;m)\end{smallmatrix}\right)^{-1}$ &\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ & $(-1)^{i+j}\binom{i}{j}$ & & $\left(\begin{smallmatrix}1&0&0&0\\
-1&1&0&0\\ 1&-2&1&0\\ -1&3&-3&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ & & $\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)$ &\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ & $\binom{i}{m-j}$ & & $\left(\begin{smallmatrix}0&0&0&1\\ 0&0&1&1\\
0&1&2&1\\ 1&3&3&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ & & $\left(\begin{smallmatrix}\boldsymbol{\varphi}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\varphi}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)^{-1}$ &\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ & $(-1)^{m-j-i}\binom{m-i}{j}$ & & $\left(\begin{smallmatrix}-1&3&-3&1\\ 1&-2&1&0\\ -1&1&0&0\\
1&0&0&0\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ & $(-1)^j\binom{m-i}{j}$ & $\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangle}(m;m)\end{smallmatrix}\right)$ & $\left(\begin{smallmatrix}1&-3&3&-1\\ 1&-2&1&0\\ 1&-1&0&0\\
1&0&0&0\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ & $(-1)^{m-j}\binom{i}{m-j}$ & $\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangle}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangle}(m;m)\end{smallmatrix}\right)^{-1}$ & $\left(\begin{smallmatrix}0&0&0&1\\ 0&0&-1&1\\ 0&1&-2&1\\
-1&3&-3&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ & & $\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)$, or &\
& $\delta_{m-i,j}$ & $\mathbf{U}(m)$, or & $\left(\begin{smallmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\
1&0&0&0\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ & & $\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\blacktriangledown}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\blacktriangledown}(m;m)\end{smallmatrix}\right)^{-1}$ &\
[|c|c|c|c|]{} *Change of basis matrix & *$(i,j)$th entry & *Notation & *Case $m=3$\
****
from $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ & $(-1)^{j-i}\binom{m-i}{j-i}$ & $\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\bullet}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\bullet}(m;m)\end{smallmatrix}\right)$, or & $\left(\begin{smallmatrix}1&-3&3&-1\\ 0&1&-2&1\\ 0&0&1&-1\\
0&0&0&1\end{smallmatrix}\right)$\
& & $\mathbf{S}(m)$&\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ to $\operatorname{\boldsymbol{\mathfrak{S}}}_m$ & $\binom{m-i}{j-i}$ & $\left(\begin{smallmatrix}\boldsymbol{\vartheta}^{\bullet}(0;m)\\
\vdots\\
\boldsymbol{\vartheta}^{\bullet}(m;m)\end{smallmatrix}\right)^{-1}$, or & $\left(\begin{smallmatrix}1&3&3&1\\ 0&1&2&1\\ 0&0&1&1\\
0&0&0&1\end{smallmatrix}\right)$\
& & $\mathbf{S}(m)^{-1}$&\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ & $(-1)^j2^{m-j-i}\binom{m-i}{j}$ & & $\left(\begin{smallmatrix}8&-12&6&-1\\ 4&-4&1&0\\ 2&-1&0&0\\
1&0&0&0\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ & $(-1)^{m-j}2^{i+j-m}\binom{i}{m-j}$ & & $\left(\begin{smallmatrix}0&0&0&1\\ 0&0&-1&2\\ 0&1&-4&4\\
-1&6&-12&8\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ & & &\
& $(-1)^{m-j}\binom{m-i}{m-j}$ & & $\left(\begin{smallmatrix}-1&3&-3&1\\ 0&1&-2&1\\ 0&0&-1&1\\
0&0&0&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ & & &\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ & $(-1)^{m-j}\sum_{s=\max\{m-i,m-j\}}^m\binom{i}{m-s}\binom{s}{m-j}$ & & $\left(\begin{smallmatrix}-1&3&-3&1\\ -1&4&-5&2\\ -1&5&-8&4\\
-1&6&-12&8\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m$ & $(-1)^{m-j}\sum_{s=0}^{\min\{m-i,m-j\}}\binom{m-i}{s}\binom{m-s}{j}$ & & $\left(\begin{smallmatrix}-8&12&-6&1\\ -4&8&-5&1\\ -2&5&-4&1\\
-1&3&-3&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ & $\sum_{s=0}^{\min\{i,j\}}\binom{i}{s}\binom{m-s}{m-j}$ & & $\left(\begin{smallmatrix}1&3&3&1\\ 1&4&5&2\\ 1&5&8&4\\
1&6&12&8\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ & $(-1)^{i+j}\sum_{s=\max\{i,j\}}^m\binom{m-i}{m-s}\binom{s}{j}$ & & $\left(\begin{smallmatrix}8&-12&6&-1\\ -4&8&-5&1\\ 2&-5&4&-1\\
-1&3&-3&1\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ to $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ & $2^{i+j-m}\binom{i}{m-j}$ & & $\left(\begin{smallmatrix}0&0&0&1\\
0&0&1&2\\ 0&1&4&4\\ 1&6&12&8\end{smallmatrix}\right)$\
from $\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m$ to $\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m$ & $(-2)^{m-j-i}\binom{m-i}{j}$ & & $\left(\begin{smallmatrix}-8&12&-6&1\\ 4&-4&1&0\\ -2&1&0&0\\
1&0&0&0\end{smallmatrix}\right)$\
[|c|c|]{}
*$l$th component & *Expression\
**
$f_l(\Phi;m)$ & $\sum_{i,j}{\mathsf p}_{ij}\cdot\binom{i}{l-j}$\
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m\bigr)$ & $\sum_s\binom{m-s}{m-l}\sum_{i,j}{\mathsf
p }_{ij}\cdot \binom{i}{s-j} $\
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m\bigr)$ & $(-1)^l\sum_{i,j}{\mathsf p
}_{ij}\cdot(-1)^{i+j}\binom{j}{l-i}$\
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m\bigr)$ & $(-1)^{m-l}\sum_s\binom{s}{m-l}\sum_{i,j}{\mathsf
p}_{ij}\cdot\binom{i}{s-j} $\
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m\bigr)$ & $(-1)^{m-l}\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{m-i-j}{l-i}$\
$\kappa_l\bigl(\boldsymbol{\pmb{f}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m\bigr)$ & $\sum_{i,j}{\mathsf p
}_{ij}\cdot\binom{i}{m-l-j}$\
$h_l(\Phi;m)$ & $(-1)^l\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{m-i-j}{l-j}$\
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\bullet}_m\bigr)$ & $\sum_{i,j}{\mathsf
p}_{ij}\cdot\binom{i}{l-j}$\
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangle}_m\bigr)$ & $(-1)^l\sum_s\binom{s}{l}\sum_{i,j}{\mathsf p}_{ij}\cdot(-1)^j
\binom{m-i-j}{s-j}$\
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangle}_m\bigr)$ & $(-1)^l \sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^{i+j}\binom{j}{l-i}$\
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{F}}}^{\blacktriangledown}_m\bigr)$ & $(-1)^{m-l}\sum_s\binom{m-s}{l}\sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j \binom{m-i-j}{s-j}$\
$\kappa_l\bigl(\boldsymbol{\pmb{h}}(\Phi;m),
\operatorname{\boldsymbol{\mathfrak{H}}}^{\blacktriangledown}_m\bigr)$ & $(-1)^{m-l} \sum_{i,j}{\mathsf
p}_{ij}\cdot(-1)^j\binom{m-i-j}{l-i}$\
[9]{} M. Aigner, [*Combinatorial Theory*]{}, Grundlehren der Mathematischen Wissenschaften, vol. 234, Springer-Verlag, Berlin, 1979.
A.I. Barvinok, [*A Course in Convexity*]{}, Graduate Studies in Mathematics, vol. 54, American Mathematical Society, Providence, RI, 2002.
M. Beck and S. Robins, [*Computing the Continuous Discretely. Integer-point Enumeration in Polyhedra*]{}, Undergraduate Texts in Mathematics, Springer-Verlag, [*to appear*]{}.
L.J. Billera and A. Björner, [*Face numbers of polytopes and complexes*]{}, in: [*Handbook of Discrete and Computational Geometry*]{}, J.E. Goodman and J. O’Rourke (eds.) CRC Press, Boca Raton, New York, 1997, 291–310.
A. Björner, [*Topological methods*]{}, in: [*Handbook of Combinatorics*]{}, R.L. Graham, M. Grötschel and L. Lovász (eds.) [*Vol. 2*]{}, Elsevier, Amsterdam, 1995, 1819–1872.
W. Bruns and J. Herzog, [*Cohen-Macaulay Rings, Second edition*]{}, Cambridge Studies in Advanced Mathematics, vol. 39, Cambridge University Press, Cambridge, 1998.
V.M. Buchstaber and T.E. Panov,[*Toricheskie Deistviya v Topologii i Kombinatorike*]{}. (in Russian) \[[*Torus Actions in Topology and Combinatorics*]{}\] Moskovskii Tsentr Nepreryvnogo Matematicheskogo Obrazovaniya, Moscow, 2004.
T. Hibi, [*Algebraic Combinatorics on Convex Polytopes*]{}, Carslaw Publications, Glebe, Australia, 1992.
R.A. Horn and C.R. Johnson, [*Matrix Analysis*]{}, Cambridge University Press, Cambridge, 1986.
A.O. Matveev, [*Enumerating faces of complexes and valuations on distributive lattices*]{}, Discrete Math. Appl. [**10**]{} (2000) no. 4, 403–421 (translation from Diskret. Mat. [**12**]{} (2000) no. 3, 76–94.)
A.O. Matveev, [*Faces and bases: Dehn-Sommerville type relations*]{}, preprint (2004).
P. McMullen and G.C. Shephard, [*Convex Polytopes and the Upper Bound Conjecture.*]{} Prepared in collaboration with J.E. Reeve and A.A. Ball. London Mathematical Society Lecture Note Series, vol. 3, Cambridge University Press, London New York, 1971.
P. McMullen and D.W. Walkup, [*A generalized lower-bound conjecture for simplicial polytopes*]{}, Mathematika [**18**]{} (1971) 264–273.
E. Miller and B. Sturmfels, [*Combinatorial Commutative Algebra*]{}, Graduate Texts in Mathematics, Springer-Verlag, 2004, [*to appear*]{}.
R.P. Stanley, [*Combinatorics and Commutative Algebra, Second edition*]{}, Progress in Mathematics, Vol. 41, Birkhauser Boston, Inc., Boston, MA, 1996.
R.P. Stanley, [*Enumerative Combinatorics, Vol. 1, Second edition*]{}, Cambridge Studies in Advanced Mathematics, vol. 49, Cambridge University Press, Cambridge, 1997.
G.M. Ziegler, [*Lectures on Polytopes, Second edition*]{}, Graduate Texts in Mathematics, vol. 152, Springer-Verlag, New York, 1998.
[^1]: 2000 [*Mathematics Subject Classification*]{}. 13F55, 15A99.
|
---
abstract: 'With the aim of determining the statistical properties of relativistic turbulence and unveiling novel and non-classical features, we present the results of direct numerical simulations of driven turbulence in an ultrarelativistic hot plasma using high-order numerical schemes. We study the statistical properties of flows with average Mach number ranging from $\sim 0.4$ to $\sim 1.7$ and with average Lorentz factors up to $\sim 1.7$. We find that flow quantities, such as the energy density or the local Lorentz factor, show large spatial variance even in the subsonic case as compressibility is enhanced by relativistic effects. The velocity field is highly intermittent, but its power-spectrum is found to be in good agreement with the predictions of the classical theory of Kolmogorov. Overall, our results indicate that relativistic effects are able to significantly enhance the intermittency of the flow and affect the high-order statistics of the velocity field, while leaving unchanged the low-order statistics, which instead appear to be universal and in good agreement with the classical Kolmogorov theory. To the best of our knowledge, these are the most accurate simulations of driven relativistic turbulence to date.'
author:
- David Radice
- Luciano Rezzolla
title: Universality and intermittency in relativistic turbulent flows of a hot plasma
---
Introduction
============
Turbulence is a ubiquitous phenomenon in nature as it plays a fundamental role in shaping the dynamics of systems ranging from the mixture of air and oil in a car engine, up to the rarefied hot plasma composing the intergalactic medium. Relativistic hydrodynamics is a fundamental ingredient in the modeling of a number of systems characterized by high Lorentz-factor flows, strong gravity or relativistic temperatures. Examples include the early Universe, relativistic jets, gamma-ray bursts (GRBs), relativistic heavy-ion collisions and core-collapse supernovae [@Font08].
Despite the importance of relativistic hydrodynamics and the reasonable expectation that turbulence is likely to play an important role in many of the systems mentioned above, extremely little is known about turbulence in a relativistic regime. For this reason, the study of relativistic turbulence may be of fundamental importance to develop a quantitative description of many astrophysical systems. Furthermore, the comparative study of classical and relativistic turbulence can also be useful for a better understanding of classical turbulence. For instance, the study by @Cho2005 of relativistic force-free turbulence, [*i.e.* ]{}MHD turbulence in the limit where the plasma inertia and momentum are neglected, gave important insights into the understanding of strong-Alfvénic turbulence. In particular, it provided a first important confirmation of the model by @Goldreich1995, whose prediction of a $-5/3$ slope for the energy spectrum has been recently confirmed in classical MHD by [@Beresnyak2009; @Beresnyak2011a]. To this end, we have performed a series of high-order direct numerical simulations of driven relativistic turbulence of a hot plasma.
Model and method
================
We consider an idealized model of an ultrarelativistic fluid with four-velocity $u^{\mu} = W (1, v^i)$, where $W \equiv (1 -
v_iv^i)^{-1/2}$ is the Lorentz factor and $v^i$ is the three-velocity in units where $c = 1$. The fluid is modeled as perfect and described by the stress-energy tensor $$T_{\mu\nu} = (\rho + p) u_{\mu} u_{\nu} + p\, g_{\mu\nu}\,,$$ where $\rho$ is the (local-rest-frame) energy density, $p$ is the pressure, $u_{\mu}$ the four-velocity, and $g_{\mu\nu}$ is the spacetime metric, which we take to be the Minkowski one. We evolve the equations describing conservation of energy and momentum in the presence of an externally imposed Minkowskian force $F^{\mu}$, [*i.e.* ]{}$\nabla_{\nu} T^{\mu\nu} = F^{\mu}$, where the forcing term is written as $F^{\mu} = \tilde{F}(0, f^i)$. More specifically, the spatial part of the force, $f^i$, is a zero-average, solenoidal, random, vector field with a spectral distribution which has compact support in the low wavenumber part of the Fourier spectrum. Moreover, $f^{i}$, is kept fixed during the evolution and it is the same for all the models, while $\tilde{F}$ is either a constant or a simple function of time (see below for details).
![image](fig1a.eps){width="\columnwidth"}
![image](fig1b.eps){width="0.8\columnwidth"}
The time component of the forcing term, $F^0$, is set to be zero, so that the driving force is able to accelerate fluid elements without changing their total energy (in the Eulerian frame). Note that this is conceptually equivalent to the addition of a cooling term balancing the effect of the work done on the system by the driving force. On the other hand, we impose a minimum value for the energy density in the local-rest-frame, $\rho_{\rm min}$. This choice is motivated essentially by numerical reasons (the very large Lorentz factor produced can lead to unphysical point-wise values of $\rho$) and has the effect of slowly heating up the fluid. Furthermore, this floor does not affect the momentum of the fluid and only the temperature is increased. From a physical point of view, our approach mimics the fact that in the low-density regions, the constituents of the plasma are easily accelerated to very high Lorentz factors, hence emitting bremsstrahlung radiation heating up the surrounding regions. The net effect is that energy is subtracted from the driving force and converted into thermal energy of the fluid, heating it up. In general $\rho_{\rm min}$ is chosen to be two orders of magnitude smaller than the initial energy density, but we have verified that the results presented here are insensitive to the specific value chosen for $\rho_{\rm min}$ by performing simulations where the floor value is changed by up to two orders of magnitude without significant differences.
The set of relativistic-hydrodynamic equations is closed by the equation of state (EOS) $p = \frac{1}{3} \rho$, thus modelling a hot, optically-thick, radiation-pressure dominated plasma, such as the electron-positron plasma in a GRB fireball or the matter in the radiation-dominated era of the early Universe. The EOS used can be thought of as the relativistic equivalent of the classical isothermal EOS in that the sound speed is a constant, [*i.e.* ]{}$c_s^2 = 1/3$. At the same time, an ultrarelativistic fluid is fundamentally different from a classical isothermal fluid. For instance, its “inertia” is entirely determined by the temperature and the notion of rest-mass density is lost since the latter is minute (or zero for a pure photon gas) when compared with the internal one. For these reasons, there is no direct classical counterpart of an ultrarelativistic fluid and a relativistic description is needed even for small velocities.
We solve the equations of relativistic hydrodynamics in a 3D periodic domain using the high-resolution shock capturing scheme described in [@Radice2012a]. In particular, ours is a flux-vector-splitting scheme [@Toro99], using the fifth-order MP5 reconstruction [@suresh_1997_amp], in local characteristic variables [@Hawke2001], with a linearized flux-split algorithm with entropy and carbuncle fix [@Radice2012a].
Basic flow properties
=====================
Our analysis is based on the study of four different models, which we label as `A`, `B`, `C` and `D`, and which differ in the initial amplitude of the driving factor: $\tilde{F}=1, 2, 5$ for models `A`–`C`, and $\tilde{F}(t) = 10 + \frac{1}{2} t$ for the extreme model `D`. Each model was evolved using three different uniform resolutions of $128^3$, $256^3$ and $512^3$ grid-zones over the same unit lengthscale. As a result, model `A` is subsonic, model `B` is transonic and models `C` and `D` are instead supersonic. The spatial and time-averaged relativistic Mach numbers $\langle v W \rangle/(c_s W_s)$ are $0.362$, $0.543$, $1.003$ and $1.759$ for models `A`, `B`, `C` and `D`, while the average Lorentz factors are $1.038$, $1.085$, $1.278$ and $1.732$, respectively.
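For orientation, the quoted relativistic Mach numbers follow from the three-velocity as in the short script below (our own illustration), with $c_s^2=1/3$ for the ultrarelativistic EOS and $W_s$ the Lorentz factor built from $c_s$:

```python
import numpy as np

def lorentz(v):
    return 1.0 / np.sqrt(1.0 - v * v)

def relativistic_mach(v, cs=np.sqrt(1.0 / 3.0)):
    # Relativistic Mach number M = (v W) / (c_s W_s), with W the Lorentz factor
    # of the flow and W_s the Lorentz factor associated with the sound speed.
    return v * lorentz(v) / (cs * lorentz(cs))

print(relativistic_mach(0.5))   # ~0.82 (subsonic)
print(relativistic_mach(0.6))   # ~1.06 (supersonic)
```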
The initial conditions are simple: a constant energy density and a zero-velocity field. The forcing term, which is enabled at time $t = 0$, quickly accelerates the fluid, which becomes turbulent. By the time when we start to sample the data, [*i.e.* ]{}at $t = 10$ (light-)crossing times, turbulence is fully developed and the flow has reached a stationary state. The evolution is then carried out up to time $t = 40$, thus providing data for 15, equally-spaced timeslices over $30$ crossing times. As a representative indicator of the dynamics of the system, we show in the left panel of Fig. \[fig:lorentz\] the time evolution of the average Lorentz factor for the different models considered. Note that the Lorentz factor grows very rapidly during the first few crossing times and then settles to a quasi-stationary evolution. Furthermore, the average grows nonlinearly with the increase of the driving term, going from $\langle W \rangle \simeq 1.04$ for the subsonic model `A`, up to $\langle W \rangle \simeq 1.73$ for the most supersonic model `D`.
Flow quantities such as the energy density, the Mach number or the Lorentz factor show large spatial variance, even in our subsonic model. Similar deviations from the average mass density, have been reported also in classical turbulent flows of weakly compressible fluids [@Benzi2008], where it was noticed that compressible effects, leading to the formation of front-like structures in the density and entropy fields, cannot be neglected even at low Mach numbers. In the same way, relativistic effects in the kinematics of the fluid, such those due to nonlinear couplings via the Lorentz factor [@Rezzolla02], have to be taken into account even when the average Lorentz factor is small. The probability distribution functions (PDFs) of the Lorentz factor are shown in the right panel of Fig. \[fig:lorentz\] for the different models. Clearly, as the forcing is increased, the distribution widens, reaching Lorentz factors as large as $W\simeq 40$ ([*i.e.* ]{}to speeds $v\simeq
0.9997$). Even in the most “classical” case `A`, the flow shows patches of fluid moving at ultrarelativistic speeds. Also shown in Fig. \[fig:lorentz\] is the logarithm of the Lorentz factor on the $(y,z)$ plane and at $t=40$ for model `D`, highlighting the large spatial variations of $W$ and the formation of front-like structures.
Universality
============
As customary in studies of turbulence, we have analyzed the power spectrum of the velocity field $$E_{\boldsymbol{v}}(k) \equiv \frac{1}{2} \int_{|\boldsymbol{k}|=k}
| \hat{\boldsymbol{v}}(\boldsymbol{k}) |^2\, d\boldsymbol{k}\,,$$ where $\boldsymbol{k}$ is a wavenumber three-vector and $$\hat{\boldsymbol{v}}(\boldsymbol{k}) \equiv
\int_V \boldsymbol{v}(\boldsymbol{x})
e^{-2 \pi i \boldsymbol{k}\cdot\boldsymbol{x}}\, d\boldsymbol{x}\,,$$ with $V$ being the three-volume of our computational domain. A number of recent studies have analyzed the scaling of the velocity power spectrum in the inertial range, that is, in the range in wavenumbers between the lengthscale of the problem and the scale at which dissipation dominates. More specifically, @Inoue2011 has reported evidences of a Kolmogorov $k^{-5/3}$ scaling in a freely-decaying MHD turbulence, but has not provided a systematic convergence study of the spectrum. Evidences for a $k^{-5/3}$ scaling were also found by @Zhang09, in the case of the kinetic-energy spectrum, which coincides with the velocity power-spectrum in the incompressible case. Finally, @Zrake2011a has performed a significantly more systematic study for driven, transonic, MHD turbulence, but obtained only a very small (if any) coverage of the inertial range.
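For concreteness, a minimal sketch (ours, not the production code used for the simulations) of how a shell-summed spectrum of this form can be estimated from a velocity field sampled on a periodic uniform grid:

```python
import numpy as np

def velocity_power_spectrum(vx, vy, vz):
    # vx, vy, vz: components of v on an N^3 periodic grid (unit box).
    # Returns E_v(k) by summing |v_hat(k)|^2 / 2 over shells |k| ~ k.
    N = vx.shape[0]
    vhat = [np.fft.fftn(c) / N**3 for c in (vx, vy, vz)]
    power = 0.5 * sum(np.abs(c) ** 2 for c in vhat)
    k = np.fft.fftfreq(N) * N                        # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.rint(kmag).astype(int)
    E = np.bincount(shells.ravel(), weights=power.ravel(), minlength=N // 2 + 1)
    return E[: N // 2 + 1]

# Example usage on a random (non-turbulent) field, just to exercise the routine:
rng = np.random.default_rng(0)
v = [rng.standard_normal((32, 32, 32)) for _ in range(3)]
print(velocity_power_spectrum(*v)[:5])
```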
![Power spectra of the velocity field. Different lines refer to the three resolutions used and to the different values of the driving force. The spectra are scaled assuming a $k^{-5/3}$ law.[]{data-label="fig:vel_spectrum"}](fig2.eps){width="\columnwidth"}
The time-averaged velocity power spectra computed from our simulations are shown in Fig. \[fig:vel\_spectrum\]. Different lines refer to the three different resolutions used, $128^3$ (dash-dotted), $256^3$ (dashed) and $512^3$ (solid lines), and to the different values of the driving force. To highlight the presence and extension of the inertial range, the spectra are scaled assuming a $k^{-5/3}$ law, with curves at different resolutions shifted by a factor of two or four, and nicely overlapping with the high-resolution one in the dissipation region. Clearly, simulations at higher resolutions would be needed to obtain power spectra that are more accurate and have larger inertial ranges, but overall, Fig. \[fig:vel\_spectrum\] convincingly demonstrates the good statistical convergence of our code and gives strong support to the idea that the *key* prediction of the Kolmogorov model (K41) [@Kolmogorov1991a] carries over to the relativistic case. Indeed, not only does the velocity spectrum for our subsonic model `A` show a region, of about a decade in length, compatible with a $k^{-5/3}$ scaling, but this continues to be the case even as we increase the forcing and enter the regime of relativistic supersonic turbulence with model `D`. In this transition, the velocity spectrum in the inertial range, the range of lengthscales where the flow is scale-invariant, is simply “shifted upwards” in a self-similar way, with a progressive flattening of the bottleneck region, the bump in the spectrum due to the non-linear dissipation introduced by our numerical scheme. Steeper or shallower scalings, such as the Burgers one, $k^{-2}$, or a $k^{-4/3}$ one, are also clearly incompatible with our data.
These results have been confirmed in a preliminary study where we pushed our resolution for model D, the most extreme one, to $1024^3$.
All in all, this is one of our main results: the velocity power spectrum in the inertial range is *universal*, that is, insensitive to relativistic effects, at least in the subsonic and mildly supersonic cases. Note that this does *not* mean that the Kolmogorov theory is directly applicable to relativistic flows. We point out that the velocity power spectrum is *not* equal to the kinetic energy density in Fourier space, as in the classical incompressible case. This is because of the corrections to the expression of the kinetic energy due to the fluid compressibility (which is not zero) and the Lorentz factor (we recall that the relativistic kinetic energy is $T = \rho W(W-1) \simeq
\frac{1}{2}\rho v^2 + \mathcal{O}(v^4)$). For this reason, the interpretation of the velocity power spectrum requires great care. Finally, we note that already in Newtonian turbulence the velocity power spectrum is known to have large deviations from the $k^{-5/3}$ scaling for highly supersonic flows. In particular, @Kritsuk2007 reported spectra with scaling close to the Burgers one. Similar deviations could also manifest themselves in the relativistic case for higher values of the Mach number, but these regimes are currently not accessible to our code.
Intermittency
=============
Not all of the information about relativistic turbulent flows is contained in the velocity power spectrum. Particularly important in a relativistic context is the intermittency of the velocity field, that is, the local appearance of anomalous, short-lived flow features, which we have studied by looking at the parallel-structure functions of order $p$ $$\label{eq:structure.function}
S^\parallel_p(r) \equiv \big\langle |\delta_r v|^p \big\rangle,
\quad
\delta_r v = \big[
\boldsymbol{v}(\boldsymbol{x}+\boldsymbol{r}) -
\boldsymbol{v}(\boldsymbol{x})\big] \cdot \frac{\boldsymbol{r}}{r}$$ where $\boldsymbol{r}$ is a vector of length $r$ and the average is over space and time.
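A minimal sketch (ours) of how the longitudinal structure functions defined in Eq. (\[eq:structure.function\]) can be estimated on a periodic grid, taking separations along the coordinate axes:

```python
import numpy as np

def parallel_structure_function(v, p, r):
    # v: tuple (vx, vy, vz) on an N^3 periodic grid; p: order; r: separation in cells.
    # Average |delta_r v|^p over the three axis directions and all grid points.
    moments = []
    for axis in range(3):
        dv = np.roll(v[axis], -r, axis=axis) - v[axis]   # component parallel to r
        moments.append(np.mean(np.abs(dv) ** p))
    return np.mean(moments)

# Example: third-order structure function S_3(r) for a random field.
rng = np.random.default_rng(1)
v = tuple(rng.standard_normal((32, 32, 32)) for _ in range(3))
print([parallel_structure_function(v, 3, r) for r in (1, 2, 4)])
```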
![\[fig:S3\] Compensated, third-order, parallel structure function computed for the different models as functions of $r /
\Delta$. Note the very good match with the classical $S_3^\parallel \sim r$ behaviour.](fig3.eps){width="\columnwidth"}
Figure \[fig:S3\] reports the compensated, third-order, parallel structure function, $S_3^\parallel$, as functions of $r / \Delta$, where $\Delta$ is the grid spacing. Within the inertial range, classical incompressible turbulence has a precise prediction: the Kolmogorov $4/5$-law, for which $\langle (\delta_r v)^3 \rangle =
\frac{4}{5} \epsilon r$, where $\epsilon$ is the kinetic-energy dissipation rate. This translates into $S_3^\parallel \sim \epsilon
r$. As shown in the figure, the structure functions are somewhat noisy at small scales, but are consistent with the classical prediction over a wide range of lengthscales, with linear fits showing deviations of $\sim 5\%$, and an increase of $\epsilon$ with the driving force.
Although even in the classical compressible case, the $4/5$-law is not strictly valid, we can use it to obtain a rough estimate of the turbulent velocity dissipation rate [@Porter2002]. We find that $\epsilon$, as measured from $S_3^\parallel$ or directly from $\langle (\delta_r v)^3
\rangle$, grows linearly with the Lorentz factor, in contrast with the classical theory, where it is known to be independent of the Reynolds number. This is consistent with the observations that in a relativistic regime the turbulent velocity shows an exponential decay in time [@Zrake2011; @Inoue2011], as opposed to the power-law decay seen in classical compressible and incompressible turbulence. An explanation for this behaviour might be that, since the inertia of the fluid grows linearly with the Lorentz factor, an increasingly large rate of energy injection is needed to balance the kinetic energy losses when the average Lorentz factor is increased.
The scaling exponents of the parallel structure functions, $\zeta^\parallel_p$ have been computed up to $p=10$ using the extended-self-similarity (ESS) technique [@Benzi1993] and are summarized in Table \[tab:structure\]. The errors are estimated by computing the exponents without the ESS or using only the data at the final time. We also show the values as computed using the classical K41 theory, as well as using the estimates by She and Leveque (SL) [@She1994] for incompressible, [*i.e.* ]{}$\zeta^\parallel_p =
\frac{p}{9} + 2 - 2 (\frac{2}{3})^{p/3}$, and shock-dominated, [*i.e.* ]{}$\zeta^\parallel_p = \frac{p}{9} + 1 - (\frac{1}{3})^{p/3}$ [@Boldyrev2002], turbulence.
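For reference, the model exponents quoted above can be tabulated with a few lines of Python (our own helper, for comparison against Table \[tab:structure\]):

```python
def zeta_K41(p):
    # Kolmogorov 1941 prediction.
    return p / 3.0

def zeta_SL(p):
    # She-Leveque, incompressible: p/9 + 2 - 2*(2/3)^(p/3).
    return p / 9.0 + 2.0 - 2.0 * (2.0 / 3.0) ** (p / 3.0)

def zeta_Boldyrev(p):
    # Shock-dominated variant: p/9 + 1 - (1/3)^(p/3).
    return p / 9.0 + 1.0 - (1.0 / 3.0) ** (p / 3.0)

for p in range(1, 11):
    print(p, round(zeta_K41(p), 3), round(zeta_SL(p), 3), round(zeta_Boldyrev(p), 3))
```

All three models give $\zeta^\parallel_3 = 1$, as required by the $4/5$-law.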
![\[fig:velpdf\]PDFs of the velocity $v_z$ for the different models considered (solid lines). As the forcing is increased, the PDFs flatten, while constrained to be in $(-1,1)$ (shaded area). Increasingly large deviations from Gaussianity (dashed lines) appear in the relativistic regime.](fig4.eps){width="\columnwidth"}
Not surprisingly and as also observed in the classical case for high Mach number flows [@Kritsuk2007][^1], as the flow becomes supersonic, the high-order exponents tend to flatten out and be compatible with the Burgers scaling, as the most singular velocity structures become two-dimensional shock waves. $\zeta_2^\parallel$, instead, is compatible with the She-Leveque model even in the supersonic case. This is consistent with the observed scaling of the velocity power spectrum, which presents only small intermittency corrections to the $k^{-5/3}$ scaling. Previous classical studies of weakly compressible [@Benzi2008] and weakly supersonic turbulence [@Porter2002] found the scaling exponents to be in very good agreement with the ones of the incompressible case and to be well described by the SL model. This is very different from what we observe even in our subsonic model `A`, in which the exponents are significantly flatter than in the SL model, suggesting a stronger intermittency correction. This deviation is another important result of our simulations.
One non-classical source of intermittency is the genuinely relativistic constraint that the velocity field cannot be Gaussian as the PDFs must have compact support in $(-1, 1)$. This is shown by the behaviour of the PDFs of $v_z$ and plotted as solid lines in the shaded area of Fig. \[fig:velpdf\]. Clearly, as the Lorentz factor increases, the PDFs become flatter and, as a consequence, the velocity field shows larger deviations from Gaussianity (dashed lines). Stated differently, relativistic turbulence is significantly more intermittent than its classical counterpart.
Conclusions
===========
Using a series of high-order direct numerical simulations of driven relativistic turbulence in a hot plasma, we have explored the statistical properties of relativistic turbulent flows with average Mach numbers ranging from $0.4$ to $1.7$ and average Lorentz factors up to $1.7$. We have found that relativistic effects enhance significantly the intermittency of the flow and affect the high-order statistics of the velocity field. Nevertheless, the low-order statistics appear to be universal, [*i.e.* ]{}independent from the Lorentz factor, and in good agreement with the classical Kolmogorov theory.
In the future we plan to pursue a more systematic investigation of the properties of relativistic turbulent flows at higher resolution.
We thank M.A. Aloy, P. Cerdá-Durán, A. MacFadyen, M. Obergaulinger and J. Zrake for discussions. The calculations were performed on the clusters at the AEI and on the SuperMUC cluster at the LRZ. Partial support comes from the DFG grant SFB/Transregio 7 and by “CompStar”, a Research Networking Programme of the ESF.
[21]{} natexlab\#1[\#1]{}
Benzi, R., Biferale, L., Fisher, R., Kadanoff, L., Lamb, D., & Toschi, F. 2008, Physical Review Letters, 100, 234503
Benzi, R., Ciliberto, S., Tripiccione, R., Baudet, C., Massaioli, F., & Succi, S. 1993, Phys. Rev. E, 48, 29
Beresnyak, A. 2011, Physical Review Letters, 106, 075001
Beresnyak, a., & Lazarian, a. 2009, The Astrophysical Journal, 702, 1190
Boldyrev, S. 2002, The Astrophysical Journal, 569, 841
Cho, J. 2005, The Astrophysical Journal, 621, 324
Font, J. A. 2008, Living Rev. Relativ., 6, 4
Goldreich, P., & Sridhar, S. 1995, The Astrophysical Journal, 438, 763
Hawke, I. 2001, PhD thesis, University of Cambridge
Inoue, T., Asano, K., & Ioka, K. 2011, Astrophys. J., 734, 77
Kolmogorov, A. N. 1991, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 434, 9
Kritsuk, A., Norman, M., Padoan, P., & Wagner, R. 2007, The Astrophysical Journal, 665, 416
Porter, D., Pouquet, A., & Woodward, P. 2002, Phys. Rev. E, 66, 1
, D., & [Rezzolla]{}, L. 2012, Astronomy and Astrophysics, 547, A26
, L., & [Zanotti]{}, O. 2002, Phys. Rev. Lett., 89, 114501
She, Z., & Leveque, E. 1994, Phys. Rev. Lett., 72, 336
Suresh, A., & Huynh, H. T. 1997, Journal of Computational Physics, 136, 83
Toro, E. F. 1999, Riemann Solvers and Numerical Methods for Fluid Dynamics (Springer-Verlag)
, W., [MacFadyen]{}, A., & [Wang]{}, P. 2009, Astrophys. J., 692, L40
Zrake, J., & MacFadyen, A. I. 2011, AIP Conf. Proc., 1358, 102–105
Zrake, J., & MacFadyen, A. I. 2012, Astrophys. J., 744, 32
[^1]: Note however that @Kritsuk2007 also find significant deviations in $\zeta_3^\parallel$ from one, which we do not observe.
|
---
abstract: 'In the present paper we prove that the Hilbert scheme of 0-dimensional subspaces on supercurves of dimension $(1|1)$ exists and it is smooth. We show that the Hilbert scheme is not split in general.'
author:
- Mi Young Jang
title: 'Families of 0-dimensional subspaces on supercurves of dimension $(1|1)$'
---
Introduction
============
Supergeometry is a ${\mathbb{Z}}_2$-graded generalization of ordinary geometry. For references, see [@manin; @berezin; @leites]. After it was shown that the supermoduli space is not projected [@projected], the importance of establishing mathematical foundations for supermoduli spaces, (analytic) superspaces, supermanifolds, etc. has increased.
The construction for the Hilbert scheme of ordinary projective space was developed by Alexander Grothendieck [@grothendieck]. In this paper, we first show the existence of the (analytic) Hilbert scheme ${{\rm Hilb}}(S)$ of 0-dimensional subspaces on a supercurve $S$ of dimension $(1|1)$ (see \[supercurve\] for definition). This Hilbert scheme can be broken up into disjoint union ${{\rm Hilb}}(S)=\bigcup_{(p,q)}{{\rm Hilb}}^{p|q}(S)$ where ${{\rm Hilb}}^{p|q}(S)$ is a smooth superspace of dim $(p|p)$. This can be seen as an analogous result to the ordinary case that the Hilbert scheme of $p$ points on a smooth surface is smooth and has dimension $2p$ [@fogarty].
Furthermore, we use the defining equation of the Hilbert scheme to see if ${{\rm Hilb}}^{p|q}(S)$ is split or not. We show that, for any $k$, the Hilbert scheme ${{\rm Hilb}}^{1|1} \Pi \left( {\mathcal{O}}_{{\mathbb{P}^1}}(k)\right)$ is split, whereas the Hilbert scheme ${{\rm Hilb}}^{2|1} \Pi \left( {\mathcal{O}}_{{\mathbb{P}^1}}(k) \right)$ is not split. In fact, ${{\rm Hilb}}^{2|1} \Pi \left( {\mathcal{O}}_{{\mathbb{P}^1}}(k) \right)$ is not even projected. This also guarantees that any superspace containing ${{\rm Hilb}}^{2|1} \Pi \left( {\mathcal{O}}_{{\mathbb{P}^1}}(k) \right)$ is not split.
Backgrounds
===========
Supergeometry
-------------
Ordinary geometry can be generalized by supergeometry which has an additional anti-commutative part. Definitions about supergeometry can be found, for example, in Manin’s book [@manin]. In this section, we will review definitions of major terms.
A *superspace* is a pair $(S,{\mathcal{O}}_S)$ where $S$ is a topological space and ${\mathcal{O}}_S={\mathcal{O}}_{S,0}\oplus {\mathcal{O}}_{S,1}$ is a sheaf of supercommutative rings such that $(S,{\mathcal{O}}_S)$ is a locally ringed space. Let $\mathcal{J}$ be the ideal generated by the odd part ${\mathcal{O}}_{S,1}$. The bosonic space $S_{b} \subset S$ is the closed subspace $\left(S,{\mathcal{O}}_S/\mathcal{J}\right)$.
From now on, we will only consider the superspaces over ${\mathbb{C}}$.
Similar to the ordinary space, locally free sheaves on superspaces can be defined. The only difference is that they have even and odd ranks. For example, a free sheaf of rank $(p{{\,|\,}}q)$ on a superspace $S$ is ${\mathcal{O}}_S^{\,p} \oplus \Pi {\mathcal{O}}_S^{\,q}$, where $\Pi {\mathcal{O}}_S^{\,q}$ is the parity reversed bundle of ${\mathcal{O}}_S^{\,q}$.
A superspace $(S,{\mathcal{O}}_S)$ is said to be *split* if it is isomorphic to $S(S_{b},\mathcal{E}):=\left(S_{b},\wedge^\bullet \mathcal{E}^\vee \right)$, where $\mathcal{E}$ is a locally free sheaf of ${\mathcal{O}}_{S_{b}}$-modules. Let $m$ be the *dimension* of $S_b$ and let $n$ be the rank of $\mathcal{E}$. Then the dimension of the superspace $(S,{\mathcal{O}}_S)$ is $(m|n)$. We say a superspace $(S,{\mathcal{O}}_S)$ is *locally split* if for any $x\in S$ there is a neighborhood $U$ of $x$ such that $(U,{\mathcal{O}}_S|_U)$ is split.
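For instance, if ${\mathcal{E}}=L$ is a line bundle on a complex manifold $M$, then $$\wedge^\bullet L^\vee = {\mathcal{O}}_M\oplus L^\vee,$$ with $L^\vee$ placed in odd degree, so the split superspace $S(M,L)$ has dimension $(\dim M{{\,|\,}}1)$. The supercurves $\Pi\,{\mathcal{O}}_{{\mathbb{P}^1}}(k)$ considered later in this paper are split superspaces of this kind.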
For the rest of this paper, we mainly discuss analytic superspaces. One basic property of analytic superspaces is that, as for ordinary analytic spaces, we can take local coordinates.
An analytic affine superspace $${\mathbb{C}}^{m|n}=({\mathbb{C}}^m, {\mathcal{O}}_{{\mathbb{C}}^{m|n}})=S({\mathbb{C}}^m,{\mathcal{O}}_{{\mathbb{C}}^m}^{\,n})$$ is one of the simplest examples of the split superspace. Here, ${\mathcal{O}}_{{\mathbb{C}}^{m}}$ represents the sheaf of analytic functions and the structure sheaf is given by ${\mathcal{O}}_{{\mathbb{C}}^{m|n}}={\mathcal{O}}_{\,{\mathbb{C}}^m}[ \theta_1,\cdots,\theta_n ]$.
Let $U$ be an open subset of ${\mathbb{C}}^m$. For an ideal $I \subset {\mathcal{O}}_{{\mathbb{C}}^{m|n}}(U) $, we can define a closed subset $Z(I):=Z \left( I \cap {\mathcal{O}}_{\, {\mathbb{C}}^m}(U) \right) \subset {\mathbb{C}}^m$. The analytic subspace defined by $I$ on $U$ is the superspace $\left( Z(I), {\mathcal{O}}_Z:={\mathcal{O}}_U/I \right)$.
An analytic superspace $(S,{\mathcal{O}}_S)$ is a superspace which is locally isomorphic to some analytic subspace.
We say that an analytic superspace $(S,{\mathcal{O}}_S)$ is *smooth* at $x \in S$ if there is an open neighborhood $U$ of $x$ such that $(U,{\mathcal{O}}_S|_U)$ is isomorphic to an open subspace of some analytic affine superspace. An analytic superspace $(S,{\mathcal{O}}_S)$ is called smooth if it is smooth at every point in $S$.
A locally split analytic superspace $(S,{\mathcal{O}}_S)$ is called a *supermanifold* if $S_b$ is a manifold. Note that a locally split analytic superspace $(S,{\mathcal{O}}_S)$ is smooth if and only if it is a supermanifold.
\[supercurve\] A supercurve is a complex supermanifold of dimension $(1|n)$ for some non-negative integer $n$.
We will focus on analytic superspaces and, for simplicity, drop “analytic”, referring to them simply as superspaces.
Hilbert Scheme
--------------
i) Let $S$ be a superspace. The *Hilbert functor* ${\mathcal{H}}^{p|q}_{S}$ is the contravariant functor from the category $\mathfrak{S}$ of superspaces to the category of sets defined as follows:
$${\mathcal{H}}^{p|q}_{S}(B)=
\left\{
\begin{array}{c|c}
\multirow{4}{*}{ \xymatrix{
\mathcal{Z} \ar@{^{(}->}[r] \ar[d]^\pi & S \times B \ar[ld] \\ B} } & \mathcal{Z} \text{ is a closed subspace of } \\
& S \times B \text{ and } \pi_*{\mathcal{O}}_\mathcal{Z} \text{ is a locally } \\
& \text{ free ${\mathcal{O}}_B$-module of rank } (p\,|\,q) \\
\end{array}
\right\}$$
The morphism is defined by the pullback $${\mathcal{H}}^{p|q}_{S}(f)= f^*: {\mathcal{H}}^{p|q}_{S}(B) \rightarrow {\mathcal{H}}^{p|q}_{S}(C)$$ where $f:C \rightarrow B$ and $B, C \in \mathfrak{S}$.
ii) Suppose that the Hilbert functor $\mathcal{H}^{p|q}_{S}$ is representable by the superspace ${{\rm Hilb}}^{p|q}(S)$. We call this the *analytic Hilbert scheme*, abbreviated to the Hilbert scheme.
The Hilbert functor $\mathcal{H}^{1|1}_{{\mathbb{C}}^{1|1}}$ is representable by ${\mathbb{C}}^{1|1}$. $$\xymatrix{
\mathcal{Z} \ar@{^(->}[r] \ar[d]_{\pi} & {\mathbb{C}}^{1|1}_{x {{\,|\,}}\theta }\times {\mathbb{C}}^{1|1}_{a {{\,|\,}}\alpha} \ar[dl] \\
{\mathbb{C}}^{1|1}_{a{{\,|\,}}\alpha}
}$$ where the subscripts define coordinates and $\mathcal{Z}$ is defined by the ideal $(x+a + \alpha \theta)$. This can be checked directly, or as a consequence of the proof of Theorem \[hilb\].
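One way to check the first statement directly: modulo the ideal we have $x\equiv -(a+\alpha\theta)$, so, using $\theta^{2}=0$, any section $f(x)+g(x)\theta$ (with coefficients in ${\mathcal{O}}_B$) reduces to $$f(-a-\alpha\theta)+g(-a-\alpha\theta)\,\theta = f(-a)+\bigl(g(-a)-\alpha f'(-a)\bigr)\theta ,$$ and hence $\pi_*{\mathcal{O}}_\mathcal{Z}$ is the free ${\mathcal{O}}_B$-module generated by $1$ and $\theta$, i.e. it is locally free of rank $(1{{\,|\,}}1)$.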
We prove the following theorem.
\[hilb\] Let $S$ be a supercurve. Then the functor ${\mathcal{H}}^{p|q}_{S}$ is representable by the smooth superspace ${{\rm Hilb}}^{p|q} (S)$ of dimension $(p{{\,|\,}}p)$.
Obstruction class for splitting
-------------------------------
In this section, we review the definition of the obstruction class, which plays a critical role in verifying the splitness of supermanifolds [@projected].
Consider a supermanifold $S=(M,{\mathcal{O}}_S)$ and let $\mathcal{J} \subset {\mathcal{O}}_S$ be the sheaf of ideals generated by all nilpotents. Observe that $S$ is locally isomorphic to the split model $S(M,{\mathcal{E}})$, where ${\mathcal{E}}$ is defined by ${\mathcal{E}}=(\mathcal{J}/\mathcal{J}^2)^\vee$. As shown in [@projected], it induces an element $\phi \in H^1 (M, {\rm Aut}(\wedge^\bullet {\mathcal{E}}) ) $. Let $G$ be the set of automorphisms of $\wedge^\bullet {\mathcal{E}}$ which act trivially on $M$ and ${\mathcal{E}}$. Since the induced automorphism preserves $M$ and ${\mathcal{E}}$, we can say that $\phi \in H^1 (M, G ) $. Conversely, an element of $H^1(M,G)$ gives a superspace $S$, with ideal $\mathcal{J}$ generated by all nilpotents, which is locally isomorphic to $S(M,{\mathcal{E}})$ and satisfies $\mathcal{J}/\mathcal{J}^2 \simeq {\mathcal{E}}^\vee$.
Consider the filtration of $S$ $$M =S^{(0)}\subset S^{(1)}\subset \cdots \subset S^{(n)}=S$$ where $S^{(i)}=(M,{\mathcal{O}}_S/\mathcal{J}^{i+1})$ and $n=\rm{rank}{\,{\mathcal{E}}}$. Define $G^{(i)}$ to be the set of automorphisms of $S$ which are trivial on $S^{(i-1)}$ for $i=2,3,\cdots$. Note that there is an isomorphism $$G^{(i)}/G^{(i+1)} {\simeq}\, T_{(-)^i}M \otimes \wedge^{i} {\mathcal{E}}$$ where $T_{(-)^i}=T_-$ is an odd tangent space if $i$ is odd and $T_{(-)^i}=T_+$ is an even tangent space if $i$ is even. Moreover, this isomorphism induces an exact sequence $$H^1(M,G^{(i+1)}) \rightarrow H^1(M,G^{(i)}) \xrightarrow{\omega} H^1( M,T_{(-)^i}M \otimes \wedge^{i} {\mathcal{E}})$$
Start with $\phi^{(1)}:=\phi$ and define obstruction classes inductively. Suppose we have $\phi^{(i-1)} \in H^1(M,G^{(i)})$. If $\omega(\phi^{(i-1)})=0$, then there exists $\phi^{(i)} \in H^1(M,G^{(i+1)})$ such that $\phi^{(i)}$ maps to $\phi^{(i-1)}$.
The $i$-th obstruction class is defined by $$\omega_i:=\omega(\phi^{(i-1)}) \in H^1\bigl(M, T_{(-)^i}M \otimes \wedge^{i} {\mathcal{E}}\bigr).$$ Observe $G^{(2)}=G$ and $\phi^{(1)}=\phi$.
In Section 5.1, we will use the fact that if the second obstruction class $\omega_2$ does not vanish, then the superspace is not split.
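Roughly speaking, such a class arises as follows (and this is exactly how it will be exhibited in Section 5): if, on an overlap of two coordinate charts, an even coordinate $a$ glues as $$a \;\longmapsto\; f(b)+g(b)\,\beta_1\beta_2,$$ where $b$ is the even and $\beta_1,\beta_2$ are the odd coordinates of the second chart, then the nilpotent correction $g(b)\,\beta_1\beta_2\,\partial/\partial a$ defines a Čech $1$-cochain with values in $T_{+}M\otimes\wedge^{2}{\mathcal{E}}$ (in the notation above), and its cohomology class represents $\omega_2$.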
Local structure of the Hilbert schemes
======================================
The Hilbert scheme of the affine space ${\mathbb{C}}^{1| 1}$ is the basis for the construction of the 0-dimensional family on supercurves. Let $(x{{\,|\,}}\theta)$ be coordinates on ${\mathbb{C}}^{1|1}$.
\[basis\] Let $\mathcal{Y}\subset {\mathbb{C}}^{1|1}$ be a subspace such that $\dim_{\,{\mathbb{C}}}{H^0({\mathbb{C}}^{1|1},{\mathcal{O}}_\mathcal{Y})}=(p {{\,|\,}}q)$. Then $H^0({\mathbb{C}}^{1|1},{\mathcal{O}}_\mathcal{Y})$ has basis $1,x,\dots,x^{p-1},\theta, \theta x,\dots, \theta x^{q-1}$ as a ${\mathbb{C}}$-vector space.
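For example, in the model case of the monomial ideal $I=(x^p,x^q\theta)$ used in Section \[equ\], an element $f(x)+g(x)\theta$ of ${\mathcal{O}}_{{\mathbb{C}}^{1|1}}$ reduces modulo $I$ to $$\bigl(f(x)\ \mathrm{mod}\ x^{p}\bigr)+\bigl(g(x)\ \mathrm{mod}\ x^{q}\bigr)\,\theta,$$ so that $H^0({\mathbb{C}}^{1|1},{\mathcal{O}}_{\mathcal{Y}})$ has precisely the basis $1,x,\dots,x^{p-1},\theta, \theta x,\dots, \theta x^{q-1}$ and dimension $(p{{\,|\,}}q)$.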
\[inverse\] Let $X=\left(x_{ij}\right)$ be an $n \times n$ (left) invertible matrix and let $\Gamma=(\gamma_{ij})$ be an $n \times n$ matrix such that $\gamma_{ij}^{\,2}=0$ for each $i$ and $j$, then $X + \Gamma$ is (left) invertible.
\[gen\] Pick $[\mathcal{Z} \xrightarrow{\pi} B] \in \mathcal{H}^{p|q}_{{\mathbb{C}}^{1|1}}(B)$, then $\pi_*{\mathcal{O}}_{\mathcal{Z}}$ is a free ${\mathcal{O}}_{\mathcal{B}}$-module generated by $1,x,\dots,x^{p-1}, \theta, \theta x,\dots,\theta x^{q-1}$, i.e., $\pi_* {\mathcal{O}}_{\mathcal{Z}}$ is isomorphic to ${\mathcal{O}}_{\mathcal{B}}^p \oplus \Pi {\mathcal{O}}_{\mathcal{B}}^q$.
Observe that the stalk $(\pi_*{\mathcal{O}}_{\mathcal{Z}})_t$ is a free ${\mathcal{O}}_{\mathcal{B},t}$-module of rank $(p\,|\,q)$ for each $t\in B$. Let $M_{n,m}(R)=\left( a_{ij} \right)$ denote an $n \times m$ matrix, where $a_{ij}\in R$. Let $\left\{ f_i \in \left(\pi_*{\mathcal{O}}_{\mathcal{Z}} \right)_t^0 \right\}_{i=1}^p$ be even generators and let $\left\{ g_j \in \left(\pi_*{\mathcal{O}}_{\mathcal{Z}}\right)_t^1 \right\}_{j=1}^q$ be odd generators. Denote $\left(f_i\right)_{i=1}^p$, $(g_j)_{j=1}^q$, $(x^i)_{i=0}^{p-1}$ and $(x^j\theta)_{j=0}^{q-1}$ by $F$, $G$, $X$ and $X\Theta$. Then we can find $A \in M_{p,p}(({\mathcal{O}}_{B,t})^0)$, $B \in M_{p,q}(({\mathcal{O}}_{B,t})^1)$, $C \in M_{q,p}(({\mathcal{O}}_{B,t})^1)$ and $ D\in M_{q,q}(({\mathcal{O}}_{B,t})^0)$ such that
$$\begin{pmatrix} X \\ X\Theta
\end{pmatrix}
=\begin{pmatrix}
A & B\\
C & D
\end{pmatrix}
\cdot
\begin{pmatrix}F\\ G
\end{pmatrix}$$
Consider the surjection to the fiber $\mathcal{Z}_t$ at $t$ $${\mathcal{O}}_\mathcal{Z}\rightarrow {\mathcal{O}}_{\mathcal{Z}_t}\rightarrow 0$$
Then this map induces the diagram $$\xymatrix{
(\pi_*{\mathcal{O}}_\mathcal{Z})_t \ar[r]^{\phi} \ar[d]_{q_1} & (\pi_*{\mathcal{O}}_{\mathcal{Z}_t})_t \ar[d]^{q_2} \\
\cfrac{(\pi_*{\mathcal{O}}_\mathcal{Z})_t}{\mathfrak{m}_t(\pi_*{\mathcal{O}}_\mathcal{Z})_t} \ar[r]^{\widetilde{\phi}}
& \cfrac{(\pi_*{\mathcal{O}}_{\mathcal{Z}_t})_t} {\mathfrak{m}_t(\pi_*{\mathcal{O}}_{\mathcal{Z}_t})_t}
}$$ where $\mathfrak{m}_t$ is the maximal ideal of the local ring ${\mathcal{O}}_{B,t}$. Observe that $\widetilde{\phi}$ is a ${\mathbb{C}}$-linear isomorphism and, by the lemma \[basis\], $\cfrac{(\pi_*{\mathcal{O}}_{\mathcal{Z}_t})_t} {\mathfrak{m}_t(\pi_*{\mathcal{O}}_{\mathcal{Z}_t})_t}$ is generated by $1,x,\cdots,x^{p-1}$ and $ \theta, \theta x, \cdots, \theta x^{q-1}$.
Let $\overline{h}$ represent the image of $h$ by the quotient map $q_k$ and let $A=\left( a_{ij}\right)$ and $D=\left( d_{ij} \right)$. Then we have $$\overline{A}F
=
X
\text{ and }
\overline{D}
G
= X\Theta$$ where $\overline{A}=\left(\overline{a_{ij}}\right)$ and $\overline{D}=(\overline{d_{ij}})$ are invertible. By the lemma \[inverse\], $A$, $D$ and $-CA^{-1}B+D$ are invertible. Therefore, $\begin{pmatrix}A&B\\
C&D
\end{pmatrix}$ has the left inverse $\begin{pmatrix}
A^{-1}+A^{-1}B(-CA^{-1}B+D)^{-1}CA^{-1} & -A^{-1}B(-CA^{-1}B+D)^{-1} \\
-(-CA^{-1}B+D)^{-1}CA^{-1} & (-CA^{-1}B+D)^{-1}
\end{pmatrix}$
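Indeed, writing $\Sigma:=-CA^{-1}B+D$, a direct check shows that this matrix is a left inverse: $$\begin{pmatrix} A^{-1}+A^{-1}B\Sigma^{-1}CA^{-1} & -A^{-1}B\Sigma^{-1} \\ -\Sigma^{-1}CA^{-1} & \Sigma^{-1} \end{pmatrix} \begin{pmatrix}A&B\\ C&D \end{pmatrix} = \begin{pmatrix} I & A^{-1}B-A^{-1}B\Sigma^{-1}\Sigma\\ 0 & \Sigma^{-1}\Sigma \end{pmatrix} = \begin{pmatrix}I&0\\0&I\end{pmatrix},$$ where the $(1,1)$ and $(2,1)$ entries follow from the cancellation of the $\Sigma^{-1}C$ terms and the $(1,2)$ and $(2,2)$ entries use $D-CA^{-1}B=\Sigma$.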
Flattening Stratification
-------------------------
\[nakayama\] *(Nakayama’s lemma [@lam])* Let $R$ be any ring with Jacobson radical $J(R) \subset R$. For any finitely generated left $R$-module $M$, $J(R)M=M$ implies $M=0$.
Flattening stratification for superspaces can be done in a similar way to the ordinary cases. ([@flat])
*(Flattening Stratification)* \[FlatteningStratification\] Let $\mathcal{B}$ be a Noetherian superspace and ${\mathcal{F}}$ be a coherent sheaf of modules on ${\mathbb{C}}^{1|1} \times \mathcal{B}$. Suppose that the support of each fiber of the projection map $\pi: {\mathcal{F}}\rightarrow \mathcal{B}$ is zero dimensional. For each $(p\,,q) \in {\mathbb{Z}}\times {\mathbb{Z}}$, there is a locally closed subspace $\mathcal{B}_{(p,q)} \subset \mathcal{B}$ such that
i) $\pi_*{\mathcal{F}}|_{B_{(p,q)}}$ is locally free of rank $(p{{\,|\,}}q)$,
ii) $\dot{\bigcup}_{p,q} B_{(p,q)}=B$
iii) Such stratification is universal. (I.e. for any morphism $f:C \rightarrow B$, the induced map $f^*{\mathcal{F}}\rightarrow C$ is flat of rank $(p{{\,|\,}}q)$ if and only if $f$ factors through $C \rightarrow B_{(p,q)} \hookrightarrow B$)
Pick any $b \in \mathcal{B}$ such that $\dim_{k(b)} {{\mathcal{F}}_b \times_{{\mathcal{O}}_B} {{\rm Spec}\,}\ k(b) } =(p{{\,|\,}}q)$, where $k(b)$ is the residue field at $b$. By the lemma \[nakayama\], we can find some neighborhood $U$ of $b$ and the exact sequence $${\mathcal{O}}_U^{\,s}\oplus \Pi {\mathcal{O}}_U^{\,t} \xrightarrow{\sigma} {\mathcal{O}}_U^{\,p} \oplus \Pi {\mathcal{O}}_U^{\,q} \xrightarrow{\zeta} {\mathcal{F}}|_U \rightarrow 0$$
For any morphism $f: V \rightarrow U$ to the subspace $U=(U, {\mathcal{O}}_B|_U)$, we get the induced exact sequence $${\mathcal{O}}_V^{\,s} \oplus \Pi {\mathcal{O}}_V^{\,t} \xrightarrow{f^*\sigma} {\mathcal{O}}_V^{\,p} \oplus \Pi {\mathcal{O}}_V^{\,q} \xrightarrow{f^*\zeta} f^* {\mathcal{F}}\rightarrow 0$$
Note that $f^*{\mathcal{F}}$ is free of rank $(p {{\,|\,}}q)$ if and only if $f^*\sigma=0$. Let $\sigma$ be represented by the matrix $(\sigma_{ij})$. If $f^*\sigma_{ij}=0$ for all $i$ and $j$, then $f$ factors through the inclusion $U_\sigma \hookrightarrow U$ where $U_\sigma$ is the closed subspace of $U$ defined by the ideal $I_\sigma=(\sigma_{ij})$, and vice versa. Therefore, $f^*{\mathcal{F}}$ is flat over $V$ if and only if $f$ factors through $U_\sigma \hookrightarrow U$. It proves that $U_\sigma$ represents the functor $\mathcal{G}_U$ where $\mathcal{G}_U(f:V \rightarrow U)=\{ f^*{\mathcal{F}}\rightarrow V \text{ is flat of rank }(p{{\,|\,}}q) \}$, i.e. $U_\sigma$ is universal. Moreover, the universality guarantees that we can glue all $U_\sigma$’s with fixed $(p{{\,|\,}}q)$ and $B_{p,q} := \cup_\sigma U_\sigma$ satisfies the required properties.
Defining Equation for the Hilbert Scheme {#equ}
----------------------------------------
Let $\mathcal{Y} \subset {\mathbb{C}}^{1|1}$ be the subspace defined by the ideal $I=(x^p,x^q\theta)$. Consider the embedding
$$\xymatrix{
\mathcal{Y} \ar@{^{(}->}[r] \ar[d] & \widetilde{\mathcal{Y}} \ar@{^{(}->}[r] \ar[d] & {\mathbb{C}}^{1|1} \times {\mathbb{C}}^{p+q|p+q} \ar[dl]\\
{{\rm Spec}\,}{\mathbb{C}}\ar[r] & {\mathbb{C}}^{p+q|p+q} &
}$$ where $\widetilde{\mathcal{Y}}$ is the subspace defined by the ideal $$\widetilde{I}=(f:=x^p + \sum\limits_{i=0}^{p-1}{a_i x^i} + \sum\limits_{i=0}^{q-1}{\alpha_i x^i} \theta,\ g:=x^q \theta + \sum\limits_{i=0}^{q-1}{b_i x^i}\theta + \sum\limits_{i=0}^{p-1}{\beta_i x^i})$$ and $(a_0,\dots,a_{p-1},b_0,\dots,b_{q-1} {{\,|\,}}\alpha_0,\dots,\alpha_{q-1},\beta_0,\dots,\beta_{p-1})$ are coordinates on ${\mathbb{C}}^{p+q|p+q}$.
${\mathbb{C}}^{p+q|p+q}_{(p,q)}$ is isomorphic to ${\mathbb{C}}^{p|p}$.
To make a calculation easier, we need to change coordinates. First, apply the long division with the divisor $x^q+\sum_{i=0}^{q-1}b_i x^i$. $$\begin{split}
f=&(x^q+\sum_{i=0}^{q-1}b_i x^i)(x^{p-q}+\sum_{i=0}^{p-q-1}c'_i x^i)+\sum_{i=0}^{q-1}d\,'_i x^i + \sum_{i=0}^{q-1}\gamma_i x^i\theta\\
g=&(x^q + \sum_{i=0}^{q-1}b_i x^i)(\theta + \sum_{i=0}^{p-q-1}\delta_i x^i) +\sum_{i=0}^{q-1}\epsilon_i x^i
\end{split}$$
After a change of coordinates, this takes the form $$\begin{split}
f=& (x^q+\sum_{i=0}^{q-1}b_i x^i)(x^{p-q}+\sum_{i=0}^{p-q-1}a_i x^i)+\sum_{i=0}^{q-1}c_i x^i + \sum_{i=0}^{q-1}\beta_i x^i (\theta + \sum_{i=0}^{p-q-1}\alpha_i x^i)\\
g=& (x^q + \sum_{i=0}^{q-1}b_i x^i)(\theta + \sum_{i=0}^{p-q-1}\alpha_i x^i) +\sum_{i=0}^{q-1}\gamma_i x^i
\end{split}$$
Let $\mathcal{Z}$ be the restriction of $\widetilde{\mathcal{Y}}$ to ${\mathbb{C}}^{p+q|p+q}_{(p,q)}$. $$\xymatrix{
\mathcal{Y} \ar@{^{(}->}[r] \ar[d] & \mathcal{Z} \ar@{^{(}->}[r] \ar[d]^\pi & \widetilde{\mathcal{Y}} \ar[d]\\
{{\rm Spec}\,}({\mathbb{C}}) \ar@{^(->}[r] & {\mathbb{C}}^{p+q|p+q}_{(p,q)} \ar@{^{(}->}[r] & {\mathbb{C}}^{p+q|p+q}
}$$
Let $\phi : {\mathcal{O}}^{\,p}_U \oplus \Pi {\mathcal{O}}^{\,q}_U \rightarrow \pi_*{\mathcal{O}}_{\mathcal{Z}} \big{|}_U $ be the map sending $(...,A_i,...|...,\mathcal{A}_j,...)$ to $\sum_{i=0}^{p-1} A_i x^i +\sum_{j=0}^{q-1}\mathcal{A}_j x^j \theta$. As in the proof of the theorem \[FlatteningStratification\], there is an open set $U \subset{\mathbb{C}}^{p+q|p+q}$ and the exact sequence $${\mathcal{O}}_{U}^{\,s} \oplus \Pi {\mathcal{O}}_U^{\,t} \xrightarrow{\sigma}
{\mathcal{O}}^{\,p}_U \oplus \Pi {\mathcal{O}}^{\,q}_U \xrightarrow{\phi}
\pi_*{\mathcal{O}}_{\mathcal{Z}} \big{|}_U\rightarrow 0$$ such that ${\mathbb{C}}^{p+q|p+q}_{(p,q)}$ is defined by the ideal generated by the entries of $\sigma=(\sigma_{ij})$.
First of all, compute two elements of $\ker\phi$. For simplicity, denote $\sum_{i=0}^{p-q-1}a_i x^i, \sum_{i=0}^{q-1}b_i x^i,\cdots$ by $a,b,\cdots$. $$\begin{aligned}
f(&\theta + \alpha)-g(x^{p-q}+a)\\
=\,&c(\theta+\alpha)-\gamma(x^{p-q}+a)\\
=\,&(\sum_{i=0}^{q-1}c_i x^i)\theta + (\sum_{i=0}^{q-1}c_i x^i)(\sum_{i=0}^{p-q-1}\alpha_i x^i) -\sum_{i=0}^{q-1}\gamma_i x^i(x^{p-q}+\sum_{i=0}^{p-q-1}a_i x^i)\\
\bigskip
g(&\theta + \alpha)\\
=\,&\gamma(\theta + \alpha)\\
=\,&(\sum_{i=0}^{q-1}\gamma_i x^i)\theta + (\sum_{i=0}^{q-1}\gamma_i x^i)(\sum_{i=0}^{p-q-1}\alpha_i x^i)\end{aligned}$$
Hence, we find two elements of the kernel $$h:=((c_0\alpha_0-a_0\gamma_0, \cdots ,\gamma_{q-1},\; \overbrace{0,\cdots,0}^{p-q}\;),(c_0 , \cdots , c_{q-1}))$$ and $$k:=((\gamma_0 \alpha_0 , \cdots,\gamma_{q-1}\alpha_{p-q-1},\; \overbrace{0,\cdots , 0}^{q}\;), (\gamma_0 , \cdots , \gamma_{q-1}
))$$
Since $ {\mathbb{C}}^{p+q|p+q}_{(p,q)}$ is contained in $\mathcal{H}:= Z\left(\left(c_i,\gamma_i\right)_{i=0}^{q-1}\right) \subset {\mathbb{C}}^{p+q|p+q}$, we can shrink ${\mathbb{C}}^{p+q|p+q}$ to $\mathcal{H}$ and repeat the same process.
Then there is another short exact sequence and an open set $U $ $${\mathcal{O}}^{\,s'}_U \oplus \Pi {\mathcal{O}}^{\,t'}_U \xrightarrow{\sigma_{\mathcal{H}}}
{\mathcal{O}}^{\,p}_U \oplus \Pi {\mathcal{O}}^{\,q}_U \xrightarrow{\phi_\mathcal{H}} \pi_* {\mathcal{O}}_\mathcal{Z} \rightarrow 0$$
Pick an element in the kernel $$\begin{aligned}
\sum&_{i=0}^{p-1}A_i x^i + \theta \sum_{i=0}^{q-1}B_i x^i \\
=& C f + D g \\
=& C (x^q+b) (x^{p-q}+a)+C \beta \alpha + C\beta\theta
+D \theta(x^q+b) + D \alpha(x^q+b)\end{aligned}$$ where $A_i,B_j \in \Gamma(U,{\mathcal{O}}_{{\mathbb{C}}^{p+q|p+q}})$ and $C,D \in \Gamma\left({\mathbb{C}}^{1|1}\times U,{\mathcal{O}}_{{\mathbb{C}}^{1|1}\times {\mathbb{C}}^{p+q|p+q}} \right)$.
Then we get $$\begin{split}\label{ker1}
\sum_{i=0}^{p-1}A_i x^i= C(x^q+\sum_{i=0}^{q-1}b_i x^i)(x^{p-q}+\sum_{i=0}^{p-q-1}a_i x^i)
\qquad\qquad\qquad \quad\qquad\\ \qquad \qquad\qquad
+C(\sum_{i=0}^{q-1}\beta_i x^i) (\sum_{i=0}^{p-q-1}\alpha_i x^i)
+D (\sum_{i=0}^{p-q-1}\alpha_i x^i)(x^q+\sum_{i=0}^{q-1}b_i x^i)
\end{split}$$ $$\label{ker2}
\sum_{i=0}^{q-1}B_i x^i=C(\sum_{i=0}^{q-1}\beta_i x^i)+D(x^q+\sum_{i=0}^{q-1}b_i x^i)
\qquad \qquad\qquad\qquad \qquad$$
By comparing the coefficient of $x^p$ in \[ker1\], we see that $C=0$. Similarly, from \[ker2\] we get $D=0$. Therefore, $A_i=B_i=0$ for all $i$.
Therefore, $\phi$ is an isomorphism and ${\mathbb{C}}^{p+q|p+q}_{(p,q)}=\mathcal{H}$ is defined by the ideal $\left( \sigma_{ij}\right)$ where $$\sigma=
\begin{pmatrix}
c_0 \alpha_0 -a_0 \gamma_0 & \cdots & 0 & \vline & c_0 & \cdots & c_{q-1}\\
\gamma_0 \alpha_0 & \cdots & 0 & \vline & \gamma_0 & \cdots & \gamma_{q-1}
\end{pmatrix}$$ I.e., $\left( \sigma_{ij}\right) =\left( c_0,\cdots,c_{q-1}, \, \gamma_0,\cdots, \gamma_{q-1} \right)$.
Moreover, ${\mathbb{C}}^{p+q|p+q}_{(p,q)} \simeq {\mathbb{C}}^{p|p}$.
\[local\] ${\mathbb{C}}^{p|p}$ represents the Hilbert functor $\mathcal{H}^{p|q}_{{\mathbb{C}}^{1|1}}$.
Pick any flat family in $\mathcal{H}^{p|q}_{{\mathbb{C}}^{1|1}}(B)$. $$\xymatrix{
\mathcal{Y} \ar@{^{(}->}[r] \ar[dr]_p & {\mathbb{C}}^{1|1} \times B \ar[d] \\
& B
}$$ By the lemma \[basis\], $\mathcal{Y}$ is defined by an ideal $$\left(x^p +\sum_{i=0}^{p-1}c_i x^i + \sum_{i=0}^{q-1} \gamma_i x^i \theta,\, x^q \theta + \sum_{i=0}^{q-1}d_i x^i\theta + \sum_{i=0}^{p-1}\delta_i x^i \right)$$ where $c_i,d_i \in \left(H^0(B,{\mathcal{O}}_B)\right)^0, \gamma_i,\delta_i \in \left(H^0(B,{\mathcal{O}}_B)\right)^1$. Then there is a natural map $B \rightarrow {\mathbb{C}}^{p+q|p+q}$ and this map factors through ${\mathbb{C}}^{p+q|p+q}_{(p,q)}$ since the map $p$ is flat. Observe that $p$ is the pull back of $\pi$ and such a map is unique.
From now on, we will fix coordinate $$(a_0,\cdots,a_{p-q-1},b_0,\cdots,b_{q-1} {{\,|\,}}\alpha_0,\cdots,\alpha_{p-q-1},\beta_0,\cdots,\beta_{q-1})$$ on ${{\rm Hilb}}^{p|q}({\mathbb{C}}^{1|1}) \simeq {\mathbb{C}}^{p|p}$ as $$\begin{aligned}
\left(
f= (x^q+\sum_{i=0}^{q-1}b_i x^i)(x^{p-q}+\sum_{i=0}^{p-q-1}a_i x^i) + \sum_{i=0}^{q-1}\beta_i x^i (\theta + \sum_{i=0}^{p-q-1}\alpha_i x^i) \right.\\
\left.
g= (x^q + \sum_{i=0}^{q-1}b_i x^i)(\theta + \sum_{i=0}^{p-q-1}\alpha_i x^i) \right)\end{aligned}$$
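For orientation, in the case $(p{{\,|\,}}q)=(2{{\,|\,}}1)$, which is the case relevant to the non-splitness analysis below, these coordinates read explicitly $${{\rm Hilb}}^{2|1}({\mathbb{C}}^{1|1}) \simeq {\mathbb{C}}^{2|2}_{a_0,b_0{{\,|\,}}\alpha_0,\beta_0}, \qquad \Bigl(\, f=(x+b_0)(x+a_0)+\beta_0(\theta+\alpha_0),\ \ g=(x+b_0)(\theta+\alpha_0) \,\Bigr),$$ and this is the local form of the ideals appearing in the charts of ${{\rm Hilb}}^{2|1}(\Pi V)$ in Section 5.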
Families of 0-dimensional subspaces on supercurves
==================================================
Let $S$ be a smooth supercurve. By applying the theorem \[local\] to a suitable representable open cover of $\mathcal{H}^{p|q}_S$, we can show the representability of the Hilbert functor $\mathcal{H}^{p|q}_S$. Note that $\left( {{\rm Hilb}}^{p|q}(S) \right)_{red} ={{\rm Hilb}}^p(S_{red})$ and hence the finiteness and Hausdorff conditions hold automatically.
*Proof of the Theorem \[hilb\].*
Let $U=\{U_i\}_{i=1}^r$ be a set of $r$ disjoint open subsets of $S$ such that each $U_i$ is isomorphic to some nonempty open subset of ${\mathbb{C}}^1$. For such $U$, we can define an open subfunctor $$\mathcal{H}^{p|q}_{S,U} := \coprod_{ \substack {\sum p_i=p \\ \sum q_i =q} } \dot{\bigcup_i}\; \mathcal{H}^{p_i|q_i}_{U_i}$$
Observe the following facts.
: $\mathcal{H}^{p|q}_S = \bigcup_{U} \mathcal{H}^{p|q}_{S,U}$.
: Each $\mathcal{H}^{p|q}_{S,U}$ is an open subfunctor of $\mathcal{H}^{p|q}_S$ and representable by the smooth superspace of dimension $(p|p)$.
Therefore, the Hilbert functor $\mathcal{H}^{p|q}_S$ is representable by a dimension $(p|p)$ smooth superspace.
For the ordinary Hilbert scheme of points, the Hilbert scheme ${{\rm Hilb}}^{4}({\mathbb{C}}^{3})$ is not smooth. We can check this by verifying the non-smoothness of ${{\rm Hilb}}^{4}({\mathbb{C}}^{3})$ at $I=m^2=(x,y,z)^2$. In my PhD thesis, I’ll deal with the smoothness or non-smoothness of the Hilbert scheme ${{\rm Hilb}}^{p|q}({\mathbb{C}}^{1|2})$. It turns out that ${{\rm Hilb}}^{p|q}({\mathbb{C}}^{1|2})$ is not smooth in certain cases.
Non-splitness of the Hilbert scheme
===================================
In the previous sections, we not only showed the existence of the Hilbert schemes but also found their local structure. As an application, we now check the splitness of the Hilbert scheme.
Consider a line bundle $V= {\mathcal{O}}_{{\mathbb{P}^1}}(k)$ on ${{\mathbb{P}^1}}$. The supermanifold $\Pi V$ is a smooth supercurve.
Observe that $\left( {{\rm Hilb}}^{1|1}(\Pi V) \right)_b = {{\mathbb{P}^1}}$ has standard affine open cover ${{\mathbb{P}^1}}= U_0 \cup U_1$ and we can assign affine coordinates on each $U_i$ $$\Pi V |_{U_0} \simeq {\mathbb{C}}^{1|1}_{x,\theta}$$ $$\Pi V |_{U_1} \simeq {\mathbb{C}}^{1|1}_{y,\psi}$$
Then we have ${{\rm Hilb}}^{1|1}(\Pi V)|_{U_0}\simeq {\mathbb{C}}^{1|1}_{a,\alpha}$ and ${{\rm Hilb}}^{1|1}(\Pi V)|_{U_1} \simeq {\mathbb{C}}^{1|1}_{b,\beta}$ , from the Theorem \[local\]. From the already known relations $x=a+\alpha \theta$, $y=b+\beta \psi$, $y=1/x$, $\psi=\theta/x^k$ and $b=\frac{1}{a}$ on the intersection $U_0 \cap U_1$, we can compute the transition map $\beta = -a^{k-2} \alpha$. Therefore, ${{\rm Hilb}}^{1|1}(\Pi V) = \Pi W$ where $W={\mathcal{O}}(-k +2)={\mathcal{O}}(2) \otimes V^\vee$ and ${{\rm Hilb}}^{1|1}(\Pi V)$ is split.
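For completeness, the transition map $\beta=-a^{k-2}\alpha$ follows from a one-line computation (using $\theta^2=0$): $$y=\frac1x=\frac{1}{a+\alpha\theta}=\frac1a-\frac{\alpha}{a^{2}}\,\theta ,\qquad \psi=\frac{\theta}{x^{k}}=\frac{\theta}{a^{k}} ,$$ so comparing with $y=b+\beta\psi=\frac1a+\frac{\beta}{a^{k}}\,\theta$ gives $\beta a^{-k}=-\alpha a^{-2}$, i.e. $\beta=-a^{k-2}\alpha$.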
Let $V={\mathcal{O}}_{{\mathbb{P}^1}}(k)$ be a line bundle on ${{\mathbb{P}^1}}$. We will show non-splitness of the Hilbert scheme ${{\rm Hilb}}^{2|1} \left(\Pi V \right)$. Note that the bosonic part of ${{\rm Hilb}}^{2|1}\left( \Pi V \right)$ is ${{\mathbb{P}^1}}\times {{\mathbb{P}^1}}$. We can see this simply by modding out by the odd part.
Let $\Delta \subset {{\mathbb{P}^1}}\times {{\mathbb{P}^1}}$ be the diagonal. Let $U_{ij}=U_i \times U_j \subset {{\mathbb{P}^1}}_{[z_0,z_1]}\times {{\mathbb{P}^1}}_{[w_0,w_1]}$ be an open subset, where $U_i$ is defined by $z_i \neq 0$ and $U_j$ is defined by $w_j\neq 0$. Then ${{\mathbb{P}^1}}\times {{\mathbb{P}^1}}$ has another open cover $${{\mathbb{P}^1}}\times {{\mathbb{P}^1}}= U_{00} \cup \left( U_{10} - \Delta \right) \cup \left( U_{01} - \Delta \right) \cup U_{11}$$
Define $V_1:=U_{00}$, $V_2:= U_{10}-\Delta$,$V_3:=U_{01}-\Delta$ and $V_4:=U_{11}$.
Define $p_{10}$ and $p_{01}$ to be the projections to the reduced parts $$p_{10}: {{\rm Hilb}}^{1|1}(\Pi V|_{U_1}) \times {{\rm Hilb}}^{1|0} (\Pi V|_{U_0}) \rightarrow U_1 \times U_0 \subset {{\mathbb{P}^1}}\times {{\mathbb{P}^1}}$$ $$p_{01}: {{\rm Hilb}}^{1|1}(\Pi V|_{U_0}) \times {{\rm Hilb}}^{1|0} (\Pi V|_{U_1}) \rightarrow U_0 \times U_1 \subset {{\mathbb{P}^1}}\times {{\mathbb{P}^1}}$$
Then we can define a pullback $\Delta^* := p^* \Delta$, for each $p=p_{10},p_{01}$.
First, observe that there are natural inclusion maps $${{\rm Hilb}}^{1|1}(\Pi V |_{U_1}) \times {{\rm Hilb}}^{1|0}(\Pi V|_{U_0}) - \Delta^*
\stackrel{\sim}{\rightarrow} {{\rm Hilb}}^{2|1}(\Pi V)|_{V_2} \hookrightarrow {{\rm Hilb}}^{2|1}(\Pi V)$$ $${{\rm Hilb}}^{1|1}(\Pi V |_{U_0}) \times {{\rm Hilb}}^{1|0}(\Pi V|_{U_1}) - \Delta^*
\stackrel{\sim}{\rightarrow} {{\rm Hilb}}^{2|1}(\Pi V)|_{V_3} \hookrightarrow {{\rm Hilb}}^{2|1}(\Pi V)$$
From the above inclusions, we can easily see that the Hilbert scheme ${{\rm Hilb}}^{2|1}(\Pi V)$ can be covered by four open subspaces $${{\rm Hilb}}^{2|1}(\Pi V)|_{V_1} \cup {{\rm Hilb}}^{2|1}(\Pi V)|_{V_2} \cup {{\rm Hilb}}^{2|1}(\Pi V)|_{V_3} \cup {{\rm Hilb}}^{2|1}(\Pi V)|_{V_4}$$
To make this argument complete, we need to glue all open subsets. Let us start with gluing $V_1$ and $V_3$. On each open set $U_i$, we can trivialize and assign coordinates of $\Pi V$. $$\begin{aligned}
\Pi V|_{U_0} \simeq &\; {\mathbb{C}}^{1|1}_{x,\theta}\\
\Pi V|_{U_1} \simeq&\; {\mathbb{C}}^{1|1}_{y,\psi}\\\end{aligned}$$
Assign coordinates induced from the Section \[equ\] $$\begin{aligned}
{{\rm Hilb}}^{2|1}(\Pi V) \Big{|}_{V_3} \simeq&\; {{\rm Hilb}}^{1|1}(\Pi V|_{U_0}) \times {{\rm Hilb}}^{1|0}(\Pi V |_{U_1}) -\Delta^* \\
\simeq & \; {\mathbb{C}}^{1|1}_{c_1{{\,|\,}}\gamma_1} \times {\mathbb{C}}^{1|1}_{c_2{{\,|\,}}\gamma_2} - \widetilde{\Delta} \\
{{\rm Hilb}}^{2|1}(\Pi V)|_{V_1} \simeq &\; {\mathbb{C}}^{2|2}_{a_1,a_2 {{\,|\,}}\alpha_1,\alpha_2}\end{aligned}$$ where $\widetilde{\Delta}$ is defined by $c_1c_2=1$.
On the intersection $V_1 \cap V_3$, we have $c_2\neq 0$ and identities $y=\frac{1}{x}$ and $\psi =\frac{\theta}{x^k}$. Compute the gluing map ${\mathbb{C}}^{1|1}_{c_1{{\,|\,}}\gamma_1} \times {\mathbb{C}}^{1|1}_{c_2 {{\,|\,}}\gamma_2} - \Delta \rightarrow {\mathbb{C}}^{2|2}_{a_1,a_2 {{\,|\,}}\alpha_1,\alpha_2}$ to be the isomorphism induced by the following calculation $$\label{glue}
\begin{split}
&\left( (c_1|\, \gamma_1),(c_2|\,\gamma_2) \right) \\
&\ \ \mapsto \left< x+c_1+\gamma_1\theta \right> \times \left< y+c_2, \psi + \gamma_2 \right>\\
&\ \ \mapsto \left< (x+c_1+\gamma_1\theta )(y+c_2),(x+c_1+\gamma_1\theta )(\psi+\gamma_2) \right>\\
&\ \ =\left< (x+c_1+\gamma_1 \theta)(x+\frac{1}{c_2}),(x+c_1+\gamma_1\theta)(\theta+\frac{\gamma_2}{(-c_2)^k}) \right> \\
&\ \ = \left< \left(x+ c_1 - \gamma_1\gamma_2(-c_2)^{-k}\right) (x+c_2^{-1})
+ \gamma_1 (c_2^{-1}-c_1) ( \theta+\gamma_2(-c_2)^{-k}),\right. \\
&\left. \qquad\qquad\qquad\qquad\qquad\qquad \quad\
\left(x+ c_1- \gamma_1 \gamma_2(-c_2)^{-k}\right) (\theta + \gamma_2(-c_2)^{-k})\right>\\
&\ \ \mapsto \left(c_1 - \gamma_1 \gamma_2 (-c_2)^{-k},\, \frac{1}{c_2} \, \bigg{|} \, \gamma_1\left(\frac{1}{c_2}-c_1\right),\, \gamma_2(-c_2)^{-k}\right)
\end{split}$$
One can similarly compute gluing maps on each intersection $V_i \cap V_j$ for all $i$ and $j$, and easily check the transitivity.
Let $W$ be the vector bundle on $\left( {{\rm Hilb}}^{2|1} \Pi V \right)_{b}$ defined by $W^\vee = \mathcal{J}/\mathcal{J}^2$, where $\mathcal{J}$ is the ideal sheaf of ${{\rm Hilb}}^{2|1} \Pi V$ generated by nilpotents. To check the non-splitness of ${{\rm Hilb}}^{2|1} \Pi V$, it is enough to find the obstruction class $ \omega_2=\omega(\varphi^{(1)}) \in {\rm H}^1({{\mathbb{P}^1}}\times {{\mathbb{P}^1}},\mathcal{T}_{{{\mathbb{P}^1}}\times {{\mathbb{P}^1}}} \otimes \wedge ^2 W^\vee) $ and check that it does not vanish ([@projected]).
Since $\wedge ^2 W^\vee$ is a line bundle on ${{\mathbb{P}^1}}\times {{\mathbb{P}^1}}$, there are some $a$ and $b$ such that $\wedge ^2 W^\vee \simeq {\mathcal{O}}(a,b)$.
From the computation (\[glue\]), we know that the transition map on $V_1 \cap V_3$ is $$\begin{split} \label{on13}
a_1 &\mapsto c_1 - \gamma_1\gamma_2(-c_2)^{-k}\\
a_2 &\mapsto \frac{1}{c_2} \\
\alpha_1 &\mapsto \gamma_1(\cfrac{1}{c_2}-c_1)\\
\alpha_2 &\mapsto \gamma_2(-c_2)^{-k}
\end{split}$$
Assign coordinates, $${{\rm Hilb}}^{2|1}(\Pi V) \big{|}_{V_2} \simeq {\mathbb{C}}^{1|1}_{b_1|\beta_1}\times {\mathbb{C}}^{1|1}_{b_2|\beta_2} -\Delta^*$$ For $b$ and $\beta$’s, we have equations $$x+b_2=0, \ \theta + \beta_2=0 \text{ and }
y+b_1+ \beta_1 \psi=0$$ On $V_1 \cap V_2$, by using the identities $xy=1$ and $\psi=\theta/x^k$, we get $$\left< y+b_1+ \beta_1 \psi \right> = \left< x +{b_1}^{-1} - \beta_1(-b_1)^{k-2}\theta \right>$$ Then we can compute that $$\left< \left( x +{b_1}^{-1} - \beta_1(-b_1)^{k-2}\theta \right),\left(x+b_2,\theta+\beta_2 \right) \right>$$ corresponds to the ideal $$\begin{aligned}
\left< ( x+ \frac{1}{b_1} + \beta_1(-b_1)^{k-2} \beta_2 )(x+ b_2) - \beta_1(-b_1)^{k-2}(b_2-\frac{1}{b_1})(\theta + \beta_2),\right.\\ \left.
(x+ \frac{1}{b_1} + \beta_1(-b_1)^{k-2}\beta_2)( \theta + \beta_2) \right> \end{aligned}$$
By comparing the above ideal with $$\left< (x + a_1)(x + a_2) + \alpha_1(\theta + \alpha_2), \, (x+a_1)(\theta + \alpha_2) \right>$$ we can check that the transition map is $$\begin{split}\label{on12}
a_1 &\mapsto \frac{1}{b_1} + \beta_1 \beta_2 (-b_1)^{k-2} \\
a_2 &\mapsto b_2 \\
\alpha_1 &\mapsto - \beta_1(-b_1)^{k-2}(b_2-\frac{1}{b_1})\\
\alpha_2 &\mapsto \beta_2
\end{split}$$
Transition maps for $V_{24}:=V_2\cap V_4$ and $V_{34}:=V_3\cap V_4$ can be computed from transition maps for $V_{13}$ and $V_{12}$ by changing variables.
For $a$ and $b$, where $\wedge ^2 W^\vee \simeq {\mathcal{O}}(a,b)$, we have the following lemma.
$a=k-3$ and $b=-k-1$
Restrict $\wedge^2W^\vee $ to ${{\mathbb{P}^1}}\times \{0\}$. Then the transition map on $V_1 \cap V_2$ gives the transition map on ${{\mathbb{P}^1}}\times \{0\} \simeq {{\mathbb{P}^1}}$. Change coordinates on $V_2$ by $\beta_1(b_1b_2-1) \mapsto \beta_1$, then the transition map (\[on12\]) gives us $\alpha_1\alpha_2 = \beta_1\beta_2(-b_1)^{k-3}$ and $a=k-3$. To find $b$, we need to restrict the line bundle to $\{0\} \times {{\mathbb{P}^1}}$. Then a transition map on $V_2 \cap V_4$ gives $$\begin{aligned}
\delta_1 &\mapsto \frac{\beta_1}{b_2}\\
\delta_2 &\mapsto -(-b_2)^{-k}\beta_2\end{aligned}$$ Note that, by setting $b_1=0$, this transition map can be derived from the transition map of ${{\rm Hilb}}^{2|1}(\Pi V)$ on $V_2 \cap V_4$. From the transition map, we get $\delta_1 \delta_2 = - \beta_1 \beta_2 (-b_2)^{-k-1}$ and $b=-k-1$.
Let $V$ be the line bundle ${\mathcal{O}}_{{\mathbb{P}^1}}(k)$ on ${{\mathbb{P}^1}}$. For any $k$, the Hilbert scheme ${{\rm Hilb}}^{2|1} \Pi V$ is non-split.
It is enough to show that the obstruction class $\Psi \in H^1({{\mathbb{P}^1}}\times {{\mathbb{P}^1}},\mathcal{T}\otimes \wedge^2 W^\vee)$ defined by ${{\rm Hilb}}^{2|1} \Pi V$ is non-zero.
1. On $V_{12}:= V_1 \cap V_2$, the transition map (\[on12\]) defines $$\Psi_{12}= - \cfrac{\alpha_1\alpha_2}{a_2-a_1}\ \cfrac{\partial}{\partial a_1}$$
2. On $V_{13}:= V_1 \cap V_3$, the transition map (\[on13\]) gives (the second equality is checked just after this list) $$\Psi_{13}
= -(-c_2)^{-k}\gamma_1\gamma_2 \ \cfrac{\partial}{\partial a_1}
=-\frac{\alpha_1\alpha_2 }{a_2-a_1}\ \cfrac{\partial}{\partial a_1}$$
3. On $V_{23} := V_2 \cap V_3$, we have $\Psi_{23}=0$ because $V_{23} \subset V_{12}\cap V_{13}$.
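For completeness, the second equality in item 2 above can be checked directly from (\[on13\]): $$\frac{\alpha_1\alpha_2}{a_2-a_1}=\frac{\gamma_1\gamma_2(-c_2)^{-k}\bigl(\frac1{c_2}-c_1\bigr)}{\frac1{c_2}-c_1+\gamma_1\gamma_2(-c_2)^{-k}}=\gamma_1\gamma_2(-c_2)^{-k},$$ since the nilpotent term in the denominator is annihilated by the factor $\gamma_1\gamma_2$ in the numerator.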
Now, we need to show that $\Psi$ is non-zero.
Suppose that there are $\sigma_i$’s such that $\Psi_{ij}=\sigma_j-\sigma_i$ on each $V_{ij}$. Then we can find $f(\frac{z_1}{z_0},\frac{w_1}{w_0}) \in k\left[\frac{z_1}{z_0}, \frac{w_1}{w_0} \right]$, $g(\frac{z_0}{z_1},\frac{w_1}{w_0}) \in k\left[ \frac{z_0}{z_1},\frac{w_1}{w_0} \right]$ and $h(\frac{z_1}{z_0},\frac{w_0}{w_1}) \in k\left[ \frac{z_1}{z_0},\frac{w_0}{w_1}\right]$ such that $$\begin{aligned}
\sigma_1=f\left( \frac{z_1}{z_0},\frac{w_1}{w_0} \right)\alpha_1\alpha_2\ \cfrac{\partial}{\partial(\frac{z_1}{z_0})}
+f'\left( \frac{z_1}{z_0},\frac{w_1}{w_0} \right)\alpha_1\alpha_2\ \cfrac{\partial}{\partial(\frac{w_1}{w_0})} \\
\sigma_2= g\left(\frac{z_0}{z_1},\frac{w_1}{w_0}\right) \beta_1 \beta_2\ \cfrac{\partial}{\partial(\frac{z_0}{z_1})}
+g'\left(\frac{z_0}{z_1},\frac{w_1}{w_0}\right) \beta_1 \beta_2\ \cfrac{\partial}{\partial(\frac{w_1}{w_0})}\\
\sigma_3= h\left( \frac{z_1}{z_0},\frac{w_0}{w_1} \right) \gamma_1 \gamma_2\ \cfrac{\partial}{\partial (\frac{z_1}{z_0})}
+h'\left( \frac{z_1}{z_0},\frac{w_0}{w_1} \right) \gamma_1 \gamma_2\ \cfrac{\partial}{\partial (\frac{w_0}{w_1})}\end{aligned}$$
Observe that $$\begin{aligned}
\Psi_{12}
=&-(-\frac{z_0}{z_1})^{k-2}\beta_1\beta_2 \cfrac{\partial}{\partial (\frac{z_1}{z_0})} \\
=&\ - f \left( \frac{z_1}{z_0}, \frac{w_1}{w_0} \right) \left( b_2-\frac{1}{b_1} \right)(-b_1)^{k-2}
\beta_1\beta_2 \cfrac{\partial}{\partial (\frac{z_1}{z_0})} \\
&\qquad\qquad
+g\left( \frac{z_0}{z_1}, \frac{w_1}{w_0} \right) \left( \frac{z_1}{z_0} \right)^2
\beta_1 \beta_2 \cfrac{\partial}{\partial (\frac{z_1}{z_0})}
+( \cdots )\cfrac{\partial}{\partial(\frac{w_1}{w_0})}\end{aligned}$$ Therefore, we have $$-\left( -\frac{z_0}{z_1} \right)^k = -g\left( \frac{z_0}{z_1},\frac{w_1}{w_0} \right)
+ f\left( \frac{z_1}{z_0}, \frac{w_1}{w_0} \right)
\left( \frac{w_1}{w_0}-\frac{z_1}{z_0} \right) \left( - \frac{z_0}{z_1} \right)^k
\label{20}$$
Similarly, $\Psi_{13}$ gives $$-\left( -\frac{w_1}{w_0} \right)^k = h \left( \frac{z_1}{z_0}, \frac{w_0}{w_1} \right)
-f\left( \frac{z_1}{z_0}, \frac{w_1}{w_0} \right)\left( \frac{w_1}{w_0} -\frac{z_1}{z_0} \right) \left( -\frac{w_1}{w_0}\right)^k
\label{21}$$
Also, $ \Psi_{23}$ gives us $$h\left( \frac{z_1}{z_0} ,\frac{w_0}{w_1} \right) -g\left( \frac{z_0}{z_1},\frac{w_1}{w_0} \right) \left(- \frac{w_1}{w_0} \right)^k \left( -\frac{z_1}{z_0} \right)^k =0
\label{22}$$
Finally, we will derive a contradiction for any $k$.
1. If $k>0$, then $g \left( \frac{z_0}{z_1}, \frac{w_1}{w_0} \right) \cdot \left(-\frac{w_1}{w_0} \right)^k \left( - \frac{z_1}{z_0}\right)^k $ has a term with $w_0$ in the denominator whenever $g\neq 0$. To make the equation (\[22\]) true, $g$ and $h$ must be zero. However, the equation (\[21\]) then implies that $$f\left( \frac{z_1}{z_0}, \frac{w_1}{w_0} \right) \cdot \left( \frac{w_1}{w_0} -\frac{z_1}{z_0} \right) =-1$$ which is a contradiction.
2. If $k<0$, then $g \left( \frac{z_0}{z_1}, \frac{w_1}{w_0} \right) \cdot \left(-\frac{w_1}{w_0} \right)^k \left( - \frac{z_1}{z_0}\right)^k $ has $z_1$ in the denominator whenever $g \neq 0$. In a similar way to the case $k>0$, we can derive a contradiction.
3. If $k=0$, the equation (\[22\]) becomes $h\left( \frac{z_1}{z_0} ,\frac{w_0}{w_1} \right) =g\left( \frac{z_0}{z_1},\frac{w_1}{w_0} \right)$. Therefore, $h\left( \frac{z_1}{z_0} ,\frac{w_0}{w_1} \right) =g\left( \frac{z_0}{z_1},\frac{w_1}{w_0} \right) =c$ for some constant $c$. Then $$(\ref{20}) \Rightarrow f\left( \frac{z_1}{z_0}, \frac{w_1}{w_0} \right) \cdot \left( \frac{w_1}{w_0} -\frac{z_1}{z_0} \right)
-c = -1.$$ The only possible case is $f=0$ and $c=1$. Plugging $f=0$ and $h=1$ into (\[21\]), we get a contradiction.
Hence, the obstruction class $\Psi$ is nonzero.
[99]{}
Manin, Yuri I. Gauge Field Theory And Complex Geometry. Berlin: Springer, 1997.
Berezin, F. A. Introduction To Superanalysis. Dordrecht: D. Reidel Pub. Co., 1987.
Leites, Dmitrii Aleksandrovich. “Introduction to the theory of supermanifolds.” Russian Mathematical Surveys 35, no. 1 (1980): 1.
Donagi, Ron, and Edward Witten. “Supermoduli space is not projected.” String-Math 2012, Vol. 90, American Mathematical Society, 2015 (Donagi, Ron, Sheldon Katz, Albrecht Klemm, and David Morrison, eds.).
Grothendieck, Alexander. “Techniques de construction et théorèmes d’existence en géométrie algébrique IV: Les schémas de Hilbert.” Séminaire Bourbaki 6 (1960): 249-276.
Fogarty, John. “Algebraic families on an algebraic surface.” American Journal of Mathematics 90, no. 2 (1968): 511-521.
Lam, T. Y. A First Course In Noncommutative Rings. New York: Springer-Verlag, 1991.
Topiwala, Pankaj, and Jeffrey M. Rabin. “The super GAGA principle and families of super Riemann surfaces.” Proceedings of the American Mathematical Society (1991): 11-20.
Hartshorne, Robin. Algebraic Geometry. New York: Springer-Verlag, 1977.
Nitsure, Nitin. “Construction of Hilbert and Quot schemes.” arXiv preprint math/0504590 (2005).
---
abstract: 'We present a Friedmann-Robertson-Walker quantum cosmological model in the presence of a Chaplygin gas and a perfect fluid for the early and late time epochs. In this work, we consider the perfect fluid as an effective potential and apply Schutz’s variational formalism to the Chaplygin gas, which recovers the notion of time. These give rise to a Schrödinger-Wheeler-DeWitt equation for the scale factor. We use the eigenfunctions to construct wave packets and study the time dependent behavior of the expectation value of the scale factor using the many-worlds interpretation of quantum mechanics. We show that, contrary to the classical case, the expectation value of the scale factor avoids the singularity at the quantum level. Moreover, this model predicts that the expansion of the Universe is accelerating at late times.'
author:
- |
P. Pedram[^1], S. Jalalzadeh[^2],\
[Department of Physics, Shahid Beheshti University, Evin, Tehran 19839, Iran]{}
title: Quantum FRW cosmological solutions in the presence of Chaplygin gas and perfect fluid
---
*PACS*: 98.80.Qc, 04.40.Nr, 04.60.Ds
Introduction {#sec1}
============
Supernova Ia (SNIa) observations show that the expansion of the Universe is accelerating [@Riess:1998cb], contrary to Friedmann-Robertson-Walker (FRW) cosmological models with non-relativistic matter and radiation. Cosmic microwave background radiation (CMBR) data [@Spergel:2003cb; @2a] also suggest that the expansion of our Universe is in an accelerated state, which is referred to as the “dark energy” effect [@3a]. The cosmological constant, $\Lambda$, as the vacuum energy, can be responsible for this evolution by providing a negative pressure [@3b; @3c]. Unfortunately, the observed value of $\Lambda$ is $120$ orders of magnitude smaller than the one computed from field theory models [@3b; @3c]. Quintessence is an alternative that considers a dynamical vacuum energy [@Wetterich:fm], involving one or two scalar fields, some with potentials justified by supergravity theories [@Brax:1999yv]. However, the fine-tuning problem of these models, which arises from the cosmic coincidence issue, has no satisfactory solution.
The Chaplygin gas model is an interesting proposal [@Kamenshchik], describing a transition from a Universe filled with dust-like matter to an accelerating expanding stage. This model was later generalized in Ref. [@Kamenshchik; @A]. The generalized Chaplygin gas model is described by a perfect fluid obeying an exotic equation of state [@Kamenshchik; @A] $$p=-\frac{A}{\rho ^{\alpha }}, \label{cgi1}$$ where $A$ is a positive constant and $0<\alpha \leq 1$. The standard Chaplygin gas [@Kamenshchik] corresponds to $\alpha =1$. A number of publications [@Bento; @3rp; @2rp; @Fabris; @C; @Ogawa; @NewBD; @18a; @20a; @21a; @23a; @27a; @Rev1; @Rev2; @DySy; @Jackiw; @setare1; @setare2; @setare3] and reviews [@Rev1; @Rev2] studying Chaplygin gas cosmological models have already appeared in the literature. The Chaplygin gas can be obtained from the string Nambu-Goto action in light-cone coordinates [@Jackiw]. Since string theory applies, in principle, at very high energies, where quantum effects are important in the early Universe, a quantum cosmological study of the Chaplygin gas is also well founded.
Recently, a quantum mechanical description of a FRW model with a generalized Chaplygin gas has been discussed in Ref. [@Buahmadi] in order to retrieve explicit mathematical expressions for the different quantum mechanical states and determine the transition probabilities towards an accelerated stage. Moreover, the quantization of the FRW model in the presence of a Chaplygin gas has been discussed in Ref. [@chap]. There, we considered the Chaplygin gas as the matter content and discussed the early time behavior of the expectation value of the scale factor through the application of Schutz’s formalism. In this paper, aside from the Chaplygin gas, which is coupled to gravity and has the advantage of furnishing a variable connected to matter that can be identified with time, we also include the perfect fluid in this scenario and investigate the analytical solutions in both the early and late time Universes. Schutz’s formalism [@11; @12] gives dynamics to the matter degrees of freedom in interaction with the gravitational field. Using proper canonical transformations, at least one conjugate momentum operator associated with matter appears linearly in the action integral. Therefore, a Schrödinger-like equation can be obtained with the matter variable playing the role of time. The application of Schutz’s formalism to Stephani and FRW perfect fluid cosmological models has been discussed in Refs. [@PLB; @pedramCQG2; @FRW]. Note that our approach is in principle different from that of Monerat *et al.* [@monerat2], where the dynamical variable is associated with the perfect fluid instead of the Chaplygin gas and numerical methods are used to obtain the time evolution of an initial wave packet. There is considerable evidence that the early Universe is dominated by radiation. Therefore, a natural setting for quantum cosmology is the one where radiation has the predominant role [@ref1]. On the other hand, the Chaplygin gas behaves like non-relativistic matter at early times (see the following section). This seems to be in contradiction with our knowledge of the baby Universe. According to [@ref2], inflation can be accommodated within the generalized Chaplygin gas scenario. Hence, the way adopted to avoid this inconsistency is that the radiation dominated phase is followed by a Chaplygin dominated period, so that we have the so-called Chaplygin inflation [@ref2]. Also, it would be more suitable to consider the field theory representation of the Chaplygin gas [@ref3] to describe quantum cosmology. In this way, the Chaplygin gas can be viewed as a modification of gravity, as was first pointed out in [@ref3]. Also, the authors of [@ref4] have recently shown that the Chaplygin gas model has a geometrical explanation within the context of brane world theory for any $\alpha$. Consequently, in these models the equation $$\begin{aligned}
\label{ref}
\rho
=\left[A+\frac{B}{a(t)^{3(\alpha+1)}}\right]^{\frac{1}{1+\alpha}},\end{aligned}$$ is a consequence of stress-energy conservation for a scalar field on the brane [@ref3], or of the conservation of induced dark matter on the brane [@ref4; @ref5]. Here, $a(t)$ is the scale factor of the Universe and $B$ is a positive integration constant. Therefore, it is suggestive to view the contribution of the Chaplygin gas to the stress-energy tensor as a brane-induced modification of gravity. In this article, we use the fluid description of the Chaplygin gas and, for the Lagrangian formalism, the corresponding pressure. Consequently, if we rely on the model described in [@ref4], we will have covariance in our model.
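The two regimes used below can be read off directly from this expression: for small $a$ the second term dominates, while for large $a$ the density approaches a constant, $$\rho\simeq \frac{B^{\frac{1}{1+\alpha}}}{a^{3}},\quad p\simeq 0 \qquad (\textrm{small } a),\qquad\qquad \rho\simeq A^{\frac{1}{1+\alpha}},\quad p\simeq -A^{\frac{1}{1+\alpha}} \qquad (\textrm{large } a),$$ i.e. dust-like behavior at early times and an effective cosmological constant at late times.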
The paper is organized as follows. In Sec. \[sec2\], the quantum cosmological model with a Chaplygin gas as a portion of the matter content is constructed in Schutz’s formalism [@11; @12] for the early and late time Universes. Then the Schrödinger-Wheeler-DeWitt (SWD) equation in minisuperspace is obtained to quantize the model under the action of a perfect fluid effective potential. The wave function depends on the scale factor $a$ and on the canonical variable associated with the Chaplygin gas, which in Schutz’s variational formalism plays the role of time $T$. We separate the wave function into two parts, one depending only on the scale factor and the other depending on time. The time dependent part of the solution is $e^{iEt}$, where $E$ is the energy. In Sec. \[sec3\], we construct wave packets from the eigenfunctions and compute the time-dependent expectation values of the scale factor to investigate the existence of singularities at the quantum level. Moreover, we present some analytical solutions in both the early and late time epochs. In Sec. \[sec4\], we present our conclusions.
The Model {#sec2}
=========
The action for gravity plus Chaplygin gas in Schutz’s formalism is written as $$\begin{aligned}
S= \int_Md^4x\sqrt{-g}\, R + 2\int_{\partial
M}d^3x\sqrt{h}\, h_{ab}\, K^{ab}+ \int_Md^4x\sqrt{-g}\,\, p_f +
\int_Md^4x\sqrt{-g}\,\, p_c,\label{action}\end{aligned}$$ Here, $K^{ab}$ is the extrinsic curvature and $h_{ab}$ is the induced metric over the three-dimensional spatial hypersurface, which is the boundary $\partial M$ of the four dimensional manifold $M$. We choose units such that the factor $16\pi G$ becomes equal to one. $p_f$ and $p_c$ denote the perfect fluid and Chaplygin gas pressures, respectively. Note that, according to [@11], the above action is equivalent to the usual Hawking-Ellis formalism for the perfect fluid description [@ref7]. The perfect fluid satisfies the barotropic equation of state $$\begin{aligned}
p_f=w\rho_f,\quad\quad w\leq1.\end{aligned}$$ The first two terms were first obtained in [@7] and the last two terms of (\[action\]) represent the matter contribution to the total action. In Schutz’s formalism [@11; @12] the fluid’s four-velocity can be expressed in terms of five potentials $\Phi$, $\zeta$, $\beta$, $\theta$ and $S$ $$u_\nu = \frac{1}{\mu}(\Phi_{,\nu} + \zeta\beta_{,\nu} + \theta S_{,\nu})$$ where $\mu$ is the specific enthalpy. $S$ is the specific entropy, and the potentials $\zeta$ and $\beta$ are connected with rotation, which is absent in models of the Friedmann-Robertson-Walker (FRW) type. The variables $\Phi$ and $\theta$ have no clear physical meaning. The four-velocity also satisfies the normalization condition $$u^\nu u_\nu = -1.$$ The FRW metric $$ds^2 = - N^2(t)dt^2 + a^2(t)g_{ij}dx^idx^j,$$ can be inserted in the action (\[action\]), where $N(t)$ is the lapse function and $g_{ij}$ is the metric on the constant-curvature spatial section. Following the thermodynamic description of Ref. [@14], the basic thermodynamic relations take the form $$\begin{aligned}
% \nonumber to remove numbering (before each equation)
\rho_c &=& \rho_0[1+\Pi], \quad h=1+\Pi+p_c/\rho_0, \\ \nonumber
\tau dS &=&
d\Pi+p_c\,d(1/\rho_0),\\
&=&\frac{(1+\Pi)^{-\alpha}}{1+\alpha}d\left[(1+\Pi)^{1+\alpha}+\frac{A}{\rho_0^{1+\alpha}
}\right].\end{aligned}$$ It then follows that to within a factor $$\begin{aligned}
% \nonumber to remove numbering (before each equation)
\tau &=& \frac{(1+\Pi)^{-\alpha}}{1+\alpha}, \\
S &=& (1+\Pi)^{1+\alpha}+\frac{A}{\rho_0^{1+\alpha}}.\end{aligned}$$ Therefore, the equation of state takes the form $$p_c=-A\left[\frac{1}{A}\left(1-\frac{\,\,h^{\frac{1+\alpha}{\alpha}}}{S^{1/\alpha}}\right)\right]^{\frac{1+\alpha}{\alpha}}.$$ The energy density and particle number density are, respectively, $$\begin{aligned}
% \nonumber to remove numbering (before each equation)
\rho_c &=& \left[\frac{1}{A}\left(1-\frac{\,\,h^{\frac{1+\alpha}{\alpha}}}{S^{1/\alpha}} \right) \right]^{\frac{-1}{\,\,1+\alpha}}, \\
\rho_0 &=& \frac{\rho+p}{h},\end{aligned}$$ where $h=(\dot{\Phi}+\theta\dot{S})/N$. After dropping the surface terms, the final reduced action takes the form $$\begin{aligned}
S = \int dt\biggr\{-6\frac{\dot a^2a}{N} + 6kNa -N a^3 \rho_f -N a^3
A\left[\frac{1}{A}\left(1-\frac{\,\,(\dot{\Phi}+\theta\dot{S})^{\frac{1+\alpha}{\alpha}}}{N^{\frac{1+\alpha}{\alpha}}
S^{1/\alpha}}\right)\right]^{\frac{1+\alpha}{\alpha}}\biggr\}.\end{aligned}$$ The reduced action may be further simplified using canonical methods [@14], resulting in the super-Hamiltonian $$\label{superH}
{\cal H} = - \frac{p_a^2}{24a} -6ka +a^3 \rho_f +\left(S
p_{\Phi}^{1+\alpha}+A a^{3(1+\alpha)} \right)^{\frac{1}{1+\alpha}},$$ where $p_a= -12{\dot aa}/{N}$ and $p_\Phi =\frac{\displaystyle
\partial{\cal L}}{\displaystyle \partial \dot{\Phi}}\,$. However, an analytical quantum mechanical treatment of this FRW minisuperspace with the above Hamiltonian does not seem feasible. Therefore, it requires some approximation. We study the Chaplygin gas expression in the early and late time limits, namely for small scale factors $S
p_{\Phi}^{1+\alpha}\gg A a^{3(1+\alpha)}$ [@Buahmadi; @chap] and large scale factors $S p_{\Phi}^{1+\alpha}\ll A a^{3(1+\alpha)}$, separately. So for the early Universe, we can use the following expansion $$\begin{aligned}
\big(S p_{\Phi}^{1+\alpha}+ A a^{3(1+\alpha)}\big)
^{\frac1{1+\alpha}}\approx S^{\frac{1}{1+\alpha}} p_{\Phi}\bigg[
1+\frac1{1+\alpha}\frac{Aa^{3(\alpha+1)}}{S p_{\Phi}^{1+\alpha}}
+\frac12\frac1{1+\alpha}\left( \frac1{1+\alpha}-1\right)
\frac{A^{2}}{S^2
p_{\Phi}^{2(1+\alpha)}}a^{6(\alpha+1)}+\ldots\bigg].\end{aligned}$$ Hence, up to the leading order, the super-Hamiltonian takes the form $${\cal H} = - \frac{p_a^2}{24a}-6ka +a^3
\rho_f+S^{\frac{1}{1+\alpha}} p_{\Phi}.$$ The following additional canonical transformations $$\begin{aligned}
T =-(1+\alpha)p_\Phi^{-1} S^{\frac{\alpha}{1+\alpha}}p_S, \quad
\quad p_T =S^{\frac{1}{1+\alpha}} p_\Phi,\end{aligned}$$ and use of the explicit form of the energy density of the perfect fluid $\rho_f=\frac{\displaystyle B}{\displaystyle a^{3(1+w)}}$, simplify the super-Hamiltonian to $${\cal H} = - \frac{p_a^2}{24a} -6ka +Ba^{-3w}+
p_T,\label{EqHamiltonian}$$ where $B$ is a constant and the momentum $p_T$ is the only remaining canonical variable associated with matter. It appears linearly in the super-Hamiltonian. The parameter $k$ defines the curvature of the spatial section, taking the values $0, 1, - 1$ for a flat, positive-curvature or negative-curvature Universe, respectively.
The classical dynamics is governed by the Hamilton equations, derived from Eq. (\[EqHamiltonian\]) and Poisson brackets as $$\left\{
\begin{array}{llllll}
\dot{a} =&\{a,N{\cal H}\}=-\frac{\displaystyle Np_{a}}{\displaystyle 12a} ,\\
& \\
\dot{p_{a}} =&\{p_{a},N{\cal H}\}=- \frac{N}{24a^2}p_a^2+6Nk+3wNBa^{-3w-1}, \\
& \\
\dot{T} =&\{T,N{\cal H}\}=N\, ,\\
& \\
\dot{p_{T}} =&\{p_{T},N{\cal H}\}=0\, .\\
& \\
\end{array}
\right. \label{4}$$ We also have the constraint equation ${\cal H} = 0$. Choosing the gauge $N=1$, we have the following solutions for the system $$\begin{aligned}
\label{class1}
T&=&t,\\\label{class2} p_T&=&\textrm{const.},\\\label{class3}
\ddot{a}&=&-\frac{\dot
a^2}{2a}-\frac{k}{2a}-\frac{1}{4}wBa^{-3w-2},\\\label{class4}
0&=&-6a\dot a^2-6k a +Ba^{-3w}+\,p_T.\end{aligned}$$ The WD equation in minisuperspace can be obtained by imposing the standard quantization conditions on the canonical momenta ($p_a=-i\frac{\displaystyle
\partial}{\displaystyle \partial a}$, $p_T=-i\frac{\displaystyle
\partial}{\displaystyle \partial T}$ ) and demanding that the super-Hamiltonian operator annihilate the wave function ($\hbar =1$) $$\label{sle} \frac{\partial^2\Psi}{\partial a^2} -
(144ka^2-24Ba^{1-3w})\Psi - i24a\frac{\partial\Psi}{\partial t} = 0.$$ In this equation according to (\[class1\]), $T=t$ corresponds to the time coordinate. As discussed in [@nivaldo; @15], in order for the Hamiltonian operator ${\hat H}$ to be self-adjoint the inner product of any two wave functions $\Phi$ and $\Psi$ must take the form $$\label{inner}
(\Phi,\Psi) = \int_0^\infty a\,\Phi^*\Psi da,$$ On the other hand, the wave functions should satisfy the following boundary conditions $$\label{boundary} \Psi(0,t) = 0
\quad \mbox{or} \quad \frac{\partial\Psi (a,t)}{\partial
a}\bigg\vert_{a = 0} = 0.$$ The SWD equation (\[sle\]) can be solved by separation of variables as follows $$\psi(a,t) = e^{iEt}\psi(a), \label{11}$$ where the $a$ dependent part of the wave function $\psi(a)$ satisfies $$\label{sle2} -\psi''(a) +(144 ka^2-24Ba^{1-3w})\psi(a)
=24Ea\,\psi(a),$$ and the prime means derivative with respect to $a$.
Now, we consider the late time Universe, when $S p_{\Phi}^{1+\alpha}\ll A
a^{3(1+\alpha)}$. Using the expression $$\begin{aligned}
\big(S p_{\Phi}^{1+\alpha}+ A a^{3(1+\alpha)}\big)
^{\frac1{1+\alpha}}\approx A^{\frac{1}{1+\alpha}} a^3\bigg[
1+\frac1{1+\alpha}\frac{S p_{\Phi}^{1+\alpha}}{ Aa^{3(\alpha+1)}}
+\frac12\frac1{1+\alpha}\left( \frac1{1+\alpha}-1\right) \frac{S^2
p_{\Phi}^{2(1+\alpha)}}{A^{2}a^{6(\alpha+1)}}+\ldots\bigg],\end{aligned}$$ up to the first order, the super-Hamiltonian (\[superH\]) takes the form $${\cal H} = - \frac{p_a^2}{24a}-6ka +a^3
\rho_f+A^{\frac{1}{1+\alpha}}
a^3+\frac{A^{\frac{\alpha}{1+\alpha}}}{1+\alpha}a^{-3\alpha}S
p_{\Phi}^{1+\alpha},$$ The following additional canonical transformations $$\begin{aligned}
\hspace{-5mm}T
=-(1+\alpha)A^{-\frac{\alpha}{1+\alpha}}p_{\Phi}^{-(1+\alpha)}p_S,
\,\, p_T =\frac{A^{\frac{\alpha}{1+\alpha}}}{1+\alpha}S
p_{\Phi}^{1+\alpha},\end{aligned}$$ simplify the super-Hamiltonian to $${\cal H} = - \frac{p_a^2}{24a} -6ka +Ba^{-3w}+A^{\frac{1}{1+\alpha}}
a^3+ a^{-3\alpha} p_T.\label{EqHamiltonian-b}$$ The classical dynamics is governed by the Hamilton equations, derived from Eq. (\[EqHamiltonian-b\]) and the Poisson brackets as $$\left\{
\begin{array}{llllll}
\dot{a} =&\{a,N{\cal H}\}=-\frac{\displaystyle Np_{a}}{\displaystyle 12a} ,\\
& \\
\dot{p_{a}} =&\{p_{a},N{\cal H}\}=- \frac{N}{24a^2}p_a^2+6Nk+3wNBa^{-3w-1}\\
&\\&
-3NA^{\frac{1}{1+\alpha}}a^2+3\alpha N \,a^{-3\alpha-1}p_T, \\
& \\
\dot{T} =&\{T,N{\cal H}\}=Na^{-3\alpha}\, ,\\
& \\
\dot{p_{T}} =&\{p_{T},N{\cal H}\}=0\, .\\
& \\
\end{array}
\right. \label{4-b}$$ We also have the constraint equation ${\cal H} = 0$. Choosing the gauge $N=a^{3\alpha}$, we have the following solutions for the system $$\begin{aligned}
\label{class1b}
T&=&t,\\
p_T&=&\textrm{const.},\\
\ddot{a}&=&(3\alpha-\frac{1}{2})\frac{\dot
a^2}{a}-\frac{k}{2}a^{6\alpha-1}-\frac{1}{4}wBa^{6\alpha-3w-2}
+\frac{1}{4}A^{\frac{1}{1+\alpha}}a^{6\alpha+1}-\frac{1}{4}\alpha
p_T a^{3\alpha-2},\\
0&=&-6a^{-6\alpha+1}\dot a^2-6k a +Ba^{-3w} +A^{\frac{1}{1+\alpha}}
a^3+a^{-3\alpha}\,p_T.\end{aligned}$$ It is important to note that these equations predict an accelerating Universe for late times. For large values of the scale factor we can simplify the above equations and find the acceleration parameter $$q=\frac{a\ddot{a}}{\dot{a}^2}=3\alpha-1,$$ which is positive for $\alpha>1/3$. Now, imposing the standard quantization conditions on the canonical momenta and demanding that the super-Hamiltonian operator annihilates the wave function, we are led to SWD equation in minisuperspace ($\hbar =1$) $$\label{sle-b} \frac{\partial^2\Psi}{\partial a^2} -
(144ka^2-24Ba^{1-3w}-24A^{\frac{1}{1+\alpha}} a^4)\Psi -
i24a^{1-3\alpha}\frac{\partial\Psi}{\partial t} = 0 .$$ Here, according to (\[class1b\]), $T=t$ corresponds to the time coordinate. Demanding that the Hamiltonian operator ${\hat H}$ to be self-adjoint, the inner product of any two wave functions $\Phi$ and $\Psi$ must take the form [@nivaldo; @15] $$(\Phi,\Psi) = \int_0^\infty a^{1-3\alpha}\,\Phi^*\Psi da.$$ The SWD equation (\[sle-b\]) can be solved by separation of variables as follows $$\psi(a,t) = e^{iEt}\psi(a), \label{11-b}$$ where the $a$ dependent part of the wave function $\psi(a)$ satisfies $$\begin{aligned}
\label{sle2-b} -\psi''(a) +\left(144 ka^2-24Ba^{1-3w}
-24A^{\frac{1}{1+\alpha}} a^4\right)\psi(a)
=24Ea^{1-3\alpha}\,\psi(a),\end{aligned}$$ and the prime means derivative with respect to $a$. Note that the effective Chaplygin gas term ($24A^{\frac{1}{1+\alpha}}a^4$) plays the role of a positive cosmological constant. In particular, when $\alpha=1/3$ and $w=1/3$, this equation reduces to that of the FRW model with a positive cosmological constant and radiation, which has been studied in Ref. [@monerat].
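For instance, in this special case eq. (\[sle2-b\]) becomes $$-\psi''(a) +\left(144 ka^2-24B-24A^{\frac{3}{4}} a^4\right)\psi(a) =24E\,\psi(a),$$ which has the form of the radiation-dominated FRW eigenvalue equation with a positive cosmological-constant term proportional to $A^{3/4}$.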
Results {#sec3}
=======
In this Section we first study the issue of singularity avoidance in quantum cosmology in the early Universe and then present some analytical solutions for both the early and late time Universes.
For $k = 0$ the time-independent Wheeler-DeWitt equation (\[sle2\]), in the dust dominated Universe ($w=0$), reduces to $$\label{eq-dust} \psi'' + 24(E+B)a\psi = 0.$$ The above equation has the following general time-dependent solutions in the form of Bessel functions $$\label{bessel} \Psi_E' =
e^{iEt}\sqrt{a}\biggr[c_1J_{\frac{1}{3}}\biggr(\frac{\sqrt{96E'}}{3}a^{\frac{3}{2}}\biggl)
+
c_2Y_{\frac{1}{3}}\biggr(\frac{\sqrt{96E'}}{3}a^{\frac{3}{2}}\biggl)\biggl],$$ where $E'=E+B$. Now, the wave packets can be constructed by superimposing these solutions to obtain physically allowed wave functions. The general structure of these wave packets is $$\Psi(a,t) = \int_0^\infty A(E')\Psi_E'(a,t)dE' .$$ We choose $c_2 = 0$ to satisfy the first boundary condition (\[boundary\]). Defining $r =\frac{\sqrt{96E'}}{3}$, simple analytical expressions for the wave packet can be found by choosing $A(E')$ to be a quasi-Gaussian function $$\Psi(a,t) = \sqrt{a}e^{-iBt}\int_0^\infty r^{\nu + 1}e^{-\gamma r^2
+ i\frac{3}{32}r^{2} t}J_\nu(ra^\frac{3}{2})dr,$$ where $\nu = \frac{1}{3}$ and $\gamma$ is an arbitrary positive constant. The above integral is known [@gradshteyn], and the wave packet takes the form $$\label{wp} \Psi(a,t) =
a\frac{e^{-\frac{a^{3}}{4Z}-iBt}}{(-2Z)^{\frac{4}{3}}},$$ where $Z=\gamma-i\frac{3}{32}t$. Now, we can verify what these quantum models predict for the behavior of the scale factor of the Universe. By adopting the many-worlds interpretation [@tipler; @everett], and with regards to the inner product relation (\[inner\]), the expectation value of the scale factor $$<a>(t) = \frac{\int_0^\infty a\Psi(a,t)^*a\Psi(a,t)da}
{\int_0^\infty a\Psi(a,t)^*\Psi(a,t)da},$$ is easily computed, leading to $$<a>(t) \propto
\biggr[\frac{9}{(32)^2\gamma^2}t^2 + 1\biggl]^\frac{1}{3} .$$ These solutions represent a non-singular Universe which goes over asymptotically to the corresponding flat classical model for the dust ($w=0$) dominated epoch (\[class1\]-\[class4\]) (Fig. \[fig1\]) $$a(t) \propto t^{2/3}.$$
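As a quick numerical cross-check of this behavior, one can evaluate the expectation value directly from the wave packet; the sketch below (plain Python/SciPy, with an arbitrary illustrative value of $\gamma$, and with the constant prefactor and the $e^{-iBt}$ phase cancelling in the ratio) reproduces the $\left[\frac{9}{(32)^2\gamma^2}t^2+1\right]^{1/3}$ scaling.

```python
import numpy as np
from scipy.integrate import quad

gamma_ = 1.0                                     # arbitrary positive constant (illustrative)

def psi_sq(a, t):
    # |Psi(a,t)|^2 up to a t-dependent prefactor that cancels in the ratio below
    Zsq = gamma_**2 + (3.0 * t / 32.0) ** 2      # |Z|^2 with Z = gamma - i*(3/32)*t
    return a**2 * np.exp(-gamma_ * a**3 / (2.0 * Zsq))

def expected_a(t):
    num, _ = quad(lambda a: a**2 * psi_sq(a, t), 0.0, np.inf)   # integral of a * |a Psi|^2
    den, _ = quad(lambda a: a * psi_sq(a, t), 0.0, np.inf)      # norm with measure a da
    return num / den

for t in (0.0, 5.0, 20.0):
    numeric = expected_a(t) / expected_a(0.0)
    analytic = (9.0 * t**2 / (32.0**2 * gamma_**2) + 1.0) ** (1.0 / 3.0)
    print(t, numeric, analytic)                  # the two columns agree
```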
![The time behavior of the expected value for the scale factor $\langle a\rangle(t)$ (solid line) and the classical scale factor $a(t)$ (dashed line) for dust dominated Universe ($w=0$) and flat space time ($k=0$).[]{data-label="fig1"}](pic "fig:"){width="8cm"}\
In the case $k=1$ and $w=0$ the time-independent Wheeler-DeWitt equation (\[sle2\]) reduces to $$-{\psi}^{\prime \prime}(a) + \left(- 24E'a +
144a^{2}\right){\psi}(a)=0.$$ Defining new variable $x=12a - E'$ we find $$\label{k1}
-\frac{d^{2}\psi}{dx^{2}}+\left[-
\frac{E'^{2}}{144}+\frac{x^{2}}{144} \right]\psi(x) =0.$$ Equation (\[k1\]) is similar to the time-independent Schrödinger equation for a simple harmonic oscillator with unit mass and energy $\lambda$ $$-\frac{d^{2}\psi}{dx^{2}}+\left[- 2\lambda+w^{2}x^{2}\right]\psi(x) =0,$$ where $2\lambda = E'^{2}/144$ and $w=1/12$. Therefore, the allowed values of $\lambda$ are $w(n+1/2)$ and the possible values of $E'$ are $$E'_{n}=\sqrt{12(2n+1)}\,\, , \mbox{\hspace{0.8cm}} n=0,1,2,...\quad.$$ Hence, the stationary solutions are $${\Psi}_{n}(a,t)=e^{iE_{n}t}{\varphi}_{n}\left(12a - E'_{n}\right),
\label{k1-final}$$ where $${\varphi}_{n}(x)=H_n\bigg(\frac{x}{\sqrt{12}}\bigg)e^{-x^2/24}\,\, ,
%\frac{(-1)^{n}}{\sqrt{2^{n}n!\sqrt{n}}}e^{x^{2}/2}\frac{d^{n}}{dx^{n}}\left( e^{-x^{2}}\right) , \mbox{\hspace{0.8cm}} n=0,1,2,...
\label{dust7}$$ and $H_n$ are Hermite polynomials. The wave functions (\[k1-final\]) are similar to the stationary quantum wormholes as defined in [@Hawking]. However, neither of the boundary conditions (\[boundary\]) can be satisfied by these wave functions.
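For concreteness, the first few quantized values $E'_n$ and the (unnormalized) oscillator eigenfunctions can be evaluated numerically; the following short sketch is purely illustrative and uses SciPy's physicists' Hermite polynomials.

```python
import numpy as np
from scipy.special import eval_hermite

def varphi(n, x):
    # varphi_n(x) = H_n(x / sqrt(12)) * exp(-x^2 / 24)   (unnormalized)
    return eval_hermite(n, x / np.sqrt(12.0)) * np.exp(-x**2 / 24.0)

E_prime = np.sqrt(12.0 * (2 * np.arange(4) + 1))
print(E_prime)                                   # ~ [3.46, 6.00, 7.75, 9.17]

# Stationary wave function psi_n(a), up to the e^{i E_n t} phase, on a grid of scale factors
a = np.linspace(0.0, 2.0, 5)
print(varphi(1, 12.0 * a - E_prime[1]))
```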
In the $k=-1$ and $w=0$ case, equation (\[sle2\]) reduces to $${\psi}^{\prime \prime}(a) + \left(24E'a +
144a^{2}\right){\psi}(a)=0,$$ where the solutions are $$\begin{aligned}
\label{whittaker} \Psi (a,t)=e^{iEt}(12a+E')^{-1/2}\bigg\{
C_{1}M_{\frac{iE^2}{48},\frac{1}{4}}\left(\frac{i(12a+E')^2}{12}\right)+
C_{2}W_{\frac{iE^2}{48},\frac{1}{4}}\left(\frac{i(12a+E')^2}{12}\right)\bigg\},\end{aligned}$$ where $M_{\kappa , \lambda}$ and $W_{\kappa , \lambda}$ are Whittaker functions. The Whittaker functions do not automatically vanish at $a=0$. Therefore, we need to take both $C_{1}\neq 0$ and $C_{2}\neq 0$ to satisfy $\Psi(0,t)=0$.
For $w=-1/3$, the SWD equation (\[sle2\]) can be written as $$-\psi''(a) +24(6 k-B)a^2\psi(a) =24Ea\,\psi(a),$$ which, as before, has solutions in the form of the simple harmonic oscillator wave functions (\[k1-final\]) with a discrete spectrum, or of the Whittaker functions (\[whittaker\]), for positive or negative values of $(6 k-B)$, respectively.
For $k=0$ and $w=1/3$ (radiation), the WD equation (\[sle2\]) reduces to $$\label{radiation-1} -\psi''(a) -24B\psi(a) =24Ea\psi(a),$$ which can be rewritten as $$\psi''(a) +24E\left(a+ \frac{\displaystyle B}{\displaystyle
E}\right)\psi(a) =0.$$ Taking $x= a+\frac{\displaystyle B}{\displaystyle E}$ we have $$\frac{d^2}{dx^2}\psi(x) +24 E x\psi(x) =0,$$ which is Airy’s differential equation. We solve this equation for $E>0$ and $E<0$ separately.
For $E>0$, this equation has two solutions, $\mbox{Ai}\left[-(24 E)^{1/3}x\right]$ and $\mbox{Bi}\left[-(24 E)^{1/3}x\right]$. The first one is an exponentially decreasing function of $x$, while the second one grows exponentially and is physically unacceptable. Therefore, the solution is $$\psi(a)= \mbox{Ai}\left[-(24 E)^{1/3}\left(a+\frac{\displaystyle
B}{\displaystyle E}\right)\right].$$ We choose the first boundary condition (\[boundary\]), which leads to $$\mbox{Ai}\left[{ -}{ \left(24E\right)^{1/3}}\frac{\displaystyle
B}{\displaystyle E}\right]=0.$$ The Airy function $\mbox{Ai}(x)$ has infinitely many negative zeros $z_n = -a_n$, where $a_n>0$; therefore, the energy levels are quantized and take the values $$E_n = \left(\frac{{24}^{1/3}B}{a_n}\right)^{3/2}.$$ The time-dependent eigenfunctions take the form $$\Psi_n(a,t)=e^{iE_n
t}\mbox{Ai}\left[-(24E_n)^{1/3}\left(a+\frac{\displaystyle
B}{\displaystyle E_n}\right)\right].$$
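Since the spectrum is fixed entirely by the negative zeros of $\mbox{Ai}$, the levels are easy to tabulate; the sketch below (with an arbitrary illustrative value of $B$) computes the first few $E_n$ and checks that the corresponding eigenfunctions indeed vanish at $a=0$.

```python
import numpy as np
from scipy.special import ai_zeros, airy

B = 1.0                                          # illustrative value
a_n = -ai_zeros(5)[0]                            # |negative zeros| of Ai: 2.338, 4.088, 5.521, ...
E_n = (24.0 ** (1.0 / 3.0) * B / a_n) ** 1.5
print(E_n)                                       # quantized levels, decreasing with n

# Boundary condition check: Ai[-(24 E_n)^{1/3} B / E_n] = Ai(-a_n) = 0
x0 = -(24.0 * E_n) ** (1.0 / 3.0) * B / E_n
print(airy(x0)[0])                               # ~ 0 up to floating-point error
```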
For $E<0$, this equation also has two solutions, $\mbox{Ai}\left[(24 |E|)^{1/3}x\right]$ and $\mbox{Bi}\left[(24 |E|)^{1/3}x\right]$. Since the second one grows exponentially and is physically unacceptable, the solution is $$\psi(a)= \mbox{Ai}\left[(24 |E|)^{1/3}\left(a-\frac{\displaystyle
B}{\displaystyle |E|}\right)\right].$$ We choose the first boundary condition (\[boundary\]), which leads to $$\mbox{Ai}\left[-{ \left(24|E|\right)^{1/3}}\frac{\displaystyle
B}{\displaystyle |E|}\right]=0,$$ therefore the energy levels are quantized and take the values $$E_n = -\left(\frac{{24}^{1/3}B}{a_n}\right)^{3/2}.$$ The time-dependent eigenfunctions take the form $$\label{radiation-final-1}
\Psi_n(a,t)=e^{iE_n
t}\mbox{Ai}\left[(24|E_n|)^{1/3}\left(a-\frac{\displaystyle
B}{\displaystyle |E_n|}\right)\right].$$ It is important to note that the Airy function $\mbox{Ai}(x)$ has an oscillatory behavior for $x<0$ ($a<\frac{\displaystyle B}{\displaystyle |E_n|}$), while for $x>0$ ($a>\frac{\displaystyle B}{\displaystyle |E_n|}$) it decreases monotonically and is exponentially damped for large $x$ (Fig. \[fig2\]). Therefore, the solutions (\[radiation-final-1\]) show a classical behavior for small $a$ and a quantum behavior for large $a$. This is contrary to the usual expectation and to the previous case. In fact, the possibility of detecting quantum gravitational effects in large Universes is noteworthy, and has also been observed in FRW, Stephani, and Kaluza-Klein models [@lemos1999; @pedramCQG2; @Coliteste].
![Plot of the wave function ($\psi(a)$) for $B=1$ and $n=8$, showing the oscillatory behavior for the small values of the scale factor and exponential damping for the large values of the scale factor.[]{data-label="fig2"}](myfigure "fig:"){width="8cm"}\
In the $k=1$ and $w=1/3$ (radiation) case, the WD equation (\[sle2\]) reduces to $$\label{cosmic-1} -\psi''(a) + (144 a^2-24B)\psi(a) =24Ea\psi(a).$$ The above equation can be written as $$-\psi''(a) +144
\left[\left(a-\frac{E}{12}\right)^2-\left(\frac{E}{12}\right)^2\right]\psi
=0,$$ by taking $x=a- \frac{\displaystyle E}{\displaystyle 12}$ we have $$-\frac{d^2}{dx^2}\psi(x) + 144x^2\psi(x) =(E^2+24B)\psi(x).$$ This equation is identical to the time-independent Schrödinger equation for a simple harmonic oscillator with unit mass and energy $\lambda$ $$-\frac{d^{2}\psi(x)}{dx^{2}}+\omega^{2}x^{2}\psi(x) =2\lambda
\psi(x),$$ where $2\lambda = (E^2+24B)$ and $\omega^2=144$. Therefore, the allowed values of $\lambda$ are $\omega(n+1/2)$ and the possible values of $E$ are $$E_{n}=\sqrt{6(n+1/2)-24B}\,\, , \mbox{\hspace{0.8cm}}
n=0,1,2,...\quad ,$$ and therefore the stationary solutions are $${\Psi}_{n}(a,t)=e^{iE_{n}t}{\varphi}_{n}\left(a- \frac{\displaystyle
E_n}{\displaystyle 12}\right),$$ $${\varphi}_{n}(x)=H_n\left((12)^{\frac{1}{2}}x\right)e^{-3\,\,x^2},$$ where $H_n$ are Hermite polynomials. However, neither of the boundary conditions (\[boundary\]) can be satisfied by these wave functions.
Now, we present some analytical solutions for the late-time Universe. For flat spacetime ($k=0$), the dust epoch ($w=0$), and the standard Chaplygin gas ($\alpha=1$), equation (\[sle2-b\]) reduces to $$-\psi''(a) +\left(-24Ba -24A^{\frac{1}{1+\alpha}} a^4\right)\psi(a)
=24Ea^{-2}\,\psi(a),$$ where the solutions are $$\begin{aligned}
\nonumber
\Psi (a,t)&=&e^{iEt}e^{2 i \sqrt{\frac{2}{3}}
A^{\frac{1}{2(1+\alpha)}} a^3}
a^{\frac{1}{2}-\frac{1}{2} \sqrt{1-96
E}}\\ \nonumber &\times&\bigg\{C_1\,\,
U\left(\frac{1}{6} \left(-2 i
\sqrt{6} BA^{\frac{-1}{2(1+\alpha)}}-\sqrt{1-96
E}+3\right),1-\frac{1}{3} \sqrt{1-96
E},-4 i \sqrt{\frac{2}{3}} A^{\frac{1}{2(1+\alpha)}}
a^3\right)\\ &+&C_2\,\,
L_{\frac{1}{6} (2 i \sqrt{6}
BA^{\frac{-1}{2(1+\alpha)}}+\sqrt{1-96
E}-3)}^{-\frac{1}{3} \sqrt{1-96
E}}\left(-4 i \sqrt{\frac{2}{3}} A^{\frac{1}{2(1+\alpha)}}
a^3\right)\bigg\}.\end{aligned}$$ Here $U(a,b,c)$ is the confluent hypergeometric function and $L_n^a(x)$ is the generalized Laguerre polynomial. We need to take both $C_{1}\neq 0$ and $C_{2}\neq 0$ to satisfy $\Psi(0,t)=0$.
For flat spacetime ($k=0$), stiff matter ($w=1$), and the standard Chaplygin gas ($\alpha=1$), equation (\[sle2-b\]) reduces to $$-\psi''(a) +\left(-24Ba^{-2} -24A^{\frac{1}{1+\alpha}}
a^4\right)\psi(a) =24Ea^{-2}\,\psi(a),$$ with the solutions as $$\begin{aligned}
\nonumber
\Psi (a,t)&=&e^{iEt}\bigg\{C_1\,\,\sqrt{a} \,J_{-\frac{1}{6}
\sqrt{-96( B+
E)+1}}\left(2 \sqrt{\frac{2}{3}} A^{\frac{1}{2(1+\alpha)}}
a^3\right)\\ &+&C_2\,\,\sqrt{a}\, J_{\frac{1}{6} \sqrt{-96( B+
E)+1}}\left(2 \sqrt{\frac{2}{3}} A^{\frac{1}{2(1+\alpha)}}
a^3\right)\bigg\}.\end{aligned}$$ Here again, we have $C_{1}\neq 0$ and $C_{2}\neq 0$ in order to satisfy the first boundary condition (\[boundary\]).
Conclusions {#sec4}
===========
In this work we have investigated minisuperspace FRW quantum cosmological models with the Chaplygin gas and a perfect fluid as the matter content at early and late times. The use of Schutz’s formalism for the Chaplygin gas allowed us to obtain SWD equations with the perfect fluid’s effective potential. We have obtained eigenfunctions, and acceptable wave packets were then constructed by appropriate linear combinations of these eigenfunctions. The time evolution of the expectation value of the scale factor has been determined in the spirit of the many-worlds interpretation of quantum cosmology. We have shown that, contrary to the classical case, the expectation value of the scale factor avoids the singularity at the quantum level. Moreover, this model predicts an accelerated Universe at late times.
[100]{} A. G. Riess [*et al.*]{} \[Supernova Search Team Collaboration\], Astron. J. [**116**]{}, 1009 (1998) \[arXiv:astro-ph/9805201\]; S. Perlmutter [*et al.*]{} \[Supernova Cosmology Project Collaboration\], Astrophys. J. [**517**]{}, 565 (1999) \[arXiv:astro-ph/9812133\]; J. L. Tonry [*et al.*]{}, Astrophys. J. [**594**]{}, 1 (2003) \[arXiv:astro-ph/0305008\]. D. N. Spergel [*et al.*]{}, Astrophys. J. Suppl. [**148**]{}, 175 (2003) \[arXiv:astro-ph/0302209\]; C. L. Bennett [*et al.*]{}, Astrophys. J. Suppl. [**148**]{}, 1 (2003) \[arXiv:astro-ph/0302207\]. M. Tegmark [*et al.*]{} \[SDSS Collaboration\], \[arXiv:astro-ph/0310723\]. V. Sahni, Class. Quant. Grav. [**19**]{}, 3435 (2002) \[arXiv:astro-ph/0202076\]. S. Weinberg, Rev. Mod. Phys. [**61**]{}, 1 (1989). P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. [**75**]{}, 559 (2003) \[arXiv:astro-ph/0207347\]. C. Wetterich, Nucl. Phys. B [**302**]{}, 668 (1988); B. Ratra and P. J. E. Peebles, Phys. Rev. D [**37**]{}, 3406 (1988); R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. [**80**]{}, 1582 (1998) \[arXiv:astro-ph/9708069\]; P. F. González-Díaz, Phys. Rev. D [**62**]{}, 023513 (2000) \[arXiv:astro-ph/0004125\]; Y. Fujii, Phys. Rev. D [**62**]{}, 064004 (2000) \[arXiv:gr-qc/9908021\]. P. Brax and J. Martin, Phys. Rev. D [**61**]{}, 103502 (2000) \[arXiv:astro-ph/9912046\]. A. Y. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B [**511**]{}, 265 (2001) \[arXiv:gr-qc/0103004\]. M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D [**66**]{}, 043507 (2002) \[arXiv:gr-qc/0202064\]. M. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D [**67**]{}, 063003 (2003) \[arXiv:astro-ph/0210468\]. M. C. Bento, O. Bertolami and A. A. Sen, Phys. Lett. B [**575**]{}, 172 (2003) \[arXiv:astro-ph/0303538\]; L. Amendola, F. Finelli, C. Burigana and D. Carturan, JCAP [**0307**]{}, 005 (2003) \[arXiv:astro-ph/0304325\]. R. Bean and O. Dore, Phys. Rev. D [**68**]{}, 023515 (2003) \[arXiv:astro-ph/0301308\]; A. Dev, D. Jain and J.S. Alcaniz, astro-ph/0311056; M. Biesiada, W. Godlowski and M. Szydlowski, astro-ph/0403305. J. C. Fabris, S. V. Goncalves and P. E. de Souza, Gen. Rel. Grav. [**34**]{}, 53 (2002) \[arXiv:gr-qc/0103083\]; Gen. Rel. Grav. [**34**]{}, 2111 (2002) \[arXiv:astro-ph/0203441\]; T. Multamaki, M. Manera and E. Gaztanaga, Phys. Rev. D [**69**]{}, 023004 (2004) \[arXiv:astro-ph/0307533\]. V. Gorini, A. Kamenshchik and U. Moschella, Phys. Rev. D [**67**]{}, 063509 (2003) \[arXiv:astro-ph/0209395\]; R. Colistete, J. C. Fabris, S. V. Goncalves and P. E. de Souza, \[arXiv:gr-qc/0210079\], H. Sandvik, M. Tegmark, M. Zaldarriaga and I. Waga, \[arXiv:astro-ph/0212114\]; L. M. Beca, P. P. Avelino, J. P. de Carvalho and C. J. Martins, Phys. Rev. D [**67**]{}, 101301 (2003) \[arXiv:astro-ph/0303564\]. N. Ogawa, Phys. Rev. D [**62**]{}, 085023 (2000) \[arXiv:hep-th/0003288\]. M. Bordemann and J. Hoppe, Phys. Lett. B [**317**]{}, 315 (1993) \[arXiv:hep-th/9307036\]. M. Hassaine and P. A. Horvathy, Lett. Math. Phys. [**57**]{}, 33 (2001) \[arXiv:hep-th/0101044\]. G. W. Gibbons, Grav. Cosmol. [**8**]{}, 2 (2002) \[arXiv:hep-th/0104015\]. M. Hassaine, Phys. Lett. A [**290**]{}, 157 (2001) \[arXiv:hep-th/0106252\]. G. M. Kremer, Gen. Rel. Grav. [**35**]{}, 1459 (2003) \[arXiv:gr-qc/0303103\]. H. B. Benaoum, \[arXiv:hep-th/0205140\]. V. Gorini, A. Kamenshchik, U. Moschella and V. Pasquier, \[arXiv:gr-qc/0403062\]. O. Bertolami, \[arXiv:astro-ph/0403310\]. M. Szydlowski and W. Czja, Phys. Rev. 
D [**69**]{}, 023506 (2004) \[arXiv:astro-ph/0306579\]. R. Jackiw, \[arXiv:physics/0010042\]. M. R. Setare, Phys.Lett. B 644, 99 (2007). M. R. Setare, Phys. Lett. B 648, 329 (2007). M. Roos, \[arXiv:0704.0882\]. M. Bouhmadi-López, P. V. Moniz, Phys. Rev. D **71**, 063521 (2005). P. Pedram, S. Jalalzadeh and S. S. Gousheh, Int. J. Theor. Phys. DOI: 10.1007/s10773-007-9436-9 \[arXiv:0705.3587\]. B. F. Schutz, Phys. Rev. D [**2**]{}, 2762 (1970). B. F. Schutz, Phys. Rev. D [**4**]{}, 3559 (1971). P. Pedram, S. Jalalzadeh and S. S. Gousheh, Phys. Lett. B. In press, doi:10.1016/j.physletb.2007.08.077, \[arXiv:0708.4143\]. P. Pedram, S. Jalalzadeh and S. S. Gousheh, Class. Quantum Grav. In press, \[arXiv:0709.1620\]. F. G. Alvarenga, J. C. Fabris, N. A. Lemos, and G. A. Monerat, Gen. Rel. Grav. **34** 651 (2002). G. A. Monerat, G. Oliveira-Neto, E. V. Corrêa Silva, L. G. Ferreira Filho, P. Romildo, Jr., J. C. Fabris, R. Fracalossi, S. V. B. Gonçalves, and F. G. Alvarenga, Phys. Rev. D **76**, 024017 (2007) O. Bertolami and J. Mourao, Class Quantum Grav. **8**, 1271 (1991). O. Bertolami and V. Duvvuri, Phys. Lett. B **640**, 121 (2006). T. Barreiro, A.A. Sen, Phys. Rev. D **70**, 124013 (2004). M. Heydari-Fard and H. R. Sepangi, to appear in Phys. Rev. D, \[arXiv: 0710.2666\]. S. Jalalzadeh and H. R. Sepangi, Class. Quant Grav. **22**, 2035 (2005). G. F. R. Ellis and S. W. Hawking, Large Scale Structure of Space Time, (Cambridge University Press, 1973); R. Mansouri and F. Nasseri, Phys. Rev. D **60**, 123512 (1999). R. Arnowitt, S. Deser and C. W. Misner, [*Gravitation: An Introduction to Current Research*]{}, edited by L. Witten, Wiley, New York (1962). V. G. Lapchinskii and V. A. Rubakov, Theor. Math. Phys. [**33**]{}, 1076 (1977). N. A. Lemos, J. Math. Phys. [**37**]{}, 1449 (1996). F. G. Alvarenga and N. A. Lemos, Gen. Rel. Grav. [**30**]{}, 681 (1998). J. Acacio de Barros, E. V. Corrêa Silva, G. A. Monerat, G. Oliveira-Neto, L. G. Ferreira Filho, and P. Romildo, Phys. Rev. D **75**, 104004 (2007). I. S. Gradshteyn and I. M. Ryzhik, [*Table of Integrals, Series and Products*]{} (Academic, New York, 1980), formula 6.631-4. F. J. Tipler, Phys. Rep. [**137**]{}, 231 (1986). H. Everett, III, Rev. Mod. Phys. [**29**]{}, 454 (1957). S. W. Hawking and D. B. Page, Phys. Rev. D **42**, 2655 (1990). N. A. Lemos, F. G. Alvarenga, Gen. Rel. Grav. **31**, 1743 (1999), \[arXiv:gr-qc/9906061\]. R. Coliteste, Jr., J. C. Fabris and N. Pinto-Neto, Phys. Rev. D [**57**]{}, 4707 (1998).
[^1]: Email: pouria.pedram@gmail.com
[^2]: Email: s-jalalzadeh@sbu.ac.ir
|
---
abstract: 'The weighted $k$-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning. The question of setting the optimal number of neighbors as well as the optimal weights has received much attention throughout the years; nevertheless, this problem seems to have remained unsettled. In this paper we offer a simple approach to locally weighted regression/classification, where we make the bias-variance tradeoff explicit. Our formulation enables us to phrase a notion of optimal weights, and to find these weights as well as the optimal number of neighbors *efficiently and adaptively, for each data point whose value we wish to estimate*. The applicability of our approach is demonstrated on several datasets, showing superior performance over standard locally weighted methods.'
author:
- 'Oren Anava[^1]'
- 'Kfir Y. Levy[^2]'
bibliography:
- 'bib.bib'
title: '$k^*$-Nearest Neighbors: From Global to Local'
---
Introduction
============
The $k$-nearest neighbors ($k$-NN) algorithm [@cover; @hodges] and Nadaraya-Watson estimation [@nadaraya; @watson] are the cornerstones of non-parametric learning. Owing to their simplicity and flexibility, these procedures have become the methods of choice in many scenarios [@top10], especially in settings where the underlying model is complex. Modern applications of the $k$-NN algorithm include recommendation systems [@recommend], text categorization [@text], heart disease classification [@heart], and financial market prediction [@markets], amongst others.
A successful application of the weighted $k$-NN algorithm requires a careful choice of three ingredients: the number of nearest neighbors $k$, the weight vector $\balpha$, and the distance metric. The latter requires domain knowledge and is thus henceforth assumed to be set and known in advance to the learner. Surprisingly, even under this assumption, the problem of choosing the optimal $k$ and $\balpha$ is not fully understood and has been studied extensively since the $1950$’s under many different regimes. Most of the theoretical work focuses on the asymptotic regime in which the number of samples $n$ goes to infinity [@devroye2013probabilistic; @samworth; @stone], and ignores the practical regime in which $n$ is finite. More importantly, the vast majority of $k$-NN studies aim at finding an optimal value of $k$ per dataset, which seems to overlook the specific structure of the dataset and the properties of the data points whose labels we wish to estimate. While kernel based methods such as Nadaraya-Watson enable an adaptive choice of the weight vector $\balpha$, there still remains the question of how to choose the *kernel’s bandwidth* $\sigma$, which could be thought of as the parallel of the number of neighbors $k$ in $k$-NN. Moreover, there is no principled approach towards choosing the kernel function in practice.
In this paper we offer a coherent and principled approach to *adaptively* choosing the number of neighbors $k$ and the corresponding weight vector $\balpha \in \reals^k$ per decision point. Given a new decision point, we aim to find the best locally weighted predictor, in the sense of minimizing the distance between our prediction and the ground truth. In addition to yielding predictions, our approach enables us to provide a *per decision point* guarantee for the confidence of our predictions. Fig. \[figs\] illustrates the importance of choosing $k$ adaptively. In contrast to previous works on non-parametric regression/classification, we do not assume that the data $\{(x_i,y_i)\}_{i=1}^n$ arrives from some (unknown) underlying distribution, but rather make the weaker assumption that the labels $\{y_i\}_{i=1}^n$ are independent given the data points $\{x_i\}_{i=1}^n$, allowing the latter to be chosen arbitrarily. Alongside providing a theoretical basis for our approach, we conduct an empirical study that demonstrates its superiority with respect to the state-of-the-art.
This paper is organized as follows. In Section \[sec:def\] we introduce our setting and assumptions, and derive the locally optimal prediction problem. In Section \[sec:alg\] we analyze the solution of the above prediction problem, and introduce a greedy algorithm designed to *efficiently* find the *exact* solution. Section \[sec:Experiments\] presents our experimental study, and Section \[sec:Conclusion\] concludes.
[0.3]{}
![Three different scenarios. In all three scenarios, the same data points $x_1, \ldots , x_n \in \reals^2$ are given (represented by black dots). The red dot in each of the scenarios represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.[]{data-label="figs"}](fig1.png){width="\textwidth"}
[0.3]{}
![Three different scenarios. In all three scenarios, the same data points $x_1, \ldots , x_n \in \reals^2$ are given (represented by black dots). The red dot in each of the scenarios represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.[]{data-label="figs"}](fig2.png){width="\textwidth"}
[0.3]{}
![Three different scenarios. In all three scenarios, the same data points $x_1, \ldots , x_n \in \reals^2$ are given (represented by black dots). The red dot in each of the scenarios represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.[]{data-label="figs"}](fig3.png){width="\textwidth"}
Related Work
------------
Asymptotic universal consistency is the most widely known theoretical guarantee for $k$-NN. This powerful guarantee implies that as the number of samples $n$ goes to infinity, and also $k\to \infty$, $k/n\to 0$, then the risk of the $k$-NN rule converges to the risk of the Bayes classifier for any underlying data distribution. Similar guarantees hold for weighted $k$-NN rules, with the additional assumptions that $\sum_{i=1}^k\alpha_i=1$ and $\max_{i\leq n}\alpha_i \to 0$, [@stone; @devroye2013probabilistic]. In the regime of practical interest where the number of samples $n$ is finite, using $k=\lfloor \sqrt{n}\rfloor$ neighbors is a widely mentioned rule of thumb [@devroye2013probabilistic]. Nevertheless, this rule often yields poor results, and in the regime of finite samples it is usually advised to choose $k$ using cross-validation. Similar consistency results apply to kernel based local methods [@devroye1980distribution; @gyorfi2006distribution].
A novel study of $k$-NN by Samworth, [@samworth], derives a closed form expression for the optimal weight vector, and extracts the optimal number of neighbors. However, this result is only optimal under several restrictive assumptions, and only holds for the asymptotic regime where $n\to \infty$. Furthermore, the above optimal number of neighbors/weights do not adapt, but are rather fixed over all decision points given the dataset. In the context of kernel based methods, it is possible to extract an expression for the optimal kernel’s bandwidth $\sigma$ [@gyorfi2006distribution; @fan1996local]. Nevertheless, this bandwidth is fixed over all decision points, and is only optimal under several restrictive assumptions.
There exist several heuristics to adaptively choosing the number of neighbors and weights separately for each decision point. In [@wettschereck1994locally; @sun2010adaptive] it is suggested to use local cross-validation in order to adapt the value of $k$ to different decision points. Conversely, Ghosh [@ghosh] takes a Bayesian approach towards choosing $k$ adaptively. Focusing on the multiclass classification setup, it is suggested in [@baoli2004adaptive] to consider different values of $k$ for each class, choosing $k$ proportionally to the class populations. Similarly, there exist several attitudes towards adaptively choosing the kernel’s bandwidth $\sigma$, for kernel based methods [@abramson1982bandwidth; @silverman1986density; @demir2010adaptive; @aljuhani2014modification].
Learning the distance metric for $k$-NN was extensively studied throughout the last decade. There are several approaches towards metric learning, which roughly divide into linear/non-linear learning methods. It was found that metric learning may significantly affect the performance of $k$-NN in numerous applications, including computer vision, text analysis, program analysis and more. A comprehensive survey by Kulis [@metric] provides a review of the metric learning literature. Throughout this work we assume that the distance metric is fixed, and thus the focus is on finding the best (in a sense) values of $k$ and $\balpha$ for each new data point.
Two comprehensive monographs, [@devroye2013probabilistic] and [@devroye2015Lectures], provide an extensive survey of the existing literature regarding $k$-NN rules, including theoretical guarantees, useful practices, limitations and more.
Problem Definition {#sec:def}
==================
In this section we present our setting and assumptions, and formulate the locally weighted optimal estimation problem. Recall we seek to find the best local prediction in a sense of minimizing the distance between this prediction and the ground truth. The problem at hand is thus defined as follows: We are given $n$ data points $x_1, \ldots , x_n\in \reals^d$, and $n$ corresponding labels[^3] $y_1, \ldots , y_n\in \reals $. Assume that for any $i \in \{1,\ldots,n\} = [n]$ it holds that $y_i = f( x_i ) + \epsilon_i$, where $f(\cdot)$ and $\epsilon_i$ are such that:
1. **$\mathbf{f(\cdot)}$ is a Lipschitz continuous function:** For any $x,y \in \reals^d$ it holds that $ \left| f(x) - f(y) \right| \leq L \cdot d (x,y) $, where the distance function $d(\cdot,\cdot)$ is set and known in advance. This assumption is rather standard when considering nearest neighbors-based algorithms, and is required in our analysis to bound the so-called *bias* term (to be later defined). In the *binary classification* setup we assume that $f:\reals^d \mapsto [0,1]$, and that given $x$ its label $y\in\{0,1\}$ is distributed $ \text{Bernoulli}(f(x))$.
2. **$\mathbf{\epsilon_i}$’s are noise terms:** For any $i \in [n]$ it holds that $\mathbb{E} \left[ \epsilon_i | x_i \right] = 0 $ and $ | \epsilon_i | \leq b$ for some given $b>0$. In addition, it is assumed that given the data points $\{x_i\}_{i=1}^n$ then the noise terms $\{\epsilon_i\}_{i=1}^n$ are independent. This assumption is later used in our analysis to apply Hoeffding’s inequality and bound the so-called *variance* term (to be later defined). Alternatively, we could assume that $ \mathbb{E} \left[ \epsilon_i^2\vert x_i \right] \leq b$ (instead of $ | \epsilon_i | \leq b$), and apply Bernstein inequalities. The results and analysis remain qualitatively similar.
Given a new data point $x_0$, our task is to estimate $f(x_0)$, where we restrict the estimator $\hat{f}(x_0)$ to be of the form $ \hat{f}(x_0)= \sum_{i=1}^n \alpha_i y_i $. That is, the estimator is a weighted average of the given noisy labels. Formally, we aim at minimizing the absolute distance between our prediction and the ground truth $f(x_0)$, which translates into $$\min_{\balpha \in \Delta_n} \left| \sum_{i=1}^n \alpha_i y_i - f(x_0) \right| \qquad \mathbf{(P1)} ,$$ where we minimize over the simplex, $\Delta_n = \{ \balpha \in \reals^n | \sum_{i=1}^n \alpha_i = 1 \text{ and } \alpha_i \geq 0,\;\forall i \}$. Decomposing the objective of $\mathbf{(P1)}$ into a sum of bias and variance terms, we arrive at the following relaxed objective: $$\begin{aligned}
\left| \sum_{i=1}^n \alpha_i y_i - f(x_0) \right| & = \left| \sum_{i=1}^n \alpha_i \left( y_i - f(x_i) + f(x_i) \right) - f(x_0) \right| \\
& = \left| \sum_{i=1}^n \alpha_i \epsilon_i + \sum_{i=1}^n \alpha_i \left( f(x_i) - f(x_0) \right) \right| \\
& \leq \left| \sum_{i=1}^n \alpha_i \epsilon_i \right| + \left| \sum_{i=1}^n \alpha_i \left( f(x_i) - f(x_0) \right) \right| \\
& \leq \left| \sum_{i=1}^n \alpha_i \epsilon_i \right| + L \sum_{i=1}^n \alpha_i d ( x_i , x_0 ) .\end{aligned}$$ By Hoeffding’s inequality (see supplementary material) it follows that $\left| \sum_{i=1}^n \alpha_i \epsilon_i \right| \leq C\| \balpha \|_2$ for $C = b \sqrt{2 \log \left( \frac{2}{\delta} \right) }$, w.p. at least $1-\delta$. We thus arrive at a new optimization problem $\mathbf{(P2)}$, such that solving it would yield a guarantee for $\mathbf{(P1)}$ with high probability: $$\min_{\balpha \in \Delta_n} C\| \balpha \|_2 + L \sum_{i=1}^n \alpha_i d ( x_i , x_0 ) \qquad \mathbf{(P2)}.$$ The first term in $\mathbf{(P2)}$ corresponds to the noise in the labels and is therefore denoted as the *variance* term, whereas the second term corresponds to the distance between $f(x_0)$ and $\{f(x_i)\}_{i=1}^n$ and is thus denoted as the *bias* term.
Algorithm and Analysis {#sec:alg}
======================
In this section we discuss the properties of the optimal solution for $\mathbf{(P2)}$, and present a greedy algorithm designed to efficiently find the exact solution of the latter objective (see Section \[sec:algEfficeint\]). Given a decision point $x_0$, Theorem \[thm:Main\] demonstrates that the optimal weight $\alpha_i$ of the data point $x_i$ is proportional to $-d(x_i,x_0)$ (closer points are given more weight). Interestingly, this weight decay is quite slow compared to popular weight kernels, which utilize sharper decay schemes, e.g., exponential/inversely-proportional. Theorem \[thm:Main\] also implies a cutoff effect, meaning that there exists $k^*\in[n]$ such that only the $k^*$ nearest neighbors of $x_0$ contribute to the prediction of its label. Note that both $\balpha$ and $k^*$ may adapt from one $x_0$ to another. Also notice that the optimal weights depend on a single parameter $L/C$, namely the Lipschitz-to-noise ratio. As $L/C$ grows, $k^*$ tends to be smaller, which is quite intuitive.
Without loss of generality, assume that the points are ordered in ascending order according to their distance from $x_0$, i.e., $d(x_1,x_0)\leq d(x_2,x_0)\leq\ldots\leq d(x_n,x_0)$. Also, let $\bbeta\in \reals^n$ be such that $\beta_i = {L d(x_i,x_0)}/{C} $. Then, the following is our main theorem:
\[thm:Main\] There exists $\lambda>0$ such that the optimal solution of $\mathbf{(P2)}$ is of the form $$\begin{aligned}
\label{eq:alphaStar}
\alpha^*_i = \frac{\left( \lambda-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda \right\} }{\sum_{i=1}^n \left( \lambda-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda \right\} } .\end{aligned}$$ Furthermore, the value of $\mathbf{(P2)}$ at the optimum is $C\lambda$.
Following is a direct corollary of the above Theorem:
\[cor:Main\] There exists $1\leq k^*\leq n$ such that for the optimal solution of $\mathbf{(P2)}$ the following applies: $$\begin{aligned}
\alpha_i^* >0; \; \forall i\leq k^* \quad \text{ and } \quad \alpha_i^* =0;\; \forall i> k^* .\end{aligned}$$
Notice that $\mathbf{(P2)}$ may be written as follows: $$\min_{\balpha \in \Delta_n} C \left( \| \balpha \|_2 + \balpha^\top \bbeta \right) \qquad \mathbf{(P2)}.$$ We henceforth ignore the parameter $C$. In order to find the solution of $\mathbf{(P2)}$, let us first consider its Lagrangian: $$L(\balpha,\lambda,\btheta) = \| \balpha \|_2 + \balpha^\top \bbeta + \lambda \left( 1-\sum_{i=1}^n\alpha_i \right) - \sum_{i=1}^n \theta_i \alpha_i ,$$ where $\lambda\in\reals$ is the multiplier of the equality constraint $\sum_i\alpha_i=1$, and $\theta_1,\ldots,\theta_n\geq 0$ are the multipliers of the inequality constraints $\alpha_i\geq 0,\; \forall i\in[n]$. Since $\mathbf{(P2)}$ is convex, any solution satisfying the KKT conditions is a global minimum. Deriving the Lagrangian with respect to $\balpha$, we get that for any $i\in[n]$: $$\begin{aligned}
\frac{\alpha_i}{\| \balpha\|_2} = \lambda-\beta_i +\theta_i .\end{aligned}$$ Denote by $\balpha^*$ the optimal solution of $\mathbf{(P2)}$. By the KKT conditions, for any $\alpha^*_i>0$ it follows that $\theta_i=0$. Otherwise, for any $i$ such that $\alpha^*_i=0$ it follows that $\theta_i\geq0$, which implies $\lambda\leq\beta_i$. Thus, for any nonzero weight $\alpha^*_i>0$ the following holds: $$\begin{aligned}
\label{eq:KKTNonz}
\frac{\alpha^*_i}{\| \balpha^*\|_2} = \lambda-\beta_i .\end{aligned}$$ Squaring and summing Equation over all the nonzero entries of $\balpha$, we arrive at the following equation for $\lambda$: $$\begin{aligned}
\label{eq:LambdaEq}
1 = \sum_{\alpha^*_i>0}\frac{\left( \alpha^*_i \right) ^2}{\| \balpha^* \|_2^2} =\sum_{\alpha^*_i>0} (\lambda-\beta_i)^2 .\end{aligned}$$
Next, we show that the value of the objective at the optimum is $C \lambda$. Indeed, note that by Equation and the equality constraint $\sum_i\alpha^*_i=1$, any $\alpha^*_i>0$ satisfies $$\begin{aligned}
\label{eq:solution}
\alpha^*_i = \frac{\lambda-\beta_i}{A},\quad \text{ where }\quad A=\sum_{\alpha^*_i>0} (\lambda-\beta_i) .\end{aligned}$$ Plugging the above into the objective of $\mathbf{(P2)}$ yields $$\begin{aligned}
C \left( \| \balpha^* \|_2 + \balpha^{*\top} \bbeta \right) &=\frac{C}{A}\sqrt{\sum_{\alpha^*_i>0}(\lambda-\beta_i)^2}+\frac{C}{A}\sum_{\alpha^*_i>0} (\lambda-\beta_i)(\beta_i-\lambda+\lambda)\\
& =\frac{C}{A} -\frac{C}{A}\sum_{\alpha^*_i>0} (\lambda-\beta_i)^2+\frac{C\lambda}{A}\sum_{\alpha^*_i>0} (\lambda-\beta_i)\\
& = C \lambda ,\end{aligned}$$ where in the last equality we used Equation , and substituted $A = \sum_{\alpha^*_i>0}(\lambda-\beta_i)$.
Solving $\mathbf{(P2)}$ Efficiently {#sec:algEfficeint}
-----------------------------------
Note that $\mathbf{(P2)}$ is a convex optimization problem, and it can be therefore (*approximately*) solved efficiently, e.g., via any first order algorithm. Concretely, given an accuracy $\epsilon>0$, any off-the-shelf convex optimization method would require a running time which is $\poly(n,\frac{1}{\epsilon})$ in order to find an $\epsilon$-optimal solution to $\mathbf{(P2)}$[^4]. Note that the calculation of (the unsorted) $\bbeta$ requires an additional computational cost of $O(nd)$.
Here we present an efficient method that computes the *exact* solution of $\mathbf{(P2)}$. In addition to the $O(nd)$ cost for calculating $\bbeta$, our algorithm requires an $O(n\log n)$ cost for sorting the entries of $\bbeta$, as well as an additional running time of $O(k^*)$, where $k^*$ is the number of non-zero elements at the optimum. Thus, the running time of our method is independent of any accuracy $\epsilon$, and may be significantly better compared to any off-the-shelf optimization method. Note that in some cases [@indyk1998approximate], using advanced data structures may decrease the cost of finding the nearest neighbors (i.e., the sorted $\bbeta$), yielding a running time substantially smaller than $O(nd+n \log n)$.
Our method is depicted in Algorithm \[algorithm:KstarNN\]. Quite intuitively, the core idea is to greedily add neighbors according to their distance from $x_0$ until a stopping condition is fulfilled (indicating that we have found the optimal solution). Letting $\mathcal{C}_{\text{sortNN}}$ be the computational cost of calculating the sorted vector $\bbeta$, the following theorem presents our guarantees.\
**Input**: vector of ordered distances $\bbeta\in \reals^n$, noisy labels $y_1,\ldots,y_n \in \reals$

**Initialize**: $\lambda_0 =\beta_1+1$, $k=0$

**While** $\lambda_k > \beta_{k+1}$ (and $k<n$): $k\gets k+1$; $ \lambda_k = \frac{1}{k}\left( \sum_{i=1}^k \beta_i + \sqrt{ k + \left( \sum_{i=1}^k \beta_i \right)^2 - k \sum_{i=1}^k \beta_i^2 } \right) $

**Return**: estimation $\hat{f}(x_0)=\sum_i \alpha_i y_i$, where $\balpha\in \Delta_n$ is a weight vector such that $ \alpha_i = \frac{\left( \lambda_k-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda_k \right\} }{\sum_{i=1}^n \left( \lambda_k-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda_k \right\} } $
\[thm:alg\] Algorithm \[algorithm:KstarNN\] finds the exact solution of $\mathbf{(P2)}$ within $k^*$ iterations, with an $O(k^{*}+\mathcal{C}_{\text{sortNN}})$ running time.
Denote by $\balpha^*$ the optimal solution of $\mathbf{(P2)}$, and by $k^*$ the corresponding number of nonzero weights. By Corollary \[cor:Main\], these $k^*$ nonzero weights correspond to the $k^*$ smallest values of $\bbeta$. Thus, we are left to show that (1) the optimal $\lambda$ is of the form calculated by the algorithm; and (2) the algorithm halts after exactly $k^*$ iterations and outputs the optimal solution.
Let us first find the optimal $\lambda$. Since the non-zero elements of the optimal solution correspond to the $k^*$ smallest values of $\bbeta$, then Equation is equivalent to the following quadratic equation in $\lambda$: $$\begin{aligned}
k^*\lambda^2 - 2\lambda\sum_{i=1}^{k^*}\beta_i + \left( \sum_{i=1}^{k^*}\beta_i^2-1 \right) =0 .\end{aligned}$$ Solving for $\lambda$ and neglecting the solution that does not agree with $\alpha_i\geq 0,\;\forall i\in[n]$, we get $$\begin{aligned}
\label{eq:lambda}
\lambda = \frac{1}{k^*}\left( \sum_{i=1}^{k^*} \beta_i + \sqrt{ k^* + \left( \sum_{i=1}^{k^*} \beta_i \right)^2 - k^* \sum_{i=1}^{k^*} \beta_i^2 } \right)~.\end{aligned}$$ The above implies that given $k^*$, the optimal solution (satisfying KKT) can be directly derived by a calculation of $\lambda$ according to Equation and computing the $\alpha_i$’s according to Equation . Since Algorithm \[algorithm:KstarNN\] calculates $\lambda$ and $\balpha$ in the form appearing in Equations and respectively, it is therefore sufficient to show that it halts after exactly $k^*$ iterations in order to prove its optimality. The latter is a direct consequence of the following conditions:
1. Upon reaching iteration $k^*$ Algorithm \[algorithm:KstarNN\] necessarily halts.
2. For any $k\leq k^*$ it holds that $\lambda_k \in \reals$.
3. For any $k<k^*$ Algorithm \[algorithm:KstarNN\] does not halt.
Note that the first condition together with the second condition imply that $\lambda_k$ is well defined until the algorithm halts (in the sense that the $``>"$operation in the **while** condition is meaningful). The first condition together with the third condition imply that the algorithm halts after exactly $k^*$ iterations, which concludes the proof. We are now left to show that the above three conditions hold:
**Condition (1):** Note that upon reaching $k^*$, Algorithm \[algorithm:KstarNN\] necessarily calculates the optimal $\lambda=\lambda_{k^*}$. Moreover, the entries of $\balpha^*$ whose indices are greater than $k^*$ are necessarily zero, and in particular, $\alpha_{k^*+1}^*=0$. By Equation , this implies that $\lambda_{k^*}\leq \beta_{k^*+1}$, and therefore the algorithm halts upon reaching $k^*$.
In order to establish conditions (2) and (3) we require the following lemma:
\[lem:lambda\_kOpt\] Let $\lambda_k$ be as calculated by Algorithm \[algorithm:KstarNN\] at iteration $k$. Then, for any $k\leq k^*$ the following holds: $$\begin{aligned}
\lambda_k = \min_{\balpha\in \Delta_n^{(k)} }\left( \| \balpha \|_2 + \balpha^\top \bbeta\right),\;
\text{ where } \Delta_n^{(k)} = \{ \balpha\in \Delta_n : \alpha_i = 0,\; \forall i>k\} \end{aligned}$$
The proof of Lemma \[lem:lambda\_kOpt\] appears in Appendix \[sec:Proof\_lem:lambda\_kOpt\]. We are now ready to prove the remaining conditions.
**Condition (2):** Lemma \[lem:lambda\_kOpt\] states that $\lambda_k$ is the solution of a convex program over a nonempty set, therefore $\lambda_k\in\reals$.
**Condition (3):** By definition $\Delta_{n}^{(k)}\subset \Delta_n^{(k+1)}$ for any $k < n$. Therefore, Lemma \[lem:lambda\_kOpt\] implies that $\lambda_{k}\geq \lambda_{k+1}$ for any $k<k^*$ (minimizing the same objective with stricter constraints yields a higher optimal value). Now assume by contradiction that Algorithm \[algorithm:KstarNN\] halts at some $k_0<k^*$, then the stopping condition of the algorithm implies that $\lambda_{k_0}\leq \beta_{k_0+1}$. Combining the latter with $\lambda_{k} \geq \lambda_{k+1},\; \forall k\leq k^*$, and using $\beta_k\leq \beta_{k+1},\; \forall k\leq n$, we conclude that: $$\begin{aligned}
\lambda_{k^*}\leq \lambda_{k_{0}+1}\leq \lambda_{k_0}\leq \beta_{k_0+1}\leq \beta_{k^*}~.\end{aligned}$$ The above implies that $\alpha_{k^*}=0$ (see Equation ), which contradicts Corollary \[cor:Main\] and the definition of $k^*$.
#### Running time:
Note that the main running time burden of Algorithm \[algorithm:KstarNN\] is the calculation of $\lambda_k$ for any $k\leq k^*$. A naive calculation of $\lambda_k$ requires an $O(k)$ running time. However, note that $\lambda_k$ depends only on $\sum_{i=1}^k\beta_i$ and $\sum_{i=1}^k \beta_i^2$. Updating these sums incrementally implies that we require only $O(1)$ running time per iteration, yielding a total running time of $O(k^*)$. The remaining $O(\mathcal{C}_{\text{sortNN}})$ running time is required in order to calculate the (sorted) $\bbeta$.
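Putting the pieces together, a minimal NumPy implementation of Algorithm \[algorithm:KstarNN\] could look as follows; this is an illustrative sketch assuming a Euclidean distance metric (the function name and interface are not taken from the paper), with the running sums of $\beta_i$ and $\beta_i^2$ giving the $O(1)$ per-iteration update discussed above.

```python
import numpy as np

def k_star_nn_predict(X, y, x0, L_over_C):
    """Adaptive weighted prediction at x0 (sketch of the k*-NN greedy algorithm)."""
    dists = np.linalg.norm(X - x0, axis=1)          # Euclidean distances (assumed metric)
    order = np.argsort(dists)
    beta = L_over_C * dists[order]                  # sorted beta_i = (L/C) d(x_i, x0)
    n = len(beta)

    lam, k, s1, s2 = beta[0] + 1.0, 0, 0.0, 0.0     # lambda_0 = beta_1 + 1
    # Greedily add neighbors while lambda_k > beta_{k+1}.
    while k < n and lam > beta[k]:
        k += 1
        s1 += beta[k - 1]
        s2 += beta[k - 1] ** 2
        lam = (s1 + np.sqrt(k + s1**2 - k * s2)) / k

    # Weights alpha_i proportional to (lambda - beta_i)_+, as in Theorem 1.
    w = np.maximum(lam - beta, 0.0)
    w /= w.sum()
    return np.dot(w, y[order]), k                   # prediction and adaptive k*
```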
Special Cases
-------------
The aim of this section is to discuss two special cases in which the bound of our algorithm coincides with familiar bounds in the literature, thus justifying the relaxed objective of $\mathbf{(P2)}$. We present here only a high-level description of both cases, and defer the formal details to the full version of the paper.
The solution of $\mathbf{(P2)}$ is a high-probability upper bound on the true prediction error $ \left| \sum_{i=1}^n \alpha_i y_i - f(x_0) \right| $. Two interesting cases to consider in this context are $\beta_i = 0$ for all $i \in [n] $, and $\beta_1 = \ldots = \beta_n = \beta > 0$. In the first case, our algorithm includes all labels in the computation of $\lambda$, thus yielding a confidence bound of $2 C \lambda = 2 b \sqrt{ (2 / n) \log \left( 2 / \delta \right) }$ for the prediction error (with probability $1-\delta$). Not surprisingly, this bound coincides with the standard Hoeffding bound for the task of estimating the mean value of a given distribution based on noisy observations drawn from this distribution. Since the latter is known to be tight (in general), so is the confidence bound obtained by our algorithm. In the second case as well, our algorithm will use all data points to arrive at the confidence bound $2 C \lambda = 2 L d + 2 b \sqrt{ (2 / n) \log \left( 2 / \delta \right) }$, where we denote $d(x_1,x_0)= \ldots = d(x_n,x_0) = d$. The second term is again tight by concentration arguments, whereas the first term cannot be improved due to the Lipschitz property of $f(\cdot)$, thus yielding an overall tight confidence bound for our prediction in this case.
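As a quick numerical illustration of these two bounds (with arbitrary toy values for $n$, $b$, $L$, $d$ and $\delta$, and with $\lambda=1/\sqrt{n}$ and $\lambda=Ld/C+1/\sqrt{n}$ obtained from the quadratic condition on $\lambda$ in the two special cases), one can check that $2C\lambda$ reduces to the expressions quoted above:

```python
import numpy as np

n, b, L, d, delta = 200, 1.0, 1.0, 0.3, 0.05       # assumed toy values
C = b * np.sqrt(2.0 * np.log(2.0 / delta))

# Case 1: beta_i = 0 for all i  ->  lambda = 1/sqrt(n)
print(2.0 * C / np.sqrt(n),
      2.0 * b * np.sqrt((2.0 / n) * np.log(2.0 / delta)))                # identical values

# Case 2: d(x_i, x0) = d for all i  ->  lambda = L*d/C + 1/sqrt(n)
print(2.0 * C * (L * d / C + 1.0 / np.sqrt(n)),
      2.0 * L * d + 2.0 * b * np.sqrt((2.0 / n) * np.log(2.0 / delta)))  # identical values
```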
Experimental Results {#sec:Experiments}
====================
The following experiments demonstrate the effectiveness of the proposed algorithm on several datasets. We start by presenting the baselines used for the comparison.
Baselines
---------
#### The standard $\mathbf{k}$-NN:
Given $k$, the standard ${k}$-NN finds the $k$ nearest data points to $x_0$ (assume without loss of generality that these data points are $x_1,\ldots,x_k$), and then estimates $\hat{f}(x_0) = \frac{1}{k} \sum_{i=1}^k y_i $.
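In code, this baseline amounts to a few lines (a sketch assuming Euclidean distances):

```python
import numpy as np

def knn_predict(X, y, x0, k):
    # Average the labels of the k points closest to x0.
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    return y[idx].mean()
```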
#### The Nadaraya-Watson estimator:
This estimator assigns the data points with weights that are proportional to some given similarity kernel $K:\reals^d \times \reals^d \mapsto \reals_{+}$. That is, $$\begin{aligned}
\hat{f}(x_0) =\frac{ \sum_{i=1}^n K(x_i,x_0) y_i}{\sum_{i=1}^n K(x_i,x_0)} .\end{aligned}$$ Popular choices of kernel functions include the Gaussian kernel $K(x_i,x_j) = \frac{1}{\sigma} e^{-\frac{\|x_i-x_j \|^2}{2\sigma^2}}$; Epanechnikov Kernel $K(x_i,x_j) = \frac{3}{4} \left(1-\frac{\|x_i-x_j \|^2}{\sigma^2}\right)\1_{\left\{\|x_i-x_j \|\leq \sigma \right\}}$; and the triangular kernel $K(x_i,x_j) = \left(1-\frac{\|x_i-x_j \|}{\sigma}\right)\1_{\left\{\|x_i-x_j \|\leq \sigma \right\}}$. Due to lack of space, we present here only the best performing kernel function among the three listed above (on the tested datasets), which is the Gaussian kernel.
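A corresponding sketch of the Nadaraya-Watson baseline with the Gaussian kernel (the $1/\sigma$ normalization of the kernel cancels in the ratio):

```python
import numpy as np

def nadaraya_watson_predict(X, y, x0, sigma):
    sq_dists = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-sq_dists / (2.0 * sigma**2))        # Gaussian kernel weights
    return np.dot(w, y) / w.sum()
```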
Datasets
--------
In our experiments we use 8 real-world datasets, all available in the UCI repository website (<https://archive.ics.uci.edu/ml/>). In each of the datasets, the feature vector consists of real values only, whereas the labels take different forms: in the first 6 datasets (QSAR, Diabetes, PopFailures, Sonar, Ionosphere, and Fertility), the labels are binary $y_i \in \{0,1\}$. In the last two datasets (Slump and Yacht), the labels are real-valued. Note that our algorithm (as well as the other two baselines) applies to all datasets without requiring any adjustment. The number of samples $n$ and the dimension of each sample $d$ are given in Table \[t1\] for each dataset.
Experimental Setup
------------------
We randomly divide each dataset into two halves (one used for validation and the other for test). On the first half (the validation set), we run the two baselines and our algorithm with different values of $k$, $\sigma$ and $L/C$ (respectively), using $5$-fold cross validation. Specifically, we consider values of $k$ in $\{1,2,\ldots,10\}$ and values of $\sigma$ and $L/C$ in $\{ 0.001 , 0.005 , 0.01 , 0.05 , 0.1 , 0.5 , 1 , 5 , 10\}$. The best values of $k$, $\sigma$ and $L/C$ are then used in the second half of the dataset (the test set) to obtain the results presented in Table \[t1\]. For our algorithm, the range of $k$ that corresponds to the selection of $L/C$ is also given. Notice that we present here the average absolute error of our prediction, as a consequence of our theoretical guarantees.
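The parameter selection can be sketched as a plain grid search with $5$-fold cross validation; the snippet below is illustrative only (`predict_fn` stands for any of the three estimators, and its interface is an assumption rather than the code actually used for the experiments):

```python
import numpy as np
from sklearn.model_selection import KFold

def tune_parameter(X, y, candidates, predict_fn, n_folds=5):
    """Return the candidate value minimizing the mean absolute prediction error."""
    mean_errors = []
    for value in candidates:
        fold_errors = []
        for train, test in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
            preds = np.array([predict_fn(X[train], y[train], x0, value) for x0 in X[test]])
            fold_errors.append(np.mean(np.abs(preds - y[test])))
        mean_errors.append(np.mean(fold_errors))
    return candidates[int(np.argmin(mean_errors))]

# e.g. candidates = range(1, 11) for k, or
#      [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10] for sigma and L/C
```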
Results and Discussion
----------------------
As evidenced by Table \[t1\], our algorithm outperforms the baselines on $7$ (out of $8$) datasets, where on $3$ datasets the outperformance is significant. It can also be seen that whereas the standard $k$-NN is restricted to choose one value of $k$ per dataset, our algorithm fully utilizes the ability to choose $k$ adaptively per data point. This validates our theoretical findings, and highlights the advantage of adaptive selection of $k$.
Conclusions and Future Directions {#sec:Conclusion}
=================================
We have introduced a principled approach to locally weighted optimal estimation. By explicitly phrasing the bias-variance tradeoff, we defined the notion of optimal weights and optimal number of neighbors per decision point, and consequently devised an efficient method to extract them. Note that our approach could be extended to handle multiclass classification, as well as scenarios in which predictions of different data points correlate (and we have an estimate of their correlations). Due to lack of space we leave these extensions to the full version of the paper.
A shortcoming of current non-parametric methods, including our $k^*$-NN algorithm, is their limited geometrical perspective. Concretely, all of these methods only consider the distances between the decision point and dataset points, i.e., $\{ d(x_0,x_i)\}_{i=1}^n$, and *ignore* the geometrical relation between the dataset points, i.e., $\{ d(x_i,x_j)\}_{i,j=1}^n$. We believe that our approach opens an avenue for taking advantage of this additional geometrical information, which may have a great effect on the quality of our predictions.
Hoeffding’s Inequality
======================
Let $\{ \epsilon_i \}_{i=1}^n \in [L_i,U_i]^n$ be a sequence of independent random variables, such that $\mathbb{E} \left[ \epsilon_i \right] = \mu_i$. Then, it holds that $$\mathbb{P} \left( \left| \sum_{i=1}^n \epsilon_i - \sum_{i=1}^n \mu_i \right| \geq \varepsilon \right) \leq 2e^{-\frac{2 \varepsilon^2}{\sum_{i=1}^n (U_i - L_i)^2} } .$$
Proof of Lemma \[lem:lambda\_kOpt\] {#sec:Proof_lem:lambda_kOpt}
===================================
First note that for $k=k^*$ the lemma holds immediately by Theorem \[thm:Main\]. In what follows, we establish the lemma for $k<k^*$. Thus, set $k$, let $\Delta_n^{(k)} = \{ \balpha\in \Delta_n : \alpha_i = 0,\; \forall i>k\}$, and consider the following optimization problem: $$\begin{aligned}
\min_{\balpha\in \Delta_n^{(k)} }\left( \| \balpha \|_2 + \balpha^\top \bbeta\right)~ \qquad \mathbf{(P2_k)}.\end{aligned}$$ Similarly to the proof of Theorem \[thm:Main\] and Corollary \[cor:Main\], it can be shown that there exists $\bar{k}\leq k$ such that the optimal solution of $\mathbf{(P2_k)}$ is of the form $(\alpha_1, \ldots,\alpha_{\bar{k}},0\ldots,0)$, where $\alpha_i>0, \; \forall i\leq \bar{k}$. Moreover, given $\bar{k}$ it can be shown that the value of $\mathbf{(P2_k)}$ at the optimum equals $\lambda$, where $$\begin{aligned}
\lambda = \frac{1}{\bar{k} }\left( \sum_{i=1}^{\bar{k}} \beta_i + \sqrt{ \bar{k} + \left( \sum_{i=1}^{\bar{k}} \beta_i \right)^2 - \bar{k} \sum_{i=1}^{\bar{k}} \beta_i^2 } \right) ~,\end{aligned}$$ which is of the form calculated in Algorithm \[algorithm:KstarNN\]. The above implies that showing $\bar{k}=k$ concludes the proof. Now, assume by contradiction that $\bar{k}<k$, then it is immediate to show that the resulting solution of $\mathbf{(P2_k)}$ also satisfies the KKT conditions of the original problem $\mathbf{(P2)}$, and is therefore an optimal solution to $\mathbf{(P2)}$. However, this stands in contradiction to the fact that $\bar{k}< k^*$, and thus it must hold that $\bar{k}=k$, which establishes the lemma.
[^1]: The Voleon Group. Email: `oren@voleon.com`.
[^2]: Department of Computer Science, ETH Zürich. Email: `yehuda.levy@inf.ethz.ch`.
[^3]: Note that our analysis holds for both setups of classification/regression. For brevity we use a *classification* task terminology, relating to the $y_i$’s as *labels*. Our analysis extends directly to the regression setup.
[^4]: Note that $\mathbf{(P2)}$ is not strongly-convex, and therefore the polynomial dependence on $1/\epsilon$ rather than $\log(1/\epsilon)$ for first order methods. Other methods such as the Ellipsoid depend logarithmically on $1/\epsilon$, but suffer a worse dependence on $n$ compared to first order methods.
|
---
abstract: 'Multi-object manipulation problems in continuous state and action spaces can be solved by planners that search over sampled values for the continuous parameters of operators. The efficiency of these planners depends critically on the effectiveness of the samplers used, but effective sampling in turn depends on details of the robot, environment, and task. Our strategy is to learn functions called *specializers* that generate values for continuous operator parameters, given a state description and values for the discrete parameters. Rather than trying to learn a single specializer for each operator from large amounts of data on a single task, we take a [*modular meta-learning*]{} approach. We train on multiple tasks and learn a variety of specializers that, on a new task, can be quickly adapted using relatively little data – thus, our system *learns quickly to plan quickly* using these specializers. We validate our approach experimentally in simulated 3D pick-and-place tasks with continuous state and action spaces. Visit `http://tinyurl.com/chitnis-icra-19` for a supplementary video.'
author:
- |
**Rohan Chitnis Leslie Pack Kaelbling Tomás Lozano-Pérez**\
\
MIT Computer Science and Artificial Intelligence Laboratory\
`{ronuchit, lpk, tlp}@mit.edu` [^1]
bibliography:
- 'references.bib'
title: '**Learning Quickly to Plan Quickly Using Modular Meta-Learning**'
---
Acknowledgments {#acknowledgments .unnumbered}
===============
We gratefully acknowledge support from NSF grants 1420316, 1523767, and 1723381; from AFOSR grant FA9550-17-1-0165; from Honda Research; and from Draper Laboratory. Rohan is supported by an NSF Graduate Research Fellowship. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.
[^1]: Presented at the 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada.
|
---
author:
- 'A. Rocchi'
- ', R. Cardarelli'
- ', B. Liberti'
- ', G. Aielli'
- ', E. Alunno Camelia'
- ', P. Camarri'
- ', M. Cirillo'
- ', A. Di Ciaccio'
- ', L. Di Stante'
- ', M. Lucci'
- ', E. Pastori'
- ', L. Pizzimento'
- ', G. Proto'
- ', E. Tusi'
- ', and R. Santonico'
title: 'Linearity and rate capability measurements of RPC with semi-insulating crystalline electrodes operating in avalanche mode'
---
Introduction {#sec:intro}
============
RPC detectors are widely used in high-energy physics experiments because of their excellent intrinsic time resolution and response. The concept behind the detection mechanism allows the production of mechanically robust and cost-effective detectors [@Sant]. For this reason this kind of detector has been used mainly to cover large-area experiments. The front-end electronics upgrade and the progress in the gas gap design have improved the RPC rate capability up to $10\;kHz/cm^2$ [@Atlas]. Nevertheless, the total charge integrated during detector operation produces an increase of the electrode bulk resistivity, resulting in a significant rate capability degradation [@ageing]. For this reason, RPC detectors should work at a fraction of their intrinsic rate capability, depending on the experiment conditions and lifetime. It can be concluded that the effective rate capability of RPC detectors is limited mainly by the electrode ageing problem, and that new materials should be investigated.
The RPC signal charge does not show a sharp distribution when the detector is operated in a low-gain avalanche mode: the detector response is not proportional to the energy released by a single detected particle. If the number of synchronous interacting particles becomes high, the central limit theorem can be applied and the average signal charge is proportional to the number of bunched particles. This feature was exploited in the past to measure extensive air showers, paving the way for calorimetry with RPC detectors [@Iacovacci]. In recent years many proposals have been made to use RPCs as the active area of calorimeters in both cosmic-ray and collider experiments [@Iuppa] [@DHCAL]. The RPC linear limit was measured only for the streamer operating mode. In the saturated avalanche mode, the signal charge shows large fluctuations with respect to the average value, but the linear limit should increase proportionally to the average charge reduction. RPCs operated in saturated avalanche mode, therefore, could be exploited in those situations where the synchronous particle density is very high.
Detector design
===============
In this paper the results obtained with RPCs with semi-insulating Gallium Arsenide (SI-GaAs) electrodes, designed for high-energy physics, are described. The electrodes were produced by ITME [@ITME] and all the technical specifications are listed in table \[tab:GaAs2\_spec\]. Compared to high pressure laminate, SI-GaAs has a crystalline structure with a high carrier mobility, so that the bulk conductance is limited only by the very low free-carrier concentration. The SI-GaAs bulk resistivity, two orders of magnitude lower than that of high pressure laminate, leads to better rate capability performance. The electrode holder consists of three parts: a spacer and two frames. The spacer hosts the gas inlet, the gas outlet and the electrodes, keeping them spaced $1\;mm$ apart. The frames push the electrodes onto the spacer. The holder dimensions are $10\;cm\times10\;cm\times0.8\;cm$ and the detector active area is $25\;cm^2$. The device is hosted in a watertight Aluminium box that shields it from electromagnetic noise.\
The electrode contacts are made by sputtering Aluminium on the wafer surface and consist of four pads on both the high-voltage and ground sides. Each pad has an area of $\sim6.25\;cm^2$. The pad bonding is made with a silver paint drop. On the high-voltage side the pads are connected in parallel; on the ground side, instead, each pad is connected to the ground plate through a $100\;k\Omega$ resistor. A picture of the detector is shown in figure \[Prot2\_pic\].
![[]{data-label="Prot2_pic"}](GaAs2){width="6.cm"}
  [Table: technical specifications of the SI-GaAs electrodes produced by ITME]

\[tab:GaAs2\_spec\]
Linearity measurements at BTF (LNF)
===================================
This linearity study is based on the tests performed by M. Iacovacci and S. Mastroianni at the BTF of Laboratori Nazionali di Frascati, in which the intrinsic linearity of bakelite RPCs operated in streamer mode was measured at different impinging particle densities [@Iacovacci]. The upper limit of the linearity response is expected to increase by at least an order of magnitude with the streamer-to-avalanche transition, owing to the reduction of the average charge. The linearity response was studied using a secondary electron beam with an energy of $250\;MeV$ produced at the BTF of Laboratori Nazionali di Frascati [@BTF]. The bunch multiplicity was varied in a range between $1\;particle/bunch$ and $\sim300\;particles/bunch$. The mechanical frame holding the detectors is shown in figure \[btf\_setup\]. The detector used for this measurement is the one tagged with the number two. Detectors provided by the facility were used to control the beam properties: the Medipix detector worked as a real-time monitor of the beam position and intensity, while the BTF lead glass Cherenkov calorimeter measured the beam intensity. The premixed gas mixture consists of $95\%$ C$_{2}$H$_{2}$F$_{4} + 4.5\%$ iC$_{4}$H$_{10} + 0.5\%$ SF$_{6}$ and was stored in a small $8\;l$ container placed at the base of the trolley. An ATLAS-like RPC detector served as a reference to monitor the gas conditions. The signals of the frontal plastic scintillator were acquired to improve the geometric acceptance in the off-line analysis. The total interacting material in front of the calorimeter amounts to $\sim0.1$ radiation lengths. The beam profile had a Gaussian shape with $\sigma_y = 0.8\;mm$ in the vertical direction and $\sigma_x = 2.5\;mm$ in the horizontal direction. In the test reported in [@Iacovacci], the beam spot had the same dimensions as the vacuum pipe ($5\times3\;cm^2$) and was completely unfocused. In the test presented here this configuration could not be used because of the detector dimensions, whose pads are quarter-circles with radius $2.8\;cm$. The bunch frequency was $\sim20\;Hz$ and the bunch time width was $10\;ns$: it is assumed that the electrode material, although a key structural parameter of the detector, plays a marginal role in processes that occur on a sub-microsecond time scale. The effective high voltage was set to $6100\;V$, where the detector efficiency, without a front-end amplifier, is $85$% and the average induced prompt charge is $0.45\;pC$. The results of the intensity scan are presented in figure \[Linearity tot\], where the RPC prompt charge is plotted as a function of the multiplicity measured with the calorimeter. Different colors correspond to different acquisitions.
![[]{data-label="btf_setup"}](Setup){width="5cm"}
![[]{data-label="Linearity tot"}](Linearity_tot){width="8.5cm"}
The statistical analysis of the data set was performed by dividing the whole intensity interval into six sub-ranges centred on the average intensity values set during the scan. In order to check the linearity of the response locally, a linear fit of the RPC average charge was performed for each sub-range. The fit results are shown in figure \[linearity\_fit\]. The p-value calculated from the $\chi^2$ of the linear fit of each sub-range is smaller than $0.05$, the limit adopted for the $95$% confidence level, and on this basis the linear model is considered consistent with the data sets. As the last step of the statistical analysis, the agreement of the fit residuals, defined as the normalized difference between the observed and the fitted values, with a Gaussian distribution was verified.
![[]{data-label="linearity_fit"}](linearity_fit){width="15cm"}
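The per-sub-range check just described can be reproduced with a short script of the following kind (a sketch, not the original analysis code: the use of SciPy, the variable names and the weighted least-squares implementation are our own assumptions).

```python
import numpy as np
from scipy import stats

# Sketch of the per-sub-range linearity check.  'mult' is the bunch multiplicity
# measured by the calorimeter, 'charge' the RPC prompt charge and 'sigma' its
# uncertainty, restricted to one of the six sub-ranges.
def linearity_check(mult, charge, sigma):
    w = 1.0 / sigma**2
    A = np.vstack([mult, np.ones_like(mult)]).T
    # weighted least-squares straight line: charge = slope * mult + intercept
    coeff, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], charge * np.sqrt(w), rcond=None)
    slope, intercept = coeff
    model = slope * mult + intercept
    chi2 = np.sum(((charge - model) / sigma) ** 2)
    ndof = len(mult) - 2
    p_value = stats.chi2.sf(chi2, ndof)       # probability of obtaining a larger chi2
    residuals = (charge - model) / sigma      # normalized residuals, checked for Gaussianity
    return slope, intercept, p_value, residuals
```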
The slope extracted from the linear fit of each sub-range does not show any dependence on the particle multiplicity. The intercept, instead, increases systematically with the sub-range index. This result could be caused by systematic errors related to small variations of experimental parameters that were not tracked during the data acquisition, such as the reference calorimeter high voltage and fluctuations of the beam position. The Gaussian shape of the beam profile does not allow a constant particle density to be defined. A lower limit for the particle density is therefore obtained by approximating the Gaussian distribution with a uniform distribution within a $2\,\sigma$ range centred on the beam centre. The considered region is enclosed by an ellipse with major and minor semi-axes $\sigma_x$ and $\sigma_y$ respectively, and has an area of $\sim 6.28\;mm^2$. An electron has a $\sim47$% probability of falling within this region; therefore, for a multiplicity of $300\;particles/bunch$, the particle density is $\sim 22\times 10^6\;particles/m^2$. It can be concluded that the RPC detector, operated in avalanche mode, has a linear response in each measured multiplicity sub-range, with a confidence level greater than 95%, up to $\sim22\times 10^6\;particles/m^2$.
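The particle-density lower limit quoted above follows from a simple calculation, sketched below with the numbers given in the text (the 47% containment fraction is taken as quoted).

```python
import numpy as np

# Sketch of the particle-density lower limit at the highest multiplicity.
sigma_x, sigma_y = 2.5, 0.8                 # beam widths in mm
area = np.pi * sigma_x * sigma_y            # ellipse with semi-axes sigma_x, sigma_y: ~6.28 mm^2
containment = 0.47                          # fraction of electrons inside the region, as quoted
multiplicity = 300                          # particles per bunch

density = multiplicity * containment / area * 1e6   # particles per m^2
print(f"{density:.1e} particles/m^2")                # ~2.2e7, i.e. ~22e6 particles/m^2
```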
Rate Capability measurement at GIF++ (EHN1)
===========================================
The rate capability measurement was performed at the Gamma Irradiation Facility located in the Experimental Hall North 1 of CERN. The radiation field was produced by a $14.9\;TBq$ $^{137}$Cs source and could be attenuated with absorption filters of different values. The test was performed by placing the detector $1.5\;m$ in front of the source on the downstream side and measuring the total current and the counting rate while varying the absorption filters. An ATLAS-like RPC was placed beside the detector under test in order to compare the current response. The readout scheme for the counting rate measurement is shown in figure \[Rate\_cap\]. The signals from the four pads were amplified with a low noise front-end charge amplifier [@Cardarelli:FE], discriminated, combined with a logic OR, and counted with a scaler over a $50\;s$ time interval. The discriminator threshold was set to $\sim 17\;fC$ and the NIM signals were shaped with a $200\;ns$ time width.
![[]{data-label="Rate_cap"}](Rate_setup){width="10cm"}
The measurement was repeated for three different values of the absorber. The facility simulation [@gif_irad] shows a maximum photon flux of $\sim10^7\;cm^{-2}s^{-1}$ with the filters in position $1$ (no absorber); therefore, assuming an RPC gamma conversion factor of $5\times10^{-3}$, the maximum detectable rate was $5\times10^4\; cm^{-2}s^{-1}$. The current measured without absorber was divided by that measured for the different absorption factors. Relation \[eq\_abs1\] shows how, at fixed average charge $\bar{Q}$, the ratio between the currents $I$ measured with different absorption factors should be consistent with the ratio between the absorption factors. A deviation of the current ratios from the theoretical absorption factor ratios indicates that the counting rate saturates at a value lower than the effective particle flux $\Phi$. The same considerations hold for the ratios between the counting rates measured at different absorption factors.
$$\frac{I_{ABS1}}{I_{ABSn}}=\frac{\Phi_{ABS1}\bar{Q}}{\Phi_{ABSn}\bar{Q}}=\frac{n\Phi_{ABSn}}{\Phi_{ABSn}}=n
\label{eq_abs1}$$
The current ratios (circles) and the counting rate ratios (triangles) as a function of the high voltage working point are shown in figure \[Rate1\]. The detector under test shows a good response for the first three high voltage working points, where the current and counting rate ratios are consistent with the absorption factor ratios. The counting rate measurements are shown in figure \[Rate2\]: the maximum measured counting rate is $36\times 10^3\;cm^{-2}s^{-1}$. The rate measured at $5870\;V$ without absorber is $34\times10^3\;cm^{-2}s^{-1}$. With the source absorption factor set to $2.2$ the counting rate is $17.5\times 10^3\;cm^{-2}s^{-1}$, which corresponds to a saturation of $\sim10\%$. With the source absorption factor set to $4.6$ the counting rate is $9.5 \times 10^3\;cm^{-2}s^{-1}$, which corresponds to a saturation of $\sim22\%$.
![[]{data-label="Rate1"}](Rate_Current_ABS){width="7.3cm"}
![[]{data-label="Rate2"}](Rate){width="7.5cm"}
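The saturation figures can be obtained directly from relation (\[eq\_abs1\]) and the measured rates; a sketch of the calculation is given below (the numbers are those quoted in the text).

```python
# Sketch of the saturation estimate from the counting rates measured at 5870 V.
rate_no_abs = 34e3                                  # cm^-2 s^-1, no absorber
for abs_factor, rate in [(2.2, 17.5e3), (4.6, 9.5e3)]:
    measured_ratio = rate_no_abs / rate
    saturation = 1.0 - measured_ratio / abs_factor  # deviation from the expected ratio
    print(f"ABS {abs_factor}: ratio {measured_ratio:.2f} vs {abs_factor} -> "
          f"saturation ~{saturation:.0%}")
# output is close to the ~10% and ~22% saturation quoted in the text
```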
The developed detector is capable of counting photons at rates up to $\sim30\times 10^3\;cm^{-2}s^{-1}$. This value is significantly limited by the high discrimination threshold, set to $\sim 17\;fC$; nevertheless, the result is more than four times higher than the measured rate capability of $1\;mm$ gas gap HPL-electrode RPCs with a $\sim3\;fC$ threshold [@Atlas]. The saturation effect must be investigated further with more accurate measurements.
Conclusions
===========
RPCs with semi-insulating GaAs electrodes have proved simple and cheap to build. The excellent wafer surface uniformity allows this material to be used without a linseed oil coating. The linearity limit of the RPC operated in avalanche mode was studied, and it can be claimed, with a confidence level above $95\%$, that the RPC response is linear in each considered sub-range up to $\sim 22\times 10^6\;particles/m^2$. The maximum measured counting rate is $\sim37\;kHz/cm^2$ when the detector works in a uniform gamma radiation field whose flux is $\sim10^4\;kHz/cm^2$. The ratios between the currents and the counting rates measured with different background absorption factors agree with the expected values for working voltages up to $5700\;V$ and with the discrimination threshold set to $\sim17\;fC$. A saturation of $22\%$ was observed at a working voltage of $5870\;V$, where the detector efficiency was measured to be $85\%$. The discrimination threshold could be lowered to $\sim3\;fC$ by improving the electromagnetic shielding, so that the efficiency knee could be moved down to at least $5600\;V$, where no saturation effects were observed.
[99]{}
R. Santonico and R. Cardarelli, *Development of Resistive Plate Counters*, *NIM section A* [**187**]{} (1981) 377-380.
The ATLAS Collaboration, *Technical Design Report for the Phase-II Upgrade of the ATLAS Muon Spectrometer: CERN-LHCC-2017-017* *ATLAS-TDR-026*.
G. Aielli et al., *RPC ageing studies*, *NIM Section A*, [**478**]{} (2002) 271-276.
R. Bartoli et al., *Linearity of bakelite Resistive Plate Chambers operated in streamer mode*, *NIM Section A*, [**947**]{} (2019) 1-6.
R. Iuppa, *Potential of RPCs in cosmic ray experiments for the next decade*, *Jinst*, [**10**]{} (2015).
J. Repond, *Resistive Plate Chambers for imaging Calorimetry-the DHCAL*, *Jinst*, [**9**]{} (2014).
Institute of Electronic Materials Technology, *http://www.itme.edu.pl/index.php?page=itme*.
B. Buonomo et al., *Performance and Upgrade of the DAFNE Beam Test Facility (BTF)*, *IEEE Trans.Nucl.Sci*, [**52**]{} (2005) 824-829.
D. Pfeiffer et al., *The radiation field in the Gamma Irradiation Facility GIF++ at CERN*, *NIM SectionA*, [**866**]{} (2017) 91-103.
R. Cardarelli et al., *Performance of RPCs and diamond detectors using a new very fast low noise preamplifier*, *Jinst*, [**8**]{} (2013).
INTRODUCTION
============
The GL-theory is widely used for studying the general properties of the superconducting state. This theory leads to two coupled non-linear equations for the order parameter $\psi$ and the magnetic field vector-potential ${\bf A}$, which are usually solved using various simplifying assumptions. In a number of papers \[2–8\] the particular case was considered of a long superconducting cylinder of radius $R$, placed in an axial magnetic field $H$. In this case the three-dimensional GL-equations reduce to a one-dimensional form, which enables one to find numerically the exact self-consistent solutions. In this way it is possible to study specific non-linear effects, as well as the role of the sample boundary. For instance, it was shown in \[6,7\] that the one-dimensional solution for the order parameter $\psi$ (with fixed vorticity $m$ and varying $H$) may change its form either gradually (in one interval of the parameters $(R,\kappa)$, where $\kappa$ is the GL-parameter) or abruptly (in another interval of $(R,\kappa)$), undergoing a I-order jump transformation. Such jump transformations may, in principle, be observable, because they are accompanied by jumps of the magnetisation $M(H)$.
In the present paper the phase boundary is found which divides the region of parameters ($R,\kappa$) where the superconducting solution (of fixed $m$) terminates (in an increasing external field) by a I-order jump to the normal state ($\psi\equiv 0$) from the region ($R,\kappa$) where the solution vanishes gradually, by a II-order phase transition. This phase boundary is a complicated function of $R$ and $\kappa$, different from the simple boundary $\kappa=1/\sqrt{2}$ which divides the I- and II-order phase transitions in infinite (open) superconductors \[9\]. Other topics are also touched upon (such as metastability, the paramagnetic Meissner effect, the pinning of vortices to the sample boundary, the linearized-equation approximation, etc.).
In Sec. II the problem is formulated and the basic GL-equations used in the calculations are written down. Sec. III contains the numerical results, along with the necessary comments. In Sec. IV the results are summarized and discussed.
EQUATIONS
=========
Below the case is considered of a long superconducting cylinder of radius $R$ in an external magnetic field $H\ge 0$ parallel to the cylinder axis. In cylindrical co-ordinates the system of GL-equations may be written in the dimensionless form \[6\] $${ {d^2U} \over {d\rho^2} } - {1\over \rho}{ {dU}\over {d\rho} } -
\psi^2 U=0, \eqno(1)$$ $${ {d^2 \psi} \over {d\rho^2} } + {1\over\rho}
{{d\psi}\over{d\rho}} + \kappa^2 (\psi - \psi^3) - { {U^2 } \over {\rho^2}
}\psi =0. \eqno(2)$$ Here $U(\rho)$ is the dimensionless field potential; $b(\rho)$ is the dimensionless magnetic field; $\psi(\rho)$ is the normalized order parameter; $\rho=r/\lambda$, where $\lambda$ is the field penetration length; $\lambda=\kappa\xi$, where $\xi$ is the coherence length and $\kappa$ is the GL-parameter. The dimensional potential $A$, field $B$ and current $j_s$ are related to the corresponding dimensionless quantities by the formulae \[6\]: $$A={ {\phi_0} \over {2\pi\lambda} }{ U + m \over \rho },\quad
B={ {\phi_0} \over {2\pi\lambda^2} }b,\quad
b={1\over \rho}{{dU}\over{d\rho}},$$ $$j(\rho)=j_s\Big/ { {c\phi_0} \over {8\pi^2\lambda^3} }=
-\psi^2 {U\over \rho},\quad \rho = {r\over \lambda}. \eqno(3)$$ (The field $B$ in (3) is normalized by $H_\lambda=\phi_0/(2\pi\lambda^2)$, with $b=B/H_\lambda$; instead of $H_\lambda$ one can normalize by $H_\xi=\phi_0/(2\pi\xi^2)$, or by $H_{\kappa} = \phi_0/(2\pi\xi\lambda) = H_\xi/\kappa$; the coefficients in (1), (2) would change accordingly.) The vorticity $m$ in (3) specifies how many flux quanta are associated with the vortex centered at the cylinder axis (the so-called giant-vortex state).
The boundary conditions to Eq. (1) are \[7\]:
$$U\big|_{\rho =0} = -m,\quad
\left. { {dU}\over{d\rho} }\right|_{\rho =\rho_1}=h_\lambda. \eqno(4)$$ where $\rho_1=R/\lambda$, $h_\lambda=H/H_\lambda$.
The boundary conditions to Eq. (2) are \[7\]: $$\left. {d\psi \over d\rho} \right|_{\rho =0} =0, \quad
\left. {d\psi \over d\rho} \right|_{\rho=\rho_1} =0 \quad (m=0);$$ $$\psi|_{\rho=0}=0,\quad
\left. { d\psi \over d\rho} \right|_{\rho=\rho_1} =0 \quad (m>0). \eqno(5)$$
The magnetic moment (or magnetisation) of the cylinder per unit volume may be written in the form $${M\over V}={1\over V}\int {B-H \over 4\pi }dv = { B_{av}-H \over 4\pi },$$ $$B_{av}={1\over V}\int B({\bf r})dv={1\over S}\Phi_1,$$ where $B_{av}$ is the mean field value inside the superconductor and $\Phi_1$ is the total magnetic flux confined in the cylinder. In the normalization (3), denoting $\overline{b}=B_{av}/H_\lambda$, $h_\lambda=H/H_\lambda$, $M_\lambda=M/H_\lambda$, one finds $$\begin{aligned}
\qquad\qquad\quad 4\pi M_\lambda=\overline{b}-h_\lambda, \quad
\overline{b}={2\over\rho_1^2}(U_1+m), \quad \quad (6) \\
\phi_1={\Phi_1 \over \phi_0}=U_1+m, \quad
U_1=U(\rho_1),\quad \rho_1={R\over\lambda}.\end{aligned}$$
Accordingly, the normalized Gibbs free energy of the system may be written as \[7\] $$\Delta g=\Delta G\Big/ \left( { H_{c{\rm m}}^2 \over 8\pi } V \right)=
g_0-{8\pi M_\lambda \over \kappa^2} h_\lambda +{4m \over \kappa^2}
{b(0)-h_\lambda \over \rho_1^2}, \eqno(7)$$ $$g_0={2\over\rho_1^2} \int_0^{\rho_1}
\rho d\rho \left[ \psi^4-2\psi^2+{1\over\kappa^2} \left(
{d\psi \over d\rho} \right)^2 \right].$$ Here $\Delta G=G_s-G_n$ is the difference of the free energies in the superconducting and normal states; $b(0)=B(0)/H_\lambda$, where $B(0)$ is the magnetic field at the cylinder axis; $H_{c{\rm m}}=\phi_0/(2\pi\sqrt{2}\lambda\xi)$ is the thermodynamic critical field of a massive superconductor; $g_0$ is the condensation energy, including the order parameter gradient term. The expressions (6), (7) may be used for calculating the corresponding quantities. \[Instead of $\rho_1$, the notation $R_\lambda=R/\lambda\equiv\rho_1$ will be used below.\]
NUMERICAL RESULTS
=================
The solutions of Eqs. (1)–(5) depend on the space co-ordinate $\rho$ and on several parameters, for instance $\psi(\rho)=\psi(m,R_\lambda,\kappa,h_\lambda;\rho)$ (and analogously for the potential $U(\rho)$ and the field $b(\rho)$). Let the vorticity $m$ be fixed ($m=0,\,1,\,2,\dots$) and consider first the case $m=0$ (the vortex-free Meissner state). Consider the plane of parameters ($R_\lambda,\kappa$) (see Fig. 1($a$)). At every point of this plane there exists a set of solutions of Eqs. (1)–(5), which depend parametrically on the external field $h_\lambda$. (Several points lying along the line $R_\lambda=4$ are numbered [*1-6*]{}.) One may envisage a peep-hole, pierced at every point ($R_\lambda,\kappa$), which allows one to see the content of the corresponding sub-volume. The set of solutions $\psi(h_\lambda;\rho)$ is unique for each sub-volume and may be characterised, for instance, by the field dependence of the maximal value of the order parameter, $\psi_{max}(h_\lambda)$, or by the form of the magnetisation curve $M_\lambda(h_\lambda)$ (6). Examples of such dependencies at different points of the plane $(R_\lambda,\kappa)$ are given in Figs. 1($b,c$) (only the case $h_\lambda\ge 0$ is considered; some illustrations for the case $h_\lambda<0$, as well as the corresponding co-ordinate dependencies, may be found in \[6,7\]).
It is clear from Fig. 1($b$) that the characteristic behavior of $\psi_{max}(h_\lambda)$ depends essentially on the value of $\kappa$. For small $\kappa$, the value $\psi_{max}(h_\lambda)$ terminates by a jump (curves [*1-4*]{}) at some point $h_\lambda=h_s$, where (as $h_\lambda$ is increased) the I-order phase transition to the normal state ($\psi(\rho)\equiv 0$) occurs. The region where the superconducting solutions terminate by a I-order phase transition is marked in Fig. 1($a$) as $s_{\rm I}$.
For larger $\kappa$ (curves [*5,6*]{}) there is also a jump in $\psi_{max}(h_\lambda)$ at some point $h_\lambda=h_s$, but a “tail” appears on the curve. If the field $h_\lambda$ increases further, the superconducting solutions [*5,6*]{} vanish gradually at the point $h_c$, by a II-order phase transition to the normal state. The region where the superconducting solutions terminate by a II-order phase transition is marked in Fig. 1($a$) as $s_{\rm II}$. \[The appearance of the tail on the magnetisation curve indicates the transition of the solution to the edge-suppressed form, see \[7\] for details.\]
It is evident that for a cylinder of small radius ($R_\lambda<1.69$) the superconducting solution terminates by a II-order phase transition, even in type-I (i.e. small $\kappa$) superconductors \[10\]. The transformation of the solutions with diminishing radius $R_\lambda$ is illustrated in Fig. 2 for $\kappa=0.7$.
Note that if the line $\kappa=1$ in Fig. 1($a$) is followed from large to small $R_\lambda$, the superconducting states lying along this line display first a II-order phase transition in the magnetic field (for larger $R_\lambda$), then a I-order one (for intermediate $R_\lambda$), and again a II-order one (for smaller $R_\lambda$). Only if $\kappa>1.05$ do all the solutions display II-order behavior.
Notice also that the state $m=0$ is totally diamagnetic ($-4\pi M_\lambda>0$).
Because at every point of the $s_{\rm II}$-region the order parameter vanishes by a II-order phase transition ($\psi_{max}\to 0$, see Fig. 1($b$)), the superconducting phase boundary in the magnetic field, $h_c$, may be found analytically, by linearizing the system (1), (2) (taking into account that $\psi\ll 1$ and $b\approx h_\lambda$) and passing to a single linear equation for the order parameter \[11\], whose solution may be expressed in terms of the Kummer functions (see also \[3,4,5,12\]). However, inside the $s_{\rm I}$-region (where the solution terminates by a jump from a finite value $\psi_{max}$ to zero) the phase boundary $h_s$ (i.e. the highest field $h_\lambda$ still compatible with superconductivity) cannot be found by solving the linearized equation, and the full system (1)–(5) is needed.
The analogous investigation can be carried out in the case $m=1$ (see Fig. 3), with a single vortex on the cylinder axis.
Fig. 3($a$) shows: the region $s_{\rm I}$, where the superconducting state terminates (as the field is increased) by a I-order jump to the normal state, with a finite value of $\psi_{max}$ at the transition point; the region $s_{\rm II}$, where the superconductivity vanishes by a II-order phase transition; and the curve $s_{\rm I-II}$, which represents the boundary between the I- and II-order phase transitions.
The behavior of the order parameter $\psi_{max}(h_\lambda)$ and of the magnetisation $M_\lambda(h_\lambda)$ at different points of the plane $(R_\lambda,\kappa)$ is shown in Figs. 3($b,c$) (and in Fig. 4). For small $\kappa$ (curves [*1,2*]{}) the solutions terminate by a I-order jump. When the line $s_{\rm I-II}$ is crossed, a tail appears on curves [*3,4*]{}, which widens as $R_\lambda$ and $\kappa$ increase. If $R_\lambda$ diminishes (Fig. 4), the magnitude of the jump in $\psi_{max}$ also diminishes, and the solutions terminate (as the field is increased) by a II-order phase transition to the normal state.
On the curve $C_{ns}$ (Fig. 3($a$)) the value $\psi_{max}=0$. The letter $n$ denotes the normal metal region ($\psi\equiv 0$); here the superconducting state ($m=1$) is impossible. \[In this region the radius $R_\lambda$ is too small, and the vortex's own field is too strong, for the vortex to be confined within the mesoscopic sample.\] It is evident that when the radius $R_\lambda$ diminishes (at fixed $\kappa$) the transition from the $s$- to the $n$-state is always a II-order phase transition; however, the width of the region between the curves $s_{\rm I-II}$ and $C_{ns}$ (where the II-order transition exists) becomes very small for small $\kappa$. The curve $C_{ns}$ may be well approximated by the dependence $R_\lambda\sim a/\kappa$ (or $R_\xi=\kappa R_\lambda=a$), with $a=1.34$.
Notice that at any point of the $s$-region in Fig. 3($a$) the magnetisation function $M_\lambda(h_\lambda)$ (Fig. 3($c$)) has two parts: a paramagnetic one ($M_\lambda>0$) and a diamagnetic one ($M_\lambda<0$). This is because the superconducting current has two components, $j_s=j_p+j_d$. One of these currents ($j_p$) screens the own field of the vortex ($m=1$) and flows around the vortex axis in the counter-clockwise direction (the paramagnetic current). The second current ($j_d$) screens out the external field $h_\lambda$ and flows near the cylinder surface in the clockwise direction (the diamagnetic current). Depending on which of these currents prevails, the magnetisation (or, equivalently, the magnetic moment $M_\lambda=(1/2c)\int [{\bf j}_s{\bf r}]dv$) can change sign as a function of $h_\lambda$ (see \[8\] for details). Recall that in the vortex-free state ($m=0$) there exists only the diamagnetic current, i.e. $M_\lambda<0$, see Fig. 1($c$). In the presence of the vortex ($m=1$), but in the absence of the external field $(h_\lambda=0)$, the screening current and the magnetic moment correspond to the paramagnetic state. This state is metastable, because the vortex-free state possesses a smaller free energy than the state $m=1$ \[2–8\].
The curve $P_0$ in Fig. 3($a$) corresponds to the minimal radius $R_\lambda$ at which the paramagnetic vortex state ($m=1$) can still exist inside the homogeneous cylinder in the absence of the field ($h_\lambda=0$). \[The metastable vortex is held inside by the pinning to the cylinder boundary.\] At those points $(R_\lambda,\kappa)$ which lie below the curve $P_0$, a finite external field, $h_\lambda>0$, must be imposed to hold the vortex inside the cylinder. (This corresponds to the field-stimulated and re-entrant superconductivity \[2–8\].) Notice that if the cylinder radius $R$ and the parameter $\kappa$ are fixed, it is sufficient to vary only the sample temperature to cross the paramagnetic pinning boundary ($P_0$), because $R_\lambda=R/\lambda(T)$.
The presence of a smooth tail in the function $\psi_{max}(h_\lambda)$ \[Figs. 3($b$) and 4($a$)\] allows one (as in the case $m=0$) to use the linear approximation ($\psi\ll 1$) for finding the upper boundary of the superconducting state, $h_c$. In the region of I-order jumps \[$s_{\rm I}$ in Fig. 3($a$)\], where the function $\psi(\rho)$ is not small, the linear approach fails and a more rigorous analysis, based on the full system of non-linear equations (1)–(5), is necessary. \[The boundaries $s_{\rm I-II}(\kappa)$ and $C_{ns}(\kappa)$ themselves cannot be found from the linear equation, because the latter does not depend on $\kappa$ \[11\]. The comparison of the results of the rigorous and linear analyses will be reported elsewhere.\]
Similarly, one can consider the higher giant-vortex states ($m>1$, see Fig. 5 for $m=2$). Here there also exist the boundaries of I- and II-order phase transitions, the jumps on the magnetisation curves, the paramagnetic and diamagnetic currents, and other peculiarities analogous to those presented in Figs. 1–4.
Conclusion and discussion
=========================
Based on the self-consistent solution of the non-linear system of GL-equations, the boundary $s_{\rm I-II}$ is found which separates the regions where the superconducting state of the cylinder is destroyed by the external magnetic field either by a I-order jump (the region $s_{\rm I}$) or gradually, by a II-order phase transition (the region $s_{\rm II}$). This boundary is a complicated function of the parameters ($m,R_\lambda,\kappa$) \[see Figs. 1($a$), 3($a$), 5\].
Note that in the case of an infinite (open) superconductor the phase boundary between I- and II-order transitions lies at the value $\kappa=1/\sqrt{2}$ \[9\]. (At this value the surface energy at the interface between the superconducting and normal metals vanishes, and the magnetisation $M(H)$ acquires a smooth tail \[9\].) However, the case of an infinite superconductor is degenerate, in the sense that the total number of vortices in the open system cannot be defined. Due to this degeneration there are many solutions of the system (1), (2) with different $m$, and it is possible to consider the superconducting state as a linear combination of states with different vorticities $m$ \[9\]. In the bounded system this degeneracy is removed, and it is necessary to consider the states of fixed vorticity $m$ (the quantum number $m$ is now a topological invariant). \[It is easy to prove that the self-consistent solution of Eqs. (1)–(5) with $\psi\ne 0$ is unique for every value of $h_\lambda$.\] The mentioned difference of the $s_{\rm I-II}$-boundary from the value $\kappa=1/\sqrt{2}$ is due to the difference in geometries and to the account of the space-quantization effects present in the bounded system. \[To trace the limiting transition from the bounded to the open geometry, it might be necessary to consider the case of a flattened elliptical cylinder, which models the geometry of an infinite slab, adopted in \[9\].\]
We mention in conclusion that the main attention in the present work was paid to the mathematical side of the problem: to describe the $s_{\rm I-II}$-boundary on the basis of formal solutions of the GL-equations. The important physical question of comparing the Gibbs free energies of the various mathematically possible states, and of finding the most stable ground state (which the system would occupy in equilibrium), was put aside. (Some illustrations of the Gibbs free energy behavior, found from Eq. (7), are given, for instance, in \[6-8\].) In justification, it may be recalled that the physical system may occupy not only the ground state, but also excited metastable states of higher energy. \[In particular, the controversial paramagnetic Meissner effect may be attributed to the metastable vortex states in the mesoscopic system, see \[8\] for details.\] The formal solutions of the GL-equations describe all possible states, both stable and metastable, thus the full analysis based on these equations may be pertinent to the experiment. However, the case of an infinitely long cylinder, considered above, approximates rather poorly the geometry used in real experiments. In this respect, the superconducting disks considered in \[4,5\] are more adequate, though it would be more difficult to obtain rigorous solutions for the bounded 3-dimensional sample. Thus, further analysis of the questions touched upon in the present paper (as well as a possible connection with experiment) is necessary.
Acknowledgments
===============
I am grateful to V. L. Ginzburg for the interest in this work and valuable discussions. I thank also F. M. Peeters and J. J. Palacios for sending the reprints of recent papers, where closely connected problems are considered.
\[1\] V.L. Ginzburg, L.D. Landau, Zh.Exp.Teor.Fyz., [**10**]{}, 1064 (1950).\
\[2\] H.J. Fink et al., Phys.Rev. [**151**]{}, 219, (1996); [**168**]{}, 168 (1968); Phys.Rev.B, [**20**]{}, 1947 (1979).\
\[3\] V.V. Moshchalkov, X.G. Qiu, V. Bruyndoncx, Phys.Rev.B[**55**]{}, 11 793 (1997).\
\[4\] J.J. Palacios, Phys.Rev.B [**58**]{}, R5948 (1998); Physica B, [**256-258**]{}, 610 (1998); Phys.Rev.Lett., [**83**]{}, 2409 (1999); [**84**]{}, 1796 (2000).\
\[5\] V.Schweigert, F.Peeters et.al., Phys.Rev.Lett., [**79**]{}, 4653 (1997); Supralatt. and Microstruct., [**25**]{}, 1195 (1999); Phys.Rev.B[**59**]{}, 6039 (1999); Physica C[**332**]{}, 266,426,255 (2000); cond-mat/0001110 (2000).\
\[6\] G.F. Zharkov, V.G. Zharkov, Physica Scripta [**57**]{}, 664 (1998); G.F. Zharkov, V.G. Zharkov, A.Yu. Zvetkov, Phys.Rev.B[**61**]{}, 12 293 (2000).\
\[7\] G.F. Zharkov, V.G. Zharkov, A.Yu. Zvetkov, “Self-consistent solutions of G–L-equations and edge-suppressed states in magnetic field”, cond-matt/0008217 (2000) (submitted to Phys.Rev.B).\
\[8\] G.F. Zharkov, “Paramagnetic Meissner effect in superconductors from self-consistent solution of GL-equations”, cond-matt/0009043 (2000) (submitted to Phys.Rev.B).\
\[9\] A.A. Abrikosov, [*Fundamentals of the Theory of Metals*]{} (North-Holland, Amsterdam, 1988).\
\[10\] V.L. Ginzburg, Zh.Exp.Teor.Fyz. [**34**]{}, 113 (1958); Soviet Phys.– JETP [**7**]{}, 78 (1958).\
\[11\] D. Saint-James, P. de Gennes, Phys.Lett. [**7**]{}, 306 (1963); D. Saint-James, Phys. Lett. [**15**]{}, 13 (1965).\
\[12\] Yu. N. Ovchinnikov, Sov. Phys. JETP, [**52**]{}, 755 (1980).\
**Figure captions**
Fig. 1. ($a$) – The boundary ($s_{\rm I-II}$) between the regions ($s_{\rm I}$ and $s_{\rm II}$) where I- or II-order phase transitions to the normal state ($\psi\equiv 0$) in a magnetic field occur. ($b$) – The behavior of $\psi_{max}(h_\lambda)$ at the points [*1-6*]{} ($m=0,\,R_\lambda=4$) in Fig. 1($a$). In the region $s_{\rm I}$ the order parameter vanishes by a I-order jump. In the region $s_{\rm II}$ the order parameter $\psi_{max}(h_\lambda)$ has a “tail” and vanishes smoothly, by a II-order phase transition. ($c$) – Analogous behavior for the magnetisation, $M_\lambda(h_\lambda)$. The peep-holes [*1-9*]{} in ($a$) are pierced at the points: [*1*]{} – $\kappa=0.2$, [*2*]{} – $\kappa=0.4$, [*3*]{} – $\kappa=0.7$, [*4*]{} – $\kappa=1$, [*5*]{} – $\kappa=1.05$, [*6*]{} – $\kappa=1.2$ ($R_\lambda=4$); [*7*]{} – $R_\lambda=3$, [*8*]{} – $R_\lambda=2$, [*9*]{} – $R_\lambda=1.5$ ($\kappa=0.7$).
Fig. 2. The dependencies: ($a$) – $\psi_{max}(h_\lambda)$ and ($b$) – $M_\lambda(h_\lambda)$ for $m=0,\,\kappa=0.7$. The numeration of the curves corresponds to the points [*3,7–9*]{} in Fig. 1($a$).
Fig. 3. Analogous to Fig. 1, but for $m=1$. The dashed curve $C_{ns}$ in Fig. 3($a$) separates the normal ($n$) and superconducting ($s$) regions. The curve $P_0$ marks the points ($R_\lambda,\kappa$) where the metastable vortex state ($m=1$) may still exist in the absence of the field ($h_\lambda=0$) due to the pinning to the boundary. Below the curve $P_0$ the vortex state may exist only in the presence of a finite external field ($h_\lambda>0$, see the curves [*1,2*]{} in Figs. 3($b,c$)). (This is an example of the field stimulation effect, or re-entrant superconductivity.) The peep-holes [*1-8*]{} in ($a$) are pierced at the points: [*1*]{} – $\kappa=0.35$, [*2*]{} – $\kappa=0.4$, [*3*]{} – $\kappa=0.7$, [*4*]{} – $\kappa=1$, [*5*]{} – $\kappa=1.07$, [*6*]{} – $\kappa=1.2$ ($R_\lambda=4$); [*7*]{} – $R_\lambda=3$, [*8*]{} – $R_\lambda=2.4$ ($\kappa=0.7$).
Fig. 4. Analogous to Fig. 2, but for $m=1$. The presence of the paramagnetic ($M_\lambda>0$) and diamagnetic ($M_\lambda<0$) parts of the magnetisation is evident in Fig. 4($b$).
Fig. 5. Analogous to Figs. 1($a$) and 3($a$), but for $m=2$. The vertical asymptote $\kappa=0.94$ is the same for $m=0,1,2$. This is natural, because for large radii ($R_\lambda\gg 1$) the influence of the vortex field is negligible. The bottom of the curve $s_{\rm I-II}$ lies at $R_\lambda=2.78$ (with $R_\lambda=2.45$ for $m=1$, and $R_\lambda=1.69$ for $m=0$). The dashed curve $C_{ns}$ is well approximated by the dependence $C_{ns}\approx 1.81/\kappa$ (the dotted line).
---
abstract: 'We have studied the accuracy and reliability of the exposure time calculator (ETC) of the Wide Field Planetary Camera 2 (WFPC2) on board the Hubble Space Telescope (HST) with the objective of determining how well it represents actual observations and, therefore, how much confidence can be invested in it and in similar software tools. We have found, for example, that the ETC gives, in certain circumstances, very optimistic values for the signal-to-noise ratio (SNR) of point sources. These values overestimate by up to a factor of 2 the HST performance when simulations are needed to plan deep imaging observations, thus bearing serious implications on observing time allocation. For this particular case, we calculate the corrective factors to compute the appropriate SNR and detection limits and we show how these corrections vary with field crowding and sky background. We also compare the ETC of the WFPC2 with a more general ETC tool, which takes into account the real effects of pixel size and charge diffusion. Our analysis indicates that similar problems may afflict other ETCs in general showing the limits to which they are bound and the caution with which their results must be taken.'
author:
- Gianluca Li Causi
- Guido De Marchi
- Francesco Paresce
title: 'On the accuracy of the S/N estimates obtained with the exposure time calculator of the Wide Field Planetary Camera 2 on board the Hubble Space Telescope'
---
Introduction
============
ETCs play an important role in modern instrument use as they allow observers to determine how to carry out specific investigations and, especially, to predict the amount of time these will require. Since the time needed for the various programmes is a very sensitive issue in the allocation process for most modern high visibility ground and space-based facilities, the accuracy of these simulators must be well understood both by the observers and the time allocation committees that must rely on their results for a fair and scientifically effective distribution of the available time. In this context, unfortunately, besides the documentation accompanying the software tools, there is practically no published information on the reliability of existing ETCs of imaging cameras.
The WFPC2 has so far been the principal imaging instrument on board the HST and is expected to remain extremely useful for imaging parallel fields even now that the Advanced Camera for Surveys (ACS) is installed on the HST. ETC software utilities, which analytically simulate the photometry of a given target for each HST instrument, are available on the STScI web site. The accuracy of these programmes plays a fundamental role in the planning of observations, in particular when extremely deep imaging is required and whenever the performances of two different instruments have to be compared.
While performing simulations for an HST proposal for the WFPC2 and the ACS, in which high accuracy was needed in order to evaluate the limiting magnitudes for deep observations of a globular cluster, we found substantial differences between the WFPC2 ETC results and real photometry obtained on archival images. We found similar differences in archival non crowded fields as well, so we decided to analyse the problem by directly comparing the ETC predictions with our photometry in various circumstances; here we show the results and the way in which they depend on field crowding. We also compare our photometry with the results of the recently published “ETC++” software [@ber01], whose calculations are based on statistical analysis tools and take into account the real effects of pixel size and charge diffusion.
The WFPC2 ETC: comparison with real point–source photometry
===========================================================
The WFPC2 ETC computes the expected SNR of a point source from its input parameters, namely: the magnitude of the star in a given spectral band, the spectral type, the filter to use, the channel of the detector (PC1 or WF2, WF3, WF4), the analogue to digital gain, the position of the star on the pixel (centre or corner), the exposure time of the whole observation (i.e. the sum of all the exposure frames) and the sky coordinates of the target [@bir96]. As of late, the option to manually select a specific value of the sky brightness has been added [@bir01].
First, the programme computes the source count rate, assuming a blackbody spectrum if the user has not specified one, and multiplies it by the response curves of the detector and filter. Then, the programme takes into account the various noise sources, including photon noise, read noise, dark noise and sky noise. The latter depends on the target position on the sky, with the sky brightening by about one magnitude from the ecliptic pole to the ecliptic plane. The programme uses the values from Table 6.4 in the WFPC2 Instrument Handbook [@biretal01] to compute the sky count rate per pixel and hence its photon noise. The contribution of the total noise to the photometry of a star depends upon the number of pixels in the point spread function (PSF) and how these pixels are weighted during data reduction. The WFPC2 ETC assumes that the data reduction employs PSF fitting photometry, so that it weights the pixels in proportion to their intensity, which maximises the SNR. The multiple read noise for a “cosmic-ray split” (CR-split) image, i.e. an image composed of many shorter frames, is then computed for a set of default splitting values, and the corresponding SNR is also given in the ETC result page.
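To make this recipe concrete, the fragment below illustrates the kind of optimally weighted SNR computation described (it is not the actual ETC source code; the pixel-level bookkeeping, variable names and the choice of noise terms are our own assumptions, and the weighting follows the standard result that, with weights proportional to the PSF intensity over the noise, the squared SNR adds over pixels).

```python
import numpy as np

# Illustrative sketch (not the actual ETC code) of a PSF-weighted SNR estimate.
# psf:       2-D array with the normalised PSF (sums to 1)
# src_rate:  source count rate in e-/s, already folded through filter and detector
# sky_rate, dark_rate: background rates in e-/s per pixel
# ron:       read noise in e- per read; nsplit: number of CR-split sub-exposures
def optimal_snr(psf, src_rate, sky_rate, dark_rate, ron, t_exp, nsplit=2):
    signal = psf * src_rate * t_exp                              # source e- per pixel
    var = signal + (sky_rate + dark_rate) * t_exp + nsplit * ron**2
    return np.sqrt(np.sum(signal**2 / var))                      # optimal (PSF-fitting-like) SNR
```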
In order to quantify the possible deviations of the WFPC2 ETC from real photometry, we performed accurate aperture photometry (using the DAOPhot package) on both crowded and non crowded archival fields. The average image used in our analysis was computed after aligning the individual frames in the dithering pattern and removing cosmic ray hits. A custom programme was used, which computes the offsets of the frames by measuring the mean displacement of the centroids of some reference stars. The task then registers all the images to the first one, creates a mask of the CR-contaminated pixels by means of an iterative sigma clipping routine with respect to the median value of the corresponding pixels in all frames, and finally computes the mean image by averaging the corresponding pixels of all the images that are not included in the CR mask. The CR-corrupted pixels in the original un-shifted frames are also replaced by the value of the corresponding pixel in the mean image, in order to allow us to perform photometry on both the combined and the individual frames. Our instrumental magnitudes were then transformed to the Johnson/Cousins UBVRI system by following the prescription of [@holt95] (specifically, their equation 8, which also takes account of the colour correction by means of the coefficients in their Table 7) and by making an optimal choice of the aperture radius for each star, so as to minimise the associated photometric error.
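The combination procedure can be summarised by the following sketch (a simplified stand-in for the custom programme: it assumes the frames have already been registered, and the clipping threshold and number of iterations are our own choices).

```python
import numpy as np

# Simplified sketch of the CR rejection and stacking described above,
# for frames already registered to a common grid (shape: n_frames x ny x nx).
def combine_frames(frames, nsigma=4.0, niter=3):
    stack = np.array(frames, dtype=float)
    mask = np.zeros(stack.shape, dtype=bool)            # True = CR-contaminated pixel
    for _ in range(niter):
        good = np.where(mask, np.nan, stack)
        med = np.nanmedian(good, axis=0)                # per-pixel median over the frames
        sig = np.nanstd(good, axis=0)
        mask |= stack > med + nsigma * sig              # iterative sigma clipping
    mean_image = np.nanmean(np.where(mask, np.nan, stack), axis=0)
    repaired = np.where(mask, mean_image, stack)        # patch CR hits in the single frames
    return mean_image, repaired, mask
```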
For the crowded field, we have used the images of the Galactic globular cluster $M\,4$ (HST proposal 5461), obtained in the F555W and F814W filters. These are deep images centered in a region at one core radius from the centre of a dense globular cluster and should be representative of the cases in which the field observed is filled with a multitude of very bright and saturated stars ($V \leq 16$), whose haloes overlap each other and cover a significant fraction of the frame (Figure \[fig1\]).
Images of the field of ARP2 taken from HST proposal 6701, also obtained through F555W and F814W filters, were used as representative of a sparsely filled region in which the field is populated with faint stars with no appreciable overlapping haloes (Figure \[fig2\]).
How we measured the SNR
-----------------------
Aperture photometry was performed on both series of images (i.e. crowded and non crowded) with the following parameters. The flux of the object was sampled within an aperture of radius $r_0$, which was varied in steps of $0.5$ pixels. The background was sampled within an annulus extending from an inner radius $r_1=r_0 + 1$ pixels to an outer radius $r_2$, with an annulus width varied from $3$ up to $20$ pixels. As discussed later in this section, an adjustable aperture radius and annulus size allow us to maximise the SNR by limiting the noise generated by the contamination of neighbouring objects. Moreover, the background was always estimated by taking the mode, rather than the mean or median, of the pixel distribution within the annulus. Appropriate aperture corrections were applied, which were measured directly from the most isolated non saturated stars in the field. A direct comparison with the encircled energy curve for the WFPC2 PSF [@biretal01] shows a perfect match, thus proving that the growth curves that we measured are reliable.
The DAOPhot task, used with the optimal aperture radius $r_0$ and the radii $r_1$ and $r_2$ for the sky annulus, gives the best estimate of both the magnitude and the associated error $\sigma_m$, from which we compute the SNR by using the equation:
$$\label{eq1}
{\rm SNR^D} = \frac{1}{1 - {\rm e}^{-\sigma_m/1.08574}}$$
that comes from inverting Pogson’s relation $\Delta m = - 2.5 \, {\rm
log}((F + \Delta F)/F)$, where the numerical constant $1.08574$ is equal to $2.5/\ln(10)$, i.e. $2.5\,{\rm log}_{10}{\rm e}$. Hereafter, the acronym ${\rm SNR^D}$ indicates the SNR estimated on the basis of the photometric error given by DAOPhot.
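For reference, the conversion of Equation \[eq1\] can be written as a one-line function (a trivial sketch; for small errors it reduces to the familiar ${\rm SNR} \simeq 1.0857/\sigma_m$).

```python
import numpy as np

# Sketch of Eq. (1): convert a DAOPhot magnitude error (in mag) into a SNR.
def snr_from_mag_error(sigma_m):
    return 1.0 / (1.0 - np.exp(-sigma_m / 1.08574))

# e.g. sigma_m = 0.01 mag -> SNR ~ 109;  sigma_m = 0.2 mag -> SNR ~ 6
```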
As an independent check, we have computed the SNR as indicated in equation 6.7 of the WFPC2 Handbook [@biretal01] which, in the practical case of observed quantities, becomes:
$$\label{eq1a}
{\rm SNR^H } = \frac{F \cdot G^{1/2} \cdot N^{1/2}}
{\sqrt{F + (S + N_R^2 / G) \cdot \pi \cdot r_0^2 + N_S}}$$
where $r_0$ is the optimal aperture radius used by DAOPhot, $N$ is the number of frames combined together, $N_R$ the read-out noise (in units of electrons) of each specific CCD, $S$ the average background per pixel inside the annulus from $r_1$ to $r_2$, in units of DN, $F$ the flux within the aperture of radius $r_0$ after subtraction of the background contribution $S\cdot \pi \cdot r_0^2$, in units of DN, and $G$ the effective gain factor, i.e. the CCD gain times the number of frames averaged together. Finally, $N_S$ is a small (although non negligible) contribution to the error affecting the estimate of the background level, which takes the form:
$$\label{1b}
N_S = \frac{(S + N_R^2 / G) \cdot \pi \cdot r_0^4}
{r_2^2 - r_1^2}$$
The computation of ${\rm SNR^H}$ makes no use of the error estimate on the magnitude or flux provided by DAOPhot, so it is reassuring to find that ${\rm SNR^H}$ is in excellent agreement with ${\rm SNR^D}$. This, however, only happens if we use an adaptive choice for aperture radius and for the background annulus, as explained above. In fact, if we select a fixed radius and annulus size in a crowded environment, the contamination due to neighbouring stars alters the statistics of the sky within the annulus and we always find ${\rm SNR^H} > {\rm SNR^D}$. This is precisely the reason that made @demetal93 conclude that core aperture photometry, i.e. source and sky measurement conducted as close to the source as possible, as well as the use of the mode for the background are most advisable in crowded environments.
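A direct transcription of the two expressions above is given below (a sketch; the inputs are the quantities defined in the text, in DN except for the read noise, and the function name is our own).

```python
import numpy as np

# Sketch of Eqs. (2)-(3): SNR from aperture photometry quantities.
# F:  background-subtracted source counts inside r0 (DN)
# S:  mean background per pixel inside the annulus r1..r2 (DN)
# G:  effective gain (CCD gain times number of averaged frames); N: number of frames
# NR: read-out noise per frame (electrons); r0, r1, r2: radii in pixels
def snr_handbook(F, S, G, N, NR, r0, r1, r2):
    NS = (S + NR**2 / G) * np.pi * r0**4 / (r2**2 - r1**2)
    return F * np.sqrt(G * N) / np.sqrt(F + (S + NR**2 / G) * np.pi * r0**2 + NS)
```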
How the ETC expects the SNR to be measured
------------------------------------------
In light of the consistency between ${\rm SNR^D}$ and ${\rm SNR^H}$ and since the latter stems directly from equation 6.7 of the WFPC2 Handbook, on which the WFPC2 ETC is also based, we can now proceed and compare our measured ${\rm SNR^D}$ with the ETC predictions. Before doing so, however, we must make sure that the way in which we measure the SNR (i.e. ${\rm SNR^D}$) is consistent with the way in which the ETC software expects users to carry out the photometry. In fact, the latter assumes that the data reduction process employ PSF fitting photometry, i.e. that optimal weighting be assigned to each pixel in proportion to its intensity in the PSF. As discussed above, however, we have used aperture photometry to determine ${\rm SNR^D}$. The WFPC2 ETC instructions would indeed offer a correction to apply to the ideal PSF fitting case ${\rm SNR^P}$ (we call it “ETC optimal SNR”) in order to convert it to the equivalent SNR that would be obtained with canonical aperture photometry ${\rm SNR^A}$ (“ETC aperture SNR”). Following the WFPC2 ETC instructions in [@bir96], we have:
$$\label{eq2}
{\rm SNR^A}= {\rm SNR^P}\cdot \frac{K}{r_0}$$
where $K=0.11$ for the PC camera and $K=0.17$ for the WF chips, a relation which is valid when the aperture radius is $r_0>2.5$ pixels for the PC and $r_0>1.8$ pixels in the case of the WF.
Since we determined ${\rm SNR^D}$ by using aperture photometry, it would seem that we need to take into account the correction given by Equation 4. We show, however, that this correction is not necessary, thanks to the adaptive method that we used for the photometry. In Figure \[fig3\] we plot, for the PC chip, the measured ${\rm SNR^D}$ against the prediction of the ETC for the aperture photometry case, i.e. ${\rm SNR^A}$. We should like to clarify here how Figure \[fig3\] and the others of the same type in the following were built. After having measured the calibrated magnitude of a star in the images, we folded the latter value through the WFPC2 ETC so as to calculate the estimated SNR for an object of that brightness and for the exposure time and CR-SPLIT pattern corresponding to those of the actual combined image. For this and all the other figures in this paper, unless otherwise specified, we used the “average sky” option for the sky brightness setting, as allowed by the new WFPC2 ETC Version 3.0.
We can see from Figure \[fig3\] that the predictions of the ETC for aperture photometry (${\rm SNR^A}$) are over-estimated for faint stars and under-estimated for bright objects with respect to the measured values, for both the sparse and the crowded field. Figure \[fig4\] is the analogue of Figure \[fig3\], but here the reference is the ETC optimal SNR, ${\rm SNR^P}$, i.e. without any correction for aperture photometry. As one can easily see, the ETC in this case always overestimates the value of the SNR with respect to the measured one, by up to $\sim 100\,\%$ for the fainter stars. As the right hand side axis shows, such a mismatch of the SNR corresponds to a time estimation error of the same amount (see Equation 7 ahead), i.e. the ETC appears to underestimate the exposure time actually needed to achieve a given SNR.
A closer look at Figure \[fig4\], however, reveals that the scatter of the representative points on the plot is smaller when our measurements are compared with SNR$^{\rm P}$ than with SNR$^{\rm A}$, and that the overall behaviour is closer to the ETC prediction at any magnitude. This is a consequence of our optimised aperture and annulus photometry closely approaching PSF fitting. In light of these results, in the following we ignore the correction for aperture photometry given by equation 4 and compare our measurements directly with the ETC optimal SNR, i.e. SNR$^{\rm P}$.
How the predicted SNR compares with the observed one
----------------------------------------------------
Figures \[fig3\] and \[fig4\] clearly witness the dependence of the actual SNR upon the level of field crowding and, at the same time, its independence of the filter used. In principle, one could question the validity of our latest assumption, i.e. that of ignoring the correction to be applied to the SNR measured with aperture photometry. In fact, in a crowded field, PSF fitting photometry is expected to give better results. We have, therefore, attempted a direct comparison between the predictions of the ETC and the results of PSF fitting photometry. Rather than carrying out the reduction ourselves, we have utilised one of the finest examples of photometric work carried out on these very M4 data by @rich97, who employed very accurate ALLFRAME photometry as described in detail in @iba99. In their paper, these authors measure the magnitude of each star from the individual frames in the dithering stack and compute the combined magnitudes as the weighted average of the corresponding fluxes, the error on them, $\sigma_m^P$, being related to the flux scatter amongst the frames.
In order to make a reliable comparison with our results, we have performed, in a similar way, optimised aperture photometry on the individual frames (i.e. the original, not yet aligned images, in which CR-hits had been removed as described above). The measured fluxes were averaged with a weight inversely proportional to the DAOPhot estimated uncertainty after rescaling for the flux ratio. Our final magnitude errors, $\sigma_m^A$, are thus derived from the standard deviation of the fluxes, divided by the square root of the number of images combined. Figure \[fig5\] displays the comparison between $\sigma_m^P$, $\sigma_m^A$ and the ETC prediction, showing that the two photometric uncertainties overlap each other, while the ETC largely overestimates the precision that can be attained with PSF fitting photometry, even by one of the most experienced teams.
Thus, in this crowded case, it is also apparent that the ETC deviations are independent of the photometric technique adopted. In sparse fields, where aperture photometry and PSF fitting are equally effective and reliable, Figures \[fig3\] and \[fig4\] already prove that the ETC predictions depart from the measured data, although by a smaller amount than that applicable in the crowded case. Finally, in Figure \[fig6\] we compare the predictions of the ETC with the actual measurements for both the PC and WF, to show that the behaviour of the ETC applies regardless of the channel.
The relevance of the above considerations becomes clear when one uses an ETC to simulate very deep observations, especially when a comparison between instruments, e.g. ACS/WFC and WFPC2, is required to compare the limiting magnitude in given exposure times. As experience shows, a star finding programme is able to detect a faint point source only when its brightest pixel is at least $2$ or $3 \, \sigma_{\rm sky}$ above the sky background (where $\sigma_{\rm sky}$ is the standard deviation of the background), with a value of $\sim 5$ or more being the typical prerequisite in most faint photometry precision applications. If we plot the so called object [*detectability*]{} $d$, defined as:
$$\label{eq3}
d = \frac{\rm Peak - Sky}{\sigma_{\rm sky}}$$
as a function of the magnitude error $\sigma_m$, we obtain the graph in Figure \[fig7\]. Here we notice that the detectability (which is practically independent of the filter and crowding) drops to the value of $d = 2.1$ just when the magnitude error approaches $0.5\,mag$, which is usually considered the maximum allowed error in canonical photometric work. By relating the detectability $d$ with the ETC optimal SNR, ${\rm SNR^P}$, as done in Figure \[fig8\], we see that $d = 2.1$ corresponds to an ETC optimal SNR of $3.0$ for the non crowded case and to $7.0$ for the crowded case. This literally means that if we need to know the magnitude of the faintest detectable star in an observation of a stellar field with the WFPC2 we should query the ETC, setting “average sky”, for a SNR of $7.0$ and $3.0$, respectively in a crowded and in a sparse environment. It is normally assumed that a $3\,\sigma$ detection requires a SNR of 3, but in the case of the SNR provided by the WFPC2 ETC, this is only true for an isolated object.
Discussion and corrections
==========================
The direct consequence of what we have illustrated so far is that, if the ETC were used to plan observations of faint stars in a globular cluster like M4 with the WFPC2, the predicted exposure time could be considerably underestimated. Conversely, the same predictions would be almost correct for a star of equal brightness in a sparse field. In the following we try and provide an empirical correction formula that can be applied to the SNR given by the WFPC2 ETC to compensate for the effects of crowding.
In order to understand the discrepancy between the expected and measured SNR and to clarify how to exactly account for the effects of crowding in the simulations, we artificially modified the background level and photon noise in the sparse field so as to reproduce the sky level and sky variance measured on the crowded field. In practice, we added to the sparse field a Gaussian noise with a mean equal to the difference in the sky level between the two fields and a variance equal to the quadratic difference of the sky variances between them. The SNR diagramme for the modified image (Figure \[fig9\]) reveals that the locus of the modified sparse data points shifts towards and perfectly overlaps the crowded field locus. This tells us, as expected, that the increased background level resulting from crowding is responsible for the differences shown in Figures \[fig3\] and \[fig4\] between sparse and crowded fields.
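The background-matching test can be summarised by the following sketch (our own simplified rendering; the sky levels and variances of the two fields are assumed to have been measured beforehand, with the crowded field being the noisier one).

```python
import numpy as np

# Sketch: degrade the sparse-field image so that its sky level and variance
# match those measured on the crowded field (var_crowded >= var_sparse assumed).
def match_background(sparse_img, sky_sparse, var_sparse, sky_crowded, var_crowded, seed=0):
    rng = np.random.default_rng(seed)
    extra_mean = sky_crowded - sky_sparse            # difference of the sky levels
    extra_sigma = np.sqrt(var_crowded - var_sparse)  # quadratic difference of the variances
    return sparse_img + rng.normal(extra_mean, extra_sigma, size=sparse_img.shape)
```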
It is, however, true that the ETC gives the SNR under the best possible sky conditions, which are rarely, if ever, encountered in real observations. Moreover, the ETC is generally not expected to take account of the position and brightness of all the stars in the field, as would be necessary to simulate how crowding increases the background level. We have, therefore, manually set the ETC sky brightness to match the levels directly measured with the DAOPhot SKY task on the crowded image (i.e. the mode of the level distribution), hoping in this way to force the SNR simulated by the ETC to agree with our measurements. In fact, the results change only marginally, as shown in Figure \[fig10\], where ${\rm SNR^D}$ and ${\rm SNR^P}$ are plotted against the observed magnitude (Johnson $V$ in this case). The ETC simulation gets closer to the real data, but it still does not match them. Moreover, it seems as if a suitable value for the background cannot be found at all, as shown in Figure \[fig11\], where one sees that the sky value that would force the ETC prediction to match ${\rm SNR^D}$ changes significantly as a function of star brightness.
We must, thus, conclude that the treatment of the background is a major issue for the WFPC2 ETC, although that alone cannot explain the whole discrepancy. It goes without saying that we have verified and confirmed that the predictions of the ETC as concerns the count rates per pixel in the source and background are precise to within an accuracy of 10%, as one would expect of a professional tool. We have also repeated all our tests on the individual frames, compared in turn with the predictions of the ETC for a case of CR-SPLIT=1. The result being the same, we can exclude an error in either the way in which we combined the data or in the way in which the ETC accounts for CR-SPLIT$> 1$. The rest of the discrepancy, then, must be attributed to the way in which the noise is estimated, the signal being correct. A delicate issue could be, for instance, the value and operational definition of $\sigma_{\rm sky}$. We notice here that large variations in the value of $\sigma_{\rm sky}$ are possible, in the crowded environment, depending as to whether we measure it with the IRAF SKY task, which fits a Gaussian around the mode, or as the standard deviation that one obtains by manual analysis over the darkest regions of the background in the image. In fact the latter can be up to 3 times smaller than the former, and also 2 times smaller than the mean sigma as measured inside the photometric sky annulus around each star. Conversely, all these numbers turn out to be quite similar for the sparse field image.
To account for the possible sources of the residual error, we considered recent results published by @ber01, who uses Fourier analysis and Fisher information matrices to show the extent to which the SNR of a point source depends on factors that are normally not considered in ETC programmes, such as pixel size, intra-pixel response function, extra-pixel charge diffusion and cosmic ray hits.
According to this work, a programme that does not take all these parameters into account may overestimate the SNR by up to a factor of 2. More precisely, whenever background limited point source photometry is involved, the key factor for the SNR calculation, namely the “effective area” $A_{SN}$ (see equation 12 in Bernstein 2001), strongly depends on the detector geometry, such as pixel size, under-sampling factor, intra-pixel response function and charge diffusion. The finite pixel size plays an important role, as even a Nyquist sampled pixel (i.e. one $\lambda /2D$ in size) causes a 13% degradation in the SNR of a faint star, and the same applies to extra-pixel charge diffusion.
In order to check whether these problems also affect the WFPC2, we configured Bernstein’s “ETC++” software to simulate WFPC2 point source photometry for the sparse field. The result is shown in Figure \[fig12\] where the measured SNR (${\rm SNR^D}$), the ETC optimal SNR (${\rm SNR^P}$) and the ETC++ SNR for aperture photometry are plotted against the stellar magnitude. The ETC++ gives a confidence level for its results as the value of the cumulative function of the stars distribution above the computed SNR. The ETC++ line in Figure \[fig12\] means that 50% of the stars of any given magnitude should be above this line. The WFPC2 ETC does not give confidence levels, but we can assume that its SNR is computed as the mean of the SNR distribution at any given magnitude, i.e. at 50% confidence level. If this is the case, Figure \[fig12\] indicates that the actual SNR is located in between the WFPC2 ETC and the ETC++ predictions, thus confirming the difficulty of any analytical ETC in reliably estimating the SNR.
Thus, a correction for the currently on-line WFPC2 ETC can only be empirical in nature. The following formula can be used to obtain a realistic estimate of the SNR:
$$\label{eq4}
{\rm SNR^C} \simeq (60 \cdot C + 17) \cdot (e^{-0.012 \cdot {\rm SNR^P}}-1)
+ 0.93 \cdot {\rm SNR^P}$$
where ${\rm SNR^P}$ is the SNR estimated by the ETC without correction for aperture photometry and $C$ is a measure of the crowding, defined as the logarithm of the ratio between the total area of the chip and the number of pixels with values lower than the modal sky value plus one standard deviation. For example, $C$ is equal to $0.05$ for our sparse field, whereas it grows to $0.42$ in the crowded case of M4. For faint stars, e.g. for ${\rm SNR^P} \lesssim 20$, this equation can be roughly approximated by the rule of thumb that the actual SNR is about $1/2$, or $2/3$, of ${\rm SNR^P}$, respectively for a crowded and a non-crowded environment. It should be noted that not even in an ideal case of zero crowding ($C \simeq 0$) would the measured SNR match the prediction of the ETC, since there would still be a discrepancy of the same order as that found in the sparse case.
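In practice, equation \[eq4\] is trivial to apply. The following Python sketch (the function names are ours; we also assume a base-10 logarithm and use the pixel count as a proxy for the chip area, details which equation \[eq4\] leaves implicit) computes the crowding parameter $C$ from an image and the corrected SNR:

```python
import numpy as np

def crowding_index(image, sky_mode, sky_sigma):
    """Crowding parameter C of equation (4): log10 of the ratio between the
    total number of pixels in the chip and the number of pixels whose value
    lies below the modal sky level plus one standard deviation."""
    n_dark = np.count_nonzero(image < sky_mode + sky_sigma)
    return np.log10(image.size / n_dark)

def snr_corrected(snr_etc, C):
    """Empirical correction of equation (4): realistic SNR from the
    ETC-predicted point-source SNR (without aperture-photometry correction)."""
    return (60.0 * C + 17.0) * (np.exp(-0.012 * snr_etc) - 1.0) + 0.93 * snr_etc

for C in (0.05, 0.42):          # sparse field vs. M4-like crowded field
    print(C, snr_corrected(10.0, C))
```

For a faint star with ${\rm SNR^P}=10$ this gives roughly 7.0 in the sparse case and 4.5 in the M4-like crowded case, in line with the 2/3 and 1/2 rules of thumb quoted above.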
The advantage of this formula is that ${\rm SNR^C} = 3$ would now always imply a $3\,\sigma$ detection, regardless of the level of crowding in the image. The correction that we propose would allow an observer to accurately plan the observations and make the best use of the HST time. For the low SNR regime (e.g. ${\rm SNR^P} \lesssim 50$), equation \[eq4\] can actually be rewritten to show more explicitly the effects of crowding on the exposure time:
$$\label{eq5}
{\rm t^C} \simeq {\rm t^P} \cdot \frac{{\rm SNR^P}}
{(60 \cdot C + 17) \cdot (e^{-0.012 \cdot {\rm SNR^P}}-1)
+ 0.93 \cdot {\rm SNR^P}}$$
where ${\rm t^P}$ is the exposure time predicted by the ETC to reach a certain SNR and ${\rm t^C}$ is its actual value.
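The corresponding exposure-time rescaling of equation \[eq5\] can be coded just as easily (the function name is again our own choice; it simply divides the ETC exposure time by the ratio ${\rm SNR^C}/{\rm SNR^P}$ evaluated at the target SNR):

```python
import numpy as np

def exposure_time_corrected(t_etc, snr_target, C):
    """Equation (5): exposure time actually needed to reach snr_target, given
    the time t_etc predicted by the ETC for it (low-SNR regime, SNR^P <~ 50)."""
    snr_real = ((60.0 * C + 17.0) * (np.exp(-0.012 * snr_target) - 1.0)
                + 0.93 * snr_target)
    return t_etc * snr_target / snr_real

# A faint star (SNR^P = 10) in an M4-like crowded field (C = 0.42):
print(exposure_time_corrected(1000.0, 10.0, 0.42))   # roughly 2200 s instead of 1000 s
```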
An example of how serious the underestimate of the exposure time can be when the ETC is not used with the above caveat in mind is given in Figures \[fig13\]a and \[fig13\]b for a crowded environment. There we show a simulation of the detectability of the white dwarf cooling sequence with the WFPC2 in NGC6397, the nearest globular cluster, through the filters F606W and F814W. We have adopted the theoretical WD cooling sequence of @pm02, which provides a perfectly thin isochrone, and have applied to it the colour and magnitude uncertainty that one obtains from the estimated SNR by inverting equation \[eq1\]. Two cases are shown: one (a) as predicted by the WFPC2 ETC and one (b) for our corrected estimate of equation \[eq4\]. The difference is striking: the ETC predictions, taken at face value and ignoring the effects of crowding, would suggest that the sequence is not spread very much by photometric errors and that its quasi-horizontal tail between $m_{606} = 29$ and $m_{606} = 30$ is clearly noticeable, whereas in our realistic simulation the sequence is widely spread and its lower part lies well below the detection limit.
The delicacy of the issue is immediately apparent when one considers that, based on the ETC estimates, one would deem that 120 orbits are sufficient to reliably secure the white dwarf cooling sequence in the colour–magnitude diagramme of NGC6397 down to $m_{606} = 30.5$ and $m_{814} = 30$, whereas, in fact, the correction shows that as many as 255 orbits would be needed to comfortably reach those limits with the WFPC2.
All of the above considerations are valid not only for the WFPC2, but also for any analytical ETC in general, especially when used to estimate the SNR of stars embedded in a crowded environment or when the detector considerably under-samples the PSF, as suggested in @ber01. We should underline here, however, that this does not mean that the ETCs are unreliable or useless. One of the most important and practical reasons for having a standardised ETC is to allow the telescope time allocation committees to compare all the proposals on an equal footing. In this respect, the ETC does not necessarily need to be accurate. Clearly, the better the detector’s cosmetics, intra-pixel response, charge diffusion and readout noise, the closer the real photometry will be to the ETC prediction. Thus, we expect, for example, a better behaviour of the ACS/WFC on-line simulator with respect to the WFPC2.
A non-analytical SNR calculator, which would simulate the whole observing session, including the dithering pattern, by numerically reproducing the real field (i.e. with the correct stellar positions and brightnesses, as imaged by a realistic model of the detector), and which would use the same photometric tools adopted by the user (such as DAOPhot, ALLSTAR, and the like), would be, in our opinion, the best method to accurately predict the expected performance of any planned observing programme. Alternatively, at least for imaging ETCs that have very few configuration parameters and are fairly stable, such as those of space telescopes, one could consider empirical modelling: real results, such as those presented in this paper, can be used to calibrate the ETC, which would then interpolate between the calibrations. In this way, the use of an empirical correction formula such as the one proposed here would guarantee a closer match between simulations and real observations.
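To make the idea of such a numerical calculator concrete, the sketch below is a deliberately simplified stand-in: a Gaussian PSF, a flat sky, plain aperture photometry and no dithering, with all names and parameter values chosen by us. It estimates the SNR of a star by Monte-Carlo, injecting it onto many noise realisations, optionally together with a bright neighbour that mimics crowding, and measuring the scatter of the recovered fluxes:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_psf(size, x0, y0, flux, fwhm=2.0):
    """Pixelated circular Gaussian PSF carrying a total of 'flux' counts."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    return flux * psf / psf.sum()

def aperture_snr(flux, sky=50.0, n_frames=200, size=33, r_ap=3.0, neighbours=()):
    """Monte-Carlo SNR: inject a star (plus optional neighbours mimicking
    crowding) on a flat sky, add Poisson noise, measure it with simple
    aperture photometry, and return the flux over the scatter of the
    recovered fluxes."""
    y, x = np.mgrid[:size, :size]
    c = size // 2
    r2 = (x - c) ** 2 + (y - c) ** 2
    in_ap = r2 <= r_ap ** 2
    in_annulus = (r2 >= 8 ** 2) & (r2 <= 12 ** 2)
    truth = sky + gaussian_psf(size, c, c, flux)
    for dx, dy, f in neighbours:
        truth = truth + gaussian_psf(size, c + dx, c + dy, f)
    measured = []
    for _ in range(n_frames):
        frame = rng.poisson(truth).astype(float)
        bkg = np.median(frame[in_annulus])               # local sky estimate
        measured.append((frame[in_ap] - bkg).sum())      # sky-subtracted flux
    return flux / np.std(measured)

print("isolated star :", aperture_snr(500.0))
print("with neighbour:", aperture_snr(500.0, neighbours=[(3, 2, 20000.0)]))
```

Replacing the toy ingredients with the actual detector model and with DAOPhot/ALLSTAR photometry would turn this into the kind of simulator advocated above.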
Summary and conclusions
=======================
The results of the WFPC2 exposure time calculator for point sources have been analysed by direct comparison with aperture and PSF photometry on real archival images. Significant deviations have been found between the ETC predictions and the actual photometry on the real data. Specifically, the analysis shows that the ETC deviations are [*i)*]{} independent of the filter, [*ii)*]{} independent of the choice of optimised aperture photometry or PSF fitting photometry, [*iii)*]{} independent of the PC or WF channel used, [*iv)*]{} strongly dependent upon the level of crowding in the field and that [*v)*]{} the ETC systematically overestimates the SNR, slightly for the bright sources and more seriously for faint sources close to the detection limit. Moreover, when data reduction follows the optimised aperture photometry method, the measured SNR is as good as that obtained with PSF fitting and there is no need to apply the aperture photometry conversion suggested in the ETC documentation. An empirical correction formula is given to compute realistic SNR estimates, so as to assist observation planning when extremely faint sources have to be imaged, an example of which is presented. Manually increasing the value of the sky brightness in the simulator, so as to mimic the effects of crowding, shows that, although important, the background level is not the key parameter to explain the discrepancy, which is present even for data collected in rather sparse environments. Thus, it is not possible to correct the WFPC2 ETC predictions by just modifying the sky level. A comparison with a software tool developed by @ber01, whose predictions slightly underestimate the SNR, at variance with the WFPC2 ETC, suggests that the effects of pixel size, charge diffusion and cosmic ray hits could be more important than previously thought.
It’s our pleasure to thank H. Ferguson, M. Stiavelli, S. Casertano, F. Massi, L. Pulone and R. Buonanno for helpful discussions. We are indebted to F. Valdes, the referee of this paper, for his useful comments and suggestions. G. Li Causi is particularly grateful to the ESO Director General’s Discretionary Fund for supporting his work. We also wish to thank Gary Bernstein for making his ETC++ software available to us.
Bernstein, G. 2001, “Advanced Exposure-Time Calculations: Undersampling, Dithering, Cosmic Rays, Astrometry, and Ellipticities”, astro-ph/0109319
Biretta, J. 1996, “WFPC2 Exposure Time Calculator”, <http://www.stsci.edu/instruments/wfpc2/Wfpc2_etc/wfpc2-etc.html>
Biretta, J. 2001, “WFPC2 Exposure Time Calculator Version 3.0”, <http://www.stsci.edu/instruments/wfpc2/Wfpc2_etc/wfpc2-etc-point-source-v30.html>
Biretta, J., Heyer, I. 2001, WFPC2 Instrument Handbook, version 6.0 (Baltimore: STScI)
De Marchi, G., Nota, A., Leitherer, C., Ragazzoni, R., Barbieri, C. 1993, ApJ, 419, 658
Holtzmann, J.A., Burrows, C.J., Casertano, S., Hester, J.J, Trauger, J.T., Watson, A.M., Worthey, G. 1995, , 107, 1065-1093
Ibata, R.A., Fahlman, G.G., Irwin, M.J., Gilmore, G., Richer, H.B., 1998, HST proposal n. 6701
Ibata, R.A., Richer, H.B., Fahlman, G.G., Bolte, M., Bond, W.E., Hesser, Pryor, C., Stetson, P. 1999, , 120, 265-275
Prada Moroni, P., Castellani, V., Straniero, O., 2002 in preparation
Richer, H.B., Fahlman G.G., Ibata, R.A., Pryor, C., Bell, R.A., Bolte, M., Bond, H.E., Harris, W.E., Hesser, J.E., Holland, S., Ivanans, N., Mandushev, G., Stetson, P., Wood, M.A. 1997, , 484, 741-760
FIGURE CAPTIONS:
Figure 1: Negative image of a crowded field in the globular cluster M4 obtained with the PC channel of the WFPC2 through the F555W filter [@rich97].
Figure 2: Negative image of a sparse field obtained with the PC [@iba98].
Figure 3: The ratio between the SNR measured in crowded and sparse fields (SNR$^{\rm D}$ in the text) and the WFPC2 ETC prediction for aperture photometry (SNR$^{\rm A}$ in the text) is shown for the F555W and F814W filters.
Figure 4: The ratio between the SNR measured in crowded and sparse fields (SNR$^{\rm D}$ in the text) and the WFPC2 ETC prediction for PSF-fitting photometry (SNR$^{\rm P}$ in the text) is shown for the F555W and F814W filters. The right-hand side axis applies to the low SNR regime ($\lesssim 50$) and indicates the amount of the time estimation error, i.e. the ratio between the actual exposure time (Equation 7 in the text) and that estimated by the ETC, for a given SNR in the abscissa.
Figure 5: Measured magnitude error from PSF-fitting photometry of @iba99 ($\sigma_m^P$) and from our optimized aperture photometry ($\sigma_m^A$), as a function of the magnitude, compared with the WFPC2 ETC prediction, for the crowded field and F555W filter.
Figure 6: The ratio between the measured SNR (SNR$^{\rm D}$) and the ETC optimal SNR (SNR$^{\rm P}$) is shown for the PC and the WF2 channels of the WFPC2, in the crowded field case.
Figure 7: Detectability $d$ versus measured magnitude error ($\sigma_m$). An uncertainty $\sigma_m = 0.5$ mag, usually the highest allowed in most photometric works, corresponds to a detectability $d = 2.1$.
Figure 8: Detectability $d$ versus ETC optimal SNR (SNR$^{\rm P}$). A value of $d = 2.1$ corresponds to a detection limit of $SNR^P = 3.0$ or $SNR^P = 7.0$ respectively for the sparse and the crowded case.
Figure 9: The ratio between the measured SNR (SNR$^{\rm D}$) and the ETC optimal SNR (SNR$^{\rm P}$) is shown for the crowded and sparse fields, before and after the artificial brightening of the sparse field background.
Figure 10: Comparison between the measured SNR (SNR$^{\rm D}$) and the ETC predictions for i) default low background, ii) default average background, iii) default high background and iv) actually measured background.
Figure 11: Values to enter in the User Specified Sky Background parameter of the WFPC2 ETC in order to force the ETC to match the measured SNR for crowded and non crowded fields.
Figure 12: Comparison between the prediction of the WFPC2 ETC v.3.0, the prediction of the ETC++ software and the measured SNR (SNR$^{\rm D}$), for the crowded and non-crowded cases in the two filters (both ETCs were used here after setting the sky magnitude to the value measured in the real images).
Figure 13: Comparison between (a) the WFPC2 ETC predictions (SNR$^{\rm P}$) and (b) our correction of equation \[eq4\] (SNR$^{\rm C}$), in a simulation of a 120-orbit HST observation of the white dwarf cooling sequence in NGC6397, in a colour–magnitude diagramme made through the filters F606W and F814W.
---
abstract: 'In this paper we discuss the analysis of a cross-diffusion PDE system for a mixture of hard spheres, which was derived in [@Bruna:2012wu] from a stochastic system of interacting Brownian particles using the method of matched asymptotic expansions. The resulting cross-diffusion system is valid in the limit of small volume fraction of particles. While the system has a gradient flow structure in the symmetric case of all particles having the same size and diffusivity, this is not valid in general. We discuss local stability and global existence for the symmetric case using the gradient flow structure and entropy variable techniques. For the general case, we introduce the concept of an asymptotic gradient flow structure and show how it can be used to study the behavior close to equilibrium. Finally we illustrate the behavior of the model with various numerical simulations.'
address:
- '$^1$Mathematical Institute, University of Oxford, RQQ, Woodstock Road, Oxford OX2 6GG, UK'
- '$^2$Institut für Numerische und Angewandte Mathematik and Cells in Motion Cluster of Excellence, Westfälische Wilhelms Universität Münster, Einsteinstrasse 62, D 48149 Münster, Germany'
- '$^3$RICAM, Austrian Academy of Sciences, Altenbergerstr. 63, 4040 Linz, Austria'
- '$^4$University of Warwick, Coventry CV4 7AL, UK and RICAM, Austrian Academy of Sciences, Altenbergerstr. 63, 4040 Linz, Austria'
author:
- Maria Bruna$^1$
- Martin Burger$^2$
- Helene Ranetbauer$^3$
- 'Marie-Therese Wolfram$^4$'
bibliography:
- 'bibliography.bib'
title: 'Cross-diffusion systems with excluded-volume effects and asymptotic gradient flow structures'
---
Introduction {#sec:intro}
============
Systems of interacting particles can be observed in biology (e.g. cell populations), physics or social sciences (e.g. animal swarms or large pedestrian crowds). Mathematical models describing the individual interactions of these particles among themselves as well as with their environment lead to complex systems of differential equations (cf. e.g. [@bendahmane2009conservative; @Bruna:2012wu; @Bruna:2012cg; @MR2745794; @burger:2015uk; @burger2012nonlinear; @di2016nonlocal; @painter2009continuous; @Schlake:2011wr; @Simpson:2009gi]). In microscopic models the dynamics of each particle is accounted for explicitly, while macroscopic models typically consist of partial differential equations for the population density. Passing from the microscopic model to the macroscopic equations in a systematic way is, in general, very challenging, and often one relies on closure assumptions, which can be made rigorous under certain scaling assumptions on the number and size of particles. In particular, when crowding due to the finite size of particles is included in the model, the limiting process is quite subtle and, using different assumptions and closure relations, a variety of macroscopic equations have been derived. For instance, the macroscopic equations of a two-species system where particles undergo a simple exclusion process on a lattice can be derived using formal Taylor expansions, see for example [@MR2745794; @Simpson:2009gi]. The case when particles are not confined to a regular lattice and undergo instead a Brownian motion with hard-core interactions was considered in [@Bruna:2012wu] using matched asymptotic expansions. Cross-diffusion is a common feature of all these models and poses a particular challenge for the analysis, since maximum principles do not hold. Classical examples of cross-diffusion systems are reaction-diffusion systems or systems describing multicomponent gas mixtures. Such quasi-linear parabolic systems were analyzed by Ladyzhenskaya [@ol1968linear] and Amann [@amann1985global; @amann1989dynamic]; these results, however, rely on strong parabolicity assumptions that break down for the degenerate cross-diffusion systems derived from the interacting particle systems mentioned above.
The canonical form for a two-species system of interacting particles (called red and blue in the following) is $$\begin{aligned}
\label{cross-diffusion}
\partial_t
\begin{pmatrix} r\\ b \end{pmatrix}
= \nabla \cdot \left(D(r,b) \nabla \begin{pmatrix} r \\ b \end{pmatrix} - F(r,b) \begin{pmatrix} r \\ b \end{pmatrix}\right),\end{aligned}$$ where $D = D(r,b)$ is the diffusion matrix and $F = F(r,b)$ is the drift matrix due to a convective flux. Systems like \[cross-diffusion\] often have a gradient flow structure $$\begin{aligned}
\label{gradflow}
\partial_t \begin{pmatrix} r\\ b \end{pmatrix} =
\nabla \cdot \left[ M (r,b) \nabla
\begin{pmatrix} \partial_r E \\ \partial_b E \end{pmatrix}\right],\end{aligned}$$ where $M$ is a mobility matrix and $\partial_r E$ and $\partial_b E$ denote the functional derivatives of an entropy functional $E$ with respect to $r$ and $b$, respectively. The gradient flow formulation provides a natural framework to study the analytic behavior of such systems, cf. e.g. [@ambrosio2008gradient]. It has been used to analyze existence and long-time behavior of systems, see for example [@carrillo2014gradient; @jungel2014boundedness; @liero2013gradient; @zinsl2015transport]. As a result, being able to express a PDE system as the gradient flow of an entropy is a very desirable feature; yet, this is not possible in general. The lack of a gradient flow structure on the PDE level can result from the approximations made when passing from the microscopic description to the macroscopic equations. This is the case for the cross-diffusion system of [@Bruna:2012wu], which was derived using the method of matched asymptotics. There has been a lot of research on the passage from microscopic models to the continuum equations, for example in the hydrodynamic limit [@kipnis2013scaling]. More recently, the microscopic origin of entropy structures, which connects gradient flows and the large deviation principle, was analyzed in [@adams2011larg; @liero2015microscopic].
In this paper we introduce the idea of an asymptotic gradient flow structure as a generalization of a standard or full gradient flow for systems derived as an asymptotic expansion, such as that in [@Bruna:2012wu]. We provide several analytic results for these cross-diffusion systems and discuss how the closeness of these asymptotic gradient flow structures to an exact gradient flow can be used to analyze the behavior of the system close to equilibrium. Furthermore we present a global in time existence result in the case of particles of the same size and diffusivity (in which the system has a full gradient flow structure). The existence proof is based on an implicit Euler discretisation and Schauder’s fixed point theorem. We study the linearized system with an additional regularization term in the entropy to ensure boundedness of the solutions and deduce existence results for the unregularised system in the limit (similar to the deep quench limit for the Cahn–Hilliard equation [@Elliott:1996]). This is, to the authors’ knowledge, the first global in time existence result for this system. We note however that it is only valid if the total density stays strictly below the maximum density.
This paper is organized as follows: we introduce the mathematical model in Section \[sec:model\] and discuss the cases for which the system has either a full or an asymptotic gradient flow structure. In Section \[sec:closetoequilibrium\] we define the notion of asymptotic gradient flows formally and discuss how they can be used to analyze the behavior of stationary solutions close to equilibrium. Several numerical examples illustrating the deviation of stationary solutions from the equilibrium solutions for asymptotic gradient flows are presented in Section \[sec:numerics\]. Finally, we give a global in time existence result in the case of particles of same size and diffusivity in Section \[sec:existence\].
The mathematical model {#sec:model}
======================
In this paper we analyze a cross-diffusion system for a mixture of hard spheres derived in [@Bruna:2012wu], which we present below. The system is obtained as the continuum limit of a stochastic system with two types of interacting Brownian particles, referred to as red and blue particles. In particular, we consider $N_r$ red particles of diameter $\epsilon_r$, constant diffusion coefficient $D_r$ and external potential $\tilde V_r$, and $N_b$ blue particles of diameter $\epsilon_b$, diffusion coefficient $D_b$, and external potential $\tilde V_b$. Each particle evolves according to a stochastic differential equation (SDE) with independent Brownian dynamics, and interacts with the other particles in the system via hard-core collisions. This means that the centers of two particles with positions ${\bf X}_i$ and ${\bf X}_j$ are not allowed to get closer than the sum of their radii, that is, $ \| {\bf X}_i - {\bf X}_j \| \ge (\epsilon_i + \epsilon_j)/ 2$, where $\epsilon_i$ denotes the diameter of the $i$th particle. We define the total number of particles in the system by $N = N_r + N_b$, and the distance at contact between a red and a blue particle by $\epsilon_{br}=(\epsilon_r + \epsilon_b)/2$. The situation detailed above can be described by the overdamped Langevin SDEs $$\begin{aligned}
\label{sde}
\begin{aligned}
d {\bf X }_i (t) &= \sqrt{2D_r}\, d{\bf W}_i(t) - \nabla \tilde V_r( {\bf X}_i) \, dt \qquad 1\leq i\leq N_r,\\
d {\bf X }_i (t) &= \sqrt{2D_b}\, d{\bf W}_i(t) - \nabla \tilde V_b( {\bf X}_i) \, dt \qquad N_r+1 \leq i \leq N,
\end{aligned}\end{aligned}$$ where ${\bf X}_i \in \Omega \subset \mathbb R^d$, $d= 2, 3$, is the position of the $i$th particle and ${\bf W}_i$ a $d$-dimensional standard Brownian motion. We assume that $\Omega$ is a bounded domain. The boundary conditions due to collisions between particles and with the domain walls are $$\begin{aligned}
\begin{aligned}
(d {\bf X }_i - d {\bf X }_j) \cdot {\bf n} & = 0, & \quad &\text{on} \quad \| {\bf X}_i - {\bf X}_j \| = (\epsilon_i + \epsilon_j)/ 2,\\
d {\bf X }_i \cdot {\bf n} & = 0, & \quad &\text{on} \quad \partial \Omega,
\end{aligned}\end{aligned}$$ where $ \bf n $ denotes the outward unit normal. The continuum-level model associated to this individual-based model was derived in [@Bruna:2012wu] using the method of matched-asymptotic expansions in the limit of low but finite volume fraction. If $v_d(\epsilon)$ is the volume of a $d$-dimensional ball of diameter $\epsilon$, then the volume fraction in the system is $$\label{vol_fraction}
\Phi= N_r v_d(\epsilon_r) + N_b v_d(\epsilon_b),$$ assuming that the problem is nondimensionalised such that the domain $\Omega$ has unit volume, $|\Omega| = 1$. Because particles cannot overlap each other, in addition to the global constraint $\Phi \ll 1$ there is also a local constraint on the total volume density, defined as $$\label{total_vol_density}
\phi({\bf x}, t) = v_d(\epsilon_r) r({\bf x}, t) + v_d(\epsilon_b) b({\bf x}, t).$$ In particular, the local volume density cannot exceed the theoretical maximum allowed volume fraction, given by the Kepler conjecture. We note that $\Phi$ and $\phi$ are related via $\Phi = \int_\Omega \phi \, d {\bf x}$.
The cross-diffusion model in [@Bruna:2012wu] is valid for any number of blue and red particles, $N_b$ and $N_r$. However, here we will consider the case where both particle numbers are large, such that $N_r-1 \approx N_r$ and $N_b -1 \approx N_b$, as it simplifies the model slightly. In this case, the model reads [@Bruna:2012wu]
\[pde\_general\] $$\begin{aligned}
\label{pde_general_r}
\partial_t r &= D_r \nabla \cdot \left[ ( 1 +
\epsilon_r^d \alpha r ) \nabla { r} + \nabla V_r r + \epsilon_{br}^d \big ( \beta_r \, { r} \nabla { b} - \gamma_r { b}\nabla { r} + \nabla ( \gamma_b V_b - \gamma_r V_r ) r b\big ) \right],
\\
\label{pde_general_b}
\partial_t b &= D_b \nabla \cdot \left[ ( 1 + \epsilon_b^d \alpha b ) \nabla { b} + \nabla V_b b + \epsilon_{br}^d \big ( \beta_b \, { b} \nabla { r} - \gamma_b { r}\nabla { b} + \nabla \big( \gamma_r V_r - \gamma_b V_b \big) r b\big ) \right], \end{aligned}$$
where $r=r({\bf x},t)$ and $b=b({\bf x},t)$ are the number densities of the red and blue species, respectively, depending on space and time. Consequently, meaningful solutions satisfy $r \ge 0$ with $\int_\Omega r\, d{\bf x} = N_r$ and $b \ge 0$ with $\int_\Omega b \, d{\bf x} = N_b$. In \[pde\_general\], $V_i = \tilde V_i/D_i$ are the rescaled potentials, and the parameters $\alpha$, $\beta_i$ and $\gamma_i$ depend on the geometry of the particles. For balls, they are given by $$\label{coef_23d}
\begin{aligned}
\alpha &= \frac{2(d-1)\pi}{d} , \qquad \beta_i = \frac{2\pi}{d} \frac{ [(d-1)D_i + dD_j]}{D_i + D_j} , \qquad \gamma_i= \frac{2\pi}{d} \frac{D_i}{D_i + D_j} ,
\end{aligned}$$ for $i = r$ or $b$, with $j$ denoting the other species, and space dimension $d=2$ or $3$. This system is an asymptotic expansion in $\epsilon_r$, $\epsilon_b$ (assuming that both small parameters are of the same asymptotic order, $\epsilon_r \sim \epsilon_b \sim \epsilon$), valid up to order $\epsilon^d$. The nonlinear terms in \[pde\_general\] correspond to the leading-order contribution of the pairwise particle interactions. The asymptotic method used in [@Bruna:2012wu] could be extended, if desired, to evaluate higher-order terms coming from three or more particle interactions, as well as higher-order corrections to the pairwise interaction. This would result, with quite some effort, in higher-order terms in $\epsilon_i$ in \[pde\_general\] (of order $\epsilon_i^{d+1}$ and higher). However, it seems impossible to derive the full infinite series expansion.
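For concreteness, the geometric coefficients of \[coef\_23d\] are straightforward to evaluate; the short Python sketch below (the function name is ours) computes them for a given dimension and pair of diffusivities, and also checks the relation $\beta_i = \alpha + \gamma_i$ that holds when $D_r = D_b$ and is used in the symmetric case below:

```python
import numpy as np

def interaction_coefficients(d, D_r, D_b):
    """Coefficients alpha, beta_i, gamma_i for hard balls in dimension d = 2, 3."""
    alpha = 2.0 * (d - 1) * np.pi / d
    def beta(Di, Dj):
        return (2.0 * np.pi / d) * ((d - 1) * Di + d * Dj) / (Di + Dj)
    def gamma(Di, Dj):
        return (2.0 * np.pi / d) * Di / (Di + Dj)
    return {"alpha": alpha,
            "beta_r": beta(D_r, D_b), "beta_b": beta(D_b, D_r),
            "gamma_r": gamma(D_r, D_b), "gamma_b": gamma(D_b, D_r)}

c = interaction_coefficients(3, 1.0, 1.0)                    # symmetric case D_r = D_b
print(np.isclose(c["beta_r"], c["alpha"] + c["gamma_r"]))    # True: beta = alpha + gamma
print(interaction_coefficients(2, 1.0, 0.5))                 # asymmetric diffusivities
```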
We will consider the system \[pde\_general\] in $\Omega \times (0, T)$ with no-flux boundary conditions
\[bcs\_general\] $$\begin{aligned}
\label{bcs_general_r}
0 & = {\bf n} \cdot \left \{ ( 1 +
\epsilon_r^d \alpha r ) \nabla { r} + \nabla V_r r + \epsilon_{br}^d \big [ \beta_r \, { r} \nabla { b} - \gamma_r { b}\nabla { r} + \nabla ( \gamma_b V_b - \gamma_r V_r ) r b\big ] \right \},
\\
\label{bcs_general_b}
0 & = {\bf n} \cdot \left \{ ( 1 + \epsilon_b^d \alpha b ) \nabla { b} + \nabla V_b b + \epsilon_{br}^d \big [ \beta_b \, { b} \nabla { r} - \gamma_b { r}\nabla { b} + \nabla \big( \gamma_r V_r - \gamma_b V_b \big) r b\big ] \right \}, \end{aligned}$$
on $\partial \Omega \times (0, T)$ and initial values $$\label{initial_general}
r({\bf x}, 0) = r_0({\bf x}), \qquad b({\bf x}, 0) = b_0({\bf x}).$$
In order to analyze the cross-diffusion system \[pde\_general\], it is convenient to consider its associated gradient flow structure of the form \[gradflow\]. However, only in the symmetric case where red and blue particles have the same size and diffusivity can the system be rewritten in that form. For the general case we introduce a generalization of a gradient flow, namely an *asymptotic gradient flow*, motivated by the underlying structure of the general system \[pde\_general\].
Cross-diffusion system for particles of the same size and diffusivity {#sec:case1}
---------------------------------------------------------------------
In this section we suppose that red and blue particles are of the same size, that is $\epsilon_r = \epsilon_b : = \epsilon$, and have the same diffusion coefficient, $D_r = D_b$. Without loss of generality, we take the diffusion coefficient equal to one (this can be achieved by rescaling time). In this case, the cross-diffusion system \[pde\_general\] can be written as
\[case1\] $$\begin{aligned}
\label{case1_r}
\partial_t r &= \nabla \cdot \left[ (1 + \alpha \epsilon^d
r - \gamma \epsilon^d { b} ) \nabla { r} + \beta \epsilon^d r \nabla b + r \nabla V_r + \gamma \epsilon^d \nabla \left ( V_b - V_r \right ) r b \right] ,
\\
\label{case1_b}
\partial_t b &= \nabla \cdot \left[ ( 1 + \alpha \epsilon^d b - \gamma \epsilon^d r ) \nabla b + \beta \epsilon^d b \nabla r + b \nabla V_b + \gamma \epsilon^d \nabla \left ( V_r - V_b \right ) r b \right],\end{aligned}$$
where $\beta_i$ and $\gamma_i$, for $i = r, b$, no longer depend on the species and simplify to $\gamma = \pi/d$ and $\beta = (2d-1)\pi/d = \alpha + \gamma$, respectively.
This cross-diffusion system can be used to describe a mixture of particles that are physically identical but driven by different potentials $V_r$ and $V_b$ (for example cells that are attracted to different food sources, or pedestrians that want to move in different directions). It can also be used to model the scenario where the red and the blue particles are in fact identical, but one has knowledge about the initial distributions of each sub-population, $r_0$ and $b_0$. This is the scenario in many experimental set-ups that use noninvasive fluorescent tagging. On the other hand, if the red and blue particles are identical and initially indistinguishable, then one has $r/N_r = b/N_b := p$ for all times. In this case, both equations \[case1\_r\] and \[case1\_b\] reduce to the same equation, which coincides with the equation for the evolution of a single population of hard spheres, as expected [@Bruna:2012cg].
In the following we define $\bar \alpha = \epsilon^d \alpha$, $\bar \gamma = \epsilon^d \gamma$, and the total number density $$\label{rho_case1}
\rho ({\bf x},t) : = r ({\bf x},t) + b({\bf x},t).$$ When particles have the same size and diffusivity we find that $$\label{relation_case1}
\rho \equiv 2 \phi/ \bar \gamma,$$ where $ \phi$ is the total volume density given in \[total\_vol\_density\]. Using $\rho$, the equations \[case1\] can be rewritten in the following form
\[case1\_rho\] $$\begin{aligned}
\label{case1_rho_r}
\partial_t r &= \nabla \cdot \left[ (1 - \bar \gamma \rho ) \nabla { r} + (\bar \alpha + \bar \gamma) r \nabla \rho + r \nabla V_r + \bar \gamma \nabla \left( V_b - V_r \right) r b \right],
\\
\label{case1_rho_b}
\partial_t b &= \nabla \cdot \left[ ( 1- \bar \gamma \rho ) \nabla b + (\bar \alpha + \bar \gamma) b \nabla \rho + b \nabla V_b + \bar \gamma \nabla \left ( V_r - V_b \right ) r b \right ],\end{aligned}$$ where we have used that $ \beta = \alpha + \gamma$.
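Before turning to the gradient flow structure, the following minimal sketch illustrates how \[case1\_rho\] can be integrated numerically in one spatial dimension with an explicit finite-difference scheme and no-flux boundary conditions (all parameter values, potentials and the grid are chosen ad hoc for illustration; this is not the scheme used for the simulations reported later):

```python
import numpy as np

N, L, dt, steps = 200, 1.0, 5.0e-6, 40000
dx = L / N
x = (np.arange(N) + 0.5) * dx                     # cell-centred grid on (0, 1)

alpha_bar, gamma_bar = 0.04, 0.02                 # eps^d * alpha, eps^d * gamma
Vr = 2.0 * (x - 0.3) ** 2                         # confining potentials
Vb = 2.0 * (x - 0.7) ** 2

r = np.ones(N)                                    # initial densities (unit mass each)
b = np.ones(N)

def flux(u, w, Vu, Vw):
    """Flux of species u in the rho-form of the equations, evaluated at the
    interior cell interfaces; zero flux is imposed at the two walls."""
    rho = u + w
    du, drho = np.diff(u) / dx, np.diff(rho) / dx
    dVu, dVdiff = np.diff(Vu) / dx, np.diff(Vw - Vu) / dx
    um = 0.5 * (u[1:] + u[:-1])
    wm = 0.5 * (w[1:] + w[:-1])
    rhom = 0.5 * (rho[1:] + rho[:-1])
    F = ((1.0 - gamma_bar * rhom) * du + (alpha_bar + gamma_bar) * um * drho
         + um * dVu + gamma_bar * dVdiff * um * wm)
    return np.concatenate(([0.0], F, [0.0]))      # no-flux boundary conditions

for _ in range(steps):
    Fr = flux(r, b, Vr, Vb)
    Fb = flux(b, r, Vb, Vr)
    r = r + dt / dx * np.diff(Fr)
    b = b + dt / dx * np.diff(Fb)

print("mass r:", r.sum() * dx, " mass b:", b.sum() * dx)   # conserved up to round-off
```

Because the numerical fluxes vanish at the walls, the scheme conserves the mass of both species exactly, mirroring the no-flux conditions \[bcs\_general\].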
It is straightforward to see that the system \[case1\_rho\] has a formal gradient flow structure, with an entropy functional given by $$\begin{aligned}
\label{entropy_case1}
E(r, b) = \int_{\Omega} r\log r + b \log b + r V_r + b V_b + \frac{\bar \alpha}{2} \left( r^2 + 2 r b + b^2 \right) d {\bf x}.\end{aligned}$$ Using the corresponding entropy variables $$\begin{aligned}
\label{entropy_vars_case1}
\begin{aligned}
u &:= \partial_r E= \log r + \bar \alpha \rho + V_r,\\
v &:= \partial_b E= \log b + \bar \alpha \rho + V_b,
\end{aligned}\end{aligned}$$ the system \[case1\_rho\] can be written in the form $$\begin{aligned}
\label{case1_entropy}
\partial_t \begin{pmatrix} r\\ b \end{pmatrix} =
\nabla \cdot \left[ M (r ,b) \nabla
\begin{pmatrix} u \\ v \end{pmatrix}\right],\end{aligned}$$ with the symmetric mobility matrix $$\begin{aligned}
\label{mobility_case1}
\renewcommand{\arraystretch}{1.0}
M (r,b) =
\begin{pmatrix} r (1 - \bar \gamma b) & \bar \gamma r b \\
\bar \gamma r b & b (1 - \bar \gamma r) \end{pmatrix}.\end{aligned}$$
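As a quick consistency check, one can verify numerically, at a single point and with arbitrary values for the densities and their gradients, that the fluxes of \[case1\_rho\] indeed coincide with the mobility matrix applied to the gradients of the entropy variables \[entropy\_vars\_case1\]; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
ab, gb = 0.04, 0.02                                # alpha_bar, gamma_bar
r, b, rx, bx, Vrx, Vbx = rng.uniform(0.1, 1.0, 6)  # densities and gradients at a point
rhox = rx + bx

# red flux as written in the rho-form of the equations
flux_pde = ((1 - gb * (r + b)) * rx + (ab + gb) * r * rhox
            + r * Vrx + gb * (Vbx - Vrx) * r * b)

# the same flux written as (first row of M) times the entropy-variable gradients
ux = rx / r + ab * rhox + Vrx
vx = bx / b + ab * rhox + Vbx
flux_gf = r * (1 - gb * b) * ux + gb * r * b * vx

print(np.isclose(flux_pde, flux_gf))   # True
```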
Cross-diffusion system for particles of different size and diffusivity
----------------------------------------------------------------------
In this section we attempt to write a gradient flow structure for the general cross-diffusion system \[pde\_general\], guided by the symmetric case of the previous subsection (see \[entropy\_case1\] and \[mobility\_case1\]). We will see that this requires a generalization of the definition of gradient flow structure. We define the following entropy
\[gradflow\_general\] $$\begin{aligned}
\label{entropy_general}
E_\epsilon ( r, b) = \int_{\Omega} r\log r + b \log b + r V_r + b V_b +\frac{\alpha}{2} \left( \epsilon_r^d \, r^2 + 2 \epsilon_{br}^d \, r b + \epsilon_b^d \, b^2 \right) d {\bf x},\end{aligned}$$ and mobility matrix $$\begin{aligned}
\label{mobility_general}
\renewcommand{\arraystretch}{1.0}
M_\epsilon ( r, b) =
\begin{pmatrix} D_r r (1 - \gamma_r \epsilon_{br}^d b) & D_r \gamma_b \epsilon_{br}^d r b \\
D_b \gamma_r \epsilon_{br}^d r b & D_b b (1 - \gamma_b \epsilon_{br}^d r) \end{pmatrix}.\end{aligned}$$
As mentioned earlier, we suppose that the red and blue particle sizes are of the same asymptotic order, namely $\epsilon_r \sim \epsilon_b$. It is then convenient to introduce a single small parameter $\epsilon$ and the order one parameters $a_r, a_b$ and $a_{br}$ such that $\epsilon_i^d = a_i \epsilon^d$. Then the entropy and mobility can be expressed as $E_\epsilon \sim E_0 + \epsilon^d E_1$ and $M_\epsilon \sim M_0 + \epsilon^d M_1$, with $$\begin{aligned}
\label{grad_flowgeneral_exp}
\begin{aligned}
E_0 &= \int_{\Omega} r\log r + b \log b + r V_r + b V_b \, d {\bf x}, & \ E_1 &= \frac{\alpha}{2} \int_{\Omega} a_r r^2 + 2 a_{br} r b + a_b b^2 \, d {\bf x} ,\\
M_0 &= \text{diag}( D_r r, D_b b),& M_1 &= a_{br} r b \begin{pmatrix} -D_r \gamma_r & D_r \gamma_b \\
D_b \gamma_r & - D_b \gamma_b \end{pmatrix}.
\end{aligned}
$$
Using \[grad\_flowgeneral\_exp\], the general cross-diffusion system \[pde\_general\] can be rewritten as $$\begin{aligned}
\label{gradflow_generalasy}
\partial_t \begin{pmatrix} r\\ b \end{pmatrix} =
\nabla \cdot \left[ M_\epsilon \nabla
\begin{pmatrix} \partial_r E_\epsilon \\ \partial_b E_\epsilon \end{pmatrix} - \epsilon^{2d} G \right],\end{aligned}$$ where $G(r,b)$ is the vector $$\begin{aligned}
\label{vectorw}
G = \alpha a_{br} r b \begin{pmatrix} \gamma_r (\theta_r \nabla r - \theta_b \nabla b) \\ \gamma_b(\theta_b \nabla b - \theta_r \nabla r) \end{pmatrix},\end{aligned}$$ with $$\label{thetar_thetab}
\theta_r = D_b a_{br} - D_r a_r,\qquad \theta_b = D_r a_{br} - D_b a_b.$$ Then it is easy to see that the gradient flow structure induced by \[gradflow\_general\] and our system \[pde\_general\] (or \[gradflow\_generalasy\]) agree up to order $\epsilon^d$, which is the order of the asymptotic expansion that produced \[pde\_general\] in the first place. In other words, the discrepancy between the system \[pde\_general\] and the gradient flow induced by \[gradflow\_general\] is of order $\epsilon^{2d}$. Therefore, up to order $\epsilon^d$, we can regard \[gradflow\_general\] as a gradient flow structure of our system. We will call this an asymptotic gradient flow structure; the precise definition will be made clear in the following section.
Finally, we note that the system \[gradflow\_generalasy\] coincides with the gradient flow structure \[case1\_entropy\] in the case $D_r = D_b$ and $\epsilon_r = \epsilon_b$, see \[entropy\_case1\] and \[mobility\_case1\]. Note that $G \equiv 0$ for the parameter values of the simpler system \[case1\], as expected. Specifically, we find that if $D_r = D_b$ and $\epsilon_r = \epsilon_b$, then $\theta_r = \theta_b = 0$. A natural question to ask is whether there are other parameter values for which $G(r,b) \equiv 0$ for all $r, b$. Imposing that $\theta_r = \theta_b = 0$ leads to the condition $a_{br}^2 = a_r a_b$, which in turn leads to $\epsilon_r = \epsilon_b$, and thus $D_r = D_b$. Therefore, the only case for which \[gradflow\_general\] is an exact gradient flow for the system \[pde\_general\] is the case which we have already studied, that is, when the particle sizes and diffusivities are equal.
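A short numerical check of this last observation (the function name and the parameter values are ours; $a_i = (\epsilon_i/\epsilon)^d$ with $\epsilon = \epsilon_r$ taken as the reference scale):

```python
import numpy as np

def theta(d, eps_r, eps_b, D_r, D_b):
    """theta_r and theta_b of the correction term G; both vanish exactly
    in the case of equal particle sizes and diffusivities."""
    eps = eps_r                                   # reference scale for a_i = (eps_i/eps)^d
    a_r, a_b = (eps_r / eps) ** d, (eps_b / eps) ** d
    a_br = (0.5 * (eps_r + eps_b) / eps) ** d
    return D_b * a_br - D_r * a_r, D_r * a_br - D_b * a_b

print(theta(3, 0.01, 0.01, 1.0, 1.0))   # (0.0, 0.0): exact gradient flow
print(theta(3, 0.01, 0.02, 1.0, 0.5))   # nonzero: only an asymptotic gradient flow
```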
Gradient Flows and Asymptotic Gradient Flows close to Equilibrium {#sec:closetoequilibrium}
=================================================================
In the following we provide a more detailed discussion on gradient flow structures and implications for the behavior close to equilibrium.
Full gradient flow structure case {#sec:gradient_flow}
---------------------------------
In this subsection we analyze the behavior of system \[case1\] close to equilibrium. We follow the strategy outlined in the previous subsection, by proving uniqueness of equilibrium solutions and studying the stability and well-posedness of the system close to this equilibrium solution. We have seen in the previous subsection that the linear stability analysis for gradient flow structures reduces to showing that the mobility matrix $M$ is positive definite in the case of a strictly convex entropy functional $E$, cf. [@Schlake:2011wr]. We assume from now on:
1. \[a:V\] Let $V_r, V_b \in H^1(\Omega)\cap L^{\infty}(\Omega)$.
We recall that in case of assumption \[a:V\] an equilibrium solution $(r_\infty,b_\infty)$ exists and that the corresponding entropy variables $u_\infty$ and $v_\infty$ are constant. The determinant of the mobility matrix $M$ defined in \[mobility\_case1\] is given by $$\label{det_case1}
\det M = rb (1- \bar \gamma \rho).$$ Together with the positivity of the diagonal entries we see that $M$ is positive definite if $\rho < 1 / \bar \gamma$. This constraint gives a local bound on the total volume density (using \[relation\_case1\]), namely $2 \phi < 1$. This is consistent with the asymptotic assumption that $\phi \ll 1$. Hence, we define the set $$\label{equ:set}
\mathcal{S}=\left\{\begin{pmatrix}
r\\b
\end{pmatrix}\in \mathbb{R}^2:r\geq 0,b \geq 0,r+b \leq \frac{1}{\overline{\gamma}}\right\},$$ which we will also use in the existence proof presented in Section \[sec:existence\]. For stability and uniqueness it will be crucial to have solutions staying strictly in the interior of $\mathcal{S}$, due to the degeneracy of the mobility matrix on the boundary of $\mathcal{S}$.
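The condition $\rho < 1/\bar\gamma$ is easy to probe numerically; a small sketch (all values chosen by us) evaluates the mobility matrix \[mobility\_case1\] inside and outside the set $\mathcal{S}$ and checks the sign of its eigenvalues:

```python
import numpy as np

def mobility(r, b, gamma_bar):
    """Mobility matrix of the symmetric system (same size and diffusivity)."""
    return np.array([[r * (1.0 - gamma_bar * b), gamma_bar * r * b],
                     [gamma_bar * r * b, b * (1.0 - gamma_bar * r)]])

gamma_bar = 0.05                                   # so that 1 / gamma_bar = 20
for r, b in [(4.0, 3.0), (10.0, 11.0)]:            # rho = 7 (inside S), rho = 21 (outside)
    M = mobility(r, b, gamma_bar)
    print("rho < 1/gamma_bar:", r + b < 1.0 / gamma_bar,
          " eigenvalues:", np.linalg.eigvalsh(M))  # positive inside, indefinite outside
```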
\[linearstability\] The stationary solutions of the system \[case1\] are unique and linearly stable with respect to small perturbations $\xi, \eta \in L^2(0,T;H^1(\Omega))$.
Due to the gradient flow structure, any stationary solution of \[case1\] is a minimizer of the entropy subject to the constraints of given mass and $(r(x),b(x)) \in {\mathcal S}$ almost everywhere. Due to the strict convexity of the entropy and the convexity of the constraint set, the minimizer is unique.
Let us consider the linearisation of system \[case1\] around the unique equilibrium, which corresponds to the constant entropy variables $(u_\infty, v_\infty)$. As we have seen before, this is equivalent to a linear expansion in $(r,b)$ and in the entropy variables $(u,v)$ of \[entropy\_vars\_case1\], i.e. $u=u_\infty+\xi, v=v_\infty+\eta$. In the latter setting we obtain the following first order approximation
$$\begin{aligned}
E^\star{''}(u_\infty,v_\infty) \begin{pmatrix}
\partial_t \xi\\ \partial_t \eta
\end{pmatrix}=
\begin{pmatrix}
\partial_u r(u_\infty,v_\infty) \partial_t \xi+\partial_v r(u_\infty,v_\infty) \partial_t \eta \\
\partial_u b(u_\infty,v_\infty) \partial_t \xi+\partial_v b(u_\infty,v_\infty) \partial_t \eta
\end{pmatrix}=\nabla \cdot \left( M(r_\infty,b_\infty)\begin{pmatrix}
\nabla \xi\\ \nabla \eta
\end{pmatrix} \right),\end{aligned}$$
where $E^\star{''}$ denotes the Hessian of the dual entropy functional. Note that for the first order approximation, we also have no-flux boundary conditions. A simple calculation shows that $E^\star{''}(u_\infty,v_\infty)$ as well as $M(r,b)$ are positive definite for $(r,b)$ in the interior of $\mathcal{S}$, which is guaranteed everywhere for the stationary solution $(r_\infty,b_\infty)$. Stability of this linear system is equivalent to nonpositivity of the real parts of all eigenvalues $\lambda$ in $$\lambda E^\star{''}(u_\infty,v_\infty) \begin{pmatrix}
\xi\\ \eta
\end{pmatrix} = \nabla \cdot \left( M(r_\infty,b_\infty)\begin{pmatrix}
\nabla \xi\\ \nabla \eta
\end{pmatrix} \right) .$$ Note that due to the symmetry of the eigenvalue problem, all eigenvalues are real. Moreover, we find $$\begin{aligned}
\lambda \int_\Omega E^\star{''}(u_\infty,v_\infty)\begin{pmatrix}
\xi\\ \eta
\end{pmatrix}\cdot \begin{pmatrix}
\xi\\ \eta
\end{pmatrix}\, d{\bf x}=-\int_\Omega M(r_\infty,b_\infty)\begin{pmatrix}
\nabla \xi\\ \nabla \eta
\end{pmatrix}\cdot\begin{pmatrix}
\nabla \xi\\ \nabla \eta
\end{pmatrix}\, d{\bf x} .\end{aligned}$$ Since $E^\star{''}$ and $M(r,b)$ are positive definite, we conclude that $\lambda<0$, which implies linear stability.
Next we consider the well-posedness close to equilibrium. We shall make use of the following auxiliary lemma:
\[lemma6\] Let $V_r$ and $V_b$ satisfy assumption \[a:V\] and let $V_r,V_b \in X$ with $$X=L^{\infty}(0,T;H^2(\Omega))\cap L^2(0,T;H^3(\Omega)) \cap H^1(0,T;H^1(\Omega)).$$ Then the gradient of the dual entropy functional $E^*{'}:X \times X \to X \times X, \,(u,v) \mapsto (r,b)$, defined by \[entropy\_vars\_case1\], is continuous.
To verify continuity, we have to show the existence of a constant $C>0$ such that $$\label{equ:continuity}
\|(r,b)\|_{X\times X}\leq C \|(u,v)\|_{X\times X} \quad \forall (u,v)\in X\times X.$$ Given $(u,v)\in X\times X$, we calculate $$\begin{aligned}
\label{entropy_der1}
\begin{aligned}
\nabla u &=\frac{1}{r}\nabla r+\bar \alpha \nabla \rho +\nabla V_r, \qquad \nabla v =\frac{1}{b}\nabla b+\bar \alpha \nabla \rho +\nabla V_b,
\end{aligned}\end{aligned}$$ $$\begin{aligned}
\label{entropy_der2}
\begin{aligned}
\Delta u &=-\frac{1}{r^2} (\nabla r)^2 +\left(\frac{1}{r}+\bar \alpha\right)\Delta r +\bar \alpha \Delta b +\Delta V_r,\\
\Delta v &=-\frac{1}{b^2} (\nabla b)^2 +\left(\frac{1}{b}+\bar \alpha\right)\Delta b +\bar \alpha \Delta r +\Delta V_b,
\end{aligned}\end{aligned}$$ and $$\begin{aligned}
\label{entropy_der3}
\begin{aligned}
\nabla \Delta u&=\frac{1}{r^3}\nabla r (\nabla r)^2-\frac{3}{r^2}\nabla r\Delta r+\left(\frac{1}{r}+\bar\alpha\right)\nabla \Delta r+\bar\alpha \nabla\Delta b+\nabla \Delta V_r,\\
\nabla \Delta v&=\frac{1}{b^3}\nabla b (\nabla b)^2-\frac{3}{b^2}\nabla b\Delta b+\left(\frac{1}{b}+\bar\alpha\right)\nabla \Delta b+\bar\alpha \nabla\Delta r+\nabla \Delta V_b.
\end{aligned}\end{aligned}$$ From $H^2(\Omega)\hookrightarrow L^{\infty}(\Omega)$ in dimensions $d=2,3$ and using the definition \[entropy\_vars\_case1\] of the entropy variables we get that $r,b\in L^{\infty}(0,T;L^{\infty}(\Omega))$ and $r,b>\varepsilon$ for some positive $\varepsilon$. As $u,v\in L^{\infty}(0,T;H^2(\Omega))$ and $H^2(\Omega)\hookrightarrow W^{1,6}(\Omega)$, we get that $\nabla u, \nabla v \in L^{\infty}(0,T;L^6(\Omega))$ and therefore $\nabla u \nabla v \in L^{\infty}(0,T;L^3(\Omega))$. Hence, relation \[entropy\_der1\] implies that $\nabla r,\nabla b\in L^{\infty}(0,T;L^6(\Omega))$ and $\nabla r \nabla b \in L^{\infty}(0,T;L^3(\Omega))\hookrightarrow L^{\infty}(0,T;L^2(\Omega))$. Applying relation \[entropy\_der2\], we obtain that $\Delta r, \Delta b\in L^{\infty}(0,T;L^2(\Omega))$. Since $u,v\in L^2(0,T;H^3(\Omega))$, the embedding $H^3(\Omega)\hookrightarrow W^{1,\infty}(\Omega)$ for dimensions $d=2,3$ as well as relation \[entropy\_der1\] imply that $\nabla r,\nabla b\in L^2(0,T;L^\infty(\Omega))$. Together with relation \[entropy\_der3\], we obtain that $r,b \in L^2(0,T;H^3(\Omega))$, which implies continuity.
\[wellposedness\] Consider system \[case1\_entropy\] with initial data $u_0,v_0 \in H^2(\Omega)$ and potentials $V_r,V_b\in H^3(\Omega)$. Furthermore let $$\|u_0-u_\infty\|_{H^2(\Omega)}\leq \kappa \quad \text{and} \quad \|v_0-v_\infty\|_{H^2(\Omega)}\leq \kappa,$$ for $\kappa>0$ sufficiently small. Then, there exists a unique solution to system \[case1\_entropy\] in $$B_R= \{(u,v):\|u-u_\infty\|_X\leq R,\, \|v-v_\infty\|_X\leq R\},$$ where $$X=L^{\infty}(0,T;H^2(\Omega))\cap L^2(0,T;H^3(\Omega)) \cap H^1(0,T;H^1(\Omega))$$ and $R$ is a constant depending on $\kappa$ and $T>0$ only.
The proof is based on Banach’s fixed point theorem. The corresponding fixed point operator is defined by the evolution of $u-u_\infty$ and $v-v_\infty$, which can be written as $$\begin{aligned}
\label{equ1}
\begin{aligned}
E^\star{''}(u_\infty,v_\infty)\begin{pmatrix}
\partial_t (u-u_\infty) \\ \partial_t (v-v_\infty)
\end{pmatrix}&-\nabla \cdot \left( M(r_\infty,b_\infty)\begin{pmatrix}
\nabla u \\ \nabla v \end{pmatrix}\right)\\
&=\nabla \cdot \Bigl(( M(r,b)-M(r_\infty,b_\infty))
\begin{pmatrix}
\nabla(u-u_\infty)\\ \nabla(v-v_\infty)
\end{pmatrix} \Bigr)\\
&\quad-(E^\star{''}(u,v)-E^\star{''}(u_\infty,v_\infty))\begin{pmatrix}
\partial_t (u-u_\infty) \\ \partial_t (v-v_\infty)
\end{pmatrix}\\
&=:F(u,v),
\end{aligned}\end{aligned}$$ where we used that $(r,b)=E^\star{'}(u,v)$. Note that by using a similar argument as in the proof of Lemma \[lemma6\], we can show that the stationary solutions $r_\infty,b_\infty$ are in $H^3(\Omega)$, assuming that the potentials $V_r,V_b$ are in $H^3(\Omega)$. Consider $(u,v)\in X\times X$ with the corresponding functions $r=r(u,v),b=b(u,v)$ and let $L$ denote the solution operator of \[equ1\] for a given right-hand side. Then the fixed point operator is given by the concatenation of $L$ and $F$, that is $$J=L\circ F:X \times X \to X\times X.$$ Note that Lemma \[lemma6\] guarantees that $(r,b)=(r(u,v),b(u,v))\in X\times X$. Properties of the entropy functional guarantee that $E^\star{''}$ is bounded for $(u,v)\in X\times X$. The operator $F$ defined in \[equ1\] maps from $X\times X$ into $Y\times Y$, where $$Y:=L^{\infty}(0,T;L^2(\Omega))\cap L^2(0,T;H^1(\Omega)).$$ Standard results for linear parabolic equations, see [@ol1968linear] or [@Evans199806], ensure that the solution $(\tilde{u}-u_\infty,\tilde{v}-v_\infty)$ to equation \[equ1\] lies in $X\times X$.
To apply Banach’s fixed point theorem, it remains to show that the operator $J$ maps the ball $B_R$ into itself and is contractive. The self-mapping property follows from the fact that $$\begin{aligned}
\|(\tilde{u}-u_\infty,\tilde{v}-v_\infty)\|_{X\times X}&\leq C \Bigl(\underbrace{\|F(u,v)\|_{L^2}}_{\sim R^2}+\underbrace{\|(u_0-u_\infty,v_0-v_\infty)\|_{H_0^1}}_{\sim\kappa}\Bigr)=:R(\kappa).\end{aligned}$$ For the contractivity we consider $(u_1,v_1)\in X\times X$ and $(u_2,v_2)\in X \times X$ and deduce that: $$\begin{aligned}
\|F(u_1,v_1)-F(u_2,v_2)\|_Y&=\left\|\nabla \cdot \left( \left(M(E^*{'}(u_1,v_1))-M(E^*{'}(u_\infty,v_\infty))\right)\begin{pmatrix}
\nabla(u_1-u_\infty)\\ \nabla(v_1-v_\infty)
\end{pmatrix} \right)\right.\\
&\quad +(E^\star{''}(u_1,v_1)-E^\star{''}(u_\infty,v_\infty))\begin{pmatrix}
\partial_t (u_1-u_\infty) \\ \partial_t (v_1-v_\infty)
\end{pmatrix}\\
&\quad -\nabla \cdot \left( \left(M(E^*{'}(u_2,v_2))-M(E^*{'}(u_\infty,v_\infty))\right)\begin{pmatrix}
\nabla(u_2-u_\infty)\\ \nabla(v_2-v_\infty)
\end{pmatrix} \right)\\
& \quad \left.-(E^\star{''}(u_2,v_2)-E^\star{''}(u_\infty,v_\infty))\begin{pmatrix}
\partial_t (u_2-u_\infty) \\ \partial_t (v_2-v_\infty)
\end{pmatrix}\right\|_Y\end{aligned}$$ Therefore $$\begin{aligned}
\|F(u_1,v_1)-F(u_2,v_2)\|_Y &\leq \phantom{+} \left\|\nabla \cdot \left( \left(M(E^*{'}(u_1,v_1))-M(E^{*}{'} (u_2,v_2))\right)\begin{pmatrix}
\nabla(u_1-u_\infty)\\ \nabla(v_1-v_\infty)
\end{pmatrix} \right)\right\|_Y\\
&\quad+\left\|\nabla \cdot \left( \left(M(E^*{'}(u_2,v_2))-M(E^*{'}(u_\infty,v_\infty))\right)\begin{pmatrix}
\nabla(u_1-u_2)\\ \nabla(v_1-v_2)
\end{pmatrix} \right)\right\|_Y\\
&\quad+\left\|(E^\star{''}(u_1,v_1)-E^\star{''}(u_2,v_2))\begin{pmatrix}
\partial_t (u_1-u_\infty) \\ \partial_t (v_1-v_\infty)
\end{pmatrix}\right\|_Y\\
&\quad+\left\|(E^\star{''}(u_2,v_2)-E^\star{''}(u_\infty,v_\infty))\begin{pmatrix}
\partial_t (u_1-u_2) \\ \partial_t (v_1-v_2)
\end{pmatrix}\right\|_Y\\
&\leq C_1R (\|u_1-u_2\|_X+\|v_1-v_2\|_X),\end{aligned}$$ for some constant $C_1>0$. Hence, we have that $$\begin{aligned}
\|J(u_1,v_1)-J(u_2,v_2)\|_X\leq CR(\|u_1-u_2\|_X+\|v_1-v_2\|_X),\end{aligned}$$ for some $C>0$. Choosing $\kappa$ and $R$ such that $R<\frac{1}{C}$, we can apply Banach’s fixed point theorem, which guarantees the existence of a unique solution $(u,v) \in B_R$.
Asymptotic Gradient Flow Structure {#sec:asymptoticgradientflowstructure}
----------------------------------
We have seen in Section \[sec:case1\] that system with particles of same size satisfies a gradient flow structure, which is not valid for the general system due to terms of higher order in $\epsilon$. However, we want to interpret the latter as an [*asymptotic gradient flow structure*]{}, motivated by the fact that it was derived from an asymptotic expansion in $\epsilon$. For further motivation, consider a gradient flow structure for the density $w$ of the form $$\label{eq:wequation}
\partial_t w = \nabla \cdot (M(w;\delta) \nabla E'(w;\delta)),$$ where both the mobility $M$ and the entropy $E$ depend on a small parameter $\delta > 0$. With an expansion of $M$ and $E$ in terms of $\delta$ as $$M(w;\delta) = \sum_{j=0}^\infty \delta^j M_j(w), \text{ and } E(w;\delta) = \sum_{j=0}^\infty \delta^j E_j(w),$$ we find $$\partial_t w = \sum_{k=0}^\infty \delta^k \nabla \cdot \left( \sum_{j=0}^k M_j(w) \nabla E_{k-j}'(w) \right).$$ Truncating the expansion on the right-hand side at a finite $k$ does not yield a gradient flow structure in general, but up to terms of order $\delta^{k}$ it coincides with the gradient flow structure with mobility $ \sum_{j=0}^k \delta^j M_j(w)$ and entropy $ \sum_{j=0}^k \delta^j E_j(w)$. In our case we deal with the example $k=1$ (with $\delta = \epsilon^d$), where we have $$\partial_t w = \nabla \cdot (M_0(w) \nabla E_0'(w)) + \delta \nabla \cdot (M_1(w) \nabla E_0'(w)+M_0(w) \nabla E_1'(w)).$$ Adding a term of order $\delta^2$, namely $\delta^2 \nabla \cdot (M_1(w) \nabla E_1'(w))$, this equation becomes a gradient flow. This motivates a more general definition:
Let ${\mathcal F(.;\delta)}$ be a densely defined operator on some Hilbert space for $\delta \in (0,\delta_*)$. Then the dynamical system $$\label{eq:dynsyst}
\partial_t w = {\mathcal F}(w;\delta)$$ is called an asymptotic gradient flow structure of order $k$ if there exist densely defined operators ${\mathcal G}_j$, $j=k+1,\ldots,2k$, such that for $\delta \in (0,\delta_*)$ $${\mathcal F}(w;\delta) + \sum_{j=k+1}^{2k} \delta^j {\mathcal G}_{j}(w) = - {\mathcal M}(w;\delta) {\mathcal E}'(w;\delta)$$ for some (parametric) energy functional ${\mathcal E}(\cdot;\delta)$, and ${\mathcal M}(w;\delta)$ is a densely defined formally positive-definite operator for each $w$.
If an expansion of mobility and entropy up to order $k$ is available, it seems natural to perform a separate expansion to derive a lower order model that is a gradient flow as well. For complicated models and types of expansions as in [@Bruna:2012wu] or [@Bruna:2012cg], however, this does not seem feasible. Hence, we shall work with the asymptotic gradient flow concept below. Note that with the above notations we can rewrite \[eq:dynsyst\] as $$\partial_t w = - {\mathcal M}(w;\delta) {\mathcal E}'(w;\delta)- \delta^{k+1} \sum_{j=0}^{k-1} \delta^j {\mathcal G}_{k+1+j}(w),$$ which opens the door to perturbation arguments in the analysis of \[eq:dynsyst\] for $\delta$ sufficiently small.
In the remainder of this section we will highlight in particular the use of asymptotic gradient flow structures close to equilibrium. Let $w_\infty^{\delta}$ denote the equilibrium solution, which is a minimizer of the energy functional on the manifold defined by ${\mathcal M}$. Hence $w_{\infty}^\delta$ solves ${\mathcal M}(w;\delta) {\mathcal E}'(w_\infty^\delta;\delta) = 0$ for any $w$. In the case of \[eq:wequation\] this typically means that $E'(w_\infty^\delta;\delta)$ is constant. In order to prove the existence of a stationary solution of \[eq:dynsyst\] one can then try the following strategy: first of all compute $w_\infty^\delta$ (or prove at least its existence and uniqueness by variational principles) and then use the equation $${\mathcal M}(w_\infty^\delta;\delta) {\mathcal E}'(w;\delta) = -\delta^{k+1} \sum_{j=0}^{k-1} \delta^j {\mathcal G}_{k+1+j}(w) + ({\mathcal M}(w_\infty^\delta;\delta) - {\mathcal M}(w ;\delta)) ( {\mathcal E}'(w;\delta) - {\mathcal E}'(w_\infty^\delta;\delta) )$$ as the basis of a fixed-point argument, freezing $w$ on the right-hand side. Since the terms on the right-hand side are of high order in $\delta$ or of second order in terms of $w-w_\infty^\delta$, there is some hope of contractivity of the fixed-point operator close to equilibrium $w_\infty^\delta$. Such an approach can also yield some structural insight into the stationary solution, since it will be a higher order perturbation of $w_\infty^\delta$. The same idea can be employed to analyze transient solutions of \[eq:dynsyst\], since $$\begin{aligned}
\partial_t w + {\mathcal M}(w_\infty^\delta;\delta) {\mathcal E}'(w;\delta) = &-\delta^{k+1} \sum_{j=0}^{k-1} \delta^j {\mathcal G}_{k+1+j}(w) \\
&+ ({\mathcal M}(w_\infty^\delta;\delta) - {\mathcal M}(w ;\delta)) ( {\mathcal E}'(w;\delta) - {\mathcal E}'(w_\infty^\delta;\delta) ) .\end{aligned}$$ If $ {\mathcal M}(w_\infty^\delta;\delta) $ is invertible and ${\mathcal E}(\cdot;\delta)$ is strictly convex on its domain, one can directly apply variational techniques to analyze the fixed point operator. In particular it can be rather beneficial to set up the fixed-point operator in dual (or entropy) variables $z = {\mathcal E}'(w;\delta)$ instead.
Finally let us comment on the linear stability analysis around a stationary solution $w_*^\delta$. Using a similar way of expanding the equation around $w_\infty^\delta$, the linearised problem for a variable $\tilde w$ is given by $$\begin{aligned}
\partial_t \tilde w + {\mathcal M}(w_\infty^\delta;\delta) ({\mathcal E}''(w_*^\delta;\delta) \tilde w) &=& -\delta^{k+1} \sum_{j=0}^{k-1} \delta^j {\mathcal G}_{k+1+j}'(w_*^\delta)\tilde w + \\&& ({\mathcal M}(w_\infty^\delta;\delta) - {\mathcal M}(w_*^\delta;\delta)) ( {\mathcal E}''(w_*^\delta;\delta)\tilde w) - \\ &&
( {\mathcal M}'(w_*^\delta;\delta)\tilde w) ( {\mathcal E}'(w_*^\delta;\delta) - {\mathcal E}'(w_\infty^\delta;\delta) ), \end{aligned}$$ where we denote by ${\mathcal E}'$ and ${\mathcal M}'$ the derivatives with respect to $w$ at fixed $\delta$. Due to positive definiteness of ${\mathcal E}''(w_*^\delta;\delta)$, this system can be interpreted as a linear equation for the linearised entropy variable $\tilde z = {\mathcal E}''(w_*^\delta;\delta) \tilde w$, which is equivalent to considering linear stability directly in the transformed equation for the entropy variable $z$ as performed in [@Schlake:2011wr]. Using the simplified notation ${\mathcal A}={\mathcal E}''(w_*^\delta;\delta)^{-1}$ and ${\mathcal B}={\mathcal M}(w_\infty^\delta;\delta)$, we obtain $$\begin{aligned}
{\mathcal A} \partial_t \tilde z + {\mathcal B}\tilde z &=& -\delta^{k+1} \sum_{j=0}^{k-1} \delta^j {\mathcal G}_{k+1+j}'(w_*^\delta){\mathcal A} \tilde z + ({\mathcal B} - {\mathcal M}(w_*^\delta;\delta)) \tilde z - \\ &&
- ( {\mathcal M}'(w_*^\delta;\delta){\mathcal A} \tilde z) ( {\mathcal E}'(w_*^\delta;\delta) - {\mathcal E}'(w_\infty^\delta;\delta) ). \end{aligned}$$ In the case of a gradient flow (${\mathcal G}_j \equiv 0$, $w_*^\delta = w_\infty^\delta$) this reduces to $${\mathcal A} \partial_t \tilde z + {\mathcal B}\tilde z = 0,$$ which is stable if ${\mathcal A}$ and ${\mathcal B}$ are positive definite. In the asymptotic gradient flow case, with $w_*^\delta = w_\infty^\delta + \mathcal O(\delta^{k+1})$, we can formally write the linearised problem as $${\mathcal A} \partial_t \tilde z + ({\mathcal B}+ \delta^{k+1} {\mathcal C})\tilde z = 0,$$ and hence expect linear stability also for $w_*^\delta$ if $\delta$ is sufficiently small.
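The last statement can be made plausible with a toy computation (entirely illustrative, with random matrices standing in for the discretised operators ${\mathcal A}$, ${\mathcal B}$ and ${\mathcal C}$): for symmetric positive definite ${\mathcal A}$ and ${\mathcal B}$ the spectrum of the unperturbed problem lies in the left half plane, and a perturbation of size $\delta^{k+1}$ does not change this for small $\delta$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 1
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)); A = A @ A.T   # SPD (E'' type operator)
B = np.eye(n) + 0.1 * rng.standard_normal((n, n)); B = B @ B.T   # SPD (mobility type operator)
C = rng.standard_normal((n, n))                                  # generic perturbation

for delta in (0.0, 1e-2, 1e-1):
    # spectrum of A d/dt z = -(B + delta^{k+1} C) z
    lam = np.linalg.eigvals(-np.linalg.solve(A, B + delta ** (k + 1) * C))
    print(delta, lam.real.max())                                 # stays negative for small delta
```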
The application of the above strategies to prove existence of solutions and linear stability to a concrete model obviously depends on an appropriate choice of topologies. In the remaining part of this section we focus on the analysis of the asymptotic gradient flow of the general model.
Asymptotic gradient flow structure case {#sec:asymptotic_gradient_flow}
---------------------------------------
First we study the existence of stationary solutions to \[pde\_general\]. Then we discuss stability of stationary states following the ideas presented in subsection \[sec:asymptoticgradientflowstructure\].
Note that for $\epsilon=0$, the equilibrium solutions are given by $(r_\infty,b_\infty)=(C_r e^{-V_r},C_b e^{-V_b})$, with constants $C_r$ and $C_b$ depending on the initial masses only. Hence $(r_{\infty}, b_{\infty})$ are bounded for $V_r$ and $V_b$ satisfying assumption \[a:V\]. For $\epsilon>0$, the equilibrium solutions are a $\mathcal{O}(\epsilon^d)$ perturbation in $L^{\infty}$ and therefore also uniformly bounded.
\[theorem2\] Consider system with potentials $V_r,V_b\in H^3(\Omega)$. Then there exists a unique stationary state $(u_*, v_*)$ to system in $$B_R= \{(u,v):\|u-u_\infty\|_X \leq R,\, \|v-v_\infty\|_X \leq R\},$$ where $X=H^3(\Omega)$ and $R$ depends on $\epsilon$ and $T>0$ only.
We follow the ideas detailed in Subsection \[sec:asymptoticgradientflowstructure\] and define a fixed point operator close to equilibrium. Denote by $(r_\infty,b_\infty)$ the minimizer of the entropy functional $E_\epsilon(r,b)$, which exists as the entropy functional is strictly convex. Then any stationary solution to system has to satisfy $$\begin{aligned}
\label{equ2}
-\nabla \cdot \left( M(r_\infty,b_\infty)
\begin{pmatrix}
\nabla u \\ \nabla v
\end{pmatrix}\right)
&=\nabla \cdot \left(-\epsilon^{2d} G(r,b)+ \left(M(r,b)-M(r_\infty,b_\infty)\right)\begin{pmatrix} \nabla(u-u_\infty)\\ \nabla(v-v_\infty)
\end{pmatrix} \right)\nonumber\\
&=:F(u,v).\end{aligned}$$ Similar arguments as in Lemma \[lemma6\] ensure that for $(u,v)\in X\times X$ the functions $r=r(u,v)$ and $b=b(u,v)$ lie in $X\times X$; hence $F$ maps from $X\times X$ into $Y\times Y$, where $Y=H^1(\Omega)$. Let $L$ denote the solution operator to for a given right-hand side $F(u,v)$. Then the fixed point operator is constructed as $$J=L\circ F:X \times X \to X\times X.$$ Employing results about the elliptic operator, cf. [@gilbarg2015elliptic] or [@Evans199806], we obtain that the solution $(\tilde{u},\tilde{v})$ to equation is in $X\times X$.
To apply Banach’s fixed point theorem, it remains to show that the operator $J$ maps the ball $B_R$ into itself and is contractive. The self-mapping property follows from the fact that $$\begin{aligned}
\|(\tilde{u},\tilde{v})\|_{X\times X}&\leq \tilde{C} \underbrace{\|F(u,v)\|_{L^2}}_{\sim R^2+\epsilon^{2d}}=:R(\epsilon).\end{aligned}$$ For the contractivity we consider $(u_1,v_1)\in X\times X$ and $(u_2,v_2)\in X \times X$. Then $$\begin{aligned}
\|F(u_1,v_1)&-F(u_2,v_2)\|_Y=\left\|\nabla \cdot \left( -\epsilon^{2d} G(E^*{'}(u_1,v_1)) +\epsilon^{2d} G(E^*{'}(u_2,v_2))\right)\right.\\
&\qquad+\nabla \cdot \left( \left(M(E^*{'}(u_1,v_1))-M(E^*{'}(u_\infty,v_\infty))\right)\begin{pmatrix}
\nabla(u_1-u_\infty)\\ \nabla(v_1-v_\infty)
\end{pmatrix} \right)\\
&\left.\qquad -\nabla \cdot \left( \left(M(E^*{'}(u_2,v_2))-M(E^*{'}(u_\infty,v_\infty))\right)\begin{pmatrix}
\nabla(u_2-u_\infty)\\ \nabla(v_2-v_\infty)
\end{pmatrix} \right)\right\|_Y.\end{aligned}$$ Therefore $$\begin{aligned}
\|F(u_1,v_1)-F(u_2,v_2)\|_Y &\leq \left\|\nabla \cdot \left(\epsilon^{2d} G(E^*{'}(u_1,v_1))-\epsilon^{2d} G(E^*{'}(u_2,v_2))\right)\right\|_Y\\
&\quad+\left\|\nabla \cdot \left( \left(M(E^*{'}(u_1,v_1))-M(E^{*}{'} (u_2,v_2))\right)\begin{pmatrix}
\nabla(u_1-u_\infty)\\ \nabla(v_1-v_\infty)
\end{pmatrix} \right)\right\|_Y\\
&\quad+\left\|\nabla \cdot \left( \left(M(E^*{'}(u_2,v_2))-M(E^*{'}(u_\infty,v_\infty))\right)\begin{pmatrix}
\nabla(u_1-u_2)\\ \nabla(v_1-v_2)
\end{pmatrix} \right)\right\|_Y\\
&\leq\epsilon^{2d} C_1 \left(\left\| r_1-r_2\right\|_X +\left\|b_1-b_2\right\|_X\right)+ C_2R (\|u_1-u_2\|_X+\|v_1-v_2\|_X)\\
&\leq \left(\epsilon^{2d} C_3+2C_1R\right)(\|u_1-u_2\|_X+\|v_1-v_2\|_X),\end{aligned}$$ for some constants $C_1,C_2,C_3>0$ and therefore $$\begin{aligned}
\|J(u_1,v_1)-J(u_2,v_2)\|_X\leq \tilde{C} \left(\epsilon^{2d} C_3+2C_1R\right) (\|u_1-u_2\|_X+\|v_1-v_2\|_X),\end{aligned}$$ with the constant $\tilde{C}>0$ from the elliptic estimate above. Choosing $R$ and $\epsilon$ such that $$\tilde{C} \left(\epsilon^{2d}C_3 +2C_1R\right)<1,$$ we can apply Banach’s fixed point theorem, which guarantees the existence of a unique solution $(u_*,v_*) \in B_R$.
A direct consequence of the proof is the closeness of the stationary solution $(u_*,v_*)$ to the gradient flow solution $(u_\infty,v_\infty)$:
\[corollary\_closeness\] Let the assumptions of Theorem \[theorem2\] be satisfied. Then there exists a constant $C > 0$ such that for $\epsilon$ sufficiently small $$\Vert u_* - u_\infty \Vert_X + \Vert v_* - v_\infty \Vert_X \leq C \epsilon^{2d}.$$
We use rewritten as $$\begin{aligned}
&-\nabla \cdot \left( M(r_\infty,b_\infty)\begin{pmatrix}
\nabla (u_* - u_\infty) \\ \nabla (v_*-v_\infty) \end{pmatrix}\right) = \\ & \qquad \qquad \qquad \qquad {\nabla \cdot \left(-\epsilon^{2d} G(r_*,b_*)+ \left(M(r_*,b_*)-M(r_\infty,b_\infty)\right)\begin{pmatrix}
\nabla(u_*-u_\infty)\\ \nabla(v_*-v_\infty)
\end{pmatrix} \right)}\end{aligned}$$ and the properties of the operators used above immediately imply the assertion.
We conclude this section by discussing linear stability of system close to its stationary states $(u_*, v_*)$. Following the ideas presented in Section \[sec:asymptoticgradientflowstructure\] we rewrite as $$\begin{aligned}
\partial_t (r,b) &=\mathcal{M}(r,b)\mathcal{E}'(r,b)-\epsilon^{2d}\mathcal{G}(r,b).\end{aligned}$$ Then $$\begin{aligned}
\label{stationary_stability1}
\begin{split}
\partial_t (r,b) -\mathcal{M}(r_\infty,b_\infty)\mathcal{E}'(r,b)=&-\epsilon^{2d}\mathcal{G}(r,b)\\
&+(\mathcal{M}(r,b)-\mathcal{M}(r_\infty,b_\infty))(\mathcal{E}'(r,b)-\mathcal{E}'(r_\infty,b_\infty)).
\end{split}\end{aligned}$$ The linearisation of equation around $(r_*,b_*)$ is given by the following system for $(\tilde{r},\tilde{b})$: $$\begin{aligned}
\begin{aligned}
\partial_t (\tilde{r},\tilde{b})-\mathcal{M}(r_\infty,b_\infty)(\mathcal{E}{''}(r_*,b_*)(\tilde{r},\tilde{b}))&=-\epsilon^{2d}\mathcal{G}'(r_*,b_*)(\tilde{r},\tilde{b})\\
&\quad+(\mathcal{M}(r_*,b_*)-\mathcal{M}(r_\infty,b_\infty))(\mathcal{E}{''}(r_*,b_*)(\tilde{r},\tilde{b}))\\
&\quad+(\mathcal{M}'(r_*,b_*) (\tilde{r},\tilde{b}))(\mathcal{E}'(r_*,b_*)-\mathcal{E}'(r_\infty,b_\infty)).
\end{aligned}\end{aligned}$$ Using the linearised entropy variables $(\tilde{u},\tilde{v})=\mathcal{E}{''}(r_*,b_*)(\tilde{r},\tilde{b})$, we obtain $$\begin{aligned}
\label{eq:linstab}
\begin{aligned}
\mathcal{A} \partial_t (\tilde{u},\tilde{v})-\mathcal{B}(\tilde{u},\tilde{v})&=-\epsilon^{2d}\mathcal{G}'(r_*,b_*)\mathcal{A}(\tilde{u},\tilde{v})+(\mathcal{M}(r_*,b_*)-\mathcal{B})(\tilde{u},\tilde{v})\\
&\quad+(\mathcal{M}'(r_*,b_*) \mathcal{A}(\tilde{u},\tilde{v}))(\mathcal{E}'(r_*,b_*)-\mathcal{E}'(r_\infty,b_\infty)),
\end{aligned}\end{aligned}$$ where $\mathcal{A}= \mathcal{E}{''}(r_*,b_*)^{-1}$ is a positive definite operator and $\mathcal{B}=\mathcal{M}(r_\infty,b_\infty)$ is a negative semidefinite operator. Note that with the usual settings for elliptic systems, $\mathcal{B}$ is elliptic and hence invertible on the space of function pairs in $H^1(\Omega)$ with zero mean.
As already mentioned in Section \[sec:asymptoticgradientflowstructure\], $(r_*,b_*)=(r_\infty,b_\infty) + \mathcal O(\epsilon^{2d})$, and the linearised system can be written as $$\begin{aligned}
\begin{aligned}
\mathcal{A} \partial_t (\tilde{u},\tilde{v})-(\mathcal{B}+\epsilon^{2d}\mathcal{C})(\tilde{u},\tilde{v})&=0,
\end{aligned}\end{aligned}$$ for some bounded operator $\mathcal{C}$ on $H^1(\Omega)^2$ . As $\mathcal{B}$ is symmetric and negative definite except on the two-dimensional space of constant functions also annihilated by $\mathcal{C}$, the nonzero eigenvalues of $\mathcal{B}+\epsilon^{2d}\mathcal{C}$ stay negative for $\epsilon$ sufficiently small, yielding linear stability for $(r_*,b_*)$, cf. [@kato2013perturbation].
Numerical investigations of steady states {#sec:numerics}
=========================================
In this section we compute the stationary solutions of . For the symmetric system , the solutions can be computed exactly as the minimizers of the entropy $E$ in . If the mobility matrix is positive definite (which it is under the assumptions), the equilibrium states can be computed by finding constants $\chi_r \in \mathbb{R}$ and $\chi_b \in \mathbb{R}$ such that $$\begin{aligned}
\partial_r E = \chi_r \text{ and } \partial_b E = \chi_b\end{aligned}$$ subject to normalization constraints. In the case of system we have
\[stationary\] $$\begin{aligned}
\log r_{\infty} + V_r + \alpha(\epsilon_r^d r_{\infty} + \epsilon_{br}^d b_{\infty}) &= \chi_r\\
\log b_{\infty} + V_b + \alpha(\epsilon_b^d b_{\infty} + \epsilon_{br}^d r_{\infty}) &= \chi_b\\
\int_{\Omega} r_{\infty}( {\bf x}) \, d {\bf x} &= N_r\\
\int_{\Omega} b_{\infty}( {\bf x}) \, d {\bf x} &= N_b.\end{aligned}$$
System defines a nonlinear operator equation $F(r_{\infty}, b_{\infty}, \chi_r, \chi_b) = 0$, which can be solved via Newton’s method. Note that the no-flux boundary conditions are automatically satisfied by assuming that $\partial_r E$ and $\partial_b E$ are constant.
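A minimal sketch of such a Newton-type solve in one space dimension is given below. It assumes $d=2$, linear potentials and masses as in the examples of this section, and otherwise illustrative parameter values; the Newton iteration itself is delegated to scipy's Powell hybrid solver rather than implemented by hand, and positivity of the iterates is simply assumed.

```python
import numpy as np
from scipy.optimize import root

# grid on Omega = [-1/2, 1/2]
n = 200
x = np.linspace(-0.5, 0.5, n + 1)
dx = x[1] - x[0]
trap = lambda f: dx * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])   # trapezoidal rule

d, alpha = 2, 1.0                    # dimension and interaction strength (assumed)
eps_r = eps_b = eps_br = 0.01        # particle sizes (assumed)
Nr, Nb = 200.0, 200.0                # prescribed masses
Vr, Vb = 2.0 * x, 1.0 * x            # linear external potentials V_r = 2x, V_b = x

def residual(z):
    # unknowns: r and b on the grid plus the two constants chi_r, chi_b
    r, b, chi_r, chi_b = z[:n + 1], z[n + 1:-2], z[-2], z[-1]
    Fr = np.log(r) + Vr + alpha * (eps_r**d * r + eps_br**d * b) - chi_r
    Fb = np.log(b) + Vb + alpha * (eps_b**d * b + eps_br**d * r) - chi_b
    return np.concatenate([Fr, Fb, [trap(r) - Nr, trap(b) - Nb]])

# initialise with the point-particle (epsilon = 0) Boltzmann states, as described in the text
r0 = np.exp(-Vr); r0 *= Nr / trap(r0)
b0 = np.exp(-Vb); b0 *= Nb / trap(b0)
z0 = np.concatenate([r0, b0, [np.log(r0).mean(), np.log(b0).mean()]])

sol = root(residual, z0, method='hybr', tol=1e-10)
r_inf, b_inf = sol.x[:n + 1], sol.x[n + 1:-2]
```

Treating $\chi_r$ and $\chi_b$ as two additional unknowns makes the discrete system square, so the mass constraints are enforced exactly together with the optimality conditions.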
For the general case we only obtain an asymptotic gradient flow structure with the entropy $E_\epsilon$; if we use to solve for the stationary solutions we will be committing an order $\epsilon^{2d}$ error. Instead, we compute the exact stationary states $(r_*, b_*)$ of the general system by solving the time-dependent problem for long-times, until the system has equilibrated. To solve , we use a second-order accurate finite-difference scheme in space and the method of lines with the inbuilt Matlab ode solver `ode15s` in time.
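The time-dependent solve can be sketched in the same spirit. The version below is a simplification: it uses the explicit fluxes of the equal-size, equal-diffusivity case written out in Section \[sec:existence\] instead of the general system, a basic finite-volume discretisation with no-flux boundaries, and scipy's BDF integrator as a stand-in for Matlab's `ode15s`; parameter values are again illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 200
x = np.linspace(-0.5, 0.5, n + 1)     # cell interfaces on Omega = [-1/2, 1/2]
dx = x[1] - x[0]
abar, gbar = 1e-4, 1e-4               # \bar{alpha}, \bar{gamma} (assumed small)
dVr, dVb = 2.0, 1.0                   # V_r' and V_b' for the linear potentials V_r = 2x, V_b = x

def rhs(t, y):
    r, b = y[:n], y[n:]
    rho = r + b
    # averages and centred differences at the interior cell interfaces
    rm, bm, rhom = [0.5 * (v[1:] + v[:-1]) for v in (r, b, rho)]
    drx, dbx, drhox = [(v[1:] - v[:-1]) / dx for v in (r, b, rho)]
    Jr = (1 - gbar * rhom) * drx + (abar + gbar) * rm * drhox \
         + rm * dVr + gbar * (dVb - dVr) * rm * bm
    Jb = (1 - gbar * rhom) * dbx + (abar + gbar) * bm * drhox \
         + bm * dVb + gbar * (dVr - dVb) * rm * bm
    Jr = np.concatenate([[0.0], Jr, [0.0]])   # no-flux boundary conditions
    Jb = np.concatenate([[0.0], Jb, [0.0]])
    # dt r = d/dx J_r and dt b = d/dx J_b in divergence form
    return np.concatenate([(Jr[1:] - Jr[:-1]) / dx, (Jb[1:] - Jb[:-1]) / dx])

r0 = np.full(n, 200.0)                # uniform initial data with mass 200 on a domain of length 1
b0 = np.full(n, 200.0)
sol = solve_ivp(rhs, (0.0, 50.0), np.concatenate([r0, b0]), method='BDF', rtol=1e-8, atol=1e-8)
r_star, b_star = sol.y[:n, -1], sol.y[n:, -1]   # long-time state taken as (r_*, b_*)
```

The divergence form of the update keeps the discrete masses of $r$ and $b$ constant up to solver tolerances, which is the property one wants when reading off the long-time state.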
We set $d=2$ and consider one-dimensional external potentials $\tilde V_r = \tilde V_r(x)$ and $\tilde V_b = \tilde V_b(x)$ so that the stationary states will be also one-dimensional. In particular, we take linear potentials $\tilde V_r = v_r x$ and $\tilde V_b = v_b x$ and solve for the full system and for the minimizers in $[-1/2,1/2]$, which is split into 200 intervals. The Newton solver is initialized with the stationary state solution in the case of point particles and terminated if $\lVert F(r,b,\chi_r, \chi_b) \rVert_{L^2(0,1)} \leq 10^{-8}$.
#### Example 1
First we consider the case: $\epsilon_r = \epsilon_b$ and $D_r = D_b$, that is particles of the same size and diffusivity. In this case, system has a full gradient flow structure and hence we expect the stationary states computed with the two approaches to be the same. We plot the two pairs, $(r_*, b_*)$ computed as the long-time limit of , and $(r_\infty, b_\infty)$, computed from in [[Figure]{} \[fig:stat\_exact\]]{}. The parameters are $D_r = D_b = 1$, $\epsilon_r = \epsilon_b = 0.01$, $N_b = N_r = 200$ and $v_r = 2$, $v_b = 1$. As expected, the solutions are identical.
(Figure \[fig:stat\_exact\]: stationary states $r_*$, $b_*$ (long-time limit) and $r_\infty$, $b_\infty$ (minimizers) as functions of $x$.)
#### Example 2
From Corollary \[corollary\_closeness\] we expect the stationary solutions corresponding to the asymptotic and the full gradient flow equations to agree up to order $\mathcal{O}(\epsilon^{2d})$. To investigate this, we again compare the solutions $(r_*, b_*)$ and $(r_\infty, b_\infty)$ as we move away from the case with an exact gradient-flow structure (which corresponds to $\theta_r = \theta_b = 0$, see and ).
In particular, we do a one-parameter sweep with $\theta_r$, increasing it from 0 (as in [[Figure]{} \[fig:stat\_exact\]]{}) to $9 \cdot 10^{-5}$, while keeping $\epsilon_r = \epsilon_b =0.01$ and $D_b = 1$ fixed. This ensures that when $\theta_r = 0$ then $\theta_b = 0$. The diffusivity $D_r$ of the red species is varied according to . We plot the result for $\theta_r = 8\cdot 10^{-5}$ in [[Figure]{} \[fig:stat\_error\]]{}. As expected, the error between the stationary solutions is apparent.
(Figure \[fig:stat\_error\]: $r_\infty$, $r_*$ and $b_\infty$, $b_*$ as functions of $x$.)
The absolute errors between the solutions, $\| r_\infty-r_* \|$ and $\| b_\infty - b_*\|$, and the relative errors, $\| r_\infty - r_* \|/ \| r_\infty\|$ and $\| b_\infty - b_*\|/\|b_\infty\|$, are shown as functions of $\theta_r$ in [[Figure]{} \[fig:errors\]]{}.
(Figure \[fig:errors\]: (a) absolute and (b) relative errors for $r$ and $b$ as functions of $\theta_r$.)
To conclude this section, we compute the stationary solutions of the (exact) full system and that approximated by the asymptotic gradient flow system as we vary $\epsilon$, where $\epsilon =\epsilon_b = \epsilon_r$, while keeping all the other parameters fixed. We plot the results in [[Figure]{} \[fig:errors\_ep\]]{}. As expected from Corollary \[corollary\_closeness\], the errors scale with $\epsilon^{2d} = \epsilon^4$.
(Figure \[fig:errors\_ep\]: (a) absolute and (b) relative errors for $r$ and $b$ as functions of $\epsilon$, compared with $\epsilon^4$.)
Global existence for the full gradient flow system {#sec:existence}
==================================================
In this section we present a global-in-time existence result for the system with particles of the same size and diffusivity .
Let $T>0$, let $( r_0, b_0):\Omega \to \mathcal{S}^\circ$, where $\mathcal{S}$ is defined by , be a measurable function such that $E( r_0, b_0)<\infty$. Then there exists a weak solution $( r, b):\Omega\times (0,T)\to \mathcal{S}$ to system $$\begin{aligned}
\label{theorem1_1}
\begin{aligned}
\partial_t
\begin{pmatrix}
r\\ b
\end{pmatrix} &=\nabla \cdot\begin{pmatrix}
J_r\\J_b
\end{pmatrix} \quad \text{ with }\\
(1-\overline{\gamma}\rho)J_r&=
(1-\overline{\gamma}\rho)\left((1-\bar \gamma \rho)\nabla r+(\bar \alpha+\bar \gamma) r \nabla \rho + r\nabla V_r+\bar \gamma \nabla(V_b-V_r)rb\right)\\
(1-\overline{\gamma}\rho)J_b&=(1-\overline{\gamma}\rho)\left((1-\bar \gamma \rho)\nabla b+(\bar \alpha+\bar \gamma) b \nabla \rho + b\nabla V_b+\bar \gamma \nabla(V_r-V_b)rb\right),
\end{aligned}\end{aligned}$$ satisfying $$\begin{aligned}
&\partial_t r,\, \partial_t b \in L^2(0,T;H^1(\Omega)'),\\
& \rho\, \in L^2(0,T;H^1(\Omega )),\\
&(1-\bar \gamma \rho)^2\nabla\sqrt{ r},\,(1-\bar \gamma \rho)^2\nabla\sqrt{ b} \,\in L^2(0,T;L^2(\Omega)).\end{aligned}$$ Moreover, the solution satisfies the following entropy dissipation inequality: $$\begin{aligned}
\label{theorem1_2}
\begin{aligned}
\frac{\mathrm{d}E}{\mathrm{d}t} +\mathcal{D}_1\leq C,
\end{aligned}\end{aligned}$$ where $$\mathcal{D}_1=\int_{\Omega} 2(1-\bar \gamma \rho)^4|\nabla\sqrt{ r}|^2 +2(1-\bar \gamma \rho)^4|\nabla\sqrt{ b}|^2+\frac{\bar \gamma}{2}|\nabla \rho|^2\,d{\bf x}$$ and $C\geq0$ is a constant.
We recall that system can be written as a gradient flow:
$$\begin{aligned}
\label{case1_1}
\begin{aligned}
\partial_t
\begin{pmatrix}
r \\ b
\end{pmatrix}&=\nabla\cdot\left( M( r, b)\nabla \begin{pmatrix}
u\\ v
\end{pmatrix}\right),
\end{aligned}\end{aligned}$$
where $$M=\begin{pmatrix}
r(1-\bar \gamma b)& \bar \gamma r b\\
\bar \gamma r b & b (1-\bar \gamma r)\\
\end{pmatrix}.$$ Note that if $ r, b$ and $\rho \in \mathcal{S}^\circ$, then the matrix $M$ is positive definite.
We perform a time discretisation of system using the implicit Euler scheme. The resulting recursive sequence of elliptic problems is then regularized. Let $N\in\mathbb{N}$ and let $\tau=T/N$ be the time step size. We split the time interval into the subintervals $$(0,T]=\bigcup_{k=1}^N ((k-1)\tau,k\tau],\qquad \tau=\frac{T}{N}.$$ Then for given functions $( r_{k-1}, b_{k-1}) \in \mathcal{S}$, which approximate $( r, b)$ at time $\tau(k-1)$, we want to find $( r_k, b_k) \in \mathcal{S}$ solving the regularized time discrete problem $$\begin{aligned}
\label{case1_1_reg}
\begin{aligned}
\frac{1}{\tau}\begin{pmatrix}
r_k- r_{k-1} \\ b_k- b_{k-1}
\end{pmatrix}&=\nabla\cdot\left( M( r_k, b_k)\begin{pmatrix}
\nabla \tilde{u}_k\\ \nabla \tilde{v}_k\end{pmatrix}\right)+\tau\begin{pmatrix}
\Delta \tilde{u}_k-\tilde{u}_k\\ \Delta \tilde{v}_k-\tilde{v}_k
\end{pmatrix},
\end{aligned}\end{aligned}$$ where we use the modified entropy $$\begin{aligned}
\tilde{E} =E+E_\tau=\int_{\Omega} & r(\log r -1)+ b (\log b-1) + r V_r + b V_b + \frac{\bar \alpha}{2} \left( r^2 + 2 r b + b^2 \right) \\
& + \tau (1-\bar \gamma \rho) (\log (1-\bar \gamma \rho)-1)\, d {\bf x},\nonumber\end{aligned}$$ with associated entropy variables $$\begin{aligned}
\begin{aligned}
\tilde{u}=u+u_\tau &= \log r + \bar \alpha \rho + V_r- \tau \bar \gamma \log (1-\bar \gamma \rho) ,\\
\tilde{v}=v+v_\tau &= \log b + \bar \alpha \rho + V_b- \tau \bar \gamma \log (1-\bar \gamma \rho).
\end{aligned}\end{aligned}$$
The additional term in the entropy provides upper bounds on the solutions, and the higher order regularization terms guarantee coercivity of the elliptic system in $H^1(\Omega)$, which is needed to show existence of weak solutions to a linearized version of the problem using Lax-Milgram. The existence result for the corresponding nonlinear problem is then concluded by applying Schauder's fixed point theorem.
Finally, uniform a priori estimates in $\tau$ and the use of a generalized version of the Aubin-Lions lemma allow us to pass to the limit $\tau \to 0$, leading to the existence of . Note that the compactness results are sufficient, for $1-\bar \gamma\rho >0$, to pass to the correct limit in the flux terms $J_r$ and $J_b$, i.e. leading to the global existence of weak solutions to system .
\[lemma1\] The entropy density $$\begin{aligned}
\tilde{h}:\mathcal{S}^\circ\to \mathbb{R}, \begin{pmatrix}
r\\b
\end{pmatrix}& \mapsto r(\log r-1) + b (\log b-1) + r V_r + b V_b \\
&+ \frac{\bar \alpha}{2} \left( r^2 + 2 r b + b^2 \right) + \tau (1-\bar \gamma \rho) (\log (1-\bar \gamma \rho)-1)\end{aligned}$$ is strictly convex and belongs to $C^2(\mathcal{S}^\circ).$ Its gradient $\tilde{h}':\mathcal{S}^\circ\to \mathbb{R}^2$ is invertible and the inverse of the Hessian $\tilde{h}'':\mathcal{S}^\circ\to \mathbb{R}^{2\times 2}$ is uniformly bounded.
Note that $$\tilde{h}'=\begin{pmatrix}
\log r-\tau\bar \gamma \log (1-\bar \gamma \rho) +\bar \alpha \rho +V_r\\
\log b-\tau\bar \gamma \log (1-\bar \gamma \rho) +\bar \alpha \rho +V_b
\end{pmatrix}$$ and $$\tilde{h}''=\begin{pmatrix}
\frac{1}{ r}+\tau\frac{\bar \gamma^2}{1-\bar \gamma \rho}+\bar \alpha & \tau\frac{\bar \gamma^2}{1-\bar \gamma \rho}+\bar \alpha \\
\tau \frac{\bar \gamma^2}{1-\bar \gamma \rho}+\bar \alpha &\frac{1}{ b} +\tau \frac{\bar \gamma^2}{1-\bar \gamma \rho}+\bar \alpha
\end{pmatrix}.$$ The matrix $\tilde{h}''$ is positive definite on the set $\mathcal{S}^\circ$, so $\tilde{h}$ is strictly convex. We can easily deduce that the inverse of $\tilde{h}''$ exists and is bounded on $\mathcal{S}^\circ$.
Next we verify the invertibility of $\tilde{h}'$. Note that the function $g=(g_1,g_2):\mathcal{S}^\circ\to \mathbb{R}^2,( r, b)\mapsto (\log r-\tau \bar \gamma \log (1-\bar \gamma \rho),
\log b-\tau \bar \gamma \log (1-\bar \gamma \rho))$ is invertible. Let $(x,y)\in \mathbb{R}^2$ and define $u(z)=(e^x+e^y)(1-\bar \gamma z)$ for $0<z<\frac{1}{\bar \gamma}$. Then $u$ is nonincreasing and as $u(0)>0$ and $u\left(\frac{1}{\bar \gamma}\right)=0$, there exists a unique fixed point $0<z_0<\frac{1}{\bar \gamma}$ such that $u(z_0)=z_0$. Then we define $ r=e^x(1-\bar \gamma z_0)>0$ and $ b =e^y(1-\bar \gamma z_0)>0$. It holds that $ r+ b=(e^x+e^y)(1-\bar \gamma z_0)=z_0<\frac{1}{\bar \gamma}$. So, $( r, b)\in \mathcal{S}^\circ$. Then, we define the function $f=\tilde{h}'\circ g^{-1}:\mathbb{R}^2\to \mathbb{R}^2$. Since $\tilde{h}''$ and $g'$ are nonsingular matrices for $( r , b)\in \mathcal{S}^\circ$, the Jacobian of $f$ is also nonsingular for $( r , b)\in \mathcal{S}^\circ$. Furthermore, we have that $$f(y)=y+\chi (g^{-1}(y)),\quad y\in \mathbb{R}^2,$$ where $\chi=\begin{pmatrix}
\bar \alpha \rho +V_r\\
\bar \alpha \rho +V_b
\end{pmatrix}\in C^0(\mathcal{S})\subseteq L^{\infty}(\mathcal{S}^\circ)$. So $|f(y)|\to \infty$ as $|y|\to \infty$, which together with the invertibility of the matrix $Df$ allows us to apply Hadamard’s global inverse theorem, showing that $f$ is invertible. Hence $\tilde{h}'$ is also invertible.
Time discretisation and regularization of system
-------------------------------------------------
The weak formulation of system is given by:
$$\begin{aligned}
\label{case1_1_reg_weak}
\begin{aligned}
\frac{1}{\tau}\int_\Omega \begin{pmatrix}
r_k- r_{k-1} \\ b_k- b_{k-1}
\end{pmatrix} \cdot \begin{pmatrix}
\Phi_1 \\ \Phi_2
\end{pmatrix}\, d{\bf x}&+\int_\Omega\begin{pmatrix}
\nabla \Phi_1 \\ \nabla \Phi_2
\end{pmatrix}^T M( r_k, b_k)\begin{pmatrix}
\nabla \tilde{u}_k\\ \nabla \tilde{v}_k\end{pmatrix}\,d{\bf x}\\
&+\tau R\left(\begin{pmatrix}
\Phi_1\\ \Phi_2
\end{pmatrix},\begin{pmatrix}
\tilde{u}_k\\\tilde{v}_k
\end{pmatrix}\right)=0
\end{aligned}\end{aligned}$$
for $(\Phi_1,\Phi_2)\in H^1(\Omega)\times H^1(\Omega)$, where $( r_k, b_k)=\tilde{h}'^{-1}(\tilde{u}_k,\tilde{v}_k)$ and $$\begin{aligned}
\begin{aligned}
R\left(\begin{pmatrix}
\Phi_1\\ \Phi_2
\end{pmatrix},\begin{pmatrix}
\tilde{u}_k\\\tilde{v}_k
\end{pmatrix}\right)&=\int_{\Omega}
\Phi_1\tilde{u}_k+\Phi_2\tilde{v}_k+\nabla \Phi_1\cdot \nabla \tilde{u}_k+\nabla \Phi_2\cdot \nabla \tilde{v}_k
\,dx\,dy.
\end{aligned}\end{aligned}$$
We define $S:\mathcal{S}\subseteq L^2(\Omega,\mathbb{R}^2)\to\mathcal{S}\subseteq L^2(\Omega,\mathbb{R}^2), (\tilde{r} ,\tilde{b}) \mapsto ( r, b)=\tilde{h}'^{-1}(\tilde{u},\tilde{v})$, where $(\tilde{u},\tilde{v})$ is the unique solution in $H^1(\Omega,\mathbb{R}^2)$ to the linear problem $$\label{equ_lax}
a((\tilde{u},\tilde{v}),(\Phi_1,\Phi_2))=F(\Phi_1,\Phi_2) \quad \text{for all }(\Phi_1,\Phi_2)\in H^1(\Omega,\mathbb{R}^2)$$ with $$\begin{aligned}
a((\tilde{u},\tilde{v}),(\Phi_1,\Phi_2))&=\int_{\Omega}\begin{pmatrix}
\nabla \Phi_1\\ \nabla \Phi_2
\end{pmatrix}^T M(\tilde{r},\tilde{b})\begin{pmatrix}
\nabla u \\ \nabla v
\end{pmatrix}\,d{\bf x}+\tau R\left(\begin{pmatrix}
\Phi_1\\ \Phi_2
\end{pmatrix},\begin{pmatrix}
\tilde{u}\\\tilde{v}
\end{pmatrix}\right)\\
F(\Phi_1,\Phi_2)&=-\frac{1}{\tau}\int_{\Omega}\begin{pmatrix}
\tilde{r}- r_{k-1}\\\tilde{b}- b_{k-1}
\end{pmatrix}\cdot\begin{pmatrix}
\Phi_1\\\Phi_2
\end{pmatrix}\,d{\bf x}\end{aligned}$$ The bilinear form $a:H^1(\Omega;\mathbb{R}^2)\times H^1(\Omega;\mathbb{R}^2)\to \mathbb{R}$ and the functional $F:H^1(\Omega,\mathbb{R}^2)\to \mathbb{R}$ are bounded. Moreover, $a$ is coercive since the positive semi-definiteness of $M(r,b)$ implies that $$\begin{aligned}
a((\tilde{u},\tilde{v}),(\tilde{u},\tilde{v}))&=\int_{\Omega}\begin{pmatrix}
\nabla \tilde{u}\\ \nabla \tilde{v}
\end{pmatrix}^T M(\tilde{r},\tilde{b})\begin{pmatrix}
\nabla \tilde{u} \\ \nabla \tilde{v}
\end{pmatrix}\,d{\bf x}+\tau R\left(\begin{pmatrix}
\tilde{u}\\ \tilde{v}
\end{pmatrix},\begin{pmatrix}
\tilde{u}\\ \tilde{v}
\end{pmatrix}\right)\\
&\geq \tau \left(\|\tilde{u}\|_{H^1(\Omega)}^2+\|\tilde{v}\|_{H^1(\Omega)}^2\right).\end{aligned}$$ Then the Lax-Milgram lemma guarantees the existence of a unique solution $(\tilde{u},\tilde{v})\in H^1(\Omega;\mathbb{R}^2)$ to .
To apply Schauder’s fixed point theorem, we need to show that the map $S$:
maps a convex, closed set into itself,\[schauder1\]
is compact,\[schauder2\]
is continuous.\[schauder3\]
Since $\mathcal{S}$ is convex and closed, property \[schauder1\] is satisfied; \[schauder2\] follows from the compact embedding $H^1(\Omega,\mathbb{R}^2)\hookrightarrow L^2(\Omega,\mathbb{R}^2)$. Continuity \[schauder3\]: let $(\tilde{r}_k,\tilde{b}_k)$ be a sequence in $\mathcal{S}$ converging strongly to $(\tilde{r},\tilde{b})$ in $L^2(\Omega,\mathbb{R}^2)$ and let $(\tilde{u}_k,\tilde{v}_k)$ be the corresponding unique solution to in $H^1(\Omega;\mathbb{R}^2)$. As the matrix $M$ only contains sums and products of $ r$ and $ b$, we have that $M(\tilde{r}_k,\tilde{b}_k)\to M(\tilde{r},\tilde{b})$ strongly in $L^2(\Omega,\mathbb{R}^2)$. The positive semidefiniteness of the matrix $M$ for $( r, b)\in \mathcal{S}$ provides a uniform bound for $(\tilde{u}_k,\tilde{v}_k)$ in $H^1(\Omega;\mathbb{R}^2)$. Hence, there exists a subsequence with $(\tilde{u}_k,\tilde{v}_k)\rightharpoonup (\tilde{u},\tilde{v})$ weakly in $H^1(\Omega;\mathbb{R}^2)$. The $L^{\infty}$ bounds of $M(\tilde{r}_k,\tilde{b}_k)$ and the application of a density argument allow us to pass from test functions $(\Phi_1,\Phi_2)\in W^{1,\infty}(\Omega,\mathbb{R}^2)$ to test functions $(\Phi_1,\Phi_2)\in H^1(\Omega,\mathbb{R}^2)$. So, the limit $(\tilde{u},\tilde{v})$ as the solution of problem with coefficients $(\tilde{r},\tilde{b})$ is well defined. Due to the compact embedding $H^1(\Omega,\mathbb{R}^2)\hookrightarrow L^2(\Omega,\mathbb{R}^2)$, we have a strongly converging subsequence of $(\tilde{u}_k,\tilde{v}_k)$ in $L^2(\Omega,\mathbb{R}^2)$. Since the limit is unique, the whole sequence converges. From Lemma \[lemma1\] we know that $( r, b)=\tilde{h}'^{-1}(\tilde{u},\tilde{v})$ is Lipschitz continuous, which yields continuity of $S$.
Hence, we can apply Schauder’s fixed point theorem, which assures the existence of a solution $( r, b)\in \mathcal{S}$ to with $(\tilde{r},\tilde{b})$ replaced by $(r, b)$.
Entropy dissipation {#sec:entropy_diss}
-------------------
\[lemma2\] Let $ r, b :\Omega \rightarrow \mathcal{S}$ be a sufficiently smooth solution to system $$\begin{aligned}
\label{case1_1_ent}
\begin{aligned}
\partial_t
\begin{pmatrix}
r \\ b
\end{pmatrix}&=\nabla\cdot\left( M( r, b) \nabla \begin{pmatrix}
\tilde{u}\\ \tilde{v}
\end{pmatrix}\right).
\end{aligned}\end{aligned}$$ Then, the entropy $\tilde{E}$ is decreasing and there exists a constant $C\geq 0$ such that $$\begin{aligned}
\label{entropyinequality}
\begin{aligned}
\frac{\mathrm{d} \tilde{E}}{\mathrm{d}t}+\mathcal{D}_0&\leq C,
\end{aligned}\end{aligned}$$ where $$\mathcal{D}_0=\int_{\Omega} 2(1-\bar \gamma \rho)|\nabla\sqrt{ r}|^2 +2(1-\bar \gamma \rho)|\nabla\sqrt{ b}|^2+\frac{\bar \gamma}{2}|\nabla \rho|^2+\frac{\tau^2}{2} \frac{\bar \gamma^5 \rho^2}{(1-\bar \gamma \rho)^2}|\nabla \rho |^2\,d{\bf x}.$$
System enables us to deduce the entropy dissipation relation: $$\begin{aligned}
\label{equ_1}
\begin{aligned}
\frac{\mathrm{d}\tilde{E}}{\mathrm{d}t} &=\int_{\Omega} (\tilde{u}\, \partial_t r +\tilde{v}\, \partial_t b ) \,d{\bf x} =-\int_{\Omega} \begin{pmatrix}
\nabla \tilde{u} \\ \nabla \tilde{v}
\end{pmatrix}^T M \begin{pmatrix}
\nabla \tilde{u} \\ \nabla \tilde{v}
\end{pmatrix}\,d{\bf x}\\
&=-\int_{\Omega} r (1 - \bar \gamma b) |\nabla \tilde{u}|^2+ b (1 - \bar \gamma r) |\nabla \tilde{v}|^2 +2\bar \gamma r b \nabla \tilde{u}\nabla \tilde{v}\,d{\bf x}\\
&=-\int_{\Omega} r (1 - \bar \gamma \rho) |\nabla \tilde{u}|^2+ b (1 - \bar \gamma \rho) |\nabla \tilde{v}|^2 +\bar \gamma | r \nabla \tilde{u}+ b \nabla\tilde{v}|^2\,d{\bf x}\leq 0.
\end{aligned}\end{aligned}$$ Inequality follows from the definitions of $\tilde{u}$ and $\tilde{v}$ as well as Young’s inequality to estimate the mixed terms. Furthermore we use that $$\begin{aligned}
\quad r(1-\bar \gamma \rho)& \left|\frac{\nabla r}{ r}+\tau\frac{\bar \gamma^2}{1-\bar \gamma \rho}\nabla \rho +\bar \alpha \nabla \rho \right|^2+ b (1-\bar \gamma \rho) \left| \frac{\nabla b}{ b}+\tau\frac{\bar \gamma^2}{1-\bar \gamma \rho}\nabla \rho +\bar \alpha \nabla \rho\right|^2\\
&=4(1-\bar \gamma \rho)|\nabla \sqrt{ r}|^2+4(1-\bar \gamma \rho)|\nabla \sqrt{ b}|^2+\bar \alpha^2 \rho (1-\bar \gamma \rho)|\nabla \rho|^2+2\bar \alpha(1-\bar \gamma\rho)|\nabla\rho|^2\\
&\quad +\tau^2 \frac{\bar \gamma^4 \rho}{1-\bar \gamma \rho}|\nabla \rho |^2+2\tau \bar \gamma^2|\nabla \rho|^2+2\tau \rho \bar \gamma^2 \bar \alpha |\nabla \rho |^2 \end{aligned}$$ and $$\begin{aligned}
&\quad \bar \gamma | r \nabla \tilde{u}+ b \nabla\tilde{v}|^2 = \bar \gamma \left|\nabla \rho \left(1+\frac{\tau \bar \gamma^2 \rho }{1-\bar \gamma\rho}+\bar \alpha \rho \right)+ r\nabla V_r+ b \nabla V_b \right|^2.\end{aligned}$$
This gives us $$\begin{aligned}
\frac{\mathrm{d}E}{\mathrm{d}t} &\leq -\int_\Omega 2(1-\bar \gamma \rho)|\nabla \sqrt{ r}|^2+2(1-\bar \gamma \rho)|\nabla \sqrt{ b}|^2+ \frac{\bar \gamma}{2}|\nabla\rho|^2+\frac{\tau^2}{2} \frac{\bar \gamma^5 \rho^2}{(1-\bar \gamma \rho)^2}|\nabla \rho |^2\, d{\bf x} \\
&\quad +\int_\Omega (1-\bar \gamma \rho)( r|\nabla V_r|^2+ b|\nabla V_b|^2)+\bar \gamma | r \nabla V_r + b \nabla V_b|^2\, d{\bf x}.\end{aligned}$$ Since $ r, b$ and $\rho \in \mathcal{S}$ and $\nabla V_r, \nabla V_b\in L^2(\Omega)$, we deduce .
The limit $\tau \to 0$. {#sec:limit_tau}
-----------------------
As the entropy density $\tilde{h}$ is convex, we have $\tilde{h}(\varphi_1)-\tilde{h}(\varphi_2)\leq \tilde{h}'(\varphi_1)\cdot(\varphi_1-\varphi_2)$ for all $\varphi_1,\varphi_2\in\mathcal{S}$. Choosing $\varphi_1=( r_k, b_k)$ and $\varphi_2=( r_{k-1}, b_{k-1})$ and using $\tilde{h}'( r_k, b_k)=( \tilde{u}_k, \tilde{v}_k)$, we obtain $$\begin{aligned}
\label{inequ1}
\frac{1}{\tau}\int_{\Omega}&\begin{pmatrix}
r_k- r_{k-1}\\ b_k- b_{k-1}
\end{pmatrix}\cdot\begin{pmatrix}
\tilde{u}_k\\ \tilde{v}_k
\end{pmatrix}\,d{\bf x}\geq\frac{1}{\tau}\int_{\Omega}\begin{pmatrix}
\tilde{h}( r_k, b_k)-\tilde{h}( r_{k-1}, b_{k-1})
\end{pmatrix}\,d{\bf x}.\end{aligned}$$ Applying in equation with the test function $(\Phi_1,\Phi_2)=(\tilde{u}_k,\tilde{v}_k)$ leads to $$\begin{aligned}
\label{inequ2}
\begin{aligned}
\int_{\Omega}\tilde{h}( r_k, b_k)\,d{\bf x}
+\tau\int_{\Omega}\begin{pmatrix}
\nabla \tilde{u}_k\\ \nabla \tilde{v}_k
\end{pmatrix}^T M( r_k, b_k)\begin{pmatrix}
\nabla \tilde{u}_k \\ \nabla \tilde{v}_k
\end{pmatrix}\,d{\bf x} \\
+\tau^2 R \left(\begin{pmatrix}
\tilde{u}_k\\\tilde{v}_k
\end{pmatrix},\begin{pmatrix}
\tilde{u}_k\\\tilde{v}_k
\end{pmatrix}\right)&\leq\int_{\Omega}\tilde{h}( r_{k-1}, b_{k-1})\,d{\bf x}.
\end{aligned}\end{aligned}$$ Applying the entropy inequality and resolving the recursion yields $$\begin{aligned}
\label{discrete_entropyinequality}
\begin{aligned}
\quad\int_{\Omega} \tilde{h}( r_k, b_k)\,d{\bf x}+&\tau\sum_{j=1}^k\int_{\Omega} 2(1-\bar \gamma \rho_j)|\nabla\sqrt{ r_j}|^2 +2(1-\bar \gamma \rho_j)|\nabla\sqrt{ b_j}|^2+\frac{\bar \gamma}{2}|\nabla \rho_j|^2\\
&+\frac{\tau^2}{2} \frac{\bar \gamma^5 \rho_j^2}{(1-\bar \gamma \rho_j)^2}|\nabla \rho_j |^2\,d{\bf x}+\tau^2\sum_{j=1}^k R \left(\begin{pmatrix}
\tilde{u}_j\\\tilde{v}_j
\end{pmatrix},\begin{pmatrix}
\tilde{u}_j\\\tilde{v}_j
\end{pmatrix}\right) \\
&\leq \int_{\Omega} \tilde{h}( r_0, b_0)\,dx\,dy+T C.
\end{aligned}\end{aligned}$$
Let $( r_k, b_k)$ be a sequence of solutions to . We define $ r_\tau({\bf x},t)= r_k({\bf x})$ and $b_\tau({\bf x},t)=b_k({\bf x})$ for ${\bf x}\in\Omega$ and $t\in ((k-1)\tau,k\tau]$. Then $( r_\tau, b_\tau)$ solves the following problem, where $\sigma_\tau$ denotes a shift operator, i.e. $(\sigma_\tau r_\tau)({\bf x},t)= r_\tau ({\bf x},t-\tau)$ and $(\sigma_\tau b_\tau)({\bf x},t)= b_\tau ({\bf x},t-\tau)$ for $\tau \leq t\leq T$,
$$\begin{aligned}
\label{equ1_tau}
&\int_0^T\int_{\Omega}\frac{1}{\tau}\begin{pmatrix}
r_\tau-\sigma_\tau r_\tau\\ b_\tau-\sigma_\tau b_\tau
\end{pmatrix}\cdot\begin{pmatrix}
\Phi_1\\\Phi_2
\end{pmatrix}+
\begin{pmatrix}
(1-\bar \gamma \rho_\tau)\nabla r_\tau+(\bar \alpha+\bar \gamma) r_\tau \nabla \rho_\tau \\
(1-\bar \gamma \rho_\tau)\nabla b_\tau+(\bar \alpha+\bar \gamma) b_\tau \nabla \rho_\tau \\
\end{pmatrix}\cdot\begin{pmatrix}
\nabla \Phi_1\\ \nabla \Phi_2
\end{pmatrix}\,d{\bf x}\,dt\nonumber \\
&\qquad \qquad +\int_0^T \int_{\Omega}\begin{pmatrix}
r_\tau\nabla V_r+\bar \gamma \nabla(V_b-V_r) r_\tau b_\tau\\
b_\tau\nabla V_b+\bar \gamma \nabla(V_r-V_b) r_\tau b_\tau\\
\end{pmatrix}\cdot\begin{pmatrix}
\nabla \Phi_1\\ \nabla \Phi_2
\end{pmatrix}d{\bf x}\,dt\\
&\qquad \qquad +\int_0^T \int_{\Omega}\begin{pmatrix}
\frac{\tau\bar \gamma^2r_\tau}{1-\bar \gamma\rho_\tau}\nabla \rho_\tau \\
\frac{\tau\bar \gamma^2 b_\tau}{1-\bar \gamma\rho_\tau}\nabla \rho_\tau
\end{pmatrix}\cdot \begin{pmatrix}
\nabla \Phi_1\\ \nabla \Phi_2
\end{pmatrix}d{\bf x}+\tau R\left(\begin{pmatrix}
\Phi_1\\\Phi_2
\end{pmatrix},\begin{pmatrix}
\tilde{u}_\tau\\\tilde{v}_\tau
\end{pmatrix}\right)\,dt=0,\nonumber\end{aligned}$$
for $(\Phi_1(t),\Phi_2(t))\in L^2(0,T;H^1(\Omega))$. Note that the terms in the third line are the regularization terms.
Inequality becomes $$\begin{aligned}
\label{entropyinequality2}
\begin{aligned}
\quad\int_{\Omega} \tilde{h}( r_\tau(T), b_\tau(T))\,d{\bf x}&+\int_0^T\int_{\Omega} 2(1-\bar \gamma \rho_\tau)|\nabla\sqrt{ r_\tau}|^2 +2(1-\bar \gamma \rho_\tau)|\nabla\sqrt{ b_\tau}|^2+\frac{\bar \gamma}{2}|\nabla \rho_\tau|^2\\
&+\frac{\tau^2}{2} \frac{\bar \gamma^5 \rho_\tau^2}{(1-\bar \gamma \rho_\tau)^2}|\nabla \rho_\tau |^2\,d{\bf x}\,dt+\tau\int_0^T R \left(\begin{pmatrix}
\tilde{u}_\tau\\\tilde{v}_\tau
\end{pmatrix},\begin{pmatrix}
\tilde{u}_\tau\\\tilde{v}_\tau
\end{pmatrix}\right)\,dt\\
&\leq \int_{\Omega}\tilde{h}(r_0,b_0)\,dx\,dy+T C,
\end{aligned}\end{aligned}$$ which provides the following a priori estimates. Note that from now on $K$ denotes a generic constant.
[(A priori estimates)]{}\[lemma3\] There exists a constant $K\in\mathbb{R}^+$, such that the following bounds hold: $$\begin{aligned}
\|\sqrt{1-\bar \gamma \rho_\tau}\nabla\sqrt{ r_\tau}\|_{L^2(\Omega_T)}+\|\sqrt{1-\bar \gamma \rho_\tau}\nabla\sqrt{ b_\tau}\|_{L^2(\Omega_T)}&\leq K, \label{apriori1}\\
\| \rho_\tau\|_{L^2(0,T;H^1(\Omega))}&\leq K, \label{apriori2}\\
\tau \left(\left\|\frac{ r_\tau}{1-\bar \gamma \rho_\tau}\nabla \rho_\tau\right\|_{L^2(\Omega_T)}+ \left\|\frac{ b_\tau}{1-\bar \gamma \rho_\tau}\nabla \rho_\tau\right\|_{L^2(\Omega_T)}\right)&\leq K, \label{apriori3}\\
\sqrt{\tau}(\|\tilde{u}_\tau\|_{L^2(0,T;H^1(\Omega))}+\|\tilde{v}_\tau\|_{L^2(0,T;H^1(\Omega))})&\leq K,\label{apriori4}\end{aligned}$$ where $\Omega_T=\Omega \times (0,T)$.
\[lemma4\] The discrete time derivatives of $ r_\tau$ and $ b_\tau$ are uniformly bounded, i.e. $$\begin{aligned}
\label{inequ3}
\frac{1}{\tau}\| r_\tau-\sigma_\tau r_\tau\|_{L^2(0,T;H^1(\Omega)')}+\frac{1}{\tau}\| b_\tau-\sigma_\tau b_\tau\|_{L^2(0,T;H^1(\Omega)')}&\leq K. \end{aligned}$$
Let $\Phi \in L^2(0,T;H^1(\Omega))$. Using the a priori estimates from Lemma \[lemma3\] gives $$\begin{aligned}
\frac{1}{\tau}\int_0^T \langle r_\tau-\sigma_\tau r_\tau,\Phi\rangle\,dt &= -\int_0^T\int_{\Omega}((1-\bar \gamma \rho_\tau)\nabla r_\tau+(\bar \alpha+\bar \gamma) r_\tau \nabla \rho_\tau) \nabla \Phi\,d{\bf x}\,dt\\
&-\int_0^T \int_{\Omega}( r_\tau\nabla V_r+\bar \gamma \nabla(V_b-V_r) r_\tau b_\tau)\nabla \Phi\,d{\bf x}\,dt\\
&-\tau \bar\gamma^2\int_0^T \int_{\Omega} \frac{r_\tau}{1-\bar \gamma\rho_\tau}\nabla \rho_\tau \nabla \Phi\,d{\bf x}\,dt\\
&-\tau \int_0^T\int_{\Omega} \tilde{u}_\tau\Phi+ \nabla \tilde{u}_\tau\cdot\nabla \Phi\,d{\bf x}\,dt\\
\leq \,&\|(1-\bar \gamma \rho_\tau)\nabla r_\tau\|_{L^2(\Omega_T)}\|\nabla \Phi\|_{L^2(\Omega_T)}\\
&+(\bar \alpha +\bar \gamma )\| r_\tau \|_{L^{\infty}(\Omega_T)}\|\nabla \rho_\tau \|_{L^2(\Omega_T)}\|\nabla \Phi\|_{L^2(\Omega_T)}\\
&+\| r_\tau \nabla V_r+\bar \gamma \nabla(V_b-V_r) r_\tau b_\tau\|_{L^{\infty}(\Omega_T)}\|\nabla \Phi\|_{L^1(\Omega_T)}\\
&+\tau \bar \gamma^2\left\|\frac{r_\tau}{1-\bar \gamma \rho_\tau} \nabla\rho_\tau\right\|_{L^2(\Omega_T)}\|\nabla \Phi\|_{L^2(\Omega_T)}\\
&+\tau \|\tilde{u}_\tau\|_{L^2(0,T;H^1(\Omega))}\|\Phi\|_{L^2(0,T;H^1(\Omega))}\\
\leq & \, K\|\Phi\|_{L^2(0,T;H^1(\Omega))}.\end{aligned}$$ A similar estimate can be deduced for $b$ which concludes the proof.
Even though the a priori estimates from Lemma \[lemma3\] are enough to get boundedness for all terms in in $L^2(\Omega_T)$, the compactness results are not enough to identify the correct limits for $\tau \to 0$. From Lemma \[lemma3\] we get that, as $\tau \to 0$ $$\tau \tilde{u}_\tau, \tau \tilde{v}_\tau \to 0 \quad \text{ strongly in } L^2(0,T;H^1(\Omega)).$$ Together with Lemma \[lemma4\], we get a solution to $$\begin{aligned}
\int_0^T \int_{\Omega} \begin{pmatrix}
\partial_t r\\ \partial_t b
\end{pmatrix}\cdot\begin{pmatrix}
\Phi_1\\ \Phi_2
\end{pmatrix} \,d{\bf x}\,dt = \int_0^T \int_{\Omega} \begin{pmatrix}
J_r\\ J_b
\end{pmatrix}\cdot \begin{pmatrix}
\nabla \Phi_1\\\nabla \Phi_2
\end{pmatrix}\,d{\bf x}\,dt,\end{aligned}$$ where $$\begin{aligned}
(1-\bar \gamma \rho_\tau)\nabla r_\tau+(\bar \alpha+\bar \gamma) r_\tau \nabla \rho_\tau + r_\tau\nabla V_r+\bar \gamma \nabla(V_b-V_r) r_\tau b_\tau+\frac{\tau\bar \gamma^2r_\tau}{1-\bar \gamma\rho_\tau}\nabla \rho_\tau \rightharpoonup J_r, \label{limit1}\\
(1-\bar \gamma \rho_\tau)\nabla b_\tau+(\bar \alpha+\bar \gamma) b_\tau \nabla \rho_\tau+ b_\tau\nabla V_b+\bar \gamma \nabla(V_r-V_b) r_\tau b_\tau+\frac{\tau\bar \gamma^2 b_\tau}{1-\bar \gamma\rho_\tau}\nabla \rho_\tau\rightharpoonup J_b,\label{limit2}\end{aligned}$$ weakly in $L^2(\Omega_T)$.
In order to identify the limit terms, we multiply equation by $(1-\overline{\gamma}\rho_\tau)$.
\[lemma5\] For $\tau\to 0$, we have
$(1-\bar \gamma \rho_\tau)^2\nabla r_\tau \rightharpoonup(1-\bar \gamma \rho)^2\nabla r $ weakly in $L^2(\Omega_T)$\[1\],\
$(1-\bar \gamma \rho_\tau)(\bar \alpha+\bar \gamma) r_\tau \nabla \rho_\tau\rightharpoonup(1-\bar \gamma \rho)(\bar \alpha+\bar \gamma) r \nabla \rho$ weakly in $L^2(\Omega_T)$\[2\],\
$(1-\bar \gamma \rho_\tau)r_\tau\nabla V_r \to (1-\bar \gamma \rho)r\nabla V_r$ strongly in $L^2(\Omega_T)$\[3\],\
$(1-\bar \gamma \rho_\tau)\bar \gamma \nabla(V_b-V_r) r_\tau b_\tau \to (1-\bar \gamma \rho)\bar \gamma \nabla(V_b-V_r) r b$ strongly in $L^2(\Omega_T)$\[4\],\
$(1-\bar \gamma \rho_\tau)\frac{\tau\bar \gamma^2r_\tau}{1-\bar \gamma\rho_\tau}\nabla \rho_\tau =\tau\bar \gamma^2r_\tau\nabla \rho_\tau\to 0$ strongly in $L^2(\Omega_T)$\[5\].
The estimates from Lemma \[lemma3\] and Lemma \[lemma4\] allow us to use Aubin’s lemma to deduce the existence of a subsequence (not relabeled) such that, as $\tau \to 0$: $$\label{conv_rho1}
\rho_\tau \to \rho \quad \text{ strongly in } L^2(\Omega_T).$$ This implies $$\label{conv_rho}
1-\overline{\gamma} \rho_\tau \to 1-\overline{\gamma}\rho \quad \text{ strongly in } L^2(\Omega_T).$$ Note that the $L^{\infty}$ bounds for $b_\tau$ and $r_\tau$ imply that, up to a subsequence, $$\label{conv_rb}
r_\tau\rightharpoonup r, \quad b_\tau\rightharpoonup b \quad \text{ weakly}^*\text{ in } L^{\infty}(\Omega_T).$$ With the help of a generalized version of Aubin-Lions Lemma (see Lemma 7 in [@zamponi2015analysis]), we also get strong convergence of the terms $(1-\overline{\gamma} \rho_\tau )r_\tau$ and $(1-\overline{\gamma} \rho_\tau )r_\tau b_\tau$. The lemma states that if , , and $$\|(1-\overline{\gamma}\rho_\tau)\,g\|_{L^2(0,T;H^1(\Omega))}\leq K\quad\text{ for } g\in \{1,r_\tau,b_\tau\}$$ hold, then we have strong convergence up to a subsequence for all $f=f(r_\tau,b_\tau)\in C^0(\mathcal{S};\mathbb{R}^2)$ of $$\label{conv_rhorb}
(1-\overline{\gamma}\rho_\tau)f(r_\tau,b_\tau) \to (1-\overline{\gamma}\rho)f(r,b) \quad \text{ strongly in } L^2(\Omega_T),$$ as $\tau \to 0$.
Applying with $f(r_\tau,b_\tau)=r_\tau$, we get $$\label{aubin1}
(1-\overline{\gamma}\rho_\tau)\,r_\tau\to (1-\overline{\gamma}\rho)\,r \quad \text{ strongly in }L^2(\Omega_T).$$ Writing as $$\begin{aligned}
(1-\overline{\gamma}\rho_\tau)^2\nabla r_\tau=(1-\overline{\gamma}\rho_\tau)\nabla((1-\overline{\gamma}\rho_\tau)r_\tau)-(1-\overline{\gamma}\rho_\tau)r_\tau\nabla(1-\overline{\gamma}\rho_\tau),\end{aligned}$$ and using the $L^{\infty}$ bounds together with the bounds in Lemma \[lemma3\] to get $L^2$ bounds for $\nabla ((1-\overline{\gamma}\rho_\tau) r_\tau)=\nabla(1-\overline{\gamma}\rho_\tau)r_\tau+2\sqrt{r_\tau}\sqrt{1-\overline{\gamma}\rho_\tau}\sqrt{1-\overline{\gamma}\rho_\tau}\nabla\sqrt{r_\tau}$, we can deduce $$(1-\bar \gamma \rho_\tau)^2\nabla r_\tau \rightharpoonup(1-\bar \gamma \rho)^2\nabla r \quad \text{ weakly in } L^2(\Omega_T).$$ The convergence of follows from the $L^{\infty}$ bounds, the a priori estimate as well as from the convergences and .
The strong convergences of and can be shown by applying in and the generalized Aubin-Lions lemma with $f(r_\tau,b_\tau)=r_\tau b_\tau$ in .
Finally, as $ r_\tau \nabla \rho_\tau$ is bounded in $L^2(\Omega_T)$ and $\tau \to 0$, we can deduce .
Analogous results hold for equation which allows us to perform the limit $\tau \to 0$ giving a weak solution to system .
The only thing which remains to be verified is the entropy inequality . Since $\tilde{h}$ is convex and continuous, the associated entropy functional is weakly lower semi-continuous. Because of the weak convergence of $(r_\tau(t),b_\tau(t))$, $$\int_\Omega \tilde{h}(r(t),b(t))\,d{\bf x}\leq \liminf_{\tau\to 0}\int_\Omega \tilde{h}(r_\tau (t),b_\tau (t))\,d{\bf x}\quad \text{ for a.e. } t>0.$$ We cannot expect the identification of the limit of $\sqrt{1-\overline{\gamma}\rho_\tau}\nabla \sqrt{r_\tau}$, but employing with $f(r,b)=\sqrt{r}$, we get $$(1-\overline{\gamma}\rho_\tau) \sqrt{r_\tau} \to (1-\overline{\gamma}\rho)\sqrt{r} \quad \text{ strongly in }L^2(\Omega_T)$$ with analogous convergence results for $r$ being replaced by $b$. Because of the $L^{\infty}$-bounds and the bounds in , we obtain $\nabla((1-\overline{\gamma}\rho_\tau)\sqrt{r_\tau}), \nabla((1-\overline{\gamma}\rho_\tau)\sqrt{b_\tau})\in L^2(\Omega_T)$, which implies $$\begin{aligned}
\label{equ12}
\begin{aligned}
(1-\overline{\gamma}\rho_\tau) \sqrt{r_\tau} &\rightharpoonup (1-\overline{\gamma}\rho)\sqrt{r} \quad \text{ weakly in }L^2(0,T;H^1(\Omega)),\\
(1-\overline{\gamma}\rho_\tau) \sqrt{b_\tau} &\rightharpoonup (1-\overline{\gamma}\rho)\sqrt{b} \quad \text{ weakly in }L^2(0,T;H^1(\Omega)).
\end{aligned}\end{aligned}$$ The $L^{\infty}$-bounds, and the fact that $$\nabla (1-\overline{\gamma}\rho_\tau)\rightharpoonup \nabla (1-\overline{\gamma}\rho) \quad \text{ weakly in } L^2(\Omega_T),$$ imply that both $$(1-\overline{\gamma}\rho_\tau)^2\nabla \sqrt{r_\tau}=(1-\overline{\gamma}\rho_\tau)\nabla ((1-\overline{\gamma}\rho_\tau) \sqrt{r_\tau})-(1-\overline{\gamma}\rho_\tau)\sqrt{r_\tau}\nabla (1-\overline{\gamma}\rho_\tau)$$ and $$(1-\overline{\gamma}\rho_\tau)^2\nabla \sqrt{b_\tau}=(1-\overline{\gamma}\rho_\tau)\nabla ((1-\overline{\gamma}\rho_\tau) \sqrt{b_\tau})-(1-\overline{\gamma}\rho_\tau)\sqrt{b_\tau}\nabla (1-\overline{\gamma}\rho_\tau)$$ converge weakly in $L^1$ to the corresponding limits. The $L^2$ bounds imply also weak convergence in $L^2$: $$\begin{aligned}
(1-\overline{\gamma}\rho_\tau)^2\nabla \sqrt{r_\tau}&\rightharpoonup (1-\overline{\gamma}\rho)^2\nabla \sqrt{r}\quad \text{ weakly in }L^2(\Omega_T),\\
(1-\overline{\gamma}\rho_\tau)^2\nabla \sqrt{b_\tau}&\rightharpoonup (1-\overline{\gamma}\rho)^2\nabla \sqrt{b}\quad \text{ weakly in }L^2(\Omega_T).\end{aligned}$$ As $1-\overline{\gamma}\rho_\tau\geq (1-\overline{\gamma}\rho_\tau)^4$, we can pass to the limit inferior $\tau\to 0$ in $$\begin{aligned}
\begin{aligned}
&\quad\int_{\Omega} \tilde{h}( r_\tau(T), b_\tau(T))\,d{\bf x}+\int_0^T\int_{\Omega} 2(1-\bar \gamma \rho_\tau)^4|\nabla\sqrt{ r_\tau}|^2 +2(1-\bar \gamma \rho_\tau)^4|\nabla\sqrt{ b_\tau}|^2+\frac{\bar \gamma}{2}|\nabla \rho_\tau|^2\\
&\qquad \qquad +\frac{\tau^2}{2} \frac{\bar \gamma^5 \rho_\tau^2}{(1-\bar \gamma \rho_\tau)^2}|\nabla \rho_\tau |^2\,d{\bf x}\,dt+\tau\int_0^T R \left(\begin{pmatrix}
\tilde{u}_\tau\\\tilde{v}_\tau
\end{pmatrix},\begin{pmatrix}
\tilde{u}_\tau\\\tilde{v}_\tau
\end{pmatrix}\right)\,dt\leq \int_{\Omega} \tilde{h}(r_0,b_0)\,dx\,dy+T C,
\end{aligned}\end{aligned}$$ attaining the entropy inequality .
Conclusion
==========
Gradient flow techniques provide a natural framework to study the behavior of time evolving systems that are driven by an energy. This energy is decreasing along solutions as fast as possible, a property inherent in nature. Hence many partial differential equation models exhibit this structure. Most of these systems arise in the mean-field limit of a particle system, which has a gradient structure itself. Passing from the microscopic level to the macroscopic equations often relies on closure assumptions and approximations, which perturb the original gradient flow structure.
In this paper we studied a mean-field model for two species of interacting particles which was derived using the method of matched asymptotics in the case of low volume fraction. This asymptotic expansion results in a cross-diffusion system which has a gradient flow structure up to a certain order. We therefore introduced the notion of asymptotic gradient flows for systems whose gradient flow structure is perturbed by higher order terms. We showed that this ’closeness’ to a classic gradient flow structure allows us to deduce existence and stability results for the perturbed system, which we call an asymptotic gradient flow system.
While the presented results on linear stability (Theorem \[linearstability\]), well-posedness (Theorem \[wellposedness\]) and existence of stationary solutions (Theorem \[theorem2\]) also hold on unbounded domains, the proof of the global existence result in Section \[sec:existence\] uses embeddings which do not hold on unbounded domains in general, e.g. $H^2(\Omega)$ is compactly embedded in $L^2(\Omega)$.
The presented work is a first step towards the development of a more general framework for asymptotic gradient flows. It provides the necessary tools to understand the impact of high order perturbations on the energy dissipation as well as the behavior of solutions and opens interesting directions for future research.
Acknowledgments {#acknowledgments .unnumbered}
===============
The work of MB was partially supported by the German Science Foundation (DFG) through Cells-in-Motion Cluster of Excellence (EXC 1003 CiM), Münster. MTW and HR acknowledge financial support from the Austrian Academy of Sciences ÖAW via the New Frontiers Group NST-001. The authors thank the Wolfgang Pauli Institute (WPI) Vienna for supporting the workshop that lead to this work.
\[lastpage\]
---
address:
- 'Cyclotron Institute, Texas A&M University, 3366 TAMU, College Station, TX 77843-3366, United States'
- 'Department of Physics and Astronomy, Texas A&M University, 4242 TAMU, College Station, TX 77842-4242, United States'
- 'TRIUMF, 4004 Wesbrook Mall, Vancouver, BC V6T 2A3, Canada'
- 'Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB R3T 2N2, Canada'
- 'School of Physics and Astronomy, Tel Aviv University, Tel Aviv, Israel'
- 'Department of Chemistry, Texas A&M University, 3012 TAMU, College Station, TX 77842-3012, United States'
author:
- 'B. Fenker'
- 'J.A. Behr'
- 'D. Melconian'
- 'R.M.A. Anderson'
- 'M. Anholm'
- 'D. Ashery'
- 'R.S. Behling'
- 'I. Cohen'
- 'I. Craiciu'
- 'J.M. Donohue'
- 'C. Farfan'
- 'D. Friesen'
- 'A. Gorelov'
- 'J. McNeil'
- 'M. Mehlman'
- 'H. Norton'
- 'K. Olchanski'
- 'S. Smale'
- 'O. Thériault'
- 'A.N. Vantyghem'
- 'C.L. Warner'
bibliography:
- 'library\_manual.bib'
title: 'Precision measurement of the nuclear polarization in laser-cooled, optically pumped $^{37}$K'
---
optical pumping, $\beta$-decay, fundamental symmetries, atom-trapping, parity violation
---
abstract: 'In this paper we show that the non-Hermitian Hamiltonians $H=p^{2}-gx^{4}+a/x^2$ and the conventional Hermitian Hamiltonians $h=p^2+4gx^{4}+bx$ ($a,b\in \mathbb{R}$) are isospectral if $a=(b^2-4g\hbar^2)/16g$ and $a\geq -\hbar^2/4$. This new class includes, as a special case, the equivalent non-Hermitian and Hermitian Hamiltonian pair, $p^{2}-gx^{4}$ and $p^{2}+4gx^{4}-2\hbar \sqrt{g}x,$ found by Jones and Mateo six years ago. When $a=\left(b^{2}-4g\hbar ^{2}\right) /16g$ and $a<-\hbar^2/4,$ although $h$ and $H$ are still isospectral, $b$ is complex and $h$ is no longer the Hermitian counterpart of $H$.'
author:
- Asiri Nanayakkara
- 'Thilagarajah Mathanaranjan${^*}$'
title: |
Isospectral Hermitian counterpart of complex non Hermitian\
Hamiltonian $\ p^{2}-gx^{4}+a/x^{2}$
---
Introduction {#sec:1}
============
Bender and Boettcher in a pioneering paper [@R1] showed that non-Hermitian, $PT$-symmetric Hamiltonians of the form $$H_{0}=p^{2}-g\left( ix\right)^{N} \label{eq:1}$$ possess real and positive eigenspectra when $N\geq2$. Since then many 1-D $PT$-symmetric non-Hermitian Hamiltonian models have been investigated both quantum mechanically and classically. Interest in non-Hermitian $PT$-symmetric models has increased considerably during the last decade, mainly due to their usefulness in areas such as particle physics, quantum optics, supersymmetry and magnetohydrodynamics, and the applicability and usefulness of non-Hermitian $PT$-symmetric quantum mechanics are now well established [@R1; @R2; @R3; @R4; @R5; @R6; @R7; @R8]. If $PT$-symmetry is not spontaneously broken, non-Hermitian $PT$-symmetric Hamiltonians have real energy spectra. However, for a given $PT$-symmetric Hamiltonian, there is no simple way of figuring out ahead of time whether the $PT$-symmetry is spontaneously broken or not. Mostafazadeh [@R6] proved that if the Hamiltonian of a quantum system possesses an exact (unbroken) $PT$-symmetry then it is equivalent to a Hermitian Hamiltonian with the same spectrum. This was achieved by constructing the unitary operator relating a given non-Hermitian Hamiltonian with exact $PT$-symmetry to a Hermitian Hamiltonian. Nonetheless, only in a few instances have Hermitian Hamiltonians been found which possess the same eigenspectra as $PT$-symmetric non-Hermitian Hamiltonians [@R7; @R8; @R9; @R10; @R11; @R12].\
Using operator techniques and path integral methods, Jones et al. [@R7; @R8; @R9] found that the complex non-Hermitian $PT$-symmetric Hamiltonian $p^{2}-gx^{4}$ and the conventional Hermitian Hamiltonian $p^{2}+4gx^{4}-2\hbar\sqrt{g}x$ are isospectral. However, using a method based on a combination of certain integral (viz. Fourier) and point (i.e. change-of-variables) spectrum-preserving transformations, Buslaev and Grecchi [@R10] had shown this equivalence several years earlier. It is also interesting to note that these results were published five years before the pioneering paper [@R1] on $PT$-symmetry by Bender et al.\
Recently, the Asymptotic Energy Expansion (AEE) method has been applied by Nanayakkara et al to show that the complex non-Hermitian $PT$-symmetric Hamiltonian $p^{2}-gx^{4}+4i\hbar \sqrt{g}x$ and the conventional Hermitian Hamiltonian $p^{2}+4gx^{4}+6\hbar \sqrt{g}x$ have the same eigenspectra [@R12].\
In this paper we show that the Hamiltonians $H=p^{2}-gx^{4}+a/x^{2}$ and $h=p^{2}+4gx^{4}+bx$ are isospectral if $a=\left( b^{2}-4g\hbar ^{2}\right)/16g$, and that the pairs $p^{2}-gx^{4}+4i\hbar \sqrt{g}x$, $p^{2}+4gx^{4}+6\hbar\sqrt{g}x$ as well as $p^{2}-gx^{4}$, $p^{2}+4gx^{4}-2\hbar \sqrt{g}x$ correspond to special cases of $H$ and $h$. The outline of the paper is as follows. In Sec. \[sec:2\], the equivalence condition for $H$ and $h$ is derived using the AEE method. The behavior of the eigenenergies and the breakdown of $PT$-symmetry with respect to the parameters of the Hamiltonians are investigated in Sec. \[sec:3\]. Exact ground state wave functions, superpotentials and supersymmetric partners of both Hamiltonians are analyzed in Sec. \[sec:4\]. Concluding remarks are given in Sec. \[sec:5\].
Derivation of equivalence condition {#sec:2}
===================================
In this section, we establish the conditions for which the non-Hermitian $PT$-symmetric quartic Hamiltonian $$H=p^{2}-gx^{4}+\frac{a}{x^{2}} \label{eq:2}$$ and the conventional Hermitian Hamiltonian $$h=p^{2}+\alpha x^{4}+bx \label{eq:3}$$ are equivalent. Here $a$, $g$, $\alpha $ and $b$ are assumed to be real. However, later in Sec. \[sec:3\], we consider the cases where these parameters are complex as well. In a previous study on equivalent non-Hermitian and Hermitian Hamiltonians [@R12], it was shown that the Hamiltonians $p^{2}-gx^{4}+4i\hbar \sqrt{g}x$ and $p^{2}+4gx^{4}+6\hbar \sqrt{g}x$ are equivalent with zero energy ground states and that the supersymmetric partner of $-gx^{4}+4i\hbar \sqrt{g}x$ is $-gx^{4}+\frac{2\hbar ^{2}}{x^{2}}.$ Furthermore, these two Hamiltonians were found to be isospectral as well. Consequently, the Hamiltonian $p^{2}-gx^{4}+\frac{2\hbar ^{2}}{x^{2}}$ is equivalent to $p^{2}+4gx^{4}+6\hbar \sqrt{g}x$, and hence $a=2\hbar ^{2}$, $\alpha =4g$ and $b=6\hbar \sqrt{g}$ is one set of parameters for which (\[eq:2\]) and (\[eq:3\]) are equivalent. Therefore, it is worthwhile to investigate whether there are any other parameter values for which $H$ and $h$ are equivalent.\
In order to obtain the general conditions of equivalence, we used the Asymptotic Energy Expansion (AEE) method [@R12; @R13; @R14], which was employed previously by Nanayakkara et al. The AEE method is an analytic method where each term in the expansion can be obtained explicitly in terms of Gamma functions and multinomials of the parameters in the potential. The accuracy and the applicability of the AEE method for obtaining equivalent Hamiltonians have been demonstrated in [@R12]. First the AEE is derived for the non-Hermitian Hamiltonian $H$. Since the Hamiltonian $H$ contains a $1/x^{2}$ term, the standard AEE method used for even-degree polynomial potentials has to be modified. Therefore the complete derivation is described below.\
\
\
\
\
Consider the non-Hermitian Hamiltonian $H$ $$H\left( x,p\right) =p^{2}+V\left( x\right) \label{eq:4}$$ where $V\left( x\right) =-gx^{4}+\frac{a}{x^{2}}.$\
\
The AEE quantization condition for this potential is $$J\left( E\right) =n\hbar \label{eq:5}$$ where $n$ is a positive integer and the quantum action variable $J\left(E\right) $ is given by $$J\left( E\right) =\frac{1}{2\pi }\underset{\gamma }{\int }P\left( x,E\right)dx. \label{eq:6}$$ Here $P\left( x,E\right) $ satisfies the Riccati equation $$\frac{\hbar }{i}\frac{\partial P\left( x,E\right) }{\partial x}+P^{2}\left( x,E\right) =E-V\left( x\right) =P_{c}^{2}\left( x,E\right) \label{eq:7}$$ Note that $P\left( x,E\right) $ relates to the wave function as $P\left( x,E\right) =\frac{\hbar }{i}\frac{\partial \Psi /\partial x}{\Psi }.$ The contour $\gamma $ in (\[eq:6\]) encloses the two physical turning points of $P_{c}\left( x,E\right) $. The boundary condition imposed upon $P\left( x,E\right) $ is $P\left( x,E\right) \rightarrow P_{c}\left( x,E\right) $ as $\hbar \rightarrow 0$ [@R15; @R16].\
\
For the above potential, (\[eq:7\]) becomes $$\frac{\hbar }{i}\frac{\partial P\left( x,E\right) }{\partial x}+P^{2}\left(
x,E\right) =E+gx^{4}-\frac{a}{x^{2}}. \label{eq:8}$$ Let $\epsilon =E^{-1/4}$ and $y=g^{1/4}\epsilon x.$ Then (\[eq:8\]) becomes, after simplification, $$\hat{h}y^{2}\epsilon ^{5}\frac{\partial P\left( y,\epsilon \right) }{\partial y}+y^{2}\epsilon ^{4}P^{2}\left( y,\epsilon \right)
=y^{2}(1+y^{4})-ag^{1/2}\epsilon ^{6} \label{eq:9}$$ where $\hat{h}=\frac{\hbar }{i}g^{1/4}$. In order to obtain the asymptotic energy expansion, $P\left( y,\epsilon \right) $ is first expanded as an asymptotic series in powers of $\epsilon $, and recurrence relations for the coefficients are then obtained. This expansion usually has zero radius of convergence. However, truncating the series after a finite number of terms provides a good approximation to $P\left( y,\epsilon \right) $ [@R17; @R18]. The asymptotic series expansion is written as $$P\left( y,\epsilon \right) =\epsilon ^{s}\overset{\infty }{\underset{k=0}{\sum }}a_{k}\left( y\right) \epsilon ^{k} \label{eq:10}$$ where $a_{k}$ and $s$ are determined below. Substituting (\[eq:10\]) into (\[eq:9\]) and equating the coefficients of $\epsilon ^{0}$, one finds $s=-2$ and $a_{0}=\sqrt{1+y^{4}}$, and (\[eq:9\]) becomes
$$\hat{h}\overset{\infty }{\underset{k=0}{y^{2}\sum }}\epsilon ^{k+3}\frac{da_{k}}{dy}+y^{2}\underset{i=0}{\overset{\infty }{\sum }}\overset{\infty }{\underset{j=0}{\sum }}a_{i}a_{j}\epsilon
^{i+j}=y^{2}(1+y^{4})-ag^{1/2}\epsilon ^{6} \label{eq:11}$$
Next, assuming $a_{k}=0$ when $k<0$ and rearranging terms, we obtain $$\hat{h}y^{2}\sum_{k=1}^{\infty }\frac{da_{k-3}}{dy}\,\epsilon ^{k}+y^{2}\sum_{k=1}^{\infty }\sum_{i=1}^{k-1}a_{i}a_{k-i}\,\epsilon ^{k}+2y^{2}a_{0}\sum_{k=0}^{\infty }a_{k}\,\epsilon ^{k}=y^{2}(1+y^{4})-ag^{1/2}\epsilon ^{6}. \label{eq:12}$$\
Then coefficients $a_{k}$’s are given by
$$a_{k}=\frac{-1}{2y^{2}a_{0}}\left[ y^{2}\underset{i=1}{\overset{k-1}{\sum }}a_{i}a_{k-i}+\hat{h}y^{2}\frac{da_{k-3}}{dy}+ag^{1/2}\delta _{k,6}\right] .
\label{eq:13}$$
In the above formula $a_{k}=0\ \forall k<0.$ Now $J$ can be written as $$J\left( E\right) =\overset{\infty }{\underset{k=0}{\sum }}b_{k}E^{\frac{-(k-3)}{4}} \label{eq:14}$$ where $$b_{k}=\frac{1}{2\pi }\underset{\gamma }{\int }a_{k}dy \label{eq:15}$$ and can be determined analytically in terms of $g$ and $a.$ The contour $
\gamma $ encloses the two branch points of $\sqrt{1+y^{4}}$ (i.e. $e^{i\pi
/4}$ and $e^{3i\pi /4}$) on the complex plane. The quantization condition $J\left( E\right) =n\hbar $ determines the eigenenergies of $H.$\
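As an illustration of how the recurrence (\[eq:13\]) can be evaluated in practice, the following sympy sketch generates the coefficients $a_{k}(y)$ symbolically; the contour integration (\[eq:15\]) that turns them into the $b_{k}$ is carried out separately and is not reproduced here.

```python
import sympy as sp

y, a, g, hhat = sp.symbols('y a g hhat')   # hhat stands for (hbar/i) g^{1/4}
a0 = sp.sqrt(1 + y**4)
coeff = {0: a0}

for k in range(1, 7):
    quad = sum(coeff[i] * coeff[k - i] for i in range(1, k))     # sum_{i=1}^{k-1} a_i a_{k-i}
    deriv = sp.diff(coeff[k - 3], y) if k >= 3 else 0            # d a_{k-3}/dy, with a_k = 0 for k < 0
    delta = a * sp.sqrt(g) if k == 6 else 0                      # the a g^{1/2} delta_{k,6} term
    coeff[k] = sp.simplify(-(y**2 * quad + hhat * y**2 * deriv + delta) / (2 * y**2 * a0))

print(coeff[3])   # first coefficient carrying the quantum correction (proportional to hhat)
print(coeff[6])   # first coefficient depending on the 1/x^2 coupling a
```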
Using (\[eq:13\]) and evaluating the integral (\[eq:15\]) analytically, the asymptotic series is obtained. The eigenenergy expansion becomes $$J\left( E\right) =\underset{k=0}{\overset{\infty }{\sum }}b_{k}E^{\frac{-(k-3)}{4}}. \label{eq:16}$$ Here the first six non-zero $b_{k}$'s are $$b_{0}=\frac{\Gamma \left[ \frac{1}{4}\right] }{3g^{1/4}\sqrt{2\pi }\smallskip \ \Gamma \left[ \frac{3}{4}\right] }, \label{eq:17}$$ $$b_{3}=-\frac{\hbar }{2}, \label{eq:18}$$ $$b_{6}=\frac{g^{1/4}(4a-\hbar ^{2})\ \Gamma \left[ \frac{3}{4}\right] }{4\sqrt{2\pi }\smallskip \ \Gamma \left[ \frac{1}{4}\right] },\label{eq:19}$$ $$b_{12}=\frac{g^{3/4}(80a^{2}-200a\hbar ^{2}-11\hbar ^{4})\ \Gamma \left[ \frac{1}{4}\right] }{1536\sqrt{2\pi }\smallskip \ \Gamma \left[ \frac{3}{4}\right] },
\label{eq:20}$$ $$b_{18}=-\frac{77g^{5/4}(192a^{3}-1296a^{2}\hbar ^{2}+1860a\hbar ^{4}+61\hbar ^{6})\
\Gamma \left[ \frac{3}{4}\right] }{30720\sqrt{2\pi }\smallskip \ \Gamma \left[ \frac{1}{4}\right] }, \label{eq:21}$$ $$b_{24}=-\frac{1105g^{7/4}(256a^{4}-3328a^{3}\hbar ^{2}+14432a^{2}\hbar ^{4}-17360a\hbar ^{6}+353\hbar ^{8})\ \Gamma \left[ \frac{1}{4}\right] }{3670016\sqrt{2\pi }\smallskip \ \Gamma \left[ \frac{3}{4}\right] }. \label{eq:22}$$ The next step is to obtain the AEE expansion for the Hamiltonian $h$ in (\[eq:3\]). Since the AEE expansion for $h$ has been derived in [@R12], only the result is presented below. The expansion of the quantum action variable $J(E)$ for the Hamiltonian $h$ is
$$J^{\prime }\left( E\right) =\underset{k=0}{\overset{\infty }{\sum }}\beta
_{k}E^{\frac{-(k-3)}{4}}.\label{eq:23}$$
The first six non zero $\beta _{k}$’s are $$\beta _{0}=\frac{\Gamma \left[ \frac{1}{4}\right] }{3\sqrt{\pi }\alpha
^{1/4}\smallskip \ \Gamma \left[ \frac{3}{4}\right] }, \label{eq:24}$$ $$\beta _{3}=-\frac{\hbar }{2}, \label{eq:25}$$ $$\beta _{6}=-\frac{(2\hbar ^{2}\alpha -b^{2})\ \Gamma \left[ \frac{3}{4}\right] }{8\sqrt{\pi }\alpha ^{3/4}\smallskip \ \Gamma \left[ \frac{1}{4}\right] }, \label{eq:26}$$ $$\beta _{12}=\frac{(44\hbar ^{4}\alpha ^{2}-60\hbar ^{2}\alpha b^{2}+5b^{4})\
\Gamma \left[ \frac{1}{4}\right] }{6144\sqrt{\pi }\alpha ^{5/4}\smallskip \
\Gamma \left[ \frac{3}{4}\right] }, \label{eq:27}$$ $$\beta _{18}=\frac{77(488\hbar ^{6}\alpha ^{3}-636\hbar ^{4}\alpha
^{2}b^{2}+90\hbar ^{2}\alpha b^{4}-3b^{6})\ \Gamma \left[ \frac{3}{4}\right]
}{245760\sqrt{\pi }\alpha ^{7/4}\smallskip \ \Gamma \left[ \frac{1}{4}\right]
}, \label{eq:28}$$ $$\beta _{24}=-\frac{1105(5648\hbar ^{8}\alpha ^{4}-6304\hbar ^{6}\alpha
^{3}b^{2}+1064\hbar ^{4}\alpha ^{2}b^{4}-56\hbar ^{2}\alpha b^{6}+b^{8})\
\Gamma \left[ \frac{1}{4}\right] }{58720256\sqrt{\pi }\alpha
^{9/4}\smallskip \ \Gamma \left[ \frac{3}{4}\right] }. \label{eq:29}$$\
\
By equating the coefficients of $J(E)$ expansions of both Hamiltonians, the conditions of the equivalence are obtained as $$\alpha =4g, \label{eq:30}$$ $$a=\frac{b^{2}-4g\hbar ^{2}}{16g}. \label{eq:31}$$\
The condition (\[eq:30\]) is obtained by equating the terms $b_{0}$ and $\beta _{0}$, while condition (\[eq:31\]) is derived by equating $b_{6}$ and $\beta _{6}$. When these two conditions are satisfied, $b_{k}$ and $\beta_{k}$ were found to be equal for the next hundred $k$ values, indicating that the AEE of $J(E)$ and $J^{\prime }(E)$ are identical. In addition, imposing the condition that $h$ is Hermitian restricts the parameters to $b^{2}\geq 0$ and $a\geq -\frac{\hbar ^{2}}{4}$.\
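For example, with $g=1$ and $\hbar =1$, the Hermitian Hamiltonian $h=p^{2}+4x^{4}+10x$ of Table \[tab:1\] has $b=10$, and condition (\[eq:31\]) then gives $a=(100-4)/16=6$, i.e. $H=p^{2}-x^{4}+\frac{6}{x^{2}}$; similarly, $b=2i$ in Table \[tab:2\] gives $a=(-4-4)/16=-\frac{1}{2}$.\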
Since the AEE expansion is accurate for higher eigenvalues, we have verified the equivalence of the Hamiltonians $h$ and $H$ for low energies by solving the Schrödinger equation numerically along suitable contours for various values of parameters $a$ and $b$.\
It is evident from Table \[tab:1\] and Table \[tab:2\] that both Hamiltonians $h$ and $H$ have the same eigenspectra for the first ten eigenstates. On the other hand, the expansion of $J(E)$ is very accurate for large eigenvalues, and both Hamiltonians have identical $J(E)$ expansions as shown above.
**n** **$E_{H}$** **$E_{h}$** **$E_{J}$**
------- ------------- ------------- -------------
0 -2.4558329 -2.4558327 1.5186675
1 4.5014539 4.5014546 4.5046982
2 10.931991 10.931992 10.931992
3 17.793015 17.793016 17.793016
4 25.238132 25.238134 25.238134
5 33.213971 33.213972 33.213972
6 41.666149 41.666150 41.666150
7 50.549802 50.549804 50.549804
8 59.828456 59.828459 59.828459
9 69.472108 69.472110 69.472110
10 79.455684 79.455685 79.455685
: Verification of the equivalence of the Hamiltonians $H=p^{2}-x^{4}+\frac{6}{x^{2}}$ and $h=p^{2}+4x^{4}+10x$. The first ten exact eigenenergy values of $H$ and $h$ and the approximate eigenvalues $E_{J}$ obtained by the $J(E)$ expansion method are given up to eight digits.[]{data-label="tab:1"}
**n** **$E_{H}$** **$E_{h}$** **$E_{J}$**
------- ------------- ------------- -------------
0 1.8961344 1.8961346 2.4545618
1 6.0533268 6.0533273 6.0884046
2 11.867933 11.867933 11.866200
3 18.510801 18.510802 18.510890
4 25.836222 25.836224 25.836220
5 33.733312 33.733314 33.733314
6 42.128813 42.128814 42.128814
7 50.969273 50.969275 50.969275
8 60.213679 60.213680 60.213680
9 69.829366 69.829368 69.829368
10 79.789590 79.789590 79.789590
: Verification of the equivalence of the Hamiltonians $H=p^{2}-x^{4}-\frac{1}{2x^{2}}$ and $h=p^{2}+4x^{4}+2ix$. The first ten exact eigenenergy values of $H$ and $h$ and the approximate eigenvalues $E_{J}$ obtained by the $J(E)$ expansion method are given up to eight digits.[]{data-label="tab:2"}
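As a cross-check of the $E_{J}$ columns, the truncated expansion (\[eq:16\]) with the leading coefficients (\[eq:17\])–(\[eq:19\]) can be inverted numerically for the quantization condition $J(E)=n\hbar $. A minimal sketch (assuming numpy and scipy are available; shown for the parameters of Table \[tab:1\], $g=1$, $\hbar =1$, $a=6$) is:

``` python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

hbar, g, a = 1.0, 1.0, 6.0   # Table 1: H = p^2 - x^4 + 6/x^2, i.e. b = 10 in h

# leading non-zero coefficients b_0, b_3, b_6 from Eqs. (17)-(19)
b0 = gamma(0.25) / (3.0 * g**0.25 * np.sqrt(2.0 * np.pi) * gamma(0.75))
b3 = -hbar / 2.0
b6 = g**0.25 * (4.0 * a - hbar**2) * gamma(0.75) / (4.0 * np.sqrt(2.0 * np.pi) * gamma(0.25))

def J(E):
    """Truncated quantum action variable J(E) = sum_k b_k E^{-(k-3)/4}."""
    return b0 * E**0.75 + b3 + b6 * E**(-0.75)

n = 10                                    # quantization condition J(E) = n*hbar
E_J = brentq(lambda E: J(E) - n * hbar, 1.0, 200.0)
print(E_J)                                # ~79.46, cf. E_J = 79.455685 in Table 1
```

The lowest few eigenvalues deviate from the exact values, as expected for an asymptotic expansion and as seen in the $E_{J}$ columns of both tables.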
Behavior of eigenenergies and Hermiticity {#sec:3}
==========================================
In the previous section, the conditions of equivalence have been established. Next the behavior of the eigenvalues of $H$ is examined as a function of the parameter $a$. The Hermitian condition on $h$ is relaxed such that $b^{2}$ can also be negative. Therefore now $a$ can be less than $-\frac{\hbar ^{2}}{4}$ as well$.$ When $a$ is large ($\simeq 40$ and $\hbar
=1 $) the lower eigenvalues of $H$ are negative, as shown in figure \[f:1\]. As $a$ decreases the eigenvalues become larger, and the whole spectrum becomes real and positive when $-2.76 < a < 2$. When $a=2$, $H$ has a zero energy ground state and the Hamiltonians $H$ and $h$ are recognized as the equivalent Hamiltonians found by Nanayakkara et al [@R12]. When $a=0$ and $b=2\hbar \sqrt{g}$, the Hamiltonians $H$ and $h$ become the equivalent non-Hermitian - Hermitian Hamiltonian pair found by Jones et al [@R7; @R8].\
If $-\frac{\hbar ^{2}}{4}\leq a<\infty $, $h$ is the Hermitian equivalent Hamiltonian of the $PT$-symmetric Hamiltonian $H.$ When $a<-\frac{\hbar ^{2}}{4}$, $b$ is pure imaginary and $h$ loses its Hermiticity, becoming non-Hermitian and $PT$-symmetric. However, $h$ and $H$ are still isospectral. The Hamiltonian $h$ for this case has been studied in detail in the past by Delabaere et al [@R19] and Bender et al [@R21]. Similar to what Bender et al have observed for the Hamiltonian $h$, as $a$ decreases further below $-\frac{\hbar
^{2}}{4}$, adjacent pairs of energy levels of $H$ also coalesce and then become complex, starting with the ground state and the first excited state, as shown in figure \[f:1\]. The value of $a$ at which this coalescence takes place for $H$ is $a=A=-2.76\hbar ^{2}$. Note that when $a<-\frac{\hbar ^{2}}{4},$ a decrease in $a$ in the Hamiltonian $H$ is equivalent to an increase in $\left\vert b\right\vert $ in $h$.
![Six lowest eigenvalues of the Hamiltonian $H$ as a function of the parameter $a,$ when $\hbar =1$.[]{data-label="f:1"}](F_1.png){width="10cm" height="8cm"}
At this point it is useful to turn our attention to the Hermiticity of both systems. We have observed previously that, for $-\frac{\hbar ^{2}}{4}\leq
a<\infty ,$ $h$ is the Hermitian equivalent of $H$ and both Hamiltonians have real spectra. When $A<a<-\frac{\hbar ^{2}}{4}$, the eigenspectrum of $H$ is real and positive while $h$ has become non-Hermitian and $PT$-symmetric, as $b$ is pure imaginary. Therefore $h$ is no longer the Hermitian equivalent of $H.$ If the $PT$-symmetry of $H$ is not broken for $A<a<-\frac{\hbar ^{2}}{4}$, then by reference [@R6] there exists an equivalent Hermitian Hamiltonian which is different from $h$. However, there is another possibility: although the eigenspectrum of $H$ is entirely real, the $PT$-symmetry of $H$ may be spontaneously broken, in which case $H$ no longer has a Hermitian counterpart (note that it has not been proven that a real eigenspectrum of a $PT$-symmetric system implies unbroken $PT$-symmetry). On the other hand, when $a<A$ the lower eigenenergies of both $H$ and $h$ become complex and hence $H$ no longer has unbroken $PT$-symmetry.
Unbroken Supersymmetry {#sec:4}
======================
In [@R12], it was shown that the Hamiltonian $$H_{1}=p^{2}-x^{4}+4ix \tag*{(32)}$$ has a zero energy ground state and that the Hamiltonian $$H_{2}=p^{2}-x^{4}+2/x^{2} \tag*{(33)}$$ is its supersymmetric partner (assuming $\hbar =1$, $2m=1,$ and $g=1$). In this section we examine these two systems in detail. These two systems have been investigated in detail within a single framework by Dorey et al [@R20]; $H_{1}$ corresponds to $l=0$ and $\alpha =4$ and $H_{2}$ corresponds to $l=1$ and $\alpha =0$ in their notation. Therefore our discussion will be based on some of the results they have obtained in [@R20]. With the above choice of $\alpha $ and $l$, the ground state wave function of $H_{1}$ is on the line $\alpha _{-}=0$ while that of $H_{2}$ is on the line $\alpha
_{+}=0$ in their notation. Based on [@R20] and the current study, the following results can be listed:\
\
(1) Hamiltonians $H_{1}$ and $H_{2}$ have zero energy ground states with the normalizable wave functions $\Phi _{0}^{(1)}(x)$ and $\Phi _{0}^{(2)}(x)$ respectively, $$\Phi _{0}^{(1)}(x)=ixe^{\frac{i}{3}x^{3}} \tag*{(34)}$$ and $$\Phi _{0}^{(2)}(x)=\left( ix\right) ^{-1}e^{-\frac{i}{3}x^{3}} \tag*{(35)}$$ where the quantization contour starts and ends at $\left\vert
x\right\vert =\infty $ joining the (Stokes) sectors $S_{-1}$ and $S_{1}$, with $$S_{k}=\left\{ x:\left\vert \arg (x)-\frac{\pi k}{3}\right\vert <\frac{\pi }{6}\right\} \tag*{(36)}$$\
(2) The superpotential $W_{H_{1}}\left( x\right) $ obtained from the zero energy ground state wave function of $H_{1}$ is $$W_{H_{1}}\left( x\right) =-\frac{1+ix^{3}}{x} \tag*{(37)}$$\
(3) The superpotential $W_{H_{2}}\left( x\right) $ obtained from the zero energy ground state wave function of $H_{2}$ is $$W_{H_{2}}\left( x\right) =\frac{1+ix^{3}}{x}=-W_{H_{1}}\left( x\right)
\tag*{(38)}$$ (4) The supersymmetric partner Hamiltonian of $H_{1}$ is $H_{2}$ and the supersymmetric partner Hamiltonian of $H_{2}$ is $H_{1}$, hinting at broken supersymmetry. However, both have normalizable zero energy ground state wave functions, assuring unbroken supersymmetry (a consistency check is given below).\
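As a quick consistency check of (2)–(4) (with $\hbar =1$, $2m=1$, and using the standard factorization $H=p^{2}+W^{2}\mp W^{\prime }$ with $\Phi _{0}\propto e^{-\int W\,dx}$), writing $W_{H_{1}}\left( x\right) =-\left( \frac{1}{x}+ix^{2}\right) $ gives $$W_{H_{1}}^{2}\mp W_{H_{1}}^{\prime }=\frac{1}{x^{2}}+2ix-x^{4}\mp \left( \frac{1}{x^{2}}-2ix\right) ,$$ so that $W_{H_{1}}^{2}-W_{H_{1}}^{\prime }=-x^{4}+4ix$ reproduces the potential of $H_{1}$ while $W_{H_{1}}^{2}+W_{H_{1}}^{\prime }=-x^{4}+\frac{2}{x^{2}}$ reproduces that of $H_{2}$.\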
Therefore $H_{1}$ and $H_{2}$ are isospectral having unbroken supersymmetry with zero energy ground states as concluded in [@R20]. Similar behavior has also been observed for some other systems by Znojil et al [@R22].
Summary and concluding remarks {#sec:5}
==============================
In this paper we have shown that the non-Hermitian Hamiltonian $H=p^{2}-gx^{4}+a/x^{2}$ is equivalent to the Hermitian Hamiltonian $h=p^{2}+4gx^{4}+bx$ if $a=\left( b^{2}-4g\hbar ^{2}\right) /16g$ and $a\geq -\frac{\hbar ^{2}}{4}.$ We applied the asymptotic energy expansion (AEE) method to obtain this result. The AEE method is based on a series expansion of the quantum action variable $J(E)$ in rational powers of the reciprocal of the energy. The $J(E)$ expansions of these two Hamiltonians were found to be identical. In addition, the spectral equivalence of $H$ and $h$ was verified with eigenspectra obtained by solving the Schrödinger equation for these Hamiltonians numerically along suitable contours of integration for various values of $a$ and $b$.
When $a<-\frac{\hbar ^{2}}{4},$ it was shown that $h$ becomes non-Hermitian and is no longer the Hermitian equivalent of $H.$ However, $H$ and $h$ remain isospectral partners even if $a<-\frac{\hbar ^{2}}{4}$. When $a$ decreases below $a=-2.76\hbar ^{2}$, adjacent pairs of energy levels of $H$ coalesce and then become complex conjugate pairs, starting with the ground state and the first excited state.
References
==========

[99]{}
C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80 5243(1998)
C. M. Bender, Rep. Prog. Phys. 70 947 (2007).
P. Dorey, C. Dunning, and R. Tateo, J. Phys. A: Math. Theor. 40 R205(2007)
A. Mostafazadeh, Pseudo-Hermitian Quantum Mechanics, arXiv:0810.5643.
M. Znojil, SIGMA, Vol. 5 (2009), 001 (arXiv:0901.0700).
A. Mostafazadeh, J. Math. Phys. 43 (2002) 205 ; J. Phys. A: Math. Gen. 36 7081 (2003).
H. F. Jones, J. Mateo,and R. J. Rivers, Phys. Rev. D 74, 125022 (2006) .
H. F. Jones and J. Mateo, Phys. Rev. D 73 085002 (2006).
C. M. Bender, D. C. Brody, J.-H. Chen, H. F. Jones, K. A.Milton, and M. C. Ogilvie, Phys. Rev. D 74 025016 (2006) .
V. Buslaev and V. Grecchi, J. Phys. A 26 5541 (1993)
P. E. G. Assis and A. Fring, J. Phys. A: Math. Theor. 41 244001(2008)
A. Nanayakkara and T. Mathanaranjan Phys. Rev. A 86 022106(2012)
A. Nanayakkara, Phys. Lett. A 289 39 (2001)
A. Nanayakkara and I. Dassanayake, Phys Lett. A 294, 158 (2002)
R. A. Leacock and M. J. Padgett, Phys. Rev. Lett. 50, 3 (1983)
R. A. Leacock and M. J. Padgett, Phys. Rev. D 28, 2491 (1983)
A. Nanayakkara and V. Bandara, Can. J. Phys. 80, 959 (2002)
A. Nanayakkara, Can. J. Phys. 85, 1473 (2007)
E. Delabaere and F. Pham, Phys. Lett. A 250, 29 (1998)
P. Dorey, C. Dunning, and R. Tateo, J. Phys. A 34, L391 (2001).
C. M. Bender, M. Berry, P. N. Meisinger, V. M Savage and M. Simsek J. Phys. A: Math. Gen. 34 L31 (2001)
M. Znojil, F. Cannata, B. Bagchi and R. Roychoudhury, Phys. Lett. B483 284 (2000)
---
abstract: |
As an explosion develops in the collapsed core of a massive star, neutrino emission drives convection in a hot bubble of radiation, nucleons, and pairs just outside a proto-neutron star. Shortly thereafter, neutrinos drive a wind-like outflow from the neutron star. In both the convective bubble and the early wind, weak interactions temporarily cause a proton excess (${{\ensuremath{Y_{\mathrm{e}}}}\xspace}\gtrsim 0.50$) to develop in the ejected matter. This situation lasts for at least the first second, and the approximately 0.05 - 0.1 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ that is ejected has an unusual composition that may be important for nucleosynthesis. Using tracer particles to follow the conditions in a two-dimensional model of a successful supernova explosion calculated by @jan03, we determine the composition of this material. Most of it is helium and $^{56}$Ni. The rest is relatively rare species produced by the decay of proton-rich isotopes unstable to positron emission. In the absence of pronounced charged-current neutrino capture, nuclear flow will be held up by long-lived waiting point nuclei in the vicinity of $^{64}{\rm Ge}$. The resulting abundance pattern can be modestly rich in a few interesting rare isotopes like $^{45}{\rm Sc}$, $^{49}{\rm
Ti}$, and $^{64}{\rm Zn}$. The present calculations imply yields that, when compared with the production of major species in the rest of the supernova, are about those needed to account for the solar abundance of $^{45}{\rm Sc}$ and $^{49}{\rm Ti}$. Since the synthesis will be nearly the same in stars of high and low metallicity, the primary production of these species may have discernible signatures in the abundances of low metallicity stars. We also discuss uncertainties in the nuclear physics and early supernova evolution to which abundances of interesting nuclei are sensitive.
author:
- 'J. Pruet'
- 'S. E. Woosley'
- 'R. Buras'
- 'H.-T. Janka'
- 'R.D. Hoffman'
title: 'Nucleosynthesis in the Hot Convective Bubble in Core-Collapse Supernovae'
---
INTRODUCTION
============
When the iron core of a massive star collapses to a neutron star, a hot proto-neutron star is formed which radiates away its final binding energy as neutrinos. Interaction of these neutrinos with the infalling matter has long been thought to be the mechanism responsible for exploding that part of the progenitor external to the neutron star and making a supernova (e.g., Janka 2001; Woosley, Heger, & Weaver 2002, and references therein). During the few tenths of a second when the explosion is developing, a convective bubble of photo-disintegrated matter (nucleons), radiation, and pairs lies above the neutron star but beneath an accretion shock. Neutrino interactions in this bubble power its expansion, drive convective overturn, and determine its composition. Since baryons exist in the bubble only as nucleons, the critical quantity for nucleosynthesis is the proton mass fraction (${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$). Initially, in part because of an excess of electron neutrinos over antineutrinos, ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}\gtrsim 0.5$ [@qia96]. As time passes, however, the fluxes of the different neutrino flavors and their spectra change so that ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ evolves and becomes considerably less than 0.5. This epoch, also known as the “neutrino-powered wind”, has been explored extensively as a possible site for the r-process [@qia96; @hof97; @woo94; @car97; @qia00; @tak94; @ots00; @sum00; @Tho01] as well as $^{64}$Zn and some light p-process nuclei [@hof96].
In this paper we consider nucleosynthesis during the earlier epoch when ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ is still greater than 0.5. This results in a novel situation in which the alpha-rich freeze out occurs in the presence of a non-trivial abundance of free protons. The resulting nuclear flows thus have characteristics of both the alpha-rich freeze out [@Woo73; @Woo92] and the rp-process [@Wal81]. Several proton-rich nuclei, e.g., $^{64}$Ge and $^{45}$Cr, are produced in such great abundance that, after ejection and decay, they contribute a significant fraction of the solar inventory of such species.
Supernova Model and Nuclear Physics Employed
============================================
Explosion Model for a 15$\,$M$_{\odot}$ Star
--------------------------------------------
The nucleosynthesis calculations in this paper are based on a simulation of the neutrino-driven explosion of a nonrotating 15${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ star (Model S15A of Woosley & Weaver 1995) by @jan03 (see also Janka et al. 2004). The post-bounce evolution of the model was followed in two dimensions (2D) with a polar coordinate grid of 400 (nonequidistant) radial zones and 32 lateral zones (2.7 degrees resolution), assuming azimuthal symmetry and using periodic conditions at the boundaries of the lateral wedge at $\pm 43.2^{\mathrm{o}}$ above and below the equatorial plane. Convection was seeded in this simulation by velocity perturbations of order $10^{-3}$, imposed randomly on the spherical post-bounce core.
The neutrino transport was described by solving the energy-dependent neutrino number, energy, and momentum equations in the radial direction in all angular bins of the grid, using closure relations from a model Boltzmann equation [@ram02]. Neutrino pressure gradients and neutrino advection in the lateral direction were taken into account (for details, see Buras et al. 2004). General relativistic effects were approximately included as described by @ram02.
Although convective activity develops in the neutrino-heating layer behind the supernova (SN) shock on a time scale of several tens of milliseconds after bounce, no explosions were obtained with the described setup until $\sim$250$\,$ms [@bur03], at which time the very CPU-intensive simulations usually had to be terminated. The explosion in the simulation discussed here was a consequence of omitting the velocity-dependent terms from the neutrino momentum equation. This manipulation increased the neutrino-energy density and thus the neutrino energy deposition in the heating region by $\sim$20–30% and was sufficient to convert a failed model into an exploding one (see also Janka et al. 2004, Buras et al. 2004). This sensitivity of the outcome of the simulation to only modest changes of the transport treatment demonstrates how close the convecting, 2D models of @bur03 with energy-dependent neutrino transport are to ultimate success.
The evolution from the onset of core collapse (at about $-175\,$ms) through core bounce and the convective phase to explosion is shown in terms of mass shell trajectories in Fig. \[massshells\]. The explosion sets in when the infalling interface between the Si layer and the oxygen-enriched Si layer reaches the shock at about 160$\,$ms post bounce. The corresponding steep drop of the density and mass accretion rate, associated with an entropy increase by a factor of $\sim\,$2, allows the shock to expand and convection to become more violent, thus establishing runaway conditions. The calculation was performed in 2D to follow the ejection of the convective shell until 470$\,$ms after bounce. While matter is channeled in narrow downflows towards the gain radius, where it is heated by neutrinos and some of it starts expanding again in high-entropy bubbles, its neutron-to-proton ratio is set by weak interactions with electron neutrinos and antineutrinos as well as electron and positron captures on free nucleons. The final value of ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ is a crucial parameter for the subsequent nucleosynthesis. The mass distribution of neutrino-heated and -processed ejecta from the convective bubble is plotted in Fig. \[massye\].
At 470$\,$ms after bounce the model was mapped to a 1D grid and the subsequent evolution was simulated until 1300$\,$ms after bounce. With accretion flows to the neutron star having ceased, this phase is characterized by an essentially spherically symmetric outflow of matter from the nascent neutron star, which is driven by neutrino-energy deposition outside the neutrinosphere [@woo92; @dun86]. This neutrino-powered wind is visible in Fig. \[massshells\] after $\sim$500$\,$ms. The fast wind collides with the dense shell of slower ejecta behind the shock and is decelerated again. The corresponding negative velocity gradient steepens to a reverse shock when the wind expansion becomes supersonic (Fig. \[massshells\]; Janka & Müller 1995). Characteristic parameters for some mass shells in this early wind phase are shown in Fig. \[wind\]. Six representative shells are sufficient, because the differences between the shells evolve slowly with time according to the slow variation of the conditions (neutron star radius, gravitational potential, neutrino luminosities and spectra) in the driving region of the wind near the neutron star surface. In Table \[tbl1w\] the masses associated with the different shells are listed.
At the end of the simulated evolution the model has accumulated an explosion energy of approximately $0.6\times 10^{51}\,$erg. The mass cut and thus initial baryonic mass of the neutron star is 1.41$\,$M$_{\odot}$. The model fulfills fundamental constraints for Type II SN nucleosynthesis [@hof96] because the ejected mass having ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}\lesssim 0.47$ is $\lesssim$10$^{-4}\,$M$_{\odot}$ (see Fig. \[massye\]) and thus the overproduction of N=50 (closed neutron shell) nuclei of previous explosion models does not occur. More than 83% of the ejected mass in the convective bubble and early wind phase (in total 0.03$\,$M$_{\odot}$ in this rather low-energetic explosion) have ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}> 0.5$. The ejection of mostly p-rich matter is in agreement with 1D general relativistic SN simulations with Boltzmann neutrino transport in which the explosion was launched by artificially enhancing the neutrino energy deposition in the gain layer [@thi03; @fro04]. The reason for the proton excess is the capture of electron neutrinos and positrons on neutrons, which is favored relative to the inverse reactions because of the mass difference between neutrons and protons and because electron degeneracy becomes negligible in the neutrino-heated ejecta [@fro04; @qia96].
Although the explosion in the considered SN model of @jan03 was obtained by a regression from the most accurate treatment of the neutrino transport, it not only demonstrates the proximity of such accurate models to explosions, but also provides a consistent description of the onset of the SN explosion due to the convectively supported neutrino-heating mechanism, and of the early SN evolution. The properties of the resulting explosion are very interesting, including the conditions for nucleosynthesis. The ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ values of the ejecta should be rather insensitive to the manipulation which enabled the explosion. On the one hand the expansion velocities of the high-entropy ejecta are still fairly low (less than a few $10^8\,$cm$\,$s$^{-1}$) when weak interactions freeze out, and on the other hand the omitted velocity-dependent effects affect neutrinos and antineutrinos in the same way.
Outflows in the Convective Bubble
---------------------------------
In order to calculate the nucleosynthesis it is necessary to have a starting composition and the temperature-density ($T-\rho$) history of the matter as it expands and is ejected from the supernova. Because the matter is initially in nuclear statistical equilibrium, the initial values of ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$, $T$, and $\rho$ determine the composition which is just protons with a mass fraction ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ and neutrons. We are most interested in the innermost few hundredths to one tenth of a solar mass to be ejected. This matter has an interesting history. It was initially part of the silicon shell of the star, but fell in when the core collapsed, passed through the SN shock and was photodisintegrated to nucleons. Neutrino heating then raised the entropy and energy of the matter causing it to convect. Eventually some portion of this matter gained enough energy to expand and escape from the neutron star, pushing ahead of it the rest of the star. As it cooled, the nucleons reassembled first into helium and then into heavy elements.
The temperature-density history of such matter is thus not given by the simple ansatz often employed in explosive nucleosynthesis — “adiabatic expansion on a hydrodynamic time scale”. In fact, owing to convection, the temperature history may not even be monotonic. Here we rely on tracer particles embedded in the so called “hot convective bubble” of the 15 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ SN model calculated by @jan03 (Fig. \[fig0\]). These tracer particles were not distributed uniformly in mass, but chosen to represent a range of ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ in the ejecta.
The proton-rich outflows of interest here begin at about 190 ms after core bounce (Fig. \[massshells\]). Entropies and electron fractions characteristic of a few different trajectories are given in Table \[tbl1\]. Each trajectory represents a different mass element in the convective bubble. As is seen, ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ for the different trajectories lies in the range from $0.5-0.546$, and the entropies per nucleon are modest, $s/k_b\sim 30-50$. Figure \[massye\] shows the ejected mass versus ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ during the convective phase of the SN explosion.
At the end of the 2D calculation of @jan03, the mass element in a typical trajectory had reached a radius of about $2000$ [[$\mathrm{km}$]{}]{}(corresponding to the time when the SN model was mapped from 2D to 1D and thus detailed information for the mass elements was lost). Temperatures at this radius were typically $T_9\equiv T/10^9\,{\rm
K} \approx 4$–5, which is still hot enough that nuclei have not yet completely re-assembled. To follow the nucleosynthesis until all nuclear reactions had frozen out it was necessary to extrapolate the trajectories to low temperature. In doing so, we assumed that the electron fraction and entropy were constant during the extrapolated portion of the trajectory. This should be valid because the number of neutrino captures suffered by nuclei beyond $\sim 2000$ [[$\mathrm{km}$]{}]{}is small.
We considered two approximations to the expansion which should bracket the actual behavior. The first assumes homologous expansion at a velocity given by the Janka et al. calculation between 10 billion and 4 billion K. This ignores any deceleration experienced as the hot bubble encounters the overlying star and is surely an underestimate of the actual cooling time (though perhaps realistic for the accretion-induced collapse of a bare white dwarf). In particular, we estimated the homologous expansion time scale for each trajectory as $\tau_{\rm hom}=
(t_{\mathrm{f}}-t_{\mathrm{i}})/\ln(\rho_{\mathrm{i}}/\rho_{\mathrm{f}})$ where the subscript $\mathrm{i}$ denotes the value of a quantity when $T_9=10$ and the subscript $\mathrm{f}$ denotes the value of a quantity at the last time given for the tracer particle history ($t_{\mathrm{f}}\approx 436\,{\rm ms},\,
T_{9,{\mathrm{f}}}\approx
4$–5). Values of $\tau_{\rm hom}$ for different trajectories are given in Table \[tbl1\].
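(For orientation, with purely illustrative numbers, a density drop $\rho_{\mathrm{i}}/\rho_{\mathrm{f}}\approx 30$ over $t_{\mathrm{f}}-t_{\mathrm{i}}\approx 0.3\,$s, this definition gives $\tau_{\rm hom}\approx 0.3/\ln 30\approx 0.09\,$s, of the order of the values listed in Table \[tbl1\].)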
The second approximation was an attempt to realistically represent material catching up with the supernova shock. This extrapolation is based on smoothly merging the trajectories found in the calculations of Janka et al. with those calculated for the inner zone of the same 15 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ supernova by @Woo95. There are some differences. The earlier study was in one dimension and the shock was launched artificially using a piston. The kinetic energy at infinity of the Woosley-Weaver model was $1.2 \times 10^{51}$ erg; that of the Janka et al. model was $0.6 \times 10^{51}$ erg. Still, the calculations agreed roughly in the temperature and density at the time when the evaluation of tracer particles in the current 2D simulation was stopped. In order not to have discontinuities in the entropy at the time when the two calculations are matched, the density in the previous 1D calculation is changed slightly. This merging of the late time trajectories is expected to be reasonable because the shock evolution at several seconds post core bounce is determined mostly by the explosion energy.
We shall see in Sect. \[sec:nucresults\] that abundances of key nuclei are particularly sensitive to the time it takes the flow to cool from $2\cdot 10^9\,$K to $1\cdot 10^9\,$K. The homologous expansion approximation gives this time as about 100–200 ms, while the Kepler-based estimate gives this time as about 1 sec. Both estimates are rough and should be viewed as representing lower and upper bounds to the time scale.
Figure \[figrho\] shows the evolution of density in a representative trajectory for each of the two approximations to the flow at large radii. The temperature history in these trajectories is shown in Fig. \[figtemp\]. Note the irregular and non-monotonic evolution of the thermodynamic quantities at early times.
Outflows in the Early Wind
--------------------------
While the shock sweeps through and expels the stellar mantle, matter is still being continuously ablated from the surface of the cooling neutron star. Neutrino heating, principally via charged current neutrino capture, acts to maintain pressure-driven outflow in the tenuous atmosphere formed by the ablated material. This outflow has a higher entropy and is less irregular than the convective bubble.
The evolution of material at radii smaller than a few hundred km is set by characteristics of the cooling neutron star. It is at these small radii that the asymptotic entropy $s$ and electron fraction ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ are set. At early times the neutron star has yet to radiate away the bulk of its gravitational energy and so has a relatively large radius. Material escaping the star during this period only needs to gain a little energy through heating to escape the still shallow gravitational potential. Consequently, the entropy of the asymptotic outflow is about a factor of two smaller than the entropy of winds leaving the neutron star $\sim$10 seconds post core-bounce. This can be seen from the analytic estimate provided by [@qia96] $$s\approx 235 (L_{{\bar \nu}_e,51} \epsilon_{{\bar \nu}_e,{\rm
MeV}}^2)^{-1/6} \left(\frac{10^6\,{{\ensuremath{\mathrm{cm}}}\xspace}}{R}\right)^{2/3}.$$ Here $L_{{\bar \nu}_e,51}=L_{\bar{\nu}_e}/10^{51}\,{\rm erg\,s^{-1}}$, $\epsilon_{{\bar \nu}_e,{\rm MeV}}$ is approximately the mean energy of the electron antineutrinos in MeV, and $R$ is the neutron star radius. A lower entropy implies a higher density and therefore faster particle capture rates at a given temperature. For proton-rich outflows this typically results in synthesis of heavier elements.
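(As an illustration, for assumed values $L_{{\bar \nu}_e,51}=10$, $\epsilon_{{\bar \nu}_e,{\rm MeV}}=15$, and $R=2\times 10^6\,$cm, this estimate gives $s\approx 235\times(2250)^{-1/6}\times 0.5^{2/3}\approx 41\,k_b$ per nucleon, while doubling $R$ lowers $s$ to about 26; these numbers are illustrative only, but they show the strong sensitivity of the wind entropy to the neutron star radius at early times.)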
The electron fraction in the outflow is set by a competition between different lepton capture processes on free nucleons: $$\begin{aligned}
\nu_e+ {\rm n} & \longleftrightarrow & {\rm p} + e^{-} \, ,\\
e^+ + {\rm n} & \longleftrightarrow & {\rm p} + \bar{\nu}_e \, .\end{aligned}$$ Because the neutron star is still deleptonizing at early times, the $\nu_e$ and $\bar{\nu}_e$ spectra can be quite similar. Also, once heating raises the entropy of material leaving the neutron star, the number densities and spectra of electrons and positrons within the material become similar. Under these circumstances the 1.29 MeV threshold for ${\rm p\rightarrow n}$ results in $\bar{\nu}_e/e^-$ capture rates which are slower than the inverse $\nu_e/e^+$ capture rates. Weak processes then drive the outflow proton rich. The electron fraction in the wind is mostly set by the competition between $\nu_e$ and $\bar{\nu}_e$ capture (because $e^\pm$ captures freeze out when the density and temperature in the outflow become low, whereas high-energy neutrinos streaming out from the neutrinosphere still continue to react with nucleons). When the composition comes to equilibrium with the neutrino fluxes, $${{\ensuremath{Y_{\mathrm{e}}}}\xspace}\,\approx\, \frac{\lambda_{\nu_e n}}{\lambda_{\nu_e n}
+ \lambda_{\bar{\nu}_e p}}\ .$$ Here $\lambda_\nu$ represents the electron neutrino or antineutrino capture rate on neutrons or protons. As noted above, the $\nu_e$ and $\bar{\nu}_e$ spectra can be quite similar at early times while the star is still deleptonizing. The 1.29$\,$MeV threshold for $\bar{\nu}_e$ capture then leads to $\lambda_{\nu_e n} > \lambda_{\bar{\nu}_e p}$, and proton-richness is established in the outflow. Finally, the neutrino reactions also cease because of the $1/r^2$ dilution of the neutrino density with growing distance $r$ from the neutron star.
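To make this competition explicit, a minimal numerical sketch (in Python, assuming a rate scaling of the type used by @qia96, $\lambda \propto L_\nu(\epsilon_\nu \pm 2\Delta)$ with $\Delta=1.293$ MeV the neutron-proton mass difference and $\epsilon_\nu$ a spectrum-weighted mean energy; the chosen numbers are illustrative) is:

``` python
DELTA = 1.293  # neutron-proton mass difference in MeV

def ye_equilibrium(L_nue, eps_nue, L_nuebar, eps_nuebar):
    """Equilibrium electron fraction from competing nu_e + n and nubar_e + p captures.

    Assumes the approximate scaling lambda ~ L_nu * (eps_nu +/- 2*DELTA),
    with eps_nu a spectrum-weighted mean energy in MeV.
    """
    lam_nue_n = L_nue * (eps_nue + 2.0 * DELTA)            # nu_e + n -> p + e-
    lam_nuebar_p = L_nuebar * (eps_nuebar - 2.0 * DELTA)   # nubar_e + p -> n + e+
    return lam_nue_n / (lam_nue_n + lam_nuebar_p)

# Illustrative (assumed) early-time values: similar luminosities and spectra
print(ye_equilibrium(L_nue=3.0, eps_nue=12.0, L_nuebar=3.0, eps_nuebar=14.0))  # ~0.56
```

With similar luminosities and only mildly different mean energies, as is the case at early times, this gives ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ slightly above 0.5, consistent with the proton-rich wind values in Table \[tbl1w\].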
Table \[tbl1w\] gives characteristics of the early wind found in the simulations of @jan03. As expected, the wind is proton rich at early times. Eventually, the hardening of the $\bar{\nu}_e$ spectrum relative to the $\nu_e$ spectrum will cause ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ to fall below 1/2. This turnover had not yet occurred when the hydrodynamic simulation was stopped. It should take place at a later time when the wind properties (mass loss rate, entropy) have changed such that the nucleosynthesis constraints for the amount of ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}< 0.47$ ejecta [@hof96] will not be violated. At 1.3$\,$s after bounce, the mass loss rate of about $3\times 10^{-3}\,$M$_{\odot}\,$s$^{-1}$ and the wind entropy of $\sim\,$80$\,k_b$ per nucleon in the Janka et al. model would likely still cause an overproduction of N=50 nuclei if ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ went significantly below 0.5.
The temperature in the wind at the end of the traced shell expansion is $T_9\approx 2$ (Fig. \[wind\]). Approximations for the wind evolution at lower temperatures are the same as those discussed above.
Nuclear Physics Employed
------------------------
The reaction network used for the present calculations is given in Table \[reactable\]. Estimates of reaction rates and nuclear properties used in our calculations are the same as those used in the study of X-ray bursts by [@Woo04]. Briefly, reaction rates were taken from experiment whenever possible, from detailed shell-model based calculations [@Fis01] for a few key $({\rm p,\gamma})$ rates, and from Hauser-Feshbach calculations [@Rau00] otherwise. Proton separation energies, which are crucial determinants of nucleosynthesis in flows with ${{{\ensuremath{Y_{\mathrm{e}}}}\xspace}>1/2}$, were taken from a combination of experiment [@Aud95], the Hartree-Fock Coulomb displacement calculations of [@bro02] for many important nuclei with Z$>$N, and theoretical estimates [@Mol95]. Choosing the best nuclear binding energies is somewhat involved and we refer the reader to the discussion in [@bro02] and Fig. 1 of [@Woo04]. Ground-state weak lifetimes are experimentally well determined for the nuclei important in this paper. At temperatures larger than $10^9$ K the influence of thermal effects on weak decays was estimated from the compilation of [@Ful82] where available. Table \[fultable\] gives the nuclei for which the Fuller et al. rates were used. A test calculation in which we switched thermal rates off and used only experimentally determined ground-state rates showed little effect on the important abundances. Section \[sec:nuclear\] contains a discussion of the influence of nuclear uncertainties on yields of some interesting nuclei.
Nucleosynthesis Results
=======================
[[\[sec:nucresults\]]{}]{} Table \[tbl1\] gives the major calculated production factors for a number of trajectories in the convective bubble and for our two different estimates of the material expansion rate at low temperatures. Table \[tbl1w\] gives production factors for nuclei synthesized in different mass elements comprising the early wind. Here the production factor for nuclide $i$ is defined as $$P_i={M \over M^{\mathrm{ej}}}{X_i \over X_{\odot,i}},$$ where $M$ is the total mass in a given trajectory, $M^{\mathrm{ej}}=13.5\,{{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ is the total mass ejected in the SN explosion, $X_i$ is the mass fraction of nuclide $i$ in the trajectory, and $X_{\odot,i}$ is the mass fraction of nuclide $i$ in the sun. To aid in interpreting the tables we show in Fig. \[fig2\] plots of $X_i/X_{\odot,i}$ characterizing the nucleosynthesis in two representative hot-bubble trajectories.
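(For orientation, a single trajectory carrying $M=10^{-3}\,{{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ with $X_i/X_{\odot,i}=2\times 10^{3}$, typical of the bubble values quoted below, would contribute $P_i\approx (10^{-3}/13.5)\times 2\times 10^{3}\approx 0.15$; these numbers are purely illustrative.)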
Production factors integrated over the different bubble trajectories are given in Table \[tbl2\]. If one assumes rapid expansion, production factors of $^{45}$Sc, $^{63}$Cu, $^{49}$Ti, and $^{59}$Co are all above 1.5. For the slower expansion time scale below $4
\times 10^9$ K, which we regard as more realistic, a different set of nuclei are produced, especially $^{49}$Ti and $^{64}$Zn. Depending upon mass and metallicity, $^{49}$Ti may already be well produced in other regions of the same supernova [@Woo95; @Rau02], but $^{64}$Zn is not. The synthesis here thus represents a new way of making $^{64}$Zn and this same process will function as well in zero and low metallicity stars as in supernovae today. However, $^{64}$Zn was already known to be produced, probably in greater quantities, by the neutrino-powered wind [@hof96].
Production factors integrated over the different wind trajectories are given in Table \[tbl2w\]. The somewhat high-entropy wind synthesizes $^{45}$Sc, $^{49}$Ti and $^{46}$Ti more efficiently than the bubble. Typical values of $X/X_{\sun}$ for these three nuclei are approximately $10^4$ in the wind and approximately $2\cdot 10^3$ in the bubble. In the present calculations the integrated production factor for Sc in the wind is between about 1.5 and 4.7 depending on the time scale describing the wind expansion at $T_9\lesssim 2$.
For comparison, in the 15 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ supernova of @Rau02, this production factor was about 7 for many major species, including oxygen. This is close to the combined wind/bubble production factors of Sc and $^{46,49}$Ti in the present calculations. The other most abundant productions in Tables \[tbl2\] and \[tbl2w\] fall short of this - but not by much. The bulk production factors in a 25 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ supernova are about twice those in a 15, but our explosion model is not easily extrapolable to stars of other masses. [*If*]{} 25 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ stars explode with a similar kinetic energy it will probably take a more powerful central engine to overcome their greater binding energy and accretion rate during the explosion. Probably this requires more mass in the convective bubble. In fact, the energy of the 15 ${{\ensuremath{\mathrm{M}_{\odot}}}\xspace}$ supernova used here, $0.6 \times 10^{51}$ erg, would be regarded by many as low. It may be that the mass here should be doubled too.
It is important to note that the species listed in Tables \[tbl2\] and \[tbl2w\] are not made as themselves but as proton-rich radioactive progenitors. Major progenitors of important product nuclei are given in the far right column of Table \[tbl2\]. Typical progenitors of important nuclei are 3–4 charge units from stability. This can be understood through consideration of the Saha equation. Before charged particle reactions freeze out at $T_9\approx
1.5-2$, nuclear abundances along an isotonic chain are well approximated as being in local statistical equilibrium: $$\label{localsaha}
\frac{X({\rm Z+1,N})}{X({\rm Z,N})} \approx 10^{-5} \exp(S_p/T) \frac{\rho_5}{T_9^{3/2}}\frac{G_{\rm Z+1,N}}{G_{\rm Z,N}}.$$ Here $S_p$ is the proton separation energy of the Z+1,N nuclide, $G$ represents the partition function, $\rho_5=\rho/10^5{{{\ensuremath{{{\ensuremath{\mathrm{g}}}\xspace}\,{{\ensuremath{\mathrm{cm}}}\xspace}^{-3}}}\xspace}}$, $T_9=T/10^9{\rm K}$, and A=Z+N. Equation (\[localsaha\]) predicts that the abundances of nuclei with $S_p\lesssim 500$ keV are very small.
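(For example, with $S_p=0.5$ MeV, $T_9=2$ (i.e. $kT\approx 0.17$ MeV), $\rho_5=1$, and partition functions of order unity, Eq. (\[localsaha\]) gives $X({\rm Z+1,N})/X({\rm Z,N})\approx 10^{-5}\,e^{2.9}/2^{3/2}\approx 6\times 10^{-5}$; the numbers are illustrative, but they show how sharply the local equilibrium abundances are suppressed for small proton separation energies.)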
Perhaps the most notable feature of the proton-rich trajectories is their inefficiency at synthesizing elements with A$\,\gtrsim\,$60. Neutron-rich outflows, by contrast, readily synthesize nuclides with mass A$\,\sim\,$100. This is shown in Table \[tbl3\] which gives production factors characterizing nucleosynthesis in somewhat neutron-rich winds occurring in the SN. The Kepler-based extrapolation of the first trajectory in Table \[tbl1\] is used for these ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}<0.5$ calculations. Estimates of the mass in each ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ bin for the calculations of [@jan03] are shown in Fig. \[massye\].
Termination of the nuclear flow at low mass number in proton-rich outflows has a simple explanation. Unlike nuclei at the neutron drip lines, proton-rich waiting point nuclei have lifetimes much longer than the time scales characterizing expansion of neutrino-driven outflows. In addition, proton capture from waiting point nuclei to more rapidly decaying nuclei is inefficient. To illustrate the difficulty with rapidly assembling heavier proton-rich nuclei, consider nuclear flow through $^{64}{\rm Ge}$. This waiting point nucleus has a lifetime of approximately 64 sec. The ratio of the amount of flow leaving $^{65}{\rm As}$ to that leaving $^{64}{\rm Ge}$ is found from application of the Saha equation above, $$\frac{\lambda_+(^{65}{\rm As})Y(^{65}{\rm As})}{\lambda_+(^{64}{\rm
Ge})Y(^{64}{\rm Ge})} \approx 10^{-2} \frac{\rho_5}{T_9^{3/2}} \exp(S_p/T).$$ Here $\lambda_+$ represents the $\beta^+$ decay rate and $S_p$ is the proton separation energy of $^{65}{\rm As}$. For $^{65}{\rm As}$, $\lambda_+\approx
\ln(2)/0.1\,{\rm s}$ and for $^{64}{\rm Ge}$, $\lambda_+\approx
\ln(2)/64\,{\rm s}$. By definition, proton capture daughters of waiting point nuclei are characterized by small proton separation energies. The proton separation energy of $^{65}{\rm As}$ still has large uncertainties, though it is known to be less than about 200 keV [@bro02]. Positron decay out of the proton capture daughter of the waiting point nucleus is negligible for such small proton separation energies. These considerations do not hold for X-ray bursts, where time scales characterizing nuclear burning can be tens or hundreds of seconds.
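(In the supernova outflow, taking for illustration $S_p(^{65}{\rm As})=0.2$ MeV, $T_9=2$, and $\rho_5=1$, the ratio above is $\approx 10^{-2}\,e^{1.2}/2^{3/2}\approx 10^{-2}$, so only of order one percent of the flow leaks through $^{65}{\rm As}$ before charged-particle reactions freeze out; the flow therefore remains held up at $^{64}{\rm Ge}$ on the short expansion time scales considered here.)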
The difficulty with rapid assembly of heavy proton-rich nuclei is also evident in the final free proton and alpha particle mass fractions. The trend of $X_p$ and $X_{\alpha}$ with ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ is shown in Fig. \[xpandxalpha\] for the different Kepler extrapolated bubble trajectories. Also shown in this figure is the proton mass fraction calculated under the assumption that all available nucleons are bound into alpha particles. This is an approximate measure of the mass fraction of available protons. Note that the mass fraction of protons in the two calculations is nearly identical. This is because assembly of proton-rich nuclei occurs on a very slow time scale set by a few $\beta^+$ rates.
Because nucleosynthesis past A$\,\sim\,$60 is inefficient these proton-rich flows do not produce N=50 closed shell nuclei. Historically, overproduction of N=50 nuclei has plagued calculations of supernova nucleosynthesis [@How93; @Wit93; @woo94]. The influence of weak interactions in driving some of the outflow to ${{{\ensuremath{Y_{\mathrm{e}}}}\xspace}>1/2}$ ameliorates this problem.
Details of the Nucleosynthesis and Critical Nuclear Physics
-----------------------------------------------------------
[[\[sec:nuclear\]]{}]{}
To aid in understanding the general character of these proton-rich flows we show in Fig. \[nfig\] the evolution of nuclear mass fractions as a function of the neutron number. At $T_9\approx 4$, $\alpha$ captures have led to efficient synthesis of tightly bound species with N=28 and N=30. As temperature decreases $\alpha$ capture becomes less efficient and $\beta^+$ decay drives flow to higher neutron number. From Table \[tbl2\] it is seen that the nuclei we are most interested in arise from decay of nuclei with N=21, 24, 31 and 32. From Fig. \[nfig\] it is clear that synthesis of nuclei with these neutron numbers represents a minor perturbation on the nucleosynthesis as a whole.
Tables \[tbl2\] and \[tbl2w\] show that $^{45}{\rm Sc}$, the only stable scandium isotope, has a combined wind/bubble production factor of about 6 if freeze-out is rapid and a combined production factor about 50$\%$ smaller in the slower Kepler extrapolated trajectories. Efficient synthesis of scandium in proton rich outflows associated with Gamma Ray Bursts has been noted previously by [@pru04], while [@mae03] found that scandium may also be synthesized explosively in shocks exploding anomalously energetic supernovae. Indeed, values presented here for ${{{\ensuremath{Y_{\mathrm{e}}}}\xspace}}$, $s/k_b$, and $\tau$ in the early SN wind are very close to estimates of these quantities in winds leaving the inner regions of accretion disks powering collapsars [@mac99; @pru204]. The origin of Sc is currently uncertain and it may be quite abundant in low metallicity stars [@Cay04] suggesting a primary origin. In the present calculations the yields of this element are close to those needed to explain the current inventory of Sc.
To understand how synthesis of scandium depends on the outflow parameters and nuclear physics, note that Sc arises mostly from $\beta^+$ decay originating with the quasi waiting-point nucleus $^{45}{\rm Cr}$. In turn, N=21 isotones of $^{45}{\rm Cr}$ originate from $\beta^+$ decay out of isotones of $^{40}{\rm Ca}$. The doubly magic nucleus $^{40}{\rm Ca}$ is efficiently synthesized through a sequence of alpha captures. At temperatures larger than about $2\cdot
10^9$ K statistical equilibrium keeps almost all N=20 nuclei locked into $^{40}{\rm Ca}$. This nucleus is $\beta$ stable and has a first excited state at 3.3 MeV, too high to be thermally populated. Flow out of N=20 can only proceed when the temperature drops to approximately 1.5 billion degrees and statistical equilibrium favors population of $^{42}{\rm Ti}$ over $^{40}{\rm Ca}$. The proton capture daughter of $^{40}{\rm Ca}$ ($^{41}{\rm Sc}$) has a proton separation energy of only 1.7 MeV and is not appreciably abundant. Decay out of $^{42}{\rm
Ti}$ is then responsible for allowing flow to N=21. $^{42}{\rm Ti}$ has a well determined $\beta^+$ half life of 199$\pm$6 ms, a proton separation energy which is uncertain only by about 5 keV, and a first excited state too high in excitation energy to play a role in allowing flow to N=21. In short, nuclear properties are well determined for important N=20 nuclei. Once nuclei make their way to N=21 at $T_9\approx 1.5$, their abundances are divided between the tightly bound $^{45}{\rm Cr}$ and $^{43}{\rm Ti}$. Here uncertainties in nuclear physics may be more important. For $^{45}{\rm Cr}$ the proton separation energy is uncertain to about 100 keV and the spin of the ground state is uncertain. To the extent that the relative abundances are set by the Saha equation, these uncertainties could imply an uncertainty of a factor of several in the relative abundances of $^{45}{\rm Cr}$ and $^{45}{\rm Ti}$ at $T_9\approx 1.5$. In turn, this implies appreciable uncertainty in the estimated Sc yield.
Whether or not Sc is efficiently synthesized following decay of $^{45}$Cr depends on the expansion time scale at low temperatures. This is because the $\beta^+$ daughter of $^{45}$Cr is $^{45}$V, which has a relatively small proton separation energy of 1.6 MeV. At low temperatures the Saha equation favors proton capture to $^{46}$Cr. If the expansion is slow enough that most $^{45}$Cr decays at temperatures where $^{45}{\rm V(p,\gamma)^{46}Cr}$ is still rapid, then flow out of the N=22 nuclei occurs via $\beta^+$ decay out of $^{46}$Cr. In this case $^{46}$Ti is synthesized rather than $^{45}$Sc.
$^{49}{\rm Ti}$ originates from the N=24 nuclide $^{49}{\rm Mn}$. At $T_9\approx 1.4$ nuclei with N=24 are divided roughly equally between $^{49}{\rm Mn}$ and $^{50}{\rm Fe}$. Uncertainties in the proton separation energies and lifetimes of these nuclei are small. $^{49}{\rm Mn}$ does have a low lying excited state at 382$\,$keV which is thermally populated at low temperatures. However, $^{49}{\rm
Mn}$ is a nucleus with Z=N+1 that is expected to have ground and excited state decay rates that are dominated by super-allowed Fermi transitions which are almost independent of excitation energy.
Lastly, we turn our attention to flow out of the N=32 isotones which are progenitors of $^{60}{\rm Zn}$ and $^{63}{\rm Cu}$. Proton-rich nucleosynthesis near $^{64}{\rm Ge}$ has been extensively discussed in the X-Ray Burst literature (e.g. Brown et al. 2002). Uncertainties in basic nuclear properties important for synthesis of $^{60}{\rm Zn}$ are small. This is not true for $^{63}{\rm Cu}$, which is formed directly by the decay of $^{63}{\rm Ga}$. $^{63}{\rm Ga}$ has a $J^\pi=(5/2)^-
$ excited state at 75.4 keV, which dominates the partition function at $T_9\approx 1.5$ since the ground state has $J=3/2$. The weak lifetime of this excited state is experimentally undetermined (as are the weak lifetimes of all short lived excited states) and could easily be a factor of five longer or shorter than the quite long ground state lifetime of $\sim 32$ sec. This translates into an uncertainty of a factor of several in the inferred $^{63}{\rm Cu}$ yield.
The influence of possible uncertainties in the time scale, entropy, and electron fraction characterizing the different trajectories can be seen from the results in Table \[tbl1\]. Modest changes in the outflow parameters result in factors of $\sim 2$ changes in yields of the most important isotopes. This is evident by the quite different efficiencies with which the lower entropy bubble and higher entropy wind synthesize $^{45}$Sc and $^{49}$Ti.
So far we have not considered the influence of neutrino interactions, except implicitly through the setting of ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$. If matter remains close to the neutron star, neutrino capture and neutrino-induced spallation may compete with positron decay, even on a dynamic time scale. However, neutrino capture alone cannot act to accelerate nuclear flow past waiting point nuclei and allow synthesis of the heavier proton-rich elements. The reason is that the neutrino capture rates on the waiting point nuclei are about the same as the rate of neutrino capture on a free proton [@woo90]. Every capture of a neutrino by a heavy nucleus is accompanied by a capture onto a free proton. The electron fraction is then rapidly driven to $1/2$ since the neutron produced in this way immediately goes into the formation of an $\alpha$-particle. This is analogous to the “$\alpha$-effect” discussed in the context of late-time winds [@ful95; @mey98].
Conclusions and Implications
============================
The important news is that, unlike simulations of a few years ago, there is no poisonous overproduction of neutron-rich nuclei in the vicinity of the N = 50 closed shell [@woo94]. When followed in more detail (i.e. mainly with a better, spectral treatment of the neutrino transport), weak interactions in the hot convective bubble drive ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ back to 0.5 and above so that most of the mass comes out as $^{56}$Ni and $^4$He. Since $^{56}$Fe and helium are abundant in nature, this poses no problem.
Beyond this it is also interesting that the proton-rich environment of the hot convective bubble and early neutrino-driven wind can synthesize interesting amounts of some comparatively rare intermediate mass elements. If the total mass of SN ejecta with ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}\gtrsim 0.5$ is larger than a few hundredths of a solar mass, these proton-rich outflows may be responsible for a significant fraction of the solar abundances of $^{45}{\rm Sc}$, $^{64}{\rm Zn}$, and some Ti isotopes, especially $^{49}$Ti.
However, these ejecta do not appear to be implicated in the synthesis of elements that do not have other known astrophysical production sites. For example, ${\rm Sc}$ can be produced explosively, while $^{64}{\rm Zn}$ can be synthesized in a slightly neutron-rich wind. It seems unlikely that consideration of nucleosynthesis in proton-rich outflows will lead to meaningful constraints on conditions during the early SN.
Since the conditions in the hot convective bubble resemble in some ways those of Type I X-ray bursts (high temperature and proton mass fraction), we initially hoped that the nuclear flows would go higher, perhaps producing the $p$-process isotopes of Mo and Ru. Such species have proven difficult to produce elsewhere and the $rp$-process in X-ray bursts can go up as high as tellurium [@Sch01]. Unfortunately the density is much less here than in the neutron star and the time scale shorter. Proton-induced flows are weaker and the leakage through critical waiting point nuclei is smaller. Using the present nuclear physics, significant production above A$\,$=$\,$64 is unlikely. However, heavier nuclei can be produced in ejecta that are right next to these zones but with values of ${{\ensuremath{Y_{\mathrm{e}}}}\xspace}$ considerably less than 0.50 [@hof96].
This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore Laboratory under contract W-7405-ENG-48. HTJ enjoyed discussions with Matthias Liebendörfer. RB and HTJ thank A. Marek for assistance with data evaluation and visualization, and acknowledge support by the Sonderforschungsbereich 375 “Astro-Particle Physics” of the Deutsche Forschungsgemeinschaft. The supernova simulations were done at the Rechenzentrum Garching and at the John von Neumann Institute for Computing (NIC) in Jülich.
Audi, G. & Wapstra, A.H. 1995, Nucl. Phys. A, 595, 409
Brown, B.A., Clement, R.R., Schatz, H., Volya, A. & Richter, W.A. 2002, Phys. Rev. C, 65, 5802
Buras, R., Rampp, M., Janka, H.-T., & Kifonidis, K. 2003, PRL, 90, 241101
Buras, R., Rampp, M., Janka, H.-T., Kifonidis, K., Takahashi, K., & Horowitz, C.J. 2004, in preparation
Cardall, C. Y. & Fuller, G. M. 1997, ApJL, 486, 111
Cayrel, R., et al. 2004, , 416, 1117
Duncan, R.C., Shapiro, S.L., & Wasserman, I. 1986, , 309, 141
Fisker, J.L., Barnard, V., Gorres, J., Langanke, K., Mártinez-Pinedo, G. & Wiescher, M.C. 2001, At. Data Nucl. Data Tables, 79, 241
Fröhlich, C., et al. 2004, Nucl. Phys. A, submitted (astro-ph/0408067)
Fuller, G.M., Fowler, W.A., & Newman, M.J. 1982, , 252, 715
Fuller, G.M. & Meyer, B.S. 1995, , 453, 792
Hoffman, R. D., Woosley, S. E., Fuller, G. M., & Meyer, B. S. 1996, ApJ, 460, 478
Hoffman, R. D., Woosley, S. E., & Qian, Y.-Z. 1997, , 482, 951
Howard, W.M., Goriely, S., Rayet, M., & Arnould, M. 1993, , 417, 713
Janka, H.-T. 2001, , 368, 527
Janka, H.-T. & Müller, E. 1995, , 448, L109
Janka, H.-T., Buras, R., & Rampp, M. 2003, Nucl. Phys. A, 718, 269
Janka, H.-T., Buras, R., Kifonidis, K., Rampp, M., & Plewa, T. 2004, in Stellar Collapse, ed. C.L. Fryer, Kluwer, Dordrecht, p. 65
MacFadyen, A.I. & Woosley, S.E. 1999, , 524, 262
Maeda, K. & Nomoto, K. 2003, , 598, 1163
Meyer, B.S., McLaughlin, G.C., & Fuller, G.M. 1998, Phys. Rev. C, 58, 3696
Möller, P., Nix, J.R., Myers, W.D., & Swiatecki, W.J. 1995, At. Data Nucl. Data Tables, 59, 185
Otsuki, K., Tagoshi, H., Kajino, T., & Wanajo, S.-Y. 2000, ApJ, 533, 424
Pruet, J., Surman, R. & McLaughlin, G.C. 2004, , 602, L101
Pruet, J., Thompson, T.A., & Hoffman, R.D. 2004, 606, 1006
Qian, Y.-Z. & Wasserburg, G. J. 2000, Phys. Reps., 333, 77
Qian, Y.-Z. & Woosley, S.E. 1996, , 471, 331
Rampp, M. & Janka, H.-T. 2002, A&A, 396, 361
Rauscher, T. & Thielemann, F.-K. 2000, At. Data Nucl. Data Tables, 75, 1
Rauscher, T., Heger, A., Hoffman, R. D., & Woosley, S. E. 2002, , 576, 323
Schatz, H., et al. 2001, Physical Review Letters, 86, 3471
Sumiyoshi, K., Suzuki, H., Otsuki, K., Teresawa, M., & Yamada, S. 2000, PASJ, 52, 601
Takahashi, K., Witti, J., & Janka, H.-T. 1994, A&A, 286, 857
Thielemann, F.-K., et al. 2003, Nucl. Phys. A, 718, 139
Thompson, T. A., Burrows, A., & Meyer, B. S. 2001, , 562, 887
Wallace, R. K. & Woosley, S. E. 1981, , 45, 389
Witti, J., Janka, H.-T., & Takahashi, K. 1994, A&A, 286, 841
Woosley, S.E. & Baron, E. 1992, , 391, 228
Woosley, S.E., Arnett, W.D., & Clayton, D.D. 1973, , 26, 231
Woosley, S.E., Hartmann, D.H., Hoffman, R.D. & Haxton, W.C. 1990, , 356, 272
Woosley, S.E. & Hoffman, R.D. 1992, , 395, 202
Woosley, S.E. & Weaver, T.A. 1995, , 101, 181
Woosley, S.E., Heger, A., & Weaver, T.A. 2002, Reviews of Modern Physics, 74, 1015
Woosley, S.E., Wilson, J.R., Mathews G.J., Hoffman, R.D., & Meyer, B.S. 1994, ApJ, 433, 209
Woosley, S.E., et al. 2004, , 151, 75
[cccccccccc]{} H & 1 & 2 & He & 1 & 4 & Li & 3 & 6\
Be & 3 & 8 & B & 3 & 9 & C & 3 & 12\
N & 4 & 14 & O & 5 & 14 & F & 5 & 17\
Ne & 6 & 21 & Na & 6 & 33 & Mg & 6 & 35\
Al & 7 & 38 & Si & 8 & 40 & P & 8 & 42\
S & 8 & 44 & Cl & 8 & 46 & Ar & 9 & 49\
K & 11 & 51 & Ca & 10 & 53 & Sc & 13 & 55\
Ti & 12 & 58 & V & 15 & 60 & Cr & 14 & 62\
Mn & 17 & 64 & Fe & 16 & 66 & Co & 19 & 69\
Ni & 18 & 71 & Cu & 21 & 73 & Zn & 21 & 75\
Ga & 24 & 77 & Ge & 23 & 80 & As & 26 & 82\
Se & 25 & 84 & Br & 28 & 86 & Kr & 27 & 88\
Rb & 31 & 91 & Sr & 30 & 93 & Y & 33 & 95\
Zr & 32 & 97 & Nb & 35 & 99 & Mo & 35 & 102\
Tc & 38 & 104 & Ru & 37 & 106 & Rh & 40 & 108\
Pd & 40 & 110 & Ag & 41 & 113 & Cd & 42 & 115\
In & 43 & 117 & Sn & 44 & 119 & Sb & 46 & 120\
Te & 47 & 121
[ccccccc]{} 1 & 0.500 & 18.4 & 0.086 & 9.25e-04 & $^{59}{\rm Co}$(0.33) & $^{64}{\rm Zn}$(0.17)\
& & & & & $^{64}{\rm Zn}$(0.12) & $^{59}{\rm Co}$(0.17)\
& & & & & $^{49}{\rm Ti}$(0.10) & $^{49}{\rm Ti}$(0.16)\
5 & 0.502 & 15.9 & 0.066 & 7.05e-04 & $^{59}{\rm Co}$(0.17) & $^{64}{\rm Zn}$(0.30)\
& & & & & $^{63}{\rm Cu}$(0.15) & $^{49}{\rm Ti}$(0.14)\
& & & & & $^{49}{\rm Ti}$(0.12) & $^{60}{\rm Ni}$(0.09)\
10 & 0.505 & 21.7 & 0.062 & 3.58e-04 & $^{59}{\rm Co}$(0.07) & $^{64}{\rm Zn}$(0.10)\
& & & & & $^{63}{\rm Cu}$(0.05) & $^{49}{\rm Ti}$(0.09)\
& & & & & $^{49}{\rm Ti}$(0.05) & $^{46}{\rm Ti}$(0.04)\
20 & 0.513 & 17.8 & 0.104 & 4.63e-04 & $^{45}{\rm Sc}$(0.14) & $^{49}{\rm Ti}$(0.19)\
& & & & & $^{46}{\rm Ti}$(0.06) & $^{64}{\rm Zn}$(0.07)\
& & & & & $^{42}{\rm Ca}$(0.05) & $^{60}{\rm Ni}$(0.05)\
30 & 0.521 & 26.2 & 0.047 & 2.67e-04 & $^{59}{\rm Co}$(0.03) & $^{49}{\rm Ti}$(0.08)\
& & & & & $^{45}{\rm Sc}$(0.02) & $^{64}{\rm Zn}$(0.03)\
& & & & & $^{63}{\rm Cu}$(0.02) & $^{60}{\rm Ni}$(0.02)\
35 & 0.524 & 26.9 & 0.062 & 2.28e-04 & $^{45}{\rm Sc}$(0.07) & $^{49}{\rm Ti}$(0.13)\
& & & & & $^{42}{\rm Ca}$(0.04) & $^{46}{\rm Ti}$(0.03)\
& & & & & $^{46}{\rm Ti}$(0.03) & $^{64}{\rm Zn}$(0.03)\
40 & 0.545 & 40.6 & 0.024 & 3.12e-04 & $^{42}{\rm Ca}$(0.04) & $^{49}{\rm Ti}$(0.25)\
& & & & & $^{45}{\rm Sc}$(0.04) & $^{46}{\rm Ti}$(0.06)\
& & & & & $^{46}{\rm Ti}$(0.03) & $^{45}{\rm Sc}$(0.04)
[ccccccc]{} 0.551 & 54.8 & 0.131 & 1.53e-03 & $^{45}{\rm Sc}$(1.73) & $^{49}{\rm Ti}$(2.02)\
& & & & $^{49}{\rm Ti}$(0.97) & $^{46}{\rm Ti}$(0.70)\
& & & & $^{46}{\rm Ti}$(0.87) & $^{45}{\rm Sc}$(0.36)\
0.558 & 58.0 & 0.127 & 6.40e-04 & $^{45}{\rm Sc}$(0.95) & $^{49}{\rm Ti}$(1.09)\
& & & & $^{49}{\rm Ti}$(0.52) & $^{46}{\rm Ti}$(0.38)\
& & & & $^{46}{\rm Ti}$(0.48) & $^{45}{\rm Sc}$(0.20)\
0.559 & 76.7 & 0.099 & 6.80e-04 & $^{45}{\rm Sc}$(0.60) & $^{49}{\rm Ti}$(1.07)\
& & & & $^{49}{\rm Ti}$(0.38) & $^{46}{\rm Ti}$(0.41)\
& & & & $^{46}{\rm Ti}$(0.31) & $^{45}{\rm Sc}$(0.22)\
0.560 & 71.0 & 0.112 & 4.80e-04 & $^{45}{\rm Sc}$(0.55) & $^{49}{\rm Ti}$(0.79)\
& & & & $^{49}{\rm Ti}$(0.31) & $^{46}{\rm Ti}$(0.29)\
& & & & $^{46}{\rm Ti}$(0.27) & $^{45}{\rm Sc}$(0.15)\
0.568 & 74.9 & 0.059 & 8.00e-04 & $^{45}{\rm Sc}$(0.55) & $^{49}{\rm Ti}$(1.25)\
& & & & $^{46}{\rm Ti}$(0.35) & $^{46}{\rm Ti}$(0.47)\
& & & & $^{49}{\rm Ti}$(0.35) & $^{45}{\rm Sc}$(0.25)\
0.570 & 76.9 & 0.034 & 1.04e-03 & $^{46}{\rm Ti}$(0.38) & $^{49}{\rm Ti}$(1.49)\
& & & & $^{45}{\rm Sc}$(0.35) & $^{46}{\rm Ti}$(0.57)\
& & & & $^{42}{\rm Ca}$(0.31) & $^{45}{\rm Sc}$(0.31)
[cccc]{}
$^{59}{\rm Co}$ & 2.81 & 0.37 & $^{59}{\rm Cu},^{59}{\rm Zn}$\
$^{49}{\rm Ti}$ & 2.00 & 6.53 & $^{49}{\rm Mn}$\
$^{63}{\rm Cu}$ & 1.91 & 0.28 & $^{63}{\rm Ga},^{63}{\rm Ge}$\
$^{45}{\rm Sc}$ & 1.65 & 1.33 & $^{45}{\rm Cr}$\
$^{64}{\rm Zn}$ & 1.28 & 3.61 & $^{64}{\rm Ge}$\
$^{46}{\rm Ti}$ & 1.22 & 1.97 & $^{46}{\rm Cr}$\
$^{60}{\rm Ni}$ & 1.10 & 1.81 & $^{60}{\rm Zn}$\
$^{42}{\rm Ca}$ & 1.04 & 0.46 & $^{42}{\rm Ti}$
[ccc]{} $^{45}{\rm Sc}$ & 4.74 & 1.50\
$^{49}{\rm Ti}$ & 2.83 & 7.70\
$^{46}{\rm Ti}$ & 2.66 & 2.81\
$^{42}{\rm Ca}$ & 2.16 & 0.46\
$^{51}{\rm V\ }$ & 1.09 & 0.90\
$^{50}{\rm Cr}$ & 0.56 & 0.09
[cccc]{} 0.470 & 6.40e-05 & $^{74}{\rm Se}$(6.59)\
& & $^{78}{\rm Kr}$(4.25)\
& & $^{64}{\rm Zn}$(1.35)\
0.475 & 7.98e-05 & $^{64}{\rm Zn}$(1.36)\
& & $^{74}{\rm Se}$(0.85)\
& & $^{78}{\rm Kr}$(0.78)\
0.480 & 1.59e-04 & $^{64}{\rm Zn}$(1.49)\
& & $^{78}{\rm Kr}$(0.34)\
& & $^{68}{\rm Zn}$(0.30)\
0.485 & 3.36e-04 & $^{62}{\rm Ni}$(0.92)\
& & $^{58}{\rm Ni}$(0.35)\
& & $^{64}{\rm Zn}$(0.23)\
0.490 & 6.24e-04 & $^{62}{\rm Ni}$(1.21)\
& & $^{58}{\rm Ni}$(0.42)\
& & $^{66}{\rm Zn}$(0.13)\
0.495 & 1.36e-03 & $^{62}{\rm Ni}$(1.30)\
& & $^{58}{\rm Ni}$(0.41)\
& & $^{61}{\rm Ni}$(0.23)
[ccc]{} 21 & F, Mg, Na, Ne, O\
22 & Mg, Na, Ne\
23 & F, Mg, Na, Ne\
24 & Mg, Na, Ne, Si\
25 & Mg, Na, Ne, Si\
26 & Mg, Na, Si\
27 & Mg, Na, P, Si\
28 & Mg, Na, P, S, Si\
29 & Mg, Na, P, S, Si\
30 & P, S, Si\
31 & Cl, P, S, Si\
32 & Cl, P, S, Si\
33 & Cl, P, S, Si\
34 & Cl, P, S, Si\
35 & Cl, K, P, S\
36 & Ca, Cl, K, S\
37 & Ca, Cl, K, S\
38 & Ca, Cl, K, S\
39 & Ca, Cl, K\
40 & Ca, Cl, K, Sc, Ti\
41 & Ca, Cl, K, Sc, Ti\
42 & Ca, K, Sc, Ti\
43 & Ca, Cl, K, Sc, Ti\
44 & Ca, K, Sc, Ti, V\
45 & Cr, K, Sc, Ti, V\
46 & Cr, K, Sc, Ti, V\
47 & Cr, K, Sc, Ti, V\
48 & Cr, K, Sc, Ti, V\
49 & Cr, Fe, K, Mn, Sc, Ti, V\
50 & Cr, Mn, Sc, Ti, V\
51 & Mn, Sc, Ti, V\
52 & Fe, Mn, Ti, V\
53 & Cr, Fe, Mn, Ti, V\
54 & Cr, Fe, Mn, V\
55 & Cr, Fe, Mn, Ti, V\
56 & Cr, Fe, Mn, Ni, Sc, Ti, V\
57 & Cr, Cu, Fe, Mn, Ni, Ti, V, Zn\
58 & Cr, Cu, Fe, Mn, Ni, Ti, V\
59 & Cr, Cu, Fe, Mn, Ni, V\
60 & Cr, Cu, Fe, Mn, Ni, Ti, V, Zn
\
|
---
abstract: 'The kinetics of single-species annihilation, $A+A\to 0$, is investigated in which each particle has a fixed velocity which may be either $\pm v$ with equal probability, and a finite diffusivity. In one dimension, the interplay between convection and diffusion leads to a decay of the density which is proportional to $t^{-3/4}$. At long times, the reactants organize into domains of right- and left-moving particles, with the typical distance between particles in a single domain growing as $t^{3/4}$, and the distance between domains growing as $t$. The probability that an arbitrary particle reacts with its $n^{\rm th}$ neighbor is found to decay as $n^{-5/2}$ for same-velocity pairs and as $n^{-7/4}$ for $+-$ pairs. These kinetic and spatial exponents and their interrelations are obtained by scaling arguments. Our predictions are in excellent agreement with numerical simulations.'
address:
- '$\dag$The James Franck Institute, The University of Chicago, Chicago, IL 60637'
- '$\ddag$Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215'
- '$^*$Courant Institute of Mathematical Sciences, New York University, New York, NY 10012-1185'
author:
- 'E. Ben-Naim$\dag$, S. Redner$\ddag$, and P. L. Krapivsky$^*$'
title: 'Two-Scale Annihilation'
---
Single-species diffusion-controlled annihilation, $A+A\to 0$, exhibits classical mean-field kinetics when the spatial dimension $d>2$, in which the concentration $c(t)$ decays as $t^{-1}$, and nonclassical dimension-dependent kinetics for $d\leq 2$ with a slower concentration decay, $c(t)\propto t^{-d/2}$ \[1-7\]. In one dimension, the geometric restriction to nearest-neighbor interactions leads to relatively large departure from the mean-field kinetics, as well as a spatial organization of reactants. In this well-studied case, it is known that $c(t)$ asymptotically decays as $(Dt)^{-1/2}$, independent of the initial concentration. The complementary situation of single-species annihilation where the reactants move ballistically has recently begun to receive attention \[8-12\]. Perhaps the simplest example is the deterministic $\pm$ annihilation process, where each particle moves at a constant velocity which may be either $+v$ or $-v$ \[8,9\]. When the densities of the $+v$ and $-v$ particles are equal, $c(t)$ decays as $(c_0/vt)^{1/2}$.
In this letter, we consider single species annihilation when the particle transport is a superposition of convection [*and*]{} diffusion — we term this system the stochastic $\pm$ annihilation process (Fig. 1). Although the concentration decays as $t^{-1/2}$ when only one of the transport mechanisms — either convection or diffusion — is operative, the combined transport process leads to a faster concentration decay of $t^{-3/4}$. Our goal is to understand this unusual decay law and its attendant consequences on the spatial distribution of reactants. While there has been fragmentary mention of some aspects of this system \[7,10\], here we give primarily new results and a self-contained account of the basic phenomena.
To set the stage for our approaches and results in the stochastic $\pm$ annihilation process, it is first helpful to provide a simple derivation for the decay of $c(t)$ in the deterministic $\pm$ process. Let us consider a system where particles are placed with concentration $c_0$ in a box of size $L$, and denote by $c(L,t)$ the time dependent concentration. Initially, there are $N=c_0 L$ particles, and the difference between the number of right and left moving particles is of the order of $\Delta N=|N_+-N_-|\sim \sqrt{N}$. Eventually, all particles that belong to the minority-velocity species are annihilated and thus $c(L,t=\infty)\sim \Delta N/L\sim (c_0/L)^{1/2}$. We assume a scaling form for the concentration, $c(L,t)\sim (c_0/L)^{1/2}f(z)$ with $z=L/vt$. According to the above argument, $f(z)\to {\rm const.}$ in the $z\to 0$ limit. Conversely, in the short time limit, $z\to\infty$, the concentration cannot depend on the box size, so that $f(z)$ must be proportional to $z^{1/2}$. Thus we find $$c(t)\sim \left({c_0\over vt}\right)^{1/2}.$$ As a consequence, the system organizes into right- and left-moving domains whose size is of the order of $vt$.
In the diffusive case, either one particle or no particles survive the annihilation process in a finite box, depending on the parity of the initial number of particles. Following the above line of reasoning, we may write the scaling ansatz $c(L,t)\sim L^{-1}f(z)$ with $z=L/\sqrt{Dt}$. Here the relevant time dependent length scale is $\sqrt{Dt}$. In the limit $z\to 0$, the concentration is independent of $L$, thereby implying $f(z)\sim z$. Therefore the time dependent concentration is given by $$c(t)\sim \left(1\over Dt\right)^{1/2}.$$
The crucial new feature in the stochastic $\pm$ annihilation process is that particles with the same velocity can mutually annihilate because of their interaction which is driven by diffusion (Fig. 1). A useful way to determine the decay in this process is to consider separately the role of convection and diffusion on the kinetics. Because of the convection, particles organize into right-moving and left-moving domains as outlined above. Inside each domain, however, diffusive annihilation between same-velocity particles takes place. We assume that the diffusive annihilation mechanism leads to an effective time dependent “initial” concentration, $c_0(t)\sim
(Dt)^{-1/2}$, which plays the role of $c_0$ in Eq. (1). Thus we obtain $$c(t)\sim \left(1\over{D\,v^2\,t^3}\right)^{1/4}.$$ Intriguingly, the concentration in the stochastic $\pm$ annihilation process is predicted to decay as $t^{-3/4}$ even though $c(t)$ decays as $t^{-1/2}$ if either diffusion only or convection only is the transport mechanism.
An alternative method to determine the decay law, which provides additional insight into the relative effects of diffusion and convection, is dimensional analysis. If the particle diffusion coefficient is $D$, then the stochastic $\pm$ process is fully characterized by the initial concentration $c_0$, the velocity $v$, and $D$. From these parameters, the only variable combinations with the dimensions of concentration are $c_0$, ${1/{vt}}$, and ${1/\sqrt{Dt}}$. On physical grounds, we anticipate that these three concentration scales should enter multiplicatively so that the time-dependent concentration can be expressed in a conventional scaling form. Accordingly, we write the time dependent concentration in the form $$c(t)\sim (c_0)^\rho
\left({1\over{vt}}\right)^\sigma
\left({1\over\sqrt{Dt}}\right)^{1-\rho-\sigma},$$ in which the dimension of the right-hand side is manifestly a concentration. The exponents $\rho$ and $\sigma$ can be now determined by requiring that the above expression for $c(t)$ matches with: (a) the diffusion-controlled behavior $c(t)\to (Dt)^{-1/2}$ for $t<\tau_v\simeq
D/v^2$, which is the characteristic time below which the drift can be ignored for a particle which undergoes biased diffusion; and (b) the ballistically-controlled behavior $c(t)\to (c_0/vt)^{1/2}$ when $t<\tau_D\simeq 1/(Dc_0^2)$, which is the time for adjacent particles to meet by diffusion. Thus by matching Eq. (4) with $(Dt)^{-1/2}$ at $\tau_v$, one obtains $\rho=0$, and then matching Eq. (4) with $(c_0/vt)^{1/2}$ at $\tau_D$ gives $\sigma=1/2$. This then reproduces Eq. (3).
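To make the matching fully explicit (this short check merely spells out the step just described): at $t=\tau_v$ the two microscopic lengths coincide, $v\tau_v=\sqrt{D\tau_v}=D/v$, so agreement with the $c_0$-independent diffusive decay $(Dt)^{-1/2}$ forces $\rho=0$; matching the remaining form at $\tau_D=1/(Dc_0^2)$ then fixes $\sigma$: $$\left({1\over v\tau_D}\right)^{\sigma}\left({1\over\sqrt{D\tau_D}}\right)^{1-\sigma}=D^{\sigma}\,v^{-\sigma}\,c_0^{\,1+\sigma}=\left({c_0\over v\tau_D}\right)^{1/2}=D^{1/2}\,v^{-1/2}\,c_0^{\,3/2}\quad\Longrightarrow\quad\sigma={1\over 2}.$$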
To test this decay law, we performed Monte Carlo simulations using the following realization of the reaction process. Initially all sites are occupied with either a $+$ or a $-$ particle with equal probabilities. A simulation step consists of picking a particle at random and moving it a single lattice site in the direction of its velocity. If the target site is occupied, then both particles are removed from the system. Time is incremented by the inverse of the current number of particles. The simulation was carried out up to $10^5$ time steps on a periodic chain of $10^6$ sites and an average over $10^3$ realizations was performed. The data for $c(t)$ is strikingly linear over a substantial time range on a double logarithmic scale (Fig. 2). The local two-point slopes of the data in the time range $10^2\ltwid t\ltwid 5\times 10^4$ give an exponent value of $0.745$. We interpret the constancy of these data as evidence that the actual value of the exponent is $3/4$. It is worth noting that a Padé analysis of the exact short-time power series gives an estimate for the decay exponent of approximately 0.72 \[13\]. This provides a rough estimate for the magnitude of the variation of the effective exponent between the early time and asymptotic regimes.
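The update rule just described is simple enough to reproduce directly. The following minimal sketch is illustrative only and is not the simulation code used for the quoted results; the chain length, random seed, sampling times and bookkeeping details are arbitrary choices, and the run is far shorter than the $10^6$-site, $10^3$-realization average above. It implements the random-sequential dynamics on a periodic chain and records $c(t)$; on a log-log plot the output slope should drift toward $-3/4$.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 50_000                               # periodic chain (the paper used 10^6 sites)
vel = rng.choice([-1, 1], size=L)        # every site initially occupied by a +/- particle
pos = np.arange(L)                       # pos[k]: site of particle k
who = np.arange(L)                       # who[s]: particle index at site s, -1 if empty
n, t, record, out = L, 0.0, 1.0, []

while n > 1 and t < 3.0e3:
    i = rng.integers(n)                  # pick a surviving particle at random
    target = (pos[i] + vel[i]) % L       # move one site in the direction of its velocity
    j = who[target]
    who[pos[i]] = -1
    if j >= 0:                           # target occupied -> annihilate both particles
        who[target] = -1
        for k in sorted((int(i), int(j)), reverse=True):
            n -= 1                       # swap-remove slot k with the last active slot
            pos[k], vel[k] = pos[n], vel[n]
            if who[pos[k]] == n:
                who[pos[k]] = k
    else:                                # target empty -> hop
        pos[i] = target
        who[target] = i
    t += 1.0 / n                         # time advances by the inverse particle number
    if t >= record:
        out.append((t, n / L))
        record *= 1.3

for tt, cc in out:
    print(f"t = {tt:10.2f}   c = {cc:.6f}")
```

In this realization each particle always hops in the direction of its velocity; the diffusive component of the transport emerges from the randomness of the sequential updates.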
Having established the decay exponent numerically, it is of interest to consider the consequences of this unusual decay law on the spatial distribution of reactants. In particular, since $c(t)$ decays as $t^{-3/4}$, one might expect that the average separation between nearest-neighbor particles grows as $t^{3/4}$. However, if there remains any vestige of the domain organization that is associated with the deterministic $\pm$ process, then more than one length scale may be needed to characterize this spatial distribution. Such multiscale behavior has been observed previously in diffusive two-species annihilation \[14\] and the associated consequences lead to new insights about the system. To investigate possible multiscale behavior in the stochastic $\pm$ annihilation process, we introduce the following distance scales (Fig. 3): $$\begin{aligned}
\langle{x_{++}(t)}\rangle&\sim t^{\nu_{++}},\quad
\langle{x_{+-}(t)}\rangle&\sim t^{\nu_{+-}},\nonumber\\
\langle{x_{-+}(t)}\rangle&\sim t^{\nu_{-+}},\quad
\langle{x_{\rm dom}(t)}\rangle&\sim t^{\nu_{\rm dom}},\end{aligned}$$ which are defined to be, respectively, the average distance between neighboring same-velocity pairs, $+-$ pairs, $-+$ pairs, and the average length of a domain of same-velocity particles.
Our Monte Carlo data for these length scales exhibits considerable curvature on a double logarithmic scale (Fig. 4). Thus to estimate the asymptotic behavior, we studied the systematic variation of the slopes of linear least-squares fits as the data at the earliest times are progressively eliminated. The effective exponents obtained in this manner vary considerably; for example, for $\langle{x_{++}(t)}\rangle$, the effective exponent systematically increases, but at a progressively slower rate, from 0.699 to 0.734. Together with relatively strong numerical evidence that the concentration decays as $t^{-3/4}$, we conclude that the actual value of $\nu_{++}$ is $3/4$. This accords with the expectation that $\langle{x_{++}(t)}\rangle$ should scale as $1/c(t)$. Similar finite time corrections occur in the exponent estimates for the remaining length scales defined above. For these cases, the effective exponent values are all increasing as short-time data are systematically deleted and it appears that $\nu_{+-}$, $\nu_{-+}$, and $\nu_{\rm
dom}$ are all very close to 1, asymptotically. That is, the corresponding lengths are governed by the ballistic particle motion, but again with considerable finite time corrections. The case of $\langle{x_{+-}(t)}\rangle$ is especially problematic, as the effective exponent changes from approximately $0.80$ to $0.93$ over the time range covered by our simulation. Evidently, more extensive simulation would be needed to determine the asymptotic exponent values unambiguously by simulation alone.
A new useful way to characterize the spatial range of bimolecular reactions is the collision probability, $P(n)$, defined as the probability that the reaction partner of a given particle is its $n^{\rm
th}$ neighbor. Eventually, every particle reacts with some collision partner in one dimension and the distribution of the distances between partners provides a measure of the reaction “efficiency”. In the deterministic $\pm$ process, for example, this probability can be obtained analytically \[8,9,15\]. Let us denote the velocity of the $n^{\rm th}$ neighbor by $v_n=\pm 1$, and the local velocity sum by $S_n=\sum_{i=0}^n v_i$. A right moving particle initially at the origin reacts with its $(2n+1)^{th}$-neighbor if: (a) $S_l>0$ for $l=0,1,\ldots,2n$, and (b) $S_{2n+1}=0$. This quantity is precisely the same as the first-passage probability for a random walk which starts at the origin to return to the origin for the first time after $2n$ steps. Because of this equivalence to an exactly soluble first-passage problem \[16\], one has $P(2n)=0$ and $P(2n+1)=2^{-2n-1}{(2n)!/ n!(n+1)!}$. In the limit $n\to\infty$, the probability that a given particle collides with its $n^{\rm th}$-neighbor is given by $$P(n)\propto n^{-3/2}.$$
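The closed form above is easy to evaluate. The following short check (illustrative only) generates the odd-$n$ values from the ratio $P(2n+3)/P(2n+1)=(2n+1)/(2n+4)$ implied by the formula, and verifies both the normalization $\sum_n P(2n+1)=1$ and the $n^{-3/2}$ tail; the printed combination $m^{3/2}P(m)$ approaches $\sqrt{2/\pi}\approx0.80$.

```python
# Exact collision probabilities for the deterministic +- process:
# P(2n+1) = 2^{-(2n+1)} (2n)! / (n! (n+1)!),  P(2n) = 0,
# with successive odd terms obeying P(2n+3)/P(2n+1) = (2n+1)/(2n+4).
p, total = 0.5, 0.0                       # P(1) = 1/2
for n in range(2001):
    m = 2 * n + 1                         # neighbour index (odd)
    total += p
    if m in (1, 9, 99, 999):
        print(f"P({m:4d}) = {p:.4e}   m^1.5 * P(m) = {m**1.5 * p:.4f}   sum so far = {total:.5f}")
    p *= (2 * n + 1) / (2 * n + 4)
# m^1.5 * P(m) tends to sqrt(2/pi) ~ 0.798 and the cumulative sum tends to 1.
```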
Motivated by this power law dependence, we assume, in general, that $P(n)\sim n^{-\gamma}$. The exponent $\gamma$ can be related to other fundamental exponents of reaction processes, namely, the concentration decay exponent $\alpha$, defined by $c(t)\sim t^{-\alpha}$, and the correlation exponent $\beta$, defined by $\xi(t)\sim t^{\beta}$. Here $\xi(t)$ refers to the distance over which “information” about the reactants has spread. In a time $t$, only particles within a domain of linear size $\xi(t)$ are eligible to react and thus the surviving fraction, or concentration, is $$c(t)\sim \int_{\xi(t)} dn\,
P(n)\sim \int_{\xi(t)} dn\, n^{-\gamma}
\sim t^{\beta(1-\gamma)}.$$ Consequently, we find the exponent relation $$\gamma=1+\alpha/\beta.$$ For the deterministic $\pm$ process, $\alpha=1/2$ and $\beta=1$ \[8,9\], and the exact $\gamma=3/2$ of Eq. (6) is recovered. As an illustration, consider, for example, single-species diffusion-limited annihilation. The decay and correlation exponents are $\alpha=1/2$ and $\beta=1/2$, leading to $\gamma=2$ from Eq. (8). Preliminary simulations appear to confirm this result. Similarly, for two-species annihilation, $\alpha$ is now equal to 1/4 while $\beta$ remains 1/2 so that $\gamma=3/2$.
Let us now consider the behavior of the collision probability in the stochastic $\pm$ annihilation process. In this case, the existence of two length scales in the system suggests that it is necessary to make a distinction between reaction events that involve particles of the same and of different velocities. We therefore define a ballistic correlation scale $\xi_{+-}(t)\sim t^{\beta_{+-}}$, with $\beta_{+-}=1$, which is associated with $+-$ collisions, [*i.e.*]{}, annihilation events between opposite velocity particles. Invoking the scaling relation Eq. (8), we thus find $P_{+-}(n)\sim n^{-\gamma_{+-}}$ with $\gamma_{+-}=7/4$. Similarly, there is a diffusive length scale $\xi_{++}(t)\sim t^{\beta_{++}}$, with $\beta_{++}=1/2$, corresponding to annihilation events between same-velocity particles. In this case, Eq. (8) gives $\gamma_{++}=5/2$. To summarize, we obtain $$P(n)\sim P_{+-}(n)\sim n^{-7/4},\quad
P_{++}(n)=P_{--}(n) \sim n^{-5/2}.$$ This behavior is consistent with our Monte Carlo simulation data (Fig. 5). Notice that over large distances, annihilation between opposite velocity particles dominates, as one would naively expect.
Our results can also be generalized to arbitrary spatial dimension $d>1$. In this case, it is necessary to ascribe a finite, non-zero radius $R$ to the particles so that there is a finite collision cross section for particles to actually meet. Let us consider the anisotropic system in which particles undergo isotropic Brownian motion with diffusivity $D$, and a drift along the $\hat x$ axis only, with the velocity taking on the value $\pm v \hat x$ with equal probability. In the ballistic limit ($D\equiv 0$), the concentration decays as $\sqrt{c_0/R^{d-1}vt}$ (since the process is quasi-one-dimensional, the $t^{-1/2}$ decay of the true one-dimensional system is still obeyed). In contrast, for diffusion-controlled annihilation ($v\equiv 0$), the concentration decays as $(Dt)^{-d/2}$ for $d<2$, and as $(R^{d-2}Dt)^{-1}$ for $d>2$ (with logarithmic corrections at the critical dimension $d=2$) \[7\]. Repeating the analysis detailed previously for the one-dimensional case in the derivation of Eq. (3) from Eqs. (1) and (2), we find the concentration decay $$c(t)\sim \cases{
(R^{2d-2}\,D^d\,v^2)^{-1/4}\,t^{-(d+2)/4}, &$d<2$;\cr
(R\,D\,v)^{-1/2}\,t^{-1}\,\left[\ln(Dt/R^2)\right]^{1/2}, &$d=2$;\cr
(R^{2d-3}\,D\,v)^{-1/2}\,t^{-1}, &$d>2$.\cr}$$ The combined diffusion and ballistic transport does not change the mean-field nature of the annihilation kinetics when $d>2$ and the classical $t^{-1}$ decay is recovered. For sufficiently low spatial dimension, however, the non-classical behavior arises in which the decay exponent $\alpha=(d+2)/4$. Thus in low spatial dimensions, the interplay between convection and diffusion provides more effective mixing than diffusion or drift alone, and leads to a larger decay exponent than $\alpha_{\rm diff}=d/2$ and $\alpha_{\rm ball}=1/2$ which arise when only one transport mechanism is operative.
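For completeness, the $d<2$ entry follows by repeating the one-dimensional argument with $c_0\to(Dt)^{-d/2}$ inserted into the quasi-one-dimensional ballistic law, $$c(t)\sim\left[{(Dt)^{-d/2}\over R^{\,d-1}\,v\,t}\right]^{1/2}=\left(R^{\,2d-2}\,D^{\,d}\,v^{2}\right)^{-1/4}\,t^{-(d+2)/4},$$ which has the dimensions of a concentration in $d$ dimensions and reduces to Eq. (3) at $d=1$, where the dependence on the particle radius drops out.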
In summary, the stochastic $\pm$ single-species annihilation process exhibits a $t^{-3/4}$ decay of the concentration. This is faster than the $t^{-1/2}$ decay that arises when only one of the constituent transport processes in the stochastic $\pm$ process, either diffusion or deterministic $\pm$ convection, is present. A microscopic understanding of this decay law is lacking, and it seems that a technique beyond those typically used to solve one-dimensional reactive systems would be needed for the stochastic $\pm$ annihilation process. At long times, the system exhibits a spatial organization in which diffusion controls the short distance behavior and convection controls the large distance behavior. We have also introduced the concept of the collision probability, $P(n)$, the probability that a given particle is annihilated by its $n^{\rm th}$-neighbor. For the stochastic $\pm$ process, this probability is further resolved into annihilation by same-velocity and by opposite-velocity pairs. These two probabilities decay as $P_{++}(n)\sim n^{-5/2}$ and $P_{+-}(n)\sim
n^{-7/4}$, respectively. It will be interesting to study the collision probability in other reaction processes such as diffusive driven single-species annihilation.
We gratefully acknowledge support from the NSF under awards 92-08527, MRSEC program DMR-9400379 (EBN), DMR-9219845 and ARO grant DAAH04-93-G-0021 (SR). PLK was supported in part by a grant from NSF.
[99]{}
M. Bramson and D. Griffeath, [*Z. Wahrsch. verw. Gebiete*]{} [**53**]{}, 183 (1980).
D. C. Torney and H. M. McConnell, [*Proc. Roy. Soc. London A*]{} [**387**]{}, 147 (1983).
Z. Racz, [*Phys. Rev. Lett.*]{} [**55**]{}, 1707 (1985).
A. A. Lushnikov, [*Sov. Phys. JETP*]{} [**64**]{}, 811 (1986).
J. L. Spouge, [*Phys. Rev. Lett.*]{} [**60**]{}, 871 (1988).
D. ben-Avraham, M. A. Burschka and C. R. Doering, [*J. Stat. Phys.*]{} [**60**]{}, 695 (1990).
For a recent review of diffusion-controlled annihilation, see S. Redner, in [*Nonequilibrium Statistical Mechanics in One Dimension*]{}, ed. V. Privman (Cambridge University Press, Cambridge, 1996).
Y. Elskens and H. L. Frisch, [*Phys. Rev. A*]{} [**31**]{}, 3812 (1985).
J. Krug and H. Spohn, [*Phys. Rev. A*]{} [**38**]{}, 4271 (1988).
E. Ben-Naim, S. Redner, and F. Leyvraz, [*Phys. Rev. Lett.*]{} [**70**]{}, 1890 (1993).
J. Piasecki, [*Phys. Rev. E*]{} [**51**]{}, 5535 (1995).
M. Droz, P.-A. Rey, L. Frachebourg, and J. Piasecki, [*Phys. Rev. E*]{} [**51**]{}, 5541 (1995).
E. Ben-Naim and J. Zhuo, [*Phys. Rev. E*]{} [**48**]{}, 2603 (1993).
F. Leyvraz and S. Redner, [*Phys. Rev. Lett.*]{} [**66**]{}, 2168 (1991).
P. L. Krapivsky, S. Redner, and F. Leyvraz, [*Phys. Rev. E*]{} [**51**]{}, 3977 (1995).
W. Feller, [*An Introduction to Probability Theory and its Applications*]{}, Vol. I (Wiley, New York, 1968).
|
Despite years of debate, the nature of the spin glass phase of finite dimensional systems remains a major open problem in statistical physics. Two competing theories have been proposed as candidates to explain spin glass physics at low temperature: the theory of replica symmetry breaking [@MPV] [@petc] and the droplet theory [@fh] [@BrMo]. The former, based on the analysis of the long range Sherrington-Kirkpatrick (SK) spin glass, predicts a rich phenomenology with ergodicity breaking not related to any physical symmetry breaking and susceptibility anomalies related to the presence of many pure states. The latter assimilates spin glasses to some kind of “disguised ferromagnet”, albeit with complex phenomenology, where the transition appears as a conventional symmetry breaking phenomenon. Since both theories are non-rigorous in their application to finite dimensional systems, it appears very difficult to settle the question on purely theoretical grounds. On the other hand, experiments in 3D and numerical simulations in 3 and 4D fail to give compelling evidence in favour of one or the other of the two theories: the times probed in the experiments are too short to settle the question of the presence or absence of replica symmetry breaking and the related issue of asymptotic existence of response anomalies during aging dynamics, and the length scales probed in the simulations are too small to infer the behaviour of the thermodynamic limit. Rigorous analysis of finite dimensional systems turns out to be very hard, and so far has not been able to exclude either scenario, although it has produced [@ns] considerable conceptual clarification and shown some of the subtleties hidden even in the definition of the infinite volume limit of these models. Even at the mean field level, only very recently have simple interpolation methods been introduced [@limterm] [@broken] [@ass] which have made it possible to prove [@talaparisi] the Parisi solution for the SK model. Interpolation methods have subsequently been applied also in the context of finite range spin glasses, [*e.g.*]{} in [@cg].
In this Letter we focus our attention on the Kac limit of finite range spin glasses, as first considered in [@froelich] and later studied in [@bovier-kac] and [@kacnoi]. Kac models are a classical tool of mathematical physics, where one considers variables interacting via a potential with finite range $\xi=\gamma^{-1}$, which tends to infinity [*after*]{} the thermodynamic limit is taken. In a classical paper [@lp] Penrose and Lebowitz proved that for conventional non-disordered systems the free energy tends (modulo the Maxwell construction) to that of the corresponding mean-field system, in which the interactions do not decay with distance and are scaled with the size of the system. We combine here the idea of the interpolating model with the idea [@lp] of dividing the system into boxes of suitable size to prove the same property for spin glasses.
Other disordered models with Kac-type interactions have been studied in previous literature. For instance, see [@bgp] and references therein for the case of the Hopfield model.
The model we consider is defined on the $d$-dimensional lattice ${ Z}^d$, with Ising spin degrees of freedom $\sigma_i=\pm 1, i\in{ Z}^d$. Given a finite hypercube $\Lambda$ of side $L$ one defines the finite volume Hamiltonian as $$\begin{aligned}
\label{hkac}
H^{(\gamma)}_{\Lambda}(\sigma,h;J)=-
\sum_{i,j\in\Lambda}
\sqrt{\frac{{w(i-j;\gamma)}}{{2 W(\gamma)}}}
J_{ij}\sigma_i\sigma_j-h\sum_{i\in \Lambda}\sigma_i,\end{aligned}$$ where $W(\gamma)=\sum_{i\in { Z}^d}w(i;\gamma)$ and $w(r;\gamma)=\gamma^d \phi(\gamma r)$ for some smooth, nonnegative function $\phi(r)$, decaying sufficiently fast for $|r|\to\infty$ to have $W(\gamma)<\infty$. The parameter $\gamma=\xi^{-1}$ is the inverse range of the interaction. The quenched couplings $J_{ij}$ are i.i.d. Gaussian $N(0,1)$ variables, and we denote by $E$ the corresponding averages. As is well known [@vuiller] [@vanenter], the infinite-volume limit of the quenched free energy $$f^{(\gamma)}(\beta,h)=-\lim_{L\to\infty}
\frac1{\beta|\Lambda|}E\ln Z^{(\gamma)}_\Lambda(\beta,h;J)$$ exists.
On the other hand, the Hamiltonian of the SK spin glass mean field model is defined as [@sk] $$\begin{aligned}
\label{hsk}
H^{S.K.}_{|\Lambda|}(\sigma,h;J)=-\frac1{\sqrt{2|\Lambda|}}
\sum_{i,j\in \Lambda} J_{ij}
\sigma_i\sigma_j-h\sum_{i\in\Lambda}\sigma_i,\end{aligned}$$ where $|\Lambda|=L^d$ is the number of lattice sites in $\Lambda$. Subadditivity of the corresponding free energy and existence of its infinite volume limit $$\begin{aligned}
f^{S.K.}(\beta,h)=-\lim_{L\to\infty}
\frac1{\beta|\Lambda|}E\ln Z^{S.K.}_{|\Lambda|}(\beta,h;J)\end{aligned}$$ has been proven in [@limterm].
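As a purely illustrative aside, the two finite-volume objects defined above can be evaluated concretely at very small sizes. The sketch below is not part of the argument; it assumes $d=1$, $\phi(r)=e^{-|r|}$, and arbitrary illustrative values of $\beta$, $h$, $N$ and $\gamma$, and computes the quenched free energies per spin of (\[hkac\]) and (\[hsk\]) by exact enumeration on a short open chain. At such small sizes and finite $\gamma$ the numbers merely illustrate the definitions; they say nothing about the Kac limit itself, which requires $\ell\ll\xi\ll L$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
beta, h, N, gamma = 1.0, 0.1, 10, 0.5               # illustrative parameters only

spins = np.array(list(product([-1, 1], repeat=N)))  # all 2^N spin configurations
pair = np.einsum('ki,kj->kij', spins, spins)        # sigma_i sigma_j for every configuration

def quenched_f(scale, n_samples=200):
    """-E[ln Z] / (beta N); scale[i,j] multiplies the i.i.d. N(0,1) couplings J_ij."""
    logZ = np.empty(n_samples)
    for s in range(n_samples):
        J = rng.standard_normal((N, N))
        E = -np.einsum('kij,ij->k', pair, scale * J) - h * spins.sum(axis=1)
        x = -beta * E
        logZ[s] = x.max() + np.log(np.exp(x - x.max()).sum())
    return -logZ.mean() / (beta * N)

# Kac couplings on an open chain: w(r) = gamma exp(-gamma |r|), W = sum over all r in Z
r = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
w = gamma * np.exp(-gamma * r)
W = gamma * (1.0 + 2.0 * np.exp(-gamma) / (1.0 - np.exp(-gamma)))

print("f_Kac ~", round(quenched_f(np.sqrt(w / (2.0 * W))), 4))
print("f_SK  ~", round(quenched_f(np.full((N, N), 1.0 / np.sqrt(2.0 * N))), 4))
```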
It was recently shown in [@kacnoi] that the free energy of model (\[hkac\]) is bounded below by that of SK: $$\label{risGT}
f^{(\gamma)}(\beta,h)\ge f^{S.K.}(\beta,h)$$ for any value of $d,\beta,h$ and $\gamma$, provided that the potential $\phi(i-j)$ is [*nonnegative definite*]{}, [*i.e.*]{}, its Fourier transform is nonnegative. For instance, it is immediate to check this condition for $
\phi(i-j)=e^{-\sum_{\alpha=1}^d|i_\alpha-j_\alpha|},
$ which for $d=1$ is just the potential considered originally by Kac in [@kac]. In the present paper, we provide the complementary bound, which allows to fully characterize the quenched free energy in the Kac limit $\gamma\to0$:
Assume that $\sum_{i\in Z^d} \phi(i)<\infty$. Then, for any $\beta$ and $h$ one has $$\begin{aligned}
\lim_{\gamma\to0} f^{(\gamma)}(\beta,h)\le f^{S.K.}(\beta,h).
\end{aligned}$$ If in addition all the Fourier components of $\phi$ are nonnegative, then $$\begin{aligned}
\lim_{\gamma\to0} f^{(\gamma)}(\beta,h)= f^{S.K.}(\beta,h).
\end{aligned}$$
Together with Talagrand’s recently established proof [@talaparisi] of the Parisi ansatz for the SK model, this shows that the Parisi theory [@MPV] gives the correct free energy for finite dimensional spin glasses in the Kac limit.
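Before describing the proof it is perhaps worth recording explicitly that the exponential potential recalled above does satisfy the Fourier-positivity hypothesis: for any decay rate $a>0$ the one-dimensional lattice transform is a positive function, $$\sum_{n\in{ Z}}e^{-a|n|}\,e^{ikn}={1-e^{-2a}\over 1-2e^{-a}\cos k+e^{-2a}}>0,$$ and for the product form $\phi(i)=e^{-\sum_\alpha|i_\alpha|}$ the $d$-dimensional transform factorizes into $d$ such positive factors.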
The idea of the proof is to interpolate between the Kac model in a volume $|\Lambda|$ and a system made of a collection of many independent SK subsystems of volume $M=\ell^d$. The crucial point, as in [@lp], is to choose $$\label{scale}
\ell \ll \xi \ll L,$$ and to let the three lengths diverge in this order. Let us divide the box $\Lambda$ into sub-cubes $\Omega_n$ of volume $M$, $n=1,\cdots,|\Lambda|/M$, and introduce the interpolating partition function $$\begin{aligned}
\nonumber
Z_\Lambda(t)&=&\sum_{\sigma}\exp\left(\beta\sqrt{1-t}\sum_n\sum_{i,j\in\Omega_n}
\frac{J_{ij}}{\sqrt{2M}}\sigma_i\sigma_j
\right)\\
\nonumber
& &\times\exp\left(\beta\sqrt t\sum_{i,j\in\Lambda}
\sqrt{\frac{w(i-j;\gamma)}{2W(\gamma)}}J'_{ij}\sigma_i\sigma_j
+\beta h\sum_{i\in\Lambda}\sigma_i\right),\end{aligned}$$ where the Gaussian variables $J'$ are independent of the $J$. Note that $$\begin{aligned}
&& \frac1{|\Lambda|}E \ln Z_\Lambda(0)=\frac1M E\ln Z_M^{S.K.}(\beta,h;J)\\
&&\frac1{|\Lambda|}E \ln Z_\Lambda(1)=\frac1{|\Lambda|}
E\ln Z_\Lambda^{(\gamma)}(\beta,h;J).\end{aligned}$$ As we show below, one has $$\begin{aligned}
\label{bellissima}
\lim_{\gamma\to0}\lim_{L\to \infty}
\frac d{dt} \frac1{|\Lambda|}E\ln Z_\Lambda(t)\ge0\end{aligned}$$ uniformly for $0\le t \le1$. After integration on $t$ between $0$ and $1$ and taking the large $M$ limit, one finds therefore the desired result $$\begin{aligned}
-\beta\lim_{\gamma\to0}f^{(\gamma)}(\beta,h)& \ge &
\lim_{M\to\infty}\frac1M E\ln Z_M^{S.K.}(\beta,h;J)\\
\nonumber
& =&-\beta f^{S.K.}(\beta,h).\end{aligned}$$ Denoting as ${\langle . \rangle}$ the Gibbs average, the computation of the $t$ derivative gives, up to terms negligible for large $L$, $$\begin{aligned}
\frac d{dt} \frac1{|\Lambda|}E\ln Z_\Lambda(t)&= &\frac{\beta^2}{4|\Lambda|}
E\left[\sum_n\sum_{i,j\in \Omega_n}\frac1{M}{\langle \sigma_i\sigma_j \rangle}^2
\right.
\label{deriv}
\\
\nonumber
& &
\left.
-\sum_{i,j\in\Lambda}\frac{w(i-j;\gamma)}{W(\gamma)}
{\langle \sigma_i\sigma_j \rangle}^2
\right],\end{aligned}$$ where we have used integration by parts on the Gaussian disorder and the property $$\lim_{L\to \infty}\frac1{|\Lambda|}\sum_{i,j\in\Lambda}
\frac{w(i-j;\gamma)}{W(\gamma)}=1.$$ Introducing two replicas with identical quenched couplings and spin configurations $\sigma^1,\sigma^2$, we can write (\[deriv\]) as: $$\begin{aligned}
\label{form1}
\frac d{dt} \frac1{|\Lambda|}E\ln Z_\Lambda(t)= & &\frac{\beta^2}{4|\Lambda|}
E\left[
\sum_{n}\frac1M\sum_{i,j\in\Omega_n}
{\langle \sigma^1_i\sigma^2_i\sigma^1_j\sigma^2_j \rangle}
\right.
\\\nonumber
& &
\left.
-\sum_{i,j\in\Lambda}
\frac{w(i-j;\gamma)}{W(\gamma)}
{\langle \sigma^1_i\sigma^2_i\sigma^1_j\sigma^2_j \rangle}
\right]. \end{aligned}$$ Denoting the partial overlap in the $n$-th sub-cube as $
q_{12}^{(n)}=1/M\sum_{i\in \Omega_n}\sigma^1_i\sigma^2_i,
$ the first term of the r.h.s. can be rewritten as $$\label{form2}
\frac{\beta^2M}{4|\Lambda|}\sum_nE{\langle (q_{12}^{(n)})^2 \rangle}.$$ As for the second term, defining $$w^+_{mn}=\sup_{i\in\Omega_m,j\in\Omega_n}\frac{w(i-j;\gamma)}{W(\gamma)}$$ and using the straightforward inequality $2x y\le x^2+y^2$, one has $$\begin{aligned}
\label{aux}
& & \frac1{|\Lambda|}\sum_{i,j\in\Lambda}\frac{w(i-j;\gamma)}{W(\gamma)}
E{\langle \sigma^1_i\sigma^2_i\sigma^1_j\sigma^2_j \rangle}
\\
\nonumber
& &
\le \frac{M^2}{2|\Lambda|}\sum_{n,m}w^+_{mn}E{\langle (q_{12}^{(n)})^2
+(q_{12}^{(m)})^2 \rangle}.\end{aligned}$$ In the Kac limit $\gamma\to0$, the diagonal terms $n=m$ give a vanishing contribution. As for the nondiagonal ones, one observes that $$\lim_{\gamma\to0}\sum_{m(\ne n)} w^+_{mn}=\frac1M,$$ where the summation runs only on one of the two indices, so that finally the r.h.s. of (\[aux\]) is bounded above by $$\begin{aligned}
\frac M{|\Lambda|}\sum_nE{\langle (q_{12}^{(n)})^2 \rangle},\end{aligned}$$ apart from a negligible error term. Together with Eqs. (\[form1\]) and (\[form2\]), this proves (\[bellissima\]) and therefore the Theorem.
As a side remark, it is easy to employ this method, together with that of [@kacnoi], to obtain a new proof of the existence of the thermodynamic limit for the SK model, independent of the convexity argument developed in [@limterm].
It is possible to generalize this theorem to the “diluted Kac spin glass” case [@kacnoi] where each given spin $\sigma_i$ interacts with a finite random number of other spins $\sigma_j$, which are chosen randomly according to a probability distribution that decays to zero on the scale $\xi$, as $|i-j|$ diverges. In the Kac limit $\xi\to
\infty$, one can prove that the free energy of the model converges to that of its mean field counterpart, which in that case is the Viana-Bray model [@viana]. Full details of the proof are given in [@kacvb].
A second generalization of our result is to consider two replicas of the system, coupled via a term depending on their mutual overlap. This problem has been considered for instance in [@franzparisi] and is relevant for the study of glassy dynamics, especially if applied to models which exhibit “one-step replica symmetry breaking” [@MPV]. The new feature here is that, at the mean field level, the free energy of the coupled system can be expressed [@franzparisi] in terms of an effective potential depending on the overlap, which turns out to be [*nonconvex*]{}. It was argued in [@fp] that a minimal modification of the theory in finite dimension requires restoration of the convexity through the Maxwell construction. This, analogously to the ordered case [@lp], emerges naturally in the Kac limit of finite range models. We plan to report on this soon [@aes].
The main interest of the result presented in this Letter is that it could represent, for spin glasses, a first step toward an expansion around the mean field case, which would hopefully shed some light on the nature of the spin glass phase for models with finite, albeit large, interaction range. This hope is supported by the fact that a similar program has been successfully carried out recently for non-random ferromagnetic spin systems [@cp] [@bz] [@bp] and continuous particle systems [@lpres1], showing that in dimension $d\ge2$ it is possible to write a controlled expansion around the $\gamma=0$ point, and to prove rigorously that for large but finite $\xi$ the system has a phase transition (spin-flip symmetry breaking or liquid-vapor coexistence, respectively) with coexisting phases.
[**Acknowledgments**]{}
We would like to thank Francesco Guerra for many enlightening conversations. F.L.T. is grateful to the Condensed Matter Group of the ICTP for kind hospitality during the preparation of this work. This work was supported in part by the European Community’s Human Potential programme under contract “HPRN-CT-2002-00319 STIPCO”.
[99]{}
e-mail: [franz@ictp.trieste.it]{}
e-mail: [ftoninel@euridice.tue.nl]{}
M. Mézard, G. Parisi and M.A. Virasoro, [*Spin glass theory and beyond*]{}, World Scientific, Singapore (1987).
E. Marinari, G. Parisi, F. Ricci-Tersenghi, J. Ruiz-Lorenzo, F. Zuliani, J. Stat. Phys. [**98**]{}, 973 (2000).
D.S. Fisher, D.A. Huse, Phys. Rev. Lett. [**56**]{}, 1601-1604 (1986).
A.J. Bray and M.A. Moore, in [*Heidelberg Colloquium on Glassy Dynamics*]{}, eds. J.L. Van Hemmen and I. Morgenstern, Springer-Verlag (1986) p. 121.
C.M. Newman, D.L. Stein, J. Phys.: Condens. Matter [**15**]{}, R1319-R1364 (2003).
F. Guerra, F.L. Toninelli, Commun. Math. Phys. [**230**]{} (1), 71-79 (2002).
F. Guerra, Commun. Math. Phys. [**233**]{} (1), 1-12 (2003).
M. Aizenman, R. Sims, S.L. Starr, to appear. Preprint [cond-mat/0306386]{}.
M. Talagrand, C. R. Acad. Sci. Paris, Ser. I [**337**]{}, 111-114 (2003).
P. Contucci, S. Graffi, J. Stat. Phys, to appear. Preprint [math-ph/0302013]{}.
J. Fröhlich, B. Zegarlinski, Commun. Math. Phys. [**112**]{}, 553-566 (1987).
A. Bovier, J. Stat. Phys. [**91**]{}, 459-474 (1998).
F. Guerra, F.L. Toninelli, J. Phys. A [**36**]{} (43), 10987-10995 (2003).
J.L. Lebowitz, O. Penrose, J. Math. Phys. [**7**]{} (1), 98-113 (1966).
A. Bovier, V. Gayrard, P. Picco, in [ *Mathematical Aspects of Spin glasses and Neuronal Networks*]{}, eds. A. Bovier and P. Picco, Progress in Probability [**41**]{}, Birkhäuser (1998) pp. 3-89.
P.A. Vuillermot, J. Phys. A [**10**]{}, 1319-1333 (1977).
A.C.D. van Enter, J.L. van Hemmen, J. Stat. Phys. [**32**]{}, 141 (1983).
D. Sherrington, S. Kirkpatrick, Phys. Rev. Lett. [**35**]{}, 1792-1796 (1975).
M. Kac, Phys. Fluids [**2**]{}, 8 (1959).
L. Viana, A. J. Bray, J. Phys. C [**18**]{}, 3037 (1985).
S. Franz, F.L. Toninelli, [*The Kac limit for diluted spin glasses*]{}, to appear.
S. Franz, G. Parisi, J. Phys. I (France) [**5**]{}, 1401 (1995).
S. Franz, G. Parisi, Phys. Rev. Lett. [**79**]{}, 2486 (1997); Physica A [**261**]{}, 317 (1998).
S. Franz, F. L. Toninelli, in preparation
M. Cassandro, E. Presutti, Markov Proc. Rel. Fields [**2**]{}, 241 (1996).
A. Bovier, M. Zahradník, J. Stat. Phys. [**87**]{}, 311-333 (1997).
T. Bodineau, E. Presutti, Commun. Math. Phys. [**189**]{}, 287-298 (1997).
J.L. Lebowitz, A. E. Mazel, E. Presutti, Phys. Rev. Lett. [**80**]{}, 4701-4704 (1998); J. Stat. Phys. [**94**]{}, 955-1025 (1999).
|
---
abstract: 'Atomic Beam Probe (ABP) is an extension of the routinely used Beam Emission Spectroscopy (BES) diagnostic for plasma edge current fluctuation measurement at magnetically confined plasmas. Beam atoms ionized by the plasma are directed to a curved trajectory by the magnetic field and may be detected close to the wall of the device. The arrival location and current distribution of the ions carry information about the plasma current distribution, the density profile and the electric potential in the plasma edge. This paper describes a micro-Faraday cup matrix detector for the measurement of the few microampere ion current distribution close to the plasma edge. The device implements a shallow Faraday cup matrix, produced by printed-circuit board technology. Secondary electrons induced by the plasma radiation and the ion bombardment are basically confined into the cups by the tokamak magnetic field. Additionally, a double mask is installed in the front face to limit ion influx into the cups and supplement secondary electron suppression. The setup was tested in detail using a Lithium ion beam in the laboratory. Switching time, cross talk and fluctuation sensitivity test results in the lab setup are presented, along with the detector setup to be installed at the COMPASS tokamak.'
author:
- 'D. I. Réfy'
- 'S. Zoletnik'
- 'D. Dunai'
- 'G. Anda'
- 'M. Lampert'
- 'S. Hegedűs'
- 'D. Nagy'
- 'M. Palánkai'
- 'J. Kádi'
- 'B. Leskó'
- 'M. Aradi'
- 'P. Hacek'
- 'V. Weinzettl'
bibliography:
- 'article.bib'
title: 'Micro-Faraday cup matrix detector for ion beam measurements in fusion plasmas'
---
Introduction\[Sect.introduction\]
=================================
The ELMy H-mode operation regime of magnetically confined fusion plasmas has been explored for several decades[@Wagner2007] and is still in the focus of fusion research, as it is considered to be the main operating regime of a future fusion reactor. Alongside the benefits of H-mode in terms of improved energy confinement compared to L-mode operation, the so-called Edge Localized Modes (ELMs) are of severe concern for a future reactor. They are an important means of extracting impurities from the plasma, but at the same time they periodically expel up to $20\%$ of the total plasma energy on a millisecond time scale[@Doyle2007]. Such a high power load can damage the plasma facing components of the machine.
The understanding of the physics background of the ELM triggering mechanism is indispensable in order to control the ELMs and to mitigate their effect. The key physics quantities for the peeling-ballooning instability which can describe the type-I (large) ELMs are the plasma edge pressure gradient and current density[@Snyder2002]. The pressure gradient can be measured by several techniques which are available on numerous machines, e.g. Alkali-BES[@Refy2018] and reflectometry[@Wang2004]$^{,}$[@Meneses2006] for electron density profile measurement, electron cyclotron emission[@Classen2010]$^{,}$[@Luna2004] for electron temperature, charge exchange recombination spectroscopy[@Isler1994] for both the ion temperature and ion density, and Thomson scattering[@Pasqualotto2004] for both the electron temperature and the electron density measurement. On the other hand, there are only limited possibilities for the edge current density measurement, therefore it is not measured routinely.
A trajectory of a charged-particle beam passing through a magnetically confined plasma is determined by the energy, charge state and mass of the given particle, the magnetic field structure and the electric potential distribution. The measurement of the trajectory of a monoenergetic ion beam thus enables characterization of the magnetic field and potential. Several techniques have been proposed both for the diagnostic beam production and the ion beam trajectory detection.
The Heavy Ion Beam Probe (HIBP) technique [@Jobes1970] utilizes a primary beam of singly charged ions of a large mass species (typically $Cs^{+}$), which may undergo a second ionization after injection into the plasma due to collisions with the plasma particles. The doubly charged ions are separated from the primary beam at the point of second ionization, follow a path defined by the magnetic field, and exit the confined region at one point, reaching the wall of the machine. The standard HIBP technique is based on the detection of the spatial [@Beckstead1997] and energy distribution of the leaving ions, which reflects the variation of the plasma current, electron density and plasma potential mostly at the second ionization location. Additional measurements have also been proposed, such as secondary ion beam emission imaging [@Demers2003] and beam velocity direction measurement[@Fimognari2016]. Variations of the HIBP technique indeed allow excellent measurements, but serious limitations arise from the necessary high (up to MeV) beam energy and the limited access to the plasma.
A proposed version of HIBP is called the Laser-accelerated Ion-beam Trace Probe (LITP) [@Yang2014]. It intends to replace the electrostatic particle acceleration method by laser ablation. In this case a pulsed beam of MeV ions with high energy spread and multiple charge states is obtained. The concept intends to measure the spatial distribution of ions at the wall of the fusion machine which enables the reconstruction of the radial electric field, electron density[@Chen2014] and the poloidal magnetic field [@Yang2016].
The Atomic Beam Probe[@Berta2013] (ABP) technique to be discussed in this paper is an extension of the routinely used Alkali (typically Lithium or Sodium) Beam Emission Spectroscopy diagnostic[@Zoletnik2018]. The ion source of the system is replaceable on a day timescale, thus the ion species can be varied flexibly. Lithium and Sodium are used routinely, and other, heavier species (Rb, Cs) are also possible. ABP intends to measure the spatial distribution of an ion beam originating from the atomic beam after ionization in the plasma. The ion beam path is affected by the magnetic field and the electric potential, thus the ion beam shape enables measurement of these quantities. The beam current depends on the plasma density, therefore information can be obtained on this quantity as well. Unlike in the case of HIBP, the ion energy cannot be measured with the required precision inside the fusion device, therefore the diagnostic relies solely on the ion beam current distribution measurement. On the other hand, the technique is a simple extension of a standard diagnostic and potentially offers microsecond-scale temporal resolution.
For detection of the ion beam distribution a collection plate matrix was proposed and tested in the COMPASS tokamak[@Hacek2018]. Results showed that several improvements are necessary to be able to extend the operation space towards standard H-mode scenarios. The detector first has to be tested in a laboratory environment to quantify its performance and to be able to interpret the measurements in a tokamak environment. This paper presents the design and laboratory testing of a new ABP detector. It has to be noted that an alternative scintillator-based detector concept has also been proposed[@Sharma2017], which offers better spatial resolution but imposes more limitations on the geometry.
This paper is organized as follows: the working principle of the ABP is described briefly in section \[Sect.principle\]. The detector head for the ion current measurement is presented in section \[Sect.detector\]. The setup for the laboratory experiment is detailed in section \[Sect.labexp\] followed by the measurement results in section \[Sect.labexpresults\]. The ABP setup for the tokamak environment is described in section \[Sect.tokexp\], and the paper is summarized in section \[sect:summary\].
ABP working principle\[Sect.principle\]
=======================================
The ABP is an extension of the alkali BES diagnostic described in detail in ref[@Zoletnik2018]. The diagnostic is routinely used for electron density profile measurement at several plasma experiments, and works as follows. An accelerated atomic beam is injected into the plasma, where the beam atoms are excited and ionized by plasma particles. The ionization process results in a gradual loss of the atoms in the beam. The ionization rate is such that the beam can penetrate only the edge of the plasma, thus Li-BES systems are used for electron density profile and fluctuation measurement of the outer plasma regions only, namely the plasma edge and scrape off layer (SOL). Spontaneous de-excitation of the beam atoms results in a characteristic photon emission which can be detected through an optical system. The distribution of the light emission along the beam (light profile) can be measured by a detector system, from which the electron density distribution (density profile) can be calculated [@Schweinzer1992]$^,$[@Fischer2008].
![100 keV Lithium (a) and 45 keV Sodium (b) ion beam trajectories (solid purple lines) along with the ionization location (red squares), the detector location (solid blue line) and the detector plane impact location (green squares) are shown. The poloidal cross section of the COMPASS vacuum vessel and the magnetic surfaces with the highlighted last closed flux surface are also indicated. (shot: \#15579, time: 1100 ms) \[fig:polcut\]](abp_poloidal_cut_2.pdf){width="\linewidth"}
The ions are deflected from the beam by the magnetic field, and may reach the wall of the machine. Figure \[fig:polcut\] shows the modelled path of 100 keV Lithium (a) and 45 keV Sodium (b) ions in the COMPASS alkali beam diagnostic[@Anda2016]. Ions from the beam injected from the low field side (LFS) midplane reach the wall of the machine approximately 22-27 cm above the midplane on the LFS. Here the ion detector can be placed into a port. The red squares indicate the location of the ionization, while the green squares show the intersection of the ion trajectories and the detector surface plane. The ion trajectories are deflected in the toroidal direction (out of the plane of the drawing) by the poloidal magnetic field resulting from the plasma current. This toroidal displacement needs to be measured by the detector. Ions starting at different locations in the plasma hit the detector at different elevation and different toroidal displacement, therefore a two-dimensional measurement is desirable. Besides the toroidal deflection, a change in the vertical detection position is caused by the plasma potential change or toroidal magnetic field change. Intensity modulations are caused by plasma density fluctuations at the ionization point and outside. Disentangling these effects is not straightforward and will be the subject of other publications.
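The geometry of Figure \[fig:polcut\] can be reproduced qualitatively with a simple guiding calculation. The sketch below is not the trajectory code used for the figure: it integrates the Lorentz force with a standard Boris pusher in a strongly simplified field model (a $B_0R_0/R$ vacuum toroidal field plus the poloidal field of a line current standing in for the plasma current), and the values of $R_0$, $B_0$, the current and the ionization point are assumed for illustration only. The signs of the vertical deflection and of the toroidal shift depend on the assumed field and current orientation.

```python
import numpy as np

QE, AMU = 1.602176634e-19, 1.66053906660e-27
m, q = 7.0 * AMU, QE                       # singly charged 7Li ion
E_beam = 100e3 * QE                        # 100 keV beam energy
R0, B0, Ip = 0.56, 1.15, 2.0e5             # assumed major radius [m], toroidal field [T], current [A]

def B_field(x):
    """Toroidal field B0*R0/R plus a crude poloidal field (line current at R=R0, z=0)."""
    R = np.hypot(x[0], x[1])
    e_phi = np.array([-x[1], x[0], 0.0]) / R
    dr, dz = R - R0, x[2]
    rho = np.sqrt(dr * dr + dz * dz) + 1e-9
    Bpol = 2.0e-7 * Ip / rho                                # mu0 I / (2 pi rho)
    e_pol = np.array([-dz * x[0] / R, -dz * x[1] / R, dr]) / rho
    return B0 * R0 / R * e_phi + Bpol * e_pol

def boris_push(x, v, dt):
    """Standard Boris rotation, magnetic field only (no electric field assumed)."""
    tv = q * B_field(x) * dt / (2.0 * m)
    sv = 2.0 * tv / (1.0 + tv @ tv)
    v_new = v + np.cross(v + np.cross(v, tv), sv)
    return x + v_new * dt, v_new

v0 = np.sqrt(2.0 * E_beam / m)
x = np.array([0.75, 0.0, 0.0])             # assumed ionization point on the midplane
v = np.array([-v0, 0.0, 0.0])              # beam injected radially inward from the LFS
dt = 1.0e-10
for step in range(40000):
    x, v = boris_push(x, v, dt)
    if step > 1000 and np.hypot(x[0], x[1]) > 0.75:         # ion back out at the launch radius
        break
R_exit, phi = np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])
print(f"exit after {step} steps:  R = {R_exit:.3f} m,  z = {x[2]:+.3f} m,  "
      f"toroidal shift ~ {R_exit*phi*1e3:+.1f} mm")
```

With these assumed parameters the ion returns to the launch radius roughly two Larmor radii away from the midplane, consistent with the 22-27 cm scale quoted above, and the poloidal field adds a toroidal displacement of the order of millimetres to centimetres.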
Detector setup\[Sect.detector\]
===============================
Detector plate\[subs.head\]
---------------------------
The detector is conceived as a two-dimensional matrix of ion collector metal plates. The simulation of the ion trajectories shows that the detector has to be placed inside the tokamak vessel, typically a few centimeters from the last closed flux surface (LCFS), facing the plasma. This implies a high radiation level, mostly ultraviolet and X-ray, at the detector, which causes secondary electron production on the detector surface. The secondary electron current can be significantly higher than the ion current, therefore the detector design should minimize secondary electron effects. Secondary electrons leave the metal surface with energies of a few hundred eV. In the few Tesla magnetic field of the tokamak the Larmor radius of these electrons is below 100 micron. If the detector plate is parallel to the tokamak magnetic field, a few hundred micron deep cup can prevent them from leaving. Due to the variable magnetic configuration a fully parallel magnetic field cannot be ensured at the detector, but the deviation can be kept below 20 degrees. In this case a toroidal width/depth ratio of 3 for the cup is sufficient to confine the electrons in all cases.
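The numbers quoted above are easy to reproduce. The short script below is illustrative only; the field strength, electron energies, tilt angles and cup depth are assumed values, and the full kinetic energy is taken to be perpendicular to the field. It prints the secondary-electron and beam-ion Larmor radii, and the lateral distance an electron guided along a tilted field line must cover before it gains the cup depth.

```python
import numpy as np

QE, ME, AMU = 1.602176634e-19, 9.1093837015e-31, 1.66053906660e-27

def larmor_radius(E_eV, mass, B):
    """Larmor radius assuming the full kinetic energy is perpendicular to B."""
    v = np.sqrt(2.0 * E_eV * QE / mass)
    return mass * v / (QE * B)

B = 1.15                                  # assumed field at the detector [T]
for E in (100.0, 300.0, 500.0):           # secondary electron energies [eV]
    print(f"electron {E:5.0f} eV : r_L = {larmor_radius(E, ME, B)*1e6:6.1f} um")
print(f"Li+ 100 keV        : r_L = {larmor_radius(1.0e5, 7.0*AMU, B)*1e2:6.1f} cm")

depth = 0.8e-3                            # Faraday cup depth [m]
for tilt in (5.0, 10.0, 20.0):            # field tilt with respect to the cup surface [deg]
    lateral = depth / np.tan(np.radians(tilt))
    print(f"tilt {tilt:4.1f} deg : {lateral*1e3:4.1f} mm of lateral travel needed to gain the cup depth")
```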
The cup size is determined the following way. According to computer simulation of the COMPASS measurement setup, the ion beam toroidal shift on the detector in response to an edge current perturbation is 0.1-1 mm/kA. In earlier experiments a Lithium beam reduced to 5 mm still gave good signal[@Hacek2018], therefore we considered this beam diameter. We intend to detect at least 1 mm beam movement, thus the 5 mm beam needs to be resolved to a few measurement channels. As a consequence, the Faraday cup toroidal width was set to 2 mm. In the vertical direction much less resolution is needed, therefore the height was set to 5 mm. The total detector size is limited by the COMPASS port width, therefore a 5x10 (row$\times$column) matrix was possible. The final dimensions of each Faraday cup are $1.8\times 4.8 \times 0.8$ mm (toroidal width, height, depth), the distance between the cups is 0.4 mm in both directions. Figure \[fig:detectorhead\] shows the detector board from the front (a) and the back (b). Each cup is connected to a wire connection point at the lower edge of the detector panel which is shielded from the plasma.
![Front view (a) and back view (b) of the detector panel. The $5 \times 10$ faraday cup matrix and the connector panel below can be seen in the front view, while the wiring between the connectors and the plates on the back view. \[fig:detectorhead\]](ABP_detector_head_1.pdf){width="\linewidth"}
![Schematic drawing of the detector layer setup. a) Upper layer with one copper-plated side, b) Bottom layer with double sided copper-plating. The two layers are laminated together and then electroplated to form the detector setup. \[fig:circuit\]](abp_detector_schematic.pdf){width="\linewidth"}
The detector was manufactured using standard PCB (Printed Circuit Board) technology; however, PCB production is not normally used to produce three-dimensional structures, which were a necessity for the Faraday cups in the detector. Another difficulty had to be overcome due to the possible high heat loads on the detector board, which excluded the use of standard FR4 fiberglass material. A possible candidate for the PCB was found in the high frequency radio industry, which utilizes ceramics as the base material of the circuit board (RO4350B)[@RO4000]. Ceramics can withstand the high heat loads during Atomic Beam Probe measurements, the material meets the flame-retardant standards of the Restriction of Hazardous Substances (RoHS) directive, and the PCB can be manufactured using a standard FR4 production line. The other difficulty, producing a three-dimensional structure in the PCB, was overcome by an advanced layer setup. Three layers were prepared for the setup, as can be seen in Figure \[fig:circuit\]. The first layer is used for the routing of the cables from the Faraday cups to the electrical connection. The second layer provides the bottom part of the Faraday-cups. The third layer is first milled for the slits of the Faraday-cups. In the next step, the first two and the third layer are laminated together, which is followed by copper metallization, which connects the bottom of the Faraday-cups to the top layer and also forms the sides of the cups. As a last step, the wires are gold plated, finalizing the detector board. The final detector was manufactured by the Hungarian team of the MMAB Group [@MMAB].
Detector mask {#subs.mask}
-------------
Previous measurements[@Hacek2018] showed that electrons and ions as well as UV radiation reaching any metallic surface induce secondary electrons which make the measurement hard to interpret. The detector plate therefore has to be masked so that the ions can only reach the detector at the Faraday cups. In addition, any secondary electrons generated at the Faraday cup edges, e.g. by stray UV radiation, should be suppressed by an external electric field, while the detector plate and its front face have to remain on ground potential. To fulfill these requirements, a double mask is placed in front of the detector, as shown in the front, front-side and zoomed views in Figure \[fig:detectormask\] (a), (b) and (c). The openings in the mask are 1.2 mm $\times$ 3.6 mm, somewhat smaller than the Faraday cup size. The masks are separated electrically from each other and from the detector by insulating washers. The spatial separation is 1 mm between each element. The mask closer to the Faraday cups can be either biased or grounded, while the outer one facing the plasma is grounded to prevent it from collecting charge from the plasma.
![Front view (a) of the detector masks installed in the laboratory setup, the detectors are visible behind, side-front view (b) of the detector head with the double mask and the cables behind, and a zoomed plot (c) showing the masks, the insulating spacers and screws. \[fig:detectormask\]](ABP_detector_head_mask_2.pdf){width="\linewidth"}
Figure \[fig:detectordraft\] shows a sketch of the typical ion impact scenario, looking at the detectors from the top, quasi-perpendicular to the local magnetic field. The Larmor radius of a 100 keV lithium ion is on the order of 10 cm in a 1 T magnetic field, so its trajectory can be considered straight on the mm length scale, as indicated with the red arrow. The Larmor radius of a few-hundred-eV electron in a 1 T magnetic field is on the order of 10 $\mu$m. As the magnetic field is close to parallel to the cup surface, the electrons are primarily prevented from leaving the cup by their small Larmor radius. Electrons which might leave the surface travel along the magnetic field line, indicated with the blue arrow, and hit the side of the cup. Should any electrons be generated at the edges of the cups, they are pushed back by the electric field produced by the biased rear mask, indicated by the green arrows.
![Double mask concept: ions follow straight trajectories on the mm scale, as indicated by the red arrow. The front, grounded mask prevents ion impact on the surfaces between the detectors. Direct ion impact on the detector surface induces secondary electrons, which are prevented from leaving the cup either by their small Larmor radius or by hitting the wall of the cup after following the magnetic field lines (indicated by the blue arrow). Electrons generated at the cup edges are pushed back by the electric field produced by the biased rear mask (indicated by the green arrows). \[fig:detectordraft\]](mask_draft_2.pdf){width="\linewidth"}
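As an illustrative cross-check of the length scales quoted above, the following minimal Python sketch evaluates the non-relativistic Larmor radii of a 100 keV lithium ion and a few-hundred-eV secondary electron in a 1 T field. It is a back-of-the-envelope calculation, not part of the diagnostic software.

```python
import math

E_CHARGE = 1.602e-19    # elementary charge [C]
M_ELECTRON = 9.109e-31  # electron mass [kg]
M_LI7 = 7 * 1.661e-27   # Li-7 ion mass [kg]

def larmor_radius(energy_ev, mass_kg, b_tesla, charge_c=E_CHARGE):
    """Non-relativistic Larmor radius, assuming all kinetic energy is
    in the motion perpendicular to the magnetic field."""
    v = math.sqrt(2.0 * energy_ev * E_CHARGE / mass_kg)
    return mass_kg * v / (charge_c * b_tesla)

# 100 keV Li ion in 1 T: ~0.12 m, hence straight on the mm scale
print(larmor_radius(100e3, M_LI7, 1.0))
# 300 eV secondary electron in 1 T: a few tens of microns
print(larmor_radius(300.0, M_ELECTRON, 1.0))
```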
Laboratory experiment setup\[Sect.labexp\]
==========================================
A laboratory experiment was conducted to verify secondary electron suppression, temporal resolution and crosstalk of the detector. A 30 kV Li-beam injector was used as the ion source in a setup shown in Figure \[fig:diagsetupblock3\].
![image](measurement_setup_far.pdf){width="\linewidth"}
Lithium beam injector\[Sect.injector\]
--------------------------------------
The ion beam required for the measurement is produced by a simplified version of the Li-beam injector described in Ref. [@Anda2016]. The approximately 1 mA ion beam is extracted at 3 kV and accelerated by a further 27 kV from a thermionic emitter with a two-stage ion optic. The beam is injected through a flight tube and chopped (deflected so that the beam cannot reach the detector) with a pair of deflection plates. One plate is grounded, while the other is connected to a fast switch which can change between ground and a power supply potential of up to 1 kV with a switching frequency of up to 500 kHz. The distance between the detector and the chopper is about 5 m. The beam has about 2 cm FWHM at the detector, thus covering all channels when the beam is not modulated. When the beam is modulated with 900 V, the deflection at the detector plate is 15 cm, i.e. the beam is completely off the detector. Note that the beam conditions, such as emitter temperature, emitter depletion, extraction voltage, beam current and, accordingly, beam focusing, can differ significantly between the measurement series presented in this paper; thus, the net measured current per detector is not necessarily comparable between the experiments.
Detector holder
---------------
A detector holder was designed to mimic the tokamak magnetic field. Two neodymium magnets with about 0.5 T surface induction were placed on two sides of a holder which is hung from the vacuum flange. The detector is placed in the middle of the magnets, where the field is the strongest and the most homogeneous. The setup was modelled with Finite Element Method Magnetics[@Meeker2018], as shown in Figure \[fig:comsol\]: the induction vector is straight at the detector position (indicated with a red line), and the field strength varies by 7$\%$ (326-347 mT) between the edge and the center of the 20 mm wide detector. The detector is perpendicular to the geometrical beam line axis; however, the beam is slightly deflected downwards by the magnetic field (30 keV Li ions, 0.4 T homogeneous field, 2 cm path in the field, $r_{L}$=16 cm, deflection: 2.5 mm, $7^\circ$). The major difference from the setup described in Ref. [@Hacek2018] is that there the magnetic field was less homogeneous at the detector position, which is indicated with a blue line in Figure \[fig:comsol\].
![Strength of the magnetic field between the neodymium magnets as a function of the distance from one magnet. The setup viewed from the top is shown in the contour plot; the graph shows the horizontal cut of the 2D simulation results at the middle of the setup, indicated with a black line. The detector position is indicated with a red line. The detector position for the setup of Ref. [@Hacek2018], with a less homogeneous field, is also indicated with a blue line. \[fig:comsol\]](FEMM_plot.pdf){width="\linewidth"}
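The quoted deflection parameters can be reproduced with a simple estimate. The sketch below (assuming a sharp-edged, homogeneous 0.4 T field region and the small-angle arc approximation) returns a Larmor radius of about 16 cm and a bending angle of about 7 degrees for 30 keV Li ions; the transverse shift at the detector additionally depends on where the detector sits relative to the field region.

```python
import math

E_CHARGE = 1.602e-19   # C
M_LI7 = 7 * 1.661e-27  # kg

def bending(energy_ev, b_tesla, path_m):
    """Larmor radius and bending angle of a Li-7 ion crossing a
    homogeneous field region of length path_m (small-angle arc)."""
    v = math.sqrt(2.0 * energy_ev * E_CHARGE / M_LI7)
    r_l = M_LI7 * v / (E_CHARGE * b_tesla)
    return r_l, math.degrees(path_m / r_l)

r_l, angle = bending(30e3, 0.4, 0.02)
print(f"r_L = {100 * r_l:.0f} cm, bending angle = {angle:.0f} deg")
```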
Figure \[fig:detectorholder\] shows the Computer-Aided Design (CAD) model with one magnet and the covering mask removed (a), the full setup with the vacuum flange (b) and the detector head with the magnet holder zoomed (c). The Faraday cup signals are led to the vacuum feedthrough by a ribbon cable, and the external voltage and grounding are also applied there.
![CAD model with one magnet and the covering mask removed (a), full setup with the vacuum flange (b) and the detector head with the magnet holder zoomed (c). \[fig:detectorholder\]](ABP_detector_holder_1.pdf){width="\linewidth"}
Data acquisition and control {#Sect.labcontrol}
----------------------------
The signals from 20 Faraday cups were connected through a 2 m bundle of coaxial cables to amplifiers consisting of a current-sensing stage with a 2 kOhm resistor and a second-stage amplifier with a gain of 100. The analogue bandwidth was 1 MHz and the differential signals were digitized at 2 MHz with 14-bit resolution. The sampling of the digitizer was synchronized to the beam modulation at the deflection plate pair. A 1 kOhm series resistor and voltage-limiting diodes were installed at the amplifier input to protect it from overcurrent from the plasma or the beam. The time constant formed by this resistor and the cable capacitance contributes to the final analogue bandwidth of the setup. This setup is identical to the one used on the COMPASS tokamak.
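For orientation, the sketch below estimates the signal level and input bandwidth of this chain. The 2 kOhm sense resistor, the gain of 100 and the 1 kOhm protection resistor are taken from the description above; the cable capacitance is an assumed, purely illustrative value.

```python
import math

def output_voltage(ion_current_a, r_sense=2e3, gain=100.0):
    """Output of the current-sensing stage followed by the x100 stage."""
    return ion_current_a * r_sense * gain

def input_bandwidth(r_protect=1e3, c_cable=300e-12):
    """-3 dB bandwidth of the RC low-pass formed by the protection
    resistor and the assumed cable capacitance."""
    return 1.0 / (2.0 * math.pi * r_protect * c_cable)

print(output_voltage(100e-9))  # ~20 mV for a ~100 nA ion current
print(input_bandwidth())       # ~530 kHz for an assumed 300 pF cable
```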
Unused cups were connected to ground. Proper grounding turned out to be essential to prevent noise pickup; therefore, the detector setup was isolated from the beam injector and the beam flight tube and grounded at the amplifiers and digitizers.
Laboratory experiment results {#Sect.labexpresults}
=============================
Mask biasing test \[subs:biasing\]
----------------------------------
The aim of this test was to check the amount of spurious signal in the Faraday cups caused by secondary electrons generated by ion impact. If the beam is injected fully perpendicular to the detector surface, all ions hit the bottom of the Faraday cups and the secondary electrons are confined in the cups by the magnetic field. However, in the tokamak experiment the beam is not always perpendicular, and in the laboratory experiment it is deflected by about 7 degrees by the magnets. In this way some electrons are generated at the sides of the cups close to the top and may escape. These secondary electrons are suppressed when a negative voltage is applied to the mask, and extracted from the plate when the mask is positively biased.
The signal level was measured on the Faraday cup plates while the biasing voltage of the mask was scanned. Figure \[fig:biastest2\] shows the average current on the plates as a function of the biasing voltage. Figure \[fig:biastest2\] (a) corresponds to the homogeneous field setup and Figure \[fig:biastest2\] (b) to the inhomogeneous field setup, indicated with red and blue respectively in Figure \[fig:comsol\]. The systematic effect of the biasing can be seen when negative voltage is applied: even at -100 V the current drops from 0.6 $\mu$A to 0.45 $\mu$A in the homogeneous case. The effect is more pronounced when the detector is in the inhomogeneous field, where the current changes by a factor of 8 between the negatively and positively biased mask cases (0.4 $\mu$A to 0.05 $\mu$A). This result confirms that the geometrical electron suppression described in section \[subs.mask\] works reliably for small pitch angles between the detector surface and the local magnetic field, and suggests a small (few hundred volts) negative biasing for the measurement. The beam current was relatively stable during each measurement series, but small drifts could occur, which can explain the small deviations from the trend. The beam current changed considerably between the homogeneous case (0.45 $\mu$A ion current) and the inhomogeneous case (0.05 $\mu$A ion current) due to the different emitter conditions and beam focusing.
![Average current on the detectors as a function of biasing voltage. The detector was placed in a homogeneous field (a) and in an inhomogeneous field (b), as indicated with the red and the blue lines respectively in Figure \[fig:comsol\]. There is a clear decrease when negative voltage is applied.\[fig:biastest2\]](abp_bias_test_series_3_19_edited.pdf){width="\linewidth"}
Cross talk test \[subs:crosstalk\]
----------------------------------
The aim of this measurement was to characterize the cross talk between channels due to escaping secondary electrons. A special mask was installed with only one 1.2 mm $\times$ 3.6 mm window, allowing ions to reach directly only one Faraday cup.
The 30 keV, 1 mA ion beam was injected and chopped at 100 Hz. A measurement series was taken with different mask biasing: grounded, +300 V and -300 V. The raw signals with 10 $\mu$s integration time are plotted for all channels in Figure \[fig:crosstalkall\] for the different biasing cases. The beam current changes on the 10 ms time scale since the ion beam focusing time can be several tens of milliseconds, due to the space charge being compensated by back-flowing electrons in the beam. There is about 1.5% crosstalk on the channel below the opening relative to the open channel in the unbiased and positively biased cases (note the different vertical axis scales), while no crosstalk is seen in the negatively biased case. It is also visible that the signal on the bottom channel is in antiphase with the reference channel, indicating the effect of the secondary electrons, since the electrons cause a negative current relative to the beam-off phase.
This test also suggests that the measurement should be done with small (few hundred volts) negative biasing.
![Cross talk test: the upper signals show the signal level on the channel behind the opening in the mask, while the lower signals show the signal level on the channel below the opening, for various biasing: mask grounded (a,b), mask biased with +300 V (c,d), mask biased with -300 V (e,f). \[fig:crosstalkall\]](ABP_crosstalk_test.pdf){width="\linewidth"}
Fast switching test \[subs:fastswitch\]
---------------------------------------
The aim of this measurement was to determine the change-over time between the beam-on and beam-off periods when fast, 100 kHz chopping is applied. This fast beam chopping will be essential in the tokamak experiment to differentiate between the ion beam and the background signal. The measurement was run with 2 MHz sampling and the mask was biased with -300 V. The raw signal of one channel is plotted in Figure \[fig:fastchop\] without integration for a 50 $\mu s$ long data set (5 chopping periods), showing each Analog-to-Digital Converter (ADC) sample. The analogue bandwidth is 1 MHz, however the beam change-over time is approximately 2 $\mu s$ (4 samples). This is a result of the integration time introduced by the cable capacitance and the overload protection resistor at the input of the amplifier. Nevertheless, this test confirms that the background can be measured on the 10 $\mu s$ timescale.
![Raw signal of an ABP channel during 100 kHz beam modulation. \[fig:fastchop\]](ABP_fastchop_test.pdf){width="\linewidth"}
Position sensitivity test \[subs:fluct\]
----------------------------------------
The aim of this test is to characterize the sensitivity of the measurement to ion beam movement. The idea was to place the detector close to the deflection plates, so that the ion beam deflection could be reduced to 0.1 mm. The measurement setup was changed so that the ABP detector was placed close to the ion gun, and a 5 mm diameter beam-reducing diaphragm was installed, as indicated with a blue line in Figure \[fig:diagsetupblock4\]. The angle of the beam deflection was calculated from the deflection voltage assuming a homogeneous deflecting electric field between the plates. With different deflection voltages applied, the beam travels from the hole to the detector at different angles, thus the beam moves on the detector surface. Due to the deflection the beam intensity also changes slightly, as different areas of the 2 cm wide beam pass through the diaphragm. Note that the deflection plates are installed at an angle of about 30 degrees to the horizontal for technical reasons, thus the beam movement is correspondingly oblique.
![Laboratory diagnostic setup with ABP placed close to the ion source \[fig:diagsetupblock4\]](measurement_setup_2.pdf){width="\linewidth"}
### Slow modulation test\[subs:slowswitch\]
A 20 keV beam with approximately 0.3 mA extracted ion current was used for this measurement. The beam was deflected with a 100 Hz square signal, the amplifier signal was digitized with 100 kHz sampling for 3 s, and the middle mask was biased with -300 V. The deflection voltage was set to 0 V, 500 V and 900 V. The beam was turned off after 2.2 s to obtain a background measurement as well. The ion beam distribution was measured on 20 channels in a 4 $\times$ 6 channel part of the detector matrix, missing the 4 corner channels, as only a 20-channel analogue amplifier was available. Of these, 2 channels were broken at the detector plate connector (1-5 and 3-5, counting from the top left corner), providing an electronic background measurement for the whole measurement chain.
The subfigures in Figure \[fig:deflslowcontour\] show the ion beam current distribution on the detector for different deflection voltages; the dead channels are left blank. The bottom-right subplot shows the case without deflection, while the others show different deflection voltages. The calculated beam deflection and the shift in the center of mass of the distribution are printed above each plot. The center of mass of each distribution is indicated with an $\times$ sign. The beam is deflected towards the bottom left corner relative to the undeflected case, and the effect of the increasing deflection voltage is clearly visible. The measured beam deflection is about half of the calculated value when the calculated deflection is above 1 mm. This reduction in sensitivity is caused partly by the simplified beam deflection calculation assuming a homogeneous electric field between the plates, and partly by the limited number of measurement channels. Nevertheless, it is confirmed that the Faraday cup matrix is sensitive to sub-pixel beam movements.
![Beam intensity distribution on the detector head for different deflection voltages. The bottom-right figure shows the reference distribution without deflection. Center of mass of the distributions are indicated with $\times$ sign.\[fig:deflslowcontour\]](abp_deflection_test_series_contour_on_17_reduced.pdf){width="\linewidth"}
The average movement of the center of mass relative to the reference 0 V case is shown in Figure \[fig:deflcogvsvoltageslow\] as a function of the deflection voltage. The effect of the change in the chopper voltage on the center of mass movement is close to linear, and displacements on the order of 0.1 mm are resolvable with this technique.
![Average displacement of the center of mass of the ion current distribution during the slow, 100 Hz deflection measurement for different deflection voltages. \[fig:deflcogvsvoltageslow\]](abp_deflection_test_series_voltage_vs_deflection_17.pdf){width="\linewidth"}
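The center-of-mass evaluation used in these tests can be summarised by the short sketch below. The channel pitches are derived from the cup dimensions given earlier and should be treated as approximate; the dead-channel indices in the usage example are purely illustrative.

```python
import numpy as np

# Approximate channel pitch from the cup dimensions and gaps above.
PITCH_X_MM, PITCH_Y_MM = 2.2, 5.2

def centre_of_mass(currents, dead=()):
    """Current-weighted centre of mass of a (rows, cols) channel matrix,
    excluding broken channels listed in `dead` as (row, col) pairs."""
    w = np.array(currents, dtype=float)
    for r, c in dead:
        w[r, c] = 0.0
    rows, cols = np.indices(w.shape)
    total = w.sum()
    return ((cols * w).sum() / total * PITCH_X_MM,
            (rows * w).sum() / total * PITCH_Y_MM)

# Illustrative 4 x 6 distribution with two (hypothetical) dead channels.
example = np.random.rand(4, 6)
print(centre_of_mass(example, dead=[(0, 4), (2, 4)]))
```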
### Fast modulation test
The aim of this test was to verify that fast (microsecond scale) beam movements can be measured. The beam properties, the mask biasing and the deflection voltage steps were identical to those of the previous measurement, while the beam modulation was set to 100 kHz and the sampling to 1 MHz. The movement of the center of mass relative to the starting point is shown in Figure \[fig:deflcogfast\], with different colours for different deflection voltages and without integration. The modulation of the center of mass is above the noise level and fairly linear. Movements of approximately 0.2 mm are resolvable on the microsecond time scale.
![Movement of the center of mass of the ion current distribution during the fast, 100 kHz deflection measurement for different deflection voltages. The different colours correspond to different deflection voltages indicated in the legend. \[fig:deflcogfast\]](abp_deflection_test_series_COG_movement_18.pdf){width="\linewidth"}
Tokamak experiment setup {#Sect.tokexp}
========================
As described above, the ABP detector head has to be placed in the vacuum chamber of the torus, close to the LCFS. This section summarizes the considerations regarding the mechanical and electrical engineering aspects and the data acquisition system for the tokamak environment.
Detector head design
--------------------
Figure \[fig:detectorhead2\] shows the final detector head, which has been installed in the COMPASS tokamak since February 2018. The most conspicuous parts in the photo are the graphite frame and the double mask, along with the actuator mechanism above, while the CAD drawing shows the inner structure of the final detector head design. The grounded graphite frame acts as a local limiter and prevents direct contact of the plasma with the masks. The actuator mechanism enables moving the detector a few cm out of the tokamak port and also 2 cm to the side.
The Faraday cups and the detector masks are connected to a vacuum feedthrough with a Kapton shielded ribbon cable. The cables are hard soldered to the connector panel on the detector board as shown in Figure \[fig:detectorhead\].
![Photo (a) and CAD drawing (b) of the detector head for the tokamak measurements. The detector surface is barely visible behind the graphite limiter and the double mask. \[fig:detectorhead2\]](ABP_detector_head_2.pdf){width="\linewidth"}
Detector holder
---------------
The aim of the detector holder setup is to be able to move the detector, since detailed modelling of different plasma scenarios, beam energies and beam species showed that the detector position must be variable both vertically and horizontally to match the ion trajectories. It also carries the ABP signals from the detectors to the data acquisition system, and allows the detector to be retracted into the port to minimize deterioration when the detector is not in use.
To fulfill these requirements, an in-vessel setup was designed, consisting of the detector head, a horizontal actuator, a vertical actuator, an extension vacuum vessel, special cables, a double connector feedthrough and a manual actuator with a scale to ensure position reproducibility.
Figure \[fig:detectormovement\] shows the actuation possibilities of the setup; note that the picture is rotated by 90 degrees relative to its position in the tokamak. The vertical movement range is 218 mm to 273 mm in height above the midplane, while horizontally it is 0 to 22 mm counter-clockwise (looking from the top of the tokamak) relative to the beam injection position.
Figure \[fig:detectorholdertokamak\] shows how the detector holder is installed at the COMPASS tokamak: picture (a) shows the CAD model of the COMPASS vacuum vessel with the ABP setup on top. Picture (b) shows the detector head at its lowermost position, while the extension vacuum vessel and the actuator can be seen in picture (c), along with a photo of the actuator mechanics (d) and the vacuum feedthrough (e).
![Detector head on the left, and the actuator mechanics. The yellow arrows indicate the horizontal and the vertical movement capabilities respectively. \[fig:detectormovement\]](ABP_detector_head_movement.pdf){width="\linewidth"}
![A CAD drawing of a cut of the COMPASS tokamak vessel with the ABP detector setup on top can be seen in picture (a). Picture (b) shows the detector at its lowermost position in the port. Picture (c) shows the detector vacuum vessel from a different angle, while picture (d) shows the actuator mechanics, and picture (e) the vacuum feedthrough. \[fig:detectorholdertokamak\]](ABP_detector_head_tokamak.pdf){width="\linewidth"}
Summary and conclusions {#sect:summary}
=======================
A purpose-designed experimental setup and measurement series were carried out to characterize the performance of the Faraday cup based atomic beam probe diagnostic. It was found that the Faraday cup matrix detector with a double mask is capable of measuring the expected $\sim$100 nA ion current with microsecond time resolution. A double mask is needed in front of the cups to shield the gaps between the cups and thus reduce secondary electron generation. The first mask has to be on ground potential, while the second one should be biased to about -300 V to prevent crosstalk and suppress secondary electrons. Sensitivity to sub-mm beam movement with a few microseconds time resolution was confirmed, therefore this detector is a viable solution for the ABP diagnostic. Such a device has been installed on the COMPASS tokamak and its first results will be reported in a separate paper.
This work received funding from MEYS Project No.LM2015045.
This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
---
author:
- |
Tommaso Cavallari, Stuart Golodetz, Nicholas A. Lord, Julien Valentin,\
Victor A. Prisacariu, Luigi Di Stefano and Philip H. S. Torr
bibliography:
- 'grove.bib'
title: |
Real-Time RGB-D Camera Pose Estimation in\
Novel Scenes using a Relocalisation Cascade
---
Introduction {#sec:introduction}
============

Camera pose estimation is a key computer vision problem, with applications in simultaneous localisation and mapping (SLAM) [@Newcombe2011; @MurArtal2014; @Kaehler2015; @Golodetz2018], virtual and augmented reality [@Bae2016; @Castle2008; @Golodetz2015SPDEMO; @Paucher2010; @Rodas2015; @Valentin2015SP] and navigation [@Lee2016]. In SLAM, the camera pose is commonly initialised upon starting reconstruction and then tracked from frame to frame, but tracking can easily be lost due to e.g. rapid movement or textureless regions in the scene; when this happens, it is important to be able to relocalise the camera with respect to the scene, rather than forcing the user to restart the reconstruction. Camera relocalisation is also crucial for loop closure when trying to build globally consistent maps [@Fioraio2015; @Kaehler2016; @Whelan2015RSS].
Approaches to camera relocalisation roughly fall into two main categories: (i) those that attempt to find the pose directly, e.g. by matching the input image against keyframes with known poses [@GalvezLopez2011; @Gee2012; @Glocker2015], or by directly regressing the pose [@Kendall2015], and (ii) those that establish correspondences between points in camera and world space, so as to deploy e.g. a Perspective-n-Point (PnP) algorithm [@Hartley2004] (on RGB data) or the Kabsch algorithm [@Kabsch1976] (on RGB-D data) to generate a number of camera pose hypotheses from which a single hypothesis can be selected, e.g. using RANSAC [@Fischler1981]. Hybrid approaches that first find pose candidates directly and then refine them geometrically also exist [@MurArtal2015; @Valentin2016; @Taira2018].
Recently, Shotton et al. [@Shotton2013] proposed the use of a regression forest to directly predict corresponding 3D points in world space for all pixels in an RGB-D image (each pixel in the image effectively denotes a 3D point in camera space). By generating predictions for all pixels, their approach avoids the explicit detection, description and matching of keypoints typically required by more traditional correspondence-based methods, making it simpler and faster. Moreover, this also provides them with a significantly larger number of correspondences with which to verify or reject camera pose hypotheses. However, one major limitation they have is a need to train a regression forest on the scene of interest *offline* (in advance), which prevents on-the-fly camera relocalisation in novel scenes.
Subsequent work has significantly improved upon the relocalisation performance of [@Shotton2013]. Guzman-Rivera et al. [@GuzmanRivera2014] rely on multiple regression forests to generate a number of camera pose hypotheses, then cluster them and use the mean pose of the cluster whose poses minimise the reconstruction error as the result. Valentin et al. [@Valentin2015RF] replace the modes [@Shotton2013] used in the leaves of the forests with mixtures of anisotropic 3D Gaussians in order to better model uncertainties in the 3D point predictions. Brachmann et al. [@Brachmann2016] deploy a stacked classification-regression forest to achieve results of a quality similar to [@Valentin2015RF] for RGB-D relocalisation (their approach can also be used for pure RGB localisation, albeit with lower accuracy). Meng et al. [@Meng2016] perform RGB relocalisation by estimating an initial camera pose using a regression forest, then querying a nearest neighbour keyframe image and refining the initial pose by sparse feature matching between the camera input image and the keyframe. Massiceti et al. [@Massiceti2017] map between regression forests and neural networks to leverage the performance benefits of neural networks for dense regression while retaining the efficiency of random forests for evaluation. Meng et al. [@Meng2017IROS] store a priority queue of non-visited branches whilst passing a feature vector down the forest during testing, and then backtrack to see whether some of those branches might have been better than the one chosen. Meng et al. [@Meng2017arXiv] make use of both point and line segment features to achieve more robust relocalisation in poorly textured areas and/or in the face of motion blur. Brachmann et al. [@Brachmann2017CVPR] show how to replace the RANSAC stage of the conventional pipeline with a probabilistic approach to hypothesis selection that can be differentiated, allowing end-to-end training of the full system. Li et al. [@Li2018RSS] use a fully-convolutional encoder-decoder network to predict scene coordinates for the whole image at once, to take global context into account. This obviates [@Brachmann2017CVPR]’s need for patch sampling, but needs significant data augmentation to avoid overfitting. Brachmann and Rother [@Brachmann2018CVPR] significantly improve on the results of [@Brachmann2017CVPR], whilst also showing how to avoid the need for a 3D model at training time (albeit at a cost in performance). Very recently, Li et al. [@Li2018arXiv] have shown how to use an angle-based reprojection loss to remove [@Brachmann2018CVPR]’s need to initialise the scene coordinates with a heuristic when training without a model. However, despite all of these advances, none of these papers remove the need to train on the scene of interest in advance.
Recently, we showed [@Cavallari2017] that this need for *offline* training on the scene of interest can be overcome through *online* adaptation to a new scene of a regression forest that has been pre-trained on a generic scene. We achieve genuine on-the-fly relocalisation similar to that which can be obtained using keyframe-based approaches (e.g. [@Glocker2015]), but with significantly higher relocalisation performance. Moreover, unlike such approaches, which can struggle to relocalise from novel poses because of their reliance on matching the input image to a database of keyframes, our approach performs well even at some distance from the training trajectory.
Our initial implementation of this approach [@Cavallari2017] achieved relocalisation performance that was competitive with offline-trained forests, whilst requiring no pre-training on the scene of interest and relocalising in close to real time. This made it a practical and high-quality alternative to keyframe-based methods for online relocalisation in novel scenes. In this paper, we present an extension of [@Cavallari2017] that achieves significantly better relocalisation performance whilst running fully in real time. To achieve this, we make several novel improvements to the original approach:
1. Instead of simply accepting the camera pose produced by RANSAC without question, we make it possible to select the most promising of the final few hypotheses it considers using a geometric approach (see §\[subsubsec::hypothesisranking\]).
2. We chain several instances of our relocaliser (with different parameters) into a cascade, letting us try fast, less accurate relocalisation first, and only fall back to slow, more accurate relocalisation if needed (see §\[subsubsec::cascade\]).
3. We tune the parameters of our cascade, and the individual relocalisers it contains, to achieve the most effective overall performance (see §\[sec:parametertuning\]).
These changes allow us to achieve state-of-the-art results on the well-known 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] benchmarks. We then make two further contributions:
1. We present a novel way of visualising the internal behaviour of SCoRe forests, and use this to explain in detail why adapting a forest pre-trained on one scene to a new scene makes sense (see §\[subsec:forestvisualisation\]).
2. Based on the insights gleaned from this visualisation, we show that it is feasible to avoid pre-training the forest altogether by making use of a random decision function in each branch node (see §\[subsec:nopretraining\]). Whilst this approach leads to a small loss in performance with respect to a pre-trained forest, we show that it can still achieve better performance than existing methods.
**This paper is organised as follows**: in §\[sec:relatedwork\], we survey existing camera relocalisation approaches; in §\[sec:method\], we describe our approach; in §\[sec:experiments\], we evaluate our approach extensively on the 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] benchmarks; finally, in §\[sec:conclusion\], we conclude. Source code for our approach can be found online at <https://github.com/torrvision/spaint>.
Related Work {#sec:relatedwork}
============
Due to the importance of camera relocalisation, many approaches have been proposed to tackle it over the years [@Piasco2018]:
*(i) Straight-to-pose* methods try to determine the pose directly from the input image. Within these, matching methods try to match the input image against keyframes stored in an image database (potentially interpolating between keyframes where necessary), and direct regression methods train a decision forest or neural network to directly predict the pose. For example, Gee and Mayol-Cuevas [@Gee2012] estimate the pose by matching the input image against synthetic views of the scene. Other methods match descriptors computed from the input image against a database, e.g. Galvez-Lopez et al. [@GalvezLopez2011] compute a bag of binary words based on BRIEF descriptors for the current image and compare it with bags of words for keyframes in the database using an L1 score. Glocker et al. [@Glocker2015] encode frames using Randomised Ferns, which when evaluated on images yield binary codes that can be matched quickly by their Hamming distance. In terms of direct regression, Kendall et al. [@Kendall2015]’s PoseNet uses a convolutional neural network to directly regress the 6D camera pose from the current image. Their later works build on this to model the uncertainty in the result [@Kendall2016] and explore different loss functions to achieve better results [@Kendall2017]. Melekhov et al. [@Melekhov2017] train an hourglass network, using skip connections between their encoder and decoder, to directly regress the camera pose. Kacete et al. [@Kacete2017] directly regress the camera pose using a sparse decision forest. Clark et al. [@Clark2017] and Walch et al. [@Walch2017] directly regress the pose using LSTMs. Valada et al. [@Valada2018] train a multi-task network to predict both 6D global pose and the relative 6D poses between consecutive frames, and report dramatic improvements over earlier neural network-based approaches on 7-Scenes [@Shotton2013] and Cambridge Landmarks [@Kendall2015], although their best results rely on using the estimated pose from the previous frame. Radwan et al. [@Radwan2018] add semantics to this approach.
*(ii) Correspondence-based* methods (e.g. the regression forest approaches we mention in §\[sec:introduction\]) find correspondences between camera and world space points and use them to estimate the pose. A common approach is to find 2D-to-3D correspondences between keypoints in the current image and 3D scene points, so as to deploy e.g. a Perspective-n-Point (PnP) algorithm [@Hartley2004] (on RGB data) or the Kabsch algorithm [@Kabsch1976] (on RGB-D data) to generate a number of pose hypotheses that can be pruned to a single hypothesis using RANSAC [@Fischler1981]. Williams et al. [@Williams2011] recognise/match keypoints using an ensemble of randomised lists, and exclude unreliable or ambiguous matches when generating hypotheses. Li et al. [@Li2015] use graph matching to disambiguate visually-similar keypoints. Sattler et al. [@Sattler2015] use a fine visual vocabulary and a visibility graph to find locally unique 2D-to-3D matches. Sets of consistent matches are then used to compute hypotheses. Sattler et al. [@Sattler2017] find correspondences in both the 2D-to-3D and 3D-to-2D directions and apply a 6-point DLT algorithm to compute hypotheses. Schmidt et al. [@Schmidt2017] train a fully-convolutional network using a contrastive loss to output robust descriptors that can be used to establish dense correspondences.
Some hybrid methods use both paradigms. Mur-Artal et al. [@MurArtal2015] describe a relocalisation approach that initially finds pose candidates using bag of words recognition [@GalvezLopez2012], which they incorporate into their ORB-SLAM system. They then refine these candidates using PnP and RANSAC. Valentin et al. [@Valentin2016] find pose candidates using a retrieval forest and a multiscale navigation graph, before refining them using continuous pose optimisation. Taira et al. [@Taira2018] use NetVLAD [@Arandjelovic2016] to find the $N$ closest database images to the query image, before using dense feature matching and P3P-RANSAC to generate candidate poses. They then render a synthetic view of the scene from each candidate pose, and yield the pose whose view is most similar to the query image.
Several less traditional approaches have also been tried. Deng et al. [@Deng2016] match a 3D point cloud representing the scene to a local 3D point cloud constructed from a set of query images that can be incrementally extended by the user to achieve a successful match. Lu et al. [@Lu2015] perform 3D-to-3D localisation that reconstructs a 3D model from a short video using structure-from-motion and matches that against the scene within a multi-task point retrieval framework. Laskar et al. [@Laskar2017] train a Siamese network to predict the relative pose between two images. At test time, they find the $N$ nearest neighbours to the query image in a database, predict their poses relative to the query image, and use these in conjunction with the known poses of the neighbours to estimate the query pose. Balntas et al. [@Balntas2018] train a Siamese network to generate global feature descriptors using a continuous metric learning loss based on camera frustum overlap. Like [@Laskar2017], they then predict the relative poses between the query image and nearest neighbours in a database, and use these to determine the query pose. Sch[ö]{}nberger et al. [@Schoenberger2018] train a variational encoder-decoder network to hallucinate complete, denoised semantic 3D subvolumes from incomplete ones. At test time, they match query subvolumes against ones in a database using the network’s embedding space, and use the matches to generate pose hypotheses.
Method {#sec:method}
======
![image](pipeline-new.pdf){width=".8\linewidth"}
Overview {#subsec::methodoverview}
--------
Our approach is shown in Figure \[fig:pipeline\]. Initially, we train a regression forest *offline* to predict 2D-to-3D correspondences for a *generic* scene using the approach described in [@Valentin2015RF]. To adapt this forest to a new scene, we remove the contents of the leaf nodes in the forest (i.e. GMM modes and associated covariance matrices) whilst retaining the branching structure of the trees (including learned split parameters). We then adapt the forest *online* to the new scene by feeding training examples down the forest to refill the empty leaves, dynamically learning a set of leaf distributions specific to that scene. Thus adapted, the forest can then be used to predict correspondences for the new scene that can be used for camera pose estimation. Reusing the tree structures spares us from expensive offline learning on deployment in a novel scene, allowing for relocalisation on the fly.
To estimate the camera pose, we extend the pipeline we proposed in [@Cavallari2017], which fed triples of correspondences to the Kabsch [@Kabsch1976] algorithm to generate pose hypotheses, and then refined them down to a single output pose using pre-emptive RANSAC. As highlighted in [@Cavallari2017], returning a single pose from RANSAC has the disadvantage of sometimes yielding the wrong pose when the energies of the last few candidates considered by RANSAC are relatively similar (e.g. when different parts of the scene look nearly the same). To address this, we thus add in an additional pipeline step that scores and ranks the last few RANSAC candidates using an independent, model-based approach (see §\[subsubsec::hypothesisranking\]). This significantly improves the performance of the relocaliser in situations exhibiting serious appearance aliasing (see §\[sec:experiments\]), but at a cost in speed. To mitigate this, we introduce the concept of a relocalisation cascade (see §\[subsubsec::cascade\]), which runs multiple variants of our relocaliser in sequence, starting with a fast variant that is less likely to succeed, and progressively falling back to slower, better relocalisers as the earlier ones fail. This leads to fast average-case relocalisation performance without significantly compromising on quality.
Details {#subsec::methoddetails}
-------
### Offline Forest Training {#subsubsec::forestpretraining}
Training is done as in [@Valentin2015RF], greedily optimising a standard reduction-in-spatial-variance objective over the randomised parameters of simple threshold functions. Like [@Valentin2015RF], we make use of ‘Depth’ and ‘Depth-Adaptive RGB’ (‘DA-RGB’) features, centred at a pixel $\textbf{p}$, as follows: $$\textstyle f^{\text{Depth}}_\Omega = D(\mathbf{p}) - D\left(\mathbf{p} + \frac{\boldsymbol{\delta}}{D(\mathbf{p})}\right)$$ $$\textstyle f^{\text{DA-RGB}}_\Omega = C(\mathbf{p},c) - C\left(\mathbf{p} + \frac{\boldsymbol{\delta}}{D(\mathbf{p})}, c\right)$$ In this, $D(\mathbf{p})$ is the depth at $\mathbf{p}$, $C(\mathbf{p},c)$ is the value of the $c^{\text{th}}$ colour channel at $\mathbf{p}$, and $\Omega$ is a vector of randomly sampled feature parameters. For ‘Depth’, the only parameter is the 2D image-space offset $\boldsymbol{\delta}$, whereas ‘DA-RGB’ adds the colour channel selection parameter $c \in \{R,G,B\}$. We randomly generate 128 values of $\Omega$ for ‘Depth’ and 128 for ‘DA-RGB’. We concatenate the evaluations of these functions at each pixel of interest to yield 256D feature vectors.
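As a rough illustration of how these features are evaluated, the following Python sketch computes the two feature types at a single pixel. Image-boundary and invalid-depth handling are omitted, the scaling of the offset by the depth follows the convention above, and the helper names are ours rather than those of the released implementation.

```python
import numpy as np

def _offset_pixel(D, p, delta):
    """Apply the depth-adaptive offset delta / D(p) to pixel p = (x, y)."""
    px, py = p
    ox, oy = delta
    return int(round(px + ox / D[py, px])), int(round(py + oy / D[py, px]))

def depth_feature(D, p, delta):
    """f^Depth: depth difference between p and its offset pixel."""
    qx, qy = _offset_pixel(D, p, delta)
    return D[p[1], p[0]] - D[qy, qx]

def da_rgb_feature(D, C, p, delta, c):
    """f^DA-RGB: difference in colour channel c with the same offset."""
    qx, qy = _offset_pixel(D, p, delta)
    return float(C[p[1], p[0], c]) - float(C[qy, qx, c])

def feature_vector(D, C, p, depth_params, rgb_params):
    """Concatenate 128 Depth and 128 DA-RGB responses into 256-D."""
    f = [depth_feature(D, p, d) for d in depth_params]
    f += [da_rgb_feature(D, C, p, d, c) for d, c in rgb_params]
    return np.asarray(f, dtype=np.float32)
```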
At training time, a set $S$ of training examples, each consisting of such a feature vector $\mathbf{f} \in \mathbb{R}^{256}$, its corresponding 3D location in the scene and its colour, is assembled via sampling from a ground truth RGB-D video with known camera poses for each frame (obtained by tracking from depth camera input). A random subset of these training examples is selected to train each tree in the forest.
Starting from the root of each tree, we recursively partition the training examples in the current node into two using a binary threshold function. To decide how to split each node $n$, we randomly generate a set $\Theta_n$ of $512$ candidate split parameter pairs, where each $\theta = (\phi,\tau) \in \Theta_n$ denotes the binary threshold function $$\textstyle \theta(\mathbf{f}) = \mathbf{f}[\phi] \ge \tau.$$ In this, $\phi \in [0,256)$ is a randomly-chosen feature index, and $\tau \in \mathbb{R}$ is a threshold, chosen to be the value of feature $\phi$ in a randomly chosen training example. Examples that pass the test are routed to the right subtree of $n$; the remainder are routed to the left. To pick a suitable split function for $n$, we use exhaustive search to find a $\theta^{*} \in \Theta_n$ whose corresponding split function maximises the information gain that can be achieved by splitting the training examples that reach $n$. Formally, the information gain corresponding to split parameters $\theta \in \Theta_n$ is $$\textstyle V(S_n) - \sum_{i\in\{\text{L,R}\}} \frac{|S^i_n(\theta)|}{|S_n|} \; V(S^i_n(\theta)),$$ in which $V(X)$ denotes the spatial variance of set $X$, and $S^L_n(\theta)$ and $S^R_n(\theta)$ denote the left and right subsets into which the set $S_n \subseteq S$ of training examples reaching $n$ is partitioned by the split function denoted by $\theta$. Spatial variance is defined in terms of the log of the determinant of the covariance of a fitted 3D Gaussian [@Valentin2015RF].
For a given tree, the above process is simply recursed to a maximum depth of 15. We train 5 trees per forest. The (approximate, empirical) distributions in the leaves are discarded at the end of this process (we replace them during online forest adaptation, as discussed in the next section).
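A minimal sketch of the reduction-in-spatial-variance objective is given below, assuming world-space points are stored as an N x 3 array; the small covariance regulariser is our addition to keep the log-determinant finite for near-degenerate clusters.

```python
import numpy as np

def spatial_variance(points):
    """log-determinant of the covariance of a 3D Gaussian fitted to
    the world-space points (N x 3 array)."""
    cov = np.cov(np.asarray(points, dtype=np.float64).T)
    _, logdet = np.linalg.slogdet(cov + 1e-9 * np.eye(3))
    return logdet

def information_gain(points, feature_values, tau):
    """Gain of the split f[phi] >= tau on the examples reaching a node."""
    points = np.asarray(points, dtype=np.float64)
    mask = np.asarray(feature_values) >= tau
    gain = spatial_variance(points)
    for subset in (points[~mask], points[mask]):
        if len(subset) >= 2:  # degenerate splits would be rejected anyway
            gain -= len(subset) / len(points) * spatial_variance(subset)
    return gain
```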
### Online Forest Adaptation {#subsubsec::forestadaptation}
[!t]{}
[.48]{} ![image](modes-pretrained){width="\linewidth"}
[.48]{} ![image](modes-adapted){width="\linewidth"}
To adapt a forest to a new scene, we replace the distributions discarded from its leaves at the end of pre-training with dynamically updated ones drawn entirely from the new scene. Here, we detail how the new leaf distributions used by the relocaliser are computed and updated online.
We draw inspiration from the use of reservoir sampling [@Vitter1985] in SemanticPaint [@Valentin2015SP], which makes it possible to store an unbiased subset of an empirical distribution in a bounded amount of memory. On initialisation, we allocate (on the GPU) a fixed-size sample reservoir for each leaf of the existing forest. Our reservoirs contain up to $\kappa$ entries, each of which stores a 3D location (in world coordinates) and an associated colour. At runtime, we pass training examples (of the form described in §\[subsubsec::forestpretraining\]) down the forest and identify the leaves to which each example is mapped. We then add the 3D location and colour of each example to the reservoirs associated with its leaves.
To obtain the 3D locations of the training examples, we need to know the transformation that maps points from camera space to world space. When training on sequences from a dataset, this is trivially available as the ground truth camera pose, but in a live scenario, it will generally be obtained as the output of a fallible tracker.[^1] To avoid corrupting our forest’s reservoirs, we avoid passing new examples down the forest when tracking is unreliable. We measure tracker reliability using the support vector machine (SVM) approach described in [@Kaehler2016]. For frames for which a reliable camera pose *is* available, we proceed as follows:
1. First, we compute feature vectors for a subset of the pixels in the image, as detailed in §\[subsubsec::forestpretraining\]. We empirically choose our subset by subsampling densely on a regular grid with $4$-pixel spacing, i.e. we choose pixels $\{(4i,4j) \in [0,w) \times [0,h) : i,j \in \mathbb{N}^+ \}$, where $w$ and $h$ are respectively the width and height of the image.
2. Next, we pass each feature vector down the forest, adding the 3D position and colour of the corresponding scene point to the reservoir of the leaf reached in each tree. Our CUDA-based random forest implementation uses the node indexing described in [@Sharp2008].
3. Finally, for each leaf reservoir, we cluster the contained points using a CUDA implementation of Really Quick Shift (RQS) [@Fulkerson2010] to find a set of modal 3D locations. We sort the clusters in each leaf in decreasing size order, and keep at most $M_{\max}$ modal clusters per leaf. For each cluster we keep, we compute 3D and colour centroids, and a covariance matrix. The cluster distributions are used when estimating the likelihood of a camera pose, and also during continuous pose optimisation (see §\[subsubsec:poseestimation\]). Since running RQS over all the leaves in the forest would take too long if run in a single frame, we amortise the cost over multiple frames by updating $256$ leaves in parallel each frame in round-robin fashion. A typical forest contains around $42,000$ leaves, so each leaf is updated roughly once every $6$s.
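The per-leaf reservoirs used in step 2 can be summarised by the following minimal sketch of Vitter-style reservoir sampling; the class name and interface are illustrative rather than taken from the released code.

```python
import random

class LeafReservoir:
    """Bounded, unbiased sample of (world position, colour) examples
    routed to a single leaf."""
    def __init__(self, capacity, seed=None):
        self.capacity = capacity
        self.samples = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, position, colour):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append((position, colour))
        else:
            # Replace a random entry with probability capacity / seen,
            # keeping the reservoir an unbiased subset of the stream.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = (position, colour)
```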
Figure \[fig:modes\] illustrates the effect that online adaptation has on a pre-trained forest: (a) shows the modal clusters present in a small number of randomly selected leaves of a forest pre-trained on the *Chess* scene from the 7-Scenes dataset [@Shotton2013]; (b) shows the modal clusters that are added to the same leaves during the process of adapting the forest to the *Kitchen* scene. Note that whilst the positions of the predicted modes have (unsurprisingly) completely changed, the split functions in the forest’s branch nodes (which we preserve) still do a good job of routing similar parts of the scene into the same leaves, enabling effective sampling of 2D-to-3D correspondences for camera pose estimation.
### Camera Pose Estimation {#subsubsec:poseestimation}
As in [@Valentin2015RF], camera pose estimation is based on the preemptive, locally-optimised RANSAC of [@Chum2003]. We begin by randomly generating an initial set of up to $N_{\max}$ pose hypotheses. A pose hypothesis $H \in \mathbf{SE}(3)$ is a transform that maps points in camera space to world space. To generate each pose hypothesis, we apply the Kabsch algorithm [@Kabsch1976] to $3$ point pairs of the form $(\mathbf{x}_i^\mathcal{C}, \mathbf{x}_i^\mathcal{W})$, where $\mathbf{x}_i^\mathcal{C} = D(\mathbf{u}_i) K^{-1} \dot{\mathbf{u}}_i$ is obtained by back-projecting a randomly chosen point $\mathbf{u}_i$ in the live depth image $D$ into camera space, and $\mathbf{x}_i^\mathcal{W}$ is a corresponding scene point in world space, randomly sampled from $M(\mathbf{u}_i)$, the modes of the leaves to which the forest maps $\mathbf{u}_i$. In this, $K$ denotes the depth camera intrinsics. We subject each hypothesis to three checks:
1. First, we randomly choose one of the three point pairs $(\mathbf{x}_i^\mathcal{C},\mathbf{x}_i^\mathcal{W})$ and compare the RGB colour of the corresponding pixel $\mathbf{u}_i$ in the colour input image to the colour centroid of the mode (see §\[subsubsec::forestadaptation\]) from which we sampled $\mathbf{x}_i^\mathcal{W}$. We reject the hypothesis iff the L$^\infty$ distance between the two exceeds a threshold.
2. Next, we check that the three hypothesised scene points are sufficiently far from each other. We reject the hypothesis iff the minimum distance between any pair of points is below a threshold (tuned, but generally $30$cm).
3. Finally, we check that the distances between all scene point pairs and their corresponding back-projected depth point pairs are sufficiently similar, i.e. that the hypothesised transform is ‘rigid enough’. We reject the hypothesis iff this is not the case.
If a hypothesis gets rejected by one of the checks, we try to generate an alternative hypothesis to replace it. In practice, we use $N_{\max}$ dedicated threads, each of which attempts to generate a single hypothesis. Each thread continues generating hypotheses until either (a) it finds a hypothesis that passes all of the checks, or (b) a maximum number of iterations is reached. We proceed with however many hypotheses we obtain by the end of this process.
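For reference, the rigid transform computed from each correspondence triple can be obtained with a standard SVD-based Kabsch solver; the sketch below is a generic implementation, not the CUDA code used in the system.

```python
import numpy as np

def kabsch(camera_pts, world_pts):
    """Rigid (R, t) minimising ||R x_c + t - x_w|| over the given
    camera-space / world-space correspondences."""
    P = np.asarray(camera_pts, dtype=np.float64)
    Q = np.asarray(world_pts, dtype=np.float64)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```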
Having generated our large initial set of hypotheses, we next aggressively prune it by scoring each hypothesis and keeping the $N_\textup{cull}$ lowest-energy transforms (if there are fewer than $N_\textup{cull}$ hypotheses, we keep all of them). To score the hypotheses, we first select an initial set $I = \{i\}$ of pixel indices in $D$ (of size $\eta$), and back-project the denoted pixels $\mathbf{u}_i$ to corresponding points $\mathbf{x}_i^\mathcal{C}$ in camera space as described above. We then score each hypothesis $H$ by summing the Mahalanobis distances between the transformations of each $\mathbf{x}_i^\mathcal{C}$ under $H$ and their nearest modes: $$\textstyle E(H) = \sum_{i \in I} \left( \min_{(\boldsymbol{\mu},\Sigma) \in M(\mathbf{u}_i)} \left\| \Sigma^{-\frac{1}{2}} (H\mathbf{x}_i^\mathcal{C} - \boldsymbol{\mu}) \right\| \right)$$ After this initial cull, we use pre-emptive RANSAC to prune the remaining $\le N_\textup{cull}$ hypotheses to a much smaller set of chosen hypotheses, which will then be scored and ranked (see §\[subsubsec::hypothesisranking\]). (This differs from [@Cavallari2017], where our RANSAC module was designed to output a single hypothesis, and no subsequent scoring was performed.) We iteratively (i) expand the sample set $I$ (by adding $\eta$ new pixels each time), (ii) refine the pose candidates via Levenberg-Marquardt optimisation [@Levenberg1944; @Marquardt1963] of the energy function $E$, (iii) re-evaluate and re-score the hypotheses, and (iv) discard the worse half. We stop when the number of hypotheses remaining reaches a desired threshold. The actual optimisation is performed not in $\mathbf{SE}(3)$, where it would be hard to do, but in the corresponding Lie algebra, $\mathfrak{se}(3)$. See [@Valentin2015RF] for details of this process, and [@Strasdat2012] for a longer explanation of Lie algebras.
In [@Cavallari2017], this process yielded a single pose hypothesis, which it was possible to either return directly, or, if the relocaliser was integrated into a 3D reconstruction framework such as InfiniTAM [@Prisacariu2017], return after first refining it using ICP [@Besl1992]. Here, the process instead yields a set of chosen hypotheses, from which we then need to select a single, final output pose. To do this, we assume the presence of a 3D scene model, since 3D reconstruction is a key application of our approach, and propose a model-based way of ranking the hypotheses to choose a best pose (if a 3D model is not present, one of the hypotheses can be returned as-is, or the hypotheses can be scored and ranked in a different way).
### Model-Based Hypothesis Ranking {#subsubsec::hypothesisranking}
![Our model-based approach to ranking the camera pose hypotheses that survive the RANSAC stage (see §\[subsubsec::hypothesisranking\]). For each hypothesis, we first refine the pose by performing ICP [@Besl1992] with respect to the 3D scene model. If this succeeds, we score the hypothesis by comparing a synthetic depth raycast from the refined pose to the live depth image from the camera. Once all hypotheses have been scored, we rank them and return the one with the lowest score.[]{data-label="fig:hypothesisranking"}](hypothesisranking-crop){width=".8\linewidth"}
To score a hypothesis chosen by RANSAC, we first refine it using ICP [@Besl1992] with respect to the 3D scene model (see Figure \[fig:hypothesisranking\]). If this fails, we discard the pose, since regardless of its correctness, it was not a pose that would have been good enough to allow tracking to resume. If this succeeds, we further verify correctness by rendering a synthetic depth raycast of the scene model from the refined pose, and comparing it to the live depth image from the camera.[^2] To do this, we draw inspiration from [@Golodetz2018], which compared synthetic depth raycasts to verify a relative transform estimate between two different sub-scenes. By contrast, we use comparisons between the live depth image and synthetic raycasts of a single scene to compute scores that can be used to rank the different pose hypotheses against each other.
Formally, let $D_\ell$ be the live depth image, $\Xi = \{\xi_1,...\xi_k\}$ be the set of chosen pose hypotheses, and $\tilde{D}_\xi$ be a synthetic depth raycast of the 3D model from pose $\xi$. Moreover, for any depth image $D$, let $\Omega(D)$ denote the domain of $D$ and $\Omega^v(D)$ denote the range of pixels $\mathbf{x}$ for which $D(\mathbf{x})$ is valid, and let $\Omega^v_{\ell,\xi} = \Omega^v(D_\ell) \cap \Omega^v(\tilde{D}_\xi)$. To assign a score to hypothesis $\xi$, we first compute a mean (masked) absolute depth difference between $D_\ell$ and $\tilde{D}_\xi$ via $$\textstyle \mu(\xi) = \left( \sum_{\mathbf{x} \in \Omega^v_{\ell,\xi}} \left| D_\ell(\mathbf{x}) - \tilde{D}_\xi(\mathbf{x}) \right| \right) / |\Omega^v_{\ell,\xi}|,$$ similar to Equation (4) in [@Golodetz2018]. We then compute a final score $s(\xi)$ for $\xi$ via $$\textstyle s(\xi) = \begin{cases}
\mu(\xi) & \mbox{if } |\Omega^v(\tilde{D}_\xi)| / |\Omega(\tilde{D}_\xi)| \ge 0.1 \\
\infty & \mbox{otherwise.}
\end{cases}$$ In this, the purpose of the (empirically chosen) threshold is to ensure that the hypothesised pose points sufficiently towards the 3D scene model to allow it to be verified: we found $0.1$ to be effective in practice. Having scored all of the hypotheses in this way, we can then simply pick the pose $\xi^*$ with the lowest score (i.e. the one whose synthetic depth raycast was closest to the live depth) and return it: $$\textstyle \xi^* = {\operatornamewithlimits{argmin}}_{\xi \in \Xi} s(\xi)$$
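The scoring step can be summarised in a few lines, assuming depth images are arrays in which invalid pixels are marked by non-positive values; this is a sketch of the logic above rather than the released implementation.

```python
import numpy as np

def hypothesis_score(live_depth, synthetic_depth, min_valid_fraction=0.1):
    """s(xi): mean |depth difference| over pixels valid in both images,
    or infinity if too little of the synthetic raycast is valid."""
    valid_synth = synthetic_depth > 0
    if valid_synth.mean() < min_valid_fraction:
        return np.inf
    both = valid_synth & (live_depth > 0)
    if not both.any():
        return np.inf
    return float(np.abs(live_depth[both] - synthetic_depth[both]).mean())

def best_hypothesis(live_depth, raycasts):
    """Index of the refined pose whose raycast best matches the live depth."""
    scores = [hypothesis_score(live_depth, d) for d in raycasts]
    return int(np.argmin(scores))
```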
### Relocalisation Cascade {#subsubsec::cascade}
![Our relocalisation cascade (see §\[subsubsec::cascade\]): we instantiate multiple instances of our relocaliser, backed by the same regression forest, but with different hypothesis generation and RANSAC parameters, and run them one at a time on the camera input until one of them produces an acceptable pose (or we reach the end of the cascade). The idea is to gradually fall back from fast but less effective relocalisers to slower but more effective ones, with the aim of yielding an effective overall relocaliser that is fast on average.[]{data-label="fig:cascade"}](cascade-crop.pdf){width="\linewidth"}
Ranking the last few RANSAC candidates as described in §\[subsubsec::hypothesisranking\] significantly improves relocalisation performance in scenarios exhibiting serious appearance aliasing (see §\[sec:experiments\]), but can in practice be quite expensive (clearly undesirable in an interactive SLAM context) because of the need to perform ICP for each candidate. In practice, however, many scenarios do not exhibit such aliasing, and in those contexts, hypothesis ranking provides little benefit. As an additional contribution, we thus propose a novel *relocalisation cascade* approach that uses hypothesis ranking only when necessary.
Figure \[fig:cascade\] shows how this works: we instantiate multiple instances of our relocaliser, backed by the same regression forest, but with different parameters for the hypothesis generation and RANSAC steps, and run them one at a time on the camera input until one of them produces an acceptable pose (or we reach the end of the cascade). The idea is to put ‘faster’ relocalisers (i.e. ones that have been tuned more for speed than performance) towards the start of the cascade, and ‘slower’ ones (i.e. ones that have been tuned more for performance than speed) towards the end of it, yielding an effective cascade that is fast on average. The key to making this work well is in how to decide whether or not a given relocaliser in the cascade has produced an acceptable pose. If we accept poses produced by early-stage relocalisers too readily, the cascade’s relocalisation quality will suffer; conversely, if we are too draconian in rejecting such poses, then the cascade will be slow (since we will be running the slower, late-stage relocalisers far too often).
In practice, the depth-difference scores computed during hypothesis ranking (see §\[subsubsec::hypothesisranking\]) provide us with an effective basis on which to make these decisions: in particular, it suffices to fall back from one relocaliser in the cascade to the next iff the score associated with the final output pose of the relocaliser is above a threshold (reflecting a high likelihood of an incorrect pose). The thresholds used are important parameters of the cascade: the way in which we tune both them and the parameters of the relocalisers in the cascade is described in §\[sec:parametertuning\].
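The control flow of the cascade can be summarised with a short sketch. The interface is hypothetical: each relocaliser is assumed to return its best pose together with the depth-difference score computed during hypothesis ranking.

```python
import numpy as np

def relocalise_with_cascade(frame, relocalisers, thresholds):
    """Run the relocalisers in order (fastest first), falling back to the next one
    whenever the depth-difference score of the returned pose exceeds the
    corresponding threshold. `relocalisers` has N entries and `thresholds` N - 1."""
    pose, score = None, np.inf
    for i, relocaliser in enumerate(relocalisers):
        pose, score = relocaliser(frame)   # (pose, depth-difference score)
        if i == len(relocalisers) - 1 or score <= thresholds[i]:
            break                          # accept this relocaliser's output
    return pose, score
```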
Experiments {#sec:experiments}
===========
We perform both quantitative and qualitative experiments to evaluate our approach. In §\[subsec:headlineperformance\], we compare several variants of our approach, trained on the *Office* sequence from the 7-Scenes dataset [@Shotton2013] and then adapted to each target scene, to state-of-the-art *offline* relocalisers trained directly on the target scene. We show that our adapted forests achieve state-of-the-art relocalisation performance despite being trained on a different scene, allowing them to be used for *online* relocalisation. Further results, showing that very similar performances can be obtained with forests trained on the other 7-Scenes sequences, can be found in §\[sec:additionalexperiments\], along with detailed timings for each stage of our pipeline. In §\[subsec:trackinglossrecovery\], we show our ability to adapt forests on-the-fly from live sequences, allowing us to support tracking loss recovery in interactive scenarios. In §\[subsec:novelposes\], we evaluate how well our approach generalises to novel poses in comparison to a keyframe-based random fern relocaliser based on [@Glocker2015]. Our results show that we are able to relocalise well even from poses that are quite far away from the training trajectory. In §\[subsec:forestvisualisation\], we visualise the internal behaviour of SCoRe forests, and use this to explain why the behaviour of a forest is relatively independent of the specific scene on which it was trained. Finally, in §\[subsec:nopretraining\], we use the insights gained from this visualisation to show that pre-training can be avoided entirely by replacing the pre-trained forest with a generated one that plays the same role.
Headline Performance {#subsec:headlineperformance}
--------------------
[lcccccccccc]{} & **Chess** & **Fire** & **Heads** & **Office** & **Pumpkin** & **Kitchen** & **Stairs** & **Average** & **Avg. Med. Error** & **Frame Time (ms)**\
\
Ours (Default) & 99.75% & 97.35% & [****100%]{} & [99.80%]{} & 82.25% & [****95.64%]{} & 79.10% & 93.41% & [****]{}/1.16$^\circ$ & 128\
+ ICP & 99.85% & 99.15% & [****100%]{} & [****99.85%]{} & 90.00% & 91.52% & 80.00% & 94.34% & 0.013m/1.16$^\circ$ & 133\
+ Ranking & [****99.95%]{} & [****99.70%]{} & [****100%]{} & 99.48% & 90.85% & 90.68% & [****94.20%]{} & [****96.41%]{} & 0.013m/1.17$^\circ$ & 257\
Ours (Fast) w/ICP & 99.75% & 97.10% & 98.40% & 99.55% & 89.35% & 89.26% & 62.40% & 90.83% & 0.014m/1.17$^\circ$ & [****]{}\
Ours (Cascade F$\stackrel{7.5\textup{cm}}{\rightarrow}$S) & [99.90%]{} & 98.95% & [99.90%]{} & 99.48% & [90.95%]{} & 89.34% & 86.10% & 94.95% & 0.013m/1.17$^\circ$ &\
Ours (Cascade F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S) & 99.85% & [99.40%]{} & [99.90%]{} & 99.40% & 90.85% & 89.64% & [89.80%]{} & [95.55%]{} & 0.013m/1.17$^\circ$ & 66\
Ours (Random) & 99.80% & 96.90% & [****100%]{} & 98.48% & 78.65% & 91.98% & 71.60% & 91.06% & /1.18$^\circ$ & 121\
+ ICP & 99.85% & 98.50% & [****100%]{} & 99.10% & 89.50% & 90.32% & 77.80% & 93.58% & 0.013m/1.16$^\circ$ & 126\
Ours (2017) [@Cavallari2017] & 99.2% & 96.5% & 99.7% & 97.6% & 84.0% & 81.7% & 33.6% & 84.6% & – & –\
+ ICP & 99.4% & 99.0% & [****100%]{} & 98.2% & [****91.2%]{} & 87.0% & 35.0% & 87.1% & – & 141\
\
Shotton *et al.* [@Shotton2013] & 92.6% & 82.9% & 49.4% & 74.9% & 73.7% & 71.8% & 27.8% & 67.6% & – & –\
Guzman-Rivera *et al.* [@GuzmanRivera2014] & 96% & 90% & 56% & 92% & 80% & 86% & 55% & 79.3% & – & –\
Valentin *et al.* [@Valentin2015RF] & 99.4% & 94.6% & 95.9% & 97.0% & 85.1% & 89.3% & 63.4% & 89.5% & – & –\
Brachmann *et al.* [@Brachmann2016] & 99.6% & 94.0% & 89.3% & 93.4% & 77.6% & 91.1% & 71.7% & 88.1% & 0.061m/2.7$^\circ$ & –\
Meng *et al.* [@Meng2017arXiv] & 99.5% & 97.6% & 95.5% & 96.2% & 81.4% & 89.3% & 72.2% & 90.3% & 0.017m/[****]{} & –\
Schmidt *et al.* [@Schmidt2017] & 97.75% & 96.55% & 99.8% & 97.2% & 81.4% & [93.4%]{} & 77.7% & 92.0% & – & –\
Brachmann and Rother [@Brachmann2018CVPR] & 97.1% & 89.6% & 92.4% & 86.6% & 59.0% & 66.6% & 29.3% & 76.1% & 0.036m/ & –\
\
*Brachmann and Rother [@Brachmann2018CVPR]* & *93.8%* & *75.6%* & *18.4%* & *75.4%* & *55.9%* & *50.7%* & *2.0%* & *60.4%* & *0.084m/2.4$^\circ$* & –\
*Li *et al.* [@Li2018arXiv]* & *96.1%* & *88.6%* & *86.9%* & *80.6%* & *60.3%* & *61.9%* & *11.3%* & *71.8%* & *0.043m/1.3$^\circ$* & –\
\
*Valada *et al.* [@Valada2018]* & – & – & – & – & – & – & – & *59.1%* & *0.048m/3.801$^\circ$* & –\
*Radwan *et al.* [@Radwan2018]* & – & – & – & – & – & – & – & *99.2%* & *0.013m/0.77$^\circ$* & *79*\
**Sequence** **LTN** [@Valentin2016] **BTBRF** [@Meng2017IROS] **PLForests** [@Meng2017arXiv] **Ours** **+ ICP** **+ Ranking** **Ours (F$\stackrel{7.5\textup{cm}}{\rightarrow}$S)** **Ours (Random)** **+ ICP**
---------------------- ------------------------- --------------------------- -------------------------------- -------------- ---------------- ---------------- ------------------------------------------------------- ------------------- ----------------
Kitchen 85.7% 92.7% 98.9% [****100%]{} [****100%]{} [****100%]{} [****100%]{} [99.72%]{} [****100%]{}
Living 71.6% 95.1% [****100%]{} [99.80%]{} [****100%]{} [****100%]{} [****100%]{} 99.59% [****100%]{}
Bed 66.4% 82.8% [99.0%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{}
Kitchen 76.7% 86.2% 99.0% [****100%]{} [****100%]{} [****100%]{} [99.52%]{} [****100%]{} [****100%]{}
Living 66.6% [99.7%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{}
Luke 83.3% 84.6% [98.9%]{} 97.92% [****99.20%]{} [****99.20%]{} [****99.20%]{} 96.31% [****99.20%]{}
Floor5a 66.2% 89.9% 98.8% 99.20% [****100%]{} [****100%]{} [99.60%]{} 98.59% [****100%]{}
Floor5b 71.1% 98.9% 99.0% [99.75%]{} 99.01% [****100%]{} 99.01% 99.26% 99.01%
Gates362 51.8% [96.7%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{}
Gates381 52.3% 92.9% 98.8% 99.24% [****100%]{} [****100%]{} [99.91%]{} 98.10% [99.91%]{}
Lounge 64.2% 94.8% [99.1%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{}
Manolis 76.0% [98.0%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{} [****100%]{}
**Average** 67.4% 92.7% 99.3% 99.66% [99.85%]{} [****99.93%]{} 99.77% 99.30% 99.84%
**Avg. Med. Trans.** – – – 0.009m [****]{} 0.010m
**Avg. Med. Rot.** – – – [****]{} [****]{} [****]{} 0.54$^\circ$ [****]{}
**Frame Time (ms)** – – – 122 127 240 [****]{} 123
To evaluate the headline performance of our approach, we compare our relocaliser to a variety of state-of-the-art *offline* relocalisers on the 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] benchmarks (see Tables \[tbl:comparativeperformance7\] and \[tbl:comparativeperformance12\]).[^3]
We test several variants of our approach, using the following testing procedure. For all variants of our approach except *Ours (Random)*, we first pre-train a forest on a generic scene (*Office* from 7-Scenes [@Shotton2013]) and remove the contents of its leaves, as described in §\[sec:method\]: this process runs *offline* over a number of hours (but we only need to do it once). Next, we adapt the forest by feeding it new examples from a training sequence captured on the scene of interest: this runs *online* at frame rate (in a real system, this allows us to start relocalising almost immediately whilst training carries on in the background, as we show in §\[subsec:trackinglossrecovery\]). Finally, we test the adapted forest by using it to relocalise from every frame of a separate testing sequence captured on the scene of interest.
*Ours (Default)* is an improved variant of [@Cavallari2017], in which each leaf can now store up to $50$ modes. With hypothesis ranking of the last $16$ candidates enabled, this approach achieves an average of $96.41\%$ of frames within $5$cm/$5^\circ$ of the ground truth on 7-Scenes, beating the previous state-of-the-art [@Schmidt2017] by over $4$%. However, as mentioned in §\[subsubsec::cascade\], hypothesis ranking significantly slows down the speed of the relocaliser: we thus present several faster variants of our approach. *Ours (Fast)* is a variant tuned for maximum speed (see Table \[tbl:parameters\] for the tuned parameters). With ICP enabled, it is able to achieve an average of over $90$% on 7-Scenes in under $30$ms; however, it achieves slightly lower performance on the difficult *Stairs* sequence. To achieve a better trade-off between performance and speed, we present two relocalisation cascades (see §\[subsubsec::cascade\]), F$\stackrel{7.5\textup{cm}}{\rightarrow}$S and F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S, each of which starts with our *Fast* relocaliser and gradually falls back to slower, more effective relocalisers if the depth difference for the currently predicted pose exceeds the specified thresholds. We describe the tuning of the parameters of the individual relocalisers in the cascades and the depth-difference thresholds used to fall back between them in §\[sec:parametertuning\]. Both cascades presented achieve state-of-the-art results in under $70$ms, offering high-quality performance at frame rate (nearly $20$ FPS in the case of F$\stackrel{7.5\textup{cm}}{\rightarrow}$S).
Finally, *Ours (Random)* denotes a variant of our approach that uses a randomly generated forest (see §\[subsec:nopretraining\]). By achieving an average of $93.58$% on 7-Scenes after ICP, it shows that it is possible to achieve state-of-the-art performance without any prior offline training on a generic scene.
Tracking Loss Recovery {#subsec:trackinglossrecovery}
----------------------
![Our approach’s performance for tracking loss recovery (§\[subsec:trackinglossrecovery\]). Filling the leaves of a forest pre-trained on *Office* frame-by-frame *directly* from the *testing* sequence, we are able to start relocalising almost immediately in new scenes.[]{data-label="fig:trackinglossrecovery"}](online_relocalization_icp-crop){width=".9\linewidth"}
In §\[subsec:headlineperformance\], we investigated our ability to adapt a forest to a new scene by filling its leaves with data from a training sequence for that scene, before testing the adapted forest on a separate testing sequence shot on the same scene. Here, we quantify our approach’s ability to perform this adaptation *on the fly* by filling the leaves frame-by-frame from the testing sequence: this allows recovery from tracking loss in an interactive scenario without the need for prior online training on anything other than the live sequence, making our approach extremely convenient for tasks such as interactive 3D reconstruction.
Our testing procedure is as follows: at each new frame (except the first), we assume that tracking has failed, and try to relocalise using the forest we have available at that point; we record whether or not this succeeds. Regardless, we then restore the ground truth camera pose (or the tracked camera pose, in a live sequence) and, provided tracking hasn’t actually failed, use examples from the current frame to continue training the forest. As Figure \[fig:trackinglossrecovery\] shows, we are able to start relocalising almost immediately in a live sequence (in a matter of frames, typically 4–6 are enough). Subsequent performance then varies based on the difficulty of the sequence, but rarely drops below $80\%$. This makes our approach highly practical for interactive relocalisation.
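For clarity, this testing protocol can be sketched as follows. The names `forest.add_examples`, `frame.gt_pose` and `pose_error` are hypothetical placeholders for our actual interfaces.

```python
def evaluate_tracking_loss_recovery(sequence, relocalise, forest,
                                    pose_error, max_trans=0.05, max_rot=5.0):
    """Simulate tracking loss at every frame: relocalise with the forest as adapted
    so far, record success, then keep training the forest from the same frame."""
    successes = []
    for i, frame in enumerate(sequence):
        if i > 0:                                   # nothing to relocalise against yet
            pose = relocalise(frame, forest)
            trans_err, rot_err = pose_error(pose, frame.gt_pose)
            successes.append(trans_err <= max_trans and rot_err <= max_rot)
        forest.add_examples(frame, frame.gt_pose)   # continue online adaptation
    return successes
```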
Generalisation to Novel Poses {#subsec:novelposes}
-----------------------------
![Evaluating how well our approach generalises to novel poses in comparison to a keyframe-based random fern relocaliser based on [@Glocker2015]. The performance decay experienced as test poses get further from the training trajectory is much less severe with our approach than with ferns.[]{data-label="fig:novelposes-graph"}](novelposes-graph-crop.pdf){width="\linewidth"}
![Novel poses from which we are able to relocalise to within 5cm/5$^\circ$ on the *Fire* sequence from 7-Scenes [@Shotton2013]. Pose novelty measures the distance of a test pose from a nearby pose (blue) on the training trajectory (yellow). We can relocalise from both easy poses (up to 35cm/35$^\circ$ from the training trajectory, green) and hard poses ($>$ 35cm/35$^\circ$, red). The images below the main figure show views of the scene from the training poses and testing poses indicated.[]{data-label="fig:novelposes-example"}](poses_frustums "fig:"){width="\linewidth"} ![Novel poses from which we are able to relocalise to within 5cm/5$^\circ$ on the *Fire* sequence from 7-Scenes [@Shotton2013]. Pose novelty measures the distance of a test pose from a nearby pose (blue) on the training trajectory (yellow). We can relocalise from both easy poses (up to 35cm/35$^\circ$ from the training trajectory, green) and hard poses ($>$ 35cm/35$^\circ$, red). The images below the main figure show views of the scene from the training poses and testing poses indicated.[]{data-label="fig:novelposes-example"}](poses_numbered "fig:"){width="\linewidth"}
To evaluate how well our approach generalises to novel poses, we examine how the proportion of frames we can relocalise decreases as the distance of the (ground truth) test poses from the training trajectory increases. We compare our approach with the keyframe-based relocaliser in InfiniTAM [@Kaehler2016], which is based on the random fern approach of Glocker et al. [@Glocker2015], and with the original version of our own approach [@Cavallari2017]. Relocalisation from novel poses is a well-known failure case of keyframe-based methods, so we would expect the random fern approach to perform poorly away from the training trajectory; by contrast, it is interesting to see the extent to which both incarnations of our approach can relocalise from a wide range of novel poses.
We perform the comparison separately for each 7-Scenes sequence, and then aggregate the results. For each sequence, we first group the test poses into bins by pose novelty. Each bin is specified in terms of a maximum translation and rotation difference of a test pose with respect to the training trajectory (for example, poses that are within 5cm and 5$^\circ$ of any training pose are assigned to the first bin, remaining poses that are within 10cm and 10$^\circ$ are assigned to the second bin, etc.). We then determine the proportion of the test poses in each bin for which it is possible to relocalise to within $5$cm translational error and $5^\circ$ angular error using (a) the random fern approach, (b) our approach (both versions) without ICP and (c) our approach (both versions) with ICP. As shown in Figure \[fig:novelposes-graph\], the decay in performance experienced as the test poses get further from the training trajectory is much less severe with our approach than with ferns. Moreover, our improvements to our original approach [@Cavallari2017] have notably improved its ability to successfully relocalise test frames that are more than 50cm/50$^\circ$ from the training trajectory (from less than $60$% to more than $85$%).
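The binning just described can be made precise with a small sketch. We assume poses are represented as 4×4 camera-to-world matrices and use the standard angle between rotation matrices as the rotational distance; these representational choices are assumptions made for the example.

```python
import numpy as np

def pose_distance(pose_a, pose_b):
    """Translational (m) and rotational (deg) difference between two 4x4 poses."""
    trans = np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3])
    cos_angle = (np.trace(pose_a[:3, :3].T @ pose_b[:3, :3]) - 1.0) / 2.0
    rot = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return trans, rot

def novelty_bin(test_pose, train_poses, step_m=0.05, step_deg=5.0, max_bins=20):
    """Assign a test pose to the first bin b such that it lies within b*5cm and
    b*5deg of at least one training pose, as described in the text."""
    dists = [pose_distance(test_pose, T) for T in train_poses]
    for b in range(1, max_bins + 1):
        if any(t <= b * step_m and r <= b * step_deg for t, r in dists):
            return b
    return max_bins + 1   # more novel than any of the explicit bins
```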
A qualitative example of our ability to relocalise from novel poses is shown in Figure \[fig:novelposes-example\]. In the main figure, we show a range of test poses from which we can relocalise in the *Fire* scene, linking them to nearby poses on the training trajectory so as to illustrate their novelty in comparison to poses on which we have trained. The most difficult of these test poses are also shown in the images below alongside their nearby training poses, visually illustrating the significant differences between the two.
Visualising the Forest’s Behaviour {#subsec:forestvisualisation}
----------------------------------
Ultimately, we are able to adapt a forest to a new scene because the split functions that we preserve in the forest’s branch nodes are able to route similar parts of the new scene into the same leaves, regardless of the scene on which the forest was originally trained. In effect, we exploit the observation that the forest is ultimately a way of clustering points in a scene together based on their appearance, in a way that is broadly independent of the scene on which it was pre-trained (we would prefer to cluster points based on their spatial location, but since that information is not available at test time, we rely on appearance as a proxy). Such appearance clustering is very common in the literature, e.g. [@Wang2016] uses a decision forest to perform patch matching for optical flow and stereo, and [@Sattler2015] uses a visual vocabulary to match features with points for relocalisation.
[!t]{}
[.32]{} ![image](fire-colour-000250){width="\linewidth"}
[.32]{} ![image](fire-colour-000300){width="\linewidth"}
[.32]{} ![image](fire-colour-000435){width="\linewidth"}
\
[.32]{} ![image](fire-leaves-000250){width="\linewidth"}
[.32]{} ![image](fire-leaves-000300){width="\linewidth"}
[.32]{} ![image](fire-leaves-000435){width="\linewidth"}
\
[.32]{} ![image](fire-points-000250){width="\linewidth"}
[.32]{} ![image](fire-points-000300){width="\linewidth"}
[.32]{} ![image](fire-points-000435){width="\linewidth"}
\
[.32]{} ![image](fire-gtpoints-000250){width="\linewidth"}
[.32]{} ![image](fire-gtpoints-000300){width="\linewidth"}
[.32]{} ![image](fire-gtpoints-000435){width="\linewidth"}
To illustrate this, we visualise both the leaves and world-space points that the forest predicts for the pixels in three images of the *Fire* sequence from 7-Scenes [@Shotton2013] in Figure \[fig:forestvisualisation\]. As the pixel-to-leaf mapping (second row) shows, the forest (as expected) clusters points with similar appearances together, as can be seen from the fact that many similar-looking points that are adjacent to each other fall into the same leaves. Notably, the forest also manages, to some extent, to predict the same leaves for similar-looking points from different frames, as long as they view the scene from roughly similar viewpoints – e.g. the pixels on the seat of the chair in the first two columns are mostly predicted to fall into the same leaves. Its ability to do so clearly depends on the viewpoint-invariance or otherwise of the features we use (see §\[subsubsec::forestpretraining\]), and indeed when the viewpoint changes more significantly, as in the third column, different leaves can be predicted. However, in practice this is not a problem: there is no need for the forest to predict the same leaves for points in the scene when viewed from different angles, only to predict leaves that represent roughly the same locations in space. In reality, many leaves can occupy the same part of space, and as long as the forest is able to predict one of them, we can still produce suitable correspondences (see third row of Figure \[fig:forestvisualisation\]). This makes our approach highly robust in practice, and explains why good results can be achieved even with the relatively simple features we use.
Is Pre-Training Necessary? {#subsec:nopretraining}
--------------------------
Since the purpose of the forest is purely to cluster scene points based on their appearance (see §\[subsec:forestvisualisation\]), this raises the interesting question of whether pre-training on an actual scene is really necessary in the first place. To explore this, we performed an additional experiment in which we tried replacing the pre-trained forest used for our previous experiments with ones that were entirely randomly generated. We considered forests consisting of $5$ complete binary trees of equal height. As in §\[subsubsec::forestpretraining\], we generated a binary threshold function for each branch node consisting of a feature index $\phi \in [0,256)$ and a threshold $\tau \in \mathbb{R}$. In this case, instead of randomly choosing a feature index, we first randomly decided whether to use a Depth or DA-RGB feature (with a probability $p$ of choosing a depth feature), and then chose a relevant feature index randomly. We empirically fixed the threshold $\tau$ to $0$ (we also tried replicating the distribution of thresholds from a pre-trained forest, but found that it made no difference in practice: this makes sense, since the features we use are signed depth/colour differences, which we would naturally expect to be distributed around $0$). We tuned the height of the forest and the probability $p$ offline using coordinate descent (see §\[subsec:tuning-single\]), with $t_{\max} = 200$ms, keeping all other parameters as the defaults (see Table \[tbl:parameters\]). This tuning process found an optimal height of $14$ and a probability $p$ of $0.4$.
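A minimal sketch of this generation process is shown below. The assumption that DA-RGB features occupy indices $[0,128)$ and Depth features indices $[128,256)$ of the feature vector, and that the height counts edges from the root to a leaf, are ours, made purely for illustration.

```python
import numpy as np

def generate_random_tree(height=14, p_depth=0.4, n_da_rgb=128, n_depth=128, rng=None):
    """Generate the branch nodes of one complete binary tree of the given height.
    Each node stores a feature index phi and a threshold tau, with tau fixed to 0
    as described in the text."""
    rng = rng or np.random.default_rng()
    nodes = []
    for _ in range(2 ** height - 1):                 # internal nodes of a complete tree
        if rng.random() < p_depth:
            phi = n_da_rgb + rng.integers(n_depth)   # pick a Depth feature
        else:
            phi = rng.integers(n_da_rgb)             # pick a DA-RGB feature
        nodes.append({"feature": int(phi), "threshold": 0.0})
    return nodes

# A randomly generated forest of 5 trees, as used in our experiments:
forest = [generate_random_tree() for _ in range(5)]
```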
Our results on both the 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] benchmarks are shown as *Ours (Random)* in Tables \[tbl:comparativeperformance7\] and \[tbl:comparativeperformance12\]. In both cases, we achieve similar-quality results to those of our default relocaliser, at similar speeds. This indicates that pre-training on a real scene is not strictly necessary, and that the appearance-clustering role played by a pre-trained forest can be replaced by alternative approaches without compromising performance.
Conclusion {#sec:conclusion}
==========
In recent years, offline approaches based on using regression to predict 2D-to-3D correspondences [@Shotton2013; @GuzmanRivera2014; @Valentin2015RF; @Brachmann2016; @Massiceti2017; @Meng2017arXiv] have been shown to achieve state-of-the-art camera relocalisation results, but their adoption for online relocalisation in practical systems such as InfiniTAM [@Kaehler2015; @Kaehler2016] has been hindered by the need to train extensively on the target scene ahead of time. In [@Cavallari2017], we showed that it was possible to circumvent this limitation by adapting offline-trained regression forests to novel scenes online. Our adapted forests achieved relocalisation performance on 7-Scenes [@Shotton2013] that was competitive with the offline-trained forests of existing methods, and our approach ran in under $150$ms, making it competitive for practical purposes with fast keyframe-based approaches such as random ferns [@Glocker2015; @Kaehler2016]. Unlike such approaches, we were also much better able to relocalise from novel poses, removing much of the need for the user to move the camera around when relocalising.
In this paper, we have extended this approach to achieve results that comfortably exceed the existing state-of-the-art. In particular, our F$\stackrel{7.5\textup{cm}}{\rightarrow}$S cascade simultaneously beats the current top performer [@Schmidt2017] on 7-Scenes by around $3$% and runs at nearly $20$ FPS. We also achieve near-perfect results on Stanford 4 Scenes [@Valentin2016], beating even the existing high-performing state-of-the-art [@Meng2017arXiv]. Finally, we have shown that it is possible to obtain state-of-the-art results on both datasets without any need at all for offline pre-training on a generic scene, whilst clarifying the role that regression forests play in scene coordinate regression pipelines.
Additional Experiments {#sec:additionalexperiments}
======================
Timing Breakdown {#subsec:timingbreakdown}
----------------
To evaluate the usefulness of our approach for on-the-fly relocalisation in new scenes, we compare several variants of it to the keyframe-based random fern relocaliser implemented in InfiniTAM [@Glocker2015; @Kaehler2016]. To be practical in a real-time system, a relocaliser needs to perform in real time during normal operation (i.e. for online training whilst successfully tracking the scene), and ideally take no more than $200$ms for relocalisation itself (when the system has lost track).
As shown in Table \[tbl:timings\], the random fern relocaliser is fast both for online training and relocalisation, taking only $1.2$ms per frame to update the keyframe database, and $6.8$ms to relocalise when tracking is lost. However, speed aside, the range of poses from which it is able to relocalise is quite limited. By contrast, the variants of our approach, whilst taking longer both for online training and for actual relocalisation, can relocalise from a much broader range of poses, whilst still running at more than acceptable speeds. Indeed, the *Fast* variant of our approach is able to relocalise in under $30$ms, making it competitive with the $6.8$ms taken by random ferns, whilst dramatically outperforming it on relocalisation quality. Our cascade approaches are slower, but achieve even better relocalisation performance, and at $< 70$ms are still more than fast enough for practical use.
A timing breakdown for the default variant of our relocaliser, *Ours (Default)*, is shown in Figure \[fig:timings\]. Notably, a significant proportion of the time it takes is spent on optimising poses during the RANSAC stage of the pipeline (a fact that is later exploited when we tune the parameters for our cascades in §\[sec:parametertuning\] and §\[sec:cascadedesign\]). With hypothesis ranking disabled, the second largest amount of time is spent on generating candidate hypotheses: at least some of the time this takes is likely due to warp divergence on the GPU, since some candidate generation threads are likely to generate acceptable hypotheses before others. With ranking enabled, the ranking itself dominates: this is unsurprising, since it is linear in the number of hypotheses considered.
**Per-Frame Training (ms)** **Relocalisation (ms)**
-------------------------------------------------------------------------------------------------- ----------------------------- -------------------------
Ours (Default) 5.8 128.0
+ ICP 5.8 132.7
+ Ranking 5.8 256.8
Ours (Fast) 11.0 25.1
+ ICP 11.0 29.9
Ours (Cascade F$\stackrel{7.5\textup{cm}}{\rightarrow}$S) 11.0 52.4
Ours (Cascade F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S) 11.1 66.1
Random Ferns [@Glocker2015; @Kaehler2016] 1.2 6.8
: Comparing the typical timings of our approach vs. random ferns during both normal operation and relocalisation. Our approach is slower than random ferns, but achieves dramatically higher relocalisation performance, especially from novel poses. All of our experiments are run on a machine with an Intel Core i7-7820X CPU and an NVIDIA GeForce GTX 1080Ti GPU.[]{data-label="tbl:timings"}
[!t]{}
[.49]{} ![image](timings-withoutranking-crop){height="4cm"}
[.49]{} ![image](timings-withranking-crop){height="4cm"}
--- ----------- ----------- ---------- ----------- ------------ ------------- ------------- ------------ --------------------------
**Chess** **Fire** **Heads** **Office** **Pumpkin** **Kitchen** **Stairs** **Average (all scenes)**
**Chess**      Reloc       99.55%      97.00%      100%       98.85%      80.70%       95.56%        75.60%      92.47%
+ ICP 99.85% 98.75% 100% 99.13% 88.75% 91.32% 78.90% 93.81%
+ Ranking 99.75% 99.15% 100% 99.15% 90.25% 90.02% 95.80% 96.30%
**Fire**       Reloc       99.55%      98.20%      99.80%     98.68%      77.65%       95.14%        75.40%      92.06%
+ ICP 99.85% 99.80% 100% 98.95% 87.80% 91.36% 76.50% 93.47%
+ Ranking 99.80% 100% 100% 99.08% 90.25% 90.18% 94.40% 96.24%
**Heads**      Reloc       99.50%      96.15%      100%       96.80%      76.95%       93.00%        53.30%      87.96%
+ ICP 99.85% 98.60% 100% 98.78% 89.05% 91.26% 59.00% 90.93%
+ Ranking 99.80% 99.00% 100% 98.20% 90.75% 89.72% 86.80% 94.90%
**Office**     Reloc       99.75%      97.35%      100%       99.80%      82.25%       95.64%        79.10%      93.41%
+ ICP 99.85% 99.15% 100% 99.85% 90.00% 91.52% 80.00% 94.34%
+ Ranking 99.95% 99.70% 100% 99.48% 90.85% 90.68% 94.20% 96.41%
**Pumpkin**    Reloc       99.40%      97.00%      99.90%     99.05%      81.95%       94.62%        75.50%      92.49%
+ ICP 99.85% 98.65% 100% 99.83% 90.00% 91.50% 76.00% 93.69%
+ Ranking 99.85% 99.40% 100% 99.28% 91.10% 90.36% 93.50% 96.21%
**Kitchen**    Reloc       99.95%      97.60%      100%       99.55%      80.65%       95.20%        76.30%      92.75%
+ ICP 99.85% 98.95% 100% 99.80% 89.25% 91.42% 78.40% 93.95%
+ Ranking 99.90% 99.60% 100% 99.25% 90.20% 90.74% 94.50% 96.31%
**Stairs**     Reloc       99.55%      97.05%      99.90%     99.00%      80.15%       94.70%        82.70%      93.29%
+ ICP 99.85% 98.60% 100% 99.28% 88.55% 91.10% 84.20% 94.51%
+ Ranking 99.85% 99.15% 100% 99.23% 90.60% 89.94% 96.90% 96.52%
**Average**    Reloc       99.61%      97.19%      99.94%     98.82%      80.04%       94.84%        73.99%      92.06%
+ ICP 99.85% 98.93% 100% 99.37% 89.06% 91.35% 76.14% 93.53%
+ Ranking 99.84% 99.43% 100% 99.10% 90.57% 90.23% 93.73% 96.13%
--- ----------- ----------- ---------- ----------- ------------ ------------- ------------- ------------ --------------------------
------------------------------------------------------------------------------------------- ------------ -------- -------
Successful Failed All
Default 125.0 107.9 124.0
Default (w/ICP) 129.8 114.0 128.9
Default (w/Ranking) 254.5 255.9 254.6
Fast 24.9 24.8 24.9
Fast (w/ICP) 29.8 30.1 30.0
Intermediate 74.6 71.7 73.9
Intermediate (w/ICP) 79.4 76.7 78.8
Slow 77.3 73.5 76.4
Slow (w/Ranking) 202.5 202.3 202.4
Cascade F$\stackrel{5\textup{cm}}{\rightarrow}$I 51.0 104.5 54.8
Cascade F$\stackrel{5\textup{cm}}{\rightarrow}$S 73.0 202.8 81.9
Cascade F$\stackrel{7.5\textup{cm}}{\rightarrow}$S 44.6 148.3 51.4
Cascade F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S 61.0 180.4 69.3
------------------------------------------------------------------------------------------- ------------ -------- -------
: The average times taken to relocalise successful/failed/all frames from the 7-Scenes dataset [@Shotton2013]. ‘Successful’ frames are defined as those frames whose relocalised poses are within $5$cm/$5^\circ$ of the ground truth. Note that unlike the average numbers elsewhere in the paper, which as per common practice were computed by averaging the averages for the different sequences in the dataset, these averages were computed by averaging over the individual frames (this is equivalent to weighting the average of averages by the number of frames in each sequence).[]{data-label="tbl:successfulfailedtimings"}
Successful/Failed Frame Timings
-------------------------------
To better understand how the time taken by our approach to try to relocalise a frame varies depending on whether or not that frame can be successfully relocalised, we timed several different variants of our relocaliser on 7-Scenes [@Shotton2013], and compared the average times taken for just the successful/failed frames to the timing results for all frames (see Table \[tbl:successfulfailedtimings\]). The results indicate that for the non-cascade variants, there is little difference between the times taken for successful/failed frames, which is what we would expect. By contrast, for the cascade variants, the average time taken for failed frames is significantly higher than that for successful ones: this makes sense, since the way in which our cascades work is to try the relocalisers in order, and for failed frames, we end up running the full cascade. For successful frames, we are often able to avoid running the slower relocalisers towards the end of the cascade, as indicated by the fact that the average times for the successful frames with each cascade are much lower than the corresponding average times for the slower relocalisers in the cascade. Moreover, it is notable that the average times for all frames are quite close to those for successful frames, indicating that in practice, most frames are successfully relocalised.
[!t]{}
[.18]{} ![image](missingdepth-10){height="2.5cm"}
[.18]{} ![image](missingdepth-20){height="2.5cm"}
[.18]{} ![image](missingdepth-50){height="2.5cm"}
[.18]{} ![image](missingdepth-80){height="2.5cm"}
[.18]{} ![image](missingdepth-90){height="2.5cm"}
[!t]{}
[.18]{} ![image](noisydepth-0){height="2.5cm"}\
![image](noisymodel-0){height="2.5cm"}
[.18]{} ![image](noisydepth-25){height="2.5cm"}\
![image](noisymodel-25){height="2.5cm"}
[.18]{} ![image](noisydepth-50){height="2.5cm"}\
![image](noisymodel-50){height="2.5cm"}
[.18]{} ![image](noisydepth-75){height="2.5cm"}\
![image](noisymodel-75){height="2.5cm"}
[.18]{} ![image](noisydepth-100){height="2.5cm"}\
![image](noisymodel-100){height="2.5cm"}
Adaptation Performance {#subsec:adaptationperformance}
----------------------
In §\[subsec:headlineperformance\], we evaluated how the performance of forests that had been pre-trained offline on the *Office* sequence from 7-Scenes [@Shotton2013] and then adapted to the target scene compared to that of offline methods that had been trained offline directly on the target scene. Here, we show that very similar results can be obtained by pre-training on *any* of the sequences from 7-Scenes, thus demonstrating that there is nothing specific to *Office* that makes it particularly suitable for pre-training (this is to be expected, since we show in §\[subsec:nopretraining\] that similar results can be obtained without pre-training offline at all).
As mentioned in §\[subsec:headlineperformance\], we use the following testing procedure. First, we pre-train a forest on a generic scene *offline* and remove the contents of its leaves. Next, we adapt the forest by feeding it new examples from a training sequence captured on the scene of interest: this runs *online* at frame rate. Finally, we test the adapted forest by using it to relocalise from every frame of a separate testing sequence captured on the scene of interest.
As shown in Table \[tbl:adaptationperformance\], in all cases the results are very accurate. Whilst there are certainly some variations in the performance achieved by adapted forests pre-trained on different scenes (in particular, the forest trained on the *Heads* sequence from the dataset, which is very simple, is slightly worse), the differences are not profound: in particular, relocalisation performance seems to be more tightly coupled to the difficulty of the scene of interest than to the scene on which the forest was pre-trained. Notably, all of our adapted forests achieve results that are comparable (and in many cases superior) to those of state-of-the-art *offline* methods (see Table \[tbl:comparativeperformance7\]).
Robustness to Missing/Noisy Depth
---------------------------------
To evaluate our approach’s robustness to missing/noisy depth, we performed two sets of experiments, one in which we randomly masked out various percentages of the depth images, and another in which we corrupted all of the depth values in the images with zero-mean, depth-dependent Gaussian noise, with various standard deviations.[^4] Our expectation was that our relocaliser would be relatively robust to missing depth, since only a small number of reliable correspondences are needed to accurately estimate the pose, but that it might be more sensitive to noisy depth, since we rely on having reasonably accurate world space points in the leaves of the forest in order to correctly relocalise.
### Missing Depth {#subsubsec:missingdepth}
For the missing depth experiment, we evaluated how the performance of our *Default* relocaliser on 7-Scenes [@Shotton2013] varied for different levels of missing depth, ranging from 10% up to 95%. To mask out a fraction $p \in [0,1]$ of the pixels in a depth image, we uniformly sampled a real number $r_i \in [0,1]$ for each pixel in the image, and set the pixel to zero iff $r_i \le p$. Examples of the different levels of missing depth involved can be seen in Figure \[fig:missingdepth-examples\].
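The masking procedure is simple enough to state as a few lines of NumPy (a sketch; zero is assumed to denote an invalid depth value):

```python
import numpy as np

def mask_depth(depth, p, rng=None):
    """Randomly mask out a fraction p of the pixels in a depth image."""
    rng = rng or np.random.default_rng()
    r = rng.random(depth.shape)   # r_i ~ U(0,1) for every pixel
    masked = depth.copy()
    masked[r <= p] = 0.0          # zero marks the pixel as invalid
    return masked
```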
The results are shown in Figure \[fig:missingdepth-results\]. As expected, they demonstrate that our relocaliser is relatively robust to missing depth: the average pre-ICP relocalisation performance remains above $85\%$ even with $70\%$ of the depth values missing, and the average post-ICP performance is even more robust, remaining over $85\%$ even with $90\%$ of the depth values missing (this is to be expected, since even if the pre-ICP relocaliser itself performs a bit worse, it only has to return an initial pose that falls within the ICP convergence basin to allow ICP to succeed). The performance does start to decrease more significantly when $95$% of the depth values are missing: this is most likely because by that stage, there are far fewer remaining points, and it becomes harder to find the reliable correspondences needed. Nevertheless, even at that stage, our post-ICP performance remains over $80\%$.
### Noisy Depth {#subsubsec:noisydepth}
![The performance of our *Default* relocaliser on 7-Scenes [@Shotton2013] for different levels of missing depth (see §\[subsubsec:missingdepth\]).[]{data-label="fig:missingdepth-results"}](missingdepth-results){width="\linewidth"}
![The performance of our *Default* relocaliser on 7-Scenes [@Shotton2013] for different levels of zero-mean, depth-dependent Gaussian noise (see §\[subsubsec:noisydepth\]).[]{data-label="fig:noisydepth-results"}](noisydepth-results){width="\linewidth"}
For the noisy depth experiment, we evaluated how the performance of our *Default* relocaliser on 7-Scenes [@Shotton2013] varied when we added different levels of zero-mean, depth-dependent Gaussian noise to the depth images. We considered Gaussians with several different $\sigma$ values, as shown in Figure \[fig:noisydepth-examples\]. To add depth-dependent noise to a depth image for a given $\sigma$ value, we sampled a value $n_i \sim \mathcal{N}(0,\sigma^2)$ for each pixel in the image, and then replaced the pixel’s depth value $d_i$ with $d_i + n_i \times d_i$. Examples of the effect this has for different $\sigma$ values can be seen in Figure \[fig:noisydepth-examples\].
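Equivalently, in NumPy (again a sketch, assuming zero denotes invalid depth, which we leave untouched):

```python
import numpy as np

def add_depth_noise(depth, sigma, rng=None):
    """Corrupt a depth image with zero-mean, depth-dependent Gaussian noise:
    d_i -> d_i + n_i * d_i, where n_i ~ N(0, sigma^2)."""
    rng = rng or np.random.default_rng()
    n = rng.normal(0.0, sigma, depth.shape)
    noisy = depth + n * depth     # the noise scales with the depth itself
    noisy[depth <= 0] = 0.0       # keep invalid pixels invalid
    return noisy
```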
As expected, the results in Figure \[fig:noisydepth-results\] indicate that our method is rather more sensitive to noisy depth than it was to missing depth (see §\[subsubsec:missingdepth\]). In particular, it seems reasonably tolerant of a small amount of depth noise ($\sigma = 0.025$, i.e. a standard deviation of $2.5$cm at $1$m), but its performance degrades much more significantly for larger amounts of noise. This makes sense, since our method needs to find reasonably accurate correspondences between points in camera space and world space in order to relocalise, and thus the world space points we add to the leaves of the forest at adaptation time need to be reasonably accurate. If the depth is too noisy, these points are likely to be inaccurate, leading to much worse relocalisation performance.
Notably, our relocaliser’s post-ICP performance drops sharply as $\sigma$ increases, whereas its pre-ICP performance degrades much more gracefully. There are two main reasons for this: (i) ICP is much more sensitive to outlying points than the RANSAC stage of our pipeline (which, like all RANSAC-based approaches, explicitly aims to exclude outliers from consideration), and (ii) as $\sigma$ increases, it becomes increasingly difficult for InfiniTAM to fuse a high-quality 3D model to which ICP can register the current depth image (see Figure \[fig:noisydepth-examples\]). To mitigate this latter problem, we used a larger voxel size of $2$cm for this experiment (without which, InfiniTAM is unable to fuse a reasonable model at all for high levels of depth noise), but even with this change, the model quality notably decreases for high values of $\sigma$. Based on these results, we thus recommend disabling ICP when the depth is anticipated to be particularly noisy.
Usefulness of DA-RGB Features {#subsec:rgbfeatures}
-----------------------------
**DA-RGB + Depth** **DA-RGB Only** **Depth Only**
----------------------- --------------------- --------------------- ---------------------
Chess 99.85% 99.65% 99.75%
Fire 98.50% 99.00% 97.15%
Heads 100% 100% 99.90%
Office 99.10% 98.95% 97.80%
Pumpkin 89.50% 90.20% 80.90%
Kitchen 90.32% 89.50% 85.26%
Stairs 77.80% 68.90% 68.70%
**Average** 93.58% 92.31% 89.92%
**Avg. Median Error** 0.013m/1.16$^\circ$ 0.013m/1.17$^\circ$ 0.013m/1.17$^\circ$
: Comparing the post-ICP performance on 7-Scenes [@Shotton2013] of three variants of our relocaliser based on randomly-generated forests (see §\[subsec:nopretraining\]) with different sets of features. ‘DA-RGB + Depth’ is the same as ‘Ours (Random)’, i.e. a randomly-generated forest that uses both Depth-Adaptive RGB features and Depth features. ‘DA-RGB Only’ is a randomly-generated forest that uses only Depth-Adaptive RGB features. ‘Depth Only’ is a randomly-generated forest that uses only Depth features.[]{data-label="tbl:rgbfeatures"}
To evaluate the usefulness of the Depth-Adaptive RGB (‘DA-RGB’) features we describe in §\[subsubsec::forestpretraining\], we compared the post-ICP performance on 7-Scenes [@Shotton2013] of three variants of our relocaliser that are based on randomly-generated forests (see §\[subsec:nopretraining\]) with different sets of features. Specifically, we randomly generated three forests – one (also shown as ‘Ours (Random)’ in Tables \[tbl:comparativeperformance7\] and \[tbl:comparativeperformance12\]) based on feature vectors containing $128$ DA-RGB features and $128$ Depth features, another based on feature vectors containing $256$ DA-RGB features, and a final one based on feature vectors containing $256$ Depth features. For the first forest, we randomly chose to split each branch node based on a Depth rather than a DA-RGB feature with probability $p = 0.4$, as per §\[subsec:nopretraining\]; for the other forests, $p$ was irrelevant, since all of the features were of one type or the other. We used a tree height of $14$ in each case.
The results are shown in Table \[tbl:rgbfeatures\]. For most sequences, we found that using DA-RGB features was superior to using Depth features alone. Moreover, we found that a combination of both DA-RGB and Depth features performed best overall, particularly on hard sequences like *Stairs*. Based on these results, we recommend using a combination of both types of feature, rather than either of the two alone.
Outdoor Relocalisation {#subsec:outdoorrelocalisation}
----------------------
------------------------------------------ ---------------------------- ------------------- ---------------------------- ---------------------------- ---------------------------- -------------------
**Kings College** **Street** **Old Hospital** **Shop Façade** **St. Mary’s Church** **Great Court**
5600m$^2$ 50000m$^2$ 2000m$^2$ 875m$^2$ 4800m$^2$ 8000m$^2$
Ours (Default) / (37.03%) – /0.37$^\circ$ (35.17%) / (60.19%) / (42.45%) –
+ ICP [****]{}/[****]{} (76.09%) – [****]{}/ (74.73%) [****]{}/[****]{} (97.09%) [****]{}/[****]{} (77.74%) –
+ Ranking [****]{}/[****]{} (76.97%) – [****]{}/[****]{} (82.97%) [****]{}/[****]{} (99.03%) [****]{}/[****]{} (79.62%) –
PoseNet (Geom. Loss) [@Kendall2017] 0.99m/1.1$^\circ$ / 2.17m/2.9$^\circ$ 1.05m/4.0$^\circ$ 1.49m/3.4$^\circ$ 7.00m/3.7$^\circ$
Active Search (SIFT) [@Sattler2017] 0.42m/0.6$^\circ$ [****]{}/[****]{} 0.44m/1.0$^\circ$ 0.12m/0.4$^\circ$ 0.19m/0.5$^\circ$ –
DSAC (RGB Training) [@Brachmann2017CVPR] \*0.30m/0.5$^\circ$ – 0.33m/0.6$^\circ$ 0.09m/0.4$^\circ$ \*0.55m/1.6$^\circ$ /
DSAC++ [@Brachmann2018CVPR] 0.18m/0.3$^\circ$ – 0.20m/0.3$^\circ$ 0.06m/ 0.13m/ [****]{}/[****]{}
------------------------------------------ ---------------------------- ------------------- ---------------------------- ---------------------------- ---------------------------- -------------------
[!t]{}
[.47]{} ![image](street){width="\linewidth"}
[.47]{} ![image](greatcourt){width="\linewidth"}
Whilst our RGB-D relocaliser was primarily designed with indoor relocalisation in mind, it can also be used to relocalise outdoors. To show this, we evaluated its performance on the Cambridge Landmarks dataset [@Kendall2015; @Kendall2016; @Kendall2017], which consists of a number of outdoor scenes at much larger scales[^5] than those in either 7-Scenes [@Shotton2013] or Stanford 4 Scenes [@Valentin2016]. Whilst Cambridge Landmarks was originally designed for RGB-only relocalisation, depth images for each sequence can be constructed by rendering the 3D models provided with each scene as part of the dataset. For consistency with other works, we used the depth images rendered by Brachmann and Rother [@Brachmann2018CVPR] for this purpose.[^6]
The results in Table \[tbl:cambridgeresults\] show how our approach compares to the best existing methods that also make use of the 3D models provided with Cambridge Landmarks. Encouragingly, we achieve state-of-the-art results on four out of the six scenes on which we tested, showing that our approach has the potential to be effective for outdoor relocalisation. However, like some of the other methods in the table, our approach was unable to successfully relocalise in the remaining two scenes (*Street* and *Great Court*), owing to the significantly greater scales involved. As shown in Figure \[fig:cambridgescenes\](a), *Street* covers a $500$m $\times$ $100$m area [@Kendall2015], which is an order of magnitude greater than *Kings College* (the largest scene in which our method was able to successfully relocalise). To date, only Active Search [@Sattler2017], which is based on SIFT, has been able to achieve reasonable relocalisation results in this scene. For *Great Court*, the problem is not the overall scale of the scene, but that the camera sequences traverse the centre of a large quadrangle in Cambridge, such that most of the scene geometry is far away from the camera (see Figure \[fig:cambridgescenes\](b)). Our approach relies on reasonably accurate depth values (see §\[subsubsec:noisydepth\]), yet depth inaccuracy tends to increase with distance; it also needs the points in each leaf to be sufficiently close together to allow them to be clustered, yet the points in an RGB-D image become sparser at greater distances due to perspective. It is therefore unsurprising that our method struggles in this case. Methods like DSAC++ [@Brachmann2018CVPR] have a much better chance of working for scenes like *Great Court*, because they train a network to predict the pose end-to-end on the target scene, and so are less dependent on the accuracy of the correspondences they predict as an intermediate step. However, their method, unlike ours, requires offline training on the target scene.
Overall, our initial results on Cambridge Landmarks suggest that our approach is already effective for moderately-sized outdoor scenes in which the average distance from the camera to the nearest scene geometry is not extreme. To make it also work effectively on city-scale scenes like *Street*, we could in future consider initially using a coarse relocaliser to determine a particular area of the scene, and then using an instance of our relocaliser to yield an accurate pose within that area. However, extending our approach in this way is beyond the scope of this paper.
Parameter Tuning {#sec:parametertuning}
================
**Name** **Category** **$\theta^*$ (Default)** **$\theta_1^*$ (Fast)** **$\theta_2^*$ (Intermediate)** **$\theta_3^*$ (Slow)**
-------------------------------------------------- -------------- -------------------------- ------------------------- --------------------------------- -------------------------
clustererSigma Forest 0.1 0.1 0.1 0.1
clustererTau Forest 0.05 0.2 0.2 0.2
maxClusterCount ($M_{\max}$) Forest 50 50 50 50
minClusterSize Forest 20 5 5 5
reservoirCapacity ($\kappa$) Forest 1024 2048 2048 2048
maxCandidateGenerationIterations RANSAC 6000 500 1000 250
maxPoseCandidates ($N_{\max}$) RANSAC 1024 2048 2048 2048
maxPoseCandidatesAfterCull ($N_{\textup{cull}}$) RANSAC 64 64 64 64
maxTranslationErrorForCorrectPose RANSAC 0.05 0.05 0.1 0.1
minSquaredDistanceBetweenSampledModes RANSAC 0.09 0 0.09 0.0225
poseUpdate RANSAC True False True True
ransacInliersPerIteration ($\eta$) RANSAC 512 256 256 256
usePredictionCovarianceForPoseOptimization RANSAC True N/A False False
maxRelocalisationsToOutput RANSAC 16 1 1 16
The parameters associated with our relocaliser are shown in Table \[tbl:parameters\]. Our goal when tuning is to find a set of values for them that let us accurately relocalise as many frames as possible, whilst staying within a fixed time bound to allow our relocaliser to be used in interactive contexts. To achieve this, we start by defining the following cost function, which computes a cost for running relocaliser $r$ on sequence $s$ in the context of a desired time bound $t_{\max}$: $$\mathit{cost}(r,s,t_{\max}) = \begin{cases}
(1 - \mathit{score}(r,s))^2 & \mbox{if } \mathit{time}(r,s) \le t_{\max} \\
\infty & \mbox{otherwise}
\end{cases}$$ In this, $\mathit{score}(r,s) \in [0,1]$ yields the fraction of the frames in $s$ that are correctly relocalised by $r$ to within $5$cm/$5^\circ$ of the ground truth, and $\mathit{time}(r,s)$ yields the average time that $r$ takes to run on a single frame of $s$.
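In code, the cost function amounts to the following sketch; the `results` mapping from a sequence name to its (score, average frame time) pair is a hypothetical interface introduced for the example:

```python
import math

def relocaliser_cost(score, avg_time_ms, t_max_ms):
    """Cost of running a relocaliser on one sequence: squared shortfall from a
    perfect score if the time bound is met, otherwise infinite."""
    return (1.0 - score) ** 2 if avg_time_ms <= t_max_ms else math.inf

def total_cost(results, t_max_ms):
    """Sum the per-sequence costs; `results` maps each tuning sequence to its
    (score, average frame time in ms) pair."""
    return sum(relocaliser_cost(s, t, t_max_ms) for s, t in results.values())
```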
Tuning a Single Relocaliser {#subsec:tuning-single}
---------------------------
To tune a single relocaliser, we choose a time bound $t_{\max}$ and then use the implementation of coordinate descent [@Wright2015] in the open-source SemanticPaint framework [@Golodetz2015SPTR] to find the parameters $\theta^*$ that minimise $$\theta^* = {\operatornamewithlimits{argmin}}_{\theta} \sum_{s \in \mathit{Sequences}} \mathit{cost}\left(\mathit{reloc}(\theta),s,t_{\max}\right),$$ in which $\mathit{reloc}(\theta)$ denotes a variant of the relocaliser with parameters $\theta$. Any suitable set of sequences can be used for tuning; in our case, we took the training sequences from the well-known 7-Scenes dataset [@Shotton2013], split them into training and validation subsets[^7], and tuned on the validation subsets. (More precisely, for each scene, we first adapt the relocaliser by refilling its leaves with points from the training sequence for the scene, as per §\[subsubsec::forestadaptation\], and then evaluate the cost on the validation sequence for the scene.)
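For illustration, a greedy coordinate descent over a discrete parameter grid might look as follows. This is a sketch only: the optimiser we actually use is the one provided by the SemanticPaint framework, and `evaluate(params)` stands in for the summed validation cost described above.

```python
def coordinate_descent(param_grid, evaluate, max_sweeps=10):
    """Repeatedly sweep over the parameters, keeping any single-parameter change
    that lowers the cost returned by `evaluate(params)`."""
    params = {name: values[0] for name, values in param_grid.items()}
    best_cost = evaluate(params)
    for _ in range(max_sweeps):
        improved = False
        for name, values in param_grid.items():
            for v in values:
                candidate = dict(params, **{name: v})
                cost = evaluate(candidate)
                if cost < best_cost:
                    params, best_cost, improved = candidate, cost, True
        if not improved:
            break   # no single-parameter change helps any more
    return params, best_cost
```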
Tuning a Relocalisation Cascade {#subsec:tuning-cascade}
-------------------------------
Tuning a relocalisation cascade is more involved, since we need to tune not only the parameters for each individual relocaliser in the cascade, but also the depth-difference thresholds used to decide when to fall back from one relocaliser to the next in the sequence. Formally, let $\Theta = \{\theta_1,\ldots,\theta_N\}$ be the sets of parameters for the $N$ individual relocalisers in an $N$-stage relocalisation cascade, and let $\mathcal{T} = \{\tau_1,\ldots,\tau_{N-1}\}$ be the depth-difference thresholds used to decide when to fall back from one relocaliser to the next in the sequence. Then we could in principle cast the problem as $$(\Theta^*,\mathcal{T}^*) = {\operatornamewithlimits{argmin}}_{(\Theta,\mathcal{T})} \sum_{s \in \mathit{Sequences}} \mathit{cost}(\mathit{cascade}(\Theta,\mathcal{T}),s,t_{\max}),$$ in which $\mathit{cascade}(\Theta,\mathcal{T})$ denotes a cascade with parameters $\Theta$ for the individual relocalisers and thresholds $\mathcal{T}$ to decide when to fall back from one relocaliser to the next. However, this has the disadvantage of treating the parameters for each relocaliser as completely independent of each other, whereas in reality we can significantly reduce the memory our approach needs by making all of the individual relocalisers in the cascade share the same regression forest.
To achieve this, we first observe that of the parameters in Table \[tbl:parameters\], only those in the *Forest* category are associated with the forest itself, whilst those in the *RANSAC* category can be varied independently of the forest. We can therefore take the following approach to tuning a cascade:
1. Choose $N$, the number of relocalisers to use in the cascade, and time bounds $t_{\max}^{(1)} > \ldots > t_{\max}^{(N)}$ for them. (See §\[sec:cascadedesign\] for how we chose these parameters.)
2. For each individual relocaliser $i$, divide its parameters $\theta_i$ into shared ones associated with the forest ($\phi$) and independent ones associated with RANSAC ($\rho_i$): $$\theta_i = \phi \cup \rho_i$$ Then, tune all of the parameters of the fastest relocaliser to jointly find suitable parameters for the forest and RANSAC, before fixing the optimised forest parameters $\phi^*$ for all individual relocalisers in the cascade (we tune the forest parameters on the fastest relocaliser since it is impossible to get really fast relocalisation just by tuning the RANSAC parameters):
$$\theta_1^* = {\operatornamewithlimits{argmin}}_{\theta} \sum_{s \in \mathit{Sequences}} \mathit{cost}\left(\mathit{reloc}(\theta),s,t_{\max}^{(1)}\right)$$
3. Next, tune only the RANSAC parameters of all the other relocalisers in the cascade, i.e. for $i > 1$:
$$\rho_i^* = {\operatornamewithlimits{argmin}}_{\rho} \sum_{s \in \mathit{Sequences}} \mathit{cost}\left(\mathit{reloc}(\phi^* \cup \rho),s,t_{\max}^{(i)}\right)$$
4. Finally, having determined optimised parameters $\Theta^*$ for all the relocalisers in the cascade, tune the depth-difference thresholds between them by choosing a maximum average time bound $t_{\max}$ and minimising:
$$\mathcal{T}^* = {\operatornamewithlimits{argmin}}_{\mathcal{T}} \sum_{s \in \mathit{Sequences}} \mathit{cost}\left(\mathit{cascade}(\Theta^*,\mathcal{T}),s,t_{\max}\right)$$
The result of this process is a relocalisation cascade that takes no longer than $t_{\max}$ to relocalise on average. Provided we do not choose an overall time bound that is so low that it forces the cascade to always accept the results of the first relocaliser, this can also result in excellent average-case relocalisation performance (as we show in Table \[tbl:comparativeperformance7\]).
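To make step 4 above concrete, a minimal exhaustive search over the fall-back thresholds could be sketched as follows; `evaluate_cascade` and the example threshold values are assumptions made purely for illustration.

```python
import itertools
import math

def tune_thresholds(candidate_thresholds, evaluate_cascade, t_max_ms):
    """Exhaustively search the N-1 fall-back thresholds of an N-stage cascade.
    `candidate_thresholds[i]` lists the values to try for the i-th threshold;
    `evaluate_cascade(thresholds)` runs the cascade on the tuning sequences and
    returns {sequence: (score, average frame time in ms)}."""
    def cost(results):
        return sum((1.0 - s) ** 2 if t <= t_max_ms else math.inf
                   for s, t in results.values())
    best_thresholds, best_cost = None, math.inf
    for thresholds in itertools.product(*candidate_thresholds):
        c = cost(evaluate_cascade(list(thresholds)))
        if c < best_cost:
            best_thresholds, best_cost = list(thresholds), c
    return best_thresholds, best_cost

# e.g. for a two-stage cascade, try fall-back thresholds of 5cm, 7.5cm and 10cm:
# tune_thresholds([[0.05, 0.075, 0.10]], evaluate_cascade, t_max_ms=100)
```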
Cascade Design {#sec:cascadedesign}
==============
**Chess** **Fire** **Heads** **Office** **Pumpkin** **Kitchen** **Stairs** **Average** **Avg. Median Error** **Frame Time (ms)**
----------------------------------------------------------------------------------- ----------- ---------- ----------- ------------ ------------- ------------- ------------ ------------- ----------------------- ---------------------
Fast 65.35% 47.55% 72.00% 56.10% 47.15% 47.60% 18.40% 50.59% 0.058m/2.44$^\circ$ 25.06
Fast (w/ICP) 99.75% 97.10% 98.40% 99.55% 89.35% 89.26% 62.40% 90.83% 0.014m/1.17$^\circ$ 29.91
Intermediate 86.60% 81.25% 69.00% 88.28% 67.20% 75.48% 53.20% 74.43% 0.033m/1.91$^\circ$ 73.12
Intermediate (w/ICP) 99.85% 98.90% 99.70% 99.73% 89.85% 90.34% 75.00% 93.34% 0.013m/1.17$^\circ$ 77.87
Slow 86.40% 81.25% 68.00% 88.70% 67.15% 75.42% 53.40% 74.33% 0.033m/1.95$^\circ$ 77.78
Slow (w/Ranking) 99.90% 100% 100% 99.55% 90.80% 89.38% 94.10% 96.25% 0.013m/1.17$^\circ$ 203.64
F$\stackrel{5\textup{cm}}{\rightarrow}$I 99.85% 99.40% 99.70% 99.70% 90.65% 89.68% 76.00% 93.57% 0.013m/1.17$^\circ$ 51.85
F$\stackrel{5\textup{cm}}{\rightarrow}$S 99.90% 99.50% 99.80% 99.45% 90.50% 89.42% 91.80% 95.77% 0.013m/1.17$^\circ$ 77.20
F$\stackrel{7.5\textup{cm}}{\rightarrow}$S 99.90% 98.95% 99.90% 99.48% 90.95% 89.34% 86.10% 94.95% 0.013m/1.17$^\circ$ 52.45
F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S 99.85% 99.40% 99.90% 99.40% 90.85% 89.64% 89.80% 95.55% 0.013m/1.17$^\circ$ 66.08
**Sequence** **Fast (w/ICP)** **Intermediate (w/ICP)** **Slow (w/Ranking)** **F$\stackrel{5\textup{cm}}{\rightarrow}$I** **F$\stackrel{5\textup{cm}}{\rightarrow}$S** **F$\stackrel{7.5\textup{cm}}{\rightarrow}$S** **F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S**
----------------------- --------------------- -------------------------- ---------------------- ---------------------------------------------- ---------------------------------------------- ------------------------------------------------ ---------------------------------------------------------------------------------------
Kitchen 100% 99.72% 100% 100% 100% 100% 100%
Living 99.80% 100% 100% 100% 100% 100% 100%
Bed 98.53% 98.53% 100% 99.51% 100% 100% 100%
Kitchen 100% 100% 100% 100% 100% 99.52% 100%
Living 100% 100% 100% 100% 100% 100% 100%
Luke 99.04% 99.20% 99.20% 99.20% 99.20% 99.20% 99.04%
Floor5a 97.99% 99.00% 98.99% 99.60% 99.80% 99.60% 100%
Floor5b 98.77% 98.77% 100% 99.01% 98.77% 99.01% 99.26%
Gates362 100% 100% 100% 100% 100% 100% 100%
Gates381 100% 100% 100% 100% 100% 99.91% 100%
Lounge 100% 100% 100% 100% 100% 100% 100%
Manolis 100% 100% 99.88% 100% 100% 100% 100%
**Average** 99.51% 99.60% 99.84% 99.78% 99.81% 99.77% 99.86%
**Avg. Median Error** 0.007m/0.26$^\circ$ 0.007m/0.26$^\circ$ 0.007m/0.26$^\circ$ 0.007m/0.26$^\circ$ 0.007m/0.26$^\circ$ 0.007m/0.26$^\circ$ 0.007m/0.26$^\circ$
**Frame Time (ms)** 29.69 72.32 171.66 32.71 33.49 32.83 33.00
**Max. Candidate Generation Iterations** **Average** **Frame Time (ms)**
------------------------------------------ ------------- ---------------------
250 85.47% 179.38
500 85.37% 182.54
1000 85.39% 184.48
: The average results of our *Slow* relocaliser (with ranking enabled) on our validation subset of 7-Scenes [@Shotton2013], for various different settings of the `maxCandidateGenerationIterations` parameter. In practice, we found that this parameter made very little difference to the results.[]{data-label="tbl:candidategenerationiterations"}
We had two goals when designing our cascades:
- Obtain $\ge 85$% accuracy for all sequences in both the 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] datasets.
- Relocalise in under $t_{\max} = 100$ms on average (for a frame rate of at least $10$ FPS), amortised across the entire dataset in each case.
Since the fastest relocalisers we were able to tune took around $30$ms to relocalise, there was a practical upper-bound of $N \le 3$ on the size of the cascades that we could use to meet the $100$ms time bound. We therefore chose to initially tune three individual relocalisers with $t_{\max}^{(1)} = 50\mbox{ms}$, $t_{\max}^{(2)} = 100\mbox{ms}$ and $t_{\max}^{(3)} = 200\mbox{ms}$, and then combine them into various possible cascades with $N = 2$ or $N = 3$ (tuning the thresholds separately in each case) to see if adding in a third relocaliser was worthwhile, or whether two relocalisers was enough.
The parameters that our tuning found for the individual relocalisers (which we call *Fast*, *Intermediate* and *Slow*) are shown in Table \[tbl:parameters\], and their results on 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] are shown in Tables \[tbl:cascadestages7\] and \[tbl:cascadestages12\]. Several interesting observations can be made from Table \[tbl:parameters\]. Firstly, it is noticeable that the *Fast* relocaliser disables continuous pose optimisation, which is fairly costly in practice. In principle, we would expect this to have a significant negative effect on performance, and indeed (as can be seen in Table \[tbl:cascadestages7\]) this is the case prior to running ICP. However, the post-ICP results are actually relatively good, indicating that even without the pose optimisation, our approach is able to relocalise well enough to get into ICP’s basin of convergence (intuitively, refining the final pose with ICP after the fact has a similar overall effect to optimising the pose hypotheses during RANSAC). Secondly, none of the tuned relocalisers make use of the covariance information in the leaves during pose optimisation, indicating that it may be possible to avoid storing it in practice. This is a potentially important observation for future work, since storing the covariance has a significant memory cost. Finally, it is noticeable that the *Slow* relocaliser only attempts to generate a pose candidate on each thread at most $250$ times, whereas the other relocalisers all try much harder to generate the initial candidates. In principle, we might hope for the performance to be slightly better for higher `maxCandidateGenerationIterations` values, but in practice we found that for *Slow*, it made little actual difference to our results (see Table \[tbl:candidategenerationiterations\]), indicating that most threads do not actually need more than $250$ iterations to generate a candidate. It is also worth mentioning that our optimiser was explicitly designed to find sets of parameters that perform well within a particular time bound, in order to relocalise quickly on average, and so even if the performance had been slightly better for higher values of this parameter, it would still have been naturally inclined to focus on parameters that make a significant difference to performance (e.g. `maxRelocalisationsToOutput`) at the expense of more minor parameters such as this one.
[!t]{}
[.49]{} ![image](7scenes-columns-crop){height="5.5cm"}
[.49]{} ![image](12scenes-columns-crop){height="5.5cm"}
\
[.49]{} ![image](7scenes-surface-crop){height="5.5cm"}
[.49]{} ![image](12scenes-surface-crop){height="5.5cm"}
Having tuned the individual relocalisers, we then used them to construct and tune three different types of cascade: (i) *Fast* $\rightarrow$ *Intermediate*, (ii) *Fast* $\rightarrow$ *Slow*, and (iii) *Fast* $\rightarrow$ *Intermediate* $\rightarrow$ *Slow*. We hypothesised that relocalisers of type (i) might be fast but have performance that was limited by that of the *Intermediate* relocaliser, that those of type (ii) might achieve good results but be slow (since they might be forced to run the slow relocaliser on only moderately hard frames), and that those of type (iii) might be best overall. For all types, our tuning process found that a depth-difference threshold of $5$cm was a good choice for falling back from *Fast* to *Intermediate*, and that $7.5$cm was a good choice for falling back from *Intermediate* to *Slow*. For falling back from *Fast* to *Slow*, the tuning proposed $5$cm as a good threshold, but we decided to also try $7.5$cm (the proposed *Intermediate* to *Slow* threshold) to see if an interesting alternative balance between accuracy and speed could be achieved.
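To make the role of these thresholds concrete, the following is a minimal sketch of the fallback logic of a cascade at test time; the relocaliser callables, the `render_depth` raycasting function and the exact acceptance rule shown here are illustrative placeholders rather than the actual implementation:

```python
import numpy as np

def mean_depth_difference(rendered_depth, live_depth):
    """Mean absolute depth difference (metres) over pixels valid in both images."""
    valid = (rendered_depth > 0) & (live_depth > 0)
    if not np.any(valid):
        return np.inf
    return float(np.mean(np.abs(rendered_depth[valid] - live_depth[valid])))

def cascade_relocalise(relocalisers, thresholds, rgb, live_depth, render_depth):
    """Run each relocaliser in turn, accepting the first pose whose synthetic
    depth raycast is close enough to the live depth image.

    relocalisers: list of callables, e.g. [fast, intermediate, slow]
    thresholds:   depth-difference thresholds (metres) between successive stages,
                  e.g. [0.05, 0.075]; the final stage's estimate is always kept
    render_depth: callable that raycasts a depth image from the 3D model at a pose
    """
    pose = None
    for reloc, threshold in zip(relocalisers, thresholds + [np.inf]):
        pose = reloc(rgb, live_depth)
        if pose is None:
            continue
        if mean_depth_difference(render_depth(pose), live_depth) <= threshold:
            return pose        # good enough: accept and stop early
    return pose                # fell through every stage: return the last estimate
```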
We evaluated all of these cascades on the 7-Scenes [@Shotton2013] and Stanford 4 Scenes [@Valentin2016] datasets. On Stanford 4 Scenes (see Table \[tbl:cascadestages12\]), all four cascades we tested achieved almost perfect results, which we believe can be attributed to the relatively straightforward nature of the underlying dataset (see §\[sec:datasetanalysis\]), leading to all four cascades choosing to run the fast relocaliser on almost all frames. The results on 7-Scenes (see Table \[tbl:cascadestages7\]) are more interesting. In particular, we found that, as expected, our F$\stackrel{5\textup{cm}}{\rightarrow}$I cascade was relatively fast, but was unable to achieve high-quality results on all sequences. Also as expected, our 3-stage cascade (F$\stackrel{5\textup{cm}}{\rightarrow}$I$\stackrel{7.5\textup{cm}}{\rightarrow}$S) was generally preferable to F$\stackrel{5\textup{cm}}{\rightarrow}$S, with similar accuracy and a higher frame rate, as a result of its ability to use the *Intermediate* relocaliser on only moderately difficult frames. Interestingly, our hand-tuned relocaliser (F$\stackrel{7.5\textup{cm}}{\rightarrow}$S) was also good, achieving acceptable results on all sequences whilst running at almost the speed of F$\stackrel{5\textup{cm}}{\rightarrow}$I.
Overall, we believe that the best cascade to choose most likely depends on the application at hand. All four cascades achieved accurate relocalisation at a high frame rate. The differences between them were most exposed by the *Stairs* sequence from 7-Scenes [@Shotton2013], which is a notoriously difficult sequence that most approaches have struggled to cope with. For most real-world uses, the differences between our cascades are most likely small enough that they can be ignored in practice.
Dataset Analysis {#sec:datasetanalysis}
================
[!t]{}
![image](alignment-7scenes){width=".9\linewidth"}
\
![image](alignment-12scenes){width=".9\linewidth"}
To better understand the near-perfect results of all four of our cascades on the Stanford 4 Scenes [@Valentin2016] dataset (see §\[sec:cascadedesign\]), we analysed the proportions of test frames from both 7-Scenes [@Shotton2013] and Stanford 4 Scenes that are within certain distances of the training trajectories. The results, as shown in Figure \[fig:datasetanalysis\], help explain why our approach invariably achieves such good results on Stanford 4 Scenes, whilst not yet fully saturating the more difficult 7-Scenes benchmark. In particular, it is noticeable that in Stanford 4 Scenes, the vast majority of the test frames fall within 30cm/30$^\circ$ of the training trajectory, whilst in 7-Scenes, far more of the test frames are at a much greater distance, particularly those from the *Fire* sequence. This makes 7-Scenes a far harder benchmark in practice: test frames that are near the training trajectory are much easier to relocalise, since there is then less need to match keypoints across scale/viewpoint changes. In Stanford 4 Scenes, almost all of the sequences are dominated by test frames that are around 10-15cm/$^\circ$ from the training trajectory, making it an easy dataset to saturate.
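For reference, the following minimal sketch shows one way of computing such trajectory distances from $4\times4$ camera poses; the function names and the 30cm/30$^\circ$ defaults are merely illustrative:

```python
import numpy as np

def pose_distance(T_a, T_b):
    """Translation (metres) and rotation (degrees) between two 4x4 rigid-body poses."""
    translation = np.linalg.norm(T_a[:3, 3] - T_b[:3, 3])
    R_rel = T_a[:3, :3].T @ T_b[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return translation, np.degrees(np.arccos(cos_angle))

def within_trajectory(test_pose, train_poses, max_trans=0.30, max_rot_deg=30.0):
    """True if some training pose is within both bounds of the test pose."""
    return any(t <= max_trans and r <= max_rot_deg
               for t, r in (pose_distance(test_pose, T) for T in train_poses))
```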
Two other considerations make 7-Scenes difficult to fully saturate in practice. The more significant of the two is that the original dataset was captured with KinectFusion [@Newcombe2011], which is prone to tracking drift, even at room scale, meaning that in practice the ground truth poses for some frames can be slightly inaccurate. This can cause at least two different types of problem: firstly, if the ground truth poses for the training sequence are slightly inaccurate, then InfiniTAM is liable to fuse an imperfect 3D model (e.g. see Figure \[fig:redkitchen71-model\]), which can affect the poses to which our ICP-based refinement process will converge at test time; secondly, if the ground truth poses for the testing sequence are slightly inaccurate, then relocalised poses that would have been within 5cm/5$^\circ$ of a ‘perfect’ pose can be marked as incorrect, and other poses that would have been too far from the ‘perfect’ pose but are within 5cm/5$^\circ$ of the ground truth pose given can be marked as correct. Dataset problems like this are unfortunately very difficult to mitigate at the level of an individual method such as ours – whilst it might in principle be possible (if time-consuming) to bundle adjust all of the frames in each sequence to correct any inaccurate poses, any results obtained on the corrected sequences would then be incomparable with those obtained on the standard dataset. As such, we limit ourselves in this paper to simply noting that this problem is a noticeable failure mode of our approach on this dataset (see §\[subsec:badgroundtruth\]), and in common with other approaches, rely on the ground truth poses as given when computing our results.
A more minor issue is that the dataset was unfortunately captured with the internal calibration between the depth and colour cameras disabled, leading to poor alignment between the two (see Figure \[fig:alignment\]). In the context of our approach, this can lead to a slight offset between the locations at which the depth and RGB features for each pixel are computed, which can in principle cause the correspondences for some pixels (particularly those near the boundaries of objects) to be incorrectly predicted by the forest. However, in practice we found that we were fairly robust to this problem: most of the correspondences are still predicted correctly, and incorrect correspondences are in any case handled naturally by the RANSAC stage of our pipeline.
Failure Case Analysis {#sec:failurecaseanalysis}
=====================
As shown in both the main paper and this supplementary material, our approach is able to achieve highly-accurate online relocalisation in real time, from novel poses and without needing extensive offline training on the target scene. However, there are inevitably still situations in which it will fail. In this section, we analyse a few examples, and attempt to explain the underlying reasons in each case.
Visual Ambiguities
------------------
One of the most common reasons for our relocaliser to fail is the presence of repetitive structures and/or textures in the scene. Two examples of this are shown in the following subsections – one showing a staircase, and the other showing a stretch of similar-looking red cupboards. Our relocaliser can sometimes struggle in situations like this because it relies on local features around each pixel to predict the pixel’s world-space coordinates, and these local features do not always provide sufficient context for it to disambiguate between similar-looking points. We achieve significant robustness to this problem by allowing multiple correspondences to be predicted for each pixel (we use forests with multiple trees, and store multiple clusters of world-space points in each leaf), but for some inputs, our relocaliser can still fail.
### Stairs
![The $470$th frame of the *Stairs* sequence from 7-Scenes [@Shotton2013]. This is an example of an input that can confuse our relocaliser, owing to the presence of multiple visually-identical steps.[]{data-label="fig:stairs470-input"}](stairs_input_470){width="\linewidth"}
![The top $16$ pose candidates (left-to-right, top-to-bottom) corresponding to the failure case from the *Stairs* scene shown in Figure \[fig:stairs470-input\]. The coloured points indicate the 2D-to-3D correspondences that are used to generate the initial pose hypotheses. Note that in this case, none of the candidates would relocalise the camera successfully. This is because the points at the same places on different stairs tend to end up in similar leaves, making the modes in the leaves less informative and significantly reducing the probability of generating good initial hypotheses.[]{data-label="fig:stairs470-candidates"}](stairs_ransac_candidates_470){width="\linewidth"}
The first example we consider is from the *Stairs* scene in 7-Scenes [@Shotton2013]. This is a notoriously difficult scene containing a staircase that consists of numerous visually-identical steps. When viewing the scene from certain angles, the relocaliser is able to rely on points in the scene that can be identified unambiguously to correctly estimate the pose, but from viewpoints such as that in Figure \[fig:stairs470-input\], it is forced to use more ambiguous points, e.g. those on the stairs themselves or the walls. When this happens, relocalisation is prone to fail, since the relocaliser finds it difficult to tell the difference between the different steps.
To illustrate this, we visualise the last $16$ surviving camera hypotheses for this instance in Figure \[fig:stairs470-candidates\], in descending order (left-to-right, top-to-bottom). It is noticeable that in this case, none of the top $16$ hypotheses would have successfully relocalised the camera. As suggested by the points predicted in the 3D scene for each hypothesis (which are often in roughly the right place but on the wrong stairs), this is because the points at the same places on different stairs tend to end up in similar leaves, making the modes in the leaves less informative and significantly reducing the probability of generating good initial hypotheses.
### Pumpkin
The second example we consider is from the *Pumpkin* sequence (see Figure \[fig:pumpkin920-input\]). Here, the input image seems slightly easier to relocalise than in the previous example, but in practice, the repetitive red cupboards and dark grey ceiling panels (not to mention the reflections from the ceiling lights) provide many opportunities for a relocaliser such as ours to get confused (see Figure \[fig:pumpkin920-candidates\]). In this case, as with the *Stairs* example, each of the individual matches seems reasonable on its own (in the sense that the matched points genuinely do have a similar appearance), but the end result is nevertheless to relocalise in the wrong place.
### Analysis
Notably, in both of these examples, there seem to be at least a few visually distinctive points that could have been chosen in order to successfully relocalise (e.g. for the *Pumpkin* example, the top-right corner of the cupboards, the T-junction between the cupboards and the machine, and the top-left corner of the white notice all look useful), but no candidates based on these points were ever generated. Ultimately, this is caused by the fact that RANSAC randomly samples only a subset of all of the possible candidates, and there is thus always a possibility of missing a candidate that might have worked. One way of mitigating this problem might be to explicitly search for visually distinctive points in the image (i.e. reintroduce a keypoint detection stage into the pipeline) and generate additional candidates based on any points found. We do not currently implement this approach (and it would be expected to carry a speed cost), but it offers an interesting avenue for further work.
Inaccurate Ground Truth Poses {#subsec:badgroundtruth}
-----------------------------
As mentioned in §\[sec:datasetanalysis\], one of the other main types of failure we experienced was ultimately caused by slightly inaccurate ground truth poses in the 7-Scenes dataset [@Shotton2013], which was captured using KinectFusion [@Newcombe2011], a reconstruction method that is known to be prone to tracking drift. To demonstrate this, we consider an example from the *Red Kitchen* scene (see Figure \[fig:redkitchen71-input\]).
![The $920$th frame of the *Pumpkin* sequence from 7-Scenes [@Shotton2013]. This is another example that can confuse our relocaliser, owing to the presence of the repetitive red cupboards and dark grey ceiling panels.[]{data-label="fig:pumpkin920-input"}](pumpkin_input_920){width="\linewidth"}
![The top $16$ pose candidates (left-to-right, top-to-bottom) corresponding to the failure case from the *Pumpkin* scene shown in Figure \[fig:pumpkin920-input\]. The coloured points indicate the 2D-to-3D correspondences that are used to generate the initial pose hypotheses. Note that in this case, none of the candidates would relocalise the camera successfully, although several are close to being acceptable.[]{data-label="fig:pumpkin920-candidates"}](pumpkin_ransac_candidates_920){width="\linewidth"}
![The $71$st frame of the *Red Kitchen* testing sequence from 7-Scenes [@Shotton2013]. Here, our relocaliser initially produces a pose that is within $5$cm/$5^\circ$ of the ground truth, but ICP against the fused 3D model then refines it to a pose that is further from the ground truth. The ultimate cause of this is slightly inaccurate ground truth poses in the training sequence, leading to InfiniTAM fusing an imperfect 3D model, which then affects the pose to which ICP converges.[]{data-label="fig:redkitchen71-input"}](redkitchen_input_71){width="\linewidth"}
[!t]{}
[.32]{} ![image](redkitchen_model_71-1){width="\linewidth"}
[.32]{} ![image](redkitchen_model_71-2){width="\linewidth"}
[.32]{} ![image](redkitchen_model_71-3){width="\linewidth"}
\
[.32]{} ![image](redkitchen_model_71-4){width="\linewidth"}
[.32]{} ![image](redkitchen_model_71-5){width="\linewidth"}
[.32]{} ![image](redkitchen_model_71-6){width="\linewidth"}
\
[.32]{} ![image](redkitchen_model_71-7){width="\linewidth"}
[.32]{} ![image](redkitchen_model_71-8){width="\linewidth"}
[.32]{} ![image](redkitchen_model_71-9){width="\linewidth"}
![The top $16$ pose candidates (left-to-right, top-to-bottom) corresponding to the failure case from the *Red Kitchen* scene shown in Figure \[fig:redkitchen71-input\]. The coloured points indicate the 2D-to-3D correspondences that are used to generate the initial pose hypotheses. Note that in this case, the relocalised pose before ICP was within 5cm/5$^\circ$ of the ground truth, but that the 3D model seems somewhat eroded in comparison to the original input images. This is caused by slightly inaccurate ground truth poses in the training sequence, leading to InfiniTAM fusing an imperfect 3D model, which then affects the pose to which ICP converges.[]{data-label="fig:redkitchen71-candidates"}](redkitchen_ransac_candidates_71){width="\linewidth"}
In this sequence, we would hope that e.g. the brown box on the worktop would be perfectly reconstructed as InfiniTAM fuses frames from the training sequence into the 3D model, but as Figure \[fig:redkitchen71-model\] shows, this is not in fact the case. Initially, the box is indeed reconstructed as expected, but as the sequence proceeds, subtle inaccuracies in the ground truth poses cause parts of the box to be eroded away. When we later try to relocalise a frame from the testing sequence (see Figure \[fig:redkitchen71-candidates\]), the box has been significantly eroded, as has the wall at the left-hand side of the images. This is more than enough in practice to cause ICP against the corrupted model to converge to a pose that is more than $5$cm/$5^\circ$ from the ground truth, which will lead to the frame being recorded as having failed to relocalise after ICP. Moreover, since we cannot be sure that the ground truth pose for this specific test frame is not itself slightly inaccurate, an estimated pose that might have been within $5$cm/$5^\circ$ of the genuinely correct pose could in principle be marked as having failed, even though it would have succeeded if compared to a ‘perfect’ ground truth pose.
### Analysis
In practice, problems like these are almost impossible to mitigate, since they are ultimately caused by limitations of the 7-Scenes dataset itself, rather than those of our relocaliser: indeed, it could be argued that our relocaliser manages to achieve state-of-the-art results and a reasonably high degree of robustness even in the face of slightly inaccurate input data. Our results on the Stanford 4 Scenes [@Valentin2016] dataset, which was captured much more carefully (e.g. see Figure \[fig:alignment\]), and on the Cambridge Landmarks dataset (see §\[subsec:outdoorrelocalisation\]), which uses bundle-adjusted poses, also support this view.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by FiveAI Ltd., Innovate UK/CCAV project 103700 (StreetWise), the EPSRC, ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1, EPSRC/MURI grant EP/N019474/1 and EC grant 732158 (MoveCare).
[Tommaso Cavallari]{} received his PhD in Computer Science and Engineering from the University of Bologna in 2017. He then worked for a year as a postdoc in Prof. Torr’s group at Oxford University, doing research on camera localisation and 3D reconstruction. He is now a research scientist in FiveAI’s Oxford Research Group, which focuses on computer vision and machine learning for autonomous driving.
[Stuart Golodetz]{} obtained his D.Phil. in Computer Science at the University of Oxford in 2011. After working for two years in industry, he returned to Oxford as a postdoc for four and a half years, working on 3D reconstruction, scene understanding and visual object tracking. He is currently the director of FiveAI’s Oxford Research Group, which focuses on computer vision and machine learning for autonomous driving.
[Nicholas Lord]{} received his Ph.D. in Computer Engineering from the University of Florida in 2007. He spent several years in Sony’s European Playstation division, including work on Wonderbook for the 2013 BAFTA Award-nominated Book of Spells. As a postdoc at Oxford University, he worked on 6D relocalisation and analysis of the vulnerabilities of DNNs. He is currently a research scientist at FiveAI, working on problems related to autonomous vehicles.
[Julien Valentin]{} received his PhD from the University of Oxford, under the supervision of Philip H. S. Torr. He then became a founding member of PerceptiveIO, and is now working at Google on machine learning and optimization for real-time computer vision and graphics.
[Victor Prisacariu]{} received the D.Phil. degree in Engineering Science from the University of Oxford in 2012. He continued there first as an EPSRC prize Postdoctoral Researcher and then as a Dyson Senior Research Fellow, before being appointed an Associate Professor in 2017. He is a Research Fellow with St Catherine’s College, Oxford. His research interests include semantic visual tracking, 3-D reconstruction, and SLAM.
[Luigi Di Stefano]{} received the PhD degree from the University of Bologna in 1994. He is now a full professor at the University of Bologna, where he founded and leads the Computer Vision Laboratory (CVLab). He is the author of more than 150 papers and several patents. He has been a scientific consultant for major companies in computer vision/machine learning. He is a member of the IEEE Computer Society and the IAPR-IC.
[Philip H. S. Torr]{} received the PhD degree from Oxford University. After working for another three years at Oxford, he worked for six years for Microsoft Research, first in Redmond, then in Cambridge, founding the vision side of the Machine Learning and Perception Group. He is now a professor at Oxford University. He has won awards from top vision conferences, including ICCV, CVPR, ECCV, NIPS and BMVC. He is a senior member of the IEEE and a Royal Society Wolfson Research Merit Award holder.
[^1]: We are largely agnostic to the camera tracker used, but in keeping with our scenario of relocalising in a known scene, at least some frames must be tracked reliably to allow the relocaliser to be trained.
[^2]: Comparing a colour raycast to the live colour image is also possible, but we found depth-based ranking to be more effective in practice.
[^3]: For completeness, we include [@Brachmann2018CVPR; @Li2018arXiv; @Valada2018] and [@Radwan2018] in Table \[tbl:comparativeperformance7\] alongside the other results. However, it is important to note that they are not directly comparable to the other approaches: on the one hand, they are not allowed to use depth, which puts them at a disadvantage; on the other hand, [@Valada2018] and [@Radwan2018] make use of the estimated pose from the previous frame, which is unavailable under the standard evaluation protocol and gives them a significant advantage. For these reasons, we italicise all four sets of results to make it clear that they cannot be directly compared to the other methods in the table.
[^4]: For both experiments, we disabled the built-in SVM in InfiniTAM that measures tracker reliability (see §\[subsubsec::forestadaptation\]), in order to better isolate how our relocaliser performs for missing/noisy depth. In this case, all of the poses are ground truth poses from 7-Scenes [@Shotton2013] that are already assumed to be sufficiently reliable.
[^5]: Note that for this reason, we used a voxel size of $4$cm in InfiniTAM for these models, and doubled the default size of the voxel hash table.
[^6]: We thank Eric Brachmann for providing us with these images.
[^7]: To aid reproducibility, the splits are shared on our project page.
|
---
address:
- |
George P. & Cynthia W. Mitchell Institute for Fundamental Physics\
Texas A&M University\
College Station, TX 77843-4242\
- |
Theory Group, Physics Department\
University of Texas at Austin\
Austin, TX 78712\
author:
- |
Aaron Bergman [^1]\
Jacques Distler [^2]
title: Wormholes in Maximal Supergravity
---
[Introduction]{} {#intro}
================
The relation of the path integral formulation of quantum gravity to string theory has long been mysterious. While both the Euclidean and Lorentzian path integrals for gravitational theories have longstanding pathologies, by analogy with well-understood situations in quantum field theory, they can be used to inspire effects that may or may not be present in a true theory of quantum gravity. Thus, it is an extremely interesting question to see if such effects are present in string theory.
Towards this end, Arkani-Hamed *et al* in [@AOP] examined the case of Euclidean wormholes [@GS1; @LRT; @H1; @H2]. These should represent stationary points of the Euclidean path integral of quantum gravity. One can try to understand them either through Wick rotation or as contributions to the path integral obtained by “deforming the contour” analogously to the usual stationary phase approximation for finite dimensional integrals. One might naively have thought that such solutions would lead to bilocal terms in the effective action, but Coleman [@Co] (and also [@GS2]) instead reinterpreted them in terms of modifications of the coupling constants of local terms in the Lagrangian. Arkani-Hamed *et al* demonstrate the existence of such wormhole solutions in compactifications of string theory on higher dimensional tori and argue that the contribution of such solutions represents a contradiction with the predictions of the AdS/CFT conjecture.
The purpose of this note is two-fold. First, we demonstrate further wormhole solutions that were missed in [@AOP] and in earlier works [@MM; @GS3; @Ta]. In particular, we are able to find wormhole solutions in toroidal compactifications of Type II on $T^{10-d}$ for all $d\leq 9$ and – more generally – in any compactification preserving enough supersymmetry such that the scalars in the gravity multiplet take their values in a Riemannian symmetric space.
String theory, in contrast to supergravity, possesses a discrete gauge symmetry, termed U-duality, which reduces the true moduli space of the scalars to a quotient of the symmetric space (termed a locally symmetric space). Our second goal is to argue that none of the wormhole solutions (those presented in [@AOP] or the new ones presented here) can be assigned well-defined transformation properties under U-duality. It, therefore, seems unlikely that there exists a procedure (involving, say, summing over wormhole configurations) which would be compatible with U-duality invariance. Hence, we conclude that these wormholes cannot contribute to the quantum gravity path integral in any dimension, generalizing the result of [@AOP].
[Constructing the Solution]{} {#constructing_the_solution}
=============================
[Background]{}
--------------
We will consider Type II string theory, compactified on $T^{10-d}$. The low energy physics is governed by maximal supergravity in $d$ dimensions. At the level of the supergravity, there is a continuous symmetry group, $G$, which is the split real form of some semisimple Lie group. The scalar fields take values in a nonlinear $\sigma$-model whose target space is the symmetric space, $\mathcal{M} = G/K$, where $K$ is the maximal compact subgroup of $G$. $G$ acts as the group of continuous isometries of this space. The various $G$ and $K$, for different choices of $d$ are listed in the table below.
d 3 4 5 6 7 8 9
--- ------------ ----------- ----------- ------------------------- ---------------------- ----------------------------------------------- ----------------------
G $E_{8,8}$ $E_{7,7}$ $E_{6,6}$ $Spin(5,5)$ $SL_{5}(\mathbb{R})$ $SL_{3}(\mathbb{R})\times SL_{2}(\mathbb{R})$ $SL_{2}(\mathbb{R})$
K $Spin(16)$ $SU(8)$ $Sp(4)$ $Spin(5)\times Spin(5)$ $Spin(5)$ $SU(2)\times SO(2)$ $SO(2)$
In string theory, the continuous $G$-symmetry is broken by higher-derivative corrections to the low-energy supergravity. What remains is a discrete group, $G(\mathbb{Z})$, which acts as a discrete gauge symmetry. Thus, the correct moduli space is the quotient $\mathcal{M}_{\text{true}} = G(\mathbb{Z})\,\backslash\: G /K$. For the purpose of finding solutions to the supergravity, we can ignore these discrete identifications and work on the covering space $\mathcal{M}=G/K$. We will discuss the implications of U-duality in the following section.
We begin by reviewing the work of [@AOP]. There, Arkani-Hamed *et al* look for wormhole solutions of the supergravity theory. In particular, they take as an ansatz an $O(d)$-invariant metric of the form
$$d r^2 + a^2(r) d\Omega_{d-1}^2\ .
\label{metricAnsatz}$$
The equations of motion for the scalars coupled to gravity are
$$\begin{split}
\frac{{a'}^2}{a^2}- \frac{1}{a^2} - \frac{G_{i j}\phi^{i\prime}\phi^{j\prime}}{2(d-1)(d-2)} &=0\ ,\\
\left(a^{d-1} G_{i j} \phi^{j\prime}\right)' - \frac{1}{2} a^{d-1} G_{j k,i} \phi^{j\prime} \phi^{k\prime} &= 0
\end{split}
\label{eom}$$
where $G_{i j}$ is the metric on the scalar manifold, $\mathcal{M}$, and $'$ indicates derivative with respect to $r$.
If we define $\tau$ via
$$\frac{d r}{d\tau} = a(r)^{d-1}
\label{tauDef}$$
and use it in \[eom\], we recognize that the second equation is proportional to the geodesic equation on $\mathcal{M}$. Thus, the scalars travel along a geodesic, and a constant of the motion is given by $$C = a^{2(d-1)} G_{i j}\phi^{i\prime}\phi^{j\prime}\ .
\label{cdef}$$ A wormhole solution (in flat space) has the following asymptotics: $$\begin{gathered}
a(r)^2 \sim r^2,\quad r\to \pm\infty\ ,\\
a(r=0) = a_0\ .
\end{gathered}$$ These constraints are satisfied if we set $C$ in \[cdef\] to
$$C = -2(d-1)(d-2) a_0^{2(d-2)} {<}0\ .
\label{cVal}$$
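For clarity, we note the intermediate step implied by the above: substituting the constant of motion \[cdef\], with the value \[cVal\], into the first equation of \[eom\] gives
$$ {a'}^2 = 1 - \left(\frac{a_0}{a}\right)^{2(d-2)}\ , $$
so that $a'$ vanishes precisely at the throat $a=a_0$ and $a\sim \pm r$ in the asymptotic regions.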
There is a further constraint, however. In [@AOP], this ansatz is used to calculate the distance the moduli must travel between the two asymptotic regions of the wormhole solution:
$$\begin{aligned}
D[\phi(r=+\infty), \phi(r=-\infty)]&= 2 D[\phi(r=+\infty), \phi(r=0)]\\
&= \pi \sqrt{\frac{2(d-1)}{d-2}}\ .
\end{aligned}$$
Thus, in order to admit a wormhole solution, $\mathcal{M}$ must possess a timelike geodesic of at least this length,
$$\Delta\tau \geq \pi \sqrt{\frac{2(d-1)}{d-2}}\ .
\label{bound}$$
Of course, since the scalar manifold, $\mathcal{M}= G/K$, in the supergravity theory is Riemannian, everything is spacelike, and there are no *real* wormhole solutions. Instead, we look for complex saddle points. That is, we consider Wick-rotating one (or more) of the scalar directions. We will proceed by choosing a cyclic coordinate, $\phi_0$, on which the metric does not depend and Wick rotating along that coordinate. Equivalently, we want to find a coordinate Killing vector, $\partial/\partial\phi_0$. Since $\mathcal{M}$ is a symmetric space there are many Killing vectors to choose from.
[A general solution]{} {#a_general_solution}
----------------------
The general theory of symmetric spaces[^3] tells us how we can find our needed Killing vector. We first choose any 1-parameter subgroup of $G$ or, equivalently, an element of the Lie algebra, $T\in\mathfrak{g}$ and study the isometry of $\mathcal{M}$ given by
$$e^{\phi_0 T}\ .
\label{isometry}$$
The Minkowski-signature metric is produced by replacing ${d\phi_0}^2\to -{d\phi_0}^2$ in the metric on $\mathcal{M}$.
This procedure necessitates that the isometry \[isometry\] act on $\mathcal{M}$ without fixed points; otherwise, the Minkowski metric will be singular. A Cartan decomposition of the Lie algebra $\mathfrak{g}$ gives
$$\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{r}
\label{cartanDecomp}$$
where $\mathfrak{k}$ is the Lie algebra of $K$, a maximal compact subgroup, and
$$[\mathfrak{k},\mathfrak{r}]\subset \mathfrak{r},\qquad [\mathfrak{r},\mathfrak{r}]\subset \mathfrak{k}\ .$$
In addition, the Cartan form is negative-definite on $\mathfrak{k}$ $$K(T,T) {<}0,\quad \forall 0\neq T\in \mathfrak{k}$$ reflecting the compactness of $K$. The Iwasawa decomposition further decomposes $$\mathfrak{r} = \mathfrak{a} \oplus \mathfrak{n}
\label{iwasawaDecomp}$$ where $\mathfrak{a}$ is an abelian subalgebra of $\mathfrak{g}$ and $\mathfrak{n}$ is nilpotent, obeying $$K(T,T) = 0,\quad \forall T\in \mathfrak{n}\ .$$
We have the following
The action on $\mathcal{M}$ defined by $T \in \mathfrak{g}$ is fixed-point free if $K(T,T) \geq 0$.
Assume that $g_0K$ is a fixed point of the action, *i.e.*,
$$e^{s T} g_0 k_1 = g_0 k_2$$
for $T\neq 0$ and some $k_{1,2}\in K$. It follows that
$$g_0^{-1} T g_0 \in \mathfrak{k}\ ,$$
and hence
$$K(g_0^{-1} T g_0,g_0^{-1} T g_0) {<}0\ .$$
Since the Killing form is invariant under the adjoint action of $G$, this implies that $K(T,T) {<}0$.
This is enough to see that the desired Wick rotation exists. We can be more explicit, however. For any real Lie group $G$, we define the generalized transpose
$$T^\# = \begin{cases}-T & T\in\mathfrak{k}\\ T & T\in\mathfrak{r}\end{cases}
\label{genTranspose}$$
which is equal to minus the Cartan involution of the Lie algebra. It can be extended to the group in a manner that satisfies
$$\left(e^X\right)^\# = e^{X^\#}\ .$$
The invariant metric on $G/K$ is
$${d s}^2 = \frac{1}{2} Tr (m^{-1} d m)^2
\label{invariantmetric}$$
where
$$m = g g^\#\ .$$
The transformation
$$g \to e^{\phi_0 T} g
\label{isomGen}$$
is an isometry of \[invariantmetric\], and the coordinate $\phi_0$ is cyclic. As above, if $K(T,T)\geq 0$, then \[isomGen\] is fixed-point free, and we can Wick rotate ${d\phi_0}^2\to -{d\phi_0}^2$ in \[invariantmetric\]. If, moreover, $T$ is in the orthogonal complement of $\mathfrak{k}$, then the flow through the identity, $g(\phi_0) = e^{\phi_0 T}$, is a geodesic[^4] of infinite length in either the Riemannian or Minkowskian signature.
Arkani-Hamed *et al* take $T\in \mathfrak{n}$, which implies that $K(T,T) =0$, but the flow is no longer necessarily geodesic. Moreover, in their case, the timelike geodesics of the Wick-rotated metric are bounded in length, and, in many cases of possible interest, fail to satisfy the inequality \[bound\].
[An example]{} {#an_example}
--------------
To make things more concrete, let us apply this to the simple example of $\mathcal{M}= SL_{2}(\mathbb{R})/SO(2)$. We can parametrize the coset space as
$$g = \exp{\begin{pmatrix} v/2 & 0\\ 0 &-v/2\end{pmatrix}}
\cdot \exp{\begin{pmatrix} 0 & u/2\\ u/2 &0\end{pmatrix}}
\cdot O$$
where $O$ is an $SO(2)$ matrix. Setting
$$m = g g^T = \begin{pmatrix}
e^v \cosh u & \sinh u\\
\sinh u& e^{-v} \cosh u\end{pmatrix}\ ,$$
the metric on $\mathcal{M}$ is
$${d s}^2 = \frac{1}{2} Tr (m^{-1} d m)^2 = {d u}^2 + \cosh^2 u {d v}^2\ .
\label{sl2metric}$$
The coordinate $v$ is cyclic and corresponds to the 1-parameter subgroup generated by $T=\left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)\in\mathfrak{a}$. We can Wick rotate $v\to i v$ and obtain a Minkowski-signature metric. Moreover, $u=0, v(\tau)=\tau$ is a geodesic of infinite length in either signature.
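As a quick check of this computation (an illustrative verification, not part of the original derivation), the line element \[sl2metric\] can be reproduced symbolically with a few lines of sympy:

```python
import sympy as sp

u, v, du, dv = sp.symbols('u v du dv', real=True)

# m = g g^T for the SL(2,R)/SO(2) parametrisation used above.
m = sp.Matrix([[sp.exp(v) * sp.cosh(u), sp.sinh(u)],
               [sp.sinh(u), sp.exp(-v) * sp.cosh(u)]])

# Total differential of m and the invariant line element (1/2) Tr (m^{-1} dm)^2.
dm = sp.diff(m, u) * du + sp.diff(m, v) * dv
ds2 = sp.simplify(sp.Rational(1, 2) * ((m.inv() * dm) ** 2).trace())

print(ds2)  # expect du**2 + dv**2*cosh(u)**2 (possibly up to further simplification)
```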
We can perform the following change of coordinates
$$\rho = \sqrt{\frac{\cosh u \cosh v -1}{\cosh u \cosh v +1}},\quad \sin\theta = \frac{\cosh u \sinh v}{\sqrt{\cosh^{2} u \cosh^{2} v -1}}\ .
\label{discCoords}$$
This gives the usual Poincaré metric on the unit disk
$${d s}^2 = \frac{4({d\rho}^2 +\rho^2 {d\theta}^2)}{(1-\rho^2)^2}
\label{discMetric}$$
in which $\theta$ is a cyclic coordinate corresponding to the 1-parameter subgroup generated by $T = \left(\begin{smallmatrix}0 & -1\\ 1 & 0\end{smallmatrix}\right)\in\mathfrak{k}$. One cannot Wick rotate $\theta$, however, as the resulting Minkowski metric is singular at $\rho=0$.
If we make the further change of coordinates
$$\rho e^{i\theta} = \frac{i z +1}{z +i}\ ,
\label{uhpCoords}$$
we obtain the familiar metric on the upper half plane
$${d s}^2 = \frac{d z\ d\overline{z}}{{(\mathrm{Im\ } z)}^2}
\label{UHP}$$
where $x=\mathrm{Re\ }z$ is cyclic corresponding to the nilpotent generator, $T = \left(\begin{smallmatrix}0&1\\ 0 & 0\end{smallmatrix}\right)\in \mathfrak{n}$. We obtain the desired Minkowski-signature metric by Wick-rotating $x\to i x$. Timelike geodesics in the resulting Minkowski-signature metric take the form
$$\begin{aligned}
x(\tau) &= x_0 - y_0 \tan\tau\ , \\
y(\tau) &= \frac{y_0}{\cos\tau}
\end{aligned}$$
with $-\pi/2 {<}\tau {<}\pi/2$ and have length $\delta\tau = \pi$.
[The question of U-duality]{} {#the_question_of_uduality}
=============================
We have seen that, in contrast to some claims in the literature (reiterated by Arkani-Hamed *et al*), the Euclidean supergravity theory has complex wormhole solutions for any $d\leq 9$. We must then ask the question: do such complex saddle points of the Euclidean action contribute to the path integral? Arkani-Hamed *et al* adduced evidence from AdS/CFT that they do not, at least for the case of $AdS_3\times S^2\times T^4$. We would like to argue, more generally, that they never contribute.
As discussed above, the true moduli space for the scalar fields in string theory compactified on $T^d$ is
$$\mathcal{M}_0 = G(\mathbb{Z})\setminus G/K
\label{trueM}$$
where the U-duality group, $G(\mathbb{Z})$, acts as a discrete gauge symmetry of the theory.
There is no problem studying geodesics on $\mathcal{M}_0$ by looking at geodesics on the covering space. This is not sufficient, however, as we are interested in performing a Wick rotation on the geodesic. We will present evidence that this will, in general, be incompatible with U-duality invariance.
Note that we do not mean “incompatible” in some trivial sense. Any particular scalar field configuration, whether it is $\phi=\text{const}$ or a “rolling” configuration, $\phi=\phi(r)$, such as we are considering, is not invariant under the U-duality group. Instead, U-duality maps one such configuration into another. In particular, it maps solutions of the supergravity equations of motion into one another.
This, rather trivial, lack of invariance is not what is at issue. Indeed, if the example of D-instantons is any guide, the resolution is to sum over saddle points. Any particular D-instanton breaks S-duality modular invariance, but the sum has the correct modular properties under S-duality (see, e.g. [@Green:1997di]).
Our problem here is that we are interested in finding a lift of the U-duality group to some “complexification” (loosely speaking) of $\mathcal{M}$ or, better, some complexification of the space of scalar field configurations. As before, any particular *complex* configuration of $\phi$ will not be invariant under U-duality. But one expects that U-duality will map one such configuration into another (and we might hope that a suitable sum over such configurations restores U-duality invariance).
This, we argue, is what fails to be the case. It is that failure that we are referring to when we claim that complexifying $\mathcal{M}$ is incompatible with U-duality.
In special cases, complexification might be compatible with some subgroup of the U-duality group. In the case of the nilpotent generator, one finds that the subgroup
$$\left\{\begin{pmatrix}1&n\\0& 1\end{pmatrix}, n\in\mathbb{Z}\right\}\subset SL_{2}(\mathbb{Z})$$
is preserved by the Wick rotation. For the more interesting case of $T\in \mathfrak{a}$, however, the U-duality group is broken completely.
We suspect that there is no extension of the discrete gauge symmetry of string theory to complex values of the fields. This means that we cannot “deform the contour” (to use the familiar metaphor of steepest descent) to pick up these saddle points as the semiclassical approximation to the path integral. For this reason, these complex saddle points of the supergravity theory cannot contribute to the quantum gravity path integral.
We can see what the problem is more concretely for the simplest case of $SL_{2}(\mathbb{R})/SO(2)$. Our choice of Wick rotation consists of choosing a basepoint, $g_0 K\in \mathcal{M}$, and an element, $T$, of the Lie algebra (up to scale), satisfying $K(T,T) \geq 0$. Such a choice gives rise to a cyclic coordinate, $\phi_0$. Our “partial complexification” of $\mathcal{M}$ then replaces the real variable $\phi_0$ with a complex one, $\tilde{\phi}_0$. We obtain in this way a 3-manifold, $\widetilde{\mathcal{M}}$, with a *nonsingular* complex bilinear form on its tangent space which is our original metric with ${d\phi_0}^2$ replaced by ${d\tilde{\phi}_0}^2$. The bilinear form so obtained reduces to our original metric on the subspace $\tilde{\phi}_0 \in \mathbb{R}$ and to the “Wick-rotated” Minkowskian metric for $\tilde{\phi}_0 \in i\mathbb{R}$.
What we seek is an action of $SL_{2}(\mathbb{Z})$ on $\widetilde{\mathcal{M}}$ which
- reduces to the usual action of $SL_{2}(\mathbb{Z})$ on $\mathcal{M}\subset\widetilde{\mathcal{M}}$, and
- is an isometry of the complex bilinear form on $\widetilde{\mathcal{M}}$.
One can check that this is impossible by a brute force computation. Consider, for example, the nilpotent case. Writing $z=x+i y$, the action of $SL_{2}(\mathbb{Z})$ on $\mathcal{M}$ is
$$\begin{split}
x &\to \frac{(a x+b)(c x+d)+ a c y^2 }{(c x+d)^2 +(c y)^2}\ ,\\
y &\to \frac{y}{(c x+d)^2 +(c y)^2}
\end{split}
\label{sl2z}$$
for $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in SL_{2}(\mathbb{Z})$. We would like to generalize this to complex $x$ in such a way that the result is an isometry of the complex bilinear form,
$${d s}^2 = \frac{ {d x}^2 + {d y}^2}{y^2}\ .$$
The latter demands that \[sl2z\] depend holomorphically on $x$. This implies, however, that the transformation for $y$ in \[sl2z\] is not real unless $c=0$. Thus, only this subgroup of the U-duality group $SL_{2}(\mathbb{Z})$ is compatible with this complexification.
Similar arguments hold in the hyperbolic case.
More geometrically, we can think of the above “partial complexification” as follows. Choosing a $T$ allows us to write $\mathcal{M}$ as a real line bundle over base $B$ ($\partial/\partial\phi_0$ being a vertical tangent vector). $\widetilde{\mathcal{M}}$ is obtained by complexifying the fibers, resulting in a complex line bundle over the same base.
To generalize, we could Wick rotate on more than one cyclic coordinate. Thus, we give $\mathcal{M}$ the structure of a real *vector* bundle
$$\begin{aligned}
V \to&\mathcal{M}\\
&\downarrow\\
&B
\end{aligned}$$
of rank $k\leq r$, such that the metric on $\mathcal{M}$ takes the form
$${d s}^2_{\mathcal{M}} = {d s}^2_B + h$$
where $h$ is an metric on the vertical tangent space. $\widetilde{\mathcal{M}}$ is constructed by complexifying the fibers,
$$\begin{aligned}
V\otimes \mathbb{C} \to&\widetilde{\mathcal{M}}\\
&\downarrow\\
&B
\end{aligned}$$
and extending $h$ to a $\mathbb{C}$-bilinear form.
In this context, our objective is to find a lift of the $G(\mathbb{Z})$ action on $\mathcal{M}$ to an action on the partial complexification, $\widetilde{\mathcal{M}}$ such that it acts by isometries of the complex bilinear form on $\widetilde{\mathcal{M}}$. We have not found an elegant proof of this result, but it seems clear that it follows from the above explicit computation by restricting to an $SL_{2}$ subgroup.
In closing, we should mention a possible loophole in this argument. Rather than restricting ourselves to complexifying cyclic coordinates, we could pick some particular coordinate system and complexify everything. This would not make sense for a general real manifold, but because $G/K$ is contractible, it can be covered in a single coordinate chart. For the example of $SL_{2}(\mathbb{R})/SO(2)$, we could choose the upper half plane coordinates $x,y$, or the $u,v$ coordinates, and promote them to complex variables. If we promote the $SL_{2}(\mathbb{Z})$ symmetry to act holomorphically on these variables, then we seem to have an answer to the above objection. Each such choice of coordinates clearly leads to a (very!) different complexification. But, even more problematically, each one seems to lead to some pathology. In the upper half plane coordinates, $y$ was supposed to be positive, and it is unclear what the proper range of values should be when we complexify it. The $u,v$ coordinates run over all real values, so there is no a priori problem with letting them run over all complex values. But, if we do so, then the metric has singularities at $u\in i\pi\mathbb{Z}$. So, even liberalizing the rules of what it might mean to “complexify” the field space, $G/K$, does not appear to lead to any satisfactory solution.
Acknowledgements {#acknowledgements .unnumbered}
================
J.D. would like to thank J. Polchinski for discussions. Both authors would like to thank the Aspen Center for Physics, where this work was completed.
[99]{}
N. Arkani-Hamed, J. Orgera and J. Polchinski, “Euclidean Wormholes in String Theory,” [arXiv:0705.2768 \[hep-th\]](http://arxiv.org/abs/arXiv:0705.2768).
S. B. Giddings and A. Strominger, “Axion Induced Topology Change In Quantum Gravity And String Theory,” Nucl. Phys. B [**306**]{}, 890 (1988).
G. V. Lavrelashvili, V. A. Rubakov and P. G. Tinyakov, “Disruption of Quantum Coherence upon a Change in Spatial Topology in Quantum Gravity,” JETP Lett. [**46**]{}, 167 (1987) \[Pisma Zh. Eksp. Teor. Fiz. [**46**]{}, 134 (1987)\].
S. W. Hawking, “Quantum Coherence Down the Wormhole,” Phys. Lett. B [**195**]{}, 337 (1987).
S. W. Hawking, “Wormholes in Space-Time,” Phys. Rev. D [**37**]{}, 904 (1988).
S. R. Coleman, “Black Holes as Red Herrings: Topological Fluctuations and the Loss of Quantum Coherence,” Nucl. Phys. B [**307**]{}, 867 (1988).
S. B. Giddings and A. Strominger, “Loss of Incoherence and Determination of Coupling Constants in Quantum Gravity,” Nucl. Phys. B [**307**]{}, 854 (1988).
J. M. Maldacena and L. Maoz, “Wormholes in AdS,” JHEP [**0402**]{}, 053 (2004), [hep-th/0401024](http://arxiv.org/abs/hep-th/0401024).
S. B. Giddings and A. Strominger, “String Wormholes,” Phys. Lett. B [**230**]{}, 46 (1989).
K. Tamvakis, “Two Axion String Wormholes,” Phys. Lett. B [**233**]{}, 107 (1989).
F. I. Mautner, “Geodesic flows on symmetric Riemann spaces,” Ann. of Math. (2), 65:416–431, 1957.
S. Helgason, *Differential geometry, Lie groups, and Symmetric Spaces*, volume 34 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2001. Corrected reprint of the 1978 original.
M. B. Green and P. Vanhove, “D-instantons, strings and M-theory,” Phys. Lett. B [**408**]{}, 122 (1997), [hep-th/9704145](http://arxiv.org/abs/hep-th/9704145).
[^1]: Research supported by NSF Grants PHY-0505757, PHY-0555575 and by Texas A&M Univ.
[^2]: Research supported in part by NSF Grant PHY-0455649.
[^3]: See, for example, [@He].
[^4]: In other words, if we choose $T$ such that $K(T, X) = 0, \forall X\in \mathfrak{k}$, then $e^{s T}$ is a geodesic on $G/K$. This result is originally due to Cartan. See, for example, [@Ma].
|
---
abstract: 'Based on the first-principles calculations, we investigated the ferroelectric properties of two-dimensional (2D) Group-IV tellurides XTe (X=Si, Ge and Sn), with a focus on GeTe. 2D Group-IV tellurides energetically prefer an orthorhombic phase with a hinge-like structure and an in-plane spontaneous polarization. The intrinsic Curie temperature $T_{c}$ of monolayer GeTe is as high as 570 K and can be raised quickly by applying a tensile strain. An out-of-plane electric field can effectively decrease the coercive field for the reversal of polarization, extending its potential for regulating the polarization switching kinetics. Moreover, for bilayer GeTe the ferroelectric phase is still the ground state. Combined with these advantages, 2D GeTe is a promising candidate material for practical integrated ferroelectric applications.'
author:
- Wenhui Wan
- Chang Liu
- Wende Xiao
- Yugui Yao
bibliography:
- 'GroupIVTellurides.bib'
nocite: '[@*]'
title: 'Promising Ferroelectricity in 2D Group IV Tellurides: a First-Principles Study'
---
[^1]
[^2]
Nanoscale devices based on ferroelectric thin films and compatible with Si chips have many potential applications, e.g. ultrafast switching, cheap room-temperature magnetic-field detectors, electrocaloric coolers for computers and nonvolatile random access memories. [@Martin2016; @Rabe2005; @Scott2007] However, it remains a great challenge to keep the ferroelectricity stable in thin films at room temperature. [@Kooi2016] For the conventional ferroelectric materials such as BaTiO$_{3}$ or PbTiO$_{3}$, the enhanced depolarization field will destroy the ferroelectricity at critical thicknesses of about 12 Å and 24 Å, [@Junquera2003; @Fong2004] respectively. To address this challenge, new two-dimensional (2D) ferroelectric phases with ferroelectricity sustained against such a depolarization field are desirable to pave the way for the application of “integrated ferroelectrics”.
Compared with their bulk counterparts, 2D materials often lose some symmetry elements (e.g. centrosymmetry) as the result of dimensionality reduction, [@Xu2014; @Blonsky2015] which favors the appearance of ferroelectricity. A number of 2D ferroelectric phases have been theoretically proposed, e.g. the distorted $1T$ MoS$_{2}$ monolayer, [@Shirodkar2014] low-buckled hexagonal IV-III binary monolayers including InP and AsP, [@DiSante2015] unzipped graphene oxide monolayer [@Noor-A-Alam2016] and monolayer Group-IV monochalcogenides. [@Wu2016; @Fei2016; @Wang2016] The Group-IV monochalcogenides including GeS, GeSe, SnS and SnSe have attracted much attention due to their large in-plane spontaneous polarizations $\bm{P_{s}}$ in theory [@Wang2016] and experimental accessibility. Monolayers of SnSe and GeSe have been successfully synthesized. [@Li2013; @Sun20141] However, the ground-state SnSe and GeSe multilayers adopt a stacking order in which the directions of $\bm{P_{s}}$ in two neighboring layers are opposite. [@Fei2016] Thus, the non-zero polarization only exists in odd-numbered layers, hindering their ferroelectric applications. Excitingly, a robust ferroelectricity has been experimentally observed in SnTe(001) few-layers. [@chang2016] Compared with the low Curie temperature $T_{c}=98$ K in bulk SnTe, [@Iizumi1975] the $T_{c}$ in monolayer SnTe was greatly enhanced to 270 K, due to the suppression of the Sn-vacancy and the in-plane expansion of the lattice. [@chang2016; @Kooi2016] Meanwhile, the $\bm{P_{s}}$ in 2D SnTe are aligned along the in-plane $<110>$ direction, in contrast to the $<111>$ direction in bulk. [@Plekhanov2014] This behavior again indicates that the dimensionality reduction favors the formation of new ferroelectric phases. However, the ferroelectric properties of 2D SnTe predicted in theory are inconsistent with the experimental measurements. [@chang2016] This calls for a microscopic understanding of the relevant physics. Additionally, the discovery of the ferroelectricity in 2D SnTe sheds light on the possible ferroelectric phase in other Group-IV tellurides. Bulk GeTe is ferroelectric and exhibits a rhombohedral crystal structure. [@DiSante2013] No crystal phase has been identified in bulk SiTe, but several thermodynamically stable phases of 2D SiTe have been theoretically proposed. [@Chen2016] To date, the ferroelectric properties of both 2D GeTe and 2D SiTe remain unexplored.
In this work, we investigated the structural, electronic and ferroelectric properties of 2D Group-IV tellurides XTe (X=Si, Ge and Sn). All computational details are in the supplementary material ($SM$). [@support] We found that 2D Group-IV tellurides prefer a hinge-like structure with an in-plane spontaneous polarization. During the reversal of polarization, monolayer SiTe undergoes a semiconductor-to-metal transition, while monolayer GeTe and SnTe remain semiconducting and ferroelectric. Monolayer GeTe has a high ferroelectric transition temperature $T_{c}$ of 570 K. For 2D SnTe, however, achieving room-temperature ferroelectricity requires external strain. The ferroelectricity of 2D GeTe can be effectively controlled through the application of strain engineering and vertical electric fields. Moreover, bilayer GeTe exhibits a ferroelectric ground state in which the polarizations of the two layers are aligned parallel. These novel properties make 2D GeTe a promising candidate for future nanoscale ferroelectric applications.
Though bulk GeTe and SnTe adopt a rhombohedral structure below the Curie temperature, [@Iizumi1975; @DiSante2013] the synthesized 2D SnTe exhibits a layered orthorhombic phase with a hinge-like structure. [@chang2016] The corresponding monolayer is displayed in Fig. \[wh1\](a) and \[wh1\](b). Each of the four atoms in a unit cell is three-fold coordinated with the atoms of the other species. $\bm{a}$ and $\bm{b}$ are the lattice vectors along the $x$ (puckered) and $y$ (zigzag) directions, respectively. We first calculated the crystal structure of the monolayer SnTe with different methods to ensure the reliability of our simulations. The theoretical results as well as the experimental data [@chang2016] are listed in Table S1 of the $SM$. [@support] In the special hinge-like structure of monolayer SnTe, some atoms are close to each other without covalent bonding (see Fig. \[wh1\](a)). The distance between them is larger than 3.2 Å. A proper description of this kind of interaction, which arises from the weak wave-function overlap, should include the vdW interactions, as has been reported for phosphorene. [@qiao2014high] Considering the lattice anisotropy, which is critical in determining the ferroelectricity, we found that the optPBE-vdW method [@vdw2] produces the best results compared with the experiment (see Table S1 [@support]).
![(a, b) Side and top views of the crystal structure of monolayer XTe (X=Si, Ge and Sn), respectively. (c) 2D first Brillouin zone. (d-f) Band structures of monolayer SiTe, GeTe and SnTe, respectively.[]{data-label="wh1"}](fig1.pdf){width="45.00000%"}
Based on this method, the optimized lattice constants of monolayer Group-IV tellurides are given in Table \[Table1\]. No imaginary frequency is observed in the phonon dispersions (see Fig. S1 [@support]), confirming their structural stability. Additionally, it is found that the orthorhombic phase of monolayer Group-IV tellurides is more stable than the hexagonal phase extracted from the bulk (see Fig. S2 and Table S2 of the $SM$ [@support]), in line with the previous theoretical work [@Singh2014] and the experiment. [@chang2016]
$a$ (Å) $b$ (Å) $\delta$(%) $P_{s}({\mu}$C/cm$^{2})$ $E_{b}$(meV/f.u.)
------ --------- --------- ------------- -------------------------- ------------------- --
SiTe 4.452 4.127 7.88 42.0 88.5
GeTe 4.472 4.273 4.66 32.8 37.4
SnTe 4.666 4.577 1.95 19.4 4.48
: \[Table1\] The lattice constants ($a$, $b$), lattice anisotropy $\delta=(\frac{a}{b}-1)\times100\%$, intrinsic polarization $P_{s}$ and ferroelectric transition barrier $E_{b}$ of monolayer SiTe, GeTe and SnTe.
The calculated band structures of monolayer Group-IV tellurides are shown in Figs. \[wh1\](d-f). The trend of the band gaps is $E_{g}$(GeTe)$>E_{g}$(SnTe)$>E_{g}$(SiTe). The anomalous order of $E_{g}$ might be the result of the fine balance between the relative atomic energy levels and the repulsion between the levels. [@Wei1997] The anisotropy of the band structure and phonon dispersion decreases from monolayer SiTe to monolayer SnTe, consistent with the decrease of the anisotropy of their lattice constants (see Table \[Table1\]).
![(a) Two symmetry-equivalent ferroelectric states with opposite in-plane polarizations as well as a high-symmetry, non-polar transition state. (b) Double-well potential of monolayer GeTe vs polarization. $E_{b}$ is the ferroelectric transition barrier. The red line represents the fitting curve of the Landau-Ginzburg model. The uniaxial and biaxial strains dependence of $P_{s}$ and $E_{b}$ for (c, d) monolayer GeTe and (e, f) monolayer SnTe, respectively.[]{data-label="wh2"}](fig2.pdf){width="45.00000%"}
In such an orthorhombic structure, the Group-IV atoms displace along the $x$-direction with respect to the Te atoms, leading to the breaking of the centrosymmetry but the preservation of the $yz$-mirror symmetry. The spontaneous polarization is aligned along the $x$-direction and can be labeled by a scalar $P_s$. Therefore, the thickness of 2D ferroelectric Group-IV tellurides will not be limited by the aforementioned depolarization field vertical to the slab, [@Junquera2003; @Fong2004] but a lateral critical size still exists due to the in-plane depolarization field. [@Wang2016] The polarization $P_s$ was calculated by the Berry phase approach. [@King-Smith1993] The reversal of polarization is realized through a phase transition between two symmetry-equivalent ferroelectric states with opposite $P_{s}$ (labeled as the FE state and -FE state in Fig. \[wh2\](a)). By calculating the transition barrier $E_{b}$ of several pathways using the nudged-elastic-band (NEB) methods, [@Henkelman2000] we found that a transition path through a centrosymmetric non-polar (NP) state (see Fig. \[wh2\](a)) has the lowest $E_{b}$. With this transition path, we calculated the polarization dependence of the free energy $F(P)$ and show the result specific to monolayer GeTe in Fig. \[wh2\](b). The $P_s$ in the FE state is $3.28\times10^{-10}$ C/m, equivalent to a bulk polarization of $32.8$ ${\mu}$C/cm$^{2}$ if an effective thickness of 1 nm for monolayer GeTe is used. The transition barrier $E_{b}$ is estimated by the energy difference between the FE and NP states (see Fig. \[wh2\](b)). For monolayer GeTe, $E_{b}$ is 74.8 meV, equivalent to 37.4 meV per formula unit (f.u.) and much smaller than $E_{b}\approx 200$ meV/f.u. in conventional ferroelectric PbTiO$_{3}$. [@Cohen1992] This small $E_{b}$ indicates that the required electric field for the reversal of polarization in monolayer GeTe would be much lower than that in PbTiO$_{3}$.
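For clarity, the quoted effective bulk value follows simply from dividing the computed sheet polarization by the assumed effective thickness: $$P_{s}=\frac{3.28\times10^{-10}\ {\rm C/m}}{1\times10^{-9}\ {\rm m}}=0.328\ {\rm C/m^{2}}=32.8\ {\mu}{\rm C/cm^{2}} \ .$$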
The ferroelectricity can be substantially affected by the external strain. [@Haeni2004] Here the strain is defined as $\epsilon=(\frac{a}{a_{0}}-1)\times100\%$ where $a$ and $a_{0}$ are the lattice constants along the $x-$ or $y-$direction for the strained and unstrained structures, respectively. It is found that both uniaxial ($\epsilon_{x}$) and biaxial ($\epsilon_{x}=\epsilon_{y}$) tensile strains can enlarge the displacement of the Ge atoms with respect to the Te atoms and therefore effectively enhance the $P_{s}$ and $E_{b}$ of monolayer GeTe, as shown in Figs. \[wh2\](c, d). In contrast, the compressive strain suppresses the $P_{s}$ and $E_{b}$.
Monolayer SiTe has a larger $P_{s}$ and higher $E_{b}$ than those of GeTe (see Table \[Table1\]). However, its band gap calculated using the HSE06 approach [@Paier2006] closes during the reversal of $P_s$ (see Fig. S3 [@support]), leading to a drop of $P_s$ to zero. This semiconductor-metal transition hinders the ferroelectric application of 2D SiTe, but makes 2D SiTe suited for field-effect switching devices. [@Metal-Insulator]
Monolayer SnTe remains semiconducting during the reversal of $P_s$. The effective $P_{s}$ and $E_{b}$ of monolayer SnTe are small (see Table \[Table1\]). After applying a tensile strain, both $P_{s}$ and $E_{b}$ can be effectively increased (Figs. \[wh2\](e, f)), showing that the ferroelectricity of 2D SnTe can be readily tuned by strain.
The stability of the ferroelectricity is characterized by the Curie temperature $T_{c}$ at which the macroscopic spontaneous polarization vanishes. Based on the Landau-Ginzburg phase transition theory, [@Cowley1980; @Fei2016] the free energy of the GeTe supercell is written as a Taylor expansion in terms of the polarization:
$$\label{eq1}
F=\!\sum_{ i } \left( \frac{A}{2}P_{i}^{2} \!+\! \frac{B}{4}P_{i}^{4}\!+\!\frac{C}{6}P_{i}^{6} \right) \!+\!\frac{D}{2}\!\sum_{ < i,j > } \!({P_i} - {P_j})^{2},$$
where $P_{i}$ is the polarization of each unit cell. The first three terms describe the anharmonic double-well potential in a unit cell (see Fig. \[wh2\](b)). The last term represents the dipole-dipole interaction between the nearest neighboring unit cells. The parameter $D$ can be estimated by a fitting process in the mean-field approximation. [@Fei2016] All the fitted parameters are given in the Table S3 of $SM$. [@support] The Monte Carlo simulations were performed with the effective Hamiltonian of Eq. \[eq1\] to investigate the ferroelectric phase transition. As shown in Fig. \[wh3\](a), the $T_{c}$ of unstrained monolayer GeTe is 570 K. By applying a biaxial strain of 2%, the $T_{c}$ can be easily enhanced to 903 K, as illustrated in Fig. \[wh3\](b), consistent with the increase of the transition barrier $E_{b}$ (see Fig. \[wh2\](d)).
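As a rough illustration of how such simulations proceed, the sketch below performs single-site Metropolis updates of the effective Hamiltonian of Eq. \[eq1\] on a small square lattice. It is a minimal sketch rather than the actual code used in this work: the coefficients $A$, $B$, $C$, the coupling $D$, the lattice size and the temperature are placeholder values standing in for the fitted parameters of Table S3.

```python
import numpy as np

# Minimal Metropolis sketch of the effective Hamiltonian of Eq. (1) on an
# L x L lattice of continuous local polarizations P_i (normalized to P_s = 1).
# A, B, C, D, the lattice size and the temperature are illustrative
# placeholders, not the fitted values of Table S3.
A, B, C, D = -1.0, 0.5, 0.1, 0.3
L, n_sweeps, T = 32, 500, 0.5            # temperature in units with k_B = 1

rng = np.random.default_rng(0)
P = np.ones((L, L))                      # start from the ordered FE state

def local_energy(P, i, j):
    """On-site double well plus the bonds connecting site (i, j) to its neighbors."""
    p = P[i, j]
    onsite = 0.5 * A * p**2 + 0.25 * B * p**4 + C / 6.0 * p**6
    neighbors = (P[(i + 1) % L, j], P[(i - 1) % L, j],
                 P[i, (j + 1) % L], P[i, (j - 1) % L])
    coupling = 0.5 * D * sum((p - q)**2 for q in neighbors)
    return onsite + coupling

for sweep in range(n_sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        old, e_old = P[i, j], local_energy(P, i, j)
        P[i, j] = old + rng.normal(scale=0.3)          # trial move
        dE = local_energy(P, i, j) - e_old
        if dE > 0.0 and rng.random() > np.exp(-dE / T):
            P[i, j] = old                              # reject

print("normalized <P_i> =", P.mean())
```

Scanning the temperature in this way and locating where $\langle P_{i}\rangle$ drops to zero gives, once the fitted coefficients are used, the estimate of $T_{c}$.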
In contrast, the $T_{c}$ of unstrained monolayer SnTe is only 166 K (see Fig. \[wh3\](b)), much smaller than the experimental value of $T_{c} = 270$ K. [@chang2016] However, the $T_{c}$ can be increased quickly by applying a biaxial tensile strain (see Fig. \[wh3\](b)). The sensitive response of the ferroelectricity in 2D SnTe to external strain offers a possible explanation for the difference between the predicted $T_{c}$ and the experimental one. The theoretical lattice anisotropy $\delta$ of monolayer SnTe is $1.95\%$ (see Table \[Table1\]), which is smaller than the experimental $\delta=3.15\%$. [@chang2016] If a uniaxial tensile strain of $\epsilon_{x}=1.0\%$ is applied to monolayer SnTe, the $T_{c}$ can be enhanced from 166 K to 265 K, close to the experimental $T_{c}$ of 270 K. [@chang2016] Such epitaxial strain has also been observed in other vdW-grown materials, [@cai2016band; @wang2016band; @zhang2015self] such as GaSe flakes grown via vdW epitaxy on the Si(111) surface. [@cai2016band] The origin of the strain in 2D SnTe calls for a future study.
The averaged polarization $\langle P_{i} \rangle$ in the vicinity of the $T_{c}$ follows an asymptotic form [@fridkin2014; @Fei2016] of $\langle P_{i} \rangle=C (T_{c}-T)^{\delta}$ with $T<T_{c}$. Here, $C$ is a constant and $\delta$ is the critical exponent. For monolayer GeTe, the asymptotic form fits well with the MC simulations, as shown in Fig. \[wh3\](a). $P_{s}$ decreases continuously to zero at $T_{c}$. The $\delta$ is 0.195, deviating from $\delta$=0.5 in the second-order ferroelectric phase transition. [@fridkin2014] A similar behavior has also been reported in other IV-VI compounds such as SnSe. [@Fei2016]
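A minimal sketch of how the critical exponent can be extracted from the temperature scan is given below. The "data" are synthetic stand-ins generated from the asymptotic form itself plus noise; in practice they would be replaced by the Monte Carlo results of Fig. \[wh3\](a).

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of <P> = C (Tc - T)**delta to a temperature scan below Tc.
# The points below are synthetic placeholders, not the actual MC data.
rng = np.random.default_rng(1)
T_mc = np.linspace(400.0, 565.0, 12)
P_mc = 0.30 * (570.0 - T_mc) ** 0.2 + rng.normal(scale=0.005, size=T_mc.size)

def asymptotic(T, C, Tc, delta):
    # clip keeps the power well defined if the optimizer wanders above Tc
    return C * np.clip(Tc - T, 0.0, None) ** delta

popt, _ = curve_fit(asymptotic, T_mc, P_mc, p0=(0.3, 575.0, 0.3))
print("C = %.3f, Tc = %.1f K, delta = %.3f" % tuple(popt))
```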
![(a) Temperature dependence of averaged polarization $\langle P_{i} \rangle$ of monolayer GeTe obtained by the MC simulations. Here the $\langle P_{i} \rangle$ at different temperatures has been normalized with respect to $\langle P_{i} \rangle_{T=0 K}=P_{s}$. The red line represents a fitting curve of $P_{s}$ with an asymptotic form in the vicinity of the Curie temperature $T_{c}$. (b) Biaxial strain dependence of $T_{c}$ for monolayer GeTe (red line) and SnTe (blue line).[]{data-label="wh3"}](fig3.pdf){width="50.00000%"}
Based on the Landau-Ginzburg phase transition theory, [@Cowley1980] the electric field $E$ can be calculated from the free energy, i.e. $E=\frac{\partial F(P)}{\partial P}$. The coercive field $E_{c}$ corresponds to the turning points of the hysteresis loop $P(E)$, satisfying the condition $(\frac{\partial P}{\partial E})^{-1}|_{E=E_{c}}=0$. Therefore, the ideal $E_{c}$ can be estimated from the maximum slope of the $F(P)$ curve between the NP and FE states. [@Wang2016] A lateral size of $l=30$ nm is adopted to estimate the effective coercive voltage $V_{c}=lE_{c}$; this lateral size is comparable to that of the latest ferroelectric field-effect transistor memories. [@Mueller2016] From the $F(P)$ curve of monolayer GeTe (Fig. \[wh2\](b)), the estimated $E_{c}$ is 0.206 V/nm and the effective $V_{c}$ is 6.18 V. It is noted that the ideal $E_{c}$ of a bulk ferroelectric material is always much higher than the experimentally measured $E_{c}$, due to the growth and propagation of ferroelectric domains. [@Kim2002] However, the distinction between them becomes small at the nanoscale, as thin ferroelectric films turn out to be more homogeneous than the bulk and the formation of ferroelectric domains is suppressed. [@Highland2010; @fridkin2014]
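This estimate can be illustrated with a few lines of code. The Landau coefficients below are hypothetical placeholders standing in for a polynomial fit of the first-principles $F(P)$ curve of Fig. \[wh2\](b), so the resulting numbers are schematic only.

```python
import numpy as np

# Ideal coercive field from a Landau expansion F(P) = A/2 P^2 + B/4 P^4 + C/6 P^6.
# A, B, C are hypothetical coefficients standing in for a fit of Fig. 2(b).
A, B, C = -2.0, 1.5, 0.3

# Spontaneous polarization: positive root of dF/dP = 0, i.e. A + B*Ps**2 + C*Ps**4 = 0
Ps = np.sqrt((-B + np.sqrt(B**2 - 4.0 * A * C)) / (2.0 * C))

P = np.linspace(0.0, Ps, 2001)
E = A * P + B * P**3 + C * P**5          # E = dF/dP along the NP -> FE path
E_c = np.abs(E).max()                    # maximum slope of F(P) between NP and FE

l = 30e-9                                # assumed lateral size of 30 nm
V_c = l * E_c                            # effective coercive voltage (schematic units)
print(f"Ps = {Ps:.3f}, E_c = {E_c:.3f}, V_c = {V_c:.3e}")
```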
![(a) Vertical electric field $E_{\perp}$ dependence of polarization $P_{s}$ and transition barrier $E_{b}$ of monolayer GeTe. (b) Vertical electric field $E_{\perp}$ dependence of in-plane coercive field $E_{c}$ and coercive voltage $V_{c}$ of monolayer GeTe. (c) A schematic representation of the switching of polarization of monolayer GeTe by a combination of an in-plane electric field $E_{//}$ and an out-of-plane electric field $E_{\perp}$.[]{data-label="wh4"}](fig4.pdf){width="45.00000%"}
It is found that if a vertical electric field $E_{\perp}$ is applied, the in-plane displacements of the Ge atoms with respect to the Te atoms decrease, due to the field-induced Coulomb forces. This leads to a reduction of the $P_{s}$ and $E_{b}$ (see Fig. \[wh4\](a)). As a result, the in-plane coercive field $E_{c}$ in monolayer GeTe can be effectively decreased by $E_{\perp}$, as displayed in Fig. \[wh4\](b). The maximum $E_{\perp}$ required to tune the $E_{c}$ is about 4.5 V/nm (see Fig. \[wh4\](b)). The equivalent $V_{\perp}$ is 4.5 V if an effective thickness of 1 nm for monolayer GeTe is adopted. Fig. \[wh4\](c) depicts a feasible way to rapidly switch the polarization by a combination of two mutually perpendicular electric fields. In this switching process, the required operating voltages are less than 5 V, which is desirable for the integration into Si-based semiconducting devices. [@Scott2007; @Rabe2005]
The stacking order is crucial for the ferroelectricity of multilayers of Group-IV tellurides. We first defined two kinds of stacking order for bilayer GeTe, namely, AA and anti-AA stacking. As shown in Fig. \[wh5\](a), for AA-stacking, the top layer is directly stacked on top of the bottom layer, so that the polarization of each layer is aligned parallel. By further shifting the top layer of the AA-stacking by $\bm{a}$/2, $\bm{b}$/2 and $(\bm{a+b})$/2, we obtain three other stacking orders, labeled AB, AC and AD stacking. The anti-AA stacking is obtained by rotating the top layer of the AA-stacked bilayer around the $z$ axis by $180^{\circ}$. Other stacking orders, e.g. anti-AB, anti-AC and anti-AD, can be obtained with a similar process. The energies of bilayer GeTe with different stacking orders are given in Table S4. [@support] The bilayer GeTe with the AA stacking has the lowest energy. The effective thickness of bilayer GeTe is taken as twice that of its monolayer. The corresponding effective bulk $P_{s}$ and $E_{b}$ are 34.2 ${\mu}$C/cm$^{2}$ and 40.1 meV/f.u., respectively, exhibiting an increase compared with those of monolayer GeTe (see Fig. \[wh2\](c) and Fig. \[wh5\](b)). Tensile strain can further enhance the $P_{s}$ and $E_{b}$ of bilayer GeTe (see Fig. \[wh5\](b)). Thus, promising ferroelectricity also exists in bilayer GeTe.
For bilayer SnTe, the AA-stacking is also the ground state but exhibits a weak ferroelectricity (see Table S4 [@support]). Therefore, 2D GeTe and SnTe have an advantage over the aforementioned Group-IV monochalcogenides such as SnSe in ferroelectric applications, as the ferroelectricity in 2D SnSe only exists in odd-numbered layers. [@Fei2016]
![(a) The AA stacking order for bilayer GeTe. (b) Biaxial strain dependence of the $P_{s}$ and $E_{b}$ of bilayer GeTe with the AA stacking.[]{data-label="wh5"}](fig5.pdf){width="45.00000%"}
In summary, we show by first-principles calculations that 2D GeTe with a hinge-like crystal structure is ferroelectric with an in-plane spontaneous polarization. When examining the atomic structure and ferroelectricity of 2D GeTe, it is necessary to include the van der Waals interactions in order to properly describe the interatomic interactions. The Curie temperature $T_{c}$ of monolayer GeTe is as high as 570 K. Tensile strain can effectively enhance the $T_{c}$ and serves as a powerful tool to improve the ferroelectricity of 2D GeTe. The in-plane coercive field for reversing the polarization can be widely tuned by a vertical electric field, facilitating the fast switching of the polarization. Furthermore, for bilayer GeTe the ferroelectric phase is still the ground state. With these advantages, 2D GeTe may be the long-sought candidate for realizing integrated ferroelectric applications.
We acknowledge Professor Wei Kang (Peking University) and Dr. Ruixiang Fei (Washington University) for fruitful discussions. The work is supported by the National Natural Science Foundation of China (Grant No. 11574029) and the MOST Project of China (Nos. 2014CB920903, 2016YFA0300600).
[^1]: These two authors contributed equally
[^2]: These two authors contributed equally
---
abstract: |
Transformations acting on the dependent and/or the independent variables are a useful method to classify PDEs into equivalence classes. In this paper we consider a large class of U(1)-invariant nonlinear Schrödinger equations containing complex nonlinearities. The U(1) symmetry implies the existence of a continuity equation for the particle density $\rho\equiv|\psi|^2$ where the current ${{\mbox{\boldmath${j}$}}}_{_\psi}$ has, in general, a nonlinear structure. We introduce a nonlinear gauge transformation on the dependent variables $\rho$ and ${{\mbox{\boldmath${j}$}}}_{\psi}$ which changes the evolution equation into another one containing only a real nonlinearity and transforms the particle current ${{\mbox{\boldmath${j}$}}}_{_\psi}$ into the standard bilinear form. We extend the method to U(1)-invariant coupled nonlinear Schrödinger equations where the most general nonlinearity is taken into account through the sum of a Hermitian matrix and an anti-Hermitian matrix. By means of the nonlinear gauge transformation we change the nonlinear system into another one containing only a purely Hermitian nonlinearity. Finally, we consider nonlinear Schrödinger equations minimally coupled with an Abelian gauge field whose dynamics is governed, in the most general fashion, by the Maxwell-Chern-Simons equation. It is shown that the nonlinear transformation we are introducing can be applied, in this case, separately to the gauge field or to the matter field with the same final result. In conclusion, some relevant examples are presented to show the applicability of the method.\
Mathematics Subject Classification (2000): 35Q55, 37K05, 37K35\
author:
- 'Antonio M.'
title: |
**Gauge equivalence among quantum\
nonlinear many body systems**
---
Introduction
============
The Schrödinger equation is one of the most studied topics from both a mathematical and a physical point of view. A particular interest is related to the possible nonlinear extensions of this equation. Just one year after the discovery of the Schrödinger equation, Fermi proposed its first nonlinear generalization [@Fermi].\
In the following years, many nonlinear extensions of this equation have been proposed in the literature either to explore fundamental aspects of quantum mechanics, with the usual linear theory representing only an approximation, or to describe particular phenomenological physical effects. Among the many attempts made to generalize the Schrödinger equation in a nonlinear manner we recall the Bialynicki-Birula and Mycielski equation [@Bialynicki], with the nonlinear term $-b\,\ln(|\psi|^2)\,\psi$; the Guerra and Pusterla model [@Guerra], which, with the purpose of preserving the superposition principle of quantum mechanics, introduced the nonlinear term $(\Delta |\psi|/|\psi|)\,\psi$; and, more recently, the Weinberg model [@Weinberg; @Weinberg1], with the introduction of homogeneous nonlinear terms in order to partially preserve the same fundamental principle.\
On a phenomenological basis we recall the well-known cubic Schrödinger equation [@Gross2; @Gross1; @Pitaevskii], used in the study of the dynamical evolution of a Boson gas with a $\delta$-function pair-wise repulsion or attraction [@Barashenkov] and in the description of the Bose-Einstein condensation of alkali atoms like $^7$Li, $^{23}$Na and $^{87}$Rb [@Shi; @Stringari]; the model introduced by Kostin [@Kostin2; @Kostin1; @Schuch2; @Schuch1], with the nonlinear term $i\,\ln(\psi/\psi^\ast)\psi+i\,\langle\ln(\psi/\psi^\ast)\rangle\,\psi$ used to describe dissipative systems; and others [@Dodonov; @Gisin; @Grigorenko; @Malomed; @Martina].\
Many nonlinear Schrödinger equations (NLSEs) contain complex nonlinearities. For instance, a nonlinearity of the type $a_1\,|\psi|^2\,\psi+a_2\,|\psi|^4\,\psi+
i\,a_3\,\partial_x(|\psi|^2\,\psi)+(a_4+i\,a_5)\,\partial_x|\psi|^2\,\psi$ was introduced to describe a single mode wave propagating in a Kerr dielectric guide [@Gagnon2; @Gagnon1]; the nonlinearity $a_1\,|\psi|^2\,\psi+i\,a_2\,\psi+
i\,a_3\,\partial_{xx}\psi+i\,a_4\,|\psi|^2\,\psi$, proposed in [@Malomed2; @Malomed1] to take into account pumping and damping effects of the nonlinear media, is used to describe dynamical modes in plasma physics, hydrodynamics, and also solitons in optical fibers ([@Malomed3] and references therein); the nonlinearity $a_1\,|\psi|^2\,\psi+
i\,a_2\,\partial_{xxx}\psi+i\,a_3\,\partial_x(|\psi|^2\,\psi)
+i\,a_4\,\partial_x|\psi|^2\,\psi$ introduced to describe the propagation of high power optical pulses in ultra-short soliton communication systems [@Gedalin; @Karpman; @Karpman1; @Karpman2; @Karpman3; @Li; @Doktorov]. In [@Kaniadakis; @Kaniadakis1] a NLSE with the complex nonlinearity $\kappa\,(\psi^\ast\,{{\mbox{\boldmath${\nabla}$}}}\psi-\psi\,{{\mbox{\boldmath${\nabla}$}}}\psi^\ast)\,
{{\mbox{\boldmath${\nabla}$}}}\,\psi+(\kappa/2)\,{{\mbox{\boldmath${\nabla}$}}}(\psi^\ast\,{{\mbox{\boldmath${\nabla}$}}}
\psi-\psi\,{{\mbox{\boldmath${\nabla}$}}}\psi^\ast)\,\psi$ has been introduced to take into account a generalized Pauli exclusion-inclusion principle between the quantum particles constituting the system. In [@Scarfone7], in the stochastic quantization framework, starting from the most general kinetic containing a nonlinear drift term and compatible with a linear diffusion term, a class of NLSEs with a complex nonlinearity was derived whilst, recently [@Scarfone10; @Scarfone8; @Scarfone9], a wide class of NLSEs has been obtained starting from the quantization of a classical many body system whose underlying kinetic is described by a nonlinear Fokker-Planck equation associated to a generalized trace-like entropy. Finally, the Doebner-Goldin equation [@Doebner3; @Doebner2; @Doebner1; @Goldin1] was introduced from topological considerations as the most general class of NLSEs compatible with the linear Fokker-Planck equation for the probability density $\rho\equiv|\psi|^2$, where the nonlinear term was derived from the unitary group representation of the infinite-dimensional diffeomorphism group proposed as a [*universal quantum kinematical group*]{} [@Goldin2].\
In recent years, an increasing interest has also been addressed to systems of coupled nonlinear Schrödinger equations (CNLSEs), particularly after the invention of high-intensity lasers which allowed [@Mollenauer] the experimental test of the pioneering theoretical works on optical fiber propagation in long-distance communications [@Hasegawa; @Tappert]. In fact, single-mode optical fibers are not really single-mode since two possible polarizations exist. A rigorous study of their propagation requires the use of CNLSEs in order to take into account the evolution of the differently polarized waves. In 1974 Manakov [@Manakov] introduced a CNLSE starting from the cubic NLSE by considering that the total field is a superposition of two, left- and right-polarized, fields. When ultrashort pulses are transmitted through fibers, CNLSEs with complex higher-derivative nonlinearities arise [@Mahalingam; @Nakkeeran1; @Nakkeeran2; @Nakkeeran3; @Lakshmanan; @Vinoj]. CNLSEs are also employed in the study of light propagation through a nonlinear birefringent medium, in systems with nonrelativistic interactions among different kinds of particles, in spinor Bose-Einstein condensation and in the description of micro-polar elastic solids, among many others [@Ablowitz; @Agrawal; @Zakharov; @Erbay; @Bose; @Matthews; @Newboult; @Ryskin; @Yip].\
Finally, when coupled with gauge fields, NLSEs are useful in the study of some interesting phenomena in condensed matter physics. For instance, in the Ginzburg-Landau theory of superconductivity the cubic NLSE is coupled with an Abelian gauge field whose interaction is described by means of the Maxwell equations [@Kaper; @Tomaras]. Sometimes, the gauge field dynamics can be described by the additional Chern-Simons term, which confers mass to the field without destroying the gauge invariance of the theory. These models have particle-like solutions obeying a non-conventional statistics, named anyons [@Jackiw3; @Jackiw1; @Wilczek1], which can find an application in the study of high-$T_{\rm c}$ superconductors [@Wainberg2; @Wilczek2].\
In this work we present, in a unified way, some recent results concerning the gauge equivalence among U(1)-invariant nonrelativistic quantum systems described by nonlinear Schrödinger equations containing a complex nonlinearity [@Scarfone4; @Scarfone3; @Scarfone2; @Scarfone1; @Scarfone5]. We discuss, in particular, some aspects of the gauge transformations of the third kind based on their capability to change the NLSE into another one containing only a real nonlinearity (in the case of CNLSEs complex nonlinearities are replaced by non-Hermitian nonlinearities and the gauge transformation changes them into purely Hermitian nonlinearities). It is worth observing that nonlinear transformations of the type discussed in the following have been introduced previously in the literature to relate different families of NLSEs [@Kundu2; @Kundu1], CNLSEs [@Wadati; @Wadati2] and, more in general, nonlinear PDEs [@Calogero6; @Calogero5; @Calogero4a; @Calogero4].\
In [@Doebner4] the name “gauge transformations of the third kind” was coined for this class of unitary nonlinear transformations. They differ from the gauge transformations of the first kind, which have constant generators, and those of the second kind, which have generators depending on the space coordinates and possibly on time, since the gauge transformations of the third kind have generators depending functionally, often in a nonlinear manner, on the fields.\
On physical grounds, they are named [*gauge transformations*]{} because, as stated by Feynman and Hibbs [@Feynmann], in nonrelativistic quantum mechanics all measurements of observables are always accomplished through a measurement of position and time. Thus, quantum theories for which the corresponding wave-functions give the same probability density in space at all times are in principle equivalent [@Doebner4]. In particular, when the wave-functions $\psi$ and $\phi$ are related to each other by a unitary transformation, like in the gauge transformations which we are introducing, the position probability densities coincide, $|\psi|^2=|\phi|^2\equiv\rho$, and the fields $\psi$ and $\phi$ describe the same system. In this sense, nonlinear gauge transformations permit the classification of NLSEs into equivalence classes. All members belonging to the same class, in spite of their different nonlinearities, describe the same physical system.\
After the brief introductory next section about the notations used in this paper, we begin, in section 3, by considering a wide class of canonical NLSEs with complex nonlinearity in an $(n+1)$-dimensional space-time (throughout this paper we use units $\hbar=c=e=1$ and we set $m=1/2$) $$i\,\frac{\partial\psi}{\partial
t}+\Delta\psi+\Big(W[\psi^\ast,\,\psi]+i\,{\mathcal
W}[\psi^\ast,\,\psi]\Big)\,\psi=0 \ ,\label{sch1}$$ describing, in the mean field approximation, the dynamics of a nonrelativistic scalar field $\psi$ conserving the particle number $N=\int\rho\,d^nx$. The real $W[\psi^\ast,\,\psi]$ and the imaginary ${\mathcal W}[\psi^\ast,\,\psi]$ nonlinearities appearing in equation (\[sch1\]) are smooth functionals of the fields $\psi,\,\psi^\ast$ and of their spatial derivatives of any order. Since the nonlinearity in the evolution equation is complex, the particle current ${{\mbox{\boldmath${j}$}}}_{_\psi}$ has, in general, a nonlinear structure which differs from the bilinear form of the standard linear quantum mechanics. We introduce the Lagrangian formulation both in the wave-function representation and in the hydrodynamic representation and we study the U(1) symmetry, which plays a relevant role for the purpose of the present work. Then, we introduce a nonlinear unitary transformation $\psi\to\phi$ that changes the complex nonlinearity $W[\psi,\,\psi^\ast]+i\,{\mathcal
W}[\psi,\,\psi^\ast]$ into another one $\widetilde
W[\phi,\,\phi^\ast]$ which turns out to be purely real. As a consequence, the new current ${{\mbox{\boldmath${j}$}}}_{_\phi}$ assumes the bilinear form of the linear Schrödinger theory. In [@Calogero3; @Calogero6; @Calogero5; @Calogero4a; @Calogero4; @Calogero1; @Doebner4; @Fordy; @Hisakado2; @Hisakado1; @Jackiw; @Kundu2; @Kundu1; @Wadati; @Wadati2] we can find some examples of nonlinear gauge transformations of the third kind. All the transformations introduced there can be systematically obtained with the method presented here.\
In section 4, we generalize the transformation to CNLSEs by considering the following system of equations $$i\,\frac{\partial\Psi}{\partial t}+\hat{A}\,
\Delta\Psi+\Big(\widehat W[\Psi^\dag,\,\Psi]+i\,\widehat{\mathcal
W}[\Psi^\dag,\,\Psi]\Big)\,\Psi=0 \ ,\label{csch}$$ where $\Psi=(\psi_{_1},\,\ldots,\,\psi_{_p})$ is a $p$-dimensional vector of scalar wave-functions and the nonlinearity $\widehat\Lambda[\Psi^\dag,\,\Psi]=\widehat
W[\Psi^\dag,\,\Psi]+i\,\widehat{\mathcal W}[\Psi^\dag,\,\Psi]$ is composed of a Hermitian matrix $\widehat
W=(\widehat\Lambda+\widehat\Lambda^\dag)/2$ and an anti-Hermitian matrix $i\,\widehat{\mathcal
W}=(\widehat\Lambda-\widehat\Lambda^\dag)/2$. We assume that the system (\[csch\]) has $q\leq p$ conserved multiplets which implies the existence of $q$ continuity equations. Then, we introduce a nonlinear gauge transformation $\Psi\rightarrow\Phi$ which transforms the CNLSE into another one with a purely Hermitian nonlinearity $\widehat\Lambda^\prime[\Psi^\dag,\,\Psi]=\widehat
W^\prime[\Psi^\dag,\,\Psi]$. As a consequence, the transformed currents assume the standard bilinear form.\
Finally, in section 5, we generalize the method to a class of NLSEs minimally coupled with an Abelian gauge field $A_\mu$, where the matter field is described by the following equation $$i\,D_0\psi+{{\mbox{\boldmath${D}$}}}^2\psi+\Big(W[\psi^\ast,\,\psi,\,{\mbox{\boldmath${
A}$}}]+i\,{\mathcal W}[\psi^\ast,\,\psi,\,{\mbox{\boldmath${ A}$}}]\Big)\,\psi=0 \
,\label{schroedinger}$$ where $\psi$ is the scalar charged field and $D_\mu\equiv(D_0,\,{{\mbox{\boldmath${D}$}}})$ denotes the standard covariant derivative. The dynamics of the gauge field is provided by the Maxwell-Chern-Simons equation $$\gamma\,\partial_\mu F^{\mu\nu}+g\,\varepsilon^{\nu\tau\mu}
\,F_{\tau\mu}=j^\nu_{_{A\psi}} \ ,\label{ggg}$$ where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the electromagnetic field and $j^\nu_{_{A\psi}}\equiv(\rho,\,{{\mbox{\boldmath${j}$}}}_{_{A\psi}})$ is the covariant current. We stress once more that the charged current ${{\mbox{\boldmath${j}$}}}_{_{A\psi}}$ has, in general, a nonlinear structure due to the presence, in the evolution equation (\[schroedinger\]), of the complex nonlinearity $W[\psi^\ast,\,\psi,\,{\mbox{\boldmath${ A}$}}]+i\,{\mathcal W}[\psi^\ast,\,\psi,\,{\mbox{\boldmath${ A}$}}]$. The gauge transformation can be applied equivalently to the matter field or to the gauge field, obtaining the same final result: the nonlinearity in the Schrödinger equation (\[schroedinger\]) turns out to be purely real and the expression of the charged current reduces to the standard bilinear form.\
In section 6, we collect some explicit examples to illustrate the applicability of the method whilst, in the conclusive section 7, we discuss possible further developments of the nonlinear gauge transformations presented in this paper.
Preliminary mathematical background
===================================
Let $M$ be a complex $n$-dimensional smooth manifold labeled by the vector ${{\mbox{\boldmath${x}$}}}\equiv(x_{_1},\,\ldots,\,x_{_n})$. Let ${\mathcal
F}:\,\,M\rightarrow I\!\!R$ be the algebra of the functions on $M$ and $F:\,\,{\mathcal F}\rightarrow I\!\!R$ the algebra of the functionals on ${\mathcal F}$ of the type $G=\int{\mathcal G}({{\mbox{\boldmath${x}$}}},\,t)\,d^nx$. Let $\psi_{_j}({{\mbox{\boldmath${x}$}}},\,t)\in{\mathcal F}$ with $j=1,\ldots,p$ be a set of $p$ fields on $M$, with $t$ a real parameter, and let us denote by ${{\mbox{\boldmath${\Omega}$}}}\equiv(\psi_{_1},\ldots,\psi_{_p})$ a $p$-dimensional vector on ${\mathcal M}=M\times\ldots\times M$. We assume uniform boundary conditions, to guarantee the convergence of the integrals, by requiring that all fields and their spatial derivatives vanish quickly on the boundary of ${\mathcal M}$.\
In the following we consider a nonrelativistic canonical system described by the action ${\mathcal A}=\int{\mathcal
L}[{{\mbox{\boldmath${\Omega}$}}}]\,d^nx\,dt$, where ${\mathcal L}[{{\mbox{\boldmath${\Omega}$}}}]\in F$ is the Lagrangian density which depends on the scalar fields $\psi_{_j}\in M$ and their space and time derivatives. Hereinafter we use the notation between square brackets $G[\psi]$ to indicate the dependence of $G$ on the field $\psi$ and its spatial derivatives of any order. Since the theory is nonrelativistic, the Lagrangian contains only first-order time derivatives.\
We introduce the variation of a functional $G$ with respect to ${\mbox{\boldmath${\Omega}$}}$ as $$\delta G=\left(\frac{\delta G}{\delta\psi_{_1}},\ldots, \frac{\delta
G}{\delta\psi_{_p}}\right) \ ,\label{va}$$ where the functional derivative can be defined by means of the Euler operator [@Olver] $$\frac{\delta G}{\delta\psi_{_j}}\equiv{\mathcal E}_{_{\psi_{_j}}}(G)
\ ,\label{euler}$$ given by $${\mathcal E}_{_{\psi_{_j}}}(G)=-\frac{\partial}{\partial
t}\left(\frac{\partial{\mathcal
G}[{{\mbox{\boldmath${\Omega}$}}}]}{\partial(\partial_t\psi_{_j})}\right)+\sum_{[k=0]}(-1)^k{\mathcal
D}_{_{I_k}}\left( \frac{\partial{\mathcal
G}[{{\mbox{\boldmath${\Omega}$}}}]}{\partial({\mathcal D}_{_{I_k}}\psi_{_j})}\right) \
,\label{derfun}$$ with ${\mathcal D}_{_{I_k}}\equiv\partial^k/(\partial
x_{_1}^{i_1}\ldots
\partial x_{_n}^{i_n})$ and $\sum_{[k=0]}\equiv\sum_{k=0}^\infty\sum_{I_k}$. The sum $\sum_{I_k}$ is over the multi-index $I_k\equiv(i_1,\,\ldots,\,i_n)$ with $0\leq i_q\leq k$, $\sum_q i_q=k$ and $1\leq q\leq n$.\
It is easy to show, by using equation (\[derfun\]), that the Euler operator satisfies the following property $${\mathcal E}\left(\frac{\partial B}{\partial
t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${C}$}}}\right)=0 \ ,\label{null}$$ where $B$ and the components of ${{\mbox{\boldmath${C}$}}}$ belong to $F$ whilst ${{\mbox{\boldmath${\nabla}$}}}\equiv(\partial_{_1},\,\ldots,\,\partial_{_n})$ is the $n$-dimensional gradient operator.
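As a simple illustration of definition (\[derfun\]), consider the functional $G=\int|{{\mbox{\boldmath${\nabla}$}}}\psi|^2\,d^nx$, which will appear below as the kinetic term of the Lagrangian. Since the integrand depends on $\psi^\ast$ only through its first derivatives, only the $k=1$ terms survive and $$\frac{\delta G}{\delta\psi^\ast}={\mathcal E}_{_{\psi^\ast}}(G)=-\sum_{i=1}^n\partial_{_i}\left(\frac{\partial\,({{\mbox{\boldmath${\nabla}$}}}\psi\cdot{{\mbox{\boldmath${\nabla}$}}}\psi^\ast)}{\partial(\partial_{_i}\psi^\ast)}\right)=-\Delta\psi \ ,$$ which is the familiar result of the linear theory.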
Nonlinear Schrödinger equation for scalar particles
===================================================
We begin by considering nonrelativistic many-body systems of scalar interacting particles described, in the mean field approximation, through a very general family of NLSEs. It is useful to recall the main aspects of the canonical theory both in the wave-function formulation and in the hydrodynamic formulation.\
[*a) Wave-function formulation.*]{}\
Let us consider the class of canonical NLSEs described by the Lagrangian density $${\mathcal
L}[\psi^\ast,\,\psi]=\frac{i}{2}\,\left(\psi^\ast\,\frac{\partial\psi}{\partial
t}- \psi\,\frac{\partial\psi^\ast}{\partial t}\right)-
|{{\mbox{\boldmath${\nabla}$}}}\psi|^2-U[\psi^\ast,\,\psi] \ ,\label{lagrangean}$$ which is a functional of the 2-dimensional vector ${{\mbox{\boldmath${\Omega}$}}}\equiv(\psi^\ast,\,\psi)$. The first two terms are the same as those encountered in the standard linear quantum mechanics whilst the last term is the nonlinear potential describing the interaction among the particles of the system. We assume that $U[\psi^\ast,\,\psi]$ is a real smooth functional of the fields $\psi$ and $\psi^\ast$ and their spatial derivatives which leaves the Lagrangian density (\[lagrangean\]) invariant under transformations belonging to the U(1) group, which assures the conservation of the total number of particles. As we will show, this condition imposes a constraint on the form of the nonlinear potential $U[\psi,\,\psi^\ast]$.\
By introducing the action of the system $${\mathcal A}=\int\limits_{\mathcal R}{\mathcal
L}[\psi^\dag,\,\psi]\,d^nx\,dt \ ,\label{action}$$ where the domain of integration is the whole real region ${\mathcal
R}=M\times I\!\!R$, the evolution equation for the vector field $\psi$, corresponding to the stationary trajectories of the action, can be obtained from the extremal problem $$\delta{\mathcal A}=0 \ ,\label{vv}$$ where the variation is performed with respect to the vector ${{\mbox{\boldmath${\Omega}$}}}$. According to equation (\[va\]), the Euler-Lagrange equations for the fields $\psi$ and $\psi^\ast$ are given by $$\begin{aligned}
\nonumber &&\frac{\delta}{\delta\psi^\ast}\int\limits_{\mathcal R}
\frac{i}{2}\left(\psi^\ast\,\frac{\partial\psi}{\partial t}-
\psi\,\frac{\partial\psi^\ast}{\partial t}\right)\,d^nx\,dt-
\frac{\delta}{\delta\psi^\ast}\int\limits_{\mathcal R}|{{\mbox{\boldmath${\nabla}$}}}\psi|^2\,d^nx\,dt\\
&-&\frac{\delta}{\delta\psi^\ast}\int\limits_{\mathcal R}
U[\psi^\ast,\,\psi]\,d^nx\,dt=0 \ ,\end{aligned}$$ and its conjugate. We recall that the Lagrangian density (\[lagrangean\]) is defined modulo a total derivative (null Lagrangian) which does not contribute to the evolution equations because, as stated in equation (\[null\]), the variation of a total derivative vanishes.\
After performing the functional derivatives we obtain the following NLSE $$i\,\frac{\partial\psi}{\partial t}+\Delta\psi
+\Lambda[\psi^\ast,\,\psi]=0 \ ,\label{schroedingerr}$$ where $\Delta\equiv\partial^2_{_1}+\ldots+\partial^2_{_n}$ is the $n$ dimensional Laplacian operator and the complex nonlinear term $\Lambda[\psi^\ast,\,\psi]$ is given by $$\Lambda[\psi^\ast,\,\psi]=-\frac{\delta}{\delta\psi^\ast}\int\limits_{\mathcal
R} U[\psi^\ast,\,\psi]\,d^nx\,dt \ .$$
[*b) Hydrodynamic formulation.*]{}\
In the hydrodynamic formulation we introduce two real fields $\rho$ and $S$ related to the wave-function through the polar decomposition $$\psi({{\mbox{\boldmath${x}$}}},t)=\rho^{1/2}({{\mbox{\boldmath${x}$}}},t)\,\exp\Big(i\,S({{\mbox{\boldmath${x}$}}},t)\Big) \ ,\label{ansatz}$$ or equivalently $$\begin{aligned}
&&\rho({{\mbox{\boldmath${x}$}}},\,t)=|\psi({{\mbox{\boldmath${x}$}}},\,t)|^2 \ ,\label{r}\\&&\nonumber\\
&&S({{\mbox{\boldmath${x}$}}},\,t)=\frac{i}{2}\,\log\left(\frac{\psi^\ast({{\mbox{\boldmath${x}$}}},\,t)}{\psi({{\mbox{\boldmath${x}$}}},\,t)}\right) \ .\label{s}\end{aligned}$$ By defining the action ${\mathcal A}=\int_{\mathcal R}{\mathcal
L}[\rho,\,S]\,d^nx\,dt$ through the Lagrangian density $${\mathcal L}[\rho,\,S]=-\frac{\partial S}{\partial t}\,\rho
-({{\mbox{\boldmath${\nabla}$}}} S)^2\,\rho-\frac{({{\mbox{\boldmath${\nabla}$}}}
\rho)^2}{4\,\rho}-U[\rho,\,S] \ ,\label{lagrangiana1}$$ where $U[\rho,\,S]$ is the nonlinear potential in the hydrodynamic representation, from the variational problem $\delta\,{\mathcal
A}=0$, where now ${{\mbox{\boldmath${\Omega}$}}}\equiv(\rho,\,S)$, we obtain two real equations $$\begin{aligned}
&&\frac{\partial S}{\partial t}+({{\mbox{\boldmath${\nabla}$}}} S)^2+U_q[\rho]-W[\rho,\,S]=0 \ ,\label{hj}\\
&&\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot(2\,\rho\,{{\mbox{\boldmath${\nabla}$}}} S)+2\,\rho\,{\mathcal
W}[\rho,\,S]=0 \ .\label{conti}\end{aligned}$$ In equations (\[hj\]) and (\[conti\]) $U_q[\rho]=-\Delta\rho^{1/2}/\rho^{1/2}$ denotes the quantum potential [@Bohm; @Madelung] whilst the two real functionals $W[\rho,\,S]$ and ${\mathcal W}[\rho,\,S]$ are given by $$\begin{aligned}
&&W[\rho,\,S]=-\frac{\delta}{\delta\,\rho}\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt \ ,\label{real}\\ &&{\mathcal
W}[\rho,\,S]=-\frac{1}{2\,\rho}\frac{\delta}{\delta\,S}\int\limits_{\mathcal
R} U[\rho,\,S]\,d^nx\,dt \ .\label{imaginary}\end{aligned}$$ According to the relation $$\frac{\delta}{\delta\psi}=\psi\,\left(\frac{\delta}{\delta\rho}
+\frac{i}{2\,\rho}\,\frac{\delta}{\delta S}\right) \ ,$$ the quantities $W[\rho,\,S]$ and ${\mathcal W}[\rho,\,S]$ are related to the nonlinearity $\Lambda[\psi,\,\psi^\ast]$ in $$\Lambda[\psi,\,\psi^\ast]=\Big(W[\rho,\,S]+i\,{\mathcal
W}[\rho,\,S]\Big)\,\psi \ .\label{ll}$$ Equation (\[hj\]) is a [*Hamilton-Jacobi*]{}-like equation for the field $S$ whilst equation (\[conti\]) describes the evolution equation for the field $\rho$. The last term in equation (\[conti\]) originates from the nonlinear potential $U[\rho,\,S]$ and acts as a source of particles which, in the general case, destroys the conservation of the number of particles of the system.\
Accounting for equation (\[ll\]) we can rewrite the NLSE (\[schroedingerr\]) as $$i\,\frac{\partial\psi}{\partial t}+\Delta\psi
+\Big(W[\rho,\,S]+i\,{\mathcal W}[\rho,\,S]\Big)\,\psi=0 \
,\label{schroedinger1}$$ where the complex nonlinearity $\Lambda[\rho,\,S]$, expressed in terms of the hydrodynamic fields $\rho$ and $S$, is separated into its real $W[\rho,\,S]$ and imaginary ${\mathcal W}[\rho,\,S]$ parts.
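We note that the quantum potential appearing in equation (\[hj\]) is nothing but the functional derivative of the gradient term of the Lagrangian (\[lagrangiana1\]). The following symbolic sketch (a purely illustrative one-dimensional check with sympy) verifies this identity:

```python
import sympy as sp

x = sp.symbols('x')
rho = sp.Function('rho', positive=True)
r = sp.symbols('r', positive=True)       # stand-in for rho
rx = sp.symbols('r_x', real=True)        # stand-in for d(rho)/dx

# Gradient term of the hydrodynamic Lagrangian density: (d rho/dx)^2 / (4 rho)
L = rx**2 / (4 * r)

# One-dimensional Euler operator: delta L/delta rho = dL/dr - d/dx [dL/dr_x]
subs = {r: rho(x), rx: sp.diff(rho(x), x)}
func_deriv = sp.diff(L, r).subs(subs) - sp.diff(sp.diff(L, rx).subs(subs), x)

# Quantum potential U_q = -(sqrt(rho))'' / sqrt(rho)
U_q = -sp.diff(sp.sqrt(rho(x)), x, 2) / sp.sqrt(rho(x))

print(sp.simplify(func_deriv - U_q))     # expected to print 0
```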
U(1) Symmetry
-------------
Differently from the linear Schrödinger equation, which is U(1)-invariant, the presence of the nonlinearity $U[\rho,\,S]$ generally breaks this symmetry. In fact, equation (\[conti\]), in general, is not a continuity equation for the field $\rho$. In the following we study the relevant restrictions on the nonlinear potential $U[\rho,\,S]$ so that the Lagrangian (\[lagrangean\]) turns out to be U(1)-invariant. Such invariance implies, according to the Noether theorem [@Noether], the conservation of the total number of particles by restoring a continuity equation for the field $\rho$.\
To begin with, we rewrite equation (\[conti\]) in the form $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}^{(0)}_{_\psi}+2\,\rho\,{\mathcal W}[\rho,\,S]=0 \ ,\label{eq2}$$ where $${{\mbox{\boldmath${j}$}}}^{(0)}_{_\psi}=-i\,\Big(\psi^\ast\,{{\mbox{\boldmath${\nabla}$}}}\psi
-\psi\,{{\mbox{\boldmath${\nabla}$}}}\psi^\ast\Big)\equiv2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}S \
,\label{currentcct}$$ is the bilinear particle current of the standard quantum mechanics. Making use of equation (\[imaginary\]), equation (\[eq2\]) becomes $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}^{(0)}_{_\psi}-\,\frac{\delta}{\delta S}\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt=0 \ ,\label{bilin}$$ and taking into account the definition of functional derivative (\[euler\]) we get $$\begin{aligned}
\nonumber \frac{\partial\rho}{\partial t}&+&{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}^{(0)}_{_\psi}-\frac{\partial}{\partial S}\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt\\
&-&\sum_{[k=1]}(-1)^k{\mathcal
D}_{_{I_k}}\left(\frac{\partial}{\partial({\mathcal
D}_{_{I_k}}S)}\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt\right)=0 \ .\label{cuc}\end{aligned}$$ This last equation can be written in the form $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_\psi}=\frac{\partial U}{\partial S} \ ,\label{continuityro}$$ where the nonlinear current ${\mbox{\boldmath${j}$}}_{_\psi}$ has components $$\Big({{\mbox{\boldmath${j}$}}}_{_\psi}\Big)_i=2\,\rho\,\partial_iS+\frac{\delta}{\delta(\partial_iS)}\int\limits_{\mathcal
R} U[\rho,\,S]\,d^nx\,dt \ ,\label{ccc}$$ because, according to the definition (\[derfun\]), the following relation holds $$\begin{aligned}
\nonumber & &\sum_{[k=1]}(-1)^k{\mathcal
D}_{_{I_k}}\left(\frac{\partial}{\partial({\mathcal
D}_{_{I_k}}S)}\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt\right)\\
&=&-{{\mbox{\boldmath${\nabla}$}}}\cdot
\Bigg(\frac{\delta}{\delta({{\mbox{\boldmath${\nabla}$}}}S)}\,\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt\Bigg) \ .\end{aligned}$$ Remark that the expression of the current (\[ccc\]) is always defined modulo the curl of an arbitrary functional $G[\rho,\,S]$ which does not contribute to equation (\[continuityro\]).\
As stated before, the Lagrangian (\[lagrangean\]), for a general nonlinear potential $U[\rho,\,S]$, is not U(1)-invariant. In fact, equation (\[continuityro\]) describes a very general kinetic process in which the right hand side plays the role of a source of particles. Trivially, if the nonlinear potential $U[\rho,\,S]$ depends on the field $S$ only through its spatial derivatives the right hand side of equation (\[continuityro\]) vanishes and it becomes a continuity equation for the field $\rho$ $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_\psi}=0
\ .\label{continu}$$ The conserved quantity associated to equation (\[continu\]) is the total number of particles $$N=\int\limits_M\rho\,d^nx \ ,\label{numero}$$ where the integral is evaluated on the full real region $M$ (uniform boundary conditions guarantee the convergence of the integral).\
Under the assumption that $U[\rho,\,S]$ depends on $S$ only through its derivatives, according to the relation $$\frac{\delta}{\delta S}\int\limits_{\mathcal R}
U[\rho,\,S]\,d^nx\,dt=-{{\mbox{\boldmath${\nabla}$}}}\cdot\left(\frac{\delta}{\delta({{\mbox{\boldmath${\nabla}$}}}S)}\int\limits_{\mathcal
R} U[\rho,\,S]\,d^nx\,dt\right) \ ,$$ we can rewrite the expression of ${\mathcal W}[\rho,\,S]$ in $${\mathcal W}[\rho,\,S]=\frac{1}{2\,\rho}\,{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${\mathcal J}$}}}[\rho,\,S] \ ,\label{funf1}$$ where ${\mbox{\boldmath${\mathcal J}$}}[\rho,\,S]$ is given by $$\left({\mbox{\boldmath${\mathcal
J}$}}\right)_i[\rho,\,S]=\frac{\delta}{\delta(\partial_iS)}\int\limits_{\mathcal
R} U[\rho,\,S]\,d^nx\,dt \ ,\label{funf}$$ and the expression of the current (\[ccc\]) becomes $${{\mbox{\boldmath${j}$}}}_{_\psi}={{\mbox{\boldmath${j}$}}}_{_\psi}^{(0)}+{{\mbox{\boldmath${\mathcal J}$}}
}[\rho,\,S] \ .\label{cccc}$$ Summing up, the nonlinear potential $U[\rho,\,S]$ generally breaks the U(1) symmetry of the system (\[lagrangean\]). On the other hand, if it depends on the field $S$ only through its spatial derivatives, the U(1) symmetry is restored. This can be clarified by taking into account that, under a global U(1) transformation (gauge transformation of the first kind) $$\psi\rightarrow\phi=e^{i\,\epsilon}\,\psi \ ,\label{u1}$$ the phase $S$ transforms as $$S\rightarrow {\mathcal S}=S+\epsilon \ ,$$ where $\epsilon$ is the constant parameter of the transformation. As a consequence, if $S$ appears in the Lagrangian only through its derivatives, the transformation (\[u1\]) does not change the Lagrangian of the system.
Gauge transformation
--------------------
We introduce a unitary and nonlinear transformation on the field $\psi$ $$\psi({{\mbox{\boldmath${x}$}}},\,t)\rightarrow\phi({{\mbox{\boldmath${x}$}}},\,t)={\mathcal
U}[\rho,\,S]\,\psi({{\mbox{\boldmath${x}$}}},\,t) \ ,\label{transf1}$$ whose purpose is to change the NLSE (\[schroedinger1\]), which contains a complex nonlinearity $W[\rho,\,S]+i{\mathcal
W}[\rho,\,S]$, into another one containing only a purely real nonlinearity $\widetilde W[\rho,\,{\mathcal S}]$. As a consequence, the current ${\mbox{\boldmath${j}$}}_{_\psi}$, given in equation (\[cccc\]), is transformed into another one, ${{\mbox{\boldmath${j}$}}}_{_\psi}\rightarrow{{\mbox{\boldmath${j}$}}}_{_\phi}^{(0)}$, having the standard bilinear form of ordinary quantum mechanics.\
Since the transformation is unitary: ${\mathcal
U}^\ast={\mathcal U}^{-1}$, equation (\[transf1\]) does not change the quantity $$\rho({{\mbox{\boldmath${x}$}}},\,t)=|\psi({{\mbox{\boldmath${x}$}}},\,t)|^2=|\phi({{\mbox{\boldmath${x}$}}},\,t)|^2 \ ,$$ representing the density of probability of position of the system.\
The functional $\mathcal U[\rho,\,S]$ is defined as $${\mathcal U}[\rho,\,S]=\exp\Big(i\,\sigma[\rho,\,S]\Big) \
,\label{trasf2}$$ where the generator $\sigma[\rho,\,S]$ is a real functional which relates the phase $\mathcal S$ of the field $\phi$ with the phase $S$ of the field $\psi$ $${\mathcal S}=S+\sigma[\rho,\,S] \ ,\label{phases}$$ since we define $$\phi({{\mbox{\boldmath${x}$}}},\,t)=\rho^{1/2}({{\mbox{\boldmath${x}$}}},\,t)\,\exp\Big(i\,{\mathcal
S}({{\mbox{\boldmath${x}$}}},\,t)\Big) \ .$$ When equation (\[phases\]) is invertible, we can express the phase $S$ as a functional of the fields $\mathcal S$ and $\rho$.\
The expression of the generator $\sigma[\rho,\,S]$ is related to the imaginary part $\mathcal W[\rho,\,S]$ of the NLSE through $${{\mbox{\boldmath${\nabla}$}}}\sigma[\rho,\,S]=\frac{1}{2\,\rho}\,{{\mbox{\boldmath${\mathcal J}$}}
}[\rho,\,S] \ ,\label{genf}$$ which defines $\sigma[\rho,\,S]$ modulo an arbitrary integration constant. The same equation (\[genf\]) imposes a condition on the form of the nonlinear potential as it follows from the relation ${{\mbox{\boldmath${\nabla}$}}}\times{{\mbox{\boldmath${\nabla}$}}}\,\sigma=0$ (where ${{\mbox{\boldmath${\nabla}$}}}\times{{\mbox{\boldmath${f}$}}}$ means $\partial_{_i}\,f_{_j}-\partial_{_j}\,f_{_i}$ with $i,\,j=1,\ldots,\,n$) $${{\mbox{\boldmath${\nabla}$}}}\times\left(\frac{{\mbox{\boldmath${\mathcal
J}$}}[\rho,\,S]}{\rho}\right)=0 \ .\label{rot}$$ Equation (\[rot\]) selects the potentials $U[\rho,\,S]$ and, in this way, the nonlinear systems in which we can perform the transformation (\[transf1\]). For one-dimensional systems this transformation can always be accomplished.
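As a purely illustrative one-dimensional example, suppose the imaginary nonlinearity has the form ${\mathcal W}[\rho]=\kappa\,\partial_x\rho$, which corresponds, through equation (\[funf1\]), to ${\mathcal J}[\rho]=\kappa\,\rho^2$. Equation (\[genf\]) then gives $\partial_x\sigma=\kappa\,\rho/2$, i.e. $$\sigma[\rho]=\frac{\kappa}{2}\int\limits_{-\infty}^{x}\rho(x^\prime,\,t)\,dx^\prime \ ,$$ where the integration constant has been fixed by the boundary condition at $x\to-\infty$.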
By plugging the expression of $\psi({{\mbox{\boldmath${x}$}}},\,t)={\mathcal
U}^{-1}[\rho,\,S]\,\phi({{\mbox{\boldmath${x}$}}},\,t)$ into the NLSE (\[schroedinger1\]) it is easy to verify that it reduces to the following evolution equation $$i\,\frac{\partial\phi}{\partial t}+\Delta\phi
+\widetilde{W}[\rho,\,{\mathcal S}]\,\phi=0 \ ,\label{schroedinger3}$$ which contains only a real nonlinearity $\widetilde{W}[\rho,\,{\mathcal S}]$ given by $$\widetilde{W}[\rho,\,{\mathcal
S}]=W-({{\mbox{\boldmath${\nabla}$}}}\,\sigma)^2+2\,{{\mbox{\boldmath${\nabla}$}}}{\mathcal S}
\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma+\frac{\partial\sigma}{\partial t} \
,\label{real1}$$ where $W\equiv W[\rho,\,S[\rho,\,{\mathcal S}]]$ and $\sigma\equiv
\sigma[\rho,\,S[\rho,\,{\mathcal S}]]$. Because the phase $\mathcal
S$ appears in equation (\[schroedinger3\]) only through its spatial derivatives, as required by the U(1)-invariance, the arbitrary integration constant arising from the definition of $\sigma[\rho,\,S]$ does not produce any effect and can be set equal to zero. Finally, the last term in equation (\[real1\]) can be eliminated by using the Hamilton-Jacobi equation (\[hj\]) and the continuity equation (\[conti\]) in order to reduce the nonlinearity $\widetilde{W}[\rho,\,{\mathcal S}]$ to a quantity containing only spatial derivatives.\
We can easily verify that the continuity equation for the field $\rho$, obtained from equation (\[schroedinger3\]), is now given by $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_\phi}^{(0)}=0 \ ,$$ where, due to the reality of the nonlinearity $\widetilde{W}[\rho,\,{\mathcal S}]$, the current $${{\mbox{\boldmath${j}$}}}_{_\phi}^{(0)}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}{\mathcal S} \
,\label{corcor}$$ assumes the standard bilinear form.\
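Indeed, combining equations (\[phases\]), (\[genf\]) and (\[cccc\]), the transformed bilinear current coincides with the original nonlinear one, $${{\mbox{\boldmath${j}$}}}_{_\phi}^{(0)}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}{\mathcal S}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}S+2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}\sigma={{\mbox{\boldmath${j}$}}}_{_\psi}^{(0)}+{{\mbox{\boldmath${\mathcal J}$}}}[\rho,\,S]={{\mbox{\boldmath${j}$}}}_{_\psi} \ ,$$ so that the continuity equation (\[continu\]) is preserved in form, with the nonlinear structure of the current now entirely reabsorbed in the phase ${\mathcal S}$.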
Let us briefly discuss the generalization of this transformation to non canonical systems. Firstly, we observe that for a non canonical system the two quantities $W[\rho,\, S]$ and ${\mathcal
W}[\rho,\,S]$ are not derivable from a potential $U[\rho,\,S]$. In particular, the real nonlinearity $W[\rho,\,S]$ can assume any arbitrary expression whereas the form of the imaginary nonlinearity ${\mathcal W}[\rho,\,S]$, constrained by the continuity equation for the field $\rho$, is given by $${\mathcal
W}[\rho,\,S]={1\over2\,\rho}\,{{\mbox{\boldmath${\nabla}$}}}\cdot{\mbox{\boldmath${\mathcal
J}$}}[\rho,\,S] \ ,\label{w}$$ for an arbitrary functional ${\mbox{\boldmath${\mathcal J}$}}[\rho,\,S]$. The particle current ${{\mbox{\boldmath${j}$}}}_{_\psi}$ is still given through the relation $${{\mbox{\boldmath${j}$}}}_{_\psi}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}S+{\mbox{\boldmath${\mathcal J}$}}[\rho,\,S] \
,\label{noncan}$$ but now the functional ${\mbox{\boldmath${\mathcal J}$}}[\rho,\,S]$ is related to ${\mathcal W}[\rho,\,S]$ only through equation (\[w\]). Following the same steps described for the canonical case, it is easy to verify that the transformation (\[transf1\]) with generator defined in the same way $${{\mbox{\boldmath${\nabla}$}}}\sigma[\rho,\,S]={1\over2\,\rho}\,{\mbox{\boldmath${\mathcal
J}$}}[\rho,\,S] \ ,$$ eliminates the imaginary part of the nonlinearity in the motion equation (\[schroedinger1\]) which assumes the expression (\[schroedinger3\])-(\[real1\]). We observe that, differently from the canonical case, equation (\[rot\]) now constrains only the form of the imaginary part ${\mathcal W}[\rho,\,S]$ whilst the real part $W[\rho,\,S]$ is completely arbitrary.\
Finally, let us remark that when the transformation (\[transf1\]) is applied to a canonical equation, it generally breaks the canonical structure of the theory. Consequently, the new NLSE is no longer expressible in the Lagrangian formalism. Conversely, when the transformation is applied to a non canonical system, the new NLSE can acquire a canonical structure. It is not hard to show that this is possible if the transformed nonlinearity $\widetilde W[\rho]$ is a functional depending only on the field $\rho$. In fact, the real quantity $\widetilde W$ is related to a nonlinear potential $\widetilde U$ through equation (\[real\]) and, because the new nonlinearity is purely real, from equation (\[imaginary\]) it follows that the potential $\widetilde U$ and consequently $\widetilde W$ cannot depend on the field $\mathcal S$.
Coupled nonlinear Schrödinger equations
=======================================
Physical systems whose dynamics is described by means of CNLSEs are ubiquitous in nature. They occur, for instance, in the presence of many interacting particles of different species or in the presence of multi-polarized laser beams propagating in optical fibers. In this last case, any polarized component of the electric or magnetic field can be considered as a "particle state" since its evolution is describable through a NLS-like equation [@Hasegawa].\
We observe that for many-particle systems of different kinds, denoting with $N_{_k}$ the number of particles of the $k$th species, many possible combinations of conserved multiplets can be realized. In particular, two relevant physical situations are given when:\
a) All the quantities $N_{_k}$ are separately conserved, which is relevant, for instance, in nonrelativistic multi-species systems where transmutation processes from one species to another are forbidden.\
b) Only the quantity $N_{\rm tot}=\sum_kN_{_k}$ is conserved. Relevant examples are given in the study of light propagation in optical fibers where each species describes a polarization mode and only the total intensity of the beam is conserved.\
In the following we introduce a wide class of CNLSEs in the form $$i\,\frac{\partial\Psi}{\partial t}+\widehat{A}\,\Delta
\Psi+\widehat\Lambda[{\vec \rho},\,{\vec S}]\,\Psi=0 \
,\label{csch1}$$ where $\Psi=(\psi_{_1},\,\ldots,\,\psi_{_p})$, ${\vec\rho}\equiv(\rho_{_1}\,\ldots,\,\rho_{_p})$ and ${\vec
S}\equiv(S_{_1},\,\ldots,\,S_{_p})$ are $p$ dimensional vectors. We denote the operator valued matrix $\widehat{M}[{\vec
v}]$ by a hat (the lower case letter $m[{\vec v}]$ denotes its entries) and use the notation between square brackets to indicate the functional dependence on the components of the vector ${\vec
v}=(v_{_1},\,\ldots,\,v_{_p})$ and on its spatial derivatives of any order. Without loss of generality we assume the $p\times p$ matrix $\widehat A$ in a diagonal form.\
We observe that any system of CNLSEs can always be accommodated in the form given in equation (\[csch1\]) with a diagonal nonlinearity $\widehat\Lambda[{\vec \rho},\,{\vec S}]$. Such a nonlinearity can be separated into a Hermitian matrix $\widehat
W=(\widehat\Lambda+\widehat\Lambda^\dag)/2$ and an anti-Hermitian matrix $i\,\widehat{\mathcal
W}=(\widehat\Lambda-\widehat\Lambda^\dag)/2$. Thus, we can pose $\widehat\Lambda[{\vec \rho},\,{\vec S}]=\widehat W[{\vec
\rho},\,{\vec S}]+i\,\widehat{\mathcal W}[{\vec \rho},\,{\vec S}]$, where the diagonal matrices $\widehat W[{\vec \rho},\,{\vec S}]$ and $\widehat{\mathcal W}[{\vec \rho},\,{\vec S}]$ have purely real entries. This assumption is made only for the sake of convenience and does not imply any restriction on the form of the nonlinearity.\
We will consider a general situation in which the system (\[csch1\]) has $q$ conserved multiplets of order $p_{_k}$, with $k=1,\,\ldots,\,q$ and $\sum_kp_{_k}=p$, where $1\leq q\leq p$. In this way, the two particular cases a) and b) quoted previously are recognized for $q=p$ and $q=1$, respectively.\
Let us organize the fields $\psi_{_i}$, belonging to the vector $\Psi$, in $$\stackrel{\mbox{$\Psi\equiv$}\big(\underbrace{\psi_{_{11}},
\,\ldots,\, \psi_{_{1p_{_1}}}}\mbox{$;\,$}
\underbrace{\psi_{_{21}},\,\ldots,\,
\psi_{_{2p_{_2}}}}\mbox{$;\,\ldots;\,$} \underbrace{\psi_{_{q1}},\,
\ldots,\,\psi_{_{qp_{_q}}}}\big)\mbox{$ \ ,$}}
{\scriptscriptstyle\hspace{6mm} {\rm 1st\;
multiplet\hspace{10mm}2nd\; multiplet\hspace{16mm}}q{\rm th\;
multiplet}}$$ and, from now on, we relabel the fields $\psi_{_i}$ as $\psi_{_{kl}}$, where the first index $k$ refers to the $k$th multiplet of order $p_{_k}$, whereas the second index $l$, with $1\leq l\leq p_{_k}$, refers to the $l$th field inside the multiplet $k$.\
The canonical system (\[csch1\]) can be obtained from the Lagrangian density $${\mathcal L}[\Psi^\dag,\,\Psi]=\frac{i}{2}\,\left(\Psi^\dag\,
\frac{\partial\Psi}{\partial t}-\frac{\partial\Psi^\dag}{\partial t}
\,\Psi\right)-{{\mbox{\boldmath${\nabla}$}}}\Psi^\dag\cdot\widehat
A\,{{\mbox{\boldmath${\nabla}$}}}\Psi-U[\Psi^\dag,\,\Psi] \ ,\label{qlagrangean}$$ where the scalar product in the second term is applied among the gradient operators. The nonlinear potential $U[\Psi^\dag,\,\Psi]$ is a real functional which depends on the vector fields $\Psi$, $\Psi^\dag$ and their spatial derivatives. Accounting for the uniform boundary conditions, the potential $U[\Psi^\dag,\,\Psi]$ vanishes together with all its derivatives, at the spatial infinity.\
By introducing the action of the system $${\mathcal A}=\int\limits_{\mathcal R}{\mathcal
L}[\Psi^\dag,\,\Psi]\,d^nx\,dt \ ,\label{aact}$$ the evolution equation for the vector field $\Psi$ is given by the stationary trajectories of the action as it follows from the variational problem $\delta{\mathcal A}=0$, where the variation is performed with respect to the $2p$-dimensional vector ${{\mbox{\boldmath${\Omega}$}}}\equiv(\Psi^\dag,\,\Psi)$.\
In this way we obtain the equation $$i\,\frac{\partial\Psi}{\partial t}+\widehat{A}\,\Delta
\Psi-\frac{\delta}{\delta\Psi^\dag}\int\limits_{\mathcal R}
U[\Psi^\dag,\,\Psi]\,d^nx\,dt=0 \ ,\label{c1NLSE}$$ and its Hermitian conjugate, which form a system of $2p$ coupled nonlinear Schrödinger equations.\
Taking into account the polar decomposition of the fields $\psi_{_{kl}}$ into the real fields $\rho_{_{kl}}$ and $S_{_{kl}}$ $$\psi_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)=\rho_{_{kl}}^{1/2}({{\mbox{\boldmath${x}$}}},\,t)
\,\exp\Big(i\,S_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)\Big) \ ,\label{hydro}$$ we can express the variation $\delta/\delta\psi_{_{kl}}^\ast$ as $$\frac{\delta}{\delta\psi_{_{kl}}^\ast}=
\psi_{_{kl}}\,\left(\frac{\delta}{\delta\rho_{_{kl}}}
+\frac{i}{2\,\rho_{_{kl}}}\,\frac{\delta} {\delta S_{_{kl}}}\right)
\ .$$ In this way, each component of equation (\[c1NLSE\]) can be written in $$i\,\frac{\partial\psi_{_{kl}}}{\partial
t}+a_{_{kl}}\,\Delta\psi_{_{kl}}-\left[\left(\frac{\delta}{\delta
\rho_{_{kl}}} +\frac{i}{2\,\rho_{_{kl}}}\,\frac{\delta} {\delta
S_{_{kl}}}\right)\int\limits_{\mathcal R} U[{\vec\rho},\,{\vec
S}]\,d^nx\,dt\right]\,\psi_{_{kl}}=0 \ , \label{eqqq}$$ where $U[{\vec\rho},\,{\vec S}]$ is the nonlinear potential in the hydrodynamic representation. Equation (\[eqqq\]) can be posed in the following matrix form $$i\,\frac{\partial\Psi}{\partial t}+\widehat{A}\,\Delta\Psi
+\left(\widehat W[{\vec\rho},\,{\vec S}]+i\,\widehat {\mathcal
W}[{\vec\rho},\,{\vec S}]\right)\,\Psi=0 \ ,\label{c2NLSE}$$ where the Hermitian and anti-Hermitian nonlinearities are given by $$\begin{aligned}
&&\widehat W[{\vec\rho},\,{\vec S}]=-{\rm
diag}\left(\frac{\delta}{\delta\rho_{_{kl}}}\int\limits_{\mathcal
R}U[{\vec\rho},\,{\vec S}]\,d^nx\,dt\right) \ ,\label{hermit}\\
&&\widehat{\mathcal W}[{\vec\rho},\,{\vec S}]=-{\rm
diag}\left(\frac{1}{2\,\rho_{_{kl}}} \,\frac{\delta}{\delta
S_{_{kl}}}\int\limits_{\mathcal R} U[{\vec\rho},\,{\vec
S}]\,d^nx\,dt\right)
\ .\label{antihermitt}\end{aligned}$$ Finally, by using the polar decomposition (\[hydro\]), equation (\[c2NLSE\]) can be separated into a system of $2p$ coupled real nonlinear equations $$\begin{aligned}
&&\frac{\partial S_{_{kl}}}{\partial\,t}+a_{_{kl}}
\,({{\mbox{\boldmath${\nabla}$}}}S_{_{kl}})^2- a_{_{kl}}\,\frac{\Delta
\rho_{_{kl}}^{1/2}}{\rho_{_{kl}}^{1/2}}-w_{_{kl}}
[{\vec\rho},\,{\vec S}]=0 \
,\label{ms}\\&&\frac{\partial\rho_{_{kl}}}{\partial t}
+2\,a_{_{kl}}\,{{\mbox{\boldmath${\nabla}$}}}\cdot( \rho_{_{kl}}\,{{\mbox{\boldmath${\nabla}$}}}
S_{_{kl}}) +2\,\rho_{_{kl}}\,{\scriptstyle{\mathcal
W}}_{_{kl}}[{\vec\rho},\,{\vec S}]=0 \ .\label{mr}\end{aligned}$$ The first set of equations (\[ms\]) is a system of $p$ coupled [*Hamilton-Jacobi*]{}-like equations for the fields $S_{_{kl}}$, whilst the second set of equations (\[mr\]) describes the time evolution of the fields $\rho_{_{kl}}$.
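For instance, in the simplest scalar case $p=q=1$ (dropping the double index), with $a=1$ and the local potential $U[\rho]=-\rho^2/2$, one has $w=\rho$ and ${\scriptstyle{\mathcal W}}=0$, so that equations (\[ms\]) and (\[mr\]) reduce to the familiar Madelung pair $$\frac{\partial S}{\partial t}+({{\mbox{\boldmath${\nabla}$}}}S)^2-\frac{\Delta\rho^{1/2}}{\rho^{1/2}}-\rho=0 \ ,\qquad\quad \frac{\partial\rho}{\partial t}+2\,{{\mbox{\boldmath${\nabla}$}}}\cdot(\rho\,{{\mbox{\boldmath${\nabla}$}}}S)=0 \ ,$$ associated with the cubic NLSE $i\,\partial\psi/\partial t+\Delta\psi+|\psi|^2\,\psi=0$.\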
U(1) symmetry
-------------
In the following we consider those systems written in the form (\[c2NLSE\]) which admit a set of $q$ continuity equations $$\frac{\partial\rho_{_{k}}}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{\Psi,k}}=0 \ ,\label{qcon}$$ which assure the conservation of the quantities $$N_{_k}=\int\limits_M\rho_{_k}\,d^nx \ .\label{ntot}$$ This imposes some restrictions on the functional dependence of the potential $U[{\vec\rho},\,{\vec S}]$ with respect to the fields $\vec\rho$ and $\vec S$. To obtain such restrictions we recall the following relation $$\frac{\delta}{\delta S_{_{kl}}}=\frac{\partial} {\partial
S_{_{kl}}}-
{{\mbox{\boldmath${\nabla}$}}}\cdot\frac{\delta}{\delta({{\mbox{\boldmath${\nabla}$}}}S_{_{kl}})} \
,\label{derfun1}$$ so that, by taking into account the expression of the matrix $\widehat
{\mathcal W}[\vec \rho,\,\vec S]$, we can rewrite equation (\[mr\]) as $$\begin{aligned}
\nonumber \frac{\partial\rho_{_{kl}}}{\partial t}
&+&{{\mbox{\boldmath${\nabla}$}}}\cdot\left(2\,a_{_{kl}}
\,\rho_{_{kl}}\,{{\mbox{\boldmath${\nabla}$}}}S_{_{kl}}+
\frac{\delta}{\delta({{\mbox{\boldmath${\nabla}$}}}S_{_{kl}})} \int\limits_{\mathcal
R} U[{\vec\rho},\,{\vec S}]\,d^nx\,dt\right)\\&-&\frac{\partial}
{\partial S_{_{kl}}}\int\limits_{\mathcal R} U[{\vec\rho},\,{\vec
S}]\,d^nx\,dt=0 \ .\end{aligned}$$ By summing this equation over the index $l$, with $1\leq l\leq
p_{_k}$, we obtain $$\frac{\partial\rho_{_k}}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot\Big({{\mbox{\boldmath${j}$}}}^{(0)}_{_{\Psi,k}}+{\mbox{\boldmath${\mathcal J}$}}_{_k}[{\vec\rho},\,{\vec
S}]\Big)+ I_{_k}[{\vec\rho},\,{\vec S}]=0 \ ,\label{ccsor1}$$ where $$\rho_{_k}=\sum_{l=1}^{p_{_k}}\rho_{_{kl}} \ ,$$ and $${{\mbox{\boldmath${j}$}}}^{(0)}_{_{\Psi,k}}=\sum_{l=1}^{p_k}{{\mbox{\boldmath${j}$}}}_{_{\Psi,kl}}^{(0)} \ ,$$ with $${{\mbox{\boldmath${j}$}}}^{(0)}_{_{\Psi,kl}}=2\,a_{_{kl}}
\,\rho_{_{kl}}\,{{\mbox{\boldmath${\nabla}$}}}S_{_{kl}} \ .\label{j0}$$ Moreover, we posed $${\mbox{\boldmath${\mathcal J}$}}_{_k}[{\vec\rho},\,{\vec
S}]=\sum_{l=1}^{p_k}{\mbox{\boldmath${\mathcal J}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]
\ ,\label{jjc}$$ with $$\left({\mbox{\boldmath${\mathcal J}$}}_{_{kl}}\right)_i[{\vec\rho},\,{\vec
S}]=\frac{\delta}{\delta(\partial_iS_{_{kl}})} \int\limits_{\mathcal
R} U[{\vec\rho},\,{\vec S}]\,d^nx\,dt \ ,\label{antientry}$$ whilst $$I_{_k}[{\vec\rho},\,{\vec S}]=\sum_{l=1}^{p_k}
I_{_{kl}}[{\vec\rho},\,{\vec S}] \ ,\label{iic}$$ with $$I_{_{kl}}=-\frac{\partial} {\partial S_{_{kl}}}
\int\limits_{\mathcal R} U[{\vec\rho},\,{\vec S}]\,d^nx\,dt \
.\label{ic}$$ By comparing equation (\[qcon\]) with equation (\[ccsor1\]) we obtain, as a condition, that the functionals $I_{_k}[{\vec\rho},\,{\vec S}]$ must be expressed as the divergence of a set of functionals ${{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}]$ $$I_{_k}[{\vec\rho},\,{\vec S}]={{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${G}$}}}_{_{k}}[{\vec\rho},\,{\vec S}] \ .\label{cond}$$ We remark that the expression of the functionals ${{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}]$ is determined uniquely from the nonlinear potential $U[{\vec\rho},\,{\vec S}]$ through equations (\[iic\]), (\[ic\]) and (\[cond\]), which select, in this way, the class of Lagrangians (\[qlagrangean\]) of the family of CNLSEs compatible with the set of continuity equations (\[qcon\]). If the conditions (\[cond\]) are fulfilled, equations (\[ccsor1\]) become a system of $q$ continuity equations, where the total current of the $k$th multiplet ${{\mbox{\boldmath${j}$}}}_{_{\Psi,k}}$ is given by $${{\mbox{\boldmath${j}$}}}_{_{\Psi,k}}={{\mbox{\boldmath${j}$}}}^{(0)}_{_{\Psi,k}}+{\mbox{\boldmath${\mathcal
J}$}}_{_k}[{\vec\rho},\,{\vec S}]+{{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}]
\ .\label{ccurrent}$$ We recall that, as it follows from the Noether theorem, equations (\[qcon\]) are consequence of the invariance of the Lagrangian with respect to a global unitary transformation $$\Psi\rightarrow\Phi=\widehat U\,\Psi \ ,\label{tr1}$$ where $$\widehat U={\rm diag}\Big(\exp(i\,{\vec\epsilon})\Big) \ ,$$ and $$\stackrel{\mbox{${\vec\epsilon}\equiv$}\big(\underbrace{\epsilon_{_1},\,\ldots,\,
\epsilon_{_1}}\mbox{$;\,$}\underbrace{\epsilon_{_2},\,\ldots,\,
\epsilon_{_2}}\mbox{$;\,\ldots;\,$}\underbrace{\epsilon_{_q},\,
\ldots,\,\epsilon_{_q}}\big)\mbox{$ \ ,$}} {\scriptstyle\hspace{4mm}
p_{_1}{\rm \;times}\hspace{7mm}p_{_2}{\rm
\;times}\hspace{14mm}p_{_q}{\rm\; times}}$$ are the constant parameters of the transformation.\
In fact, the Lagrangian (\[qlagrangean\]) is invariant under the transformation (\[tr1\]) if the nonlinear potential $U[{\vec\rho},\,{\vec S}]$ changes according to $$\delta\,U[{\vec\rho},\,{\vec
S}]=-\sum_{k=1}^q\epsilon_{_k}\,{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${G}$}}}_{_{k}}[{\vec\rho},\,{\vec S}] \ ,\label{ttrr1}$$ where ${{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}]$ are arbitrary functionals. We recall that in this way the equation of motion (\[c1NLSE\]) does not change, because the Lagrangian density is always defined modulo a total derivative of an arbitrary functional. Taking into account the independence of the parameters $\epsilon_{_k}$, from equation (\[ttrr1\]) we obtain $$\sum_{l=1}^{p_{_k}}\frac{\partial}{\partial S_{_{kl}}} \int
U[{\vec\rho},\,{\vec S}]\,d^nx\,dt =-{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${G}$}}}_{_{k}}[{\vec\rho},\,{\vec S}] \ ,\label{condd1}$$ which, according to the definitions (\[iic\]) and (\[ic\]) coincides with the condition (\[cond\]). In addition, because the parameters $\epsilon_{_k}$ are constants, the potential $U[{\vec\rho},\,{\vec S}]$ can depend on the phases $S_{_{kl}}$ also through their spatial derivatives of any order.
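As an elementary illustration, for $p=2$ fields forming a single multiplet ($q=1$), the phase-dependent potential $U[{\vec\rho},\,{\vec S}]=\lambda\,\rho_{_{11}}\,\rho_{_{12}}\,\cos(S_{_{11}}-S_{_{12}})$ gives $I_{_{11}}=-I_{_{12}}=\lambda\,\rho_{_{11}}\,\rho_{_{12}}\,\sin(S_{_{11}}-S_{_{12}})$, so that the two densities are not separately conserved; nevertheless $I_{_1}=I_{_{11}}+I_{_{12}}=0$, condition (\[cond\]) is fulfilled with ${{\mbox{\boldmath${G}$}}}_{_1}=0$, and only the total density $\rho_{_1}=\rho_{_{11}}+\rho_{_{12}}$ obeys a continuity equation.\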
Gauge transformation
--------------------
We are now ready to generalize the nonlinear gauge transformation described in section 3.2 to the family of CNLSEs under inspection. Let us introduce the following transformation $$\Psi({{\mbox{\boldmath${x}$}}},\,t)\rightarrow\Phi({{\mbox{\boldmath${x}$}}},\,t)=\widehat{\mathcal
U}[{\vec\rho},\,{\vec S}]\,\Psi({{\mbox{\boldmath${x}$}}},\,t) \ ,\label{ctr}$$ where $\widehat{\mathcal U}[\vec\rho,\,\vec S]$ is a diagonal and unitary matrix: $ \widehat{\mathcal U}^\dag=\widehat{\mathcal
U}^{-1}$. This implies $$\rho_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)=\left|\,\psi_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)\right|^2= \left|\,\phi_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)\right|^2 \ ,$$ whilst the phases ${\mathcal S}_{_{kl}}$ are related to the fields $\phi_{_{kl}}$ through the relation $${\mathcal
S}_{_{kl}}=\frac{i}{2}\,\ln\left(\frac{\phi_{_{kl}}^\ast({{\mbox{\boldmath${x}$}}},\,t)}{\phi_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)}\right) \ ,$$ since we define $$\phi_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)=\rho_{_{kl}}^{1/2}({{\mbox{\boldmath${x}$}}},\,t)\,\exp\Big(i\,{\mathcal S}_{_{kl}}({{\mbox{\boldmath${x}$}}},\,t)\Big) \ .$$ Without loss of generality, the matrix $\widehat{\mathcal
U}[\vec\rho,\,\vec S]$ can be written as $$\widehat{\mathcal U}[{\vec\rho},\,{\vec S}]={\rm diag}\,\Big(
\exp\Big(i\,{\vec \sigma}[{\vec\rho},\,{\vec S}]\Big)\Big) \
,\label{u11}$$ where ${\vec\sigma} \equiv(\ldots,\,\sigma_{_{kl}},\,\ldots)$ is a $p$-dimensional vector with real components. The generators of the transformation $\sigma_{_{kl}}[{\vec\rho},\,{\vec S}]$ relate the phase $\vec{\mathcal S}$ of the new field $\Phi$ with the phase ${\vec S}$ of the old field $\Psi$ according to $$\vec{\mathcal S}={\vec S}+{\vec \sigma}[{\vec\rho},\,{\vec S}] \
.\label{phase}$$ We introduce the generators $\sigma_{_{kl}}[{\vec\rho},\,{\vec S}]$ through the relations $${{\mbox{\boldmath${\nabla}$}}}\sigma_{_{kl}}[{\vec\rho},\,{\vec S}]=\frac{1}
{2\,a_{_{kl}}\,\rho_{_{kl}}}\,\Big({\mbox{\boldmath${\mathcal
J}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]+{\mbox{\boldmath${\mathcal
R}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]\Big) \ ,\label{cgenf}$$ where ${\mbox{\boldmath${\mathcal R}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]$ are arbitrary real functionals related to ${{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}]$, introduced in equation (\[cond\]), through $$\sum_{l=1}^{p_{_k}}{\mbox{\boldmath${\mathcal R}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]=
{{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}] \ .\label{conts}$$ Consistency of equations (\[cgenf\]) implies the following constraints $${{\mbox{\boldmath${\nabla}$}}}\times\left[{1\over\rho_{_{kl}}}\Big({\mbox{\boldmath${\mathcal
J}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]+{\mbox{\boldmath${\mathcal
R}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]\Big)\right]=0 \ .\label{crot}$$ These equations select the potential $U[{\vec\rho},\,{\vec
S}]$ and, through equations (\[hermit\])-(\[antihermitt\]), the nonlinear system where the transformation can be performed.\
We remark that, according to equations (\[cgenf\]), equation (\[ctr\]) defines a wide class of transformations, one for every choice of the set of functionals ${\mbox{\boldmath${\mathcal
R}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]$. Each transformation changes the initial system (\[c2NLSE\]), with the nonlinearity $\widehat
W[{\vec\rho},\,{\vec S}]+i\,\widehat{\mathcal W}[{\vec\rho},\,{\vec
S}]$, into another one with a purely Hermitian nonlinearity $\widehat
W^\prime[{\vec\rho},\,{\vec S}]$.\
Preliminarily, we observe that, within the notation (\[antientry\]) and (\[ic\]), the matrix $\widehat{\mathcal W}[\vec\rho,\,\vec S]$ assumes the expression $$\widehat {\mathcal W}[{\vec\rho},\,{\vec S}]=-{\rm
diag}\left({1\over2\,\rho_{_{kl}}} \Big(I_{_{kl}}[{\vec\rho},\,{\vec
S}]+{{\mbox{\boldmath${\nabla}$}}}\cdot{\mbox{\boldmath${\mathcal J}$}}_{_{kl}}[{\vec\rho},\,{\vec
S}]\Big)\right) \ .$$ In this way, by performing the gauge transformation, equation (\[c2NLSE\]) becomes $$i\,\frac{\partial\Phi}{\partial t}+\widehat{A}\,\Delta
\Phi+\left(\widehat W_{\rm t}[{\vec\rho},\,{\vec {\mathcal
S}}]+i\,\widehat{\mathcal W}_{\rm t}[{\vec\rho},\,{\vec {\mathcal
S}}]\right)\,\Phi=0 \ ,\label{CNLSE2}$$ where $$\widehat W_{\rm t}[{\vec\rho},\,{\vec {\mathcal S}}]={\rm
diag}\left(w_{_{kl}}- a_{_{kl}}\,\left({{\mbox{\boldmath${\nabla}$}}}\sigma_{_{kl}}
\right)^2+2\,a_{_{kl}}\,{{\mbox{\boldmath${\nabla}$}}}{\mathcal
S}_{_{kl}}\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma_{_{kl}}
+\frac{\partial\sigma_{_{kl}}}{\partial t}\right) \ ,$$ and $$\widehat{\mathcal W}_{\rm t}[{\vec\rho},\,{\vec {\mathcal S}}]={\rm
diag}\left(\frac{1}{2\,\rho_{_{kl}}}{\mathcal
F}_{_{kl}}[{\vec\rho},\,{\vec {\mathcal S}}]\right) \ .$$ The functionals ${\mathcal F}_{_{kl}}[{\vec\rho},\,{\vec {\mathcal
S}}]$ are given by $${\mathcal F}_{_{kl}}[{\vec\rho},\,{\vec {\mathcal
S}}]=I_{_{kl}}[{\vec\rho},\,{\vec {\mathcal
S}}]-{{\mbox{\boldmath${\nabla}$}}}\cdot{\mbox{\boldmath${\mathcal R}$}}_{_{kl}}[{\vec\rho},\,{\vec
{\mathcal S}}] \ ,$$ and fulfill the relations $$\sum_{l=1}^{p_{_k}}{\mathcal F}_{_{kl}}[{\vec\rho},\,{\vec {\mathcal
S}}]=0 \ ,\label{rel}$$ as can be verified by using equations (\[iic\]), (\[cond\]) and (\[conts\]).\
We remark that, as a consequence of this last relation, equation (\[CNLSE2\]) admits the following set of continuity equations $$\frac{\partial\rho_{_k}}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{\Phi,k}}^{(0)}=0 \ ,$$ where $${{\mbox{\boldmath${j}$}}}_{_{\Phi,k}}^{(0)}=\sum_{l=1}^{p_k}{{\mbox{\boldmath${j}$}}}_{_{\Phi,kl}}^{(0)} \ ,\label{jl}$$ and $${{\mbox{\boldmath${j}$}}}_{_{\Phi,kl}}^{(0)}=2\,a_{_{kl}}
\,\rho_{_{kl}}\,{{\mbox{\boldmath${\nabla}$}}}{\mathcal S}_{_{kl}} \ ,\label{cor1}$$ i.e., the nonlinear currents ${{\mbox{\boldmath${j}$}}}_{_{\Psi,k}}$ are transformed as ${{\mbox{\boldmath${j}$}}}_{_{\Psi,k}}\rightarrow
{{\mbox{\boldmath${j}$}}}_{_{\Phi,k}}^{(0)}$ which have the standard bilinear form.\
On the other hand, the system (\[CNLSE2\]), with the nonlinearity $\widehat W_{\rm t}[{\vec\rho},\,{\vec{\mathcal
S}}]+i\,\widehat{\mathcal W}_{\rm t}[{\vec\rho},\,{\vec{\mathcal
S}}]$, can be rewritten as $$i\,\frac{\partial\Phi}{\partial t}+\widehat{A}\, \Delta\Phi+\widehat
W^\prime[{\vec\rho},\,{\vec{\mathcal S}}]\,\Phi=0 \ ,\label{CNLSE3}$$ with a purely Hermitian nonlinearity $\widehat W^\prime=(\widehat
W^\prime)^\dag$ given in the following block-form $$\widehat W^\prime[{\vec\rho},\,{\vec{\mathcal S}}]={\rm
diag}\left(\widehat W_{_k}^\prime[{\vec\rho},\,{\vec{\mathcal
S}}]\right) \ .\label{rhermit}$$ The $p_{_k}\times p_{_k}$ matrices $\widehat
W^\prime_{_k}[{\vec\rho},\,{\vec{\mathcal S}}]=\widehat
D_{_k}[{\vec\rho},\,{\vec{\mathcal S}}]+\widehat
C_{_k}[{\vec\rho},\,{\vec{\mathcal S}}]$ have a diagonal part $$\begin{aligned}
\nonumber
\widehat D_{_k}[{\vec\rho},\,{\vec {\mathcal S}}]&=&{\rm
diag}\,\left(w_{_{kl}}-
a_{_{kl}}\,\left({{\mbox{\boldmath${\nabla}$}}}\sigma_{_{kl}}\right)^2+2\,a_{_{kl}}\,
{{\mbox{\boldmath${\nabla}$}}}{\mathcal S}_{_{kl}}\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma_{_{kl}}
+\frac{\partial\sigma_{_{kl}}}{\partial t}\right) \ ,\\\label{d}\end{aligned}$$ with purely real entries, and an off-diagonal part $$\left(\widehat C_{_k}\right)_{_{lm}}\!\!\![{\vec\rho},\,{\vec
{\mathcal S}}]=i\,\frac{{\mathcal F}_{_{kl}}-{\mathcal
F}_{_{km}}}{2\, p_{_k}\,\sqrt{\rho_{_{kl}}\, \rho_{_{km}}}}\,e^{i\,
\left(S_{_{kl}}-S_{_{km}}\right)} \ ,\label{c}$$ which turn out to be Hermitian matrices $\widehat C_{_k}=\widehat
C_{_k}^\dag$.\
We observe that, because the Lagrangian (\[qlagrangean\]) is U(1)-invariant, the arbitrary integration constants arising from the definition (\[cgenf\]) do not produce any effect and can be set equal to zero. Moreover, the last term in equation (\[d\]) can be worked out by means of equations (\[ms\])-(\[mr\]), reducing the nonlinearity $\widehat
W^\prime[\vec\rho,\,\vec S]$ to a quantity containing only spatial derivatives.\
The extension of the method to the case of non-canonical coupled systems is almost immediate and can be performed by following the same steps described at the end of section 3.2.\
For a non-canonical system the matrix $\widehat W[{\vec\rho},\,{\vec
S}]$ can assume any arbitrary expression whereas the form of the matrix $\widehat{\mathcal W}[{\vec\rho},\,{\vec S}]$ is constrained by the existence of the set of the continuity equations for the fields $\rho_{_k}$. Without loss of generality we can pose $${\scriptstyle{\mathcal W}}_{_{kl}} [{\vec\rho},\,{\vec
S}]=-{1\over2\,\rho_{_{kl}}}\left(I_{_{kl}}[{\vec\rho},\,{\vec
S}]+{{\mbox{\boldmath${\nabla}$}}}\cdot{\mbox{\boldmath${\mathcal J}$}}_{_{kl}}[{\vec\rho},\,{\vec
S}]\right) \ ,\label{wne}$$ where now the functionals ${\mbox{\boldmath${\mathcal
J}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]$ and $I_{_{kl}}[{\vec\rho},\,{\vec
S}]$ are no longer related to the nonlinear potential $U[{\vec\rho},\,{\vec S}]$ through equations (\[antientry\]) and (\[ic\]). The continuity equations (\[qcon\]) require that the functionals $I_{_{kl}}[{\vec\rho},\,{\vec S}]$ still fulfill the constraints (\[cond\]) for an arbitrary set of functionals ${{\mbox{\boldmath${G}$}}}_{_k}[{\vec\rho},\,{\vec S}]$. The total currents ${{\mbox{\boldmath${j}$}}}_{_{\Psi,k}}$ are given in equation (\[ccurrent\]), but now the functionals $I_{_{kl}}[{\vec\rho},\,{\vec S}]$ and ${\mbox{\boldmath${\mathcal
J}$}}_{_{kl}}[{\vec\rho},\,{\vec S}]$ are related to the matrix $\widehat{\mathcal W}[{\vec\rho},\,{\vec S}]$ only through equation (\[wne\]). Finally, we introduce the transformation (\[ctr\]) with generators (\[cgenf\]), which eliminates the anti-Hermitian matrix $\widehat{\mathcal W}[{\vec\rho},\,{\vec S}]$ of the nonlinearity and transforms the system of CNLSEs into the form given in equation (\[CNLSE3\]), with only a Hermitian matrix $\widehat
W^\prime[{\vec\rho},\,{\vec S}]$ still given through equations (\[rhermit\])-(\[c\]).\
Let us now briefly study separately two particular relevant cases:\
[*a) CNLSEs conserving the number of each species of particles*]{}\
We assume $p=q$ with $p_{_k}=1$ and replace the double index $kl\to
k$. From equation (\[c1NLSE\]) we obtain the following evolution equation for the quantities $\rho_{_k}$ $$\frac{\partial\rho_{_k}}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{\psi,k}}+I_{_k}=0 \ ,$$ where the currents ${{\mbox{\boldmath${j}$}}}_{_{\psi,k}}$ are given by $$\Big({{\mbox{\boldmath${j}$}}}_{_{\psi,k}}\Big)_i=2\,a_{_k}\,\rho_{_k}\,\partial_iS_{_k}
+\frac{\delta}{\delta(\partial_i\,S_{_k})}\int\limits_{\mathcal R}
U[\vec\rho,\,\vec S]\,d^nx\,dt \ ,$$ while the quantities $I_{_k}[{\vec\rho},\,{\vec S}]$ assume the expression $$I_{_k}=-\frac{\partial}{\partial S_{_k}}\int\limits_{\mathcal
R}U[{\vec\rho},\,\vec S]\,d^nx\,dt \ .$$ Conservation of the single densities $\rho_{_k}$ implies that all the quantities $I_{_k}[{\vec\rho},\,{\vec S}]$ must vanish. This requires that the potential $U[{\vec\rho},\,\vec S]$ depends on the phases $S_{_k}$ only through their spatial derivatives, as also follows from arguments based on the invariance of the Lagrangian under a global unitary transformation.\
Remark that in this case the matrix $\widehat{\mathcal
W}[\vec\rho,\,\vec S]$ assumes the simple expression $$\widehat{\mathcal W}[\vec\rho,\,\vec S]={\rm
diag}\left(-{1\over2\,\rho_{_k}}{{\mbox{\boldmath${\nabla}$}}}\cdot{\mbox{\boldmath${\mathcal
J}$}}_{_k}[\vec\rho,\,\vec S]\right) \ ,\label{antihermit1}$$ where the functionals ${\mbox{\boldmath${\mathcal J}$}}_{_k}[\vec\rho,\,\vec S]$ are defined in equations (\[jjc\]) and (\[antientry\]), after setting $p_{_k}=1$. All the quantities ${{\mbox{\boldmath${G}$}}}_{_k}[\vec\rho,\,\vec S]$ introduced in equation (\[cond\]) reduce to constant vectors which, without loss of generality, can be set equal to zero. This implies that all the functionals ${\mathcal
F}_{_k}[{\vec\rho},\,{\vec {\mathcal S}}]$ vanish and, after the transformation, the matrix $\widehat W^\prime[{\vec\rho},\,{\vec
{\mathcal S}}]$ is reduced to a diagonal form given by $$\widehat W^\prime[{\vec\rho},\,{\vec {\mathcal S}}]={\rm
diag}\left[w_{_k}- a_{_k}\,\left({{\mbox{\boldmath${\nabla}$}}}\sigma_{_k}\right)^2
+2\,a_{_k}\,{{\mbox{\boldmath${\nabla}$}}}{\mathcal S}_{_k}\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma_{_k}
+\frac{\partial\sigma_{_k}}{\partial t}\right] \ ,\label{rhermit1}$$ which now contains only a purely real nonlinearity, since the off-diagonal part $\widehat C[{\vec\rho},\,{\vec {\mathcal S}}]$ vanishes. Remark that, in this case, the gauge transformation is uniquely defined, because the generators are given by ${{\mbox{\boldmath${\nabla}$}}}\sigma_{_k}[\vec\rho,\,\vec S]={{\mbox{\boldmath${\mathcal J}$}}}_{_k}[\vec\rho,\,\vec S]/2\,a_{_k}\,\rho_{_k}$.\
[*b) CNLSEs conserving the total number of particles*]{}\
We pose $q=1$ and replace the double index $kl\to l$. From equation (\[c1NLSE\]) we obtain the following evolution equation for the density $\rho_{_{\rm tot}}$ $$\frac{\partial\rho_{_{\rm tot}}}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}+I_{_{\rm tot}}[\vec\rho,\,\vec S]=0 \ ,\label{cc1}$$ where $$\rho_{_{\rm tot}}=\sum_{l=1}^p\rho_{_l} \ ,$$ is the total density of particles and the current ${\mbox{\boldmath${j}$}}$ is given by $$\Big({{\mbox{\boldmath${j}$}}}\Big)_i=\sum_{l=1}^p\left(2\,a_{_l}\,\rho_{_l}\,\partial_iS_{_l}
+\frac{\delta}{\delta(\partial_i\,S_{_l})}\int\limits_{\mathcal R}
U[\vec\rho,\,\vec S]\,d^nx\,dt\right) \ .$$ Conservation of $\rho_{_{\rm tot}}$ requires that $I_{_{\rm
tot}}[\vec\rho,\,\vec S]$, defined by $$I_{_{\rm tot}}[\vec\rho,\,\vec
S]=-\sum_{l=1}^p\frac{\partial}{\partial
S_{_l}}\int\limits_{\mathcal R}U[{\vec\rho},\,\vec S]\,d^nx\,dt \ ,$$ can be expressed as $$I_{_{\rm tot}}[{\vec\rho},\,{\vec S}]={{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${G}$}}}[{\vec\rho},\,{\vec S}] \ ,$$ so that equation (\[cc1\]) becomes a continuity equation $$\frac{\partial\rho_{_{\rm tot}}}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{\rm tot}}=0 \ ,$$ where the total current ${{\mbox{\boldmath${j}$}}}_{_{\rm tot}}$ is given by $${{\mbox{\boldmath${j}$}}}_{_{\rm tot}}={{\mbox{\boldmath${j}$}}}+{{\mbox{\boldmath${G}$}}}[{\vec\rho},\,{\vec S}] \ .$$ By performing the transformation (\[ctr\]) with generator (\[cgenf\]), where $$\sum_{l=1}^p{\mbox{\boldmath${\mathcal R}$}}_{_l}[{\vec\rho},\,{\vec S}]={{\mbox{\boldmath${G}$}}}[{\vec\rho},\,{\vec S}] \ ,$$ we obtain the new system of CNLSEs (\[CNLSE3\]) with a Hermitian nonlinearity $\widehat{W}^\prime[\vec\rho,\,\vec S]=\widehat
D[\vec\rho,\,\vec S]+\widehat C[\vec\rho,\,\vec S]$. The diagonal part $$\widehat D[\vec\rho,\,\vec S]={\rm diag}\,\left[w_{_l}-
a_{_l}\,\left({{\mbox{\boldmath${\nabla}$}}}\sigma_{_l}\right)^2+2\,a_{_l}\,
{{\mbox{\boldmath${\nabla}$}}}{\mathcal S}_{_l}\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma_{_l}
+\frac{\partial\sigma_{_l}}{\partial t}\right] \ ,$$ contains purely real entries whilst the off-diagonal part $$\widehat C_{_{lm}}=i\,\frac{{\mathcal F}_{_l}-{\mathcal
F}_{_m}}{2\,\sqrt{\rho_{_l}\, \rho_{_m}}}\,e^{i\,
\left(S_{_l}-S_{_m}\right)} \ ,$$ turns out to be Hermitian.
Nonlinear Schrödinger equation coupled with gauge fields
========================================================
In this section we generalize the nonlinear transformation to NLSEs coupled with Abelian gauge fields, whose dynamics is described by means of the standard Maxwell term with the inclusion of an additional Chern-Simons term.
The canonical model
-------------------
We consider a class of NLSEs describing, in the mean field approximation, a system of interacting charged particles. The model is furnished by the following Lagrangian density $${\mathcal L}[\psi^\ast,\,\psi,\,A_\mu]={\mathcal L}_{\rm
m}[\psi^\ast,\,\psi,\,A_\mu]+{\mathcal L}_{\rm g}[A_\mu] \
,\label{lagrangianag}$$ where the Lagrangian of the matter field ${\mathcal L}_{\rm m}$ is given by $$\begin{aligned}
{\mathcal L}_{\rm m}[\psi^\ast,\,\psi,\,A_\mu]
=\frac{i}{2}\,\Big[\psi^\ast\,D_t\psi- \psi\,(D_t\psi)^\ast\Big]-
|{{\mbox{\boldmath${D}$}}}\psi|^2-U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ ,\label{lagrangecng}\end{aligned}$$ with $D_\mu=(\partial_\mu+i\,A_\mu)$ the covariant derivative and $U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ the nonlinear potential in the hydrodynamic representation, which depends on the Abelian gauge field $A_\mu\equiv(A_0,\,-{{\mbox{\boldmath${A}$}}})$ only through its spatial components. The Lagrangian of the gauge field ${\mathcal L}_{\rm g}$ assumes the expression $${\mathcal L}_{\rm
g}[A_{\mu}]=-\frac{\gamma}{4}\,F_{\mu\nu}\,F^{\mu\nu}+\frac{g}{2}
\,\varepsilon^{\tau\mu\nu}\,A_\tau\,F_{\mu\nu} \ , \label{MCS}$$ where $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the electromagnetic tensor, with $\partial_\mu\equiv(\partial/\partial
t,\,{{\mbox{\boldmath${\nabla}$}}})$.\
In the following, Greek indices take the values $0,\,\ldots,\,n$, whilst Latin indices take the values $1,\,\ldots,\,n$ and denote the spatial coordinates. Indices are lowered and raised by means of the metric tensor $\eta_{\mu\nu}\equiv{\rm
diag}(1,\,-1,\,\ldots,\,-1)$. The Levi-Civita tensor $\varepsilon^{\tau\mu\nu}$, fully antisymmetric, is defined by $\varepsilon^{012}=1$. The parameters $\gamma$ and $g$ weight the contributions of the Maxwell interaction and of the Chern-Simons interaction. We recall that the Chern-Simons term contributes only when the dynamics of the system is constrained to a manifold with an even number of space dimensions (as in the plane), whilst in an odd number of space dimensions it reduces to a total derivative which does not contribute to the equations of motion.\
Starting from the action of the system $${\mathcal A}=\int\limits_{\mathcal R}{\mathcal
L}[\psi^\ast,\,\psi,\,A_\mu]\,d^nx\,dt \ ,$$ the evolution equations for the fields $\psi,\,\psi^\ast$ and $A_{_\mu}$ are obtained by posing $\delta{\mathcal A}=0$ where the variation is performed with respect to the 3-vector ${{\mbox{\boldmath${\Omega}$}}}\equiv(\psi,\,\psi^\ast,\,A_\mu)$.\
The motion equation for the gauge field assumes the expression $$\gamma\,\partial_\mu F^{\mu\nu}+g\,\varepsilon^{\nu\tau\mu}
\,F_{\tau\mu}=j^\nu_{_{A\psi}} \ , \label{gaugefield}$$ where the covariant current $j^\nu_{_{A\psi}}\equiv(\rho,\,{{\mbox{\boldmath${j}$}}}_{_{A\psi}})$ has spatial components $$\begin{aligned}
\Big({{\mbox{\boldmath${j}$}}}_{_{A\psi}}\Big)_i=2\,\rho\,\Big(\partial_iS+A_i\Big)+\frac{\delta}{\delta
A_i}\int\limits_{\mathcal R} U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt \
.\label{ccg}\end{aligned}$$ By observing that $F^{\mu\nu}=-F^{\nu\mu}$, from equation (\[gaugefield\]) we immediately obtain the continuity equation for the field $\rho$ $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{A\psi}}=0 \ ,\label{continuity}$$ which assures the conservation of the total charge of the system.\
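We recall, for instance, that in the pure Chern-Simons regime ($\gamma=0$) and for planar systems ($n=2$), the $\nu=0$ component of equation (\[gaugefield\]) reduces to the constraint $2\,g\,F_{_{12}}=\rho$, which ties the local magnetic field to the particle density and underlies the anyonic interpretation of planar Chern-Simons matter systems.\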
On the other hand, the evolution equation for the matter field, as it follows from the Lagrangian density (\[lagrangianag\]), is given by $$i\,D_t\psi+{{\mbox{\boldmath${D}$}}}^2\psi+\Big(W[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]+i\,{\mathcal
W}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\Big)\,\psi=0 \ ,\label{schroedingerg2}$$ where the real and imaginary parts of the nonlinearity are given, respectively, by $$W[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]=-\frac{\delta}{\delta\rho}\,\int\limits_{\mathcal R}
U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt \ ,$$ and $${\mathcal W}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]=-\frac{1}{2\,\rho}\,\frac{\delta}{\delta
S}\,\int\limits_{\mathcal R} U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt \
.\label{wa}$$ For consistency, equation (\[schroedingerg2\]) must admit the same continuity equation (\[continuity\]).\
Following standard arguments, by multiplying equation (\[schroedingerg2\]) by $\psi^\ast$ and taking its imaginary part, we obtain $$\frac{\partial\rho}{\partial
t}+{{\mbox{\boldmath${\nabla}$}}}\cdot\Big[2\,\rho\left({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}}\right)\Big] -\frac{\delta}{\delta S}\int\limits_{\mathcal R}
U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt=0 \ ,$$ which can be written in $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{A\psi}}=\frac{\partial}{\partial S}\int\limits_{\mathcal R}
U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt \ ,\label{eee}$$ where the charged current ${{\mbox{\boldmath${j}$}}}_{_{A\psi}}$ now becomes $$\Big({{\mbox{\boldmath${j}$}}}_{_{A\psi}}\Big)_i=2\,\rho\left(\partial_iS+A_i\right)+\frac{\delta}{\delta(\partial_iS)}
\,\int\limits_{\mathcal R} U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt \
.\label{currentgg}$$ By comparing this expression with equation (\[ccg\]) it follows that:\
1) the nonlinear potential $U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ must depend on the field $S$ only through its spatial derivatives, so that the right-hand side of equation (\[eee\]) vanishes and the latter becomes a continuity equation for the field $\rho$.\
2) the fields ${{\mbox{\boldmath${\nabla}$}}}S$ and ${\mbox{\boldmath${A}$}}$ must be present in the nonlinear potential through the combination ${{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}}$. In other words the Lagrangian of the matter field can be obtained consistently from the Lagrangian of the scalar field (\[lagrangean\]) by replacing in it the standard derivatives with the covariant ones $\partial_\mu\rightarrow D_\mu=\partial_\mu+i\,A_\mu$ (minimal coupling prescription).\
Since, as required, $U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ depends on $S$ and ${{\mbox{\boldmath${A}$}}}$ only through the quantity ${{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}}$ and its higher-order spatial derivatives, equation (\[wa\]) can be written as $${\mathcal W}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]={1\over2\,\rho}\,{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${\mathcal
J}$}}_{\!\!A}}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ ,$$ where the vector ${\mbox{\boldmath${\mathcal J}$}}_{\!\!A}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ is defined by $$\Big({\mathcal J}_{\!A}\Big)_i[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]=\frac{\delta}{\delta(\partial_iS)}\int\limits_{\mathcal R}
U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,d^nx\,dt \ ,$$ and the charged current (\[currentgg\]) assumes the expression $${{\mbox{\boldmath${j}$}}}_{_{A\psi}}=2\,\rho\left({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}}\right)+{\mbox{\boldmath${\mathcal J}$}}_{\!\!A}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ .$$
Gauge transformation
--------------------
Firstly, we recall that the system described by the Lagrangian (\[lagrangianag\]) is invariant under a local U(1) transformation (gauge transformation of the second kind), performed both on the field $\psi$ and on the gauge field $A_\mu$, by means of $$\begin{aligned}
\nonumber &&A_{\mu}({{\mbox{\boldmath${x}$}}},\,t)\rightarrow A_{\mu}({{\mbox{\boldmath${x}$}}},\,t)
-\partial_{\mu}\omega({{\mbox{\boldmath${x}$}}},\,t) \ ,\\
&&\label{gauge}\\
\nonumber &&\psi({{\mbox{\boldmath${x}$}}},\,t)\rightarrow \exp\Big(i\,\omega({{\mbox{\boldmath${x}$}}},\,t)\Big)\psi({{\mbox{\boldmath${x}$}}},\,t) \ ,\end{aligned}$$ where $\omega({{\mbox{\boldmath${x}$}}},\,t)$ is a well-behaved function in the sense of $\epsilon^{\mu\nu}
\partial_\mu\partial_\nu\omega=0$, with $\epsilon^{\mu\nu}$ the anti-symmetric tensor $\epsilon^{\mu\nu}=-\epsilon^{
\nu\mu}$. Remark that, under the transformation (\[gauge\]), Lagrangian (\[MCS\]) changes according to $${\mathcal L}_{\rm g}\rightarrow{\mathcal L}_{\rm
g}+\frac{g}{2}\,\epsilon^{\mu\nu\tau}\,\partial_\mu
\left(\omega\,F_{\nu\tau}\right)
\ ,$$ with an extra surface term which does not change the equations of motion for the fields $\psi$ and $A_\mu$.\
Let us now introduce the gauge transformation of third kind as a unitary nonlinear transformation performed only on the field $\psi$ $$\psi({{\mbox{\boldmath${x}$}}},\,t)\rightarrow\phi({{\mbox{\boldmath${x}$}}},\,t)={\mathcal
U}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,\psi({{\mbox{\boldmath${x}$}}},\,t) \ ,\label{trasf1g}$$ which allows to eliminate the imaginary part $\mathcal W[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ of the nonlinearity in the evolution equation (\[schroedingerg2\]) and reduces the charged current to the standard bilinear form $${{\mbox{\boldmath${j}$}}}_{_{A\psi}}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\rightarrow{{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}} [\rho,\,{\mathcal S},\,{{\mbox{\boldmath${A}$}}}]=2\,\rho\,({{\mbox{\boldmath${\nabla}$}}}{\mathcal S}-{{\mbox{\boldmath${A}$}}}) \ .\label{currentg}$$ The unitary functional ${\mathcal U}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ is given by $${\mathcal U}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]=\exp\Big(i\,\sigma\left[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}\right]\Big)
\ ,\label{trasf2g}$$ where the real generator of the transformation, $\sigma\left[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}\right]$, defined according to $${{\mbox{\boldmath${\nabla}$}}}\sigma\left[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}\right]=\frac{1}{2\,\rho}\,{\mbox{\boldmath${\mathcal
J}$}}_{\!\!A}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ ,\label{sig}$$ is constrained by $${{\mbox{\boldmath${\nabla}$}}}\times\left(\frac{{\mbox{\boldmath${\mathcal
J}$}}_{\!\!A}[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]}{\rho}\right)=0 \
.\label{condition1}$$ By performing the transformation (\[trasf1g\]), from equation (\[schroedingerg2\]) we obtain the following NLSE for the charged field $\phi$ $$i\,D_t\phi+{{\mbox{\boldmath${D}$}}}^2\phi+\widetilde{W}[\rho,\,{\mathcal S},\,{{\mbox{\boldmath${A}$}}}]\,\phi=0 \ ,\label{schroedinger2g}$$ where the real nonlinearity $\widetilde W[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ assumes the expression $$\widetilde{W}[\rho,\,{\mathcal S},\,{{\mbox{\boldmath${A}$}}}]=W-({{\mbox{\boldmath${\nabla}$}}}\sigma)^2+2\,({\mbox{\boldmath${\nabla}$}}{\mathcal S}-{{\mbox{\boldmath${A}$}}})
\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma+\frac{\partial\sigma}{\partial t} \ ,$$ with $W\equiv W[\rho,\,S[\rho,\,{\mathcal S},\,{{\mbox{\boldmath${A}$}}}],\,{{\mbox{\boldmath${A}$}}}]$ and $\sigma\equiv \sigma[\rho,\,S[\rho,\,{\mathcal S},\,{{\mbox{\boldmath${A}$}}}],\,{{\mbox{\boldmath${A}$}}}]$. The new phase $\mathcal S$ of the field $\phi$ is related to the old phase $S$ of the field $\psi$ through the relation $${\mathcal S}=S+\sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ ,\label{ns}$$ and because the nonlinearity in equation (\[schroedinger2g\]) is a purely real quantity the continuity equation for the field $\rho$ becomes $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}}=0 \ ,$$ with the transformed charged current ${{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}}$ given in equation (\[currentg\]).\
Since the nonlinear transformation has been accomplished only on the matter field, the evolution equation for the gauge field retains formally the same expression given in equation (\[gaugefield\]) $$\gamma\,\partial_\mu\,F^{\mu\nu}+g\,\varepsilon^{\nu\tau\mu}
\,F_{\tau\mu}=j^\nu_{_{A\phi}} \ ,$$ but with the transformed charged source $j^\nu_{_{A\phi}}\equiv(\rho,\,{{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}})$.\
On the other hand, the presence of the gauge field enables us to introduce a transformation on it, leaving the matter field unchanged.\
In fact, let us introduce the following transformation $${{\mbox{\boldmath${A}$}}}({{\mbox{\boldmath${x}$}}},\,t)\rightarrow{{\mbox{\boldmath${\chi}$}}}({{\mbox{\boldmath${x}$}}},\,t)={{\mbox{\boldmath${A}$}}}({{\mbox{\boldmath${x}$}}},\,t)-{{\mbox{\boldmath${\nabla}$}}}\sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ ,
\label{trasfgauge1}$$ where $\sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ is still defined through equation (\[sig\]).\
Accounting for $F_{\mu\nu}=-F_{\nu\mu}$, it follows that $$F_{\mu\nu}\equiv\partial_\mu A_\nu-\partial_\nu A_\mu=
\partial_\mu\chi_\nu-\partial_\nu\chi_\mu \
,\label{condition2}$$ whenever $\sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]$ is a well-behaved function fulfilling the relation $$\epsilon^{\mu\nu}\,\partial_\mu\partial_\nu\sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]=0 \ .\label{trasfgauge2}$$ This implies that, for $\mu$ and $\nu$ spatial indices, equation (\[condition2\]) is trivially satisfied as a consequence of condition (\[condition1\]); differently, when $\mu$ or $\nu$ is equal to zero, equation (\[condition2\]) implies the following transformation for the component $A_0({{\mbox{\boldmath${x}$}}},\,t)$ of the gauge field $$A_0({{\mbox{\boldmath${x}$}}},\,t)\rightarrow\chi_0({{\mbox{\boldmath${x}$}}},\,t)=A_0({{\mbox{\boldmath${x}$}}},\,t)+\frac{\partial}{\partial t}\,\sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] \ .
\label{trasfgauge3}$$ By performing the transformation (\[trasfgauge1\]) and (\[trasfgauge3\]) in equation (\[gaugefield\]) we obtain $$\gamma\,\partial_\mu F^{\mu\nu}+g\,\varepsilon^{\nu\tau\mu}
\,F_{\tau\mu}=\tilde j^\nu_{_{A\phi}} \ ,$$ where the new covariant current $\tilde
j^\nu_{_{A\phi}}\equiv(\rho,\,\tilde {{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}})$ with $$\tilde {{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}}=2\,\rho\,({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${\chi}$}}}) \ ,$$ fulfills the continuity equation $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot\tilde {{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}}=0 \ .$$ Differently, from equation (\[schroedingerg2\]) it follows $$i\,{\overline D}_t\psi+\overline{{\mbox{\boldmath${D}$}}}^{\,2}\psi+\overline{W}[\rho,\,S,\,{{\mbox{\boldmath${\chi}$}}}]\,\psi=0 \ ,$$ which has the same form of equation (\[schroedinger2g\]) but now the covariant derivative is defined in $\overline{D}_\mu=\partial_\mu+i\,\chi_\mu$, while the real nonlinearity becomes $$\overline{W}[\rho,\,S,\,{{\mbox{\boldmath${\chi}$}}}]=W-({{\mbox{\boldmath${\nabla}$}}}\sigma)^2+2\,({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${\chi}$}}})\cdot{{\mbox{\boldmath${\nabla}$}}}\sigma+\frac{\partial\sigma}{\partial
t} \ ,$$ with $W\equiv W[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}[\rho,\,S,\,{{\mbox{\boldmath${\chi}$}}}]]$ and $\sigma\equiv \sigma[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}[\rho,\,S,\,{{\mbox{\boldmath${\chi}$}}}]]$.\
In conclusion, it is worth observing that if we introduce the nonlinear transformation both on the matter field and on the gauge field, by following the prescription given in equation (\[gauge\]), the evolution equations (\[gaugefield\]) and (\[schroedingerg2\]) are not changed in form, because the variations due to the matter field are balanced by the variations due to the gauge field. Thus, in this case transformation (\[trasf1g\]) behaves exactly like a gauge transformation of the second kind.
Applications
============
To show the applicability of the nonlinear transformation introduced in this paper, we consider some examples for the three cases: scalar NLSEs, coupled NLSEs and gauged NLSEs. Some of the examples discussed here are already known in the literature. We show that the nonlinear transformations introduced by different authors can be obtained, in a unified way, with the method presented in this work.
Scalar NLSEs
------------
Let us consider, as a first example, the following 1-dimensional NLSE $$\begin{aligned}
\nonumber i\,\frac{\partial\psi}{\partial
t}&+&\frac{\partial^2\psi}{\partial x^2}
+a_{_1}\,|\psi|^2\,\psi+a_{_2}\,|\psi|^4\,\psi\\
&+&i\,a_{_3}\,|\psi|^2\,\frac{\partial\psi}{\partial x}
+i\,a_{_4}\,\frac{\partial\psi^\ast}{\partial x}\,\psi^2=0 \
,\label{exe1}\end{aligned}$$ where $a_{_1}$, $a_{_2}$, $a_{_3}$ and $a_{_4}$ are real constants. After introducing the hydrodynamic fields $\rho$ and $S$ we can write the real and imaginary part of the nonlinearity in $$W[\rho,\,S]=b_{_1}\,\rho+b_{_2}\,\rho^2+b_{_3}\,\rho\,\frac{\partial
S}{\partial x} \ ,$$ and $${\mathcal W}[\rho]=b_{_4}\,\frac{\partial\rho}{\partial x} \ ,$$ where $b_{_1}=a_{_1}$, $b_{_2}=a_{_2}$, $b_{_3}=a_{_4}-a_{_3}$ and $b_{_4}=(a_{_3}+a_{_4})/2$.\
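This splitting can also be checked symbolically. The following minimal SymPy sketch (an illustrative aside with arbitrarily chosen trial fields, not part of the derivation) substitutes $\psi=\rho^{1/2}\,e^{i\,S}$ into the nonlinear terms of equation (\[exe1\]) and verifies that they decompose exactly as stated above:

```python
# Illustrative check: for psi = sqrt(rho)*exp(i*S) the nonlinearity of (exe1)
# should split into W = b1*rho + b2*rho**2 + b3*rho*S_x (real part) and
# curly_W = b4*rho_x (imaginary part), with b1=a1, b2=a2, b3=a4-a3, b4=(a3+a4)/2.
import sympy as sp

x = sp.symbols('x', real=True)
a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4', real=True)

# any smooth positive rho and real S will do for this identity check
rho = 1 + x**2
S = sp.sin(x)

psi = sp.sqrt(rho) * sp.exp(sp.I * S)
psic = sp.conjugate(psi)

# nonlinear terms of equation (exe1), divided by psi, i.e. Lambda = W + i*curly_W
Lam = (a1 * (psic * psi) * psi
       + a2 * (psic * psi)**2 * psi
       + sp.I * a3 * (psic * psi) * sp.diff(psi, x)
       + sp.I * a4 * sp.diff(psic, x) * psi**2) / psi

expected = (a1 * rho + a2 * rho**2 + (a4 - a3) * rho * sp.diff(S, x)
            + sp.I * sp.Rational(1, 2) * (a3 + a4) * sp.diff(rho, x))

print(sp.simplify(sp.powsimp(sp.expand(Lam - expected))))  # prints 0
```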
The canonical subclass of equation (\[exe1\]) is obtained by setting $b_{_3}=-2\,b_{_4}$ and admits the following potential $$U[\rho,\,S]=-\left({b_{_1}\over2}\,\rho^2+{b_{_2}\over3}\,\rho^3+
{b_{_3}\over2}\,\rho^2\,\frac{\partial S}{\partial x} \right) \ .$$ Equation (\[exe1\]) conserves the density $\rho$ and the corresponding particles current is given by $$j_{_\psi}=2\,\rho\,\frac{\partial S}{\partial x}+b_{_4}\,\rho^2 \ .$$ After performing the transformation (\[transf1\]) with $$\sigma[\rho]={b_{_4}\over2}\int\limits^x\rho\,dx^\prime \ ,$$ equation (\[exe1\]) is changed into $$i\,\frac{\partial\phi}{\partial t}+\frac{\partial^2\phi}{\partial
x^2} +\left(b_{_1}\,\rho+\tilde
{b}_{_2}\,\rho^2+b_{_3}\,\rho\,\frac{\partial{\mathcal S}}{\partial
x}\right)\,\phi=0 \ ,\label{exetrasf1}$$ where $\tilde{b}_{_2}=b_{_2}-b_{_3}\,b_{_4}/2-b_{_4}^2/4$.\
Equation (\[exe1\]) contains, as particular cases, some known NLSEs. Among them we recall:\
1) The Chen-Lee-Liu equation [@Chen] ($b_{_1}=b_{_2}=0$, $b_{_3}=-2\,b_{_4}$), which is transformed into the NLSE with real nonlinearity $$\widetilde W[\rho,\,{\mathcal S}]=\tilde
b_{_2}\rho^2+b_{_3}\,\rho\,\frac{\partial{\mathcal S}}{\partial x} \
,$$ where $\tilde b_{_2}=3\,b_{_3}^2/16$.\
2) The Jackiw-Aglietti equation [@Aglietti; @Jackiw] ($b_{_1}=0$, $b_{_2}=-3\,b_{_4}^2/4$ and $b_{_3}=-2\,b_{_4}$), which is transformed into the NLSE with real nonlinearity $$\widetilde W[\rho,\,{\mathcal
S}]=b_{_3}\,\rho\,\frac{\partial{\mathcal S}}{\partial x} \ .$$ 3) The Eckhaus equation [@Calogero6; @Calogero1] ($b_{_1}=b_{_3}=0$), which is transformed into the NLSE with real nonlinearity $$\widetilde W[\rho]=\tilde b_{_2}\rho^2 \ ,$$ with $\tilde b_{_2}=b_{_2}-b_{_4}^2/4$. Remark that, when $b_{_1}\not=0$, we obtain, after transformation, the cubic-quintic NLSE with real nonlinearity $$\widetilde W[\rho]=b_{_1}\,\rho+\tilde b_{_2}\rho^2 \ ,$$ studied in [@Ginsburg].\
4) The Kaup-Newell equation [@Kaup] ($b_{_1}=b_{_2}=0$ and $b_{_4}=-3\,b_{_3}/2$), which is transformed into the NLSE with real nonlinearity $$\tilde W[\rho,\,{\mathcal S}]=\tilde b_{_2}\rho^2+
b_{_3}\,\rho\,\frac{\partial{\mathcal S}}{\partial x} \ ,$$ with $\tilde b_{_2}=3\,b_{_3}^2/16$.\
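For instance, inserting the Kaup-Newell values $b_{_2}=0$ and $b_{_4}=-3\,b_{_3}/2$ into the general expression $\tilde{b}_{_2}=b_{_2}-b_{_3}\,b_{_4}/2-b_{_4}^2/4$ gives $\tilde{b}_{_2}=3\,b_{_3}^2/4-9\,b_{_3}^2/16=3\,b_{_3}^2/16$, in agreement with the value quoted in case 4).\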
As a second example we consider the canonical NLSE introduced in [@Kaniadakis; @Kaniadakis1] $$\begin{aligned}
\nonumber i\,\frac{\partial\psi}{\partial
t}+\frac{\partial^2\psi}{\partial x^2}
&+&\kappa\,\left(\psi^\ast\,\frac{\partial\psi}{\partial x}
-\psi\,\frac{\partial\psi^\ast}{\partial x}\right)\,
\frac{\partial\psi}{\partial x}\\
&+&\frac{\kappa}{2}\, \frac{\partial}{\partial x}\,\left
(\psi^\ast\frac{\partial \psi}{\partial
x}-\psi\,\frac{\partial\psi^\ast}{\partial x}\right) \,\psi=0 \
,\label{exe2}\end{aligned}$$ where $\kappa$ is a real parameter. The real and imaginary nonlinearities in the hydrodynamic representation are given by $$W[\rho,\,S]= -2\,\kappa\,\rho\,\left(\frac{\partial S}{\partial
x}\right)^2 \ ,$$ and $${\mathcal W}[\rho,\,S]=
\frac{\kappa}{\rho}\,\frac{\partial}{\partial x}
\left(\rho^2\,\frac{\partial S}{\partial x}\right) \ .$$ They are obtained from the potential $$U[\rho,\,S]=\kappa\,\left(\rho\,\frac{\partial S}{\partial
x}\right)^2 \ ,\label{ueip}$$ whereas the particles current assumes the expression $$j_{_\psi}=2\,\rho\,(1+\kappa\,\rho)\,\frac{\partial S}{\partial x} \
.$$ By performing the transformation (\[transf1\]) with generator $$\sigma[\rho,\,S]=\kappa\int\limits^x\rho\,\frac{\partial S}{\partial
x^\prime}\,dx^\prime \ ,\label{sigma2}$$ equation (\[exe2\]) changes into $$i\,\frac{\partial\phi}{\partial t}+\frac{\partial^2\phi}{\partial
x^2}-\left[
\frac{2\,\kappa\,\rho}{1+\kappa\,\rho}\,\left(\frac{\partial{\mathcal
S}}{\partial x}\right)^2
-\frac{\kappa}{2}\,\rho\,\frac{\partial^2\log\rho}{\partial
x^2}\right]\,\phi=0 \ .\label{exe21}$$ Remark that, although equation (\[exe2\]) can be generalized to any number of spatial dimensions [@Kaniadakis; @Kaniadakis1], condition (\[rot\]) is not satisfied in general and the transformation (\[transf1\]) can be applied consistently only in the 1-dimensional case.\
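We also note that the transformed phase ${\mathcal S}=S+\sigma$ satisfies $\partial{\mathcal S}/\partial x=(1+\kappa\,\rho)\,\partial S/\partial x$, so that the bilinear current $2\,\rho\,\partial{\mathcal S}/\partial x$ associated with equation (\[exe21\]) coincides with the particles current $j_{_\psi}$ written above, as required by the conservation of $\rho$.\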
Another example is given by the class of the Doebner-Goldin equations [@Doebner4] $$i\,\frac{\partial\,\psi}{\partial\,t}+\Delta\,\psi+\left(\sum_{i=1}^5
c_{_i}\,R_{_i}[\rho,\,S]+i\,
\frac{D}{2}\,R_{_2}[\rho]\right)\,\psi=0 \ ,\label{exe3}$$ where the nonlinear functionals $R_{_i}$ are given by $R_{_1}={{\mbox{\boldmath${\nabla}$}}}\cdot(\rho{{\mbox{\boldmath${\nabla}$}}}S)/\rho$, $R_{_2}=\Delta\rho/\rho$, $R_{_3}=({{\mbox{\boldmath${\nabla}$}}}S)^2$, $R_{_4}={{\mbox{\boldmath${\nabla}$}}}S\cdot{{\mbox{\boldmath${\nabla}$}}}\rho/\rho$ and $R_{_5}=({{\mbox{\boldmath${\nabla}$}}}\rho/\rho)^2$. The canonical subclass of equation (\[exe3\]) is obtained for $c_{_1}=-c_{_4}=D$, $c_{_3}=0$ and $c_{_2}=-2\,c_{_5}$ and it follows from the potential $$U[\rho,\,S] =D\,{{\mbox{\boldmath${\nabla}$}}}\rho\cdot
{{\mbox{\boldmath${\nabla}$}}}S+c_{_5}\,\frac{({{\mbox{\boldmath${\nabla}$}}}\rho)^2}{\rho} \ .$$ The particles current is given by $${{\mbox{\boldmath${j}$}}}_{_\psi}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}S+D\,{{\mbox{\boldmath${\nabla}$}}}\rho \
,\label{currentexe3}$$ and the corresponding continuity equation is the well-known Fokker-Planck equation $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_\psi}^{(0)}+D\,\Delta\rho=0 \ ,$$ where $D$ is the diffusion coefficient.\
By performing the transformation (\[transf1\]) with generator $$\sigma[\rho]={D\over2}\,\log\rho \ ,\label{ss}$$ equation (\[exe3\]) transforms into $$i\,\frac{\partial\phi}{\partial t}+\Delta\phi+\sum_{i=1}^5
\tilde{c}_{_i}\,R_{_i}[\rho,\,{\mathcal S}]\,\phi=0 \ ,$$ with coefficients $\tilde{c}_{_1}=c_{_1}-D$, $\tilde{c}_{_2}=c_{_2}-c_{_1}\,D/2$, $\tilde{c}_{_3}=c_{_3}$, $\tilde{c}_{_4}=c_{_4}+(c_{_3}-1)\,D$ and $\tilde{c}_{_5}=c_{_5}-c_{_4}\,D-(c_{_3}-1)\,D^2/4$.\
It is easy to verify that the generator (\[ss\]) satisfies condition (\[rot\]) and the nonlinear transformation can be performed in any $n\geq1$ spatial dimensions.\
As a final example we consider the following family of NLSEs $$i\,\frac{\partial\psi}{\partial
t}+\Delta\psi+\Big(W(\rho,\,S)+i\,{\mathcal
W}(\rho,\,S)\Big)\,\psi=0 \ ,\label{sent}$$ with nonlinearities $$W(\rho,\,S)=-{D\over2}\,f(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\cdot\left(\frac{{{\mbox{\boldmath${j}$}}}^{(0)}_{_\psi}}{\rho}\right)+G[\rho] \ , \label{www}$$ and $${\mathcal
W}(\rho,\,S)=-\frac{D}{2\,\rho}\,{{\mbox{\boldmath${\nabla}$}}}\cdot\Big(f(\rho )
\,{{\mbox{\boldmath${\nabla}$}}}\rho\Big) \ , \label{cwww}$$ where $$f(\rho)=\rho\,\frac{\partial\ln\,\kappa(\rho)}{\partial\rho} \ ,$$ and $G[\rho]$ is an arbitrary functional of $\rho$. Equation (\[sent\]) can be obtained from the potential $$U[\rho,\,S]=-D\,f(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho\cdot{{\mbox{\boldmath${\nabla}$}}}S+\int\limits^\rho
G[\rho^\prime]\,d\rho^\prime \ ,$$ and was recently derived, in the canonical quantization framework, from classical many-body systems described by generalized entropies [@Scarfone8].\
The particles current is given by $${{\mbox{\boldmath${j}$}}}_{_\psi}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}\,S-D\,f(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho \ ,\label{ncurrent}$$ which is the sum of a linear drift current ${{\mbox{\boldmath${j}$}}}_{\rm
drift}=2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}S$ and a nonlinear diffusion current ${{\mbox{\boldmath${j}$}}}_{\rm diff}=-D\,f(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho$ different from Fick’s current ${{\mbox{\boldmath${j}$}}}_{\rm Fick}=-D\,{{\mbox{\boldmath${\nabla}$}}}\,\rho$ which is recovered by posing $\kappa(\rho)=\alpha\,\rho$, with $\alpha$ a constant. The diffusive term is related to the entropy of the classical system through the relation (with the Boltzmann constant $k_{\rm B}=1$) $$S(\rho)=-\int\limits_M d^nx\int\limits^\rho
\ln\kappa(\rho^\prime)\,d\rho^\prime \ .\label{entropy}$$ By performing the transformation (\[transf1\]) with generator $$\sigma[\rho]=\frac{D}{2}\,\ln\kappa(\rho) \ ,$$ equation (\[sent\]) changes into $$i\,\frac{\partial\phi}{\partial t}+\Delta\phi -{D^2\over2}\,\Bigg[
f_{1}(\rho)\,\Delta\rho+ f_2(\rho)\,
\left({{\mbox{\boldmath${\nabla}$}}}\rho\right)^2\Bigg]\,\phi+G[\rho]\,\phi=0 \ ,$$ with $$\begin{aligned}
&&
f_{1}(\rho)=\rho\,\left(\frac{\partial}{\partial\rho}\,\ln\kappa(\rho)
\right)^2 \ ,\\
&& f_2(\rho)={1\over2}\,\frac{\partial\, f_{1}(\rho)}{\partial\rho}
\ ,\end{aligned}$$ which contains a purely real nonlinearity depending only on $\rho$.\
In particular, starting from the entropy $S=-\int_M\rho\,\log\rho\,d^nx$, with $\kappa(\rho)=e\,\rho$, equation (\[sent\]) becomes $$i\,\frac{\partial\psi}{\partial t}+\Delta\psi-{D\over
2}\,{{\mbox{\boldmath${\nabla}$}}}\cdot\left(\frac{{{\mbox{\boldmath${j}$}}}^{(0)}_{_\psi}}{\rho}\right)\,\psi-i\,\frac{D}{2}\,\frac{\Delta\rho}{\rho}\,\psi=0
\ ,\label{DG}$$ which coincides with the canonical sub-family of the Doebner-Goldin equations described in the previous example. After transformation it becomes $$i\,\frac{\partial\,\phi}{\partial\,t}+\Delta\,\phi
-{D^2\over2}\,\left[\frac{\Delta\rho}{\rho}
-{1\over2}\left(\frac{{{\mbox{\boldmath${\nabla}$}}}\rho}{\rho}\right)^2\right]\,\phi=0
\ ,\label{DG1}$$ which was studied previously in [@Guerra]. Remarkably, this equation is equivalent to the following linear Schrödinger equation $$i\,k^{\!\!\!\!\!-}\,\frac{\partial\chi}{\partial
t}+{k^{\!\!\!\!\!-}}^2\,\Delta\chi=0 \ ,\label{lsch}$$ with $k^{\!\!\!\!\!-}=\sqrt{1-D^2}$, where the field $\chi$ is related to the hydrodynamic fields $\rho$ and $\mathcal S$ through the relation $\chi=\sqrt{\rho}\,\exp(i\,{\mathcal S}/k^{\!\!\!\!\!-})$.\
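This equivalence can be traced back to the identity $\Delta\rho^{1/2}/\rho^{1/2}={\textstyle\frac{1}{2}}\,\Delta\rho/\rho-{\textstyle\frac{1}{4}}\,\left({{\mbox{\boldmath${\nabla}$}}}\rho/\rho\right)^2$: the real nonlinearity in equation (\[DG1\]) is precisely $-D^2\,\Delta\rho^{1/2}/\rho^{1/2}$, so that in the hydrodynamic picture the quantum potential term is simply rescaled by the factor $1-D^2={k^{\!\!\!\!\!-}}^2$, which is then reabsorbed by the rescaled phase ${\mathcal S}/k^{\!\!\!\!\!-}$.\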
Coupled NLSEs
-------------
Let us now consider the following 1-dimensional system of CNLSEs $$i\,\frac{\partial\psi_{_j}}{\partial
t}+a_{_j}\,\frac{\partial^2\psi_{_j}}{\partial x^2}
+\Lambda(\psi_{_i},\,\psi^\ast_{_i})\,\psi_{_j}=0 \ ,\label{example}$$ with nonlinearity $$\begin{aligned}
\nonumber \Lambda(\psi_{_i},\,\psi^\ast_{_i})=-i\sum_{i=1}^p&&\Bigg(\alpha_{_{ij}}\frac{\rho_{_i}}{\rho_{_j}}\,
\psi_{_j}\frac{\partial\psi_{_j}^\ast}{\partial x}
+\beta_{_{ij}}\,\rho_{_i}\,\frac{\partial}{\partial
x}\\
&&+\gamma_{_{ij}}\,\psi_{_i}\frac{\partial\psi_{_i}^\ast}{\partial
x}
+\epsilon_{_{ij}}\,\psi_{_i}^\ast\frac{\partial\psi_{_i}}{\partial
x}\Bigg)+f_{_j}({\vec \rho}) \ , \label{nl}\end{aligned}$$ where $\alpha_{_{ij}}$, $\beta_{_{ij}}$, $\gamma_{_{ij}}$ and $\epsilon_{_{ij}}$ are real constants and $f_{_j}({\vec\rho})$ are arbitrary real functionals depending only on the vector field $\vec \rho$.\
In the hydrodynamic representation the nonlinearity (\[nl\]) has Hermitian and anti-Hermitian part given, respectively, by $$\begin{aligned}
\widehat W[{\vec\rho},\,{\vec S}]&=&{\rm
diag}\left[\sum_{i=1}^p\rho_{_i}\,\left(b_{_{ij}}\, \frac{\partial
S_{_j}}{\partial x}+c_{_{ij}}\,\frac{\partial
S_{_i}}{\partial x}\right)+f_{_j}(\vec \rho)\right] \ ,\label{ew1}\\
\widehat{\mathcal W}[{\vec\rho}]&=&{\rm diag}\left[\sum_{i=1}^p
\left(d_{_{ij}}\,\frac{\rho_{_i}}{\rho_{_j}}\,
\frac{\partial\rho_{_j}}{\partial
x}+e_{_{ij}}\,\frac{\partial\rho_{_i}}{\partial x}\right)\right] \
,\label{ew2}\end{aligned}$$ where $b_{_{ij}}=\alpha_{_{ij}}-\beta_{_{ij}}$, $c_{_{ij}}=\gamma_{_{ij}}-\epsilon_{_{ij}}$, $d_{_{ij}}=(\alpha_{_{ij}}+\beta_{_{ij}})/2$ and $e_{_{ij}}=(\gamma_{_{ij}}+\epsilon_{_{ij}})/2$.\
Equation (\[example\]) includes some cases already known in the literature. For instance: the vector generalization of the Kaup-Newell equation [@Fordy] ($a_{_j}=1$, $c_{_{ij}}=0$, $-b_{_{ij}}=2\,d_{_{ij}}=e_{_{ij}}=\beta$ and $f_{_j}({\vec\rho})=0$); the coupled Chen-Lee-Liu equation (Type I) [@Wadati] ($a_{_j}=1$, $c_{_{ij}}=e_{_{ij}}=0$, $-b_{_{ij}}=2\,d_{_{ij}}=\beta$, $f_{_j}({\vec\rho})=0$); the coupled Chen-Lee-Liu equation (Type II) [@Wadati] ($a_{_j}=1$, $b_{_{ij}}=d_{_{ij}}=0$, $c_{_{ij}}=-2\,e_{_{ij}}=\beta$, $f_{_j}({\vec\rho})=0$); the hybrid CNLSE [@Hisakado2; @Hisakado1] ($a_{_j}=1$, $c_{_{ij}}=0$, $-b_{_{ij}}=2\,d_{_{ij}}=e_{_{ij}}=\beta$ and $f_{_j}({\vec
\rho})=\beta\sum_k\rho_{_k}$); the vectorial Eckhaus equation [@Calogero3] ($\alpha_{_{ij}}=0,\,f_{_j}({\vec\rho})=\sum_{ik}\lambda_{_{jik}}\,\rho_{_i}\,\rho_{_k}$). Moreover, for $q=p=2$, with $b_{_{ij}}+2\,d_{_{ij}}=0$ and $f_{_1}({\vec\rho})=f\,\rho_{_1}+g\,\rho_{_2}$, $f_{_2}({\vec\rho})=g\,\rho_{_1}+f\,\rho_{_2}$, equation (\[example\]) has been studied in [@Tsuchida].\
The canonical sub-family of equation (\[example\]) is given by $b_{_{ij}}=c_{_{ji}}=-2\,d_{_{ij}}=-2\,e_{_{ij}}$ and can be obtained through the following nonlinear potential $$U[{\vec\rho},\,{\vec
S}]=-\sum_{i,j=1}^pb_{_{ij}}\,\rho_{_i}\,\rho_{_j}\frac{\partial
S_{_i}}{\partial x} +F({\vec\rho}) \ ,$$ where the conditions $\delta\,F({\vec\rho})/\delta\,\rho_{_j}=f_{_j}({\vec\rho})$ are assumed.\
We observe that:\
a) when $d_{_{ij}}=e_{_{ij}}$, for $i\not= j$, equation (\[example\]) conserves the densities $\rho_{_j}$ and the currents take the form $$j_{_{\psi,j}}=2\,a_{_j}\,\rho_{_j}\frac{\partial S_{_j}}{\partial
x}-(d_{_{jj}}+e_{_{jj}})\,\rho_{_j}^2-2\,
\sum_{i=1,i\not=j}^pd_{_{ij}}\,\rho_{_i}\,\rho_{_j} \ ,\label{jj}$$ with ${\mathcal
J}_{_j}({\vec\rho})=-(d_{_{jj}}+e_{_{jj}})\,\rho_{_j}^2-2\,\sum_{i\not=
j}
d_{_{ij}}\,\rho_{_i}\,\rho_{_j}$ and $I_{_j}({\vec\rho})=0$.\
b) when $d_{_{ij}}+e_{_{ji}}=d_{_{ji}}+e_{_{ij}}$, equation (\[example\]) conserves the total density $\rho_{_{\rm
tot}}=\sum_j\rho_{_j}$, and the total current is given by $$j_{_{\rm tot}}=\sum_{j=1}^p\left[2\,a_{_j}\,\rho_{_j}\frac{\partial
S_{_j}}{\partial x}
-\sum_{i=1}^p(d_{_{ij}}+e_{_{ji}})\,\rho_{_i}\,\rho_{_j}\right] \ ,$$ with ${\mathcal
J}_{_j}[{\vec\rho}]=-(d_{_{jj}}+e_{_{jj}})\,\rho_{_j}^2$ and $$I_{_{j}}[{\vec\rho}]=-2\sum_{i\not=j}\left(d_{_{ij}}\,
\rho_{_i}\frac{\partial\rho_{_j}}{\partial x}+e_{_{ij}}\,\rho_{_j}\,
\frac{\partial\rho_{_i}}{\partial x}\right) \ .$$ If we choose the functionals ${\mathcal R}_{_j}[{\vec\rho}]=0$ in the case a) and $${\mathcal R}_{_j}[{\vec\rho}]=-\sum_{i=1,\,i\not=j}^p
\lambda_{_{ij}}\,\rho_{_i}\,\rho_{_j} \ ,$$ in the case b), where $\lambda_{_{ij}}=d_{_{ij}}+e_{_{ij}}$, the generators (\[cgenf\]) can be written in the unified form $$\sigma_{_j}[{\vec\rho}]=-{1\over
2\,a_{_j}}\sum_{i=1}^p\lambda_{_{ij}}\int\limits^x\rho_{_i}\,dx^\prime
\ .$$ By performing the gauge transformation, from equation (\[example\]) we obtain a new system of CNLSEs for the field $\Phi$ with nonlinearity $$\widehat W^\prime[{\vec\rho},\,{\vec S}]=\widehat
D[{\vec\rho},\,{\vec S}]+\widehat C[{\vec\rho},\,{\vec S}] \ ,$$ where the diagonal matrix $\widehat D[{\vec\rho},\,{\vec S}]$ has entries $$\begin{aligned}
\nonumber \widehat D[{\vec\rho},\,{\vec S}]={\rm
diag}&&\left[\sum_{i=1}^p\rho_{_i}\,\left(\mu_{_{ij}}\,\frac{\partial{\mathcal
S}_{_j}}{\partial x}+\nu_{_{ij}}\, \frac{\partial{\mathcal
S}_{_i}}{\partial
x}\right)\right.\\
&&\left.+\sum_{i,k=1}^p\omega_{_{jik}}\,\rho_{_i}\,\rho_{_k}+f_{_j}({\vec
\rho})\right] \ ,\end{aligned}$$ with $$\begin{aligned}
\nonumber
&&\mu_{_{ij}}=b_{_{ij}}+\lambda_{_{ij}} \ ,\\
&&\nu_{_{ij}}=c_{_{ij}}-\frac{a_{_i}}{a_{_j}}\,\lambda_{_{ij}} \ ,\\
\nonumber
&&\omega_{_{jik}}={1\over4\,a_{_j}}\left(\lambda_{_{ij}}\,\lambda_{_{kj}}+2\,b_{_{ij}}\,
\lambda_{_{kj}}+2\,\frac{a_{_j}}{a_{_i}}\,c_{_{ij}}\,\lambda_{_{ki}}\right)
\ ,\end{aligned}$$ whereas the off-diagonal matrix $\widehat C$ has entries $$\left(\widehat C\right)_{_{ij}}\!\!\![{\vec\rho},\,{\vec
S}]=i\,\frac{{\mathcal F}_{_i}({\vec\rho})-{\mathcal
F}_{_j}({\vec\rho})}{2\,p\,\sqrt{\rho_{_i}\,\rho_{_j}}}\,e^{i\,\left({\mathcal
S}_{_i}-{\mathcal S}_{_j}\right)} \ ,$$ where $${\mathcal F}_{_j }({\vec\rho})=\sum_{i=1}^p(d_{_{ij}}-e_{_{ij}})\,
\left(\rho_{_i}\,\frac{\partial\rho_{_j}}{\partial x}
-\frac{\partial\rho_{_i}}{\partial x}\,\rho_{_j}\right) \
.\label{ff}$$ We observe that the functionals (\[ff\]) vanish in the case a) and the nonlinearity $\widehat W^\prime[{\vec\rho},\,{\vec S}]$ reduces to a purely real quantity.\
Let us now collect some particular cases belonging to equation (\[example\]).\
1) By choosing $b_{_{ij}}=-\lambda_{_{ij}}$ and $a_{_j}\,c_{_{ij}}=2\,a_{_i}\,\lambda_{_{ij}}$ we obtain a system of CNLSEs with a purely real nonlinearity which depends only on the fields $\rho_{_i}$ $$i\,\frac{\partial\phi_{_j}}{\partial
t}+a_{_j}\,\Delta\phi_{_j}-\left(\sum_{i,k=1}^p\omega_{_{jik}}\,
\rho_{_i}\,\rho_{_k}+f_{_j}({\vec\rho})\right)\,\phi_{_j}=0 \
.\label{ris1}$$ When $f_{_j}(\vec\rho)=\sum_{ik}\lambda_{_{jik}}\,\rho_{_i}\,\rho_{_k}$ with $\lambda_{_{jik}}=\sum_{_{ik}}b_{_{ij}}(b_{_{kj}}-2\,b_{_{ki}})/4\,a_{_j}$, it reduces to a system of decoupled linear Schrödinger equations $$i\,\frac{\partial\phi_{_j}}{\partial t}+a_{_j}\,\Delta\phi_{_j}=0 \
.$$ 2) By choosing $b_{_{ij}}=-\lambda_{_{ij}}$ for $i\not=j$, $a_{_j}\,c_{_{ij}}=2\,a_{_i}\,\lambda_{_{ij}}$ and $f_{_j}(\vec\rho)=\sum_{ik}\lambda_{_{jik}}\,\rho_{_i}\,\rho_{_k}$ with $$\begin{aligned}
\left\{
\begin{array}{l}
\lambda_{_{kkk}}=\lambda_{_{kk}}\,\left(b_{_{kk}}+3\,
\lambda_{_{kk}}/2\right)/2\,a_{_k}
\ ,\\
\lambda_{_{kjk}}=\lambda_{_{kj}}\left(b_{_{kk}}+
\lambda_{_{kk}}/2+\lambda_{_{jk}}\right)/2\,a_{_k}
\ ,\\
\lambda_{_{kki}}=\lambda_{_{kk}}\,\lambda_{_{ki}}/4\,a_{_k} \ ,\\
\lambda_{_{kji}}=\lambda_{_{kj}}\left(\lambda_{_{ji}}-
\lambda_{_{ki}}/2\right)/2\,a_{_k}, \quad\mbox{ for $k\not=j\not=i$ and
$k\not=j=i$} \ ,
\end{array}
\right.\end{aligned}$$ we obtain the following system of decoupled Jackiw-like NLSEs $$i\,\frac{\partial\phi_{_j}}{\partial t}+a_{_j}\,
\frac{\partial^2\phi_{_j}}{\partial x^2}
+\eta_{_j}\,j_{_j}\,\phi_{_j}=0 \ ,\label{ex1}$$ with $\eta_{_j}=(b_{_{jj}}+\lambda_{_{jj}})/2\,a_{_j}$.\
3) By choosing $b_{_{ij}}=-\lambda_{_{ij}},\,\,\lambda_{_{kji}}=
c_{_{kj}}\,\lambda_{_{ji}}/2\,a_{_j}-\lambda_{_{kj}}\,\lambda_{_{ki}}/4\,a_{_k}$ we obtain the CNLSEs $$i\,\frac{\partial\phi_{_j}}{\partial t}+a_{_j}\,
\frac{\partial^2\phi_{_j}}{\partial x^2}
+\sum_k\eta_{_{jk}}\,j_{_k}\,\phi_{_j}=0 \ ,\label{34}$$ being $\eta_{_{jk}}=(c_{_{jk}}-a_{_k}\,
\lambda_{_{jk}}/a_{_j})/2\,a_{_k}$. The nonlinear term in equation (\[34\]) has been considered in [@Calogero5].
Gauged NLSEs
------------
Let us consider a system of charged particles undergoing anomalous diffusion and described by the following NLSE $$i\,D_t\psi+{{\mbox{\boldmath${D}$}}}^2\psi+\Lambda[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]\,\psi=0 \
,\label{exe6}$$ with nonlinearity $$\begin{aligned}
\nonumber \Lambda[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]&&=\left[a_{_1}\,\frac{{{\mbox{\boldmath${\nabla}$}}}\cdot\left({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}}\right)}{\rho^{1-q}}+a_{_2}\,\frac{\Delta\rho}{\rho^{3-2\,q}}+
a_{_3}\,\left(\frac{{{\mbox{\boldmath${\nabla}$}}}\rho}{\rho^{2-q}}\right)^2\right]\\
&&+i\,\frac{D}{2}\,\frac{\Delta\rho^q}{\rho} \ , \label{lecm}\end{aligned}$$ where $a_{_1}=q\,D$, $a_{_2}=2\,\alpha$ and $a_{_3}=\alpha\,(2\,q-3)$ with $\alpha,\,q$ and $D$ constant parameters. Equation (\[exe6\]) must be considered jointly with $$\gamma\,\partial_\mu F^{\mu\nu}+g\,\varepsilon^{\nu\tau\mu}
\,F_{\tau\mu}=j^\nu_{_{A\psi}} \ ,\label{exe61}$$ describing the dynamics of the gauge field.\
The nonlinearity (\[lecm\]) can be obtained from the potential $$U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}] =D\,q
\,\rho^{q-1}\,{{\mbox{\boldmath${\nabla}$}}}\rho\cdot({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}})+\alpha\,\rho^{2q-3}\,\frac{({{\mbox{\boldmath${\nabla}$}}}\rho)^2}{\rho} \
,\label{pot}$$ and the charged current ${{\mbox{\boldmath${j}$}}}_{_{A\psi}}$ is given by $${{\mbox{\boldmath${j}$}}}_{_{A\psi}}={{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\psi}}+D\,q\,\rho^{q-1}\,{{\mbox{\boldmath${\nabla}$}}}\rho \ ,$$ with ${{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\psi}}=2\,\rho\,({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}})$.\
As a consequence the system fulfills the following continuity equation $$\frac{\partial\rho}{\partial t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{A\psi}}^{(0)}+D\,\Delta\rho^q=0 \ ,\label{cexe6}$$ which is a nonlinear Fokker-Planck equation for charged particles.\
By performing the transformation (\[trasf1g\]) with $$\sigma[\rho]=\frac{D}{2}\,\frac{q\,\rho^{q-1}-1}{q-1} \
,\label{trexe6}$$ where the integration constant has been chosen to avoid the singularity for $q\to1$, equations (\[exe6\]) and (\[exe61\]) are transformed into $$i\,D_t\phi+{{\mbox{\boldmath${D}$}}}^2\phi+\beta\,\rho^{2q-2}\,\left[
\frac{\Delta\rho}{\rho}+\left(q-{3\over2}\right)
\,\left(\frac{{{\mbox{\boldmath${\nabla}$}}}\rho}{\rho}\right)^2 \right]\,\phi=0 \ ,
\label{dob2}$$ with $\beta=2\,\alpha-q^2\,D^2/2$ and $$\gamma\,\partial_\mu F^{\mu\nu}+g\,\varepsilon^{\nu\tau\mu}
\,F_{\tau\mu}=j^\nu_{_{A\phi}} \ ,\label{dob21}$$ where $j^\nu_{_{A\phi}}=(\rho,\,{{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}})$ with ${{\mbox{\boldmath${j}$}}}^{(0)}_{_{A\phi}}=2\,\rho\,({{\mbox{\boldmath${\nabla}$}}}{\mathcal S}-{{\mbox{\boldmath${A}$}}})$.\
Similar equations can be obtained equivalently by means of the transformation $$\begin{aligned}
{{\mbox{\boldmath${\chi}$}}}&=&{{\mbox{\boldmath${A}$}}}-\frac{D\,q}{2}\,\rho^{q-2}\,{{\mbox{\boldmath${\nabla}$}}}\rho \ ,\\
\chi_0&=&A_0-\frac{D\,q}{2}\,\rho^{q-2}\,{{\mbox{\boldmath${\nabla}$}}}\cdot{{\mbox{\boldmath${j}$}}}_{_{A\psi}} \ .\label{trgauge}\end{aligned}$$ It is worth observing that equation (\[exe6\]), in the $q\to1$ limit, reduces to the gauged canonical subclass of the Doebner-Goldin family discussed in section 6.1 $$\begin{aligned}
\nonumber i\,D_t\psi+{{\mbox{\boldmath${D}$}}}^2\psi
&+&\left[D\,{{\mbox{\boldmath${\nabla}$}}}\cdot\left({{\mbox{\boldmath${\nabla}$}}}S-{{\mbox{\boldmath${A}$}}}\right)+2\,\alpha\,\frac{\Delta\rho}{\rho}-\alpha\,
\left(\frac{{{\mbox{\boldmath${\nabla}$}}}\rho}{\rho}\right)^2\right]\,\psi\\
&+&i\,{D\over2}\,\frac{\Delta\rho}{\rho}\,\psi=0 \ ,\label{exe62}\end{aligned}$$ which is obtainable from the potential $$U[\rho,\,S,\,{{\mbox{\boldmath${A}$}}}]
=D\,{{\mbox{\boldmath${\nabla}$}}}\rho\cdot({{\mbox{\boldmath${\nabla}$}}}\,S-{{\mbox{\boldmath${A}$}}})+
\alpha\,\frac{({{\mbox{\boldmath${\nabla}$}}}\rho)^2}{\rho} \ ,$$ and the continuity equation (\[cexe6\]) reduces to the linear Fokker-Planck equation for charged particles $$\frac{\partial\rho}{\partial
t}+{{\mbox{\boldmath${\nabla}$}}}\cdot{\mbox{\boldmath${j}$}}_{_{A\psi}}^{(0)}+D\,\Delta\rho=0 \
.\label{ccexe61}$$ In the same limit the gauge transformation has generator $$\sigma[\rho]={D\over2}\,\log\rho \ ,$$ and reduces equation (\[exe62\]) to $$i\,D_t\phi+{{\mbox{\boldmath${D}$}}}^2\phi+\beta \left[\frac{\Delta\rho}{\rho}
-{1\over2}\,\left(\frac{{{\mbox{\boldmath${\nabla}$}}}\rho}{\rho}\right)^2\right]\,\phi=0
\ ,\label{dob61}$$ with $\beta=2\,\alpha-D^2/2$.
Conclusions and comments
========================
In this paper we have considered a class of canonical NLSEs containing complex nonlinearities and describing U(1)-invariant systems. We have introduced a unitary and nonlinear transformation $\psi\rightarrow\phi$ which reduces the complex nonlinearity to a real one and at the same time transforms the quantum particle current into the standard bilinear form. We have extended the method to U(1)-invariant CNLSEs. For these systems we have generalized the gauge transformation with the purpose of changing the initial nonlinearity into a purely Hermitian one. It has been shown that there are many different possibilities to define the generator of the transformation. For any choice we obtain a new CNLSE with a different, but Hermitian, nonlinearity. Finally, we have specialized the method to NLSEs minimally coupled with an Abelian gauge field. We have shown that there are two different ways to reduce the complex nonlinearity to a purely real one: either by a nonlinear unitary transformation on the matter field or, alternatively, by a nonlinear transformation on the gauge field.\
In the following we make some remarks about the transformation studied in the present work.\
Firstly, the problem of the integrability of a nonlinear evolution equation is one of the most studied topics in mathematical physics. Let us consider the most general U(1)-invariant scalar NLSE in the hydrodynamic representation $$\begin{aligned}
&&\frac{\partial\rho}{\partial
t}+{{\mbox{\boldmath${\nabla}$}}}\cdot\left(2\,\rho\,{{\mbox{\boldmath${\nabla}$}}}\,S+{{\mbox{\boldmath${\mathcal J}$}}}\right)=0 \ ,\label{hjcb}\\
&&\frac{\partial S}{\partial t}+ ({{\mbox{\boldmath${\nabla}$}}}S)^2+U_{_q}-W=0 \
.\label{hjca}\end{aligned}$$ In the Calogero picture [@Calogero3; @Calogero4a; @Calogero4; @Calogero1], the system of equations (\[hjcb\]) and (\[hjca\]) is $C$-integrable if there exists a transformation of the dependent and/or independent variables: $t\rightarrow T,\,\,{{\mbox{\boldmath${x}$}}}\rightarrow{{\mbox{\boldmath${X}$}}},\,\,\rho\rightarrow R,\,\,S\rightarrow{\mathcal S}$ which changes equations (\[hjcb\]), (\[hjca\]) into $$\begin{aligned}
&&\frac{\partial R}{\partial T}+
\overline{{\mbox{\boldmath${\nabla}$}}}\cdot\left(2\,R\,\overline{{\mbox{\boldmath${\nabla}$}}}{\mathcal
S}\right)=0 \ ,\label{hjc1b}\\ &&\frac{\partial{\mathcal
S}}{\partial T}+(\overline{{\mbox{\boldmath${\nabla}$}}}{\mathcal S})^2
+\overline{U}_q=0 \ ,\label{hjc1a}\end{aligned}$$ where $\overline{{\mbox{\boldmath${\nabla}$}}}$ and $\overline{U}_q$ are the gradient and the quantum potential in the new variables. Equations (\[hjc1b\]) and (\[hjc1a\]) constitute the well known hydrodynamic representation of the standard linear Schrödinger equation.\
On the other hand, the transformation on the field $S\rightarrow{\mathcal S}$ introduced in this paper reduces the continuity equation (\[hjcb\]) to the standard form given by equation (\[hjc1b\]) and can be seen as a first step in the Calogero program.\
Secondly, the most general gauge transformation of the kind discussed in the present work can be stated as $$\psi(t,\,{{\mbox{\boldmath${x}$}}})\to\phi(t,\,{{\mbox{\boldmath${x}$}}})={\mathcal
U}[\rho,\,S]\,\psi(t,\,{{\mbox{\boldmath${x}$}}}) \ ,\label{trtrr}$$ which is an infinite dimensional unitary representation of the diffeomorphism group with $${\mathcal U}[\rho,\,S]=\exp\Big(i\,\omega[\rho,\,S]\Big) \
.\label{trtr}$$ As a matter of fact, the real generator $\omega[\rho,\,S]$ could be any arbitrary functional depending on the fields $\rho$ and $S$.\
For instance, in [@Doebner4] the generator of the transformation has been assumed to be $$\omega(\rho,\,S)=\frac{\gamma(t)}{2}\,\log{\rho}+
(\lambda(t)-1)\,S+\theta(t,\,{{\mbox{\boldmath${x}$}}}) \ ,\label{dbt}$$ which produces a group of transformations mapping the Doebner-Goldin equation into itself. We observe that the one-parameter subclass of this transformation with $\theta(t,{{\mbox{\boldmath${x}$}}})=0,\,\lambda(t)=1$ and $\gamma(t)={\rm constant}$ coincides with the transformation studied in this work.\
Throughout this paper, the generator of the gauge transformation has been chosen with the purpose of making the complex nonlinearity of the NLSE under inspection real. Alternatively, nonlinear gauge transformations can be usefully generalized in order to classify NLSEs into equivalence classes. Any equation belonging to a given class, regardless of its nonlinearity, is gauge equivalent, by means of equation (\[trtr\]), to the other equations of the same class.\
For instance, let us consider the following family of NLSEs $$i\,\frac{\partial\psi}{\partial
t}+\Delta\psi+\Lambda[\rho,\,S]\,\psi=0 \ ,\label{eqeq}$$ with complex nonlinearity $$\begin{aligned}
\nonumber \Lambda[\rho,\,S]&=&f_{_1}(\rho)\,\Delta
S+f_{_2}(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho\cdot{{\mbox{\boldmath${\nabla}$}}}S+f_{_3}(\rho)\,({{\mbox{\boldmath${\nabla}$}}}\rho)^2
\\&+&f_{_4}(\rho)\,\Delta\rho+{i\over\rho}\,{{\mbox{\boldmath${\nabla}$}}}\Big(f_{_5}(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho\Big)
\ ,\label{cnn}\end{aligned}$$ where the expression of the imaginary part guarantees the existence of a continuity equation for $\rho$.\
The quantities $f_{_i}(\rho)$ are functional parameters fixing the NLSE. Any NLSE belonging to the family of equations (\[eqeq\]) can be determined univocally through the vector $\vec
f\equiv\{f_{_1}(\rho),\,\ldots,\,f_{_5}(\rho)\}$. By performing a gauge transformation with generator $\omega(\rho)$ depending only on the field $\rho$, equation (\[eqeq\]) changes into $$i\,\frac{\partial\phi}{\partial
t}+\Delta\phi+\widetilde\Lambda[\rho,\,{\mathcal S}]\,\phi=0 \
,\label{cnnnn}$$ with $$\begin{aligned}
\widetilde\Lambda[\rho,\,{\mathcal S}]&=&\tilde f_{_1}(\rho)\,\Delta {\mathcal
S}+\tilde f_{_2}(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho\cdot{{\mbox{\boldmath${\nabla}$}}}{\mathcal
S}+\tilde f_{_3}(\rho)\,({{\mbox{\boldmath${\nabla}$}}}\rho)^2\\ &+&\tilde
f_{_4}(\rho)\,\Delta\rho+{i\over\rho}\,{{\mbox{\boldmath${\nabla}$}}}\Big(\tilde
f_{_5}(\rho)\,{{\mbox{\boldmath${\nabla}$}}}\rho\Big) \ .\label{cn}\end{aligned}$$ It is important to note that the transformation preserves the structure of the nonlinearity through the presence of the functional groups $\Delta
S,\,{{\mbox{\boldmath${\nabla}$}}}S\cdot{{\mbox{\boldmath${\nabla}$}}}\rho,\,({{\mbox{\boldmath${\nabla}$}}}\rho)^2$ and $\Delta\rho$, whilst the expressions of the new parameters $\tilde
f_{_i}(\rho)$ are given by $$\begin{aligned}
\nonumber
&&\widetilde f_{_1}=f_{_1}-2\,\rho\,\frac{\partial\omega}{\partial\rho} \ ,\\
\nonumber
&&\widetilde f_{_2}=f_{_2} \ ,\\ \nonumber&&\widetilde
f_{_3}=f_{_3}-\left[f_{_2}-\frac{\partial\omega}{\partial\rho}
+2\,\frac{\partial
f_{_5}}{\partial\rho}+\left(f_{_1}-2\,\rho\,\frac{\partial\omega}{\partial\rho}\right)
\,\frac{\partial}{\partial\rho}\right]\frac{\partial\omega}{\partial\rho}
\ ,\\
\nonumber
&&\widetilde f_{_4}=f_{_4}-\left(f_{_1}
+2\,f_{_5}-2\,\rho\,\frac{\partial\omega}{\partial\rho}\right)\frac{\partial\omega}{\partial\rho}
\ ,\\
&&\widetilde f_{_5}=f_{_5}-\rho\,\frac{\partial\omega}{\partial\rho}
\ .\end{aligned}$$ By eliminating $\omega(\rho)$ among these equations we obtain a set of gauge-invariant relations $$\begin{aligned}
\nonumber
&&\widetilde f_{_1}-f_{_1}=2\,\Big(\widetilde
f_{_5}-f_{_5}\Big) \ ,\label{1}\\
\nonumber
&&\widetilde f_{_3}-f_{_3}={1\over\rho}\left[f_{_2}+
{1\over\rho}\,\Big(\widetilde
f_{_5}-f_{_5}-f_{_1}\Big)+2\,\frac{\partial\widetilde
f_{_5}}{\partial\rho}
+f_{_1}\,\frac{\partial}{\partial\rho}\right]\Big(\widetilde
f_{_5}-f_{_5}\Big) \ ,\\
&&\widetilde f_{_4}-f_{_4}={1\over\rho}\,\Big(f_{1}+2\,\widetilde
f_{_5}\Big)\,\Big(\widetilde f_{_5}-f_{_5}\Big) \ .\label{3}\end{aligned}$$ Given two NLSEs belonging to the family (\[eqeq\]), labeled by the respective vectors $\vec
f\equiv\{f_{_1}(\rho),\,\ldots,\,f_{_5}(\rho)\}$ and $\vec
f^\prime\equiv\{\tilde f_{_1}(\rho),\,\ldots,\,\tilde
f_{_5}(\rho)\}$, if the functionals $f_{_i}(\rho)$ and $\tilde
f_{_i}(\rho)$ fulfil the relations (\[3\]), the two NLSEs are gauge equivalent since there exists a generator $\omega(\rho)$ which, by means of equation (\[trtrr\]), transforms the first NLSE, labeled by the vector $\vec f$, into the second NLSE, labeled by the vector $\vec f^\prime$.\
In particular, observing that the linear Schrödinger equation is represented by the vector $\vec f\equiv\{0,\,0,\,0,\,0,\,0\}$, it follows that any NLSE fulfilling the relations $$\begin{aligned}
\nonumber
&&f_{_1}=2\,f_{_5} \ ,\label{11}\\
\nonumber
&&f_{_2}=0 \ ,\\
\nonumber
&&f_{_3}={f_{_5}\over\rho}\left(2\,\frac{\partial
f_{_5}}{\partial\rho}-{f_{_5}\over\rho}\right) \
,\\
&&f_{_4}=2\,{f_{_5}^2\over\rho} \ ,\label{14}\end{aligned}$$ is gauge equivalent to the linear Schrödinger equation. This sub-family can be linearized by means of the transformation (\[trtrr\]) with generator $\omega(\rho)=\int^\rho(f_{_5}(\rho^\prime)/\rho^\prime)\,d\rho^\prime$. In this sense, equations (\[14\]) define the subclass of the family of equations (\[eqeq\]) which are $C$-integrable.\
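As a consistency check (not carried out explicitly above), the relations (\[14\]) can be verified symbolically: substituting the sub-family together with $\partial\omega/\partial\rho=f_{_5}(\rho)/\rho$ into the expressions for the $\tilde f_{_i}$ makes all of them vanish. A minimal sketch in Python/SymPy, where `f5` is an arbitrary symbolic function standing for $f_{_5}(\rho)$:

```python
# Symbolic verification (illustrative only) that the sub-family defined by the
# relations above is gauge equivalent to the linear Schroedinger equation:
# with omega'(rho) = f5(rho)/rho all transformed parameters vanish.
import sympy as sp

rho = sp.symbols('rho', positive=True)
f5 = sp.Function('f5')(rho)              # arbitrary functional parameter f_5(rho)

omega_p = f5 / rho                       # d(omega)/d(rho)
omega_pp = sp.diff(omega_p, rho)

# the sub-family assumed to be linearizable
f1 = 2 * f5
f2 = sp.Integer(0)
f3 = (f5 / rho) * (2 * sp.diff(f5, rho) - f5 / rho)
f4 = 2 * f5**2 / rho

# gauge-transformed parameters; the operator d/drho in the f3-relation acts on omega'
ft1 = f1 - 2 * rho * omega_p
ft2 = f2
ft3 = f3 - ((f2 - omega_p + 2 * sp.diff(f5, rho)) * omega_p
            + (f1 - 2 * rho * omega_p) * omega_pp)
ft4 = f4 - (f1 + 2 * f5 - 2 * rho * omega_p) * omega_p
ft5 = f5 - rho * omega_p

print([sp.simplify(expr) for expr in (ft1, ft2, ft3, ft4, ft5)])   # -> [0, 0, 0, 0, 0]
```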
In conclusion, we have shown that the transformation introduced in the present work allows us to deal, within a unifying scheme, with different NLSEs already known in the literature, recovering in a systematic way the transformations introduced by various authors.\
A natural continuation of this work could be performed in several ways:\
1) Extending the method to the case of NLSEs coupled with non-Abelian gauge fields, which are relevant, for instance, in the study of heavy-quark particle systems.\
2) Extending the method to relativistic nonlinear equations. In this context, in [@Doebner5], a relativistic generalization of the transformation introduced in [@Doebner4] has been proposed to generate nonlinear extensions of the Dirac equation.\
3) Extending the method to discrete NLSEs, which are particularly relevant in the study of lattice models in condensed matter.
Ablowitz M.J., Benney D.J.: Evolution of multi-phase modes for nonlinear dispersive waves, Stud. Appl. Math. [**49**]{}, 225–238 (1979).
Aglietti U., Griguolo L., Jackiw R., Pi S.-Y., Seminara D.: Anyons and chiral solitons on a line, Phys. Rev. Lett. [**77**]{}, 4406–4409 (1996).
Agrawal G.P.: Modulation instability induced by cross-phase modulation, Phys. Rev. Lett. [**59**]{}, 880–883 (1987).
Barashenkov I., Harin A.: Nonrelativistic Chern-Simons theory for the repulsive Bose gas, Phys. Rev. Lett. [**72**]{}, 1575–1579 (1994).
Berkhoer A.L., Zakharov V.E.: Self excitation of waves with different polarizations in nonlinear media, Zh. Eksp. Teor. Fiz. [**58**]{}, 903–911 (1970); \[Sov. Phys. JETP [**31**]{}, 486–490 (1970)\].
Bialynicki-Birula I., Mycielski J.: Nonlinear wave mechanics, Ann. Phys. (N.Y.) [**100**]{}, 62–93 (1976).
Bohm D.: A suggested interpretation of the quantum theory in terms of “hidden” variables, Phys. Rev. [**85**]{}, 166–193 (1951).
Calogero F., Degasperis A., De Lillo S.: The multicomponent Eckhaus equation, J. Phys. A: Math. Gen. [**30**]{}, 5805–5814 (1997).
Calogero F.: Universal C-integrable nonlinear partial-differential equation in $n+1$ dimensions, J. Math. Phys. , 3197–3209 (1993).
Calogero F.: C-integrable nonlinear partial-differential equations in $n+1$ dimensions, J. Math. Phys. , 1257–1271 (1992).
Calogero F., Xiaoda J.: C-integrable nonlinear PDES .2, J. Math. Phys. [**32**]{}, 875–887 (1991).
Calogero F., Xiaoda J.: C-integrable nonlinear PDES .2, J. Math. Phys. [**32**]{}, 2703–2717 (1991).
Calogero F., De Lillo S.: The Eckhaus PDE $i\,\psi_t+\psi_{xx}+2\,(|\psi|^2)_x\,\psi+|\psi|^4=0$, Inv. Problems [**3**]{}, 633–681 (1987) (Corrigendum), Inv. Problems [**4**]{}, 571 (1988).
Chen H.H., Lee Y.C., Liu C.S: Integrability of non-linear Hamiltonian-systems by inverse scattering method, Phys. Scr. [**20**]{}, 490–492 (1979).
Dodonov V.V., Mizrahi S.S.: Generalized nonlinear Doebner-Goldin Schrödinger equation and the relaxation of quantum-systems, Physica A [**214**]{}, 619–628 (1995).
Doebner H.-D., Zhdanov R.: Nonlinear Dirac equations and nonlinear gauge transformations, (2003); arXiv:quant-ph/0304167.
Doebner H.-D., Goldin G.A., Nettermann P.: Properties of nonlinear Schrödinger equations associated with diffeomorphism group-representations, J. Math. Phys. [**40**]{}, 49 (1999).
Doebner H.-D., Goldin G.A.: Introducing nonlinear gauge transformations in a family of nonlinear Schrödinger equations, Phys. Rev. A [**54**]{}, 3764–3771 (1996).
Doebner H.-D., Goldin G.A.: Properties of nonlinear Schrödinger-equations associated with diffeomorphism group-representations, J. Phys. A: Math. Gen. [**27**]{}, 1771-1780 (1994).
Doebner H.-D., Goldin G.A.: On a general nonlinear Schrödinger equation admitting diffusion currents, Phys. Lett. A [**162**]{}, 397–401 (1992).
Fermi E.: Rend. Accad. Naz. Lincei [**5**]{}, 795 (1955).
Feynmann R.P., Hibbs A.R.: Quantum Mechanics and Path Integrals, McGraw-Hill, New-York, (1965).
Florjańczyk M., Gagnon L.: Dispersive-type solutions for the Eckhaus equation, Phys. Rev. A [**45**]{}, 6881–6883 (1992).
Florjańczyk M., Gagnon L.: Exact-solutions for a higher-order nonlinear Schrödinger equation, Phys. Rev. A [**41**]{}, 4478–4485 (1990).
Fordy A.P.: Derivative nonlinear Schrödinger equations and hermitian symmetric-spaces, J. Phys. A, Math. Gen. [**17**]{}, 1235–1245 (1984).
Gedalin M., Scott T.C., Band Y.B.: Optical solitary waves in the higher order nonlinear Schrödinger equation, Phys. Rev. Lett. [**78**]{}, 448–451 (1997).
Ginzburg V., Pitaevskii L.: On the theory of superfluidity, Zh. Eksp. Theor. Fiz. [**34**]{}, 1240–1245 (1958); \[Sov. Phys. JETP [**7**]{}, 858–861 (1958)\].
Gisin N.: Microscopic derivation of a class of non-linear dissipative Schrödinger-like equations, Physica A [**111**]{}, 364–370 (1982).
Goldin G.A.: The diffeomorphism group-approach to nonlinear quantum-systems, Int. J. Mod. Phys. B [**6**]{}, 1905–1916 (1992).
Goldin G.A., Menikoff R., Sharp D.H.: Diffeomorphism-groups, gauge groups, and quantum-theory, Phys. Rev. Lett. [**51**]{}, 2246–2249 (1983).
Grigorenko A.N.: Measurement description by means of a nonlinear Schrödinger equation, J. Phys. A: Math. Gen. [**28**]{}, 1459–1466 (1995).
Gross E.P.: Hydrodynamics of a superfluid condensate, J. Math. Phys. [**4**]{}, 195–207 (1963).
Gross E.P.: Structure of a quantized vortex in boson systems, Nuovo Cimento [**20**]{}, 454–477 (1961).
Guerra F., Pusterla M.: A nonlinear Schrödinger equation and its relativistic generalization from basic principles, Lett. Nuovo Cimento [**34**]{}, 351–356 (1982).
Hacinliyan I., Erbay S.: Coupled quintic nonlinear Schrödinger equations in a generalized elastic solid, J. Phys. A: Math. Gen. [**37**]{}, 9387–9401 (2004).
Hasegawa A., Kodama Y.: Solitons in optical communication, Oxford University Press, New York, (1995).
Hasegawa A., Tappert F.D.: Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers. I. Anomalous dispersion , Appl. Phys. Lett. [**23**]{}, 142–144 (1973).
Hisakado M., Wadati M.: Integrable multicomponent hybrid nonlinear Schrödinger-equations, J. Phys. Soc. Jpn. [**64**]{}, 408–413 (1995).
Hisakado M., Iizuka T., Wadati M.: Coupled hybrid nonlinear Schrödinger equation and optical solitons, J. Phys. Soc. Jpn. [**63**]{}, 2887–2894 (1994).
Ho T.-L.: Spinor Bose condensates in optical traps, Phys. Rev. Lett. [**81**]{}, 742–745 (1998).
Jackiw R.: A nonrelativistic chiral soliton in one dimension, J. Nonlinear Math. Phys. , 261–270 (1997).
Jackiw R., Pi S.-Y.: Self-dual Chern-Simons solitons, Prog. Theor. Phys. Suppl. [**107**]{}, 1–40 (1992).
Jackiw R., Pi S.-Y.: Classical and quantal nonrelativistic Chern-Simons theory, Phys. Rev. D [**42**]{}, 3500–3513 (1990) (Corrigendum), Phys. Rev. D [**42**]{}, 3929–3929 (1993).
Karpman V.I., Rasmussen J.J., Shagolov A.G.: Dynamics of solitons and quasisolitons of the cubic third-order nonlinear Schrödinger equation, Phys. Rev. E [**64**]{}, 026614-13 (2001).
Karpman V.I., Shagalov A.G.: Evolution of solitons described by the higher-order nonlinear Schrödinger equation. II. Numerical investigation, Phys. Lett. A [**254**]{}, 319–324 (1999).
Karpman V.I.: Evolution of solitons described by higher-order nonlinear Schrödinger equations, Phys. Lett. A [**244**]{}, 397–400 (1998).
Karpman V.I.: Radiation by solitons due to higher-order dispersion, Phys. Rev. E [**47**]{}, 2073–2082 (1993).
Kaniadakis G., Scarfone A.M.: Nonlinear Schrödinger equations within the Nelson quantization picture, Rep. Math. Phys. [**51**]{}, 225–231 (2003).
Kaniadakis G., Miraldi E., Scarfone A.M.: Cole-Hopf like transformation for a class of coupled nonlinear Schrödinger equations, Rep. Math. Phys. [**49**]{}, 203–209 (2002).
Kaniadakis G., Scarfone A.M.: Cole-Hopf-like transformation for Schrödinger equations containing complex nonlinearities, J. Phys. A: Math. Gen. [**35**]{}, 1943–1959 (2002).
Kaniadakis G., Scarfone A.M.: Nonlinear transformation for a class of gauged Schrödinger equations with complex nonlinearities, Rep. Math. Phys. [**48**]{}, 115–121 (2001).
Kaniadakis G., Scarfone A.M.: Nonlinear gauge transformation for a class of Schrödinger equations containing complex nonlinearities, Rep. Math. Phys. [**46**]{}, 113–118 (2000).
Kaniadakis G., Quarati P., Scarfone A.M.: Soliton-like behavior of a canonical quantum system obeying an exclusion-inclusion principle, Physica A [**255**]{}, 474–482 (1998).
Kaniadakis G., Quarati P., Scarfone A.M.: Nonlinear canonical quantum system of collectively interacting particles via an exclusion-inclusion principle, Phys. Rev. E [**58**]{}, 5574–5585 (1998).
Kaper H.G., Takáč P.: Ginzburg-Landau dynamics with a time-dependent magnetic field, Nonlinearity [**11**]{}, 291–305 (1998).
Kaup D.J., Newell A.C.: Exact solution for a derivative non-linear Schrödinger equation, J. Math. Phys. [**19**]{}, 798–801 (1978).
Kostin M.D.: Friction and dissipative phenomena in quantum mechanics, J. Stat. Phys. [**12**]{}, 145-151 (1975).
Kostin M.D.: On the Schrödinger-Langevin equation, J. Chem. Phys. [**57**]{}, 3589–3591 (1973).
Kundu A.: Comments on the Eckhaus PDE $i\,\psi_t+\psi_{xx}+2\,(|\psi|^2)_x\,\psi+|\psi|^4=0$, Inv. Problems, [**4**]{}, 1143–1144 (1988).
Kundu A.: Landau-Lifshitz and higher-order nonlinear-systems gauge generated from nonlinear Schrödinger type equations, J. Math. Phys. [**25**]{}, 3433–3438 (1984).
Li Z., Li L., Tian H., Zhou G.: New types of solitary wave solutions for the higher order nonlinear Schrödinger equation, Phys. Rev. Lett. [**84**]{}, 4096–4099 (2000).
Madelung E.: Quantum theory in hydrodynamical form, Z. Phys. [**40**]{}, 332–336 (1926).
Mahalingam A., Porsezian K.: Propagation of dark solitons in a system of coupled higher-order nonlinear Schrödinger equations, J. Phys. A: Math. Gen. [**35**]{}, 3099–3109 (2002).
Malomed B.A., Stenflo L.: Modulational instabilities and soliton-solutions of a generalized nonlinear Schrödinger equation, J. Phys. A: Math. Gen. [**24**]{}, L1149–1153 (1991).
Malomed B.A.: Bound solitons in the nonlinear Schrödinger-Ginzburg-Landau equation, Phys. Rev. A [**44**]{}, 6954–6957 (1991).
Malomed B.A., Nepomnyashchy A.A.: Kinks and solitons in the generalized Ginzburg-Landau equation, Phys. Rev. A [**42**]{}, 6009–6014 (1990).
Malomed B.A.: Evolution of nonsoliton and quasi-classical wavetrains in nonlinear Schrödinger and Korteweg-Devries equations with dissipative perturbations, Physica D [**29**]{}, 155–172 (1987).
Manakov S.V.: On the theory of two-dimensional stationary self-focusing of electromagnetic waves, Zh. Éksp. Teor. Fiz. [**65**]{}, 505–516 (1973); \[Sov. Phys. JETP [**38**]{}, 248–253 (1974)\].
Martina L., Soliani G., Winternitz P.: Partially invariant solutions of a class of nonlinear Schrödinger equations, J. Phys. A: Math. Gen. [**25**]{}, 4425–4435 (1992).
Matthews M.R., Anderson B.P., Haljan P.C., Hall D.S., Holland M.J., Williams J.E., Wieman C.E., Cornell E.A.: Watching a superfluid untwist itself: Recurrence of rabi oscillations in a Bose-Einstein condensate, Phys. Rev. Lett. [**83**]{}, 3358–3361 (1999).
Mollenauer L.F., Stolen R.H., Gordon J.P.: Experimental-observation of picosecond pulse narrowing and solitons in optical fibers, Phys. Rev. Lett. [**45**]{}, 1095–1098 (1980).
Nakkeeran K.: Exact dark soliton solutions for a family of $N$ coupled nonlinear Schrödinger equations in optical fiber media, Phys. Rev. E [**64**]{}, 046611-7 (2001).
Nakkeeran K.: On the integrability of the extended nonlinear Schrödinger equation and the coupled extended nonlinear Schrödinger equations, J. Phys. A: Math. Gen. [**33**]{}, 3947–3949 (2000).
Nakkeeran K.: Exact soliton solutions for a family of $N$ coupled nonlinear Schrödinger equations in optical fiber media, Phys. Rev. E [**62**]{}, 1313–1321 (2000).
Newboult G.K., Parker D.F., Faulkner T.R.: Coupled nonlinear Schrödinger equations arising in the study of monomode step-index optical fibers, J. Math Phys. [**30**]{}, 930–936 (1989).
Noether E.: Invariante Variationsprobleme, Nachr. d. König. Gesellsch. d. Wiss. zu Göttingen, Math-phys. Klasse, 235 (1918) (English translation by Travel M.A.: Transport Theory and Statistical Physics 1(3), 183 (1971)).
Olver P.J.: Applications of Lie Groups to Differential Equations, Springer, New York, (1986).
Pitaevskii L.P.: Vortex lines in an imperfect Bose gas, Zh. Eksp. Teor. Fiz. [**40**]{}, 646–651 (1961); \[Sov. Phys. JETP [**13**]{}, 451–454 (1961)\].
Radhakrishnan R., Kundu A., Lakshmanan M.: Coupled nonlinear Schrödinger equations with cubic-quintic nonlinearity: Integrability and soliton interaction in non-Kerr media, Phys. Rev. E [**60**]{}, 3314–3323 (1999).
Ryskin N.M.: Schrödinger bound nonlinear equations for the description of multifrequency wave packages distribution in nonlinear medium with dispersion, Zh. Eksp. Teor. Fiz. [**106**]{}, 1542–1546 (1994); \[Sov. Phys. JETP [**79**]{}, 833–834 (1994)\].
Sakovich S.Y., Tsuchida T.: Symmetrically coupled higher-order nonlinear Schrödinger equations: singularity analysis and integrability, J. Phys. A: Math. Gen. [**33**]{}, 7217–7226 (2000).
Scarfone A.M.: Stochastic quantization of an interacting classical particle system, J. Stat. Mech.: Theo. Exp. P03012+16 (2007).
Scarfone A.M.: Canonical quantization of classical systems with generalized entropies, Rep. Math. Phys. [**55**]{}, 169-177 (2005).
Scarfone A.M.: Canonical quantization of nonlinear many-body systems, Phys. Rev. E [**71**]{}, 051103-15 (2005).
Scarfone A.M.: Gauge transformation of the third kind for U(1)-invariant coupled Schrödinger equations, J. Phys. A: Math. Gen. [**38**]{}, 7037–7050 (2005).
Schuch D.: Nonunitary connection between explicitly time-dependent and nonlinear approaches for the description of dissipative quantum systems, Phys. Rev. A [**53**]{}, 945–940 (1997).
Schuch D., Chung K.-M., Hartmann H.: Nonlinear Schrödinger-type field equation for the description of dissipative systems 3. Frictionally damped free motion as an example for an aperiodic motion, J. Math. Phys. [**25**]{}, 3086–3092 (1984).
Shchesnovich V.S., Doktorov E.V.: Perturbation theory for the modified nonlinear Schrödinger solitons, Physica D [**129**]{}, 115–129 (1999).
Shi H., Zheng W.-M.: Bose-Einstein condensation in an atomic gas with attractive interactions, Phys. Rev. A [**55**]{}, 2930–2934 (1997).
Stratopoulos G.N., Tomaras T.N.: Vortex pairs in charged fluids, Phys. Rev. B [**54**]{}, 12493–12504 (1996).
Stringari S.: Collective excitations of a trapped Bose condensed gas, Phys. Rev. Lett. [**77**]{}, 2360–2363 (1996).
Tsuchida T., Wadati M.: Complete integrability of derivative nonlinear Schrödinger-type equations, Inv. Problems [**15**]{}, 1363–1373 (1999).
Tsuchida T., Wadati M.: New integrable systems of derivative nonlinear Schrödinger equations with multiple components, Phys. Lett. A [**257**]{}, 53–64 (1999).
Vinoj M.N., Kuriakose V.C.: Multisoliton solutions and integrability aspects of coupled higher-order nonlinear Schrödinger equations, Phys. Rev. E [**62**]{}, 8719–8725 (2000).
Weinberg S.: Precision tests of quantum mechanics, Phys. Rev. Lett. [**62**]{}, 485–488 (1989).
Weinberg S.: Testing quantum mechanics, Ann. Phys. [**194**]{}, 336–386 (1989).
Weinberg S.: Understanding the Fundamental Constituents of Matter, A. Zichichi (ed.), Plenum, New York, (1978).
Wilczek F.: Fractional Statistics and Anyon Superconductivity, World Scientific, Singapore, (1990).
Wilczek F.: Magnetic flux, angular momentum, and statistics, Phys. Rev. Lett. [**48**]{}, 1144–1146 (1982).
Yip S.-K.: Internal vortex structure of a trapped spinor Bose-Einstein condensate, Phys. Rev. Lett. [**83**]{}, 4677–4681 (1999).
---
abstract: 'We present a new theory that takes internal dynamics of proteins into account to describe forced-unfolding and force-quench refolding in single molecule experiments. In the current experimental setup (Atomic Force Microscopy or Laser Optical Tweezers) the distribution of unfolding times, $P(t)$, is measured by applying a constant stretching force ${\bf f}_{S}$ from which the apparent ${\bf f}_{S}$-dependent unfolding rate is obtained. Describing the complexity of the underlying energy landscape requires additional probes that can incorporate the dynamics of tension propagation and relaxation of the polypeptide chain upon force quench. We introduce a theory of force correlation spectroscopy (FCS) to map the parameters of the energy landscape of proteins. In the FCS the joint distribution $P(T,t)$ of folding and unfolding times is constructed by repeated application of cycles of stretching at constant ${\bf f}_{S}$ separated by release periods $T$ during which the force is quenched to ${\bf f}_{Q}$$<$${\bf f}_{S}$. During the release period, the protein can collapse to a manifold of compact states or refold. We show that $P(T,t)$ at various ${\bf f}_{S}$ and ${\bf f}_{Q}$ values can be used to resolve the kinetics of unfolding as well as the formation of native contacts. We also present methods to extract the parameters of the energy landscape using chain extension as the reaction coordinate and $P(T,t)$. The theory and a worm-like chain model for the unfolded states allow us to obtain the persistence length $l_p$ and the ${\bf f}_{Q}$-dependent relaxation time in the coil states of the polypeptide chain, which gives an estimate of the collapse timescale at the single-molecule level. Thus, a more complete description of the landscape of protein native interactions can be mapped out if unfolding time data are collected at several values of ${\bf f}_{S}$ and ${\bf f}_{Q}$. We illustrate the utility of the proposed formalism by analyzing simulations of unfolding-refolding trajectories of a coarse-grained protein ($S1$) with $\beta$-sheet architecture for several values of ${\bf f}_{S}$, $T$ and ${\bf f}_{Q}$$=$$0$. The simulations of stretch-relax trajectories are used to map many of the parameters that characterize the energy landscape of $S1$.'
author:
- 'V. Barsegov$^1$, D. K. Klimov$^3$ and D. Thirumalai$^{1,2}$'
title: 'Mapping the energy landscape of biomolecules using single molecule force correlation spectroscopy (FCS): Theory and applications'
---
[^1]
**INTRODUCTION**
================
Several biological functions are triggered by mechanical force. These include stretching and contraction of muscle proteins such as titin [@1; @2], rolling and tethering of cell adhesion molecules [@3; @4; @4new; @4a; @4b], translocation of proteins across membranes [@5; @5a; @6; @6a], and unfoldase activity of chaperonins and proteasomes. Understanding these diverse functions requires probing the response of biomolecules to applied external tension. Dynamical responses to mechanical force can be used to characterize in detail the free energy landscape of biomolecules. Advances in manipulating micron-sized beads attached to single biomolecules have made it possible to stretch, twist, unfold and even unbind proteins using forces on the order of tens of piconewtons [@7; @7a; @7b]. Single molecule force spectroscopy on a number of different systems has allowed us to obtain a glimpse of the unbinding energy landscape of biomolecules and protein-protein complexes [@8; @9; @10; @10a]. In AFM experiments, used to unfold proteins by force, one end of a protein is adsorbed on a template and a constant or a time-dependent pulling force is applied to the other terminus [@11; @12; @12a; @12b; @13; @13a; @14]. By measuring the distribution of forces required to completely unfold proteins and the associated unfolding times, the global parameters of the protein energy landscape can be estimated [@15; @16; @17; @18; @18a; @19]. These insightful experiments when combined with theoretical studies [@19a; @19b; @19c] can give an unprecedented picture of forced-unfolding pathways.
Current experiments have been designed primarily to obtain information on forced-unfolding of proteins and do not probe the reverse folding process. Although force-clamp AFM techniques have been used recently to probe (re)folding of single ubiquitin polyprotein [@13a], the lack of theoretical approaches has made it difficult to interpret these pioneering experiments [@19e; @19f]. Secondly, the resolution of multiple timescales in protein folding and refolding requires not only novel experimental tools for single molecule experiments but also new theoretical analysis methods. Minimally, unfolding of proteins by a stretching force ${\bf f}_{S}$ is described by the global unfolding time $\tau_U({\bf f}_{S})$, timescales for propagation of the applied tension, and the dynamics describing the intermediates or “protein coil” states. Finally, if the external conditions (loading rate or the magnitude of ${\bf f}_{S}$) are such that these processes can occur on similar timescales then the analysis of the data requires new theoretical ideas.
For forced unfolding the variable conjugate to ${\bf f}_{S}$, namely the protein end-to-end distance $X$, is a natural reaction coordinate. However, $X$ is not appropriate for describing protein refolding which, due to substantial variations in the duration of folding barrier crossing, may range from milliseconds to a few minutes. To obtain statistically [*meaningful*]{} distributions of unfolding times, a large number of [*complete*]{} unfolding trajectories must be recorded, which requires repeated application of the pulling force. The inherent heterogeneity in the duration of folding and the lack of correlation between the evolution of $X$ and (re)folding progress create “initial state ambiguity” when force is repeatedly applied to the same molecule. As a result, the interpretation of unfolding time data is complicated, especially when the conditions are such that the reverse folding process at the quenched force ${\bf f}_{Q}$ can occur on a long timescale, $\tau_F({\bf f}_{Q})$.
Motivated by the need to assess the effect of the multiple timescales on the energy landscape of folding and unfolding, we develop a new theoretical formalism to describe correlations between the various dynamical processes. Our theory leads naturally to a new class of single molecule force experiments, namely, the force correlation spectroscopy (FCS) which can be used to study both forced unfolding as well as force-quenched (re)folding. Such studies can lead to a more detailed information on both kinetic and dynamic events underlying unfolding and refolding. In the FCS, cycles of stretching (${\bf f}_{S}$) are separated by periods $T$ of quenched force ${\bf f}_{Q}$$<$${\bf f}_{S}$ during which the stretched protein can relax from its unfolded state $X_U$ to coil state $X_C$ or even (re)fold to the native basin of attraction (NBA) state. The two experimental observables are $X$ and the unfolding time $t$. The central quantity in the FCS is the distribution of unfolding times $P(T,t)$ separated by recoil or refolding events of duration $T$. The higher order statistical measure embedded in $P(T,t)$ is readily accessible by constructing a histogram of unfolding times for varying $T$ and does not require additional technical developments. The crucial element in the proposed analysis is that $P(T,t)$ is computed by [*averaging over final (unfolded) states*]{}, rather than initial (folded) states. This procedure removes the potential ambiguity of not precisely knowing the initial distribution of conformations in the NBA. Despite the uniqueness of the native state there are a number of conformations in the NBA that reflect the fluctuations of the folded state. The proposed formalism is a natural extension of unbinding time data analysis. Indeed, $P(T,t)$ reduces to the standard distribution of unfolding times $P(t)$ when $T$ exceeds protein (re)folding timescale $\tau_F({\bf f}_{Q})$.
The complexity of the energy landscape of proteins demands FCS and the accompanying theoretical analysis. Current single molecule experiments on poly-Ub or poly-Ig27 (performed in the $T$$\to$$\infty$ regime) show that in these systems unfolding occurs abruptly in an apparent all-or-none manner or through a dominant intermediate [@19a]. On the other hand, refolding upon force-quench is complex and surely occurs through an ensemble of collapsed coiled states [@13a]. A number of timescales characterize the stretch-release experiments. These include, besides $\tau_F({\bf f}_{Q})$, the ${\bf f}_{S}$-dependent unfolding time, and the relaxation dynamics in the coiled states $\{ C \}$ upon force-quench $\tau_d({\bf f}_{Q})$. In addition, if we assume that $X$ is an appropriate reaction coordinate, then the location of the NBA, $\{ C \}$, the transition state ensembles and the associated widths are required for a complete characterization of the underlying energy landscape. Most of these parameters can be extracted using the proposed FCS experiments and the theoretical analysis presented here.
In a preliminary study [@PRL], we reported the basics of the theory, which was used to propose a new class of single molecule force spectroscopy methods for deciphering protein-protein interactions. The current paper is devoted to further developments in the theory with application to forced-unfolding and force-quench refolding of proteins. In particular, we illustrate the efficacy of the FCS by analyzing single unfolding-refolding trajectories generated for a coarse-grained model (CGM) protein $S1$ with $\beta$-sheet architecture [@19new; @20old]. We showed previously that forced-unraveling of $S1$, in the limit of $T$$\to$$\infty$, can be described by apparent “two-state” kinetics [@20; @21]. The thermodynamics and kinetics observed in $S1$ are characteristic of a number of proteins whose folding/unfolding is well described by two-state behavior [@21new]. Thus, $S1$ serves as a useful model to illustrate the efficacy of the FCS. Here, we show that by varying $T$ and the magnitude of the applied force (${\bf f}_{S}$ or ${\bf f}_{Q}$), the entire set of dynamical processes, from the NBA to the fully stretched state, can be resolved. In the process we establish that $P(T,t)$, which can be measured using AFM or laser optical tweezer (LOT) experiments, provides a convenient way of characterizing the energy landscape of biomolecules in detail.
**Models and Methods:**
=======================
[*Theory of force correlation spectroscopy (FCS):*]{} In single molecule atomic force microscopy (AFM) experiments used to unfold proteins by force, the N-terminus of a protein is anchored at the surface and the C-terminus is attached to the cantilever tip through a polymer linker (Figure 1). The molecule is stretched by displacing the cantilever tip and the resulting force is measured. From a theoretical perspective it is more convenient to envision applying a constant stretching force ${\bf f}_{S}$$=$$f_S$${\bf x}$ in the ${\bf x}$-direction (Figure 1). The free energy in the constant force formulation is related to the experimental setup by a Legendre transformation. More recently, it has become possible to apply a constant force in AFM or laser or optical tweezer (LOT) experiments to the ends of a protein. With this setup the unfolding time for the end-to-end distance $X$ to reach the contour length $L$ can be measured for each molecule. For a fixed ${\bf f}_{S}$, repeated application of the pulling force results in a single trajectory of unfolding times ($t_1$, $t_2$, $t_3$, $\ldots$, Figure 1) from which the histogram of unfolding times $P(t)$ is obtained. The ${\bf f}_{S}$-dependent unfolding rate $K_U$ is obtained by fitting a Poissonian formula $K_U^{-1}\exp{[-K_U t]}$ to the kinetics of population of folded states $p_F$ which is related to $P(t)$ as $p_F(t)$$=$$1$$-$$\int_0^t ds P(s)$.
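As an aside, the simplest version of this fitting step can be sketched numerically; the fragment below is only an illustration (it is not part of the experimental protocol described here), assumes single-exponential kinetics, and uses synthetic unfolding times in place of a measured record.

```python
# Illustrative sketch (not from the paper): estimating the apparent unfolding
# rate K_U from unfolding times recorded at fixed f_S, assuming
# single-exponential ("Poissonian") kinetics; synthetic data stand in for
# measured times t_1, t_2, t_3, ...
import numpy as np

rng = np.random.default_rng(0)
t_unfold = rng.exponential(scale=0.05, size=500)   # unfolding times (s), synthetic

K_U = 1.0 / t_unfold.mean()                        # maximum-likelihood rate estimate

# empirical survival fraction p_F(t) = 1 - int_0^t P(s) ds, compared with exp(-K_U t)
t_sorted = np.sort(t_unfold)
p_F_emp = 1.0 - np.arange(1, t_sorted.size + 1) / t_sorted.size
p_F_fit = np.exp(-K_U * t_sorted)
print(f"K_U ~ {K_U:.1f} 1/s, max deviation from exp(-K_U t): {np.abs(p_F_emp - p_F_fit).max():.3f}")
```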
Because $K_U$ is a convolution of several microscopic processes, it does not describe unfolding in molecular detail. For instance, mechanical unfolding of fibronectin domains FnIII involves the intermediate “aligned” state [@16] with partially disrupted hydrophobic core which cannot be resolved by knowing only $K_U$. Even when the transition from the folded state $F$ to the globally extended state $U$ [@16] does not involve parallel routes as in Figure 2, or multistate kinetics, the force-induced unfolding pathway must involve formation of intermediate coiled states $\{ C \}$. The subsequent transition from $\{ C \}$ results in the formation of the globally unfolded state $U$. The incomplete time resolution prevents current experiments from probing the signature of the collapsed states. To probe the contributions from the underlying $\{ C \}$ states to global unfolding requires sophisticated experiments that can resolve contributions from dynamic events underlying forced unfolding. We propose a novel experimental procedure which, when supplemented with unfolding time data analysis described below, allows us to separately probe the kinetics of native interactions and the dynamics of the “protein coil” (i.e. the dynamics of end-to-end distance $X$ when the native contacts are disrupted).
Consider an experiment in which stretching cycles (triggered by applying ${\bf f}_{S}$) are interrupted by relaxation intervals $T$ during which force is quenched to ${\bf f}_{Q}$$<$${\bf f}_{S}$. In the time interval $T$, the polypeptide chain can relax into the manifold $\{ C \}$ or even refold to the native state $F$ if $T$ is long enough. If ${\bf f}_{S}$$>$${\bf f}_C$ and ${\bf f}_{Q}$$<$${\bf f}_C$ where ${\bf f}_C$ is the equilibrium critical unfolding force at the specific temperature (see phase diagram for $S1$ in Ref. [@20]), these transformations can be controlled by $T$. In the simplest implementation we set ${\bf f}_{Q}$$=$$0$. The crucial element in the FCS experiment is that the same measurements are repeated for varying $T$. In the FCS the unfolding times are binned to obtain the joint histogram $P(T,t)$ of unfolding events of duration $t$ generated from the recoil manifold $\{ C\}$ or the native basin of attraction (NBA) or both, depending on the duration of the relaxation time $T$. In the current experiments $T$$\to$$\infty$. As a result, the dynamics of additional states in the energy landscape that are explored during folding or unfolding are not probed.
The advantages of $P(T,t)$ over the standard distribution of unfolding times $P(t)$ are two-fold. First, $P(T,t)$ is computed by [*averaging over well-characterized fully stretched states*]{}. This eliminates the problem of not knowing the distribution of initial protein states encountered in current experiments. Indeed, due to the intrinsic heterogeneity of the protein folding pathways, after the first unfolding event the protein may or may not refold into the native conformation, which creates the initial state ambiguity in the next (second, third, etc.) pulling cycle. Therefore, statistical analysis based on averaging over final (stretched) states rather than initial (folded) states allows us to overcome this difficulty. Secondly, statistical analysis of unfolding data [*performed for different values of $T$*]{} allows us to separately probe the kinetics of native interactions and the dynamics of $X$. In addition, the entire energy landscape of native interactions can be mapped out when stretch-quench cycles are repeated for several values of ${\bf f}_{S}$, ${\bf f}_{Q}$, and $T$.
[*Regime I*]{} ($T$$\ll$$\tau_F$): In the simplest unfolding scenario application of ${\bf f}_{S}$ results in the disruption of the native contacts ($F$$\to$$\{ C \}$) followed by stretching of the manifold $\{ C \}$ into $U$ (Figure 2). When stretching cycles are separated by short $T$ compared to the protein folding timescale $\tau_F$ at ${\bf f}_Q$$=$$0$, $P(T;t)$ is determined by the evolution of the coil state. Then the unfolded state population $p_U(T;t)$ is given by the convolution of protein relaxation (over time $T$) from the fully stretched state $X_U$$\approx$$L$ to an intermediate coiled state $X_1$ and streching $X_1$ into final state $X_f$ over time $t$. Thus, $P(T;t)$ is obtained from $p_U(T;t)$ by taking the derivative with respect to $t$, $$\begin{aligned}
\label{2.1}
P(T\ll \tau_F;t) & = & {{d}\over {dt}}p_U(T\ll \tau_F;t)\\\nonumber &
= & {{d}\over {dt}}{{1}\over {N(T)}}\int_{L-\delta}^L
dX_f 4\pi X_f^2 \int_0^L dX_1 4\pi X_1^2 \int_0^L
dX_U 4\pi X_U^2\\\nonumber & \times &
G_S(X_f,t;X_1)G_Q(X_1,T;X_U)P(X_U)\end{aligned}$$ where $N(T)$ is $T$-dependent normalization constant obtained by taking the last integral in the right hand side (rhs) of Eq. (\[2.1\]) from $X_f$$=$$0$ to $X_f$$=$$L$, and $P(X_U)$ is the distribution of unfolded states. If $X$ is well controlled, $X_U$ is expected to be centered around a fixed value ${\bar X}_U$ and $P(X_U)$$\sim$$\delta(X_U-{\bar X}_U)$. In Eq. (\[2.1\]), $G_Q(X',t;X)$ and $G_S(X',t;X)$ are respectively, the quenched and the stretching force dependent conditional probabilities to be in the coiled state $X'$ at time $t$ arriving from state $X$ at time $t$$=$$0$. The integral over $X_f$ is performed in the range $[ L-\delta;L ]$ with $X$$=$$L$$-$$\delta$ (Figure 2) representing unfolding distance at which the total number of native contacts $Q$ is at the unfolding threshold, $Q$$\approx$$Q^*$. It follows that $P(T;t)$ (Eq. (\[2.1\])) contains information on the dynamics of $X$. By assuming a model for $X$ and fitting $P(T;t)$, obtained by differentiating the integral expression appearing in Eq. (\[2.1\]), to the histogram of unfolding times, separated by short $T$$\ll$$\tau_F$, we can resolve the dynamics of the polypeptide chain in the coil state which allows us to evaluate the ${\bf f}_Q$-dependent coil dynamical timescale $\tau_d$ using single molecule force spectroscopy. The fit of Eq. (\[2.1\]) could be analytical or numerical depending on the model of $X$.
[*Regime II*]{} ($T$$\gg$$\tau_F$): When stretching cycles are interrupted by long relaxation periods, $T$$\gg$$\tau_F$, the coiled states refold to $X_F$ (Figure 2). In this regime, the initial conformations in forced-unfolding always reside in the NBA. In this limit, $P(T;t)$ reduces to the standard distribution of unfolding times $P(T,t)$$\to$$P(t)$. When $T$$\gg$$\tau_F$, $P(T;t)$ is given by the convolution of the kinetics of rupture of native contacts, resulting in protein extension $\Delta X_F$, and dynamics of $X$ from state $X_F$$+$$\Delta X_F$ to final state $X_f$, $$\begin{aligned}
\label{2.2}
P(T\gg\tau_F;t) & = & P(t)={{d}\over {dt}}p_U(T\gg\tau_F;t)\\\nonumber
& = & {{d}\over {dt}}{{1}\over
{N'(T)}}\int_{L-\delta}^L dX_f 4\pi X_f^2 \int_0^L
dX_F 4\pi X_F^2 \int_0^t dt'\\\nonumber & \times &
G_S(X_f,t;X_F+\Delta X_F,t') P_F(t',X_F;{\bf f}_{S})\end{aligned}$$ where $N'(T)$ is normalization constant obtained as in Eq. (\[2.1\]) and $P_F(t,X_F;{\bf f}_{S})$ is the probability of breaking the contacts over time $t$ that stabilize the native state $X_F$. By assuming a model for $P_F(t,X_F;{\bf f}_{S})$ and employing information on the dynamics of $X$, obtained from the short $T$-experiment (Eq. (\[2.1\])), we can probe the disruption kinetics of native interactions. By repeating long $T$-measurements at several values of ${\bf f}_{S}$, we can map out the energy landscape of native interactions projected on the direction of the end-to-end distance vector.
[*Regime III*]{} ($T$$\sim$$\tau_F$): In this limit, some of the molecules reach the NBA, starting from extended states ($X$$\approx$$L$), whereas others remain in the basin $\{ C \}$. The fraction of folding events $\rho_F$ depends on $T$, during which $X$ approaches the average extension $\langle X_C \rangle$, facilitating the formation of native contacts. Thus, $P(T\sim \tau_F;t)$, obtained in the intermediate $T$-experiment, involves contributions from both $\{ C \}$ and $F$ initial conditions and is given by the superposition $$\label{2.4}
P(T\sim \tau_F;t)=\rho_F(T)P(T\gg\tau_F;t) + \rho_C(T)P(T\ll \tau_F;t)$$ where the probability to arrive to $F$ from $\{ C \}$ at time $T$ is given by $$\label{2.5}
\rho_F(T)=\int_0^LdX_1 4\pi X_1^2 \int_0^L dX_U 4\pi X_U^2 P_C(T, X;{\bf f}_{Q}) G_Q(X_1,T;X_U)P(X_U)$$ and the probability to remain in $\{ C \}$ is $\rho_C(T)$$=$$1$$-$$\rho_F(T)$. In Eq. (\[2.5\]), $P_C(T, X;{\bf f}_{Q})$ is the refolding probability determined by kinetics of formation of native contacts. Because the dynamics of $X$ is weakly correlated with formation of native contacts, $X$ in $P_C$ is expected to be broadly distributed. Therefore, Eqs. (\[2.4\]) and (\[2.5\]) can be used to probe kinetics of formation of native interactions.
For Eqs. (\[2.1\]) and (\[2.2\]) to be of use, one needs to know the (re)folding timescale $\tau_F$. The simplest way to evaluate $\tau_F$ is to construct a series of histograms $P(T_n,t)$ ($n$$=$$1$, $2$, $\ldots$, $N$) for a fixed ${\bf f}_{S}$ and increasing relaxation time $T_1$$<$$T_2$$<$$\ldots$$<$$T_N$, and compare $P(T_n,t)$’s with the distribution $P(T^*,t)$ obtained for sufficiently long $T^*$$\gg$$\tau_F$. If $T$$=$$T^*$ then all the molecules are guaranteed to reach the NBA. The difference $$\label{2.3}
D(T_n) = |P(T_n,t)-P(T^*,t)|$$ is expected to be non-zero for $T_n$$\le$$\tau_F$ and should vanish if $T_n$ exceeds $\tau_F$. Statistically, as $T_n$ starts to exceed $\tau_F$, increasingly more molecules will reach the NBA by forming native contacts. Then, more unfolding trajectories will start from folded states, and when $T$$\gg$$\tau_F$ all unfolding events will originate from the NBA. Therefore, $D(T_n)$ is a sensitive measure for identifying the kinetic signatures of forming native contacts. The utility of $D(T_n)$ is that it is a simple yet accurate estimator of $\tau_F$, which can be utilized in practical applications. Indeed, one can estimate $\tau_F$ by identifying it with the shortest $T_n$ at which $P(T_n;t)$$\approx$$P(T^*,t)$, i.e. $T_n$$\approx$$\tau_F$. We should emphasize that to obtain $\tau_F$ from the criterion that $D(\tau_F)$$\approx$$0$ no assumption about the distribution of refolding times has been made. Having evaluated $\tau_F$, one can then use Eqs. (\[2.1\]) and (\[2.2\]) for short and long $T$-measurements to resolve protein coil dynamics and rupture kinetics of native contacts.
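A possible numerical implementation of this criterion is sketched below; it is illustrative only, and the container `times_at_T`, the label `T_star` and the noise threshold `eps` are hypothetical names rather than quantities defined in the paper.

```python
# Illustrative sketch (not from the paper) of estimating tau_F from D(T_n):
# compare the normalized unfolding-time histogram at each relaxation interval
# T_n with the reference histogram obtained at T* >> tau_F.
import numpy as np

def unfolding_time_hist(times, bins):
    """Normalized histogram approximating P(T_n, t) for one value of T_n."""
    hist, _ = np.histogram(times, bins=bins, density=True)
    return hist

def D(times_Tn, times_Tstar, bins):
    """Integrated difference |P(T_n, t) - P(T*, t)| over the binned times."""
    dP = unfolding_time_hist(times_Tn, bins) - unfolding_time_hist(times_Tstar, bins)
    return float(np.sum(np.abs(dP)) * np.diff(bins)[0])

# 'times_at_T' would map each T_n to its array of measured unfolding times,
# with 'T_star' labelling the longest interval; tau_F is then estimated as the
# shortest T_n for which D(T_n) vanishes within the statistical noise 'eps':
#   tau_F_est = min(T for T in times_at_T
#                   if D(times_at_T[T], times_at_T[T_star], bins) < eps)
```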
Let us summarize the major steps in the FCS. First, we estimate $\tau_F$ by using $D(T)$ (Eq. \[2.3\])). We next probe protein coil dynamics by analyzing $P(T\ll \tau_F;t)$ obtained from short-$T$-measurements (Eq. (\[2.1\])). In the third step, we use information on protein coil dynamics to resolve the kinetics of rupture of native interactions contained in $P(T\gg\tau_F;t)$ of long-$T$-measurements (Eq. (\[2.2\])). Finally, by employing the information on protein coil dynamics and kinetics of rupture of native interactions, we resolve the kinetics of formation of native contacts by analyzing $P(T\sim \tau_F;t)$ from intermediate $T$-measurements (Eqs. (\[2.4\]) and (\[2.5\])).
The beauty of the proposed framework is that these experiments can be readily performed using available technology. In the current AFM experiments, $T$ can be made as short as a few microseconds. Simple calculations show that the relaxation of a short $50$ amino acid protein from the stretched state with $L$$\approx$$19nm$ to the coiled states $\{ C \}$ with, say, $X$$\approx$$2nm$, occurs on the timescale $\tau_d$$\approx$$\Delta x^2/D$$\sim$$10\mu s$, where $\Delta x$$=$$L$$-$$X$$\approx$$17nm$ and $D$$\approx$$10^{-7}cm^2/s$ is the diffusion constant. Clearly, the time of formation of native contacts, which drives the transition from $\{ C \}$ to the NBA, prolongs $\tau_F$ by a few microseconds to a few milliseconds or longer, depending on folding conditions. In the experimental studies of forced unfolding and force-quenched refolding of ubiquitin, $\tau_F$ was found to be of the order of $10$$-$$100ms$ [@13a]. Computer simulation studies of unzipping-rezipping transitions in the short $22$-nt RNA hairpin P5GA have predicted that $\tau_F$ is of the order of a few hundred microseconds [@19e].
[*Model for the kinetics of native contacts*]{}: To interpret the data generated by FCS it is useful to have a model for the time evolution of the native contacts and $X$. We first present a simple kinetic model for rupture and formation of native contacts represented by probabilities $P_F$ and $P_C$ in Eqs. (\[2.2\]) and (\[2.5\]), respectively, and a model for the dynamics of $X$ given by the propagator $G_{S,Q}(X',t;X)$. To describe the force-dependent evolution of native interactions we adopt the continuous-time-random-walk (CTRW) formalism [@22; @22a; @24; @24a; @24b]. In the CTRW model, a random walker, representing rupture (formation) of native contacts, pauses in the native (coiled) state for a time $t$ before making a transition to the coiled (native) state. The waiting time distribution is given by the function $\Psi_{\alpha}(t)$ ($\alpha$$=$$r$ or $f$, where $r$ and $f$ refer to rupture and formation of native contacts, respectively). We assume that the probabilities $P_F(t,X_F;{\bf f}_{S})$ and $P_C(t,X_C;{\bf f}_{Q})$ are separable so that $$\label{3.2}
P_F(t,X_F;{\bf f}_{S})\approx P_{eq}(X_F)P_r(t;{\bf f}_{S}),\quad
\text{and} \quad P_C(t,X_C;{\bf f}_{Q})\approx P_C(X_C)P_f(t;{\bf f}_{Q})$$ where $P_{eq}(X_F)$ is the equilibrium distribution of native states, $P_C(X_C)$ is the distribution of coiled states and $P_r(t;{\bf f}_{S})$ and $P_f(t;{\bf f}_{Q})$ are the force-dependent probabilities of rupture and formation of native contacts, respectively. Factorization in Eq. (\[3.2\]) implies that application of force does not result in the redistribution of states $X_F$ and $X_C$ in the NBA and in the manifold of coiled states $\{ C \}$, but only changes the timescales for NBA$\to$$\{ C \}$ and $\{ C \}$$\to$NBA transitions, and thus, the propabilities $P_r$ and $P_f$. We expect the approximation in Eq. (\[3.2\]) to be valid provided the rupture of native contacts and refolding events are cooperative.
During stretching cycles, for ${\bf f}_{S}$ well above ${\bf f}_C$, we may neglect the reverse folding process. Similarly, global unfolding is negligible during relaxation periods with ${\bf f}_{Q}$$<$${\bf f}_C$. Then, the master equation for $P_r(t)$ is $$\label{3.3}
{{d}\over {dt}}P_{r}(t)= -\int_0^t d\tau \Phi_{r}(\tau)P_{r}(t-\tau)$$ where $\Phi_{r}(t)$ is the generalized rate for the rupture and formation of native interactions. In the Laplace domain, defined by ${\bar f}(z)$$=$$\int_0^{\infty}dt f(t)\exp{[-tz]}$, $\Psi_r(t)$ is related to $\Phi_{r}(t)$ as $$\label{3.4}
{\bar \Phi}_{r}(z)=z{\bar \Psi}_{r}(z)\left[ 1-{\bar \Psi}_{r}(z)
\right]^{-1}.$$ The structure of the master equation for $P_f(t)$ is identical to Eq. (\[3.3\]) with the relationship between $\Phi_{f}(t)$ and $\Psi_{f}(t)$ being similar to Eq. (\[3.4\]). The general solution to Eq. (\[3.3\]) is $$\label{3.5}
{\bar P}_{r}(z)=\left[ z+{\bar \Phi}_{r}(z)\right]^{-1}{\bar P}_{r}(0)$$ where ${\bar P}_{r}(0)$$=$$1$ is the initial condition and the solution in the time domain is given by the inverse Laplace transform $P_{r}(t)$$=$$L^{-1}\{ {\bar P}_{r}(z) \}$. The solution for ${\bar P}_{f}(z)$ is obtained in a similar fashion (see Eq. (\[3.5\])) with the initial condition ${\bar P}_{f}(0)$$=$$1$.
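For the algebraic waiting-time density adopted later in Eq. (\[5.1\]), the native-state survival probability $P_r(t)=1-\int_0^t\Psi_r(\tau)\,d\tau$ has a simple closed form, namely a regularized upper incomplete gamma function. The short sketch below (illustrative only, with placeholder parameter values) checks this closed form against direct sampling of first-rupture waiting times.

```python
# Illustrative check (placeholder parameters, not fitted values): with the
# waiting-time density Psi_r(t) ~ t^(v_r-1) exp(-k_r t) of Eq. (5.1), the
# native-state survival probability P_r(t) = 1 - int_0^t Psi_r(tau) dtau is
# the regularized upper incomplete gamma function Q(v_r, k_r t).
import numpy as np
from scipy.special import gammaincc

v_r, k_r = 5.0, 0.1                              # shape and rate (1/ns), hypothetical
t = np.linspace(0.0, 150.0, 301)
P_r_analytic = gammaincc(v_r, k_r * t)

# Monte-Carlo estimate: draw first-rupture waiting times from Psi_r and
# count the fraction that exceed each t.
rng = np.random.default_rng(1)
waits = rng.gamma(shape=v_r, scale=1.0 / k_r, size=100000)
P_r_mc = np.array([(waits > ti).mean() for ti in t])
print(np.max(np.abs(P_r_analytic - P_r_mc)))     # of order 1e-3
```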
[*Model for the polypeptide chain*]{}: In the extended state, when the majority of native interactions that stabilize the folded state are disrupted, the molecule can be treated roughly as a fluctuating coil. Simulations and analysis of native structures [@19new] suggest that proteins behave as worm-like chains (WLC). For convenience we use a continuous WLC description for the coil state whose Hamiltonian is $$\begin{aligned}
\label{4.1}
H & = & {{3k_B T}\over {2l_p}}\int_{-L/2}^{L/2} ds \left({{\partial
{\bf r}(s,t)} \over {\partial s}}\right)^2 + {{3l_p k_B T}\over
{8}}\int_{-L/2}^{L/2} ds\left({{\partial^2 {\bf r}(s,t)} \over
{\partial s^2}}\right)^2 \\\nonumber & + & {{3 k_B T}\over {4}}\left[
\left({{\partial {\bf r}(L/2,t)} \over {\partial s}}\right)^2 +
\left({{\partial {\bf r}(-L/2,t)}\over {\partial s}}\right)^2 \right]+
{\bf f}_{S,Q}\int_{-L/2}^{L/2}ds \left({{\partial {\bf r}(s,t)} \over
{\partial s}}\right)\end{aligned}$$ where $l_p$ is the protein coil persistence length. A large number of force-extension curves obtained using mechanical unfolding experiments in proteins, DNA, and RNA have been analyzed using the WLC model. In Eq. (\[4.1\]) the three-dimensional Cartesian vector ${\bf r}(s,t)$ represents the spatial location of the $s^{th}$ “protein monomer” at time $t$. The first two terms describe chain connectivity and bending energy, respectively. The third term represents fluctuations of the chain free ends and the fourth term corresponds to the coupling of ${\bf r}$ to ${\bf f}_{S,Q}$. The end-to-end vector is computed as ${\bf X}(t)$$=$${\bf r}(L/2,t)$$-$${\bf r}(-L/2,t)$.
We need a dynamical model in which $X$ is represented by the propagator $G(X,t;X_0)$. Although bond vectors of a WLC chain are correlated, the statistics of $X$ can be represented by a large number of independent modes. It is therefore reasonable, at least in the large $L$ limit, to describe $G_{S,Q}(X,t;X_0)$ by a Gaussian, $$\label{4.2}
G_{S,Q}(X,t;X_0) = \left( {{3}\over {2 \pi \langle X^2\rangle_{S,Q}}}
\right)^{3/2} {{1}\over {(1-\phi_{S,Q}^2(t))^{3/2}}}
\exp{\left[-{{3(X-\phi_{S,Q}(t)X_0)^2} \over {2\langle X^2\rangle_{S,Q}
(1-\phi_{S,Q}^2(t))}}\right]}$$ specified by the second moment $\langle X^2\rangle_{S,Q}$ and the normalized correlation function $\phi_{S,Q}(t)$$=$${{\langle X(t)X(0)\rangle_{S,Q}}/{\langle X^2 \rangle_{S,Q}}}$. Calculations of $\langle X^2\rangle_{S,Q}$ and $\phi_{S,Q}(t)$ are given in the Appendix [@41; @41a]. In the absence of force, we obtain: $$\label{4.5}
\langle X(t)X(0)\rangle_0 =12 k_B T \sum_{n=1}^{\infty}{{1}\over
{z_n}}\psi^2_n(L/2) e^{-z_n t/\gamma}, \quad n=1,3,\ldots , 2q+1$$ where $\psi_n(s)$ and $z_n$ are the eigenfunctions and eigenvalues of the modes of the operator that describes the dynamics of ${\bf r}(s,t)$ (see Eq. (\[4.3\])). To construct the propagator $G_{S,Q}(X,t;X_0)$ for ${\bf f}_{S,Q}$, Eq. (\[4.3\]) is integrated with ${\bf f}_{S,Q}$ added to the random force. We obtain: $\langle X^2 \rangle_{S,Q}$$=$$\langle X^2\rangle_0$$+$${\bf f}_{S,Q}^2
\sum_{n=1}^{\infty}\psi_n^2(L/2)/z_n^2 $, where $n=1,3,\ldots , 2q+1$. We analyze the distributions of unfolding times $P(T,t)$ for the model sequence $S1$ (Figure 3) obtained using simulations, the CTRW model for the evolution of native interactions (Eqs. (\[3.2\])-(\[3.5\])), and Gaussian statistics of the protein coil (Eq. (\[4.2\])).
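As an illustration of how the propagator in Eq. (\[4.2\]) can be used to generate end-to-end trajectories, the sketch below draws the vector ${\bf X}(t)$ given ${\bf X}(0)$. For brevity it assumes a single-exponential correlation $\phi(t)=e^{-t/\tau_d}$ (the full model uses the multi-exponential sum of Eq. (\[4.5\])), and all numerical values are placeholders.

```python
# Sketch of sampling from the Gaussian propagator of Eq. (4.2).  A single
# relaxation time tau_d is assumed here for phi(t); units are arbitrary but
# must be used consistently (the values below are placeholders).
import numpy as np

def sample_X(X0, t, X2_mean, tau_d, rng):
    """Draw the end-to-end vector X(t) given X(0)=X0 from Eq. (4.2)."""
    phi = np.exp(-t / tau_d)
    mean = phi * np.asarray(X0, dtype=float)
    var_per_component = X2_mean * (1.0 - phi**2) / 3.0   # isotropic Gaussian
    return mean + rng.normal(0.0, np.sqrt(var_per_component), size=3)

rng = np.random.default_rng(2)
X0 = np.array([36.0, 0.0, 0.0])     # nearly stretched initial state, in units of a
X2_mean, tau_d = 16.0, 20.0         # <X^2> (in a^2) and relaxation time (ns), placeholders
for t in (1.0, 20.0, 200.0):
    print(t, np.linalg.norm(sample_X(X0, t, X2_mean, tau_d, rng)))
```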
[*Simulations of model $\beta$-sheet protein:*]{} The usefulness of FCS is illustrated by computing and analyzing the distribution function $P(T;t)$ for a model polypeptide chain with $\beta$-sheet architecture. Sequence $S1$, which is a variant of an off-lattice model introduced some time ago [@20old], is a coarse-grained model (CGM) of a polypeptide chain, in which each amino acid is substituted with a united atom of appropriate mass and diameter at the position of the $C_\alpha$-carbons [@20; @21]. The $S1$ sequence is modeled as a chain of 46 connected beads of three types, hydrophobic $B$, hydrophilic $L$, and neutral $N$, with the contour length $L=46a$, where $a$$\approx$$3.8\AA$ is the distance between two consecutive $C_{\alpha}$-carbon atoms. The coordinate of the $j$-th residue is given by the vector ${\bf x}_j$ with $j$$=$$1$, $2$, $\ldots$, $N$.
The potential energy $U$ of a chain conformation is $$U = U_{bond} + U_{bend}+ U_{da} + U_{nb},$$ where $U_{bond}$, $U_{bend}$, $U_{da}$ are the energy terms that determine the local protein structure, and $U_{nb}$ corresponds to non-local (non-bonded) interactions. The bond-length potential $U_{bond}$, which describes the chain connectivity, is given by a harmonic function $$U_{bond}=\frac{k_b}{2} \sum _{j=1}^{N-1} ( |{\bf x}_j -{\bf x}_{j+1}|-a)^2$$ where $k_b$$=$$100$$\epsilon_h/a^2$ and $\epsilon_h$ ($\approx 1.25 kcal/mol$) is the energy unit roughly equal to the free energy of a hydrophobic contact. The bending potential $U_{bend}$ is $$\label{V_BA}
U_{bend}=\sum_{j=1}^{N-2} \frac{k_{\theta }}{2}(\theta _j - \theta _0)^{2},$$ where $k_{\theta }$$=$$20$$\epsilon_{h}/rad^2$ and $\theta _{0}$$=$$105^{\circ}$. The dihedral angle potential $U_{da}$, which is largely responsible for maintaining protein-like secondary structure, is taken to be $$\label{V_DIH}
U_{da}=\sum_{i=1}^{N-3} [ A_{i}(1+\cos\phi_{i}) + B_{i}(1+\cos 3\phi_{i})],$$ where the coefficients $A_i$ and $B_i$ are sequence dependent. Along the $\beta$-strands $trans$-states are preferred and $A$$=$$B$$=$$1.2$$\epsilon_h$. In the turn regions (i.e. in the vicinity of a cluster of $N$ residues) $A$$=$$0$, $B$$=$$0.2$$\epsilon_h$. The non-bonded 12-6 Lennard-Jones interaction $U_{nb}$ between hydrophobic residues is the sum of pairwise energies $$\label{Unb}
U_{nb} = \sum_{i<j+2} U_{ij},$$ where $U_{ij}$ depend on the nature of the residues. The double summation in Eq. (\[Unb\]) runs over all possible pairs excluding the nearest neighbor residues. The potential $U_{ij}^{BB}$ between a pair of hydrophobic residues $B$ is given by $U_{ij}^{BB}(r)$$=$$4$$\lambda$$\epsilon_{h}$ $\left[\biggl(\frac{a}{r}\biggr)^{12}-\biggl(\frac{a}{r}\biggr)^{6}\right]$, where $\lambda$ is a random factor unique for each pair of $B$ residues [@21] and $r$$=$$|{\bf x}_i$$-$${\bf x}_j|$. For all other pairs of residues $U_{ij}^{\alpha \beta}$ is repulsive [@21].
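For concreteness, a compact sketch of how the four energy terms could be evaluated for an array of bead coordinates is given below. It is illustrative only: energies are in units of $\epsilon_h$ and lengths in units of $a$, the sequence-dependent dihedral coefficients and the attractive $B$-$B$ pair coefficients are supplied as inputs, and the purely repulsive terms for the remaining pairs are omitted.

```python
# Sketch of the coarse-grained potential U = U_bond + U_bend + U_da + U_nb.
# x is an (N, 3) array of bead coordinates; energies in eps_h, lengths in a.
import numpy as np

def bond_energy(x, k_b=100.0, a=1.0):
    d = np.linalg.norm(np.diff(x, axis=0), axis=1)          # bond lengths
    return 0.5 * k_b * np.sum((d - a) ** 2)

def bend_energy(x, k_theta=20.0, theta0=np.deg2rad(105.0)):
    b = np.diff(x, axis=0)                                   # bond vectors
    cos_t = np.sum(-b[:-1] * b[1:], axis=1) / (np.linalg.norm(b[:-1], axis=1)
                                               * np.linalg.norm(b[1:], axis=1))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))             # bond angles
    return 0.5 * k_theta * np.sum((theta - theta0) ** 2)

def dihedral_energy(x, A, B):
    """A, B are the (N-3,) sequence-dependent coefficients."""
    b = np.diff(x, axis=0)
    n1, n2 = np.cross(b[:-2], b[1:-1]), np.cross(b[1:-1], b[2:])
    cos_p = np.sum(n1 * n2, axis=1) / (np.linalg.norm(n1, axis=1)
                                       * np.linalg.norm(n2, axis=1))
    phi = np.arccos(np.clip(cos_p, -1.0, 1.0))               # dihedral angles
    return np.sum(A * (1 + np.cos(phi)) + B * (1 + np.cos(3 * phi)))

def nonbonded_energy(x, pair_coeff, a=1.0):
    """pair_coeff[i, j] = 4*lambda_ij for attractive B-B pairs; other entries
    are set to zero here (their repulsive terms are not included in this
    sketch).  Pairs with |i - j| < 2 are excluded."""
    N, U = len(x), 0.0
    for i in range(N):
        for j in range(i + 2, N):
            r = np.linalg.norm(x[i] - x[j])
            U += pair_coeff[i, j] * ((a / r) ** 12 - (a / r) ** 6)
    return U

def total_energy(x, A, B, pair_coeff):
    return (bond_energy(x) + bend_energy(x)
            + dihedral_energy(x, A, B) + nonbonded_energy(x, pair_coeff))
```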
Although an off-lattice CGM drastically simplifies the polypeptide chain structure, it does retain important characteristics of proteins, such as chain connectivity and the heterogeneity of contact interactions. The local energy terms in S1 provide an accurate representation of the protein topology. The native structure of $S1$ is a $\beta$-sheet protein that has a topology similar to the much-studied immunoglobulin domains (Figure 3). When the model sequence is subject to ${\bf f}_{S}$ or ${\bf f}_{Q}$, the total energy is written as $U_{tot}$$=$$U$$-$${\bf f}_{\alpha}$${\bf X}$ ($\alpha$$=$$S$ or $Q$), where ${\bf X}$ is the protein end-to-end vector, and ${\bf f}_{S,Q}$$=$$(f_{S,Q},0,0)$ is applied along the ${\bf x}$-direction (Figure 1).
The dynamics of the polypeptide chain is assumed to be given by the overdamped Langevin equation, which in the absence of ${\bf f}_S$ or ${\bf f}_Q$, is $$\label{2.14}
\eta {{d}\over {dt}}{\bf x}_j = - {{\partial U_{tot}}\over {\partial {\bf
x}_j}} + {\bf g}_j(t)$$ where $\eta$ is the friction coefficient and ${\bf g}_j(t)$ is a Gaussian white noise, with the statistics $$\label{2.15}
\langle {\bf g}_j(t)\rangle = 0, \quad \langle {\bf g}_i(t){\bf
g}_j(t')\rangle = 6k_{B}T\eta \delta_{ij}\delta(t-t')$$ Eqs. (\[2.14\]) are integrated with a step size $\delta t$$=$$0.02$$\tau_L$, where $\tau_L$$=$$(ma^2/\epsilon_h)^{1/2}$$=$$3 ps$ is the unit of time and $m\approx 3\times 10^{-22} g$ is a residue mass. In Eq. (\[2.14\]) the value of $\eta$$=$$50 m/\tau_L$ corresponds roughly to water viscosity.
Results
=======
[*Simulations of unfolding and refolding of $S1$:*]{} For the model sequence $S1$ we have previously shown that the equilibrium critical unfolding force is $f_C$$\approx $$22.6pN$ [@20] at the temperature $T_s$$=$$0.692\epsilon_h/k_B$ below the folding transition temperature $T_F$$=$$0.7\epsilon_h/k_B$. At this temperature $70$$\%$ of native contacts are formed (see the phase diagram in Ref. [@20]). To simulate the stretch-relax trajectories, the initially folded structures in the NBA were equilibrated for $60 ns$ at $T_s$. To probe forced unfolding of S1 at $T$$=$$T_s$, constant pulling forces $f_{S}$$=$$40pN$ and $80pN$ were applied to both terminals of $S1$. For these values of $f_{S}$, $S1$ globally unfolds in $t$$=$$90 ns$ and $50 ns$, respectively. Cycles of stretching were interrupted by relaxation intervals during which the force is abruptly quenched to $f_{Q}$$=$$0$ for various durations $T$. Unfolding-refolding trajectories of $S1$ have been recorded as time series of $X$ and the number of native contacts $Q$.
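The stretch-quench protocol itself can be summarized by the following skeleton (a sketch only; `advance` stands for any propagator, such as the Brownian integrator above, that evolves the chain under a given end-to-end force and reports when a user-supplied stopping criterion is first met).

```python
# Protocol skeleton for stretch-quench cycles.  `advance(state, f, duration,
# stop_when=None)` is assumed to evolve `state` in place under end-to-end
# force f for `duration`, returning the elapsed time at which `stop_when`
# first evaluates True (or None if it never does); `is_unfolded` encodes the
# X >= X_U criterion used in the text.
def stretch_quench_trajectory(state, advance, is_unfolded,
                              f_S, f_Q, t_stretch, T_relax, n_cycles):
    unfolding_times = []
    for _ in range(n_cycles):
        t_unfold = advance(state, f_S, t_stretch, stop_when=is_unfolded)
        if t_unfold is not None:            # failed unfoldings are discarded
            unfolding_times.append(t_unfold)
        advance(state, f_Q, T_relax)        # relaxation; may or may not refold
    return unfolding_times
```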
In Figure 4 we present a single unfolding-refolding trajectory of $X$ and $Q$ of $S1$, generated by stretch-relax cycles. Stretching cycles of constant force $f_{S}$$=$$80pN$ applied for $30 ns$ are interrupted by periods of quenched force relaxed over $90 ns$. A folding event is registered if it results in the formation of $92\%$ of the total number of native contacts $Q_F$$=$$106$, i.e. $Q$$\ge$$0.92Q_F$ for the first time. An unfolding time is defined as the time of rupture of $92\%$ of all possible contacts for the first time. With this definition, the unfolded state end-to-end distance is $X$$\ge$$X_U$$\approx$$36$$a$. In Figure 4, folded (unfolded) states correspond to minimal (maximal) $X$ and maximal (minimal) $Q$. Inspection of Figure 4 shows that refolding events are essentially stochastic. Out of 36 relaxation periods, only 9 attempts resulted in refolding of $S1$. Both $X$ and $Q$ show that refolding of $S1$ occurs through an initial collapse to a coiled state with the end-to-end distance $X_C/a$$\approx$$15$ ($Q$$\approx$$20$), followed by the establishment of additional native contacts ($Q$$\approx$$90$) stabilizing the folded state with $X_F/a$$\approx$$(1-2)$.
We generated about 1200 single unfolding-refolding trajectories and monitored the time-dependent behavior of $X$ and $Q$. In the first set of simulations we set $f_S$$=$$40pN$ and used several values of $T$$=$$24$, $54$, $102$, $150$ and $240 ns$. In the second set, $f_S$$=$$80pN$, and $T$$=$$15$, $48$, $86$, $120$ and $180ns$. Each trajectory involves four stretching cycles separated by three relaxation intervals in which $f_Q$$=$$0$. Typical unfolding-refolding trajectories of $X$ and $Q$ for $f_S$$=$$40pN$, $f_Q$$=$$0$, and $T$$=$$102$, $150$ and $240 ns$ are displayed in Figures 5, 6 and 7, respectively. Due to the finite duration of stretching cycles ($90 ns$), unfolding of $S1$ failed in a few cases, which were not included in the subsequent analysis of unfolding times. Only the first stretching cycle in each trajectory is guaranteed to start from the NBA, and for $T$$=$$102 ns$ (Figure 5) relatively few relaxation intervals result in refolding (with large $Q$). This implies that the distribution of unfolding times $P(T,t)$ obtained from these trajectories is dominated by contributions from the coiled states, with the kinetics of formation of the native contacts playing only a minor role. Not unexpectedly, refolding events are more frequent when $T$ is increased to $150ns$ and $240 ns$. At $T$$=$$150ns$, $Q$ reaches higher values ($\approx$$65-75$) and the failure to refold is rare (Figure 6). This implies that as $T$ starts to exceed the (re)folding time $\tau_F$, the distribution of unfolding events, parametrized by $P(T,t)$, is characterized by a diminishing contribution from the coiled states $\{ C \}$ and is increasingly dominated by the folded conformations in the NBA. Note that failed refolding events are observed even at $T$$=$$240 ns$ (Figure 7), which implies large heterogeneity in the duration of folding barrier crossing events. Figures 5-7 suggest that the folding time $\tau_F$ at the temperature $T_U$ is in the range $100$$-$$240ns$. Direct computations of the folding time $\tau_F$ from hundreds of folding trajectories starting with the fully stretched states give ${\bar \tau}_F$$\approx$$176ns$. The agreement between ${\bar \tau}_F$ and $\tau_F$ validates our stretch-release simulations.
[*Analysis of the distribution of unfolding times of $S1$:*]{} The theoretical considerations in our formalism suggest that the $T$-dependent heterogeneous unfolding processes occur not only from the NBA but also from the intermediate coil $\{ C \}$ states. The $T$-dependent protein dynamics can be utilized to separately probe the coil dynamics of the polypeptide chain and the kinetics of formation/rupture of native contacts ($Q$). We now utilize unfolding-refolding trajectories of $S1$, simulated for short, intermediate and long $T$, to build the histograms of unfolding times $P(T,t)$. Using $P(T,t)$ we provide a quantitative description of the polypeptide chain dynamics in the coil state and of the kinetics of rupture and formation of native interactions by employing the CTRW model for $Q$ and Gaussian statistics for $X$.
We computed $P(T,t)$ using the distribution of unfolding times obtained for $f_S$$=$$80pN$, $T$$=$$15$, $48$ and $86ns$ (Figure 8), and $f_S$$=$$40pN$, $T$$=$$24$, $54$ and $102ns$ (Figure 9). In both cases $f_Q$$=$$0$. We excluded unfolding times corresponding to the first stretch-quench cycle of each trajectory, which were used instead to construct $P(t)$ for the purpose of comparing $P(t)$ with $P(T,t)$ for long $T$. The single-peaked $P(T,t)$ obtained for $T$$=$$15ns$ (Figure 8) and $T$$=$$24ns$ (Figure 9) represents contributions to $S1$ unfolding from the coil manifold $\{ C\}$ alone. When $T$ is increased to $48ns$ (Figure 8) and $54ns$ (Figure 9), the position of the peak shifts to longer times, i.e. from $t$$\approx$$2.5ns$ to $t$$\approx$$5ns$ (Figure 8) and from $t$$\approx$$6ns$ to $t$$\approx$$10ns$ (Figure 9). Furthermore, $P(T,t)$ develops a shoulder at $t$$\approx$$10ns$ and $t$$\approx$$25ns$, observed for $T$$=$$86ns$ (Figure 8) and $T$$=$$102ns$ (Figure 9), which indicates a growing (with $T$) contribution to unfolding from relaxation trajectories that reach the NBA. At longer $T$$=$$150ns$, when most relaxation periods result in refolding of $S1$, the contribution from coiled states diminishes, and at $T$$=$$240ns$ $P(T,t)$ is identical to the standard distribution $P(t)$ constructed from unfolding times of the first stretch-quench cycle of each trajectory. This implies that for $f_Q$$=$$0$, $\tau_F$$\approx$$240ns$ and that $P(T,t)$$\to$$P(t)$ for $T$$>$$240ns$. The distribution $P(T,t)$$=$$P(t)$ constructed from unfolding times separated by $T$$=$$300ns$ is presented in Figures 8 and 9 (top left panel).
We use the CTRW formalism to analyze the histograms of unfolding times $P(T,t)$ from which the parameters that characterize the energy landscape of $S1$ can be mapped. We describe the kinetics of rupture and formation of native contacts by the waiting time distributions $\Psi_r$, $\Psi_f$, $$\label{5.1}
\Psi_r (t)= N_r t^{v_r-1}e^{-k_r t}, \qquad \Psi_f(t)= N_f t^{v_f-1}e^{-k_f t}$$ where $k_r$ (dependent on $f_S$) and $k_f$ (dependent on $f_Q$) are the rates of rupture and formation of native interactions, respectively, $N_{r,f}$$=$$k_{r,f}^{v_{r,f}}/\Gamma(v_{r,f})$ are normalization constants ($\Gamma(x)$ is the Gamma function) and $v_{r,f}$$\ge$$1$ are phenomenological parameters quantifying the deviations of the kinetics from a Poissonian process. For instance, $v_{r,f}$$=$$1$ implies a Poissonian process and corresponds to standard chemical kinetics with a constant rate $k_{r,f}$. We assume that both the folded and the unfolded states are sharply distributed around the mean native and unfolded end-to-end distance $\langle X_F\rangle$ and $\langle X_U\rangle$, respectively (Figure 2), $$\label{5.2}
P_{eq}(X_F)=\delta (X-\langle X_F\rangle), \quad \text{and} \quad
P(X_U)=\delta (X-\langle X_U\rangle)$$ where $\langle X_U\rangle/a$$=$$36$ residues corresponds to the definition of the unfolded state. For $S1$ the contour length $L/a$$=$$46$. Thus, $S1$ is unfolded if $X/a$ exceeds $\langle X_U \rangle/a$, which implies $\delta /a$$=$$10$ residues (see Figure 2 and the lower limit of integration in Eq. (\[2.1\])). We describe the distribution of states $\{ C\}$ before the transition to the NBA by a Gaussian, $$\label{5.3}
P_C(X)=e^{-(X-\langle X_C\rangle)^2/2\Delta X_C^2}$$ with the width $\Delta X_C$, centered around the average distance $\langle X_C\rangle$.
We performed numerical fits of the histograms presented in Figures 8 and 9 using Eqs. (\[2.1\]), (\[2.2\]), (\[2.4\]) and (\[2.5\]). By fitting the theoretical curves to $P(T,t)$ constructed from short $T$$=$$15ns$ and $T$$=$$48ns$ simulations (Figure 8) and $T$$=$$24ns$ and $T$$=$$54ns$ (Figure 9), we first studied the dynamics of $X$ to estimate the dynamical timescale $\tau_d$, i.e. the longest relaxation time corresponding to the smallest eigenvalue $z_n$ (Eq. (\[4.5\])), and the persistence length $l_p$ of $S1$ in the coil states $\{ C \}$. Using the values of $\tau_d$ and $l_p$, we then applied our theory to describe $P(T,t)$ constructed from long $T$$=$$300ns$ simulations. This analysis allows us to estimate the parameters characterizing the rupture of native contacts: $k_r$, $v_r$, $\langle X_F\rangle$ and $\Delta X_F$. Finally, the parameters $k_f$, $v_f$, $\langle X_C \rangle$ and $\Delta X_C$, characterizing the formation of native contacts, were estimated using $\tau_d$, $l_p$, $k_r$, $v_r$, $\langle X_F\rangle$ and $\Delta X_F$, and fitting Eqs. (\[2.4\]) and (\[2.5\]) to $P(T,t)$ for intermediate $T$$=$$86ns$ (Figure 8) and $T$$=$$102ns$ (Figure 9).
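The last fitting stage is linear in a single parameter, which makes it particularly simple. Representing the short-$T$ and long-$T$ limits by distributions evaluated on common time bins, the weight $\rho_C(T)$ in the superposition of Eq. (\[2.4\]) follows from linear least squares, as in the sketch below (the array inputs are placeholders).

```python
# Sketch of extracting rho_C(T) from an intermediate-T histogram by fitting
# the superposition of Eq. (2.4); p_short, p_long and p_mid are normalized
# distributions evaluated on the same time bins (placeholders here).
import numpy as np

def fit_rho_C(p_mid, p_short, p_long):
    """Least-squares rho_C in p_mid ~ rho_C*p_short + (1 - rho_C)*p_long."""
    d = p_short - p_long
    rho = np.dot(p_mid - p_long, d) / np.dot(d, d)
    return float(np.clip(rho, 0.0, 1.0))

# rho_F = 1 - rho_C is then matched against the double integral in Eq. (2.5).
```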
[*Extracting the energy landscape parameters of $S1$:*]{} There are a number of parameters that characterize the energy landscape and the dynamics of the major components in the NBA$\to$U transition. The numerical values of the model parameters are summarized in the Table. The values of $v_r$$=$$6.9$ for $f_S$$=$$40pN$ and $v_r$$=$$5.1$ for $f_S$$=$$80pN$ indicate that rupture of native contacts is highly cooperative, especially at the lower force $f_S$$=$$40pN$. This agrees with the previous findings on the kinetics of forced unfolding of $S1$ [@20], which were based solely on unfolding $S1$ by applying a constant force. In contrast, the formation of native contacts is characterized by $v_f$$\approx$$1$, implying an almost Poissonian distribution for the kinetics of formation of native contacts. The structural characteristics of the coil states are obtained using the relaxation of the polypeptide chain upon force-quench from stretched states. The value of the persistence length $l_p$, which should be independent of ${\bf f}_Q$ provided ${\bf f}_Q$$/$${\bf f}_C$$\ll$$1$, is found to be about $4.8$$\AA$ (Table). This value is in accord with the results of recent experimental measurements based on the kinetics of loop formation in denatured states of proteins [@41b].
Upon rupture of native contacts, the chain extends by $\Delta X_F/a$$=$$6.4$ (for $f_S$$=$$40pN$) and $\Delta X_F/a$$=$$6.7$ (for $f_S$$=$$80pN$). This distance separates the basins of folded states with $\langle X_F\rangle /a$$=$$4.5$ at $f_S$$=$$40pN$ and $\langle X_F\rangle /a$$=$$4.6$ at $f_S$$=$$80pN$ from high free energy states when the polypeptide chain is stretched in the direction of ${\bf f}_S$ (Figure 2(a)). Because these high free energy states are never populated we expect that forced-unfolding of $S1$ must occur in an apparent two-step manner when $T$$\to$$\infty$. Explicit simulations of $S1$ unfolding at constant ${\bf f}_S$ ($\approx$$69pN$) shows that mechanical unfolding occurs in a single step (see Figure 2 in Ref. [@20]).
From the refolding free energy profile upon force-quench (see Figure 2(b)) we infer that the initial stretched conformation must collapse to an ensemble of compact structures $\{ C \}$. From the analysis of $P(T;t)$ using the CTRW formalism we find that the average end-to-end distance $\langle X_C\rangle$ for the manifold $\{ C \}$ is close to $\langle X_F \rangle$ (see the Table), which suggests that the ensemble of the $\{ C \}$$\to$ NBA transition states is close to the native state. There is a broad distribution of coiled states $\{ C \}$, which is manifested in the large width $\Delta X_C/a$$=$$2.2$. Due to the broad conformational distribution, there is substantial heterogeneity in the refolding pathways. This feature is reflected in the long tails in $P(T,t)$ (see Figures 8 and 9). As a result, we expect the kinetic transition to be sharp. The estimated timescale ($\sim$$1/k_f$) for forming native contacts for $S1$ is shorter than the coil dynamical timescale $\tau_d$ (for the values of $f_S$ used in the simulations). This indicates that the dynamical collapse of $S1$ from the stretched state $X_U$$\approx$$L$ and equilibration in the coiled manifold $\{ C \}$ constitutes a significant fraction of the total folding time ($\approx$$\tau_d$$+$$k_f^{-1}$). From the analysis of folding of $S1$ ($P(T;t)$ at intermediate $T$) we also infer that the transition state ensemble for $\{ C \}$$\to$$N$ must be narrow.
From the rates of rupture of native contacts $k_r$ at the two ${f}_S$ values and assuming the Bell model for the dependence of $k_r$ on $f_S$, $$\label{5.4}
k_r(f_S) = k_r^0 e^{\sigma f_S/k_B T}$$ we estimated the force-free rupture rate $k_r^0$ and the critical extension $\sigma$ at which folded states of $S1$ become unstable. We found that $k_r^0$$=$$8$$\times$$10^{-4}ns^{-1}$ is negligible compared to the rate of formation of native contacts, $k_f$$=$$0.25ns^{-1}$. The location of the transition state of unfolding $X$$=$$\langle X_F\rangle$$+$$\sigma$ is characterized by $\sigma$$=$$1.5$$a$$\approx$$0.03$$L$. The value of $\sigma$ is small compared to $\Delta X_C$ which is a measure of the width of the $\{ C \}$ manifold. Small $\sigma$ implies that the major barrier to unfolding is close to the native conformation. A similar value of $\sigma$ was obtained in the previous study of $S1$ by using an entirely different approach [@20]. These findings are consistent with AFM experiments [@Rief97Science] and computer simulations [@KlimThirum99PNAS] which show that native structures of proteins appear to be “brittle” upon application of mechanical force.
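For completeness, the two-point inversion of the Bell relation is summarized in the short sketch below. The input rates and the thermal energy are placeholders (they are not the Table entries), so the output only illustrates the procedure.

```python
# Two-point inversion of the Bell model, Eq. (5.4): k_r(f) = k_r^0 exp(sigma*f/kT).
import numpy as np

def bell_invert(f1, k1, f2, k2, kT):
    """Return (k_r^0, sigma) from rupture rates k1, k2 at forces f1, f2."""
    sigma = kT * np.log(k2 / k1) / (f2 - f1)     # in nm if kT is in pN*nm
    k0 = k1 * np.exp(-sigma * f1 / kT)
    return k0, sigma

print(bell_invert(f1=40.0, k1=0.03, f2=80.0, k2=0.12, kT=6.0))  # placeholder inputs
```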
The parameter $\tau_d$ is an approximate estimate of the collapse time, $\tau_c$, from the stretched to the coiled state. Using direct simulations of the decay of the radius of gyration, $R_g$, starting from a rod-like conformation, we obtained $\tau_c$$\approx$$80ns$ (see Supplementary Information in [@44]). The value of $\tau_d$ ($\approx$$20ns$) is in reasonable agreement with the estimate of $\tau_c$. This exercise shows that reliable estimates of timescales of conformational dynamics, which are difficult to obtain, can be made using FCS. To ascertain the extent to which the estimate of $K_U$ agrees with independent calculations, we obtained $K_U$ by applying a constant force to unfold $S1$. The value of $K_U$, obtained by averaging over $200$ trajectories, is about $90ns$ at $f_S$$=$$40pN$, which is in rough accord with $K_U$$\approx$$\tau_d$$+$$k_r^{-1}$$\approx$$70ns$. This further validates the efficacy of FCS in obtaining the energy landscape of proteins. We also estimated $K_U^0$ from the value of $K_U$ obtained by direct simulation and the Bell model. The ${\bf f}_S$-dependent unfolding time $K_U$$\approx$$\tau_d$$+$$k_r^{-1}$ decreases with ${\bf f}_S$ in accord with Eq. (\[5.4\]). The prefactor ($K_U^0$) is about tenfold smaller than $k_r^0$. The difference may be either due to the failure of the assumption that $k_r^0$$=$$K_U^0$ or the breakdown of the Bell model [@45].
Discussion
==========
In this Section we summarize the main steps for practical implementation of the proposed Force Correlation Spectroscopy (FCS) to probe the energy landscape of proteins using forced unfolding of proteins.
[*Step 1. Evaluating the (re)folding timescale $\tau_F$:*]{} In the first phase of the FCS experiments, one needs to collect a series of histograms $P(T_n,t)$, $n$$=$$1,2,\ldots$, $N$ of unfolding times for increasing relaxation time $T_1$$<$$T_2$$<$$\ldots$$<$$T_N$ by repeated stretch-release experiments. This can be done by discarding the first unfolding time $t_1$ in the sequence of recorded unfolding times $\{ t_1, t_2, \ldots, t_M \}$ for each $T_n$ to guarantee that all the unfolding events are generated from the stretched states with the distribution $P(X_U)$ (see Eq. (\[2.1\])). This is a crucial element of the FCS methodology since it enables us to perform the averaging over the final (stretched) states. It is easier to resolve experimentally the end-to-end distance $X$$\approx$$L$, rather than the initial (folded) states in which a number of conformations belong to the NBA. The histograms are compared with $P(T^*,t)$ obtained for sufficiently long $T^*$$\gg$$\tau_F$. To ensure that $T^*$ exceeds $\tau_F$, $T^*$ can be as long as a few tens of minutes. The time at which $D(T_n)$, given by Eq. (\[2.3\]), is equal to zero can then be used to estimate $\tau_F$. Notice that our estimate of $\tau_F$ [*does not*]{} hinge on whether $P(T\to\infty;t)$$\equiv$$P(t)$ is Poissonian or not! Clearly, the choice of $T^*$ depends on the protein under study, and prior knowledge or bulk measurements of unfolding times observed under the influence of temperature jump or denaturing agents can serve as a guide to estimate the order of magnitude of $T^*$.
[*Step 2. Resolving the dynamics of the polypeptide chain:*]{} At this point we have determined the ensemble average (re)folding time, $\tau_F$. In the second phase of the FCS, we perform statistical analysis of the distribution of unfolding times collected at $T$$\ll$$\tau_F$, i.e. $P(T\ll \tau_F;t)$ (regime I in Section II). This allows us to probe the dynamic properties of the polypeptide chain, such as the protein persistence length $l_p$ and the protein dynamical timescale $\tau_d$ (see the Table). Indeed, by assuming a reasonable model for the conditional probability, $G(X',t;X)$, of the protein end-to-end distance and the distribution of the stretched states, $P(X_U)$, $l_p$ and $\tau_d$ can be determined from the fit (either analytically or numerically) of the unfolding time distribution, $P(T\ll\tau_F;t)$, given by Eq. (\[2.1\]), to the histogram of unfolding times collected for $T$$\ll$$\tau_F$. To illustrate the utility of the FCS, in the present work we assumed a Gaussian profile for $G_{S,Q}(X',t;X)$ (see Eq. (\[4.2\])) and the worm-like-chain model for the polypeptide chain. The general formula (\[2.1\]) allows for the use of more sophisticated models of $X$, should it become necessary. Recent single molecule FRET experiments on proteins [@46; @47], dsDNA, ssDNA, and RNA [@48] show, surprisingly, that the characteristics of unfolded states obey worm-like chain models. Moreover, all the data in forced unfolding of proteins have been analyzed using WLC models. Thus, the analysis of FCS data using WLC dynamics for unfolded polypeptide chains is to a large extent justified. $G_S(X',t;X)$ and $G_Q(X',t;X)$ can be “measured” in the current AFM and LOT experiments by computing the frequency of occurrence of the event $X$ after the forced stretch (${\bf f}$$=$${\bf f}_S$) or force quench (${\bf f}$$=$${\bf f}_Q$) from the well-controlled partially stretched state $X$ or the fully stretched state $X$$\approx$$L$ of the chain, respectively, over time $t$ ($\ll$$\tau_F$).
[*Step 3. Probing the kinetics of rupture of the protein native contacts:*]{} Having resolved the dynamics of the protein in extension-time regime, where the number of native interactions that stabilize the native state is small, we can resolve the kinetics of forced rupture of native interactions stabilizing the NBA (regime II). In the third part of the FCS we analyze the distribution of unfolding times for $T$$\gg$$\tau_F$, given by Eq. (\[2.2\]). We use the knowledge about the propagator $G_S(X',t';X,t)$, appearing in the rhs of Eq. (\[2.2\]), obtained in [*Step 2*]{} to perform analytical or numerical fit of the distribution $P(T\gg \tau_F;t)$ to the histogram of unfolding times collected for $T$$\gg$$\tau_F$. The new information, gathered in [*Step 3*]{}, sheds light on the kinetics of native interactions stabilizing the NBA, which is contained in the probability $P_F(t;{\bf f}_{S},X_F)$ (see Eq. (\[2.2\])). For convenience, we used the continuous time random walk (CTRW) model for $P_F(t;{\bf f}_{S},X_F)$, which is summarized in Eqs. (\[3.3\])-(\[3.5\]), and the assumption of separability, given by Eqs. (\[3.2\]). CTRW reduces to the Poissonian kinetics with the rate constants when the waiting time distribution function for the rupture of native contacts, $\Psi_r(t)$, is an exponential function of $t$. The CTRW probes the possible deviations of the kinetics of $P_F(t;{\bf f}_{S},X_F)$ from the Poisson process and allows us to test different functional forms for $\Psi_r(t)$. In the simplest implementation of CTRW utilized in the present work, $\Psi_r(t)$ is assumed to be an algebraic function of $t$, given by Eqs. (\[5.1\]), which allows us to estimate the rate of rupture of native interactions, $k_r$, and parameter $v_r$ quantifying the deviations of the rupture kinetics from a Poissonian process. Furthermore, by repeating [*Step 3*]{} for different values of the stretching force, $f_S$, and assuming the Bell model for $k_r(f_S)$, given by Eq. (\[5.4\]), we can also estimate the force-free rupture rate, $k_r^0$, and the critical extension $\sigma$, which quantifies the distance from the NBA to the transition state along the direction of $f_S$. We also obtain the average end-to-end distance in the folded state, $\langle X_F\rangle$ from the distribution of the native states $P_{eq}(X_F)$.
[*Step 4. Resolving the kinetics of formation of native interactions:*]{} In the final step the distributions $P(T\ll \tau_F;t)$ and $P(T\gg \tau_F;t)$, analyzed in [*Steps 2*]{} and [*3*]{} respectively, are used to form a linear superposition (Eq. (\[2.4\]), regime III). The $T$-dependent weights are given by the probabilities $\rho_C(T)$ and $\rho_F(T)$$=$$1$$-$$\rho_C(T)$, respectively. This superposition is used to fit the histogram of unfolding times, $P(T\sim \tau_F;t)$, collected for $T$$\sim$$\tau_F$. The estimated probability $\rho_F(T)$ should then be matched with the probability obtained by performing double integration in Eq. (\[2.5\]). This allows us to probe the kinetics of formation of native contacts, $P_C(T;X,{\bf f}_Q)$, for the known propagator $G_Q(X',T;X)$ analyzed in [*Step 2*]{}. As in the case of $P_F(t;{\bf f}_{S},X_F)$, we assumed separability condition for $P_C(t;{\bf f}_{Q},X_C)$ (Eqs. (\[3.2\])) and CTRW for the kinetics of formation of native contacts contained in $P_f(t;{\bf f}_Q)$ (see Eqs. (\[3.3\])-(\[3.5\])). A simple algebraic form for the waiting time distribution function, $\Psi_f(t)$, given by Eq. (\[5.1\]), allows us to estimate the force-free rate of formation of native interactions, $k_f(f_Q=0)$$=$$k_f^0$. Moreover, the heterogeneity of the protein folding pathways can be assessed by analyzing the width, $\Delta X_C$, of the distribution of coiled protein states, $P_C(X)$, centered around the average end-to-end distance, $\langle X_C\rangle$ (see Eq. (\[5.3\])). Similar to the analysis of rupture kinetics, [*Step 4*]{} could be repeated for two values of the quenched force, $f_Q$, to yield the force-free rate of formation of native contacts, stabilizing the native fold, and the distance between $\langle X_C\rangle$ and the transition state for the formation of native contacts. For the purposes of illustration, in the present work we used $f_Q$$=$$0$.
At the minimum FCS can be used to obtain a model-independent estimate of $\tau_F$. By assuming a WLC description for coiled states, which is justified in light of a number of FRET and forced unfolding experiments, estimates of collapse times and their distribution as well as the persistence length can be obtained. If the CTRW model is assumed, then estimates of the timescales for rupture and formation of native contacts can be made. The utility of FCS for $S1$ illustrates the efficacy of the theory. The potential of obtaining hitherto unavailable information makes FCS extremely useful.
Conclusions
===========
In this paper we have developed a theory to describe the role of internal relaxation of polypeptide chains in the dynamics of single molecule force-induced unfolding and force-quench refolding. To probe the effect of dynamics of the chain in the compact manifold of states, that are populated in the pathways to the NBA starting from the stretched conformations, we propose using a series of stretch-release cycles. In this new class of single molecule experiments, referred to as force correlation spectroscopy (FCS), the duration of release times ($T$) is varied. FCS is equivalent to conventional mechanical unfolding experiments in the limit $T$$\to$$\infty$. By applying our theory to a model $\beta$-sheet protein we have shown that the parameters that characterize the energy landscape of proteins can be obtained using the joint distribution function of unfolding times $P(T;t)$.
The experimentally controllable parameters are ${\bf f}_{S}$, ${\bf f}_{Q}$, and $T$. In our illustrative example, we used values of ${\bf f}_{S}$ that are approximately $(2-4)$ times greater than the equilibrium unfolding force. We set ${\bf f}_{Q}$$=$$0$, which is difficult to realize in experiments. From the schematic energy landscape in Figure 1 it is clear that the profiles corresponding to the positions of the manifold $\{ C\}$, the dynamics of $\{ C\}$, and the transition state location and barrier height depend on ${\bf f}_{Q}$. The simple application, used here for proof of principle purposes only, already illustrates the power of FCS. By using FCS over a broader range of ${\bf f}_{S}$ and ${\bf f}_{Q}$, a complete characterization of the energy landscape of $S1$ can be made. The experiments that we propose based on the new theoretical development can be readily performed using presently available technology. Indeed, the pioneering experimental setup of Fernandez and Li [@13a], which utilized force to initiate refolding, can be readily adapted to perform single molecule FCS.
It is known that even for proteins that fold in an apparent two-state manner the energy landscape is rough [@12b]. The scale of roughness $\Delta E$ can be measured in conventional AFM experiments by varying temperature. The extent to which $\Delta E$, whose value is between $(2-5)$$k_B T$ [@50; @51], affects the internal dynamics of proteins during force-quenched refolding is hard to predict. These subtle effects of the energy landscape can be resolved (in principle) using FCS in which temperature is also varied.
Calculation of $\langle X(t)X(0)\rangle $
==========================================
In this Appendix we outline the calculation of $\langle X(t)X(0)\rangle $ and $\langle X^2\rangle $ for the force-free propagator $G_0(X,t;X_0)$. By using Eq. (\[4.1\]) (without the last term) and applying the least action principle to the WLC Lagrangian $L$$=$$m/2$$\int_{-L/2}^{L/2}ds$$(\partial{\bf r}/\partial t)^2$$-$$H$, we obtain: $m$${{\partial^2}\over {\partial t^2}}$${\bf r}(s,t)$$+$$\epsilon$${{\partial^4} \over {\partial s^4}}$ ${\bf r}(s,t)$$-$$2$$\nu$${{\partial^2}\over {\partial s^2}}$ ${\bf r}(s,t)$$=$$0$, where $m$ is the protein segment mass and $\epsilon$$=$${{3l_p k_B T}/ {4}}$, $\nu$$=$${{3 k_B T}/{2l_p}}$. The dynamics of the medium is taken into account by including a stochastic force ${\bf f}(s,t)$ with white noise statistics, $\langle{f}_{\alpha} (s,t) \rangle$$=$$0$, $\langle{f}_{\alpha}(s,t){f}_{\beta}(s',t')\rangle$$=$ $2$$\gamma$$k_BT$$\delta_{\alpha \beta}$$\delta(s-s')$$\delta(t-t')$, where $\alpha$$=$$x$, $y$, $z$, and $\gamma $ is the friction coefficient per unit coil length. In the overdamped limit, the equation of motion for ${\bf r}(s,t)$ is [@41; @41a] $$\label{4.3}
\gamma {{\partial}\over {\partial t}}{\bf r}(s,t) + \epsilon
{{\partial^4} \over {\partial s^4}}{\bf r}(s,t) -2\nu
{{\partial^2}\over {\partial s^2}}{\bf r}(s,t) = {\bf f}(s,t)$$ with the boundary conditions, $$\label{4.4}
\left[ 2\nu {{\partial}\over {\partial s}}{\bf r}(s,t)-\epsilon
{{\partial^3} \over {\partial s^3}}{\bf r}(s,t) \right]_{\pm L/2} = 0,
\qquad \left[ 2\nu_0 {{\partial}\over {\partial s}}{\bf
r}(s,t)+\epsilon {{\partial^2} \over {\partial s^2}}{\bf r}(s,t)
\right]_{\pm L/2} = 0$$ where $\nu_0$$=$$3k_B T/4$. We solve Eq. (\[4.3\]) by expanding ${\bf r}(s,t)$ and ${\bf f}(s,t)$ in a complete set of orthonormal eigenfunctions $\{\psi_n(s)\}$, i.e. $$\label{A.1}
{\bf r}(s,t) = \sum_{n=0}^{\infty} {\bf \xi}_n (t) \psi_n(s)\quad
\text{and} \quad {\bf f}(s,t) = \sum_{n=0}^{\infty} {\bf f}_n (t)
\psi_n(s)$$ Substituting Eqs. (\[A.1\]) into Eq. (\[4.3\]) and separating variables we obtain: $$\label{A.2}
\epsilon {{d^4}\over {ds^4}}\psi_n(s)-2\nu {{d^2}\over
{ds^2}}\psi_n(s) = z_n \psi_n(s) \quad \text{and} \quad \gamma
{{d}\over {dt}}{\bf \xi}_n(t) + z_n {\bf \xi}_n (t) = {\bf f}_n(t)$$ where $z_n$ is the $n$-th eigenvalue. The second Eq. (\[A.2\]) for ${\bf \xi}(t)$ is solved by $$\label{A.3}
{\bf \xi}_n(t)={{1}\over {\gamma}}\int_{-\infty}^{t}dt' {\bf f}_n(t')
\exp{\left[-{{(t-t')z_n}\over {\gamma}}\right]}$$ and the eigenfunctions $\psi_n(s)$ are $$\begin{aligned}
\label{A.4}
\psi_0 & = & \sqrt{{{1}/ {L}}}\\\nonumber \psi_n(s) & = &
\sqrt{{{c_n}/ {L}}}\left( {{\alpha_n}\over {\cos{[\alpha_n L/2]}}}
\sin{[\alpha_n s]} + {{\beta_n}\over {\cosh[\beta_n L/2]}}
\sinh{[\beta_n s]}\right), n=1,3,\ldots , 2q+1 \\\nonumber \psi_n(s)
& = & \sqrt{{{c_n}/ {L}}}\left( -{{\alpha_n}\over {\sin{[\alpha_n
L/2]}}} \cos{[\alpha_n s]} +{{\beta_n}\over {\sinh{[\beta_n
L/2]}}}\cosh{[\beta_n s]}\right), n=2,4,\ldots , 2q\end{aligned}$$ where $c_n$’s are the normalization constants; $\alpha_n$ and $\beta_n$ are determined from Eqs. (\[4.4\]), $$\begin{aligned}
\label{A.5}
& \alpha_n & \sin{[\alpha_n L/2]}\cosh{[\beta_n L/2]}-\beta_n^3
\cos{[\alpha_n L/2]} \sinh{[\beta_n L/2]} \\\nonumber & - & {{1}\over
{l_p}}(\alpha_n^2+\beta_n^2)\cos{[\alpha_n L/2]}\cosh{[\beta_n
L/2]}=0, \quad n=1,3, \ldots , 2q+1 \\\nonumber & \alpha_n &
\cos{[\alpha_n L/2]}\sinh{[\beta_n L/2]}+\beta_n^3 \sin{[\alpha_n
L/2]} \cosh{[\beta_n L/2]} \\\nonumber & + & {{1}\over
{l_p}}(\alpha_n^2+\beta_n^2)\sin{[\alpha_n L/2]}\sinh{[\beta_n
L/2]}=0, \quad n=2,4, \ldots , 2q\end{aligned}$$ The parameters $\alpha_n$ and $\beta_n$ are related as $\beta_n^2-\alpha_n^2 = {{1}\over {l_p^2}}$. The eigenvalues $z_n$ are given by $z_n$$=$$\epsilon$$\alpha_n^4$$+$$2$$\nu$$\alpha_n^2$. Using Eqs. (\[A.1\]) and (\[A.3\]), we obtain: $\langle {\bf r}(s,t){\bf r}(s',0)\rangle=3k_B T\sum_{n=0}^{\infty} {{1}\over {z_n}}
\psi_n(s)\psi_n(s')e^{-z_nt/\gamma}$. Then, $\langle X(t)X(0)\rangle$$=$ $\langle{\bf r}({{L}\over
{2}},t){\bf r}({{L}\over {2}},0)\rangle$$+$ $\langle{\bf r}(-{{L}\over
{2}},t){\bf r}(-{{L}\over {2}},0)\rangle$$-$ $\langle{\bf r}({{L}\over
{2}},t){\bf r}(-{{L}\over {2}},0)\rangle$$-$ $\langle{\bf
r}(-{{L}\over {2}},t){\bf r}({{L}\over {2}},0)\rangle$, which yields Eq. (\[4.5\]).
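Numerically, the odd-mode eigenvalues can be obtained by bracketing the roots of the first condition in Eq. (\[A.5\]) on a grid in $\alpha$ and refining them with a standard root finder, using $\beta_n=(\alpha_n^2+1/l_p^2)^{1/2}$. The sketch below transcribes that condition as printed; the parameter values passed in the example are placeholders.

```python
# Sketch: odd-mode eigenvalues z_n = eps*alpha_n^4 + 2*nu*alpha_n^2 from the
# first (odd-n) condition of Eq. (A.5).  L, lp and kT are placeholders given
# in units of a and eps_h, respectively.
import numpy as np
from scipy.optimize import brentq

def odd_char(alpha, L, lp):
    beta = np.sqrt(alpha**2 + 1.0 / lp**2)
    return (alpha * np.sin(alpha * L / 2) * np.cosh(beta * L / 2)
            - beta**3 * np.cos(alpha * L / 2) * np.sinh(beta * L / 2)
            - (alpha**2 + beta**2) / lp * np.cos(alpha * L / 2) * np.cosh(beta * L / 2))

def odd_eigenvalues(L, lp, kT, n_roots=5):
    eps, nu = 3.0 * lp * kT / 4.0, 3.0 * kT / (2.0 * lp)
    grid = np.linspace(1e-6, 4.0 * np.pi * n_roots / L, 4000)
    vals = odd_char(grid, L, lp)
    roots = [brentq(odd_char, a, b, args=(L, lp))
             for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
             if va * vb < 0][:n_roots]
    return [eps * r**4 + 2.0 * nu * r**2 for r in roots]

print(odd_eigenvalues(L=46.0, lp=1.26, kT=0.692))
```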
[99]{}
Labeit, S., and B. Kolmerer. 1995. Titins: Giant proteins in charge of muscle ultrastructure and elasticity. [*Science*]{} 270: 293-296.
Minaeva, A., M. Kulke, J. M. Fernandez and W. A. Linke. 2001. Unfolding of titin domains explains the viscoelastic behavior of skeletal myofibrils. [*Biophys. J.*]{} 80: 1442-1451.
Ohashi, T., D. P. Kiehart and H. P. Erickson. 1999. Dynamics and elasticity of the fibronectin matrix in living cell culture visualized by fibronectin-green fluorescent protein. [*Proc. Natl. Acad. Sci. USA*]{} 96: 2153-2158.
Marshall, B. T., M. Long, J. W. Piper, T. Yago, R. P. McEver and C. Zhu. 2003. Direct observation of catch bonds involving cell-adhesion molecules. [*Nature*]{} 423: 190-193.
Evans, E., A. Leung, D. Hammer and S. Simon. 2001. Chemically distinct transition states govern rapid dissociation of single L-selectin bonds under force. [*Proc. Natl. Acad. Sci. USA*]{} 98: 3784-3789.
Barsegov, V., and D. Thirumalai. 2005. Dynamics of unbinding of cell adhesion molecules: Transition form catch to slip bonds. [*Proc. Natl. Acad. Sci. USA*]{} 102: 1835-1839.
Barsegov, V., D. Klimov and D. Thirumalai. Langevin simulation studies of the transition from catch to slip bonds using coarse-grained protein models (manuscript in preparation).
Chicurel, M. E., C. S. Chen and D. E. Ingber. 1998. Cellular control lies in the balance of forces. [*Curr. Opin. Cell Biol.*]{} 10: 232-239.
Henrickson, S. E., M. Misakian, B. Robertson and J. J. Kasianowicz. 2000. Driven DNA transport into an asymmetric nanometer-scale pore. [*Phys. Rev. Lett.*]{} 85: 3057-3060.
Sung, W., and P. J. Park. 1996. Polymer translocation through a pore in a membrane. [*Phys. Rev. Lett.*]{} 77: 783-786.
Muthukumar, M. 2003. Polymer escape through a nanopore. [*J. Chem. Phys.*]{} 118: 5174-5184.
Liphardt, G., D. Smith and C. Bustamante. 2000. Single-molecule studies of DNA mechanics. [*Curr. Opin. Struct. Biol.*]{} 10: 279-285.
Allemand, J.-F., D. Bensimon and V. Croquette. 2003. Stretching DNA and RNA to probe their interactions with proteins. [*Curr. Opin. Struct. Biol.*]{} 13: 266-274.
Zhuang, X., L. E. Bartley, H. P. Babcock, R. Russel, T. Ha, D. Herschlag and S. Chu. 2000. A single molecule study of RNA catalysis and folding. [*Science*]{} 288: 2048-2051.
Rief, M., H. Clausen-Shaunman and H. E. Gaub. 1999. Sequence-dependent mechanics of single DNA molecules. [*Nat. Struct. Biol.*]{} 6: 346-349.
Chang, K. C., D. F. Tees and D. A. Hammer. 2000. The state diagram for cell adhesion under flow: Leukocyte rolling and firm adhesion. [*Proc. Natl. Acad. Sci. USA*]{} 97: 11262-11267.
Weisel, J. W., H. Shuman and R. I. Litvinov. 2003. Protein-protein unbinding induced by force: single molecule studies. [*Curr. Opin. Struct. Biol.*]{} 13: 227-235.
Liphardt, J., S. Dumont, S. B. Smith, I. Tinoco Jr. and C. Bustamante. 2002. Equilibrium information from nonequilibrium measurements in an experimental test of Jarzynski’s equality. [*Science*]{} 296: 1832-1835.
Bartolo, D., I. Derenyi and A. Ajdari. 2002. Dynamic response of adhesion complexes: Beyond the single path picture. [*Phys. Rev. E*]{} 65: 051910-051913.
Hummer, G., and A. Szabo. 2001. Free energy reconstruction from nonequilibrium single-molecule pulling experiments. [*Proc. Natl. Acad. Sci. USA*]{} 98: 3658-3661.
Lapidus, L. J., W. A. Eaton and J. Hofrichter. 2000. Measuring the rate of intramolecular contact formation in polypeptides. [*Proc. Natl. Acad. Sci. USA*]{} 97: 7220-7225.
Hyeon, C., and D. Thirumalai. 2003. Can energy landscape roughness of proteins and RNA be measured by using mechanical unfolding experiments? [*Proc. Natl. Acad. Sci. USA*]{} 100: 10249-10253.
Oberhauser, A. F., H. P. Erickson and J. M. Fernandez. 1998. The molecular elasticity of the extracellular matrix protein tenascin. [*Nature*]{} 393: 181-185.
Fernandez, J. M., and H. Li. 2004. Force-clamp spectroscopy monitors the folding trajectory of a single protein. [*Science*]{} 303: 1674-1678.
Rief, M., J. Pascual, M. Saraste and H. E. Gaub. 1999. Single molecule force spectroscopy of spectrin repeats: low unfolding forces in helix bundles. [*J. Mol. Biol.*]{} 286: 553-561.
Yang, G., C. Cecconi, W. A. Baase, I. R. Vetter, W. A. Breyer, J. A. Haack, B. W. Matthews, F. W. Dahlquist and C. Bustamante. 2000. Solid-state synthesis and mechanical unfolding of polymers of T4 lysozyme. [*Proc. Natl. Acad. Sci. USA*]{} 97: 139-144.
Craig, D., A. Krammer, K. Schulten and V. Vogel. 2001. Comparison of the early stages of forced unfolding for fibronectin type III modules. [*Proc. Natl. Acad. Sci. USA*]{} 98: 5590-5595.
Carrion-Vazquez, M., A. F. Oberhauser, S. B. Fowler, P. E. Marszalek, S. E. Broedel, J. Clarke and J. M. Fernandez. 1999. Mechanical and chemical unfolding of a single protein: A comparison. [*Proc. Natl. Acad. Sci. USA*]{} 96: 3694-3699.
Scott, K. A., A. Steward, S. B. Fowler and J. Clarke. 2002. Titin: a multidomain protein that behaves as the sum of its parts. [*J. Mol. Biol.*]{} 315: 819-829.
Litvinovich, S. V., and K. C. Ingham. 1995. Interactions between type III domains in the 110 kDa cell-binding fragment of fibronectin. [*J. Mol. Biol.*]{} 248: 611-626.
Krammer, A., H. Lu, B. Isralewitz, K. Schulten and V. Vogel. 1999. Forced unfolding of the fibronectin type III module reveals a tensile molecular recognition switch. [*Proc. Natl. Acad. Sci. USA*]{} 96: 1351-1356.
Isralewitz, B., M. Gao and K. Schulten. 2001. Steered molecular dynamics and mechanical functions of proteins. [*Curr. Opin. Struct. Biol.*]{} 11: 224-230.
Cieplak, M., T. X. Hoang, M. O. Robbins. 2004. Thermal effects in stretching of Go-like models of titin and secondary structures. [*Proteins*]{} 56: 285-297.
Makarov, D. E., Z. Wang, J. B. Thompson and H. G. Hansma. 2002. On the interpretation of force extension curves of single protein molecules. [*J. Chem. Phys.*]{} 116: 7760-7765.
Hyeon, C., and D. Thirumalai. 2005. Chemical theory and computation special feature: Mechanical unfolding of RNA hairpins. [*Proc. Natl. Acad. Sci. USA*]{} 102: 6789-6794.
Best, R. B., and G. Hummer. 2005. Comment on “Force-clamp spectroscopy monitors the folding trajectory of a single protein”. [*Science*]{} 308: 498.
Barsegov, V., and D. Thirumalai. 2005. Probing protein-protein interactions by dynamic force correlation spectroscopy. [*Phys. Rev. Lett.*]{} 95: 168301-168305.
Hyeon, C., R. I. Dima and D. Thirumalai. Size, shape, and flexibility of RNA structures. [*J. Mol. Biol.*]{} (submitted).
Honeycutt, J. D., and D. Thirumalai. 1990. Metastability of the folded states of globular proteins. [*Proc. Natl. Acad. Sci. USA*]{} 87: 3526-3529.
Klimov, D. K., and D. Thirumalai. 2000. Native topology determines force-induced unfolding pathways in globular proteins. [*Proc. Natl. Acad. Sci. USA*]{} 97: 7254-7259.
Veitshans, T., D. K. Klimov and D. Thirumalai. 1997. Protein folding kinetics: timescales, pathways and energy landscapes in terms of sequence-dependent properties. [*Folding Des.*]{} 2: 1-22.
Jackson, S. E. 1998. How do small single-domain proteins fold. [*Folding and Design*]{} 3: R81-R91.
Montroll, E. W., and H. Scher. 1975. Anomalous transit-time dispersion in amorphous solids. [*Phys. Rev. B*]{} 12: 2455-2477.
Scher, H., and M. Lax. 1973. Stochastic transport in a disordered solid. I. Theory. [*Phys. Rev. B*]{} 7: 4491-4502.
Barsegov, V., Y. Shapir and S. Mukamel. 2003. One-dimensional transport with dynamic disorder. [*Phys. Rev. E*]{} 68: 011101-011114.
Barsegov, V., and S. Mukamel. 2002. Probing single molecule kinetics by photon arrival trajectories. [*J. Chem. Phys.*]{} 116: 9802-9810.
Barsegov, V., and S. Mukamel. 2004. Multipoint fluorescence quenching-time statistics for single molecules with anomalous diffusion. [*J. Phys. Chem. A*]{} 108: 15-24.
Harnau, L., R. G. Winkler and P. Reineker. 1995. Dynamic properties of molecular chains with variable stiffness. [*J. Chem. Phys.*]{} 102: 7750-7757.
Dua, A., and B. J. Cherayil. 2002. The dynamics of chain closure in semiflexible polymers. [*J. Chem. Phys.*]{} 116: 399-409.
Lapidus, L. J., P. S. Steinbach, W. A. Eaton, A. Szabo and J. Hofrichter. 2002. Effects of chain stiffness on the dynamics of loop formation in polypeptides. Appendix: Testing a 1-dimensional diffusion model for peptide dynamics. [*J. Phys. Chem. B*]{} 106: 11628-11640.
Rief, M., M. Gautel, F. Oesterhelt, J. M. Fernandez and H. E. Gaub. 1997. Reversible unfolding of individual titin immunoglobulin domains by AFM. [*Science*]{} 276: 1109-1112.
Klimov, D. K., and D. Thirumalai. 1999. Stretching single-domain proteins: Phase diagram and kinetics of force-induced unfolding. [*Proc. Natl. Acad. Sci. USA*]{} 96: 6166-6170.
Li, M. S., C. K. Hu, D. Klimov, D. Thirumalai. Multiple stepwise refolding of immunoglobulin domain I27 upon force quench depends on initial conditions. [*Proc. Natl. Acad. Sci. USA*]{} (in press).
Hummer, G., and A. Szabo. 2003. Kinetics from nonequilibrium single-molecule pulling experiments. [*Biophys. J.*]{} 85: 5-15.
Schuler, B., E. A. Lipman, P. J. Steinbach, M. Kumke and W. A. Eaton. 2005. Polyproline and the “spectroscopic ruler” revisited with single-molecule fluorescence. [*Proc. Natl. Acad. Sci. USA*]{} 102: 2754-2759.
Laurence, T. A., X. Kong, M. Jaeger and S. Weiss. 2005. Probing structural heterogeneities and fluctuations of nucleic acids and denatured proteins. [*Proc. Natl. Acad. Sci. USA*]{} 102: 17348-17353.
Caliskan, G., C. Hyeon, U. Perez-Salas, R. M. Briber, S. A. Woodson and D. Thirumalai. Persistence length changes dramatically as RNA folds. [*Phys. Rev. Lett.*]{} (in press).
Thirumalai, D., and S. A. Woodson. 1996. Kinetics of folding of proteins and RNA. [*Acc. Chem. Res.*]{} 29: 433-439.
Nevo, R., V. Brumfeld, R. Kapon, P. Hinterdorfer and Z. Reich. 2005. Direct measurement of protein energy landscape roughness. [*EMBO Reports*]{} 6: 482-486.
$f_S$, $pN$[^2] $l_p/a$[^3] $\tau_d$, $ns$[^4] $k_r$, $1/ns$[^5] $\nu_r$[^6] $\langle X_F\rangle /a$[^7] $\Delta X_F /a$[^8] $k_f$, $1/ns$[^9] $\nu_f$[^10] $\langle X_C\rangle /a$[^11] $\Delta X_C /a$[^12]
----------------- ------------- -------------------- ------------------- ------------- ----------------------------- --------------------- ------------------- -------------- ------------------------------ ----------------------
40 1.2 19.6 0.02 6.9 4.5 6.4 0.26 1.1 4.8 2.2
80 1.1 15.2 0.11 5.1 4.6 6.7 0.25 1.1 4.7 2.2
: Energy landscape parameters for $S1$ extracted from FCS.
**FIGURE CAPTIONS** {#figure-captions .unnumbered}
===================
[**Figure 1.**]{} $a$: A typical AFM setup: constant force ${\bf f}$$=$${\bf f}_{S}$$=$$f_S$${\bf x}$ is applied through the cantilever tip linker in the direction ${\bf x}$ parallel to the protein end-to-end vector ${\bf X}$. Stretching cycles are interrupted by relaxation intervals $T$ during which the force is quenched, ${\bf f}$$=$${\bf f}_{Q}$$=$$f_Q$${\bf x}$ ($f_S$$>$$f_Q$). $b$: A single trajectory of forced unfolding times $t_1$, $t_2$, $t_3$, $\ldots$, separated by fixed relaxation time $T$, during which the unfolded protein can either collapse into the manifold of coiled states $\{ C \}$ if $T$ is short or reach the native basin of attraction (NBA) if $T$ is long.
[**Figure 2.**]{} Schematic of the free energy profile of a protein (red) upon stretching at constant force ${\bf f}_S$ and force-quench ${\bf f}_Q$. (a): The projection of the energy landscape (blue) is along ${\bf X}$, which is a suitable reaction coordinate for unfolding induced by force ${\bf f}_S$. The average end-to-end distance in the native basin of attraction is $\langle X_F \rangle$. Upon application of ${\bf f}_{S}$, rupture of contacts that stabilize the folded state $F$ results in the formation of an ensemble of high energy extended (by $\Delta X_F$) conformations $\{ I \}$. Subsequently, transitions to the globally unfolded state $U$ (with $L-\delta$$\le$$X$$\le$$L$) occur. (b): Free energy profile for force-quench refolding, which occurs in the order $U$$\to$$\{ C \}$$\to$$F$. Refolding is initiated by quenching the force ${\bf f}_S$$\to$${\bf f}_Q$$<$${\bf f}_C$, where ${\bf f}_C$ is the equilibrium critical force needed to unfold the native protein. The initial event in the process is the formation of an ensemble of compact structures. The mean end-to-end distance of $\{ C \}$ is $\langle X_C\rangle$ and the width is $\Delta X_C$, which is a measure of the heterogeneity of the refolding pathways. These states may or may not end up in the native basin of attraction (NBA) depending on the duration of $T$. We have used ${\bf X}$ as a reaction coordinate during force-quench for purposes of illustration only.
[**Figure 3.**]{} Native structure of the model protein $S1$. The model polypeptide chain has a $\beta$-sheet architecture of the native state. The $\beta$-strands of the model chain are formed by native contacts between hydrophobic residues (given by blue balls). The hydrophilic residues are shown by red balls and the residues forming the turn regions are given in grey.
[**Figure 4.**]{} A single unfolding-refolding trajectory of the end-to-end distance $X/a$ (black) and the total number of native contacts $Q$ (red) as a function of time $t$ for $S1$. The trajectory is obtained by repeated application of stretch-quench cycles with stretching force $f_{S}$$=$$80 pN$ and quenched force $f_Q$$=$$0$. The durations of the stretching cycle and relaxation period are $30ns$ and $90ns$, respectively. The first five unfolding events corresponding to large $X/a$ and small $Q$ are marked explicitly by numbers $1$, $2$, $3$, $4$ and $5$. Force stretch and force quench for the stretch-quench cycles $13$, $14$, $15$, $16$ and $17$ (middle panel) are denoted by solid green and dash-dotted blue arrows.
[**Figure 5.**]{} Typical unfolding-refolding trajectories of $X/a$ (black) and $Q$ (red) for $S1$ as functions of time $t$, simulated by applying four stretch-quench cycles at the pulling force $f_{S}$$=$$40 pN$ and quenched force $f_Q$$=$$0$. The duration of the relaxation period is $T$$=$$102ns$.
[**Figure 6.**]{} Examples of unfolding-refolding trajectories of $X/a$ (black) and $Q$ (red) for $S1$ as functions of time $t$. The pulling force is $f_{S}$$=$$40 pN$ and the quenched force is $f_Q$$=$$0$. The duration of the relaxation period is $T$$=$$150ns$.
[**Figure 7.**]{} Same as Figure 6 except $T$$=$$240ns$.
[**Figure 8.**]{} Histograms of forced unfolding times $P(t)$ and the joint distributions of unfolding times separated by relaxation periods of the quenched force $P(T,t)$. The distribution functions are constructed from single unfolding-refolding trajectories of $S1$ simulated in stretch-quench cycles of $f_S$$=$$80pN$ and $f_Q$$=$$0$ for $T$$=$$15 ns$, $48 ns$ and $86 ns$. Simulated distributions are shown by red bars with the contribution to global unfolding events from coiled conformations $\{ C \}$ indicated by an arrow for $T$$=$$86 ns$. The results of the numerical fits obtained by using Eqs. (\[2.1\])-(\[2.5\]) are represented by solid lines. The energy landscape parameters of $S1$ are summarized in the Table.
[**Figure 9.**]{} Histograms of forced unfolding times $P(t)$ and $P(T,t)$ constructed from single unfolding-refolding trajectories for $S1$. The stretch-quench cycles were simulated with $f_S$$=$$40pN$ and $f_Q$$=$$0$ for $T$$=$$24 ns$, $54 ns$ and $102 ns$. Simulated distributions are shown by red bars with the contribution to global unfolding events from coiled conformations $\{ C \}$ indicated by an arrow for $T$$=$$102 ns$. The results of the numerical fits obtained by using Eqs. (\[2.1\])-(\[2.5\]) are represented by solid lines. The values of the parameters are given in the Table.
[^1]: Corresponding author phone: 301-405-4803; fax: 301-314-9404; thirum@glue.umd.edu
[^2]: $f_S$ is the magnitude of the stretching force
[^3]: $l_p$ is the persistence length of $S1$ in the coiled state (Eq. (\[4.1\])) measured in units of $a$ ($\approx$$\AA$)
[^4]: $\tau_d$ is the $f_Q$-dependent longest relaxation time in the coil state (Eq. (\[4.5\]))
[^5]: $k_r$ ($k_f$) is the rate of rupture (formation) of native interactions (Eq. (\[5.1\])) and is a function of $f_S$ ($f_Q$)
[^6]: $\nu_{r}$ ($\nu_f$) quantifies deviations of the native contacts rupture (formation) kinetics from the Poisson process
[^7]: $\langle X_F \rangle$ ($\langle X_C \rangle$) is the average end-to-end distance of $S1$ in the NBA (manifold $\{ C \}$) (Figure 2(b), Eqs. (\[5.2\])-(\[5.3\]))
[^8]: $\Delta X_F$ is the extension of the chain prior to rupture of all native contacts (Figure 2(a) and Eq. (\[2.2\]))
[^9]: $k_r$ ($k_f$) is the rate of rupture (formation) of native interactions (Eq. (\[5.1\])) and is a function of $f_S$ ($f_Q$)
[^10]: $\nu_{r}$ ($\nu_f$) quantifies deviations of the native contacts rupture (formation) kinetics from the Poisson process
[^11]: $\langle X_F \rangle$ ($\langle X_C \rangle$) is the average end-to-end distance of $S1$ in the NBA (manifold $\{ C \}$) (Figure 2(b), Eqs. (\[5.2\])-(\[5.3\]))
[^12]: $\Delta X_C$ is the width of the distribution of coiled states of $S1$ (Eq. (\[5.4\])), a measure of the refolding heterogeneity
---
abstract: 'An equiangular tight frame (ETF) is a type of optimal packing of lines in a real or complex Hilbert space. In the complex case, the existence of an ETF of a given size remains an open problem in many cases. In this paper, we observe that many of the known constructions of ETFs are of one of two types. We further provide a new method for combining a given ETF of one of these two types with an appropriate group divisible design (GDD) in order to produce a larger ETF of the same type. By applying this method to known families of ETFs and GDDs, we obtain several new infinite families of ETFs. The real instances of these ETFs correspond to several new infinite families of strongly regular graphs. Our approach was inspired by a seminal paper of Davis and Jedwab which both unified and generalized McFarland and Spence difference sets. We provide combinatorial analogs of their algebraic results, unifying Steiner ETFs with hyperoval ETFs and Tremain ETFs.'
address:
- 'Department of Mathematics and Statistics, Air Force Institute of Technology, Wright-Patterson AFB, OH 45433'
- 'Department of Mathematics and Statistics, South Dakota State University, Brookings, SD 57007'
author:
- Matthew Fickus
- John Jasper
title: Equiangular tight frames from group divisible designs
---
equiangular tight frames, group divisible designs 42C15
Introduction
============
Let $N\geq D$ be positive integers, let ${\mathbb{F}}$ be either ${\mathbb{R}}$ or ${\mathbb{C}}$, and let ${\langle{{\mathbf{x}}_1},{{\mathbf{x}}_2}\rangle}={\mathbf{x}}_1^*{\mathbf{x}}_2^{}$ be the dot product on ${\mathbb{F}}^D$. The *Welch bound* [@Welch74] states that any $N$ nonzero vectors ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ in ${\mathbb{F}}^D$ satisfy $$\label{eq.Welch bound}
\max_{n\neq n'}
\tfrac{{|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}}{{\|{{\boldsymbol{\varphi}}_n}\|}{\|{{\boldsymbol{\varphi}}_{n'}}\|}}
\geq{\bigl[{\tfrac{N-D}{D(N-1)}}\bigr]}^{\frac12}.$$ It is well-known [@StrohmerH03] that nonzero equal-norm vectors ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ in ${\mathbb{F}}^D$ achieve equality in \[eq.Welch bound\] if and only if they form an *equiangular tight frame* for ${\mathbb{F}}^D$, denoted an ${{\operatorname{ETF}}}(D,N)$, namely if there exists $A>0$ such that $A{\|{{\mathbf{x}}}\|}^2=\sum_{n=1}^{N}{|{{\langle{{\boldsymbol{\varphi}}_n},{{\mathbf{x}}}\rangle}}|}^2$ for all ${\mathbf{x}}\in{\mathbb{F}}^D$ (tightness), and the value of ${|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}$ is constant over all $n\neq n'$ (equiangularity). In particular, an ETF is a type of optimal packing in projective space, corresponding to a collection of lines whose minimum pairwise angle is as large as possible. ETFs arise in several applications, including waveform design for communications [@StrohmerH03], compressed sensing [@BajwaCM12; @BandeiraFMW13], quantum information theory [@Zauner99; @RenesBSC04] and algebraic coding theory [@JasperMF14].
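To make the two defining conditions concrete, the following sketch (ours, in Python/NumPy; the function and variable names are our own) numerically tests whether a given synthesis matrix is that of an ETF by checking equal norms, equiangularity, tightness and equality in \[eq.Welch bound\]; the "Mercedes-Benz" frame used in the example is the standard real ${{\operatorname{ETF}}}(2,3)$.

```python
import numpy as np

def is_etf(Phi, tol=1e-9):
    """Test whether the columns of Phi form an equiangular tight frame."""
    D, N = Phi.shape
    G = Phi.conj().T @ Phi                               # Gram matrix
    norms2 = np.real(np.diag(G))
    off = np.abs(G[~np.eye(N, dtype=bool)])              # off-diagonal moduli
    equal_norm = np.allclose(norms2, norms2[0], atol=tol)
    equiangular = np.allclose(off, off[0], atol=tol)
    A = N * norms2[0] / D                                 # tight frame constant
    tight = np.allclose(Phi @ Phi.conj().T, A * np.eye(D), atol=tol)
    welch = np.sqrt((N - D) / (D * (N - 1)))              # right-hand side of the Welch bound
    meets_welch = np.isclose(off.max() / norms2[0], welch, atol=tol)
    return equal_norm and equiangular and tight and meets_welch

# Example: the "Mercedes-Benz" frame, a real ETF(2,3).
theta = 2 * np.pi * np.arange(3) / 3
print(is_etf(np.vstack([np.cos(theta), np.sin(theta)])))   # True
```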
In the general (possibly-complex) setting, the existence of an ${{\operatorname{ETF}}}(D,N)$ remains an open problem for many choices of $(D,N)$. See [@FickusM16] for a recent survey. Beyond orthonormal bases and regular simplices, all known infinite families of ETFs arise from combinatorial designs. Real ETFs in particular are equivalent to a class of *strongly regular graphs* (SRGs) [@vanLintS66; @Seidel76; @HolmesP04; @Waldron09], and such graphs have been actively studied for decades [@Brouwer07; @Brouwer17; @CorneilM91]. This equivalence has been partially generalized to the complex setting in various ways, including approaches that exploit properties of roots of unity [@BodmannPT09; @BodmannE10], abelian distance-regular covers of complete graphs (DRACKNs) [@CoutinkhoGSZ16], and association schemes [@IversonJM16]. Conference matrices, Hadamard matrices, Paley tournaments and quadratic residues are related, and lead to infinite families of ETFs whose *redundancy* $\frac ND$ is either nearly or exactly two [@StrohmerH03; @HolmesP04; @Renes07; @Strohmer08]. *Harmonic ETFs* and *Steiner ETFs* offer more flexibility in choosing $D$ and $N$. Harmonic ETFs are equivalent to *difference sets* in finite abelian groups [@Turyn65; @StrohmerH03; @XiaZG05; @DingF07], while Steiner ETFs arise from *balanced incomplete block designs* (BIBDs) [@GoethalsS70; @FickusMT12]. Recent generalizations of Steiner ETFs have led to new infinite families of ETFs arising from projective planes that contain hyperovals [@FickusMJ16] as well as from Steiner triple systems [@FickusJMP18], dubbed *hyperoval ETFs* and *Tremain ETFs*, respectively. Another new family arises by generalizing the SRG construction of [@Godsil92] to the complex setting, using generalized quadrangles to produce abelian DRACKNs [@FickusJMPW19].
Far less is known in terms of necessary conditions on the existence of complex ${{\operatorname{ETF}}}(D,N)$. The *Gerzon bound* implies that $N\leq\min{\{{D^2,(N-D)^2}\}}$ whenever a complex ${{\operatorname{ETF}}}(D,N)$ with $N>D>1$ exists [@LemmensS73; @HolmesP04; @Tropp05]. Beyond this, the only known nonexistence result in the complex case is that an ${{\operatorname{ETF}}}(3,8)$ does not exist [@Szollosi14], a result proven using computational techniques in algebraic geometry. In quantum information theory, ${{\operatorname{ETF}}}(D,D^2)$ are known as *symmetric informationally-complete positive operator-valued measures* (SIC-POVMs). It is famously conjectured that such Gerzon-bound-equality ETFs exist for any $D$ [@Zauner99; @Renes07; @FuchsHS17].
In this paper, we give a new method for constructing ETFs that yields several new infinite families of them. Our main result is Theorem \[thm.new ETF\], which shows how to combine a given initial ETF with a *group divisible design* (GDD) in order to produce another ETF. In that result, we require the initial ETF to be of one of the following types:
\[def.ETF types\] Given integers $D$ and $N$ with $1<D<N$, we say $(D,N)$ is *type $(K,L,S)$* if $$\label{eq.ETF param in terms of type param}
D
=\tfrac{S}{K}[S(K-1)+L]
=S^2-\tfrac{S(S-L)}{K},
\quad
N
=(S+L)[S(K-1)+L],$$ where $K$ and $S$ are integers and $L$ is either $1$ or $-1$. For a given $K$, we say $(D,N)$ is *$K$-positive* or *$K$-negative* when it is type $(K,1,S)$ or type $(K,-1,S)$ for some $S$, respectively. We simply say $(D,N)$ is *positive* or *negative* when it is $K$-positive or $K$-negative for some $K$, respectively. When we say that an ETF is one of these types, we mean its $(D,N)$ parameters are of that type.
It turns out that every known ${{\operatorname{ETF}}}(D,N)$ with $N>2D>2$ is either a harmonic ETF, a SIC-POVM, or is positive or negative. In particular, every Steiner ETF is positive, while every hyperoval ETF and Tremain ETF is negative. In this sense, the ideas and results of this paper are an attempt to unify and generalize several constructions that have been regarded as disparate. This is analogous to—and directly inspired by—a seminal paper of Davis and Jedwab [@DavisJ97], which unifies *McFarland* [@McFarland73] and *Spence* [@Spence77] difference sets under a single framework, and also generalizes them so as to produce difference sets whose corresponding harmonic ETFs have parameters $$\label{eq.Davis Jedwab parameters}
D=\tfrac13 2^{2J-1}(2^{2J+1}+1),
\quad
N=\tfrac13 2^{2J+2}(2^{2J}-1),$$ for some $J\geq 1$. It is quickly verified that such ETFs are type $(4,-1,S)$ where $S=\frac13(2^{2J+1}+1)$. As we shall see, combining our main result (Theorem \[thm.new ETF\]) with known ETFs and GDDs recovers the existence of ETFs with these parameters, and also provides several new infinite families, including:
\[thm.new neg ETS with K=4,5\] An ${{\operatorname{ETF}}}(D,N)$ of type $(K,-1,S)$ exists whenever:
1. $K=4$ and either $S\equiv 3\bmod 8$ or $S\equiv 7\bmod 60$;
2. $K=5$ and either $S\equiv 4\bmod 15$ or $S\equiv 5,309\bmod 380$ or $S\equiv 9\bmod 280$.
This result extends the $S$ for which an ETF of type $(4,-1,S)$ is known to exist from a geometric progression to a finite union of arithmetic progressions, with the smallest new ETF having $S=19$, namely $(D,N)=(266,1008)$, cf. [@FickusM16]. Meanwhile, the ETFs given by Theorem \[thm.new neg ETS with K=4,5\] in the $K=5$ case seem to be completely new except when $S=4,5,9$, with $(D,N)=(285,1350)$ being the smallest new example. Using similar techniques, we were also able to find new, explicit infinite families of $K$-negative ETFs for $K=6,7,10,12$. The description of these families is technical, and so is given in Theorem \[thm.new neg ETS with K>5\] as opposed to here. More generally, using asymptotic existence results for GDDs, we show that an infinite number of $K$-negative ETFs also exist whenever $K=Q+2$ where $Q$ is a prime power, $K=Q+1$ where $Q$ is an even prime power, or $K=8,20,30,42,56,342$.
In certain cases, the new ETFs constructed by these methods can be chosen to be real:
\[thm.new real ETFs\]
1. There are an infinite number of real Hadamard matrices of size $H\equiv 1\bmod 35$, and a real ETF of type $(5,-1,8H+1)$ exists for all such $H$.
2. There are an infinite number of real Hadamard matrices of size $H\equiv 1,8\bmod 21$, and a real ETF of type $(6,-1,2H+1)$ exists for all sufficiently large such $H$.
3. There are an infinite number of real Hadamard matrices of size $H\equiv 1,12\bmod 55$, and a real ETF of type $(10,-1,4H+1)$ exists for all sufficiently large such $H$.
4. There are an infinite number of real Hadamard matrices of size $H\equiv 1,277\bmod 345$, and a real ETF of type $(15,-1,4H+1)$ exists for all sufficiently large such $H$.
These correspond to four new infinite families of SRGs, with the smallest new example being a real ${{\operatorname{ETF}}}(66759,332640)$, which is obtained by letting $H=36$ in (a).
In the next section, we introduce known concepts from frame theory and combinatorial design that we need later on. In Section 3, we provide an alternative characterization of when an ETF is positive or negative (Theorem \[thm.parameter types\]), which we then use to help prove our main result (Theorem \[thm.new ETF\]). In the fourth section, we discuss how many known ETFs are either positive or negative, and then apply Theorem \[thm.new ETF\] to them along with known GDDs to obtain the new infinite families of negative ETFs described in Theorems \[thm.new neg ETS with K=4,5\] and \[thm.new neg ETS with K>5\]. We conclude in Section 5, using these facts as the basis for new conjectures on the existence of real and complex ETFs.
Preliminaries
=============
Equiangular tight frames
------------------------
For any positive integers $N$ and $D$, and any sequence ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ of vectors in ${\mathbb{F}}^D$, the corresponding *synthesis operator* is ${\boldsymbol{\Phi}}:{\mathbb{F}}^N\rightarrow{\mathbb{F}}^D$, ${\boldsymbol{\Phi}}{\mathbf{y}}:=\sum_{n=1}^{N}{\mathbf{y}}(n){\boldsymbol{\varphi}}_n$, namely the $D\times N$ matrix whose $n$th column is ${\boldsymbol{\varphi}}_n$. Its adjoint (conjugate transpose) is the *analysis operator* ${\boldsymbol{\Phi}}^*:{\mathbb{F}}^D\rightarrow{\mathbb{F}}^N$, which has $({\boldsymbol{\Phi}}^*{\mathbf{x}})(n)={\langle{{\boldsymbol{\varphi}}_n},{{\mathbf{x}}}\rangle}$ for all $n=1,\dotsc,N$. That is, ${\boldsymbol{\Phi}}^*$ is the $N\times D$ matrix whose $n$th row is ${\boldsymbol{\varphi}}_n^*$. Composing these two operators gives the $N\times N$ *Gram matrix* ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}$ whose $(n,n')$th entry is $({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})(n,n')={\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}$, as well as the $D\times D$ *frame operator* ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*=\sum_{n=1}^{N}{\boldsymbol{\varphi}}_n^{}{\boldsymbol{\varphi}}_n^*$.
We say ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ is a *tight frame* for ${\mathbb{F}}^D$ if there exists $A>0$ such that ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*=A{\mathbf{I}}$, namely if the rows of ${\boldsymbol{\Phi}}$ are orthogonal and have an equal nontrivial norm. We say ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ is *equal norm* if there exists some $C$ such that ${\|{{\boldsymbol{\varphi}}_n}\|}^2=C$ for all $n$. The parameters of an equal norm tight frame are related according to $DA={\operatorname{Tr}}(A{\mathbf{I}})={\operatorname{Tr}}({\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*)={\operatorname{Tr}}({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})=\sum_{n=1}^{N}{\|{{\boldsymbol{\varphi}}_n}\|}^2=NC$. We say ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ is *equiangular* if it is equal norm and the value of ${|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}$ is constant over all $n\neq n'$.
For any equal norm vectors ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ in ${\mathbb{F}}^D$, a direct calculation reveals $$0
\leq{\operatorname{Tr}}[(\tfrac1C{\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*-\tfrac{N}{D}{\mathbf{I}})^2]
=\sum_{n=1}^{N}\sum_{\substack{n'=1\\n'\neq n}}^{N}
\tfrac{{|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}^2}{C^2}
-\tfrac{N(N-D)}{D}
\leq N(N-1)\max_{n\neq n'}
\tfrac{{|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}^2}{C^2}
-\tfrac{N(N-D)}{D}.$$ Rearranging this inequality gives the Welch bound \[eq.Welch bound\]. Moreover, we see that achieving equality in \[eq.Welch bound\] is equivalent to having equality above throughout, which happens precisely when ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ is a tight frame for ${\mathbb{F}}^D$ that is also equiangular, namely when it is an ETF for ${\mathbb{F}}^D$.
If $N>D$ and ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ is a tight frame for ${\mathbb{F}}^D$ then completing the $D$ rows of ${\boldsymbol{\Phi}}$ to an equal-norm orthogonal basis for ${\mathbb{F}}^N$ is equivalent to taking an $(N-D)\times N$ matrix ${\boldsymbol{\Psi}}$ such that ${\boldsymbol{\Psi}}{\boldsymbol{\Psi}}^*=A{\mathbf{I}}$, ${\boldsymbol{\Phi}}{\boldsymbol{\Psi}}^*={\boldsymbol{0}}$ and ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}+{\boldsymbol{\Psi}}^*{\boldsymbol{\Psi}}=A{\mathbf{I}}$. The sequence ${\{{{\boldsymbol{\psi}}_n}\}}_{n=1}^{N}$ of columns of any such ${\boldsymbol{\Psi}}$ is called a *Naimark complement* of ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$. Since ${\boldsymbol{\Psi}}{\boldsymbol{\Psi}}^*=A{\mathbf{I}}$ and ${\boldsymbol{\Psi}}^*{\boldsymbol{\Psi}}=A{\mathbf{I}}-{\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}$, any Naimark complement of an ${{\operatorname{ETF}}}(D,N)$ is an ${{\operatorname{ETF}}}(N-D,N)$. Since any nontrivial scalar multiple of an ETF is another ETF, we will often assume without loss of generality that a given ${{\operatorname{ETF}}}(D,N)$ and its Naimark complements satisfy $$\label{eq.ETF scaling}
A=N{\bigl[{\tfrac{N-1}{D(N-D)}}\bigr]}^{\frac12},
\quad
{\|{{\boldsymbol{\varphi}}_n}\|}^2
={\bigl[{\tfrac{D(N-1)}{N-D}}\bigr]}^{\frac12},
\quad
{\|{{\boldsymbol{\psi}}_n}\|}^2
={\bigl[{\tfrac{(N-D)(N-1)}D}\bigr]}^{\frac12},
\quad
\forall n=1,\dotsc,N,$$ which equates to having ${|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}=1={|{{\langle{{\boldsymbol{\psi}}_n},{{\boldsymbol{\psi}}_{n'}}\rangle}}|}$ for all $n\neq n'$. For positive and negative ETFs in particular (Definition \[def.ETF types\]), we shall see that all of these quantities happen to be integers.
Any ${{\operatorname{ETF}}}(D,N)$ with $N=D+1$ is known as a *regular simplex*, and such ETFs are Naimark complements of ETFs for ${\mathbb{F}}^1$, namely sequences of scalars that have the same nontrivial modulus. In particular, a sequence of vectors ${\{{{\mathbf{f}}_n}\}}_{n=1}^{N}$ in ${\mathbb{F}}^{N-1}$ is a Naimark complement of the all-ones sequence in ${\mathbb{F}}^1$ if and only if $$\label{eq.regular simplex}
{\mathbf{F}}{\mathbf{F}}^*=N{\mathbf{I}},
\quad
\sum_{n=1}^{N}{\mathbf{f}}_n={\mathbf{F}}{\boldsymbol{1}}={\boldsymbol{0}},
\quad
{\mathbf{F}}^*{\mathbf{F}}=N{\mathbf{I}}-{\mathbf{J}},$$ where ${\boldsymbol{1}}$ and ${\mathbf{J}}$ denote an all-ones column vector and matrix, respectively. Equivalently, the vectors ${\{{1\oplus{\mathbf{f}}_n}\}}_{n=1}^{N}$ in ${\mathbb{F}}^N$ are equal norm and orthogonal. In particular, for any $N>1$, we can always take ${\{{1\oplus{\mathbf{f}}_n}\}}_{n=1}^{N}$ to be the columns of a possibly-complex Hadamard matrix of size $N$. In this case, ${\mathbf{F}}$ satisfies and is also *flat*, meaning every one of its entries has modulus one. As detailed below, flat regular simplices can be used to construct several families of ETFs, including Steiner ETFs as well as those we introduce in Theorem \[thm.new ETF\].
Harmonic ETFs are the best-known class of ETFs [@Strohmer08; @XiaZG05; @DingF07]. A harmonic ${{\operatorname{ETF}}}(D,N)$ is obtained by restricting the characters of an abelian group ${\mathcal{G}}$ of order $N$ to a *difference set* of cardinality $D$, namely a $D$-element subset ${\mathcal{D}}$ of ${\mathcal{G}}$ with the property that the cardinality of ${\{{(d,d')\in{\mathcal{D}}\times{\mathcal{D}}: g=d-d'}\}}$ is constant over all $g\in{\mathcal{G}}$, $g\neq 0$. The set complement ${\mathcal{G}}\backslash{\mathcal{D}}$ of any difference set in ${\mathcal{G}}$ is another difference set, and the two corresponding harmonic ETFs are Naimark complements. In particular, for any abelian group ${\mathcal{G}}$ of order $N$, the harmonic ETF arising from ${\mathcal{G}}\backslash{\{{0}\}}$ is a flat regular simplex that satisfies \[eq.regular simplex\].
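As a small illustration of the harmonic construction (a sketch of ours; the set ${\{{1,2,4}\}}$ in ${\mathbb{Z}}_7$ is the classical quadratic-residue difference set, chosen only as an example), restricting the character table of ${\mathbb{Z}}_N$ to a difference set gives the synthesis operator of an ETF:

```python
import numpy as np

def harmonic_frame(N, dset):
    """Synthesis operator whose rows are the characters of Z_N indexed by dset."""
    d = np.array(sorted(dset))
    return np.exp(2j * np.pi * np.outer(d, np.arange(N)) / N)   # |dset| x N

# {1,2,4} is a difference set in Z_7, so this is a complex ETF(3,7); its complement
# {0,3,5,6} yields a Naimark-complementary ETF(4,7), and Z_7 \ {0} yields the flat
# regular simplex ETF(6,7).
Phi = harmonic_frame(7, {1, 2, 4})
G = Phi.conj().T @ Phi
off = np.abs(G[~np.eye(7, dtype=bool)])
print(np.allclose(off, off[0]), np.allclose(Phi @ Phi.conj().T, 7 * np.eye(3)))  # True True
```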
Group divisible designs
-----------------------
For a given integer $K\geq 2$, a $K$-GDD is a set ${\mathcal{V}}$ of $V>K$ vertices, along with collections ${\mathcal{G}}$ and ${\mathcal{B}}$ of subsets of ${\mathcal{V}}$, called *groups* and *blocks*, respectively, with the property that the groups partition ${\mathcal{V}}$, every block has cardinality $K$, and any two vertices are either contained in a common group or a common block, but not both. A $K$-GDD is *uniform* if its groups all have the same cardinality $M$, denoted in *exponential notation* as a “$K$-GDD of type $M^U$" where $V=UM$.
Letting $B$ be the number of blocks, a ${\{{0,1}\}}$-valued $B\times UM$ incidence matrix ${\mathbf{X}}$ of a $K$-GDD of type $M^U$ has the property that each row of ${\mathbf{X}}$ contains exactly $K$ ones. Moreover, for any $v=1,\dotsc,V=UM$, the $v$th column of ${\mathbf{X}}$ is orthogonal to $M-1$ other columns of ${\mathbf{X}}$, and has a dot product of $1$ with each of the remaining $(U-1)M$ columns. This implies that the *replication number* $R_v$ of blocks that contain the $v$th vertex satisfies $$(U-1)M
=\sum_{\substack{v'=1\\v'\neq v}}^V({\mathbf{X}}^*{\mathbf{X}})(v,v')
=\sum_{b=1}^{B}{\mathbf{X}}(b,v)\sum_{\substack{v'=1\\v'\neq v}}^V{\mathbf{X}}(b,v')
=\sum_{b=1}^{B}\left\{\begin{array}{cl}K-1,&{\mathbf{X}}(b,v)=1\\0,&{\mathbf{X}}(b,v)=0\end{array}\right\}
=R_v(K-1).$$ As such, this number $R_v=R$ is independent of $v$. At this point, summing all entries of ${\mathbf{X}}$ gives $BK=VR$ and so $B$ is also uniquely determined by $K$, $M$ and $U$. Because of this, the existence of a $K$-GDD of type $M^U$ is equivalent to that of a ${\{{0,1}\}}$-valued $B\times UM$ matrix ${\mathbf{X}}$ with $$\label{eq.GDD incidence matrix}
R=\tfrac{M(U-1)}{K-1},
\quad
B=\tfrac{MUR}{K}=\tfrac{M^2U(U-1)}{K(K-1)},
\quad
{\mathbf{X}}{\boldsymbol{1}}=K{\boldsymbol{1}},
\quad
{\mathbf{X}}^*{\mathbf{X}}=R\,{\mathbf{I}}+({\mathbf{J}}_U-{\mathbf{I}}_U)\otimes{\mathbf{J}}_M.$$ In the special case where $M=1$, a $K$-GDD of type $1^U$ is called a ${{\operatorname{BIBD}}}(U,K,1)$. In the special case where $U=K$, a $K$-GDD of type $M^K$ is called a *transversal design* ${{\operatorname{TD}}}(K,M)$, which is equivalent to a collection of $K-2$ *mutually orthogonal Latin squares* (MOLS) of size $M$.
In order for a $K$-GDD of type $M^U$ to exist, the expressions for $R$ and $B$ given in \[eq.GDD incidence matrix\] are necessarily integers. Beyond this, we necessarily have $U\geq K$ since we can partition any given block into its intersections with the groups, and the cardinality of these intersections is at most one. Altogether, the parameters of a $K$-GDD of type $M^U$ necessarily satisfy $$\label{eq.GDD necessary conditions}
U\geq K,
\quad
\tfrac{M(U-1)}{K-1}\in{\mathbb{Z}},
\quad
\tfrac{M^2U(U-1)}{K(K-1)}\in{\mathbb{Z}}.$$ Though these necessary conditions are not sufficient [@Ge07], they are asymptotically sufficient in two distinct ways: for any fixed $K\geq 2$ and $M\geq 1$, there exists $U_0=U_0(K,M)$ such that a $K$-GDD of type $M^U$ exists for all $U\geq U_0$ such that \[eq.GDD necessary conditions\] is satisfied [@Chang76; @LamkenW00]; for any fixed $U\geq K\geq 2$, there exists $M_0=M_0(K,U)$ such that a $K$-GDD of type $M^U$ exists for all $M\geq M_0$ such that \[eq.GDD necessary conditions\] is satisfied [@Mohacsy11]. In the $M=1$ and $U=K$ cases, these facts reduce to more classical asymptotic existence results for BIBDs and MOLS, respectively.
Many specific examples of GDDs are formed by combining smaller designs in clever ways. We in particular will make use of the following result, which is a special case of Wilson’s approach [@Wilson72]:
\[lem.Wilson\] If a $K$-GDD of type $M^U$ and a $U$-GDD of type $N^V$ exist, then a $K$-GDD of type $(MN)^V$ exists.
Let ${\mathbf{X}}$ and ${\mathbf{Y}}$ be incidence matrices of the form \[eq.GDD incidence matrix\] for the given $K$-GDD of type $M^U$ and $U$-GDD of type $N^V$, respectively. In particular, taking $R$ and $B$ as in \[eq.GDD incidence matrix\], we can write ${\mathbf{X}}=\left[\begin{array}{ccc}{\mathbf{X}}_1&\cdots&{\mathbf{X}}_U\end{array}\right]$ where each ${\mathbf{X}}_u$ is a $B\times M$ matrix with ${\mathbf{X}}_u^*{\mathbf{X}}_u^{}=R{\mathbf{I}}$, and ${\mathbf{X}}_u^*{\mathbf{X}}_{u'}^{}={\mathbf{J}}$ for any $u\neq u'$. We now construct the incidence matrix ${\mathbf{Z}}$ of a $K$-GDD of type $(MN)^V$ in the following manner: in each row of ${\mathbf{Y}}$, replace each of the $U$ nonzero entries with a distinct matrix ${\mathbf{X}}_u$, and replace each of the zero entries with a $B\times M$ matrix of zeros.
This result generalizes MacNeish’s classical method for combining MOLS [@MacNeish22]: if a ${{\operatorname{TD}}}(K,M)$ and a ${{\operatorname{TD}}}(K,N)$ exist, then applying Lemma \[lem.Wilson\] to them produces a ${{\operatorname{TD}}}(K,MN)$. We will also use one GDD to “fill the holes" of another:
\[lem.filling holes\] If $K$-GDDs of type $M^U$ and $(MU)^V$ exist, then a $K$-GDD of type $M^{UV}$ exists.
Letting ${\mathbf{X}}$ and ${\mathbf{Y}}$ be incidence matrices of the form \[eq.GDD incidence matrix\] for the given $K$-GDDs of type $M^U$ and $(MU)^V$, respectively, it is straightforward to verify that $${\mathbf{Z}}=\left[\begin{array}{c}{\mathbf{I}}_V\otimes{\mathbf{X}}\\{\mathbf{Y}}\end{array}\right]$$ is the incidence matrix of a $K$-GDD of type $M^{UV}$.
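The incidence-matrix manipulations in the proofs of Lemmas \[lem.Wilson\] and \[lem.filling holes\] are easily mechanized. The sketch below (our own; the Fano plane and the transversal designs over ${\mathbb{Z}}_7$ are standard small designs chosen only for illustration) implements both constructions and checks the resulting designs against \[eq.GDD incidence matrix\]:

```python
import numpy as np

def check_gdd(Z, K, M, U):
    """Check the row-sum and Gram conditions of a K-GDD of type M^U."""
    R = M * (U - 1) // (K - 1)
    gram = R * np.eye(U * M) + np.kron(np.ones((U, U)) - np.eye(U), np.ones((M, M)))
    return (np.array_equal(Z.sum(axis=1), np.full(Z.shape[0], K))
            and np.array_equal(Z.T @ Z, gram))

def wilson_compose(X, Y, M):
    """Lemma [lem.Wilson]: in each row of Y, replace its nonzero entries by the
    distinct column-blocks X_1, ..., X_U of X and its zero entries by zero blocks."""
    B_X, U = X.shape[0], X.shape[1] // M
    Xu = [X[:, u * M:(u + 1) * M] for u in range(U)]
    out = []
    for y in Y:
        row = [np.zeros((B_X, M), dtype=int)] * Y.shape[1]
        for u, c in enumerate(np.flatnonzero(y)):
            row[c] = Xu[u]
        out.append(np.hstack(row))
    return np.vstack(out)

def fill_holes(X, Y, V):
    """Lemma [lem.filling holes]: Z = [I_V (x) X ; Y]."""
    return np.vstack([np.kron(np.eye(V, dtype=int), X), Y])

def td(K, M):
    """TD(K,M) for prime M, as in the earlier sketch."""
    T = np.zeros((M * M, K * M), dtype=int)
    for a in range(M):
        for b in range(M):
            for u in range(K):
                T[M * a + b, u * M + (a * u + b) % M] = 1
    return T

# The Fano plane: a 3-GDD of type 1^7, i.e. a BIBD(7,3,1).
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
X = np.zeros((7, 7), dtype=int)
for b, blk in enumerate(fano):
    X[b, list(blk)] = 1

Z1 = wilson_compose(X, td(7, 7), M=1)   # with a 7-GDD of type 7^7: a 3-GDD of type 7^7
Z2 = fill_holes(X, td(3, 7), V=3)       # with a 3-GDD of type 7^3: a 3-GDD of type 1^21
print(check_gdd(Z1, 3, 7, 7), check_gdd(Z2, 3, 1, 21))   # True True
```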
Previously known constructions of ETFs involving BIBDs and MOLS
---------------------------------------------------------------
In the next section, we introduce a method for constructing ETFs that uses GDDs. This method makes use of a concept from [@FickusJMP18], which we now generalize from BIBDs to GDDs:
\[def.embeddings\] Take a $K$-GDD of type $M^U$ where $M\geq1$ and $U\geq K\geq 2$, and define $R$, $B$ and an incidence matrix ${\mathbf{X}}$ according to \[eq.GDD incidence matrix\]. Without loss of generality, write the columns of ${\mathbf{X}}$ as ${\{{{\mathbf{x}}_{u,m}}\}}_{u=1,}^{U}\,_{m=1}^{M}$ where, for each $u$, the vectors ${\{{{\mathbf{x}}_{u,m}}\}}_{m=1}^{M}$ have disjoint support. Then, for any $u$ and $m$, a corresponding *embedding operator* ${\mathbf{E}}_{u,m}$ is any ${\{{0,1}\}}$-valued $B\times R$ matrix whose columns are standard basis elements that sum to ${\mathbf{x}}_{u,m}$.
In the special case where $M=1$, this concept leads to an elegant formulation of Steiner ETFs [@FickusJMP18]: letting ${\{{{\mathbf{E}}_v}\}}_{v=1}^{V}$ be the embedding operators of a ${{\operatorname{BIBD}}}(V,K,1)$, and letting ${\{{1\oplus{\mathbf{f}}_i}\}}_{i=0}^{R}$ be the columns of a possibly-complex Hadamard matrix of size $R+1=\frac{V-1}{K-1}+1$, the $V(R+1)$ vectors ${\{{{\mathbf{E}}_v{\mathbf{f}}_i}\}}_{v=1,}^{V}\,_{i=0}^{R}$ form an ETF for ${\mathbb{F}}^B$. In [@FickusJMP18], this fact is proven using several properties of embedding operators. We now show those properties generalize to the GDD setting; later on, we use these facts to prove our main result:
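For instance (a sketch of ours; the Fano plane and the size-$4$ Hadamard matrix are our own illustrative choices), taking the ${{\operatorname{BIBD}}}(7,3,1)$ given by the Fano plane, so that $R=3$, the $V(R+1)=28$ vectors ${\{{{\mathbf{E}}_v{\mathbf{f}}_i}\}}$ form a Steiner ${{\operatorname{ETF}}}(7,28)$:

```python
import numpy as np

# Embedding operators of the Fano plane, a BIBD(7,3,1): E_v has R = 3 columns,
# namely the standard basis vectors indexed by the blocks containing point v.
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
B, V, R = 7, 7, 3
X = np.zeros((B, V), dtype=int)
for b, blk in enumerate(fano):
    X[b, list(blk)] = 1
E = []
for v in range(V):
    rows = np.flatnonzero(X[:, v])
    Ev = np.zeros((B, R), dtype=int)
    Ev[rows, np.arange(R)] = 1
    E.append(Ev)

# A real Hadamard matrix of size R + 1 = 4; removing its first row leaves the
# flat regular simplex of 4 vectors f_i in R^3.
H = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
F = H[1:, :]

Phi = np.hstack([Ev @ F for Ev in E])       # 7 x 28 Steiner ETF(7,28)
G = Phi.T @ Phi
off = np.abs(G[~np.eye(28, dtype=bool)])
print(np.allclose(Phi @ Phi.T, 12 * np.eye(7)), np.allclose(off, 1))  # True True
```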
\[lem.embed\] If ${\{{{\mathbf{E}}_{u,m}}\}}_{u=1,}^{U},\,_{m=1}^{M}$ are the embedding operators arising from a $K$-GDD of type $M^U$, $${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{}
=\left\{\begin{array}{cl}
{\mathbf{I}},&\ u=u', m=m',\\
{\boldsymbol{0}},&\ u=u', m\neq m',\\
{\boldsymbol{\delta}}_{r}^{}{\boldsymbol{\delta}}_{\smash{r'}}^*,&\ u\neq u'.
\end{array}\right.$$ Here, for any $u\neq u'$ and $m,m'$, ${\boldsymbol{\delta}}_r$ and ${\boldsymbol{\delta}}_{\smash{r'}}$ are standard basis elements in ${\mathbb{F}}^R$ whose indices $r,r'$ depend on $u,u',m,m'$.
Each ${\mathbf{E}}_{u,m}$ is a matrix whose columns are standard basis elements that sum to ${\mathbf{x}}_{u,m}$, and so is an isometry, that is, ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u,m}^{}={\mathbf{I}}$. Moreover, for any $u$, $u'$, $m$, $m'$, ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{}$ is a matrix whose entries are nonnegative integers that sum to: $$\sum_{r=1}^{R}\sum_{r'=1}^{R}({\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{})(r,r')
={\boldsymbol{1}}^*{\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{}{\boldsymbol{1}}^{}
={\langle{{\mathbf{x}}_{u,m}},{{\mathbf{x}}_{u',m'}}\rangle}
=\left\{\begin{array}{cl}
R,&\ u=u', m=m',\\
0,&\ u=u', m\neq m',\\
1,&\ u'\neq u.
\end{array}\right.$$ When $u=u'$ and $m\neq m'$, this implies ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{}={\boldsymbol{0}}$. If instead $u\neq u'$ then this implies that ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{}$ has a single nonzero entry, and that this entry has value $1$. This means there exists some $r,r'=1,\dotsc,R$, $r=r(u,m,u',m')$, $r'=r'(u,m,u',m')$ such that ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{\smash{u',m'}}^{}={\boldsymbol{\delta}}_{r}^{}{\boldsymbol{\delta}}_{\smash{r'}}^*$.
Other Steiner-like constructions of ETFs include hyperoval ETFs [@FickusMJ16] and Tremain ETFs [@FickusJMPW19]. Beyond Steiner and Steiner-like techniques, there are at least two other methods for constructing ETFs that make direct use of the incidence matrix of some kind of GDD. One method leads to the *phased BIBD ETFs* of [@FickusJMPW19]: if ${\mathbf{X}}$ is the $B\times V$ incidence matrix of a ${{\operatorname{BIBD}}}(V,K,1)$, and ${\boldsymbol{\Phi}}$ is any matrix obtained by replacing each $1$-valued entry of ${\mathbf{X}}$ with any unimodular scalar, then the columns of ${\boldsymbol{\Phi}}$ are immediately equiangular, and the challenge is to design them so that they form a tight frame for their span. Another method constructs ETFs with $(D,N)=(\tfrac12M(M\pm1),M^2)$ from MOLS. To elaborate, a ${{\operatorname{TD}}}(K,M)$ is a $K$-GDD of type $M^K$, meaning by \[eq.GDD incidence matrix\] that it has an $M^2\times KM$ incidence matrix ${\mathbf{X}}$ that satisfies $$\label{eq.incidence matrix of TD 1}
{\mathbf{X}}{\boldsymbol{1}}=K{\boldsymbol{1}},
\quad
{\mathbf{X}}^*{\mathbf{X}}=M{\mathbf{I}}+({\mathbf{J}}_K-{\mathbf{I}}_K)\otimes{\mathbf{J}}_M.$$ Here, the columns of ${\mathbf{X}}$ have support $M$, and are arranged as $K$ groups of $M$ columns apiece, where columns in a common group have disjoint support. Together, these facts imply, in turn, that $$\label{eq.incidence matrix of TD 2}
{\mathbf{X}}({\mathbf{I}}_K\otimes{\boldsymbol{1}}_M)={\boldsymbol{1}}_{M^2}^{}{\boldsymbol{1}}_K^*,
\quad
({\mathbf{X}}{\mathbf{X}}^*)^2=M{\mathbf{X}}{\mathbf{X}}^*+K(K-1){\mathbf{J}}.$$ At this point, the traditional approach is to let ${\mathbf{A}}={\mathbf{X}}{\mathbf{X}}^*-K{\mathbf{I}}$ be the adjacency matrix of the TD’s *block graph*, and use \[eq.incidence matrix of TD 1\] and \[eq.incidence matrix of TD 2\] to show that this graph is strongly regular with parameters $(M^2,K(M-1),M+K(K-3),K(K-1))$. In the $M=2K$ case, applying Theorem 4.4 of [@FickusJMPW18] to this graph then produces a real ETF with $(D,N)=(\frac12M(M-1),M^2)$ whose vectors sum to zero, while applying this same result in the $M=2(K-1)$ case produces a real ETF with $(D,N)=(\frac12M(M+1),M^2)$ whose synthesis operator’s row space contains the all-ones vector.
That said, a careful read of the literature reveals that this construction can be made more explicit, and that doing so has repercussions for coding theory. To elaborate, in [@BrackenMW06], MOLS are used to produce quasi-symmetric designs (QSDs) which, via the techniques of [@McGuire97], yield self-complementary binary codes that achieve equality in the Grey-Rankin bound. In [@JasperMF14], such codes are shown to be equivalent to flat real ETFs. A method for directly converting the incidence matrices of certain QSDs into synthesis operators of ETFs was also recently introduced [@FickusJMP19]. Distilling these ideas leads to the following streamlined construction: let ${\mathbf{X}}$ be the incidence matrix of a ${{\operatorname{TD}}}(K,M)$, let ${\{{1\oplus{\mathbf{f}}_m}\}}_{m=1}^{M}$ be the columns of a possibly-complex Hadamard matrix of size $M$, let ${\mathbf{F}}$ be the $(M-1)\times M$ synthesis operator of ${\{{{\mathbf{f}}_m}\}}_{m=1}^{M}$, and consider the $K(M-1)\times M^2$ matrix $$\label{eq.first flat ETF from MOLS}
{\boldsymbol{\Phi}}=({\mathbf{I}}_K\otimes{\mathbf{F}}){\mathbf{X}}^*.$$ Using \[eq.regular simplex\], \[eq.incidence matrix of TD 1\] and \[eq.incidence matrix of TD 2\], along with the fact that ${\mathbf{F}}$ is flat, it is straightforward to show that ${\boldsymbol{\Phi}}$ is flat and satisfies ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*
=M^2{\mathbf{I}}$ and ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}=M{\mathbf{X}}{\mathbf{X}}^*-K{\mathbf{J}}$. As such, the columns of ${\boldsymbol{\Phi}}$ form a flat *two-distance tight frame* (TDTF) for ${\mathbb{F}}^{K(M-1)}$ [@BargGOY15]. Moreover, this TDTF is an ETF when $M=2K$. In particular, if there exists a ${{\operatorname{TD}}}(K,2K)$ and a real Hadamard matrix of size $2K$, then there exists a flat real ${{\operatorname{ETF}}}(K(2K-1),4K^2)$. Using the equivalence between flat real ETFs and Grey-Rankin-bound-equality codes given in [@JasperMF14], or alternatively the equivalence between such ETFs and certain QSDs given in [@FickusJMP19], this recovers Theorem 1 of [@BrackenMW06]. In the $K=6$ case, that result gives the only known proof to date of the existence of a flat real ${{\operatorname{ETF}}}(66,144)$.
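A minimal instance of \[eq.first flat ETF from MOLS\] (our own sketch; the choice $K=2$, for which the ${{\operatorname{TD}}}(2,4)$ is trivial, is made only to keep the example small) takes $M=2K=4$ and produces a flat real ${{\operatorname{ETF}}}(6,16)$:

```python
import numpy as np

K, M = 2, 4                                  # M = 2K, the ETF case of the construction
X = np.zeros((M * M, K * M), dtype=int)      # TD(2,4): blocks are all cross pairs
for i in range(M):
    for j in range(M):
        X[M * i + j, i] = 1                  # point (group 0, i)
        X[M * i + j, M + j] = 1              # point (group 1, j)

H = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
F = H[1:, :]                                 # flat regular simplex, (M-1) x M

Phi = np.kron(np.eye(K, dtype=int), F) @ X.T     # K(M-1) x M^2 = 6 x 16, flat
G = Phi.T @ Phi
off = np.abs(G[~np.eye(M * M, dtype=bool)])
print(np.allclose(Phi @ Phi.T, M**2 * np.eye(K * (M - 1))),     # tight
      np.allclose(np.diag(G), K * (M - 1)),                     # equal norm
      np.allclose(off, off[0]))                                 # equiangular since M = 2K
```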
For any TD and corresponding flat regular simplex, it is quickly verified that the corresponding TDTF is *centered* [@FickusJMPW18] in the sense that ${\boldsymbol{\Phi}}{\boldsymbol{1}}={\boldsymbol{0}}$, namely that the all-ones vector is orthogonal to the row space of ${\boldsymbol{\Phi}}$. This fact leads to an analogous reinterpretation of the second main result of [@BrackenMW06]: in lieu of \[eq.first flat ETF from MOLS\], we instead consider the $[K(M-1)+1]\times M^2$ flat matrix $$\label{eq.second flat ETF from MOLS}
{\boldsymbol{\Psi}}=\left[\begin{array}{l}{\boldsymbol{1}}^*\\{\boldsymbol{\Phi}}\end{array}\right]
=\left[\begin{array}{c}{\boldsymbol{1}}^*\\({\mathbf{I}}_K\otimes{\mathbf{F}}){\mathbf{X}}^*\end{array}\right].$$ Here, the properties of ${\boldsymbol{\Phi}}$ immediately imply ${\boldsymbol{\Psi}}{\boldsymbol{\Psi}}^*=M^2{\mathbf{I}}$ and ${\boldsymbol{\Psi}}^*{\boldsymbol{\Psi}}=M{\mathbf{X}}{\mathbf{X}}^*-(K-1){\mathbf{J}}$, meaning the columns of ${\boldsymbol{\Psi}}$ form a flat TDTF. However, unlike \[eq.first flat ETF from MOLS\], the columns of \[eq.second flat ETF from MOLS\] are equiangular precisely when $M=2(K-1)$. Replacing $K$ with $K+1$, this implies in particular that if there exists a ${{\operatorname{TD}}}(K+1,2K)$ and a real Hadamard matrix of size $2K$, then there exists a flat real ${{\operatorname{ETF}}}(K(2K+1),4K^2)$. This recovers Theorem 2 of [@BrackenMW06] via the equivalences of [@JasperMF14; @FickusJMP19], and gives the only known proof of the existence of a flat real ${{\operatorname{ETF}}}(78,144)$.
Simply put, if certain TDs exist, then certain ETFs exist. In the next section, we introduce a new method of constructing ETFs from TDs.
Constructing equiangular tight frames with group divisible designs
==================================================================
In [@DavisJ97], Davis and Jedwab unify McFarland [@McFarland73] and Spence [@Spence77] difference sets under a single framework, and also generalize them so as to produce difference sets with parameters \[eq.Davis Jedwab parameters\]. McFarland’s construction relies on nice algebro-combinatorial properties of the set of all hyperplanes in a finite-dimensional vector space over a finite field. Davis and Jedwab exploit these properties to form various types of *building sets*, which in some cases lead to difference sets.
In [@JasperMF14], it is shown that every harmonic ETF arising from a McFarland ETF is unitarily-equivalent to a Steiner ETF arising from an affine geometry. When we applied a similar analysis to the building sets of [@DavisJ97], we discovered that they have an underlying TD-like incidence structure. (We do not provide this analysis here since it is nontrivial and does not help us prove our results in their full generality.) This eventually led us to the ETF construction technique of Theorem \[thm.new ETF\] below. In short, our approach here is directly inspired by that of [@DavisJ97], though this is not apparent from our proof techniques. In particular, the fact that the $L$ parameter in Definition \[def.ETF types\] is either $1$ or $-1$ is a generalization of Davis and Jedwab’s notion of *extended building sets* with “$+$" and “$-$" parameters, respectively. To facilitate our arguments later on, we now consider these types of parameters in greater detail:
\[thm.parameter types\] If $1<D<N$ and $(D,N)$ is type $(K,L,S)$, see Definition \[def.ETF types\], then $$\label{eq.type param in terms of ETF param}
S={\bigl[{\tfrac{D(N-1)}{N-D}}\bigr]}^{\frac12},
\quad
K=\tfrac{NS}{D(S+L)},$$ where $S\geq 2$. Conversely, given $(D,N)$ such that $1<D<N$, and letting $L$ be either $1$ or $-1$, if the above expressions for $S$ and $K$ are integers then $(D,N)$ is type $(K,L,S)$.
Moreover, in the case that the equivalent conditions above hold, scaling an ETF ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ for ${\mathbb{F}}^D$ so that ${|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}=1$ for all $n\neq n'$ gives that it and its Naimark complements ${\{{{\boldsymbol{\psi}}_n}\}}_{n=1}^{N}$ have tight frame constant $A=K(S+L)$ and $$\label{eq.norms of pos or neg ETFs}
{\|{{\boldsymbol{\varphi}}_n}\|}^2
={\bigl[{\tfrac{D(N-1)}{N-D}}\bigr]}^{\frac12}
=S,
\quad
{\|{{\boldsymbol{\psi}}_n}\|}^2
={\bigl[{\tfrac{(N-D)(N-1)}{D}}\bigr]}^{\frac12}
=S(K-1)+KL,
\quad
\forall n=1,\dotsc,N.$$
Whenever $(D,N)$ is type $(K,L,S)$ we have that $L$ is either $1$ or $-1$ by assumption, at which point the fact that $L^2=1$ coupled with \[eq.ETF param in terms of type param\] gives $$\begin{aligned}
\nonumber
N-1
&=(S+L)[S(K-1)+L]-1
=S^2(K-1)+SKL
=S[S(K-1)+KL],\\
\tfrac{N}{D}-1
&=\tfrac{K(S+L)[S(K-1)+L]}{S[S(K-1)+L]}-1
=\tfrac{K(S+L)}{S}-1
=\tfrac1{S}[S(K-1)+KL].\end{aligned}$$ Multiplying and dividing these expressions immediately implies that $$\label{eq.N in terms of D and K 0}
{\bigl[{\tfrac{D(N-1)}{N-D}}\bigr]}^{\frac12}=S,
\quad
{\bigl[{\tfrac{(N-D)(N-1)}{D}}\bigr]}^{\frac12}=S(K-1)+KL.$$ Here, $S$ is an integer by assumption, and is clearly positive. Moreover, if $S=1$ then \[eq.N in terms of D and K 0\] implies $D=1$. Since $D>1$ by assumption, we thus have $S\geq 2$. Continuing, \[eq.ETF param in terms of type param\] further implies $$\tfrac{NS}{D(S+L)}
=\tfrac{K(S+L)[S(K-1)+L]}{S[S(K-1)+L]}\tfrac{S}{S+L}
=K.$$
Conversely, now assume that $S$ and $K$ are defined by \[eq.type param in terms of ETF param\], where $L$ is either $1$ or $-1$, and that $S$ and $K$ are integers. As before, the fact that $D>1$ implies that $S\geq 2$ and so $K>0$. We solve for $N$ in terms of $D$, $K$, and $L$. Here, \[eq.type param in terms of ETF param\] gives $\frac{N}{DK}=\frac{S+L}{S}=1+\frac{L}{S}$. Since $L^2=1$, this implies $$\label{eq.N in terms of D and K 1}
{\bigl[{\tfrac{N-D}{D(N-1)}}\bigr]}^{\frac12}
=\tfrac1{S}
=L{\bigl({\tfrac{N}{DK}-1}\bigr)}.$$ Squaring this equation and multiplying the result by $N-1$ gives $$\tfrac{N}{D}-1
=\tfrac{N-D}{D}
=(N-1){\bigl({\tfrac{N}{DK}-1}\bigr)}^2
=N{\bigl({\tfrac{N}{DK}-1}\bigr)}^2-\tfrac{N}{DK}{\bigl({\tfrac{N}{DK}-2}\bigr)}-1.$$ Adding $1$ to this equation and multiplying by $\tfrac{(DK)^2}{N}$ then leads to a quadratic in $N$: $$DK^2
=(N-DK)^2-(N-2DK)
=N^2-(2DK+1)N+DK(DK+2).$$ Applying the quadratic formula then gives $$\label{eq.N in terms of D and K 2}
N=DK+\tfrac12{\bigl\{{1\pm{\bigl[{4DK(K-1)+1}\bigr]}^{\frac12}}\bigr\}}.$$ As such, \[eq.N in terms of D and K 1\] becomes $\tfrac1{S}
=L{\bigl({\tfrac{N}{DK}-1}\bigr)}
=\tfrac{L}{2DK}{\bigl\{{1\pm{\bigl[{4DK(K-1)+1}\bigr]}^{\frac12}}\bigr\}}$, implying “$+$" and “$-$" here correspond to $L=1$ and $L=-1$ respectively, that is, $$\label{eq.N in terms of D and K 3}
\tfrac1{S}
=\tfrac{L}{2DK}{\bigl\{{1+L{\bigl[{4DK(K-1)+1}\bigr]}^{\frac12}}\bigr\}}
=\tfrac1{2DK}{\bigl\{{L+{\bigl[{4DK(K-1)+1}\bigr]}^{\frac12}}\bigr\}}.$$ Moreover, since $N$ is an integer, \[eq.N in terms of D and K 2\] implies that $4DK(K-1)+1$ is the square of an odd integer, that is, that $4DK(K-1)+1=(2J-1)^2=4J(J-1)+1$ or equivalently that $DK(K-1)=J(J-1)$ for some positive integer $J$. Writing $D=\frac{J(J-1)}{K(K-1)}$, \[eq.N in terms of D and K 2\] and \[eq.N in terms of D and K 3\] then become $$\begin{aligned}
\label{eq.N in terms of D and K 4}
N
&=DK+\tfrac12{\bigl\{{1+L{\bigl[{4DK(K-1)+1}\bigr]}^{\frac12}}\bigr\}}
=DK+\tfrac12[1+L(2J-1)],
\\
\nonumber
\tfrac1{S}
&=\tfrac1{2DK}{\bigl\{{L+{\bigl[{4DK(K-1)+1}\bigr]}^{\frac12}}\bigr\}}
=\tfrac{K-1}{2J(J-1)}[L+(2J-1)]
=\left\{\begin{array}{cl}
\tfrac{K-1}{J-1},&L=1\smallskip\\
\tfrac{K-1}{J},&L=-1
\end{array}\right\}
=\tfrac{2(K-1)}{2J-L-1}.\end{aligned}$$ That is, $J=S(K-1)+\frac12(L+1)$. Substituting this into $D=\frac{J(J-1)}{K(K-1)}$ and \[eq.N in terms of D and K 4\] and again using the fact that $L^2=1$ then gives the expressions for $D$ and $N$ given in Definition \[def.ETF types\]: $$\begin{aligned}
D
&=\tfrac{[S(K-1)+\frac12(L+1)][S(K-1)+\frac12(L-1)]}{K(K-1))}
=\tfrac{S^2(K-1)^2+S(K-1)L}{K(K-1))}
=\tfrac{S}{K}[S(K-1)+L],\\
N
&=S[S(K-1)+L]+\tfrac12\{1+L[2S(K-1)+L]\}
=(S+L)[S(K-1)+L].\end{aligned}$$
Finally, in the case where $(D,N)$ is type $(K,L,S)$, if ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ is an ETF for ${\mathbb{F}}^D$, and is without loss of generality scaled so that ${|{{\langle{{\boldsymbol{\varphi}}_n},{{\boldsymbol{\varphi}}_{n'}}\rangle}}|}=1$ for all $n\neq n'$, then \[eq.ETF scaling\] and \[eq.N in terms of D and K 0\] immediately imply that ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ and any one of its Naimark complements ${\{{{\boldsymbol{\psi}}_n}\}}_{n=1}^{N}$ satisfy \[eq.norms of pos or neg ETFs\], and that both are tight frames with tight frame constant $A=\frac{NS}{D}=K(S+L)$.
Theorem \[thm.parameter types\] implies that the $(D,N)$ parameters of an ETF with $N>D>1$ are type $(K,L,S)$ with $K=1$ if and only if that ETF is a regular simplex, and moreover that this only occurs when $L=1$ and $S=D$. Indeed, for any $D>1$, the pair $(D,N)=(D,D+1)$ satisfies \[eq.ETF param in terms of type param\] when $(K,L,S)=(1,1,D)$. Conversely, in light of \[eq.norms of pos or neg ETFs\], an ETF of type $(1,L,S)$ has a Naimark complement ${\{{{\boldsymbol{\psi}}_n}\}}_{n=1}^{N}$ with the property that ${\|{{\boldsymbol{\psi}}_n}\|}^2=L=1$ and ${|{{\langle{{\boldsymbol{\psi}}_n},{{\boldsymbol{\psi}}_{n'}}\rangle}}|}=1$ for all $n\neq n'$, namely a Naimark complement that is an ${{\operatorname{ETF}}}(1,N)$.
We also emphasize that it is sometimes possible for the parameters of a single ETF to be simultaneously positive and negative for different choices of $K$. In particular, if $D>1$ and $(D,D+1)$ is type $(K,-1,S)$, then \[eq.type param in terms of ETF param\] gives $S=D$ and $K=\frac{NS}{D(S+L)}=\frac{(D+1)D}{D(D-1)}=\frac{D+1}{D-1}=1+\frac{2}{D-1}$. Since $K$ is an integer, this implies either $D=2$ or $D=3$. And, letting $(K,L,S)$ be $(3,-1,2)$ and $(2,-1,3)$ in \[eq.ETF param in terms of type param\] indeed gives that $(D,N)$ is $(2,3)$ and $(3,4)$, respectively.
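Theorem \[thm.parameter types\] also gives a quick way to classify a given pair $(D,N)$: compute $S$ and then test $K$ for $L=\pm1$. The following sketch (ours; the function name is our own) does exactly this, and recovers the examples just discussed, namely that $(2,3)$ is both $1$-positive and $3$-negative, that $(3,4)$ is both $1$-positive and $2$-negative, and that $(3,6)$ is neither:

```python
from math import isqrt

def etf_types(D, N):
    """All (K, L, S) types of a pair (D, N) with 1 < D < N, following
    [thm.parameter types]: S = sqrt(D(N-1)/(N-D)) and K = NS/(D(S+L)) must be integers."""
    types = []
    num = D * (N - 1)
    if num % (N - D):
        return types
    S2 = num // (N - D)
    S = isqrt(S2)
    if S * S != S2:
        return types
    for L in (1, -1):
        if (N * S) % (D * (S + L)) == 0:
            types.append((N * S // (D * (S + L)), L, S))
    return types

print(etf_types(2, 3), etf_types(3, 4), etf_types(3, 6))
# [(1, 1, 2), (3, -1, 2)] [(1, 1, 3), (2, -1, 3)] []
```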
In the next section, we provide a much more thorough discussion of positive and negative ETFs, including some other examples of ETFs that are both. For now, we turn to our main result, which shows how to combine a given ${{\operatorname{ETF}}}(D,N)$ whose parameters are type $(K,L,S)$ with a certain $K$-GDD to produce a new ETF whose parameters are type $(K,L,S')$ for some $S'>S$. Here, as with any GDD, we require $K\geq 2$. In light of the above discussion, this is not a significant restriction since $(D,N)$ has type $(1,L,S)$ if and only if $L=1$ and $S=D$, and we already know that ETFs of type $(1,1,D)$ exist for all $D>1$, being regular simplices.
\[thm.new ETF\] Assume an ETF of type $(K,L,S)$ exists where $K\geq 2$, and let $M=S(K-1)+L$. The necessary conditions on the existence of a $K$-GDD of type $M^U$ reduce to having $$\label{eq.new ETF 1}
U\geq K,
\quad
\tfrac{U-1}{K-1}\in{\mathbb{Z}},
\quad
\tfrac{(S-L)U(U-1)}{K(K-1)}\in{\mathbb{Z}}.$$ Moreover, if such a GDD exists, and $U$ has the additional property that $$\label{eq.new ETF 2}
\tfrac{(K-2)(U-1)}{(S+L)(K-1)}\in{\mathbb{Z}},$$ then there exists an ETF of type $(K,L,S')$ where $S'=S+R=\tfrac{MU-L}{K-1}$, where $R=\tfrac{M(U-1)}{K-1}$.
In particular, under these hypotheses, $W:=\tfrac{R}{S+L}$ is a positive integer, and without loss of generality writing the given ETF as ${\{{{\boldsymbol{\varphi}}_{m,i}}\}}_{m=1,}^{M}\,_{i=1}^{S+L}$ where ${\|{{\boldsymbol{\varphi}}_{m,i}}\|}^2=S$ for all $m$ and $i$, letting ${\{{{\mathbf{E}}_{u,m}}\}}_{u=1,}^{U}\,_{m=1}^{M}$ be the embedding operators of the GDD (Definition \[def.embeddings\]), letting ${\{{{\boldsymbol{\delta}}_u}\}}_{u=1}^{U}$ be the standard basis for ${\mathbb{F}}^U$, and letting ${\{{{\mathbf{e}}_i}\}}_{i=1}^{S+L}$ and ${\{{1\oplus{\mathbf{f}}_j}\}}_{j=0}^{W}$ be the columns of possibly-complex Hadamard matrices of size $S+L$ and $W+1$, respectively, then the following vectors form an ETF of type $(K,L,S')$: $$\label{eq.new ETF 3}
{\{{{\boldsymbol{\psi}}_{u,m,i,j}}\}}_{u=1,}^{U}\,_{m=1,}^{M}\,_{i=1,}^{S+L}\,_{j=0}^{W},
\quad
{\boldsymbol{\psi}}_{u,m,i,j}:=({\boldsymbol{\delta}}_u\otimes{\boldsymbol{\varphi}}_{m,i})\oplus{\bigl({{\mathbf{E}}_{u,m}({\mathbf{e}}_i\otimes{\mathbf{f}}_j)}\bigr)}.$$
As special cases of this fact, an ETF of type $(K,L,S')$ exists whenever either:
1. $U$ is sufficiently large and satisfies \[eq.new ETF 1\] and \[eq.new ETF 2\];
2. there exists a $K$-GDD of type $M^U$, provided we also have that $S+L$ divides $K-2$.
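To illustrate Theorem \[thm.new ETF\] end to end, the following sketch (ours; the particular initial ETF, GDD, and Hadamard matrices are our own illustrative choices) starts from the regular simplex ${{\operatorname{ETF}}}(3,4)$, which is of type $(2,-1,3)$, combines it with a $2$-GDD of type $2^3$ via \[eq.new ETF 3\], and numerically verifies that the resulting $36$ vectors form an ${{\operatorname{ETF}}}(21,36)$ of type $(2,-1,7)$:

```python
import numpy as np
from itertools import combinations

# Initial ETF of type (2,-1,3): the regular simplex ETF(3,4), scaled so that each
# vector has squared norm S = 3 and all inner products equal -1.
H4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
K, L, S = 2, -1, 3
M = S * (K - 1) + L                                   # = 2
phi = H4[1:, :].astype(complex)                       # 3 x 4
phi_mi = {(m, i): phi[:, m * (S + L) + i] for m in range(M) for i in range(S + L)}

# A 2-GDD of type 2^3: U = 3 groups of M = 2 points, blocks = all cross pairs.
U = 3
pts = [(u, m) for u in range(U) for m in range(M)]
blocks = [b for b in combinations(pts, 2) if b[0][0] != b[1][0]]
B, R = len(blocks), M * (U - 1) // (K - 1)            # 12, 4
E = {}                                                # embedding operators, each B x R
for p in pts:
    rows = [r for r, blk in enumerate(blocks) if p in blk]
    Ep = np.zeros((B, R)); Ep[rows, np.arange(R)] = 1
    E[p] = Ep

W = R // (S + L)                                      # = 2
e = np.array([[1, 1], [1, -1]], dtype=complex)        # Hadamard matrix of size S+L = 2
dft = np.exp(2j * np.pi * np.outer(np.arange(W + 1), np.arange(W + 1)) / (W + 1))
f = dft[1:, :]                                        # f_j in C^W, j = 0, ..., W

# Assemble the vectors of [eq.new ETF 3]; the result should be an ETF(21,36).
cols = []
for u in range(U):
    delta = np.zeros(U); delta[u] = 1
    for m in range(M):
        for i in range(S + L):
            for j in range(W + 1):
                top = np.kron(delta, phi_mi[(m, i)])           # in F^(U*D)
                bot = E[(u, m)] @ np.kron(e[:, i], f[:, j])    # in F^B
                cols.append(np.concatenate([top, bot]))
Psi = np.column_stack(cols)                           # 21 x 36
G = Psi.conj().T @ Psi
off = np.abs(G[~np.eye(36, dtype=bool)])
print(np.allclose(np.diag(G).real, 7), np.allclose(off, 1),
      np.allclose(Psi @ Psi.conj().T, 12 * np.eye(21)))        # True True True
```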
Since $M=S(K-1)+L$, $$\label{eq.pf of new neg ETF 1}
\tfrac{M}{K-1}
=S+\tfrac{L}{K-1}
=(S+L)-L{\bigl({\tfrac{K-2}{K-1}}\bigr)},
\quad
\tfrac{M}{K}
=S-\tfrac{S-L}{K}.$$ As such, the replication number of any $K$-GDD of type $M^U$ is $R=\frac{M(U-1)}{K-1}=S(U-1)+L(\frac{U-1}{K-1})$. In particular, such a GDD can only exist when $K-1$ necessarily divides $U-1$. Moreover, multiplying the expressions in gives that the number of blocks in any such GDD is $$\begin{aligned}
B
&=\tfrac{M^2U(U-1)}{K(K-1)}\\
&={\bigl({S-\tfrac{S-L}{K}}\bigr)}{\bigl({S+\tfrac{L}{K-1}}\bigr)}U(U-1)\\
&=S^2U(U-1)+LSU\tfrac{U-1}{K-1}-\tfrac{S(S-L)}{K}U(U-1)-L\tfrac{(S-L)U(U-1)}{K(K-1)}.\end{aligned}$$ Here, since our initial ${{\operatorname{ETF}}}(D,N)$ is type $(K,L,S)$, $\frac{S(S-L)}{K}=S^2-D$ is an integer, and so the above expression for $B$ is an integer precisely when $K(K-1)$ divides $(S-L)U(U-1)$. To summarize, since $M=S(K-1)+L$ where $K$ divides $S(S-L)$, the necessary conditions on the existence of a $K$-GDD of type $M^U$ reduce to having \[eq.new ETF 1\]. These necessary conditions are known to be asymptotically sufficient [@Chang76; @LamkenW00]: for this fixed $K$ and $M$, there exists $U_0$ such that a $K$-GDD of type $M^U$ exists for any $U\geq U_0$ that satisfies \[eq.new ETF 1\]. Regardless, to apply our construction below with any given $K$-GDD of type $M^U$, we only need $U$ to satisfy the additional property that $$W
=\tfrac{R}{S+L}
=\tfrac{M(U-1)}{(S+L)(K-1)}
=\tfrac{U-1}{S+L}{\bigl[{(S+L)-L{\bigl({\tfrac{K-2}{K-1}}\bigr)}}\bigr]}
=(U-1)-L\tfrac{(K-2)(U-1)}{(S+L)(K-1)}$$ is an integer, namely to satisfy \[eq.new ETF 2\]. Since $K-1$ necessarily divides $U-1$, this is automatically satisfied whenever $S+L$ happens to divide $K-2$, and some of the ETFs we will identify in the next section will have this nice property. Regardless, there are always an infinite number of values of $U$ which satisfy \[eq.new ETF 1\] and \[eq.new ETF 2\], including, for example, all $U\equiv 1\bmod (S+L)K(K-1)$.
Turning to the construction itself, the fact that the given ${{\operatorname{ETF}}}(D,N)$ is type $(K,L,S)$ implies $$\label{eq.proof of new ETF 1}
D
=\tfrac{S}{K}[S(K-1)+L]
=\tfrac{SM}{K},
\quad
N
=(S+L)[S(K-1)+L]
=(S+L)M.$$ In particular, since $N=(S+L)M$, the vectors in our initial ${{\operatorname{ETF}}}(D,N)$ can indeed be indexed as ${\{{{\boldsymbol{\varphi}}_{m,i}}\}}_{m=1,}^{M}\,_{i=1}^{S+L}$. Moreover, $$\label{eq.proof of new ETF 2}
MU
=M(U-1)+M
=(K-1)R+[S(K-1)+L]
=(S+R)(K-1)+L.$$ As such, the number of vectors in the collection \[eq.new ETF 3\] is $$\label{eq.proof of new ETF 3}
N'
=UM(S+L)(W+1)
=(S+L)(\tfrac{R}{S+L}+1)MU
=[(S+R)+L][(S+R)(K-1)+L].$$ Also, for each $i$ and $j$, ${\mathbf{e}}_i\otimes{\mathbf{f}}_j$ lies in the space ${\mathbb{F}}^{(S+L)W}={\mathbb{F}}^R$. And, for each $u$ and $m$, ${\mathbf{E}}_{u,m}$ is a $B\times R$ matrix. As such, for any $u$, $m$, $i$ and $j$, ${\boldsymbol{\psi}}_{u,m,i,j}$ is a well-defined vector in ${\mathbb{F}}^{D'}$ where, by combining \[eq.GDD incidence matrix\], \[eq.proof of new ETF 1\] and \[eq.proof of new ETF 2\], we have $$\label{eq.proof of new ETF 4}
D'
=UD+B
=U\tfrac{SM}{K}+\tfrac{MU}{K}R
=\tfrac{S+R}{K}MU
=\tfrac{S+R}{K}[(S+R)(K-1)+L].$$ Comparing \[eq.proof of new ETF 3\] and \[eq.proof of new ETF 4\] against \[eq.ETF param in terms of type param\], we see that $(D',N')$ is indeed type $(K,L,S')$ where $S'=S+R$. Here, \[eq.proof of new ETF 2\] further implies $S'=S+R=\tfrac{MU-L}{K-1}$.
Continuing, since $(D',N')$ is type $(K,L,S')$, Theorem \[thm.parameter types\] gives that the Welch bound for $N'$ vectors in ${\mathbb{F}}^{D'}$ is $\tfrac1{S'}$. As such, to show \[eq.new ETF 3\] is an ETF for ${\mathbb{F}}^{D'}$, it suffices to prove that ${\|{{\boldsymbol{\psi}}_{u,m,i,j}}\|}^2=S'$ for all $u$, $m$, $i$, $j$, and that ${\langle{{\boldsymbol{\psi}}_{u,m,i,j}},{{\boldsymbol{\psi}}_{u',m',i',j'}}\rangle}$ is unimodular whenever $(u,m,i,j)\neq(u',m',i',j')$. Here, for any $u,u'=1,\dotsc,U$, $m,m'=1,\dotsc,M$, $i,i'=1,\dotsc,S+L$, $j,j'=0,\dotsc,W$, $$\begin{aligned}
{\langle{{\boldsymbol{\psi}}_{u,m,i,j}},{{\boldsymbol{\psi}}_{u',m',i',j'}}\rangle}
\nonumber
&={\langle{({\boldsymbol{\delta}}_u\otimes{\boldsymbol{\varphi}}_{m,i})\oplus{\bigl({{\mathbf{E}}_{u,m}({\mathbf{e}}_i\otimes{\mathbf{f}}_j)}\bigr)}},{({\boldsymbol{\delta}}_{u'}\otimes{\boldsymbol{\varphi}}_{m',i'})\oplus{\bigl({{\mathbf{E}}_{u',m'}({\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'})}\bigr)}}\rangle}\\
\nonumber
&={\langle{{\boldsymbol{\delta}}_u},{{\boldsymbol{\delta}}_{u'}}\rangle}{\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m',i'}}\rangle}
+{\langle{{\mathbf{E}}_{u,m}({\mathbf{e}}_i\otimes{\mathbf{f}}_j)},{{\mathbf{E}}_{u',m'}({\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'})}\rangle}\\
\label{eq.proof of new ETF 5}
&={\langle{{\boldsymbol{\delta}}_u},{{\boldsymbol{\delta}}_{u'}}\rangle}{\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m',i'}}\rangle}
+{\langle{{\mathbf{e}}_i\otimes{\mathbf{f}}_j},{{\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}({\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'})}\rangle}.\end{aligned}$$ When $u=u'$, $m=m'$, $i=i'$ and $j=j'$, \[eq.proof of new ETF 5\] indeed becomes $${\|{{\boldsymbol{\psi}}_{u,m,i,j}}\|}^2
={\|{{\boldsymbol{\delta}}_u}\|}^2{\|{{\boldsymbol{\varphi}}_{m,i}}\|}^2+{\|{{\mathbf{e}}_i}\|}^2{\|{{\mathbf{f}}_j}\|}^2
=1\cdot S+(S+L)W
=S+R
=S'.$$ As such, all that remains is to show that \[eq.proof of new ETF 5\] is unimodular in all other cases. For instance, if $u\neq u'$ then Lemma \[lem.embed\] gives that for any $m,m'$ there exists $r,r'=1,\dotsc,R$ such that ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u',m'}^{}={\boldsymbol{\delta}}_{r}^{}{\boldsymbol{\delta}}_{\smash{r'}}^*$ meaning that \[eq.proof of new ETF 5\] in this case becomes $${\langle{{\boldsymbol{\psi}}_{u,m,i,j}},{{\boldsymbol{\psi}}_{u',m',i',j'}}\rangle}
=0{\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m',i'}}\rangle}
+{\langle{{\mathbf{e}}_i\otimes{\mathbf{f}}_j},{{\boldsymbol{\delta}}_{r}^{}{\boldsymbol{\delta}}_{\smash{r'}}^*({\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'})}\rangle}
=\overline{({\mathbf{e}}_i\otimes{\mathbf{f}}_j)(r)}({\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'})(r'),$$ which is unimodular, being a product of unimodular numbers. If we instead have $u=u'$ and $m\neq m'$ then Lemma \[lem.embed\] gives ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u,m'}^{}={\boldsymbol{0}}$ and so \[eq.proof of new ETF 5\] becomes $${\langle{{\boldsymbol{\psi}}_{u,m,i,j}},{{\boldsymbol{\psi}}_{u,m',i',j'}}\rangle}
={\|{{\boldsymbol{\delta}}_u}\|}^2{\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m',i'}}\rangle}
+{\langle{{\mathbf{e}}_i\otimes{\mathbf{f}}_j},{{\boldsymbol{0}}({\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'})}\rangle}
={\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m',i'}}\rangle},$$ which is unimodular since ${\{{{\boldsymbol{\varphi}}_{m,i}}\}}_{m=1,}^{M}\,_{i=1}^{S+L}$ is an ETF of type $(K,L,S)$, and has been scaled so that ${\|{{\boldsymbol{\varphi}}_{m,i}}\|}^2=S$ for all $m$ and $i$. Next, if we instead have $u=u'$ and $m=m'$ then Lemma \[lem.embed\] gives ${\mathbf{E}}_{u,m}^*{\mathbf{E}}_{u,m}^{}={\mathbf{I}}$ and so \[eq.proof of new ETF 5\] becomes $$\label{eq.proof of new ETF 6}
{\langle{{\boldsymbol{\psi}}_{u,m,i,j}},{{\boldsymbol{\psi}}_{u,m,i',j'}}\rangle}
={\|{{\boldsymbol{\delta}}_u}\|}^2{\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m,i'}}\rangle}+{\langle{{\mathbf{e}}_i\otimes{\mathbf{f}}_j},{{\mathbf{e}}_{i'}\otimes{\mathbf{f}}_{j'}}\rangle}
={\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m,i'}}\rangle}+{\langle{{\mathbf{e}}_i},{{\mathbf{e}}_{i'}}\rangle}{\langle{{\mathbf{f}}_j},{{\mathbf{f}}_{j'}}\rangle}.$$ In particular, when $u=u'$, $m=m'$ and $i\neq i'$, the fact that ${\{{{\mathbf{e}}_i}\}}_{i=1}^{S+L}$ is orthogonal implies that \[eq.proof of new ETF 6\] reduces to ${\langle{{\boldsymbol{\varphi}}_{m,i}},{{\boldsymbol{\varphi}}_{m,i'}}\rangle}$, which is unimodular for the same reason as the previous case. The final remaining case is the most interesting: when $u=u'$, $m=m'$, $i=i'$ but $j\neq j'$, we have $0={\langle{1\oplus{\mathbf{f}}_j},{1\oplus{\mathbf{f}}_{j'}}\rangle}=1+{\langle{{\mathbf{f}}_j},{{\mathbf{f}}_{j'}}\rangle}$ and so ${\langle{{\mathbf{f}}_j},{{\mathbf{f}}_{j'}}\rangle}=-1$; when combined with the fact that ${\|{{\boldsymbol{\varphi}}_{m,i}}\|}^2=S$ and ${\|{{\mathbf{e}}_i}\|}^2=S+L$, this implies that \[eq.proof of new ETF 6\] in this case becomes $${\langle{{\boldsymbol{\psi}}_{u,m,i,j}},{{\boldsymbol{\psi}}_{u,m,i,j'}}\rangle}
={\|{{\boldsymbol{\varphi}}_{m,i}}\|}^2+{\|{{\mathbf{e}}_i}\|}^2{\langle{{\mathbf{f}}_j},{{\mathbf{f}}_{j'}}\rangle}
=S+(S+L)(-1)
=-L,$$ where, in Definition \[def.ETF types\], we have assumed that $L$ is either $1$ or $-1$.
The construction of Theorem \[thm.new ETF\] leads to the concept of ETFs of type $(K,L,S)$. To clarify, the construction \[eq.new ETF 3\] and the above proof of its equiangularity are valid for any initial ${{\operatorname{ETF}}}(D,N)$ and any $K$-GDD of type $M^U$, provided $M=\frac{N}{S+L}$ for some $L\in{\{{-1,1}\}}$, and $S+L$ divides $R$. However, it turns out that the first $UD$ rows of the corresponding synthesis operator have squared-norm $(W+1)\frac{NS}{D}$, whereas the last $B$ rows have squared-norm $(W+1)K(S+L)$. As such, the equiangular vectors \[eq.new ETF 3\] are only a tight frame when $\frac{NS}{D}=K(S+L)$. Applying the techniques of the proof of Theorem \[thm.parameter types\] then leads to the expressions for $(D,N)$ in terms of $(K,L,S)$ given in \[eq.ETF param in terms of type param\]. These facts are not explicitly discussed in our proof above since any equal-norm vectors that attain the Welch bound are automatically tight.
\[rem.recursive\] There is no apparent value to recursively applying Theorem \[thm.new ETF\]. To elaborate, given an ${{\operatorname{ETF}}}(D,N)$ of type $(K,L,S)$ and a $K$-GDD of type $M^U$ where $M=S(K-1)+L$ and $U$ satisfies \[eq.new ETF 2\], Theorem \[thm.new ETF\] yields an ${{\operatorname{ETF}}}(D',N')$ of type $(K,L,S')$ where $S'=\frac{MU-L}{K-1}$. The “$M$" parameter of this new ETF is thus $M'=S'(K-1)+L=(MU-L)+L=MU$, and we can apply Theorem \[thm.new ETF\] a second time provided we have a $K$-GDD of type $(MU)^{U'}$ where $U'$ satisfies the appropriate analog of \[eq.new ETF 2\], namely $$\label{eq.recursive 1}
\tfrac{(K-2)(U'-1)}{(S'+L)(K-1)}\in{\mathbb{Z}}.$$ Doing so yields an ETF of type $(K,L,S'')$ where $S''=\tfrac{M'U'-L}{K-1}=\tfrac{MUU'-L}{K-1}$. However, under these hypotheses, there is a simpler way to construct an ETF of this same type. Indeed, using the first GDD to fill the holes of the second GDD via Lemma \[lem.filling holes\] produces a $K$-GDD of type $M^{UU'}$. Moreover, $UU'$ is a value of “$U$" that satisfies \[eq.new ETF 2\]: since $S'=S+R$, $$\tfrac{S'+L}{S+L}=\tfrac{S'-S}{S+L}+1=\tfrac{R}{S+L}+1=W+1\in{\mathbb{Z}},$$ and this together with \[eq.recursive 1\] and \[eq.new ETF 2\] imply $$\tfrac{(K-2)(UU'-1)}{(S+L)(K-1)}
=\tfrac{(K-2)[U(U'-1)+(U-1)]}{(S+L)(K-1)}
=U\tfrac{(S'+L)}{(S+L)}\tfrac{(K-2)(U'-1)}{(S'+L)(K-1)}+\tfrac{(K-2)(U-1)}{(S+L)(K-1)}
\in{\mathbb{Z}}.$$ As such, we can combine our original ETF of type $(K,L,S)$ with our $K$-GDD of type $M^{UU'}$ via Theorem \[thm.new ETF\] to directly produce an ETF of type $(K,L,\tfrac{MUU'-L}{K-1})$.
We also point out that, in a manner analogous to how every McFarland difference set can be viewed as a degenerate instance of a Davis-Jedwab difference set [@DavisJ97], every Steiner ETF can be regarded as a degenerate case of the construction of Theorem \[thm.new ETF\]. Here, in a manner consistent with \[eq.ETF param in terms of type param\], we regard $(D,N)=(0,1)$ as being type $(K,L,S)=(K,1,0)$ where $K\geq 1$ is arbitrary. Under this convention, $M=S(K-1)+L=1$, and $W=R$, meaning we need a $K$-GDD of type $1^U$, namely a ${{\operatorname{BIBD}}}(V,K,1)$ where $V=U$. When such a BIBD exists, Theorem \[thm.new ETF\] suggests we let ${\{{{\boldsymbol{\varphi}}_{1,1}}\}}$ be some fictitious ETF for the nonexistent space ${\mathbb{F}}^0$, scaled so that ${\|{{\boldsymbol{\varphi}}_{1,1}}\|}^2=S=0$. It also suggests we let ${\{{{\mathbf{E}}_{u,1}}\}}_{u=1}^{U}$ be the embedding operators of the BIBD, let ${\mathbf{e}}_1=1$, and let ${\{{1\oplus{\mathbf{f}}_j}\}}_{j=0}^{R}$ be the columns of a possibly-complex Hadamard matrix of size $R+1$. Under these conventions, \[eq.new ETF 3\] reduces to a collection of $V(R+1)$ vectors ${\{{{\boldsymbol{\psi}}_{v,j}}\}}_{v=1,}^{V}\,_{j=0}^{R}$, where for any $v=u$ and $j$, the fact that ${\boldsymbol{\varphi}}_{1,1}$ lies in a “zero-dimensional space" makes it reasonable to regard $${\boldsymbol{\psi}}_{v,j}
={\boldsymbol{\psi}}_{u,1,1,j}
=({\boldsymbol{\delta}}_v\otimes{\boldsymbol{\varphi}}_{1,1})\oplus{\bigl({{\mathbf{E}}_{u,1}({\mathbf{e}}_1\otimes{\mathbf{f}}_j)}\bigr)}
={\mathbf{E}}_{u,1}(1\otimes{\mathbf{f}}_j)
={\mathbf{E}}_{u}{\mathbf{f}}_j.$$ Here, Theorem \[thm.new ETF\] leads us to expect that ${\{{{\mathbf{E}}_v{\mathbf{f}}_j}\}}_{v=1,}^{V}\,_{j=0}^{R}$ is an ETF of type $(K,1,S+R)=(K,1,R)$.
This is indeed the case: as discussed in the previous section, ${\{{{\mathbf{E}}_v{\mathbf{f}}_j}\}}_{v=1,}^{V}\,_{j=0}^{R}$ is by definition a Steiner ETF for ${\mathbb{F}}^B$ where $B=\frac{VR}{K}$, and moreover letting $(K,L,S)=(K,1,R)$ in \[eq.ETF param in terms of type param\] gives: $$\begin{aligned}
\tfrac{S}{K}[S(K-1)+L]
&=\tfrac{R}{K}[R(K-1)+1]
=\tfrac{R}{K}V
=B,\\
(S+L)[S(K-1)+L]
&=(R+1)[R(K-1)+1]
=V(R+1).\end{aligned}$$ In particular, every Steiner ETF is a positive ETF.
As a degenerate case of Remark \[rem.recursive\], we further have that when a Steiner ETF is regarded as being positive, applying Theorem \[thm.new ETF\] to it yields an ETF whose parameters match those of another Steiner ETF. Indeed, since a Steiner ETF arising from a ${{\operatorname{BIBD}}}(V,K,1)$ is type $(K,1,R)$ where $R=\frac{V-1}{K-1}$, we can only apply Theorem \[thm.new ETF\] to it whenever there exists a $K$-GDD of type $M^U$ where $M=R(K-1)+1=V$ and $U$ satisfies \[eq.new ETF 2\]. In this case, the resulting ETF is type $(K,1,\frac{VU-1}{K-1})$. However, under these same hypotheses, we can more simply use the ${{\operatorname{BIBD}}}(V,K,1)$ to fill the holes of the $K$-GDD of type $V^U$ via Lemma \[lem.filling holes\] to obtain a ${{\operatorname{BIBD}}}(UV,K,1)$, and its corresponding Steiner ETF is type $(K,1,\frac{VU-1}{K-1})$.
Families of positive and negative equiangular tight frames
==========================================================
In this section, we use Theorems \[thm.parameter types\] and \[thm.new ETF\] to better our understanding of positive and negative ETFs, in particular proving the existence of the new ETFs given in Theorems \[thm.new neg ETS with K=4,5\] and \[thm.new neg ETS with K>5\]. Recall that by , any ${{\operatorname{ETF}}}(D,N)$ of type $(K,L,S)$ has $D=\tfrac{S}{K}[S(K-1)+L]=S^2-\tfrac{S(S-L)}{K}$, and so $K$ necessarily divides $S(S-L)$. As we shall see, it is reasonable to conjecture that such ETFs exist whenever this necessary condition is satisfied.
Positive equiangular tight frames
---------------------------------
In light of Definition \[def.ETF types\] and Theorem \[thm.parameter types\], an ${{\operatorname{ETF}}}(D,N)$ with $1<D<N$ is positive if and only if there exist integers $K\geq 1$ and $S\geq 2$ such that $$\label{eq.positive ETF param}
D=\tfrac{S}{K}[S(K-1)+1]=S^2-\tfrac{S(S-1)}{K},
\quad
N=(S+1)[S(K-1)+1],$$ or equivalently that and $K=\frac{NS}{D(S+1)}$ are integers.
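For illustration only, the following minimal Python sketch (the function name `positive_type` is ours) tests these integrality conditions with exact rational arithmetic and recovers the type $(K,1,S)$ from a pair $(D,N)$:

```python
from fractions import Fraction
from math import isqrt

def positive_type(D, N):
    """Return (K, 1, S) if (D, N) is positive in the sense above, else None."""
    s2 = Fraction(D * (N - 1), N - D)        # S^2 = D(N-1)/(N-D) as an exact rational
    if s2.denominator != 1:
        return None
    S = isqrt(s2.numerator)
    if S * S != s2.numerator or S < 2:
        return None
    K = Fraction(N * S, D * (S + 1))         # K = NS / [D(S+1)]
    if K.denominator != 1:
        return None
    K = int(K)
    # consistency with D = (S/K)[S(K-1)+1] and N = (S+1)[S(K-1)+1]
    assert D * K == S * (S * (K - 1) + 1) and N == (S + 1) * (S * (K - 1) + 1)
    return (K, 1, S)

print(positive_type(7, 28))   # (3, 1, 3): the Steiner ETF of the Fano plane BIBD(7,3,1)
print(positive_type(6, 16))   # (2, 1, 3)
print(positive_type(3, 6))    # None: (3,6) is neither positive nor negative
```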
As discussed in the previous section, every regular simplex is a $1$-positive ETF and vice versa, and moreover, every Steiner ETF arising from a ${{\operatorname{BIBD}}}(V,K,1)$ is type $(K,1,R)$ where $R=\frac{V-1}{K-1}$. Here, the fact that $K$ necessarily divides $S(S-1)=R(R-1)$ is equivalent to the fact that $D=B$ is an integer. *Fisher’s inequality* also states that a ${{\operatorname{BIBD}}}(V,K,1)$ can only exist when $K\leq R$. These necessary conditions on the existence of a ${{\operatorname{BIBD}}}(V,K,1)$ are known to also be sufficient when $K=2,3,4,5$, and also asymptotically sufficient in general: for any $K\geq 2$, there exists $V_0$ such that for all $V\geq V_0$ with the property that $R=\frac{V-1}{K-1}$ and $B=\frac{VR}{K}=\frac{V(V-1)}{K(K-1)}$ are integers, a ${{\operatorname{BIBD}}}(V,K,1)$ exists [@AbelG07]. Moreover, explicit infinite families of such BIBDs are known, including affine geometries, projective geometries, unitals and Denniston designs [@FickusMT12], and each thus gives rise to a corresponding infinite family of (positive) Steiner ETFs.
That said, not every positive ETF is a Steiner ETF. In particular, for any prime power $Q$, there is a phased BIBD ETF [@FickusJMPW19] whose Naimark complements are ${{\operatorname{ETF}}}(D,N)$ where $$D=\tfrac{Q^3+1}{Q+1},
\quad
N=Q^3+1,
\quad
S={\bigl({\tfrac{N-1}{\frac ND-1}}\bigr)}^{\frac12}={\bigl({\tfrac{Q^3}{Q}}\bigr)}^{\frac12}=Q,
\quad
K=\tfrac{NS}{D(S+1)}=\tfrac{(Q+1)Q}{Q+1}=Q,$$ namely an ETF of type $(Q,1,Q)$. These parameters match those of a Steiner ETF arising from a projective plane of order $Q-1$. However, such an ETF can exist even when no such projective plane exists. For example, such an ETF exists when $Q=7$ despite the fact that no projective plane of order $Q-1=6$ exists.
Other ETFs of type $(K,1,S)$ have $K>S$, and are thus not Steiner ETFs since the underlying BIBD would necessarily violate Fisher’s inequality. To elaborate, when $K$ is a prime power, the requirement that $K$ divides $S(S-1)$ where $S$ and $S-1$ are relatively prime implies that either $K$ divides $S$ or $S-1$, and in either case $K\leq S$. However, when $K$ is not a prime power, we can sometimes choose it to be a divisor of $S(S-1)$ that is larger than $S$.
For example, taking $K=S(S-1)$ in gives $D=S^2-1$ and $N=(S^2-1)^2=D^2$. Such an ETF thus corresponds to a SIC-POVM in a space whose dimension is one less than a perfect square. Such SIC-POVMs are known to exist when $S=2,\dotsc,7,18$, and are conjectured to exist for all $S$ [@FuchsHS17; @GrasslS17]. Similarly, taking $K=\binom{S}{2}$ in gives $D=S^2-2$ and $N=\binom{D+1}{2}$. We refer to such $(D,N)$ as being of *real maximal type* since real-valued examples of such ETFs meet the real-variable version of the Gerzon bound, and are known to exist when $S=3,5$. Remarkably, it is known that a real ETF of this type does not exist when $S=7$ [@FickusM16]. We also caution that there is a single pair $(D,N)$ with $N=\binom{D+1}{2}$ that is neither positive nor negative despite the fact that an ${{\operatorname{ETF}}}(D,N)$ exists, namely $(D,N)=(3,6)$.
We summarize these facts as follows:
\[thm.known positive ETFs\] An ETF of type $(K,1,S)$ exists whenever:
1. $K=1$ and $S\geq 2$, (regular simplices);
2. $K\geq 2$ and there exists a ${{\operatorname{BIBD}}}(S(K-1)+1,K,1)$ (Steiner ETFs [@FickusMT12]), including:
1. $K=2,3,4,5$ and $S\geq K$ has the property that $K$ divides $S(S-1)$;
2. $K=Q$ and $S=\frac{Q^J-1}{Q-1}$ where $Q$ is a prime power and $J\geq 2$ (affine geometries);
3. $K=Q+1$ and $S=\frac{Q^J-1}{Q-1}$ where $Q$ is a prime power and $J\geq 2$ (projective geometries);
4. $K=Q+1$ and $S=Q^2$ where $Q$ is a prime power (unitals);
5. $K=2^{J_1}$ and $S=2^{J_2}+1$ where $2\leq J_1<J_2$ (Denniston designs);
6. $K\geq2$ and $S$ is sufficiently large and has the property that $K$ divides $S(S-1)$;
3. $K=Q$ and $S=Q$ whenever $Q$ is a prime power [@FickusJMPW19];
4. $K=S(S-1)$ where $S=2,\dotsc,7,18$ (SIC-POVMs [@FuchsHS17; @GrasslS17]);
5. $K=\binom{S}{2}$ where $S=3,5$ (real maximal type [@FickusM16]).
Because so much is already known regarding the existence of positive ETFs, we could not find any examples where Theorem \[thm.new ETF\] makes a verifiable contribution. As we now discuss, much less is known about negative ETFs, and this gives Theorem \[thm.new ETF\] an opportunity to be useful.
Negative equiangular tight frames
---------------------------------
By Definition \[def.ETF types\] and Theorem \[thm.parameter types\], an ${{\operatorname{ETF}}}(D,N)$ with $1<D<N$ is negative if and only if there exist integers $K\geq 1$ and $S\geq 2$ such that $$\label{eq.negative ETF param}
D=\tfrac{S}{K}[S(K-1)-1]=S^2-\tfrac{S(S+1)}{K},
\quad
N=(S-1)[S(K-1)-1],$$ or equivalently that and $K=\frac{NS}{D(S-1)}$ are integers. Here, since $\frac{N}{D}>1$ and $\frac{S}{S-1}>1$, we actually necessarily have that $K\geq 2$.
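The negative case can be tested the same way; the following sketch (illustrative, with our own function name) recovers the type $(K,-1,S)$ from a pair $(D,N)$:

```python
from fractions import Fraction
from math import isqrt

def negative_type(D, N):
    """Return (K, -1, S) if (D, N) is negative in the sense above, else None."""
    s2 = Fraction(D * (N - 1), N - D)
    if s2.denominator != 1:
        return None
    S = isqrt(s2.numerator)
    if S * S != s2.numerator or S < 2:
        return None
    K = Fraction(N * S, D * (S - 1))         # K = NS / [D(S-1)]
    if K.denominator != 1:
        return None
    K = int(K)
    assert D * K == S * (S * (K - 1) - 1) and N == (S - 1) * (S * (K - 1) - 1)
    return (K, -1, S)

print(negative_type(22, 176))  # (10, -1, 5), discussed below
print(negative_type(11, 33))   # (4, -1, 4), whose existence is open
print(negative_type(6, 16))    # (4, -1, 3): ETF(6,16) is both positive and negative
```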
When $K=2$, becomes $(D,N)=(\frac12S(S-1),(S-1)^2)$ and such ETFs exist for any $S\geq 3$, being the Naimark complements of ETFs of type $(2,1,S-2)$.
In the $K=3$ case, becomes $$D
=\tfrac{S(2S-1)}{3}
=S^2-\tfrac{S(S+1)}{3},
\quad
N=(S-1)(2S-1).$$ For $D$ to be an integer, we necessarily have $S\equiv0,2\bmod 3$. Moreover, for any $S\geq 2$ with $S\equiv 0,2\bmod 3$, an ETF of type $(3,-1,S)$ exists. Indeed, the recent paper [@FickusJMP18] gives a way to modify the Steiner ETF arising from a ${{\operatorname{BIBD}}}(V,3,1)$ to yield a Tremain ${{\operatorname{ETF}}}(D,N)$ with $$D=\tfrac16(V+2)(V+3),\quad
N=\tfrac12(V+1)(V+2),$$ for any $V\geq 3$ with $V\equiv1,3\bmod 6$. Such an ETF is type $(3,-1,S)$ with $S=\frac12(V+3)$. We also note that for every $J\geq1$, there is a harmonic ${{\operatorname{ETF}}}(D,N)$ arising from a Spence difference set [@Spence77] with $(D,N)=(\tfrac12 3^{J}(3^{J+1}+1),\tfrac12 3^{J+1}(3^{J+1}-1))$, and such an ETF is type $(3,-1,S)$ where $S=\frac12(3^{J+1}+1)$. It remains unclear whether any Spence ETFs are unitarily equivalent to special instances of Tremain ETFs.
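As a purely illustrative sanity check, the following sketch verifies numerically that the Tremain and Spence parameter families above are of type $(3,-1,S)$:

```python
def neg_params(K, S):
    """(D, N) of an ETF of type (K, -1, S), per the formulas above."""
    return S * (S * (K - 1) - 1) // K, (S - 1) * (S * (K - 1) - 1)

# Tremain ETFs: V = 1, 3 (mod 6), (D, N) = ((V+2)(V+3)/6, (V+1)(V+2)/2), S = (V+3)/2
for V in [3, 7, 9, 13, 15, 19]:
    D, N = (V + 2) * (V + 3) // 6, (V + 1) * (V + 2) // 2
    assert (D, N) == neg_params(3, (V + 3) // 2)

# Spence ETFs: (D, N) = (3^J(3^{J+1}+1)/2, 3^{J+1}(3^{J+1}-1)/2), S = (3^{J+1}+1)/2
for J in range(1, 6):
    D, N = 3**J * (3**(J + 1) + 1) // 2, 3**(J + 1) * (3**(J + 1) - 1) // 2
    assert (D, N) == neg_params(3, (3**(J + 1) + 1) // 2)

print("Tremain and Spence parameters are all of type (3, -1, S)")
```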
In the $K=4$ case, for any positive integer $J$, Davis and Jedwab [@DavisJ97] give a difference set whose harmonic ETF has parameters and so is a type $(4,-1,S)$ ETF where $S=\frac13(2^{2J+1}+1)$. Beyond these examples, a few other infinite families of negative ETFs are known to exist. In particular, in order for the expression for $D$ given in to be an integer, $K$ necessarily divides $S(S+1)$, and so it is natural to consider the special cases where $S=K$ and $S=K-1$.
When $S=K-1$, gives that ETFs of type $(K,-1,K-1)$ have $$D=(K-1)(K-2),
\quad
N=K(K-2)^2.$$ Remarkably, these are the same $(D,N)$ parameters as those of an ETF of type $(K-2,1,K-1)$. In particular, every Steiner ETF arising from an affine plane of order $Q$ is both of (positive) type $(Q,1,Q+1)$ and (negative) type $(Q+2,-1,Q+1)$. More generally, a Steiner ETF arising from a ${{\operatorname{BIBD}}}(V,K,1)$ has $(D,N)=(B,V(R+1))$ where $R=\frac{V-1}{K-1}$ and $B=\frac{VR}{K}$, and so is only negative when $$\label{eq.when Steiner ETF is neg}
\tfrac{NS}{D(S-1)}
=\tfrac{V(R+1)R}{B(R-1)}
=\tfrac{K(R+1)}{R-1}$$ is an integer. When $R$ is even, $R-1$ and $R+1$ are relatively prime, and this can only occur when $R-1$ divides $K$. Here, since Fisher’s inequality gives $K\leq R$, this happens precisely when either $R=K=2$ or $R=K+1$, namely when the underlying BIBD is a ${{\operatorname{BIBD}}}(3,2,1)$ or is an affine plane of odd order $K$, respectively. Meanwhile, when $R$ is odd, $R-1$ and $R+1$ have exactly one prime factor in common, namely $2$, and is an integer precisely when $\frac12(R-1)$ divides $K$. Since $K\leq R$, this happens precisely when either $R=K=3$, $R=K+1$ or $R=2K+1$, namely when the underlying BIBD is the projective plane of order $2$, an affine plane of even order, or is a ${{\operatorname{BIBD}}}(V,K,1)$ where $V=(2K+1)(K-1)+1=K(2K-1)$ for some $K\geq 2$, respectively. With regard to the latter, it seems to be an open question whether a ${{\operatorname{BIBD}}}(K(2K-1),K,1)$ exists for every $K\geq 2$, though a Denniston design provides one whenever $K=2^J$ for some $J\geq1$, and they are also known to exist when $K=3,5,6,7$ [@MathonR07]. For such ETFs, becomes $K+1$, meaning they are type $(K',-1,2K'-1)$ where $K'=K+1$.
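The case analysis above can also be confirmed by brute force; the following illustrative sketch enumerates all pairs $K\leq R$ in a small range for which $K(R+1)/(R-1)$ is an integer and checks that they fall into the stated families:

```python
hits = [(K, R) for R in range(2, 200) for K in range(2, R + 1)   # Fisher: K <= R
        if (K * (R + 1)) % (R - 1) == 0]

# every hit should be R = K = 2, R = K = 3, R = K + 1, or R = 2K + 1
for K, R in hits:
    assert (K, R) in [(2, 2), (3, 3)] or R == K + 1 or R == 2 * K + 1
print(len(hits), "pairs found, all in the predicted families")
```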
Meanwhile, when $S=K$, gives that ETFs of type $(K,-1,K)$ have $$\label{eq.S=K param}
D=K^2-K-1,
\quad
N=(K-1)(K^2-K-1).$$ The recently-discovered hyperoval ETFs of [@FickusMJ16] are instances of such ETFs whenever $K=2^J+1$ for some $J\geq 1$. In the $K=4$ case, becomes $(D,N)=(11,33)$, and this seems to be the smallest set of positive or negative parameters for which the existence of a corresponding ETF remains an open problem.
Apart from these examples, it seems that only a finite number of other negative ETFs are known to exist. For example, since $K$ necessarily divides $S(S+1)$, it is natural to also consider the cases where $K=S(S+1)$ and $K=\binom{S+1}{2}$. When $K=S(S+1)$, becomes $(D,N)=(S^2-1,(S^2-1)^2)$. In particular, every positive SIC-POVM is also negative. Similarly, when $K=\binom{S+1}{2}$, gives $N=\binom{D+1}{2}$ where $D=S^2-2$, meaning every positive $(D,N)$ of real maximal type is also negative.
The only other example of a negative ETF that we found in the literature was an ${{\operatorname{ETF}}}(22,176)$, which has type $(10,-1,5)$, and arises from a particular SRG. When searching tables of known ETFs such as [@FickusM16], it is helpful to note that most positive and negative ETFs have redundancy $\frac{N}{D}>2$, with the only exceptions being $1$-positive ETFs (regular simplices), $2$-negative ETFs (Naimark complements of $2$-positive ETFs), ${{\operatorname{ETF}}}(2,3)$ when regarded as type $(3,-1,2)$, and ${{\operatorname{ETF}}}(5,10)$, which are type $(3,-1,3)$. Indeed, when $K\geq2$, any $K$-positive ETF has $\tfrac{N}{D}=\frac{K(S+1)}{S}>K\geq2$. Meanwhile, when $K\geq 3$, any ETF of type $(K,-1,S)$ only has $\frac{K(S-1)}{S}=\tfrac{N}{D}\leq 2$ when $S\leq\frac{K}{K-2}$. Since $K$ necessarily divides $S(S+1)$ where $S\geq 2$, such an ETF can only exist when $K=3$ and $S=2,3$. We summarize these previously-known constructions of negative ETFs as follows:
\[thm.known negative ETFs\] An ETF of type $(K,-1,S)$ exists whenever:
1. $K=2$ and $S\geq 3$ (Naimark complements of $2$-positive ETFs);
2. $K=3$ and $S\geq 2$ with $S\equiv 0,2\bmod 3$ (Tremain ETFs [@FickusJMP18] and Spence harmonic ETFs [@Spence77]);
3. $K=4$ and $S=\frac13(2^{2J+1}+1)$ for some $J\geq1$ (Davis-Jedwab harmonic ETFs [@DavisJ97]);
4. $K=Q+2$ and $S=K-1$ where $Q$ is a prime power (Steiner ETFs from affine planes [@FickusMT12]);
5. $K=2^J+1$ and $S=2K-1$ where $J\geq 1$ (Steiner ETFs from Denniston designs [@FickusMT12]);
6. $K=2^J+1$ and $S=K$ where $J\geq 1$ (hyperoval ETFs [@FickusMJ16]);
7. $K=S(S+1)$ where $S=2,\dotsc,7,18$ (SIC-POVMs [@FuchsHS17]);
8. $K=\binom{S+1}{2}$ where $S=3,5$ (real maximal type [@FickusM16]);
9. $(K,S)=(4,7),(6,11),(7,13),(8,15),(10,5)$ (various other ETFs [@FickusM16]).
From this list, we see that for any $K\geq 5$, the existing literature provides at most a finite number of $K$-negative ETFs. Theorem \[thm.new ETF\](a) implies that many more negative ETFs exist: if an ETF of type $(K,-1,S)$ exists, then an ETF of type $(K,-1,\frac{MU+1}{K-1})$ exists for all sufficiently large $U$ that satisfy and . Combining this fact with Theorem \[thm.known negative ETFs\] immediately gives:
There exists an infinite number of $K$-negative ETFs whenever:
1. $K=Q+2$ where $Q$ is a prime power;
2. $K=Q+1$ where $Q$ is an even prime power;
3. $K=2,8,12,20,30,42,56,342$.
In particular, we now know that there are an infinite number of values of $K$ for which an infinite number of $K$-negative ETFs exist, with $K=14$ being the smallest open case. With these asymptotic existence results in hand, we now focus on applying Theorem \[thm.new ETF\] with explicit GDDs.
For example, the “Mercedes-Benz" regular simplex ${{\operatorname{ETF}}}(2,3)$ is type $(K,L,S)=(3,-1,2)$ and so has $M=S(K-1)+L=3$. Since $S+L=1$ divides $K-2=1$, Theorem \[thm.new ETF\] can be applied with any $3$-GDD of type $3^U$ so as to produce an ETF of type $(K,L,\tfrac{MU-L}{K-1})=(3,-1,\tfrac12(3U+1))$, and moreover the necessary conditions on the existence of such a GDD reduce to , namely to having $U\geq 3$, $\tfrac12(U-1)\in{\mathbb{Z}}$ and $\tfrac12U(U-1)\in{\mathbb{Z}}$. In fact, such GDDs are known to exist whenever these necessary conditions are satisfied [@Ge07], namely when $U\geq 3$ is odd. (This also follows from the fact that such GDDs are equivalent to the incidence structures obtained by removing a parallel class from a resolvable Steiner triple system.) That is, writing $U=2J+1$ for some $J\geq 1$, we can apply Theorem \[thm.new ETF\] with an ${{\operatorname{ETF}}}(2,3)$ and a known $3$-GDD of type $3^{2J+1}$ to produce an ETF of type $(3,-1,\tfrac12(3U+1))=(3,-1,3J+2)$.
In summary, applying Theorem \[thm.new ETF\] to an ETF of type $(3,-1,2)$ produces ETFs of type $(3,-1,S)$ for any $S\equiv 2\bmod 3$, and so recovers the parameters of “half" of all possible $3$-negative ETFs, cf. Theorem \[thm.known negative ETFs\] and [@FickusJMP18], including the parameters of all harmonic ETFs arising from Spence difference sets. To instead recover some of the ETFs of type $(3,-1,S)$ with $S\equiv0\bmod 3$, one may, for example, apply Theorem \[thm.new ETF\] to the well-known ${{\operatorname{ETF}}}(5,10)$, which is type $(3,-1,3)$.
In order to obtain ETFs with verifiably new parameters, we turn our attention to applying Theorem \[thm.new ETF\] to known $K$-negative ETFs with $K\geq 4$. Here, the limiting factor seems to be a lack of knowledge regarding uniform $K$-GDDs: while the literature has much to say when $K=3,4,5$ [@Ge07], we are relegated to well-known simple constructions involving Lemmas \[lem.Wilson\] and \[lem.filling holes\] whenever $K>5$. As such, we consider the $K=4,5$ cases separately from those with $K>5$:
The ${{\operatorname{ETF}}}(6,16)$ is type $(4,-1,3)$, and so we can apply Theorem \[thm.new ETF\] whenever there exists a $4$-GDD of type $8^U$ where $U$ satisfies . By Theorem \[thm.new ETF\], the known necessary conditions on the existence of such GDDs reduce to : $$U\geq 4,
\quad
\tfrac{U-1}{3}\in{\mathbb{Z}},
\quad
\tfrac{U(U-1)}{3}=\tfrac{4U(U-1)}{4(3)}\in{\mathbb{Z}},$$ namely to having $U\geq 4$ and $U\equiv 1\bmod 3$. These necessary conditions on the existence of $4$-GDDs of type $8^U$ are known to be sufficient [@Ge07]. Moreover, for any such $U$, we have is automatically satisfied since . Altogether, for any $U\geq 4$ with $U\equiv 1\bmod 3$, we can apply Theorem \[thm.new ETF\] to the ETF of type $(4,-1,3)$ with a $4$-GDD of type $8^U$, and doing so produces an ETF of type $(4,-1,\frac13(8U+1))$. Here, letting $U=1$ recovers the parameters of the original ETF. Overall, writing $U=3J+1$ for some $J\geq0$, this means that an ETF of type $(4,-1,S)$ exists whenever $S=\frac13(8U+1)=\frac13[8(3J+1)+1]=8J+3$ for any $J\geq0$, namely whenever $S\equiv 3\bmod 8$. In particular, for any $J\geq 1$ we can take $U=4^{J-1}$ to obtain an ETF of type $(4,-1,S)$ where $S=\tfrac13(8U+1)=\tfrac13(2^{2J+1}+1)$. This means that applying Theorem \[thm.new ETF\] to the ETF of type $(4,-1,3)$ recovers the parameters of harmonic ETFs corresponding to Davis-Jedwab difference sets.
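For illustration, the following sketch verifies the two arithmetic claims just made: $U\equiv1\bmod 3$ gives $S\equiv3\bmod 8$, and $U=4^{J-1}$ reproduces the Davis-Jedwab values of $S$:

```python
# U = 1 (mod 3) gives S = (8U + 1)/3 = 3 (mod 8)
assert all(((8 * U + 1) // 3) % 8 == 3 for U in range(1, 400) if U % 3 == 1)

# U = 4^(J-1) reproduces the Davis-Jedwab values S = (2^(2J+1) + 1)/3
for J in range(1, 8):
    U = 4 ** (J - 1)
    assert U % 3 == 1 and (8 * U + 1) // 3 == (2 ** (2 * J + 1) + 1) // 3
print("K = 4 family checks pass")
```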
In light of Remark \[rem.recursive\], applying Theorem \[thm.new ETF\] to any ETF of type $(4,-1,S)$ with $S\equiv 3\bmod 8$ simply recovers a subset of the ETF types obtained by applying Theorem \[thm.new ETF\] to the ETF of type $(4,-1,3)$. As such, to obtain more $4$-negative ETFs via Theorem \[thm.new ETF\], we need to apply it to initial ETFs that lie outside of this family. By Theorem \[thm.known negative ETFs\], the existing literature gives one such set of parameters, namely ETFs of type $(4,-1,7)$, which have $(D,N)=(35,120)$. Here, $M=20$, and a $4$-GDD of type $20^U$ can only exist if $U$ satisfies : $$U\geq 4,
\quad
\tfrac{U-1}{3}\in{\mathbb{Z}},
\quad
\tfrac{U(U-1)}{3}=\tfrac{4U(U-1)}{4(3)}\in{\mathbb{Z}}.$$ Moreover, these necessary conditions are known to be sufficient [@Ge07], meaning Theorem \[thm.new ETF\] can be applied whenever $U$ also satisfies , namely . Thus, for any $U\equiv 1\bmod 9$, Theorem \[thm.new ETF\] yields an ETF of type $(4,-1,\frac13(20U+1))$. In particular, an ETF of type $(4,-1,S)$ exists for any $S\equiv 7\bmod 60$. ETFs of type $(4,-1,S)$ with $S\equiv 67\bmod 120$ thus arise from both constructions; in the statement of Theorem \[thm.new neg ETS with K=4,5\], we elect to not remove the overlapping values of $S$ from either family so as to not emphasize one family over the other, and possibly make it easier for future researchers to identify potential patterns.
In a similar manner, as summarized in Theorem \[thm.known negative ETFs\], the existing literature provides ETFs of type $(5,-1,S)$ for exactly three values of $S$, namely ${{\operatorname{ETF}}}(12,45)$, ${{\operatorname{ETF}}}(19,76)$ and ${{\operatorname{ETF}}}(63,280)$ which have $S=4,5,9$ and so $M=15,19,35$, respectively. For these particular values of $M$, the corresponding necessary conditions on the existence of $5$-GDDs of type $M^U$ are known to be sufficient [@Ge07], meaning we only need $U$ to satisfy and for the corresponding value of $S$, namely to satisfy $U\geq 5$, $\frac14(U-1)\in{\mathbb{Z}}$ and that $$\begin{aligned}
\tfrac{5U(U-1)}{5(4)}\in{\mathbb{Z}},
\quad
\tfrac{3(U-1)}{3(4)}\in{\mathbb{Z}},&\text{ when }S=4,\\
\tfrac{6U(U-1)}{5(4)}\in{\mathbb{Z}},
\quad
\tfrac{3(U-1)}{4(4)}\in{\mathbb{Z}},&\text{ when }S=5,\\
\tfrac{10U(U-1)}{5(4)}\in{\mathbb{Z}},
\quad
\tfrac{3(U-1)}{8(4)}\in{\mathbb{Z}},&\text{ when }S=9.\end{aligned}$$ An ETF of type $(5,-1,S)$ thus exists when $S=\frac14(15U+1)$ with $U\equiv 1\bmod 4$, or $S=\frac14(19U+1)$ with $U\equiv 1,65\bmod 80$, or $S=\frac14(35U+1)$ with $U\equiv 1\bmod 32$. That is, an ETF of type $(5,-1,S)$ exists when $S\equiv 4\bmod 15$, or $S\equiv 5,309\bmod 380$, or $S\equiv 9\bmod 280$.
In certain special cases, these techniques yield real ETFs:
The ETF constructed in Theorem \[thm.new ETF\] is clearly real when the initial ${{\operatorname{ETF}}}(D,N)$ ${\{{{\boldsymbol{\varphi}}_n}\}}_{n=1}^{N}$ and the Hadamard matrices of size $S+L$ and $W+1$ are real.
In particular, a real ${{\operatorname{ETF}}}(63,280)$ exists [@FickusM16], and such ETFs are type $(5,-1,9)$. Since a real Hadamard matrix of size $S+L=8$ exists, letting $M=S(K-1)+L=35$, Theorem \[thm.new ETF\] yields a real ETF of type $(5,-1,\frac14(35U+1))$ whenever there exists a $5$-GDD of type $35^U$ where $U$ satisfies and there exists a real Hadamard matrix of size $$H
=W+1
=\tfrac{R}{S+L}+1
=\tfrac{M(U-1)}{(S+L)(K-1)}+1
=\tfrac{35(U-1)}{8(4)}+1
=\tfrac{35U-3}{32}.$$ Here, we recall from the proof of Theorem \[thm.new neg ETS with K=4,5\] that such GDDs exist whenever $U\equiv1\bmod 32$, namely whenever $H\equiv1\bmod 35$. Altogether, since $\frac14(35U+1)=\frac14[35(\frac{32H+3}{35})+1]=8H+1$, Theorem \[thm.new ETF\] yields a real ETF of type $(5,-1,8H+1)$ whenever there exists a real Hadamard matrix of size $H$ with $H\equiv1\bmod 35$. An infinite number of such Hadamard matrices exist: since $17$ is relatively prime to $140$, Dirichlet’s theorem implies an infinite number of primes $Q\equiv 17\bmod 140$ exist, and each has the property that $Q\equiv 1\bmod 4$, meaning that Paley’s construction yields a real Hadamard matrix of size $2(Q+1)\equiv 36\bmod 280$, in particular of size $2(Q+1)\equiv 1\bmod 35$.
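To make this explicit, the following sketch (illustrative only; the trial-division primality test is ours) lists the first few primes $Q\equiv 17\bmod 140$ together with the Paley-Hadamard sizes $H=2(Q+1)$ and the resulting real ETF types:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes Q = 17 (mod 140) satisfy Q = 1 (mod 4), so Paley's construction gives a real
# Hadamard matrix of size H = 2(Q+1) = 1 (mod 35), hence a real ETF of type (5,-1,8H+1).
for Q in range(17, 3000, 140):
    if is_prime(Q):
        H = 2 * (Q + 1)
        assert H % 35 == 1
        print(f"Q = {Q:4d}: Hadamard size H = {H}, real ETF of type (5, -1, {8 * H + 1})")
```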
Similarly, a real ${{\operatorname{ETF}}}(7,28)$ exists [@FickusM16], is type $(6,-1,3)$, and there exists a real Hadamard matrix of size $S+L=2$. Letting $M=14$, Theorem \[thm.new ETF\] thus yields a real ETF of type $(6,-1,\frac15(14U+1))$ whenever there exists a $6$-GDD of type $14^U$ where $U$ satisfies and there exists a real Hadamard matrix of size $H=W+1=\frac15(7U-2)$. Here, the necessary conditions and on the existence of such a GDD reduce to having $U\equiv 1,6\bmod 15$, namely to having $H\equiv 1,8\bmod 21$. Since $K=6$, these necessary conditions are not known to be sufficient. Nevertheless, they are asymptotically sufficient: there exists $U_0$ such that for all $U\geq U_0$ with $U\equiv 1,6\bmod 15$, there exists a $6$-GDD of type $14^U$ where $U$ satisfies . As such, there exists $H_0$ such that for all $H\geq H_0$ with $H\equiv 1,8\bmod 21$, if there exists a real Hadamard matrix of size $H$, then there exists a real ETF of type $(6,-1,\tfrac15(14U+1))=(6,-1,2H+1)$. As above, an infinite number of such Hadamard matrices exist: since $\gcd(73,84)=1$, there are an infinite number of primes $Q\equiv 73\bmod 84$, and each has the property that $Q\equiv 1\bmod 4$, meaning Paley’s construction yields a real Hadamard matrix of size $2(Q+1)\equiv 148\bmod 168$, in particular of size $2(Q+1)\equiv 1\bmod 21$.
Applying these same techniques to real ${{\operatorname{ETF}}}(22,176)$ and ${{\operatorname{ETF}}}(23,276)$ yields the infinite families stated in (c) and (d) of the result.
We have now seen that the construction of Theorem \[thm.new ETF\] recovers all Steiner ETFs as a degenerate case, recovers the parameters of “half" of all Tremain ETFs including those of all harmonic ETFs arising from Spence difference sets, and also recovers the parameters of harmonic ETFs arising from the Davis-Jedwab difference sets. This is analogous to how the approach of [@DavisJ97] unifies McFarland, Spence and Davis-Jedwab difference sets with those with parameters . From this perspective, the value of the generalization of [@DavisJ97] given in Theorem \[thm.new ETF\] is that it permits weaker conclusions to be drawn from weaker assumptions: while [@DavisJ97] forms new difference sets (i.e., new harmonic ETFs) by combining given difference sets with building sets formed from a collection of hyperplanes, Theorem \[thm.new ETF\] forms new ETFs by combining given ETFs with GDDs.
In fact, a careful read of [@DavisJ97] indicates that the building sets used there to produce difference sets with parameters are related to $4$-GDDs of type obtained by recursively using ${{\operatorname{TD}}}(4,2^{2j+1})$ to fill the holes of ${{\operatorname{TD}}}(4,2^{2j+3})$ for every $j=1,\dotsc,J-1$ via Lemma \[lem.filling holes\]. Alternatively, such GDDs can be constructed by using Lemma \[lem.Wilson\] to combine a ${{\operatorname{TD}}}(4,8)$ with a ${{\operatorname{BIBD}}}(4^{J-1},4,1)$ arising from an affine geometry. We now generalize these approaches, using Lemmas \[lem.Wilson\] and \[lem.filling holes\] to produce the GDDs needed to apply Theorem \[thm.new ETF\] to several known $K$-negative ETFs with $K>5$:
\[thm.new neg ETS with K>5\] If an ETF of type $(K,-1,S)$ exists where $\frac{K-2}{S-1}\in{\mathbb{Z}}$, and a ${{\operatorname{TD}}}(K,M)$ exists where $M=S(K-1)-1$, then an ETF of type $(K,-1,\tfrac{MU+1}{K-1})$ exists when either:
1. $U=1$ or $U=K$;
2. a ${{\operatorname{BIBD}}}(U,K,1)$ exists;
3. $U=K^J$ for some $J\geq2$, provided a ${{\operatorname{TD}}}(K,MK^j)$ exists for all $j=1,\dotsc,J-1$.
As a consequence, an ETF of type $(K,-1,S)$ exists when either:
1. $K=6$, $S=\frac15(9U+1)$ where either $U=6^J$ for some $J\geq 0$ or a ${{\operatorname{BIBD}}}(U,6,1)$ exists;
2. $K=6$, $S=\frac15(24U+1)$ where either $U=6^J$ for some $J\geq 0$ or a ${{\operatorname{BIBD}}}(U,6,1)$ exists;
3. $K=7$, $S=\tfrac1{6}(35U+1)$ where either $U=7^J$ for some $J\geq 0$ or a ${{\operatorname{BIBD}}}(U,7,1)$ exists;
4. $K=10$, $S=\tfrac1{9}(80U+1)$ where either $U=10^J$ for some $J\geq 0$ or a ${{\operatorname{BIBD}}}(U,10,1)$ exists;
5. $K=12$, $S=\frac1{11}(32U+1)$ when either $U=12^J$ for some $J\geq 0$ or a ${{\operatorname{BIBD}}}(U,12,1)$ exists.
Since and $M=S(K-1)-1$, any $K$-GDD of type $M^U$ can be combined with the given ETF of type $(K,-1,S)$ via Theorem \[thm.new ETF\] in order to construct an ETF of type . (Since $\frac{M+1}{K-1}=S$, such an ETF also exists when $U=1$, namely the given initial ETF.) For instance, the given ${{\operatorname{TD}}}(K,M)$ is a $K$-GDD of type $M^K$, and so such an ETF exists when $U=K$. Other examples of such GDDs can be constructed by combining the given ${{\operatorname{TD}}}(K,M)$ with any ${{\operatorname{BIBD}}}(U,K,1)$—a $K$-GDD of type $1^U$—via Lemma \[lem.Wilson\]. In particular, when $K$ is a prime power, we can construct these BIBDs from affine geometries of order $K$ to produce examples of such GDDs with $U=K^J$ for any $J\geq 2$. GDDs with these parameters also sometimes exist even when $K$ is not a prime power: if a ${{\operatorname{TD}}}(K,MK^j)$ exists for all $j=1,\dotsc,J-1$, then recursively using ${{\operatorname{TD}}}(K,MK^{j-1})$ to fill the holes of ${{\operatorname{TD}}}(K,MK^j)$ for all $j=1,\dotsc,J-1$ via Lemma \[lem.Wilson\] gives a $K$-GDD of type . We now apply these ideas to some known $K$-negative ETFs with $K>5$, organized according to the families of Theorem \[thm.known negative ETFs\]. In particular, ETFs of type $(Q+2,-1,Q+1)$ exist whenever $Q$ is a prime power, and $S-1=Q$ divides $K-2=Q$. For such ETFs, $M=Q(Q+2)$ and a ${{\operatorname{TD}}}(Q+2,Q(Q+2))$ equates to $Q$ MOLS of size $Q(Q+2)$, which is known to occur when $Q=2,3,4,5,8$ [@AbelCD07]. (When $Q$ and $Q+2$ are both prime powers, the standard method [@MacNeish22] only produces $Q-1$ MOLS of size $Q(Q+2)$.) Taking $Q=2,3$ recovers a subset of the ETFs produced in Theorem \[thm.new neg ETS with K=4,5\], and so we focus on $Q=4,5,8$. In particular, for these values of $Q$, the above methods yield an ETF of type when either $U=1$, $U=Q+2$, or a ${{\operatorname{BIBD}}}(U,Q+2,1)$ exists. Moreover, even when $Q+2$ is not a prime power, such an ETF with $U=(Q+2)^J$ exists, provided a ${{\operatorname{TD}}}(Q+2,Q(Q+2)^{j+1})$ exists for all $j=1,\dotsc,J-1$. When $Q=4,8$, this occurs for any $J\geq 2$, since for any $j\geq 1$, the number of MOLS of size $(4)6^{j+1}=2^{j+3}3^{j+1}$ is at least $\min{\{{2^{j+3},3^{j+1}}\}}-1\geq 7$, while the number of MOLS of size $(8)10^{j+1}=2^{j+4}5^{j+1}$ is at least $\min{\{{2^{j+4},5^{j+1}}\}}-1\geq 23$.
Other new explicit infinite families of negative ETFs arise from SIC-POVMs, which by Theorem \[thm.known negative ETFs\] are ETFs of type $(K,-1,S)$ where $K=S(S+1)$. Such ETFs always have the property that $S-1$ divides $K-2=(S-1)(S+2)$, meaning we can apply the above ideas whenever there exists a ${{\operatorname{TD}}}(K,M)$ where $M=(S-1)(S+1)^2$. Such TDs exist when $S=2,3$, being ${{\operatorname{TD}}}(6,9)$ and ${{\operatorname{TD}}}(12,32)$, respectively. When $S=2$ in particular, the above methods yield an ETF of type $(6,-1,\frac15(9U+1))$ when either $U=1$, $U=6$, a ${{\operatorname{BIBD}}}(U,6,1)$ exists, or $U=6^J$ for some $J\geq 2$. This final family arises from the fact that a ${{\operatorname{TD}}}(6,(9)6^j)$ exists for all $j\geq 1$: there are at least $5$ MOLS of size $54=(9)6$, at least $8$ MOLS of size $324=9(6^2)$ (since there are $8$ MOLS of size $9$, and at least $8$ MOLS of size $36$) [@AbelCD07], while for any $j\geq 3$, the number of MOLS of size $(9)6^j=2^j 3^{j+2}$ is at least $2^j-1\geq 7$. Similarly, taking $S=3$ yields an ETF of type $(12,-1,\frac1{11}(32U+1))$ when either $U=1$, $U=12$, a ${{\operatorname{BIBD}}}(U,12,1)$ exists, or $U=12^J$ for some $J\geq 2$: the number of MOLS of size $32(12)=384$ and $32(12)^2=4608$ is at least $15$ [@AbelCD07], while for $j\geq 3$, the number of MOLS of size $32(12)^j=2^{2j+5}3^j$ is at least $\min{\{{2^{2j+5},3^j}\}}-1\geq 26$.
To be clear, we view the ETFs produced in Theorem \[thm.new neg ETS with K>5\] as a “proof of concept," and believe that GDD experts will be able to find many more examples of new ETFs using Theorems \[thm.new ETF\] and \[thm.known negative ETFs\]. For this reason, we have omitted technical cases where an ETF of type $(K,-1,S)$ and a ${{\operatorname{TD}}}(K,M)$ exist where $M=S(K-1)-1$, but $S-1$ does not divide $K-2$, such as when $S=K=9$, and when $S=2K-1$ where $K=6,7,8,9,17,65537$. In such cases, one can still combine the TD with a ${{\operatorname{BIBD}}}(U,K,1)$ via Lemma \[lem.Wilson\] to produce a $K$-GDD of type $M^U$, but is not automatically satisfied.
We also point out that though Theorem \[thm.new ETF\] is a generalization of [@DavisJ97], the new ETFs we have found here are disjoint from those produced by another known generalization of this same work. In particular, the parameters of Davis-Jedwab difference sets can be regarded as the $Q=2$ case of the more general family: $$\label{eq.Davis Jedwab Chen parameters}
D=Q^{2J-1}{({\tfrac{2Q^{2J}+Q-1}{Q+1}})},
\quad
N=4Q^{2J}{({\tfrac{Q^{2J}-1}{Q^2-1}})},$$ where $J\geq 1$ and $Q\geq 2$ are integers. In particular, in [@Chen97], Chen generalizes the theory of [@DavisJ97] in a way that produces difference sets with parameters for any $J\geq 1$ and any $Q$ that is either a power of $3$ or any even power of an odd prime. For this reason, difference sets with parameters of the form are said to be *Davis-Jedwab-Chen difference sets* [@JungnickelPS07]. The inverse Welch bounds for the corresponding harmonic ETFs and their Naimark complements are always integers: $$S
={\bigl[{\tfrac{D(N-1)}{N-D}}\bigr]}^{\frac12}
=\tfrac{2Q^{2J}+Q-1}{Q+1},
\quad
{\bigl[{\tfrac{(N-D)(N-1)}{D}}\bigr]}^{\frac12}
=\tfrac{2Q^{2J}-Q-1}{Q-1}.$$ However, such ETFs are seldom positive or negative. Indeed, a direct computation reveals $$\tfrac{NS}{D(S+1)}
=2+\tfrac{2Q(Q^{2J-2}-1)}{(Q-1)(Q^{2J-1}+1)},
\quad
\tfrac{NS}{D(S-1)}
=2+\tfrac{2}{Q-1}.$$ Thus, by Theorem \[thm.parameter types\], the only such ETFs that are positive or negative are of type $(2,1,2Q-1)$, $(3,-1,\tfrac12(9^J+1))$ or $(4,-1,\tfrac13(2^{2J+1}+1))$, corresponding to the special cases of where $J=1$, $Q=3$ and $Q=2$, respectively. That is, the only overlap between the $K$-negative ETFs constructed in Theorems \[thm.new neg ETS with K=4,5\] and \[thm.new neg ETS with K>5\] and the ETFs constructed in [@Chen97] are those with parameters .
Conclusions
===========
Theorems \[thm.new neg ETS with K=4,5\] and \[thm.new neg ETS with K>5\] make some incremental progress towards resolving the ETF existence problem. By comparing the existing ETF literature, as summarized in [@FickusM16] for example, against Theorems \[thm.known positive ETFs\] and \[thm.known negative ETFs\], we find that every known ETF is either an orthonormal basis, or has the property that either it or its Naimark complement is a regular simplex, has $N=2D$ or $N=2D\pm1$, is a SIC-POVM, arises from a difference set, or is either positive or negative. In particular, every known ${{\operatorname{ETF}}}(D,N)$ with $N>2D>2$ is either a SIC-POVM ($N=D^2$), or is a harmonic ETF, or has $N=2D+1$ where $D$ is odd, or is either positive or negative. This fact, along with the known nonexistence of the ${{\operatorname{ETF}}}(3,8)$ [@Szollosi14], and the available numerical evidence [@TroppDHS05], leads us to make the following conjecture:
\[con.complex\] If $N>2D>2$, an ${{\operatorname{ETF}}}(D,N)$ exists if and only if $N=D^2$ or $\frac{D(D-1)}{N-1}\in{\mathbb{Z}}$ or $(D,N)$ is positive or negative (Definition \[def.ETF types\]).
This is a substantial strengthening of an earlier conjecture made by one of the authors, namely that if $N>D>1$ and an ${{\operatorname{ETF}}}(D,N)$ exists, then one of the three numbers $D$, $N-D$ and $N-1$ necessarily divides the product of the other two. To help resolve Conjecture \[con.complex\], it would in particular be good to know whether an ${{\operatorname{ETF}}}(9,25)$ exists: this is the smallest value of $D$ for which there exists an $N$ such that $N-1$ divides $D(D-1)$ but no known ETF exists. For context, we note that there are numerous pairs $(D,N)$ for which $N-1$ divides $D(D-1)$ and an ${{\operatorname{ETF}}}(D,N)$ is known to exist, despite the fact that a difference set of that size does not exist [@Gordon18], including $$\begin{gathered}
(35,120), (40,105), (45,100), (63,280), (70,231), (77,210), (91,196), (99,540), (130,560),\\
(143,924), (176,561), (187,528), (208,1105), (231,484), (247,780), (260,741).\end{gathered}$$
It would also be good to know whether an ${{\operatorname{ETF}}}(11,33)$ exists, since such an ETF would be type $(4,-1,4)$: as detailed in the previous section, an ETF of type $(K,L,S)$ can only exist when $K$ divides $S(S-L)$, and moreover this necessary condition for existence is known to be sufficient when $L=1$ and $K=1,2,3,4,5$, as well as when $L=-1$ and $K=2,3$.
The evidence also supports an analogous conjecture in the real case. In fact, comparing the relevant literature [@Brouwer17; @FickusM16] against Theorems \[thm.known positive ETFs\] and \[thm.known negative ETFs\], we find that every known real ${{\operatorname{ETF}}}(D,N)$ with $N>2D>2$ is either positive or negative. Moreover, when $1<D<N-1$, $N\neq 2D$ and a real ${{\operatorname{ETF}}}(D,N)$ exists, then it and its Naimark complements’ Welch bounds are necessarily the reciprocals of odd integers [@SustikTDH07]. In particular, any such ETF automatically satisfies one of the two integrality conditions given in Theorem \[thm.parameter types\] that characterize positive and negative ETFs. Conversely, since $S(K-1)+KL$ is odd whenever $S$ is odd, Theorem \[thm.parameter types\] implies that any ETF of type $(K,L,S)$ satisfies the necessary conditions of [@SustikTDH07] when $S$ is odd. That said, real ${{\operatorname{ETF}}}(19,76)$, ${{\operatorname{ETF}}}(20,96)$ and ${{\operatorname{ETF}}}(47,1128)$ do not exist [@AzarijaM15; @AzarijaM16; @FickusM16], despite the fact that $(19,76)$ is type $(5,-1,5)$, $(20,96)$ is both type $(4,1,5)$ and type $(6,-1,5)$, and $(47,1128)$ is both type $(21,1,7)$ and type $(28,-1,7)$. These facts suggest the following analog of Conjecture \[con.complex\]:
If $N>2D>2$ and a real ${{\operatorname{ETF}}}(D,N)$ exists, then $(D,N)$ is positive or negative.
Acknowledgments {#acknowledgments .unnumbered}
===============
The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States Government. This work was partially supported by the Summer Faculty Fellowship Program of the United States Air Force Research Laboratory.
[WW]{}
R. J. R. Abel, C. J. Colbourn, J. H. Dinitz, Mutually Orthogonal Latin Squares (MOLS), in: C.J. Colbourn, J.H. Dinitz (Eds.), Handbook of Combinatorial Designs, Second Edition (2007) 160–193.
R. J. R. Abel, M. Greig, BIBDs with small block size, in: C.J. Colbourn, J.H. Dinitz (Eds.), Handbook of Combinatorial Designs, Second Edition (2007) 72–79.
J. Azarija, T. Marc, There is no (75,32,10,16) strongly regular graph, arXiv:1509.05933.
J. Azarija, T. Marc, There is no (95,40,12,20) strongly regular graph, arXiv:1603.02032.
W. U. Bajwa, R. Calderbank, D. G. Mixon, Two are better than one: fundamental parameters of frame coherence, Appl. Comput. Harmon. Anal. 33 (2012) 58–78.
A. S. Bandeira, M. Fickus, D. G. Mixon, P. Wong, The road to deterministic matrices with the Restricted Isometry Property, J. Fourier Anal. Appl. 19 (2013) 1123–1149.
A. Barg, A. Glazyrin, K. A. Okoudjou, W.-H. Yu, Finite two-distance tight frames, Linear Algebra Appl. 475 (2015) 163–175.
B. G. Bodmann, H. J. Elwood, Complex equiangular Parseval frames and Seidel matrices containing $p$th roots of unity, Proc. Amer. Math. Soc. 138 (2010) 4387–4404.
B. G. Bodmann, V. I. Paulsen, M. Tomforde, Equiangular tight frames from complex Seidel matrices containing cube roots of unity, Linear Algebra Appl. 430 (2009) 396–417.
C. Bracken, G. McGuire, H. Ward, New quasi-symmetric designs constructed using mutually orthogonal [L]{}atin squares and [H]{}adamard matrices, Des. Codes Cryptogr. 41 (2006) 195–198.
A. E. Brouwer, Strongly regular graphs, in: C. J. Colbourn, J. H. Dinitz (Eds.), Handbook of Combinatorial Designs, Second Edition (2007) 852–868.
A. E. Brouwer, Parameters of Strongly Regular Graphs, http://www.win.tue.nl/$\sim$aeb/graphs/srg/
K. I. Chang, An existence theory for group divisible designs, Ph.D. Thesis, The Ohio State University, 1976.
Y. Q. Chen, On the existence of abelian Hadamard difference sets and a new family of difference sets, Finite Fields Appl. 3 (1997) 234–256.
D. Corneil, R. Mathon, eds., Geometry and combinatorics: Selected works of J. J. Seidel, Academic Press, 1991.
G. Coutinho, C. Godsil, H. Shirazi, H. Zhan, Equiangular lines and covers of the complete graph, Linear Algebra Appl. 488 (2016) 264–283.
J. A. Davis, J. Jedwab, A unifying construction for difference sets, J. Combin. Theory Ser. A 80 (1997) 13–78.
C. Ding, T. Feng, A generic construction of complex codebooks meeting the Welch bound, IEEE Trans. Inform. Theory 53 (2007) 4245–4250.
M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, Tremain equiangular tight frames, J. Combin. Theory Ser. A 153 (2018) 54–66.
M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, Hadamard equiangular tight frames, submitted, arXiv:1703.05353.
M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, C. E. Watson, Equiangular tight frames with centroidal symmetry, to appear in Appl. Comput. Harmon. Anal.
M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, C. E. Watson, Polyphase equiangular tight frames and abelian generalized quadrangles, to appear in Appl. Comput. Harmon. Anal.
M. Fickus, D. G. Mixon, Tables of the existence of equiangular tight frames, arXiv:1504.00253 (2016).
M. Fickus, D. G. Mixon, J. Jasper, Equiangular tight frames from hyperovals, IEEE Trans. Inform. Theory 62 (2016) 5225–5236.
M. Fickus, D. G. Mixon, J. C. Tremain, Steiner equiangular tight frames, Linear Algebra Appl. 436 (2012) 1014–1027.
C. A. Fuchs, M. C. Hoang, B. C. Stacey, The SIC question: history and state of play, Axioms 6 (2017) 21.
G. Ge, Group divisible designs, in: C. J. Colbourn, J. H. Dinitz (Eds.), Handbook of Combinatorial Designs, Second Edition (2007) 255–260.
C. D. Godsil, Krein covers of complete graphs, Australas. J. Combin. 6 (1992) 245–255.
J. M. Goethals, J. J. Seidel, Strongly regular graphs derived from combinatorial designs, Can. J. Math. 22 (1970) 597–614.
D. Gordon, La Jolla Covering Repository, https://www.ccrwest.org/diffsets.html.
M. Grassl, A. J. Scott, Fibonacci-Lucas SIC-POVMs, J. Math. Phys. 58 (2017) 122201.
R. B. Holmes, V. I. Paulsen, Optimal frames for erasures, Linear Algebra Appl. 377 (2004) 31–51.
J. W. Iverson, J. Jasper, D. G. Mixon, Optimal line packings from nonabelian groups, submitted, arXiv:1609.09836.
J. Jasper, D. G. Mixon, M. Fickus, Kirkman equiangular tight frames and codes, IEEE Trans. Inform. Theory 60 (2014) 170–181.
D. Jungnickel, A. Pott, K. W. Smith, Difference sets, in: C. J. Colbourn, J. H. Dinitz (Eds.), Handbook of Combinatorial Designs, Second Edition (2007) 419–435.
J. H. van Lint, J. J. Seidel, Equilateral point sets in elliptic geometry, Indag. Math. 28 (1966) 335–348.
E. R. Lamken, R. M. Wilson, Decompositions of edge-colored complete graphs, J. Combin. Theory Ser. A 89 (2000) 149–200.
P. W. H. Lemmens, J. J. Seidel, Equiangular lines, J. Algebra 24 (1973) 494–512.
R. Mathon, A. Rosa, $2-(v,k,\lambda)$ designs of small order, in: C. J. Colbourn, J. H. Dinitz (Eds.), Handbook of Combinatorial Designs, Second Edition (2007) 25–58.
H. F. MacNeish, Euler Squares, Ann. of Math. 23 (1922) 221–227.
R. L. McFarland, A family of difference sets in non-cyclic groups, J. Combin. Theory Ser. A 15 (1973) 1–10.
G. McGuire, Quasi-symmetric designs and codes meeting the Grey-Rankin bound, J. Combin. Theory Ser. A 78 (1997) 280–291.
H. Mohácsy, The asymptotic existence of group divisible designs of large order with index one, J. Combin. Theory Ser. A 118 (2011) 1915–1924.
J. M. Renes, Equiangular tight frames from Paley tournaments, Linear Algebra Appl. 426 (2007) 497–501.
J. M. Renes, R. Blume-Kohout, A. J. Scott, C. M. Caves, Symmetric informationally complete quantum measurements, J. Math. Phys. 45 (2004) 2171–2180.
J. J. Seidel, A survey of two-graphs, Coll. Int. Teorie Combin., Atti dei Convegni Lincei 17, Roma (1976) 481–511.
E. Spence, A family of difference sets, J. Combin. Theory Ser. A 22 (1977) 103–106.
T. Strohmer, A note on equiangular tight frames, Linear Algebra Appl. 429 (2008) 326–330.
T. Strohmer, R. W. Heath, Grassmannian frames with applications to coding and communication, Appl. Comput. Harmon. Anal. 14 (2003) 257–275.
M. A. Sustik, J. A. Tropp, I. S. Dhillon, R. W. Heath, On the existence of equiangular tight frames, Linear Algebra Appl. 426 (2007) 619–635.
F. Szöllősi, All complex equiangular tight frames in dimension 3, arXiv:1402.6429.
J. A. Tropp, Complex equiangular tight frames, Proc. SPIE 5914 (2005) 591401/1–11.
J. A. Tropp, I. S. Dhillon, R. W. Heath, Jr., T. Strohmer, Designing structured tight frames via an alternating projection method, IEEE Trans. Inform. Theory 51 (2005) 188–209.
R. J. Turyn, Character sums and difference sets, Pacific J. Math. 15 (1965) 319–346.
S. Waldron, On the construction of equiangular frames from graphs, Linear Algebra Appl. 431 (2009) 2228–2242.
L. R. Welch, Lower bounds on the maximum cross correlation of signals, IEEE Trans. Inform. Theory 20 (1974) 397–399.
R. M. Wilson, An existence theory for pairwise balanced designs I. Composition theorems and morphisms, J. Combin. Theory Ser. A 13 (1972) 220–245.
P. Xia, S. Zhou, G. B. Giannakis, Achieving the Welch bound with difference sets, IEEE Trans. Inform. Theory 51 (2005) 1900–1907.
G. Zauner, Quantum designs: Foundations of a noncommutative design theory, Ph.D. Thesis, University of Vienna, 1999.
|
---
abstract: 'A combined effective model reproducing the equation of state of hadronic matter as obtained in recent lattice QCD simulations is presented. The model reproduces basic physical characteristics encountered in dense hadronic matter in the quark-gluon plasma (QGP) phase and the lower temperature hadron resonance gas phase. The hadronic phase is described by means of an extended Mott-Hagedorn resonance gas while the QGP phase is described by the extended PNJL model. The dissociation of hadrons is described by including a state-dependent hadron resonance width.'
address: |
$^1$Instytut Fizyki Teoretycznej, Uniwersytet Wroc[ł]{}awski, Poland,\
$^2$Bogoliubov Laboratory for Theoretical Physics, JINR Dubna, Russia,\
$^3$DESY Zeuthen, Germany
author:
- 'L. Turko$^1$, D. Blaschke$^{1,2}$, D. Prorok$^1$ and J. Berdermann$^3$'
title: 'An effective model of QCD thermodynamics[^1]'
---
[PACS numbers: 12.38.Gc, 12.38.Mh, 12.40.Ee, 24.85.+p]{}
Introduction
============
Simulations of lattice QCD (LQCD) are in practice the only reliable approach to QCD thermodynamics which covers the broad region of strongly interacting matter properties from the hadron gas at low temperatures to a deconfined quark gluon plasma phase at high temperatures. Recently, finite temperature LQCD simulations have overcome the difficulties of reaching the low physical light quark masses and approaching the continuum limit which makes this theoretical laboratory now a benchmark for modeling QCD under extreme conditions.
We are going to present a combined effective model reproducing the equation of state of hadronic matter as obtained in recent lattice QCD simulations [@Borsanyi:2010cj; @Bazavov:2009zn]. The model should reproduce basic physical characteristics of processes encountered in dense hadronic matter, from the hot QCD phase through the critical temperature region down to the lower temperature hadron resonance gas phase. In-medium properties of hadrons differ from those in the vacuum. The very notion of the mass shell should be modified, as was postulated quite long ago [@KRT_93]. The interaction becomes effectively nonlocal due to the Mott effect, and hadrons gradually dissolve into quarks and gluons in the high temperature phase. Then, with increasing temperature, quark masses become less and less important, although the massless Stefan-Boltzmann limit would be reached only at extremely high temperatures.
It has been shown that the equation of state derived from earlier lattice QCD calculations [@Karsch:2001cy] can be reproduced by a simple hadron resonance gas model. The rapid rise of the number of degrees of freedom in lattice QCD data around the critical temperature $T_c \sim 150 - 170$ MeV can be explained quantitatively by a resonance gas below the critical temperature $T_c$ [@Karsch:2003vd; @Ratti:2010kj].
For higher temperatures the model is modified by introducing finite widths of heavy hadrons [@Blaschke:2003ut; @Blaschke:2005za] with a heuristic ansatz for the spectral function which reflects medium modifications of hadrons. This fits the lattice data nicely, also above $T_c$, as shown in Fig. \[Fig.1\].
This Mott-Hagedorn type model [@Turko:2011gw] has been constructed to fit the lattice data also above $T_c$, where it does so because it leaves light hadrons below a mass threshold of $m_0=1$ GeV unaffected. The description of the lattice data at high temperatures is accidental, because the effective number of those degrees of freedom approximately coincides with that of quarks and gluons. The presence of the QGP in this region is formally mimicked by a suitable choice of the mass cut-off parameter, such that the stable light hadrons below the cut-off provide the same number of degrees of freedom as the partonic components of the QGP.
The model considered here is gradually refined to take into account the physical processes present in the full QCD treatment. A uniform treatment of all hadronic resonances, without an artificial island of stability, is achieved by a state-dependent hadron resonance width $\Gamma_i(T)$ given by the inverse collision time scale in a resonance gas [@Blaschke:2011ry].
In order to remove this unphysical aspect of the otherwise appealing model one has to extend the spectral broadening also to the light hadrons and thus describe their disappearance due to the Mott effect while simultaneously the quark and gluon degrees of freedom appear at high temperatures due to chiral symmetry restoration and deconfinement.
In the present contribution we will report results obtained by introducing a unified treatment of all hadronic resonances with a state-dependent width $\Gamma_i(T)$ in accordance with the inverse hadronic collision time scale from a recent model for chemical freeze-out in a resonance gas [@Blaschke:2011ry]. The appearance of quark and gluon degrees of freedom is introduced by the Polyakov-loop improved Nambu–Jona-Lasinio (PNJL) model [@Fukushima:2003fw; @Ratti:2005jh]. The model is further refined by adding perturbative corrections to $\mathcal{O}(\alpha_s)$ for the high-momentum region above the three-momentum cutoff inherent in the PNJL model. One eventually obtains good agreement with lattice QCD data, with all important physical characteristics taken into account.
Extended Mott-Hagedorn resonance gas
====================================
No quarks and gluons; hadronic spectral function with state-independent ansatz
------------------------------------------------------------------------------
We introduce the width $\Gamma$ of a resonance in the statistical model through the spectral function $$%
\label{one}
%
A(M,m)=N_M \frac{\Gamma \cdot m}{(M^2-m^2)^2+\Gamma^2 \cdot m^2}~,$$ a Breit-Wigner distribution of virtual masses with a maximum at $M =
m$ and the normalization factor $$\begin{aligned}
%
\label{two}
%
N_M &=& \left[\, \int\limits_{m_0^2}^\infty {d(M^2)}
%
\frac{ \Gamma \cdot m }{ ( M^2 - m^2)^2 + \Gamma^2 \cdot m^2 } \right]^{-1}
\nonumber\\
%
&=&\frac{1}{ \frac{\pi}{2} + \arctan \left( \frac{m^2 - m^2_0 }{ \Gamma
\cdot m} \right) }\,.
%\end{aligned}$$ The model ansatz for the resonance width $\Gamma$ is given by [@Blaschke:2003ut]
$$%
\label{three}
%
\Gamma (T) = C_{\Gamma}~ \left( \frac{ m}{T_H} \right)^{N_m}
%
\left( \frac{ T}{T_H} \right)^{N_T} \exp \left( \frac{ m}{T_H }
\right)~,
%$$
where $C_{\Gamma} = 10^{-4}$ MeV, $N_m = 2.5$, $N_T = 6.5$ and the Hagedorn temperature $T_H = 165$ MeV.
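For orientation, a minimal Python sketch of the Breit-Wigner spectral function and the width ansatz above, using the parameter values quoted in the text (masses and temperatures in MeV; the function names are ours):

```python
import numpy as np

T_H = 165.0                      # Hagedorn temperature [MeV]
C_GAMMA, N_m, N_T = 1e-4, 2.5, 6.5
m0 = 1000.0                      # mass threshold [MeV]

def gamma(m, T):
    """State-independent width ansatz Gamma(T) above."""
    return C_GAMMA * (m / T_H) ** N_m * (T / T_H) ** N_T * np.exp(m / T_H)

def spectral(M, m, T):
    """Normalized Breit-Wigner spectral function A(M, m) with its normalization N_M."""
    G = gamma(m, T)
    N_M = 1.0 / (np.pi / 2 + np.arctan((m**2 - m0**2) / (G * m)))
    return N_M * G * m / ((M**2 - m**2) ** 2 + G**2 * m**2)

# width of a 1.2 GeV resonance at T = 170 MeV (value in MeV)
print(gamma(1200.0, 170.0))
```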
The energy density of this model with zero resonance proper volume for given temperature $T$ and chemical potentials: $\mu_B$ for baryon number and $\mu_{S}$ for strangeness, can be cast in the form $$\begin{aligned}
%
\label{four}
%
\varepsilon(T,\mu_B,\mu_S) &=&
\sum_{i:~ m_i < m_0} g_i ~\varepsilon_i (T,\mu_i;m_i)\nonumber \\
&& \hspace{-3cm}
+ \sum_{i:~ m_i \geq m_0} g_i ~\int\limits_{m_0^2}^\infty {d(M^2)}
~A(M,m_i)~\varepsilon_i (T,\mu_i;M),
%
%\label{one}\end{aligned}$$ where $m_0 = 1$ GeV and the energy density per degree of freedom with a mass $M$ is $$%
\label{five}
%
%%%\hspace*{2.0cm}
%
\varepsilon_i (T,\mu_i;M) = \int \frac{d^3 k}{ (2 \pi)^3 }
\frac{\sqrt{k^2+M^2}}{\exp \left(\frac{\sqrt{k^2 +M^2} - \mu_i}{T}
\right) + \delta_i } \, ,
%$$ with the degeneracy $g_i$ and the chemical potential $\mu_i = B_i
\cdot \mu_B + S_i \cdot \mu_S$ of hadron $i$. For mesons, $\delta_{i} = -1 ~$ and for baryons $~ \delta_{i} = 1$. According to Eq. (\[one\]) the energy density of hadrons consists of the contribution of light hadrons for $m_i < m_0$ and the contribution of heavier hadrons smeared with the spectral function for $m_i \geq
m_0$.
For simplicity, we assume $n_{S} = 0$ for the strangeness number density and $n_{B}= 0$ for the baryon number density. Then $\mu_{B}
= 0$ and $\mu_{S} = 0$ always, so the temperature is the only significant statistical parameter here. In such a case and for a fixed volume we have $$%
\label{six}
%
\varepsilon + P = T \cdot s = T \cdot \frac{\partial P}{\partial T
}~,$$ where $P = P(T)$ and $s = s(T)$ are the pressure and entropy density, respectively. This is the first order ordinary differential equation for the pressure and the general solution reads $$%
\label{seven}
%
P(T) = \frac{T}{T_0}\cdot P_0 + T \int\limits_{T_0}^T
{dT'}~\frac{\varepsilon(T')}{T'^2}~,$$ where $P_0 = P(T_0)$. To have a well-defined solution for the initial temperature $T_0 = 0$, one has to assume that $\lim_{T_0 \rightarrow
0}P(T_0)/T_0 = s_0 < \infty$. Then $$%
\label{eight}
%
P(T) = s_0 \cdot T + T \int\limits_{0}^T
{dT'}~\frac{\varepsilon(T')}{T'^2}~.$$ The entropy density then reads: $$%
\label{nine}
%
s(T) = \frac{\partial P}{\partial T } = s_0 + \int\limits_{0}^T
{dT'}~\frac{\varepsilon(T')}{T'^2} + \frac{\varepsilon(T)}{T}~,$$ where $s(0) = s_0$. We put $s_0 = 0$ as suggested by the Nernst postulate. The sound velocity squared is given by $$%
\label{ten}
%
c_s^2 = \frac{\partial P}{\partial \varepsilon }~.$$
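The relations above can be evaluated numerically from any tabulated energy density. The following sketch (illustrative only; it uses a massless ideal-gas toy input rather than the actual MHRG energy density) integrates the pressure relation with the trapezoidal rule and recovers $P=\varepsilon/3$ and $c_s^2=1/3$:

```python
import numpy as np

def thermodynamics(T, eps):
    """P(T), s(T), c_s^2(T) from tabulated eps(T) with s_0 = 0; the integral from
    0 to T[0] is neglected, which is harmless if T[0] is small."""
    integrand = eps / T**2
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))))
    P = T * integral                       # P(T) = T * int_0^T dT' eps(T')/T'^2
    s = integral + eps / T                 # entropy density with s_0 = 0
    cs2 = np.gradient(P, T) / np.gradient(eps, T)   # c_s^2 = dP/d(eps)
    return P, s, cs2

# toy input: ideal massless gas with g effective degrees of freedom
T = np.linspace(1.0, 300.0, 600)           # MeV
g = 32.0
eps = g * np.pi**2 / 30.0 * T**4
P, s, cs2 = thermodynamics(T, eps)
print(P[-1] / eps[-1], cs2[-1])            # both approximately 1/3
```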
In Fig. \[Fig.1\] we show the results for the thermodynamic quantities (pressure, energy density and squared speed of sound) of the MHRG model at this stage. The nice correspondence with results from lattice QCD is not accidental for the temperature region $T\sim T_c \sim 200$ MeV, where it has been shown in [@Borsanyi:2010cj] that a hadron resonance gas perfectly describes the lattice QCD data. For $T>T_c$ the broadening of the spectral function (\[one\]), which at this stage of the model affects only the hadronic resonances with $m>m_0$, leads to the vanishing of their contribution at about $2T_c$, while the light hadrons with masses $m<m_0$ are not affected and gradually reach the Stefan-Boltzmann (SB) limit determined by their number of degrees of freedom. As has been noted in [@Brown:1991dj], this number ($\sum_{i=\pi,K,\eta,f_0,\rho,\omega,K^*,\eta',f_0,a_0}g_i=
3+4+1+1+9+3+6+1+1+3=32$) accidentally (or by duality arguments) coincides with that of the quarks and gluons ($\sum_{i=q,g}g_i=7/8*N_c*N_f*N_s*2 + (N_c^2-1)*2=31.5$) for $N_c=N_f=3$. Therefore, imposing that all mesons lighter than $m_0=1$ GeV are stable provides us with a SB limit at high temperatures which mimics that of quarks and gluons for three flavors.
Although providing us with an excellent fit of the lattice data, the high-temperature phase of this model is unphysical since it ignores the Mott effect for light hadrons. Due to the chiral phase transition at $T_c$, the quarks lose their mass and therefore the threshold of the continuum of quark-antiquark scattering states is lowered. The light meson masses, however, remain almost unaffected by the increase in the temperature of the system. Consequently, they merge with the continuum and become unbound - their spectral function changes from a delta-function (on-shell bound states) to a Breit-Wigner-type (off-shell, resonant scattering states). This phenomenon is the hadronic analogue [@Zablocki:2010zz] of the Mott-Anderson transition for electrons in solid state physics (insulator-conductor transition).
![\[Fig.1\] (Color online) Thermodynamic quantities for the old Mott-Hagedorn Resonance Gas model [@Blaschke:2003ut]. Different line styles correspond to different values for the parameter $N_m$ in the range from $N_m=2.5$ (dashed line) to $N_m=3.0$ (solid line). Lattice QCD data are from Ref. [@Borsanyi:2010cj]. ](mhrg_latt_all){width="70.00000%"}
It has been first introduced for the hadronic-to-quark-matter transition in [@Blaschke:1984yj]. Later, within the NJL model, a microscopic approach to the thermodynamics of the Mott dissociation of mesons in quark matter has been given in the form of a generalized Beth-Uhlenbeck equation of state [@Hufner:1994ma], see also [@Radzhabov:2010dd].
Hadronic spectral function with state-dependent ansatz
------------------------------------------------------
As a microscopic treatment of the Mott effect for all resonances is presently out of reach, we introduce an ansatz for a state-dependent hadron resonance width $\Gamma_i(T)$ given by the inverse collision time scale recently suggested within an approach to the chemical freeze-out and chiral condensate in a resonance gas [@Blaschke:2011ry] $$\label{Gamma}
%
\Gamma_i (T) = \tau_{\rm coll,i}^{-1}(T)
= \sum_{j}\lambda\,\langle r_i^2\rangle_T \langle r_j^2\rangle_T~n_j(T)~,
%$$ which is based on a binary collision approximation and relaxation time ansatz using for the in-medium hadron-hadron cross sections the geometrical Povh-Hüfner law [@Povh:1990ad]. In Eq. (\[Gamma\]) the coefficient $\lambda$ is a free parameter, $n_j(T)$ is the partial density of the hadron $j$ and the mean squared radii of hadrons $\langle r_i^2 \rangle_T$ obtain in the medium a temperature dependence which is governed by the (partial) restoration of chiral symmetry. For the pion this was quantitatively studied within the NJL model [@Hippe:1995hu] and it was shown that close to the Mott transition the pion radius is well approximated by $$r_\pi^2(T)=\frac{3}{4\pi^2} f_\pi^{-2}(T)
=\frac{3M_\pi^2}{4\pi^2m_q}
|\langle \bar{q} q \rangle_{T}|^{-1}~.$$ Here the Gell-Mann–Oakes–Renner relation has been used and the pion mass shall be assumed chirally protected and thus temperature independent.
For the nucleon, we shall assume the radius to consist of two components, a medium independent hard core radius $r_0$ and a pion cloud contribution $r_N^2(T)=r_0^2+r_\pi^2(T)~,$ where from the vacuum values $r_\pi=0.59$ fm and $r_N=0.74$ fm one gets $r_0=0.45$ fm. A key point of our approach is that the temperature dependent hadronic radii shall diverge when hadron dissociation (Mott effect) sets in, driven basically by the restoration of chiral symmetry. As a consequence, in the vicinity of the chiral restoration temperature all meson radii shall behave like that of the pion and all baryon radii like that of the nucleon. The resulting energy density behaviour is shown in Fig. \[Fig.1a\].
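The vacuum numbers quoted here can be reproduced directly; a small sketch, assuming the vacuum pion decay constant $f_\pi=92.4$ MeV:

```python
import numpy as np

HBARC = 197.327   # MeV fm
f_pi = 92.4       # vacuum pion decay constant [MeV] (assumed value)

r_pi = np.sqrt(3.0 / (4.0 * np.pi**2)) / f_pi * HBARC   # vacuum pion radius, ~0.59 fm
r_N = 0.74                                              # vacuum nucleon radius [fm]
r_0 = np.sqrt(r_N**2 - r_pi**2)                         # hard-core radius, ~0.45 fm
print(f"r_pi = {r_pi:.2f} fm, r_0 = {r_0:.2f} fm")
```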
![\[Fig.1a\] (Color online) Energy density (red lines and symbols) and pressure (black lines and symbols) for the state-dependent width model of Eq. (\[Gamma\]) and three values of the mass threshold $m_0$: 1 GeV (solid lines), 980 MeV (dashed lines), 0 (dash-dotted lines). Lattice QCD data are from Ref. [@Borsanyi:2010cj].](MHRG-spectral-m0.eps){width="70.00000%"}
This part of the model we call Mott-Hagedorn-Resonance-Gas (MHRG). When all hadrons are gone at $T\sim 250$ MeV, we are clearly missing degrees of freedom!
Quarks, gluons and hadron resonances
====================================
We improve the PNJL model over its standard versions [@Fukushima:2003fw; @Ratti:2005jh] by adding perturbative corrections in $\mathcal{O}(\alpha_s)$ for the high-momentum region above the three-momentum cutoff $\Lambda$. In the second step, the MHRG part is replaced by its final form, using the state-dependent spectral function for the description of the Mott dissociation of all hadron resonances above the chiral transition. The total pressure takes the form $$P(T)=P_{\rm MHRG}(T)+P_{\rm PNJL,MF}(T)+P_2(T) ~,$$ where $P_{\rm MHRG}(T)$ stands for the pressure of the MHRG model, accounting for the dissociation of hadrons in hot, dense matter.
The $\mathcal{O}(\alpha_s)$ corrections can be split in quark and gluon contributions $$\label{P2}
P_2(T)=P_2^{{\rm quark}}(T) + P_2^{{\rm gluon}}(T)~,$$ where $P_2^{{\rm quark}}$ stands for the quark contribution and $P_2^{{\rm gluon}}$ contains the ghost and gluon contributions. The total perturbative QCD correction to $\mathcal{O}(\alpha_s)$ is $$P_2=-\frac{8}{\pi}\alpha_s T^4(I_{\Lambda}^+
+\frac{3}{\pi^2}((I_{\Lambda}^+)^2+(I_{\Lambda}^-)^2)),$$ where $I^{\pm}_{\Lambda}=\int\limits_{\Lambda/T}^{\infty}\frac{{\rm d}x~x}{{\rm e}^x\pm 1}$. The corresponding contribution to the energy density is given in the standard way by Eq. (\[six\]).
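As a numerical illustration of this correction, the short Python sketch below evaluates the cutoff integrals $I^{\pm}_{\Lambda}$ and the resulting $P_2/T^4$; the values chosen for $\alpha_s$ and for the three-momentum cutoff $\Lambda$ are assumptions made only for this example.

```python
import numpy as np
from scipy.integrate import quad

# Numerical evaluation of the O(alpha_s) correction:
#   I^{+-}_Lambda = int_{Lambda/T}^inf dx x / (e^x +- 1),
#   P_2 = -(8/pi) alpha_s T^4 [ I^+ + (3/pi^2) ((I^+)^2 + (I^-)^2) ].
# The values of alpha_s and of the cutoff Lambda are illustrative assumptions.

def I_pm(y, sign):
    """Cutoff integral I^+ (sign=+1, Fermi) or I^- (sign=-1, Bose)."""
    return quad(lambda x: x / (np.exp(x) + sign), y, np.inf)[0]

def P2_over_T4(T, Lambda=0.6, alpha_s=0.3):
    """Dimensionless correction P_2 / T^4 (T and Lambda in GeV)."""
    y = Lambda / T
    ip, im = I_pm(y, +1.0), I_pm(y, -1.0)
    return -(8.0 / np.pi) * alpha_s * (ip + 3.0 / np.pi**2 * (ip**2 + im**2))

for T in (0.15, 0.25, 0.40, 1.00):
    print(f"T = {T:.2f} GeV : P2/T^4 = {P2_over_T4(T):+.4f}")
```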
We now include an effective description of the dissociation of hadrons due to the Mott effect into the hadron resonance gas model by inserting the state-dependent hadron resonance width (\[Gamma\]) into the definition of the HRG pressure $$P_{\rm MHRG}(T)=\sum_{i}\delta_id_i\!\int\!\frac{d^3p}{(2\pi)^3}dM\,
A_i(M) T \ln\left(1+\delta_i{\rm e}^{-\sqrt{p^2+M^2}/T} \right)\,.$$ From the pressure as a thermodynamic potential all relevant thermodynamical functions can be obtained. Combining the $\alpha_s$ corrected meanfield PNJL model for the quark-gluon subsystem with the MHRG description of the hadronic resonances we obtain the results shown in the right panel of Fig. \[Fig.1\] where the resulting partial contributions in comparison with lattice QCD data from Ref. [@Borsanyi:2010cj] are shown.
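To make the role of the spectral function explicit, the following single-resonance Python sketch folds the ideal-gas pressure with a numerically normalized relativistic Breit-Wigner $A(M)$; the pole mass, width, degeneracy and mass window are illustrative placeholders rather than parameters of the full MHRG calculation.

```python
import numpy as np
from scipy.integrate import quad

# Single-resonance illustration of the finite-width pressure: the ideal-gas
# pressure is folded with a numerically normalized relativistic Breit-Wigner
# A(M).  Pole mass, width, degeneracy and mass window are placeholder values.

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def breit_wigner(M, M0, Gamma):
    """Relativistic Breit-Wigner shape (normalized numerically below)."""
    return M**2 * Gamma / ((M**2 - M0**2)**2 + M0**2 * Gamma**2)

def pressure_sharp(T, M, d=3, boson=True):
    """Ideal-gas pressure (GeV^4) of one species with a sharp mass M."""
    delta = -1.0 if boson else +1.0
    f = lambda p: p**2 * np.log(1.0 + delta * np.exp(-np.hypot(p, M) / T))
    return delta * d * T / (2.0 * np.pi**2) * quad(f, 0.0, 20.0 * T)[0]

def pressure_mhrg(T, M0=0.775, Gamma=0.3, d=3, boson=True):
    """Pressure folded with A(M); Gamma -> 0 recovers the sharp-mass value."""
    if Gamma < 1e-4:
        return pressure_sharp(T, M0, d, boson)
    Ms = np.linspace(0.10, M0 + 8.0 * Gamma, 400)
    A = breit_wigner(Ms, M0, Gamma)
    A /= trapz(A, Ms)                       # enforce unit normalization
    P = np.array([pressure_sharp(T, M, d, boson) for M in Ms])
    return trapz(A * P, Ms)

T = 0.160
print("sharp mass     :", pressure_sharp(T, 0.775))
print("Gamma = 0.3 GeV:", pressure_mhrg(T))
```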
We see that the lattice QCD thermodynamics is in full accordance with a hadron resonance gas up to a temperature of $\sim 170$ MeV which corresponds to the pseudocritical temperature of the chiral phase transition. The lattice data saturate below the Stefan-Boltzmann limit of an ideal quark-gluon gas at high temperatures. The PNJL model, however, attains this limit by construction. The deviation is to good accuracy described by perturbative corrections to $\mathcal{O}(\alpha_s)$ which vanish at low temperatures due to an infrared cutoff procedure. The transition region $170\le T[{\rm MeV}]\le 250$ is described by the MHRG model, resulting in a decreasing HRG pressure which vanishes at $T \sim 250$ MeV.
![\[Fig.3\] (Color online) Thermodynamic quantities for the new Mott-Hagedorn Resonance Gas where quark-gluon plasma contributions are described within the PNJL model including $\alpha_s$ corrections (dashed lines). Hadronic resonances are described within the resonance gas with finite width, as an implementation of the Mott effect (dash-dotted line). The sum of both contributions (solid lines) is shown for the energy density (thick lines) and pressure (thin lines) in comparison with the lattice data from [@Borsanyi:2010cj]. ](MHRG_Latt_PNJL){width="70.00000%"}
We have presented two stages of an effective model description of QCD thermodynamics at finite temperatures which properly accounts for the fact that in the QCD transition region it is dominated by a tower of hadronic resonances. To this end we have further developed a generalization of the Hagedorn resonance gas thermodynamics which includes the finite lifetime of hadronic resonances in a hot and dense medium by a model ansatz for a temperature- and mass dependent spectral function.
Conclusion and outlook
======================
![\[Fig.3\](Color online) Thermodynamic quantities as in Fig. \[Fig.1\] for the MHRG-PNJL model compared to lattice data from Ref. [@Borsanyi:2010cj]. ](David_cs2.eps){width="70.00000%"}
After having presented the MHRG-PNJL model with the state-dependent spectral function approach, we summarize the thermodynamic quantities in Fig. \[Fig.3\]. We have presented two stages of an effective model description of QCD thermodynamics at finite temperatures which properly accounts for the fact that in the QCD transition region the thermodynamics is dominated by a tower of hadronic resonances. In the first of the two stages presented here, we have used the fact that the number of low-lying mesonic degrees of freedom with masses below $\sim 1$ GeV approximately equals the number of thermodynamic degrees of freedom of a gas of quarks and gluons. In the second one we have further developed a generalization of the Hagedorn resonance gas thermodynamics which includes the finite lifetime of heavy resonances in a hot and dense medium by a model ansatz for a temperature- and mass-dependent spectral function, which is motivated by a model for the collision time successfully applied in the kinetic description of chemical freeze-out from a hadron resonance gas. A next step should also take into account the effects of continuum correlations in hadronic scattering channels in accordance with the Levinson theorem [@Dashen:1969ep], as discussed recently for the example of pion dissociation within the PNJL model [@Wergieluk:2012gd]. Our account of the $\mathcal{O}(\alpha_s)$ corrections from quark and gluon scattering in the plasma may be seen from this perspective.
D.B. wants to thank K.A. Bugaev, K. Redlich and A. Wergieluk for discussions and collaboration. This work was supported in part by the Polish National Science Center (NCN) under contract No. N N202 0523 40 and the “Maestro” programme DEC-2011/02/A/ST2/00306 as well as by the Russian Foundation for Basic Research under grant No. 11-02-01538-a (D.B.).\
[99]{}
S. Borsanyi [*et al.*]{}, JHEP [**1011**]{}, 077 (2010). A. Bazavov [*et al.*]{}, Phys. Rev. D [**80**]{}, 014504 (2009). F. Karsch, K. Redlich and L. Turko, Z. Phys. C [**60**]{}, 519 (1993). F. Karsch, Lect. Notes Phys. [**583**]{}, 209 (2002); \[hep-lat/0106019\].
F. Karsch, K. Redlich and A. Tawfik, Eur. Phys. J. C [**29**]{}, 549 (2003). C. Ratti [*et al.*]{} \[Wuppertal-Budapest Collaboration\], Nucl. Phys. A [**855**]{}, 253 (2011) \[arXiv:1012.5215 \[hep-lat\]\]. D. B. Blaschke, K. A. Bugaev, Fizika B [**13**]{}, 491 (2004);\
Phys. Part. Nucl. Lett. [**2**]{}, 305 (2005). D. B. Blaschke, K. A. Bugaev, Phys. Part. Nucl. Lett. [**2**]{}, 305 (2005). \[Pisma Fiz. Elem. Chast. Atom. Yadra [**2**]{}, 69 (2005)\]. L. Turko, D. Blaschke, D. Prorok and J. Berdermann, Acta Phys. Polon. Supp. [**5**]{}, 485 (2012) \[arXiv:1112.6408 \[nucl-th\]\]. D. B. Blaschke [*et al.*]{}, Phys. Part. Nucl. Lett. [**8**]{}, 811 (2011). K. Fukushima, Phys. Lett. B [**591**]{}, 277 (2004).
C. Ratti, M. A. Thaler, W. Weise, Phys. Rev. D [**73**]{}, 014019 (2006). G. E. Brown, H. A. Bethe, P. M. Pizzochero, Phys. Lett. B [**263**]{}, 337 (1991). D. Zablocki, D. Blaschke and G. Röpke, [*Metal-to-Nonmetal Transitions*]{}, Springer Series in Materials Science, [**132**]{}, 161 (2010).
D. Blaschke [*et al.*]{}, Phys. Lett. B [**151**]{}, 439 (1985). J. Hüfner [*et al.*]{}, Annals Phys. [**234**]{}, 225 (1994). A. E. Radzhabov [*et al.*]{}, Phys. Rev. D [**83**]{}, 116004 (2011).
B. Povh, J. Hüfner, Phys. Lett. B [**245**]{}, 653 (1990).
H. J. Hippe and S. P. Klevansky, Phys. Rev. C [**52**]{}, 2172 (1995).
R. Dashen, S. -K. Ma and H. J. Bernstein, Phys. Rev. [**187**]{}, 345 (1969).
A. Wergieluk, D. Blaschke, Y. .L. Kalinovsky and A. Friesen, arXiv:1212.5245 \[nucl-th\].
[^1]: Talk presented at the Symposium: On Discovery Physics at the LHC – Kruger 2012, RPA, December 3 - 7, 2012
---
abstract: 'The recent measurement of atomic parity violation in cesium atoms shows a $2.3\sigma$ deviation from the standard model prediction. We show that such an effect can be explained by four-fermion contact interactions with specific chiralities or by scalar leptoquarks which couple to the left-handed quarks. For a coupling of electromagnetic strength, the leptoquark mass is inferred to be 1.1 to 1.3 TeV. We also show that these solutions are consistent with all other low-energy and high-energy neutral-current data.'
---
[Atomic Parity Violation, Leptoquarks, and Contact Interactions]{}\
0.7cm V. Barger$^a$ and Kingman Cheung$^b$\
$^a$ [*Department of Physics, University of Wisconsin, 1150 University Ave., Madison, WI 53706*]{}\
$^b$ [*Department of Physics, University of California, Davis, CA 95616 USA*]{}\
Parity violation in the standard model (SM) results from exchanges of weak gauge bosons. In electron-hadron neutral-current processes parity violation is due to the vector axial-vector interaction terms in the Lagrangian. These interactions have been tested to high accuracy in atomic parity violation (APV) measurements. A very recent measurement in cesium (Cs) atoms has been reported [@apv], based on a parity-odd transition between the $6S$ and $7S$ energy levels of the Cs atoms. The result is stated in terms of the weak charge $Q_W$, which parameterizes the parity-violating Hamiltonian.
The new measurement of the atomic parity violation in cesium atoms is [@apv] $$\label{first}
Q_W ( ^{133}_{55} {\rm Cs} ) = -72.06 \pm 0.28 ({\rm expt}) \pm 0.34
({\rm theo})\;.$$ This result represents a substantial improvement over the previously reported value [@oldapv], because of a more precise calculation of the atomic wavefunctions [@wave]. Compared to the standard model prediction $Q_W^{\rm SM}= -73.09 \pm 0.03$ [@sm-value], the deviation $\Delta Q_W$ is $$\label{data}
\Delta Q_W \equiv Q_W({\rm Cs}) - Q_W^{\rm SM}({\rm Cs})
= 1.03 \pm 0.44\;,$$ which is $2.3\sigma$ away from the SM prediction.
In this Letter, we propose leptoquark solutions to this APV measurement and also solutions with four-fermion contact interactions. We find that the weak-isospin-doublet leptoquark ${\cal S}^R_{1/2}$, which couples to the right-handed electron and left-handed $u$ and $d$ quarks, and the weak-isospin-triplet leptoquark $\vec{\cal S}_1^L$, which couples to left-handed electron and left-handed $u,d$ quarks, can explain the measurement with the coupling-to-mass ratio $\lambda/M \sim 0.29$ and 0.24 TeV$^{-1}$, respectively, where $\lambda$ is the coupling and $M$ is the leptoquark mass. For a coupling of electromagnetic strength the leptoquark masses are 1.1 to 1.3 TeV. We verify that these leptoquark explanations are comfortably consistent with all existing experimental constraints. We also find that contact interactions with $\eta_{RL}^{eu} = \eta_{RL}^{ed}=-0.043$ TeV$^{-2}$ and others can alternatively explain the APV measurement and are consistent with a global fit to data on $\ell\ell qq$ interactions.
Another possible explanation for the APV measurement is extra $Z$ bosons [@extra-z], which can come from a number of grand-unified theories. Previous work on constraining new physics using the atomic parity violation measurements can be found in Ref. [@prev].
The parity-violating part of the Lagrangian describing electron-nucleon scattering is given by $${\cal L}^{eq} = \frac{G_F}{\sqrt{2}} \sum_{q=u,d} \left\{
C_{1q}( \bar e \gamma^\mu \gamma^5 e ) (\bar q \gamma_\mu q)
+C_{2q}( \bar e \gamma_\mu e ) (\bar q \gamma^\mu \gamma^5 q) \right \}$$ where in the SM the coefficients $C_{1q}$ and $C_{2q}$ at tree level are given by $$C_{1q}^{\rm SM} = - T_{3q} + 2 Q_q \sin^2\theta_{\rm w}\;, \qquad
C_{2q}^{\rm SM} = - T_{3q} (1 - 4 \sin^2\theta_{\rm w})\;.$$ Here $T_{3q}$ is the third component of the isospin of the quark $q$ and $\theta_{\rm w}$ is the weak mixing angle. In terms of the $C_{1q}$, the weak charge $Q_W$ for Cs is $Q_W = -376 C_{1u} - 422 C_{1d}$. Since we are interested in the deviation of $Q_W$ from its SM value, we write $$\Delta Q_W ({\rm Cs}) = -376 \Delta C_{1u} - 422 \Delta C_{1d} \;.$$
A convenient form [@ours] for four-fermion $eeqq$ contact interactions is [@quigg] $${\cal L}_\Lambda = \sum_{q=u,d} \left \{
\eta_{LL} \overline{e_L} \gamma_\mu e_L \overline{q_L} \gamma^\mu q_L
+\eta_{LR} \overline{e_L} \gamma_\mu e_L \overline{q_R} \gamma^\mu q_R
+\eta_{RL} \overline{e_R} \gamma_\mu e_R \overline{q_L} \gamma^\mu q_L
+\eta_{RR} \overline{e_R} \gamma_\mu e_R \overline{q_R} \gamma^\mu q_R
\right \} \;,$$ where $\eta_{\alpha\beta} = 4\pi \epsilon/(\Lambda^{eq}_{\alpha\beta})^2$ and $\epsilon=\pm1$. The contact interaction contributions to the $\Delta C_{1q}$’s are $$\Delta C_{1q} = \frac{1}{2\sqrt{2} G_F} \left [
-\eta_{LL}^{eq} + \eta_{RR}^{eq} - \eta_{LR}^{eq} + \eta_{RL}^{eq} \right ]\;,$$ and the corresponding contributions to $\Delta Q_W$ are $$\label{th}
\Delta Q_W = ( -11.4\; {\rm TeV}^{2} ) \left[
-\eta_{LL}^{eu} + \eta_{RR}^{eu} - \eta_{LR}^{eu} + \eta_{RL}^{eu} \right ]
+
( -12.8\; {\rm TeV}^{2} ) \left[
-\eta_{LL}^{ed} + \eta_{RR}^{ed} - \eta_{LR}^{ed} + \eta_{RL}^{ed} \right ]
\;.$$
$\eta$ fitted value (TeV$^{-2}$) $\eta$ fitted value (TeV$^{-2}$)
------------------ --------------------------- ------------------ ---------------------------
$\eta_{LL}^{eu}$ $ 0.090$ $\eta_{LL}^{ed}$ $0.081 $
$\eta_{RR}^{eu}$ $-0.090$ $\eta_{RR}^{ed}$ $-0.081$
$\eta_{LR}^{eu}$ $ 0.090$ $\eta_{LR}^{ed}$ $0.081 $
$\eta_{RL}^{eu}$ $-0.090$ $\eta_{RL}^{ed}$ $-0.081$
: \[table1\] The values of $\eta_{\alpha\beta}^{eu,ed}$ required to fit the $\Delta Q_W$ data of Eq. (\[data\]). We assume one nonzero $\eta$ at a time.
In order to explain the data in Eq. (\[data\]) using contact interactions, we can apply Eq. (\[th\]) with nonzero $\eta$’s. However, from Eq. (\[th\]) we see that there could be cancellations among the $\eta$-terms. When we assume one nonzero $\eta$ at a time, the values required to fit the APV data are tabulated in Table \[table1\]. The value of $\Lambda$ corresponding to $\eta=0.090 (0.081)$ TeV$^{-2}$ is $11.8 (12.5)$ TeV. If we further assume a SU(2)$_L$ symmetry, then $\eta_{RL}^{eu}$ equals $\eta_{RL}^{ed}$ and the value to fit the APV data is $$\label{contact}
\eta_{RL}^{eu} = \eta_{RL}^{ed} = -0.043\; {\rm TeV}^{-2}\;,$$ which corresponds to a $\Lambda \sim 17$ TeV. Equation (\[contact\]) is relevant to one of the leptoquark solutions that we present in the next section.
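The arithmetic behind Table \[table1\] and Eq. (\[contact\]) can be reproduced with the short Python sketch below, which rebuilds the coefficients of Eq. (\[th\]) from $G_F$ and the nuclear factors and then solves for one nonzero $\eta$ at a time; it is a numerical check only and introduces no inputs beyond those quoted above.

```python
import numpy as np

# Numerical check of the single-coupling fits: the coefficients of the
# Delta Q_W formula are rebuilt from G_F and the nuclear factors (-376, -422),
# then each eta is solved for separately, and finally the SU(2)_L-related
# case eta_RL^eu = eta_RL^ed is evaluated.

GF = 1.16637e-5 * 1e6                     # Fermi constant in TeV^-2
pref = 1.0 / (2.0 * np.sqrt(2.0) * GF)    # converts eta (TeV^-2) to Delta C_1q
c_u, c_d = -376.0 * pref, -422.0 * pref   # ~ -11.4 and -12.8 TeV^2
dQW = 1.03                                # measured deviation Delta Q_W

signs = {"LL": -1.0, "LR": -1.0, "RL": +1.0, "RR": +1.0}
print(f"coefficients: {c_u:.1f} TeV^2 (u), {c_d:.1f} TeV^2 (d)")
for ab, s in signs.items():
    eta_u, eta_d = dQW / (s * c_u), dQW / (s * c_d)
    print(f"eta_{ab}: eu = {eta_u:+.3f}, ed = {eta_d:+.3f} TeV^-2, "
          f"Lambda = {np.sqrt(4 * np.pi / abs(eta_u)):.1f}, "
          f"{np.sqrt(4 * np.pi / abs(eta_d)):.1f} TeV")

eta_RL = dQW / (c_u + c_d)                # SU(2)_L case: eta_RL^eu = eta_RL^ed
print(f"eta_RL^eu = eta_RL^ed = {eta_RL:+.3f} TeV^-2, "
      f"Lambda ~ {np.sqrt(4 * np.pi / abs(eta_RL)):.0f} TeV")
```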
The next question to ask is whether the above solutions are in conflict with other existing data. To answer this, we performed an analysis [@ours] of the neutral-current lepton-quark contact interactions using a global set of $\ell\ell qq$ data, which includes (i) the neutral-current (NC) deep-inelastic scattering at HERA, (ii) Drell-Yan production at the Tevatron, (iii) the hadronic production cross sections at LEPII, (iv) the parity violation measurements in $e$-(D, Be, C) scattering at SLAC, Mainz, and Bates, (v) the $\nu$-nucleon scattering measurements by CCFR and NuTeV, and (vi) the lepton-hadron universality of weak charged-currents. This is an update of the analysis in Ref. [@ours] with new data from LEPII, finalized and published data from H1 and ZEUS [@nc], and including DØ data on Drell-Yan production [@dy-d0]. The 95% C.L. one-sided limits on $\eta$’s and the corresponding limits on $\Lambda$ are given in Table \[table2\]. In obtaining these limits, we do not include the data on atomic parity violation, which is the new physics data that we want to describe in this paper.
In Table \[table2\], the most tightly constrained are $\eta_{LL}^{eu}$ and $\eta_{LL}^{ed}$, mainly due to the constraint of lepton-hadron universality of weak charged currents. In general, the constraints on $eu$ parameters are stronger than those on $ed$ parameters, because of Drell-Yan production, in which the $u\bar u$-initial-state channel is considerably more important than the $d\bar d$-initial-state channel. From Table \[table2\] the 95% C.L. one-sided limits on $\eta_{RL}$ are 0.30 TeV$^{-2}$ and $-0.64$ TeV$^{-2}$ for $\epsilon=+$ and $\epsilon=-$, respectively. Thus, the fit to the APV data in Eq. (\[contact\]) lies comfortably within the limits and so are the solutions with $\eta_{LR}$ and $\eta_{RR}$. On the other hand, the solution using $\eta_{LL}^{eu}$ is ruled out while the solution using $\eta_{LL}^{ed}$ is marginal.
  $\eta$                              fitted value (TeV$^{-2}$)   95% C.L. limit on $\eta$ (TeV$^{-2}$)   limit on $\Lambda$ (TeV)
  ----------------------------------- --------------------------- --------- ---------- --------- ---------
                                                                   [$+$]{}   [$-$]{}    [$+$]{}   [$-$]{}
$\eta_{LL}^{eu}$ $-0.057 \pm 0.030$ 0.034 $-0.11$ 19.4 10.8
$\eta_{LR}^{eu}$ $-0.024 \pm 0.15$ 0.28 $-0.32$ 6.6 6.3
$\eta_{RL}^{eu}=\eta_{RL}^{ed}$ $-0.38 \err{0.20}{0.17}$ 0.30 $-0.64$ 6.4 4.4
$\eta_{RR}^{eu}$ $-0.23\err{0.15}{0.14}$ 0.20 $-0.46$ 7.9 5.2
$\eta_{LL}^{ed}$ $0.059\pm{0.033}$ 0.11 $-0.037$ 10.5 18.6
$\eta_{LR}^{ed}$ $-0.048\err{0.33}{0.31}$ 0.62 $-0.60$ 4.5 4.6
$\eta_{RR}^{ed}$ $0.32\err{0.26}{0.30}$ 0.73 $-0.61$ 4.1 4.5
$\eta_{LL}^{eu}=\eta_{LL}^{ed}/2$ $0.058\pm{0.034}$ 0.11 $-0.040$ 10.5 17.8
: \[table2\] The 95% C.L. one-sided limits on $\eta_{\alpha\beta}^{eq},\;
\alpha,\beta=L,R, q=u,d$. The “$+$” and “$-$” signs correspond to the $\epsilon$ in the definition of $\eta$’s. The corresponding limits on $\Lambda_{\alpha\beta}^{eq}$ are also shown. The SU(2)$_L$ implied relation $\eta_{RL}^{eu}=\eta_{RL}^{ed}$ is included.
The Lagrangians representing the interactions of the $F=0$ and $F=-2$ ($F$ is the fermion number) scalar leptoquarks are [@buch; @rizzo] $$\begin{aligned}
\label{9}
{\cal L}_{F=0} &=& \lambda_L \overline{\ell_L} u_R {\cal S}_{1/2}^L
+ \lambda_R \overline{q_L} e_R (i \tau_2 {\cal S}^{R*}_{1/2} )
+ \tilde{\lambda}_L \overline{\ell_L} d_R \tilde{{\cal S}}_{1/2}^L + h.c. \;,\\
\label{10}
{\cal L}_{F=-2} &=& g_L \overline{q_L^{(c)}} i \tau_2 \ell_L {\cal S}_0^L
+ g_R \overline{u_R^{(c)}} e_R {\cal S}_0^R
+ \tilde{g}_R \overline{d_R^{(c)}} e_R \tilde{{\cal S}}_0^R
+ g_{3L}\overline{q_L^{(c)}} i \tau_2 \vec{\tau} \ell_L \cdot \vec{\cal S}_1^L
+ h.c.\end{aligned}$$ where $q_L,\ell_L$ denote the left-handed quark and lepton doublets, $u_R,d_R,e_R$ denote the right-handed up quark, down quark, and electron singlet, and $q_L^{(c)}, u_R^{(c)}, d_R^{(c)}$ denote the charge-conjugated fields. The subscript on leptoquark fields denotes the weak-isospin of the leptoquark, while the superscript ($L,R$) denotes the handedness of the lepton that the leptoquark couples to. The components of the $F=0$ leptoquark fields are $${\cal S}_{1/2}^{L,R} = \left ( \begin{array}{c}
{S_{1/2}^{L,R} }^{(-2/3)} \\
{S_{1/2}^{L,R} }^{(-5/3)} \end{array} \right ) \;, \;\;\;\;\;
\tilde{{\cal S}}_{1/2}^L = \left( \begin{array}{c}
\tilde{S}_{1/2}^{L(1/3)} \\
- \tilde{S}_{1/2}^{L(-2/3)} \end{array} \right ) \;,$$ where the electric charge of the component fields is given in the parentheses, and the corresponding hypercharges are $Y({\cal S}_{1/2}^L)=
Y({\cal S}_{1/2}^R)=-7/3$ and $Y(\tilde{{\cal S}}_{1/2}^L)=-1/3$. The $F=-2$ leptoquarks ${\cal S}_0^L, {\cal S}_0^R, \tilde{{\cal S}}_0^R$ are isospin singlets with hypercharges $2/3, 2/3, 8/3$, respectively, while ${\cal S}_1^L$ is a triplet with hypercharge $2/3$: $${\cal S}_1^L = \left( \begin{array}{l}
{ S_1^L }^{(4/3)} \\
{ S_1^L }^{(1/3)} \\
{ S_1^L }^{(-2/3)} \end{array} \right ) \;.$$ The SU(2)$_L\times$ U(1)$_Y$ symmetry is assumed in the Lagrangians of Eqs. (\[9\]) and (\[10\]).
We have verified that the contributions of leptoquarks ${\cal S}^L_{1/2}$, $\tilde{\cal S}^L_{1/2}$, ${\cal S}_0^R$, and $\tilde{\cal S}_0^R$, that couple to the right-handed quarks, only give a negative $\Delta Q_W$, which cannot explain the measurement in Eq. (\[first\]). The only viable choices are the leptoquarks ${\cal S}^R_{1/2}$, ${\cal S}_0^L$, and $\vec{\cal S}_1^L$ that couple to the left-handed quarks. Let us first examine the contribution from the $F=0$ leptoquark ${\cal S}^R_{1/2}$. The effective interaction of electron-quark scattering via this leptoquark is $${\cal L} = - \frac{\lambda_{R}^2}{M^2_{ {\cal S}^R_{1/2}}} \left(
\overline{d_L} e_R \overline{e_R}d_L + \overline{u_L} e_R \overline{e_R}u_L
\right ) \;,$$ where we have assumed $M^2_{ {\cal S}_{1/2}^R} \gg s,|t|,|u|$ and the overall negative sign is due to the ordering of the fermion fields relative to the $\gamma,Z$ diagrams. After a Fierz transformation, the above amplitude can be transformed to $${\cal L} = - \frac{\lambda_{R}^2}{2 M^2_{{\cal S}^R_{1/2}}} \left(
\overline{e_R} \gamma^\mu e_R \overline{d_L} \gamma_\mu d_L
+ \overline{e_R} \gamma^\mu e_R \overline{u_L} \gamma_\mu u_L \right )\;.$$ Comparing with the contact interaction terms, we can relate the above equation to $\eta_{RL}$ as $$\eta_{RL}^{eu} = \eta_{RL}^{ed} = - \frac{\lambda_{R}^2}
{2 M^2_{{\cal S}^R_{1/2}}} \;.$$ Using the result on contact terms in Eq. (\[contact\]) and the above equation, we obtain the value for $\lambda_{R}/M_{{\cal S}^R_{1/2}}$ to be $$\label{final}
\frac{\lambda_{R}}{M_{{\cal S}^R_{1/2}}} = 0.29 \; {\rm TeV}^{-1} \;.$$ This result cannot separately determine the mass or the coupling of the leptoquark, because APV is a low-energy atomic process that probes only the ratio $\lambda_{R}/M_{{\cal S}^R_{1/2}}$.
Similarly, the effective interaction of electron-quark scattering involving ${\cal S}^L_0$ is $${\cal L} = \frac{g_L^2}{2 M^2_{ {\cal S}_0^L}}
\overline{e_L} \gamma^\mu e_L \overline{u_L} \gamma_\mu u_L \;.$$ Therefore, the contribution from ${\cal S}_0^L$, in terms of contact interaction, is $$\eta_{LL}^{eu} = \frac{g_L^2}{2 M^2_{ {\cal S}_0^L} }\;.$$ Matching with the results in Table \[table1\] the coupling-to-mass ratio of the leptoquark is given by $$\label{final2}
\frac{g_L}{M_{ {\cal S}_0^L} } = 0.43\; {\rm TeV}^{-1}\;.$$ However, this leptoquark ${\cal S}_0^L$ contributes $\eta_{LL}^{eu}
=0.09$ TeV$^{-2}$ and that is ruled out by the limit in Table \[table2\].
The interaction of the $F=-2$ leptoquark $\vec{\cal S}_1^L$ is given by $${\cal L} = g_{3L} \left \{
-\left
(\overline{u_L^{(c)}}e_L + \overline{d_L^{(c)}} \nu_L \right)
\, {\cal S}_1^{L(1/3)}
- \sqrt{2}\; \overline{d_L^{(c)}} e_L \; {\cal S}_1^{L(4/3)}
+ \sqrt{2}\;
\overline{u_L^{(c)}} \nu_L \; {\cal S}_1^{L(-2/3)} + h.c. \right \}
\;.$$ The effective interaction of electron-quark scattering involving $\vec{\cal S}_1^L$ is $${\cal L} = \frac{ g_{3L}^2}{2 M^2_{ {\cal S}_1^L} }\;
\overline{e_L} \gamma^\mu e_L \; \overline{u_L} \gamma_\mu u_L +
\frac{ g_{3L}^2}{ M^2_{ {\cal S}_1^L} }\;
\overline{e_L} \gamma^\mu e_L \; \overline{d_L} \gamma_\mu d_L \;.$$ Therefore, the contributions from $\vec{\cal S}_1^L$, in terms of contact interaction, are $$\eta_{LL}^{eu} = \frac{ \eta_{LL}^{ed}}{2}
= \frac{g_{3L}^2}{2 M^2_{ {\cal S}_1^L} }\;.$$ Fitting to $\Delta Q_W$ using Eq. (\[th\]), we obtain the coupling-to-mass ratio of $\vec{\cal S}_1^L$ to be $$\label{3L}
\frac{g_{3L}}{M_{ {\cal S}_1^L}} = 0.24 \; {\rm TeV}^{-1} \;,$$ which gives $\eta_{LL}^{eu}=0.028\;{\rm TeV}^{-2}$ and $\eta_{LL}^{ed}=0.056\;{\rm TeV}^{-2}$. We recalculate the limit from the global set of neutral-current $\ell\ell qq$ data for the case of nonzero $\eta_{LL}^{eu}$ and $\eta_{LL}^{ed}$ with $\eta_{LL}^{eu} = \eta_{LL}^{ed}/2$. We obtain the 95% C.L. one-sided limits on $\eta_{LL}^{eu}=\eta_{LL}^{ed}/2$ as $0.11\; {\rm TeV}^{-2}$ and $-0.04\;{\rm TeV}^{-2}$ for $\epsilon=+$ and $\epsilon=-$, respectively (this result is listed in the last row of Table \[table2\].) Therefore, this leptoquark $\vec{\cal S}_1^L$ solution is also consistent with all other data.
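For convenience, the following Python sketch translates these contact-interaction fits into the leptoquark coupling-to-mass ratios and into the masses implied by an electromagnetic-strength coupling; all numbers entering it are those quoted in the text.

```python
import numpy as np

# Sketch translating the contact-interaction fits into leptoquark
# coupling-to-mass ratios and into masses for an electromagnetic-strength
# coupling (lambda = e ~ 0.31), cf. Eqs. (final) and (3L).

e_coupling = 0.31

# F = 0 doublet S^R_1/2:  eta_RL^eq = -lambda_R^2 / (2 M^2)
eta_RL = -0.043                              # TeV^-2, from Eq. (contact)
ratio_R = np.sqrt(-2.0 * eta_RL)             # lambda_R / M in TeV^-1
print(f"S^R_1/2 : lambda_R/M = {ratio_R:.2f} TeV^-1, "
      f"M(e-strength) = {e_coupling / ratio_R:.2f} TeV")

# F = -2 triplet S_1^L:  eta_LL^eu = eta_LL^ed / 2 = g_3L^2 / (2 M^2)
eta_LL_eu = 0.028                            # TeV^-2, from the Q_W fit
ratio_3L = np.sqrt(2.0 * eta_LL_eu)          # g_3L / M in TeV^-1
print(f"S_1^L   : g_3L/M     = {ratio_3L:.2f} TeV^-1, "
      f"M(e-strength) = {e_coupling / ratio_3L:.2f} TeV")
```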
As discussed above, there are two leptoquark solutions that are consistent with the limits in Table \[table2\]. The one with the $F=0$ leptoquark ${\cal S}_{1/2}^R$ requires the coupling-to-mass ratio equal 0.29 TeV$^{-1}$. With a coupling strength about the same as $e=0.31$, the inferred leptoquark mass is about 1.1 TeV for ${\cal S}_{1/2}^R$. Similarly, the coupling-to-mass ratio for the $F=-2$ leptoquark $\vec{\cal S}_1^L$ is required to be 0.24 TeV$^{-1}$, which corresponds to a mass of 1.3 TeV.
In the following we discuss the above leptoquarks that describe the APV measurement in the context of the limits from various collider experiments.
The model-independent search for the first generation scalar leptoquark at the Tevatron by CDF and DØ puts a lower bound of 242 GeV on the mass of the leptoquark [@cdf-d0]. The direct search for the first generation scalar leptoquark at HERA, on the other hand, depends on the coupling constants and the type of the leptoquark. ZEUS [@zeus] excluded the first generation scalar leptoquark (fermion number $F=0$) with electromagnetic coupling strength up to a mass of 280 GeV while H1 [@h1] excluded a mass up to 275 GeV in $e^+ p$ collisions. In the most recent searches in $e^- p$ collisions, ZEUS excluded $F=-2$ scalar leptoquarks up to about 290 GeV mass [@zeus]. In general, $e^\pm p$ colliders can search for leptoquarks up to mass almost equal to the center-of-mass energy of the machine.
The LEP collaborations performed both direct searches for leptoquarks and indirect searches for virtual effects of leptoquarks in fermion-pair production. OPAL [@opal] searched for real leptoquarks in pair production and excluded scalar leptoquarks up to about 88 GeV; DELPHI [@delphi] searched for leptoquarks in single production and excluded scalar leptoquarks up to about 161 GeV. Various LEP Collaborations [@lep] analyzed fermion-pair production and were able to rule out some leptoquark coupling and mass ranges, which depend sensitively on the leptoquark type and couplings. The best mass limit is around 300 GeV for electromagnetic coupling strength. The virtual effects in fermion-pair production have already been included in our global analysis presented in Sec. 2. If we take $\lambda_{R}$ and $g_{3L}$ of electromagnetic strength, the leptoquark masses are inferred to be 1.1 and 1.3 TeV, respectively, as already noted above. Therefore, the solutions in Eqs. (\[final\]) and (\[3L\]) lie comfortably with both the direct search limits and the virtual effects in neutral-current $\ell\ell qq$ data.
An important low-energy constraint on leptoquarks and contact interactions is lepton-hadron universality of weak charged currents (CC), which we have already included in the global analysis in Sec. 2. Since it is particularly important for leptoquark interactions, we explain it briefly. Because of the SU(2)$_L$ symmetry, $\eta_{LL}^{eu}$ and $\eta_{LL}^{ed}$ are related to the CC contact interaction $\eta_{CC}
\overline{\nu_L}\gamma_\mu e_L \overline{d_L}\gamma^\mu u_L$ by $\eta_{LL}^{ed} - \eta_{LL}^{eu} = \eta_{CC}$. Thus, the NC contact interactions and leptoquarks are subject to the constraint on $\eta_{CC}$. The leptoquarks that are constrained by this $\eta_{CC}$ are ${\cal S}_0^L$ and ${\cal S}_1^L$, which couple to the left-handed leptons and quarks. The CC contact interaction $\eta_{CC} \overline{\nu_L}\gamma_\mu e_L
\overline{d_L}\gamma^\mu u_L$ could upset two important experimental constraints: (i) lepton-hadron universality in weak CC, and (ii) $e$-$\mu$ universality in pion decay, of which the former gives a stronger constraint. Using the values for the CKM matrix elements the constraint on $\eta_{CC}$ is $2 \eta_{CC} = (0.102 \pm
0.073)\; {\rm TeV}^{-2}$ [@ours]. It is mainly because of this constraint that the leptoquark ${\cal S}_0^L$ is ruled out while ${\cal S}_1^L$ remains consistent in our global analysis.
Studies of a future scalar leptoquark search at the LHC [@lhc] show that with a luminosity of 100 fb$^{-1}$ the LHC can probe leptoquark mass up to 1.5 TeV in the pair production channel (which does not depend on the Yukawa coupling) and up to about 3 TeV (with the Yukawa coupling the same as $e$) in the single production channel. Thus, the leptoquarks in our solutions can be observed or ruled out at the LHC. On the other hand, Run II at the Tevatron can only probe leptoquarks up to a mass of 425 GeV [@teva].
Comments about the origin of these leptoquarks are in order.
\(i) The $R$-parity violating (RPV) squarks, which arise from the supersymmetry framework without the $R$ parity, are special leptoquarks. The natural question to ask is whether the leptoquarks that are used to explain the APV measurement can be the RPV squarks. First, since the RPV squarks couple to leptons via the $LQD^c$ term in the superpotential, they only couple to the left-handed leptons. Therefore, ${\cal S}_{1/2}^R$, which couples to the right-handed electron, cannot be a RPV squark. Also, the leptoquark ${\cal S}_1^L$, which is an isospin-triplet, is not a RPV squark. On the other hand, the leptoquark ${\cal S}_0^L$ has the interactions $g_L ( \overline{u_L^{(c)}} e_L - \overline{d_L^{(c)}} \nu_L )\; {\cal S}_0^L$, which is exactly the same as the RPV squark $\tilde{d}^*_R$, while the isospin-doublet leptoquark $\tilde{\cal S}^L_{1/2}$, which has the interactions $\tilde{\lambda}_L \overline{\ell_L} d_R \tilde{\cal S}^L_{1/2}$, is equivalent to the left-handed RPV squark doublet $i \tau_2 (\tilde{u}^*_L,
\tilde{d}^*_L)^T$. A further question is whether the coexistence of ${\cal S}_0^L$ and $\tilde{\cal S}_{1/2}^L$ can help ${\cal S}_0^L$ to evade the constraint of lepton-hadron universality of weak charged currents, while at the same time still giving a positive $\Delta Q_W$. In Sec. 3, we have shown that ${\cal S}_0^L$ gives a positive $\Delta Q_W$ while $\tilde{\cal S}_{1/2}^L$ gives a negative $\Delta Q_W$, so that their contributions to $\Delta Q_W$ cancel. In fitting to the $Q_W$ measurement, the coexistence of ${\cal S}_0^L$ and $\tilde{\cal S}_{1/2}^L$ would give a lower ${\cal S}_0^L$ mass or a higher Yukawa coupling. However, ${\cal S}_0^L$ induces $\eta_{CC}$ as it couples to both left-handed leptons and quarks, while $\tilde{\cal S}_{1/2}^L$ does not because it couples to left-handed leptons and right-handed quarks. Therefore, the simultaneous existence of ${\cal S}_0^L$ and $\tilde{\cal S}_{1/2}^L$ would not help ${\cal S}_0^L$ to evade the constraint from lepton-hadron universality of weak charged-currents.
\(ii) The $F=-2$ leptoquark ${\cal S}_0^L$ is one of the leptoquarks of $E_6$ [@rizzo]. The $F=0$ leptoquark ${\cal S}^R_{1/2}$ can be embedded [@rizzo] in the flipped SU(5)$\times$U(1)$_X$ model [@ellis], in which the SM fermion content is extended by right-handed neutrinos. The associated right-handed neutrinos could be used to generate the neutrino masses by the see-saw mechanism. The ${\cal S}^R_{1/2}$ can be placed into ${\bf 10 + \overline{10}}$, which would also contain the $F=-2$ leptoquark $\tilde{\cal S}^R_0$. The simultaneous existence of these two leptoquarks with similar masses and couplings would give cancelling contributions to $\Delta Q_W$. Thus, from the view point of fitting to the $Q_W$ data, this is not favorable.
In summary, we have found leptoquark and contact interaction solutions to the atomic parity violation measurement, which stands at a $2.3\sigma$ deviation from the SM prediction. In addition, we have shown that these solutions are consistent with all other data.
**Acknowledgments** {#acknowledgments .unnumbered}
===================
This research was supported in part by the U.S. Department of Energy under Grants No. DE-FG03-91ER40674 and No. DE-FG02-95ER40896 and in part by the Davis Institute for High Energy Physics and the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
[99]{}
S.C. Bennett and C.E. Wieman, Phys. Rev. Lett. [**82**]{}, 2484 (1999).
C.S. Wood, [*et al.*]{}, Science [**275**]{}, 1759 (1997).
V. Dzuba, V. Flambaum, and O. Sushkov, Phys. Rev. [**A56**]{}, R4357 (1997).
C. Caso [*et al.*]{}, Eur. Phys. J. [**C3**]{}, 1 (1998). The most recent value for $Q_W({\rm Cs})$ is found in 1999 off-year partial update for the 2000 edition available on the PDG WWW site ([http://pdg.lbl.gov]{}), “[*Electroweak Model and Constraints on New Physics*]{}” by J. Erler and P. Langacker.
G. Cho, hep-ph/0002128; J. Rosner, Phys. Rev. [**D61**]{}, 016006 (1999); R. Casalbuoni, S. De Curtis, D. Dominici, and R. Gatto, Phys. Lett. [**B460**]{}, 135 (1999); J. Erler and P. Langacker, Phys. Rev. Lett. [**84**]{}, 212 (2000);
P. Langacker, M. Luo, and A. Mann, Rev. Mod. Phys. [**64**]{}, 87 (1992); M. Leurer, Phys. Rev. [**D49**]{}, 333 (1994); A. Deandrea, Phys. Lett. [**B409**]{}, 277 (1997).
V. Barger, K. Cheung, K. Hagiwara, and D. Zeppenfeld, Phys. Rev. [**D57**]{}, 391 (1998); K. Cheung, e-Print Archive: hep-ph/9807483; D. Zeppenfeld and K. Cheung, MADPH-98-1081, e-Print Archive: hep-ph/9810277.
E. Eichten, K. Lane, and M. Peskin, Phys. Rev. Lett [**50**]{}, 811 (1983).
ZEUS Coll., hep-ex/9905032; H1 Coll., hep-ex/9908059.
DØ Coll., Phys. Rev. Lett. [**82**]{}, 4769 (1999).
W. Buchmüller, R. Rückl, and D. Wyler, Phys. Lett. [**B191**]{}, 442 (1987).
J. Hewett and T. Rizzo, Phys. Rev. [**D56**]{}, 5709 (1997).
DØ Coll. (B. Abbott et al.), Phys. Rev. Lett. [**80**]{}, 2051 (1998); CDF Coll. (F. Abe et al.). Phys. Rev. Lett. [**79**]{}, 4327 (1997); “[*Combined limits on first generation leptoquarks from the CDF and DØ experiments*]{}”, by CDF Coll. and DØ Coll., e-Print Archive: hep-ex/9810015.
ZEUS Coll., DESY-00-023, hep-ex/0002038; “Study of High Mass $e^-$-jet Systems in Electron-Proton Scattering at HERA”, submission to Int. Euro. Conf. on High Energy Physics 99, Tampere, Finland, July 1999.
H1 Coll., Euro. Phys. J. [**C11**]{}, 447 (1999).
OPAL Coll. (G. Abbiendi [*et al.*]{}), CERN-EP-99-091, hep-ex/9908007.
DELPHI Coll. (P. Abreu [*et al.*]{}), Phys. Lett. [**B446**]{}, 62 (1999).
ALEPH Coll. (R. Barate [*et al.*]{}), Eur. Phys. J. [**C12**]{}, 183 (2000); L3 Coll. (M. Acciarri [*et al.*]{}), Phys. Lett. [**B433**]{}, 163 (1998); OPAL Coll. (G. Abbiendi [*et al.*]{}), Eur. Phys. J. [**C6**]{}, 1 (1999).
S. Abdullin and F. Charles, Phys. Lett. [**B464**]{}, 223 (1999); B. Dion, L. Marleau, G. Simon, and M. de Montigny, Eur. Phys. J. [**C2**]{}, 497 (1998); O. Eboli, R. Funchal, and T. Lungov, Phys. Rev. [**D57**]{}, 1715 (1998); J. Montalvo, O. Eboli, M. Magro, and P. Mercadante, Phys. Rev. [**D58**]{}, 095001 (1998).
O. Eboli and T. Lungov, hep-ph/9911292.
I. Antoniadis, J. Ellis, J. Hagelin, and D. Nanopoulos, Phys. Lett. [**194**]{}, 231 (1987).
---
abstract: 'Fabrication of freestanding or supported metal atom wires may offer unprecedented opportunities for investigating exotic behaviors of one-dimensional systems, including the possible existence of non-Fermi liquids. Many recent efforts have been devoted to forming different kinds of metal atom wires, either freestanding, using novel techniques such as the mechanical break junction, or deposited on substrates via self-assembly, with a focus on their mechanical, chemical and electronic properties. Atom wires of various lengths are obtained during fabrication, and their size distributions have been extensively analyzed, exhibiting diverse features. Although several factors such as strain and substrate effects have been invoked to interpret these phenomena, the intrinsic stability of the atom wires themselves has largely been ignored. Using density functional theory calculations, we present a thorough study of freestanding metal atom wires, including *s*, *sd* and *sp* electron prototypes, to examine the size effect on their stability. We find that the total energy of all systems oscillates with wire length, which clearly indicates the existence of preferred lengths. With increasing wire length, the *s* electron system shows an even-odd oscillation in the stability, superposed on a $a/x+b/x^2$ trend, due to electron pairing and one-dimensional quantum confinement. The *sd* electron systems show a similar oscillation with wire length, although *s-d* hybridization is present. In *sp* electron systems, oscillations beyond the simple even-odd pattern appear because unpaired *p* orbitals result in a nontrivial filling rule. Our findings clearly demonstrate that the electronic contribution is critical to the stability of freestanding atom wires and is expected to remain dominant even when atom wires are deposited on substrates or under strain. This study sheds light on the formation of metal atom wires and helps in understanding relevant phenomena.'
author:
- Haiping Lan
- Ping Cui
- 'Jun-Hyung Cho'
- Qian Niu
- Jinlong Yang
- Zhenyu Zhang
title: Quantum Size Effect and Electronic Stability of Freestanding Metal Atom Wires
---
INTRODUCTION
============
As a model of one-dimensional (1-D) systems, metal atom wires have attracted enormous attention as a platform for investigating exotic 1-D behaviors, including the existence of non-Fermi liquids[@snijders10; @springborg07]. Various experimental techniques have been developed to fabricate different kinds of metal atom wires. There have been many reports that atom wires can be self-assembled or manipulated on semiconducting or metallic substrates, such as Au/Si(111), Au/Si(557), Ag/Si(5 5 12), Pb/Si(557), Ga/Si(100) and In/Si(111)[@snijders10; @springborg07; @erwin98; @segovia99; @kim07; @ahn02; @robinson02; @snijders05; @gonzale04; @nilius02]. Furthermore, freestanding atom wires have also been obtained via novel methods such as mechanical break junctions (MBJ)[@yanson98; @springborg07]. This progress has sparked diverse investigations of the properties and behaviors of metal atom wires. For example, Segovia *et al.* measured the band structure of Au/Si(557) by angle-resolved photoemission spectroscopy (ARPES), and suggested the existence of a Luttinger liquid[@segovia99]. A later temperature-dependent ARPES and scanning tunneling microscopy study showed that there is a symmetry-breaking metal transition in Au/Si(557) which can be interpreted as a traditional Peierls transition, precluding the formation of a Luttinger liquid at low temperature[@ahn03]. In addition, a recent measurement by electron energy loss spectroscopy revealed significant dynamic exchange-correlation effects on the 1-D plasmon despite its high electron density and large Fermi velocity[@nagao06]. Using the MBJ technique, Yanson *et al.* explored Au atom wires and found that a wire of 7-atom length can be formed which behaves as a perfectly quantized one-dimensional conductor[@yanson98]. All these investigations clearly demonstrate that metal atom wires are desirable workhorses for testing theoretical predictions for 1-D systems.
To explore these exotic properties of 1-D systems, longer or defect-free metal atom wires are quite critical. Therefore, understanding the formation mechanism of these wires should help to improve our ability to control the fabrication, for instance to form wires from different materials and to obtain longer ones. Smit *et al.* presented a systematic study of the 5*d* metals employing the MBJ technique, and suggested that the stronger bonding of low-coordination atoms with respect to the 4*d* metals is due to *sd* competition caused by relativistic effects[@smit01]. Moreover, subsequent theoretical simulations confirmed that 5*d* metal atom wires like Au and Pt have bonds much stronger than their bulk ones[@bahn01]. For supported atom wires, a highly anisotropic substrate serving as the growth template is very important. Usually, surfaces with periodic step structures, such as vicinal metal or semiconductor surfaces, are employed in molecular beam epitaxy to fabricate arrays of atom wires. Once adatoms diffuse anisotropically on these surfaces, well-ordered 1-D atom wires can self-assemble[@javorsky09; @oncel08].
Strictly speaking, no ideal 1-D systems such as infinite atom wires exist in experiments; only segments of different lengths are obtained during fabrication. Such wires of varying lengths have been observed in some heteroepitaxial self-assembled systems and MBJ experiments[@snijders10; @yanson98]. The length distribution can affect the phase transition temperature, change the conductivity, inhibit charge orders, and alter the effective dimensionality of the system. Thus, it is essential to understand the factors leading to different sizes and to control the wire lengths. So far, a few works have suggested that the length distribution generally depends on several factors including the deposition coverage, the substrate temperature, the adatom adsorption energy and surface defects[@albao05; @gambardella06; @stinchcombe08; @javorsky09; @tokar07; @tokar03; @kocan07]. For different metal atom wires, the length distributions show quite different behaviors. Some experimental works reported that the length distributions of Ga and In atom wires on the Si(100) surface decrease monotonically for various coverages and obey certain scaling relations[@albao05; @javorsky09]. Comparing with experimental data for Ag wires on the Pt(997) surface, Gambardella *et al.* examined the size distribution of atom wires in the framework of a 1-D lattice gas model, and found that the lengths obey a geometric distribution[@gambardella06]. This length distribution indicates that the binding of an atom to a wire is independent of the wire length, suggesting that only nearest-neighbor interactions account for its growth. A later work by Tokar *et al.* showed that incorporating additional interactions such as elastic strain and charge transfer yields more accurate fits[@tokar07]. In contrast, an extensive STM analysis by Crain *et al.* found that an oscillating length distribution is exhibited in Au/Si(553)[@crain06]. They ascertained that short-range interactions such as local rebonding of the surface result in a strong peak at a length of one atom, and found that even wire lengths are favored over odd lengths up to at least 16 atoms, which indicates that the quantum size effect plays an important role. This growth feature is analogous to the electronic growth of thin films[@zz98], implying that the pure electronic contribution can exert a fundamental impact on wire growth. Further theoretical analysis by Souza *et al.* suggested that a model based on the electronic structure alone can capture the even-odd oscillation, in qualitative agreement with the experimental length distribution[@souza08].
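As a simple illustration of the geometric distribution expected when the binding of an atom to a wire is independent of the wire length, the Python sketch below occupies the sites of a 1-D lattice at random and histograms the resulting segment lengths; the lattice size and coverage are arbitrary choices, and the toy model ignores diffusion, strain and any substrate effects.

```python
import numpy as np

# Toy check: if lattice sites are occupied independently with coverage theta
# (no length-dependent binding), wire-segment lengths follow the geometric
# distribution P(n) = (1 - theta) * theta^(n-1).  Lattice size and coverage
# are arbitrary illustrative choices.

rng = np.random.default_rng(0)
theta, L = 0.3, 400_000
occ = rng.random(L) < theta

lengths, run = [], 0
for site in occ:                 # collect the lengths of occupied segments
    if site:
        run += 1
    elif run:
        lengths.append(run)
        run = 0
if run:
    lengths.append(run)

lengths = np.asarray(lengths)
for n in range(1, 7):
    measured = np.mean(lengths == n)
    geometric = (1.0 - theta) * theta ** (n - 1)
    print(f"n = {n}: measured {measured:.4f}   geometric {geometric:.4f}")
```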
As revealed in both experiments and preliminary tight-binding calculations, the quantum size effect can dominate the growth of Au atom wires[@crain06; @souza08] and results in an even-odd oscillation. Apart from the Au system, little attention has been paid to other metals in this respect. Hence, it is interesting to explore how other metal atom wires are modulated by the quantum size effect and how their stabilities are affected. Our purpose is to present a systematic *ab initio* study of the quantum size effect in the stability of linear metal atom wires, with a special focus on its development with wire length. We choose several prototypes of *s*, *sd* and *sp* electron systems, namely Na, Ag, Au, Ga, In and Pb atom wires in freestanding form, since all of these atom wires have been obtained experimentally[@snijders10; @springborg07]. We hope our study will facilitate the understanding of the various size distributions even under the influence of substrates, strain and charge transfer. We find that the total energies of all systems oscillate with wire length, which clearly indicates the existence of preferred lengths. In particular, the *s* electron system shows an even-odd oscillation superposed on a $a/x+b/x^2$ trend in the stability as the length is increased, which results from the pairing of electrons and 1-D quantum confinement along the wire. Meanwhile, the *sd* electron systems show a similar oscillation with wire length although *s-d* hybridization occurs. In *sp* electron systems, oscillations beyond the simple even-odd pattern are exhibited, as unpaired *p* orbitals result in a nontrivial filling rule which almost washes out the effect of 1-D confinement because of their localized bonding characteristics.
The rest of this paper is organized as follows. The computational methods and numerical details are presented in Sec. II. Sec. III presents the results for the various metal atom wires, where the quantum size effect and electronic stability are studied in detail. A brief summary is then given in Sec. IV.
CALCULATION METHODS
===================
All calculations presented here were carried out with the projector augmented-wave (PAW) method[@paw] implemented in the Vienna *ab initio* simulation package (VASP)[@vasp1; @vasp2]. Based on density-functional theory (DFT), exchange and correlation effects were described by the generalized gradient approximation (GGA-PBE)[@pbe]. A plane-wave energy cutoff of 300.0 eV was used for all examined systems, which ensured that the total energy converged to within 0.01 eV per atom. In particular, we employed a small Gaussian broadening width of 0.01 eV to achieve integer occupation of states. Isolated atom wires were simulated in a rectangular box with a separation between periodic images larger than 10 Å, and the longest wire contained up to 20 atoms for each metal. For the corresponding infinite systems, we sampled the 1-D Brillouin zone with 40 Monkhorst-Pack grid points after carefully checking the convergence of the k-point sampling. Here, we adopted one atom per unit cell without explicit consideration of the Peierls effect, since further checks on the two-atom unit cell neither showed any apparent dimerization nor made any difference in the band structure, but merely folded the 1-D Brillouin zone. All geometric structures were relaxed using the conjugate gradient method until the residual force per atom was less than 0.01 eV/Å.
Since 1-D atom wires are likely to be metastable for all of the metals involved, we adopted initial configurations for the atom wires by setting the bond lengths to that of the infinite wire. Because of the fine interactions resulting from magnetic coupling in the *sd* and *sp* electron systems, it is impossible for us to conduct a thorough search over all combinations of initial atomic magnetic moments to obtain the local minimum, especially for a wire of tens of atoms. Although there are thus some ambiguities in the energy minima of the atom wires, these fine interactions are unlikely to yield major changes or to affect the dominant electronic interactions between atoms. We therefore used several simple combinations of initial atomic magnetic moments, such as $\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow$ (ferromagnetic), $\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow$ (antiferromagnetic), $\uparrow\uparrow\uparrow\downarrow\downarrow\downarrow$ (antiferromagnetic), $\uparrow\uparrow\downarrow\downarrow\uparrow\uparrow$, etc. for the 6-atom wire in the subsequent geometric relaxation, and chose the energy minimum among them. A similar choice of initial conditions for the magnetic moments has been discussed in previous work on transition metal atom wires examining the average cohesive energy at various lengths[@atca08]. Our results show that the energy uncertainty of the minimum is less than 0.1 eV (about 0.01 eV per atom), allowing us to draw the conclusions below.
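Purely as an illustration of this procedure, the following minimal Python helper enumerates trial initial-moment patterns for an $n$-atom wire; the moment magnitude, the naming and the particular set of patterns are hypothetical choices, and in practice each pattern would seed a separate spin-polarized relaxation.

```python
# Enumeration of trial initial magnetic-moment patterns for an n-atom wire.
# The moment magnitude mu and the particular set of patterns are hypothetical;
# each pattern would seed a separate spin-polarized relaxation.

def initial_moments(n, mu=1.0):
    """Return a dict of trial initial-moment lists for an n-atom wire."""
    fm = [mu] * n                                                  # uuuuuu
    afm_1 = [mu if i % 2 == 0 else -mu for i in range(n)]          # ududud
    afm_2 = [mu] * (n - n // 2) + [-mu] * (n // 2)                 # uuuddd
    paired = [mu if (i // 2) % 2 == 0 else -mu for i in range(n)]  # uudduu
    return {"FM": fm, "AFM-1": afm_1, "AFM-2": afm_2, "paired": paired}

for name, pattern in initial_moments(6).items():
    print(f"{name:7s}", pattern)
```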
To characterize the electronic stability of atom wires and the corresponding size effect, we use the cohesive energy[@zz98; @cho98],reading $$\begin{aligned}
E_c(n)=(E_t(n)-n\cdot E_t(1))/n,
\label{eq1}
\end{aligned}$$ to examine atom wires’ stability in different length, where $E_t(n)$ is the total energy of the atom wire of $n$ atoms. Thus, the larger the absolute value of cohesive energy is, the more stable the wire segment will be. This energy value purely reflects the contribution of electronic binding interaction between atoms. We then introduce the second difference of $E_c$,reading $$\begin{aligned}
d^2E(n)=E_c(n+1)+E_c(n-1)-2\cdot E_c(n),
\label{eq2}\end{aligned}$$ as the criterion for the stability of a $sp$ metal wire segment in analogy with that for a thin film[@zz98]. According to Eq.(\[eq2\]), a wire of $n$ atoms is stable when $d^2E(n)>0$ and unstable otherwise.
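As an illustrative example of how Eqs. (\[eq1\]) and (\[eq2\]) are applied, the short Python sketch below forms the cohesive energy and its second difference from a list of total energies; the energies used are made-up numbers that serve only to exercise the formulas.

```python
import numpy as np

# Illustration of Eqs. (eq1) and (eq2): the cohesive energy per atom and its
# second difference are formed from a list of total energies E_t(n);
# d2E(n) > 0 marks a stable ("magic") length.  The energies below are
# made-up numbers used only to exercise the formulas.

def cohesive_energy(E_t):
    """E_c(n) = (E_t(n) - n * E_t(1)) / n for n = 1..len(E_t)."""
    n = np.arange(1, len(E_t) + 1)
    return (np.asarray(E_t) - n * E_t[0]) / n

def second_difference(E_c):
    """d2E(n) = E_c(n+1) + E_c(n-1) - 2 E_c(n), defined for 2 <= n <= N-1."""
    E_c = np.asarray(E_c)
    return E_c[2:] + E_c[:-2] - 2.0 * E_c[1:-1]

E_t = [-1.0, -3.1, -5.0, -7.2, -9.1, -11.3]   # hypothetical totals (eV), n = 1..6
d2E = second_difference(cohesive_energy(E_t))
for n, val in enumerate(d2E, start=2):
    print(f"n = {n}: d2E = {val:+.3f} eV -> {'stable' if val > 0 else 'unstable'}")
```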
RESULTS AND DISCUSSIONS
=======================
Na atom wires
-------------
Firstly, we consider Na atom wires. So far, considerable attention on Na wires has been devoted to their electron transport properties, especially to quantum conductance measurements[@springborg07]. Yanson *et al.* experimentally studied the conductance through thin Na junctions via the MBJ technique, and found that the conductance histogram shows a typical oscillation in the measured counts due to the shell effect, a pure electronic effect resulting from size confinement[@yanson99]. Further theoretical works were then extended to the case of atom wires[@barnett97; @lee04; @havu02], and the calculated conductance showed an oscillation with wire length, which also depends strongly on the structure of the nanocontacts[@sim01; @havu02].
Since sodium has only one valence electron, the stability of atom wires is greatly enhanced once electrons pair up. The cohesive energy therefore shows an even-odd oscillation with length, as shown in Fig.\[fig01\](a). It is clear that the pairing of electrons is much more pronounced for shorter wires, and the associated energy gain is less than 0.01 eV when the length reaches 10 atoms or more. Thus, the even-odd oscillation would be smeared out for longer wires, which suggests that this quantum size effect cannot easily be observed in experiment, as a small amount of strain or charge transfer is expected to wash out the oscillation. In fact, the cohesive energy converges following a $a/x +b/x^2$ trend as the length increases, where $x$ is the wire length and $a$, $b$ are constants. This trend is simply caused by the quantum confinement along the wire direction, since the energy level $E_i$ of a 1-D square quantum well is inversely proportional to the square of its width $W$, i.e., $E_i\sim i^2/W^2$, as in the particle-in-a-box model. As a result, the total energy of $n$ electrons is the sum over the occupied energy levels and proportional to $n(n+1)(2n+1)/n^2$ given $W\sim n$. Therefore, the cohesive energy per electron follows the $a/x + b/x^2$ trend in this simple model. This remarkable agreement between the *ab initio* results and the particle-in-a-box model is clearly due to the delocalized and nondirectional character of the *s* orbitals of Na atom wires. The highest occupied wire state (HOWS) for the 6-atom wire is presented in Fig.\[fig01\](c), showing a typical $ss\sigma$ binding character with two nodes along the wire. Clearly, the number of nodes in the HOWS depends on the wire length, being $(n-1)/2$ for odd $n$ and $(n-2)/2$ for even $n$, as follows from the orthogonality relations between the wavefunctions.
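The following toy Python sketch, assuming spin-paired filling of the levels of a 1-D infinite well whose width scales with the number of atoms, illustrates both the even-odd alternation and the smooth $a/x+b/x^2$ background discussed above; energies are in arbitrary units and the model is purely illustrative.

```python
import numpy as np

# n electrons in a 1-D infinite well of width W ~ n, filled with spin pairing;
# levels E_i = i^2 / n^2 in units of the single-atom level.  The per-electron
# energy is compared with a smooth c0 + a/n + b/n^2 background so that the
# even-odd alternation appears in the residuals.

def energy_per_electron(n):
    """E_tot(n)/n with double occupancy of the lowest levels."""
    full = n // 2
    E = 2.0 * np.sum(np.arange(1, full + 1) ** 2)
    if n % 2:                       # one unpaired electron in level full + 1
        E += (full + 1) ** 2
    return E / n**2 / n

ns = np.arange(2, 21)
e = np.array([energy_per_electron(n) for n in ns])

# least-squares fit of the smooth background c0 + a/n + b/n^2
A = np.vstack([np.ones(len(ns)), 1.0 / ns, 1.0 / ns**2]).T
c0, a, b = np.linalg.lstsq(A, e, rcond=None)[0]
for n, val in zip(ns, e):
    resid = val - (c0 + a / n + b / n**2)
    print(f"n = {n:2d}: e = {val:.4f}, residual = {resid:+.5f}")
```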
Ag and Au atom wires
--------------------
Recently, both Ag and Au atom wires were fabricated experimentally. It was reported that ultrathin silver wires with a width of 0.4 nm were successfully synthesized inside nanotubes[@hong01]. MBJ techniques and atom manipulation by STM were extended to both the Ag and Au systems for fabricating nanowires[@smit01; @yanson98; @rodrigues02; @sperl08; @nilius02]; in this case, the structure of the nanowire is likely to be a linear atom wire. In addition, some experimental efforts have been devoted to depositing Ag atom wires on vicinal Si surfaces[@ahn02]. All this experimental progress encouraged extensive theoretical studies of various aspects of Ag atom wires, especially their electronic properties and quantum transport[@springborg07; @springborg03; @riberio03]. With regard to Au atom wires, there have been numerous works exploring their various properties since the first experimental fabrication[@yanson98; @ohnishi98; @snijders10; @springborg07]. Several studies of their transport properties by MBJ techniques showed that Au atom wires can survive at greater lengths than the Ag system owing to stronger *sd* competition[@yanson98; @smit01]. Many efforts have also focused on possible realizations of Luttinger liquids and other exotic 1-D behaviors by depositing Au atom wires on various vicinal surfaces such as Si(553), Si(557), Si(337) and Ge(100)[@snijders10; @crain06; @robinson02; @crain05]. These works boosted several seminal explorations such as end modes of 1-D systems and the electronic growth of atom wires[@crain05; @yan07; @crain06]. Experimentally, both Ag and Au atom wires showed that size effects can strongly modulate their electronic properties and stability[@sperl08; @crain06]. Silver has a closed 4*d* shell and behaves as a single-valence system. As displayed in Fig.\[fig01\](b), the cohesive energy of Ag atom wires shows an even-odd oscillation similar to that of Na atom wires, with values around -1.0 eV. Since some *sd* hybridization is likely to enhance the binding interaction between atoms, Ag atom wires show a slightly larger amplitude of the even-odd oscillation in comparison with Na atom wires. This enhanced interaction should make the corresponding quantum size phenomena observable in real experiments. Indeed, recent STM measurements have revealed that resonances of unoccupied states show a strong size dependence of their energies in the Ag/Ag(111) system[@sperl08], which suggests the size effect survives even under the influence of a substrate. Therefore, we can expect the corresponding quantum size effects to be exhibited in some self-assembled Ag wires. We present the HOWS of the 6-atom wire in Fig.\[fig01\](d), which clearly shows a slightly different character from that of the Na atom wire due to some *sd* hybridization. Meanwhile, the number of nodes of the HOWS is the same as in the Na system.
In contrast to Ag, relativistic effects are quite large in the Au system, resulting in strong *sd* competition. Consequently, the *s* shell is contracted and the *d* electrons move up in energy in comparison with Ag. This electronic feature leads to a stronger binding interaction in Au wires, and the cohesive energy approaches -1.5 eV as the length is increased, as shown in Fig.\[fig02\](a). When the length is shorter than 13 atoms, the cohesive energy shows almost the same behavior as that of the Na and Ag wires. However, a crossover occurs at $n=13$, leading to a transition of the oscillation. This behavior can be interpreted from the band structure of the infinite wire, as shown in Fig.\[fig02\](b) and (c). Due to the axial symmetry, the *d* bands split into three branches, namely $d_{z^2}$, $d_{x^2-y^2}$/$d_{xy}$ and $d_{xz}$/$d_{yz}$. The four band branches below the Fermi level are shown in Fig.\[fig02\](c). At the $\Gamma$ point, the lowest band branch is mostly of $d_{z^2}$ character, and it disperses across the 1-D Brillouin zone, approaching the Fermi level at the $K$ point. The $d_{x^2-y^2}$/$d_{xy}$ branch, in contrast, is almost flat across the 1-D Brillouin zone. This dispersionless character indicates that there is no bonding interaction between the $d_{x^2-y^2}$/$d_{xy}$ electrons of neighboring atoms, which remain localized around the Au atoms. The next band branch, crossing the Fermi level near the middle of the 1-D Brillouin zone, is mostly of $s$ character. The last branch, $d_{xz}$/$d_{yz}$, lies in the vicinity of the Fermi level at the $\Gamma$ point. At $\Gamma$ and $K$, the two special symmetry points of 1-D systems, the band dispersions are flat, leading to the very sharp van Hove singularities of a 1-D system, which result in sharp peaks in the density of states (DOS), as displayed in Fig.\[fig02\](b). Since the bands mostly have *d* character at the edges, the exchange energy gain can be rather large when a band spin-splits such that the band edge of one spin channel moves above the Fermi level while that of the other moves below it. Thus, if a band edge ends up sufficiently near the Fermi level, signaled by sharp DOS peaks, a magnetic moment can be predicted [@delin03], which is known as the Stoner instability. This instability depends closely on the Au bond length, and a slightly elongated (shortened) bond can eliminate (enhance) the related ferromagnetic behavior, as also reported in previous works[@delin03]. Such behavior means that the bond lengths of Au wires significantly affect the positions of the band edges. For finite wires of different lengths, the bonds oscillate around that of the infinite wire. Correspondingly, their HOWSs show different characters. When the wire length is less than 4 atoms, the bonds are shorter than that of the infinite wire (2.59 Å), and the HOWSs are of $d_{z^2}$ character, partially with $s$ contribution. The 4-atom wire has two bonds shorter than 2.59 Å, and a large part of its HOWS also shows $d_{z^2}$ character. For wires longer than 4 atoms, the central bonds are slightly longer than 2.59 Å, and the HOWSs accordingly show $d_{xz}$/$d_{yz}$ character. When the Au atom wire grows to a certain length, the exchange energy gain can thereby split the spin channels of the $d_{xz}$/$d_{yz}$ orbitals and cause a crossover in the stability of the Au wire at $n=13$, as shown in Fig.\[fig02\](a). As a result, the corresponding HOWSs change to $s$ character, in partial hybridization with the $d_{z^2}$ orbitals.
However, previous reports indicate that there is only one conduction channel around the Fermi level[@springborg07], which suggests that the emergence of $d_{xz}/d_{yz}$ band edges around the Fermi level of the infinite wire is problematic and may be the origin of the above crossover of the cohesive energy. Moreover, the strongly localized $d_{x^2-y^2}$/$d_{xy}$ orbitals also mean that the relevant self-interaction errors (SIE) cannot be ignored for 1-D wires[@wierzbowska05]. We therefore employed the GGA+U scheme to shift the *d* bands and remove the related SIE as a check. As shown in Fig.\[fig02\](d) and Fig.\[fig02\](e), even a small $U_{eff}$=1.0 eV can eliminate the Stoner instability, as no sharp DOS peaks emerge around the Fermi level, and the crossover at $n=13$ atoms vanishes at the same time. Correspondingly, the HOWSs of wires of different lengths bear similar characteristics, with contributions from hybridized $s$ and $d_{z^2}$ states. The HOWSs of the 6-atom wire from the GGA and GGA+U calculations are displayed in Fig.\[fig02\](d) and (h), respectively. The GGA result shows a $d_{xz}$/$d_{yz}$ character, while the GGA+U result shows an $s+d_{z^2}$ hybridized orbital. These results clearly indicate that an improper description of the *d* bands may lead to the ferromagnetic behavior of the infinite wire and to the crossover in the stability of the finite wires. As revealed in Fig.\[fig02\](e), the cohesive energy also follows the $a/x+b/x^2$ trend well. Thus, the relativistic effects do not make much difference to the stability trend of the Au wires compared with the Ag wires, but they do enhance the binding interactions between Au atoms. This stronger binding makes the quantum size effects of Au wires more easily observable in experiments, even under the influence of charge transfer or strain effects. As reported in the experimental work by Crain *et al.*, an even-odd oscillation was observed in the Au/Si(553) system at least up to 16 atoms[@crain06]. Many relevant properties due to quantum size effects of Au atom wires are therefore predictable both theoretically and experimentally.
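To illustrate the $a/x+b/x^2$ trend quoted above, here is a minimal fitting sketch in Python; it assumes the smooth part of the per-atom cohesive energy can be modelled as $E_c(n)\approx E_\infty + a/n + b/n^2$, and the energy values below are synthetic placeholders rather than the calculated data.

```python
import numpy as np

# Placeholder cohesive energies per atom (eV) for wire lengths n = 2..12 (not the computed data):
n = np.arange(2, 13)
E = -1.0 + 0.45 / n - 0.30 / n**2 + 0.02 * (-1.0) ** n   # smooth trend plus an even-odd wiggle

# E is linear in (1, 1/n, 1/n^2), so an ordinary polynomial fit in u = 1/n suffices.
u = 1.0 / n
b, a, E_inf = np.polyfit(u, E, 2)
print("E_inf = %.3f eV, a = %.3f, b = %.3f" % (E_inf, a, b))
```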
Ga, In and Pb Atom Wires
-------------------------
Bulk Ga, In and Pb are typical *p*-electron metallic systems, and much effort has been devoted to forming the corresponding atom wires on different vicinal surfaces in order to investigate the effect of *p* electrons in 1-D systems[@snijders10; @snijders05; @gonzale04; @albao05; @javorsky09; @kim07]. These atom wires show some new behaviors in comparison with *s* or *sd* systems. Typically, the length distributions of Ga and In atom wires on the Si(100) surface decrease monotonically for various coverages, and show some scaling relations[@javorsky09; @albao05]. Thus, it is of great interest to explore how the purely electronic contributions affect their length distributions.
We focus on Ga and In atom wires first. Both gallium and indium have three valence electrons: two *s* electrons and one *p* electron in the outermost shell. Because of the large energy difference between the *s* and *p* electrons, little *s-p* hybridization occurs in these two elements, as revealed by the bandstructures of the infinite wires in Fig.\[fig03\](a) and (d), and the stability of the corresponding atom wires is thus dominated by the *p* electrons. The bandstructures of Ga and In atom wires are quite similar, particularly for the valence bands, and both have three band branches below the Fermi level. In both cases, the lowest branch is of *s* character. The next branch, dominated by $p_{x}$/$p_{y}$ electrons at the $\Gamma$ point, crosses the Fermi level, and the branch of $p_z$ character also crosses the Fermi level near the $K$ point. Hence, the combination of the $p_z$ orbitals of the atom wire forms $pp\sigma$ bonds, while the $p_{x}$/$p_y$ orbitals contribute to $pp\pi$ states. This bonding property results in complicated behaviors of the atom wires. As shown in Fig.\[fig03\](b) and (e), the cohesive energies of Ga and In atom wires both show oscillations beyond the even-odd pattern. Furthermore, the second difference of the cohesive energy, $d^2E$, is also shown, and gives a clearer picture of the stability of the atom wires. It shows that Ga atom wires of 2, 3, 4, 6, 8, 12, 16 and 18 atoms are stable segments, and In atom wires give a similar pattern, favoring 2, 3, 4, 6, 9, 12, 14 and 17 atoms. Since there is an uncertainty of about 0.01 eV per atom in determining a local minimum for *sp* systems, segments such as the 10-atom and 14-atom Ga wires or the 11-atom and 15-atom In wires are probably also stable; in other words, $d^2E$ of these wire segments is likely to be positive. Therefore, $d^2E$ of Ga wires would display an even-odd oscillation around zero beyond 4 atoms, which implies that the 6-atom, 8-atom, 12-atom, 16-atom, 18-atom, and probably the 10-atom and 14-atom wires, are magically stable among the others. Meanwhile, $d^2E$ of In wires shows a somewhat more complicated oscillation around zero, and also exhibits some magic sizes such as 6, 9, 14 and 17 atoms. This may confound experimental observations once environmental factors such as charge transfer, which can lead to different size distributions, are taken into account. Although gallium and indium have the same valence configuration, the slight difference in nuclear radius should be responsible for this distinction in stability; indeed, gallium and indium have different crystal structures for the same reason. The HOWSs of the 6-atom Ga and In wires are presented in Fig.\[fig03\](c) and (f), respectively, clearly showing a $pp\pi$ character.
Like gallium and indium, lead also has little *sp* hybridization when forming atom wires, so that only its two *p* electrons contribute to the binding interactions, as revealed by the bandstructure of the infinite wire shown in Fig.\[fig04\](a). The valence bands are also similar to those of the Ga and In wires, with three band branches below the Fermi level. The *s* band, the lowest branch, lies about 3.5 eV below the $p_z$ branch at the $K$ point, while both the $p_x$/$p_y$ and $p_z$ branches cross the Fermi level in the middle of the 1-D Brillouin zone. Due to the axial symmetry, the $p_{x/y}$ orbitals are two-fold degenerate and contribute to the $pp\pi$ states of the atom wire, while the combination of the $p_z$ orbitals forms $pp\sigma$ bonds. Such binding behavior is quite similar to that of the Ga and In atom wires, but now involves two *p* electrons per atom to be filled. Therefore, the stability of Pb atom wires also presents an oscillation beyond the even-odd feature, as displayed in Fig.\[fig04\](b). Based on the second difference $d^2E$ in Fig.\[fig04\](b), we find that the stable systems are those of 2, 3, 4, 6, 7, 10, 11, 13, 15 and 18 atoms. Meanwhile, the relatively large amplitude of the oscillation suggests that this stability trend survives even under environmental influences, leading to some preferred lengths in fabrication. As for the Pb dimer, two *p* electrons occupy the $pp\sigma$ bond and the other two go into the $pp\pi$ bond. Interestingly, the HOWS of the Pb dimer is the $pp\sigma$ bonding state, while that of the other lengths is the relevant $pp\pi$ state. As for the 3-atom wire, two electrons go into the $pp\sigma$ bond, and the remaining four *p* electrons then completely fill the two-fold degenerate $pp\pi$ bonding state. This full occupation of bonding states makes the 3-atom wire inert, and explains why it is particularly stable among the others. When the atom wire becomes longer, more complicated electronic states have to be filled, giving rise to this specific behavior. Finally, the HOWS of the 6-atom Pb wire is given in Fig.\[fig04\](c), showing a typical $pp\pi$ character with a node in the middle of the atom wire.
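As a concrete illustration of how the second difference is read off, here is a minimal Python sketch; it assumes the convention $d^2E(n)=E_c(n+1)+E_c(n-1)-2E_c(n)$, with positive values marking the locally stable ("magic") lengths, and the cohesive energies below are placeholders rather than the calculated values.

```python
# Hypothetical per-atom cohesive energies E_c(n) in eV (placeholders, not the computed data):
E = {2: -1.40, 3: -1.55, 4: -1.50, 5: -1.52, 6: -1.58, 7: -1.53, 8: -1.54}

# Second difference of the cohesive energy; a positive value means length n is a local
# stability maximum relative to its neighbours n-1 and n+1.
d2E = {n: E[n + 1] + E[n - 1] - 2 * E[n] for n in E if n - 1 in E and n + 1 in E}
stable = sorted(n for n, v in d2E.items() if v > 0)
print(d2E)
print("locally stable lengths:", stable)
```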
Results on free-standing atom wires certainly oversimplify the stability trends of real 1-D atom wires. The adsorption of atoms on metallic or semiconducting substrates should lead to some hybridization of the atomic orbitals with the electronic states of the substrate. In fact, the interactions between single atoms in 1-D wires result both from the direct overlap of the atomic wavefunctions and from substrate-mediated mechanisms, such as interface Friedel oscillations[@zz98]. Thus, the tradeoff between these two interactions would result in different length distributions in experimental observations. As for the Au atom wires, the strong *sd* hybridization enhances the binding interaction between Au atoms, leading to a pronounced even-odd oscillation of the stability even for long wires. Consequently, the relevant quantum size effect was observed in the Au/Si(553) system, while few works touch this issue for Na or Ag wires[@crain06]. This behavior implies that atom wires with relatively large oscillation amplitudes, like the Ga and Pb wires, are likely to show quantum size phenomena even under the influence of the substrate or of strain effects. Therefore, more work is expected to explore the relevant quantum size effects in atom wires such as Ga and Pb wires.
CONCLUSION
==========
In conclusion, we have presented a systematic study of the electronic stability and quantum size effects of 1-D metal atom wires by extensive *ab initio* calculations. The results show that the cohesive energy of Na atom wires presents a typical even-odd oscillation with an $a/x+b/x^2$ trend, which can be ascribed to typical 1-D quantum confinement as well as to the pairing-up of electrons. Meanwhile, Ag atom wires show a similar behavior but with a stronger binding interaction due to a certain amount of *sd* hybridization. The good agreement with the $a/x+b/x^2$ trend suggests that the hybridized states of Ag are still quite delocalized and non-directional. As for Au atom wires, short ones of fewer than 13 atoms show a similar even-odd oscillation. Once the length increases to 13 atoms, a crossover occurs. Such behavior is due to the emergence of $d_{xz}/d_{yz}$ states in the bonding interactions, which also results in a ferromagnetic property of the infinite wire caused by the 1-D van Hove singularities around the Fermi level. Further GGA+U calculations show that even a small $U_{eff}=1.0$ eV can eliminate the relevant crossover at $n=13$ and the ferromagnetic behavior of the infinite Au wire, which implies that a proper description of the *d* states can significantly affect the behavior of the Au wires. The large *sd* hybridization due to the relativistic effect enhances the binding between the Au atoms, so the relevant even-odd oscillation can be observed experimentally. With respect to the *sp* systems examined here, we find that the stabilities of Ga, In and Pb atom wires are all dominated by the binding interactions between the *p* electrons because of the small hybridization between the *s* and *p* electrons. Thus, the directional binding behaviors of the *p* electrons, e.g., $pp\sigma$ from the combination of $p_z$ orbitals and $pp\pi$ from the $p_x/p_y$ orbitals, result in intricate oscillations in the stability trends and suggest that the filling status of these electronic states determines the dominant behavior.
This work was supported in part by NSF (Grant No. DMR-0906025). The calculations were performed at NERSC of DOE.
[99]{} P. C. Snijders and H. H. Weitering, Rev.Mod.Phys. **82**,307 (2010). M. Springborg and Y. Dong, *Metallic Chains/Chains of Metals*, Elsevier(2007). S. C. Erwin and H. H. Weitering, Phys.Rev.Lett. **81**,2296 (1998). P. Segovia, D. Purdie, M. Hengsberger, and Y. Baer, Nature **402**,504(1999). K. S. Kim, H. Morikawa, W. H. Choi, and H. W. Yeom, Phys.Rev.Lett. **99**,196804(2007). I. K. Robinson, P. A. Bennett, and F. J. Himpsel, Phys.Rev.Lett. **88**,096104(2002). P.C. Snijders, S. Rogge, C. González, R. Pérez, J. Ortega, F. Flores and H. H. Weitering, Phys.Rev.B **72**, 125343(2005). C. González, P.C. Snijders, J. Ortega, R. Pérez, F. Flores, S. Rogge and H. H. Weitering, Phys.Rev.Lett **93**,126106(2004). N. Nilius, T. M. Wallis and W. Ho, Science, **297**,1853(2002). J.R. Ahn, Y.J. Kim, H.S. Lee, C.C. Hwang, B.S. Kim, and H.W. Yeom, Phys.Rev.B **66**, 153403(2002). A.I. Yanson, G.Rubio Bollinger, H. E.van den Brom, and N. Agraït and J.M. van Ruitenbeek, Nature **395**,783(1998). J. R. Ahn, H. W. Yeom, H. S. Yoon, and I.-W. Lyo, Phys.Rev.Lett. **91**,196403(2003). T. Nagao, S. Yaginuma, T. Inaoka, and T. Sakurai, Phys.Rev.Lett. **97**,116802(2006). R. H. M. Smit, C. Untiedt, A. I. Yanson, and J. M. van Ruitenbeek, Phys.Rev.Lett. **87**.266102(2001). S. R. Bahn and K. W.Jacobsen, Phys.Rev.Lett. **87**.266101(2001). N. Oncel, J. Phys.:Condens.Matter **20**,393001(2008). J. Javorský, M. Setvín, I. Oštádal, P. Sobo, and M. Kotrla, Phys.Rev.B **79**,165424(2009). M. A. Albao, M. M. R. Evans, J. Nogami, D. Zorn, M. S. Gordon, and J. W. Evans, Phys.Rev.B **72**,035426(2005). R. B. Stinchcombe and F. D. A. Aarão Reis, Phys.Rev.B **77**,035406(2008). V. I. Tokar and H. Dreyssé, Phys.Rev.B **76**,073402(2007). P. Gambardella, H. Brune, K. Kern, and V. I. Marchenko, Phys.Rev.B **73**,245425(2006). V. I. Tokar and H. Dreyssé, Phys.Rev.E **68**,011601(2003). P. Kocán, P. Sobotík, I. Oštádal, J. Javorský,and M. Setvín, Surf.Sci. **601**, 4506(2007). J. N. Crain, M. D. Stiles, J. A. Stroscio, and D. T. Pierce, Phys.Rev.Lett. **96**,156801(2006). A. M. Souza and H. Herrmann, Phys.Rev.B **77**,085416(2008). C. Ataca, S. Cahangirov, E. Durgun, Y.-R. Jang, and S. Ciraci, Phys.Rev.B **77**,214413(2008). Z. Zhang, Q. Niu, and C. Shih, Phys.Rev.Lett. **80**,5381(1998). J.-H. Cho, Q. Nin, and Z. Zhang, Phys.Rev.Lett. **80**, 3582(1998). P.E. Blöchl, Phys.Rev.B **50**, 17953 (1994). G. Kresse and J. Furthmüller, Phys.Rev.B **54**,11169 (1996). G. Kresse and D. Joubert, Phys.Rev.B **59**,1758 (1999). J. P. Perdew, K. Burke, and M. Ernzerhof, Phys.Rev.Lett. **77**,3865(1996). A.I. Yanson, I.K. Yanson, and J.M. van Ruitenbeek, Nature **400**,144(1999). R.N. Barnett and U. Landman, Nature **387**, 788(1997). Y.J. Lee, M. Brandbyge, M.J. Puska, J. Taylor, K. Stokbro, and R.M. Nieminen, Phys. Rev. B **69**, 125409 (2004). H.-S.Sim, H.-W.Lee, and K.J.Chang, Phys.Rev.Lett. **87**,096803(2001). P. Havu, T. Torsti, M.J. Puska, and R.M. Nieminen, Phys.Rev.B **66**, 075401(2002). B.H. Hong, S.C. Bae, C.-W. Lee, S. Jeong, and K.S. Kim, Science **294**, 348(2001). V. Rodrigues, J. Bettini, A.R. Rocha, L.G.C. Rego, and D. Ugarte, Phys.Rev.B **65**,153402 (2002). A. Sperl, J. Kröger, N. Néel, H. Jensen, R. Berndt, A. Franke, and E. Pehlke, Phys.Rev.B, **77**,085422(2008). M. Springborg and P. Sarkar, Phys.Rev.B **68**, 045430(2003). F.J. Ribeiro and M.L. Cohen, Phys.Rev.B **68**, 035423(2003). H. Ohnishi, Y. Kondo, and K. Takayanagi, Nature **395**, 780(1998). J. N. Crain and D. T. Pierce, Science **307**,5710(2005). J. 
Yan, Z. Yuan, and S. W. Gao, Phys. Rev. Lett. **98**,216602(2007). A. Delin and E. Tosatti, Phys.Rev.B **68**,144434(2003). M. Wierzbowska, A. Delin, and E. Tosatti, Phys.Rev.B **72**,035439(2005).
![(Color online)\[fig01\] The cohesive energy $E_c$ of (a) Na atom wires and (b) Ag atom wires versus length. Fitting data are also shown in (a) and (b), respectively. HOWSs of (c) the 6-atom Na wire and (d) the 6-atom Ag wire are displayed. ](fig013.pdf){width="64.00000%"}
![(Color online)\[fig02\] (a) The cohesive energy $E_c$ of Au atom wires versus length; (b) and (c) are the density of states (DOS) and the bandstructure of the infinite Au wire, respectively; (d) is the HOWS of the 6-atom wire from the GGA calculation. (e) The cohesive energy $E_c$ of Au atom wires versus length from GGA+U calculations with $U_{eff}=U-J=1.0$ eV; (f) and (g) are the corresponding DOS and bandstructure of the infinite Au wire. The GGA+U HOWS of the 6-atom wire is given in (h). In both cases, the Fermi level is the zero energy. Fitting data is also shown in (e). ](fig023.pdf){width="96.00000%"}
![(Color online)\[fig03\] (a) and (d): bandstructures of the infinite Ga atom wire and In atom wire, respectively; the Fermi level is set to the zero energy. (b) is the cohesive energy $E_c$ of Ga atom wires versus length while (e) is that of In atom wires, and the second difference of $E_c$ is presented together. HOWSs of the 6-atom Ga wire and In wire are given in (c) and (f), respectively. ](fig033.pdf){width="72.00000%"}
![(Color online)\[fig04\] (a) The bandstructure of the infinite Pb wire; the Fermi level is set to the zero energy. (b) The cohesive energy $E_c$ of Pb atom wires versus length, together with the second difference of $E_c$. (c) The HOWS of the 6-atom Pb wire. ](fig043.pdf){width="72.00000%"}
---
abstract: |
The antiferromagnetic $q$-state Potts model is perhaps the most canonical model for which the uniqueness threshold on the tree is not yet understood, largely because of the absence of monotonicities. Jonasson established the uniqueness threshold in the zero-temperature case, which corresponds to the $q$-colourings model. In the permissive case (where the temperature is positive), the Potts model has an extra parameter $\beta\in (0,1)$, which makes the task of analysing the uniqueness threshold even harder and much less is known.
In this paper, we focus on the case $q=3$ and give a detailed analysis of the Potts model on the tree by refining Jonasson’s approach. In particular, we establish the uniqueness threshold on the $d$-ary tree for all values of $d\geq 2$. When $d\geq3$, we show that the 3-state antiferromagnetic Potts model has uniqueness for all $\beta\geq 1-3/(d+1)$. The case $d=2$ is critical since it relates to the 3-colourings model on the binary tree ($\beta=0$), which has non-uniqueness. Nevertheless, we show that the Potts model has uniqueness for all $\beta\in (0,1)$ on the binary tree. Both of these results are tight since it is known that uniqueness does not hold in the complementary regime.
Our proof technique gives for general $q>3$ an analytical condition for proving uniqueness based on the two-step recursion on the tree, which we conjecture to be sufficient to establish the uniqueness threshold for all non-critical cases ($q\neq d+1$).
author:
- 'Andreas Galanis, Leslie Ann Goldberg, and Kuan Yang'
bibliography:
- '\\jobname.bib'
date: 'July 26, 2018'
title: 'Uniqueness for the 3-State Antiferromagnetic Potts Model on the Tree[^1] '
---
Introduction
============
The $q$-state Potts model is a fundamental spin system from statistical physics that has been thoroughly studied in probability and computer science. The model has two parameters $q$ and $\beta$, where $q\geq 3$ is the number of the states, and $\beta>0$ is a parameter which corresponds to the temperature of the system[^2]. The set of states is given by $[q]=\{1,\hdots,q\}$ and we will usually refer to them as *colours*. The case $q=2$ is known as the Ising model, and the Potts model is the generalisation of the Ising model to multiple states. When $\beta=0$, the Potts model is known as the $q$-colourings model.
A *configuration* of the Potts model on a finite graph $G=(V,E)$ is an assignment $\sigma: V\rightarrow [q]$. The *weight* of the configuration $\sigma$ is given by $w_G(\sigma)=\beta^{m(\sigma)}$, where $m(\sigma)$ denotes the number of monochromatic edges in $G$ under the assignment $\sigma$. The Gibbs distribution of the model, denoted by $\Pr_G[\cdot]$, is the probability distribution on the set of all configurations, where the probability mass of each configuration $\sigma$ is proportional to its weight $w_G(\sigma)$. Thus, for any $\sigma:V\rightarrow[q]$ it holds that $$\Pr_G[\sigma] = w_G(\sigma)/Z_G,$$ where $Z_G = \sum_{\sigma: V\to[q]} w_G(\sigma)$ is the so-called *partition function*. Note that in the case $\beta=0$ the Gibbs distribution becomes the uniform distribution on the set of proper $q$-colourings of $G$. The Potts model is said to be *ferromagnetic* if $\beta>1$, which means that more likely configurations have many monochromatic edges. It is said to be *antiferromagnetic* if $\beta<1$, which means that more likely configurations have fewer monochromatic edges. This paper is about the antiferromagnetic case.
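For concreteness, the following short Python sketch computes these quantities by brute force on a toy graph (the graph, $q$ and $\beta$ below are illustrative choices, not taken from the paper):

```python
import itertools

def potts_weights(vertices, edges, q, beta):
    """Gibbs weights w_G(sigma) = beta^{#monochromatic edges} and the partition function Z_G."""
    weights = {}
    for sigma in itertools.product(range(q), repeat=len(vertices)):
        colouring = dict(zip(vertices, sigma))
        mono = sum(1 for (u, v) in edges if colouring[u] == colouring[v])
        weights[sigma] = beta ** mono
    Z = sum(weights.values())
    return weights, Z

# Example: the triangle graph with q = 3 colours and beta = 1/2 (antiferromagnetic).
weights, Z = potts_weights([0, 1, 2], [(0, 1), (1, 2), (0, 2)], q=3, beta=0.5)
print(Z, weights[(0, 0, 0)] / Z)   # Z_G and the probability that all three vertices get the first colour
```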
For spin systems like the Ising model and the Potts model, one of the most well-studied subjects in statistical physics is the so-called uniqueness phase transition on lattice graphs, such as the grid or the regular tree. Roughly, the uniqueness phase transition on an infinite graph captures whether boundary configurations can exert non-vanishing influence on far-away vertices. In slightly more detail, for a vertex $v$ and an integer $n$, fix an arbitrary configuration on the vertices that are at distance at least $n$ from $v$. Does the influence on the state of $v$ coming from the boundary configuration vanish when $n\rightarrow \infty$? If yes, the model has *uniqueness*, and it has *non-uniqueness* otherwise.[^3] (See Definition \[def:uniqueness\] for a precise formulation in the case of the tree.) Note that uniqueness is a strong property, which guarantees that the effect of fixing an arbitrary boundary configuration eventually dies out. As an example, for the antiferromagnetic Ising model on the $d$-ary tree it is well-known that uniqueness holds iff $\beta\geq \tfrac{d-1}{d+1}$; the value $\tfrac{d-1}{d+1}$ is a point of a phase transition and is also known as the *uniqueness threshold* because it is the point at which the uniqueness phase transition occurs.
The uniqueness phase transition plays a prominent role in connecting the efficiency of algorithms for sampling from the Gibbs distribution to the properties of the Gibbs distribution itself. One of the first examples of such a connection is in the analysis of the Gibbs sampler Markov chain for the Ising model on the 2-dimensional lattice, where the uniqueness phase transition marks the critical value of $\beta$ where the mixing time switches from polynomial to exponential (see [@martinelli2; @martinelli1; @Thomas]).
From a computational complexity perspective, it is the uniqueness phase transition on the regular tree which is particularly important. For many 2-state spin models, including the antiferromagnetic Ising model and the hard-core model, it has been proved [@Sinclair; @SlySun; @GSVi; @LiLuYin] that the uniqueness phase transition on the tree coincides with a more general computational transition in the complexity of approximating the partition function or sampling from the Gibbs distribution. In the case of the antiferromagnetic Ising model for example, the problem of approximating the partition function on $(d+1)$-regular graphs undergoes a computational transition at the tree uniqueness threshold: it admits a polynomial-time algorithm when $\beta\in (\tfrac{d-1}{d+1},1)$ and it is NP-hard for $\beta\in(0,\tfrac{d-1}{d+1})$. This connection has been established in full generality for antiferromagnetic 2-state systems.
For antiferromagnetic multi-state systems, the situation is much less clear and, in fact, even understanding the uniqueness phase transition on the tree poses major challenges. One of the key reasons behind these difficulties is that certain monotonicities that hold for two-state systems simply do not hold in the multi-state setting, which therefore necessitates far more elaborate techniques. For analysing the uniqueness threshold on the tree, this difficulty has already been illustrated in the case of the $q$-colourings model, where Jonasson [@Jonasson02], building upon work of Brightwell and Winkler [@BW02], established via a painstaking method that the model is in uniqueness on the $d$-ary tree iff $q>d+1$. The goal of this paper is to extend this analysis to the Potts model (beyond the zero-temperature case).
There are several reasons for focusing on establishing uniqueness on the tree. For the colourings model and the antiferromagnetic Potts model, it is widely conjectured that the uniqueness phase transition on the $d$-ary tree captures the complexity of approximating the partition function on graphs with maximum degree $d+1$, as is the case for antiferromagnetic 2-state models. It has been known since the 80s that non-uniqueness holds for the colourings model when $q\leq d+1$ and for the Potts model when $\beta<1-q/(d+1)$, see [@Peruggi0]. More recently, it was shown in [@GSV] that the problem of approximating the partition function is NP-hard when $q<d+1$ for the colourings model and when $\beta<1-q/(d+1)$ for the Potts model (for $q$ even). It is not known however whether efficient algorithms can be designed in the complementary regime; for correlation decay algorithms in particular (see [@GK; @PY; @LYZZ]), it has been difficult to capture the uniqueness threshold in the analysis — this becomes even harder in the case of the Potts model where uniqueness is not known. For a more direct algorithmic consequence of uniqueness, it has been demonstrated that, on sparse random graphs, sampling algorithms for the Gibbs distribution can be designed by exploiting the underlying tree-like structure and the decay properties on the tree guaranteed by uniqueness. In particular, in the $G(n,d/n)$ random graph, Efthymiou [@Efthymiou] developed a sampling algorithm for $q$-colourings when $q>(1+\epsilon)d$, based on Jonasson’s uniqueness result. Related results on $G(n,d/n)$ appear in [@YZ; @EHSV; @Sinclair2017; @MS22]. Also, after presenting our main result, we will describe an application on random regular graphs, appearing in [@samplingpaper].
Our result {#sec:results}
----------
In this paper, we study the uniqueness threshold for the antiferromagnetic Potts model on the tree. We establish the uniqueness threshold for $q=3$ for every $d\geq 2$. Our proof technique, which is a refinement of Jonasson’s approach, also gives, for general $q>3$, an analytical condition for proving uniqueness, which we conjecture to be sufficient for establishing the uniqueness threshold whenever $q\neq d+1$. As we shall discuss shortly, the case $q=d+1$ is special, since it incorporates the critical case for the colourings model. To formally state our result, we will need a few definitions.
Given a graph $G=(V,E)$, a configuration $\sigma:V\rightarrow [q]$, and a subset $U$ of $V$, we use $\sigma(U)$ to denote the restriction of the configuration $\sigma$ to the vertices in $U$. For a vertex $v\in V$ and a colour $c\in[q]$, we denote by $\Pr_G[\sigma(v) = c]$ the probability that $v$ takes the colour $c$ in the Gibbs distribution. Let ${\mathbb{T}_{d,n}}$ be the $d$-ary tree with height $n$ (i.e., every path from the root to a leaf has $n$ edges, and every non-leaf vertex has $d$ children).[^4] Let ${\Lambda_{{\mathbb{T}_{d,n}}}}$ be the set of leaves of ${\mathbb{T}_{d,n}}$ and let ${v_{d,n}}$ be its root. The following definition formalises uniqueness on the $d$-ary tree. (See also [@BW02] for details about how to translate Definition \[def:uniqueness\] to the Gibbs theory formalisation.)
\[def:uniqueness\] The $q$-state Potts model with parameter $\beta$ has *uniqueness* on the infinite $d$-ary tree if, for all colours $c\in [q]$, it holds that $$\label{eq:f4g4553}
\limsup_{n\to\infty} \max_{\tau: {\Lambda_{{\mathbb{T}_{d,n}}}}\to [q]}{\left\vert\Pr_{ {\mathbb{T}_{d,n}}}[\sigma({v_{d,n}}) = c\mid\sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]-\frac{1}{q}\right\vert}=0.$$ It has *non-uniqueness* otherwise.
Equation \[eq:f4g4553\] formalises the fact that the correlation between the root of a $d$-ary tree and vertices at distance $n$ from the root vanishes as $n\rightarrow \infty$. We are now ready to state our main result.
\[thm:uniqueness\] Let $q=3$. When $d\geq 3$, the $3$-state Potts model on the $d$-ary tree has uniqueness for all $\beta\in[\tfrac{d-2}{d+1},1)$. When $d=2$, the $3$-state Potts model on the binary tree has uniqueness for all $\beta\in (0,1)$.
Theorem \[thm:uniqueness\] precisely pinpoints the uniqueness threshold for the 3-state Potts model since it is known that the model is in non-uniqueness in the complementary regime. When $d\geq 3$, non-uniqueness for $\beta<\tfrac{d-2}{d+1}$ follows from the existence of multiple semi-translation-invariant Gibbs measures[^5]. When $d=2$, the 3-state Potts model for $\beta=0$ corresponds to the 3-colouring model, and non-uniqueness holds in this case because of the existence of so-called *frozen* 3-colourings; in these colourings, the configuration on the leaves determines uniquely the colour of the root, see [@BW02].
Interestingly, our result and proof technique for the 3-state Potts model suggest that the only obstructions to uniqueness in the 3-colouring model on the binary tree are the frozen colouring configurations. It is reasonable to believe that this critical behaviour in the colourings model happens more generally whenever $q=d+1$. For comparison, note that the colourings model has non-uniqueness when $q<d+1$ ([@BW02], see also footnote \[foot:referf\]) and it has uniqueness when $q>d+1$ [@Jonasson02].
This critical behaviour for the colourings model when $q=d+1$ arises in the context of the Potts model as well, and, as we shall see in the next section, it causes complications in the proof of Theorem \[thm:uniqueness\]. Nevertheless, we formulate a general condition for all non-critical cases ($q\neq d+1$) which will be sufficient to establish the uniqueness threshold. We conjecture that the condition holds whenever $q\neq d+1$ (see Conjecture \[conj\]). The condition is tailored to the Potts model on a tree, unlike other known sufficient criteria for uniqueness (see for example [@Dobrushin; @Weitz]). Our condition reduces to single-variable inequalities and can be verified fairly easily for small values of $q,d$. Since Theorem \[thm:uniqueness\] includes the critical case $(q,d)=(3,2)$, our proof of the theorem necessarily goes a slightly different way (as we explain below), so in Section \[sec:approach\], we give a more detailed outline of our proof approach.
Application
-----------
We have already discussed some results in the literature where the uniqueness of spin-models on trees enables fast algorithms for sampling from these models on bounded-degree graphs and sparse random graphs. It turns out that Theorem \[thm:uniqueness\] can also be used in this way. In particular, Blanca et al. have obtained the following theorem.
\[Theorem 8 of [@samplingpaper]\] Let $q\geq 3$, $d\geq2$, and $\beta\in(0,1)$ be in the uniqueness regime of the $d$-ary tree with $\beta\neq (d+1-q)/(d+1)$. Then, there exists a constant $\delta>0$ such that, for all sufficiently large $n$, the following holds with probability $1-o(1)$ over the choice of a random $(d+1)$-regular graph $G=(V,E)$ with $n$ vertices.
There is a polynomial-time algorithm which, given the graph $G$ as input, outputs a random assignment $\sigma\colon V\rightarrow [q]$ from a distribution which is within total variation distance $O(1/n^{\delta})$ from the Gibbs distribution of the Potts model on $G$ with parameter $\beta$.
Thus, Theorem \[thm:uniqueness\] has the following corollary.
Let $q=3$. Suppose either $d=2$ and $\beta \in (0,1)$ or $d\geq 3$ and $\beta\in(\tfrac{d-2}{d+1},1)$. In either case, there exists a constant $\delta>0$ such that, for all sufficiently large $n$, the following holds with probability $1-o(1)$ over the choice of a random $(d+1)$-regular graph $G=(V,E)$ with $n$ vertices.
There is a polynomial-time algorithm which, given the graph $G$ as input, outputs a random assignment $\sigma\colon V\rightarrow [q]$ from a distribution which is within total variation distance $O(1/n^{\delta})$ from the Gibbs distribution of the Potts model on $G$ with parameter $\beta$.
We next discuss our approach for proving Theorem \[thm:uniqueness\].
Proof Approach {#sec:approach}
==============
In this section, we outline the key steps of our proof approach for proving uniqueness for the antiferromagnetic Potts model on the tree. As mentioned in the Introduction, the model does not enjoy the monotonicity properties which are present in two-state systems (or the ferromagnetic case)[^6], so we have to establish more elaborate criteria to resolve the uniqueness threshold.
We first review Jonasson’s approach for colourings [@Jonasson02]. One of the key insights there is to consider the ratio of the probabilities that the root takes two distinct colours and show that this converges to 1 as the height of the tree grows large. Jonasson analysed first a one-step recursion to establish bounds on the marginals of the root and used those to obtain upper bounds on the ratio. Then, he bootstrapped these bounds by analysing a more complicated two-step recursion and showed that the ratio converges to 1.
Our approach refines Jonasson’s approach in the following way: we jump into the two-step recursion and analyse the associated optimisation problem by giving an explicit description of the maximisers for general $q$ and $d$ (see Lemma \[lem:existence\]). It turns out that the maximisers change as the value of the ratio gets closer to 1, so to prove the desired convergence to 1, we need to account for the roughly $q^d$ possibilities for the maximiser. This yields an analytic condition that can be checked easily for small values of $q,d$, thus establishing uniqueness. In the context of Theorem \[thm:uniqueness\] where $q=3$, most of the technical work is to deal analytically with the potentially large values of the arity $d$ of the tree.
A further complication arises in the case $q=3$ and $d=2$ (and more generally $q=d+1$), since this incorporates the critical behaviour for colourings described in Section \[sec:results\]. This manifests itself in our proof by breaking the (global) validity of our uniqueness condition. We therefore have to use an analogue of Jonasson’s approach to account for this case by first using the one-step recursion to argue that the ratio gets sufficiently close to 1 and then finishing the argument with the two-step recursion.
Our proofs are computer-assisted but rigorous — namely we use the (rigorous) [Resolve]{} function of Mathematica to check certain inequalities. We also provide Mathematica code to assist the reader with tedious-but-straightforward calculations (such as differentiating complicated functions). The Mathematica code is in Section \[sec:code\].
Ratio for proving Theorem \[thm:uniqueness\]
--------------------------------------------
For $\beta\in (0,1)$ and $n>0$, define the following ratio. $$\label{eq:ratio}
{\gamma(q,\beta,d,n)}
=
\max_{\stackrel{ \tau: {\Lambda_{{\mathbb{T}_{d,n}}}}\to [q]}{c_1,c_2\in[q]}} \frac
{\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}}) = c_1\mid\sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]}
{\Pr_{ {\mathbb{T}_{d,n}}}[\sigma({v_{d,n}}) = c_2\mid\sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]}.$$ Note that if $\beta > 0$ and $n>0$, then for every $ \tau\colon {\Lambda_{{\mathbb{T}_{d,n}}}}\to [q]$ and every $c\in[q]$, $ \Pr_{ {\mathbb{T}_{d,n}}}[\sigma({v_{d,n}}) = c\mid\sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]>0$. So ${\gamma(q,\beta,d,n)}$ is well-defined.
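As a sanity check, ${\gamma(q,\beta,d,n)}$ can be computed by brute force for very small trees; the Python sketch below does this by summing Gibbs weights over all internal colourings for every boundary configuration (the parameter values are illustrative, and the computation is exponential in the tree size, so it only serves as a toy check):

```python
import itertools

q, beta, d, n = 3, 0.5, 2, 2                    # tiny illustrative instance

# Build T_{d,n} as an edge list; vertex 0 is the root v_{d,n}.
edges, frontier, next_id = [], [0], 1
for _ in range(n):
    new_frontier = []
    for v in frontier:
        for _ in range(d):
            edges.append((v, next_id))
            new_frontier.append(next_id)
            next_id += 1
    frontier = new_frontier
leaves = frontier
internal = [v for v in range(next_id) if v not in leaves]

def weight(colouring):
    return beta ** sum(1 for (u, v) in edges if colouring[u] == colouring[v])

gamma = 1.0
for tau in itertools.product(range(q), repeat=len(leaves)):          # boundary configuration
    marg = [0.0] * q
    for sig in itertools.product(range(q), repeat=len(internal)):    # sum out the internal vertices
        colouring = dict(zip(internal, sig))
        colouring.update(zip(leaves, tau))
        marg[colouring[0]] += weight(colouring)                      # unnormalised root marginal
    gamma = max(gamma, max(marg) / min(marg))
print("gamma(q=3, beta=0.5, d=2, n=2) =", gamma)
```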
Suppose, for fixed $q$, $\beta$ and $d$, that $\lim_{n\to\infty} \gamma(q,\beta,d,n) = 1$. This implies that the limsup in the uniqueness definition (Definition \[def:uniqueness\]) is zero. Thus, Theorem \[thm:uniqueness\] is an immediate consequence of the following theorem.
\[thm:main\] [ If $\beta\in(0,1)$ then $\lim_{n\to\infty} {\gamma(3,\beta,2,n)} = 1$. If $d\geq 3$ and $1-3/(d+1)\leq \beta < 1$ then $\lim_{n\to\infty} {\gamma(3,\beta,d,n)}
= 1$. ]{}
In Section \[sec:conclusion\] we obtain Theorem \[thm:uniqueness\] by proving Theorem \[thm:main\].
The two-step recursion {#sec:introtwostep}
----------------------
In this section, we formulate an appropriate recursion on the infinite $d$-ary tree, which will be one of our main tools for tracking the ratio $\gamma(q,\beta,d,n)$.
We denote the set of $q$-dimensional probability vectors by $\triangle$, i.e., $$\triangle = \{(p_1, p_2, \ldots, p_q)\colon0 \leq p_1, p_2, \ldots, p_q \leq 1\, \land\, p_1+p_2+\cdots+p_q = 1\}.$$ Suppose that $c_1$ and $c_2$ are two colours in $[q]$. We define two functions $g_{c_1,c_2,\beta}$ and $h_{c_1,c_2,\beta}$, indexed by these colours. The argument of each of these functions is a tuple $({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})$ where, for each $j\in[d]$, ${{\mathbf{p}}^{(j)}} \in \triangle$. The functions are defined as follows. $$\label{eq:gh12def}
\begin{aligned}
g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\ldots,{{\mathbf{p}}^{(d)}})
&:=\prod^d_{k=1}\bigg(1-\frac{(1-\beta) \big({p^{(k)}}_{c_1}-{p^{(k)}}_{c_2}\big)}{\beta {p^{(k)}}_{c_2}+\sum_{c\neq c_2}{p^{(k)}}_{c}}\bigg).\\
h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})
&:=1+\frac{(1-\beta)\big(1-g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})\big)}{\beta +\sum_{c\neq c_2}g_{c,c_2,\beta}({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})}.
\end{aligned}$$ Note that the functions $g_{c_1,c_2,\beta}$ and $h_{c_1,c_2,\beta}$ are well-defined when $\beta\in (0,1)$ and all of ${{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}}$ have non-negative entries; they are also well-defined when $\beta=0$ and all of ${{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}}$ have positive entries.
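For readers who prefer code to formulas, a direct Python transcription of $g_{c_1,c_2,\beta}$ and $h_{c_1,c_2,\beta}$ might look as follows (colours are 0-indexed and `ps` is the tuple $({{\mathbf{p}}^{(1)}},\ldots,{{\mathbf{p}}^{(d)}})$; this is only a sketch of the definitions above):

```python
def g(c1, c2, beta, ps):
    """g_{c1,c2,beta}(p^(1),...,p^(d)): product over the d argument vectors."""
    val = 1.0
    for p in ps:
        denom = beta * p[c2] + sum(p[c] for c in range(len(p)) if c != c2)
        val *= 1.0 - (1.0 - beta) * (p[c1] - p[c2]) / denom
    return val

def h(c1, c2, beta, ps):
    """h_{c1,c2,beta}(p^(1),...,p^(d)), built from g_{c,c2,beta} for all colours c != c2."""
    q = len(ps[0])
    s = sum(g(c, c2, beta, ps) for c in range(q) if c != c2)
    return 1.0 + (1.0 - beta) * (1.0 - g(c1, c2, beta, ps)) / (beta + s)

# At the uniform vectors the ratio functions are trivial: h evaluates to exactly 1.
uniform = (1 / 3, 1 / 3, 1 / 3)
print(h(0, 1, 0.5, (uniform, uniform)))   # prints 1.0
```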
One feature of the functions $g_{c_1,c_2,\beta}$ and $h_{c_1,c_2,\beta}$ which will be important shortly is that they are scale-free. This means that we can multiply each of their arguments by some constant without changing their value, i.e., for scalars $t_1,\hdots,t_d>0$ it holds that $$\label{eq:scalefree23}
\begin{aligned}
g_{c_1,c_2,\beta}(t_1{{\mathbf{p}}^{(1)}},\ldots,t_d{{\mathbf{p}}^{(d)}})&=g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\ldots,{{\mathbf{p}}^{(d)}}),\\
h_{c_1,c_2,\beta}(t_1{{\mathbf{p}}^{(1)}},\ldots,t_d{{\mathbf{p}}^{(d)}})&=h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\ldots,{{\mathbf{p}}^{(d)}}).
\end{aligned}$$ The following proposition, proved in Section \[sec:proverec\], shows the relevance of these functions for analysing the tree.
\[lem:twostep\]
Suppose $q\geq 3$, $ d\geq 2$ and $\beta\in (0,1)$. For an integer $n\geq2$, let $T$ be the tree ${\mathbb{T}_{d,n}}$ with root $z=v_{d,n}$ and leaves $\Lambda={\Lambda_{{\mathbb{T}_{d,n}}}}$. Let $\tau \colon \Lambda \to [q]$ be an arbitrary configuration.
Let $z_1,\ldots,z_d$ be the children of $z$ in $T$ and, for $i\in [d]$, let $\{z_{i,j}\}_{j\in[d]}$ be the children of $z_i$. Denote by $T_{i,j}$ the subtree of $T$ rooted at $z_{i,j}$ and by $\Lambda_{i,j}$ the set of leaves of $T_{i,j}$. For $i\in[d]$, $j\in[d]$ and $c\in[q]$, let $r^{(i,j)}_c:=
\Pr_{T_{i,j}}[\sigma(z_{i,j})=c \mid \sigma(\Lambda_{i,j})= \tau(\Lambda_{i,j})]$, and denote by ${\mathbf{r}}^{(i,j)}$ the vector ${\mathbf{r}}^{(i,j)}=\big(r^{(i,j)}_1, \ldots, r^{(i,j)}_q\big)$. Then for any colours $c_1\in[q]$ and $c_2\in[q]$ we have $$\frac
{\Pr_{{\mathbb{T}_{d,n}}}[\sigma(z) = c_1\mid\sigma( \Lambda)=\tau]}
{\Pr_{ {\mathbb{T}_{d,n}}}[\sigma(z) = c_2\mid\sigma( \Lambda)=\tau]}=\prod^{d}_{k=1}h_{c_1,c_2,\beta}\big({\mathbf{r}}^{(k,1)}, \ldots, {\mathbf{r}}^{(k,d)}\big).$$
We refer to the recursion introduced in Proposition \[lem:twostep\] as the *two-step recursion*. The two-step recursion will allow us to iteratively bootstrap our bounds on the ratio ${\gamma(q,\beta,d,n)}$. To formalise this, we will use the following definition.
\[def:max\] Suppose $q\geq 3$, $ d\geq 2$ and $\beta\in [0,1)$. For any $\alpha > 1$, let $$\triangle_\alpha = \{(p_1,\ldots,p_q)\in \triangle \colon \max_{i\in[q]} p_i \leq \alpha \min_{j\in[q]} p_j \}.$$ (Note that every vector in $\triangle_\alpha$ has strictly positive entries.) For colours $c_1\in[q]$ and $c_2\in[q]$, let $$\label{eq:erff5g54r323234}
M_{\alpha,c_1,c_2,\beta}={\max}_{({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in\triangle_{\alpha}^d}\, h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big).$$ Since $\triangle_{\alpha}$ is compact and $h_{c_1,c_2,\beta}$ is continuous, this maximisation is well-defined.
Definition \[def:max\] ensures that $\triangle_\alpha$ is the subset of $\triangle$ induced by probability vectors whose entries are within a factor of $\alpha>1$ of each other. $M_{\alpha,c_1,c_2,\beta}$ is the maximum of the two-step recursion function $h_{c_1,c_2,\beta}$ when each of its arguments are from $\triangle_\alpha$. The following proposition gives a preliminary condition for establishing uniqueness when $\beta\in(0,1)$ — it is proved in Section \[sec:conclusion\].
\[lem:steptwouniq\] [ Let $q\geq 3$, $d\geq 2$ and $\beta\in (0,1)$. Suppose that for all $\alpha>1$ and any colours $c_1,c_2\in [q]$, it holds that $$M_{\alpha, c_1,c_2,\beta}<\alpha^{1/d}.$$ Then, it holds that ${\gamma(q,\beta,d,n)}\rightarrow 1$ as $n\rightarrow \infty$, i.e., the $q$-state Potts model with parameter $\beta$ has uniqueness on the $d$-ary tree. ]{}
In the next section, we will show how to simplify the condition in Proposition \[lem:steptwouniq\].
A simpler condition for uniqueness {#sec:simplecondition}
----------------------------------
Proposition \[lem:steptwouniq\] gives a condition on the two-step recursion that is sufficient for establishing uniqueness, based on the maximisation of $h_{c_1,c_2,\beta}$. Due to the many variables involved in the maximisation, this is rather complicated to verify directly. We will simplify this maximisation significantly by showing that it suffices to consider very special vectors whose entries are equal to either $\alpha$ or 1. We start with the following definition of “extremal tuples”.
Let $\alpha>1$, and consider a colour $c\in [q]$. A tuple $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})$ is called *$(\alpha,c)$-extremal* iff for all $k \in [d]$,
- for all $c'\in[q]$, either ${p^{(k)}}_{c'} = {p^{(k)}}_{c}$, or ${p^{(k)}}_{c'} = \alpha\cdot{p^{(k)}}_{c}$;
- there exists $c'\in [q]$ such that ${p^{(k)}}_{c'} = \alpha\cdot{p^{(k)}}_{c}$.
Our interest in extremal tuples is justified by the following lemma, whose proof is given in Section \[sec:existence\].
\[lem:existence\] [Let $q\geq 3$, $d\geq 2$, $\alpha>1$ and $\beta\in [0,1)$. For any colours $c_1,c_2\in [q]$, there is an $(\alpha,c_2)$-extremal tuple which achieves the maximum in ${\max}_{({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in\triangle_{\alpha}^d}\, h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)$ (cf. Definition \[def:max\]).]{}
One of the consequences of Lemma \[lem:existence\] is that the validity of the inequality in Proposition \[lem:steptwouniq\] is monotone with respect to $\beta$. In particular, we have the following lemma (also proved in Section \[sec:existence\]).
\[lem:mo12no12tone\] [ Let $q\geq 3$, $d\geq 2$ and $\beta', \beta''\in [0,1)$ with $\beta'\leq \beta''$. Then, for all $\alpha>1$ and any colours $c_1,c_2\in [q]$, it holds that $$M_{\alpha, c_1,c_2,\beta''}\leq M_{\alpha, c_1,c_2,\beta'}.$$ ]{}
Another consequence of Lemma \[lem:existence\] is that, combined with the scale-free property, it reduces the verification of the condition in Proposition \[lem:steptwouniq\] to the verification of single-variable inequalities in $\alpha$. These inequalities are obtained by trying all $d$-tuples of $q$-dimensional vectors whose entries are as follows. $$\label{ex:ca}
\mathrm{Ex}_{c}(\alpha)=\big\{(p_1,\hdots,p_q)
\in \{1,\alpha\}^q
\mid p_{c}=1 \land \exists c'\in[q] \text{ such that } p_{c'}=\alpha \big\}.$$ The following simplified condition will be our main focus henceforth.
\[cond:we\] Let $q\geq 3$, $d\geq 2$. Set $\beta_*:=\max\big\{1-\tfrac{q}{d+1},0\big\}$. For $\alpha>1$, let $\mathcal{C}(\alpha)$ be the condition $$\mathcal{C}(\alpha): \quad \forall c_1,c_2\in [q],\ h_{c_1,c_2,\beta_*}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)<\alpha^{1/d} \mbox{ for all } {{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\in \mathrm{Ex}_{c_2}(\alpha).$$ If $\mathcal{C}(\alpha)$ holds, we say that the pair $(q,d)$ satisfies Condition \[cond:we\] for $\alpha$.
Now, to verify the inequality in Proposition \[lem:steptwouniq\], we will show shortly that it suffices only to establish Condition \[cond:we\] for all $\alpha>1$, which turns out to be a much more feasible task because of the very explicit form of the set $\mathrm{Ex}_{c_2}(\alpha)$. In the next section, we discuss how to do this in detail, but for now let us state a proposition which asserts that this is indeed sufficient.
\[lem:condimpliesuniq\] Suppose that the pair $(q,d)$ satisfies Condition \[cond:we\] for all $\alpha>1$. Let $\beta_*=\max\big\{1-\tfrac{q}{d+1},0\big\}$. Then, the $q$-state Potts model on the $d$-ary tree has uniqueness for all $\beta\in(0,1)$ satisfying $\beta\geq \beta_*$.
We consider first the case where $\beta_*>0$. We will show that for all colours $c_1,c_2\in[q]$, it holds that $$\label{eq:z4r34rf4rf4}
M_{\alpha,c_1,c_2,\beta_*}<\alpha^{1/d} \mbox{ for all $\alpha>1$}.$$ Then by Lemma \[lem:mo12no12tone\], we obtain that, for all $\beta\in [\beta_*,1)$, it holds that $M_{\alpha,c_1,c_2,\beta}<\alpha^{1/d}$ as well for all $\alpha>1$ and therefore, by Proposition \[lem:steptwouniq\], the Potts model has uniqueness for all such $\beta$.
To prove this, consider an arbitrary $\alpha>1$ and colours $c_1,c_2\in [q]$. By Lemma \[lem:existence\], there exists an $(\alpha,c_2)$-extremal tuple $\big({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)}\big)$ such that $$\label{eq:z4r34rf4rf4a}
M_{\alpha,c_1,c_2,\beta_*}=h_{c_1,c_2,\beta_*}({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)}\big).$$ For $c\in [q]$ and $k\in [d]$, denote by $p^{(k)}_c$ the entry of ${\mathbf{p}}^{(k)}$ corresponding to colour $c$ and let $\hat{{\mathbf{p}}}^{(k)}$ be the vector $t_k {{\mathbf{p}}}^{(k)}$ where $t_k=1/p^{(k)}_{c_2}$. By the definition of an $(\alpha,c_2)$-extremal tuple, we have that $$\hat{{\mathbf{p}}}^{(1)},\hdots,\hat{{\mathbf{p}}}^{(d)}\in \mathrm{Ex}_{c_2}(\alpha).$$ Moreover, by the scale-free property we have that $$\label{eq:z4r34rf4rf4b}
h_{c_1,c_2,\beta_*}(\hat{{\mathbf{p}}}^{(1)},\hdots, \hat{{\mathbf{p}}}^{(d)})=h_{c_1,c_2,\beta_*}({{\mathbf{p}}^{(1)}},\ldots,{{\mathbf{p}}^{(d)}}).$$ Finally, since the pair $(q,d)$ satisfies Condition \[cond:we\] for all $\alpha>1$, we have that $$\label{eq:z4r34rf4rf4c}
h_{c_1,c_2,\beta_*}(\hat{{\mathbf{p}}}^{(1)},\hdots, \hat{{\mathbf{p}}}^{(d)})<\alpha^{1/d}.$$ Combining the last three displayed relations yields the desired inequality, as needed.
The case $\beta_*=0$ is analogous. Now, we need to show that we have uniqueness for all $\beta\in (0,1)$ assuming that Condition \[cond:we\] holds for all $\alpha>1$. Just as before, we obtain that $M_{\alpha,c_1,c_2,\beta_*}<\alpha^{1/d}$ for all $\alpha>1$ and hence by Lemma \[lem:mo12no12tone\], we have that $M_{\alpha,c_1,c_2,\beta}<\alpha^{1/d}$ for all $\alpha>1$ and $\beta\in (0,1)$. Uniqueness for $\beta\in(0,1)$ therefore follows from applying Proposition \[lem:steptwouniq\].
Note that, when $\beta_*>0$, the conclusion of Proposition \[lem:condimpliesuniq\] asserts uniqueness in the half-open interval $[\beta_*,1)$; when $\beta_*=0$, it instead asserts uniqueness in the open interval $(0,1)$.
Verifying the Condition
-----------------------
In this section, we give more details on how to verify Condition \[cond:we\].
To apply Proposition \[lem:condimpliesuniq\], we will need to verify Condition \[cond:we\]. The latter is fairly simple to verify for small values of $q,d$ since it reduces to single-variable inequalities in $\alpha$. We illustrate the details when $(q,d)=(3,3)$ and $(q,d)=(4,4)$.
\[lem:examplecond\] The pairs $(q,d)=(3,3)$ and $(q,d)=(4,4)$ satisfy Condition \[cond:we\] for all $\alpha>1$.
By symmetry among the colours, it suffices to verify the condition for colours $c_1=1$ and $c_2=q$. In Section \[app:examplecond\], we just try all possible $d$-tuples $({\mathbf{p}}^{(1)},\hdots, {\mathbf{p}}^{(d)})$ with ${\mathbf{p}}^{(1)},\hdots, {\mathbf{p}}^{(d)}\in \mathrm{Ex}_q(\alpha)$. For each such $d$-tuple, the inequality $$h_{c_1,c_2,\beta_*}({\mathbf{p}}^{(1)},\hdots, {\mathbf{p}}^{(d)})<\alpha^{1/d}$$ is a single-variable inequality in $\alpha$ which can be verified using Mathematica’s [Resolve]{} function for all $\alpha>1$. For $(q,d)=(3,3)$ and $(q,d)=(4,4)$, all the resulting inequalities are satisfied.
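For readers without Mathematica, a non-rigorous numerical spot-check of Condition \[cond:we\] can be coded in a few lines of Python; the sketch below enumerates $\mathrm{Ex}_{c_2}(\alpha)$ for $(q,d)=(3,3)$ and compares the worst value of $h_{c_1,c_2,\beta_*}$ against $\alpha^{1/d}$ on a small grid of $\alpha$ values (this only samples $\alpha$ and does not replace the symbolic [Resolve]{} verification used in the proof):

```python
import itertools

q, d = 3, 3
beta_star = max(1 - q / (d + 1), 0)                  # = 1/4 for (q, d) = (3, 3)

def g(c1, c2, beta, ps):
    out = 1.0
    for p in ps:
        denom = beta * p[c2] + sum(p[c] for c in range(q) if c != c2)
        out *= 1 - (1 - beta) * (p[c1] - p[c2]) / denom
    return out

def h(c1, c2, beta, ps):
    s = sum(g(c, c2, beta, ps) for c in range(q) if c != c2)
    return 1 + (1 - beta) * (1 - g(c1, c2, beta, ps)) / (beta + s)

def Ex(c, alpha):
    """Vectors in {1, alpha}^q with the c-th entry 1 and at least one entry equal to alpha."""
    return [v for v in itertools.product([1.0, alpha], repeat=q)
            if v[c] == 1.0 and alpha in v]

c1, c2 = 0, q - 1                                    # by colour symmetry this choice suffices
for alpha in (1.05, 1.5, 2.0, 5.0, 50.0):            # spot-check grid, not a proof
    worst = max(h(c1, c2, beta_star, ps)
                for ps in itertools.product(Ex(c2, alpha), repeat=d))
    print(alpha, worst, alpha ** (1 / d))            # C(alpha) requires worst < alpha**(1/d)
```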
Combining Lemma \[lem:examplecond\] with Proposition \[lem:condimpliesuniq\] we get the following immediate corollary.
\[cor:example\] The $3$-state Potts model on the $3$-ary tree has uniqueness for all $\beta\in[1/4,1)$. The $4$-state Potts model on the $4$-ary tree has uniqueness for all $\beta\in[1/5,1)$.
Corollary \[cor:example\] establishes the uniqueness threshold for $(q,d)=(3,3)$ and $(q,d)=(4,4)$. More generally, we are interested in the following question: When is Condition \[cond:we\] satisfied for all $\alpha>1$? We conjecture the following.
\[conj\] When $q\neq d+1$, the pair $(q,d)$ satisfies Condition \[cond:we\] for all $\alpha>1$.
We have only been able to verify Conjecture \[conj\] for specific values of $q,d$ (with methods similar to those used in the proof of Lemma \[lem:examplecond\]). However, it is important to note that the restriction $q\neq d+1$ in the conjecture cannot be removed. For example, the pair $(q,d)=(3,2)$ does not satisfy Condition \[cond:we\] for all $\alpha>1$ — it only satisfies the condition for $\alpha$ fairly close to $1$. Thus, to prove Theorem \[thm:uniqueness\] we need a different argument to account for the case $(q,d)=(3,2)$.
Thus, instead of trying to prove Conjecture \[conj\] for *all* values of $\alpha$ (which wouldn’t be enough for our theorem), we follow Jonasson’s approach and use the one-step recursion to argue that the ratio $\gamma(q,\beta,d,n)$ gets moderately close to 1; close enough that we can then use the two-step recursion to finish the proof of uniqueness. Note that, in contrast to the two-step recursion, the one-step recursion is not sufficient on its own to obtain tight uniqueness results for any values of $q,d$ (this was also observed by Jonasson [@Jonasson02] in the case of colourings).
First, we state the one-step recursion that we are going to use on the tree. Both this recursion and the two-step recursion of Proposition \[lem:twostep\] are well-known, but we prove them explicitly in Section \[sec:proverec\] for completeness.
\[lem:onesteprecursion\]
Suppose $q\geq 3$, $ d\geq 2$ and $\beta\in (0,1)$. For an integer $n\geq 1$, let $T$ be the tree ${\mathbb{T}_{d,n}}$ with root $v=v_{d,n}$ and leaves $\Lambda={\Lambda_{{\mathbb{T}_{d,n}}}}$. Let $\tau:\Lambda\rightarrow[q]$ be an arbitrary configuration.
Let $v_1,\ldots,v_d$ be the children of $v$ in $T$. For $i\in [d]$, let $T_i$ be the subtree of $T$ rooted at $v_i$ and let $\Lambda_i$ denote the set of leaves of the subtree $T_i$. Then, for any colour $c\in[q]$, it holds that $$\Pr_{T}[\sigma(v) = c\mid \sigma(\Lambda)=\tau] =
\frac{\prod_{i=1}^d \big(1-(1-\beta)
\Pr_{T_i}[\sigma(v_i)= c \mid \sigma(\Lambda_{i}) = \tau(\Lambda_{i})]\big)}
{\sum_{c'=1}^q\prod_{i=1}^d \big( 1-(1-\beta)
\Pr_{T_i}[\sigma(v_i) = c'\mid \sigma(\Lambda_{i}) = \tau(\Lambda_{i})]\big)}.$$
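The one-step recursion is straightforward to iterate numerically; here is a minimal Python sketch (the parameter values are illustrative, and freezing all leaves to the first colour is just one convenient boundary choice, not the extremal one used in the analysis):

```python
def one_step(beta, child_marginals):
    """Root marginal from the marginals of its children, via the one-step recursion above."""
    q = len(child_marginals[0])
    unnorm = [1.0] * q
    for c in range(q):
        for p in child_marginals:
            unnorm[c] *= 1.0 - (1.0 - beta) * p[c]
    Z = sum(unnorm)
    return [x / Z for x in unnorm]

# Iterate up a d-ary tree whose leaves are all frozen to the first colour.
q, d, beta = 3, 2, 0.5
p = [1.0, 0.0, 0.0]                       # leaf marginal: the first colour with probability 1
for level in range(1, 11):
    p = one_step(beta, [p] * d)           # by symmetry all children share the same marginal
    print(level, [round(x, 4) for x in p])
# In the uniqueness regime the printed marginals approach (1/3, 1/3, 1/3).
```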
Tracking the one-step recursion relatively accurately requires a fair amount of work, and to aid the verification of Condition \[cond:we\] in the case $q=3$, we do this for general values of $d$. In particular, we prove the following lemma in Section \[sec:bounds12onestep\].
\[lem:onestepuniq\]
Let $q=3$ and $c\in [3]$ be an arbitrary colour. For $d\geq 2$, consider the $d$-ary tree ${\mathbb{T}_{d,n}}$ with height $n$ and let $\tau:{\Lambda_{{\mathbb{T}_{d,n}}}}\rightarrow [3]$ be an arbitrary configuration on the leaves.
When $d=2$, for all $\beta\in (0,1)$, for all sufficiently large $n$ it holds that $$\frac{459}{2000}{\leq}\Pr_{{\mathbb{T}_{2,n}}}[\sigma({v_{2,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{2,n}}}}) = \tau] {\leq}\frac{1107}{2500}.$$ When $d\geq 3$, for all $\beta\in [1-\tfrac{3}{d+1},1)$, there exist sequences $\{L_n\}$ and $\{U_n\}$ (depending on $d$ and $\beta$) such that for all sufficiently large $n$ $$L_n\leq\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]\leq U_n \mbox{ and } U_n/L_n\leq 53/27.$$
The following corollary is an immediate consequence of Lemma \[lem:onestepuniq\].
\[thm:ratiobound\] For $d=2$ and every $\beta\in (0,1)$ there is a positive integer $n_0$ such that, for every $n\geq n_0$, we have ${\gamma(3,\beta,d,n)} \leq {53}/{27}$.
For every $d\geq 3$ and every $\beta$ satisfying $1-3/(d+1)\leq \beta <1$ there is a positive integer $n_0$ such that, for every $n\geq n_0$, we have ${\gamma(3,\beta,d,n)} \leq {53}/{27}$.
We combine this with the following lemma which verifies Condition \[cond:we\] for all $\alpha\in(1,53/27]$. The proof is given in Section \[sec:twosteptoprove\].
\[lem:twosteptoprove\] [ Let $q=3$ and $d\geq 2$. Then, the pair $(q,d)$ satisfies Condition \[cond:we\] for all $\alpha\in(1,53/27]$. ]{}
Using Corollary \[thm:ratiobound\] and Lemma \[lem:twosteptoprove\], we give the proof of Theorem \[thm:main\] (which implies Theorem \[thm:uniqueness\]) in Section \[sec:conclusion\].
Concluding uniqueness {#sec:conclusion}
=====================
In this section, we prove Proposition \[lem:steptwouniq\] and also conclude the proof of Theorem \[thm:main\] (assuming for now Lemmas \[lem:onestepuniq\] and \[lem:twosteptoprove\] and also Lemma \[lem:existence\], which we have already used). Recall that $$\tag{\ref{eq:erff5g54r323234}}
M_{\alpha,c_1,c_2,\beta}={\max}_{({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in\triangle_{\alpha}^d}\, h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big).$$
We will need the following proposition.
\[lem:gammaisdecreasing\] Let $q\geq 3$, $d\geq 2$ and $\beta\in (0,1)$. Suppose that, for some integer $n\geq 3$ and some $\alpha>1$, it holds that ${\gamma(q,\beta,d,n-2)}=\alpha$ and $M_{\alpha,c_1,c_2,\beta}<\alpha^{1/d}$ for all colours $c_1,c_2\in[q]$. Then ${\gamma(q,\beta,d,n)}\leq(M_{\alpha,c_1,c_2,\beta})^d<{\gamma(q,\beta,d,n-2)}$.
Consider the tree ${\mathbb{T}_{d,n}}$ with root $z=v_{d,n}$ and leaves $\Lambda= {\Lambda_{{\mathbb{T}_{d,n}}}}$. Let $\tau: {\Lambda_{{\mathbb{T}_{d,n}}}}\rightarrow [q]$ be an arbitrary configuration. As in Proposition \[lem:twostep\], let $\{z_{i,j}\}_{i,j\in [d]}$ denote the grandchildren of the root, let $T_{i,j}$ be the subtree of $T$ rooted at $z_{i,j}$, and let $\Lambda_{i,j}$ be the set of leaves of $T_{i,j}$. Further, let ${\mathbf{r}}^{(i,j)}$ be the marginal distribution at $z_{i,j}$ in the subtree $T_{i,j}$, conditioned on the configuration $\tau(\Lambda_{i,j})$.
By the assumption ${\gamma(q,\beta,d,n-2)}=\alpha$ and the definition of the ratio ${\gamma(q,\beta,d,n-2)}$, we have that ${\mathbf{r}}^{(i,j)}\in \triangle_\alpha$ for all $i,j\in [d]$. Proposition \[lem:twostep\] also guarantees that for colours $c_1\in [q]$ and $c_2\in [q]$ we have $$\frac
{\Pr_{\mathbb{T}_{d,n}}[\sigma(z) = c_1\mid\sigma( \Lambda)=\tau]}
{\Pr_{\mathbb{T}_{d,n}}[\sigma(z) = c_2\mid\sigma( \Lambda)=\tau]}=\prod^{d}_{k=1}h_{c_1,c_2,\beta}\big({\mathbf{r}}^{(k,1)}, \ldots, {\mathbf{r}}^{(k,d)}\big)\leq (M_{\alpha,c_1,c_2,\beta})^d <\alpha,$$ where the strict inequality follows by the assumption that $M_{\alpha,c_1,c_2,\beta}<\alpha^{1/d}$. Since $\tau$ was an arbitrary configuration on the leaves $\Lambda$, we obtain that ${\gamma(q,\beta,d,n)}<{\gamma(q,\beta,d,n-2)}$ as needed.
We start with Proposition \[lem:steptwouniq\], which we restate here for convenience.
[ Let $q\geq 3$, $d\geq 2$ and $\beta\in (0,1)$. Suppose that for all $\alpha>1$ and any colours $c_1,c_2\in [q]$, it holds that $$M_{\alpha, c_1,c_2,\beta}<\alpha^{1/d}.$$ Then, it holds that ${\gamma(q,\beta,d,n)}\rightarrow 1$ as $n\rightarrow \infty$, i.e., the $q$-state Potts model with parameter $\beta$ has uniqueness on the $d$-ary tree. ]{}
Fix $q$, $d$ and $\beta$ as in the statement. For all $n\geq 1$, let $\alpha_n:= {\gamma(q,\beta,d,n)}$. We may assume that $\alpha_n>1$ for all $n\geq 1$ (otherwise, uniqueness follows trivially by choosing $n_0$ such that $\alpha_{n_0}=1$ and then applying Proposition \[lem:onesteprecursion\] repeatedly to show $\alpha_n = 1$ for all $n\geq n_0$.)
Using Proposition \[lem:gammaisdecreasing\] and the assumption that $M_{\alpha,c_1,c_2,\beta}<\alpha^{1/d}$ for all $\alpha>1$ and colours $c_1,c_2\in [q]$, we obtain that $$\label{eq:decreasing}
1<\alpha_n<\alpha_{n-2}.$$ This implies that both of the sequences $\{\alpha_{2n}\}$ and $\{\alpha_{2n+1}\}$ are decreasing. Since both of these sequences are bounded below by 1, we obtain that for $n\rightarrow \infty$ it holds that[^7] $$\alpha_{2n}\downarrow \alpha_{\mathrm{ev}}, \quad \alpha_{2n+1}\downarrow \alpha_{\mathrm{odd}}$$ for some $\alpha_{\mathrm{ev}}, \alpha_{\mathrm{odd}}\geq 1$. We claim that in fact both of $\alpha_{\mathrm{ev}}, \alpha_{\mathrm{odd}}$ are equal to $1$, which proves that $ {\gamma(q,\beta,d,n)}\rightarrow 1$ as $n\rightarrow \infty$.
Suppose for the sake of contradiction that $\alpha_{\mathrm{ev}}>1$ (a similar argument applies for $\alpha_{\mathrm{odd}}$). Let $\mathbf{m}_{2n}=({{\mathbf{p}}^{(1)}}_{2n},\hdots,{{\mathbf{p}}^{(d)}}_{2n})$ achieve the maximum in the definition of $M_{\alpha,c_1,c_2,\beta}$ for $\alpha=\alpha_{2n}$, i.e., $$M_{\alpha_{2n},c_1,c_2,\beta}=h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}}_{2n},\hdots,{{\mathbf{p}}^{(d)}}_{2n}\big).$$ Note that for all $n\geq 1$ we have that $$\label{eq:contrah1242r2}
h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}}_{2n},\hdots,{{\mathbf{p}}^{(d)}}_{2n}\big)\geq (\alpha_{\mathrm{ev}})^{1/d};$$ otherwise, we would have that $M_{\alpha_{2n}, c_1,c_2,\beta}<(\alpha_{\mathrm{ev}})^{1/d}$ and hence, by Proposition \[lem:gammaisdecreasing\], we would have that $\alpha_{2n+2}<\alpha_{\mathrm{ev}}$, contradicting that $\alpha_{2n}\downarrow \alpha_{\mathrm{ev}}$.
Moreover, observe that $\mathbf{m}_{2n}$ belongs to the compact space $\triangle^d$ for all $n\geq 1$ and therefore there exists a subsequence $\{n_k\}_{k\geq 1}$ and $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in \triangle^d$ such that $$\mathbf{m}_{2n_k}\rightarrow ({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}).$$ In fact, since $\{\alpha_{2n_k}\}_{k\geq 1}$ is a subsequence of the convergent sequence $\{\alpha_{2n}\}_{n\geq 1}$, we have that the sequence $\{\alpha_{2n_k}\}$ converges to $\alpha_{\mathrm{ev}}$ as well. From $\mathbf{m}_{2n_k}\in \triangle_{\alpha_{2n_k}}^d$, we therefore obtain that $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in \triangle^d_{\alpha_{\mathrm{ev}}}$. Applying the assumption $M_{\alpha,c_1,c_2,\beta}<\alpha^{1/d}$ for $\alpha=\alpha_{\mathrm{ev}}$, we therefore have that $$\label{eq:contrah1242r2b}
h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})<(\alpha_{\mathrm{ev}})^{1/d}.$$ Since the function $h_{c_1,c_2,\beta}$ is continuous on $\triangle^{d}$ for all $\beta\in (0,1)$, we have that as $k\rightarrow\infty$ $$h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}}_{2n_k},\hdots,{{\mathbf{p}}^{(d)}}_{2n_k})\rightarrow h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}).$$ This contradicts \eqref{eq:contrah1242r2} and \eqref{eq:contrah1242r2b}. Therefore, $\alpha_{\mathrm{ev}}=1$, and similarly $\alpha_{\mathrm{odd}}=1$, completing the proof.
Assuming Lemmas \[lem:onestepuniq\] and \[lem:twosteptoprove\], we can also conclude the proof of Theorem \[thm:main\] in a similar way.
[ If $\beta\in(0,1)$ then $\lim_{n\to\infty} {\gamma(3,\beta,2,n)} = 1$. If $d\geq 3$ and $1-3/(d+1)\leq \beta < 1$ then $\lim_{n\to\infty} {\gamma(3,\beta,d,n)}
= 1$. ]{}
We first consider the case $d\geq 3$. Let $\beta\in[1-\tfrac{3}{d+1},1)$ and for all $n\geq 1$, set $\alpha_n=\gamma(3,\beta,d,n)$. By Lemma \[lem:onestepuniq\], we have that there exists $n_0$ such that for all $n\geq n_0$, it holds that $$\alpha_n\in (1,53/27].$$ (The reason that the left end-point of the interval is open is that we are done if $\alpha_n=1$, as in the proof of Proposition \[lem:steptwouniq\].)
By Lemma \[lem:twosteptoprove\], the pair $(q,d)$ satisfies Condition \[cond:we\] for $\alpha_n$ (for $n\geq n_0$). By the definition of Condition \[cond:we\], for all $c_1,c_2\in[q]$, and $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in \mathrm{Ex}_{c_2}(\alpha_n)$, we have $ h_{c_1,c_2,\beta_*}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)<\alpha_n^{1/d} $. By the scale-free property of $h_{c_1,c_2,\beta_*}$, $M_{\alpha_n,c_1,c_2,\beta_*} < \alpha_n^{1/d}$ so, by Lemma \[lem:mo12no12tone\], $M_{\alpha_n,c_1,c_2,\beta} < \alpha_n^{1/d}$. Using Proposition \[lem:gammaisdecreasing\] we obtain that for all $n\geq n_0+2$, it holds that $$\label{eq:decreasingb}
1<\alpha_n< \alpha_{n-2}.$$ This implies that both of the sequences $\{\alpha_{2n}\}_{n\geq n_0}$ and $\{\alpha_{2n+1}\}_{n\geq n_0}$ are decreasing, and since both are bounded below by 1, they converge. We now use an argument that is almost identical to the one used in the proof of Proposition \[lem:steptwouniq\]. The only difference is that now the sequences start from $2n_0$ and $2n_0+1$ instead of from $n=2$ and $n=3$, respectively. Using this argument, we obtain that the limits of $\{\alpha_{2n}\}_{n\geq n_0}$ and $\{\alpha_{2n+1}\}_{n\geq n_0}$ must be equal to 1, thus proving that $\gamma(3,\beta,d,n)\rightarrow 1$ as $n\rightarrow \infty$.
The argument for the case $d=2$ and $\beta\in (0,1)$ is actually the same; the only difference to the case $d\geq 3$ is that $\beta$ lies in an open interval instead of a half-open interval.
Proving Tree Recursions {#sec:proverec}
=======================
In this section, we give proofs of the (standard) tree recursions, which we have already used. We first prove Proposition \[lem:onesteprecursion\] for the one-step recursion.
Suppose $q\geq 3$, $ d\geq 2$ and $\beta\in (0,1)$. For an integer $n\geq 1$, let $T$ be the tree ${\mathbb{T}_{d,n}}$ with root $v=v_{d,n}$ and leaves $\Lambda={\Lambda_{{\mathbb{T}_{d,n}}}}$. Let $\tau:\Lambda\rightarrow[q]$ be an arbitrary configuration.
Let $v_1,\ldots,v_d$ be the children of $v$ in $T$. For $i\in [d]$, let $T_i$ be the subtree of $T$ rooted at $v_i$ and let $\Lambda_i$ denote the set of leaves of the subtree $T_i$. Then, for any colour $c\in[q]$, it holds that $$\Pr_{T}[\sigma(v) = c\mid \sigma(\Lambda)=\tau] =
\frac{\prod_{i=1}^d \big(1-(1-\beta)
\Pr_{T_i}[\sigma(v_i)= c \mid \sigma(\Lambda_{i}) = \tau(\Lambda_{i})]\big)}
{\sum_{c'=1}^q\prod_{i=1}^d \big( 1-(1-\beta)
\Pr_{T_i}[\sigma(v_i) = c'\mid \sigma(\Lambda_{i}) = \tau(\Lambda_{i})]\big)}.$$
For any graph $G$, we use $V(G)$ to denote the vertex set of $G$. Recall that, for any configuration $\sigma: V(G)\rightarrow [q]$, its weight in the Potts model with parameter $\beta$ is given by $w_G(\sigma)=\beta^{m(\sigma)}$ where $m(\sigma)$ denotes the number of monochromatic edges in $G$ under the assignment $\sigma$. If $v\in V(G)$ then we use the notation $w_G(\sigma(v)= c)$ to denote the quantity $w_G(\sigma(v)=c) = \sum_{\sigma'\colon V(G)\to[q], \sigma'(v)=c} w_G(\sigma')$. Similarly, if $S$ is a subset of $V(G)$ and $\tau$ is an assignment $\tau:S \rightarrow [q]$ then we use the notation $w_G(\sigma(S)=\tau) = \sum_{\sigma'\colon V(G)\to[q], \sigma'(S)=\tau} w_G(\sigma')$. We will typically be interested in the case where $G$ is a sub-tree of $T$. We have $$\label{eq:conditioned}
\Pr_{T}[\sigma(v) = c\mid \sigma(\Lambda)=\tau] =\frac{\Pr_{T}[\sigma(v) = c\land \sigma(\Lambda)=\tau]}{\Pr_{T}[\sigma(\Lambda)=\tau]}=\frac{w_{T}(\sigma(v) = c\land \sigma(\Lambda)=\tau)}{w_{T}(\sigma(\Lambda)=\tau)}.$$
Now we compute $w_T(\sigma(v) = c\land \sigma(\Lambda)=\tau)$: $$\begin{aligned}
\label{eq:weight}
w_T(\sigma(v) &= c\land \sigma(\Lambda)=\tau)\cr =&\prod_{i=1}^d
\Big( w_{T_i}\big(\sigma(v_i) \neq c\land \sigma(\Lambda_i)=\tau(\Lambda_i)\big)+\beta\, w_{T_i}\big(\sigma(v_i) = c\land \sigma(\Lambda_i)=\tau(\Lambda_i)\big)\Big)\cr
=&\prod_{i=1}^d w_{T_i}\big(\sigma(\Lambda_i)=\tau(\Lambda_i)\big) \left(1- (1-\beta)\frac{w_{T_i}\big(\sigma(v_i) = c\land \sigma(\Lambda_i)=\tau(\Lambda_i)\big)}{w_{T_i}\big(\sigma(\Lambda_i)=\tau(\Lambda_i)\big) }\right)\cr
=&\prod_{i=1}^d w_{T_i}\big(\sigma(\Lambda_i)=\tau(\Lambda_i)\big)
\Big(1-(1-\beta)
\Pr_{T_i}\big[\sigma(v_i)= c \mid \sigma(\Lambda_{i}) = \tau(\Lambda_{i})\big]\Big).
\end{aligned}$$ Also, we have $w_T(\sigma(\Lambda)=\tau) = \sum_{c'=1}^q w_T(\sigma(v) = c'\land\sigma(\Lambda)=\tau)$. Combining this with \eqref{eq:conditioned} and \eqref{eq:weight}, we obtain the statement of the proposition.
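The one-step recursion is easy to sanity-check numerically. The following Python sketch (ours, for illustration only; it is not part of the proof, and all function names are chosen here) computes the conditional root marginal by brute-force summation over the internal vertices of a small tree and compares it with the right-hand side of Proposition \[lem:onesteprecursion\] applied to the children's marginals.

```python
import itertools

def brute_force_root_marginal(d, n, beta, q, tau, colour):
    """Pr[sigma(root)=colour | sigma(leaves)=tau] for the Potts model with
    parameter beta on the complete d-ary tree of height n.  Vertices are stored
    breadth-first (root 0, children of v are d*v+1,...,d*v+d); tau lists the
    leaf colours (0..q-1) in that order."""
    num_vertices = (d ** (n + 1) - 1) // (d - 1)
    num_leaves = d ** n
    internal = num_vertices - num_leaves
    total = favourable = 0.0
    for assignment in itertools.product(range(q), repeat=internal):
        sigma = list(assignment) + list(tau)
        mono = sum(sigma[v] == sigma[d * v + i]
                   for v in range(internal) for i in range(1, d + 1))
        weight = beta ** mono            # weight beta per monochromatic edge
        total += weight
        if sigma[0] == colour:
            favourable += weight
    return favourable / total

def one_step(beta, child_marginals, colour):
    """Right-hand side of the one-step recursion."""
    q = len(child_marginals[0])
    unnorm = [1.0] * q
    for c in range(q):
        for r in child_marginals:
            unnorm[c] *= 1.0 - (1.0 - beta) * r[c]
    return unnorm[colour] / sum(unnorm)

d, n, q, beta = 2, 2, 3, 0.4
tau = (0, 1, 2, 1)                       # an arbitrary leaf configuration
# Marginals at the two children of the root, each computed in its height-1 subtree.
r1 = [brute_force_root_marginal(d, 1, beta, q, tau[:2], c) for c in range(q)]
r2 = [brute_force_root_marginal(d, 1, beta, q, tau[2:], c) for c in range(q)]
for c in range(q):
    direct = brute_force_root_marginal(d, n, beta, q, tau, c)
    via_recursion = one_step(beta, [r1, r2], c)
    print(c, direct, via_recursion)      # the two columns should agree up to rounding
```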
We next prove Proposition \[lem:twostep\] for the two-step recursion of Section \[sec:introtwostep\].
Suppose $q\geq 3$, $ d\geq 2$ and $\beta\in (0,1)$. For an integer $n\geq2$, let $T$ be the tree ${\mathbb{T}_{d,n}}$ with root $z=v_{d,n}$ and leaves $\Lambda={\Lambda_{{\mathbb{T}_{d,n}}}}$. Let $\tau \colon \Lambda \to [q]$ be an arbitrary configuration.
Let $z_1,\ldots,z_d$ be the children of $z$ in $T$ and, for $i\in [d]$, let $\{z_{i,j}\}_{j\in[d]}$ be the children of $z_i$. Denote by $T_{i,j}$ the subtree of $T$ rooted at $z_{i,j}$ and by $\Lambda_{i,j}$ the set of leaves of $T_{i,j}$. For $i\in[d]$, $j\in[d]$ and $c\in[q]$, let $r^{(i,j)}_c:=
\Pr_{T_{i,j}}[\sigma(z_{i,j})=c \mid \sigma(\Lambda_{i,j})= \tau(\Lambda_{i,j})]$, and denote by ${\mathbf{r}}^{(i,j)}$ the vector ${\mathbf{r}}^{(i,j)}=\big(r^{(i,j)}_1, \ldots, r^{(i,j)}_q\big)$. Then for any colours $c_1\in[q]$ and $c_2\in[q]$ we have $$\frac
{\Pr_{{\mathbb{T}_{d,n}}}[\sigma(z) = c_1\mid\sigma( \Lambda)=\tau]}
{\Pr_{ {\mathbb{T}_{d,n}}}[\sigma(z) = c_2\mid\sigma( \Lambda)=\tau]}=\prod^{d}_{k=1}h_{c_1,c_2,\beta}\big({\mathbf{r}}^{(k,1)}, \ldots, {\mathbf{r}}^{(k,d)}\big).$$
For $i\in[d]$ and $c\in[q]$, let $T_i$ be the subtree of $T$ rooted at $z_i$ with leaves $\Lambda_{T_i}$ and let $r^{(i)}_c=\Pr_{T_i}[\sigma(z_{i})=c \mid \sigma(\Lambda_{T_i})= \tau(\Lambda_{T_i})]$; let ${\mathbf{r}}^{(i)}$ be the vector $(r^{(i)}_1, \ldots, r^{(i)}_q )$.
For every $i\in[d]$ and $c\in[q]$ we can apply Proposition \[lem:onesteprecursion\] to $T_i$ to obtain $$r^{(i)}_c = \frac{\prod_{j=1}^d \big(1-(1-\beta)r^{(i,j)}_c\big)}{\sum_{c'=1}^q \prod_{j=1}^d \big(1-(1-\beta)r^{(i,j)}_{c'}\big)}.$$ We have $r^{(i)}_c>0$ for every $c\in[q]$, hence we can apply Proposition \[lem:onesteprecursion\] to $T$ to obtain $$\Pr_{{\mathbb{T}_{d,n}}}[\sigma(z)=c\mid \sigma(\Lambda) = \tau] = \frac{\prod_{i=1}^d \big(1-(1-\beta)r^{(i)}_c\big)}{\sum_{c'=1}^q \prod_{i=1}^d \big(1-(1-\beta)r^{(i)}_{c'}\big)}.$$
Thus, for every $i\in[d]$ and $c_1,c_2\in[q]$, $$\label{eq:f34g11123}
\frac{r^{(i)}_{c_1}}{r^{(i)}_{c_2}}= \prod^d_{j=1}\frac{1-(1-\beta)r^{(i,j)}_{c_1}}{1-(1-\beta)r^{(i,j)}_{c_2}}=g_{c_1,c_2,\beta} \big({\mathbf{r}}^{(i,1)},\ldots,{\mathbf{r}}^{(i,d)}\big).$$ Analogously, for every $c_1, c_2 \in [q]$, we have $$\label{eq:f34g11123a}
\frac{\Pr_{{\mathbb{T}_{d,n}}}[\sigma(z)=c_1\mid \sigma(\Lambda) = \tau]}{\Pr_{{\mathbb{T}_{d,n}}}[\sigma(z)=c_2\mid \sigma(\Lambda) = \tau]} = \prod_{k=1}^d\frac{1-(1-\beta)r^{(k)}_{c_1}}{1-(1-\beta)r^{(k)}_{c_2}}=\prod_{k=1}^d\bigg(1+\frac{(1-\beta)\big(1-r^{(k)}_{c_1}/r^{(k)}_{c_2}\big)}{\beta+\sum_{c\neq c_2} r^{(k)}_{c}/r^{(k)}_{c_2}}\bigg).$$ Plugging \eqref{eq:f34g11123} into \eqref{eq:f34g11123a}, and using the definition of $h_{c_1,c_2,\beta}$ from \eqref{eq:gh12def}, we obtain the statement of the proposition.
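The same kind of numerical check applies to the two-step formula. The sketch below (ours; Python rather than the Mathematica used in the appendices) draws random grandchild marginals ${\mathbf{r}}^{(i,j)}$, propagates them to the root with two applications of the one-step recursion, and compares the resulting ratio with the product of $h_{c_1,c_2,\beta}$ values, where $g_{c_1,c_2,\beta}$ and $h_{c_1,c_2,\beta}$ are transcribed directly from \eqref{eq:gh12def}.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(c1, c2, beta, ps):
    """g_{c1,c2,beta}(p^(1),...,p^(d))."""
    val = 1.0
    for p in ps:
        denom = beta * p[c2] + sum(p[c] for c in range(len(p)) if c != c2)
        val *= 1.0 - (1.0 - beta) * (p[c1] - p[c2]) / denom
    return val

def h(c1, c2, beta, ps):
    """h_{c1,c2,beta}(p^(1),...,p^(d))."""
    q = len(ps[0])
    gs = [g(c, c2, beta, ps) for c in range(q)]
    return 1.0 + (1.0 - beta) * (1.0 - gs[c1]) / (beta + sum(gs[c] for c in range(q) if c != c2))

def one_step(beta, child_marginals):
    """Root marginal vector from the children's marginal vectors (one-step recursion)."""
    q = len(child_marginals[0])
    unnorm = np.array([np.prod([1.0 - (1.0 - beta) * r[c] for r in child_marginals])
                       for c in range(q)])
    return unnorm / unnorm.sum()

q, d, beta, c1, c2 = 3, 2, 0.4, 0, 1
r = [[rng.dirichlet(np.ones(q)) for _ in range(d)] for _ in range(d)]  # grandchild marginals r^(i,j)
root = one_step(beta, [one_step(beta, r[i]) for i in range(d)])        # two applications of the one-step recursion
lhs = root[c1] / root[c2]
rhs = np.prod([h(c1, c2, beta, r[k]) for k in range(d)])
print(lhs, rhs)   # the two numbers should agree up to floating-point error
```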
Bounds from the one-step recursion – Proof of Lemma \[lem:onestepuniq\] {#sec:bounds12onestep}
=======================================================================
In this section, we prove Lemma \[lem:onestepuniq\].
Bounding the marginal probability at the root by the one-step recursion {#sec:fns}
-----------------------------------------------------------------------
We begin by giving an upper and a lower bound for the marginal probability that the root is assigned a colour $c$ via the one-step recursion (see the upcoming Lemma \[lem:boundbyonesteprecursion\]).
First we define two functions. Let $$f_u(d, \beta, x,y) =
\frac{\big(1-(1-\beta)y\big)^d}{\big(1-(1-\beta) y\big)^d + 2\big(1-(1-\beta) x\big)^{d/2}\big(1-(1-\beta)(1-x-y)\big)^{d/2}}$$ and $$f_\ell(d, \beta,x, y) = \frac{\big(1-(1-\beta) x\big)^d}{\big(1-(1-\beta) x\big)^d + \big(1-(1-\beta) y\big)^d + \big(1-(1-\beta)(1-x-y)\big)^d}.$$
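For numerical experiments it is convenient to have $f_u$ and $f_\ell$ available in code. The following Python transcription (ours; the names `f_u` and `f_l` are not used elsewhere in the paper) also evaluates the functions at the extreme point $(x,y)=(1,0)$, which should reproduce the values $1/(1+2\beta^{d/2})$ and $\beta^d/(\beta^d+2)$ that appear as the first iterates in the base case of Lemma \[lem:seqxymono\] below.

```python
def f_u(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * y) ** d
    return top / (top + 2.0 * ((1.0 - b * x) * (1.0 - b * (1.0 - x - y))) ** (d / 2))

def f_l(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * x) ** d
    return top / (top + (1.0 - b * y) ** d + (1.0 - b * (1.0 - x - y)) ** d)

d, beta = 3, 0.5
print(f_u(d, beta, 1.0, 0.0), 1 / (1 + 2 * beta ** (d / 2)))   # should coincide
print(f_l(d, beta, 1.0, 0.0), beta ** d / (beta ** d + 2))     # should coincide
```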
We will use the following lemma.
\[lem:ferfe\] Let $f$ be a convex function on an interval $I=[a,b]$.
1. \[it:dec1\] Given $\rho\in [2a,a+b]$, the function $g(\newx) = f(\newx) + f(\rho-\newx)$ is decreasing on $J=[a,\rho/2]$.
2. \[it:inc2\] Given $\rho\in [a+b,2b]$, the function $g(\newx) = f(\newx) + f(\rho-\newx)$ is increasing on $J=[\rho/2,b]$.
We first prove Item \[it:dec1\]. Suppose $y,\tempx\in J$ satisfy $y< \tempx$; we will show that $g(\tempx)\leq g(y)$. We have $y < \tempx \leq \rho-\tempx < \rho-y$ and $y\geq a$, $\rho-y\leq b$ (using $\rho\leq a+b$). It follows that all of $y,\tempx,\rho-y,\rho-\tempx$ belong to $I$. Moreover, by the convexity of $f$ on $I$, we conclude that the slope of $f$ in the interval $[\rho-\tempx,\rho-y]$ is greater than or equal to the slope of $f$ in the interval $[y,\tempx]$, i.e., $\frac{f(\rho-y)-f(\rho-\tempx)}{(\rho-y)-(\rho-\tempx)}\geq \frac{f(\tempx)-f(y)}{\tempx-y}$. Re-arranging, we obtain $g(\tempx) \leq g(y)$.
The proof of Item \[it:inc2\] is analogous. For $y,\tempx\in J$ satisfying $y< \tempx$, we have that $\rho-\tempx<\rho-y\leq y < \tempx$ and all of $y,\tempx,\rho-y,\rho-\tempx$ belong to $I$ (using $\rho\geq a+b$). By the convexity of $f$ on $I$, we conclude that the slope of $f$ in the interval $[\rho-\tempx,\rho-y]$ is less than or equal to the slope of $f$ in the interval $[y,\tempx]$, which gives that $g(\tempx)\geq g(y)$.
The following lemma gives recursively-generated bounds on the probability that the root of ${\mathbb{T}_{d,n}}$ is a given colour.
\[lem:boundbyonesteprecursion\] Suppose $q= 3$, $d\geq 2$ and $\beta\in [0,1]$. For any $n\geq0$, let $T={\mathbb{T}_{d,n}}$ with root $v=v_T$ and leaves $\Lambda=\Lambda_T$. Let $v_1,\ldots,v_d$ be the children of $v$ in $T$, $T_i$ be the subtree of $T$ rooted at $v_i$ and $\Lambda_i$ denote the set of leaves of the subtree $T_i$. Consider any configuration $\tau \colon \Lambda \to [q]$ and any real numbers $L,U \in [0,1]$ such that, for all $i\in [d]$ and all $j\in[3]$ we have $$L\leq \Pr_{T_i}[\sigma(v_i)=j \mid \sigma(\Lambda_i)= \tau(\Lambda_i)]\leq U.$$ Then, for all colours $c\in[q]$, we also have $$f_\ell(d, \beta, U, L) \leq \Pr_T[\sigma(v)=c \mid \sigma(\Lambda)= \tau] \leq f_u(d, \beta, U, L).$$
By symmetry between the colours, we may assume that $c=1$. Let $p= \Pr_T[\sigma(v)=1 \mid \sigma(\Lambda)= \tau]$. For any colour $c'\in [3]$ and any child $i\in [d]$, let $p_{i,c'}=\Pr_{T_i}[\sigma(v_i)=c' \mid \sigma(\Lambda_i)= \tau(\Lambda_i)]$.
For convenience, let $\hat{\beta} := 1-\beta \in [0,1]$. By Proposition \[lem:onesteprecursion\], with $q=3$ and $c=1$, we have that $$\label{eq:btb5y56}
p = \frac{1}{1+R}, \mbox{ where } R := \frac{\sum_{c'=2}^3\prod_{i=1}^d\big(1-\hat\beta p_{i,c'}\big)}{\prod_{i=1}^d \big(1-\hat\beta p_{i,1}\big)}.$$ We first show that $ p \geq f_\ell(d, \beta, U, L)$. Since $p_{i,1}\leq U$ for every $i\in [d]$, we obtain that $$\label{eq:rvrveetr2}
R \leq \frac{\sum_{c'=2}^3\prod_{i=1}^d\big(1-\hat\beta p_{i,c'}\big)}{\big(1-\hat\beta U\big)^d}.$$ For $c'\in[3]$, let $\bar{p}_{c'}$ denote the mean $\frac{1}{d}\sum_{i\in[d]} p_{i,c'}$. The function $f(x) = \ln(1-\hat{\beta} x)$ is concave on the interval $[0,1]$ so by Jensen’s inequality $$\frac{1}{d} \sum_{i=1}^d \ln(1-\hat{\beta} p_{i,c'}) \leq \ln(1-\hat{\beta} \bar{p}_{c'}).$$ Thus, for $c'\in\{2,3\}$ we have $ \prod_{i=1}^d\big(1-\hat\beta p_{i,c'}\big)\leq
\big(1-\hat\beta \bar{p}_{c'}\big)^d$ which implies $$\label{eq:4g4g6gg}
\sum_{c'=2}^3\prod_{i=1}^d\big(1-\hat\beta p_{i,c'}\big)\leq
\big(1-\hat\beta \bar{p}_2\big)^d+\big(1-\hat\beta \bar{p}_3\big)^d.$$
Let $f(x) = (1-\hat\beta x)^d$. Let $a=0, b=1, \rho=\bar{p}_2 + \bar{p}_3$ and consider the interval $I=[a,b]$. Since $f$ is convex on the interval $I$ and $\rho\in [2a,a+b]$, Item \[it:dec1\] of Lemma \[lem:ferfe\] implies that the function $g(x) = f(x) + f(\rho-x)$ is decreasing on $J=[a,\rho/2]=[0,\rho/2]$. Since $L\leq \bar{p}_2$ and $L\leq \bar{p}_3$, the values $L$ and $\min\{\bar{p}_2,\bar{p}_3\}$ are in $J$ and $\min\{\bar{p}_2,\bar{p}_3\}\geq L$, so $g(\min\{\bar{p}_2,\bar{p}_3\})\leq g(L)$, i.e., $$\big(1-\hat\beta \bar{p}_2\big)^d+\big(1-\hat\beta \bar{p}_3\big)^d
\leq \big(1-\hat\beta L\big)^d+\big(1-\hat\beta (\bar{p}_2+\bar{p}_3-L)\big)^d.$$ Since, for every $i\in[d]$, $p_{i,2}+p_{i,3} = 1-p_{i,1}\geq 1-U$, we have $\bar{p}_2 + \bar{p}_3 \geq 1-U$, so $$\big(1-\hat\beta \bar{p}_2\big)^d+\big(1-\hat\beta \bar{p}_3\big)^d
\leq \big(1-\hat\beta L\big)^d+\big(1-\hat\beta (1-U-L)\big)^d.$$ Plugging this into \eqref{eq:4g4g6gg} and then into \eqref{eq:rvrveetr2}, we obtain that $$R \leq \frac{\big(1-\hat\beta L\big)^d+\big(1-\hat\beta (1-U-L)\big)^d}{\big(1-\hat\beta U\big)^d}.$$ Therefore, using \eqref{eq:btb5y56}, we obtain the lower bound $ p\geq f_\ell(d,\beta, U, L)$.
Next, we show that $ p \leq f_u(d,\beta, U, L)$. To give an upper bound on $ p$, it suffices to lower bound $R$. Since $p_{i,1}\geq L$ for every $i\in [d]$, we obtain the lower bound $$\label{eq:rvrv342}
R \geq \frac{\sum_{c'=2}^3\prod_{i=1}^d\big(1-\hat\beta p_{i,c'}\big)}{\big(1-\hat\beta L\big)^d}.$$ Using the arithmetic-mean geometric-mean inequality we have $$\label{eq:wds134}
\sum_{c'=2}^3\prod_{i=1}^d\big(1-\hat\beta p_{i,c'}\big)\geq 2\prod^d_{i=1}\Big(\big(1-\hat\beta p_{i,2}\big)\big(1-\hat\beta p_{i,3}\big)\Big)^{1/2}.$$
Now let $f(x) = -\ln(1-\hat\beta x\big)$ and consider an arbitrary $i\in [d]$. Let $a=-1, b=1, \rho=p_{i,2}+p_{i,3}$ and consider the interval $I=[a,b]$. Since $f$ is convex on $I$ and $\rho\in [a+b,2b]$, Item \[it:inc2\] of Lemma \[lem:ferfe\] implies that the function $g(x) = f(x) + f(\rho-x)$ is increasing on the interval $J=[\rho/2,b]=[\rho/2,1]$. Let $x= U$ and $y = \max\{p_{i,2},p_{i,3}\}$. Since $p_{i,2}$ and $p_{i,3}$ are at most $U$, $x$ and $y$ are in $J$ and satisfy $x\geq y$. Therefore, $g(x)\geq g(y)$, which gives that $$\ln\big(1-\hat\beta y\big) + \ln\big(1-\hat\beta (\rho-y)\big) \geq \ln\big(1-\hat\beta x\big) + \ln\big(1-\hat\beta (\rho-x)\big).$$ Thus, by substituting in the values of $x$, $y$ and $\rho$ and exponentiating, we have $$\big(1-\hat\beta p_{i,2}\big)\big(1-\hat\beta p_{i,3}\big)
\geq \big(1-\hat\beta U\big)\big(1-\hat\beta (p_{i,2}+p_{i,3}-U)\big).$$ Using the inequality $p_{i,2}+p_{i,3} = 1-p_{i,1}\leq 1-L$ in the right-hand side, we get $$\big(1-\hat\beta p_{i,2}\big)\big(1-\hat\beta p_{i,3}\big)
\geq \big(1-\hat\beta U\big)\big(1-\hat\beta (1-U-L)\big).$$ Plugging this into \eqref{eq:wds134} for each $i\in[d]$ and then into \eqref{eq:rvrv342}, we obtain that $$R \geq \frac{2\big(1-\hat\beta U\big)^{d/2}\big(1-\hat\beta(1-U-L)\big)^{d/2}}{\big(1-\hat\beta L\big)^d}.$$ Therefore, using \eqref{eq:btb5y56}, we obtain the upper bound $ p \leq f_u(d,\beta, U, L)$.
Properties of the functions $f_u$ and $f_\ell$
----------------------------------------------
In this section, we establish useful monotonicity properties of the functions $f_u$ and $f_\ell$ that will be relevant later.
\[lem:fbetamono\] For any fixed $d \geq 2$ and any fixed $x$ and $y$ satisfying $0\leq y \leq x \leq 1$ and $2y + x \leq 1 \leq 2x + y$,
- $f_u(d,\beta,x,y)$ is a decreasing function of $\beta$ on the interval $(0, 1)$ and
- $f_\ell(d, \beta,x,y)$ is an increasing function of $\beta$ on the interval $(0,1)$.
Let $\hat{\beta} := 1- \beta$, and $W = 1 - 3 y + \hat\beta \big((3 y - 1) + (1 - x - 2 y) + x (2 x + y - 1) +y (x - y)\big)$. The derivative of $f_u$ with respect to $\beta$ is given by $${\frac{\partial{f_u}}{\partial{\beta}}} = -f_u^2\frac{d(1-\hat\beta x)^{d/2}(1-\hat\beta (1-x-y))^{d/2} {W}\strut }
{\strut (1-\hat\beta y)^{d+1}(1-\hat\beta x)(1-\hat\beta (1-x-y))}.$$ (Obviously, this can be checked directly, but the reader may prefer to use the Mathematica code in Section \[app:fbetamono\] to check this and the derivative of $f_\ell$ with respect to $\beta$, which appears below.) Using the conditions on $x$ and $y$ in the statement of the lemma, we find that $$W \geq 1 - 3 y + \hat\beta (3y-1)=\beta(1-3y)\geq 0.$$ We conclude that $ {\frac{\partial{f_u}}{\partial{\beta}}}\leq 0$ so $f_u(d,\beta,x,y)$ is a decreasing function of $\beta$ on the interval $[0, 1]$. Similarly, $${\frac{\partial{f_\ell}}{\partial{\beta}}}= f_\ell^2\frac{d \left( (2 x+y-1) (1-\hat\beta (1-x-y))^{d-1}+(x-y) (1-\hat\beta y)^{d-1} \right)}{(1-\hat\beta x)^{d+1}}\geq 0,$$ so $f_\ell(d, \beta,x,y)$ is an increasing function of $\beta$ on the interval $(0,1)$.
\[lem:flxymono\] For any fixed $d\geq 2$ and $0<\beta\leq 1$,
1. \[it:weq51\] $f_\ell(d,\beta,x,y)$ is a decreasing function of $x$ when $x,y\in [0,1]$, and
2. \[it:weq52\] $f_\ell(d, \beta, x,y)$ is an increasing function of $y$ when $x+2y\leq 1$ and $x,y\in [0,1]$.
Let $$R := \frac{\big(1-(1-\beta) y\big)^d+\big(1-(1-\beta)(1-x-y)\big)^d}{\big(1-(1-\beta)x\big)^d}, \mbox{ so that } f_\ell(d,\beta,x,y)= \frac{1}{1+R}.$$
We first prove Item \[it:weq52\]. Let $a=0$, $b=1$, $\rho=1-x$ and consider the interval $I=[a,b]$. Since $f(y) = (1-(1-\beta) y)^d$ is convex on $I$ and $\rho\in [2a,a+b]$, Item \[it:dec1\] of Lemma \[lem:ferfe\] yields that $g(y) = f(y) + f(\rho-y)$ is decreasing on $J = [a,\rho/2]=[0,\rho/2]$. It follows that, for fixed $x$, $R$ is a decreasing function of $y$ on $J$ and therefore $f_\ell$ is increasing in $y$. It remains to observe that, for $x\in [0,1]$, the condition $y\in J$ is equivalent to the condition $x+2y\leq 1$ and $y\in [0,1]$ in the statement.
For Item \[it:weq51\], note that $1-(1-\beta)(1-x-y)$ is an increasing nonnegative function of $x$ and $1-(1-\beta)x$ is a decreasing nonnegative function of $x$, so $R$ is an increasing function of $x$. Thus, $f_\ell$ is a decreasing function of $x$.
\[lem:fuxymono\] For any fixed $d\geq 2$ and $0<\beta\leq 1$,
1. \[it:we14\] $f_u(d,\beta,x,y)$ is an increasing function of $x$ when $1 \leq 2x+y$ and $x,y\in[0,1]$, and
2. \[it:we15\] $f_u(d, \beta, x,y)$ is a decreasing function of $y$ when $x,y\in [0,1]$.
The proof is analogous to that of Lemma \[lem:flxymono\]. Let $$R := \frac{2\big(1-(1-\beta) x\big)^{d/2}\big(1-(1-\beta)(1-x-y)\big)^{d/2}}{\big(1-(1-\beta)y\big)^d}, \mbox{ so that } f_u(d,\beta,x,y)= \frac{1}{1+R}.$$ We first prove Item \[it:we14\]. Let $a=-y$, $b=1$, $\rho=1-y$ and consider the interval $I=[a,b]$. Since the function $f(x)=-\ln\big(1-(1-\beta)x\big)$ is convex on the interval $I$ and $\rho\in [a+b,2b]$, by Item \[it:inc2\] of Lemma \[lem:ferfe\], the function $g(x)=f(x)+f(\rho-x)$ is increasing on the interval $J=[\rho/2,b]$. It follows that the function $$\exp(-g(x))=\big(1-(1-\beta) x\big)\big(1-(1-\beta)(1-x-y)\big)$$ is a decreasing function of $x$ on $J$, and therefore $R$ has the same property as well. Thus, $f_u$ is an increasing function of $x$ on the interval $J$. It remains to observe that, for $y\in [0,1]$, the condition $x\in J$ is equivalent to the condition $1 \leq 2x+y$ and $x\in [0,1]$ in the statement.
For Item \[it:we15\], note that $1-(1-\beta)(1-x-y)$ is an increasing nonnegative function of $y$ and $1-(1-\beta)y$ is a decreasing nonnegative function of $y$, so $R$ is an increasing function of $y$. Thus, $f_u$ is a decreasing function of $y$.
\[lem:fimgrange\] For any fixed $d \geq 2$, $0 < \beta < 1$ and $0 \leq y\leq x \leq 1$ such that $2y+x \leq 1 \leq 2x + y$, we have$$2f_\ell(d,\beta,x,y) + f_u(d,\beta,x,y) \leq1\leq 2f_u(d,\beta,x,y)+f_\ell(d,\beta,x,y).$$
Since $2y+x \leq 1 \leq 2x+y$, we obtain that $y\leq 1-x-y\leq x$. Further, by the AM-GM inequality, $$\big(1-(1-\beta) x\big)^d+\big(1-(1-\beta)(1-x-y)\big)^d\geq 2\big(1-(1-\beta)
x\big)^{d/2}\big(1-(1-\beta)(1-x-y)\big)^{d/2}.$$ So the denominator in the definition of $f_\ell(d,\beta,x,y)$ is at least as big as the denominator in the definition of $f_u(d,\beta,x,y)$. We conclude that $$2f_\ell+f_u \leq\frac{2\big(1-(1-\beta)x\big)^d+\big(1-(1-\beta)y\big)^d}{\big(1-(1-\beta) y\big)^d + 2\big(1-(1-\beta) x\big)^{d/2}\big(1-(1-\beta)(1-x-y)\big)^{d/2}} \leq 1,$$ and $$2f_u + f_\ell \geq \frac{2\big(1-(1-\beta) y\big)^d+\big(1-(1-\beta) x\big)^d}{\big(1-(1-\beta) x\big)^d + \big(1-(1-\beta) y\big)^d + \big(1-(1-\beta)(1-x-y)\big)^d}\geq 1.\qedhere$$
\[lem:fulxymono\] For any fixed $d \geq 2$, $0 < \beta < 1$ and $0 \leq y_1 \leq y_2 \leq x_2 \leq x_1 \leq 1$ such that $2y_1+x_1 \leq 1 \leq 2x_1 + y_1$ and $2y_2+x_2 \leq 1 \leq 2x_2 + y_2$, we have $$f_u(d,\beta,x_2,y_2)\leq f_u(d,\beta,x_1,y_1) \mbox{ \ \ and \ \ } f_\ell(d,\beta,x_2,y_2)\geq f_\ell(d,\beta,x_1, y_1).$$
Using the assumptions in the statement of the lemma, we obtain $
1\leq 2x_1+y_2$ and $2y_1+x_2\leq 1$. Therefore, by Lemmas \[lem:flxymono\] and \[lem:fuxymono\], we obtain $$\begin{aligned}
&f_u(d,\beta, x_2,y_2) \leq f_u(d,\beta,x_1, y_2) \leq f_u(d,\beta,x_1,y_1), \text{ and}\\
&f_\ell(d,\beta,x_2,y_2) \geq f_\ell(d,\beta, x_2,y_1) \geq f_\ell(d, \beta,x_1,y_1).\qedhere
\end{aligned}$$
Bounding the marginal probability at the root by two sequences
--------------------------------------------------------------
For any $\beta > 0$ and $d \geq 2$, we define two sequences: $$\begin{cases}
u_0(d,\beta) = 1,\cr
\ell_0(d, \beta) = 0,
\end{cases}$$ and for every non-negative integer $n$, $$\label{def:unelln}
\begin{cases}
u_{n+1}(d,\beta) = f_u(d,\beta,u_n(d,\beta), \ell_n(d,\beta)), \text{and}\cr
\ell_{n+1}(d,\beta) = f_\ell(d,\beta, u_n(d,\beta),\ell_n(d, \beta)).
\end{cases}$$
Our interest in the sequences $u_n(d,\beta)$ and $\ell_n(d, \beta)$ is that they give upper and lower bounds on the probability $\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}}) = c]$, respectively (subject to any boundary configuration at the leaves).
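Iterating \eqref{def:unelln} numerically is straightforward. The sketch below (ours, floating point only, for illustration) repeats the `f_u`, `f_l` transcriptions from the earlier sketch so that it is self-contained, and then computes the two sequences.

```python
def f_u(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * y) ** d
    return top / (top + 2.0 * ((1.0 - b * x) * (1.0 - b * (1.0 - x - y))) ** (d / 2))

def f_l(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * x) ** d
    return top / (top + (1.0 - b * y) ** d + (1.0 - b * (1.0 - x - y)) ** d)

def u_l_sequences(d, beta, n_max):
    """Return [u_0,...,u_{n_max}] and [l_0,...,l_{n_max}] from the recursion."""
    u, l = 1.0, 0.0
    us, ls = [u], [l]
    for _ in range(n_max):
        u, l = f_u(d, beta, u, l), f_l(d, beta, u, l)   # simultaneous update from (u_n, l_n)
        us.append(u)
        ls.append(l)
    return us, ls

d = 6
beta = 1 - 3 / (d + 1)        # the critical parameter beta_*(d) introduced below
us, ls = u_l_sequences(d, beta, 80)
# u_n is non-increasing and l_n is non-decreasing (Lemma lem:seqxymono); for d in {3,...,22}
# and n >= 60 the ratio u_n/l_n is at most 53/27 (Corollary lem:d<=22bound).
print(us[-1], ls[-1], us[-1] / ls[-1])
```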
\[lem:probbound\] Suppose that $q=3$, $d\geq 2$, and $\beta\in(0,1)$. For any $n\geq 0$, for the $d$-ary tree ${\mathbb{T}_{d,n}}$ with depth $n$ and root ${v_{d,n}}$, for any configuration $\tau\colon{\Lambda_{{\mathbb{T}_{d,n}}}}\to [q]$ on the leaves and any colour $c\in[q]$, it holds that $$\ell_n(d,\beta)\leq\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]\leq u_n(d,\beta).$$
We prove the lemma by induction on $n$. For the base case $n=0$, note that ${\mathbb{T}_{d,n}}$ has a single vertex. Thus, for every $c\in[q]$ and every $\tau$ assigning a colour to this vertex, $$\ell_0(d,\beta)=0\leq\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]\leq 1= u_0(d,\beta).$$
For the inductive step, suppose $n>0$. For convenience, denote by $T$ the tree ${\mathbb{T}_{d,n}}$, by $v$ the root ${v_{d,n}}$ and by $\Lambda$ the leaves ${\Lambda_{{\mathbb{T}_{d,n}}}}$. Let $v_1,\ldots,v_d$ be the children of $v$ in $T$ and let $\Lambda_i = \Lambda_{T[v_i]}$ denote the set of leaves of the subtree $T[v_i]$. Consider any configuration $\tau \colon \Lambda \to [q]$ and any colour $c$. By the induction hypothesis, for every $i \in[d]$ and $j \in [q]$, we have $$\ell_{n-1}(d,\beta)\leq\Pr_{T[v_i]}[\sigma(v_i)=j\mid \sigma(\Lambda_i) = \tau(\Lambda_i)] \leq u_{n-1}(d,\beta).$$ By Lemma \[lem:boundbyonesteprecursion\] and \eqref{def:unelln}, we conclude that $\ell_n(d,\beta)\leq \Pr_{T}[\sigma(v)=c\mid \sigma(\Lambda)=\tau] \leq u_n(d,\beta)$.
The following lemma will be used to show that the sequences $u_n(d,\beta)$ and $\ell_n(d, \beta)$ converge.
\[lem:seqxymono\] For any fixed $d\geq 2$, $0<\beta < 1$ and $n \in {{\mathbb N}}$, we have
1. \[it:ununp1\] $u_n(d,\beta) \geq u_{n+1}(d,\beta)$,
2. \[it:lnlnp1\] $\ell_n(d, \beta) \leq \ell_{n+1}(d,\beta)$, and
3. \[it:unln1\] $2\ell_n(d,\beta) + u_n(d,\beta) \leq 1 \leq 2u_n(d,\beta) + \ell_n(d,\beta)$.
We prove this lemma by induction on $n$. Since $d$ and $\beta$ are fixed, we simplify the notation by writing $u_n$ for $u_n(d,\beta)$, writing $\ell_n$ for $\ell_n(d,\beta)$, writing $f_u(x,y)$ for $f_u(d,\beta,x,y)$ and writing $f_\ell(x,y)$ for $f_\ell(d,\beta,x,y)$.
For the base case $n=0$, we have $u_0=1$ and $\ell_0=0$, so Item \[it:unln1\] holds since $2\ell_0 + u_0= 1 < 2 u_0 + \ell_0$. Items \[it:ununp1\] and \[it:lnlnp1\] follow from $$u_{1} = f_u( 1,0)=\frac{1}{1+ 2\beta^{d/2}} < 1 = u_0, \mbox{\ and \ } \ell_{ 1} = f_\ell( 1,0) = \frac{\beta^d}{\beta^d + 2} > 0 = \ell_0.$$
For the inductive step, suppose $n > 0$. Item \[it:unln1\] follows (using the induction hypothesis) from Lemma \[lem:fimgrange\] with $x=u_{n-1}$ and $y=\ell_{n-1}$. We now obtain Items \[it:ununp1\] and \[it:lnlnp1\]. By the induction hypothesis, $$0\leq \ell_{n-1} \leq\ell_n \leq u_n \leq u_{n-1} \leq 1, \mbox{\ and \ } 2\ell_{n-1} + u_{n-1} \leq 1 \leq 2u_{n-1} + \ell_{n-1}.$$ Using Lemma \[lem:fulxymono\] (with these facts and with item 3), we obtain $$u_{n+1} = f_u( u_n , \ell_n )
\leq f_u( u_{n-1} , \ell_{n-1} )
= u_n,$$ proving Item \[it:ununp1\]. Similarly, we also obtain that $\ell_{n+1} = f_\ell( u_n, \ell_n )
\geq f_\ell( u_{n-1}, \ell_{n-1})
= \ell_n$, proving Item \[it:lnlnp1\].
By Lemma \[lem:seqxymono\], we have that the sequences $\{u_n(d,\beta)\}$ and $\{\ell_n(d,\beta)\}$ are bounded and monotonic, so they both converge. Let $$\label{eq:tb53tb5b3fwefe}
u_\infty(d,\beta):=\lim_{n\to\infty} u_n(d,\beta), \mbox{\ and \ } \ell_\infty(d,\beta):=\lim_{n\to\infty} \ell_n(d, \beta).$$ We have the following characterisation of the limits $u_\infty(d,\beta), \ell_\infty(d,\beta)$.
\[lem:fixedpoints\] For any $d\geq 2$ and $0 < \beta \leq 1$, $(x,y)=(u_\infty(d, \beta), \ell_\infty(d,\beta))$ is a solution to the system of equations $$\begin{cases}
f_u(d,\beta,x,y) = x\cr
f_\ell(d,\beta,x,y)=y
\end{cases}$$ satisfying $0<y\leq 1-x-y \leq x < 1$.
Since $d$ and $\beta$ are fixed, we simplify the notation by writing $u_\infty$ for $u_\infty(d,\beta)$ and $\ell_\infty$ for $\ell_\infty(d,\beta)$. We also drop $d$ and $\beta$ as parameters of $u_n$, $\ell_n$, $f_u$ and $f_\ell$ (as in the proof of Lemma \[lem:seqxymono\]). By Lemma \[lem:seqxymono\], we have $\ell_\infty \geq \ell_1 = {\beta^d}/(\beta^d+2) > 0$ and $u_\infty \leq u_1 = 1/(1+2\beta^{d/2}) < 1$. Also, for every non-negative integer $n$, we have $
\ell_n \leq 1
- u_n - \ell_n
\leq u_n$, which implies, by applying limits, that $ \ell_\infty \leq 1-u_\infty - \ell_\infty \leq u_\infty$.
Recall that, for $n\geq 0$, $u_{n+1} =f_u( u_n ,\ell_n )$ and $\ell_{n+1} = f_\ell( u_n ,\ell_n )$. Using these definitions and the continuity of the functions $f_u( x,y)$ and $f_\ell( x,y)$ with respect to $x$ and $y$ (in the third equality below), we have $$u_\infty = \lim_{n\rightarrow \infty} u_n
= \lim_{n\rightarrow \infty} f_u( u_{n-1} ,\ell_{n-1})
= f_u( \lim_{n\rightarrow \infty} u_{n-1},\lim_{n\rightarrow\infty} \ell_{n-1})
= f_u(u_\infty,\ell_\infty).$$ Similarly, $\ell_\infty = f_\ell(u_\infty,\ell_\infty)$.
Bounding the maximum ratio
--------------------------
In this section, we place the final pieces for the proof of Lemma \[lem:onestepuniq\]. The first lemma accounts for the $d=2$ case of Lemma \[lem:onestepuniq\].
\[lem:d=2bound\] Suppose $q = 3$ and $\beta\in(0,1)$. Then there is a positive integer $n_0$ such that for every $n \geq n_0$, every $c \in [q]$, and every configuration $\tau\colon {\Lambda_{{\mathbb{T}_{2,n}}}}\to [q]$, $$\frac{459}{2000}{\leq}\Pr_{{\mathbb{T}_{2,n}}}[\sigma({v_{2,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{2,n}}}}) = \tau] {\leq}\frac{1107}{2500}.$$
For any $0 < \beta \leq 1$, by Lemma \[lem:fixedpoints\], $(x, y) = (u_\infty(2,\beta), \ell_\infty(2,\beta))$ is a solution to the system of equations $$\label{eq:d=2fixedpoints}
\begin{cases}
f_u(2,\beta,x,y) = x\cr
f_\ell(2,\beta,x,y)=y
\end{cases}$$ satisfying $0<y\leq 1-x-y \leq x < 1$. In Section \[app:d=2bound\], we use the Resolve function of Mathematica to show rigorously that there is no solution to \eqref{eq:d=2fixedpoints} satisfying $$0 < y \leq \frac{1}{3} \text{ and } \frac{1106}{2500} \leq x < 1$$ and there is no solution satisfying $$0 < y \leq \frac{460}{2000} \text{ and } \frac{1}{3} \leq x < 1.$$
If $0<y\leq 1-x-y \leq x < 1$ then $0<y\leq1/3$ and $1/3 \leq x < 1$. So any solution to \eqref{eq:d=2fixedpoints} which satisfies $0<y\leq 1-x-y \leq x < 1$ must also satisfy $ y> {460}/{2000}$ and $x < {1106}/{2500}$. We conclude that $ \ell_\infty(2,\beta)> {460}/{2000}$ and $u_\infty(2,\beta) < {1106}/{2500}$.
Since $\ell_\infty(2,\beta)$ and $u_\infty(2,\beta)$ are the limits of the sequences $\ell_n(2,\beta)$ and $u_n(2,\beta)$, respectively, there is a positive integer $n_0$ such that, for all $n\geq n_0$, $ \ell_n(2,\beta)\geq {459}/{2000}$ and $u_n(2,\beta) \leq {1107}/{2500}$. Thus by Lemma \[lem:probbound\], for every $n\geq n_0$ and every $\tau\colon{{\Lambda_{{\mathbb{T}_{2,n}}}}} \to [3]$, $$\frac{459}{2000}\leq\ell_n(2,\beta)\leq \Pr_{{\mathbb{T}_{2,n}}}[\sigma({v_{2,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{2,n}}}}) = \tau] \leq
u_n(2,\beta) \leq
\frac{1107}{2500}.\qedhere$$
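The Resolve computation above works with the exact fixed-point system; as a rough illustration only (and not a substitute for Section \[app:d=2bound\]), the following Python sweep approximates the limits $u_\infty(2,\beta)$ and $\ell_\infty(2,\beta)$ for a few values of $\beta$ by iterating the recursion.

```python
def f_u(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * y) ** d
    return top / (top + 2.0 * ((1.0 - b * x) * (1.0 - b * (1.0 - x - y))) ** (d / 2))

def f_l(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * x) ** d
    return top / (top + (1.0 - b * y) ** d + (1.0 - b * (1.0 - x - y)) ** d)

for beta in (0.1, 0.3, 0.5, 0.7, 0.9):
    u, l = 1.0, 0.0
    for _ in range(10000):                       # approximate the limits u_inf, l_inf
        u, l = f_u(2, beta, u, l), f_l(2, beta, u, l)
    print(f"beta={beta:.1f}  u_inf~{u:.4f}  l_inf~{l:.4f}")
# The proof above gives l_inf > 460/2000 = 0.23 and u_inf < 1106/2500 = 0.4424 for every
# beta in (0,1); the printed values should be consistent with that.
```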
For any $d \geq 3$, define the critical parameter $\beta_*(d)$ by $\beta_*(d) = 1 - \displaystyle\frac{3}{d+1}$.
Note that $\beta_*(d)>0$. The following lemma shows that $u_n(d,\beta)$ and $\ell_n(d,\beta)$ are bounded by the values corresponding to the critical parameter.
\[lem:seqbetamono\] Fix any $d\geq 3$. For any $\beta$ in the range $\beta_*(d)\leq \beta<1$ and any non-negative integer $n$, we have $u_n(d,\beta_*(d))\geq u_n(d, \beta)$ and $\ell_n(d,\beta_*(d)) \leq \ell_n(d,\beta)$.
We prove the lemma by induction on $n$. Since $d$ is fixed, we simplify the notation by writing $\beta_*$ for $\beta_*(d)$. We also drop the argument $d$ from $u_n(d,\beta)$, $\ell_n(d,\beta)$, $f_u(d,\beta,x,y)$ and $f_\ell(d,\beta,x,y)$.
For the base case $n=0$, note that for every $\beta$, it holds that $u_0(\beta) = u_0(\beta_*)=1$ and $\ell_0(\beta) = \ell_0(\beta_*)=0$.
For the inductive step, suppose $n>0$. By Lemma \[lem:seqxymono\], we have $$\begin{aligned}
&2\ell_{n-1}(\beta)+u_{n-1}(\beta)\leq 1 \leq 2u_{n-1}(\beta)+\ell_{n-1}(\beta), \text{ and}\\
&2\ell_{n-1}(\beta_*)+u_{n-1}(\beta_*)\leq 1 \leq 2u_{n-1}(\beta_*)+\ell_{n-1}(\beta_*).\end{aligned}$$ By Lemma \[lem:seqxymono\] and the induction hypothesis, we have $$0\leq \ell_{n-1}(\beta_*)\leq \ell_{n-1}(\beta) \leq u_{n-1}(\beta) \leq u_{n-1}(\beta_*)\leq 1.$$ Now, using the definitions and Lemma \[lem:fbetamono\], we get $$u_n(\beta_*) =f_u(\beta_*,u_{n-1}(\beta_*),\ell_{n-1}(\beta_*))
\geq f_u(\beta,u_{n-1}(\beta_*),\ell_{n-1}(\beta_*)).$$ Then using Lemma \[lem:fulxymono\], we continue with $$f_u(\beta,u_{n-1}(\beta_*),\ell_{n-1}(\beta_*))
\geq
f_u(\beta,u_{n-1}(\beta),\ell_{n-1}(\beta))
=u_n(\beta).$$ Similarly, again using Lemma \[lem:fbetamono\] and then Lemma \[lem:fulxymono\], we have that $$f_\ell(\beta_*,u_{n-1}(\beta_*),\ell_{n-1}(\beta_*))
\leq f_\ell(\beta,u_{n-1}(\beta_*),\ell_{n-1}(\beta_*))
\leq
f_\ell(\beta,u_{n-1}(\beta),\ell_{n-1}(\beta)),$$ which gives that $\ell_n(\beta_*)\leq \ell_n(\beta)$.
Our next goal is to prove Lemma \[lem:ffixedpoints\] below, which will help us to obtain an upper bound on the ratio $u_\infty(d,\beta_*(d))/\ell_\infty(d,\beta_*(d))$ when $d$ is sufficiently large. In order to do this, we first define some useful re-parameterisations of $f_u$ and $f_\ell$, and establish some properties of these.
\[def:gs\] Let $g_u(d,\mu,y)=f_u(d, \beta_*(d), \mu\cdot y, y)-\mu\cdot y$ and $g_\ell(d, \mu, y)= f_\ell(d, \beta_*(d), \mu\cdot y, y) - y$.
Note that the argument $\mu$ in $g_u,g_\ell$ corresponds to the ratio $x/y$ of the arguments $x,y$ of $f_u,f_\ell$.
\[lem:guymono\] For every $d\geq 5$ and $\mu \geq 1$, $g_u(d,\mu,y)$ is a decreasing function of $y$ in the range $1/(2\mu+1) \leq y \leq 1/(\mu+2)$.
Let $A={(1-{3 y}/{(d+1)})}^d$ and $$B = \left(1-\frac{3(1-y(\mu+1))}{d+1}\right)^{d/2} \left(1-\frac{3 \mu y}{d+1}\right)^{d/2}.$$ Since $y\leq 1/(\mu+2) \leq 1/3$ and $d\geq 5$, we have $3y<d+1$, so $A>0$. Also, $3\mu y< d+1$ and $3(1-y(\mu+1))<d+1$, so $B>0$. Let $W = A B / {(A+2B)}^2>0$. The derivative of $g_u$ with respect to $y$ (see Section \[app:guymono\] for Mathematica assistance) is given by the following. $${\frac{\partial{g_u}}{\partial{y}}} =-\mu+\left(\frac{9 d W}{3y(\mu+1)+d-2}\right)\left(\frac{\mu(2\mu y+y-1)}{1+d-3\mu y}-\frac{2\mu y+y+d-1}{1+d-3y}\right).$$ The upper bound on $y$ yields (crudely) $2\mu y+y-1< 1$ and $1+d-3\mu y > d-2$. Similarly, the lower bound on $y$ yields $2\mu y+y+d-1\geq d$ and (since $y$ is non-negative) $1+d-3y < 1+d$. Plugging these in, we obtain $$\label{eq:evref344346}
{\frac{\partial{g_u}}{\partial{y}}} <-\mu +\left(\frac{9 d W}{3 y(\mu +1)+d-2}\right)\left(\frac{\mu }{d-2}-\frac{d}{1+d}\right).$$ Note that $3y(\mu +1)> 0$ and $d> 2$ so the expression $3y(\mu +1)+d-2$ in the denominator is positive. We now consider two cases.
If $\mu /(d-2)-d/(1+d)\leq0$ then recall that $W>0$, so \eqref{eq:evref344346} gives that ${\frac{\partial{g_u}}{\partial{y}}}<-\mu < 0$.
Otherwise, $\mu /(d-2)-d/(1+d)>0$. In this case, note that, for any $A$ and $B$, $(A-2B)^2 \geq 0$, so $8 A B \leq A^2 + 4 AB + 4B^2=(A+2B)^2$. From the definition of $W$, this ensures that $W\leq 1/8$. So, \eqref{eq:evref344346} gives that $${\frac{\partial{g_u}}{\partial{y}}}< -\mu +\frac{\frac{9 d}{8} \left(\frac{\mu }{d-2}-\frac{d}{1+d}\right)}{3 y (\mu +1) +d-2} <
-\mu + \frac{9d}{8(d-2)}\left(\frac{\mu }{d-2}-\frac{d}{1+d}\right)<\mu \left(\frac{9d}{8(d-2)^2}-1\right)<0,$$ where the final inequality uses $d\geq 5$.
The following lemma is analogous to Lemma \[lem:guymono\], but for the function $g_\ell$.
\[lem:glymono\] For every $d\geq 3$ and $\mu \geq 1$, $g_\ell(d,\mu ,y)$ is a decreasing function of $y$ in the range $1/(2\mu +1) \leq y \leq 1/(\mu +2)$.
Let $$W = \frac{(\mu (2 d-1)+d+1)\big(\frac{3 y(\mu +1) +d-2}{d+1}\big)^d}{(1 + d - 3 \mu y) (d - 2 + 3 y (1 + \mu ) )} +\frac{(\mu -1) (d+1)\big(1-\frac{3 y}{d+1}\big)^d}{(1 + d - 3 y) (1 + d - 3 \mu y)}.$$ Since $\mu \geq 1$ and $d\geq 3$ and $y\leq 1/(\mu +2)$ all of the factors in $W$ are positive, so $W> 0$. The derivative of $g_\ell$ with respect to $y$ (see Section \[app:glymono\] for Mathematica assistance) is given by the following. $${\frac{\partial{g_\ell}}{\partial{y}}} = - \frac{3 d \big(1-\frac{3 \mu y}{d+1}\big)^d W}{\Big(\big(\frac{3 (\mu +1) y+d-2}{d+1}\big)^d+\big(1-\frac{3 \mu y}{d+1}\big)^d+\big(1-\frac{3 y}{d+1}\big)^d\Big)^2}-1.$$ We’ve already seen that $W>0$ and the denominator is greater than $0$ since it is a square. Since $3\mu y < 3<d+1$, the remaining term is also positive, so ${\frac{\partial{g_\ell}}{\partial{y}}}<0$, as required.
Next, we will identify a value $y_\mu $ so that, when $\mu$ and $d$ are sufficiently large, $g_u(d,\mu ,y_\mu )<0$ and $g_\ell(d,\mu ,y_\mu )>0$.
\[def:ymu\] Define the quantity $y_\mu $ as follows. $$y_\mu =
\begin{cases}
\frac{7}{10\mu +12}+\frac{3}{500} & \mbox{if $\mu < 32$,}\cr
\frac{7}{10\mu +12} & \mbox{if $\mu {\geq}32$}.
\end{cases}$$ Let $x_\mu = \mu y_\mu$. Now define the functions $h_u$ and $h_\ell$ as $h_u(d, \mu ) = g_u(d,\mu , y_\mu )$ and $h_\ell(d, \mu ) = g_\ell(d,\mu , y_\mu )$.
Then we have the following lemmas.
\[lem:ineqs\] If $\mu\geq 157/80$ then $0 < y_\mu < 1-x_\mu-y_\mu < \tfrac13 < x_\mu < 1-y_\mu$.
The inequalities follow directly from Definition \[def:ymu\]. Mathematica code is given in Section \[sec:ineqs\].
\[lem:dhu<0\] Suppose $d \geq 23$. If $157/80\leq \mu< 32$ or $32<\mu$ then ${\frac{\partial{h_u}}{\partial{\mu}}} < 0$.
Since $d$ is fixed in the proof of this lemma, we will drop it as an argument of $\beta_*$, $h_u$, $g_u$. We will use $\hat\beta_*$ to denote $1-\beta_*=3/(d+1)$. We will drop $d$ and $\beta_*$ as an argument of $f_u$. So, plugging in Definitions \[def:gs\] and \[def:ymu\], we get $h_u(\mu) = g_u(\mu,y_\mu) = f_u(x_\mu, y_\mu)- x_\mu$. We have $$\label{eq:parexpand}
{\frac{\partial{h_u(\mu)}}{\partial{\mu}}}
={\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{x_\mu}}}\cdot{\frac{\partial{x_\mu}}{\partial{\mu}}}+{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}\cdot{\frac{\partial{y_\mu}}{\partial{\mu}}}-{\frac{\partial{x_\mu}}{\partial{\mu}}}.$$ Let $$R(x,y)=\frac{(1-\hat\beta_* x)^{d/2}(1-\hat\beta_* (1-x-y))^{d/2}}{(1-\hat\beta_* y)^d}.$$ The derivatives of $f_u(x,y)$ with respect to $x$ and $y$ are as follows (see Section \[app:lem:dhu<0\] for Mathematica assistance). $$\label{mypartial}
\begin{aligned}
{\frac{\partial{f_u(x,y)}}{\partial{x}}}&= \left(\frac{f_u(x, y)^2 R(x,y) d \hat\beta_*}{1-\hat\beta_* (1-x-y)}\right) \left( \frac{\hat\beta_*(2x+y-1) }
{ 1-\hat\beta_* x}\right), \text{ and}\cr
{\frac{\partial{f_u(x,y)}}{\partial{y}}}&=-\left(\frac{f_u(x, y)^2 R(x,y) d \hat\beta_*}{1-\hat\beta_* (1-x-y)}\right)
\left( \frac{ 3 + \hat\beta_* (2 x + y-2) }{1-\hat\beta_* y }\right).
\end{aligned}$$ If $0\leq x\leq1$, $0\leq y\leq 1$, $0\leq 1-x-y\leq 1$ and $2x+y>1$ then all of the factors are positive, so by Lemma \[lem:ineqs\], ${\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{x_\mu}}} > 0$ and ${\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}} < 0$. Let $$\label{defz}
z =-{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{x_\mu}}}\bigg/{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}$$ and note that $z$ is positive. Using \eqref{eq:parexpand} and \eqref{defz}, we can express ${\frac{\partial{h_u(\mu)}}{\partial{\mu}}}$ as $${\frac{\partial{h_u(\mu)}}{\partial{\mu}}}=\left(-z\cdot{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}-1\right){\frac{\partial{x_\mu}}{\partial{\mu}}}+{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}\cdot{\frac{\partial{y_\mu}}{\partial{\mu}}}.$$ From the definition of $y_\mu$ (Definition \[def:ymu\]), ${\frac{\partial{y_\mu}}{\partial{\mu}}}=-\frac{35}{2 (5 \mu+6)^2}<0$ for all $\mu\neq 32$. If $\mu < 32$ then ${\frac{\partial{x_\mu}}{\partial{\mu}}} = \frac{21}{(5 \mu+6)^2}+\frac{3}{500}>0$. If $\mu>32$ then ${\frac{\partial{x_\mu}}{\partial{\mu}}} = \frac{21}{(5 \mu+6)^2}>0$. Thus, to show ${\frac{\partial{h_u(\mu)}}{\partial{\mu}}}<0$, it suffices to show $$\label{dec:goal}
-{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}<
\frac{{\frac{\partial{x_\mu}}{\partial{\mu}}}\strut}{\strut z\cdot{\frac{\partial{x_\mu}}{\partial{\mu}}}-{\frac{\partial{y_\mu}}{\partial{\mu}}}}.$$
We will simplify by finding an upper bound for $z$. Using \eqref{defz} and \eqref{mypartial}, we have $$z=\bigg( \frac{1-\hat\beta_* y_\mu}{1-\hat\beta_* x_\mu}\bigg)\bigg(\frac{\hat\beta_* (2x_\mu+y_\mu-1)}{3+\hat\beta_* (2x_\mu+y_\mu-2)}\bigg)= \bigg(1+\frac{\hat\beta_*(x_\mu-y_\mu)}{1-\hat\beta_* x_\mu}\bigg)\bigg(\frac{\hat\beta_* (2x_\mu+y_\mu-1)}{3-\hat\beta_* (2-2x_\mu-y_\mu)}\bigg).$$ Since, by Lemma \[lem:ineqs\], $x_\mu > y_\mu$, $2 x_\mu + y_\mu > 1$, $x_\mu > 0$ and (since $x_\mu +y_\mu < 1$) $2 > 2 x_\mu + y_\mu$, $z$ is an increasing function of $\hat\beta_*$. Since $d\geq 23$, we have $\hat\beta_* = 3/(d+1)\leq 1/8$, so $z$ is upper-bounded by its value with $\hat\beta_*$ replaced by $1/8$. This gives that $$z\leq \frac{(8-y_\mu) (2 x_\mu+y_\mu-1)}{(8-x_\mu) (2 x_\mu+y_\mu+22)}.$$ Moreover, using Mathematica, we show in Appendix \[app:lem:dhu<0\] that $$\label{eq:err346gevrerf53}
\frac{(8-y_\mu) (2 x_\mu+y_\mu-1)}{(8-x_\mu) (2 x_\mu+y_\mu+22)}<\frac{1}{24} \mbox{ for all } \mu> 1.$$ It follows that $z<1/24$. Thus, we can rewrite our goal from \eqref{dec:goal}: to prove the lemma, it suffices to show $$\label{dec:newgoal}
-{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}<
\frac{{\frac{\partial{x_\mu}}{\partial{\mu}}}\strut}{\strut \tfrac{1}{24}\cdot{\frac{\partial{x_\mu}}{\partial{\mu}}}-{\frac{\partial{y_\mu}}{\partial{\mu}}}}.$$ The definitions of $f_u$ and $R$ imply that $f_u(x,y) = 1/(1+2 R(x,y))$. Therefore, using the fact that $\frac{a}{(1 + 2a)^2}\leq 1/8$ for all $a>0$, we have $$f_u(x,y)^2 R(x,y) =
\frac{R(x,y)}{\big(1 + 2R(x,y)\big)^2}
{\leq}\frac{1}{8}.$$ So, plugging this into the second equality in \eqref{mypartial}, recalling that $ {\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}<0$ and $\hat\beta_*=3/(d+1)$, we get $$-{\frac{\partial{f_u(x_\mu,y_\mu)}}{\partial{y_\mu}}}
\leq \frac{d\hat\beta_* (3+\hat\beta_* (2x_\mu+y_\mu-2))}{8(1-\hat\beta_* (1-x_\mu-y_\mu))(1-\hat\beta_* y_\mu)}
\leq \frac{ 3 (3+\hat\beta_* (2x_\mu+y_\mu-2))}{8(1-\hat\beta_* (1-x_\mu-y_\mu))(1-\hat\beta_* y_\mu)}.$$
Let $Y$ be the right-hand-side of the previous expression. Using $\hat\beta_*\leq 1/8$ and the inequalities from Lemma \[lem:ineqs\], we find that $Y$ is increasing in $\hat\beta_*$ (see the Mathematica code in Appendix \[app:lem:dhu<0\]). Thus, we can bound $Y$ from above by its value at $\hat\beta_*=1/8$, which is $ {3(2 x_\mu+y_\mu+22)}/{((8-y_\mu) (x_\mu+y_\mu+7))}$. Plugging this into \eqref{dec:newgoal}, it suffices to show
$$\label{eq:zbound}
\frac{3(2 x_\mu+y_\mu+22)}{(8-y_\mu) (x_\mu+y_\mu+7)}
<
\frac{{\frac{\partial{x_\mu}}{\partial{\mu}}}\strut}{\strut \tfrac{1}{24}\cdot{\frac{\partial{x_\mu}}{\partial{\mu}}}-{\frac{\partial{y_\mu}}{\partial{\mu}}}}.$$
We prove \eqref{eq:zbound} in two cases.
[**Case 1: $\mu> 32$:**]{} Using the values ${\frac{\partial{x_\mu}}{\partial{\mu}}},{\frac{\partial{y_\mu}}{\partial{\mu}}}$ that we calculated earlier, the right-hand side of \eqref{eq:zbound} is $8/7$. The Mathematica code in Appendix \[app:lem:dhu<0\] uses Resolve to show rigorously that there is no $\mu>32$ for which \eqref{eq:zbound} fails.
[**Case 2: $ \mu<32$:**]{} Using the values that we calculated earlier, the right-hand side of \eqref{eq:zbound} is $$\frac{24 \left(25 \mu^2+60 \mu+3536\right)}{25 \mu^2+60 \mu+73536}.$$ The Mathematica code in Appendix \[app:lem:dhu<0\] uses Resolve to show rigorously that there is no $\mu>1$ for which \eqref{eq:zbound} fails.
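The inequality \eqref{eq:err346gevrerf53} used above is established with Resolve in the appendix; the following short Python sample (ours, a finite grid only, not a proof) simply evaluates its left-hand side for a range of values $\mu>1$.

```python
import numpy as np

def y_mu(mu):
    # y_mu from Definition [def:ymu]
    return 7.0 / (10.0 * mu + 12.0) + (3.0 / 500.0 if mu < 32 else 0.0)

vals = []
for mu in np.linspace(1.001, 1000.0, 20000):
    y = y_mu(mu)
    x = mu * y
    vals.append((8 - y) * (2 * x + y - 1) / ((8 - x) * (2 * x + y + 22)))
print(max(vals), 1 / 24)   # the sampled maximum should stay below 1/24
```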
We will use the following function in several of the remaining lemmas.
\[def:psi\] Let $\psi(d, z) = {d}/(d-3 z+1)+\ln \left(d+1-3z\right)$.
\[lem:muone\] Suppose $d\geq 23$. Then $h_u(d,157/80) < 0$ and $h_u(d,32) < 0$.
Let $\zeta(d,x,y):=2\psi(d,y)-\psi(d,x)-\psi(d,1-x-y)$, where $\psi$ is the function defined in Definition \[def:psi\]. The derivative of $h_u(d,\mu)$ with respect to $d$ is given as follows (see Appendix \[app:muone\] for the Mathematica code). $$\begin{aligned}
{\frac{\partial{h_u(d,\mu)}}{\partial{d}}} &=
{\frac{\partial{f_u(d,\beta_*(d),x_\mu, y_\mu)}}{\partial{d}}}\nonumber\\
\label{eq:Dhud}
&=\frac{\big(1-\frac{3 x_\mu }{d+1}\big)^{\frac{d}{2}} \big(1-\frac{3 y_\mu }{d+1}\big)^d \big(1-\frac{3(1-x_\mu-y_\mu)}{d+1}\big)^{\frac{d}{2}} \zeta(d,x_\mu,y_\mu)}{\Big(\big(1-\frac{3 y_\mu }{d+1}\big)^d+2 \big(1-\frac{3 x_\mu }{d+1}\big)^{\frac{d}{2}} \big(1-\frac{3(1-x_\mu-y_\mu)}{d+1}\big)^{\frac{d}{2}}
\Big)^2}.\end{aligned}$$
First, fix $\mu=157/80$. We will prove three facts.
- [**Fact 1:**]{} For all $d\geq 23$, ${\frac{\partial{\zeta(d,x_\mu,y_\mu)}}{\partial{d}}} < 0$.
- [**Fact 2:**]{} $\lim_{d\rightarrow \infty} \zeta(d,x_\mu,y_\mu)=0$.
- [**Fact 3:**]{} $\lim_{d\rightarrow \infty} h_u(d,\mu)<0$.
Facts 1 and 2 guarantee that, for all $d\geq 23$, $\zeta(d,x_\mu,y_\mu)>0$. Lemma \[lem:ineqs\] guarantees that all other factors in \eqref{eq:Dhud} are also positive. Thus, ${\frac{\partial{h_u(d,\mu)}}{\partial{d}}}$ is positive for all $d\geq 23$. Together with Fact 3, this proves the first part of the lemma, that $h_u(d,157/80) < 0$. The three facts are proved in the Mathematica code in Section \[app:muone\].
Finally, fix $\mu=32$. Lemma \[lem:ineqs\] again guarantees that all factors in \eqref{eq:Dhud} other than $\zeta(d,x_\mu,y_\mu)$ are positive. Thus, it suffices to prove the three facts for $\mu=32$, and this is done in the Mathematica code in Section \[app:muone\].
Lemmas \[lem:dhu<0\] and \[lem:muone\] have the following corollary.
\[lem:hu<0\] For every $d \geq 23$ and $\mu \geq 157/80$, $h_u(d,\mu) <0$.
By Lemma \[lem:dhu<0\], $h_u(d,\mu)$ is decreasing for $\mu\in[157/80,32)$. Thus, for $\mu$ in this range, $h_u(d,\mu) \leq h_u(d,157/80)$ and by Lemma \[lem:muone\], $h_u(d,157/80)<0$.
By Lemma \[lem:dhu<0\], $h_u(d,\mu)$ is decreasing for $\mu> 32$. Thus, for $\mu>32$, $h_u(d,\mu) \leq h_u(d,32)$ and by Lemma \[lem:muone\], $h_u(d,32)<0$.
\[lem:hl>0\] For every $d \geq 23$ and $\mu \geq 157/80$, $h_\ell(d,\mu) >0$.
We will show that $$\label{eq:hellmu}
{\frac{\partial{h_\ell(d,\mu)}}{\partial{d}}}>0 \mbox{ for all } d\geq 23 \mbox{ and } \mu\geq 157/80.$$ The Mathematica code in Appendix \[app:hl>0\] verifies that $h_\ell(23,\mu)>0$ for all $\mu\geq157/80$. Together with \eqref{eq:hellmu}, this proves the lemma.
Therefore, in the rest of the proof, we prove \eqref{eq:hellmu}. Using Definitions \[def:ymu\] and \[def:gs\], we have $h_\ell(d,\mu) = f_\ell(d,\beta_*(d) , x_\mu,y_\mu )-y_{\mu}$. We use the following definitions in order to describe ${\frac{\partial{h_\ell(d,\mu)}}{\partial{d}}}$. Recall from Definition \[def:psi\] that $\psi(d,z)={d}/(d-3 z+1)+\ln \left(d+1-3z\right)$. Let $A = (1-\frac{3 x_\mu}{d+1})^d$ and $B = (1-\frac{3 y_\mu}{d+1})^d$ and $C = (1-\frac{3(1-x_\mu-y_\mu)}{d+1})^d$. Then the derivative of $h_\ell(d,\mu)$ with respect to $d$ is given as follows (see Appendix \[app:hl>0\] for Mathematica assistance). $$\label{eq:Dhld}
{\frac{\partial{h_\ell(d,\mu)}}{\partial{d}}}=
{\frac{\partial{f_\ell(d,\beta_*(d),x_\mu,y_\mu)}}{\partial{d}}}=\frac{
A C(\psi(d,x_\mu)-\psi(d,1-x_\mu-y_\mu))+A B(\psi(d,x_\mu)-\psi(d, y_\mu))
}{(A+B+C)^2}.$$
Lemma \[lem:ineqs\] guarantees that $A$, $B$ and $C$ are positive, so to prove \eqref{eq:hellmu}, and hence the lemma, it suffices to show $\psi(d,x_\mu)>\psi(d,1-x_\mu-y_\mu)$ and $\psi(d,x_\mu)>\psi(d, y_\mu)$.
Note that $${\frac{\partial{\,\psi\!\left(d, \frac{1}{3}+t\right)}}{\partial{t}}} = \frac{9 t}{(d-3 t)^2}, \quad \mbox{ and } \quad {\frac{\partial{\,\psi\!\left(d, \frac{1}{3}-t\right)}}{\partial{t}}} = \frac{9 t}{(d+3 t)^2}.$$ Thus, for fixed $d$, the function $\psi(d,z)$ is decreasing for $z\in [0,1/3]$. Since, by Lemma \[lem:ineqs\], $0<y_\mu < 1-x_\mu-y_\mu < 1/3$, we have $$\label{eq:first8}
\psi(d,y_\mu)\geq \psi(d,1-x_\mu-y_\mu).$$
The function $\psi(d,z)$ is increasing for $z\in[1/3,1]$. Since Lemma \[lem:ineqs\] guarantees $1/3 < 2/3-y_\mu < x_\mu < 1$, we have $$\label{eq:second8}
\psi(d,x_\mu)\geq \psi(d,\tfrac{2}{3} - y_\mu).$$
Since the function $\psi(d, \tfrac{1}{3}+t)-\psi(d, \tfrac{1}{3}-t)$ is increasing for $t\in[0,1/3]$, and it is $0$ at $t=0$, we have $\psi(d, \tfrac{1}{3}+t)\geq\psi(d, \tfrac{1}{3}-t)$ for $t\in[0,1/3]$. Lemma \[lem:ineqs\] guarantees $0<y_\mu<1/3$, so taking $t=1/3-y_\mu$, we get $$\label{eq:third8}
\psi(d,\tfrac{2}{3} - y_\mu)\geq \psi(d,y_\mu).$$ Combining \eqref{eq:first8}, \eqref{eq:second8} and \eqref{eq:third8}, we obtain $\psi(d,x_\mu)>\psi(d,1-x_\mu-y_\mu)$ and $\psi(d,x_\mu)>\psi(d, y_\mu)$, which prove \eqref{eq:hellmu}, and hence the lemma.
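As an informal cross-check of Corollary \[lem:hu<0\] and Lemma \[lem:hl>0\] (the rigorous statements are proved above; the Python sketch below, which is ours, only samples a finite grid), one can evaluate $h_u$ and $h_\ell$ directly from Definitions \[def:ymu\] and \[def:gs\].

```python
import numpy as np

def f_u(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * y) ** d
    return top / (top + 2.0 * ((1.0 - b * x) * (1.0 - b * (1.0 - x - y))) ** (d / 2))

def f_l(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * x) ** d
    return top / (top + (1.0 - b * y) ** d + (1.0 - b * (1.0 - x - y)) ** d)

def y_mu(mu):
    return 7.0 / (10.0 * mu + 12.0) + (3.0 / 500.0 if mu < 32 else 0.0)

def h_u(d, mu):
    beta_star = 1.0 - 3.0 / (d + 1)
    y = y_mu(mu)
    return f_u(d, beta_star, mu * y, y) - mu * y

def h_l(d, mu):
    beta_star = 1.0 - 3.0 / (d + 1)
    y = y_mu(mu)
    return f_l(d, beta_star, mu * y, y) - y

grid = [(d, mu) for d in (23, 40, 100, 500) for mu in np.linspace(157 / 80, 60, 100)]
print(max(h_u(d, mu) for d, mu in grid))   # expect a negative number (h_u < 0 on the grid)
print(min(h_l(d, mu) for d, mu in grid))   # expect a positive number (h_l > 0 on the grid)
```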
\[lem:ffixedpoints\] If $d \geq 23$ then there is no solution to the system of equations $$\label{eq:ffixedpoints}
\begin{cases}
f_u(d,\beta_*(d),x,y) = x\cr
f_\ell(d,\beta_*(d),x,y)=y
\end{cases}$$ which satisfies $x \geq 157y/80 \geq 0$ and $2x + y \geq 1 \geq 2y+x$.
Consider any fixed $d\geq 23$ and, for the sake of contradiction, assume that such an $(x,y)$ exists. Let $\mu = x/y$, so that (by Definition \[def:gs\]), $(\mu, y)$ is a solution to the equation $$g_u(d,\mu,y) = g_\ell(d,\mu,y) = 0.$$ The conditions $x \geq 157y/80 \geq 0$ and $2x + y \geq 1 \geq 2y+x$ translate into $\mu\geq {157}/{80}$ and $ {1}/{(2\mu+1)}\leq y \leq {1}/{(\mu+2)}$. Since $g_u(d,\mu,y) = 0$, by Lemma \[lem:guymono\] and Corollary \[lem:hu<0\], we have $y < y_\mu$. Since $g_\ell(d,\mu,y) = 0$, by Lemmas \[lem:glymono\] and \[lem:hl>0\], we have $y > y_\mu$. This yields a contradiction.
\[lem:d>=23bound\] For every integer $d \geq 23$, there exists a positive integer $n_0$ such that for all $n \geq n_0$, $$\frac{u_n(d,\beta_*(d))}{\ell_n(d,\beta_*(d))}\leq \frac{53}{27}.$$
Fix $d\geq 23$. For simplicity, we will write $\beta_*$ instead of $\beta_*(d)$.
Recall that the sequences $\{u_n(d,\beta_*)\}$ and $\{\ell_n(d,\beta_*)\}$ converge to the limits $u_\infty(d, \beta_*)$ and $\ell_\infty(d,\beta_*)$, respectively (cf. \eqref{eq:tb53tb5b3fwefe}). Moreover, by Lemma \[lem:fixedpoints\], the pair $(x,y)=(u_\infty(d, \beta_*), \ell_\infty(d,\beta_*))$ is a solution to the system of equations \eqref{eq:ffixedpoints} satisfying $0<y\leq 1-x-y \leq x < 1$. By Lemma \[lem:ffixedpoints\], there is no solution $(x,y)$ to \eqref{eq:ffixedpoints} such that $x \geq 157y/80 \geq 0$ and $2x + y \geq 1 \geq 2y+x$. So, it must be the case that ${u_\infty(d,\beta_*)}/{\ell_\infty(d,\beta_*)}<{157}/{80}$. Since $53/27>157/80$, there exists a positive integer $n_0$ such that for all $n \geq n_0$, ${u_n(d,\beta_*)}/{\ell_n(d,\beta_*)}\leq{53}/{27}$.
Corollary \[lem:d>=23bound\] accounts for integers $d\geq 23$. To account for integers $3\leq d\leq 22$, we define the following two sequences. $$\begin{cases}
u'_0(d) = 1\cr
\ell'_0(d) = 0
\end{cases}$$ and for every non-negative integer $n$, $$\begin{cases}
u'_{n+1}(d) = \displaystyle\frac{\lceil 10000\,f_u(d,\beta_*(d),u'_n(d), \ell'_n(d))\rceil}{10000}\\[8pt]
\ell'_{n+1}(d) = \displaystyle\frac{\lfloor 10000\,f_\ell(d,\beta_*(d), u'_n(d),\ell'_n(d))\rfloor}{10000}
\end{cases}.$$
We have the following lemma, which is proved by brute force.
\[lem:d<=22sequence\] For every integer $d\in\{3,\ldots,22\}$ and every integer $n\in \{0,\ldots,60\}$, we have $u'_n(d) \geq u'_{n+1}(d)$, $\ell'_n(d) \leq \ell'_{n+1}(d)$, $2u'_n(d)+ \ell'_n(d) \geq 1\geq 2\ell'_n(d)+u'_n(d)$ and $u'_{60}(d)/\ell'_{60}(d)\leq \frac{53}{27}$.
In Appendix \[app:d<=22sequence\], we use Mathematica to compute all the values $u'_n(d)$ and $\ell'_n(d)$ for $n\in\{0,\ldots,60\}$ and $d\in\{3,\ldots,22\}$. We then check that all of the desired inequalities hold.
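The appendix performs this computation in Mathematica with exact rounding to multiples of $1/10000$. The Python sketch below (ours) mirrors it in floating point, so the last digit of an individual rounded value could in principle differ from the exact computation; it is included only to make the definition of the primed sequences concrete.

```python
from math import ceil, floor

def f_u(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * y) ** d
    return top / (top + 2.0 * ((1.0 - b * x) * (1.0 - b * (1.0 - x - y))) ** (d / 2))

def f_l(d, beta, x, y):
    b = 1.0 - beta
    top = (1.0 - b * x) ** d
    return top / (top + (1.0 - b * y) ** d + (1.0 - b * (1.0 - x - y)) ** d)

def primed_pair(d, n_max=60):
    """Approximate (u'_{n_max}(d), l'_{n_max}(d)) with ceiling/floor rounding to 1/10000."""
    beta_star = 1.0 - 3.0 / (d + 1)
    u, l = 1.0, 0.0
    for _ in range(n_max):
        u, l = (ceil(10000 * f_u(d, beta_star, u, l)) / 10000,
                floor(10000 * f_l(d, beta_star, u, l)) / 10000)
    return u, l

for d in range(3, 23):
    u60, l60 = primed_pair(d)
    # Lemma [lem:d<=22sequence] asserts (by exact arithmetic) that each ratio is at most 53/27 ~ 1.963.
    print(d, u60, l60, u60 / l60)
```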
We next show that the sequences $\{u'_n(d)\}$ and $\{\ell'_n(d)\}$ bound the sequences $\{u_n(d,\beta_*(d))\}$ and $\{\ell_n(d,\beta_*(d))\}$ for $n\leq 60$.
\[lem:d<=22seqmono\] For every integer $d\in\{3,\ldots,22\}$ and every integer $n\in \{0,\ldots,60\}$, we have $u'_n(d) \geq u_n(d,\beta_*(d))$ and $\ell'_n(d) \leq \ell_n(d,\beta_*(d))$.
Fix $d$ to be an integer between 3 and 22. Since $d$ is fixed, we simplify the notation by writing $u_n$ for $u_n(d,\beta_*(d))$, $u_n'$ for $u_n'(d)$, $\ell_n$ for $\ell_n(d,\beta_*(d))$, $\ell_n'$ for $\ell'_n(d)$, $f_u(x,y)$ for $f_u(d,\beta_*(d),x,y)$ and $f_\ell(x,y)$ for $f_\ell(d,\beta_*(d),x,y)$.
We prove the lemma by induction on $n$. For the base case $n=0$, we have $u_n = u_n'=1$ and $\ell_n = \ell_n'=0$. For the inductive step, suppose $n>0$. By Lemmas \[lem:seqxymono\] and \[lem:d<=22sequence\], we have $$2\ell_{n-1}+u_{n-1}\leq 1 \leq 2u_{n-1}+\ell_{n-1}, \quad \text{and} \quad 2\ell'_{n-1}+u'_{n-1}\leq 1 \leq 2u'_{n-1}+\ell'_{n-1}.$$ By the induction hypothesis, we have $$\ell'_{n-1}\leq \ell_{n-1}\leq u_{n-1}\leq u'_{n-1}.$$ Using Lemma \[lem:fulxymono\], we therefore obtain that $$\begin{aligned}
&u'_n\geq f_u(u'_{n-1},\ell'_{n-1})\geq f_u(u_{n-1},\ell_{n-1})=u_n, \mbox{ and }\\
&\ell'_n\leq f_\ell(u'_{n-1},\ell'_{n-1})\leq f_\ell(u_{n-1},\ell_{n-1})=\ell_n.
\end{aligned}$$ This completes the proof.
\[lem:d<=22bound\] For every integer $d\in\{3,\ldots,22\}$ and every integer $n\geq 60$, $\displaystyle \frac{u_n(d,\beta_*(d))}{\ell_n(d,\beta_*(d))}\leq \displaystyle \frac{53}{27}$.
Fix an arbitrary integer $d$ between 3 and 22. We have the following chain of inequalities (see below for explanation): $$\frac{u_n(d,\beta_*(d))}{\ell_n(d,\beta_*(d))}\leq \frac{u_{60}(d,\beta_*(d))}{\ell_{60}(d,\beta_*(d))}\leq \frac{u_{60}'(d)}{\ell_{60}'(d)}\leq \frac{53}{27}.$$ The first inequality holds by Lemma \[lem:seqxymono\], since the sequence $\{u_n(d,\beta_*(d))\}$ is decreasing and the sequence $\{\ell_n(d,\beta_*(d))\}$ is increasing. The second inequality holds by Lemma \[lem:d<=22seqmono\]. Finally, the third inequality holds by Lemma \[lem:d<=22sequence\].
We can now prove Lemma \[lem:onestepuniq\], which we restate here for convenience.
Let $q=3$ and $c\in [3]$ be an arbitrary colour. For $d\geq 2$, consider the $d$-ary tree ${\mathbb{T}_{d,n}}$ with height $n$ and let $\tau:{\Lambda_{{\mathbb{T}_{d,n}}}}\rightarrow [3]$ be an arbitrary configuration on the leaves.
When $d=2$, for all $\beta\in (0,1)$, for all sufficiently large $n$ it holds that $$\frac{459}{2000}{\leq}\Pr_{{\mathbb{T}_{2,n}}}[\sigma({v_{2,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{2,n}}}}) = \tau] {\leq}\frac{1107}{2500}.$$ When $d\geq 3$, for all $\beta\in [1-\tfrac{3}{d+1},1)$, there exist sequences $\{L_n\}$ and $\{U_n\}$ (depending on $d$ and $\beta$) such that for all sufficiently large $n$ $$L_n\leq\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]\leq U_n \mbox{ and } U_n/L_n\leq 53/27.$$
The statement for $d=2$ follows directly from Lemma \[lem:d=2bound\].
Suppose $d\geq 3$. Let $U_n=u_n(d,\beta_*(d))$, $L_n=\ell_n(d,\beta_*(d))$. By Corollaries \[lem:d>=23bound\] and \[lem:d<=22bound\], there exists an integer $n_0$ such that for all $n \geq n_0$, $$\frac{U_n}{L_n}=\frac{u_n(d,\beta_*(d))}{\ell_n(d,\beta_*(d))}\leq \frac{53}{27}.$$ Furthermore, by Lemmas \[lem:probbound\] and \[lem:seqbetamono\], for any $n\geq 0$, any configuration $\tau\colon{\Lambda_{{\mathbb{T}_{d,n}}}}\to [3]$ and any colour $c\in[3]$, we have $$L_n=\ell_n(d,\beta_*(d))\leq\Pr_{{\mathbb{T}_{d,n}}}[\sigma({v_{d,n}})=c\mid \sigma({\Lambda_{{\mathbb{T}_{d,n}}}})=\tau]\leq u_n(d,\beta_*(d))=U_n.$$ This completes the proof.
Analysing the two-step recursion {#sec:twostep}
================================
In this section, we fix $q\geq 3$, $d\geq 2$ and $\beta\in [0,1)$. All of our notation depends implicitly on these three parameters, but when possible we avoid using them as indices to aid readability.
Our ultimate goal is to understand the case where $q=3$, but some of the lemmas are true more generally, so we start with $q\geq 3$. When we later fix $q=3$, we say so explicitly.
Characterising the maximiser of $h_{c_1,c_2,\beta}$ — Proof of Lemmas \[lem:existence\] and \[lem:mo12no12tone\] {#sec:existence}
----------------------------------------------------------------------------------------------------------------
In this section, we prove Lemmas \[lem:existence\] and \[lem:mo12no12tone\] from Section \[sec:simplecondition\]. Recall that $$\tag{\ref{eq:gh12def}}
\begin{aligned}
g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\ldots,{{\mathbf{p}}^{(d)}})
&:=\prod^d_{k=1}\bigg(1-\frac{(1-\beta) \big({p^{(k)}}_{c_1}-{p^{(k)}}_{c_2}\big)}{\beta {p^{(k)}}_{c_2}+\sum_{c\neq c_2}{p^{(k)}}_{c}}\bigg).\\
h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})
&:=1+\frac{(1-\beta)\big(1-g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})\big)}{\beta +\sum_{c\neq c_2}g_{c,c_2,\beta}({{\mathbf{p}}^{(1)}}, \ldots, {{\mathbf{p}}^{(d)}})}.
\end{aligned}$$ To prove Lemma \[lem:existence\], it will be helpful in this section to consider the set of maximisers of $h_{c_1,c_2,\beta}$.
Suppose $q\geq 3$, $ d\geq 2$ and $\beta\in [0,1)$. For colours $c_1,c_2\in[q]$ and $\alpha>1$, let $$\label{eq:max12misers}
{\mathcal{M}_{\alpha,c_1,c_2,\beta}}={\arg\max}_{({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in\triangle_{\alpha}^d}\, h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big).$$
The following lemmas give properties of the maximisers in ${\mathcal{M}_{\alpha,c_1,c_2,\beta}}$.
\[lem:gc1c2betaless1\] Fix $\alpha> 1$ and $\beta\in [0,1)$ and colours $c_1,c_2\in[q]$. Then for any vector $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$, we have $g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\leq 1$.
Assume for the sake of contradiction that $g_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})> 1$. Then, we have that $h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})<1$, which contradicts the fact that $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$ since $h_{c_1,c_2,\beta}$ can take the value 1 by setting all of its arguments to be equal to the uniform vector $(1/q,\hdots,1/q)\in \triangle_\alpha$.
\[lem:pc2min\] Fix $\alpha> 1$ and $\beta\in[0,1)$ and any two distinct colours $c_1\in[q]$ and $c_2\in[q]$. Then for any vector $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$ and any $k\in[d]$, we have ${{p}^{(k)}}_{c_2} = \min_{c\in[q]} \{{{p}^{(k)}}_{c}\}$.
Assume for the sake of contradiction that there is $k\in[d]$ and $c\in[q]$ such that ${{p}^{(k)}}_{c_2}> {{p}^{(k)}}_{c}$. Define $( {\tilde{\mathbf{p}}^{(1)}},\hdots,{\tilde{\mathbf{p}}^{(d)}})\in \triangle_{\alpha}^d$ as follows.
- If $j\neq k$, then ${\tilde{\mathbf{p}}^{(j)}} = {{\mathbf{p}}^{(j)}}$.
- If $c'\notin\{c,c_2\}$, then ${\tilde p^{(k)}}_{c'} = {p^{(k)}}_{c'}$.
- ${\tilde{p}^{(k)}}_{c_2}={{p}^{(k)}}_{c}$.
- ${\tilde{p}^{(k)}}_{c}= {{p}^{(k)}}_{c_2}$.
The definition of $g_{c',c_2}$ ensures that, for all $c'\neq c_2$, we have $$g_{c',c_2,\beta} ({\tilde{{\mathbf{p}}}^{(1)}},\ldots, {\tilde{{\mathbf{p}}}^{(d)}})< g_{c',c_2,\beta} ({{{\mathbf{p}}}^{(1)}},\ldots, {{{\mathbf{p}}}^{(d)}}),$$ since the $k$-th factor in the definition of $g_{c',c_2,\beta}$ became smaller (by switching ${{\mathbf{p}}^{(k)}}$ to ${\tilde{{\mathbf{p}}}^{(k)}}$). The definition of $h_{c_1,c_2,\beta}$, together with the fact that $c_1$ and $c_2$ are distinct and Lemma \[lem:gc1c2betaless1\], implies $$h_{c_1,c_2,\beta} ({\tilde{{\mathbf{p}}}^{(1)}},\ldots, {\tilde{{\mathbf{p}}}^{(d)}})> h_{c_1,c_2,\beta} ({{{\mathbf{p}}}^{(1)}},\ldots, {{{\mathbf{p}}}^{(d)}}),$$ which contradicts the fact that $\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$.
\[lem:alpha>1\] Fix $\alpha> 1$ and $\beta\in [0,1)$ and any two distinct colours $c_1\in[q]$ and $c_2\in[q]$. Then for any vector $\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$ and any $k\in[d]$, we have ${{p}^{(k)}}_{c_2} < \max_{c\in[q]} \{{{p}^{(k)}}_{c}\}$.
For the sake of contradiction, suppose that there is $k\in[d]$ such that ${{p}^{(k)}}_{c_2}\geq\max_{c\in[q]}\{{{p}^{(k)}}_{c}\}$. By Lemma \[lem:pc2min\], ${p^{(k)}}_{c_2}$ is the minimum entry of the vector ${{\mathbf{p}}^{(k)}}$, so all entries of ${{\mathbf{p}}^{(k)}}$ must be equal (and hence, equal to $1/q$). Define the vector $( {\tilde{\mathbf{p}}^{(1)}},\hdots,{\tilde{\mathbf{p}}^{(d)}})\in \triangle_{\alpha}^d$ as follows.
- If $j\neq k$, then ${\tilde{\mathbf{p}}^{(j)}} = {{\mathbf{p}}^{(j)}}$.
- ${\tilde{p}^{(k)}}_{c_1}= \alpha/(\alpha+q-1)$.
- If $c \neq c_1$, then ${\tilde p^{(k)}}_{c} = 1/(\alpha+q-1)$.
The definition of $g_{c,c_2,\beta}$ together with Lemma \[lem:gc1c2betaless1\] ensures that $$g_{c_1,c_2,\beta} ({\tilde{{\mathbf{p}}}^{(1)}},\ldots, {\tilde{{\mathbf{p}}}^{(d)}})< g_{c_1,c_2,\beta} ({{{\mathbf{p}}}^{(1)}},\ldots, {{{\mathbf{p}}}^{(d)}})\leq 1$$ and that for every $c \neq c_1$, $g_{c,c_2,\beta} ({\tilde{{\mathbf{p}}}^{(1)}},\ldots, {\tilde{{\mathbf{p}}}^{(d)}})= g_{c,c_2,\beta} ({{{\mathbf{p}}}^{(1)}},\ldots, {{{\mathbf{p}}}^{(d)}})$. The definition of $h_{c_1,c_2,\beta}$ therefore implies that $$h_{c_1,c_2,\beta} ({\tilde{{\mathbf{p}}}^{(1)}},\ldots, {\tilde{{\mathbf{p}}}^{(d)}})> h_{c_1,c_2,\beta} ({{{\mathbf{p}}}^{(1)}},\ldots, {{{\mathbf{p}}}^{(d)}}),$$ which contradicts the fact that $\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$.
To proceed, we will need the following technical fact.
\[lem:linearoverlinear\] Let $A_0$, $B_0$, $A_1$ and $B_1$ be real numbers. Let $a$ and $b$ be real numbers satisfying $a\leq b$ such that, for all $x\in[a,b]$, $A_1 + B_1 x \neq 0$. Let $\mathcal{L}(x) = {(A_0+B_0 x)}/{(A_1+B_1 x)}$. Then $$\max_{x\in[a,b]} \mathcal{L}(x) = \max\{\mathcal{L}(a), \mathcal{L}(b)\}.$$
Since $A_1+B_1x\neq 0$ and $${\frac{\mathrm{d}{\cal L}}{\mathrm{d}{x}}} = \frac{A_1B_0-A_0B_1}{(A_1+B_1x)^2},$$ $\mathcal{L}(x)$ is a monotone function on $[a, b]$. Thus, $\max_{x\in[a,b]} \mathcal{L}(x) = \max\{\mathcal{L}(a), \mathcal{L}(b)\}$.
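For readers who wish to double-check this elementary step, the derivative formula used above can be confirmed symbolically. The following short Mathematica query is only a sanity check (it is not one of the verification scripts of Section \[sec:code\]); its expected output is [True]{}.

    (* the derivative of a linear-over-linear function has a sign independent of x *)
    Simplify[D[(A0 + B0 x)/(A1 + B1 x), x] == (A1 B0 - A0 B1)/(A1 + B1 x)^2]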
\[lem:ratio\] Fix $\alpha > 1$ and $\beta\in[0,1)$ and two distinct colours $c_1$ and $c_2$ in $[q]$. Suppose that $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$. Then for every $k\in[d]$, there exists $\tilde{\mathbf{p}}\in\triangle_\alpha$ such that $$({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}},\tilde{\mathbf{p}},{{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$$ and, for all $c\in[q]$, $\tilde{p}_{c}/\tilde{p}_{c_2} \in\{1,\alpha\}$.
Fix a tuple $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$. Fix $k\in[d]$. Given any $\hat{\mathbf{p}}\in \triangle_\alpha$, we will be interested in the quantity $h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}},\hat{{\mathbf{p}}},{{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}})$. Given any $c'\neq c_2$, define $$P_{c'} :=
\prod_{j\neq k}\left(1-\frac{(1-\beta) \big({p^{(j)}}_{c'}-{p^{(j)}}_{c_2}\big)}{\beta {p^{(j)}}_{c_2}+\sum_{c\neq c_2}{p^{(j)}}_{c}}\right).$$
It will be helpful to re-parameterise the elements of $\hat{\mathbf{p}}$. Recall that the definition of $\triangle_\alpha$ implies that $\hat p_c>0$ for every $c\in[q]$. For every $c\in[q]$, let $\mu_c(\hat{\mathbf{p}}) = \hat p_{c}/\hat p_{c_2}$. Let ${\boldsymbol{\mu}}(\hat{\mathbf{p}})$ be the tuple ${\boldsymbol{\mu}}(\hat{\mathbf{p}}) = (\mu_1(\hat{\mathbf{p}}),\ldots,\mu_q(\hat{\mathbf{p}}))$. Going the other direction from a tuple $\boldsymbol{\mu}$ with entries in $[1,\alpha]$, for every $c\in[q]$, let $p_c(\boldsymbol{\mu}) =
\mu_c /\sum_{c'\in[q]} \mu_{c'}$ and let ${\mathbf{p}}(\boldsymbol{\mu})$ be the tuple $(p_1(\boldsymbol{\mu}),\ldots,p_q(\boldsymbol{\mu}))$.
It is going to be important to note that the re-parameterisation is without loss of information, so, to this end, let $\Omega_\alpha = \{\boldsymbol{\mu} \in [1,\alpha]^q \mid \mu_{c_2} = 1\}$. The definition of $\triangle_\alpha$ and Lemma \[lem:pc2min\] ensure that, for every $({\tilde{\mathbf{p}}^{(1)}},\hdots,{\tilde{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$ and any $j\in[d]$, $\boldsymbol{\mu}({\tilde{\mathbf{p}}^{(j)}}) \in \Omega_\alpha$. Also, given any $\boldsymbol{\mu} \in \Omega_\alpha$, the vector ${\mathbf{p}}(\boldsymbol{\mu})$ is in $\triangle_\alpha$.
Given a tuple $\boldsymbol{\mu}\in \Omega_\alpha$, we will simplify notation by letting $\denkludge(\boldsymbol{\mu}) := \beta + \sum_{c\neq c_2} \mu_c$. Then we can write $g_{c',c_2}$ as $$\begin{aligned}
g_{c',c_2}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}},\hat{{\mathbf{p}}},{{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}})
&=P_{c'} \left(1- \frac{(1-\beta) ( {\hat p}_{c'}- {\hat p} _{c_2} )}
{\beta {\hat p} _{c_2}+\sum_{c\neq c_2} {\hat p} _{c}}\right)\\
&= P_{c'}\left(1-\frac{(1-\beta)(\mu_{c'}(\hat{\mathbf{p}})-1)}{ \denkludge(\boldsymbol{\mu}(\hat{\mathbf{p}}))}\right).\end{aligned}$$
Given the (fixed) values of ${{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}}$ and ${{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}} $, let $$h(\boldsymbol{\mu}) :=
\frac{
\denkludge(\boldsymbol{\mu}) - \denkludge(\boldsymbol{\mu}) P_{c_1} + P_{c_1} (1-\beta)(\mu_{c_1}-1)
}
{\beta \denkludge(\boldsymbol{\mu}) + \denkludge(\boldsymbol{\mu}) \sum_{c'\neq c_2} P_{c'}
- \sum_{c'\neq c_2}
P_{c'} (1-\beta) (\mu_{c'}-1)
}.$$
Then we can write $h_{c_1,c_2,\beta}$ as $$\begin{aligned}
h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}},\hat{{\mathbf{p}}},{{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}})
&=
1+\frac{(1-\beta)\left(1-
P_{c_1}\left(1-\frac{(1-\beta)(\mu_{c_1}(\hat{\mathbf{p}})-1)}{ \denkludge(\boldsymbol{\mu}(\hat{\mathbf{p}}))}\right)
\right)}{\beta +\sum_{c'\neq c_2}
P_{c'}\left(1-\frac{(1-\beta)(\mu_{c'}(\hat{\mathbf{p}})-1)}{ \denkludge(\boldsymbol{\mu}(\hat{\mathbf{p}}))}\right)
}
\\
&=
1+ (1-\beta) h(\boldsymbol{\mu}(\hat{\mathbf{p}})).
\end{aligned}$$
Since $({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in{\mathcal{M}_{\alpha,c_1,c_2,\beta}}$, taking $\hat{{\mathbf{p}}} = {{\mathbf{p}}^{(k)}} $ maximises $$h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}},\hat{{\mathbf{p}}},{{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}})$$ over $\triangle_\alpha^d$. Thus, for any maximiser ${\boldsymbol{\mu}}$ of $h(\boldsymbol{\mu})$ over $\Omega_\alpha$, we have $$h_{c_1,c_2,\beta}({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(k-1)}},
\boldsymbol{p}({\boldsymbol{\mu}}),{{\mathbf{p}}^{(k+1)}},\hdots,{{\mathbf{p}}^{(d)}})
\in {\mathcal{M}_{\alpha,c_1,c_2,\beta}}.$$ So to prove the lemma (taking $\tilde{\mathbf{p}}= {\mathbf{p}}({\boldsymbol{\mu}})$) it suffices to find a maximiser ${\boldsymbol{\mu}}$ of $h(\boldsymbol{\mu})$ over $\Omega_\alpha$ such that for all $c\in[q]$, $\mu_c \in\{1,\alpha\}$. This is what we will do in the rest of the proof. The definition of $\Omega_\alpha$ guarantees that $\mu_{c_2} = 1$.
Fix any $c\neq c_2$. For any fixed values $\mu_1,\ldots,\mu_{c-1}$ and $\mu_{c+1},\ldots,\mu_q$, all in $[1,\alpha]$, satisfying $\mu_{c_2}=1$, consider $h(\boldsymbol{\mu})$ as a function of $\mu_c$. Note that both the numerator and denominator of $h(\boldsymbol{\mu})$ are linear in $\mu_c$. We will argue that the denominator is not zero when $\mu_c\in[1,\alpha]$. Using this, Lemma \[lem:linearoverlinear\] guarantees that, given $\mu_1,\ldots,\mu_{c-1},\mu_{c+1},\ldots,\mu_q$, $h(\boldsymbol{\mu})$ is maximised by setting $\mu_c \in \{1,\alpha\}$. Going through the colours $c$ one-by-one, we find the desired maximiser $\boldsymbol\mu$.
To complete the proof, we just need to show that the denominator of $h(\boldsymbol{\mu})$ is not zero when $\boldsymbol{\mu} \in \Omega_\alpha$. This follows easily: for all $c'\neq c_2$ we have $P_{c'}>0$ and $\denkludge(\boldsymbol{\mu}) \geq \beta+q-2+\mu_{c'}> (1-\beta)(\mu_{c'}-1)$, so the denominator is a sum of non-negative terms, at least one of which is strictly positive.
Lemmas \[lem:alpha>1\] and \[lem:ratio\] yield Lemma \[lem:existence\] as an immediate corollary.
[Let $q\geq 3$, $d\geq 2$ and $\beta\in [0,1)$. For any colours $c_1,c_2\in [q]$, there is an $(\alpha,c_2)$-extremal tuple which achieves the maximum in ${\max}_{({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in\triangle_{\alpha}^d}\, h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)$ (cf. ).]{}
Just use Lemmas \[lem:alpha>1\] and \[lem:ratio\].
We also now prove Lemma \[lem:mo12no12tone\].
[ Let $q\geq 3$, $d\geq 2$ and $\beta', \beta''\in [0,1)$ with $\beta'\leq \beta''$. Then, for all $\alpha>1$ and any colours $c_1,c_2\in [q]$, it holds that $$M_{\alpha, c_1,c_2,\beta''}\leq M_{\alpha, c_1,c_2,\beta'}.$$ ]{}
Fix $\alpha>1$ and arbitrary colours $c_1,c_2\in [q]$. Note that the set of $(\alpha,c_2)$-extremal tuples does not depend on the parameter $\beta$, and for each $\beta\in (0,1)$ there exists by Lemma \[lem:existence\] an $(\alpha,c_2)$-extremal tuple which achieves the maximum in ${\max}_{({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}})\in\triangle_{\alpha}^d}\, h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)$.
Therefore, the inequality will follow by showing that $$\label{eq:bb45yby6nu42}
h_{c_1,c_2,\beta''}({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)})\leq h_{c_1,c_2,\beta'}({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)}),$$ where $\big({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)}\big)$ is an arbitrary $(\alpha,c_2)$-extremal tuple. Using the extremality of the tuple $\big({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)}\big)$, we have that $p^{(k)}_{c_2}\leq p^{(k)}_{c}$ for all colours $c\in [q]$ and every $k\in [d]$. Hence, using the definition of $g_{c_1,c_2,\beta}$ and that $\beta'\leq \beta''$, we have $$1\geq g_{c,c_2,\beta''}({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)})\geq g_{c,c_2,\beta'}({\mathbf{p}}^{(1)},\hdots,{\mathbf{p}}^{(d)}) \mbox{ for all } c\in [q].$$ In turn, using the definition of $h_{c_1,c_2,\beta}$, we obtain from this that (\[eq:bb45yby6nu42\]) holds, as wanted.
Bounding the two-step recursion when $q=3$ — Proof of Lemma \[lem:twosteptoprove\] {#sec:twosteptoprove}
----------------------------------------------------------------------------------
In this section, we assume that $q = 3$ and give the proof of Lemma \[lem:twosteptoprove\] that verifies Condition \[cond:we\] for all $\alpha\in (1,53/27]$.
Recall that for a pair $(q,d)$, Condition \[cond:we\], for a fixed value of $\alpha>1$ and colours $c_1,c_2\in [q]$, amounts to checking $$\label{eq:2ed2g2v2vtvt}
h_{c_1,c_2,\beta_*}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)<\alpha^{1/d} \mbox{ for all } {{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\in \mathrm{Ex}_{c_2}(\alpha),$$ where the set $\mathrm{Ex}_{c_2}(\alpha)$ is given by (cf. ) $$\mathrm{Ex}_{c}(\alpha)=\big\{(p_1,\hdots,p_q)
\in \{1,\alpha\}^q
\mid p_{c}=1 \land \exists c'\in[q] \text{ such that } p_{c'}=\alpha \big\}.$$ Note that, for $q=3$, $\mathrm{Ex}_{c}(\alpha)$ has exactly 3 vectors. As we shall see shortly, we can capture the value of $h_{c_1,c_2,\beta}$ when ${{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\in \mathrm{Ex}_{c}(\alpha)$ using a function ${\varphi}(d,d_0,d_1,\alpha,\beta)$ which depends on $\alpha$, $\beta$ and $d$, but also on two non-negative integers $d_0$ and $d_1$ with $d_0+d_1 \leq d$; roughly, $d_0$ is the number of ${{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}$ which are equal to the first vector in $\mathrm{Ex}_{c}(\alpha)$, $d_1$ to the second vector and the remaining $d-d_0-d_1$ to the third vector. Let us first define the function ${\varphi}(d,d_0,d_1,\alpha,\beta)$.
\[def:xy\] Let $\beta\in [0,1]$. Fix any $\alpha >1$ and any integer $d\geq 2$. Let $x(\alpha,\beta)=1- \frac{(1-\beta)(\alpha-1)}{\beta+2\alpha}$ and $y(\alpha,\beta)=1- \frac{(1-\beta)(\alpha-1)}{\beta+\alpha+1}$. For any nonnegative integers $d_0$ and $d_1$ such that $d_0 + d_1 \leq d$, let $${\varphi}(d,d_0,d_1,\alpha,\beta)=1+\frac{(1-\beta)(1-x(\alpha,\beta)^{d_0}y(\alpha,\beta)^{d_1})}{\beta+x(\alpha,\beta)^{d_0}(y(\alpha,\beta)^{d_1}+y(\alpha,\beta)^{d-d_0-d_1})}.$$
We then have the following lemma.
\[lem:vald0d1d2\] Suppose $q=3$, $d \geq 2$ and $\beta\in[0,1]$. Fix $\alpha > 1$ and distinct colours $c_1,c_2\in [q]$, and let ${{\mathbf{p}}^{(1)}}, {{\mathbf{p}}^{(2)}},\ldots,{{\mathbf{p}}^{(d)}} \in \mathrm{Ex}_{c_2}(\alpha)$. Then there are nonnegative integers $d_0$ and $d_1$ with $d_0+d_1\leq d$ such that $$h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)= \varphi(d, d_0, d_1, \alpha, \beta).$$
Let $c_3$ be the remaining colour in $[q]$ (other than $c_1$ and $c_2$), so that $[q]=\{c_1,c_2,c_3\}$. Since ${{\mathbf{p}}^{(1)}}, {{\mathbf{p}}^{(2)}},\ldots,{{\mathbf{p}}^{(d)}} \in \mathrm{Ex}_{c_2}(\alpha)$, for every $k\in[d]$ we have that ${p^{(k)}}_{c_2}=1$ and one of three situations occurs: $$\mbox{ (i) ${p^{(k)}}_{c_1} = {p^{(k)}}_{c_3} = \alpha$,
(ii)
${p^{(k)}}_{c_1} =\alpha$, ${p^{(k)}}_{c_3} =1$,
(iii) ${p^{(k)}}_{c_1} =1$, ${p^{(k)}}_{c_3} =\alpha$.}$$ If situation (i) occurs, then, using the notation from Definition \[def:xy\], we have $$\label{eq:usex}
1-\frac{(1-\beta) \big({p^{(k)}}_{c_1}-{p^{(k)}}_{c_2}\big)}{\beta {p^{(k)}}_{c_2}+
{p^{(k)}}_{c_1} + {p^{(k)}}_{c_3}
}
= x(\alpha,\beta).$$ If situation (ii) occurs, then this quantity is $y(\alpha,\beta)$. If situation (iii) occurs, then it is $1$. Now let $d_0$ be the number of $k\in[d]$ for which situation (i) occurs and let $d_1$ be the number of $k\in[d]$ for which situation (ii) occurs. Then plugging and the other similar observations above into the definition of $g_{c_1,c_2,\beta}$ and $g_{c_3,c_2,\beta}$, we have $$\begin{aligned}
g_{c_1,c_2,\beta} \big({\hat{{\mathbf{p}}}^{(1)}},\ldots, {\hat{{\mathbf{p}}}^{(d)}}\big)&=
x(\alpha,\beta)^{d_0}y(\alpha,\beta)^{d_1}, \mbox{ and }\\
g_{c_3,c_2,\beta} \big({\hat{{\mathbf{p}}}^{(1)}},\ldots, {\hat{{\mathbf{p}}}^{(d)}}\big)&=
x(\alpha,\beta)^{d_0}y(\alpha,\beta)^{d-d_0-d_1}.\end{aligned}$$ Plugging this into the definition of $h_{c_1,c_2,\beta}$, we have $$h_{c_1,c_2,\beta}\big({\hat{\mathbf{p}}^{(1)}},\hdots,{\hat{\mathbf{p}}^{(d)}}\big)=1+\frac{(1-\beta)\Big(1-g_{c_1,c_2,\beta}({\hat{\mathbf{p}}^{(1)}}, \ldots, {\hat{\mathbf{p}}^{(d)}})\Big)}{\beta +\sum_{c\neq c_2}g_{c,c_2,\beta}({\hat{\mathbf{p}}^{(1)}}, \ldots, {\hat{\mathbf{p}}^{(d)}})}=\varphi(d, d_0, d_1, \alpha, \beta).\qedhere$$
The following definition applies Definition \[def:xy\] to the critical value of $\beta$ for the special case where $d_0 + d_1=d$; we will see that this special case is all that we need to consider to verify Condition \[cond:we\] for $\alpha\in (1,2)$.
\[def:gocritical\] Let $q=3$. Fix any $\alpha > 1$ and any integer $d\geq 2$. Let $\beta_*(d) = 1-q/(d+1)$. Let $X = x(\alpha,\beta_*(d))$ and $Y = y(\alpha,\beta_*(d))$. Let $$\varphi_*(d,d_0,\alpha)=\varphi(d, d_0, d-d_0, \alpha, \beta_*(d))=1+\frac{3}{d+1}\cdot\frac{1-X^{d_0}Y^{d-d_0}}{\beta_*(d)+X^{d_0}(Y^{d-d_0}+1)}.$$
The values $X$ and $Y$ from Definition \[def:gocritical\] are clearly functions of $d$ and $\alpha$, but we suppress this in the notation to avoid clutter. (These values will arise in proofs, but, unlike $\varphi_*(d,d_0,\alpha)$, they will not arise in statements of lemmas.)
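To illustrate the quantities of Definition \[def:gocritical\], the following Mathematica snippet evaluates $\varphi_*(d,d_0,\alpha)$ next to $\alpha^{1/d}$ at the (arbitrarily chosen) sample values $d=23$, $d_0=10$, $\alpha=53/27$; it is only an illustration and is not part of the verification code of Section \[sec:code\]. The first of the two printed numbers should be the smaller one, in line with Lemma \[lem:d>=23techlem\] below.

    b = 1 - 3/(d + 1);
    X = 1 - (1 - b) (a - 1)/(b + 2 a);   (* x(alpha, beta_*) *)
    Y = 1 - (1 - b) (a - 1)/(b + a + 1); (* y(alpha, beta_*) *)
    phistar = 1 + (3/(d + 1)) (1 - X^d0 Y^(d - d0))/(b + X^d0 (Y^(d - d0) + 1));
    N[{phistar, a^(1/d)} /. {d -> 23, d0 -> 10, a -> 53/27}]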
The following lemma applies for $\alpha\in (1,2)$.
\[lem:hbound\] Suppose $q=3$, $d\geq 2$ and $ 1-q/(d+1)\leq \beta\leq 1$. Fix $\alpha\in (1,2)$ and distinct colours $c_1,c_2\in [q]$. Suppose that ${{\mathbf{p}}^{(1)}}, {{\mathbf{p}}^{(2)}},\ldots,{{\mathbf{p}}^{(d)}} \in \mathrm{Ex}_{c_2}(\alpha)$. Then, there is a non-negative integer $d_0 \leq d$ such that $$h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)\leq \varphi_*(d, d_0,\alpha).$$
Since $d$ is fixed, we simplify the notation by writing $\beta_*$ for $\beta_*(d)$.
Note that both $x(\alpha, \beta)$ and $y(\alpha, \beta)$ are increasing functions of $\beta$, so $\varphi$ is a decreasing function of $\beta$. Thus for all $\beta \in [\beta_*,1]$, we have $
{\varphi}(d,d_0,d_1,\alpha,\beta_*) \geq {\varphi}(d, d_0, d_1,\alpha,\beta)
$. Combining this with Lemma \[lem:vald0d1d2\], we find that there are nonnegative integers $d_0$ and $d_1$ with $d_0 + d_1 \leq d$ such that $$\label{eq:hphibound}
h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big) = {\varphi}(d, d_0, d_1, \alpha, \beta)\leq {\varphi}(d, d_0, d_1, \alpha, \beta_*).$$ Let $d_2 = d- d_0 - d_1 \geq 0$. If $d_1 < d_2$, then $${\varphi}(d,d_0,d_2,\alpha,\beta_*)=1+\frac{(1-\beta_*)(1-x(\alpha,\beta_*)^{d_0}y(\alpha,\beta_*)^{d_2})}{\beta_*+x(\alpha,\beta_*)^{d_0}(y(\alpha,\beta_*)^{d_1}+y(\alpha,\beta_*)^{d_2})}\geq{\varphi}(d,d_0,d_1,\alpha,\beta_*).$$ So we can further assume that $d_1 \geq d_2$. Since $1<\alpha < 2$, and $\beta_*\leq 1$, we have $$x(\alpha, \beta_*)^2-y(\alpha, \beta_*) =\frac{(1-\beta_*)(\alpha-1)}{(\beta_*+2\alpha)(\beta_*+\alpha+1)}\left(\frac{\beta_*+\alpha+1}{\beta_*+2\alpha}(1-\beta_*)(\alpha-1)-(\beta_*+2)\right) \leq 0,$$ which implies $$x(\alpha,\beta_*)^{d_0}y(\alpha,\beta_*)^{d_1} \geq x(\alpha,\beta_*)^{d_0+2}y(\alpha,\beta_*)^{d_1-1},$$ and $$x(\alpha,\beta_*)^{d_0}y(\alpha,\beta_*)^{d_2} \geq x(\alpha,\beta_*)^{d_0+2}y(\alpha,\beta_*)^{d_2-1}.$$ These (together with the definition of $\varphi$) imply that if $d_1\geq d_2\geq 1$ then $${\varphi}(d,d_0,d_1,\alpha,\beta_*) \leq{\varphi}(d,d_0+2,d_1-1,\alpha,\beta_*).$$ Repeating this until $d_2=0$, we obtain $${\varphi}(d,d_0,d_1,\alpha,\beta_*) \leq{\varphi}(d,d_0+2d_2,d_1-d_2,\alpha,\beta_*)=\varphi_*(d,d_0+2d_2,\alpha).$$ Combining this with , we obtain $$h_{c_1,c_2,\beta}\big({{\mathbf{p}}^{(1)}},\hdots,{{\mathbf{p}}^{(d)}}\big)\leq \varphi_*(d,d_0+2d_2,\alpha).\qedhere$$
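The inequality $x(\alpha,\beta_*)^2\leq y(\alpha,\beta_*)$ used in the proof can also be cross-checked with a Resolve query in the same style as the code of Section \[sec:code\]; this is only a sanity check and is not needed for the argument. The expected output is [False]{} (i.e., there is no counterexample with $1<\alpha<2$ and $0\leq\beta\leq 1$).

    xx = 1 - (1 - b) (a - 1)/(b + 2 a);
    yy = 1 - (1 - b) (a - 1)/(b + a + 1);
    Resolve[Exists[{a, b}, xx^2 > yy && 1 < a < 2 && 0 <= b <= 1]]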
\[lem:d<=22techlem\] For every fixed $d\in\{2,\ldots,22\}$, $d_0 \in \{0,\ldots,d\}$ and $\alpha \in (1,53/27]$, we have $\varphi_*(d,d_0,\alpha) < \alpha^{1/d}$.
This is rigorously verified using the Resolve function of Mathematica in Section \[app:d<=22techlem\].
\[lem:d>=23techlem\] For every fixed integer $d\geq 23$, $d_0 \in \{0,\ldots,d\}$ and $\alpha \in (1,53/27]$, we have $\varphi_*(d,d_0,\alpha) < \alpha^{1/d}$.
Fix $d$ and $d_0$ such that $d\geq 23$ and $d_0 \in \{0,\ldots,d\}$. Since $d$ is fixed, we simplify the notation by writing $\beta_*$ for $\beta_*(d)$. Note that $\beta_*\in (0,1)$. Given $d$, let $X$ and $Y$ be the functions of $\alpha$ defined in Definition \[def:gocritical\], and observe that these are positive for all $\alpha\geq 1$. Let $$\tilde{\varphi}(d, d_0, \alpha)=1+\frac{1-X^{d_0}Y^{d-d_0}\strut}{\strut d X^{\frac{2d_0}{2+\beta_*}}Y^{\frac{d-d_0}{2+\beta_*}}}-\alpha^{1/d}=1+\frac{1}{d}\Big(X^{-\frac{2d_0}{2+\beta_*}}Y^{-\frac{d-d_0}{2+\beta_*}}-X^{\frac{d_0 \beta_*}{2+\beta_*}}
Y^{\frac{(d-d_0)(1+\beta_*)}{2+\beta_*}}\Big)-\alpha^{1/d}.$$
By the weighted arithmetic-mean geometric-mean inequality[^8] we have $$\beta_*+X^{d_0}(Y^{d-d_0}+1)\geq (2+\beta_*)X^{\frac{2d_0}{2+\beta_*}}Y^{\frac{d-d_0}{2+\beta_*}},$$ which yields $\tilde{\varphi}(d,d_0,\alpha) {\geq}\varphi_*(d, d_0, \alpha) - \alpha^{1/d}$ (here we use that $1-\beta_*=\tfrac{3}{d+1}$ and $2+\beta_*=\tfrac{3d}{d+1}$, so that $\tfrac{1-\beta_*}{2+\beta_*}=\tfrac{1}{d}$). Thus, our goal is to prove $\tilde{\varphi}(d,d_0,\alpha)< 0$ when $\alpha\in (1,53/27]$. When $\alpha=1$, $X$ and $Y$ are $1$ so $\tilde{\varphi}(d, d_0, 1)=0$. To prove the lemma, it suffices to show that ${\frac{\partial{\tilde{\varphi}}}{\partial{\alpha}}}< 0$ for $\alpha\in (1,53/27]$. The rest of the proof is devoted to this technical fact. It is broken up into four steps.
[**Step 1.**]{} Let $ \xi_1= 2d_0 \big(\frac{2 + \alpha \beta_*}{2 + \beta_*}\big)+(d-d_0)\alpha$, $M = -X^{d_0}Y^{d-d_0}\left(d_0\beta_*\frac{2 + \alpha \beta_*}{2 + \beta_*}+ (d-d_0)(\beta_*+1)\alpha\right)$, $S = X^{\frac{2d_0}{2+\beta_*}} Y^{\frac{d-d_0}{2+\beta_*}} (\alpha+\beta_*+1) (\alpha \beta_*+2) / (1-\beta_*)$, and $\xi_2 = M + \alpha^{1/d} S$. The goal of Step 1 is to show that ${\frac{\partial{\tilde{\varphi}}}{\partial{\alpha}}}< 0$ follows from $\xi_1 \leq \xi_2$.
[**Technical details of Step 1.**]{} To calculate the derivative of the function $\tilde{\varphi}$, let $$g_X := (2 \alpha+\beta_*) (\alpha \beta_*+\alpha+1),\qquad g_Y :=(\alpha+\beta_*+1) (\alpha \beta_*+2).$$ Using Mathematica, we verify in Section \[app:d>=23techlem\] the formula $$\label{eq:vetvt4g66}
\begin{aligned}
{\frac{\partial{\tilde{\varphi}}}{\partial{\alpha}}}&=
\bigg(\frac{(1-\beta_*)X^{-\frac{2d_0}{2+\beta_*}}Y^{-\frac{d-d_0}{2+\beta_*}}}{d\alpha g_Y} \bigg)
\left(2 d_0\alpha \frac{g_Y}{g_X} + (d-d_0)\alpha +\right.\\
&\hskip 5.5cm \left. X^{ d_0} Y^{ d-d_0} \Big(d_0 \beta_*\alpha \frac{g_Y}{g_X} +
(d-d_0)(1+\beta_*)\alpha \Big)
- \alpha^{1/d} S\right).
\end{aligned}$$ For all $\alpha>1$, we have that $$\alpha(2+\beta_*)(\alpha + \beta_* + 1)- (2\alpha + \beta_*)(\alpha \beta_* + \alpha +1)=-\beta_*(\alpha-1)^2 < 0,$$ so we obtain the bound $$\label{eqn:fxfy}
\frac{g_Y }{g_X }= \frac{(\alpha+\beta_*+1) (\alpha \beta_*+2)}{(2 \alpha+\beta_*) (\alpha \beta_*+\alpha+1)}< \frac{(\alpha+\beta_*+1) (\alpha \beta_*+2)}{\alpha(2+\beta_*)(\alpha + \beta_* + 1)}= \frac{2 + \alpha \beta_*}{\alpha (2 + \beta_*)}.$$
Now note that the first parenthesised expression in (\[eq:vetvt4g66\]) is positive since $X,Y>0$ for all $\alpha>1$. Thus, to show ${\frac{\partial{\tilde{\varphi}}}{\partial{\alpha}}}< 0$, it suffices to show that the second parenthesised expression in (\[eq:vetvt4g66\]) is less than $0$. To do this, we can apply the strict upper bound on $g_Y/g_X$ from (\[eqn:fxfy\]), and show that the resulting expression, which is $\xi_1 - \xi_2$, is at most $0$. Thus, we have completed Step 1.
[**Step 2.**]{} Let $W = 1+\frac{\alpha-1}{d}-\frac{(d-1) (\alpha-1)^2}{2 d^2} $ and $\xi_3 = M + W S$. The goal of Step 2 is to show $W \leq \alpha^{1/d}$, which implies $\xi_3 \leq \xi_2$. Given Step 1, this means that ${\frac{\partial{\tilde{\varphi}}}{\partial{\alpha}}}< 0$ will follow from showing that $\xi_1 \leq \xi_3$.
[**Technical details of Step 2.**]{} Recall that $d\geq 23$. Let $m(\alpha,d) = \alpha^{1/d}- W$. Note that $$m(1,d) = {\frac{\partial{m}}{\partial{\alpha}}}\Big\vert_{\alpha=1} = {\frac{\partial{^2 m}}{\partial{\alpha^2}}}\Big\vert_{\alpha=1}=0,\quad\mbox{and}\quad{\frac{\partial{^3m}}{\partial{\alpha^3}}} = \frac{\left(2-\frac{1}{d}\right) \left(1-\frac{1}{d}\right) }{d \alpha^{3-\frac{1}{d}} }>0\mbox{\ for all $\alpha > 1$.}$$ Thus, $m(\alpha,d) \geq 0$ for $\alpha > 1$.
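The Taylor data used in Step 2, namely the vanishing of $m$ and of its first two derivatives at $\alpha=1$ and the formula for the third derivative, can be confirmed symbolically. The following Mathematica cross-check (not among the code segments of Section \[sec:code\]) should output $\{0,0,0\}$ followed by [True]{}.

    W = 1 + (a - 1)/d - (d - 1) (a - 1)^2/(2 d^2);
    m = a^(1/d) - W;
    Simplify[{m, D[m, a], D[m, {a, 2}]} /. a -> 1]
    Simplify[D[m, {a, 3}] == (2 - 1/d) (1 - 1/d)/(d a^(3 - 1/d))]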
[**Step 3.**]{} Let $\kappa=d_0/d$. After re-parameterising $\xi_1$ and $\xi_3$ (so that they depend on $d$, $\kappa$ and $\alpha$), we show that $\xi_1 \leq \xi_3$ follows from ${\frac{\partial{^2 \xi_3}}{\partial{\alpha^2}}} > 0$. Given the other steps, this means that ${\frac{\partial{\tilde{\varphi}}}{\partial{\alpha}}}< 0$ follows from ${\frac{\partial{^2 \xi_3}}{\partial{\alpha^2}}} > 0$. For future reference, the re-parameterisation is as follows. Let $$\begin{aligned}
s_0&=-\left(\frac{(2 d-1) d (1-\kappa) \alpha}{d+1}+\frac{(d-2) \kappa ((d-2) \alpha+2(d+1))}{3(d+1)}\right), \text{ and}\cr
t_0&=\frac{((d+1)\alpha+2d-1) ((d-2) \alpha+2(d+1))}{3(d+1)}\left(1+\frac{\alpha-1}{d}-\frac{(d-1) (\alpha-1)^2}{2 d^2}\right).
\end{aligned}$$ Then $$\label{eq:rt4tv4ybyb}
\begin{aligned}
\xi_1&= \frac{\alpha-1}{3} (-d \kappa+3 d-4 \kappa)+d(\kappa+1), \text{ and}\\
\xi_3&= s_0 X^{d\kappa }Y^{d(1-\kappa)} + t_0 X^{\frac{2\kappa(d+1)}{3}} Y^{\frac{(1-\kappa)(d+1)}{3}}.
\end{aligned}$$
[**Technical details of Step 3.**]{}
The Mathematica code in Section \[app:d>=23techlem\] verifies that, at $\alpha=1$, $ \xi_3 = d(\kappa+1)$ and $ {\frac{\partial{\xi_3}}{\partial{\alpha}}} = \tfrac{1}{3}(-d\kappa+3d-4\kappa) $. By Taylor’s theorem and the Lagrange form of the remainder, there exists a number $\tilde{\alpha} \in (1, \alpha]$ such that $$\label{eq:xi3}
\xi_3 =d(\kappa+1) + \frac{1}{3}(-d\kappa+3d-4\kappa)(\alpha-1)+ \frac{1}{2}\left({\frac{\partial{^2 \xi_3}}{\partial{\alpha^2}}}\Big\vert_{\alpha=\tilde{\alpha}}\right)(\alpha-1)^2.$$ Thus, if ${\frac{\partial{^2 \xi_3}}{\partial{\alpha^2}}} > 0$ for all $\alpha \in (1,53/27]$, we can conclude that $$\begin{aligned}
\label{eq:xi1xi3}
\xi_3 {\geq}d(\kappa+1) + \frac{1}{3}(-d\kappa+3d-4\kappa)(\alpha-1) = \xi_1.
\end{aligned}$$
[**Step 4.**]{} Using the parameterisation of Step 3, we show that ${\frac{\partial{^2 \xi_3}}{\partial{\alpha^2}}} > 0$, for all $d\geq 23$, $\kappa\in [0,1]$ and $\alpha \in (1,53/27]$, thus completing the proof.
[**Technical details of Step 4.**]{} We start with the observation that, for any $k_1$ and $k_2$ (not depending on $\alpha$) and any function $r$ of $\alpha$, it holds that $${\frac{\partial{(r X^{k_1} Y^{k_2})}}{\partial{\alpha}}} = f(r,k_1,k_2) X^{k_1} Y^{k_2}, \mbox{ where } f(r,k_1,k_2) = {\frac{\partial{r}}{\partial{\alpha}}} + \frac{k_1 r}{X} {\frac{\partial{X}}{\partial{\alpha}}} + \frac{k_2 r}{Y} {\frac{\partial{Y}}{\partial{\alpha}}}.$$ Applying this observation twice to each of the two summands in the expression for $\xi_3$, we see that there are rational functions $s_2$ and $t_2$ of $\alpha$, $d$ and $\kappa$ such that $${\frac{\partial{^2 \xi_3}}{\partial{\alpha^2}}} = s_2 X^{d\kappa }Y^{d(1-\kappa)} + t_2 X^{\frac{2\kappa(d+1)}{3}} Y^{\frac{(1-\kappa)(d+1)}{3}}.$$
In Section \[app:d>=23techlem\] we use Mathematica to calculate $t_2$ explicitly and to verify that for every $d\geq 23$ and $\alpha \in (1,53/27]$, we have $ {\frac{\partial{^2t_2}}{\partial{\kappa^2}}}\geq0$, $ {\frac{\partial{t_2}}{\partial{\kappa}}}\Big\vert_{\kappa=0}\geq 0$ and $ t_2\vert_{\kappa=0}\geq 0 $. We conclude that $t_2\geq 0$ for all $\kappa \in [0,1]$ (for the given ranges of $d$ and $\alpha$). Since $X$ and $Y$ are less than or equal to $1$ and $2 \kappa(d+1)/3 \leq d \kappa$ and $(1-\kappa)(d+1)/3 \leq d(1-\kappa)$, the fact that $t_2\geq 0$ implies
$$\label{eq:xi3derivative}
{\frac{\partial{^2\xi_3}}{\partial{\alpha^2}}} = s_2 X^{d\kappa} Y^{d(1-\kappa)} + t_2 X^{\frac{2\kappa(d+1)}{3}} Y^{\frac{(1-\kappa)(d+1)}{3}} {\geq}(s_2 + t_2) X^{d\kappa} Y^{d(1-\kappa)}.$$
The final Mathematica code segment verifies that $s_2+t_2>0$ for all $d\geq 23$, $\alpha \in (1,53/27]$ and $\kappa \in [0,1]$. This, together with (\[eq:xi3derivative\]), completes the proof since $X$ and $Y$ are positive.
We finish by giving the proof of Lemma \[lem:twosteptoprove\].
[ Let $q=3$ and $d\geq 2$. Then, the pair $(q,d)$ satisfies Condition \[cond:we\] for all $\alpha\in(1,53/27]$. ]{}
To prove Condition \[cond:we\], it suffices to check it for $\alpha\in (1,53/27]$. By Lemma \[lem:hbound\], we only need to check that $\varphi_*(d, d_0,\alpha)<\alpha^{1/d}$ for $\alpha\in (1,53/27]$ and integers $0\leq d_0\leq d$. This has been verified in Lemma \[lem:d<=22techlem\] for all $d\leq 22$ and in Lemma \[lem:d>=23techlem\] for all $d\geq 23$.
Appendix: Mathematica Code {#sec:code}
==========================
Lemma \[lem:examplecond\] {#app:examplecond}
-------------------------
The following code checks that, for every $\alpha>1$ and all $d$-tuples $({\mathbf{p}}^{(1)},\hdots, {\mathbf{p}}^{(d)})$ with ${\mathbf{p}}^{(1)},\hdots, {\mathbf{p}}^{(d)}\in\mathrm{Ex}_q(\alpha)$, it holds that $h_{c_1,c_2,\beta_*}({\mathbf{p}}^{(1)},\hdots, {\mathbf{p}}^{(d)})<\alpha^{1/d}$. The output is [True]{}, and the same is true when the first line changes to $q=4,d=4$.
q = 3; d = 3; b = 1 - q/(d + 1);
EX = Tuples[{1, alpha}, q-1];
G[vector_, colour_] := 1 - (1-b) (vector[[colour]]- 1)/
(b + Sum[vector[[cc]], {cc,1,q-1}]);
dTUPLES=Tuples[EX,d]; L=Length[dTUPLES];
UNIQ = True;
For[l = 1, l<= L && UNIQ, l++,
currenttuple=dTUPLES[[l]];
For[k=1, k <= d, k++,
vectorofchild[k]=currenttuple[[k]];
];
For[colour = 1, colour <= q-1, colour++,
g[colour] = Product[ G[vectorofchild[k], colour], {k, 1, d}];
];
h = 1 + (1 - b) (1 - g[1])/(b + Sum[g[c],{c,1,q-1}])/.{alpha->u^d};
CHECK = Resolve[Exists[u, h >= u && u > 1]];
If[CHECK == True, UNIQ = False];
];
Print[UNIQ]
Lemma \[lem:fbetamono\] {#app:fbetamono}
-----------------------
Both of the queries in the following code give the output [True]{}.
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
bb = 1-b;
W = 1 - 3y + bb ((3y - 1) + (1 - x - 2y) + x(2x + y - 1) + y(x - y));
lhs = D[fu, b];
rhs = -fu^2 (d (1 - bb x)^(d/2) (1- bb(1-x-y))^(d/2) W /
((1- bb y)^(d+1) (1- bb x) (1- bb (1-x-y))));
FullSimplify[lhs == rhs]
fl = (1 - (1 - b) x)^d /
((1 - (1 - b) x)^d + (1 - (1 - b) y)^d + (1 - (1 - b) (1 - x - y))^d);
lhs = D[fl, b];
rhs = fl^2 d ((2x+y-1) (1-bb(1-x-y))^(d-1) + (x-y) (1-bb y)^(d - 1)) /
((1-bb x)^(d+1));
FullSimplify[lhs == rhs]
Lemma \[eq:d=2fixedpoints\] {#app:d=2bound}
---------------------------
Both of the queries in the following code give the output [False]{}.
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
fl = (1 - (1 - b) x)^d /
((1 - (1 - b) x)^d + (1 - (1 - b) y)^d + (1 - (1 - b) (1 - x - y))^d);
Resolve[Exists[{x, y, b}, 0 < y <= 1/3 && 1106/2500 <= x < 1 &&
0 < b <= 1 && (fu /. {d -> 2}) == x && (fl /. {d -> 2}) == y]]
Resolve[Exists[{x, y, b}, 0 < y <= 460/2000 && 1/3 <= x < 1 &&
0 < b <= 1 && (fu /. {d -> 2}) == x && (fl /. {d -> 2}) == y]]
Lemma \[lem:guymono\] {#app:guymono}
---------------------
The following code gives the output [True]{}.
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
gu = fu - x /. {b -> 1 - 3/(d + 1), x -> m y};
A = (1 - 3 y/(d + 1))^d;
B = (1- 3(1-y(m+1))/(d+1))^(d/2) (1- 3m y/(d+1))^(d/2);
W = A B/(A+2B)^2;
lhs = D[gu, y];
rhs = -m+(9 d W /(3y(m+1)+d-2)) (m(2m y+y-1)/(1+d-3m y) - (2m y+y+d-1)/(1+d-3y));
FullSimplify[lhs == rhs]
Lemma \[lem:glymono\] {#app:glymono}
---------------------
The following code gives the output [True]{}.
fl = (1 - (1 - b) x)^d /
((1 - (1 - b) x)^d + (1 - (1 - b) y)^d + (1 - (1 - b) (1 - x - y))^d);
gl = fl - y /. {b -> 1 - 3/(d + 1), x -> m y};
W = (m(2d-1)+d+1) ((3y(m+1)+d-2) / (d+1))^d /
((1 + d - 3m y) (d-2+ 3y(1 + m))) +
(m-1)(d+1)(1-(3 y)/(d+1))^d / ((1 + d - 3 y) (1 + d - 3m y));
lhs = D[gl, y];
rhs = -3 d (1 - (3m y)/(d+1))^d W / ( ((3y(m+1)+d-2)/(d+1))^d
+ (1 - (3m y)/(d+1))^d + (1-(3 y)/(d+1))^d )^2 - 1;
FullSimplify[lhs == rhs]
Lemma \[lem:ineqs\] {#sec:ineqs}
-------------------
The following code outputs [False]{} and [False]{}.
y2 = 7/(10 m + 12);
y1 = y2 + 3/500;
x2 = m y2;
x1 = m y1;
Resolve[Exists[m, 157/80 <= m < 32 && (0 >= y1 || y1 >= 1 - x1 - y1 ||
1 - x1 - y1 >= 1/3 || 1/3 >= x1 || x1 >= 1 - y1 ) ]]
Resolve[Exists[m, 32 <= m && (0 >= y2 || y2 >= 1 - x2 - y2 ||
1 - x2 - y2 >= 1/3 || 1/3 >= x2 || x2 >= 1 - y2 ) ]]
Lemma \[lem:dhu<0\] {#app:lem:dhu<0}
----------------------
Both queries in the following code, which verify the differentiation formulas used in the proof, output [True]{}.
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
bb = 1 - b;
R = (1 - bb x)^(d/2) (1 - bb (1 - x - y))^(d/2) / (1 - bb y)^d;
xlhs = D[fu, x];
xrhs = fu^2 R d bb^2 (2 x + y - 1) / ((1 - bb(1 - x - y))(1 - bb x));
ylhs = D[fu, y];
yrhs = -fu^2 R d bb (3 + bb(2 x + y - 2)) /((1 - bb(1 - x - y))(1 - bb y));
FullSimplify[xlhs == xrhs]
FullSimplify[ylhs == yrhs]
The following code is used in the proof of the corresponding inequality. We consider two cases — when $\mu<32$ and when $\mu\geq 32$. The output is [False]{} in both cases.
lhs=(8-y)(2x+y-1)/( (8-x)(2x+y+22) );
x1 = 7 m/(10 m + 12) + 3 m/500;
y1 = 7/(10 m + 12) + 3/500;
Resolve[Exists[m, 1<m <32 && (lhs/.{x->x1, y->y1}) >= 1/24]]
x2 = 7 m/(10 m + 12);
y2 = 7/(10 m + 12);
Resolve[Exists[m, m>=32 && (lhs/.{x->x2, y->y2}) >= 1/24]]
Here is the code to show that $Y$ is increasing in $\hat\beta_*$. The output is [False.]{}
Num = 3 + b (2 x + y - 2);
Den = (1 - b (1 - x - y)) (1 - b y);
Der = D[Num/Den , b];
Resolve[Exists[{d, x, y, b},
d >= 0 && 0 < b <= 1/8 && 0 < y < 1 - x - y < 1/3 < x < 1 - y &&
Der < 0]]
Here is the code for Case 1. The output is [False]{}.
lhs = 3 (2 x + y + 22)/((8 - y) (x + y + 7)) /.
{x -> 7 m/(10 m + 12), y -> 7/(10 m + 12)};
rhs = 8/7;
Resolve[Exists[m, lhs >= rhs && m > 32]]
Here is the code for Case 2. The output is [False]{}.
lhs = 3 (2 x + y + 22)/((8 - y) (x + y + 7)) /.
{x -> 7 m/(10 m + 12) + 3 m/500, y -> 7/(10 m + 12) + 3/500};
rhs = 24 (25 m^2 + 60 m + 3536)/(25 m^2 + 60 m + 73536);
Resolve[Exists[m, lhs >= rhs && 1 < m]]
Lemma \[lem:muone\] {#app:muone}
-------------------
The output of the following code, which verifies the differentiation formula used in the proof, is [True.]{}
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
psi[d_,z_] := d/(d-3z+1) + Log[d +1-3z];
zeta[d_,x_,y_]:=2psi[d,y]-psi[d,x]-psi[d,1-x-y];
lhs = D[ (fu/.{b -> 1 - 3/(d + 1)}), d];
rhs = (1- 3x/(d+1))^(d/2) (1- 3y/(d+1))^d (1- 3(1-x-y)/(d+1))^(d/2)zeta[d,x,y]/
((1- 3y/(d+1))^d + 2(1- 3x/(d+1))^(d/2) (1- 3 (1-x-y)/(d+1))^(d/2))^2;
FullSimplify[lhs == rhs,d>=0]
The following code establishes Facts 1, 2, and 3 for $\mu=157/80$. The output is [False]{}, $0$, then [True.]{}
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
hu=fu-x /.{b -> 1 - 3/(d+1)};
psi[d_,z_] := d/(d-3z+1) + Log[d +1-3z];
zeta[d_,x_,y_]:=2psi[d,y]-psi[d,x]-psi[d,1-x-y];
Fd = D[zeta[d,x,y], d];
xm = 7 m/(10 m + 12) + 3 m/500 /. {m -> 157/80};
ym = 7/(10 m + 12) + 3/500 /. {m -> 157/80};
Resolve[Exists[d, d >= 23 && (Fd /. {x -> xm, y -> ym}) >= 0]]
Limit[zeta[d,xm,ym], d -> \[Infinity]]
Limit[hu /. {x -> xm, y -> ym}, d -> \[Infinity]]<0
The following code establishes Facts 1, 2, and 3 for $\mu=32$. The output is [False]{}, $0$, then [True.]{}
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
hu=fu-x /.{b -> 1 - 3/(d+1)};
psi[d_,z_] := d/(d-3z+1) + Log[d +1-3z];
zeta[d_,x_,y_]:=2psi[d,y]-psi[d,x]-psi[d,1-x-y];
Fd = D[zeta[d,x,y], d];
xm = 7 m/(10 m + 12) /. {m -> 32};
ym= 7/(10 m + 12) /. {m -> 32};
Resolve[Exists[d, d >= 23 && (Fd /. {x -> xm, y -> ym}) >= 0]]
Limit[zeta[d,xm,ym], d -> \[Infinity]]
Limit[hu /. {x -> xm, y -> ym}, d -> \[Infinity]]<0
Lemma \[lem:hl>0\] {#app:hl>0}
---------------------
We first prove that $h_\ell(23,\mu)>0$ for $\mu\geq 157/80$. The output to both queries is [False.]{}
fl = (1 - (1 - b) x)^d /
((1 - (1 - b) x)^d + (1 - (1 - b) y)^d + (1 - (1 - b) (1 - x - y))^d);
hl = fl - y /. {b -> 1 - 3/(d+1), x -> m y};
x1 = 7 m/(10 m + 12) + 3 m/500;
y1 = 7/(10 m + 12) + 3/500;
h1 = hl /. {d -> 23, x -> x1, y -> y1};
Resolve[Exists[m, 157/80<=m <32 && h1 <= 0]]
x2 = 7 m/(10 m + 12);
y2 = 7/(10 m + 12);
h2 = hl /. {d -> 23, x -> x2, y -> y2};
Resolve[Exists[m, m >= 32 && h2 <= 0]]
The output of the following code, which verifies the differentiation formula used in the proof, is [True.]{}
fl = (1 - (1 - b) x)^d/
((1 - (1 - b) x)^d + (1 - (1 - b) y)^d + (1 - (1 - b) (1 - x - y))^d);
psi[d_, z_] := d/(d - 3 z + 1) + Log[d + 1 - 3 z];
A = (1 - 3 x/(d + 1))^d;
B = (1 - 3 y/(d + 1))^d;
CC = (1 - 3 (1 - x - y)/(d + 1))^d;
lhs = D[(fl /. {b -> 1 - 3/(d + 1)}), d];
rhs = (A CC (psi[d, x] - psi[d, 1 - x - y]) + A B (psi[d, x] - psi[d, y]))/
(A + B + CC)^2;
FullSimplify[lhs == rhs, d >= 0]
Lemma \[lem:d<=22sequence\] {#app:d<=22sequence}
------------------------------
The code checks that all of the desired inequalities are satisfied. The output is [True.]{}
fu = (1 - (1 - b) y)^d /
((1 - (1 - b) y)^d + 2 (1 - (1 - b) x)^(d/2) (1 - (1 - b) (1 - x - y))^(d/2));
fl = (1 - (1 - b) x)^d /
((1 - (1 - b) x)^d + (1 - (1 - b) y)^d + (1 - (1 - b) (1 - x - y))^d);
Flag = True;
u[0] = 1;
l[0] = 0;
For[dd = 3, dd <= 22, dd++, (
ffu = Ceiling[10000 fu /. {d -> dd, b -> 1 - 3/(dd + 1)}]/10000;
ffl = Floor[10000 fl /. {d -> dd, b -> 1 - 3/(dd + 1)}]/10000;
For[n = 0, n <= 60, n++, (
u[n + 1] = ffu /. {x -> u[n], y -> l[n]};
l[n + 1] = ffl /. {x -> u[n], y -> l[n]};
Flag = Flag && u[n] >= u[n + 1] && l[n] <= l[n + 1] &&
2 u[n] + l[n] >= 1 >= 2 l[n] + u[n])];
Flag = Flag && u[60]/l[60] <= 53/27;)];
Flag
Lemma \[lem:d<=22techlem\] {#app:d<=22techlem}
-----------------------------
The code checks all relevant values of $d$ and $d_0$. The output is [True.]{} The substitution of $u$ for $\alpha^{1/d}$ is there to make the code run faster. Despite this, it takes more than 5 minutes to run on our machine.
b = 1 - 3/(d + 1);
x = 1 - (1 - b) (a - 1)/(b + 2 a);
y = 1 - (1 - b) (a - 1)/(b + a + 1);
phi = 1 + (3/(d + 1)) (1 - x^d0 y^(d - d0))/(b +
x^d0 (y^(d - d0) + 1)) /. {a -> u^d};
Flag = True;
For[dd = 2, dd <= 22, dd++, u0 = (53/27)^dd;
For[dd0 = 0, dd0 <= dd, dd0++,
Flag = Flag && ! Resolve[Exists[u,
(phi /. {d -> dd, d0 -> dd0}) >= u && 1 < u <= u0]];];];
Flag
Lemma \[lem:d>=23techlem\] {#app:d>=23techlem}
-----------------------------
The following code outputs [True]{}, therefore verifying the differentiation formula (\[eq:vetvt4g66\]).
b = 1 - 3/(d + 1);
x = 1 - (1 - b) (a - 1)/(b + 2 a);
y = 1 - (1 - b) (a - 1)/(b + a + 1);
phi = 1+(1/d)( x^(-2d0/(2+b)) y^(-(d-d0)/(2+b))
- x^(d0 b/(2+b)) y^((d-d0)(1+b)/(2+b)) )-a^(1/d);
gx=(2a+b)(a b+a+1);
gy=(a+b+1)(a b+2);
S=x^(2d0/(2+b)) y^((d-d0)/(2+b)) (a+b+1) (a b+2)/(1-b);
rhs=( (1-b)x^(-2d0/(2+b)) y^(-(d-d0)/(2+b)) / (d a gy) ) *
(2 d0 a (gy/gx) + (d-d0)a + x^d0 y^(d-d0) (d0 b a (gy/gx)+ (d-d0)(1+b)a)
-a^(1/d) S);
Resolve[Simplify[D[phi, a] - rhs] == 0]
The following code outputs [True]{} [True]{}, verifying the calculation for Step 3.
b = 1 - 3/(d + 1);
x = 1 - (1 - b) (a - 1)/(b + 2 a);
y = 1 - (1 - b) (a - 1)/(b + a + 1);
s0 = -(((2 d - 1) d (1 - k) a)/(d +
1) + ((d - 2) k ((d - 2) a + 2 (d + 1))) /(3 (d + 1)));
W = 1 + (a - 1)/d - (d - 1) (a - 1)^2/(2 d^2);
t0 = W ((d + 1) a + 2 d - 1) ((d - 2) a + 2 (d + 1))/(3 (d + 1));
xi3 = s0 x^(d k) y^(d (1 - k)) +
t0 x^(2 k (d + 1)/3) y^((1 - k) (d + 1)/3);
FullSimplify[(xi3 /. {a -> 1}) == d + d k]
FullSimplify[(D[xi3, a] /. {a -> 1}) == (1/3) (- d k + 3 d - 4 k) ]
The final two code segments are for Step 4. The following code calculates $t_2$ and verifies that $ {\frac{\partial{^2t_2}}{\partial{\kappa^2}}}\geq0$, $ {\frac{\partial{t_2}}{\partial{\kappa}}}\Big\vert_{\kappa=0}\geq 0$ and $ t_2\vert_{\kappa=0}\geq 0 $. The output is [False]{}, [False]{}, and [False.]{}
b = 1 - 3/(d + 1);
x = 1 - (1 - b) (a - 1)/(b + 2 a);
y = 1 - (1 - b) (a - 1)/(b + a + 1);
s0 = -(((2 d - 1) d (1 - k) a)/(d +
1) + ((d - 2) k ((d - 2) a + 2 (d + 1)))/(3 (d + 1)));
W = 1 + (a - 1)/d - (d - 1) (a - 1)^2/(2 d^2);
t0 = W ((d + 1) a + 2 d - 1) ((d - 2) a + 2 (d + 1))/(3 (d + 1));
t2 = Simplify[ D[t0 x^(2 k (d + 1)/3) y^((1 - k) (d + 1)/3), {a,
2}]/(x^(2 k (d + 1)/3) y^((1 - k) (d + 1)/3))];
tk1 = D[t2, k];
tk2 = D[tk1, k];
Resolve[Exists[{d, a}, tk2 < 0 && d >= 23 && 1 <= a <= 53/27]]
Resolve[Exists[{d, a}, (tk1 /. {k -> 0}) < 0 && d >= 23 && 1 <= a <= 53/27]]
Resolve[Exists[{d, a}, (t2 /. {k -> 0}) < 0 && d >= 23 && 1 <= a <= 53/27]]
The following code calculates $s_2$ (as well as $t_2$) and verifies that $s_2+t_2$ is positive. It takes about 10 minutes to run. The output is [False.]{} The reason for the transformation of $\alpha$ (as a function of $r$) is to speed up the calculation.
b = 1 - 3/(d + 1);
x = 1 - (1 - b) (a - 1)/(b + 2 a);
y = 1 - (1 - b) (a - 1)/(b + a + 1);
s0 = -(((2 d - 1) d (1 - k) a)/(d +
1) + ((d - 2) k ((d - 2) a + 2 (d + 1)))/(3 (d + 1)));
W = 1 + (a - 1)/d - (d - 1) (a - 1)^2/(2 d^2);
t0 = W ((d + 1) a + 2 d - 1) ((d - 2) a + 2 (d + 1))/(3 (d + 1));
t2 = Simplify[ D[t0 x^(2 k (d + 1)/3) y^((1 - k) (d + 1)/3), {a,
2}]/(x^(2 k (d + 1)/3) y^((1 - k) (d + 1)/3))];
s2 = Simplify[
D[s0 x^(d k) y^(d (1 - k)), {a, 2}]/(x^(d k) y^(d (1 - k)))];
p = Simplify[s2 + t2 /. {a -> 1 + 3 r}];
Resolve[Exists[{d, r, k},
p <= 0 && d >= 23 && 0 <= k <= 1 && 0 < r <= 26/81]]
[^1]: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no. 334828. The paper reflects only the authors’ views and not the views of the ERC or the European Commission. The European Union is not liable for any use that may be made of the information contained therein. Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford, OX1 3QD, UK.
[^2]: Often, in the literature, $\beta$ is taken to be the *inverse temperature*. Since we don’t need the physical details here, we simplify the notation by taking $\beta$ to be $e$ to the inverse temperature.
[^3]: \[eq:rferf3g555\]The terminology comes from the theory of Gibbs measures, where the interest is in examining whether there is a unique infinite-volume measure whose marginals on finite regions is given by the Gibbs distribution (it can be shown that an infinite-volume measure always exists). See [@Geo88; @friedlivelenik2017] for a thorough exposition of the theory. The two formulations of uniqueness/non-uniqueness that we have described, i.e., examining infinite-volume measures and examining the limit of marginals in growing finite regions, turn out to be equivalent.
[^4]: Note that the $d$-ary tree is essentially the same as a regular tree with degree $d+1$; the only difference is that the root of the $d$-ary tree has degree $d$ while the root of a $(d+1)$-regular tree has degree $d+1$. Accordingly, the uniqueness phase transition occurs at exactly the same location in both trees.
[^5]: \[foot:referf\]Roughly, in a semi-translation-invariant Gibbs measure, even-layered vertices have the same marginals and odd-layered vertices have the same marginals. By studying the number of fixpoints of a particular recursion, one can establish whether there exist multiple such measures. See [@BW02 Theorem 2.3 & Theorem 3.2] for details on this connection in the context of the colourings model and [@GSV Corollary 7.5] in the context of the Potts model. We also remark that such measures on the tree have been studied in the statistical mechanics literature as well, for example Peruggi, di Liberto, and Monroy [@Peruggi1; @Peruggi2] give a description of the phase diagrams of the models in non-uniqueness. We refer the reader to the book [@Rozikov] for a detailed treatment of Gibbs measures on the infinite tree.
[^6]: All two-state systems are either monotone or antimonotone on the tree, and therefore the root is most sensitive to boundary configurations where all the leaves have the same state. Uniqueness/non-uniqueness is therefore determined by examining whether the marginal at the root under these two extremal configurations coincide. Similarly, for the ferromagnetic Potts model, one can show that the extremal configurations on the leaves are those where all the leaves have the same colour.
[^7]: The notation $\alpha_{2n}\downarrow \alpha_{\mathrm{ev}}$ means that the sequence $\alpha_{2n}$ converges to $\alpha_{\mathrm{ev}}$ by decreasing monotonically.
[^8]: The inequality says that for non-negative $x_1,x_2,x_3,w_1,w_2,w_3$ with $w=w_1+w_2+w_3>0$, $w_1 x_1 + w_2 x_2 + w_3 x_3 \geq w x_1^{w_1/w} x_2^{w_2/w} x_3^{w_3/w}$. Take $x_1=1$, $x_2 = X^{d_0}Y^{d-d_0}$, $x_3 = X^{d_0}$, $w_1 = \beta_*$, $w_2=1$ and $w_3=1$.
|
---
abstract: 'We study the probability measure $\mu_{0}$ for which the moment sequence is $\binom{3n}{n}\frac{1}{n+1}$. We prove that $\mu_{0}$ is absolutely continuous, find the density function and prove that $\mu_{0}$ is infinitely divisible with respect to the additive free convolution.'
address:
- 'Instytut Matematyczny, Uniwersytet Wroc[ł]{}awski, Plac Grunwaldzki 2/4, 50-384 Wroc[ł]{}aw, Poland'
- 'Laboratoire de Physique Théorique de la Matière Condensée (LPTMC), Université Pierre et Marie Curie, CNRS UMR 7600, Tour 13 - 5ième ét., Boîte Courrier 121, 4 place Jussieu, F 75252 Paris Cedex 05, France'
author:
- 'Wojciech M[ł]{}otkowski, Karol A. Penson'
title: 'The probability measure corresponding to 2-plane trees'
---
Introduction
============
A *$2$-plane tree* is a planted plane tree such that each vertex is colored black or white and for each edge at least one of its ends is white. Gu and Prodinger [@guprodinger2009] proved that the number of 2-plane trees on $n+1$ vertices with black (white) root is $\binom{3n+1}{n}\frac{1}{3n+1}$ (Fuss-Catalan number of order $3$, sequence A001764 in OEIS [@oeis]) and $\binom{3n+2}{n}\frac{2}{3n+2}$ (sequence A006013 in OEIS) respectively (see also [@guprodingerwagner2010]). We are going to study the sequence $$\label{aintsuma}
\binom{3n}{n}\frac{2}{n+1}=
\binom{3n+1}{n}\frac{1}{3n+1}+\binom{3n+2}{n}\frac{2}{3n+2},$$ which begins with $$2, 3, 10, 42, 198, 1001, 5304, 29070, 163438,\ldots,$$ the sequence of total numbers of such trees (A007226 in OEIS).
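The identity (\[aintsuma\]) and the initial terms listed above are easily confirmed by a computer algebra system; for instance, the following Mathematica commands (included here only as a sanity check) return the initial segment above and a list of [True]{}.

    Table[Binomial[3 n, n] 2/(n + 1), {n, 0, 8}]
    Table[Binomial[3 n, n] 2/(n + 1) ==
      Binomial[3 n + 1, n]/(3 n + 1) + Binomial[3 n + 2, n] 2/(3 n + 2), {n, 0, 20}]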
Both the sequences on the right hand side of (\[aintsuma\]) are positive definite (see [@mlotkowski2010; @mlopezy2013]), therefore so is the sequence $\binom{3n}{n}\frac{2}{n+1}$ itself. In this paper we are going to study the corresponding probability measure $\mu_{0}$, i.e. the measure such that the numbers $\binom{3n}{n}\frac{1}{n+1}$ are the moments of $\mu_0$. First we prove that $\mu_0$ is the Mellin convolution of two beta distributions; in particular, $\mu_0$ is absolutely continuous. Then we find the density function of $\mu_0$. In the last section we prove that $\mu_0$ can be decomposed as the additive free convolution $\mu_{1}\boxplus\mu_{2}$ of two measures, which are both infinitely divisible with respect to $\boxplus$ and are related to the Marchenko-Pastur distribution. In particular, the measure $\mu_0$ itself is $\boxplus$-infinitely divisible.
The generating function
=======================
Let us consider the generating function $$G(z)=\sum_{n=0}^{\infty}\binom{3n}{n}\frac{2z^n}{n+1}.$$ According to (\[aintsuma\]), $G$ is a sum of two generating functions. The former is usually denoted by $\mathcal{B}_{3}$: $$\mathcal{B}_{3}(z)=\sum_{n=0}^{\infty}\binom{3n+1}{n}\frac{z^n}{3n+1}$$ and satisfies equation $$\label{aintbfunction}
\mathcal{B}_{3}(z)=1+z\cdot\mathcal{B}_{3}(z)^3.$$ Lambert’s formula (see (5.60) in [@gkp]) implies that the latter is just the square of $\mathcal{B}_{3}$: $$\mathcal{B}_{3}(z)^{2}=\sum_{n=0}^{\infty}\binom{3n+2}{n}\frac{2z^n}{3n+2},$$ so we have $$\label{aintgbfunction}
G(z)=\mathcal{B}_{3}(z)+\mathcal{B}_{3}(z)^{2}.$$
Combining (\[aintbfunction\]) and (\[aintgbfunction\]), we obtain the following equation for $G$: $$\label{aintgfunctionequation}
2-z-(1+2z)G(z)+2zG(z)^2-z^2G(z)^3=0,$$ which will be applied later on.
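Equation (\[aintgfunctionequation\]) can be checked order by order on a truncation of the series defining $G$; the following Mathematica computation (a sanity check only, not needed for the derivation) should return just the $O[z]^{11}$ remainder term, i.e. all coefficients up to $z^{10}$ vanish.

    G = Sum[Binomial[3 n, n] 2 z^n/(n + 1), {n, 0, 12}];
    Series[2 - z - (1 + 2 z) G + 2 z G^2 - z^2 G^3, {z, 0, 10}]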
Now we will give a formula for $G(z)$.
For the generating function of the sequence (\[aintsuma\]) we have $$\label{aintgeneratingfunction}
G(z)
=\frac{12\cos^2\alpha+6}{\left(4\cos^2\alpha-1\right)^2},$$ where $\alpha=\frac{1}{3}\arcsin\left(\sqrt{27z/4}\right)$.
Denoting $(a)_n:=a(a+1)\ldots(a+n-1)$ we have $$\frac{2(3n)!}{(n+1)!(2n)!}=
\frac{-2\left(\frac{-2}{3}\right)_{n+1}\left(\frac{-1}{3}\right)_{n+1}27^{n+1}}{3(n+1)!\left(\frac{-1}{2}\right)_{n+1}4^{n+1}}.$$ Therefore $$G(z)=\frac{2-2\cdot {}_{2}F_{1}\!\left(\left.\frac{-2}{3},\frac{-1}{3};\frac{-1}{2}\right|\frac{27z}{4}\right)}{3z}.$$ Now we apply the formula $${}_{2}F_{1}\!\left(\left.\frac{-2}{3},\frac{-1}{3};\frac{-1}{2}\right|u\right)
=\frac{1}{3}\sqrt{u}\sin\left(\frac{1}{3}\arcsin\left(\sqrt{u}\right)\right)
+\sqrt{1-u}\cos\left(\frac{1}{3}\arcsin\left(\sqrt{u}\right)\right),$$ which can be proved by means of the hypergeometric equation (note that both the functions $w\mapsto w\sin\left(\frac{1}{3}\arcsin\left(w\right)\right)$, $w\mapsto\cos\left(\frac{1}{3}\arcsin\left(w\right)\right)$ are even, so the right hand side is well defined for $|u|<1$). Putting $\alpha=\frac{1}{3}\arcsin\left(\sqrt{u}\right)$, $u=27z/4$, we have $\sqrt{u}=\sin 3\alpha$, $\sqrt{1-u}=\cos 3\alpha$, which after elementary calculations gives (\[aintgeneratingfunction\]).
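As a quick numerical plausibility check of (\[aintgeneratingfunction\]) (not needed for the proof), one can compare the closed form with a long partial sum of the defining series at a point inside the radius of convergence $4/27$, for example $z=1/10$; the two values printed by the following Mathematica commands should agree to many decimal digits.

    Gclosed[z_] := With[{al = ArcSin[Sqrt[27 z/4]]/3},
      (12 Cos[al]^2 + 6)/(4 Cos[al]^2 - 1)^2];
    {N[Gclosed[1/10], 12], N[Sum[Binomial[3 n, n] 2 (1/10)^n/(n + 1), {n, 0, 200}], 12]}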
The measure
===========
In this part we are going to study the (unique) measure $\mu_{0}$ for which $\left\{\binom{3n}{n}\frac{1}{n+1}\right\}_{n=0}^{\infty}$ is the moment sequence. We will show that $\mu_0$ can be expressed as the Mellin convolution of two beta distributions. Then we will provide an explicit formula for the density function $V(x)$ of $\mu_{0}$.
Recall (see [@balakrishnannevzorov]), that for $\alpha,\beta>0$, the *beta distribution* $\mathrm{Beta}(\alpha,\beta)$ is the absolutely continuous probability measure defined by the density function $$f_{\alpha,\beta}(x)
=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\cdot x^{\alpha-1}(1-x)^{\beta-1},$$ for $x\in(0,1)$. The moments of $\mathrm{Beta}(\alpha,\beta)$ are $$\int_{0}^{1} x^n f_{\alpha,\beta}(x)\,dx=\frac{\Gamma(\alpha+\beta)\Gamma(\alpha+n)}{\Gamma(\alpha)\Gamma(\alpha+\beta+n)}
=\prod_{i=0}^{n-1}\frac{\alpha+i}{\alpha+\beta+i}.$$
For probability measures $\nu_1$, $\nu_2$ on the positive half-line $[0,\infty)$ the *Mellin convolution* is defined by $$\left(\nu_1\circ\nu_2\right)(A):=\int_{0}^{\infty}\int_{0}^{\infty}\chi_{A}(xy)d\nu_1(x)d\nu_{2}(y)$$ for every Borel set $A\subseteq[0,\infty)$ ($\chi_{A}$ denotes the indicator function of the set $A$). This is the distribution of the product $X_1\cdot X_2$ of two independent nonnegative random variables with $X_i\sim\nu_i$. In particular, if $c>0$ then $\nu\circ\delta_c$ is the *dilation* of $\nu$: $$\left(\nu\circ\delta_c\right)(A)=\mathbf{D}_c\nu(A):=\nu\left(\frac{1}{c}A\right),$$ where $\delta_{c}$ denotes the Dirac delta measure at $c$.
If both the measures $\nu_1,\nu_2$ have all *moments* $$s_n(\nu_i):=\int_{0}^{\infty}x^n\,d\nu_i(x)$$ finite then so has $\nu_1\circ\nu_2$ and $$s_n\left(\nu_1\circ\nu_2\right)=s_n(\nu_1)\cdot s_n(\nu_2)$$ for all $n$. The method of Mellin convolution has been recently applied to a number of related problems, see for example [@mlopezy2013; @pezy].
Now we can describe the probability measure corresponding to the sequence $\binom{3n}{n}\frac{1}{n+1}$.
Define $\mu_0$ as the Mellin convolution $$\label{ameamuzerobeta}
\mu_0=\mathrm{Beta}(1/3,1/6)\circ\mathrm{Beta}(2/3,4/3)\circ\delta_{27/4}.$$ Then the numbers $\binom{3n}{n}\frac{1}{n+1}$ are moments of $\mu_0$: $$\int_{0}^{27/4} x^n\,d\mu_0(x)=\binom{3n}{n}\frac{1}{n+1}.$$
It is sufficient to check that $$\frac{(3n)!}{(n+1)!(2n)!}=
\prod_{i=0}^{n-1}\frac{1/3+i}{1/2+i}\cdot
\prod_{i=0}^{n-1}\frac{2/3+i}{2+i}\cdot
\left(\frac{27}{4}\right)^n.$$
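The product identity above holds for every $n$ and is easy to verify for as many terms as one likes; the following Mathematica one-liner (a cross-check only) returns a list of [True]{}.

    Table[(3 n)!/((n + 1)! (2 n)!) ==
      Product[(1/3 + i)/(1/2 + i), {i, 0, n - 1}]*
       Product[(2/3 + i)/(2 + i), {i, 0, n - 1}]*(27/4)^n, {n, 0, 15}]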
In view of formula (\[ameamuzerobeta\]), the measure $\mu_0$ is absolutely continuous and its support is the interval $[0,27/4]$. Now we are going to find the density function $V(x)$ of $\mu_0$.
Let $$\begin{aligned}
V(x)=\frac{\sqrt{3}}{2^{10/3}\pi x^{2/3}}\left(3\sqrt{1-4x/27}-1\right)&\left(1+\sqrt{1-4x/27}\right)^{1/3}\\
+\frac{1}{2^{8/3}\pi x^{1/3}\sqrt{3}}\left(3\sqrt{1-4x/27}+1\right)&\left(1+\sqrt{1-4x/27}\right)^{-1/3},\end{aligned}$$ $x\in(0,27/4)$. Then $V$ is the density function of $\mu_0$, i.e. $$\int_{0}^{27/4} x^n\, V(x)\,dx=\binom{3n}{n}\frac{1}{n+1}$$ for $n=0,1,2,\ldots$.
The density $V(x)$ of $\mu_0$ is represented in Fig. 1.B.
Putting $n=s-1$ and applying the Gauss-Legendre multiplication formula $$\Gamma(mz)=(2\pi)^{(1-m)/2}m^{mz-1/2}\Gamma(z)\Gamma\left(z+\frac{1}{m}\right)
\Gamma\left(z+\frac{2}{m}\right)\ldots\Gamma\left(z+\frac{m-1}{m}\right)$$ we obtain $$\binom{3n}{n}\frac{1}{n+1}
=\frac{\Gamma(3n+1)}{\Gamma(n+2)\Gamma(2n+1)}
=\frac{\Gamma(3s-2)}{\Gamma(s+1)\Gamma(2s-1)}$$ $$=\frac{2}{27}\sqrt{\frac{3}{\pi}}\left(\frac{27}{4}\right)^s\frac{\Gamma(s-2/3)\Gamma(s-1/3)}{\Gamma(s-1/2)\Gamma(s+1)}
:=\psi(s).$$ Then $\psi$ can be extended to an analytic function on the complex plane, except the points $1/3-n$, $2/3-n$, $n=0,1,2,\ldots$.
Now we are going to apply a particular type of the Meijer $G$-function, see [@prudnikov3] for details. Let $\widetilde{V}$ denote the inverse Mellin transform of $\psi$. Then we have $$\begin{aligned}
\widetilde{V}(x)&=\frac{1}{2\pi\mathrm{i}}\int_{c-\mathrm{i}\infty}^{c+\mathrm{i}\infty}x^{-s}\psi(s)\,ds\\
&=\frac{2}{27}\sqrt{\frac{3}{\pi}}\,\frac{1}{2\pi\mathrm{i}}\int_{c-\mathrm{i}\infty}^{c+\mathrm{i}\infty}
\frac{\Gamma(s-2/3)\Gamma(s-1/3)}{\Gamma(s-1/2)\Gamma(s+1)}\left(\frac{4x}{27}\right)^{-s}\,ds\\
&=\frac{2}{27}\sqrt{\frac{3}{\pi}}\,
G^{2,0}_{2,2}\left(\frac{4x}{27}\left|\!\!
\begin{array}{cc}
-1/2,\!\!&\!\!1\\-2/3,\!\!&\!\!-1/3
\end{array}\!\!\!
\right.\right),\end{aligned}$$ where $x\in(0,27/4)$ (consult [@sneddon] for the role of $c$ in the integrals). On the other hand, for the parameters of the $G$-function we have $$(-2/3-1/3)-(-1/2+1)=-3/2<0$$ and hence the assumptions of formula 2.24.2.1 in [@prudnikov3] are satisfied. Therefore we can apply the Mellin transform on $\widetilde{V}(x)$: $$\begin{aligned}
\int_{0}^{27/4} x^{s-1}\widetilde{V}(x)\,dx
=\frac{2}{27}\sqrt{\frac{3}{\pi}}&\int_{0}^{27/4} x^{s-1}
G^{2,0}_{2,2}\left(\frac{4x}{27}\left|\!\!
\begin{array}{cc}
-1/2,\!\!&\!\!1\\-2/3,\!\!&\!\!-1/3
\end{array}\!\!\!
\right.\right)dx\\
=\frac{2}{27}\sqrt{\frac{3}{\pi}} \left(\frac{27}{4}\right)^{s}&\int_{0}^{1} u^{s-1}
G^{2,0}_{2,2}\left(u\left|\!\!
\begin{array}{cc}
-1/2,\!\!&\!\!1\\-2/3,\!\!&\!\!-1/3
\end{array}\!\!\!
\right.\right)du=\psi(s)\end{aligned}$$ whenever $\Re s>2/3$. Consequently, $\widetilde{V}=V$.
Now we use Slater’s formula (see [@prudnikov3], formula 8.2.2.3) and express $V$ in terms of the hypergeometric functions: $$V(x)=\frac{2}{27}\sqrt{\frac{3}{\pi}}\,\frac{\Gamma(1/3)}{\Gamma(1/6)\Gamma(5/3)}\left(\frac{4x}{27}\right)^{-2/3}
{}_{2}F_{1}\!\left(\left.\frac{-2}{3},\frac{5}{6};\frac{2}{3}\right|\frac{4x}{27}\right)$$ $$+\frac{2}{27}\sqrt{\frac{3}{\pi}}\,\frac{\Gamma(-1/3)}{\Gamma(-1/6)\Gamma(4/3)}\left(\frac{4x}{27}\right)^{-1/3}
{}_{2}F_{1}\!\left(\left.\frac{-1}{3},\frac{7}{6};\frac{4}{3}\right|\frac{4x}{27}\right)$$ $$=\frac{\sqrt{3}}{4\pi x^{2/3}}\,\,
{}_{2}F_{1}\!\left(\left.\frac{-2}{3},\frac{5}{6};\frac{2}{3}\right|\frac{4x}{27}\right)
+\frac{1}{2\pi\sqrt{3}x^{1/3}}\,\,
{}_{2}F_{1}\!\left(\left.\frac{-1}{3},\frac{7}{6};\frac{4}{3}\right|\frac{4x}{27}\right).$$ Applying the formula $${}_{2}F_{1}\!\left(\left.
\frac{t-2}{2},\frac{t+1}{2};\,t\right|z\right)
=\frac{2^t}{2t}\left(t-1+\sqrt{1-z}\right)\left(1+\sqrt{1-z}\right)^{1-t}$$ (see [@mlopezy2013]) for $t=2/3$ and $t=4/3$ we conclude the proof.
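One may also check the result numerically: integrating $x^nV(x)$ over $(0,27/4)$ for a few small values of $n$ reproduces the moments up to numerical error. A Mathematica spot check of this kind (assuming the density $V$ exactly as displayed in the statement) is given below; the printed differences should be close to $0$, although NIntegrate may warn about the integrable singularity at $x=0$.

    V[x_] := Sqrt[3]/(2^(10/3) Pi x^(2/3)) (3 Sqrt[1 - 4 x/27] - 1) (1 + Sqrt[1 - 4 x/27])^(1/3) +
       1/(2^(8/3) Pi x^(1/3) Sqrt[3]) (3 Sqrt[1 - 4 x/27] + 1) (1 + Sqrt[1 - 4 x/27])^(-1/3);
    Table[NIntegrate[x^n V[x], {x, 0, 27/4}] - Binomial[3 n, n]/(n + 1), {n, 0, 4}]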
Relations with free probability
===============================
In this part we are going to describe relations of $\mu_0$ with free probability. In particular we will show that $\mu_{0}$ is infinitely divisible with respect to the additive free convolution.
Let us briefly describe the additive and multiplicative free convolutions. For details we refer to [@vdn; @ns].
Denote by $\mathcal{M}^c$ the class of probability measures on $\mathbb{R}$ with compact support. For $\mu\in\mathcal{M}^c$, with moments $$s_m(\mu):=\int_{\mathbb{R}} t^m\,d\mu(t),$$ and with the *moment generating function*: $$M_{\mu}(z):=\sum_{m=0}^{\infty}s_m(\mu)z^m
=\int_{\mathbb{R}}\frac{d\mu(t)}{1-tz},$$ we define its *$R$-transform* $R_{\mu}(z)$ by the equation $$\label{cfreertransform}
R_{\mu}\big(z M_{\mu}(z)\big)+1=M_{\mu}(z).$$ Then the *additive free convolution* of $\mu',\mu''\in\mathcal{M}^c$ is defined as the unique $\mu'\boxplus\mu''\in\mathcal{M}^c$ which satisfies $$R_{\mu'\boxplus\mu''}(z)=R_{\mu'}(z)+R_{\mu''}(z).$$
If the support of $\mu\in\mathcal{M}^c$ is contained in the positive halfline $[0,+\infty)$ then we define its *$S$-transform* $S_{\mu}(z)$ by $$\label{cfreemsrtransforms}
M_{\mu}\left(\frac{z}{1+z}S_{\mu}(z)\right)=1+z
\qquad\hbox{or}\qquad R_{\mu}\big(z S_{\mu}(z)\big)=z$$ on a neighborhood of $0$. If $\mu',\mu''$ are such measures then their *multiplicative free convolution* $\mu'\boxtimes\mu''$ is defined by $$S_{\mu'\boxtimes\mu''}(z)=S_{\mu'}(z)\cdot S_{\mu''}(z).$$
Recall that for a dilated measure we have: $M_{\mathbf{D}_{c}\mu}(z)=M_{\mu}(cz)$, $R_{\mathbf{D}_{c}\mu}(z)=R_{\mu}(cz)$ and $S_{\mathbf{D}_{c}\mu}(z)=S_{\mu}(z)/c$. The operations $\boxplus$ and $\boxtimes$ can be regarded as free analogs of the classical and Mellin convolutions.
For $t>0$ let $\varpi_t$ denote the *Marchenko-Pastur distribution* with parameter $t$: $$\varpi_t=\max\{1-t,0\}\delta_0+\frac{\sqrt{4t-(x-1-t)^2}}{2\pi x}\,dx,$$ with the absolutely continuous part supported on $\left[(1-\sqrt{t})^2,(1+\sqrt{t})^2\right]$. Then $$\begin{aligned}
M_{\varpi_t}(z)&=\frac{2}{1+z-tz+\sqrt{\big(1-z-tz\big)^2-4tz^2}}\label{cfreemvarpi}\\
&=1+\sum_{n=1}^{\infty}z^n\sum_{k=1}^{n}
\binom{n}{k}\binom{n}{k-1}\frac{t^k}{n},\end{aligned}$$ $$\label{cfreersfreepoisson}
R_{\varpi_t}(z)=\frac{tz}{1-z},\qquad\qquad S_{\varpi_t}(z)=\frac{1}{t+z}.$$ In free probability the measures $\varpi_{t}$ play the role of the Poisson distributions. Note that from (\[cfreersfreepoisson\]) the family $\{\varpi_{t}\}_{t>0}$ constitutes a semigroup with respect to $\boxplus$, i.e. we have $\varpi_{s}\boxplus\varpi_{t}=\varpi_{s+t}$ for $s,t>0$.
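For the reader's convenience, the stated transforms of $\varpi_t$ can be checked directly against the defining relations (\[cfreertransform\]) and (\[cfreemsrtransforms\]); a small numerical sketch (Python with mpmath, purely illustrative):

```python
import mpmath as mp

def M(t, z):   # moment generating function of the Marchenko-Pastur law varpi_t
    return 2/(1 + z - t*z + mp.sqrt((1 - z - t*z)**2 - 4*t*z**2))

def R(t, z):   # R-transform of varpi_t
    return t*z/(1 - z)

def S(t, z):   # S-transform of varpi_t
    return 1/(t + z)

t, z = mp.mpf('0.5'), mp.mpf('0.05')
print(R(t, z*M(t, z)) + 1 - M(t, z))        # ~ 0: defining relation of the R-transform
print(M(t, z/(1 + z)*S(t, z)) - (1 + z))    # ~ 0: defining relation of the S-transform
print(R(t, z*S(t, z)) - z)                  # ~ 0: the equivalent form
```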
The measure $\mu_{0}$ is equal to the additive free convolution $\mu_{0}=\mu_{1}\boxplus\mu_{2}$, where $\mu_1=\mathbf{D}_{2}\varpi_{1/2}$, so that $$\begin{aligned}
\mu_1&=\frac{1}{2}\delta_{0}+\frac{\sqrt{8-(x-3)^2}}{4\pi x}\chi_{(3-\sqrt{8},3+\sqrt{8})}(x)\,dx,\label{cfreemu1}\\
\intertext{and $\mu_2=\frac{1}{2}\delta_{0}+\frac{1}{2}\varpi_{1}$, i.e.}
\mu_2&=\frac{1}{2}\delta_{0}+\frac{\sqrt{4x-x^2}}{4\pi x}\chi_{(0,4)}(x)\,dx.\label{cfreemu2}\end{aligned}$$ The measures $\mu_1,\mu_2$ are infinitely divisible with respect to the additive free convolution $\boxplus$, and consequently, so is $\mu_{0}$.
The absolutely continuous parts of the measures $\mu_1,\mu_2$ are represented in Fig. 1.A.
The moment generating function of $\mu_0$ is $M_{\mu_{0}}(z)=G(z)/2$. Then we have $M_{\mu_{0}}(0)=1$ and by (\[aintgfunctionequation\]) $$2-z-2(1+2z)M_{\mu_{0}}(z)+8zM_{\mu_{0}}(z)^2-8z^2M_{\mu_{0}}(z)^3=0.$$
Let $T(z)$ be the inverse function for $M_{\mu_{0}}(z)-1$, so that $T(0)=0$ and $M_{\mu_{0}}\big(T(z)\big)=1+z$. Then $$2-T(z)+(-1-2T(z))2(1+z)+8T(z)(1+z)^2-8T(z)^2(1+z)^3=0,$$ which gives $$8(1+z)^3T(z)^2-(8z^2+12z+3)T(z)+2z=0$$ and finally $$T(z)=\frac{8z^2+12z+3-\sqrt{9+8z}}{16(1+z)^3}=\frac{4z}{8z^2+12z+3+\sqrt{9+8z}}.$$ Therefore we can find the $S$-transform of $\mu_0$: $$S_{\mu_{0}}(z)=\frac{1+z}{z}T(z)=\frac{8z^2+12z+3-\sqrt{9+8z}}{16z(1+z)^2}=\frac{4(1+z)}{8z^2+12z+3+\sqrt{9+8z}}$$ and from (\[cfreemsrtransforms\]) we get the $R$-transform: $$R_{\mu_0}(z)=\frac{4z-1+\sqrt{1-2z}}{2(1-2z)}.$$ Now we observe that $R_{\mu_{0}}(z)$ can be decomposed as follows: $$R_{\mu_{0}}(z)=\frac{z}{1-2z}+\frac{1-\sqrt{1-2z}}{2\sqrt{1-2z}}=R_{1}(z)+R_{2}(z).$$ Comparing with (\[cfreersfreepoisson\]) we observe that $R_1(z)$ is the $R$-transform of $\mu_{1}=\mathbf{D}_{2}\varpi_{1/2}$, which implies that $\mu_{1}$ is $\boxplus$-infinitely divisible.
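These manipulations are elementary but error-prone, so a numerical double-check may be helpful (a sketch in Python with mpmath): it verifies that the closed form of $T$ solves the displayed quadratic, that $R_{\mu_0}(zS_{\mu_0}(z))=z$, and that $R_{\mu_0}=R_1+R_2$.

```python
import mpmath as mp

def T(z):  # inverse function of M_{mu_0} - 1
    return (8*z**2 + 12*z + 3 - mp.sqrt(9 + 8*z))/(16*(1 + z)**3)

def S(z):  # S-transform of mu_0
    return (1 + z)/z*T(z)

def R(z):  # R-transform of mu_0
    return (4*z - 1 + mp.sqrt(1 - 2*z))/(2*(1 - 2*z))

for z in (mp.mpf('0.05'), mp.mpf('0.2')):
    print(8*(1 + z)**3*T(z)**2 - (8*z**2 + 12*z + 3)*T(z) + 2*z)             # ~ 0
    print(R(z*S(z)) - z)                                                      # ~ 0
    print(R(z) - z/(1 - 2*z) - (1 - mp.sqrt(1 - 2*z))/(2*mp.sqrt(1 - 2*z)))   # ~ 0
```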
Consider the Taylor expansion of $R_2(z)$: $$R_{2}(z)=\sum_{n=1}^{\infty}\binom{2n}{n}2^{-n-1} z^n
=\frac{z}{2}+z^2\sum_{n=0}^{\infty}\binom{2(n+2)}{n+2}2^{-n-3}z^n.$$ Since the numbers $\binom{2n}{n}$ are moments of the *arcsine distribution* $$\frac{1}{\pi\sqrt{x(4-x)}}\chi_{(0,4)}(x)\,dx,$$ the coefficients of the last sum constitute a positive definite sequence. So $R_{2}(z)$ is the $R$-transform of a probability measure $\mu_2$, which is $\boxplus$-infinitely divisible (see Theorem 13.16 in [@ns]). Now using (\[cfreertransform\]) we obtain $$M_{\mu_{2}}(z)=\frac{1+2z-\sqrt{1-4z}}{4z}
=\frac{1}{2}+\frac{1-\sqrt{1-4z}}{4z}=\frac{1}{2}+\frac{1}{1+\sqrt{1-4z}}.$$ Comparing with (\[cfreemvarpi\]) for $t=1$ we see that $\mu_2=\frac{1}{2}\delta_{0}+\frac{1}{2}\varpi_{1}$.
Let us now consider the measures $\mu_1,\mu_{2}$ separately. For $\mu_1=\mathbf{D}_{2}\varpi_{1/2}$ the moment generating function is $$M_{\mu_{1}}(z)=\frac{2}{1+z+\sqrt{1-6z+z^2}}
=1+\sum_{n=1}^{\infty}z^n\sum_{k=1}^{n}\binom{n}{k}\binom{n}{k-1}\frac{2^{n-k}}{n},$$ so the moments are $$1, 1, 3, 11, 45, 197, 903, 4279, 20793, 103049, 518859,\ldots.$$ This is the sequence A001003 in the OEIS (little Schröder numbers): $s_{n}(\mu_1)$ is the number of ways to insert parentheses into a product of $n+1$ symbols, with no restriction on the number of pairs of parentheses and with at least 2 objects inside each pair.
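A few lines of Python (an illustrative sketch using the displayed moment formula) reproduce these values, which can then be compared against OEIS A001003:

```python
from math import comb

def s_mu1(n):
    # n-th moment of mu_1 = D_2(varpi_{1/2}): sum_k C(n,k) C(n,k-1) 2^(n-k) / n
    if n == 0:
        return 1
    return sum(comb(n, k)*comb(n, k - 1)*2**(n - k) for k in range(1, n + 1)) // n

print([s_mu1(n) for n in range(11)])
# [1, 1, 3, 11, 45, 197, 903, 4279, 20793, 103049, 518859]
```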
On the subject of $\mu_2$, applying (\[cfreemsrtransforms\]) we can find the $S$-transform: $$S_{\mu_{2}}(z)=\frac{2(1+z)}{(1+2z)^2}=\frac{1+z}{1/2+z}\cdot\frac{1}{1+2z}.$$ One can check, that $\frac{1+z}{1/2+z}$ is the $S$-transform of $\frac{1}{2}\delta_0+\frac{1}{2}\delta_1$, which yields $$\mu_2=\left(\frac{1}{2}\delta_0+\frac{1}{2}\delta_1\right)\boxtimes\mu_1.$$
We would like to thank G. Aubrun, C. Banderier, K. Górska and H. Prodinger for fruitful interactions.
![The densities of $\mu_1$, $\mu_2$ and $\mu_0=\mu_1\boxplus\mu_2$[]{data-label="fig:animals"}](figura1a "fig:")
![The densities of $\mu_1$, $\mu_2$ and $\mu_0=\mu_1\boxplus\mu_2$[]{data-label="fig:animals"}](figura2a "fig:")
[99]{}
N. Balakrishnan, V. B. Nevzorov, *A primer on statistical distributions,* Wiley-Interscience, Hoboken, N. J. 2003.
R. L. Graham, D. E. Knuth, O. Patashnik, *Concrete Mathematics. A Foundation for Computer Science,* Addison-Wesley, New York 1994.
N. S. S. Gu and H. Prodinger, *Bijections for 2-plane trees and ternary trees,* European J. of Combinatorics **30** (2009), 969-985.
N. S. S. Gu, H. Prodinger, S. Wagner, *Bijection for a class of labeled plane trees,* European J. of Combinatorics **31** (2010) 720–732.
W. Młotkowski, *Fuss-Catalan numbers in noncommutative probability,* Documenta Math. **15** (2010) 939–955.
W. M[ł]{}otkowski, K. A. Penson, K. Życzkowski, *Densities of the Raney distributions,* arXiv:1211.7259.
A. Nica, R. Speicher, *Lectures on the Combinatorics of Free Probability*, Cambridge University Press, 2006.
K. A. Penson, K. Życzkowski, *Product of Ginibre matrices: Fuss-Catalan and Raney distributions* Phys. Rev. E **83** (2011) 061118, 9 pp.
A. P. Prudnikov, Yu. A. Brychkov, O. I. Marichev, *Integrals and Series,* Gordon and Breach, Amsterdam (1998) Vol. 3: More special functions.
N. J. A. Sloane, *The On-line Encyclopedia of Integer Sequences,* (2013), published electronically at: http://oeis.org/.
I. N. Sneddon, *The use of Integral Transforms,* Tata McGraw-Hill Publishing Company, 1974.
D. V. Voiculescu, K. J. Dykema, A. Nica, *Free random variables*, CRM, Montréal, 1992.
|
---
abstract: 'This paper presents an omnidirectional spatial and temporal 3-dimensional statistical channel model for 28 GHz dense urban non-line of sight environments. The channel model is developed from 28 GHz ultrawideband propagation measurements obtained with a 400 megachips per second broadband sliding correlator channel sounder and highly directional, steerable horn antennas in New York City. A 3GPP-like statistical channel model that is easy to implement in software or hardware is developed from measured power delay profiles and a synthesized method for providing absolute propagation delays recovered from 3-D ray-tracing, as well as measured angle of departure and angle of arrival power spectra. The extracted statistics are used to implement a MATLAB-based statistical simulator that generates 3-D millimeter-wave temporal and spatial channel coefficients that reproduce realistic impulse responses of measured urban channels. The methods and model presented here can be used for millimeter-wave system-wide simulations, and air interface design and capacity analyses.'
author:
-
bibliography:
- 'ICC15\_MKS.bib'
title: '3-D Statistical Channel Model for Millimeter-Wave Outdoor Mobile Broadband Communications'
---
M. K. Samimi, T. S. Rappaport, ”3-D Statistical Channel Model for Millimeter-Wave Outdoor Mobile Broadband Communications,” *accepted at the 2015 IEEE International Conference on Communications (ICC)*, 8-12 June, 2015.
28 GHz millimeter-wave propagation; channel modeling; multipath; time cluster; spatial lobe; 3-D ray-tracing.
Introduction
============
Millimeter-waves (mmWave) are a viable solution for alleviating the spectrum shortage below 6 GHz, thereby motivating many recent mmWave outdoor propagation measurements designed to understand the distance-dependent propagation path loss, and temporal and spatial channel characteristics of many different types of environments [@Rap13:2; @MacCartney14:1; @Roh14; @Rap15; @MacCartney15]. The mmWave spectrum contains a massive amount of raw bandwidth and will deliver multi-gigabit per second data rates for backhaul and fronthaul applications, and to mobile handsets in order to meet the expected 10,000x demand in broadband data over the next 10 years [@Rap15][@Pi11].
MmWave statistical spatial channel models (SSCMs) do not yet exist, but are required to estimate channel parameters such as temporal multipath delays, multipath powers, and multipath angle of arrival (AOA) and angle of departure (AOD) information. Both directional and omnidirectional channel models are needed based on real-world measurements. Further, SSCMs are needed to carry out link-level and system-level simulations for analyzing system performance required for designing next-generation radio systems. New channel modeling frameworks, modulation schemes and corresponding key requirements are currently being considered to address 5G network planning [@Akdeniz14; @Ghosh14; @Samimi14], and future directional on-chip antennas [@Gutierrez09] require that 3-D statistical channel models be used at mmWave bands. This paper presents the world's first 3-D SSCM based on wideband New York City measurements at 28 GHz.
Previous results obtained from the extensive New York City propagation database yielded directional and omnidirectional path loss models in dense urban line of sight (LOS) and non-line of sight (NLOS) environments [@MacCartney14:2], important temporal and spatial channel parameters, and distance-dependent path loss models at 28 GHz and 73 GHz based on measurements and ray-tracing results [@Samimi14][@Thomas14]. Initial MIMO network simulations were carried out in [@Sun14] using a 2-dimensional (2-D) wideband mmWave statistical simulator developed from 28 GHz wideband propagation measurements [@Samimi14], and showed orders of magnitude increase in data rates as compared to current 3G and 4G LTE using spatial multiplexing and beamforming gains at the base station for both LOS and NLOS dense urban environments.
Statistical channel modeling methods have thus far focused on extracting models from measured power azimuth spectra and on modeling the elevation dimension using 3-dimensional (3-D) ray-tracing predictions in order to make up for the lack of measured elevation data [@Thomas14] or to estimate 3-D angles in the absence of directional measurements due to the use of quasi-omnidirectional antennas [@MIWEBA]. In this paper, we present a 3-D statistical spatial and temporal channel model for the urban NLOS environment based on 28 GHz ultrawideband propagation measurements that used 3-D antenna positioning and directional antennas in New York City, where AOA elevation characteristics at the receiver have been extracted from measured power angular spectra (obtained with field measurements without the use of ray-tracing). This work extends our 2-D SSCM presented in [@Samimi14] to a true 3-D model that includes the AOA of arriving multipath, as well as AOD, and multipath time statistics.
28 GHz Propagation Measurements
===============================
In 2012, an ultrawideband propagation measurement campaign was carried out at 75 transmitter (TX) - receiver (RX) locations in New York City to investigate 28 GHz wideband channels using a 400 megachips per second (Mcps) broadband sliding correlator channel sounder and 24.5 dBi (10.9$^{\circ}$ half-power beamwidth (HPBW)) highly directional steerable horn antennas at the TX and RX, in dense urban LOS and NLOS environments [@Rap13:2]. Over 4,000 power delay profiles (PDPs) were collected at unique azimuth and elevation pointing angles at both the TX and RX to properly describe mmWave wideband multipath channels over an 800 MHz RF null-to-null bandwidth in the time and spatial domains. Additional details pertaining to the 28 GHz measurement campaign and hardware equipment used can be found in [@Rap13:2] [@Samimi14] [@MacCartney14:2] [@Deng14].
Synthesizing 3-D Absolute Timing Omnidirectional Power Delay Profiles
=====================================================================
The collected directional PDPs did not make use of absolute timing synchronization between the TX and RX over the various sweeps, and therefore provided received power over *excess*, and not absolute, time delay. As illustrated in Fig. \[fig:47\]a, two typical measured excess time delay PDPs are shown as measured at two distinct AOA azimuth angles. The RX used the strongest arriving multipath component to trigger and establish the relative $t = 0$ ns time marker for all recorded PDPs, as illustrated in Case 1 of Fig. \[fig:47\]b where both PDPs from different angles are shown to start at $t = 0$ ns, and thus could not capture absolute arrival time of power. Case 2 illustrates the desired result, showing the two PDPs measured along an absolute propagation time axis, where $t = 0$ ns corresponds to the energy leaving the TX antenna. In Case 2, the recorded excess time delay PDPs have been shifted in time appropriately, accounting for the propagation distance travelled by the first arriving multipath in each PDP, enabling accurate characterization of received power in time at the RX over all measured angles.
![Superimposed PDP of two individual received PDPs, where each PDP comes from a different AOA at the same RX location. The multipath signals from Angle 1 arrived before those of Angle 2 (i.e. multipath arriving at different times from two distinct lobes). The absolute propagation times were found using ray-tracing, thus allowing alignment with absolute timing of multipath signals originally measured at the RX, independent of AOAs (see [@Samimi14]). ](ch4_Absolute_Timing.eps){width="3.5in"}
\[fig:47\]
We developed a MATLAB-based 3-D ray-tracer to predict the propagation time delays of the first arriving multipath components at the strongest measured AOAs for each TX-RX measured location. Ray-tracing techniques have previously been shown to provide accurate time and amplitude predictions [@Rap15][@Thomas14][@Durgin97:1]. Fig. \[fig:RayTracedMap\] shows a typical ray-traced measured location where viable propagation paths between TX and RX are shown in red. The ray-tracing results showed strong spatial correlation with the measurements when comparing the strongest measured and predicted directions of departure and arrival (within $\pm$ two antenna beamwidths). The predicted propagation distances were paired with the closest strongest measured AOAs, and we superimposed each PDP recorded at the strongest AOAs in azimuth and elevation on an absolute propagation time axis by appropriately shifting and summing each PDP in time using the ray-tracing absolute time predictions. This method was performed over all measured RX locations, and allowed us to synthesize omnidirectional absolute timing PDPs as would have been measured with a quasi-omnidirectional isotropic antenna, yielding one unique omnidirectional 3-D power delay profile at each TX-RX location combination. Our measurement approach enabled high gain antennas to measure the channel over large distances in a directional mode, and the orthogonal beam patterns in adjacent beam pointing angles allow the orthogonal PDPs to be summed in time and space to form a 3-D omnidirectional model.
![A 3-dimensional view of the downtown Manhattan area using the MATLAB-based 3-D ray-tracer. The rays which leave the TX and successfully arrive at the RX are shown in red, and represent multipath signal paths. The TX was located on the rooftop of the Coles Sports Center 7 m above ground (yellow star), and the RX was located 113 m away, 1.5 m above ground (black circle) [@Rap13:2].[]{data-label="fig:RayTracedMap"}](RayTraced_Map.eps){width="3.5in"}
3-D Statistical Channel Parameters
==================================
Our statistical channel model recreates omnidirectional channel impulse responses $h(t,\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}})$, where $t$ denotes absolute propagation time delay, $\overrightarrow{\mathbf{\Theta}}=(\theta,\phi)_{TX}$ represents the vector of AOD azimuth and elevation angles, and $\overrightarrow{\mathbf{\Phi}}=(\theta,\phi)_{RX}$ is the vector of AOA azimuth and elevation angles, as defined in Eqs. (\[IREq\]), (\[3DEq1\]), and (\[3DEq2\]). Previous work modeled the channel impulse response as a function of time [@Saleh87], azimuth angle of arrival [@Spencer97][@Ertel98], and AOD/AOA elevation and azimuth dimensions [@WINNERII]. Note that [@WINNERII] includes AOD and AOA elevation and azimuth information for in-building, indoor-to-outdoor, and outdoor-to-indoor scenarios, but not for the outdoor urban microcellular environment as considered in this paper. Here, we generalize the channel impulse response as a function of time, as well as a function of AOD and AOA azimuth/elevation angles, allowing realistic simulations of directional transmissions at both the TX and RX in azimuth and elevation dimensions. The channel impulse response between TX and RX is written as: $$\label{IREq}\begin{split}
h(t,\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}}) =& \sum_{l_1=1}^{L_{AOD}}\sum_{l_2=1}^{L_{AOA}} \sum_{n=1}^{N} \sum_{m=1}^{M_n} a_{m,n,l_1,l_2} e^{j\varphi_{m,n}} \\
&\cdot \delta(t - \tau_{m,n}) \cdot \delta(\overrightarrow{\mathbf{\Theta}}-\overrightarrow{\mathbf{\Theta}}_{l_1}) \cdot \delta(\overrightarrow{\mathbf{\Phi}}-\overrightarrow{\mathbf{\Phi}}_{l_2})
\end{split}$$
where $L_{AOD}$ and $L_{AOA}$ are the number of AOD and AOA spatial lobes (defined in [@Samimi14][@Samimi13:1], and in Section \[sec:proc\]), respectively; $N$ and $M_n$ are the number of time clusters and the number of intra-cluster subpaths in the $n$^th^ time cluster (as defined in Section \[sec:cluster\]), respectively; $a_{m,n,l_1,l_2}$ is the amplitude of the $m$^th^ subpath component belonging to the $n$^th^ time cluster, departing from AOD lobe $l_1$, and arriving at AOA lobe $l_2$; $\varphi_{m,n}$, and $t_{m,n}$ are the phase and the propagation time of arrival of the $m$^th^ subpath component belonging to the $n$^th^ time cluster, respectively; $\overrightarrow{\mathbf{\Theta}}_{l_1}$ and $\overrightarrow{\mathbf{\Phi}}_{l_2}$ are the azimuth/elevation AODs and AOAs of lobes $l_1$ and $l_2$, respectively. In our channel model, each multipath component is assigned one joint AOD-AOA lobe combination, per the time cluster definition given in Section \[sec:cluster\].
Our statistical channel model also produces the joint AOD-AOA power spectra in the azimuth and elevation domains in 3-D, based on our 28 GHz New York City field measurements that only used a single TX pointing elevation angle at a $10^{\circ}$ downtilt and three RX elevation planes of $0^{\circ}$, $\pm 20^{\circ}$ (note: measurements at 73 GHz provided multiple AOD and AOA elevation angles, and may be extrapolated and used here). The spatial distribution of power is obtained by taking the magnitude squared of $h(t,\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}}) $, and integrating over the time dimension, as shown in (\[3DEq1\]) and (\[3DEq2\]): $$\begin{aligned}
\label{3DEq1}
P(\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}}) &= \int_{0}^{\infty} |h(t,\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}}) |^2 dt\\
\begin{split}P(\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}}) &= \sum_{l_1=1}^{L_{AOD}}\sum_{l_2=1}^{L_{AOA}}\sum_{n=1}^{N} \sum_{m=1}^{M_n} |a_{m,n,l_1,l_2}|^2 \\ \label{3DEq2}
&\cdot \delta(\overrightarrow{\mathbf{\Theta}}-\overrightarrow{\mathbf{\Theta}}_{l_1}) \cdot \delta(\overrightarrow{\mathbf{\Phi}}-\overrightarrow{\mathbf{\Phi}}_{l_2})
\end{split}\end{aligned}$$
Note that $P(\overrightarrow{\mathbf{\Theta}},\overrightarrow{\mathbf{\Phi}})$ in Eq. (\[3DEq2\]) are the total received powers (obtained by integrating the PDP over time) assigned to the lobe AODs and AOAs.
28 GHz Omnidirectional NLOS Path Loss Model
-------------------------------------------
The 28 GHz omnidirectional NLOS path loss model was recovered by summing the received powers measured at each and every azimuth and elevation unique pointing angle combination to recover the total received omnidirectional power at each TX-RX location [@MacCartney14:2]. This procedure is valid since adjacent angular beamwidths are orthogonal to each other, and phases of arriving multipath components with different angular propagation paths can be assumed identically and independently distributed (i.i.d.) and uniform between 0 and $2\pi$ [@Rap15], such that powers in adjacent beam angles can be added. After removing antenna gains and carefully removing double counts occuring from the TX and RX azimuth sweeps, we recovered the corresponding path loss at all measured NLOS locations, and extracted the path loss exponent and shadow factor using the 1 m close-in free space reference distance path loss model [@Rap15][@MacCartney14:2]: $$PL_{NLOS}(d)[dB] = 61.4 + 34 \log_{10} (d) + \chi_{\sigma}, \hspace{.2cm}d > 1 \text{ m}$$ where $\chi_{\sigma}$ is the lognormal random variable with 0 dB mean and shadow factor $\sigma = 9.7$ dB, and 61.4 dB is 28 GHz free space path loss at 1 m.
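As an illustration only (Python rather than the MATLAB simulator described later; the transmit power and unity antenna gains below are placeholder values, not measurement parameters), the model is sampled as follows:

```python
import numpy as np

rng = np.random.default_rng()

def nlos_path_loss_db(d_m, shadow_sigma_db=9.7):
    # PL[dB] = 61.4 + 34 log10(d) + X_sigma, valid for d > 1 m at 28 GHz NLOS
    return 61.4 + 34.0*np.log10(d_m) + rng.normal(0.0, shadow_sigma_db)

d = rng.uniform(60.0, 200.0)                 # T-R separation in metres
pt_dbm, gt_dbi, gr_dbi = 30.0, 0.0, 0.0      # placeholder link-budget values
pr_dbm = pt_dbm + gt_dbi + gr_dbi - nlos_path_loss_db(d)
print(round(d, 1), round(pr_dbm, 1))
```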
Cluster and Lobe Statistics {#sec:cluster}
---------------------------
The temporal and spatial components of our SSCM are modeled by a *time cluster* and *spatial lobe*, respectively, and faithfully reproduce omnidirectional PDPs and power azimuth spectra [@Samimi14]. Time clusters model a group of multipath components travelling closely in time over all directions, and can represent one or more spatial or angular directions within the same time epoch. Spatial lobes represent a small contiguous span of angles at the RX (TX) where energy arrives (departs) over a small azimuth and elevation angular spread. In our SSCM, multiple time clusters can arrive in one spatial lobe, and a time cluster can arrive over many spatial lobes within a small span of propagation time. Time cluster and spatial lobe statistics can be easily extracted from the propagation measurements to build a 3GPP-like statistical channel model, including simple extensions to the current 3GPP and WINNER models that account for cluster subpath delays and power levels [@3GPP:1]. Time cluster and spatial lobe characteristics were illustrated in [@Samimi14] (see Figs. 3 and 4). The measured data suggests that multiple clusters in the time domain arrive up to several hundreds of nanoseconds in excess time delay for arbitrary pointing angles, observable due to our 24.5 dBi high gain antennas. We note that multipath components within a time cluster, i.e., intra-cluster subpath components, were successfully used to model the indoor office environment based on wideband measurements [@Saleh87] [@Spencer97].
Key parameters that serve as inputs to the 3-D mmWave SSCM are referred to as *primary statistics*, and have been identified and defined in [@Samimi14], and include the number of time clusters and the cluster power levels. *Secondary statistics* describe statistical outputs of the SSCM, and include the RMS delay spread and RMS angular spreads which reflect second-order statistics. Secondary statistics provide a means of testing the accuracy of a statistical channel model and simulator over a large ensemble of simulated outputs.
Time Cluster Partitioning
-------------------------
In this work, omnidirectional PDPs were partitioned in time based on a 25 ns minimum inter-cluster void interval, by assuming that multipath components fall within time clusters that are separated by at least 25 ns in time span. Walkways or streets between building facades are typically 8 m in width (roughly 25 ns in propagation delay). The 25 ns inter-cluster void interval allowed us to resolve measured multipath channel dynamics in a simple, yet powerful way, offering a scalable clustering algorithm that can be modified by changing the inter-cluster void interval for arbitrary time resolution. As the minimum inter-cluster void interval is increased, the number of time clusters in a PDP is expected to decrease, while the number of intra-cluster subpaths must consequently increase. In turn, the fewer time clusters must be allocated a larger portion of the total received power, while the greater number of intra-cluster subpaths will receive a lesser amount of the cluster power. While the minimum inter-cluster void interval heavily affects the outcome of the temporal channel parameter statistics, we note that the RMS delay spread is the only time parameter that remains unchanged for arbitrary inter-cluster void interval, making it a good but not the only proper indicator for comparing the simulated PDP outputs with the ensemble of measured PDPs. The 25 ns minimum inter-cluster void interval of our model is comparable to the best/maximum measured time resolution (20 ns) of a single multipath component in 3GPP and WINNER models.
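A minimal sketch of this partitioning rule (Python; the inputs are assumed to be arrays holding the resolvable multipath delays and powers of one omnidirectional PDP) is given below.

```python
import numpy as np

def partition_into_time_clusters(delays_ns, powers_mw, void_ns=25.0):
    # A new time cluster starts whenever two successive multipath components
    # are separated by at least `void_ns` (the minimum inter-cluster void interval).
    order = np.argsort(delays_ns)
    clusters, current = [], [order[0]]
    for prev, nxt in zip(order[:-1], order[1:]):
        if delays_ns[nxt] - delays_ns[prev] >= void_ns:
            clusters.append(current)
            current = []
        current.append(nxt)
    clusters.append(current)
    total = powers_mw.sum()
    # return, per cluster, the component delays and the cluster power
    # normalized to the total received power
    return [(delays_ns[idx], powers_mw[idx].sum()/total) for idx in clusters]
```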
Fig. \[fig:clusterPowers\] shows the temporal cluster powers normalized to the total received power in the omnidirectional profiles, and the least-squares regression exponential model that reproduces the measured cluster powers. The cluster powers were obtained by partitioning omnidirectional PDPs based on a 25 ns minimum inter-cluster void interval, finding the area under each time cluster, and dividing by the total power (area under the PDP). The mean exponential curve is parameterized using two parameters, the average cluster power $\overline{P}_0$ in the first received cluster (i.e., the $y$-intercept at $\tau=0$ ns), and the average cluster decay time $\Gamma$ defined as the time required to reach 37% $(1/e)$ of $\overline{P}_0$. It is worth noticing the measured large cluster power at $\tau=80$ ns, containing close to 80% of the total received power, corresponding to large fluctuations in cluster powers. This phenomenon causes large delay spreads, and is typically modeled using a lognormal random variable, as discussed in Step 5 of Section \[sec:proc\]. In Fig. \[fig:clusterPowers\], we estimated $\overline{P}_0 = 0.883$, and $\Gamma=49.4$ ns. Similarly, Fig. \[fig:subpathPowers\] shows the intra-cluster subpath power levels (normalized to the total cluster powers), with $\overline{P}_0 = 0.342$ and $\gamma=16.9$ ns. The smaller subpath decay time physically means that intra-cluster subpaths decay faster than time clusters.
![Temporal cluster powers normalized to omnidirectional total received power, over cluster excess delays, using a 25 ns minimum inter-cluster void interval. The superimposed least-squares regression exponential curve has an average cluster decay constant of $\Gamma=49.4$ ns, and a $y$-intercept of $\overline{P}_0=88.3$% (see Step 7 in Section \[sec:mmWaveProc\]). A large cluster power can be seen at $\tau=80$ ns, which typically causes large delay spreads.[]{data-label="fig:clusterPowers"}](ClusterPowers.eps){width="3.5in"}
![Intra-cluster subpath powers normalized to time cluster powers, using a 25 ns minimum inter-cluster void interval. The superimposed least-squares regression exponential curve has an average cluster decay constant of $\gamma=16.9$ ns, and a $y$-intercept of $\overline{P}_0=34.2$% (see Step 8 in Section \[sec:mmWaveProc\]).[]{data-label="fig:subpathPowers"}](SubpathPowers.eps){width="3.5in"}
3-D Lobe Thresholding
---------------------
The 3-D spatial distribution of received power is used to extract 3-D directional spatial statistics, by defining a lobe threshold below the maximum peak power in the 3-D power angular spectrum, where all contiguous segment powers above this threshold in both azimuth and elevation were considered to belong to one 3-D spatial lobe. Spatial thresholding was performed on 3-D AOA power spectra, and 2-D AOD azimuth spectra, after linearly interpolating the directional measured powers (in linear units) to a 1$^{\circ}$ angular resolution in azimuth and elevation domains to enhance the angular resolution of our spatial statistics.
Generating 3-D mmWave Channel Coefficients {#sec:proc}
==========================================
A statistical channel model is now presented for generating 3-D mmWave PDPs and spatial power spectra that accurately reflect the statistics of the measurements over a large ensemble, valid for 28 GHz NLOS propagation with a noise floor of -100 dBm over an 800 MHz RF null-to-null bandwidth, and for a maximum system dynamic range of 178 dB [@Rap13:2]. The clustering and lobe thresholding methodologies used in our work effectively de-couple temporal from spatial statistics. Step 12 of our channel model bridges the temporal and spatial components of the SSCM by randomly assigning temporal subpath powers to spatial lobe AODs and AOAs, thereby re-coupling the time and space dimensions to provide an accurate joint spatio-temporal SSCM. In the following steps, $DU$ and $DLN$ refer to the discrete uniform and discrete lognormal distributions, respectively, and the notation $[x]$ denotes the closest integer to $x$. Also, Steps 11 through 15 apply to both AOD and AOA spatial lobes.
Step Procedure for Generating Channel Coefficients {#sec:mmWaveProc}
--------------------------------------------------
*Step 1: Generate the T-R separation distance $d$ ranging from 60-200 m in NLOS (based on our field measurements, and could be modified with further measurements)*: $$d \sim Uniform(d_{min} = 60,d_{max}=200)$$ Note: To validate our simulation, we used the above distance ranges, but for standards work any distance less than 200 m is valid.
*Step 2: Generate the total received omnidirectional power $P_r$ (dBm) at the RX location using the 1 m close-in free space reference distance path loss model [@Rap15][@MacCartney14:2]*: $$\begin{aligned}
&P_r [dBm] = P_t + G_t + G_r - PL[dB]\\
&PL_{NLOS}[dB] = 61.4 + 34\log_{10}\left(d\right)+\chi_{\sigma}, \hspace{.3cm}d\geq 1\text{ m}\end{aligned}$$ where $P_t$ is the transmit power in dBm, $G_t$ and $G_r$ are the TX and RX antenna gains in dBi, respectively, $\lambda = 0.0107$ m, and $\overline{n}$ = 3.4 is the path loss exponent for omnidirectional TX and RX antennas [@MacCartney14:2]. $\chi_{\sigma}$ is the lognormal random variable with 0 dB mean and shadow factor $\sigma=9.7$ dB.
*Step 3: Generate the number of time clusters $N$ and the number of spatial AOD and AOA lobes $(L_{AOD}, L_{AOA})$ at the TX and RX locations, respectively:* $$\begin{aligned}
&N \sim DU[1,6]\\
& L_{AOD} \sim \min \bigg\{ L_{max},\max \Big\{ 1, \min \big\{A,N \big\} \Big\} \bigg\}\\
& L_{AOA} \sim \min \bigg\{ L_{max}, \max \Big\{ 1, \min \big\{B,N \big\} \Big\} \bigg\}\\
\text{and: }& A\sim \text{Poisson}(\mu_{AOD}+0.2)\\
& B \sim \text{Poisson}(\mu_{AOA}+0.1)\end{aligned}$$
where $\mu_{AOD}=1.6$ and $\mu_{AOA} = 1.7$ are the mean number of AOD and AOA lobes observed in Manhattan, respectively, and $L_{max}=5$ is the maximum allowable number of lobes, for both AODs and AOAs. Work in [@Samimi13:1] found the mean number of AOA lobes to be 2.5 using a -20 dB threshold, whereas here we use a -10 dB threshold. The 28 GHz NLOS measurements found the maximum number of clusters $N_{max}=5$ based on measurements in [@Samimi13:1], while at 73 GHz we found $N_{max}=6$, therefore we used $N_{max}=6$ for both frequency bands. Note that $(L_{AOD},L_{AOA})$ must always remain less than or equal to $N$, since the number of spatial lobes must be at most equal to the number of traveling time clusters in the channel. Also, the use of coin flipping was introduced in our previous work [@Samimi14] to generate the pair $(N,L_{AOA})$, to obtain a close fit between measured and statistical data. In this work, however, we use standard well-known distributions, without the use of coin flipping, to promote ease of use in simulated software.
*Step 4: Generate the number of cluster subpaths (SP) $M_n$ in each time cluster:* $$\begin{aligned}
&M_n \sim DU[1, 30]\hspace{.5cm},\hspace{.5cm} n = 1, 2, ... N\end{aligned}$$ At 28 GHz, the maximum and second to maximum number of cluster subpaths were 53 and 30, respectively, over all locations, while it was 30 at 73 GHz, so we choose to use 30 for both frequency bands.
*Step 5: Generate the intra-cluster subpath excess delays $\rho_{m,n}$*: $$\rho_{m,n}(B_{bb}) = \bigg\{ \frac{1}{B_{bb}}\times (m-1) \bigg\}^{1+X}$$ where $B_{bb} = 400$ MHz is the baseband bandwidth of our transmitted PN sequence (and can be modified for different baseband bandwidths), $X$ is uniformly distributed between 0 and 0.43, and $m = 1,2,...M_n, n = 1,2,...N$. This step allows for a minimum subpath time interval of 2.5 ns, while reflecting our observations that the time intervals between intra-cluster subpaths tend to increase with time delay. The bounds on the uniform distribution for $X$ will likely differ depending on the site-specific environment, and can be easily adjusted to fit field measurement observations.
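A possible reading of this step in code (Python; whether $X$ is drawn once per cluster or once per subpath, and the use of nanoseconds as the base unit so that $1/B_{bb}=2.5$ ns, are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng()

def intra_cluster_delays_ns(M_n, B_bb_hz=400e6):
    # Step 5: rho_{m,n} = ((m-1)/B_bb)^(1+X), X ~ U(0, 0.43), delays in ns
    base_ns = 1e9/B_bb_hz                    # 2.5 ns minimum subpath interval
    X = rng.uniform(0.0, 0.43)               # one draw per cluster (assumption)
    return (base_ns*np.arange(M_n))**(1.0 + X)
```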
*Step 6: Generate the cluster excess delays $\tau_n$ (ns):* $$\begin{aligned}
&\tau^{\prime\prime}_n \sim \text{Exp}(\mu_{\tau})\\
&\Delta \tau_n = \text{sort}(\tau^{\prime\prime}_n)-\min(\tau^{\prime\prime}_n) \label{sort}\end{aligned}$$ $$\begin{aligned}
&\tau_n =
\begin{cases}
0, &n = 1\\
\tau_{n-1}+\rho_{M_{n-1},n-1}+\Delta \tau_n+ 25, &n = 2, ..., N
\end{cases}\end{aligned}$$
where $\mu_{\tau}=83$ ns, and *sort()* in (\[sort\]) orders the delay elements $\tau^{\prime\prime}_n$ from smallest to largest. This step assures no temporal cluster overlap by using a 25 ns minimum inter-cluster void interval.
*Step 7: Generate the time cluster powers $P_n$ (mW):* $$\begin{aligned}
\label{scale}
&P^{\prime}_n = \overline{P}_0 e^{-\frac{\tau_n}{\Gamma}} 10^{\frac{Z_n}{10}}\\ \label{eq2}
&P_n = \frac{P^{\prime}_n}{\sum^{k=N}_{k=1} P^{\prime}_k}\times P_r [mW]\\ \label{eq3}
&Z_n \sim N(0,3\text{ dB} ),n = 1, 2, ... N\end{aligned}$$ where $\Gamma=49.4$ ns is the cluster decay time, $\overline{P}_0=0.883$ is the average (normalized) cluster power in the first arriving time cluster, and $Z_n$ is a lognormal random variable with 0 dB mean and $\sigma=3$ dB. Eq. (\[eq2\]) ensures that the sum of cluster powers adds up to the omnidirectional power $P_r$, where $\overline{P}_0$ cancels out and can be used as a secondary statistic to validate the channel model.
*Step 8: Generate the cluster subpath powers $\Pi_{m,n}$ (mW)* : $$\begin{aligned}
&\Pi^{\prime}_{m,n} = \overline{\Pi}_0 e^{-\frac{\rho_{m,n}}{\gamma}} 10^{\frac{U_{m,n}}{10}}\\ \label{eqSP}
&\Pi_{m,n} = \frac{\Pi^{\prime}_{m,n}}{\sum^{k=M_n}_{k=1} \Pi^{\prime}_{k,n}}\times P_n [mW]\\
& U_{m,n} \sim N(0, 6\text{ dB})\\
&m = 1, 2, ..., M_n\hspace{.3cm},\hspace{.3cm} n = 1, 2, ... N\end{aligned}$$ where $\gamma=16.9$ ns is the subpath decay time, $\overline{\Pi}_0 = 0.342$ is the average subpath power (normalized to cluster powers) in the first intra-cluster subpath, and $U_{m,n}$ is a lognormal random variable with 0 dB mean and $\sigma= 6$ dB. For model validation, the minimum subpath power was set to -100 dBm. Eq. (\[eqSP\]) ensures that the sum of subpath powers adds up to the cluster power. Note: our measurements have much greater temporal and spatial resolution than previous models. Intra-cluster power levels were observed to fall off exponentially over intra-cluster time delay, as shown in Fig. \[fig:subpathPowers\] and in [@Samimi14].
*Step 9: Generate the cluster subpath phases $\varphi_{m,n}$ (rad)* : $$\begin{aligned}
&\varphi_{m,n} = \varphi_{1,n} + 2\pi f \rho_{m,n}\\
& \varphi_{1,n} \sim U(0,2\pi)\\
& m = 2, ..., M_n, n = 1, 2, ..., N\end{aligned}$$
where $f = 28 \times 10^{9}$ Hz, and $\rho_{m,n}$ are the intra-cluster subpath delays in $s$ from Step 5, where $f$ can be any carrier frequency. The subpath phases $\varphi_{m,n}$ are i.i.d and uniformly distributed between 0 and $2\pi$, as modeled in [@Saleh87] [@Spencer97].
*Step 10: Recover absolute time delays $t_{m,n}$ (ns) of cluster subpaths using the T-R Separation distance:* $$\begin{aligned}
&t_{m,n} = t_0 + \tau_n + \rho_{m,n}\hspace{.5cm}, \hspace{.5cm}t_0 = \frac{d}{c}\end{aligned}$$ where $m = 1,2,...M_n, n = 1,2,...N$, and $c = 3\times 10^8 \text{ } m/s$ is the speed of light in free space.
*Step 11 a: Generate the mean AOA and AOD azimuth angles $\theta_i(^{\circ})$ of the 3-D spatial lobes to avoid overlap of lobe angles:* $$\begin{aligned}
&\theta_{i} \sim DU[\theta_{min},\theta_{max}]\hspace{.3cm},\hspace{.3cm} i = 1, 2, ..., L\\
& \theta_{min} = \frac{360(i-1)}{L}\hspace{.4cm},\hspace{.3cm} \theta_{max} = \frac{360i}{L}\end{aligned}$$
*Step 11 b: Generate the mean AOA and AOD elevation angles $\phi_i(^{\circ})$ of the 3-D spatial lobes:* $$\begin{aligned}
&\phi_{i} \sim [N(\mu,\sigma)]\hspace{.3cm},\hspace{.3cm} i = 1, 2,..., L. \end{aligned}$$ Positive and negative values of $\phi_i$ indicate a direction above and below horizon, respectively. While our 28 GHz Manhattan measurements used a fixed 10$^{\circ}$ downtilt at the transmitter, and considered fixed AOA elevations planes of 0$^{\circ}$ and $\pm 20^{\circ}$ at the receiver, mmWave transmissions are expected to beamform in the strongest AOD and AOA elevation and azimuth directions as was emulated in our 73 GHz Manhattan measurements in Summer 2013 [@MacCartney14:1]. Thus, we specify $(\mu, \sigma)=(-4.9^{\circ},4.5^{\circ}$) for AOD elevation angles, and $(\mu,\sigma)=(3.6^{\circ},4.8^{\circ})$ for AOA elevation angles from our 73 GHz NLOS measurements.
*Step 12: Generate the AOD and AOA lobe powers $P(\theta_i, \phi_i)$ by assigning subpath powers $\Pi_{m,n}$ successively to the different AOD and AOA lobe angles ($\theta_i,\phi_i$):* $$\begin{aligned}
P(\theta_i, \phi_i) =& \sum_{n=1}^N\sum_{m=1}^{M_n} \delta_{i w_{m,n}} \Pi_{m,n}\hspace{.3cm},\hspace{.3cm} i = 1, 2, ..., L\\
&w_{m,n} \sim DU[1,L] \text{ and }\delta_{rs} =
\begin{cases}
0, r \neq s\\
1, r = s
\end{cases}\end{aligned}$$
where $\delta_{rs}$ corresponds to the Kronecker delta. The cluster subpath $(m,n)$, with power level $\Pi_{m,n}$, is assigned to lobe $i$ only if $w_{m,n}=i$. This step distributes subpath powers into the spatial domains based on measurements in [@Samimi13:1].
*Step 13: Generate the AOA and AOD lobe azimuth and elevation spreads $K_i$ (azimuth) and $H_i$ (elevation):*
For AODs:\
For AOAs:
AOD elevation spreads are fixed at $10^{\circ}$ based on our 28 GHz measurements that used a 10.9$^{\circ}$ antenna beamwidth. We have allowed for at most $10\%$ lobe azimuth and elevation overlap in adjacent spatial lobes.
*Step 14: Generate the discretized lobe segment azimuth and elevation angles ($\theta_{i,j}, \phi_{i,l}$) for lobe AODs and AOAs:* $$\begin{aligned}
\theta_{i,j} = \theta_i+k_j, j = 1,2,...K_i, i=1,2,...L\\
\phi_{i,l} = \phi_i+h_l, l = 1,2,...H_i, i=1,2,...L\end{aligned}$$
where: $(X, W) \sim DU[0,1]$, $Y=1-X$, $Z=1-W$. $$\begin{cases}
k_j=-\frac{K_i-1}{2},...,-1,0,1,...,\frac{K_i-1}{2},& K_i \text{ odd}\\
k_j=-\frac{K_i}{2}+Y,...,-1,0,1,..,\frac{K_i}{2}-X,& K_i \text{ even}\\
\end{cases}$$ $$\begin{cases}
h_l=-\frac{H_i-1}{2},...,-1,0,1,...,\frac{H_i-1}{2},& H_i \text{ odd}\\
h_l=-\frac{H_i}{2}+W,...,-1,0,1,...,\frac{H_i}{2}-Z,& H_i \text{ even}\\
\end{cases}$$
This step discretizes the spatial lobes into 1$^{\circ}$ angular segments in both azimuth and elevation dimensions.
*Step 15: Generate the AOD and AOA lobe angular powers $P(\theta_{i,j},\phi_{i,l})(mW)$ at each $1^{\circ}$ angular segment:* $$\begin{aligned}
&P(\theta_{i,j},\phi_{i,l}) = R(\Delta \theta_{i,j},\Delta \phi_{i,l}) P(\theta_i,\phi_i)\\
&R(\Delta \theta_{i,j},\Delta \phi_{i,l}) = \max \bigg\{ e^{-\frac{1}{2}\left(\frac{(\Delta \theta_{i,j})^2}{\sigma_{\theta_{i}}^2}+\frac{(\Delta \phi_{i,l})^2}{\sigma_{\phi_{i}}^2}\right) }, \frac{1}{10} \bigg\}\end{aligned}$$ $$\begin{aligned}
& j = 1, 2, ...,K_i, l = 1, 2, ..., H_i, i = 1,2,...,L\end{aligned}$$
For AODs, $\sigma_{\theta_i}\sim N(6.6^{\circ},3.5^{\circ})$ and $\sigma_{\phi_i}\sim N(5^{\circ},0^{\circ})$.
For AOAs, $\sigma_{\theta_i}\sim N(6^{\circ},1^{\circ})$ and $\sigma_{\phi_i}\sim N(6^{\circ},2^{\circ})$.
Measured vs. Simulated Statistics using a MATLAB-Based Statistical Simulator
----------------------------------------------------------------------------
A MATLAB-based statistical simulator using our 3-D SSCM generated a large ensemble (10,000) of mmWave PDPs, and AOD and AOA power spectra, where simulated and measured channel statistics were compared. Fig. \[fig:CDFOmni\] shows the cumulative distribution function (CDF) of the synthesized and simulated RMS delay spreads, showing relatively close agreement, with medians of 31 ns and 32 ns, respectively. The RMS delay spread CDF was skewed due to one large value of 222.4 ns, and so the median was chosen to reflect the empirical distribution trend. Fig. \[fig:CDFAngular\] shows the CDF of the RMS lobe azimuth and elevation spreads from the measured and simulated AOA power spectra, showing very good agreement with field measurements with identical measured and simulated means of 7$^{\circ}$ over both azimuth and elevation.
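For completeness, the secondary statistic compared in Fig. \[fig:CDFOmni\] is the power-weighted RMS delay spread of a PDP; a small helper (Python, not the authors' simulator code) that computes it from arrays of delays and powers:

```python
import numpy as np

def rms_delay_spread_ns(delays_ns, powers_mw):
    # sigma_tau = sqrt(E[tau^2] - E[tau]^2), with the PDP powers as weights
    w = powers_mw/powers_mw.sum()
    mean_tau = np.sum(w*delays_ns)
    return np.sqrt(np.sum(w*delays_ns**2) - mean_tau**2)
```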
![28 GHz NLOS synthesized and simulated omnidirectional RMS delay spreads. The median RMS delay spreads were 31 ns and 32 ns, for the synthesized and simulated data sets, respectively. The close agreement between measured and simulated data validates our NLOS statistical channel model.[]{data-label="fig:CDFOmni"}](CDFOmni.eps){width="3.5in"}
![28 GHz NLOS RMS lobe azimuth and elevation spreads, measured as compared to simulated, using a -10 dB lobe threshold. The simulated data is in good agreement to the measured RMS angular spreads, validating the spatial component of our NLOS statistical channel model.[]{data-label="fig:CDFAngular"}](AngularSpread_CDF.eps){width="3.5in"}
Conclusion
==========
This paper presents the first comprehensive 3-D statistical spatial channel model for mmWave NLOS communication channels. The thousands of measured PDPs have allowed us to create a 3-D SSCM that recreates the measured channel statistics over a large ensemble of simulated channels, and can be extended to arbitrary bandwidths and antenna patterns for use in physical layer simulations, such as physical layer design, and 3-D beamforming and beamcombining simulations used in MIMO systems in NLOS environments.
|
---
abstract: 'The a.s. existence of a polymer probability in the infinite volume limit is readily obtained under general conditions of weak disorder from standard theory on multiplicative cascades or branching random walk. However, speculations in the case of strong disorder have been mixed. In this note existence of an infinite volume probability is established at critical strong disorder for which one has convergence in probability. Some calculations in support of a specific formula for the a.s. asymptotic variance of the polymer path under strong disorder are also provided.'
title: |
Tree polymers in the infinite volume limit at\
critical strong disorder
---
Introduction and Preliminaries
==============================
Polymers are abstractions of chains of molecules embedded in a solvent by non-self-intersecting polygonal paths of points whose probabilities are themselves random (reflecting impurities of the solvent). In this connection, tree polymers take advantage of a particular way to determine path structure and their probabilities as follows.
Three different references to paths occur in this formulation. An *$\infty$-tree path* is a sequence $s=(s_1,s_2,\ldots) \in \{-1,1\}^\mathbb{N}$ emanating from a root $0$. A *finite tree path* or *vertex* $v$ is a finite sequence $v=s|n=(s_1,\ldots,s_n)$, read path $s$ restricted to level $n$, of length $|v|=n$. The symbol $*$ denotes concatenation of finite tree paths; if $v=(v_1,
\ldots,v_n)$ and $t=(t_1,\ldots,t_m)$, then $v*t=(v_1,\ldots,v_n,t_1,\ldots,t_m)$. Vertices belong to $T:=\bigcup_{n=0}^\infty \{-1,1\}^n$, and can be viewed as unique finite paths to the root of the directed binary tree $T$ equipped with the obvious graph structure. We also write $$\partial T = \{-1,1\}^\mathbb{N}$$ for the boundary of $T$. The third type of path, and the one of main interest to polymer questions, is that of the *polygonal tree path* defined by $n\rightarrow(s)_n:=\sum_{j=1}^ns_j$, $n\geq0$, with $(s)_0:=0$, for a given $s \in \partial T$.
$\partial T$ is a compact, topological Abelian group for coordinate-wise multiplication and the product topology. The *uniform distribution* on $\infty$-tree paths is the Haar measure on $(\partial T, \mathcal{B})$, i.e. $$\lambda(ds)=\left( \frac{1}{2}\delta_+(ds) + \frac{1}{2}\delta_-(ds) \right)^\mathbb{N}.$$
Let $\{X_v:v\in T\}$ be an i.i.d. family of positive random variables on $(\Omega,\mathcal{F},
P)$ with $\mathbb{E}X < \infty$; we denote a generic random variable with the common distribution of $X_v$ by $X$. Without loss of generality we may assume that $\mathbb{E}X=1$. Define a sequence of *random probability measures* $\text{prob}_n(ds)$ on $(\partial T,
\mathcal{B})$ by the prescription that $$\text{prob}_n(ds) << \lambda(ds)$$ with $$\frac{d\text{prob}_n}{d\lambda}(s)=Z_n^{-1}\prod_{j=1}^n X_{s|j}$$ where $$Z_n=\int_{\partial T}\prod_{j=1}^n X_{s|j}\lambda(ds)=\sum_{|s|=n}\prod_{j=1}^n X_{s|j}2^{-n}.$$
Observing that $\{Z_n:n=1,2\ldots\}$ is a positive martingale, it follows that $$Z_\infty := \lim_{n\rightarrow\infty}Z_n$$ exists a.s. in $(\Omega,\mathcal{F},P)$. According to a classic theorem of Kahane and Peyrière (1976) in the context of multiplicative cascades, and Biggins (1976) in the context of branching random walks, one has the following dichotomy: $$\begin{aligned}
P(Z_\infty>0) = 1 \quad &\Longleftrightarrow \quad \mathbb{E}X \ln X < \ln 2 \\
P(Z_\infty=0) = 1 \quad &\Longleftrightarrow \quad \mathbb{E}X \ln X \geq \ln 2. \end{aligned}$$ The a.s. occurrence of the event $[Z_\infty > 0]$ is referred to as *weak disorder* and that of $[Z_\infty=0]$ as *strong disorder*; see Bolthausen (1989). In particular, the critical case $\mathbb{E}X\ln X =\ln 2$ is strong disorder. In the case of tree polymers one may view the notions of weak/strong in terms of a disorder parameter defined by $\mathbb{E}X\ln X$ and relative to the branching rate, $\ln 2$.
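To make the dichotomy concrete, $Z_n$ is straightforward to simulate; the sketch below (Python, using the lognormal weights $X=e^{\beta Z-\beta^2/2}$ considered again later in the paper, for which $\mathbb{E}X\ln X=\beta^2/2$ and hence the critical value is $\beta_c=\sqrt{2\ln 2}$) illustrates the behavior of $Z_n$ below, at and above critical disorder:

```python
import numpy as np

rng = np.random.default_rng()

def sample_Zn(n, beta):
    """One realization of Z_n = 2^{-n} sum_{|s|=n} prod_{j<=n} X_{s|j} on the
    binary tree, with weights X = exp(beta*N(0,1) - beta^2/2) (so E X = 1)."""
    log_paths = np.zeros(1)      # log of the weighted products along all paths
    for _ in range(n):
        noise = rng.standard_normal(2*log_paths.size)
        log_paths = np.repeat(log_paths, 2) + beta*noise - beta**2/2 - np.log(2)
    return np.exp(log_paths).sum()

beta_c = np.sqrt(2*np.log(2))
for beta in (0.5*beta_c, beta_c, 2.0*beta_c):   # weak, critical, strong disorder
    print(round(beta, 3), [round(sample_Zn(18, beta), 5) for _ in range(3)])
```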
In this short communication we provide some new insights into a few delicate problems for the case of strong disorder.
Tree Polymers under Weak Disorder
=================================
To set the stage for contrast, we record a rather robust consequence of weak disorder.
Under weak disorder, there is a random probability measure $\text{\emph{prob}}_\infty(ds)$ on $(\partial T ,\mathcal{B})$ such that a.s. $$\text{\emph{prob}}_n(ds) \Rightarrow \text{\emph{prob}}_\infty(ds)$$ where $\Rightarrow$ denotes weak convergence.
Define $\lambda_n(ds)=Z_n\text{prob}_n(ds)$, $n=1,2,\ldots$. By Kahane’s $T$-martingale theory, e.g., Kahane (1989), $\lambda_n(ds)$ converges vaguely to a non-zero random measure $\lambda_\infty(ds)$ on $(\partial T, \mathcal{B})$ with probability one. By definition of weak disorder $Z_n \rightarrow Z_\infty > 0$ a.s., thus we obtain $$\text{prob}_n(ds)=Z_n^{-1}\lambda_n(ds) \Rightarrow Z_\infty^{-1}\lambda_\infty(ds) \quad \text{a.s.}$$
Notice that in the case of no disorder, i.e. $X=1$ a.s., one has $$\text{prob}_n(ds)=\lambda(ds) \quad \forall n=1,2,\ldots.$$ Moreover, under $\lambda(ds)$, the polygonal paths are simply symmetric simple random walk paths, where the probability theory is quite well-known and complete. For example, the central limit theorem takes the form $$\lim_{n\rightarrow\infty} \lambda\left(\left\{s\in\partial T: \frac{(s)_n}{\sqrt{n}}\leq x
\right\}\right) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-\xi^2/2}d\xi.$$ For probability laws involving convergence in distribution, one may ask if the CLT continues to hold a.s. with $\lambda(ds)$ replaced by $\text{prob}_n(ds)$. This form of universality was answered in the affirmative by Waymire and Williams (2010) for weak disorder under the additional assumption that $\mathbb{E}X^{1+\delta}<\infty$ for some $\delta>0$. Problems involving limit laws such as a.s. strong laws, a.s. laws of the iterated logarithm, etc, however, require an infinite volume probability $\text{prob}_\infty(ds)$ for their formulation. While the preceding proposition answers this in the case of weak disorder, the problem is open for strong disorder. Moreover, it has been speculated by Yuval Peres (private communication) that $\text{prob}_n(ds)$ will a.s. have infinitely many weak limit points under strong disorder. However, in the case of critical strong disorder we show that a natural infinite volume polymer exists and is related to the finite volume polymers through limits in probability.
Tree Polymers at Critical Strong Disorder
=========================================
In this section we show the existence under critical strong disorder, i.e., assuming $\mathbb{E}X\ln X = \ln 2$, of an infinite volume polymer probability $\text{prob}_\infty(ds)$ that may be viewed as the weak limit in probability of the sequence $\text{prob}_n(ds), n\ge 1,$ in the sense that its characteristic function is the limit in probability of the corresponding sequence of characteristic functions of $\text{prob}_n(ds), n\ge 1$.
For $v\in T$, $v=(v_1,\ldots,v_m)$, say, let $$\Delta_m(v)=\{s\in\partial T:s_i=v_i, \text{ } i=1,\ldots,m\}, \qquad |v|=m.$$ Since $T$ is countable there are countably many such finite-dimensional rectangles in $\partial T$.
For $m>n$, note that $$\begin{aligned}
\text{prob}_n(\Delta_m(v)) &= \int_{\Delta_m(v)}\frac{d\text{prob}_n}{d\lambda}(s)\lambda(ds) \\
&= \int_{\Delta_m(v)}Z_n^{-1}\prod_{j=1}^nX_{s|j}\lambda(ds) \nonumber\\
&= Z_n^{-1}\int_{\Delta_m(v)}\prod_{j=1}^nX_{v|j}\lambda(ds) \nonumber\\
&= Z_n^{-1}\prod_{j=1}^n X_{v|j}\cdot 2^{-m}.
\nonumber\end{aligned}$$
For example, $$\begin{aligned}
\text{prob}_1(\Delta_m(v)) &= Z_1^{-1}X_{v|1}2^{-m}, \qquad Z_1=\frac{X_+ + X_-}{2} \\
&= \frac{X_{v|1}2^{-(m-1)}}{X_+ + X_-}\\
&= \left\{
\begin{array}{rl}
\frac{X_+2^{-(m-1)}}{X_+ + X_-}, & \quad v|1=+1 \\
\frac{X_-2^{-(m-1)}}{X_+ + X_-}, & \quad v|1=-1.
\end{array}\right.\end{aligned}$$ $\sum_{|v|=m}\text{prob}_1(\Delta_m(v))=1$ since there are $2^m$ such $v$’s, half of which have $v_1=+1$ and the other half have $v_1=-1$.
For $m\leq n, |v|=m$, we have $$\begin{aligned}
\text{prob}_n(\Delta_m(v))
&= Z_n^{-1}\int_{\Delta_m(v)}\prod_{j=1}^n X_{s|j} \lambda(ds) \\
&= Z_n^{-1}\prod_{j=1}^m X_{v|j}\sum_{|t|=n-m}\prod_{j=1}^{n-m} X_{(v*t)|j}2^{-n}\nonumber\\
&= Z_n^{-1}\left(\prod_{j=1}^m X_{v|j} 2^{-m}\right)Z_{n-m}(v),
\nonumber\end{aligned}$$ where $$Z_0(v)=1, \quad Z_{n-m}(v)=\sum_{|t|=n-m}\prod_{j=1}^{n-m}X_{(v*t)|j}2^{-(n-m)}.$$ In particular, $Z_n=Z_n(0)$, where $0\in T$ is the root.
Note that $$\begin{aligned}
Z_n &= \sum_{|u|=m}\sum_{|t|=n-m}\prod_{j=1}^{m}X_{u|j}2^{-m}\prod_{j=1}^{n-m}X_{(u*t)|j}2^{-(n-m)} \\
&= \sum_{|u|=m}Z_{n-m}(u)\prod_{j=1}^m X_{u|j}2^{-m}.
\nonumber\end{aligned}$$ Thus, letting $a_k = 1/\sqrt{k}, k\ge 1$, $$\begin{aligned}
\text{prob}_n(\Delta_m(v))
&= \frac{D_{n-m}(v)\prod_{j=1}^m X_{v|j}2^{-m}\frac{Z_{n-m}(v)}{a_{n-m}D_{n-m}(v)}}
{\sum_{|u|=m}D_{n-m}(u)\left(\prod_{j=1}^m X_{v|j}2^{-m}\right)\frac{Z_{n-m}(u)}{a_{n-m}D_{n-m}(u)}}\\
&\longrightarrow \frac{D_\infty(v)\prod_{j=1}^m X_{v|j}2^{-m}}
{\sum_{|u|=m}D_\infty(u)\left(\prod_{j=1}^m X_{v|j}2^{-m}\right)}
\nonumber\end{aligned}$$ where (i) the convergence to $D_\infty(v)$ is the almost sure limit of the [*derivative martingale*]{} obtained by Biggins and Kyprianou (2004), and (ii) $\lim_{n\rightarrow\infty}\frac{Z_{n-m}(v)}
{a_{n-m}D_{n-m}(v)} = c > 0$ is the limit in probability at critical strong disorder recently obtained by Aidékon and Shi (2011). The constant $c = ({2\over\pi\sigma^2})^{1/2}$, for $\sigma^2 =
\mathbb{E}\{X(\ln(X))^2\} - (\mathbb{E}\{X\ln(X)\})^2 > 0$, does not depend on $v\in T$. Aidékon and Shi (2011) also point out that the almost sure positivity of $D_\infty(v)$ follows from Biggins and Kyprianou (2004) and Aidékon (2011). The sequence $a_k = k^{-{1\over 2}}, k\ge 1,$ is referred to as the Seneta-Heyde scaling.
For each $v\in T$, there is a set $N(v)$ of probability zero such that $$D_\infty(v,\omega) = \lim_{n\rightarrow\infty}D_n(v,\omega), \quad \omega\in\Omega
\backslash N(v).$$ Since $T$ is countable, the set $N=\bigcup_{v\in T}N(v)$ is still a $P$-null subset of $\Omega$. The almost sure convergence of the derivative martingales is essential to the construction of $\text{\emph{prob}}_\infty$ given in the lemma below.
We now define $$\text{prob}_\infty(\Delta_m(v),\omega)
=\frac{D_\infty(v,\omega)\prod_{j=1}^m X_{v|j}(\omega)2^{-m}}
{\sum_{|u|=m}D_\infty(u,\omega)\left(\prod_{j=1}^m X_{u|j}(\omega)2^{-m}\right)}$$ for $\omega\in\Omega\backslash N$.
$\text{\emph{prob}}_\infty(\Delta_m(v),\omega)$ extends to a unique probability on $(\partial T,\mathcal{B})$ for each $\omega\in\Omega\backslash N$.
We use Caratheodory extension, taking careful advantage of the fact that the sets $\Delta(v)$, $v\in T$, are both open and closed subsets of the compact set $\partial T$. For $\omega\in
\Omega\backslash N$, $\text{prob}_\infty(\cdot,\omega)$ extends to the algebra generated by $\{\Delta(v):v\in T\}$ by addition. Since $\partial T$ is compact and the rectangles are both open and closed, countable additivity on this algebra must hold as a consequence of finite additivity; i.e. if $\bigcup_{i=1}^\infty \Delta(v_i)$ is contained in the algebra generated by $\{\Delta(v):v\in T\}$, then $\bigcup_{i=1}^\infty \Delta(v_i)$ is closed, hence compact, and its own open cover, i.e. $\bigcup_{i=1}^\infty \Delta(v_i)
=\bigcup_{i=1}^l \Delta(v_{i_l})$ for some finite subsequence $\{i_j\}_{j=1}^l$ of $\{1,2,\ldots\}$.
At critical strong disorder, for each finite set $F\subseteq\mathbb{N}$ $$\widehat{\text{\emph{prob}}}_n(F) \quad \Rightarrow \quad \widehat{\text{\emph{prob}}}_\infty(F) \qquad \text{in probability},$$ where $\widehat{\text{\emph{prob}}}_n, n\ge 1,
\widehat{\text{\emph{prob}}}_\infty$ denote their respective Fourier transforms as probabilities on the compact abelian multiplicative group $\partial T$ for the product topology.
The continuous characters of the group $\partial T$ are given by $$\chi_F(t) = \prod_{j\in F}t_j \quad \text{for finite sets } F\subseteq\mathbb{N}.$$ In particular there are only countably many characters of $\partial T$. From standard Fourier analysis it follows that we need only show that $$\lim_{n\rightarrow\infty}\mathbb{E}_{\text{prob}_n}\chi_F = \mathbb{E}_{\text{prob}_\infty}\chi_F
\quad \text{in probability}$$ for each finite set $F\subseteq\mathbb{N}$. Let $m=\text{max}\{k:k\in F\}$. Then for $n>m$, $$\begin{aligned}
\mathbb{E}_{\text{prob}_n}\chi_F &= \int_{\partial T=\bigcupdot_{|v|=m}\Delta_m(v)}\chi_F(s)\frac{d\text{prob}_n}{d\lambda}(s)\lambda(ds)\\
&= \sum_{|v|=m}\left(\prod_{j\in F}v_j\right) Z_n^{-1}(0) \prod_{j=1}^m X_{v|j}2^{-m}\sum_{|t|=n-m}
\prod_{j=1}^{n-m} X_{(v*t)|j}2^{-(n-m)}\\
&= \sum_{|v|=m}\left(\prod_{j\in F}v_j\right)\prod_{j=1}^m X_{v|j}2^{-m}
\frac{Z_{n-m}(v)}{Z_n(0)}\\
&= \sum_{|v|=m}\left(\prod_{j\in F}v_j\right)\prod_{j=1}^m X_{v|j}2^{-m}D_{n-m}(v)
\frac{\frac{Z_{n-m}(v)}{a_{n-m}D_{n-m}(v)}}{\sum_{|u|=m}\prod_{j=1}^m X_{u|j}2^{-m}D_{n-m}(u)
\frac{Z_{n-m}(u)}{a_{n-m}D_{n-m}(u)}}\\
&\longrightarrow \mathbb{E}_{\text{prob}_\infty}\chi_F,\end{aligned}$$ where the convergence is almost sure for terms of the form $D_{n-m}$ and in probability for those of the form $Z_{n-m}/(a_{n-m}D_{n-m})$ as $n\to\infty$.
Diffusivity Problems at Strong Disorder
=======================================
With regard to the aforementioned a.s. limits in distribution of polygonal tree paths, Waymire and Williams (2010) also obtained a.s. limits of the form $$\lim_{n\to\infty}{\ln E_{prob_n}e^{r(S)_n}\over n} = F(r)$$ under both weak and strong disorder. Let us refer to these as almost sure [*Laplace rates*]{} in reference to the Laplace principle of large deviation theory.
In the case of weak disorder the universal limit is $F(r) = \ln\cosh(r)$, in a neighborhood of the origin, otherwise independent of the distribution of $X$. In addition to being independent of the distribution of $X$ within the range of weak disorder, this universality of Laplace rates is manifested in the coincidence with the same limit obtained for $X \equiv 1$, i.e., for simple symmetric random walk.
For an illustrative case of strong disorder, consider $X = e^{\beta Z - {\beta^2\over 2}}$, where $Z$ is standard normal and $\beta \ge \beta_c = \sqrt{2\ln2}$. Then from Waymire and Williams (2010), it follows that, a.s. in a neighborhood of the origin, $$F(r) = r\tanh(rh(r)) + \beta^2h(r) - \beta\beta_c,$$ where $h(r)$ is the uniquely determined solution to $$\beta^2 h^2(r) + 2rh(r)\tanh(rh(r))
-2\ln\cosh(rh(r)) = \beta_c^2;$$ also see Waymire and Williams (Sec 6, Cor 2, 2010) for the general formulae in the case of strong disorder. In particular, the universality of the Laplace rates breaks down, even at critical strong disorder. A graph of $F(r)$ computed from MATLAB is indicated in Figure 1 for the strong disorder case of $\beta = 2\beta_c$.
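For readers who wish to reproduce such a plot, the following short numerical sketch (our illustration, assuming NumPy and SciPy are available; it is not the computation behind Figure 1) solves the implicit equation for $h(r)$ by root bracketing and then evaluates $F(r)$:

```python
import numpy as np
from scipy.optimize import brentq

beta_c = np.sqrt(2.0 * np.log(2.0))

def h_of_r(r, beta):
    """Solve beta^2 h^2 + 2 r h tanh(r h) - 2 ln cosh(r h) = beta_c^2 for h > 0."""
    g = lambda h: (beta**2 * h**2 + 2.0*r*h*np.tanh(r*h)
                   - 2.0*np.log(np.cosh(r*h)) - beta_c**2)
    # g is strictly increasing in h with g(0+) = -beta_c^2 < 0, so a root is bracketed
    return brentq(g, 1e-12, 10.0)

def F(r, beta):
    h = h_of_r(r, beta)
    return r*np.tanh(r*h) + beta**2*h - beta*beta_c

beta = 2.0 * beta_c                      # the strong-disorder case discussed in the text
rs = np.linspace(-2.0, 2.0, 201)
Fs = [F(r, beta) for r in rs]            # F(0) evaluates to 0, as it must
```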
![Graph of the function $F$ for various $\beta$.[]{data-label="Figure 1: "}](Fplot.png)
Using the equations defining $F(r)$ one may easily verify that $F(0) = 0, F^\prime(0) = 0$ and $F^{\prime\prime}(0) = { 2\beta\beta_c-\beta_c^2\over\beta^2}$. While these specific calculations follow directly from the general results of Waymire and Williams (2010), from here one is naturally led to speculate [^1] that the asymptotic variance under strong disorder is obtained under diffusive scaling by $\sqrt{n}$ precisely as $$\sigma^2(\beta) = {2\beta\beta_c-\beta_c^2\over\beta^2},\quad \beta\ge\beta_c.$$ In particular this formula continuously extends the weak disorder variance $\sigma^2(\beta) \equiv 1, \beta < \beta_c,$ across $\beta = \beta_c$. In any case, this quantity is a basic parameter of the rigorously proven limit $F(r)$.
Acknowledgment
==============
The authors thank the referee for spotting a serious error in the original draft and providing the reference to Aidékon and Shi (2011) used in this paper. The first author was partially supported by an NSF-IGERT-0333257 graduate training grant in ecosystems informatics at Oregon State University, and the second author was partially supported by a grant DMS-1031251 from the National Science Foundation.
[99]{}
(2011). Convergence in law of a minimum of a branching random walk, arXiv: 1101.1810.
(2011). Martingale ratio convergence in the branching random walk, arXiv:1102.0217
(2008). Superdiffusivity for a Brownian polymer in a continuous Gaussian environment. [**36**]{}(5), 1642–1675.
(1976). The first-and-last-birth problem for a multitype age-dependent branching process. [**8**]{}, 446–459.
(2004). Measure change in multitype branching, [**36**]{}, 544–581.
(1989). A note on diffusion of directed polymers in a random environment. [**123**]{}, 529–534.
(1991). On directed polymers in a random environment. [*Selected Proceedings of the Sheffield Symposium on Applied Probability*]{}, eds. I.V. Basawa, R.I. Taylor, IMS Lecture Notes Monograph Series. [**18**]{}, 41–47.
(2006). Directed polymers in random environment are diffusive at weak disorder. [**34**]{}(5), 1746–1770.
(2009). Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees. [**37**]{}(2), 742–789.
(1989). Random multiplications, random coverings, and multiplicative chaos. [*Proceedings of the Special Year in Modern Analysis*]{}, E. Berkson, N. Tenney Peck, J. Jerry Uhl, eds., London Math. Soc. Lect. Notes, Cambridge Univ. Press, London. [**137**]{}, 196–255.
(1976). Sur certaines martingales de Benoit Mandelbrot. [*Adv. in Math.*]{} [**22**]{}, 131–145.
(1994). A general decomposition theory for random cascades. [**31**]{}(2), 216–222.
(1995). Multiplicative cascades: dimension spectra and dependence. [*J. Four. Anal. and Appl.*]{}, special issue to honor J-P. Kahane, 589–609.
(1996). A cascade decomposition theory with applications to Markov and exchangeable cascades. [**348**]{}(2), 585–632.
(1995). Markov cascades. [*IMA Volume on Branching Processes*]{}, eds., K. Athreya and P. Jagers, Springer-Verlag, NY.
(2010). T-martingales, size-biasing and tree polymer cascades. [*Fractals and Related Fields*]{}, ed., J. Barral. <http://www.math.oregonstate.edu/~waymire/index.html> (to appear).
[^1]: To avoid potential confusion, let us mention that other forms of polymer scalings appear in the recent probability literature under which the polymer is referred to as superdiffusive even in the context of weak disorder; e.g., in reference to wandering exponents in Bezerra, Tindel, Viens (2008).
|
---
abstract: 'A recent paper by Alves, Das, and Perez contains a calculation of the one-loop self-energy in $\phi^{3}$ field theory at $T\neq 0$ using light-front quantization and concludes that the self-energy is different than the conventional answer and is not rotationally invariant. The changes of variable displayed below show that despite the complicated appearance of the thermal self-energy in light-front variables, it is exactly the same as the conventional result.'
author:
- 'H. Arthur Weldon'
title: 'Thermal self-energies using light-front quantization'
---
In Ref. [@ADP], Alves, Das, and Perez introduce the technique of light-front quantization into thermal field theory using a heat bath at rest. As shown in [@AW] the appropriate quantization evolves the system in the $x^{+}=(x^{0}\!+\!x^{3})/\sqrt{2}$ coordinate while keeping constant $x^{1}, x^{2},$ and $x^{3}$. (Normal light-front quantization keeps $x^{1}, x^{2},$ and $x^{-}=(x^{0}\!-\!x^{3})/\sqrt{2}$ constant.) The momenta conjugate to $x^{+},
x^{1}, x^{2}, x^{3}$ are $k^{0}, k^{1}, k^{2}, k^{+}$ as can be seen from the identity $$k^{0}x^{0}\!-k^{3}x^{3}\!-{\bf k}_{\bot}\cdot{\bf
x}_{\bot}=\sqrt{2}\,k^{0}x^{+}\!-\sqrt{2}\,k^{+}x^{3} \!-{\bf k}_{\bot}\cdot{\bf x}_{\bot}.$$ In the imaginary time propagator, $x^{+}$ is made negative imaginary: $-i\beta\le
\sqrt{2}x^{+}\le 0$. In the Fourier transform of the propagator $k^{0}=i2\pi nT$ whereas $k^{+}$ and ${\bf k}_{\bot}$ are real. The relation $$(k^{0})^{2}-(k^{3})^{2}-k_{\bot}^{2}=2\sqrt{2}k^{0}k^{+}-2(k^{+})^{2}-k_{\bot}^{2}$$ immediately leads to the propagator $$G(k^{+}, k_{\bot},n)={1\over i4\sqrt{2}\pi n k^{+}-2(k^{+})^{2}-\omega_{k}^{2}},$$ where $\omega_{k}^{2}=k_{\bot}^{2}+m^{2}$ is the transverse energy.
One of the interesting calculations performed by Alves, Das, and Perez [@ADP] using this propagator is the one-loop self-energy for a scalar field theory with self-interaction $g\phi^{3}/3!$. The result of performing the summation over the loop integer $n$ is given in Eq. (40) of Ref. [@ADP]: $$\Pi(p)\!=\!{g^{2}\over 8}\!\!\int \!{dk^{+}d^{2}k_{\bot}\over(2\pi)^{3}}
\,{\coth(X_{1}/2T)-\coth(X_{2}/2T)\over Y}.$$ During the summation the external variable $p^{0}$ is an integer multiple of $2\pi iT$, but after the summation $p^{0}$ is continued to real values. The quantities $X_{1}, X_{2},$ and $Y$ are complicated functions of the integration variables $k^{+}, {\bf k}_{\bot}$ and of the external variables $p^{0},
p^{+}, {\bf p}_{\bot}$: $$\begin{aligned}
X_{1}=&&{\omega_{k}^{2}+2(k^{+})^{2}\over 2\sqrt{2}k^{+}}\nonumber\\
X_{2}=&&{\omega_{k+p}^{2}+2(k^{+}\!+p^{+})^{2}\over
2\sqrt{2}(k^{+}+p^{+})}\nonumber\\
Y=&&2\sqrt{2}\,k^{+}(k^{+}+p^{+})[-X_{1}+X_{2}-p^{0}].\nonumber\end{aligned}$$ The self-energy (1) looks quite different from the usual result and is not manifestly invariant under $O(3)$ rotations of the external momentum ${\bf p}$.
The following will describe a change of integration variable from $k^{+}$ to a new variable $k^{3}$ that is chosen to make the self-energy a function of the two variables $p^{0}$ and ${\bf p}^{2}=p_{\bot}^{2}+(\sqrt{2}p^{+}-p^{0})^{2}$. The final answer is the sum of Eqs. (2), (3), (4), and (5).
(1a) For the term $\coth(X_{1}/2T)/Y$ in Eq. (1), when $k^{+}$ is positive, change to a new integration variable $k^{3}$ defined by $$k^{+}={1\over\sqrt{2}}\bigg[k^{3}+\sqrt{m^{2}+k_{\bot}^{2}+(k^{3})^{2}}\bigg].$$ The range of $k^{3}$ is $-\infty\le k^{3}\le\infty$. The Jacobian of the transformation is $dk^{+}/dk^{3}=k^{+}/E_{k}$, where $E_{k}=\sqrt{m^{2}+k^{2}}$ is the square root displayed above. Under this change, $$X_{1}=E_{k};\hskip0.6cm
Y=k^{+}\big[(E_{k+p})^{2}-(p^{0}+E_{k})^{2}\big],$$ where $E_{k+p}=\sqrt{m^{2}+({\bf k}+{\bf p})^{2}}$. The factor $k^{+}$ in the Jacobian cancels a similar factor in $Y$ and yields a contribution to the self-energy $$\Pi_{1a}={g^{2}\over 8}\!\!\int {d^{2}k_{\bot}\over (2\pi)^{3}}\!
\int_{-\infty}^{\infty}\!{dk^{3}\over E_{k} }\;{\coth(E_{k}/2T)\over
(E_{k+p})^{2}-(p^{0}\!+\!E_{k})^{2}}.$$ This integrand is invariant under simultaneous rotations of the vectors ${\bf k}$ and ${\bf p}$. Thus $\Pi_{1a}$ depends only on $|{\bf p}|$ and $p^{0}$.
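The algebra behind this substitution is elementary but easy to get wrong; the following SymPy sketch (ours, not part of the paper) confirms that $X_{1}$ reduces to $E_{k}$ and that the Jacobian is $k^{+}/E_{k}$:

```python
import sympy as sp

k3 = sp.symbols('k3', real=True)
kperp, m = sp.symbols('k_perp m', positive=True)
Ek = sp.sqrt(m**2 + kperp**2 + k3**2)
kplus = (k3 + Ek) / sp.sqrt(2)                     # the change of variable above

omega2 = kperp**2 + m**2                           # transverse energy squared
X1 = (omega2 + 2*kplus**2) / (2*sp.sqrt(2)*kplus)

print(sp.simplify(X1 - Ek))                        # -> 0
print(sp.simplify(sp.diff(kplus, k3) - kplus/Ek))  # -> 0  (Jacobian dk+/dk3)
```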
(1b) For the term $\coth(X_{1}/2T)/Y$ in Eq. (1), when $k^{+}$ is negative, make the change of variable $$k^{+}={1\over\sqrt{2}}\bigg[k^{3}-\sqrt{m^{2}+k_{\bot}^{2}+(k^{3})^{2}}\bigg],$$ where $-\infty\le k^{3}\le\infty$. The Jacobian of the transformation is $dk^{+}/ dk^{3}=-k^{+}/E_{k}$ and $$X_{1}=-E_{k};\hskip0.6cm
Y=k^{+}\big[(E_{k+p})^{2}-(p^{0}-E_{k})^{2}\big].$$ The corresponding self-energy contribution is $$\Pi_{1b}={g^{2}\over 8}\int {d^{2}k_{\bot}\over (2\pi)^{3}}
\int_{-\infty}^{\infty}{dk^{3}\over E_{k} }\;{\coth(E_{k}/2T)\over
(E_{k+p})^{2}-(p^{0}\!\!-\!E_{k})^{2}}.$$ The sum of Eqs. (2) and (3) is an even function of $p^{0}$.
(2a) In the second term in Eq. (1), $\coth(X_{2}/2T)/Y$, when $k^{+}\!+\!p^{+}>0$, change to $k^{3}$ given by $$k^{+}={1\over\sqrt{2}}\bigg[k^{3}\!-\!p^{0}+\sqrt{m^{2}+({\bf k}_{\bot}\!+\!{\bf
p}_{\bot})^{2} +(k^{3}\!+\!p^{3})^{2}}\bigg],$$ where $-\infty\le k^{3}\le \infty$. Here $dk^{+}/dk^{3}=(k^{+}\!\!+\!p^{+})/ E_{k+p}$, and $$X_{2}=E_{k+p};\hskip0.6cm Y=(k^{+}\!+p^{+})\big[(p^{0}\!-E_{k+p})^{2}\!-E_{k}^{2}\big].$$ This contribution is $$\Pi_{2a}={g^{2}\over 8}\!\int {d^{2}k_{\bot}\over (2\pi)^{3}}
\!\int_{-\infty}^{\infty}{dk^{3}\over E_{k+p} }\;{\coth(E_{k+p}/2T)
\over E_{k}^{2}-(p^{0}\!-E_{k+p})^{2}}.$$
(2b) In the second term in Eq. (1), $\coth(X_{2}/2T)/Y$, if $k^{+}+p^{+}<0$, make the change of variable $$k^{+}={1\over\sqrt{2}}\bigg[k^{3}\!-\!p^{0}-\sqrt{m^{2}+({\bf k}_{\bot}\!+\!{\bf
p}_{\bot})^{2} +(k^{3}\!+\!p^{3})^{2}}\bigg],$$ where $-\infty\!\le \!k^{3}\!\le\! \infty$. Since $dk^{+}/dk^{3}=-(k^{+}\!+p^{+})/E_{k+p}$, and $$X_{2}=-E_{k+p},\hskip0.5cm Y=(k^{+}\!+p^{+})\big[(p^{0}+E_{k+p})^{2}-E_{k}^{2}\big],$$ the contribution to the self-energy is $$\Pi_{2b}={g^{2}\over 8}\!\int\! {d^{2}k_{\bot}\over (2\pi)^{3}}\!
\int_{-\infty}^{\infty}{dk^{3}\over E_{k+p}}\;{\coth(E_{k+p}/2T)
\over E_{k}^{2}-(p^{0}+E_{k+p})^{2}}.$$ The sum of Eqs. (4) and (5) is an even function of $p^{0}$.
The sum of these four contributions Eq. (2), (3), (4), and (5) is the standard answer for the self-energy [@Das]. Therefore the light-front formulation is a different, and in some cases a more efficient [@AW], way of organizing the calculation, but the results are the same.
This work was supported in part by the U.S. National Science Foundation under Grant No. PHY-0099380.
V.S. Alves, Ashok Das, and Silvana Perez, Phys. Rev. D [**66**]{}, 125008 (2002).
H.A. Weldon, Phys. Rev. D [**67**]{}, 0850XX (2003).
A. Das, [*Finite Temperature Field Theory*]{} (Cambridge, University Press, Cambridge, England, 1996), page 24.
|
---
abstract: 'We present ElectroAR, a visual and tactile sharing system for hand skills training. This system comprises a head-mounted display (HMD), two cameras, a tactile sensing glove, and an electro-tactile stimulation glove. The trainee wears the tactile sensing glove that gets pressure data from touching different objects. His/her movements are recorded by two cameras, which are located at the front and top of the workspace. At the remote site, the trainer wears the electro-tactile stimulation glove. This glove transforms the remotely collected pressure data to electro-tactile stimuli. Additionally, the trainer wears an HMD to see and guide the movements of the trainee. The key part of this project is to combine a distributed tactile sensor and an electro-tactile display to let the trainer understand what the trainee is doing. Results show that our system supports high user recognition performance.'
author:
- Jonathan Tirado
- Vladislav Panov
- Vibol Yem
- Dzmitry Tsetserukou
- Hiroyuki Kajimoto
title: 'ElectroAR: Distributed Electro-tactile Stimulation for Tactile Transfer'
---
Introduction
============
There are several tasks that incorporate hand-skill training, such as surgery, palpation, handwriting, etc. We are developing an environment where a skilled person (trainer), who actually works at a different place, can collaborate with a non-skilled person (trainee) in high precision activities. The trainer needs to feel as if he/she exists at the place and works there. The trainee can improve his/her performance with the trainer’s help. This can be regarded as one type of telexistence [@Susumu], in which the remote robot is replaced by the trainee.
We especially focus on the situation that incorporates finger contact. This requires a tactile sensor on the trainee’s side and tactile display on the trainer’s side. The trainee handles real objects, and the tactile sensor-display pair enables the trainer to feel the same tactile experience as the trainee; thus, he/she can command or show what the trainee should do next.
For tactile sensors, a wide variety of sensing pads has been developed in the past for robotics and medical applications, using resistive, capacitive, piezoelectric, or optical elements. These pads have often been placed in gloves to monitor hand manipulation. While some of them are bulky and inevitably deteriorate the human haptic sense, several recent studies have focused on reducing this problem by using thinner and more flexible force-sensing pads [@Beebe]. In this study, we use a similar tactile sensor array with high spatio-temporal resolution.
For tactile display, there have also been several studies on wearable tactile displays [@Choi]. They are simple, yet cannot present distributed tactile information that our sensor can detect. As we believe that distributed tactile information is important, especially when we recognize shapes, we need some way to present distributed tactile information to fingertips. There were also several works on pin-array type tactile display [@Kim; @kang], [@Sarakoglou]. We employ electro-tactile display [@Saunders], [@P.Bach], since it is durable, light-weight, and easy to make small and to extend to several fingers.
This paper is an initial report of our system, especially focuses on how well the shape information can be transmitted through our system.
![ElectroAR. (a) Follower side. (b) Leader side. (c) Cylindrical stick with regular prismatic shape[]{data-label="fig1"}](system_overview.png){width="92.00000%"}
System overview
===============
As shown in Fig.1, the system consists of three main components. On the follower (trainee’s) side, the user wears a tactile sensing glove. The glove gets the pressure data from touching objects. The pressure-sensor data were spatially filtered using equation (\[E:relation\]),
$$\label{E:relation}
p'_{i,j}=\frac{p_{i,j}+p_{i+1,j}+p_{i,j+1}+p_{i+1,j+1}}{4}$$
where *$p$* is a raw pressure value, *$p'$* is the filtered pressure value, and $i$ and $j$ index the width and height axes of the sensor array [@Yem; @Kaji]. The leader’s glove transforms the filtered pressure data to electro-tactile stimuli at the fingertips. The two sides are linked not only by haptic feedback but also by visual and audio feedback. Visual feedback gives the leader side full information about the movement on the follower side, and audio feedback provides the follower side with commands from the leader.
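For illustration, the spatial filter above is simply a $2\times 2$ block average over the raw sensor frame; a minimal sketch (our own code, not the authors' firmware) is:

```python
import numpy as np

def spatial_filter(p):
    """2x2 neighbour average of the filtering equation above.
    For a 5x10 sensor frame the filtered output is 4x9."""
    p = np.asarray(p, dtype=float)
    return 0.25 * (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:])

frame = np.random.rand(5, 10)       # placeholder pressure frame, not real data
filtered = spatial_filter(frame)
```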
Tactile sensor glove
--------------------
We are using a glove that contains three tactile sensor arrays [@Yem; @Kaji]. These sensor arrays are located on the three fingers of the right hand (thumb, index and middle). Fig.1 (a) shows the internal distribution of the pressure sensors in the array of 5 by 10 for each finger. The force range of the sensing elements was not accurately measured, but they can discriminate edge shapes under natural pressing force, as will be shown in the experiment section. The center-to-center distance between each sensing point is 2.0 *$mm$*.
Electro-tactile glove
---------------------
Fig.1 (b) shows the electro-tactile display glove for the leader [@Yem; @Kaji]. The module controller was embedded inside the glove [@Kaji; @electro]. For each finger, the electro-tactile stimulator array has 4 by 5 points. The center-to-center distance between each point is 2.0 *$mm$*. This module was used for tactile stimulation of the thumb, index and middle fingers. The pulse width is set to 100 *$us$*.
### Random Modulator
In order to adjust the intensity of the stimulus, a typical method is to express the intensity by a pulse frequency. However, in practice, the stimulator must communicate with the PC at fixed intervals (in our case at 120 *$Hz$*). Therefore, although it is relatively easy to set the pulse frequency to, for example, 30 *$Hz$*, 60 *$Hz$*, or 120 *$Hz$*, it is a little difficult to perform electrical stimulation of an arbitrary frequency.
Here, we propose a method to change the probability of stimulation as a substitute for setting pulse frequency. For each time interval (in our case 1/120 *$second$*), the system gives the probability of stimulating each electrode. The higher the probability, the higher the average stimulus frequency. The algorithm is expressed as follows.
$$\label{E:relation1}
\textbf{if}\:\;\;rand\:() \leq p\:\;\;\textbf{then}\:\;\; stimulate\:()$$
Here, *$rand \: ()$* is a uniformly distributed random variable from 0 to 1. If it is less than or equal to a value *$p$*, the electrode is stimulated. Otherwise, it is not stimulated. The probability that the electrode is stimulated is hence *$p$*. This calculation is performed for the electrode every cycle, resulting in an average stimulation frequency of *$120\,p$* *$Hz$*.
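A minimal sketch of this stochastic modulation (our illustration; the function and variable names are ours) is given below.

```python
import random

FRAME_RATE_HZ = 120   # the stimulator is updated once per frame, as described above

def modulated_stimulation(p, stimulate, n_frames):
    """Each frame, call stimulate() with probability p, so the average
    stimulation frequency is approximately 120*p Hz."""
    fired = 0
    for _ in range(n_frames):
        if random.random() <= p:
            stimulate()
            fired += 1
    return fired * FRAME_RATE_HZ / n_frames   # empirical frequency in Hz

# example: p = 0.5 should give roughly 60 Hz on average
freq = modulated_stimulation(0.5, lambda: None, n_frames=1200)
```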
The value *$p$* represents the probability of stimulation, and a function representing the relationship between *$p$* and the subjective stimulus intensity *$S$* is required. In general, higher stimulus frequency gives stronger subjective stimulus, so this function is considered to be a monotonically increasing function.
$$\label{E:srelation}
S = F(p)$$
Once *$F$* is obtained, its inverse can be used to determine how the stimulation probability *$p$* should be set to express a desired intensity *$S$*, as follows.
$$\label{E:prelation}
p = F^{-1}(S)$$
View sharing system
-------------------
Ideally, the view sharing system should be bi-directional. However, as the scope of this paper is to examine the ability of our tactile sensor-display pair, we used a simplified visual system only for the trainer.
As shown in Fig.1 (b), the trainer wears an HMD. At the remote side, two cameras are installed to give the trainer full view information, both from the top and from the side. This information is presented on virtual screens located in the frontal and horizontal views. Although the view is not three-dimensional, it can provide sufficient information about the trainee’s hand movement, and the trainer can mimic the movement while perceiving the tactile sensation through the electro-tactile display glove.
Experiment
==========
Preliminary Experiment : Random modulator’s function
----------------------------------------------------
The proposed random modulation method needs a function *$F$*, which can represent the relationship between strength perception and the probability of stimulating each electrode. This preliminary experiment has the objective of collecting data for fitting the function *$F$*. In the whole experiment, the base stimulation frequency was 120 *$Hz$*. For example, if the probability is 1, the stimulation is done at 120 *$pps$* (pulses per second).
### Experimental Method
The strength of stimulation was evaluated by the magnitude estimation method. First, the user’s right index fingertip was placed on the electrode array and exposed to pulsatory stimulation provided by the electrodes. The user was asked to find a comfortable and recognizable level (absolute stimulation level), which was set as 100.
In the second part, we prepared six probability levels: 0.1, 0.2, 0.4, 0.6, 0.8, 1.0. There were five trials for each level, 30 trials in total in random order. Each trial was composed of an initial one-second impulse with the 100 intensity level, followed by a one-second randomly modulated stimulation with assigned probability. After each trial, the user had to judge how much lower or higher the second stimulus was. We recruited seven participants, five males and two females aged 21-27; all right-handed and all without previous training.
### Result
The result in Fig.\[bs1\] (a) shows a sigmoid function tendency. Thus, the data were fitted using Matlab, as shown in Fig.\[bs1\] (b).
Once the function is known, we calculate the inverse function *$F^{-1}$*, which determines the stimulation probability from the desired strength, as described in equation (\[eq:3\]), where *$a,b$* and *$k$* are coefficients of the sigmoid function, *$p$* is the probability of the electrode being stimulated and *$S$* is the subjective stimulus intensity.
$$\label{eq:3}
p=\frac{a-\log\left(\frac{k}{S}-1\right)}{b}$$
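A possible implementation of this fitting and inversion step is sketched below (assuming a standard three-parameter logistic model $S=k/(1+e^{-(bp-a)})$ and using placeholder data, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def F(p, a, b, k):
    """Logistic model of subjective intensity S as a function of probability p."""
    return k / (1.0 + np.exp(-(b * p - a)))

def F_inv(S, a, b, k):
    """Inverse of the logistic model: probability needed for a desired intensity S."""
    return (a - np.log(k / S - 1.0)) / b

# placeholder data: probabilities tested and mean reported intensities
p_data = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
S_data = np.array([55.0, 70.0, 88.0, 96.0, 100.0, 102.0])
(a, b, k), _ = curve_fit(F, p_data, S_data, p0=[1.0, 4.0, 110.0])

p_needed = F_inv(90.0, a, b, k)   # probability that should evoke intensity 90
```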
![Random Modulation. (a) Experimental results. The quantitative relation between cumulative probability distribution and the strength perception percentage estimated for the volunteers. (b) Sigmoid function regression. Experimental data were fitted to sigmoid function by logistic regression[]{data-label="bs1"}](logisticfuncombined11.jpg){width="102.00000%"}
Experiment 1: Static shapes recognition
---------------------------------------
The following two experiments try to validate that our system is capable of transmitting tactile information necessary for tactile skill transfer. In many haptic related tasks, we typically use a pen-type device that we pinch by our index finger and thumb. These can be a scalpel, a driver, a tweezer, or a pencil. In such situations, we identify the orientation of the device with tactile sense.
Our series of experiments try to reproduce part of these situations. Experiment 1 was carried out to assess the electro-tactile display’s capacity for presenting bar-shape in different orientations.
### Experimental Method
Four patterns, which are lines with inclinations of 0, 45, 90 and 135 *$degrees$*, were presented on the right index finger. The experiment was divided into three steps. The first step was to identify a suitable stimulus level. The second step was the training phase, in which each pattern was presented twice to the volunteers.
After a two-minute break, the evaluation stage was performed. Volunteers were presented with a randomly chosen pattern and asked to choose from the four candidates. The recognition time was also recorded. We recruited ten volunteers, nine males and one female, aged 21-27; all right-handed. There were seven trials per pattern, 28 in total.
### Result
Fig.\[bs2\] (a) shows a numerical comparison of the effective recognition level for each proposed pattern. The four patterns have a similar range of recognition, with the 90 *degrees* pattern the lowest (73% accuracy) and the 0 *degrees* pattern the highest (87% accuracy). The result also indicates that the 90 *degrees* pattern is often confused with the 135 *degrees* (10% error), and, in the same way, the 45 *degrees* pattern is confused with the 90 *degrees* pattern (10% error).
Fig.\[bs2\] (b) shows that the recognition time for the majority of the volunteers ranges between 4 and 10 *seconds* for all of the patterns. The median time is close to 6 *seconds*.
![image](experiment11.jpg){width="102.00000%"}
Experiment 2: Dynamic pattern perception
----------------------------------------
Experiment 2 was carried out to assess our system’s capacity to convey dynamic tactile information. As mentioned before, we focused on the situation of handling a bar-shaped device. We examined whether users can identify different “devices” handled with the index finger and thumb.
### Data set acquisition
Four cylindrical sticks with regular prismatic shape in their middle section were designed for the experiment (Fig.1 (c)). The total length of each stick is 150 *mm*, and 28 *mm* for their middle section. Every prism has a different cross-section: circle, triangle, square, and hexagon. The radius of the sticks was 9 *mm* and the circumradius of the prisms 5 *mm*. This special design visually covers the middle section, avoiding the possibility of answering by observation alone. In this way, only the motion of the hand is provided as visual feedback.
Using the tactile sensing glove, one of the authors grasped the stick in a 90 *degrees* orientation, and slowly rolled the bar back and forth between two fingers, repeating ten times. The pressure patterns were recorded, and the video was taken by the two cameras described in the previous section.
### Experimental Method
In the main experiment, the recorded videos were replayed so that the user can mimic the hand motion. Simultaneously, the tactile feedback was delivered to two fingertips (right index finger and thumb) using the recorded pressure patterns.
A set of twenty randomly ordered samples was presented, and the user must associate this visual and tactile sensation with one of the previously indicated shapes. Visual feedback was provided to show the motion of the hand, but at the same time, the shape of the prism was visually hidden. The recognition time was also recorded.
We recruited eight participants, six males and two females aged 21-27; all right-handed and all without previous training.
### Result
Fig.\[bs3\] (a) shows a numerical comparison of the effective shape recognition level for each proposed pattern. We observe that the four patterns have a different range of recognition, with the *square* pattern the lowest (40% accuracy) and the *cylinder* pattern the highest (65% accuracy). The result also indicates that the *square* pattern is frequently confused with the *triangle* pattern (37% error), and the *cylinder* is confused with the *hexagonal* pattern (25% error).
The experiment also includes an analysis of exploration time. Fig.\[bs3\] (b) shows that the recognition time for the majority of the volunteers ranges between 8 and 18 *seconds*. The median time is close to 13 *seconds* for all of the cases, except for the *triangle* pattern, whose median exploration time is 16 *seconds*.
![image](experiment23.jpg){width="102.00000%"}
Conclusion
==========
In this paper, we mainly developed a haptic feedback component of the virtual reality system for remote training. We implemented a simple tactile communication system capable of transmitting shape sensations produced at the moment of manipulating 3D objects with two fingers: the thumb and index finger. The follower side comprises a tactile-sensor glove and the leader side comprises an electro-tactile display glove.
We tested our system with two experiments: static shape perception and dynamic pattern perception, both assuming the situation of grasping a bar-like object. The results confirmed our expectation that this system is able to deliver information about 3D bar-like objects.
There are several limitations to the current work. The visual part of the system is incomplete; the follower side should see the hand gesture of the trainer, and the leader side should see 3D visual information of the follower by the use of 3D display technologies. The tactile display and sensor are slightly too small and must be enlarged to cover the whole fingertips. Roughness and temperature sensations must be considered for providing material sense. All these will be handled in our future work.
Tachi, S.: Tele-existence - Toward Virtual Existence in Real and/or Virtual Worlds, In. Proc. ICAT ’91, pp.85-94, 1991
Kim, S.-C., Kim, C.-H., Yang, G.-H., Yang, T.-H., Han, B.-K., Kang, S.-C., Kwon, D.-S.: Small and lightweight tactile display (SaLT) and its application. In: Proceedings WorldHaptics, pp. 69–74 (2009)
D. Beebe, D. Denton, R. Radwin, and J. Webster, “A silicon-based tactile sensor for finger-mounted applications,” IEEE Trans. Biomed. Eng., vol. 45, pp. 151–159, 1998
Choi, I., Hawkes, E.W., Christensen, D.L., Ploch, C.J., Follmer, S.: Wolverine: a wearable haptic interface for grasping in virtual reality. In. Proc. IROS, pp. 986-993.
Sarakoglou, I., Tsagarakis, N., Caldwell, D.G.: A portable fingertip tactile feedback array – transmission system reliability and modelling. In. Proc. WHC2005, pp. 547-548
F.A. Saunders, In Functional Electrical Stimulation: Applications in Neural Prostheses, ed. by F.T. Hambrecht, J.B. Reswick (Marcel Dekker, New York, 1977), pp. 303–309
P. Bach-y-Rita, K.A. Kaczmarek, M.E. Tyler, J. Garcia-Lara, Form perception with a 49-point electrotactile stimulus array on the tongue. J. Rehab. Res. Dev. 35, 427–430 (1998)
Kajimoto H. (2016) Electro-tactile Display: Principle and Hardware. In: Kajimoto H., Saga S., Konyo M. (eds) Pervasive Haptics. Springer, Tokyo
Yem V., Kajimoto H., Sato K., Yoshihara H. (2019) A System of Tactile Transmission on the Fingertips with Electrical-Thermal and Vibration Stimulation. In. Proc. HCII 2019
|
---
abstract: 'We present our conclusions of the investigation of the self-assembly and growth of an [*array*]{} of $CdS$ nanotubes: a consequence of a fine balance of directed motion, diffusion and aggregation of reacting ${\rm Cd^{+2}}$ and ${\rm S^{-2}}$ ions. In a previous communication [@kiruthiga], we identified the mechanism of an unexpected growth of very uniform $CdS$ nano-cylinders from the end of a nano-channel. Furthermore, the cylinders had a pore along the axis but were closed at one end. This unique phenomenon of self-assembly of [*monodisperse*]{} CdS nano-cylinders had been observed in a rather simple experiment where two chambers containing 0.1 M ${\rm Cd Cl_2 }$ and 0.1 M ${\rm Na_2 S}$ solutions were joined by an array of anodized aluminium oxide (AAO) nano-channels [@shouvik]. Interestingly, the growth of CdS nano-tubes was observed only in the ${\rm Na_2 S }$-chamber. Our previous study focussed on identifying the principles governing the growth of a single nano-tube at the exit point of a single AAO nano-channel. In this communication, we identify factors affecting the self-assembly of a nano-tube in the presence of neighbouring nano-tubes growing out of an array of closely spaced AAO nano-channel exits, a study closer to experimental reality. Our model is not $Cd^{+2}$ or $S^{2-}$ specific, and our conclusions suggest that the experimental scheme can be extended to self-assemble a general class of reacting-diffusing A and B ions with A (in this case ${\rm Cd^{+2}}$) selectively migrating out from a nano-channel. In particular, we note that after the initial prolonged growth of nanotubes, there can arise a severe deficiency of B (${\rm S^{-2}}$) ions near the AAO nano-channel exits, the points where the reaction and aggregation occur to form the $CdS$ nanotube, thus impeding further growth of uniform CdS nano-tubes. We further identify the parameters which can be tuned to obtain an improved crop of monodisperse nanotubes. Thereby we predict the necessary characteristics of reacting systems which can be self-assembled using suitable adaptations of the experiments used to grow CdS cylinders.'
author:
- 'J. Kiruthiga$^1$, Apratim Chatterji$^2$'
title: 'Self assembly of monodisperse CdS nano-cylinders with a pore. '
---
Controlled self assembly of micron to nanometer sized structures of different morphologies has been at the forefront of research interests for over a decade spanning disciplines of physics, chemistry and even biology [@israel; @vermant; @witten; @biop; @einax; @opto]. Recently, a very simple experiment produced the unexpected growth of very uniform Cadmium sulphide (CdS) nano-structures of rather unique morphology as was reported by Varghese and Datta [@shouvik]. The authors took $0.1$ M ${\rm CdCl_2}$ and $0.1$ M ${\rm Na_2S}$ solutions in two different chambers and allowed them to come into contact with each other through some Anodized Aluminium oxide (AAO) nano-channels. The diameter of the AAO nano-channels was varied from $20$ nm to $100$ nm. The radial dimension of the AAO nano-channels is significantly larger than the ionic dimensions as well as the Bjerrum length ($ \sim 7 \AA$ at room temperature), thereby the authors expected the nano-channels to get clogged by CdS precipitate formed by diffusing and reacting ${\rm Cd^{+2}}$ and ${\rm S^{-2}}$ ions inside the AAO nano-channel.
Contrary to their expectations, cylindrical CdS nano-tubes with a pore along the center of the cylinder but closed at one end were found to grow outwards from the ends of the AAO nano-channels. The diameter of the CdS nano-cylinders was of the same order as that of the AAO nano-channels ([**N-C**]{} for brevity); SEM photos of the CdS nano-cylinders can be found in reference [@shouvik]. Furthermore, the CdS nano-structures were found to grow in the ${\rm Na_2 S}$ chamber only and never in the ${\rm CdCl_2}$ chamber. The ${\rm CdS}$ nano-tubes ([**N-T**]{} in short) with a pore continued to grow in the ${\rm Na_2S}$ chamber even if the surface charge on the AAO N-C was reversed during the preparation, clearly establishing that the selective migration of ${\rm Cd^{+2}}$ from the ${\rm CdCl_2}$ to ${\rm Na_2S}$ chamber is not a simple electrostatic potential effect in the presence of ionic screening or otherwise. Traces of ${\rm CdS}$ are not found in the ${\rm CdCl_2}$ chamber, indicating ${\rm S^{-2}}$ ions do not migrate to the ${\rm CdCl_2}$ chamber through the AAO N-C. However, as a control experiment, if the ${\rm CdCl_2}$ solution is replaced by a chamber of pure water connected to the $0.1$ M ${\rm Na_2 S}$ solution, one does detect ${\rm S^{-2}}$ ions in the chamber containing pure water. One can conclude that it is not just AAO-specific properties which prevent ${\rm S^{-2}}$ ions from migrating to the ${\rm CdCl_2}$ chamber. Further experiments by concerned researchers are needed before one can choose one of the following scenarios (amongst many others) as a possible cause of selective transport of ${\rm Cd^{+2}}$: (a) capillary action induced by chemical potential difference in the two chambers leading to directed motion of ${\rm CdCl_2}$ solution; (b) formation of different-sized large hydrated ${\rm S^{-2}}$ and ${\rm Cd^{+2}}$ ion-clusters with very different diffusivities; (c) AAO N-C induced low density of ions inside the channel, thereby increasing the effective charge screening length.
The other surprising aspect of the experimental observations is the unexpected morphology of the $CdS$ nanotubes formed. What could be the physical conditions at the exit of the AAO nano-channel such that a cylinder with a pore along the center and closed at one end would be formed, each cylinder jutting out from an AAO N-C end? This question was investigated in a previous paper of ours, where the main focus was on the growth of a single N-T in the ${\rm Na_2 S}$ chamber fed by ${\rm Cd^{+2}}$ exiting from an isolated AAO N-C. These ideas are applicable to a general class of reacting-diffusing and aggregating-precipitating chemical species, and hence for the rest of the discussion we shall refer to ${\rm Cd^{+2}}$ ions as A and ${\rm S^{-2}}$ ions as B; thereby $ A + B \rightarrow C$, where CdS $\equiv$ C. $Na^{+}$ ions remain passive in the model of the process and will not be explicitly considered in the rest of the paper.
The key ingredient to understand the observed morphology of cylinders with a pore is the assumption that A particles exit the N-C with a finite velocity to enter a bath of B-ions. Such selective transport of fluids through a nano-pore is not an unreasonable assumption to make [@majumdar]. These A-ions then meet the diffusing B-ions to form C, which in turn diffuse around a bit before they find an appropriate site in the forming C-aggregate. At times $t=0 +$ of the growth process, A ions meet lots of B, thereby react and aggregate to form a plug of C which is pushed ahead by the pressure of the fluid exiting the N-C. The region between the plug and the N-C exit becomes B-ion scarce thereafter, and is filled in rapidly by the exiting fluid containing A-ions. These then diffuse out to meet the diffusing-in B ions. $A + B \rightarrow C$ occurs and aggregation of C forms the walls of the N-T cylinder with a closed plug at one end. These ideas were implemented in a lattice model by us and we obtained cylinders with a pore exactly as seen in experiments. A detailed description can be obtained from reference [@kiruthiga].
The main focus of this rapid communication is the growth of multiple C-nanotubes (N-T) from an array of exit points of AAO N-C. The presence of neighbouring growing N-Ts adds an extra complication to the phenomenon. The supply of reacting B-ions gets severely depleted near the AAO N-C exit as the reaction proceeds. Diffusion is the principal pathway by which B-ions can move from the bulk into the regions near the reaction points close to the N-C exits. In this paper we explore the consequences of the slow diffusion of B and other conditions and criteria which effectively get imposed to obtain well-formed N-Ts of C particles, with few C-deficient regions within the cylinder walls. This in turn will help experiments to identify and fine tune conditions to self assemble structures from different chemical species separated by a suitably chosen nano-channel.
We very briefly describe the model of the self-assembling N-T; this will also help define the various physical quantities relevant for our model. The ${\rm Na_2 S}$ chamber is modelled as a 3-D lattice of size $L_x \times L_y \times L_z$, and A and B can reside on only discrete points of a lattice. A and B are self-avoiding; however, a diffusing A (or B) ion can hop into a site occupied by B (or A), which then turns into a C particle. The probability that an A, B or C particle will attempt a random hop to a neighbouring vacant site (mimicking diffusion) is given by $D_A, D_B$ and $D_C$ respectively. The exit points of the N-C (not explicitly modelled) are at $x=0$; A enters the lattice through areas of dimension $2 r_{NC} \times 2 r_{NC}$, and these square exit-areas in turn form a square lattice. This corresponds to N-Cs being arranged in a square lattice in our model for ease of computation, though one can in principle also work with a triangular lattice. The distance from the center of a square to the center of the nearest-neighbour square is $d_c$, and the distance between neighboring N-C walls is $S_{cyl} = d_c - 2 r_{NC}$. $L_y \times L_z$ are chosen such that PBC can be maintained in the $y$, $z$ directions as $S_{cyl}$ is varied.
Flow of A is modelled by making A hop a lattice constant $a$ to the right (i.e., the $+\hat{x}$ direction) every iteration (time $\tau$) if a vacant site or a site occupied by B is available. Thus the distance $a$ hopped in one time step $\tau$ sets the length and time scale of the model. If the lattice site on the right of an about-to-hop A is occupied by another A ion, then the first A tries to “flow” around the A on the right by taking a random step in the $\pm \hat{y}$ or $\pm \hat{z}$ direction if a suitable vacant site exists. At each time-step, A-ions are replenished with probability $p_A$ at each of the lattice points which constitute the square exits of the N-C at $x=0$. The initial density of B ions in the lattice is set to be $\rho_B$, i.e., $\rho_B$ is the fraction of all lattice points occupied by B at time $t=0$. Note, however, that if the exit sites of the N-C are already occupied by A, then no new A-s effectively get introduced at the lattice sites at $x=0$, till they become unoccupied by a hop of A-s in the $+\hat{x}$ direction. The number of iterations $N_{it}$ is chosen such that the average length of the N-T at the end of simulation is $\approx 200 a$, i.e. $p_P \times N_{it} =200$; thus lower values of $p_P$ correspond to longer simulation runs. Correspondingly $L_x =1000a$ for all our runs.
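To make the update rules concrete, the following is a minimal serial sketch of the diffusive hop and the $A+B\rightarrow C$ reaction step (our illustrative code, not the simulation code used for the results below; the flow, replenishment and aggregation steps are omitted):

```python
import random

NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def try_hop(pos, species, occupancy, D):
    """Attempt one diffusive hop with probability D.

    `occupancy` maps lattice sites (tuples) to 'A', 'B' or 'C'.  Sites are
    self-avoiding, except that A hopping onto B (or B onto A) reacts to form C.
    Returns the (possibly new) position and species of the particle."""
    if random.random() >= D:
        return pos, species
    dx, dy, dz = random.choice(NEIGHBOURS)
    target = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
    other = occupancy.get(target)
    if other is None:                            # vacant site: simple hop
        occupancy[target] = species
        del occupancy[pos]
        return target, species
    if {species, other} == {'A', 'B'}:           # reaction A + B -> C
        occupancy[target] = 'C'
        del occupancy[pos]
        return target, 'C'
    return pos, species                          # blocked hop: stay put

# example: one mobile A and one stationary B on an otherwise empty lattice
occ = {(0, 0, 0): 'A', (1, 0, 0): 'B'}
pos, kind = (0, 0, 0), 'A'
for _ in range(100):
    pos, kind = try_hop(pos, kind, occ, D=0.5)
```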
![\[fig0\] Snapshots of nanotubes (N-T) formed from the exit points of 9 nanochannels (N-C) arranged in a square lattice at $x=0$ for two different growth conditions. Blue particles denote C, whereas A is shown as red-dots. B is not shown to maintain clarity. Plot (a) $p_A=0.01$, $p_P=0.025$, $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$, $S_{cyl}=18.$; (b) $p_A=0.0075$, $p_P=0.0125$, $D_A=D_C=0.0125$, $D_B=0.2$, $\rho_B=0.6$, $S_{cyl}=40$. Only $1/4$& $1/8$-th of A&C particles are seen. ](snap_S_cyl_18_1.eps "fig:"){width="0.45\columnwidth"} ![\[fig0\] Snapshots of nanotubes (N-T) formed from the exit points of 9 nanochannels (N-C) arranged in a square lattice at $x=0$ for two different growth conditions. Blue particles denote C, whereas A is shown as red-dots. B is not shown to maintain clarity. Plot (a) $p_A=0.01$, $p_P=0.025$, $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$, $S_{cyl}=18.$; (b) $p_A=0.0075$, $p_P=0.0125$, $D_A=D_C=0.0125$, $D_B=0.2$, $\rho_B=0.6$, $S_{cyl}=40$. Only $1/4$& $1/8$-th of A&C particles are seen. ](snap_S_cyl_40_diffB_2_diffC_0125_1.eps "fig:"){width="0.53\columnwidth"}
![\[fig1\] Plots of the average number density $\rho_C(r)$ (units of $a^{-3}$) of C (CdS) of different sections along the length of N-T as a function of the radial distance $r$ from the center of the N-T. Plots (a), (b) and (c) correspond to different separations $S_{cyl}$ between the N-C exits. The first section is closest to the N-C exit at $x=0$, whereas the 5th-section is farthest from the N-C exit and contains the closed end. The density averaged over all the 5 sections of the N-T is also shown. Values of other parameters are $p_A=0.01$, $p_P=0.025$, $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$. All parameters are probabilities, thereby have no units. ](S_cyl_10.eps "fig:"){width="0.32\columnwidth"} ![\[fig1\] Plots of the average number density $\rho_C(r)$ (units of $a^{-3}$) of C (CdS) of different sections along the length of N-T as a function of the radial distance $r$ from the center of the N-T. Plots (a), (b) and (c) correspond to different separations $S_{cyl}$ between the N-C exits. The first section is closest to the N-C exit at $x=0$, whereas the 5th-section is farthest from the N-C exit and contains the closed end. The density averaged over all the 5 sections of the N-T is also shown. Values of other parameters are $p_A=0.01$, $p_P=0.025$, $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$. All parameters are probabilities, thereby have no units. ](S_cyl_18.eps "fig:"){width="0.32\columnwidth"} ![\[fig1\] Plots of the average number density $\rho_C(r)$ (units of $a^{-3}$) of C (CdS) of different sections along the length of N-T as a function of the radial distance $r$ from the center of the N-T. Plots (a), (b) and (c) correspond to different separations $S_{cyl}$ between the N-C exits. The first section is closest to the N-C exit at $x=0$, whereas the 5th-section is farthest from the N-C exit and contains the closed end. The density averaged over all the 5 sections of the N-T is also shown. Values of other parameters are $p_A=0.01$, $p_P=0.025$, $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$. All parameters are probabilities, thereby have no units. ](S_cyl_40.eps "fig:"){width="0.32\columnwidth"}
Fig. \[fig0\] shows two snapshots of C-particle N-T from our simulations. The radius of the N-C exits is $r_c = 5a$ in both figures; however, the distance $S_{cyl}$ between the N-C walls and, correspondingly, $L_y \times L_z$ are different in Fig. \[fig0\] $a$ and $b$. For Fig. \[fig0\]$a$ $S_{cyl} =18a$, the distance $d_c$ between N-C centers is $d_c=28a$ and $L_x \times L_y \times L_z = 1000 \times 84 \times 84 a^3$, whereas for Fig. \[fig0\]b $S_{cyl} =40a$, $d_c=50a$ and $L_x \times L_y \times L_z = 1000 \times 150 \times 150 a^3$. Figure \[fig0\](a) shows that the N-Ts are well-formed at one end near $x=200a$, but the wall has low C-density near $x=0$, i.e., near the exit of the nano-channel. On the other hand, Fig. \[fig0\]b, with higher values of $D_B$ and $S_{cyl}$, shows clear and well-formed N-Ts with high density of C along the entire length of the N-T. The reasons for such observations have been discussed earlier: depletion of B-ion supplies near the N-C exits as the reaction and growth of N-T gradually proceeds. A higher value of separation $S_{cyl}$ between N-T walls and a higher $D_B = 0.2$ naturally ensure a better supply of B to the reactant A-ions near the N-C exits, leading to better N-Ts, as seen in Fig. \[fig0\]b.
To separate out the effects of varying $S_{cyl}$ and $D_B$ in our model, we fix a relatively low value of the ratio $D_B/D_A=4$ and vary $S_{cyl}$ in subfigures (a), (b) and (c) and plot the average density $\rho_C(r)$ of $C$ as a function of the radial distance $r$ from the center of the respective N-Ts. Furthermore, each of the 9 N-Ts is divided into 5 sections along its length, and we calculate and plot the density profile in each section (averaged over the 9 N-Ts) as a function of $r$. The quantity $\rho_C(r)$ has low values near the N-T center ($r=0$) but peaks near $r=5a$, indicative of the formation of walls of a distinct N-T of C with a pore at the center, and then decays back nearly to $0$ as $r$ increases. A non-zero value of $\rho_C(r)$ near $r \approx 10a$ is indicative of spread of C and the fusion between adjacent N-T, as seen especially for section 4 with $S_{cyl} =10a$. The density profile of section-5 (section containing the closed end and farthest from the NC-exit) is nearly the same for $S_{cyl} =10a, 18a$ and $40a$, but the peak in $\rho_C(r)$ for section-1 gets progressively higher for larger $S_{cyl}$ values. We see a higher density of C in section-4 near $r=5a$ in Fig. \[fig1\] (b) and (c) because just after the initial phase of flow of A and plug formation, there will be a cylinder of surplus A between the plug and the N-C exit, which will promptly react with B at the surface and aggregate to form N-T walls. This will be followed by B-deficiency, leading to lower values of the $\rho_C(r)$ peak for sections $3$, $2$ and $1$.
![\[fig2\] Plots of $\rho_C (r)$ versus $r$. These figures explore the effect of varying the growth rate $g = p_P a/\tau$ and the rate of influx of A particles (by varying the probability $p_A$) on the N-T cylinder formation. In subfigure (a) the ratio $p_P/p_A =2.5$ is held fixed, in (b) $p_P$ is kept fixed while $p_A$ is varied, in (c) $p_p$ take different values maintaining $p_A = 0.01$. Other parameter values are $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$, $S_{cyl}=18a$. The reference case of $p_P =0.025$ and $p_A=0.01$ plotted as blue filled triangles in (b) and (c). ](chG_ratio_A_G_constant.eps "fig:"){width="0.32\columnwidth"} ![\[fig2\] Plots of $\rho_C (r)$ versus $r$. These figures explore the effect of varying the growth rate $g = p_P a/\tau$ and the rate of influx of A particles (by varying the probability $p_A$) on the N-T cylinder formation. In subfigure (a) the ratio $p_P/p_A =2.5$ is held fixed, in (b) $p_P$ is kept fixed while $p_A$ is varied, in (c) $p_p$ take different values maintaining $p_A = 0.01$. Other parameter values are $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$, $S_{cyl}=18a$. The reference case of $p_P =0.025$ and $p_A=0.01$ plotted as blue filled triangles in (b) and (c). ](ch_denA_g025.eps "fig:"){width="0.32\columnwidth"} ![\[fig2\] Plots of $\rho_C (r)$ versus $r$. These figures explore the effect of varying the growth rate $g = p_P a/\tau$ and the rate of influx of A particles (by varying the probability $p_A$) on the N-T cylinder formation. In subfigure (a) the ratio $p_P/p_A =2.5$ is held fixed, in (b) $p_P$ is kept fixed while $p_A$ is varied, in (c) $p_p$ take different values maintaining $p_A = 0.01$. Other parameter values are $D_A=D_C=0.0125$, $D_B=0.05$, $\rho_B=0.6$, $S_{cyl}=18a$. The reference case of $p_P =0.025$ and $p_A=0.01$ plotted as blue filled triangles in (b) and (c). ](chG_denA_01.eps "fig:"){width="0.32\columnwidth"}
The N-T formation by reaction, diffusion and aggregation is critically dependent on the rate of replenishment of A as well as on the growth rate $g = p_P a/\tau$ of the N-T as compared to the diffusion rate of B. Hence, in Fig. \[fig2\] we keep $D_B$ fixed and vary the probabilities $p_A$ and $p_P$ to investigate how the density profile $\rho_C(r)$ is affected by the variation. Low values of $p_A$ ($p_A = 0.0025, p_A=0.005$) keeping the ratio $p_P/p_A = 2.5$ fixed fill up the axis of the N-T at $r=0$ with C, as B gets more time to diffuse in and react with A. For such cases we obtain solid cylinders as seen in Fig. \[fig2\]a and b. In Fig. \[fig2\]b, lowering the rate of replenishment of A by decreasing $p_A$ leads to a shrinking of the radius of the N-T, seen by the shift in the peak of $\rho_C(r)$, whereas a higher value of $p_A =0.025$ ensures an excess of A at the N-C exit which fills up the center of the N-T with A, and so C is excluded from the axial region. Thus a wide tube with a pore at the center is obtained. Moreover, non-zero values of $\rho_C$ at $r=10a$ indicate that this excess A spreads out radially, reacts with B to form C, and gets conjoined with the neighbouring N-T. On the other hand, in Fig. \[fig2\]c with fixed $p_A =0.01$, a low growth rate set by $p_P = 0.008$ again leads to diffusive spread of excess A exiting the AAO N-C, which forms C after reacting with B at radii $r>6 a$, thus again leading to the possibility of fused N-T for $S_{cyl} = 18 a$. Higher growth rates of the N-T shift the peak of $\rho_C (r)$ radially inwards. Thus a balance of the values of $p_P$ and, independently, $p_A$ is essential for the growth of well-separated distinct N-Ts. Of course, $p_P$ and $p_A$ are not free parameters experimentally as they are in simulations, but some control and variation over these quantities could be achieved by varying the material of the N-C (in this case AAO) and/or the radius of the N-C.
![\[fig3\] Plots of $\rho_C (r)$ versus $r$. Subfigure (a) and (b) explore the role of varying rate of diffusion of B ions (controlled by probability $D_B$) on cylinder formation at two different values of the probability $D_C$ of diffusive random hop of C-particle. Subfigures (c) and (d) show the average density of different sections along the length of the N-T at two different values of $D_B$. Other values are $p_A=0.0075$, $p_P=0.0125$, $D_A=0.0125$, $\rho_B=0.6$, $S_{cyl}=18a$. ](ch_diffB_DC0125.eps "fig:"){width="0.4\columnwidth"} ![\[fig3\] Plots of $\rho_C (r)$ versus $r$. Subfigure (a) and (b) explore the role of varying rate of diffusion of B ions (controlled by probability $D_B$) on cylinder formation at two different values of the probability $D_C$ of diffusive random hop of C-particle. Subfigures (c) and (d) show the average density of different sections along the length of the N-T at two different values of $D_B$. Other values are $p_A=0.0075$, $p_P=0.0125$, $D_A=0.0125$, $\rho_B=0.6$, $S_{cyl}=18a$. ](ch_diffB_DC025.eps "fig:"){width="0.4\columnwidth"}\
![\[fig3\] Plots of $\rho_C (r)$ versus $r$. Subfigure (a) and (b) explore the role of varying rate of diffusion of B ions (controlled by probability $D_B$) on cylinder formation at two different values of the probability $D_C$ of diffusive random hop of C-particle. Subfigures (c) and (d) show the average density of different sections along the length of the N-T at two different values of $D_B$. Other values are $p_A=0.0075$, $p_P=0.0125$, $D_A=0.0125$, $\rho_B=0.6$, $S_{cyl}=18a$. ](sec_diffB_05_DC025.eps "fig:"){width="0.4\columnwidth"} ![\[fig3\] Plots of $\rho_C (r)$ versus $r$. Subfigure (a) and (b) explore the role of varying rate of diffusion of B ions (controlled by probability $D_B$) on cylinder formation at two different values of the probability $D_C$ of diffusive random hop of C-particle. Subfigures (c) and (d) show the average density of different sections along the length of the N-T at two different values of $D_B$. Other values are $p_A=0.0075$, $p_P=0.0125$, $D_A=0.0125$, $\rho_B=0.6$, $S_{cyl}=18a$. ](sec_diffB_3_DC025.eps "fig:"){width="0.4\columnwidth"}\
Next, in Fig. \[fig3\]a and \[fig3\]b we observe the change in density profile $\rho_C (r)$ with varying $D_B$ for two different values of $D_C$. Higher values of $D_B$ ($D_B = 0.2,0.3$) lead to sharp peaks in the density for $C$ at $r\approx 5a$, pointing to well-separated distinct N-Ts (refer to Fig. \[fig3\]a). However, increasing $D_C$ leads to diffusive spread of C before settling down to its final position within the N-T, leading to a lower value of the maximum of $\rho_C(r)$ (refer to Fig. \[fig3\]b). But even in this case a large asymmetry in the values of $D_B$ and $D_A$ favours high density of C at the N-T walls. There is a jump in the maximum of $\rho_C(r)$ from $0.5 a^{-3}$ to $0.8 a^{-3}$ as $D_B$ is changed from $0.1$ to $0.2$, corresponding to a change of $D_B/D_A$ from $8$ to $16$, but it does not change significantly thereafter. We have shown earlier [@kiruthiga] that $D_C=0$ is not an amenable condition for good growth of N-T walls; thus a small but finite value of $D_C$ plays a crucial role in the self-assembly.
Figures \[fig3\]c and \[fig3\]d focus on $\rho_C(r)$ for different sections along the length of the N-Ts for two distinct values of $D_B$ at $D_C=0.025$. The 5th section always has a non-zero density of C at $r=0$; see both Figs. \[fig3\]c and \[fig3\]d. The 3-rd section at the middle of the N-T has a higher density of C than the 5th section in Fig. \[fig3\]d for $D_B =0.3$. Even the 1-st section in Fig. \[fig3\]d manages a distinct peak corresponding to a dense wall of C particles at $r=5a$. In contrast, we see in Fig. \[fig3\]c that the 3rd section already feels B-deficiency near the N-C exit for $D_B =0.05$, with lower values of $\rho_C(r)$ in section-3 than section-5. The values of $\rho_C(r)$ are close to zero in the 1-st section near the N-C exit, indicating an acute scarcity of B-ions. We see also a significant spread in the density profile of $C$ at sections $5$ and $3$, thus forming fused N-Ts.
The plots of Fig. \[fig4\] further analyze the data of Fig.\[fig3\]a, but with the added perspective of having two different values of $S_{cyl}$ for $D_B=0.1$ and $D_B =0.2$. For $S_{cyl}=18a$ and $D_B=0.1$ in Fig. \[fig4\]a, the 5th and 3rd sections show a peak around $0.6 a^{-3}$, but $\rho_C(r)$ for section-2 has a wider spread than for section-5, indicating that A has to diffuse further out before it meets B to form C. Section-1 of Fig. \[fig4\]a shows low values of $\rho_C(r)$, but for $S_{cyl}=40a$ in Fig. \[fig4\]b $\rho_C(r)$ for all sections shows a significant jump in value. Moreover, the ratio of the maximum-values of $\rho_C$ for section-1 and 5 is lower for Fig. \[fig4\]b, indicating relatively uniform densities along the length of the N-Ts. The growth of N-Ts of uniform densities is further improved in Fig. \[fig4\]c and d for $D_B =0.2$ with $\rho_C \approx 1 a^{-3}$ at the walls. The values of the maxima of $\rho_C(r)$ are nearly the same in both Fig. \[fig4\]c and d. This is in contrast with Fig. \[fig4\]a and b, where the peak in the average $\rho_C(r)$ (over all sections) is higher in Fig. \[fig4\]b compared to a. Higher values of the B-diffusivity ensure a good supply of B from the bulk, so that $\rho_C(r)$ in Figs. \[fig4\]c and d does not crucially depend on the $S_{cyl}$ values. Of course, distinct N-Ts of (nearly) uniform densities do not form for $S_{cyl}=10 a$.
![\[fig4\] Plots of $\rho_C(r)$ versus $r$. These subfigures explore the average density variation at different sections along the length of the tube for two different values of the separation $S_{cyl}$ between N-Ts and two different values of $D_B$. In contrast to Fig. \[fig1\], $D_B/D_A \approx 10$ in this figure. Other parameters are kept fixed at $p_A=0.0075$, $p_P=0.0125$, $D_A=D_C=0.0125$, $\rho_B=0.6$. ](diffB_1.eps "fig:"){width="0.4\columnwidth"} ![](diffB_1_large_box.eps "fig:"){width="0.4\columnwidth"}\
![](diffB_2.eps "fig:"){width="0.4\columnwidth"} ![](diffB_2_large_box.eps "fig:"){width="0.4\columnwidth"}
In conclusion, an improved crop of self-assembling $CdS$ (C) nano-tubes can be obtained by increasing the distance $S_{cyl}$ between the N-Cs in the AAO template. This allows larger amounts of ${\rm S^{2-}}$ (B) to diffuse in between the growing CdS N-Ts. Alternatively, a lower rate of inflow of A (${\rm Cd^{+2}}$) ions into the B (${\rm S^{2-}}$) chamber promotes uniform growth, but it has to be suitably balanced against the growth rate $p_P a/\tau$, otherwise one obtains solid cylinders without an axial pore, or else fused nano-tubes. Furthermore, we predict that good growth of cylindrical C nano-tubes with high-density walls occurs only when there is a marked asymmetry in the diffusion constants of the $A$ and $B$ ions, with $D_B \gg D_A$ and a non-zero but small value of $D_C$. Experimentally, an excess of Cd on the surface of CdS N-Ts has been observed, in tune with our expectations [@shouvik1], but a stoichiometric analysis of the CdS densities at different sections of the N-Ts is yet to be done.
AC thanks the Nano-Science unit at IISER, funded by DST, India (project no. SR/NM/NS-42/2009), for computational facilities, and Shouvik Datta for discussions. JK acknowledges the summer research fellowship to visit IISER-Pune provided by IAS, Bangalore, India.
[99]{}
J. Kiruthiga and A. Chatterji, J. Chem. Phys., [**138**]{}, 024905 (2013).
A. Varghese and S. Datta, Phys. Rev. E, [**85**]{}, 056104 (2012).
J.N. Israelachvili, Intermolecular and Surface Forces, 3rd Edition, Academic Press (2010).
M. Grzelczak, J. Vermant, E.M. Furst, L.M. Liz-Marzan, ACS Nano, [**4**]{}, 3591 (2010).
T.A. Witten, Rev. Mod. Phys., [**71**]{}, S368 (1999).
A.J. Koch and H. Meinhardt, Rev. Mod. Phys., [**66**]{}, 1481 (1994).
M. Einax, W. Dieterich, P. Maass, Rev. Mod. Phys., [**85**]{}, 921 (2013).
K. Dholakia and P. Zemanek, Rev. Mod. Phys., [**82**]{}, 1767 (2013).
M. Majumdar, N. Chopra, R. Andrews and B.J. Hinds, Nature, [**438**]{}, 44 (2005).
Private communications with Shouvik Dutta.
|
[**Covariance, Geometricity, Setting, and Dynamical Structures on Cosmological Manifold**]{}\
[**Vladimir S. MASHKEVICH**]{}[^1]\
[*Physics Department\
Queens College\
The City University of New York\
65-30 Kissena Boulevard\
Flushing, New York 11367-1519*]{}\
[**Abstract**]{}
The treatment of the principle of general covariance based on coordinate systems, i.e., on classical tensor analysis, suffers from an ambiguity. A preferable formulation of the principle is based on modern differential geometry: the formulation is coordinate-free. The principle may then be called the “principle of geometricity.” In relation to coordinate transformations, there have been confusions around such concepts as symmetry, covariance, invariance, and gauge transformations. Clarity has been achieved on the basis of a group-theoretical approach and the distinction between absolute and dynamical objects. In this paper, we start from arguments based on structures on the cosmological manifold rather than from group-theoretical ones, and introduce the notion of setting elements. The latter create a scene on which dynamics is performed. The characteristics of the scene and the dynamical structures on it are considered.
Introduction {#introduction .unnumbered}
============
In a fundamental work on the general theory of relativity \[1\], Einstein gave due attention to the principle of general covariance as one of the cornerstones of the theory. The principle was formulated as the requirement that the general laws of nature must be expressed in terms of equations valid in all coordinate systems. However, Kretschmann \[2\] argued that equations originally written in any coordinate system may be extended to all coordinate systems and thus made covariant; therefore the principle of general covariance involves no physical content. Einstein concurred with the argumentation \[3\].
The treatment of the principle of general covariance based on coordinate systems, i.e., on classical tensor analysis, as will be seen later, suffers from an ambiguity—as long as the geometric character of quantities is not specified in advance. A more preferable formulation of the principle is based on modern differential geometry: such a formulation is coordinate-free. We quote \[4\]: “Every physical quantity must be describable by a (coordinate-free) geometric object, and the laws of physics must all be expressible as geometric relationships between these geometric objects.” In such a formulation, the principle may be called “principle of geometricity.”
In relation to coordinate transformations, there were confusions around such concepts as symmetry, covariance, invariance, and gauge transformation \[5\]. The point was first made clear in a textbook by Anderson \[6\]. His treatment is based on a group-theoretical approach and the distinction between absolute and dynamical objects (see also \[7\], \[5\]).
In this paper, we start from arguments based on structures on cosmological manifold rather than from group-theoretical ones. Therefore we introduce the notion of setting objects—instead of absolute ones. The setting objects create a scene on which dynamics is performed.
The purpose of the paper is to consider briefly the characteristics of the scene and dynamical structures on it.
Covariance and geometricity
===========================
Cosmological manifold
---------------------
A primary setting object, or element, is a cosmological manifold, i.e., a smooth 4-manifold \[8\] $$M=M^{4}\,,\quad p\in M$$ At this juncture, there is no structure on $M$.
The principle of general covariance in terms of coordinate systems
-------------------------------------------------------------------
The principle of general covariance was originally formulated on the basis of classical tensor analysis, in which it is necessary to exploit coordinate systems. In this approach, tensor quantities are defined in terms of their components and of the transformation rules for the latter under coordinate changes. The principle is formulated as follows \[1\]:
[*“The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).”*]{}
Kretschmann argued that any equation written in an arbitrary coordinate system may be rewritten in any other coordinate system—on the basis of the transformation rules.
Ambiguity
---------
There is an ambiguity in the application of the transformation rules—as long as the tensor character of quantities involved in the equation is not specified. This is an example. Let four equations be given in a coordinate system $x=(x^{\mu})_{\mu=0}^{3}$:
$$f_{1}^{(\nu)}(x)=f_{2}^{(\nu)}(x)$$ where $\nu=0,1,2,3$ is the label of the $f$. Let $\bar{x}=(\bar{x}^{\mu})$ be another coordinate system:
$$M\ni p\leftrightarrow x\leftrightarrow \bar{x}$$ There are different possibilities:
1\) both $f_{1}^{(\nu)}$ and $f_{2}^{(\nu)}$ are functions (i.e., scalars); then
$$f_{n}^{(\nu)}(x)=f_{n}^{(\nu)}(x(\bar{x}))=:\bar{f}_{n}^{(\nu)}(\bar{x})$$ and we have the implication
$$f_{1}^{(\nu)}(x)=f_{2}^{(\nu)}(x)\Rightarrow
\bar{f}_{1}^{(\nu)}(\bar{x})=\bar{f}_{2}^{(\nu)}(\bar{x})$$ or
$$f_{1}^{(\nu)}(p)=f_{2}^{(\nu)}(p)\,,\quad p\in M$$ which is covariant.
2\) both $f_{1}^{(\nu)}$ and $f_{2}^{(\nu)}$ are components of vectors:
$$f_{n}^{(\nu)}=v_{n}^{\nu}$$ then $f_{1}^{(\nu)}(x)=f_{2}^{(\nu)}(x)$ amounts to
$$v_{1}^{\nu}=v_{2}^{\nu}$$ which is fulfilled in all coordinate systems, i.e., is covariant.
3\) $f_{1}^{(\nu)}$ represents a vector, whereas $f_{2}^{(\nu)}$ is a function:
$$f_{1}^{(\nu)}=v^{\nu}\,,\quad f_{2}^{(\nu)}=f^{(\nu)}$$ then
$$f_{1}^{(\nu)}(x)=f_{2}^{(\nu)}(x)\nRightarrow
v^{\nu}=f^{(\nu)}$$
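The three cases can also be illustrated numerically; the following sketch (a purely illustrative two-dimensional example with an assumed nonlinear coordinate change) shows that a componentwise equality between vector components and scalars, valid in one chart, fails in another.

```python
import numpy as np

# Two charts on a 2-manifold: x and xbar, related by an assumed,
# purely illustrative nonlinear change of coordinates.
def xbar_of_x(x):
    return np.array([x[0] + x[1]**2, x[1]])

def jacobian(x):
    # d xbar^mu / d x^nu evaluated at the point with coordinates x
    return np.array([[1.0, 2.0 * x[1]],
                     [0.0, 1.0]])

p_in_x = np.array([1.0, 2.0])   # coordinates of a point p in the x chart

# Case 1: both sides scalars -> their values at p are chart independent,
# so the equality holds trivially in the xbar chart as well.

# Case 3: left-hand side a vector, right-hand side a pair of scalars.
v_x = np.array([3.0, 4.0])      # components of v in the x chart
f_x = np.array([3.0, 4.0])      # scalar values, equal to v's components here

v_xbar = jacobian(p_in_x) @ v_x # vector components transform with the Jacobian
f_xbar = f_x                    # scalar values do not change

print(v_xbar, f_xbar)           # [19. 4.] vs [3. 4.]: the equality is not covariant
```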
The principle of geometricity
-----------------------------
To avoid the ambiguity we have to be based on modern differential geometry rather than on classical tensor analysis, and formulate the principle of geometricity:
Spacetime structure and spacetime aspects of matter objects must be expressed in terms of geometric notions.
Now in the above example, the $f_{n}^{(\nu)}$ should be specified as geometric objects.
Setting
=======
Setting as a scene for performing dynamics
------------------------------------------
We consider a physical theory that involves dynamics on cosmological manifold $M$. (A definition of dynamics will be given below.) All physical objects are classified into two categories: setting objects and dynamical ones. A setting is a family of setting objects; a dynamical system is a set of dynamical objects. The mathematical representation of the setting is independent of the dynamical system, whereas the representation of the latter involves the setting.
Figuratively speaking, a setting is a scene on which dynamics is performed.
Natural and free setting elements
---------------------------------
The setting objects are classified into two subcategories: natural and free objects. The natural setting is induced by dynamics in the sense that the latter involves the former. The free setting elements, if any, play an auxiliary role.
Examples of natural setting elements are the affine structures of Aristotelian and Newtonian spacetimes \[9\], the Minkowskian metric, the gravitational and cosmological constants, and the interaction constants of quantum field theory; examples of free setting elements are reference frames (tetrads) and coordinate systems.
The principle of minimal (free) setting
---------------------------------------
Now we may endow the principle of geometricity with a certain constructive meaning. In view of that principle, we advance the principle of minimal (free) setting:
A (free) setting should include as few elements as possible.
It is the absence of free setting elements that is in accordance with the principle of geometricity.
Objects and related fields
==========================
Objects and fields
------------------
Let $w$ be an object which is an element of a set $\mathcal{W}$,
$$w\in \mathcal{W}$$ A related field is defined on a submanifold \[8\] of cosmological manifold,
$$M'\subset M$$ as
$$w_{M'}\in \prod_{p\in M'}\mathcal{W}_{p}\,,\;\;w_{M'}(p)\in
\mathcal{W}_{p}$$ $\mathcal{W}_{p}$ is related to $p\in M$.
Introduce an abridged notation:
$$w_{M'}(p)=:w(p)$$
Geometric quantities
--------------------
Let
$$F_{\gamma}\,,\;\;\gamma\in\Gamma$$ be a geometric quantity, $\gamma$ being the type of the latter: scalar, vector, tensor, spinor. $F_{\gamma}$ is an element of a space $\mathcal{F}_{\gamma}$,
$$F_{\gamma}\in \mathcal{F}_{\gamma}$$ Introduce
$$\mathcal{F}_{U}:=\bigcup_{\gamma\in\Gamma}\mathcal{F}_{\gamma}\,,\quad
F\in\mathcal{F}_{U}$$ where $F$ is a generic $F_{\gamma}$.
Geometric fields are defined according to the preceding subsection.
Variables, states, and valuables
--------------------------------
Introduce variables:
$$v_{\gamma b}\,,\quad b\in \mathcal{B}$$ variable value sets:
$$\mathcal{V}_{\gamma}\,,\quad v_{\gamma
b}\in\mathcal{V}_{\gamma}$$ $$\mathcal{V}_{U}:=\bigcup_{\gamma\in\Gamma}\mathcal{V}_{\gamma}\,,\quad
v\in\mathcal{V}_{U}$$ and states:
$$\omega\in\Omega$$ In the final analysis, it is the expectation values of variables in states that have immediate physical meaning. Therefore we introduce the notion of valuable:
$$\langle\;\rangle:\mathcal{V}_{U}\times\Omega\rightarrow\mathcal{F}_{U}\,,\quad
(v,\omega)\mapsto\langle v,\omega\rangle\in\mathcal{F}_{U}$$ or, in more detail
$$\langle v_{\gamma b},\omega\rangle\in\mathcal{F}_{\gamma}$$
Classical variables and fields
------------------------------
Introduce the following notation for classical variables:
$$v_{\gamma b}=\xi_{\gamma b}\in\Xi_{\gamma}\,,\quad
b=b^{\mathrm{class}}\in\mathcal{B}^{\mathrm{class}}$$ In classical physics, no distinction is usually made between an abstract variable and its expectation value \[10\]. So we put
$$\langle\xi_{\gamma
b},\omega^{\mathrm{class}}\rangle=:\xi_{\gamma
b}\in\mathcal{F}_{\gamma}$$ For a classical field, we have a notation
$$\xi_{\gamma bM'}\,,\quad\xi_{\gamma b}(p)\,,\;p\in M'$$
Quantum variables and fields
----------------------------
Introduce the following designations: the Hilbert space $\mathcal{H}$,
$$\hat{\mathcal{A}}:=L(\mathcal{H,H})\,,\quad
\hat{A}\in\hat{\mathcal{A}}\,,\quad \hat{A}:\mathcal{H\rightarrow
H }$$ A quantum entity (variable or field) generally consists of two components: classical $F_{\gamma}$ and properly quantum $\hat{A}$. Namely, a quantum variable
$$\hat v_{\gamma b}=F_{\gamma b}\hat{A}_{b}:=F_{\gamma
b}\otimes\hat{A}_{b}\,,\quad
b=b^{\mathrm{quant}}\in\mathcal{B}^{\mathrm{quant}}$$ where $\hat{A}$ as a geometric quantity is considered to be a scalar. For a valuable we have
$$\langle \hat v_{\gamma
b},\omega^{\mathrm{quant}}\rangle=F_{\gamma
b}\langle\hat{A}_{b},\omega^{\mathrm{quant}}\rangle$$ and (for a pure state)
$$\langle\hat{A},\omega^{\mathrm{quant}}\rangle=
(\Psi,\hat{A}\Psi)\in\mathbb{C}\,,\quad\Psi\in\mathcal{H}$$ so that
$$\langle \hat v_{\gamma
b},\omega^{\mathrm{quant}}\rangle\in\mathcal{F}_{\gamma}$$ Generally
$$\hat v_{\gamma}\in\mathcal{F}_{\gamma}\otimes\hat{\mathcal{A}}$$ or
$$\hat v_{\gamma b}=\int\limits_{\mathcal{L}}\mu(dl)F_{\gamma
bl}\hat{A}_{bl}$$ and a valuable
$$\langle \hat v_{\gamma
b},\omega^{\mathrm{quant}}\rangle=\int\limits_{\mathcal{L}}\mu(dl)F_{\gamma
bl}(\Psi,\hat{A}_{bl}\Psi)$$
A quantum field $\hat v_{\gamma bM'}$ may be described as follows:
$$\hat v_{\gamma
b}(p)=\int\limits_{\mathcal{L}(p)}\mu_{p}(dl)F_{\gamma bl
}\hat{A}_{bl}$$
$$\langle \hat v_{\gamma
b}(p),\omega^{\mathrm{quant}}\rangle=\int\limits_{\mathcal{L}(p)}\mu_{p}(dl)F_{\gamma
bl }(\Psi,\hat{A}_{bl}\Psi)$$
Dynamics on cosmological manifold without structure
===================================================
Dynamics
--------
Dynamics on $M'\subset M$ is a family of valuable fields:
$$\{\xi_{\gamma
bM'}:\gamma\in\Gamma\,,\;b\in\mathcal{B}^{\mathrm{class}}\}$$ and
$$\{\langle \hat v_{\gamma
b}(p),\omega^{\mathrm{quant}}\rangle:\gamma\in\Gamma\,,\;
b\in\mathcal{B}^{\mathrm{quant}}\,,\;p\in M'\}$$ Classical dynamics is constructed on the basis of the $\xi$ themselves, quantum dynamics is constructed on the basis of the $\mathcal{F}_{U}\,,\;\mathcal{H}$, and $\hat{\mathcal{A}}$.
Mode-series expansion: Manifold modes
-------------------------------------
Let us introduce the expansion of a quantum field in terms of manifold modes:
$$\hat v_{\gamma b}(p)=\int\limits_{\mathcal{M}}\mu(dm)
\sum_{n\in\mathcal{N}_{\gamma}}F_{\gamma bmn }(p)\hat{A}_{bmn}$$
$$\langle \hat v_{\gamma
b},\omega^{\mathrm{quant}}\rangle=\int\limits_{\mathcal{M}}\mu(dm)
\sum_{n\in\mathcal{N}_{\gamma}}F_{\gamma
bmn}(p)(\Psi,\hat{A}_{bmn}\Psi)$$ The set
$$\{F_{\gamma
mnM'}:m\in\mathcal{M}\,,\;n\in\mathcal{N}_{\gamma}\}$$ of manifold modes forms a complete system on $M'$.
Now we put
$$F_{\gamma mn}(p)=f_{m}(p)e_{\gamma mn}(p)$$ where $f_{mM'}$ is a scalar field on $M'$. The set
$$\{f_{mM'}:m\in\mathcal{M}\}$$ forms a complete system on $M'$, and the set
$$\{e_{\gamma mn}(p):n\in\mathcal{N}_{\gamma}\}$$ forms a complete system at $p\in M'$.
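As a concrete one-dimensional illustration of such a factorization and of the completeness of the system $\{f_{m}\}$, the sketch below (Fourier modes on a periodic interval, chosen purely for illustration) expands a sample scalar field in modes and reconstructs it from the mode coefficients.

```python
import numpy as np

L = 2.0 * np.pi
s = np.linspace(0.0, L, 256, endpoint=False)
phi = np.exp(np.sin(s)) - 1.5 * np.cos(3 * s)       # a sample scalar field

modes = np.arange(-20, 21)                          # truncated complete system {f_m}
f = np.exp(1j * np.outer(modes, s)) / np.sqrt(L)    # f_m(s) = e^{i m s} / sqrt(L)

# coefficients c_m = <f_m, phi> and reconstruction phi ~ sum_m c_m f_m(s)
ds = s[1] - s[0]
c = (f.conj() * phi).sum(axis=1) * ds
phi_rec = (c[:, None] * f).sum(axis=0).real

print(np.max(np.abs(phi - phi_rec)))                # small for a sufficiently complete set
```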
The Cauchy problem and manifold foliation
=========================================
A Cauchy surface and a foliation
--------------------------------
Let $M$ possess a Cauchy surface. Then there exists a foliation of $M$ \[11\], \[12\]:
$$M=T\times S\,,\quad M\ni p=(t,s)\,,\;t\in T\,,\;s\in S$$ where 1-manifold $T$ is a cosmological time and 3-manifold $S$ is a cosmological space. The tangent space $M_{p}$ at a point $p\in M$ is
$$M_{p}=T_{t}\oplus S_{s}\,,\quad p=(t,s)$$ A Cauchy surface
$$M_{\mathrm{C}}=\{t_{0}\}\times S\ni p=(t_{0}\,,s)$$ specifies a unique foliation (by means of synchronous coordinates \[13\]). In the synchronous reference (i.e., in every synchronous reference frame)
$$T_{t}\perp S_{s}$$
As to the choice of a Cauchy surface, notice the following. If metric is given, different surfaces generally give rise to different foliations; however, if the Cauchy problem includes the determination of metric, the choice of the surface in general does not affect physical results.
Thus as long as dynamics is constructed starting from initial conditions, a natural construction involves a Cauchy surface with the associated foliation and synchronous reference.
Now
$$M'=T'\,,\quad T'\subset T$$
Initial conditions
------------------
Initial conditions for classical fields are of the form
$$\{(\xi\,,\;\partial_{t}\xi)_{M_{\mathrm{C}}}\}$$ which corresponds to second order dynamics.
For quantum fields we have
$$\{\hat v_{M_{\mathrm{C}}}\;\;\mathrm{or}\;\; (\hat
v\,,\;\partial_{t}\hat v)_{M_{\mathrm{C}}}\}$$ or
$$\{F_{mnM_{\mathrm{C}}}\;\;\mathrm{or}\;\;(F_{mn}\,,
\partial_{t}F_{mn})_{M_{\mathrm{C}}}\}$$ which corresponds to first or second order dynamics, respectively.
Dynamics on a foliated manifold
===============================
Time dependent quantum objects
------------------------------
As long as cosmological manifold is foliated, it is natural to introduce time dependent quantum objects:
operator
$$\hat{A}_{T'}\in\hat{\mathcal{A}}^{T'}\,,\quad
\hat{A}_{T'}(t)=:\hat{A}(t)\in\hat{\mathcal{A}}$$
state
$$\omega_{T'}\in\Omega^{T'}\,,\quad\omega(t)\in\Omega\,,
\quad\Psi_{T'}\in\mathcal{H}^{T'}\,,\quad\Psi(t)\in\mathcal{H}$$
valuable
$$(\Psi(t),\hat{A}(t)\Psi(t))$$
Mode-series expansion: Space modes
----------------------------------
We introduce the expansion of a quantum field in terms of space modes:
$$\hat v_{\gamma b\{t\}\times
S}=\int\limits_{\mathcal{M}}\mu(dm)\sum_{n\in\mathcal{N_{\gamma}}}F_{\gamma
bmn\{t\}\times S }\hat{A}_{bmn}(t)\,,\quad
b\in\mathcal{B}^{\mathrm{quant}}$$ or
$$\hat v_{\gamma
b}(t,s)=\int\limits_{\mathcal{M}}\mu(dm)\sum_{n\in\mathcal{N}_{\gamma}}F_{\gamma
bmn }(t,s)\hat{A}_{bmn}(t)$$ so that
$$\langle \hat v_{\gamma
b}(t,s),\omega^{\mathrm{quant}}\rangle=\int\limits_{\mathcal{M}}\mu(dm)
\sum_{n\in\mathcal{N}_{\gamma}}F_{\gamma
bmn}(t,s)(\Psi(t),\hat{A}_{bmn}(t)\Psi(t))$$ Next we put
$$F_{\gamma mn}(t,s)=f_{m}(t,s)e_{\gamma mn}(t,s)$$ The $F_{\gamma
mn\{t\}\times S }$ and $f_{m\{t\}\times S}$ are time dependent space modes.
The sets
$$\{F_{\gamma mn\{t\}\times
S}:m\in\mathcal{M}\,,\;n\in\mathcal{N_{\gamma}}\}$$ and
$$\{f_{m\{t\}\times S}:m\in\mathcal{M}\}$$ form complete systems on $\{t\}\times S$, the set
$$\{e_{\gamma mn}(t,s):n\in\mathcal{N}_{\gamma}\}$$ forms a complete system at $(t,s)$.
Now the initial conditions are
$$(\{\hat{A}_{bmn}(t_{0}):b\in\mathcal{B}^{\mathrm{quant}}\,,
\;m\in\mathcal{M}\,,\;n\in\mathcal{N}\}\,,\;\;\Psi(t_{0}))$$ which corresponds to first order dynamics.
Dynamical pictures
------------------
There are these dynamical pictures:
the Schrödinger picture:
$$\Psi_{S}=U_{S}(t,t_{0})\Psi(t_{0})\,,\quad\hat{A}_{S}=\mathrm{const}$$
the Heisenberg picture:
$$\Psi_{H}=\Psi_{S}(t_{0})=\mathrm{const}\,,
\quad\hat{A}_{H}(t)=U_{S}^{\dag}(t,t_{0})\hat{A}_{S}U_{S}(t,t_{0})$$
a generic picture:
$$\Psi(t)=U_{1}(t)\Psi_{S}(t_{0})\,,\quad\hat{A}(t)=
U_{2}^{\dag}(t)\hat{A}_{S}U_{2}(t)\,,\quad
U_{2}(t)U_{1}(t)=U_{S}(t,t_{0})$$
Note that a Schrödinger variable $\hat v_{S}$ depends on $t$ through $F$.
Setting elements
================
Manifold
--------
Let us list setting elements in the above structures.
Cosmological manifold $M^{4}$ is a primary natural setting element involved in all structures.
Initial conditions and Cauchy surface
-------------------------------------
Dynamics implies initial conditions, and the latter involve a Cauchy surface. So initial conditions and a Cauchy surface are natural setting elements.
Foliation
---------
In general, a foliation is not unique. So let
$$M=T\times
S\;\;\mathrm{and}\;\,M=\overline{T}\times\overline{S}\,,\quad M\ni
p\leftrightarrow(t,s)\leftrightarrow(\bar{t},\bar{s})$$ Then there are modes
$$f_{m\{t\}\times S}=:f_{mt}(s)$$ and
$$\bar{f}_{\bar{m}\{\bar{t}\}\times\overline{S}}=:
\bar{f}_{\bar{m}\bar{t}}(\bar{s})$$ Let
$$\varphi(t,s)=\int\limits_{\mathcal{M}}\mu(dm)c_{m}(t)f_{mt}(s)$$ We have
$$\bar{\varphi}(\bar{t},\bar{s})=\bar{\varphi}(\bar{t}(t,s),\bar{s}(t,s))
=:\tilde{\bar{\varphi}}(t,s)=
\int\limits_{\mathcal{M}}\mu(dm)\tilde{\bar{c}}_{m}(t)f_{mt}(s)$$ Thus
$$\bar{f}_{\bar{m}\bar{t}}(\bar{s})=\int\limits_{\mathcal{M}}\mu(dm)
\tilde{\bar{c}}_{\bar{m}m}(t)f_{mt}(s)$$ The $\tilde{\bar{c}}_{\bar{m}m}(t)$ are functions of $t$, so that different foliations are not equivalent, and generally a foliation is a free setting element. But as long as a Cauchy surface is specified, the related foliation is a natural setting element.
Setting for manifold and space modes
------------------------------------
The setting for manifold modes is a choice of them and initial conditions for them. The setting is free.
The setting for space modes is a foliation $M=T\times S$ and initial conditions for $\hat{A}_{mn}$ and $\Psi$. As long as a Cauchy surface is specified, the setting is natural.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank Alex A. Lisyansky for support and Stefan V. Mashkevich for helpful discussions.
[99]{}
A. Einstein, Die Grundlage der allgemeinen Relativitätstheorie, Ann. Phys., , 769-822 (1916); The Foundation of the General Theory of Relativity, in: The Principle of Relativity (Dover Publications, Inc.).
E. Kretschmann, Ann.Phys., , 575 (1917).
A. Einstein, Ann.Phys., , 241 (1918).
Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation (W.H. Freeman and Company, San Francisco, 1973).
Norbert Straumann, General Relativity With Applications to Astrophysics (Springer, 2004).
James L. Anderson, Principles of Relativity Physics (Academic Press, 1967).
Michael Friedman, Foundations of Space-Time Theories (Princeton University Press,1983).
John M. Lee, Introduction to Smooth Manifolds (Springer, 2003).
M. Crampin, F.A.E. Pirani, Applicable Differential Geometry (Cambridge University Press, 1986).
L.D. Faddeev, O.A. Yakubowsky, Lectures on Quantum Mechanics (Leningrad University Press, 1980) (in Russian).
S.A. Fulling, Aspects of Quantum Field Theory in Curved Space-Time (Cambridge University Press, 1989).
Theodore Frankel, The Geometry of Physics (Cambridge University Press, 2004).
L.D. Landau, E.M. Lifshitz, Quantum Mechanics (Pergamon, 1977).
[^1]: E-mail: Vladimir\_Mashkevich@qc.edu
|
---
address: |
Departamento de Física, Universidad de Oviedo,\
C/Calvo Sotelo s/n, Oviedo, Spain
author:
- 'S. Folgueras for the CMS Collaboration'
title: 'SEARCH FOR NEW PHYSICS USING EVENTS WITH TWO SAME-SIGN ISOLATED LEPTONS IN THE FINAL STATE AT CMS'
---
Introduction
============
Events with same-sign dilepton final states are very rare in the SM context, but they appear naturally in many different new-physics scenarios such as SUSY, where two same-sign leptons can be produced in the decay chain of supersymmetric particles.
Two different scenarios are considered: SUSY processes dominated by strong production of gluinos and squarks, where $3^{rd}$-generation squarks are lighter than the other squarks, resulting in an abundance of top and bottom quarks produced in the decay chain; and direct electroweak production of charginos() and neutralinos(), assuming that the strongly interacting particles are too heavy to play a role, resulting in events with multiple leptons in the final state. In either case the SUSY decay chain ends with the LSP (), which escapes undetected and therefore contributes strongly to the  of the event.
In general, same-sign dileptons can be particularly sensitive to SUSY models with compressed spectra, where the mass of the LSP is very close to the mass of the produced supersymmetric particle: either the latter is produced via strong production (squarks or gluinos) and is accompanied by high hadronic activity, or it is produced via electroweak production (charginos or neutralinos) and almost no hadronic activity is present. We therefore search for SUSY using same-sign dilepton events with or without hadronic activity and large , and we interpret the results in the context of various SUSY models. What we present here is just a short summary of two analyses performed at CMS [@CMS]; more details can be found in the original publications [@RA5; @SUS12022].
Event selection
===============
We require two isolated same-sign leptons ($e$ or $\mu$) with $>$ 20 , consistent with originating from the same vertex. Events are collected using dilepton triggers, and a veto on a third lepton is applied to suppress Drell-Yan production. The isolation of the leptons is computed with particle-flow information, and an event-by-event correction is made to account for the effect of multiple pp interactions in the same bunch crossing (pileup). This correction consists of subtracting the estimated pileup contribution in the isolation cone.
The baseline selection differs slightly depending on which signature we are considering: strong (SS+b) or electroweak (EWK) production of SUSY. For the former we expect a high b-jet multiplicity, so we also require the presence of at least two b-tagged jets (with $>$ 40 ). For the latter, hardly any hadronic activity is expected, therefore we select events with $>$ 120 (coming from the two LSPs) to suppress background events. The signal regions are defined by imposing tighter cuts on the number of (b) jets, the scalar sum of the  of all identified jets () and  for the analysis targeting strong production of SUSY, and on  and the number of b-jets for the one targeting electroweak production.
Figure \[fig:baseline\] shows all the events passing the selection in the  and  plane.
![Distribution of  versus  for the events passing the baseline selection. The left plot shows the events passing the selection for the analysis targeting strong production and the right plot the ones targeting electroweak production of SUSY.[]{data-label="fig:baseline"}](baseline_ssb){width="0.9\linewidth"} ![](baseline_ewk){width="0.9\linewidth"}
Background estimation
=====================
There are several sources of SM background to potential new-physics signals: events with one or two fake leptons, opposite-sign events in which the charge of one of the electrons has been mismeasured, and events with two same-sign prompt leptons.
A description of the relevance of these backgrounds and of how they are estimated is presented in this section. The validity of these estimation methods is verified in the baseline regions, which are background dominated.
Backgrounds with one or two fake leptons
----------------------------------------
Backgrounds with one or two fake leptons include processes such as semi-leptonic $\mathrm{t\bar{t}}$ or $W+jets$, where one of the leptons comes from a heavy-flavor decay, a misidentified hadron, a muon from a light-meson decay in flight, or an electron from an unidentified photon conversion. We estimate this background by first measuring the probability of a lepton being fake or prompt, using a QCD- or Z-enriched sample respectively. We then apply those probabilities to events passing the full kinematic selection but in which one or two of the leptons fail the isolation requirements. About 40-50% of the total background is due to these processes, and we assign a 50% systematic uncertainty to account for the limited statistics of the control sample as well as for our limited knowledge of its composition.
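Schematically, the extrapolation from the control sample (leptons failing the tight isolation) to the signal region can be written as a per-event weight built from the measured fake rates. The sketch below is a simplified "tight-to-loose" version that ignores the prompt-rate correction of the full method, so the exact weights used in the analysis may differ.

```python
def fake_rate_weight(fail_iso, fake_rate):
    """Tight-to-loose weight for a same-sign dilepton event.

    fail_iso  : list of booleans, True if the lepton fails the tight
                isolation requirement (but passes the loose one).
    fake_rate : list of per-lepton fake rates f, measured in a
                QCD-enriched control sample (typically binned in pT, eta).

    Events with one loose-not-tight lepton enter with weight f/(1-f);
    events with two such leptons enter with weight -f1*f2/((1-f1)(1-f2))
    to remove the double counting of the double-fake contribution.
    """
    factors = [f / (1.0 - f) for fail, f in zip(fail_iso, fake_rate) if fail]
    if len(factors) == 1:
        return factors[0]
    if len(factors) == 2:
        return -factors[0] * factors[1]
    return 0.0  # both leptons tight: the event belongs to the signal region itself
```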
Events with charge mis-identification
-------------------------------------
These are events with opposite-sign isolated leptons where the charge of one of the leptons (typically an electron) is misreconstructed due to severe bremsstrahlung in the tracker material (this effect is negligible for muons). We estimate this background by selecting opposite-sign $ee$ or $e\mu$ events passing the full kinematic selection, weighted by the probability of electron charge misassignment. This probability is measured in a $Z\rightarrow e e$ sample in data by simply calculating the ratio between same-sign and opposite-sign events in such a sample, and it is validated in MC; it is of the order of 0.02 (0.2)% for electrons in the barrel (endcap). This source only accounts for 5-10% of the total background. A 20% systematic uncertainty on this background is considered to account for the dependence of the probability.
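The weighting of the opposite-sign events can be sketched as follows; this is a simplified illustration, and the binning of the mis-assignment probability and the exact combination used in the analysis may differ.

```python
def charge_flip_weight(p_flip_1, p_flip_2):
    """Weight applied to an opposite-sign ee or e-mu event passing the full
    kinematic selection, to predict the same-sign yield from charge
    mis-assignment.  p_flip_i is the charge mis-assignment probability of
    electron i (zero for muons), measured in Z -> ee data: roughly 0.02%
    in the barrel and 0.2% in the endcap."""
    # probability that exactly one of the two lepton charges is flipped
    return p_flip_1 * (1.0 - p_flip_2) + p_flip_2 * (1.0 - p_flip_1)
```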
Rare SM processes.
------------------
These include SM processes that yield two same-sign prompt leptons, such as $\mathrm{t\bar{t}W}$, $\mathrm{t\bar{t}Z}$, and $\mathrm{W}^\pm\mathrm{W}^\pm$, among others. These processes constitute about 30-40% of the total background. $\mathrm{WZ}$ production is also very relevant for the EWK analysis, where it constitutes nearly 40% of the total background.
All these backgrounds are obtained from Monte Carlo simulations. For $\mathrm{WZ}$ production the MC is validated in data, and a 20% systematic uncertainty is assigned to account for the differences. The other backgrounds are assigned a 50% systematic uncertainty, as their cross sections are poorly known.
Results
=======
Strong production of SUSY
-------------------------
The search is based on comparing observed and predicted yields in 8 signal regions with different requirements motivated by various possible new physics models. The definition of these search regions, as well as the observed and predicted yields are shown in Table \[tab:RA5\].
None of the search regions shows any significant excess over the SM background predictions; therefore we interpret the results in several physics models [@RA5]. For example, Figure \[fig:interpretation\] shows the exclusion regions for gluino-pair production decaying into on-shell stops. We are able to exclude gluino masses up to 1  in such models.
Electroweak production of SUSY
------------------------------
This analysis targets  production decaying via sleptons. This process naturally gives three-lepton final states. However, when the mass of the intermediate slepton is too close either to the  or to the , the third lepton is too soft and the event is missed by the tri-lepton analysis. Such events can be recovered with the same-sign analysis. We assume that the strongly interacting particles do not play a role in this scenario.
Figure \[fig:interpretation\] shows the exclusion region for pair production decaying via sleptons, when the mass of the slepton is very close to the mass of the LSP. One can see that the same-sign analysis (red-dashed line) drives the exclusion near the diagonal. We are able to exclude chargino/neutralino masses up to roughly 600 .
![Exclusion regions for gluino-pair production, in the $m(\widetilde{t}_1)$ vs $m(\widetilde{g})$ plane, where each of the gluinos decays $\widetilde{g}\rightarrow t\bar{t}\LSP$ with on-shell stops (left). Exclusion region for  production with intermediate sleptons, assuming that the mass of the slepton is very close to the mass of the  (right).[]{data-label="fig:interpretation"}](interpretation_ssb){width="0.9\linewidth"} ![](interpretation_EWK_05){width="0.9\linewidth"}
Conclusions
===========
We have presented the results of a search for new physics in events with same-sign dileptons using the CMS detector at the LHC. No significant deviations from the standard model expectations are observed. The results are used to set exclusion limits in several SUSY models, assuming either strong-dominated or electroweak-dominated production. With the former we are able to probe gluino masses up to 1  and with the latter we exclude chargino/neutralino masses up to roughly 600 .
Acknowledgments {#acknowledgments .unnumbered}
===============
We wish to congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We thank the technical and administrative staff at CERN and other CMS institutes, and acknowledge support from: FMSR (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); Academy of Sciences and NICPB (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF and WCU (Korea); LAS (Lithuania); CINVESTAV, CONA- CYT, SEP, and UASLP-FAI (Mexico); MSI (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MON, RosAtom, RAS and RFBR (Russia); MSTD (Serbia); MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK (Turkey); STFC (United Kingdom); DOE and NSF (USA).
References {#references .unnumbered}
==========
[99]{} CMS Collaboration, “The CMS experiment at the CERN LHC”, JINST 3 S08004 (2008). CMS Collaboration,“Search for new physics in events with same-sign dileptons and $b$ jets in $pp$ collisions at $\sqrt{s}=8$ TeV”, JHEP [**1303**]{} (2013) 037 \[arXiv:1212.6194\]. CMS Collaboration,“Search for direct EWK production of SUSY particles in multilepton modes with 8TeV data”, CMS-PAS-SUS-12-022.
|
[**** ]{}\
Krushi Patel^1^, Kaidong Li^1^, Ke Tao^2^, Quan Wang^2^, Ajay Bansal^3^, Amit Rastogi^3^, Guanghui Wang^1^\
School of Engineering, University of Kansas, Lawrence, KS 66045\
The First Hospital of Jilin University Changchun 130000, China\
The University of Kansas Medical Center, Kansas City, KS 66160\
Corresponding author. ghwang@ku.edu
Abstract {#abstract .unnumbered}
========
Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a ‘polyp’. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the type of polyps is critical to prevent cancer and change outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variant texture, appearance, and overlapping morphology between polyps. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of the state-of-the-art general object classification models for polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences composed of two types of polyps: hyperplastic and adenomatous. Our results demonstrate that the state-of-the-art CNN models can successfully classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
Introduction {#introduction .unnumbered}
============
Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States [@state]. According to the American Cancer Society, a total of 101,420 new cases of colon cancer and 44,180 new cases of rectal cancer occurred in 2019. The lifetime risk of developing colorectal cancer is about 4.99% for men and 4.15% for women [@state]. Colorectal cancer is the second leading cause of cancer-related deaths. Colon cancer is expected to cause about 51,020 deaths in the United States during 2020.
Polyps are considered the harbinger of colorectal cancer. Early detection and recognition of polyps can reduce deaths caused by colorectal cancers. Broadly speaking, colorectal polyps can be divided into two categories: non-neoplastic (hyperplastic) and neoplastic (adenomatous) [@shinya1979morphology]. Hyperplastic polyps do not predispose to cancer, whereas adenomatous polyps are considered pre-cancerous as they account for approximately 85% [@kim] of sporadic colorectal cancers via the adenoma-carcinoma pathway. Therefore, adenomatous polyps are removed during colonoscopy to prevent future cancer. Thus, differentiating the two types of polyp histology is critical to determine which patients need close follow-up at shorter intervals and which patients can be surveyed every 10 years.
Colonoscopy is the main diagnostic procedure to detect and recognize polyps located on colorectal walls. Accurate detection and correct classification depend on the skills and experience of the endoscopists. However, even for experienced endoscopists, working on conventional colonoscopy for long hours leads to mental and physical fatigue and degraded analysis and diagnosis. Other factors that may affect the classification results include varying illumination conditions, variant texture and appearance, and occlusion. Moreover, different types of polyps are hard to differentiate since they may exhibit a very similar appearance with only subtle differences, as shown in Fig \[fig:class\_examples\]. It requires a thorough examination of fine details to distinguish one category from the other. Therefore, an accurate and effective automatic computer-aided system for colonoscopy is required to help endoscopists detect and classify the type of polyps. This automated recognition mechanism can also be used as a second opinion to determine whether a further biopsy is required for diagnosis, which in turn will greatly reduce the cost of diagnosis. In addition, such an intelligent system can also be used as an educational resource for gastroenterology trainees to reduce the learning curve and cost.
------------------------------------------------------------- ------------------------------------------------------------- ---------------------------------------------------------------
![image](figures/fig1/test8_12){width="30mm" height="26mm"} ![image](figures/fig1/786){width="30mm" height="26mm"} ![image](figures/fig1/test22_114){width="30mm" height="26mm"}
![image](figures/fig1/246){width="30mm" height="26mm"} ![image](figures/fig1/test6_55){width="30mm" height="26mm"} ![image](figures/fig1/465){width="30mm" height="26mm"}
------------------------------------------------------------- ------------------------------------------------------------- ---------------------------------------------------------------
In recent years, deep learning algorithms have shown outstanding performance on various generic datasets [@li2019object]. In some computer vision tasks, including strategic board games, Atari games, and generic object recognition, deep learning even outperforms human accuracy. However, there is a significant difference between generic images and medical images, as medical images contain more quantitative information and the objects have no canonical orientation. In addition, acquiring medical data is expensive and labeling them requires the involvement of domain experts. In this work, although we have used a total of 27,048 images to train our models, they are extracted from only 119 video sequences, with each sequence containing one polyp. In short, we have only 119 distinct polyps, imaged from various viewpoints under varying lighting conditions, to train our models.
Based on the results of our previous studies [@mo2018efficient][@li2019polyp] and the results of the MICCAI Endoscopic Vision Challenge [@bernal2017comparative], we can see that the state-of-the-art object detection models can already yield very high precision in polyp detection. In this study, we assume the polyps have been detected and focus our study only on classification.
In our previous work [@li2019polyp], we collected and annotated an endoscopic dataset, which contains 157 video sequences and a total of 35,981 frames. We also labeled the ground truth of the polyp location and histology class. In order to evaluate the performance of different classification models, we generate two polyp datasets from the annotated endoscopic dataset. As shown in Fig \[fig:typeofinput\], one dataset (set-1) only contains the cropped polyp patches from the original video frames; the other dataset (set-2) contains not only the cropped polyps but also around 55% background around the polyps. As described in [@nice], different types of polyps differ in their surrounding and vascular patterns and in the color of the vessels and background. Therefore, we generate set-2 to study the effect of background features [@nice] in polyp classification.
------------------------------------------------------- ------------------------------------------------------------------ --------------------------------------------------------------
![image](figures/fig2/17){width="38mm" height="32mm"} ![image](figures/fig2/test20_17back){width="38mm" height="32mm"} ![image](figures/fig2/test20_17){width="38mm" height="32mm"}
(a) (b) (c)
\[6pt\]
------------------------------------------------------- ------------------------------------------------------------------ --------------------------------------------------------------
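A minimal sketch of how the two input variants can be produced from the annotated bounding boxes is given below; the helper name and the way the 55% figure is mapped onto a symmetric margin are illustrative assumptions, not the exact preprocessing used to build the datasets.

```python
from PIL import Image

def crop_polyp(frame_path, box, margin=0.0):
    """Crop a polyp patch from a video frame.
    `box` is the annotated bounding box (x_min, y_min, x_max, y_max);
    margin = 0.0 reproduces set-1 (polyp only), while a margin of about
    0.55 adds extra background around the polyp, as in set-2."""
    img = Image.open(frame_path)
    x0, y0, x1, y1 = box
    dx, dy = (x1 - x0) * margin / 2.0, (y1 - y0) * margin / 2.0
    x0 = max(0, int(x0 - dx)); y0 = max(0, int(y0 - dy))
    x1 = min(img.width, int(x1 + dx)); y1 = min(img.height, int(y1 + dy))
    return img.crop((x0, y0, x1, y1))
```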
Fig \[fig:typeofinput\] illustrates the difference between the two generated datasets. We have evaluated and compared the performance of six classification models on these two datasets. Our results show that there is no significant difference in classification accuracy between the two datasets. We have also analyzed the performance based on both individual frames and individual sequences. The major contributions of this work include:
- We have generated two datasets for polyp classification. To the best of our knowledge, there are no such datasets available in the literature.

- We have implemented six state-of-the-art deep-learning-based image classification models and compared their performance on the two datasets. This is the first comparative evaluation of different convolutional neural network (CNN) models for polyp classification.

- This study can serve as a baseline for future studies on polyp classification. The trained classification models, as well as the test dataset, will be made freely available to the research community on the author’s website.
Related Work {#related-work .unnumbered}
------------
Various approaches and models have been proposed for polyp detection in colonoscopy. A previous comparative validation study on the MICCAI 2015 polyp detection challenge covered models using handcrafted features as well as deep learning models. However, to the best of our knowledge, most previous works focused on polyp detection rather than classification, due to the unavailability of suitable datasets. Very few models have been proposed for polyp classification, i.e., classifying a polyp as hyperplastic or adenomatous. Previous polyp classification approaches can be broadly divided into two categories: handcrafted-feature-based and deep-learning-based models.
**Conventional Computer Vision Approaches:** Most of the polyp classification work in the literature is based on handcrafted features. Some approaches employ a pit-pattern classification scheme to classify polyps [@wimmer2016evaluation] into two classes: normal mucosa and hyperplastic. Hafner [*et al.*]{} [@hafner2015local] went beyond the conventional pit-pattern approach and exploited a local fractal dimension (LFD) based strategy. Uhl [*et al.*]{} proposed a blob-adapted local fractal dimension (BA-LFD) approach [@uhl2014shape] to classify polyps. The maximal-minimal filter bank strategy proposed in [@wimmer2016novel] outperformed the BA-LFD based approach.
**Neural Network Based Approaches:** The study [@ribeiro2016exploring] provided a first review of various deep-learning-based models for polyp classification. They compared the performance of VGG-VD [@simonyan2014very], CNN-F [@chatfield2014return], CNN-M [@chatfield2014return], CNN-S [@chatfield2014return], AlexNet [@krizhevsky2012imagenet], and GoogLeNet [@szegedy2015going] on the i-Scan1, i-Scan2 and i-Scan3 databases. The paper [@korbar2017deep] also utilized a CNN model to classify polyps, but employed whole-slide images instead. The study [@akbari2018classification] classified polyps into informative and non-informative categories instead of hyperplastic and adenomatous.
**Deep Learning Models:** Inspired by the success of AlexNet [@krizhevsky2012imagenet] in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, convolutional neural networks (CNN) have attracted a lot of attention and been successfully applied to image classification [@cen2019boosting][@cen2019dictionary][@wu2019unsupervised], object detection [@ma2020mdfn][@li2019object][@ma2018mdcn], depth estimation [@he2018learning][@he2018spindle], image transformation [@xu2019toward][@xu2019adversarially], and crowd counting [@sajid2020zoomcount][@sajid2020plug]. VGGNets [@simonyan2014very] and GoogLeNet [@szegedy2015going], the ILSVRC 2014 winners, proved that deeper models could significantly increase the representational ability. ResNet [@he2016deep], the ILSVRC 2015 winner, proposed a skip-connection-based residual module to solve the vanishing-gradient problem of very deep models. Highway networks [@srivastava2015highway] proposed a gating mechanism to regulate the flow of information through shortcut connections. ResNeXt [@xie2017aggregated] employed a multi-branch architecture and showed that cardinality is an essential factor in CNN architecture. Huang [*et al.*]{} proposed DenseNet [@huang2017densely], where each layer is connected to all subsequent layers. The winner of ILSVRC 2017, SENet [@hu2018squeeze], achieved 82.7% top-1 accuracy by improving channel interdependencies at almost no computational cost. Recently, EfficientNet [@tan2019efficientnet] introduced a new scaling method for CNNs and achieved improved performance.
Most of the proposed CNN models are based on the following three approaches: (1) increasing the depth (number of layers) and/or width of the block architecture; (2) introducing an attention module; and (3) using a neural architecture search mechanism. The models chosen in this work are classical models covering all three approaches. In the task of object detection, classification models are used as the backbone network, and detection performance largely relies on the backbone. The most widely adopted backbone networks include VGG, ResNet, and DenseNet; therefore, we include all three models in our study. In addition, we also include SENet and MnasNet: SENet employs a novel channel-wise attention mechanism, while MnasNet uses neural architecture search. These models demonstrate the performance of the state-of-the-art CNN models in polyp classification.
Materials and methods {#materials-and-methods .unnumbered}
=====================
Convolutional neural networks have been widely applied to various computer vision tasks including object detection and classification. A general CNN network consists of different blocks, including an input layer, an output layer, and a number of hidden layers made up of convolution layers, pooling layers, and activation layers. CNNs adaptively learn spatial hierarchies of features via back propagation through these building blocks. In this section, we make a brief review of the classical object classification models used in this comparative study. These models include VGG [@simonyan2014very], ResNet [@he2016deep], DenseNet [@huang2017densely], Squeeze-and-Excitation Network (SENet) [@hu2018squeeze] and MnasNet [@tan2019mnasnet].
### VGG {#vgg .unnumbered}
VGG Net [@simonyan2014very] was proposed by Simonyan and Zisserman to improve classification performance by adding more convolutional layers to increase the depth of the network. This was made possible by replacing large filters ($11 \times 11$ and $5 \times 5$) with multiple stacked $3 \times 3$ filters. A max pooling layer is used to reduce the spatial dimensions every few layers. The stacked $3 \times 3$ convolution layers are followed by three back-to-back fully connected layers and a softmax layer at the end. VGG is the first network structure that adopts a block-based architecture. ReLU non-linearity is added to all hidden layers. The number of weight parameters in VGG is larger than in the previously proposed AlexNet, though it takes fewer epochs to converge because of the implicit regularization imposed by its depth and small convolution filter size.
### ResNet {#resnet .unnumbered}
To address the problem of vanishing gradients in deep neural networks, He [*et al.*]{} [@he2016deep] proposed ResNet, which is built from residual blocks with skip connections that pass the input of a block to its output without modification. In addition, a bottleneck design was adopted for the deeper variants ResNet-50 and ResNet-101: each residual block uses a stack of 3 layers instead of 2, with a $3 \times 3$ convolution layer sandwiched between two $1 \times 1$ convolution layers. Here the $1\times 1$ layers are responsible for adjusting the dimensions. Though ResNet is deeper than the VGG net, it has fewer filters and lower complexity; ResNet-34 has 3.6 billion FLOPs, which is only 18% of VGG-19.
### DenseNet {#densenet .unnumbered}
Huang [*et al.*]{} [@huang2017densely] proposed DenseNet based on the observation that deep networks are more efficient to train if they contain shorter connections between layers close to the input and layers close to the output. DenseNet is made up of several dense blocks; within a block, the feature maps of all preceding layers are used as input to each layer, and its own feature map is used as input to all subsequent layers. DenseNet uses a concatenation operation to combine the features from previous layers instead of element-wise addition. In DenseNet, each layer has a small number of filters (e.g., 12), which makes the network thin and compact. In addition to having fewer weight parameters, DenseNet is easy to train because of the improved flow of information and gradients throughout the network.
Since each layer produces $k$ feature maps, a $1 \times 1$ convolution layer is used to reduce the number of input feature maps before they are passed to a $3\times 3$ convolution layer. With this design, DenseNet reduces the vanishing-gradient problem, strengthens feature propagation, and encourages feature reuse.
### SENet {#senet .unnumbered}
Researchers have tried to improve accuracy by stacking layers in different ways. Hu [*et al.*]{} [@hu2018squeeze] proposed a new architectural block, squeeze-and-excitation, based on the observation that not all feature maps are equally important. In conventional convolutional networks the output feature maps are equally weighted, whereas the SENet block weights each channel adaptively in a content-aware manner. In more formal terms, the SE block employs global information to selectively emphasize informative features and suppress less useful ones. The SE block consists of two operations: squeeze and excitation. The squeeze operation uses global average pooling to generate channel-wise statistics, an $n$-dimensional feature vector where $n$ is the number of channels. The excitation operation passes this $n$-dimensional vector through two fully connected layers and generates a vector of the same length, which is used to weight the original feature maps. This squeeze-and-excitation block can be embedded into any state-of-the-art object classification model at a slight additional cost. The squeeze-and-excitation network won first place in the ILSVRC 2017 classification task and reduced the top-5 error to 2.251%.
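A minimal PyTorch sketch of this block is given below; the module name, the reduction ratio of 16, and the use of plain linear layers are illustrative assumptions, and the original SENet implementation and its variants differ in such details.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: adaptively re-weight the channels of a feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                        # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                   # squeeze: global average pooling -> (N, C)
        s = torch.relu(self.fc1(s))              # excitation: two fully connected layers
        s = torch.sigmoid(self.fc2(s))
        return x * s[:, :, None, None]           # channel-wise re-weighting of the input
```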
### MnasNet {#mnasnet .unnumbered}
MnasNet [@tan2019mnasnet], proposed by Google Brain, is an automated mobile neural architecture search approach, based on reinforcement learning, which can identify a model that achieves a good trade-off between accuracy and latency. MnasNet introduced a hierarchical search space that provides layer diversity throughout the network instead of repeatedly stacking the same cells. The main components of MnasNet are (i) an RNN controller used for sampling model architectures; (ii) a trainer used to train the models sampled by the RNN controller; and (iii) a mobile-phone-based inference engine for measuring latency. MnasNet has been evaluated on the ImageNet [@deng2009imagenet] and COCO [@lin2014microsoft] datasets. In this work, we used the architecture searched by MnasNet on the ImageNet [@deng2009imagenet] dataset.
Implementation {#implementation .unnumbered}
--------------
### Dataset Preparation {#dataset-preparation .unnumbered}
In order to evaluate the performance of different models on polyp classification, we collected and labeled the following datasets.
1. **MICCAI 2017 Dataset:** This dataset was published at the GIANA Endoscopic Vision Challenge held at MICCAI 2017. It contains 18 short videos for training and 20 videos for testing[@bernal2017comparative]. Each frame in the training set has its associated ground truth in the form of segmentation mask.
2. **CVC ColonDB Dataset:** This dataset was published by Bernal [*et al.*]{} [@bernal2012towards]; it contains 15 short colonoscopy video sequences, with ground-truth polyp segmentation masks.
3. **ISIT-UMR Colonoscopy Dataset:** This dataset was published by Mesejo [*et al.*]{} [@mesejo2016computer]. It contains 76 short video sequences. Each video sequence is labeled with the polyp category; however, there is no segmentation ground truth.
4. **KUMC Colonoscopy Dataset:** This is a dataset collected at the University of Kansas Medical Center with ethical oversight. It consists of 80 colonoscopy video sequences.
With the help of three endoscopists from the medical school of Jilin University and the University of Kansas Medical Center, we labeled the polyp classification results of all videos in datasets 1, 2, and 4. We also annotated the location bounding boxes for all the polyps in datasets 3 and 4. During the annotation process, the endoscopists could not reach an agreement on some sequences, since these would require further biopsy verification. Those videos were removed from the datasets. We finally obtained a dataset of 157 videos (35,981 frames) with labeled ground truth of the polyp histology and bounding boxes.
For the labeled dataset, we randomly split all the videos into training, validation, and test sets, which contain 119, 16, and 22 video sequences, respectively. The study focuses on evaluating the performance of the state-of-the-art classification models. We assume the polyps have been accurately detected and generate two separate datasets for the evaluation. As shown in Fig \[fig:typeofinput\], set-1 only contains the patches of the cropped polyps, and set-2 contains not only the cropped polyps but also about 55% background around the polyps.
Training {#training .unnumbered}
--------
In this study, we implemented and compared a total of 6 classical classification models: VGG19 with/without batch normalization [@simonyan2014very], ResNet50 [@he2016deep], DenseNet121 [@huang2017densely], SE-ResNet50 [@hu2018squeeze] and MnasNet [@tan2019mnasnet]. The training dataset contains 119 sequences (27,048 images). We train all the models using NVIDIA Tesla K80 or P100 GPUs. The hyperparameters used to train the models are tabulated in Table \[tab:3\]. All models were initialized with pre-trained ImageNet weights, and the training time of each model ranges from 1 to 3 hours.
[|l+l|l|l|l|l|]{} Model&Learning rate& Batch size&Epoch&Step size&Gamma\
VGG19 & 0.001&32&25&-&-\
VGG19-BN&0.001&32&25&-&-\
ResNet50&0.001&64&25&-&-\
DenseNet&0.001&64&25&-&-\
SE-ResNet&0.001&64&50&30&0.1\
MnasNet&0.001&64&150&-&-\
The hyperparameters used to train different models.
\[tab:3\]
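A minimal training loop consistent with the hyperparameters in Table \[tab:3\] is sketched below. The optimizer, loss function, and data loading are assumptions made only for illustration (the table does not fix them); the learning rate, batch size, number of epochs, and the step scheduler used for SE-ResNet come from the table.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR

def train(model, train_loader, epochs=25, lr=0.001,
          step_size=None, gamma=None, device="cuda"):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    # StepLR is only used where Table [tab:3] lists a step size and gamma
    # (e.g. SE-ResNet: step_size=30, gamma=0.1).
    scheduler = StepLR(optimizer, step_size, gamma) if step_size else None
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        if scheduler is not None:
            scheduler.step()
    return model
```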
Evaluation Metrics {#evaluation-metrics .unnumbered}
------------------
In the experiments, we trained each model until it achieved the optimal performance on the validation set. To evaluate the model performance, we calculate the top-1 classification error. In order to make a fair comparison of the different models, the performance has also been evaluated in terms of sensitivity, specificity, accuracy, precision, and F1-score. The definitions of these metrics are listed in Table \[tab:def\]. We evaluate the performance of all models on each sequence individually for both datasets.
[|l+p[12cm]{}|]{} & Polyp classification\
True Positive(TP) & Number of adenomatous polyps that are correctly classified\
True Negative(TN)& Number of hyperplastic polyps that are correctly classified\
False Positive(FP)& Number of hyperplastic polyps that are misclassified as adenomatous\
False Negative(FN) & Number of adenomatous polyps that are misclassified as hyperplastic\
Sensitivity&% of actual adenomatous polyps that are correctly classified. Also termed recall or class accuracy of adenoma. $\frac{TP}{TP + FN} \times 100$\
Specificity& % of actual hyperplastic polyps that are correctly classified. Also termed recall or class accuracy of hyperplastic. $\frac{TN}{TN + FP} \times 100$\
Precision(Adenoma)&% of predicted adenomatous polyps that are truly adenomatous. $\frac{TP}{TP + FP} \times 100 $\
Precision(Hyperplastic)&% of predicted hyperplastic polyps that are truly hyperplastic. $\frac{TN}{TN + FN} \times 100 $\
Accuracy&Overall accuracy over both classes. $\frac{TP + TN}{TP + TN + FP + FN}\times 100$\
F1-Score& Harmonic mean of precision and recall. $2\frac{precision \times recall}{precision + recall}$\
Error&Percentage of misclassified frames. $100 - Accuracy$\
ROC & Receiver operating characteristic curve\
AUC &Area under the curve (of ROC)\
Evaluation metrics used in the comparison. Precision, recall (class-based accuracy), and F1-score are calculated for both classes.
\[tab:def\]
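For reference, the frame-level metrics in Table \[tab:def\] can be computed from the confusion-matrix counts as in the sketch below (a straightforward transcription of the definitions, not code from our evaluation pipeline).

```python
def classification_metrics(tp, tn, fp, fn):
    # Metrics follow the definitions in Table [tab:def];
    # adenomatous is treated as the positive class.
    sensitivity = tp / (tp + fn) * 100           # recall, adenoma
    specificity = tn / (tn + fp) * 100           # recall, hyperplastic
    precision_ade = tp / (tp + fp) * 100
    precision_hyp = tn / (tn + fn) * 100
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    f1_ade = 2 * precision_ade * sensitivity / (precision_ade + sensitivity)
    f1_hyp = 2 * precision_hyp * specificity / (precision_hyp + specificity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision_ade=precision_ade, precision_hyp=precision_hyp,
                accuracy=accuracy, f1_ade=f1_ade, f1_hyp=f1_hyp,
                error=100 - accuracy)
```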
Results {#results .unnumbered}
=======
In this section, we report the classification results of all comparative models on the two datasets. All input images are resized to $224 \times 224$ for a fair comparison. All models include batch normalization except VGG-19. The test set contains a total of 22 sequences (4719 frames), of which 13 sequences (2890 frames) are adenomatous and 9 sequences (1829 frames) are hyperplastic. All models employ softmax as the classifier to yield scores for the two classes, and each model outputs the class corresponding to the higher score. The top-1 error, precision, recall (individual class accuracy), and F1-score for both categories are shown in Table \[tab:confmat\]. To alleviate the influence of illumination variations, all images in the datasets were normalized with respect to their mean and standard deviation. The mean and standard deviation of both datasets are listed in Table \[tab:4\].
[|l+p[0.4cm]{}p[0.4cm]{}p[0.4cm]{}p[0.4cm]{}p[0.8cm]{}p[1.1cm]{}p[0.9cm]{}p[1.2cm]{}p[1.2cm]{}p[1.2cm]{}p[1.2cm]{}p[1.2cm]{}p[0.8cm]{}|]{} Model&TP&TN&FP&FN&Ade&Hyper&Acc&Err&Pre-1&Pre-2&F1-1&F1-2&AUC\
&&&&&(%)&(%)&(%)&(%)&(%)&(%)&(%)&(%)&(%)\
VGG-19(set-1)&2424&1149&680&466&**83.87**&62.82&**75.71**&**24.28**&78.09&**71.14**&**80.88**&66.72&76.43\
VGG-19(set-2)&2419&1346&483&471&**83.70**&**73.59**&**79.78**&**20.21**&**83.35**&**74.07**&**83.52**&**73.83**&84.80\
VGG19-BN(set-1)&2071&1440&389&819&71.66&**78.73**&74.40&25.59&**84.18**&63.74&77.42&**70.45**&78.58\
VGG19-BN(set-2)&2295&1345&484&595&79.41&73.53&77.13&22.86&82.58&69.32&80.96&71.37&82.20\
ResNet50(set-1)&2350&1222&607&540&81.31&66.81&75.69&24.30&79.47&69.35&80.38&68.05&77.25\
ResNet50(set-2)&2042&1305&524&848&70.65&71.35&70.92&29.07&79.57&60.61&74.85&65.54&76.27\
DenseNet(set-1)&2246&1282&547&644&77.71&70.09&74.76&25.23&80.41&66.56&79.042&68.28&79.28\
DenseNet(set-2)&2065&1306&523&825&71.45&71.40&71.43&28.56&79.79&61.28&75.39&65.95&78.65\
SENet(set-1)&2230&1320&509&660&77.16&72.17&75.22&24.77&81.41&66.66&79.23&69.30&72.78\
SENet(set-2)&2338&1138&691&552&80.89&62.21&73.65&26.34&77.18&62.21&78.99&64.67&82.05\
MnasNet(set-1)&2239&1213&616&651&77.47&66.32&73.15&26.84&78.42&65.07&77.94&65.69&73.32\
MnasNet(set-2)&2115&1242&587&775&73.18&67.90&71.13&28.86&78.27&61.57&75.64&64.58&77.11\
Overall performance of all models on set-1 and set-2, based on individual frames irrespective of sequence.
\[tab:confmat\]
[|l+p[7.3cm]{}|]{} & Mean and standard deviation used for normalization\
Set-1 & \[0.6916, 0.5297, 0.4158\]\[0.1439, 0.1377, 0.1306\]\
Set-2 & \[0.6594, 0.5112, 0.4026\]\[0.2469,0.2254,0.2095\]\
Mean and standard deviation of set-1 and set-2, used to normalize input images. \[tab:4\]
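A sketch of the corresponding preprocessing pipeline is given below, assuming `torchvision` transforms; the channel-wise mean and standard deviation are the set-1 values from Table \[tab:4\].

```python
from torchvision import transforms

# Channel-wise statistics for set-1 (see Table [tab:4]);
# replace with the set-2 values when training on set-2.
SET1_MEAN = [0.6916, 0.5297, 0.4158]
SET1_STD = [0.1439, 0.1377, 0.1306]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),     # all inputs resized to 224 x 224
    transforms.ToTensor(),
    transforms.Normalize(mean=SET1_MEAN, std=SET1_STD),
])
```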
Discussion {#discussion .unnumbered}
==========
Frame-based Performance {#frame-based-performance .unnumbered}
-----------------------
We first report the comparative performance of the different models based on each individual frame. Frame-based performance is measured without considering which sequence the frames belong to; it measures the overall accuracy in the same way as generic classification evaluations on other datasets. As shown in Table \[tab:confmat\], VGG-19 outperforms all other models with an overall accuracy of 75.71% and 79.78% on set-1 and set-2, respectively. The recall of the Adenomatous class is higher than that of the Hyperplastic class for every model on both datasets, except for VGG-19 with batch normalization (on set-1) and ResNet50 (on set-2). In terms of precision and F1-score, the Adenomatous class is always higher than the Hyperplastic class for every model on both datasets. VGG-19 also achieves the highest recall for both classes on set-2. The more recently proposed models, such as ResNet, SENet, and MnasNet, did not perform well on either dataset, although they outperform VGG-19 on generic image classification datasets.
From Table \[tab:confmat\] we also observe that VGG-19 outperforms VGG-19 with batch normalization on most metrics. This contradicts what has been observed on other datasets. The reason might be that, in polyp classification, the exact pixel intensity values are more useful for discriminating between polyp types than in generic image classification, whereas the batch normalization layers rescale activations with respect to batch statistics, which may distort this intensity information and degrade the performance.
To better visualize the performance, we employ the ROC (receiver operating characteristic) curve and its AUC (area under the curve) to illustrate the frame-based performance. The AUC-ROC measures the degree of separability of a classification problem and thus the capability of a model to differentiate between the classes. Fig \[fig:Roccrop\] and Fig \[fig:rocback\] show the ROC curves of the different models for set-1 and set-2, respectively. The results show that, in general, the models achieve better classification performance on set-2 than on set-1, except for ResNet. We can also see that VGG-19 achieves the highest ROC score and the best accuracy on set-2.
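The ROC curves and AUC values can be obtained directly from the per-frame softmax scores; the sketch below uses `scikit-learn` purely as an illustration of the computation, with placeholder variable names.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true, adenoma_scores, label):
    # y_true: 1 for adenomatous frames, 0 for hyperplastic frames.
    # adenoma_scores: softmax score of the adenomatous class per frame.
    fpr, tpr, _ = roc_curve(y_true, adenoma_scores)
    roc_auc = auc(fpr, tpr)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {roc_auc:.3f})")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    return roc_auc
```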
------------------------------------------------------------------ ----------------------------------------------------------------- -----------------------------------------------------------------
![image](figures/fig3/vgg19crop){width="55mm" height="55mm"} ![image](figures/fig3/vgg19bn-crop){width="55mm" height="55mm"} ![image](figures/fig3/resnet-crop){width="55mm" height="55mm"}
(a) (b) (c)
![image](figures/fig3/DenseNet-crop){width="55mm" height="55mm"} ![image](figures/fig3/senet-crop){width="55mm" height="55mm"} ![image](figures/fig3/MnasNet-crop){width="55mm" height="55mm"}
(d) (e) (f)
------------------------------------------------------------------ ----------------------------------------------------------------- -----------------------------------------------------------------
------------------------------------------------------------------ ----------------------------------------------------------------- -----------------------------------------------------------------
![image](figures/fig4/vgg19-back){width="55mm" height="55mm"} ![image](figures/fig4/vgg19bn-back){width="55mm" height="55mm"} ![image](figures/fig4/resnet-back){width="55mm" height="55mm"}
(a) (b) (c)
![image](figures/fig4/DenseNet-back){width="55mm" height="55mm"} ![image](figures/fig4/SENet-back){width="55mm" height="55mm"} ![image](figures/fig4/MnasNet-back){width="55mm" height="55mm"}
(d) (e) (f)
------------------------------------------------------------------ ----------------------------------------------------------------- -----------------------------------------------------------------
Sequence-based Performance {#sequence-based-performance .unnumbered}
--------------------------
Based on the classification of each frame, we can measure the performance on each sequence. The sequence-by-sequence performance for the two datasets is shown in Fig \[fig:cropseq\] and Fig \[fig:backseq\], respectively. We can see that the results are not consistent among all frames within the same sequence of the same polyp. This is because the appearance of a polyp may be subject to significant changes due to variations in viewpoint, zooming scale, and illumination. Fig \[fig:viewpoint\] shows some sample frames of a sequence under different viewpoints and lighting conditions. In such cases, even experienced endoscopists cannot make an accurate prediction from a single frame. As a result, not all frames can be correctly classified. In practice, we calculate the percentage of correctly classified frames for each sequence. We then set a threshold on this percentage, and a sequence is considered correctly classified if its percentage of correctly classified frames exceeds the specified threshold. Table \[tab:thresbased\] shows the performance corresponding to different thresholds for the two datasets.
[|l+ccc|]{} Model & Threshold(70%) & Threshold(60%) & Threshold(50%)\
VGG-19 &63.63/**68.18**&72.72/**81.81**&81.81/**90.90**\
VGG19-BN&**69.63**/**68.18**&72.72/**81.81**&81.81/90.90\
ResNet50&68.18/59.09&**77.27**/72.72&**86.36**/81.81\
DenseNet&59.09/63.63&72.72/68.18&**86.36**/68.18\
SE-ResNet&63.63/54.54&72.72/72.72&72.72/77.27\
MnasNet&54.54/54.54&68.18/68.18&81.81/81.81\
Accuracy per sequence for all models based on different thresholds for set-1 / set-2. The term before ’/’ specifies the accuracy on set-1 and the term after ’/’ indicates the accuracy on set-2.
\[tab:thresbased\]
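The threshold-based sequence accuracy reported in Table \[tab:thresbased\] can be computed as sketched below; this is an illustrative transcription of the rule described above, not our actual evaluation script.

```python
def sequence_accuracy(frame_correct_per_seq, threshold=0.5):
    # frame_correct_per_seq: list of lists, one list of booleans per
    # sequence indicating whether each frame was classified correctly.
    # A sequence counts as correct when the fraction of correctly
    # classified frames exceeds the threshold (e.g. 0.5, 0.6, 0.7).
    n_correct = 0
    for frames in frame_correct_per_seq:
        fraction = sum(frames) / len(frames)
        if fraction > threshold:
            n_correct += 1
    return 100.0 * n_correct / len(frame_correct_per_seq)
```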
![image](figures/fig5/set-1){width="0.8\linewidth"}
![image](figures/fig6/set-2){width="0.8\linewidth"}
------------------------------------------------------- ------------------------------------------------------- -------------------------------------------------------- -------------------------------------------------------- -------------------------------------------------------- --------------------------------------------------------
![image](figures/fig7/40){width="25mm" height="25mm"} ![image](figures/fig7/57){width="25mm" height="25mm"} ![image](figures/fig7/106){width="25mm" height="25mm"} ![image](figures/fig7/114){width="25mm" height="25mm"} ![image](figures/fig7/237){width="25mm" height="25mm"} ![image](figures/fig7/278){width="25mm" height="25mm"}
(a) (b) (c) (d) (e) (f)
\[6pt\]
------------------------------------------------------- ------------------------------------------------------- -------------------------------------------------------- -------------------------------------------------------- -------------------------------------------------------- --------------------------------------------------------
As shown in Fig \[fig:cropseq\] and Fig \[fig:backseq\], the classification result for each sequence is not consistent. Test sequences 1, 3, 10, 12, 13, 14, 18, 19, 21, and 22 are correctly classified by all models on both datasets, while the results for sequences 2, 4, 5, 6, 7, 9, 11, 17, and 20 are not consistent because the percentage of correctly classified frames lies between 40-50%. Sequences 5 and 6 could not be classified well by any model. Some sample frames of sequences 5 and 6 are shown in Fig \[fig:exams\]; they are subject to large variations in appearance, which makes classification difficult. Table \[tab:thresbased\] shows the threshold-based performance of all models. The results indicate the consistency of the predictions of the different models, from which we can see that the VGG models achieve relatively better performance than the other models. For example, VGG-19 achieves around 70%, 80%, and 90% accuracy at thresholds of 70%, 60%, and 50%, respectively. Comparing Table \[tab:confmat\] and Table \[tab:thresbased\], we find that at a threshold of 50%, the sequence-based accuracy is much higher than the frame-based accuracy, especially for the VGG models. However, at the higher threshold of 70%, the overall frame-based accuracy is higher than the sequence-based accuracy, which reflects how consistent the predictions are within each sequence.
To better visualize the sequence-based performance, we include box plots showing the distribution of per-sequence accuracy over the 22 test sequences. Fig \[fig:cropbox\] shows the box plots of the different models on set-1 and set-2, respectively. The maximum accuracy of all models is 100% because every model classifies at least one sequence entirely correctly. The upper quartile range depends on the median value: a high median value narrows the upper half of the distribution, reflecting a model's ability to classify sequences correctly and consistently. On set-1, VGG-19 achieves the highest median value, which indicates that half of the sequences are correctly classified even at a very high threshold. On set-2, ResNet-50 yields the most consistent results with the highest median value. We can also see that the upper quartile ranges are smaller than the lower quartile ranges, which indicates that the spread of accuracy below the median value is large.
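The box plots can be generated from the per-sequence accuracies with standard plotting tools; the following is a minimal matplotlib sketch with placeholder variable names, shown only to make the analysis reproducible in spirit.

```python
import matplotlib.pyplot as plt

def plot_sequence_boxplots(per_model_accuracies, model_names):
    # per_model_accuracies: list of lists, one list of 22 per-sequence
    # accuracies (in %) for each model on a given dataset.
    plt.boxplot(per_model_accuracies, labels=model_names)
    plt.ylabel("Accuracy per sequence (%)")
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.show()
```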
------------------------------------------------------------- -------------------------------------------------------------- --------------------------------------------------------------
![image](figures/fig8/test6_16){width="28mm" height="28mm"} ![image](figures/fig8/test6_44){width="28mm" height="28mm"} ![image](figures/fig8/test6_116){width="28mm" height="28mm"}
(a)
![image](figures/fig8/test8_15){width="28mm" height="28mm"} ![image](figures/fig8/test8_192){width="28mm" height="28mm"} ![image](figures/fig8/test8_279){width="28mm" height="28mm"}
(b)
\[6pt\]
------------------------------------------------------------- -------------------------------------------------------------- --------------------------------------------------------------
-------------------------------------------------------- --------------------------------------------------------
![image](figures/fig9/cropplot){width="0.5\linewidth"} ![image](figures/fig9/backplot){width="0.5\linewidth"}
(a) (b)
-------------------------------------------------------- --------------------------------------------------------
Polyp Crops vs Crops with Background {#polyp-crops-vs-crops-with-background .unnumbered}
------------------------------------
In order to test the effect of background information on polyp classification, we generated two datasets for the experiments: set-1 has only polyp crops, and set-2 contains polyp crops with 50% background. From Table \[tab:confmat\] we can see that, in terms of frame-based performance, all models except the VGG models achieve higher accuracy on set-1 than on set-2. In terms of the overall AUC-ROC score, set-2 yields better performance, which means the two classes are easier to distinguish in set-2 than in set-1. In terms of the sequence-based analysis, the performance is similar on both datasets. For consistency-based performance, the consistency is improved on set-2 by VGG-19, VGG-19 with batch normalization, and DenseNet, whereas for the other models the overall threshold-based accuracy is very close. If we consider the box plots and take the median as a threshold, the consistency of correctly classifying sequences is improved on set-2 by ResNet, DenseNet, and SENet.
Conclusion {#conclusion .unnumbered}
==========
In this paper, we have established two datasets and compared six state-of-the-art deep learning-based classification models. We have evaluated the results both at the frame level and at the polyp (sequence) level. Our results show that VGG-19, in general, outperforms the other models in both cases on both datasets. More advanced classification models, such as ResNet, DenseNet, SENet, and MnasNet, did not perform well in our experiments, even though they have advantages on other benchmark datasets; their weaker performance may be caused by the limited size of the polyp dataset. This study provides a baseline for future research to develop more accurate and more robust polyp classification models.
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors would like to thank Dr. Vijay Kanakadandi at the University of Kansas Medical Center for his insightful help and advice for this study.
[10]{}
Society AC. Key Statistics for Colorectal Cancer;.
Shinya H, Wolff WI. Morphology, anatomic distribution and cancer potential of colonic polyps. Annals of surgery. 1979;190(6):679.
Kim DH, Pickhardt PJ. Chapter 1 - Colorectal polyps: Overview and classification. In P. J. Pickhardt and D. H. Kim (Eds.), CT Colonography: Principles and Practice of Virtual Colonoscopy. 2010; p. 3–9.
Li K, Ma W, Sajid U, Wu Y, Wang G. Object Detection with Convolutional Neural Networks. arXiv preprint arXiv:191201844. 2019;.
Mo X, Tao K, Wang Q, Wang G. An efficient approach for polyps detection in endoscopic videos based on faster R-CNN. In: 2018 24th International Conference on Pattern Recognition (ICPR). IEEE; 2018. p. 3929–3934.
Li K, Fathan MI, Patel K, Wang G. Colonoscopy Polyp Detection and Classification: Dataset Creation and Comparative Evaluation. ITTC Technical Report, the University of Kansas. 2019;.
Bernal J, Tajkbaksh N, S[á]{}nchez FJ, Matuszewski BJ, Chen H, Yu L, et al. Comparative validation of polyp detection methods in video colonoscopy: results from the MICCAI 2015 endoscopic vision challenge. IEEE transactions on medical imaging. 2017;36(6):1231–1249.
NICE Polyp Classification;. <https://www.endoscopy-campus.com/en/classifications/polyp-classification-nice/>.
Wimmer G, Gadermayr M, Kwitt R, H[ä]{}fner M, Merhof D, Uhl A. Evaluation of i-scan virtual chromoendoscopy and traditional chromoendoscopy for the automated diagnosis of colonic polyps. In: International Workshop on Computer-Assisted and Robotic Endoscopy. Springer; 2016. p. 59–71.
H[ä]{}fner M, Tamaki T, Tanaka S, Uhl A, Wimmer G, Yoshida S. Local fractal dimension based approaches for colonic polyp classification. Medical image analysis. 2015;26(1):92–107.
Uhl A, Wimmer G, Hafner M. Shape and size adapted local fractal dimension for the classification of polyps in HD colonoscopy. In: 2014 IEEE International Conference on Image Processing (ICIP). IEEE; 2014. p. 2299–2303.
Wimmer G, Uhl A, H[ä]{}fner M. A novel filterbank especially designed for the classification of colonic polyps. In: 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE; 2016. p. 2150–2155.
Ribeiro E, Uhl A, Wimmer G, H[ä]{}fner M. Exploring deep learning and transfer learning for colonic polyp classification. Computational and mathematical methods in medicine. 2016;2016.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:14091556. 2014;.
Chatfield K, Simonyan K, Vedaldi A, Zisserman A. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:14053531. 2014;.
Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems; 2012. p. 1097–1105.
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. p. 1–9.
Korbar B, Olofson AM, Miraflor AP, Nicka CM, Suriawinata MA, Torresani L, et al. Deep learning for classification of colorectal polyps on whole-slide images. Journal of pathology informatics. 2017;8.
Akbari M, Mohrekesh M, Rafiei S, Soroushmehr SR, Karimi N, Samavi S, et al. Classification of Informative Frames in Colonoscopy Videos Using Convolutional Neural Networks with Binarized Weights. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2018. p. 65–68.
Cen F, Wang G. Boosting occluded image classification via subspace decomposition-based estimation of deep features. IEEE transactions on cybernetics. 2019;.
Cen F, Wang G. Dictionary representation of deep features for occlusion-robust face recognition. IEEE Access. 2019;7:26595–26605.
Wu Y, Zhang Z, Wang G. Unsupervised deep feature transfer for low resolution image classification. In: Proceedings of the IEEE International Conference on Computer Vision Workshops; 2019. p. 0–0.
Ma W, Wu Y, Cen F, Wang G. MDFN: Multi-scale deep feature learning network for object detection. Pattern Recognition. 2020;100:107149.
Ma W, Wu Y, Wang Z, Wang G. Mdcn: Multi-scale, deep inception convolutional neural networks for efficient object detection. In: 2018 24th International Conference on Pattern Recognition (ICPR). IEEE; 2018. p. 2510–2515.
He L, Wang G, Hu Z. Learning depth from single images with deep neural network embedding focal length. IEEE Transactions on Image Processing. 2018;27(9):4676–4689.
He L, Yu M, Wang G. Spindle-Net: CNNs for monocular depth inference with dilation kernel method. In: 2018 24th International Conference on Pattern Recognition (ICPR). IEEE; 2018. p. 2504–2509.
Xu W, Shawn K, Wang G. Toward learning a unified many-to-many mapping for diverse image translation. Pattern Recognition. 2019;93:570–580.
Xu W, Keshmiri S, Wang G. Adversarially approximated autoencoder for image generation and manipulation. IEEE Transactions on Multimedia. 2019;21(9):2387–2396.
Sajid U, Sajid H, Wang H, Wang G. Zoomcount: A zooming mechanism for crowd counting in static images. IEEE Transactions on Circuits and Systems for Video Technology. 2020;.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 770–778.
Srivastava RK, Greff K, Schmidhuber J. Highway networks. arXiv preprint arXiv:150500387. 2015;.
Xie S, Girshick R, Doll[á]{}r P, Tu Z, He K. Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. p. 1492–1500.
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. p. 4700–4708.
Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2018. p. 7132–7141.
Tan M, Le QV. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv preprint arXiv:190511946. 2019;.
Tan M, Chen B, Pang R, Vasudevan V, Sandler M, Howard A, et al. Mnasnet: Platform-aware neural architecture search for mobile. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2019. p. 2820–2828.
Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. Ieee; 2009. p. 248–255.
Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, et al. Microsoft coco: Common objects in context. In: European conference on computer vision. Springer; 2014. p. 740–755.
Bernal J, S[á]{}nchez J, Vilarino F. Towards automatic polyp detection with a polyp appearance model. Pattern Recognition. 2012;45(9):3166–3182.
Mesejo P, Pizarro D, Abergel A, Rouquette O, Beorchia S, Poincloux L, et al. Computer-aided classification of gastrointestinal lesions in regular colonoscopy. IEEE transactions on medical imaging. 2016;35(9):2051–2063.
|
---
abstract: 'In this paper, we present an alternative and elementary proof of a sharp version of the classical boundary Schwarz lemma due to Frolova et al., whose initial proof relies on an analytic semigroup approach and the Julia-Carathéodory theorem for univalent holomorphic self-mappings of the open unit disk $\mathbb D\subset \mathbb C$. Our approach has the extra advantage of yielding the extremal functions of the inequality in the boundary Schwarz lemma.'
address:
- 'Guangbin Ren, Department of Mathematics, University of Science and Technology of China, Hefei 230026, China'
- 'Xieping Wang, Department of Mathematics, University of Science and Technology of China, Hefei 230026, China'
author:
- Guangbin Ren
- Xieping Wang
title: Extremal functions of boundary Schwarz lemma
---
[^1]
Introduction
============
The Schwarz lemma, as one of the most influential results in complex analysis, has given a great push to the development of several research fields, such as geometric function theory, hyperbolic geometry, complex dynamical systems, composition operator theory, and the theory of quasiconformal mappings. We refer to [@Abate; @EJLS] for a more complete insight into the Schwarz lemma.
The classical Schwarz lemma, as well as the Schwarz-Pick lemma, concerns holomorphic self-mappings of the open unit disk $\mathbb D$ in the complex plane $\mathbb C$, and provides the invariance of the hyperbolic disks around the interior fixed point under self-mappings of $\mathbb D$. A variety of its boundary versions are in the spirit of Julia [@Julia], Carathéodory [@Caratheodory], and Wolff [@Wolff1; @Wolff2], involving the boundary fixed points.
Recall that a boundary point $\xi\in\partial \mathbb D$ is called a fixed point of $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ if $$f(\xi):=\lim\limits_{r\rightarrow 1^-}f(r\xi)=\xi.$$ Here ${\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ denotes the class of holomorphic self-mappings of the open unit disk $\mathbb D$. It is well known that, for any $f\in H(\mathbb D, \mathbb D)$, its radial limit is the same as its angular limit and both exist for almost all $\xi\in\partial \mathbb D$; moreover, the exceptional set in $\partial \mathbb D$ is of capacity zero.
The classification of the boundary fixed points of $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ can be performed via the value of the *angular derivative* $$f'(\xi):=\angle\lim\limits_{z\rightarrow \xi}\frac{f(z)-\xi}{z-\xi},$$ which belongs to $(0,\infty]$ due to the celebrated Julia-Carathéodory theorem; see [@Caratheodory; @Abate]. This theorem also asserts that the finite angular derivative at the boundary fixed point $\xi$ exists if and only if the holomorphic function $f'$ admits the finite angular limit $\angle\lim\limits_{z\rightarrow \xi}f'(z)$. For a boundary fixed point $\xi$ of $f$, if $$f'(\xi)\in (0,\infty),$$ then $\xi$ is called a regular boundary fixed point. The regular points can be *attractive* if $f'(\xi)\in (0,1)$, *neutral* if $f'(\xi)=1$, or *repulsive* if $f'(\xi)\in (1,\infty)$.
The Julia-Carathéodory theorem [@Caratheodory; @Abate] and the Wolff lemma [@Wolff2] imply that there exists a unique regular boundary fixed point $\xi$ such that $$f'(\xi)\in (0,1]$$ if $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ has no interior fixed point; otherwise, if the mapping $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ has an interior fixed point, then $f'(\xi)>1$ for any boundary fixed point $\xi\in\partial \mathbb D$. Moreover, Unkelbach [@Unkelbach] and Herzig [@Herzig] proved that if $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ has a regular boundary fixed point at the point $1$, and $f(0)=0$, then $$\label{UHO}
f'(1)\geq\frac{2}{1+|f'(0)|}.$$ Equality in (\[UHO\]) holds if and only if $f$ is of the form $$f(z)=-z\frac{a-z}{1-az},\qquad \forall\,z\in\mathbb D,$$ for some constant $a\in (-1, 0 ]$.
This result was improved some sixty years later by Osserman [@Osserman], who removed the assumption of the existence of an interior fixed point.
[**(Osserman)**]{}\[Osserman\] Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ with $\xi=1$ as its regular boundary fixed point. Then $$\label{Osserman-inequality}
f'(1)\geq\frac{2\big(1-|f(0)|\big)^2}{1-|f(0)|^2+|f'(0)|}.$$
This inequality is strengthened very recently by Frolova et al. in [@FLSV] as follows.
[**(Frolova et al.)**]{}\[Main theorem\] Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ with $\xi=1$ as its regular boundary fixed point. Then $$\label{key inequality}
f'(1)\geq\frac{2}{{\rm{Re}}\bigg(\dfrac{1-f(0)^2+f'(0)}{(1-f(0))^2}\bigg)}.$$
The initial proof given in [@FLSV] is based on an analytic semigroup approach as well as the Julia-Carathéodory theorem for [*univalent*]{} holomorphic self-mappings of $\mathbb D$, which is proved via the method of extremal length [@Anderson].
The purpose of this article is to study the extremal functions of inequality (\[key inequality\]), in addition to present an alternative and elementary proof of (\[key inequality\]). Our main result is as follows.
\[Main theorem-2\] Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ with $\xi=1$ as its regular boundary fixed point. Then equality holds in inequality $(\ref{key inequality})$ if and only if $f$ is of the form $$\label{exe-fun}
f(z)=\dfrac{f(0)-z\dfrac{a-z}{1-az}\dfrac{1-f(0)}{1-\overline{f(0)}}}{1-z\dfrac{a-z}{1-az}
\dfrac{1-f(0)}{1-\overline{f(0)}}\overline{f(0)}},\qquad \forall\,z\in\mathbb D,$$ for some constant $a\in [-1,1)$.
As a direct consequence of Theorems \[Main theorem\] and \[Main theorem-2\], we obtain a strengthened version of Osserman’s inequality.
\[Main corollary\] Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ with $\xi=1$ as its regular boundary fixed point. Then $$\label{inequality from below}
f'(1)\geq\frac{2\big|1-f(0)\big|^2}{1-|f(0)|^2+|f'(0)|}.$$ Moreover, equality holds in this inequality if and only if $f$ is of the form $$\label{exe-funs}
f(z)=\dfrac{f(0)-z\dfrac{a-z}{1-az}\dfrac{1-f(0)}{1-\overline{f(0)}}}{1-z\dfrac{a-z}{1-az}
\dfrac{1-f(0)}{1-\overline{f(0)}}\overline{f(0)}},\qquad \forall\,z\in\mathbb D,$$ for some constant $a\in [-1,0]$.
\[Main remark\] From Corollary \[Main corollary\], one easily deduces that equality in (\[Osserman-inequality\]) holds if and only if $f$ is of the form $$f(z)=\dfrac{f(0)-z\dfrac{a-z}{1-az}}{1-z\dfrac{a-z}{1-az}f(0)}, \qquad \forall\,z\in\mathbb D,$$ for some constant $a\in [-1,0]$ with $f(0)\in [0,1)$.
As an application, Corollary \[Main corollary\] immediately results in a quantitative strengthening of a classical theorem of Löwner (i.e., the second assertion in the following corollary; see [@Lowner]) as follows.
[**(Löwner)**]{}\[thm:Lowner\] Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ and $f(0)=0$. Assume that $f$ extends continuously to an arc $C\subset\partial\mathbb D$ of length $s$ and maps it onto an arc $f(C)\subset\partial\mathbb D$ of length $\sigma$. Then $$\sigma\geq\frac{2}{1+|f'(0)|}s.$$ In particular, we have $$\label{Lowner ineq}
\sigma\geq s$$ with equality if and only if either $\sigma=s=0$ or $f$ is just a rotation.
The length $\sigma$ of $f(C)$ is to be taken with multiplicity, if $f(C)$ is a multiple covering of the image.
For generalizations of Theorems \[Main theorem\] and \[Main theorem-2\] as well as Corollary \[Main corollary\] to the setting of quaternions for slice regular self-mappings of the open unit ball $\mathbb B\subset\mathbb H$, see [@RW].
Proof of main results
=====================
In this section, we shall give the proofs of the main results. Before presenting the details, we first recall the concrete contents of the classical Julia lemma and Julia-Carathéodory theorem; see e.g. [@Abate; @Sarason1], [@Sarason2 p. 48 and p. 51].
[**(Julia)**]{}\[Julia\] Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ and let $\xi\in\partial \mathbb D$. Suppose that there exists a sequence $\{z_n\}_{n\in \mathbb N}\subset \mathbb D$ converging to $\xi$ as $n$ tends to $\infty$, such that the limits $$\alpha:=\lim\limits_{n\rightarrow\infty}
\frac{1-|f(z_n)|}{1-|z_n|}$$ and $$\eta:=\lim\limits_{n\rightarrow\infty}f(z_n)$$ exist $($finitely$)$. Then $\alpha>0$ and the inequality $$\begin{aligned}
\label{ineq:Julia}
\frac{\big|f(z)-\eta\big|^2}{1-|f(z)|^2}\leq \alpha\,\frac{|z-\xi|^2}{1-|z|^2}\end{aligned}$$ holds throughout the open unit disk $\mathbb D$ and is strict except for Möbius transformations of $\mathbb D$.
\[Julia-Caratheodory\][**(Julia-Carathéodory)**]{} Let $f\in {\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ and let $\xi\in\partial \mathbb D$. Then the following conditions are equivalent:
1. The lower limit $$\begin{aligned}
\label{def:alpha-Julia}\alpha:=\liminf\limits_{z\rightarrow \xi}\dfrac{1-|f(z)|}{1-|z|}\end{aligned}$$ is finite, where the limit is taken as $z$ approaches $\xi$ unrestrictedly in $\mathbb D$;
2. $f$ has a non-tangential limit, say $f(\xi)$, at the point $\xi$, and the difference quotient $$\frac{f(z)-f(\xi)}{z-\xi}$$ has a non-tangential limit, say $f'(\xi)$, at the point $\xi$;
3. The derivative $f'$ has a non-tangential limit, say $f'(\xi)$, at the point $\xi$.
Moreover, under the above conditions we have
1. $\alpha>0$ in $(\rm{i})$;
2. the derivatives $f'(\xi)$ in $(\rm{ii})$ and $(\rm{iii})$ are the same;
3. $f'(\xi)=\alpha \overline{\xi}f(\xi)$;
4. the quotient $\dfrac{1-|f(z)|}{1-|z|}$ has the non-tangential limit $\alpha$ at the point $\xi$.
Now we come to prove Theorems \[Main theorem\] and \[Main theorem-2\].
Let $f$ be as described in Theorem $\ref{Main theorem}$. Set $$g(z):=\frac{f(z)-f(0)}{1-\overline{f(0)}f(z)}\frac{1-\overline{f(0)}}{1-f(0)},$$ which is in ${\rm{\textsf{H}}}(\mathbb D, \mathbb D)$ such that $\xi=1$ is its regular boundary fixed point and $g(0)=0$. Moreover, an easy calculation shows that $$\label{ineq:01}
f'(1)=\frac{|1-f(0)|^2}{1-|f(0)|^2}\,g'(1),$$ and $$\label{ineq:02}
g'(0)=\frac{f'(0)}{1-|f(0)|^2}\frac{1-\overline{f(0)}}{1-f(0)},$$ which is no more than one in modulus. Applying the Julia-Carathéodory theorem and the Julia inequality (\[ineq:Julia\]) in the Julia lemma to the holomorphic function $h: \mathbb D\rightarrow \overline{\mathbb D}$ defined by $$h(z
):=\dfrac{g(z)}{z},\qquad \forall\,z\in\mathbb D,$$ we obtain $$\label{derivative ineq}
g'(1)=1+h'(1)\geq 1+\frac{|1-g'(0)|^2}{1-|g'(0)|^2}=\frac{2\big(1-\textrm{Re}g'(0)\big)}{1-|g'(0)|^2}.$$ In particular, $$\label{ineq:03}
g'(1)\geq\frac{2}{1+\textrm{Re}g'(0)}.$$ Now inequality (\[key inequality\]) follows by substituting equalities in (\[ineq:01\]) and (\[ineq:02\]) into (\[ineq:03\]).
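For the reader's convenience, we write out the substitution step explicitly; this is a routine verification added here for completeness, which only uses the identities $\dfrac{1-\overline{f(0)}}{1-f(0)}=\dfrac{|1-f(0)|^2}{(1-f(0))^2}$ and ${\rm{Re}}\,\dfrac{1+f(0)}{1-f(0)}=\dfrac{1-|f(0)|^2}{|1-f(0)|^2}$: $$\begin{aligned}
{\rm{Re}}\,g'(0)&=\frac{1}{1-|f(0)|^2}\,{\rm{Re}}\bigg(f'(0)\,\frac{1-\overline{f(0)}}{1-f(0)}\bigg)
=\frac{|1-f(0)|^2}{1-|f(0)|^2}\,{\rm{Re}}\,\frac{f'(0)}{(1-f(0))^2},\\
f'(1)&\geq\frac{|1-f(0)|^2}{1-|f(0)|^2}\cdot\frac{2}{1+{\rm{Re}}\,g'(0)}
=\frac{2}{\dfrac{1-|f(0)|^2}{|1-f(0)|^2}+{\rm{Re}}\,\dfrac{f'(0)}{(1-f(0))^2}}
=\frac{2}{{\rm{Re}}\bigg(\dfrac{1-f(0)^2+f'(0)}{(1-f(0))^2}\bigg)},\end{aligned}$$ which is exactly inequality (\[key inequality\]).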
If equality holds in inequality (\[key inequality\]), then equality also holds in the Julia inequality (\[ineq:Julia\]) at the point $z=0$ and in inequality (\[ineq:03\]). It then follows from the condition for equality in the Julia inequality and that for equality in inequality (\[ineq:03\]) that $$\label{ineq:04}
g(z)=z\frac{z-a}{1-\bar{a}z}\frac{1-\bar{a}}{1-a},$$ for some constant $a\in\overline{\mathbb D}$, and $g'(0)\in(-1,1]$, which is possible only if $a\in [-1,1)$. Consequently, $f$ must be of the form $$\label{extremal function}
f(z)=\dfrac{f(0)-z\dfrac{a-z}{1-az}\dfrac{1-f(0)}{1-\overline{f(0)}}}{1-z\dfrac{a-z}{1-az}
\dfrac{1-f(0)}{1-\overline{f(0)}}\overline{f(0)}},\qquad \forall\,z\in\mathbb D,$$ where $a\in [-1,1)$. Therefore, the equality in inequality (\[key inequality\]) can hold only for holomorphic self-mappings of the form (\[extremal function\]), and a direct calculation shows that it does indeed hold for all such holomorphic self-mappings. This completes the proof.
Inequality (\[inequality from below\]) follows immediately from inequality (\[key inequality\]), and equality in (\[inequality from below\]) holds if and only if $$\frac{f'(0)}{\big (1-f(0)\big)^2} \in [0, \infty),$$ which is equivalent to $g'(0)\in [0, 1]$, i.e. $a\in [-1,0]$. Here the function $g$ is the one in (\[ineq:04\]).
By the classical Schwarz reflection principle, $f$ can extend to be holomorphic in the interior of $C$. Applying Corollary $\ref{Main corollary}$ to the holomorphic self-mapping of $\mathbb D$ defined by $$g(z)=\frac{f(\xi z)}{f(\xi)}, \qquad \forall\, z\in\mathbb D$$ yields the inequality $$|f'(\xi)|\geq\frac{2}{1+|f'(0)|}$$ for any point $\xi$ in the interior of $C$. Consequently, the desired result follows.
[99]{}
M. Abate, *Iteration theory of holomorphic maps on taut manifolds*, Mediterranean Press, Rende, 1989.
J. M. Anderson, A. Vasil’ev, *Lower Schwarz-Pick estimates and angular derivatives*. Ann. Acad. Sci. Fenn. **33** (2008), 101-110.
C. Carathéodory, *Über die Winkelderivierten von beschränkten analytischen Funktionen*. Sitzungsber. Preuss. Akad. Wiss. Berlin, Phys.-Math. Kl. **4** (1929), 39-54.
M. Elin, F. Jacobzon, M. Levenshtein, D. Shoikhet, *The Schwarz lemma: rigidity and dynamics*. Harmonic and Complex Analysis and its Applications. Springer International Publishing, 2014, 135-230.
A. Frolova, M. Levenshtein, D. Shoikhet, A. Vasil’ev, *Boundary distortion estimates for holomorphic maps*. Complex Anal. Oper. Theory. **8** (2014), 1129-1149.
A. Herzig, *Die Winkelderivierte und das Poisson-Stieltjes-Integral*. Math. Z. **46** (1940), 129-156.
G. Julia, *Extension nouvelle d’un lemme de Schwarz*. Acta Math. **42** (1920), 349-355.
K. Löwner, *Untersuchungen über schlichte konforme Abbildungen des Einheitskreises. I*. Math. Ann. **89** (1923), 103-121.
R. Osserman, *A sharp Schwarz inequality on the boundary*. Proc. Amer. Math. Soc. **128** (2000), 3513-3517.
G. B. Ren, X. P. Wang, *Julia theory for slice regular functions*. Submitted.
D. Sarason, *Angular derivatives via Hilbert space*. Complex Variables. **10** (1988), 1-10.
D. Sarason, *Sub-Hardy Hilbert Spaces in the Unit Disk*. University of Arkansas Lecture Notes in the Mathematical Sciences, vol. 10. Wiley, New York, 1994.
H. Unkelbach, *Über die Randverzerrung bei konformer Abbildung*. Math. Z. **43** (1938), 739-742.
J. Wolff, *Sur l’itération des fonctions holomorphes dans une région, et dont les valeurs appartiennent à cette région*. C. R. Acad. Sci. **182** (1926), 42-43.
J. Wolff, *Sur une généralisation d’un théorème de Schwarz*. C. R. Acad. Sci. **182** (1926), 918-920.
[^1]: This work was supported by the NNSF of China (11071230), RFDP (20123402110068).
|
---
author:
- 'A. Rouco Escorial [^1]'
- 'J. van den Eijnden'
- 'R. Wijnands'
bibliography:
- 'references.bib'
date: 'Received; accepted'
title: 'Discovery of accretion-driven pulsations in the prolonged low X-ray luminosity state of the Be/X-ray transient GX 304-1'
---
Introduction {#sec:GX304_introduction}
============
In high-mass X-ray binaries, a compact object is accreting from a massive companion (with a mass of $>$10$M_\odot$). The most common systems are the neutron star (NS) Be/X-ray transients in which magnetised NSs (with surface magnetic-field strengths of B$\sim$10$^{12-13}$ G) accrete matter from the decretion disks of their Be-type companions during outbursts (for a review see @Reig2011). These systems can exhibit ‘normal’, type-I outbursts and, in addition, sometimes ‘giant’, type-II outbursts. The type-I outbursts occur at periastron passages when the NS moves through the decretion disk of the Be star and accretes matter. In these outbursts, sources exhibit X-ray luminosities (L$_\textnormal{X}$) of $\sim$10$^{36-37}$erg s$^{-1}$. The type-II outbursts are brighter than the normal ones and can reach luminosities of $\sim$10$^{38-39}$erg s$^{-1}$. Their duration is also longer than type-I outbursts, generally lasting more than an orbital period. The nature of the mechanism(s) behind these type-II outbursts is unclear (for possible models see @Moritani2013 [@Martin2014; @Monageng2017; @Laplace2017]).
The high luminosities exhibited by Be/X-ray transients during outbursts allow for detailed studies of their behaviour. Consequently, their outburst phenomenology is well known. When not in outburst, their luminosities are significantly lower, and studying their behaviours becomes more difficult. However, it is clear that the NS spin and magnetic field play important roles in the phenomenology that these systems display at low luminosities. It is expected that below a certain accretion rate, matter cannot reach the NS surface anymore. This is due to the pressure exerted by the rotating NS magnetic field that expels the matter away; the systems then enter the so-called propeller regime (@Illarionov1975; @Romanova2004; @DAngelo2010). After that, such systems may only exhibit very low luminosities ($< 10^{33}$erg s$^{-1}$).
However, in the case of slow rotating Be/X-ray transients (with typical spin periods P$_\textnormal{spin}$ of several tens of seconds and magnetic field strengths of $10^{12-13}$ G), it has been observed that several sources show an intermediate bright state (L$_\textnormal{X}$$\sim$10$^{34-35}$erg s$^{-1}$) between their outbursts (e.g., @Tsygankov2017a; @Ducci2018; @Reig2018). @Tsygankov2017a introduced a scenario to explain the observed X-ray emission during this state for the slowly rotating Be/X-ray transient GRO J1008-57 (P$_\textnormal{spin}$$\sim$93.6s; @Stollberg1993). For such slowly rotating systems, below a certain accretion rate and before the matter of the accretion disk is ejected by the propeller effect, the temperature of the matter in this disk may drop below the ionisation temperature of hydrogen. This results in a disk with a low degree of ionisation (called a ‘cold disk’), which can penetrate the magnetic field more easily than a hot ionised disk. The cold disk can move relatively close to the NS before it becomes hot again, causing the matter to be channelled by the magnetic field to its poles. This might lead to observable pulsations [@Tsygankov2017b]. So far, the long-term evolution of the cold-disk state has been poorly studied. In this letter, we present our X-ray monitoring campaign of the Be/X-ray transient GX 304-1 when it was likely accreting from a cold disk.
GX 304-1 is located at a distance of 2.01$^{+0.14}_{-0.13}$ kpc[^2]. Its NS has a spin period of $\sim$275s (@McClintock1977 [@Sugizaki2015])[^3] and a surface magnetic-field strength of $\sim$4.7$\times$10$^{12}$G [@Yamamoto2011; @Rothschild2017]. GX 304-1 is characterized by periods of strong activity wherein type-I outbursts recur periodically with a period of $\sim$132.2 days (the orbital period; @Sugizaki2015), and periods wherein the source exhibited hardly any to no activity [@Priedhorsky1983; @Pietsch1986; @Sugizaki2015; @Malacaria2017]. In particular, the source remained dormant from the early 1980s until 2008, when it showed renewed activity [@Manousakis2008]. The last reported outburst occurred in May 2016 [@Nakajima2016; @Sguera2016; @Rouco2016] and since then the source has remained in a low-luminosity state (Fig.\[fig:GX304\_combine\]).
Observations, analysis and results {#sec:GX304_observations_analysis_results}
==================================
We have monitored GX 304-1 using the X-ray Telescope (XRT; for $\sim$84.7ks) aboard the Neil Gehrels *Swift* observatory (hereafter *Swift*) to investigate its behaviour outside outbursts (ObsIDs 35072 and 88780). We have also intensively monitored the source since October 2017, when it became clear that its outburst activity had stopped. We also obtained a *NuSTAR* observation (Section \[subsec:GX304\_timing\]) to study its spectrum above 10 keV and to search for pulsations during its low-luminosity state.
Light curves of GX 304-1 {#subsec:GX304_lightcurve}
------------------------
In Fig.\[fig:GX304\_combine\], we show the light curves of the source obtained using the Burst Alert Telescope (BAT) aboard [*Swift*]{} (from the BAT transient monitor web page[^4]; @Krimm2013) and the XRT (produced with the XRT products web interface[^5]; @Evans2009). In the left inset, we show the first two outbursts exhibited by the source in 2012. The maximum observed BAT count rates were $\sim$0.25 and $\sim$0.20counts cm$^{-2}$ s$^{-1}$ for the first and the second outbursts, respectively (i.e., the second peak of the second outburst). The XRT was used to monitor the evolution of both outbursts and the interval between them. The maximum observed XRT count rates were $\sim$87counts s$^{-1}$ for the first outburst and $\sim$44counts s$^{-1}$ for the second one (i.e., for the first peak of this outburst). After the initial fast decay at the end of the first outburst, the source entered a state in which it decreased at a much slower rate: the XRT count rate dropped from $\sim$3.5 to $\sim$0.37counts s$^{-1}$ in $\sim$79days (between MJD 55970 and 56049). No further count rate evolution could be investigated because the second outburst started. These count rates correspond to 0.5-100 keV luminosities of $\sim2.7 \times 10^{35}$ and $2.8 \times 10^{34}$ erg s$^{-1}$, respectively. These luminosities were calculated using the spectral analysis and luminosities reported in @Tsygankov2018, when the source was even fainter, and then scaled using our observed XRT count rates (see below; this assumes that the spectral shape does not change at such low luminosities, which is consistent with what we can infer from our low-quality XRT data).
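The conversion from XRT count rate to 0.5-100 keV luminosity is a simple linear scaling from a reference observation with a known spectral fit. The sketch below illustrates the procedure; the reference values are placeholders consistent with the numbers quoted in the text, not the exact values from @Tsygankov2018.

```python
def scaled_luminosity(rate, rate_ref, lum_ref):
    """Scale an XRT count rate to a 0.5-100 keV luminosity.

    rate     : observed XRT count rate (counts/s)
    rate_ref : count rate of the reference observation (counts/s)
    lum_ref  : luminosity from the spectral fit of the reference
               observation (erg/s); assumes an unchanged spectral shape.
    """
    return lum_ref * rate / rate_ref

# Illustrative only: a reference pair consistent with the values in the
# text (0.37 counts/s corresponding to ~2.8e34 erg/s).
print(scaled_luminosity(3.5, rate_ref=0.37, lum_ref=2.8e34))
```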
![image](GX_general_newaxes_v3.ps){width="2\columnwidth"}
Following the two outbursts reported above, a brighter one was detected around MJD 56234 (Fig.\[fig:GX304\_combine\]; with peak BAT count rate of $\sim$0.41counts cm$^{-2}$ s$^{-1}$). After this outburst, several additional outbursts were observed, but none of them were as bright as the 2012 ones (Fig.\[fig:GX304\_combine\]). No XRT data were obtained for several years during this period. However, on February 5, 2016, (MJD 57423) we obtained additional XRT observations after we noticed a small outburst in the BAT on MJD 57415 (peaking at $\sim$0.89$\times$10$^{-2}$counts cm$^{-2}$ s$^{-1}$; Fig.\[fig:GX304\_combine\], middle inset). This outburst was followed by a brighter outburst between MJD 57510 and 57550. From the middle inset of Fig.\[fig:GX304\_combine\], we can see that the source behaviour between these two outbursts is similar to what we observed in 2012. The observed XRT count rate decreased from $\sim$1.5 (on MJD 57423) to $\sim$0.36 counts s$^{-1}$ (on MJD 57506) in $\sim$83 days (corresponding 0.5-100 keV luminosities of $\sim$11.5 and $\sim$2.8 $\times 10^{34}$ erg s$^{-1}$; using the method described earlier). This drop in count rate is lower than the one observed in-between the 2012 outbursts due to the lower count rate at the start of this phase. However, the end count rates are remarkably similar. We note that the overall trend during the 2016 low-luminosity state appears less smooth and with more variability, than what we observed in 2012.
After June 2016, GX 304-1 did not exhibit any detectable outbursts. When it was clear that the source indeed was not exhibiting outbursts anymore (after a few orbital cycles), we started an additional XRT monitoring campaign (starting on MJD 58011; September 15, 2017) to investigate the overall behavior of the source (i.e., to determine if the source exhibited any increase in activity at periastron, and to determine if it would decay to fainter levels than previously observed). The results of our campaign are shown in the right inset in Fig.\[fig:GX304\_combine\]. We indeed observed the source at lower count rates than ever seen before, but we did not observe a clear overall trend in activity level (neither a decrease nor an increase). The source is quasi-stable (for over a year now) with count rates of $\sim$1-2.5$\times$10$^{-1}$counts s$^{-1}$ with only a factor of 2-3 variability (this corresponds to 0.5-100 keV luminosities of $\sim$0.8-1.9$\times10^{34}$erg s$^{-1}$; determined using the method outlined above). Although it appears that the count rate increased slightly during the several periastron passages that we monitored, similar count rate increases were also observed at other orbital phases (i.e., also at apoastron). Therefore, these fluctuations could just be random occurrences. Moreover, we planned our [*NuSTAR*]{} observation (Section \[subsec:GX304\_timing\]) at apoastron (to make sure the source was not in outburst) and we had several XRT observations scheduled close in time. The XRT count rate increased from $\sim$1.3$\times$10$^{-1}$counts s$^{-1}$ (this XRT observation was simultaneous with our [*NuSTAR*]{} one) to 2.6$\times$10$^{-1}$counts s$^{-1}$ within only a day.
Timing analysis of the *NuSTAR* observation {#subsec:GX304_timing}
-------------------------------------------
We observed GX 304-1 using *NuSTAR* (@Harrison2013) on June 3 (05:56:09 UTC), 2018, for $\sim$50ks with both FPMA and FPMB detectors (ObsID 90401326002). This observation was obtained to investigate the spectral behaviour of the source above 10 keV (reported in @Tsygankov2018) and to search for pulsations. We ran the NUPIPELINE task (with SAAMODE=strict and TENTACLE=yes, due to the slightly high background event rates) to obtain clean event files and used the BARYCORR tool to perform the barycenter correction (using version 82 of the *NuSTAR* clock correction). Finally, we obtained the light curves by means of NUPRODUCTS. For both detectors, we used a circular region of 30 arcsec for extracting the source photons, and a 60 arcsec circular region for the background from a different chip (because of the background gradient that affected the chip where the source was located). Although we produced background-subtracted light curves, we restricted them to the 3–30keV energy range because the background dominates over the source above that energy.
![The *NuSTAR* combined FPMA and FPMB light curve (3-30 keV) folded on the best fitting period of **$P = 275.12$s.** The count rate has been normalized by dividing it by the mean count rate of $0.313$ count s$^{-1}$ (indicated by the red dashed line). The green dotted line shows the best fitting model consisting of a fundamental and one harmonic at twice the frequency.[]{data-label="fig:GX304_folded_lc"}](Folded_LC.ps){width="\columnwidth"}
We searched for pulsations in the [*NuSTAR*]{} observation using the phase folding method introduced in @Leahy1983, and used it as implemented in the <span style="font-variant:small-caps;">ftool</span>[^6] <span style="font-variant:small-caps;">efsearch</span>. Using a custom <span style="font-variant:small-caps;">python</span> script, we folded the light curve on a range of periods around the known period of $\sim$275s. The best fit period is defined as the period for which the $\chi^2$ of the folded light curve with respect to a constant is maximum. We clearly detected pulsations in the combined FPMA and FPMB light curve at P=275.12$\pm$0.02s (with 1$\sigma$ error). The pulsations are present in both individual detectors and are consistent with the known period of GX 304-1. The error on the period was determined following the approach in @Brumback2018a [@Brumback2018b]: using the best-fit period, we simulated $500$ fake sets of an FPMA and an FPMB light curve. For each set, we repeated our analysis and measured the period in the simulated data. We adopted the standard deviation of the obtained best-fit periods as the 1$\sigma$ error (as quoted above) on the measured period.
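For illustration, the epoch-folding statistic described above can be computed as in the following sketch (a simplified stand-in for the actual efsearch-style analysis; the number of phase bins and the trial-period grid are arbitrary choices, and each phase bin is assumed to be well populated).

```python
import numpy as np

def epoch_folding_search(times, rates, trial_periods, n_bins=16):
    """Return the chi^2 of the folded profile against a constant
    for each trial period; the best-fit period maximizes chi^2."""
    chi2 = np.zeros(len(trial_periods))
    for i, period in enumerate(trial_periods):
        phases = (times / period) % 1.0
        bins = (phases * n_bins).astype(int)
        profile = np.array([rates[bins == b].mean() for b in range(n_bins)])
        errors = np.array([rates[bins == b].std() / np.sqrt((bins == b).sum())
                           for b in range(n_bins)])
        chi2[i] = np.sum((profile - rates.mean())**2 / errors**2)
    return chi2

# Example: search around the known ~275 s spin period.
# trial_periods = np.arange(274.0, 276.0, 0.01)
```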
The light curve folded on the best-fitting period is shown in Fig.\[fig:GX304\_folded\_lc\]. The profile can be described by a combination of a cosine function and a harmonic at half the period. A fit with such a model is shown in Fig.\[fig:GX304\_folded\_lc\] as the green dotted line. This combined model implies a fundamental amplitude of $19.3\%$, while the harmonic contributes $8.6\%$. Taking a model-independent view, from Fig.\[fig:GX304\_folded\_lc\] we conclude that the waveform varies over a range of $0.72$ to $1.25$ times the mean count rate.
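The folded profile is modelled as a fundamental plus one harmonic at twice the frequency; a minimal fit of that model could look as follows (variable names and initial guesses are illustrative only).

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse_model(phase, mean, a1, phi1, a2, phi2):
    # Fundamental cosine plus one harmonic at twice the frequency,
    # in units of the mean (normalized) count rate.
    return (mean
            + a1 * np.cos(2 * np.pi * (phase - phi1))
            + a2 * np.cos(4 * np.pi * (phase - phi2)))

# phase, norm_rate: the folded, normalized light curve.
# popt, pcov = curve_fit(pulse_model, phase, norm_rate,
#                        p0=[1.0, 0.2, 0.0, 0.1, 0.0])
```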
Discussion {#sec:GX304_discussion}
==========
We have reported on [*Swift*]{}/XRT observations of the Be/X-ray transient GX 304-1, which harbours a slow X-ray pulsar ($\sim$275s), obtained when the source was in-between type-I outbursts in 2012, and after these outbursts ceased in 2016. Additionally, we report on the timing analysis of our *NuSTAR* observation in the latter period. At all times, the source was clearly detected at luminosities of $\sim$10$^{34-35}$erg s$^{-1}$, which are significantly lower than the ones observed during outbursts, but still relatively high compared to the much fainter luminosities observed in other Be/X-ray transients when not in outburst ($\sim10^{32-33}$ erg s$^{-1}$; @Tsygankov2017b and references therein). However, such intermediate bright states have also been detected in several other, slowly spinning systems (e.g., @Haberl2016 [@Tsygankov2017a]). So far, only for one system, GRO J1008-57, this state has been equally well-monitored [@Tsygankov2017a] as we did for GX 304-1. Remarkably, the behaviour is very similar in both sources: both exhibited luminosities in the range of $\sim10^{34-35}$erg s$^{-1}$, which slowly declined between the adjacent type-I outbursts (e.g., clearly visible when comparing the left inset in Fig.\[fig:GX304\_combine\] with Fig.1a in @Tsygankov2017a). Therefore, it is quite likely that this behaviour is caused by the same physical mechanism in both sources.
@Tsygankov2017a suggested that during this state the X-ray emission originates from accretion of matter down to the NS through a cold, non-ionised disk. This state can only occur for systems with slow spinning (with spin of several tens of seconds or slower) and magnetised ($\sim$10$^{12-13}$G) NSs, since only for such systems the accretion disk would become non-ionised before the propeller effect is initiated (see Section \[sec:GX304\_introduction\]). Since GX 304-1 spins slowly at $\sim$275s and its magnetic field is $\sim$4.7$\times$10$^{12}$G, we suggest that GX 304-1, similar to GRO J1008-57, was accreting from such a disk between its type-I outbursts (as was proposed by @Tsygankov2017a, see their Fig. 1a).
However, alternative physical scenarios have been proposed to explain the low-luminosity behaviour in Be/X-ray transients: the two main ones are the cooling emission from an accretion-heated NS crust and residual low-level accretion onto the NS surface even when the system is in the propeller regime. In the NS crust cooling scenario, the crust would have been heated by the accretion of matter during the preceding outburst and in between outbursts this crust would cool resulting in observable emission. However, we observed GX 304-1 at luminosities of $\sim10^{34}$ erg s$^{-1}$ which are, at least, one order of magnitude higher than the ones observed in systems that might indeed have exhibited such crust cooling behaviour (L${_\textnormal{X}}$$\sim$10$^{32-33}$erg s$^{-1}$; e.g. @Wijnands2016 and @Rouco2017; see the review by @Wijnands2017). In addition, we observed short term variability (on time scales of days) for GX 304-1 which is not expected in the cooling scenario. Therefore, we do not think that we observed NS crust cooling emission in GX 304-1.
The other main alternative model to explain low-level emission in between outbursts is that in which the systems have entered the propeller regime but in which some matter might still reach the surface of the NS (i.e., due to “leakage” of matter through the magnetosphere, although how exactly this would work is not fully understood; e.g., @Orlandini2004; @Mukherjee2005; @Rothschild2013; @Doroshenko2014). For this model to work, the source has to have entered the propeller regime, but this would only happen for GX 304-1 at luminosities of $< 2 \times 10^{32}$ erg s$^{-1}$ (using Eq. 4 of @Tsygankov2017a with a NS mass of 1.4 M$_\odot$ and a radius of 10 km), which is significantly lower than the actual luminosities we observed for this source. Therefore, we think that this is also not a viable model to explain our observed emission, and we conclude that the cold-disk hypothesis is the most compelling explanation for the low-luminosity state in GX 304-1.
In the case of GX 304-1, we have now also determined that this low-luminosity, cold-disk state is a recurrent phenomenon because we have now observed it in-between two sets of type-I outbursts (Fig.\[fig:GX304\_combine\]). During the outbursts, the accretion disk around the NS most likely fills up again to such a degree that, once the outbursts are over, the disk around the NS contains enough matter for the source to enter the cold-disk phase. In this respect, GX 304-1 is a very interesting source because, after June 2016, the system did not exhibit any outbursts anymore. One would expect that if the accretion disk around the NS is not being fed with matter in the absence of outbursts, the cold disk would slowly empty since all the matter would eventually be accreted onto the NS. Therefore, in this scenario, one would expect that the luminosity would slowly decrease during this phase (until all the matter in the cold disk is consumed and other emission mechanisms might take over; see also the discussion in @Tsygankov2017b).
To test this hypothesis, we set up a XRT campaign to investigate the long-term behaviour of GX 304-1 after it was clear that the source did not exhibit any outbursts anymore. Indeed, we found that the luminosity had decreased by a factor of 2-3 compared to that observed in the cold-disk phase between type-I outbursts. However, we did not observe an overall decay trend as we expected. Instead, the source has now been in a quasi-stable state for over one year, in which its count rate only varies by a factor of 2-3. This variability does not seem to be correlated with periastron passages because similar variability is also observed at other orbital phases (i.e, at apoastron). The reasons for this quasi-stable state and the observed variability are unclear. It might be that during periastron passages the cold disk continues to be replenished, either because matter is transferred from the decretion disk or due to the wind of the Be star. In any case, it remains unclear why this extra matter does not cause a full type-I outburst or, at least, noticeable increases in luminosity at periastron. We continue our monitoring campaign to further study this enigmatic state in GX 304-1.
Although the luminosities observed for GX 304-1 and GRO J1008-57 during the cold-disk state are so high that only accretion down to the inner regions could cause the emission, it had so far not been proven that matter indeed reached their NSs. Accretion down to the surface would be demonstrated if pulsations were detected, and @Tsygankov2017b predicted that such a phenomenon should be observed. Our detection of X-ray pulsations at the spin period of GX 304-1 during our *NuSTAR* observation in the quasi-stable state of the source (performed at apoastron and, by coincidence, at one of the lowest observed luminosities) confirms that, indeed, matter is still accreted all the way down to the surface. Since this quasi-stable state is, most likely, just an extension of the cold-disk phase, we now have strong evidence that matter is accreted down to the surface when these systems are in the cold-disk phase.
Our observed period is consistent with the one expected from the general spin-down trend that the source seems to follow, which started at the end of the strong type-I outburst activity in 2012 and continued to the present day (see the [*Fermi*]{}/GBM data as linked in footnote\[foonoteFermi\]; see @Postnov2015 and @Sugizaki2015 for the spin evolution until 2013). The strength of the pulsations ($\sim20$%) and the pulse profile are similar to the ones observed during outbursts [@Devasia2011; @Malacaria2015; @Jaisawal2016]. However, the pulse profiles vary strongly during and between outbursts, and with energy. Therefore, it is unclear whether the observed similarities between outburst and the quasi-stable state are due to the same underlying (inner) accretion geometry or, purely, due to chance. The fact that the source is currently in a quasi-stable state allows for additional observations to study the pulsations in more detail and to better understand accretion through a cold disk. In addition, pulsations are also expected in the cold-disk phase of other slowly rotating Be/X-ray transients (i.e., in GRO J1008-57), therefore these systems are perfect targets to study this stage further.
\[sec:GX304\_acknowledgements\]
ARE and RW acknowledge support from a NWO Top grant, module 1, awarded to RW. JvdE is supported by NWO. The authors thank Fiona Harrison and the *NuSTAR* team for rapidly approving and executing our observation. We also thank the [*Swift*]{} team (in particular Neil Gehrels and Brad Cenko) for granting and scheduling our (many) XRT observations.
\[GX304\_references\]
[^1]: A.RoucoEscorial@uva.nl
[^2]: The source is known with source Identifier 863533199843070208 in the Second *Gaia* Data Release, GDR2 [@Gaia2018]. From this we estimated the distance following @Bailer-Jones2018.
[^3]: See <https://gammaray.nsstc.nasa.gov/gbm/science/pulsars/lightcurves/gx304m1.html> for the most recently observed spin period of the source (using the [*FERMI*]{} Gamma-ray Burst monitor). \[foonoteFermi\]
[^4]: <https://swift.gsfc.nasa.gov/results/transients/weak/GX304-1/>
[^5]: <http://www.swift.ac.uk/user_objects/>
[^6]: <https://heasarc.gsfc.nasa.gov/ftools/ftools_menu.html>
---
author:
- Di Wang
- David M Kahn
- Jan Hoffmann
bibliography:
- 'db.bib'
- 'more.bib'
title: 'Raising Expectations: Automating Expected Cost Analysis with Types'
---
<ccs2012> <concept> <concept\_id>10011007.10011006.10011008</concept\_id> <concept\_desc>Software and its engineering General programming languages</concept\_desc> <concept\_significance>500</concept\_significance> </concept> <concept> <concept\_id>10003456.10003457.10003521.10003525</concept\_id> <concept\_desc>Social and professional topics History of programming languages</concept\_desc> <concept\_significance>300</concept\_significance> </concept> </ccs2012>
Introduction {#sec:intro}
============
Probabilistic programming [@JCSS:Kozen81; @book:MM05] is an effective tool for customizing probabilistic inference [@kn:CGH17; @misc:dippl; @PLDI:MSH18] as well as for modeling and analyzing randomized algorithms [@TassarottiH19], cryptographic protocols [@POPL:BGB09], and privacy mechanisms [@POPL:BKO12]. In this paper, we study probabilistic programs as models of the execution cost (or resource use) of programs. Execution cost can be defined by a cost semantics or a programmer-defined metric. For such a cost model, a probabilistic program defines a distribution of cost that depends on the distribution of the inputs as well as the probabilistic choices that are made in the code.
The problem of statically analyzing the cost distribution of probabilistic programs has attracted growing attention in recent years. Kaminski et al. [@ESOP:KKM16; @LICS:OKK16] have built on the work of Kozen [@JCSS:Kozen81], studying weakest-precondition calculi for deriving upper bounds on the expected worst-case cost of imperative programs, as well as reasoning about lower bounds [@POPL:HKG20]. It has been shown that this calculus can be specialized to automatically infer constant bounds on the sampling cost of non-recursive Bayesian networks [@ESOP:BKKM18] and polynomial bounds on the worst-case expected cost of arithmetic programs [@PLDI:NCH18; @POPL:CFN16; @CAV:CFG16]. The key innovation that enables the inference of symbolic bounds is a template-based approach that reduces bound inference to efficient *linear-program* (LP) solving, a reduction which has previously been applied to non-probabilistic programs [@PLDI:CHS15; @CAV:CHR17]. This technique has been extended to best-case bounds and non-monotone cost [@PLDI:WFG19] as well as to incorporate higher-moment reasoning for deriving tail bounds using linear [@WangHR20] and non-linear [@TACAS:KUH19] constraint solving.
The only existing technique for analyzing the expected cost of probabilistic (higher-order) functional programs is the recent work of Avanzini et al. [@LICS:ALG19]. It applies an affine refinement type system, called $\ell$RPCF, to derive bounds on the expected worst-case cost for an affine version of PCF [@Plotkin77]. $\ell$RPCF can be seen as a probabilistic version of d$\ell$PCF [@LICS:LG11]. While the refinement types of $\ell$RPCF are expressive and flexible, a disadvantage is that the complexity of the corresponding refinement constraints hampers type inference. It is unclear whether type checking $\ell$RPCF is decidable.
This article presents the first automatic analysis of worst-case bounds on the expected cost of probabilistic functional programs. It is based on *automatic amortized resource analysis* (AARA) [@POPL:HJ03; @POPL:HAH11], a type system for inferring worst-case bounds. The expressivity of AARA’s type-based approach for probabilistic programs goes beyond existing techniques for imperative integer programs in the following ways:
1. The analysis infers expected cost bounds for higher-order programs.
2. Bounds can be functions of the sizes of values of (potentially nested) inductive types.
3. Bounds can be functions of symbolic probabilities.
In addition, AARA for probabilistic programs preserves many advantageous features of classical AARA for deterministic programs, which include
- efficient type checking (linear in the size of the type derivation),
- reduction of type inference for *polynomial bounds* to *linear programming*,
- use of the potential method to amortize operations with varying expected cost, and
- natural compositionality, as types summarize the cost behavior of functions.
Nonetheless, while AARA for deterministic programs naturally derives bounds on the high-water mark of non-monotone resources that can become available during evaluation (like memory), this is not the case for AARA for probabilistic programs. Reasoning about high-watermark resource usage of probabilistic programs is in fact an open problem even for manual reasoning systems for first-order languages. This problem is out of the scope of this article and we limit the development to monotone resources like time. The technical difficulties with non-monotone resources are discussed in more detail in \[sec:semantics\].
To focus on the novel ideas, we present the analysis for a simple probabilistic functional language with probabilistic branching and lists (\[sec:semantics\]) with linear potential functions (\[sec:type\]). However, the results carry over to multivariate polynomial potential functions and user-defined inductive data structures. We implemented the analysis as an extension of Resource Aware ML (RaML) [@POPL:HDW17] that we call pRaML (\[sec:goat\]).
The main technical innovations are the introduction of a type rule for probabilistic branching, and a new type for symbolic probabilities (\[sec:overview,sec:type\]). While these new features are fairly intuitive, proving their soundness with respect to a cost semantics is not. The existing proof method for deterministic AARA does not directly generalize to the probabilistic setting because of the complexities introduced by a probabilistic cost semantics. To address the challenges of the probabilistic setting, we present a novel soundness proof with respect to a probabilistic operational cost semantics based on Borgstr[ö]{}m et al.’s trace-based and step-indexed-distribution-based semantics [@ICFP:BLG16] (\[sec:sound\]). The details are discussed in \[sec:semantics\].
We evaluate the effectiveness of pRaML by analyzing textbook examples (\[sec:goat\]) and by exploring novel problem domains (\[sec:app\]). The first domain (\[sec:sample\]) is the implementation and analysis of discrete probability distributions. Specifically, we use pRaML to analyze the *sample complexity* of the distributions, i.e., on average, how many steps a program needs to produce a sample from the target distribution. Low sample complexity has recently become an important criterion for efficient sampler implementations, as many probabilistic inference methods require billions of random samples [@misc:Djuric19]. We also verify some more complex fractional bounds in pRaML using a scaled model. The second domain (\[sec:model\]) is the estimation of average-case cost of functional programs on a specific input distribution as a three step process. First, we gather statistics on the branching behavior of conditional branches by evaluating the program on small inputs that are representative for the input distribution. Second, the conditionals are replaced with probabilistic branches that mirror the observed branching behavior on the small inputs. Third, the resulting program is analyzed with pRaML to determine a symbolic bound on the expected cost of the resulting probabilistic program for all input sizes.
In summary, we make the following *contributions*:
1. Design of a novel type-based AARA for probabilistic programs
2. Type soundness proof with respect to a probabilistic operational cost semantics
3. Implementation as an extension of RaML
4. Application of RaML to automatically analyze sample complexity
5. Automatic average-case analysis that combines the use of RaML with empirical statistics
Topic Overview {#sec:overview}
==============
#### AARA
The type system of *automatic amortized resource analysis* (AARA) is a pre-existing framework for inferring cost bounds for deterministic functional programs [@POPL:HJ03; @POPL:JHL10; @POPL:HDW17]. It imbues its types with potential energy so as to perform the *physicist’s method* (or *potential method*) of amortized analysis [@kn:Tarjan85]. When performing type inference, the system generates linear constraints on this potential that, when solved, provide the coefficients of polynomials or other functions. These functions express concrete (non-asymptotic) bounds on worst- or best-case [@SP:NDF17] execution costs, parameterized by input size. In more detail, the potential method works as follows. We say that $\Phi : \mathsf{State} \to \bbQ_{\ge 0}$ is a valid potential function if, for all states $S \in \mathsf{State}$ and operations $o:S \to S$, the following holds. $$\Phi(S) \geq 0 \hspace{3em} \text{and} \hspace{3em} \Phi(S) \ge \mathit{cost}(S,o(S)) + \Phi(o(S)).$$ The second inequality states that the potential of the current state is sufficient to pay for the cost of the transition from $S$ to $o(S)$ and potential of the next state. It then follows that the potential of the initial state establishes an *upper* bound on the *worst-case* cost of a sequence of operations.
The AARA type system is designed to automatically assign such potential functions to functional programs, where we view evaluation steps as operations on machine states of an abstract machine. Automation is enabled by fixing the format of potential functions to linear combinations of base functions, and then incorporating them into the types of values. Consider for example the function ${\text{\sl exists}}$ from the OCaml List module in \[fig:exists\]. We model its cost behaviour using explicit ${\text{\sl tick}}~q$ expressions that consume $q \in \mathbb{Q}_{\geq 0}$ resources when evaluated. The function incurs a cost of $1$ in every recursive call, and therefore the worst-case cost is equal to the length of the list ${\text{\sl lst}}$ in addition to the cost of the calls to the function ${\text{\sl pred}}$. To automatically derive this bound in linear AARA we assign the following type template where $q_0,q_1,q,p,r$ and $r'$ are yet unknown non-negative coefficients. $${\text{\sl exists}} : \langle \langle \tau, r \rangle \to \langle \mathsf{bool}, r' \rangle, q_0 \rangle \to \langle L^{p}(\tau), q_1 \rangle \to \langle \mathsf{bool}, q \rangle$$
A valid instantiation of the potential annotation would for instance be the following type. $${\text{\sl exists}} : \langle \langle \tau, 0 \rangle \to \langle \mathsf{bool}, 0 \rangle, 0 \rangle \to \langle L^{1}(\tau), 0 \rangle \to \langle \mathsf{bool}, 0 \rangle$$
If we ignore the potential annotations in $\tau$ and the cost of evaluating the function ${\text{\sl pred}}$, then this type expresses that the cost of evaluating ${\text{\sl exists}}$ is $1 \cdot |{\text{\sl lst}}|$, as marked by requiring a list argument with 1 unit of potential per element. Another valid typing is $${\text{\sl exists}} : \langle \langle \tau, 2 \rangle \to \langle \mathsf{bool}, 0 \rangle, 0 \rangle \to \langle L^{3}(\tau), 0 \rangle \to \langle \mathsf{bool}, 0 \rangle \; .$$ It now expresses that the cost of evaluating ${\text{\sl exists}}$ is $3 \cdot |{\text{\sl lst}}|$ if the cost of evaluating ${\text{\sl pred}}$ is raised to $2$. The ${\text{\sl pred}}$ function here is typed to take 2 units of potential to run, but this is balanced by each element of the list argument being paired with 3 units of potential, 2 more than previously.
In general, type inference constrains this type’s annotation variables with $p \geq r +1$ and $q_1 \geq q$, and leaves the other annotations unconstrained. This aids in the compositionality of the approach, as the specific constants chosen can be adapted to the arguments, including arguments that are themselves functions like ${\text{\sl pred}}$ here.
To exemplify such compositionality, consider some function that merely iterates over a list, consumes 1 resource every iteration, and then returns the list. It can be typed $\langle L^{p}(\tau), 0 \rangle \to \langle L^{q}(\tau), 0 \rangle$ where $p \geq q+1$. If we chain two applications of this function to some list ${\text{\sl lst}}$, then we might instantiate the type of the inner application with $p=2,q=1$, and the outer with $p=1,q=0$, composing the costs naturally. In this case, we would also type ${\text{\sl lst}}$ as $L^{2}(\tau)$, as made explicit in the accounting below.
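Reading these types as potential functions makes the bookkeeping explicit: the typing ${\text{\sl lst}} : L^{2}(\tau)$ equips the list with $2|{\text{\sl lst}}|$ units of potential, which suffices for both passes, $$2|{\text{\sl lst}}| \;=\; \underbrace{|{\text{\sl lst}}|}_{\text{cost of the inner pass}} \;+\; \underbrace{|{\text{\sl lst}}|}_{\Phi(\text{result} \,:\, L^{1}(\tau))}, \qquad |{\text{\sl lst}}| \;=\; \underbrace{|{\text{\sl lst}}|}_{\text{cost of the outer pass}} \;+\; \underbrace{0}_{\Phi(\text{result} \,:\, L^{0}(\tau))},$$ since the returned list has the same length as the argument.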
Of course, AARA cannot do the impossible of successfully analyzing all programs. AARA uses structural reasoning methods that cannot pick up on semantical properties that the program may depend on, like Peano arithmetic. Further, not all resource usage can be accurately expressed in a given class of resource functions. For instance, polynomials will over-approximate logarithms, and simply cannot express exponentials. The resource functions we present in this paper are linear, but we make use of polynomial resource functions in our implementation.
[0.32]{}
``` {xleftmargin="0pt"}
let rec exists pred lst =
match lst with
| [] -> false
| hd::tl ->
let _ = tick 1 in
if pred hd
then true
else exists pred tl
```
[0.32]{}
``` {xleftmargin="0pt"}
let rec bernoulli lst =
match lst with
| [] -> false
| hd::tl ->
let _ = tick 1 in
match flip 0.5 with
| H -> true
| T -> bernoulli tl
```
[0.34]{}
``` {xleftmargin="0pt"}
let rec rdwalk lst =
match lst with
| [] -> ()
| p::ps ->
let _ = tick 1 in
match flip p with
| H -> rdwalk (0.2::0.4::ps)
| T -> rdwalk ps
```
#### Probabilistic programming
In this paper, we extend AARA to deriving bounds on the expected cost of probabilistic programs. In contrast to a deterministic program, a probabilistic program may not always evaluate to the same value (if any), but rather to a distribution over values and divergence. Similarly, the evaluation cost of a probabilistic program is given by a distribution.
Consider for example the function ${\text{\sl bernoulli}}$ in \[fig:bernoulli\]. It is similar to the function ${\text{\sl exists}}$, but the conditional is replaced with the probabilistic construct ${\text{\sl flip}}~0.5$. Intuitively, this construct means that we flip a coin and evaluate the heads or tails branch based on the outcome. In probabilistic programming, we assume that such flips are truly random (as opposed to an implementation that may rely on a pseudorandom number generator). As a result, the function ${\text{\sl bernoulli}}$ describes a Bernoulli process across the elements of an input list. It terminates with probability $1$ and has the same linear worst-case cost as $\mathit{exists}$, namely $1 \cdot |{\text{\sl lst}}|$. However, the expected cost of ${\text{\sl bernoulli}}$ is only $1$.
For an example with a more interesting expected cost, consider the function ${\text{\sl rdwalk}}$ in \[fig:rdwalk\]. Its argument is a list of probabilities that are used, one after another, to determine the odds in a probabilistic branch that either pops the head off the list (in the tails case) or replaces the head with the two new probabilities $0.2$ and $0.4$ (in the heads case). The random walk consumes $1$ in each iteration and terminates if the argument list is empty. One can show that the function terminates with probability 1 and that the expected cost, as a function of the argument $[p_1,\ldots,p_n]$, is $$n + \sum_{1 \leq i \leq n} 5 p_i \; .$$
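As a quick sanity check of this bound, write $C(\ell)$ for the expected cost of ${\text{\sl rdwalk}}$ on a list $\ell$. The program gives the recurrence $C([\,]) = 0$ and $C(p\,{::}\,ps) = 1 + p \cdot C(0.2\,{::}\,0.4\,{::}\,ps) + (1-p) \cdot C(ps)$, and the stated bound $\Phi([p_1,\ldots,p_n]) = n + \sum_i 5p_i$ satisfies exactly this recurrence: $$1 + p\,\Phi(0.2\,{::}\,0.4\,{::}\,ps) + (1-p)\,\Phi(ps) = 1 + p\,(2 + 5 \cdot 0.6) + \Phi(ps) = 1 + 5p + \Phi(ps) = \Phi(p\,{::}\,ps).$$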
This is an example of a program with non-terminating execution that may nonetheless have expected costs that can be bounded. If only finite cost is accrued on non-terminating execution, nontermination may even occur with positive probability and still yield a finite bound. Conversely, programs that terminate with probability $1$ may still have unbounded expected cost, e.g., a symmetric random walk over natural numbers that stops at 0 [@book:MM05].
#### AARA for Expected Cost
Now reconsider the potential method in the presence of probabilistic operations, that is, the cost and the next state of an operation are given by distributions. Let $o(S)$ denote the probability distribution over possible next states induced by $o$ operating on $S$. One can derive bounds on the worst-case *expected* cost by requiring that the following inequality for the potential function holds over all states $S$ and operations $o$. We use the notation $\bbE_{S' \sim o(S)}$ (defined in \[sec:semantics\]) to weight expected cost over states $S'$ by the probability given by $o(S)(S')$. $$\Phi(S) \ge \bbE_{S' \sim o(S)}(\mathit{cost}(S,S') + \Phi(S')) = \bbE_{S' \sim o(S)}(\mathit{cost}(S,S')) + \bbE_{S' \sim o(S)} (\Phi(S')) ,$$ The intuitive meaning of the inequality is that the potential $\Phi(S) \geq 0$ is sufficient to pay for the expected cost of the operation $o$ from the state $S$, and the expected potential of the next state $S'$ with respect to the probability distribution $o(S)$.
Further, if for some operation $o'$ we have $\Phi(S') \ge \bbE_{S'' \sim o'(S')}(\mathit{cost}(S',S'')) + \bbE_{S'' \sim o'(S')}(\Phi(S''))$ for each state $S'$ that could succeed $S$ under $o$, then we can *compose* the reasoning for $o$ and $o'$ as follows. $$\begin{aligned}
\Phi(S) & \ge \bbE_{S' \sim o(S)}(\mathit{cost}(S,S')) + \bbE_{S' \sim o(S)} (\Phi(S')) \\
& \ge \bbE_{S' \sim o(S)}(\mathit{cost}(S,S')) + \bbE_{S' \sim o(S)} \lrsq{ \bbE_{S'' \sim o'(S')}(\mathit{cost}(S',S'')) + \bbE_{S'' \sim o'(S')}(\Phi(S'')) } \\
& = \bbE_{S' \sim o(S),S''\sim o'(S')}(\mathit{cost}(S,S')+\mathit{cost}(S',S'')) + \bbE_{S' \sim o(S), S'' \sim o'(S')}(\Phi(S'')).\end{aligned}$$ Thus, the potential $\Phi(S)$ is sufficient to cover the expected cost of operations $o$ and $o'$, as well as the expected potential of the final state. This can be sequenced indefinitely to cover all operations of an entire program. A valid potential assignment for the initial state of the program then provides an *upper* bound on the *expected* total cost of running the program.
In \[sec:type\], we extend the AARA type system to support this kind of potential-method reasoning while preserving the benefits of AARA such as compositionality and reduction of type inference to LP solving. For example, our probabilistic extension to AARA can type the code of the function ${\text{\sl bernoulli}}$ in \[fig:bernoulli\] as $${\text{\sl bernoulli}} : \langle L^{0}(\tau), 1 \rangle \to \langle \mathsf{bool}, 0 \rangle$$ where the input can be typed as a list with $0$ units of potential per element (assuming $\tau$ does not assign potential). To cover the expected cost, it only needs 1 available potential unit per run, indicated by the 1 paired with the input type. When typing the probabilistic ${\text{\sl flip}}$, this single unit of potential can pay for the expected cost of the two equally-likely branches: the $H$ branch costs 0, the $T$ branch costs 2 (1 each for the recursive call and for ${\text{\sl bernoulli}}$ to consume), and they average to 1. As ${\text{\sl bernoulli}}$ can be typed to consume 1 unit of potential, the upper bound AARA finds is exact.
The functions ${\text{\sl exists}}$ and ${\text{\sl bernoulli}}$ form an example of the automatic average-case estimation algorithm that we introduce in \[sec:model\]. Assume that you want to run ${\text{\sl exists}}$ on a certain distribution of inputs and you want to determine the average cost of ${\text{\sl exists}}$ on this distribution. To approximately answer this question, we collapse code like $\mathit{exists}$ into code like $\mathit{bernoulli}$ and use pRaML to estimate that the average cost is $1$. In this case, such a collapse would be justified by finding empirically that $\mathit{pred}$ holds with probability $0.5$.
The technical innovation that makes such typings possible is a new typing rule for probabilistic branching. Another innovation is the introduction of the type $\vvmathbb{P}$ for probabilities. The introduction form for values of type $\vvmathbb{P}$ simply takes a rational number $0 \leq p \leq 1$ and the elimination form is a probabilistic branch. We can assign potential $$\Phi(p : \vvmathbb{P}^{q_H}_{q_T}) \defeq q_H \cdot p + q_T \cdot (1-p)$$ to a value $p$ of type $\vvmathbb{P}$. The potentials $q_H$ and $q_T$ then become available in the heads and tails cases, respectively, of the probabilistic branching.
Consider for example the function ${\text{\sl rdwalk}}$ in \[fig:rdwalk\] again. Our probabilistic analysis can automatically derive the typing $${\text{\sl rdwalk}} : \langle L^{1}(\vvmathbb{P}^{5}_{0}), 0 \rangle \to \langle \mathsf{unit}, 0 \rangle \; .$$ The potential of the argument, $$\Phi([p_1,\ldots,p_n] : \langle L^{1}(\vvmathbb{P}^{5}_{0}), 0 \rangle) = n + \sum_{1 \leq i \leq n} 5 p_i \; ,$$ corresponds to the exact bound on the expected cost.
Here we present these novel ideas for a simple functional language with lists and linear potential functions. However, the results carry over to user-defined inductive types and multivariate polynomial potential functions of RaML [@POPL:HDW17] that we use in the implementation. The main theorem of this paper (see \[sec:sound\]) states that the expected cost bounds are sound, with respect to a step-indexed distribution-based operational semantics inspired by Borgstr[ö]{}m et al.’s semantics for the probabilistic lambda calculus [@ICFP:BLG16]. We then extend the semantics with *partial evaluations* to capture the resource behavior of non-terminating executions of a probabilistic program. This novel extension enables an improved soundness result, which implies that expected bounds on run-times ensure termination with probability 1.
Implementation and Examples {#sec:goat}
===========================
In this section we present some non-trivial probabilistic models which our implementation pRaML can handle in the same manner as described in previous sections. We follow up with a collection of experimental benchmarks from typing variants of our examples, and other examples from literature.
For these complex examples, we use our implementation pRaML of the probabilistic AARA type system extended to *multivariate polynomial* potential functions with user-defined data types. While the potential functions supported in linear AARA are already multivariate, as each addend can depend on a different input size, the term *multivariate* in the setting of potential functions refers to each addend depending on *products* of input sizes - and in this case, also products of symbolic probabilities. With user-defined data types, those sizes can also measure the number of particular constructor types. We also include additional support for symbolic probabilities by allowing complementation (i.e., subtraction from 1). Extending the probabilistic type system laid out here to these domains does not involve significant conceptual changes; the potential function extensions - described in [@POPL:HAH11] and [@POPL:HDW17] - are orthogonal to the new probabilistic operation.
\[tab:benchmarks\] shows some analysis data given by pRaML on the models described below and some examples from the literature. It displays the number of linear constraints generated by typing the program using resource polynomials at a fixed degree for all programs of the same class, as well as how fast pRaML can complete type inference on consumer hardware. The literature examples include some example probabilistic loop code and a conditional sampling model [@gordon2014probabilistic], the simulation of a fair die with a fair coin using a Markov chain [@knuth1976algorithms], a probabilistic variant of example code demonstrating quadratic resource usage [@CAV:CHR17], and a program from [@PLDI:NCH18]. The final example, a pair of functions shown in the last row of \[tab:benchmarks\], fills a list with probability values of $\sfrac 1 2$ or $\sfrac 1 3$ randomly according to a symbolic probability $p$, then iterates over the list, flipping a coin biased by each probability, and paying cost 1 for each heads flip.
Random walks form the core of stochastic algorithms and simulations. The Internet is so large that the tractability of measuring its contents is a real concern, one that can be addressed with random walks [@bar2008random]. Modeling problems from various fields also use random walks, ranging from economics [@meese1983empirical], to biology [@codling2008random], to ecology [@visser1997using], to astrophysics [@macleod2010modeling], and beyond. However, many random walks are non-trivial to analyze, which obscures properties like code efficiency from a non-expert programmer, and obscures stochastic model properties from their users. Even when the bounds of a complex random walk are known in advance, they can be nontrivial to verify by hand. Nonetheless, AARA can find them quickly, giving non-experts automatic access to expert bounds.
[0.48]{}
``` {xleftmargin="0pt"}
let rec gr Alice Bob =
match Alice with
| [] -> ()
| ha::ta ->
match Bob with
| [] -> ()
| hb::tb ->
let _ = tick 1 in
match flip 0.5 with
| H -> gr ta (ha::Bob)
| T -> gr (hb::Alice) tb
```
[0.48]{}
``` {xleftmargin="0pt"}
let rec goat below at above =
  let _ = tick 1 in
  match at with
  | Lichen ->
    (match flip 0.75 with
     | H -> (match below with
             | [] -> ()
             | hd::tl -> goat tl hd (at::above))
     | T -> (match above with
             | [] -> ()
             | hd::tl -> goat (at::below) hd tl))
  | Grass ->
    (match flip 0.5 with
     | H -> (match below with
             | [] -> ()
             | hd::tl -> goat tl hd (at::above))
     | T -> (match above with
             | [] -> ()
             | hd::tl -> goat (at::below) hd tl))
```
There is an old problem in probability called the *Gambler’s Ruin*. \[fig:gambler\] shows an implementation. It is set up so that Alice and Bob continually bet one dollar against each other on the results of a coin-flip until one player runs out of money. This is essentially a 2-sided random walk. If the coin is fair, Alice starts with $A$ dollars and Bob starts with $B$ dollars, then this series of bets is expected to take $AB$ time. Our multivariate implementation finds this bound exactly.
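For reference, the $AB$ expectation is the classical ruin-time formula: with a fair coin, Alice's bankroll performs a symmetric random walk on $\{0,\ldots,A+B\}$ started at $A$, and the expected stopping time $D_k$ from bankroll $k$ satisfies $$D_0 = D_{A+B} = 0, \qquad D_k = 1 + \tfrac{1}{2}\big(D_{k-1} + D_{k+1}\big),$$ which is solved by $D_k = k\,(A+B-k)$, giving $D_A = AB$.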
[0.33]{}
``` {xleftmargin="0pt"}
let reprice price =
match flip 0.6 with
| H ->
  (match price with
   | [] -> []
   | _::t -> t)
| T -> ()::price
```
[0.33]{}
``` {xleftmargin="0pt"}
let rec buy price =
match price with
| [] -> ()
| _::t ->
let _ = tick 1 in
buy t
```
[0.32]{}
``` {xleftmargin="0pt"}
let rec trade price time =
match time with
| [] -> ()
| _::t ->
let () = match flip 1/3 with
| H -> buy price
| T -> ()
in
trade (reprice price) t
```
\[exa:mountain-goat\]
Consider modeling the following scenario: A mountain goat lives high up in the Rocky Mountains, eating grasses and lichens from the rocks. Depending on the food it finds abundant, it either moves up or down the mountain. When it finds only lichens, it moves down with probability 75% in an attempt to find better food sources. When it finds grasses, it moves with equal probability in either direction. However, if the goat moves too far down the mountain, it passes the treeline and gets hunted by wolves. On the other hand, if the goat tries to go up the mountain when at the very top, it falls off a cliff. Given some distribution of grasses and lichens on the mountain, and where the goat starts, what is the expected lifetime of the goat?
This is nontrivial to analyze by hand, but easy to code with the function ${\text{\sl goat}}$ in \[fig:goat\]. Then pRaML can find a cost bound. Letting $B$ be the distance from the goat to the treeline below, $G_B$ be the number of grassy areas below the goat, and $G$ be the total number of grassy areas, the expected lifetime is bounded above by $(B+1)(2(G+1)-G_B)$. This bound is rather complex, but its generality reveals some interesting cost dependencies. For instance, the derived bound is independent of the actual distance to the top of the mountain. It also makes it easy to get a sense of cost behaviour for particular cases: If the whole mountain is covered in lichen, then the expected lifespan is $2(B+1)$, in line with the goat’s expected movement of half-a-space down the mountain per iteration. On the other hand, if the mountain is all grassy, then the lifetime is more like the stopping time of the Gambler’s Ruin experiment.
\[tab:benchmarks\] lists the analysis data for many different movement probabilities for varying amounts of plants. There we also use $A$ to represent the distance to the top of the mountain.
Program description Bound \#Constraints Time (in sec.)
---------------------------------------------------- ----------------------------------------------------- --------------- ----------------
with $\frac 1 2, \frac 3 4$ $(B+1)(2(G+1)-G_B)$ 2084 0.15
with $ \frac 2 3, \frac 3 4$ $3B+3$ 2084 0.14
with $\frac 1 2, \frac 2 3, \frac 3 4$ $(B+1)(2(G+1.5)-G_B)$ 5336 0.25
with $\frac 1 2, \frac 3 5, \frac 2 3, \frac 3 4$ $(B+1)(2(G+2.5)-G_B)$ 10996 1.95
with $\frac 3 5, \frac 1 3$ $\frac 1 {15} T^2 + \frac 1 3 TP + \frac 4 {15} T$ 157 0.04
with $\frac 3 5, 1$ $\frac 1 {5} T^2 + TP + \frac 4 5 T$ 157 0.03
with $\frac 2 5, 1$ $\frac 3 {10} T^2 + TP + \frac 7 {10} T$ 157 0.03
with $\frac 2 5, \frac 1 3$ $\frac 1 {10} T^2 + \frac 1 3 TP + \frac 7 {30} T$ 157 0.04
probabilistic loop Ex 3 [@gordon2014probabilistic] $\sfrac 4 3$ probability 61 0.01
bayes sampling Ex 6 [@gordon2014probabilistic] $\sfrac 3 5$ probability 112 0.01
die simulation from coin [@knuth1976algorithms] $\sfrac 1 6$ per die face 5731 0.33
random no-op variant [@CAV:CHR17] $M^2+M$ 205 0.03
from [@PLDI:NCH18] $\sfrac{15}2 M$ 31 0.01
and $(\frac 1 3 + \frac p 6)M$ 633 0.11
: Experimental data of typing with pRaML.\[tab:benchmarks\]
\[exa:stock-buying\] Stock prices may behave like a random walk. In \[fig:stock\] we simulate a buyer occasionally buying some stock over time, similarly to [@PLDI:NCH18]. Analysis with pRaML finds that the expected expenditure is $\frac 1 {15} T^2 + \frac 1 3 TP + \frac 4{15} T$, where $T$ is the time span and $P$ is the starting stock price. Results for other parameters for the price’s walk and buy rate, respectively, may be found in \[tab:benchmarks\].
Applications {#sec:app}
============
In this section, we discuss two application domains of pRaML: analysis of discrete distributions (\[sec:sample\]) and estimation of average-case cost (\[sec:model\]).
Conclusion
==========
By combining a carefully developed probabilistic semantics with the AARA type system, we have shown that probabilistic programs in a functional language can be effectively analyzed in an automated manner. Our implementation pRaML infers worst-case expected bounds on resource usage for a variety of probabilistic models and algorithms, and parameterizes the bounds by both input sizes and symbolic probabilities. We make use of these parameterized bounds to analyze new and interesting application domains, like sample-complexity and a generalized average-case analysis. In the future, we hope to overcome the semantic soundness obstacles that bar non-monotone resource usage, and in doing so provide a fully-conservative extension of non-probabilistic AARA.
This article is based on research supported by DARPA under AA Contract FA8750-18-C-0092 and by the National Science Foundation under SaTC Award 1801369, CAREER Award 1845514, and SHF Awards 1812876 and 2007784. Any opinions, findings, and conclusions contained in this document are those of the authors and do not necessarily reflect the views of the sponsoring organizations.
---
abstract: 'We introduce an orientation-preserving landmark-based distance for continuous curves, which can be viewed as an alternative to the [Fréchet]{}or Dynamic Time Warping distances. This measure retains many of the properties of those measures, and we prove some relations, but can be interpreted as a Euclidean distance in a particular vector space. Hence it is significantly easier to use, faster for general nearest neighbor queries, and allows easier access to classification results than those measures. It is based on the *signed* distance function to the curves or other objects from a fixed set of landmark points. We also prove new stability properties with respect to the choice of landmark points, and along the way introduce a concept called signed local feature size (slfs) which parameterizes these notions. Slfs explains the complexity of shapes such as non-closed curves where the notion of local orientation is in dispute – but is more general than the well-known concept of (unsigned) local feature size, and is for instance infinite for closed simple curves. Altogether, this work provides a novel, simple, and powerful method for oriented shape similarity and analysis.'
author:
- |
Jeff M. Phillips and Hasan Pourmahmood-Aghababa\
University of Utah\
`jeffp|aghababa@cs.utah.edu`
title: 'Orientation-Preserving Vectorized Distance Between Curves'
---
Introduction
============
The [Fréchet]{}distance [@AG95] is a very popular distance between curves; it has spurred significant practical work improving its empirical computational time [@bringmann2019walking] (including a recent GIS Cup challenge [@werner2018acm; @baldus2017fast; @buchin2017efficient; @dutsch2017filter] and inclusion in sklearn) and has been the subject of many algorithmic studies on its computational complexity [@Bri14; @ABKS14; @buchin2017four]. While in some practical settings it can be computed in near-linear time [@driemel2012approximating], there exist settings where it may require near-quadratic time – essentially reverting to dynamic programming [@Bri14].
The interest in studying the [Fréchet]{}distance (and similar distances like the discrete [Fréchet]{}distance [@TEHM1994], Dynamic Time Warping [@lemire2009faster], edit distance with real penalties [@edr]) has grown recently due to the very real desire to apply them to data analysis. Large corpora of trajectories have arisen through collection of GPS traces of people [@geolife-gps-trajectory-dataset-user-guide], vehicles [@GTDS2016], or animals [@buchin2019klcluster], as well as other shapes such as letters [@williams2007primitive], time series [@DKS16], and more general shapes [@AKW04]. What is common about these measures, and what separates them from alternatives such as the Hausdorff distance, is that they capture the direction or orientation of the object. However, this enforcing of an ordering seems to be directly tied to the near-quadratic hardness results [@Bri14], deeply linked with other tasks like edit distance [@BK15; @backurs2015edit].
Moreover, for data analysis on large data sets, not only is fast computation needed, but also other operations like fast nearest-neighbor search or inner products. While a lot of progress has been made in the case of [Fréchet]{}distance and the like [@SLB2018; @XLP2017; @BCG11; @DS17; @DPS19; @FFK20], these operations are still comparatively slow and limited. For instance, one of the best LSH-based nearest neighbor search results [@FFK20] for the discrete [Fréchet]{}distance on $n$ curves with $m$ waypoints can answer a query within a $1+{\varepsilon}$ factor using $O(m)$ time, but requires $n \cdot O((1/{\varepsilon})^m)$ space; or, if the space is reduced to something reasonable like $O(n \log n +mn)$, then a query in $O(m \log n)$ time can provide only an $O(m)$ approximation [@DS17].
On the other hand, fast nearest neighbor search for Euclidean distance is far more mature, with better LSH bounds, but also quite practical algorithms [@KGraph; @FALCONN]. Moreover, most machine learning libraries assume as input Euclidean data, or for other standard data types like images [@imagenet] or text [@Art3; @Mik1] have sophisticated methods to map to Euclidean space. However, [Fréchet]{}distance is known not to be embeddable into a Euclidean vector space without quite high distortion [@indyk1998approximate; @DS17].
#### Embeddings first.
This paper, on the other hand, starts with the goal of embedding ordered/oriented curve (and shape) data into a Euclidean vector space, where inner products are natural, fast nearest neighbor search is easily available, and the data can directly be dropped into any machine learning or other data analysis framework.
This builds on recent work with similar goals for halfspaces, curves, and other shapes [@PT19a; @PT20]. But that work did not encode orientation. This orientation-preserving aspect of these distances is clearly important for some applications; it is needed, for instance, to distinguish someone going to work from someone returning from work.
Why might [Fréchet]{}be better for data analysis than discrete [Fréchet]{}or DTW or the many other distances? One can potentially point to long segments and no need to discretize, or (quasi-)metric properties. Regardless, an equalizer in determining how well a distance models data is the prediction error on classification tasks; such tasks demonstrate how well the distances encode what truly matters in real tasks. The previous vectorized representations matched or outperformed a dozen other measures [@PT19a]. In this paper, we show an oriented distance performs similarly (although not quite as well in general tasks), but on a synthetic task where orientation is essential, does better than these non-orientation preserving measures. Moreover, by extending properties from similar, but non-orientable vectorized distances [@PT19a; @PT20], our proposed distance inherits metric properties, can handle long segments, and *also* captures curve orientation.
More specifically, our approach assumes all objects are in a bounded domain $\Omega$ a subset of $\mathbb{R}^d$ (typically $\mathbb{R}^2$). This domain contains a set of landmark points $Q$, which might constitute a continuous uniform measure over $\Omega$, or a finite sample approximation of that distribution. With respect to an object $\gamma$, each landmark $q_i \in Q$ generates a value $v_{q_i}(\gamma)$. Each of these values $v_{q_i}(\gamma)$ can correspond with the $i$th coordinate in a vector $v_Q(\gamma)$, which is finite (with $|Q|=n$-dimensions) if $Q$ is finite. Then the distance between two objects $\gamma$ and $\gamma'$ is the square-root of the average squared distance of these values – or the Euclidean distance of the finite vectors $$d_Q(\gamma, \gamma') = \|v_Q(\gamma) - v_Q(\gamma')\|.$$ The innovation of this paper is in the definition of the value $v_{q_i}(\cdot)$ and the implications around that choice. In particular in previous works [@PT19a; @PT20] in this framework this had been (mostly) set as the *unsigned* minDist function: $v^{\mathsf{mD}}_q(\gamma) = \min_{p \in \gamma}\|p - q\|$. In this paper we alter this definition to not only capture the distance to the shape $\gamma$, but in allowing negative values to also capture the orientation of it.
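To make the framework concrete, the following is a small illustrative sketch (not the exact implementation evaluated later) of the prior-work feature map with the unsigned minDist value, where curves are approximated by dense point samples, $Q$ is a grid of landmarks, and we use the $1/\sqrt{n}$ normalization from the formal definition below; the contribution of this paper is to replace $v^{\mathsf{mD}}_q$ with the signed value defined in the next section.

```python
# Sketch of the unsigned (prior-work) vectorized curve distance.
# Curves are given as dense point samples; Q is a finite landmark set.
import numpy as np

def v_mindist(Q, curve_pts):
    """v_q(gamma) = min_{p in gamma} ||p - q|| for every landmark q in Q."""
    diffs = Q[:, None, :] - curve_pts[None, :, :]          # shape (|Q|, m, 2)
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)   # shape (|Q|,)

def d_Q(Q, pts1, pts2):
    """Euclidean distance of the feature vectors, normalized by sqrt(|Q|)."""
    return np.linalg.norm(v_mindist(Q, pts1) - v_mindist(Q, pts2)) / np.sqrt(len(Q))

# usage: landmarks on a grid over Omega = [0,1]^2, two sampled curves
Q = np.stack(np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10)), -1).reshape(-1, 2)
gamma1 = np.column_stack([np.linspace(0, 1, 200), np.full(200, 0.4)])
gamma2 = np.column_stack([np.linspace(0, 1, 200), np.full(200, 0.5)])
print(d_Q(Q, gamma1, gamma2))   # a small perturbation yields a small distance
```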
This new definition leads to many interesting structural properties about shapes. These include:
- When the shapes are simple curves that are closed and convex or have matching endpoints, then ${\ensuremath{\mathtt{d}_{Q}}}(\gamma,\gamma') < \sqrt{2} {\ensuremath{\mathtt{d}_{F}}}(\gamma, \gamma')$; that is, ${\ensuremath{\mathtt{d}_{Q}}}$ is stable up to [Fréchet]{}perturbations. When the curves $\gamma,\gamma'$ are also $\kappa$-bounded [@AKW04] then there is an interleaving: $\frac{1}{\kappa+1} {\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma') \leq {\ensuremath{\mathtt{d}_{Q}}}^{\infty}(\gamma,\gamma') \leq \sqrt{2}{\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma')$ where ${\ensuremath{\mathtt{d}_{Q}}}^\infty$ uses the $l^\infty$ distance between vector representations, or it is an equality when curves are closed and convex. Thus ${\ensuremath{\mathtt{d}_{Q}}}$ captures orientation. In contrast, for a class of curves we show ${\ensuremath{\mathtt{d}^{\textsf{mD},\infty}_{Q}}}$ can be equal to the Hausdorff distance, so it explicitly does not capture orientation.
- We introduce a new concept called the *signed local feature size* which captures the stability of the new signed $v_i(\gamma) : \mathbb{R}^d \to \mathbb{R}$ at any landmark $q_i \in \Omega$ for a fixed shape $\gamma$. Unlike its unsigned counterpart (which plays a prominent role in shape reconstruction [@ACK01; @cheng2012delaunay; @CC12] and computational topology [@CFLMRW14; @ChazalCohen-SteinerMerigot2011; @CL05]), the signed local feature size for closed simple curves is infinite. This captures that while reconstruction or medial axis properties (governed by local feature size) might be unstable on such shapes, their signed distance function (governed by signed local feature size) is still stable. However, for curves which are not simple, it is zero. And when curves have boundary, the boundary definition is finicky, and gives rise to nontrivial values of the signed local features size.
- We show that when the signed local feature size $\delta$ is positive but finite (i.e., $0 < \delta < \infty$) then we can set a scale parameter $\sigma$ in the definition of $v_q$ (denoted $v_q^\sigma$) and hence in ${\ensuremath{\mathtt{d}_{Q}}}$ (denoted as ${\ensuremath{\mathtt{d}_{Q}}}^\sigma$) so that when $\sigma < \delta / (4(1+\sqrt{\ln(2/{\varepsilon})}))$, then the signed distance function $v_i^\sigma$ is again stable up to a value ${\varepsilon}$.
Altogether, these results build and analyze a new vectorized, and sketchable distance between curves (or other geometric objects) which captures orientation like [Fréchet]{}(or dynamic time warping, and other popular measures), but avoids all of the complications when actually used. As we demonstrate, fast nearest neighbor search, machine learning, clustering, etc are all now very easy.
Preliminaries
=============
By a [*curve*]{} we mean the image of a continuous mapping $\gamma: [0,1] \to \mathbb{R}^2$; we simply use $\gamma$ to refer to these curves. For any curve $\gamma$ we correspond a direction, defined as the increasing direction of $t\in [0,1]$; that is, if $t,t' \in [0,1]$ and $t<t'$, then the direction of $\gamma$ would be from $\gamma(t)$ to $\gamma(t')$. Two curves $\gamma, \gamma'$ with different mappings but the same images in $\mathbb{R}^2$ are in the same equivalence class if and only if they also have the same direction. A curve is [*closed*]{} if $\gamma(0)= \gamma(1)$. It is [*simple*]{} if the mapping does not cross itself.
Let $\Gamma$ be the class of all simple curves $\gamma$ in $\mathbb{R}^2$ with the property that at almost every point $p$ on $\gamma$, considering the direction of the curve, there is a unique normal vector $n_p$ at $p$, which is equivalent to the existence of a tangent line almost everywhere on $\gamma$. Such points are called [*regular points*]{} of $\gamma$ and the set of regular points of $\gamma$ is denoted by $\reg(\gamma)$. Points of $\gamma \setminus \reg(\gamma)$ are called [*critical points*]{} of $\gamma$. The terminology “almost every point" means that the Lebesgue measure of those $t\in [0,1]$ such that $\gamma(t)$ is a critical point is zero. We also assume that at critical points, which are not endpoints of a non-closed curve, left and right tangent lines exist. Finally, we assume that non-closed curves in $\Gamma$ have left/right tangent line at endpoints. These assumptions will guarantee the existence of a unique normal vector at critical points.
#### Baseline distances.
Important baseline distances are the Hausdorff and [Fréchet]{}distances. Given two compact sets $A,B \subset {\ensuremath{\mathbb{R}}}^d$, the *directed Hausdorff distance* is $\overrightarrow{{\ensuremath{\mathtt{d}_{H}}}}(A,B) = \max_{a \in A} \min_{b \in B} \|a-b\|$. Then the *Hausdorff distance* is defined ${\ensuremath{\mathtt{d}_{H}}}(A,B) = \max\{ \overrightarrow{{\ensuremath{\mathtt{d}_{H}}}}(A,B), \overrightarrow{{\ensuremath{\mathtt{d}_{H}}}}(B,A)\}$.
The [Fréchet]{}distance is defined for curves $\gamma,\gamma'$ with images in ${\ensuremath{\mathbb{R}}}^d$. Let $\Pi$ be the set of all monotone reparamatrizations (a non-decreasing function $\alpha$ from $[0,1] \to [0,1]$). It will be essential to interpret the inverse of $\alpha$ as interpolating continuity; that is, if a value $t$ is a point of discontinuity for $\alpha$ from $a$ to $b$, then the inverse $\alpha^{-1}$ should be $\alpha^{-1}(t') = t$ for all $t' \in [a,b]$. Together, this allows $\alpha$ (and $\alpha^{-1}$) to represent a continuous curve in $[0,1] \times [0,1]$ that starts at $(0,0)$ and ends at $(1,1)$ while never decreasing either coordinate; importantly, it can move vertically or horizontally. Then the *[Fréchet]{}distance* is $${\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma') = \inf_{\alpha \in \Pi} \max\{\sup_{t \in [0,1)} \|\gamma(t) - \gamma'(\alpha(t))\|, \sup_{t \in [0,1)} \|\gamma(\alpha^{-1}(t)) - \gamma'(t)\|\}.$$ We can similarly define the [Fréchet]{}distance for closed oriented curves (see also [@AKW04; @SVY14]); such a curve $\gamma$ is parameterized again by arclength. Given an arbitrary point $c_0 \in \gamma$, then $\gamma(t)$ for $t \in [0,1)$ indicates the distance along the curve from $c_0$ in a specified direction, divided by the total arclength. Let $\Pi^\circ$ denote the set of all monotone, cyclic parameterizations; now $\alpha \in \Pi^\circ$ is a function from $[0,1) \to [0,1)$ where it is non-decreasing everywhere except for exactly one value $a$ where $\alpha(a) = 0$ and $\lim_{t \nearrow a} \alpha(t) = 1$. Again, $\alpha^{-1} \in \Pi^{\circ}$ has the same form, and interpolates the discontinuities with segments of the constant function. Then the [Fréchet]{}distance for oriented closed curves is defined ${\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma') = \inf_{\alpha \in \Pi^\circ} \max\{ \sup_{t \in [0,1)} \|\gamma(t) - \gamma'(\alpha(t))\|, \sup_{t \in [0,1)} \|\gamma(\alpha^{-1}(t)) - \gamma'(t) \|\}$. Oriented closed curves are important for modeling boundary of shapes and levelsets [@LC87], orientation determines inside from outside.
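For comparison with the quadratic-time computations mentioned in the introduction, here is a standard sketch of the dynamic program of [@TEHM1994] for the *discrete* [Fréchet]{}distance between two point sequences; it is exactly this kind of $O(nm)$ coupling computation that a vectorized distance avoids at query time.

```python
# Discrete Frechet distance via the Eiter-Mannila dynamic program.
# P and R are (n,2) and (m,2) numpy arrays of waypoints.
import numpy as np

def discrete_frechet(P, R):
    n, m = len(P), len(R)
    D = np.linalg.norm(P[:, None, :] - R[None, :, :], axis=2)  # pairwise distances
    C = np.full((n, m), np.inf)                                # coupling table
    C[0, 0] = D[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(C[i - 1, j] if i > 0 else np.inf,
                            C[i, j - 1] if j > 0 else np.inf,
                            C[i - 1, j - 1] if (i > 0 and j > 0) else np.inf)
            C[i, j] = max(best_prev, D[i, j])
    return C[-1, -1]
```

Each comparison of two curves fills the full $n \times m$ table, whereas ${\ensuremath{\mathtt{d}_{Q}}}$ only compares two precomputed $|Q|$-dimensional vectors per query.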
#### Shape descriptors.
Given a curve $\gamma$ in ${\ensuremath{\mathbb{R}}}^2$, previous work identified ways it interacts with the ambient space. The *medial axis* $\MA(\gamma)$ [@lee1982medial; @amenta2001power] is the set of points $q \in {\ensuremath{\mathbb{R}}}^2$ where the minimum distance $\min_{p \in \gamma} \|p-q\|$ is not realized by a unique point $p \in \gamma$. The *local feature size* [@amenta1999surface] for a point $p \in \gamma$ is defined as $\lfs_p(\gamma) = \inf_{r \in \MA(\gamma)} \|r-p\|$, the minimum distance from $p$ to the medial axis of $\gamma$.
New Definitions for Signed Distance and Shape Descriptors {#S1}
---------------------------------------------------------
\[d3\] Let $\gamma \in \Gsim$ be a curve in $\mathbb{R}^2$ and $p$ be a point of $\reg(\gamma)$. Define $$\delta_p = \inf\{\|p-p'\|: \langle n_p, p-p' \rangle \langle n_{p'}, p'-p \rangle < 0, \ {\rm int}(\overline{pp'}) \cap \gamma = \emptyset, \ p' \in \reg(\gamma)\},$$ where we assume that the infimum of the empty set is $\infty$. Then we introduce the [*signed local feature size*]{} ($\slfs$ in short) of $\gamma$ to be $\delta(\gamma)= \inf_{p\in \reg(\gamma)} \delta_p(\gamma)$.
\[e1\] Any line segment and any closed curve in $\Gsim$ have infinite signed local feature size.
Following the notation of signed local feature size, one can adapt the related notion of signed medial axis. For each $q \in {\ensuremath{\mathbb{R}}}^2$ and corresponding minDist point $p= \argmin_{p'\in \gamma} \|q-p'\|$ on $\gamma$, we need to define a normal direction $n_p(q)$ at $p$. For regular points $p \in \reg(\gamma)$, this can be defined naturally by the right-hand rule. For endpoints we use the normal vector of the tangent line compatible with the direction of the curve. For non-endpoint critical points, there are technical conditions for non-simple curves (see Appendix \[app:alg\]), but in general we use the direction $u$ which maximizes $|\langle u, q-p \rangle|$ with sign subject to the right-hand rule.
\[d5\] Let $\gamma \in \Gsim$ and let $q$ be a point in ${\ensuremath{\mathbb{R}}}^2$. We say that $q$ belongs to the [*signed medial axis*]{} of $\gamma$ ($\SMA(\gamma)$ in short) if there are at least two points $p,p'$ on $\gamma$ such that $p, p'= \argmin_{p\in \gamma} \|q-p\|$ and $\langle n_p(q), p-p' \rangle \langle n_{p'}(q), p'-p \rangle < 0$.
The signed medial axis of a curve is a subset of its usual medial axis. Also, if $\slfs(\gamma) = \infty$, then $\gamma$ has no signed medial axis, i.e. $\SMA(\gamma) = \emptyset$.
\[d6\] Let $\gamma$ be a curve, $Q$ be a finite subset of $\mathbb{R}^2$ and $\sigma >0$. For each $q \in Q$ let $p = \argmin_{p' \in \gamma} \|q-p'\|$. If $p$ is not an endpoint of $\gamma$, we define $$v_q^{\sigma}(\gamma)= \frac{1}{\sigma} \langle n_{p}(q), q-p\rangle e^{- \frac{\|q-p\|^2}{\sigma^2}}.$$ Otherwise (for endpoints) we set $$v_q^{\sigma}(\gamma) = \frac{1}{\sigma} \langle n_p, \frac{q-p}{\|q-p\|} \rangle \|q\|_{1,p} \, e^{- \frac{\|q-p\|^2}{\sigma^2}},$$ where $\|q\|_{1,p}$ is the $l^1$-norm of $q$ in the coordinate system with axes parallel to $n_p$ and $L$ (the tangent line at $p$) and origin at $p$; see Figure \[Fig10\](right) for an illustration. See an example of $v_q^{\sigma}$ over ${\ensuremath{\mathbb{R}}}^2$ in Figure \[Fig10\](left). Notice that $\|q\|_{2,p} = \|q-p\|$ and so $1 \leq \frac{\|q\|_{1,p}}{\|q-p\|} \leq \sqrt{2}$. If $Q=\{q_1, q_2, \ldots, q_n\}$, setting $v_i^{\sigma}(\gamma) = v_{q_i}^{\sigma}(\gamma)$ we obtain a [*feature mapping*]{} $v_Q^{\sigma}: \Gsim \to \mathbb{R}^n$ defined by $v_Q^{\sigma}(\gamma)= (v_1^{\sigma}(\gamma), \cdots, v_n^{\sigma}(\gamma))$. (We will drop the superscript $\sigma$ afterwards, unless otherwise specified.)
![Left: Example signed distance function $v_q$ for curve. Right: Definition of $v_q$ at endpoints.[]{data-label="Fig10"}](distance-fxn.png "fig:"){width="54.00000%"} ![Left: Example signed distance function $v_q$ for curve. Right: Definition of $v_q$ at endpoints.[]{data-label="Fig10"}](Fig8 "fig:"){width="35.00000%"}
\[d4\] Let $\gamma_1, \gamma_2 \in \Gsim$ be two curves in $\mathbb{R}^2$, $Q=\{q_1, q_2, \ldots, q_n\}$ be a point set in ${\mathbb{R}^2}$, $\sigma>0$ be a positive constant and $p\in [1, \infty]$. The orientation preserving distance of $\gamma_1$ and $\gamma_2$, associated with $Q$, $\sigma$ and $p$, denoted $d_Q^{\sigma,p}(\gamma_1, \gamma_2)$, is the normalized $l^p$-Euclidean distance of two $n$-dimensional feature vectors $v_Q(\gamma_1)$ and $v_Q(\gamma_2)$ in $\mathbb{R}^n$, i.e. for $p \in [1,\infty)$, $${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,p}(\gamma_1, \gamma_2)= \frac{1}{\sqrt{n}} \|v_Q(\gamma_1)- v_Q(\gamma_2)\|_p = \bigg(\frac{1}{n}\sum_{i=1}^n |v_i(\gamma_1)- v_i(\gamma_2)|^p \bigg)^{1/p},$$ and for $p = \infty$, $${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,\infty}(\gamma_1, \gamma_2)= \|v_Q(\gamma_1)- v_Q(\gamma_2)\|_{\infty}= \max_{1 \leq i \leq n} |v_i(\gamma_1)- v_i(\gamma_2)|.$$ As default we use ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma}$ instead of ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,2}$. Landmarks $Q$ can be described by a probability distribution $\mu : {\ensuremath{\mathbb{R}}}^2 \to {\ensuremath{\mathbb{R}}}$, then $v_Q$ is infinite-dimensional, and we can define ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,p}(\gamma_1,\gamma_2) = (\int_{q \in {\ensuremath{\mathbb{R}}}^2} |v_q(\gamma_1)-v_q(\gamma_2)|^p \mu(q))^{1/p}$, and ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,\infty}(\gamma_1,\gamma_2) = \sup_{q \in {\ensuremath{\mathbb{R}}}_\mu^2}|v_q(\gamma_1)-v_q(\gamma_2)|$ where ${\ensuremath{\mathbb{R}}}_\mu^2 = \{x \in {\ensuremath{\mathbb{R}}}^2 \mid \mu(x) > 0\}$.
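A simplified sketch of this feature map and distance for directed polylines follows; it always uses the interior-point formula, ignoring the special $l^1$ weighting at curve endpoints and the critical-point conventions, and takes the left normal of each directed segment as one right-hand-rule convention.

```python
# Sketch of the signed feature map v_q^sigma and the distance d_Q^{sigma,p}
# for directed polylines given by their vertices in order (simplified: the
# interior-point formula is used everywhere, endpoints are not special-cased).
import numpy as np

def signed_features(Q, verts, sigma):
    vals = np.zeros(len(Q))
    for k, q in enumerate(Q):
        best_d2, best_signed = np.inf, 0.0
        for a, b in zip(verts[:-1], verts[1:]):            # directed segment a -> b
            d = b - a
            t = np.clip(np.dot(q - a, d) / np.dot(d, d), 0.0, 1.0)
            p = a + t * d                                   # closest point on the segment
            d2 = np.dot(q - p, q - p)
            if d2 < best_d2:
                n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # left normal of a -> b
                best_d2, best_signed = d2, float(np.dot(n, q - p))
        vals[k] = best_signed * np.exp(-best_d2 / sigma ** 2) / sigma
    return vals

def d_Q_sigma(Q, verts1, verts2, sigma, p=2):
    v1, v2 = signed_features(Q, verts1, sigma), signed_features(Q, verts2, sigma)
    if np.isinf(p):
        return float(np.abs(v1 - v2).max())
    return float(np.mean(np.abs(v1 - v2) ** p) ** (1.0 / p))
```

Reversing the vertex order of a curve flips every segment direction and hence negates its feature vector, which is how ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,p}$ separates the two orientations of an otherwise identical curve, in contrast to the unsigned minDist values.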
Since we employ a feature map to embed curves into a Euclidean space and then the usual $l^p$-norm to define the distance between two curves, the function $d_Q^{\sigma,p}$ enjoys all properties of a metric except definiteness. That is, it satisfies the triangle inequality, is symmetric, and $d_Q^{\sigma,p}(\gamma_1, \gamma_2)=0$ provided $\gamma_1 = \gamma_2$ (i.e. $\gamma_1$ and $\gamma_2$ have the same range and the same direction). However, $d_Q^{\sigma,p}(\gamma_1, \gamma_2)=0$ does not necessarily imply $\gamma_1 = \gamma_2$: consider two curves which overlap, and all landmarks have closest points on the overlap.
To address this problem, following Phillips and Tang [@PT19a], we can restrict the family of curves to be $\tau$-separated (they are piecewise-linear and critical points are a distance of at least $\tau$ from non-adjacent parts of the curve), and assume the landmark set is sufficiently dense (e.g., a grid with separation $\leq \tau/16$). Under these conditions ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma,p}$ is again definite, and is a metric.
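A landmark set of this kind can be instantiated, for example, as a regular grid over a rectangular domain; the sketch below (with an illustrative axis-aligned domain) spaces the grid points $\tau/16$ apart.

```python
import numpy as np

def grid_landmarks(xmin, xmax, ymin, ymax, tau):
    """Regular grid of landmarks over [xmin, xmax] x [ymin, ymax] with spacing tau/16."""
    h = tau / 16.0
    xs = np.arange(xmin, xmax + h, h)
    ys = np.arange(ymin, ymax + h, h)
    gx, gy = np.meshgrid(xs, ys)
    return np.column_stack([gx.ravel(), gy.ravel()])   # shape (n, 2), one landmark per row
```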
Stability Properties of $d_Q^{\sigma, p}$
=========================================
Now we proceed to the stability properties of the distance $d_Q^{\sigma, p}$. Our first goal is to show that $d_Q^{\sigma}$ is stable under perturbations of $Q$, which is given in Theorems \[t3\] and \[t4\]. The second is to verify its stability under perturbations of curves (see Theorem \[t5\] and its several corollaries).
Stability of Landmarks $Q$
--------------------------
Before stating stability properties under perturbations of landmarks, we discuss some cases in which the desired inequality does not hold and which therefore have to be excluded. The first case is when $\SMA(\gamma) = \emptyset$ and two landmarks $q_1$ and $q_2$ are on different sides of the medial axis of $\gamma$ and at least one of them chooses an endpoint as $\argmin$ point (see Figure \[Fig3\](a)). The other case is when $\SMA(\gamma)$ is nonempty, $q_1$ and $q_2$ are on different sides of $\SMA(\gamma)$ and at least one of them chooses an endpoint as $\argmin$ point and is along the tangent of that endpoint (see Figure \[Fig3\](b)). In both cases, $q_1$ can be arbitrarily close to $q_2$ but $|v_1(\gamma) - v_2(\gamma)|$ is likely to be roughly $1/\sqrt{e}$ since $v_{q_2}(\gamma)=0$. For instance, in Figure \[Fig3\](a), $|v_1(\gamma) - v_2(\gamma)| = |v_1(\gamma)| = \frac{1}{\sigma} \|q_1-p_1\| e^{\frac{-\|q_1-p_1\|^2}{\sigma^2}}$, which can be as close as $1/\sqrt{e}$ when $\|q_1-p_1\|$ is about $\sigma/ \sqrt{2}$.
\[t3\] Let $\gamma \in \Gamma$ and $q_1, q_2$ be two points in $\mathbb{R}^2$. If $\delta(\gamma)= \infty$ and $q_1$ and $q_2$ do not satisfy the above first case (e.g., $\gamma$ is a closed curve), then $|v_1(\gamma)- v_2(\gamma)|\leq \frac{\sqrt{2}}{\sigma}\|q_1-q_2\|$.
Let $p_1= \argmin_{p\in \gamma} \|q_1-p\|$ and $p_2 = \argmin_{p\in \gamma} \|q_2-p\|$. We prove the theorem in four cases.\
[**Case 1.**]{} $v_1(\gamma) v_2(\gamma) \leq 0$, the line segment $\overline{q_1q_2}$ passes through $\gamma$ and $p_1$ and $p_2$ are not endpoints (see Figure \[Fig123\](a)). Let $p$ be the intersection of the segment $\overline{q_1q_2}$ with $\gamma$. Then $$\begin{array}{ll}
|v_1(\gamma)- v_2(\gamma)| \!\!\! &
= \bigg| \frac{1}{\sigma}\langle q_1-p_1, n_{p_1}(q_1)\rangle e^{- \frac{\|q_1-p_1\|^2}{\sigma^2}} - \frac{1}{\sigma}\langle q_2-p_2, n_{p_2}(q_2)\rangle e^{- \frac{\|q_2-p_2\|^2}{\sigma^2}} \bigg| \vspace{0.1cm} \\ &
\leq \frac{1}{\sigma} \|q_1-p_1\| e^{- \frac{\|q_1-p_1\|^2}{\sigma^2}} + \frac{1}{\sigma} \|q_2-p_2\| e^{- \frac{\|q_2-p_2\|^2}{\sigma^2}} \vspace{0.2cm} \\ &
\leq \frac{1}{\sigma} (\|q_1-p_1\| + \|q_2-p_2\|)
\leq \frac{1}{\sigma} (\|q_1-p\| + \|q_2-p\|)
= \frac{1}{\sigma} \|q_1-q_2\|.
\end{array}$$ [**Case 2.**]{} $v_1(\gamma) v_2(\gamma) \geq 0$, the line segment $\overline{q_1q_2}$ does not pass through $\gamma$ and $p_1$ and $p_2$ are not endpoints (see Figure \[Fig123\](b)). Without loss of generality we may assume that both $v_1(\gamma)$ and $v_2(\gamma)$ are non-negative. In this case, $q_1-p_1$ and $q_2-p_2$ are parallel to $n_{p_1}(q_1)$ and $n_{p_2}(q_2)$ respectively. Therefore, $\langle q_1-p_1, n_{p_1}(q_1)\rangle= \|q_1-p_1\|$ and $\langle q_2-p_2, n_{p_2}(q_2)\rangle= \|q_2-p_2\|$. Utilizing the fact that the function $f(x)= \frac{x}{\sigma} e^{-x^2/ \sigma^2}$ is Lipschitz with constant $\frac{1}{\sigma}$, we get $$|v_1(\gamma)- v_2(\gamma)| \leq \frac{1}{\sigma} |\|q_1-p_1\| - \|q_2-p_2\||.$$ Now applying triangle inequality we infer $\|q_1-p_1\| \leq \|q_1-p_2\| \leq \|q_1-q_2\| + \|q_2-p_2\|$ and so by symmetry, $|\|q_1-p_1\| - \|q_2-p_2\|| \leq \|q_1-q_2\|$. Therefore, $|v_1(\gamma)- v_2(\gamma)| \leq \frac{1}{\sigma} \|q_1-q_2\|$.\
[**Case 3.**]{} Endpoints. Let $\ell$ be the tangent line at an endpoint $p$ on $\gamma$, $n_p$ be its unique unit normal vector and let $q_1$ and $q_2$ be in different sides of $\ell$ and $p_1 = p_2 = p$ (see Figure \[Fig123\](c)). Assume $q$ is the intersection of the segment $\overline{q_1q_2}$ with $\ell$. Then $\langle n_p, q-p \rangle = 0$ and so noting that $n_{p_1}(q_1)=n_{p_2}(q_2)=n_p$ we have $\langle q_1-p, n_p \rangle = \langle q_1-q, n_p \rangle$ and $\langle q_2-p, n_p \rangle = \langle q_2-q, n_p \rangle$. Therefore, $$\begin{array}{ll}
|v_1(\gamma)- v_2(\gamma)| &
= \bigg| \frac{1}{\sigma} \langle n_{p}, \frac{q_1-p}{\|q_1-p\|} \rangle \|q_1\|_{1,p} \, e^{- \frac{\|q_1-p\|^2}{\sigma^2}} - \frac{1}{\sigma} \langle n_{p}, \frac{q_2-p}{\|q_2-p\|} \rangle \|q_2\|_{1,p} \, e^{- \frac{\|q_2-p\|^2}{\sigma^2}} \bigg| \vspace{0.2cm} \\ &
= \frac{1}{\sigma} \bigg| \Big\langle n_p, \frac{\|q_1\|_{1,p}}{\|q_1\|_{2,p}} e^{- \frac{\|q_1-p\|^2}{\sigma^2}} (q_1-q)- \frac{\|q_2\|_{1,p}}{\|q_2\|_{2,p}} e^{- \frac{\|q_2-p\|^2}{\sigma^2}} (q_2-q) \Big\rangle \bigg| \vspace{0.2cm} \\ &
= \frac{1}{\sigma} \bigg\| \frac{\|q_1\|_{1,p}}{\|q_1\|_{2,p}} e^{- \frac{\|q_1-p\|^2}{\sigma^2}} (q_1-q)- \frac{\|q_2\|_{1,p}}{\|q_2\|_{2,p}} e^{- \frac{\|q_2-p\|^2}{\sigma^2}} (q_2-q) \bigg\| \vspace{0.2cm} \\ &
\leq \frac{\sqrt{2}}{\sigma} (\|q_1-q\| + \|q_2-q\|)
= \frac{\sqrt{2} }{\sigma} \|q_1-q_2\|.
\end{array}$$ If $q_1$ and $q_2$ are on the same side of $\ell$ and $p_1 = p_2 = p$, the proof is the same as in Case 2. We only need to apply the Cauchy-Schwarz inequality.\
[**Case 4.**]{} The case where $p_1$ is an endpoint but $p_2$ is not follows from a combination of the above cases. Basically, choose a point $q$ on the line segment $\overline{q_1q_2}$ so that $q-p_1$ is parallel to $n_{p_1}$ and then use the triangle inequality.
![$q_1$ and $q_2$ in different cases[]{data-label="Fig123"}](Fig123)
#### Remark.
If $\gamma$ is closed, **Case 3** (Endpoints) does not occur, and $|v_1(\gamma)-v_2(\gamma)| \leq \frac{1}{\sigma} \|q_1 - q_2\|$.
\[t4\] Let $\gamma \in \Gsim$ and $q_1, q_2$ be two points in $\mathbb{R}^2$ not satisfying the second case mentioned before Theorem \[t3\]. If $\delta(\gamma)< \infty$, $\epsilon \leq \frac{\delta(\gamma)}{4}$ is an arbitrary positive real number and $\sigma \leq \delta(\gamma) / (4(1+ \sqrt{\ln(2/\epsilon)}))$, then $|v_1(\gamma)- v_2(\gamma)|\leq \max\{\epsilon, \frac{2}{\sigma}\|q_1-q_2\|\}$.
By Theorem \[t3\] it is enough to consider only the case where there is a signed medial axis, say $M$, and $q_1$ and $q_2$ are on different sides of $M$ (the case where they are on the same side of $M$ is covered by Theorem \[t3\]). We handle the proof in two main cases. For the sake of convenience assume $x= \|q_1-p_1\|$ and $y= \|q_2-p_2\|$. The proof is based on the following observations.
[**(O1)**]{} Since $\|p_1-q_1\| \leq \|p_2 - q_1\|$, we get $x+ \epsilon \geq \frac{1}{2} \|p_1-p_2\| \geq \frac{\delta(\gamma)}{2} \geq 2 \sigma (1+ \sqrt{\ln({2}/{\epsilon})})$. So applying $\epsilon \leq \frac{\delta(\gamma)}{4}$ we obtain $x \geq \frac{\delta(\gamma)}{4} \geq \sigma (1+ \sqrt{\ln({2}/{\epsilon})})$.
[**(O2)**]{} Similarly, we obtain $y \geq \frac{\delta(\gamma)}{4} \geq \sigma (1+ \sqrt{\ln({2}/{\epsilon})})$.
[**(O3)**]{} Employing the inequality $\dfrac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}} \leq e^{\frac{2x}{\sigma}-1} e^{-\frac{x^2}{\sigma^2}}= e^{-(\frac{x}{\sigma}-1)^2}$ and (O1) we get $\dfrac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}} \leq \dfrac{\epsilon}{2}$.
[**(O4)**]{} Similarly, we obtain $\dfrac{y}{\sigma} e^{-\frac{y^2}{\sigma^2}} \leq \dfrac{\epsilon}{2}$.
If $\|q_1-q_2\| \leq \epsilon$, then $\epsilon \leq \frac{\delta(\gamma)}{4}$ implies $|v_1(\gamma)- v_2(\gamma)|= \dfrac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}} + \dfrac{y}{\sigma} e^{-\frac{y^2}{\sigma^2}} \leq \epsilon$. Otherwise, $\|q_1-q_2\| \geq \epsilon$ and we encounter four cases (see Figure \[Fig8\](left)).\
[**Case 1.**]{} If $x, y< \frac{\delta(\gamma)}{4}$, then $|v_1(\gamma)- v_2(\gamma)| = \frac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}} + \frac{y}{\sigma} e^{-\frac{y^2}{\sigma^2}} \leq \frac{x}{\sigma} + \frac{y}{\sigma} \leq \frac{\delta(\gamma)}{4 \sigma } + \frac{\delta(\gamma)}{4 \sigma} \leq \frac{1}{\sigma} \|q_1-q_2\|$. [**Case 2.**]{} If $x\geq \frac{\delta(\gamma)}{4}$ and $y < \frac{\delta(\gamma)}{4}$, then applying (O3) we infer
$
|v_1(\gamma)- v_2(\gamma)| = \dfrac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}} + \dfrac{y}{\sigma} e^{-\frac{y^2}{\sigma^2}} \leq \frac{\epsilon}{2} + \frac{\delta(\gamma)}{4 \sigma}\leq \Big(1+\sqrt{\ln(\frac{2}{\epsilon})} \, \Big)+\frac{\delta(\gamma)}{4 \sigma} \leq \frac{\delta(\gamma)}{2 \sigma} \leq \frac{2}{\sigma} \|q_1-q_2\|.$\
[**Case 3.**]{} The case $x< \frac{\delta(\gamma)}{4}$ and $y \geq \frac{\delta(\gamma)}{4}$ is the same as Case 2.\
[**Case 4.**]{} Finally, if $x\geq \frac{\delta(\gamma)}{4}$ and $y \geq \frac{\delta(\gamma)}{4}$, by (O3) and (O4), $|v_1(\gamma)- v_2(\gamma)| = \frac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}} + \frac{y}{\sigma} e^{-\frac{y^2}{\sigma^2}} \leq \epsilon$.
![Left: The case that a landmark point $q$ lies between $\SMA(\gamma)$ and $\SMA(\gamma')$. Right: The case that there is a $\SMA$ and $q_1$ and $q_2$ in different sides of $\SMA(\gamma)$[]{data-label="Fig9"}](Fig4new "fig:"){width="40.00000%"} ![Left: The case that a landmark point $q$ lies between $\SMA(\gamma)$ and $\SMA(\gamma')$. Right: The case that there is a $\SMA$ and $q_1$ and $q_2$ in different sides of $\SMA(\gamma)$[]{data-label="Fig9"}](Fig7 "fig:"){width="50.00000%"}
\[Fig8\]
Stability of Curves
-------------------
We next show stability properties of ${\ensuremath{\mathtt{d}_{Q}}}$ under perturbation of curves; we do this in the context of other distances, namely the [Fréchet]{} and Hausdorff distances. Specifically, we show that if two curves are close under another distance, e.g., the [Fréchet]{} distance, then they must also be close under ${\ensuremath{\mathtt{d}_{Q}}}$, under some conditions.
Moreover, non-closed curves create subtle issues around endpoints. The example in Figure \[Fig9\](right) shows that, without controlling the behavior of endpoints, we may make $\gamma$ arbitrarily close to $\gamma'$ in [Fréchet]{} distance, whereas ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma}(\gamma, \gamma')$ is possibly $1/\sqrt{e}$, where $Q = \{q\}$. This is the case where $q$ lies between $\SMA(\gamma)$ and $\SMA(\gamma')$. So we cannot get the desired inequality (${\ensuremath{\mathtt{d}_{Q}}}^\sigma(\gamma,\gamma') \leq {\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma')$) for this case. Thus, the landmarks which fall between the signed medial axes may cause otherwise similar curves to have different signatures. For a large domain $\Omega$ (and especially with $\sigma$ relatively small) these should be rare, and then ${\ensuremath{\mathtt{d}_{Q}}}^\sigma$, which averages over these landmarks, should not be greatly affected. We formalize when this is the case in the next theorem and its corollaries.
\[t5\] Let $\gamma, \gamma' \in \Gsim$ and $q_i \in \mathbb{R}^2$. If one of the following three conditions hold, then $|v_i(\gamma)- v_i(\gamma')|\leq \frac{\sqrt{2}}{\sigma} {\ensuremath{\mathtt{d}_{F}}}(\gamma, \gamma')$.
\(1) $v_i(\gamma)v_i(\gamma') \geq 0$;
\(2) $v_i(\gamma)v_i(\gamma') \leq 0$ and $q_i$ is on a line segment $\overline{\gamma(t) \gamma'(\alpha(t))}$ of the alignment between $\gamma$ and $\gamma'$ achieving the optimal [Fréchet]{} distance;
\(3) $q_i$ is far enough from both curves: $\min_{p \in \gamma} \|q_i - p\|, \min_{p' \in \gamma'} \|q_i - p'\| \geq \sigma (1 + \sqrt{\ln(2\sigma / {\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma'))})$.
Let $p= \argmin_{p\in \gamma} \|q_i-p\|$ and $p'= \argmin_{p\in \gamma'} \|q_i-p\|$.
\(1) Let $p=\gamma(t)$ and $p'= \gamma'(t')$ for some $t,t' \in [0,1]$ and without loss of generality assume that $t \leq t'$. Then $\|q_i-p'\| \leq \|q_i- \gamma'(t)\|$ and so $$\quad\qquad \|q_i-p'\| - \|q_i-p\| \leq \|q_i- \gamma'(t)\| - \|q_i-p\| \leq \|p- \gamma'(t)\|= \|\gamma(t)- \gamma'(t)\| \leq \|\gamma - \gamma'\|_{\infty}.$$ Similarly, $\|q_i-p\| - \|q_i-p'\| \leq \|\gamma - \gamma'\|_{\infty}$. Now using the fact that the function $f(x)= \frac{x}{\sigma} e^{-\frac{x^2}{\sigma^2}}$ is Lipschitz, and accounting for the $l^1$-norm factor at endpoints, we get $$|v_i(\gamma)- v_i(\gamma')|\leq \frac{\sqrt{2}}{\sigma} |\|q_i-p'\| - \|q_i-p\|| \leq \frac{\sqrt{2}}{\sigma} \|\gamma - \gamma'\|_{\infty}.$$ This shows that for two arbitrary reparametrizations $\alpha$ and $\alpha'$ of $[0,1]$ we have $
|v_i(\gamma \circ \alpha)- v_i(\gamma' \circ \alpha')|\leq \frac{\sqrt{2}}{\sigma} \|\gamma \circ \alpha - \gamma' \circ \alpha'\|_{\infty}.
$ Noting that a reparametrization of a curve does not change either the range or the direction of the curve, we get $|v_i(\gamma \circ \alpha)- v_i(\gamma'\circ \alpha')|= |v_i(\gamma)- v_i(\gamma')|$. Thus taking the infimum over all reparametrizations $\alpha$ and $\alpha'$ we obtain $|v_i(\gamma)- v_i(\gamma')| \leq \frac{\sqrt{2}}{\sigma} d_F(\gamma, \gamma')$.
\(2) Let $r = d_F(\gamma, \gamma')$ and let $q_i$ be on a line segment alignment of $\gamma, \gamma'$ with length at most $r$. So, there are points $a$ and $b$ on $\gamma$ and $\gamma'$ respectively within distance $r$ such that $q_i$ lies on the line segment $\overline{ab}$. Hence we have $\|p-q_i\| + \|q_i-p'\| \leq \|a-q_i\| + \|b-q_i\| = \|a-b\| \leq r = d_F(\gamma, \gamma')$, so $$|v_i(\gamma)- v_i(\gamma')| \leq \frac{\sqrt{2}}{\sigma} (\|p-q_i\| + \|q_i -p'\|) \leq \frac{\sqrt{2}}{\sigma} d_F(\gamma, \gamma').$$
\(3) In this case, since $\frac{x}{\sigma} e^{-x^2/\sigma^2} \leq e^{-(x/\sigma - 1)^2}$ for all $x \geq 0$, the assumed lower bound on the distances from $q_i$ to the two curves, together with the factor $\sqrt{2}$ from the $l^1$-norm at endpoints, gives $|v_i(\gamma)|, |v_i(\gamma')| \leq \frac{1}{\sqrt{2}\,\sigma}{\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma')$; hence $|v_i(\gamma)- v_i(\gamma')| \leq \frac{\sqrt{2}}{\sigma}{\ensuremath{\mathtt{d}_{F}}}(\gamma,\gamma')$.
#### Remark.
In proof of Theorem \[t5\] we see that when $q_i$ does not take an endpoint as argmin, which is always the case for closed curves, then the constant $\sqrt{2}$ is not necessary in the inequality.
\[cor:fix-endpoints\] Let $\gamma, \gamma' \in \Gsim$ with the same endpoints, and also the same tangent at each end point. Then ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma}(\gamma, \gamma') \leq \frac{\sqrt{2}}{\sigma} d_F(\gamma, \gamma')$.
Let $q_i \in Q$. If $q_i$ is not in the area between $\gamma$ and $\gamma'$, then it satisfies Condition (1) of Theorem \[t5\]. Otherwise, we claim that $q_i$ must lie on a line segment $\overline{\gamma(t) \gamma'(\alpha(t))}$ induced by reparametrization $\alpha$ achieving the optimal [Fréchet]{}distance, and thus will satisfy Condition (2) of Theorem \[t5\].
Let $\acute{\alpha}$ describe a reparametrization mapping from $\gamma$ to $\gamma'$ as a non-decreasing, continuous path in the $[0,1] \times [0,1]$ parameter space. It defines a homotopy from $\gamma$ to $\gamma'$, and each $t \in [0,1]$ defines the point on either curve corresponding to a $t$-fraction of the arclength along this path. We can instantiate this homotopy as a flow $f$, the continuous transformation of $\gamma$ to $\gamma'$ parametrized by a value $s \in [0,1]$, so $f(0) = \gamma$ and $f(1) = \gamma'$, and any value in between $f(s)$ is also a curve with image in ${\ensuremath{\mathbb{R}}}^2$. In particular each point on such a curve is still parametrized by $t \in [0,1]$ where $f(s)(t) = (1-s)\gamma(t) + s \gamma'(t)$. Each point $q_i$ in between the curves must be included as some intermediate point $f(s)(t)$; if not then it would mean that the two curves are not homotopic, however they are both simple curves sharing endpoints (see Proposition 1.2 in [@hatcher2005algebraic]). This implies that $q_i$ is on the segment between $\gamma(\acute{\alpha}(t))$ and $\gamma'(\acute{\alpha}(t))$, and must satisfy condition (2).
\[c4\] Let $\gamma, \gamma' \in \Gsim$ be closed and convex, both oriented clockwise or both counterclockwise. Then ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma}(\gamma, \gamma') \leq \frac{1}{\sigma} d_F(\gamma, \gamma')$.
Because every continuous curve can be approximated by smooth curves, without loss of generality we may assume that $\gamma$ and $\gamma'$ are smooth. Let $q\in Q$ be arbitrary and let $A$ and $B$ denote the regions surrounded by $\gamma$ and $\gamma'$ respectively. If $q$ is in $A \cap B$ or in the complement of $A \cup B$, then $v_q(\gamma)$ and $v_q(\gamma')$ would have a same sign and so Condition (1) in Theorem \[t5\] holds. Otherwise, $q$ will be in $A \setminus B$ or $B \setminus A$. Assume that $q$ lies in $B \setminus A$ (the case $A \setminus B$ comes by symmetry). Let $p\in \gamma$ be the closest point of $\gamma$ to $q$. Obviously, $q-p$ is perpendicular to $\gamma$. Now let $p'$ be the intersection of $\gamma'$ and the half-line starting from $p$ and passing through $q$. Then $p$ is the closest point of $\gamma$ to $p'$ (since $p-p'$ is normal to $\gamma$ at $p$ and $\gamma$ is convex) and $q$ lies on the line segment $\overline{pp'}$. Let $p''$ be the nearest point of $\gamma'$ to $q$. Then according to the direction of curves we have $$\begin{aligned}
|v_q(\gamma) - v_q(\gamma')|
& = &
\frac{1}{\sigma} \|q-p\| e^{\frac{- \|q-p\|^2}{\sigma^2}} + \frac{1}{\sigma} \|q-p''\| e^{- \frac{\|q-p'' \|^2}{\sigma^2}}
\\ & \leq &
\frac{1}{\sigma} (\|q-p\| + \|q-p'' \|)
\leq
\frac{1}{\sigma} (\|q-p\| + \|q-p' \|) \vspace{0.1cm}
\\ & = &
\frac{1}{\sigma} \|p-p' \|
\leq
\frac{1}{\sigma} d_H(\gamma, \gamma')
\leq
\frac{1}{\sigma} d_F(\gamma, \gamma'). \end{aligned}$$
Interleaving Bounds for $l^\infty$ Variants {#sec:interleaving}
-------------------------------------------
Using the $l^\infty$ variants, we can show a stronger interleaving property. Let $\Omega$ be a bounded domain in $\mathbb{R}^2$. Let $\diam(\Omega) = \sup_{x,y \in \Omega} \|x-y\|$ be the diameter of $\Omega$. We also denote by $\Gsim_{\Omega}$ the subset of $\Gsim$ containing all curves with image in $\Omega$. In Appendix \[app:Hausdorff\] we show that if $\gamma,\gamma' \in \Gamma_\Omega$ and $Q$ is uniform on a domain $\Omega$, then ${\ensuremath{\mathtt{d}_{H}}}(\gamma,\gamma') = {{\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}}^{\infty}(\gamma,\gamma')$. The signed variant ${\ensuremath{\mathtt{d}_{Q}}}^\infty$ is more closely related to ${\ensuremath{\mathtt{d}_{F}}}$, but it is difficult to show an interleaving result in general because if a curve cycles around multiple times, its image may not significantly change, but its [Fréchet]{} distance does. However, by appealing to a connection to the Hausdorff distance, and then restricting to closed and convex or $\kappa$-bounded [@AKW04] curves, we can still achieve an interleaving bound.
We first focus on closed curves, so $\slfs$ is infinite, and there are no boundary issues; thus it is best to set $\sigma$ sufficiently large so the $\exp(-\|p-q\|^2/\sigma^2)$ term in $v_q$ goes to $1$ and can be ignored. Regardless, ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma} \leq 1/\sqrt{2e}$. Note that $v_q$ and hence ${\ensuremath{\mathtt{d}_{Q}}}$ has a $\frac{1}{\sigma}$ factor, so those terms in the expressions cancel out.
\[lem:H-lb\] Assume that $Q$ is a uniform measure on $\Omega$ and $\sigma$ is sufficiently large. Let $\gamma, \gamma' \in \Gsim_{\Omega}$ be two closed curves such that $d_H(\gamma, \gamma') \leq \frac{\sigma}{\sqrt{2e}}$. Then $\frac{1}{\sigma} d_H(\gamma, \gamma') \leq {\ensuremath{\mathtt{d}_{Q}}}^{\sigma, \infty}(\gamma, \gamma')$.
Let $r = {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma')$. Without loss of generality we can assume that $r = \sup_{p \in \gamma} \inf_{p' \in \gamma'} \|p - p'\|$. Since the range of $\gamma$ is compact (the image of a compact set under a continuous map is compact), there is $p \in \gamma$ such that $r = \min_{p' \in \gamma'} \|p - p'\|$. Similarly, by the compactness of the range of $\gamma'$ we conclude that there is $p' \in \gamma'$ such that $r = \|p - p'\|$. Because $Q$ is dense in $\Omega$, we may assume that $p \in Q$. Since $p' = \argmin_{p'' \in \gamma'} \|p - p''\|$, we observe that $v_p(\gamma') = \frac{1}{\sigma} \|p-p'\| = \frac{r}{\sigma}$. On the other hand, $v_p(\gamma) = 0$ as $p \in \gamma$. Therefore, at least one of the components of the sketched vector $v_Q(\gamma') - v_Q(\gamma)$ is $\frac{r}{\sigma}$ and so $\sigma d_Q^{\sigma, \infty}(\gamma, \gamma') \geq r$.
\[c1\] Assume $Q$ is a uniform measure on $\Omega$ and $\sigma$ is sufficiently large. Let $\gamma, \gamma' \in \Gsim_{\Omega}$ be two closed curves. Then $d_H(\gamma, \gamma') \leq \sqrt{2e} \, {\rm diam}(\Omega) d_Q^{\sigma, \infty}(\gamma, \gamma')$.
\[c6\] Let $Q$ be a uniform measure on $\Omega$ and $\sigma$ be sufficiently large. Let $\gamma, \gamma' \in \Gsim_{\Omega}$ be two closed convex curves with both oriented clockwise/counterclockwise and $d_H(\gamma, \gamma') \leq \frac{\sigma}{\sqrt{2e}}$. Then $d_Q^{\sigma, \infty}(\gamma, \gamma') = \frac{1}{\sigma} d_F(\gamma, \gamma')$.
Applying Lemma \[lem:H-lb\] and the $p = \infty$ version of Corollary \[c4\] we get $\frac{1}{\sigma} {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma') \leq d_Q^{\sigma, \infty}(\gamma, \gamma') \leq \frac{1}{\sigma} d_F(\gamma, \gamma')$. Now by Theorem 1 of [@AKW04] we know that the Hausdorff and Fréchet distances coincide for closed convex curves. Therefore, ${\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma') = {\ensuremath{\mathtt{d}_{F}}}(\gamma, \gamma')$ and the proof is complete.
The proof of Lemma \[lem:H-lb\] shows that the inequality $\frac{1}{\sigma}{\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma') \leq {\ensuremath{\mathtt{d}_{Q}}}^{\sigma, \infty}(\gamma, \gamma')$ remains valid for non-closed curves $\gamma$ and $\gamma'$ as long as $p'$ in the proof is not an endpoint of $\gamma'$. A piecewise linear curve $\gamma$ in $\mathbb{R}^2$ is called [*$\kappa$-bounded*]{} [@AKW04] for some constant $\kappa \geq 1$ if for any $t,t' \in [0,1]$ with $t < t'$, $p = \gamma(t)$, $p' = \gamma(t')$ we have $\gamma([t,t']) \subseteq B_r(p) \cup B_r(p')$, where $r=\frac{\kappa}{2} \|p-p' \|$. The class of $\kappa$-bounded curves includes $\kappa$-straight curves [@AKW04], curves with increasing chords [@Rot94] and self-approaching curves [@AAIKLR01].
\[c7\] Let $Q$ be a uniform measure on $\Omega$ and $\sigma$ be sufficiently large. Let $\gamma',\gamma \in \Gsim_{\Omega}$ with the same endpoints and with the same tangents at endpoints, and both $\kappa$-bounded, with ${\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma') \leq \frac{\sigma}{\sqrt{2e}}$. Then $$\frac{1}{\sigma(\kappa+1)} {\ensuremath{\mathtt{d}_{F}}}(\gamma, \gamma') \leq {\ensuremath{\mathtt{d}_{Q}}}^{\sigma, \infty}(\gamma, \gamma') \leq \frac{\sqrt{2}}{\sigma} {\ensuremath{\mathtt{d}_{F}}}(\gamma, \gamma').$$
The $p = \infty$ version of Corollary \[cor:fix-endpoints\] provides the second inequality. Using Lemma \[lem:H-lb\] we have ${\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma') \leq \sigma {\ensuremath{\mathtt{d}_{Q}}}^{\sigma, \infty}(\gamma, \gamma')$. Now, since $\gamma$ and $\gamma'$ are $\kappa$-bounded, by Theorem 2 of [@AKW04] we have ${\ensuremath{\mathtt{d}_{F}}}(\gamma, \gamma') \leq (\kappa +1) {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma')$. Combining these inequalities we get the desired result.
Experiments: Trajectories Analysis via ${\ensuremath{\mathtt{d}_{Q}}}^{\sigma}$ Distance
========================================================================================
As with the recent vectorized distance ${\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}$ [@PT19a; @PT20], this structure allows for very simple and powerful data analysis. Nearest neighbor search can use heavily optimized libraries [@KGraph; @FALCONN]. Clustering can use Lloyd’s algorithm for $k$-means clustering. And we can directly use many built-in classification methods.
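As a sketch of this pipeline (using scikit-learn, and assuming the curves have already been vectorized into the rows of a matrix `V`; the parameter choices below are purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def cluster_and_classify(V, y, k=5, seed=0):
    """V: (num_curves, n) matrix whose rows are v_Q(gamma); y: class labels."""
    # Lloyd's algorithm (k-means) applied directly to the vectorized curves
    cluster_labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(V)
    # any off-the-shelf classifier can be trained on the same vectors
    Xtr, Xte, ytr, yte = train_test_split(V, y, test_size=0.3, random_state=seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(Xtr, ytr)
    test_error = 1.0 - clf.score(Xte, yte)
    return cluster_labels, test_error
```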
#### Beijing driver classification.
We first recreate the main classification experiment from Phillips and Tang [@PT19a] on the Geolife GPS trajectory dataset [@geolife-gps-trajectory-dataset-user-guide]. After pruning to 128 users, each with between 20 and 200 trajectories, we train a classifier on each pair of users. We repeat on each pair 10 times with different test/train splits, and report the average misclassification rate under various methods, now also including the new vectors $v_Q(\gamma)$ for each curve, using $\sigma = 0.3$ on a domain $[0,1]^2$. The results with linear SVM, Gaussian SVM (hyperparameters $C=2$ and $\gamma = 0.1$), polynomial kernel SVM, decision tree, and random forest (hyperparameters set to “auto”) are shown in Table \[both-tables-Generated\]. While $v_Q^{\mathsf{mD}}$ (with an error rate of $0.052$ with random forest) outperforms $v_Q^\sigma$ (error rate of $0.097$ with random forest), other distances can only use KNN classifiers. $v_Q^\sigma$ performs slightly worse than DTW, Eu, ${\ensuremath{\mathtt{d}_{H}}}$, LCSS and EDR (in range $0.072$ to $0.088$; see Table 1 in [@PT19a]), but better than the discrete [Fréchet]{} distance and LSH approximations of it (in range $0.105$ to $0.241$).
#### Directional synthetic dataset classification.
Second, we create a synthetic data set for which the direction information is essential. We generate $200$ trajectories so that $100$ start in a square $A = [-1,1] \times [-1,1]$ and end in another rectangle $B = [98,99] \times [-1,1]$. The other half start in $B$ and end in $A$. Each is also given $98$ other critical points, the $i$th in rectangle $[i,i+1] \times [-5,5]$ (or reverse order for $B$ to $A$). We try to classify the first half (A to B) from the second half (B to A). We repeat 1000 balanced 70/30 train/test splits and report the classification test error in Table \[both-tables-Generated\]. Now, while $v_Q^{\mathsf{mD}}$ never achieves better than $0.43$ error rate (not much better than random), with all classifiers we achieve close to an error rate of $0$ using $v_Q^\sigma$. Using KNN classifiers, dynamic time warping and the [Fréchet]{} distance can also achieve near-$0$ error rates.
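One possible way to generate such a dataset is sketched below; the text above only fixes the rectangles from which the critical points are drawn, so the uniform sampling and the function name here are our own assumptions.

```python
import numpy as np

def synthetic_directional(num_per_class=100, m=98, seed=0):
    """Trajectories from A=[-1,1]^2 to B=[98,99]x[-1,1] (label 0) and from B to A (label 1)."""
    rng = np.random.default_rng(seed)
    lo_x, hi_x = np.arange(1, m + 1), np.arange(2, m + 2)   # i-th midpoint drawn from [i, i+1]
    curves, labels = [], []
    for label in (0, 1):
        for _ in range(num_per_class):
            start = rng.uniform([-1.0, -1.0], [1.0, 1.0])
            end = rng.uniform([98.0, -1.0], [99.0, 1.0])
            mid_x = rng.uniform(lo_x, hi_x)
            mid_y = rng.uniform(-5.0, 5.0, size=m)
            pts = np.vstack([start, np.column_stack([mid_x, mid_y]), end])
            if label == 1:
                pts = pts[::-1]          # reverse the order of critical points for B -> A
            curves.append(pts)
            labels.append(label)
    return curves, np.array(labels)
```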
[**Distance**]{} $v_Q^{\sigma}$ $v_Q^{\mathsf{mD}}$
-- ---------------------------------- -------------- ---------------- ------------------ -------------- --------------------- ------------------
[**Classifier**]{} [**Mean**]{} [**Median**]{} [**Variance**]{} [**Mean**]{} [**Median**]{} [**Variance**]{}
Linear kernel SVM 0.2623 0.2500 0.0191 0.1766 0.1429 0.0173
Gaussian kernel SVM 0.2210 0.2083 0.0152 0.1890 0.1579 0.0180
Poly kernel SVM, deg=auto 0.2732 0.2667 0.0183 0.2349 0.2222 0.0186
Decision Tree 0.1229 0.1000 0.0096 0.0680 0.0513 0.0050
RandomForest with 100 estimators 0.0972 0.0759 0.0079 0.0521 0.0364 0.0038
Linear SVM 0.0012 0.0000 0.0000 0.4900 0.4833 0.0030
Gaussian kernel SVM 0.0005 0.0000 0.0000 0.4360 0.4333 0.0031
SVM, poly, deg= auto 0.0004 0.0000 0.0000 0.4670 0.4667 0.0031
Decision Tree 0.0287 0.0167 0.0006 0.4827 0.4833 0.0037
LogisticRegression 0.0000 0.0000 0.0000 0.4866 0.4833 0.0031
: Test errors with $v_Q^{\sigma}$ and $v_Q^{\mathsf{mD}}$ vectorizations.[]{data-label="both-tables-Generated"}
[10]{}
Pankaj K. Agarwal, Rinat [Ben Avraham]{}, Haim Kaplan, and Micha Sharir. Computing the discrete frechet distance in subquadratic time. , 43:429–449, 2014.
O. Aichholzer, F. Aurenhammer, C. Icking, R. Klein, E. Langetepe, and G. Rote. Generalized self- approaching curves. , 109:3–24, 2001.
Ann Arbor Algorithms. K-graph. Technical report, <https://github.com/aaalgo/kgraph>, 2018.
Helmut Alt and Michael Godau. Computing the fr[é]{}chet distance between two polygonal curves. , 5:75–91, 1995.
Helmut Alt, Christian Knauer, and Carola Wenk. Comparison of distance measures for planar curves. , 2004.
Nina Amenta and Marshall Bern. Surface reconstruction by voronoi filtering. , 22(4):481–504, 1999.
Nina Amenta, Sunghee Choi, and Ravi Krishna Kolluri. The power crust. In [*Proceedings of the sixth ACM symposium on Solid modeling and applications*]{}, 2001.
Nina Amenta, Sunghee Choi, and Ravi Krishna Kolluri. The power crust, unions of balls, and the medial axis transform. , 19(2-3):127–153, 2001.
Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless seth is false). In [*Proceedings of the forty-seventh annual ACM symposium on Theory of computing*]{}, pages 51–58, 2015.
Julian Baldus and Karl Bringmann. A fast implementation of near neighbors queries for fr[é]{}chet distance (gis cup). In [*Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems*]{}, 2017.
Karl Bringmann. Why walking the dog takes time: [F]{}rechet distance has no strongly subquadratic algorithms unless [SETH]{} fails. In [*FOCS*]{}, 2014.
Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In [*FOCS*]{}, 2015.
Karl Bringmann, Marvin K[ü]{}nnemann, and Andr[é]{} Nusser. Walking the dog fast in practice: Algorithm engineering of the frechet distance. In [*International Symposium on Computational Geometry*]{}, 2019.
Kevin Buchin, Maike Buchin, Wouter Meulemans, and Wolfgang Mulzer. Four soviets walk the dog: Improved bounds for computing the fr[é]{}chet distance. , 58(1):180–216, 2017.
Kevin Buchin, Yago Diez, Tom van Diggelen, and Wouter Meulemans. Efficient trajectory queries under the fr[é]{}chet distance (gis cup). In [*Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems*]{}, 2017.
Kevin Buchin, Anne Driemel, Natasja van de L’Isle, and Andr[é]{} Nusser. klcluster: Center-based clustering of trajectories. In [*Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems*]{}, pages 496–499, 2019.
Frédéric Chazal and David Cohen-Steiner. Geometric inference. , 2012.
Frédéric Chazal, David Cohen-Steiner, and Quentin Mérigot. Geometric inference for probability measures. , 11(6):733–751, 2011.
Frédéric Chazal, Brittany Terese Fasy, Fabrizio Lecci, Bertrand Michel, Alessandro Rinaldo, and Larry Wasserman. Robust topolical inference: Distance-to-a-measure and kernel distance. Technical report, arXiv:1412.7197, 2014.
Frédéric Chazal and Andre Lieutier. The “$\lambda$-medial axis”. , 67:304–331, 2005.
Lei Chen, M. Tamer Özsu, and Vincent Oria. Robust and fast similarity search for moving object trajectories. In [*SIGMOD*]{}, pages 491–502, 2005.
Siu-Wing Cheng, Tamal K Dey, and Jonathan Shewchuk. . CRC Press, 2012.
Michael O. Cruz, Hendrik Macedo, R. Barreto, and Adolfo Guimaraes. , February 2016.
Mark de Berg, Atlas F. [Cook IV]{}, and Joachim Gudmundsson. Fast frechet queries. In [*Symposium on Algorithms and Computation*]{}, 2011.
Anne Driemel, Sariel Har-Peled, and Carola Wenk. Approximating the fr[é]{}chet distance for realistic curves in near linear time. , 48(1):94–127, 2012.
Anne Driemel, Amer Krivosija, and Christian Sohler. Clustering time series under the [F]{}rechet distance. In [*ACM-SIAM Symposium on Discrete Algorithms*]{}, 2016.
Anne Driemel, Ioannis Psarros, and Melanie Schmidt. Sublinear data structures for short frechet queries. Technical report, ArXiv:1907.04420, 2019.
Anne Driemel and Francesco Silvestri. Locality-sensitive hashing of curves. In [*33rd International Symposium on Computational Geometry*]{}, 2017.
Fabian D[ü]{}tsch and Jan Vahrenhold. A filter-and-refinement-algorithm for range queries based on the fr[é]{}chet distance (gis cup). In [*Proceedings of the 25th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems*]{}, pages 1–4, 2017.
Thomas Eiter and Heikki Mannila. Computing discrete [F]{}rechet distance. Technical report, Christian Doppler Laboratory for Expert Systems, 1994.
Arnold Filtser, Omrit Filtser, and Matthew J. Katz. Approximate nearest neighbor for curves — simple, efficient, and deterministic. In [*ICALP*]{}, 2020.
Allen Hatcher. . Cambridge University Press, 2005.
Piotr Indyk. On approximate nearest neighbors in non-euclidean spaces. In [*Proceedings 39th Annual Symposium on Foundations of Computer Science (Cat. No. 98CB36280)*]{}, pages 148–155. IEEE, 1998.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In [*Advances in neural information processing systems*]{}, pages 1097–1105, 2012.
Der-Tsai Lee. Medial axis transformation of a planar shape. , 4(4):363–369, 1982.
Daniel Lemire. Faster retrieval with a two-pass dynamic-time-warping lower bound. , 42(9):2169–2180, 2009.
William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3d surface construction algorithm. , 21:163–169, 1987.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In [*NeurIPS*]{}, 2013.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In [*EMNLP*]{}, 2014.
Jeff M. Phillips and Pingfan Tang. Simple distances for trajectories via landmarks. In [*ACM GIS SIGSPATIAL*]{}, 2019.
Jeff M. Phillips and Pingfan Tang. Sketched mindist. In [*International Symposium on Computational Geometry*]{}, 2020.
Ilya Razenshteyn and Ludwig Schmidt. Falconn-fast lookups of cosine and other nearest neighbors. <https://falconn-lib.org>, 2018.
G. Rote. Curves with increasing chords. , 115:1–12, 1994.
M.I. Schlesinger, E.V. Vodolazskiy, and V.M. Yakovenko. Similarity of closed polygonal curves in [F]{}rechet metric. , 11(6), 2018.
Zeyuan Shang, Guoliang Li, and Zhifeng Bao. Dita: Distributed in-memory trajectory analytics. In [*SIGMOD*]{}, 2018.
Martin Werner and Dev Oliver. Acm sigspatial gis cup 2017: Range queries under fr[é]{}chet distance. , 10(1):24–27, 2018.
Ben H Williams, Marc Toussaint, and Amos J Storkey. A primitive based generative model to infer timing information in unpartitioned handwriting data. In [*IJCAI*]{}, pages 1119–1124, 2007.
Dong Xie, Feifei Li, and Jeff M. Phillips. Distributed trajectory similarity search. In [*VLDB*]{}, 2017.
Yu Zheng, Hao Fu, Xing Xie, Wei-Ying Ma, and Quannan Li. , July 2011.
Relation of ${\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}$ to the Hausdorff distance {#app:Hausdorff}
===================================================================================
In this section we show that the unsigned variant of the sketch $v_{q_i}^{\mathsf{mD}}(\gamma)$ based only on the minDist function, has a strong relationship to the Hausdorff distance. In particular, when $Q$ is dense enough, and the $l^\infty$ variant is used, they are identical.
\[old d\_Q\] Let $\gamma, \gamma'$ be two continuous curves and $q \in \mathbb{R}^2$. Then $|v_q^{\mathsf{mD}}(\gamma) - v_q^{\mathsf{mD}}(\gamma') | \leq d_H(\gamma, \gamma').$ Consequently, ${\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}(\gamma, \gamma') \leq {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma')$ for any landmark set $Q$.
Let $r = d_H(\gamma, \gamma')$. Suppose $p = \argmin_{p\in \gamma} \|q-p\|$ and $p' = \argmin_{p'\in \gamma'} \|q-p'\|$. Let also $y = \argmin_{y\in \gamma} \|y-p' \|$ and $y' = \argmin_{y' \in \gamma'} \|y'-p \|$. Then we have $\|q-p\| \leq \|q-y\|$ and $\|q-p' \| \leq \|q-y' \|$ and according to the definition of the Hausdorff distance $\|y-p' \| \leq r$ and $\|y' - p\| \leq r$. Now there are two possible cases:
1. $\|q-p\| \leq \|q-p' \|$. Then using the triangle inequality we get $$0 \leq \|q-p\| - \|q-p' \| \leq \|q-y\| - \|q-p' \| \leq \|y- p' \| \leq r.$$
2. $\|q-p' \| \leq \|q-p\|$. Then $$0 \leq \|q-p' \| - \|q-p\| \leq \|q-y' \| - \|q-p\| \leq \|y' - p\| \leq r.$$
Therefore, $| \|q-p\| - \|q-p' \| | \leq r$. The next inequality is immediate as we take average in computing $d_Q$.
\[c2\] Let $\Omega \subset \mathbb{R}^2$ be a bounded domain and $Q$ is dense in $\Omega$. If the range of $\gamma, \gamma'$ are included in $\Omega$, then ${{\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}}^{\infty}(\gamma, \gamma') = {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma')$.
Employing Theorem \[old d\_Q\] we only need to show ${{\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}}^{\infty}(\gamma, \gamma') \geq {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma')$. Let $r = {\ensuremath{\mathtt{d}_{H}}}(\gamma, \gamma')$. Without loss of generality we can assume that $r = \sup_{p \in \gamma} \min_{p' \in \gamma'} \|p-p'\|$. Since the range of $\gamma$ is compact (the image of a compact set under a continuous map is compact), there is $p \in \gamma$ such that $r = \min_{p' \in \gamma'} \|p-p'\|$. Similarly, by compactness of the range of $\gamma'$ we conclude that there is $p' \in \gamma'$ such that $r = \|p - p'\|$. Because $Q$ is dense in $\Omega$, without loss of generality (up to an $\varepsilon$-argument) we may assume that $p \in Q$. Since $p' = \argmin_{p' \in \gamma'} \|p - p' \|$, we observe that $v_p^{\mathsf{mD}}(\gamma') = \|p-p' \| = r$. On the other hand, $v_p^{\mathsf{mD}}(\gamma) = 0$ as $p \in \gamma$. Therefore, at least one of the components of the sketched vector $v_Q^{\mathsf{mD}}(\gamma') - v_Q^{\mathsf{mD}}(\gamma)$ is $r$ and so ${{\ensuremath{\mathtt{d}^{\textsf{mD}}_{Q}}}}^{\infty}(\gamma, \gamma') \geq r$.
Technical Details on Defining the Normal and Computing $v_q(\gamma)$ {#sec8}
====================================================================
\[app:alg\]
We need to assign a normal vector to each point of a curve $\gamma$. Let $\gamma\in \Gamma'$ and $q \in \mathbb{R}^2$. Assume $p= \argmin_{p'\in \gamma} \|q-p'\|$. If $p \in \reg(\gamma)$, as we mentioned earlier, according to the right hand rule, we can assign a unit normal vector to $\gamma$ at $p$ which is compatible with the direction of $\gamma$. We also assign a fixed normal vector at endpoints of a non-closed curve as we can use the normal vector of tangent line at endpoints that is compatible with the direction of curve. Now it remains to define a normal vector at critical points. We must be careful about doing this as any vector can be considered as a normal vector at critical points. Our aim is to define a unique normal vector at critical points with respect to a landmark point. Assume $p$ is not an endpoint of $\gamma$. Denote by $N(p)$ the closure of the set of all unit vectors $u$ such that $u$ is perpendicular to $\gamma$ at $p$ and is compatible with the direction of $\gamma$ by the right hand rule (for instance, in Figure \[Fig15\](a), $N(p)$ is the set of all unit vectors between $n$ and $n'$). Notice that for regular points on the curve $N(p)$ is a singleton and indeed, it does not depend on $q$ but only on the direction of curve. Then we define $$n_{p}(q)= \argmax \big\{|\langle u, q-p\rangle|: u \in N(p)\big\}.$$ It can be readily seen that $n_p(q) = \sign(q,p,\gamma) \frac{q-p}{\|q-p\|}$, where $\sign(q,p,\gamma)$ can be obtained via Algorithm \[alg1\]. At endpoints we fix a normal vector, the one that is perpendicular to the tangent line. Notice, as the range of $\gamma$ is a compact subset of $\mathbb{R}^2$ and the norm function is continuous, the point $p$ exists. Continuity of the inner product and compactness of $N(p)$ guarantee the existence of $n_{p}(q)$ as well. (A different algorithmic approach for finding $n_p(q)$, when $p$ is a critical point, is given below.)
Computing $n_p(q)$ and the Sign
-------------------------------
Assume that a trajectory $\gamma$ is given by the sequence of its critical points (including endpoints) $\{p_i\}_{i=0}^n$ and $\|p_i-p_{i+1}\|>0$ for each $0\leq i \leq n-1$. The following algorithm determines the sign of a landmark point $q$ with respect to $\gamma$ when $p = \argmin\{\|q-x\|: x \in \gamma\}$ is a critical point of $\gamma$.
1. Find $w_i$ (the normal vector to the segment $\overrightarrow{c_i c_{i+1}}$).
2. Find $\alpha$ (the angle between $w_{i-1}$ and $w_i$).
3. Find $\theta$ (the angle between $w_{i-1}$ and $q-p$).
4. Put $t = \frac{1}{2}(1 - \cos(\frac{\pi \theta}{\alpha}))$.
5. Let $n_t$ be the normalized convex combination of $w_{i-1}$ and $w_i$ by $t$.
6. Return the sign of the inner product of $n_t$ and $q-p$.
Because each step in the algorithm takes constant time, it is clear that the runtime is $O(1)$. Regarding the space, it is only required to store the two consecutive segments whose intersection contains $p$, plus a few variables. Thus, the memory usage is also $O(1)$.
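A minimal Python sketch of these steps is given below; the helper names are ours, points are assumed to be given as length-2 NumPy arrays, and only the generic case of an interior critical point is handled (endpoints use the fixed normals described in the text).

```python
import numpy as np

def seg_normal(a, b):
    """Normal (b_y - a_y, a_x - b_x) of the directed segment a -> b, matching w_i above."""
    return np.array([b[1] - a[1], a[0] - b[0]], dtype=float)

def sign_at_critical_point(c_prev, p, c_next, q, eps=1e-12):
    """Sign of landmark q when its closest point p is the critical point joining two segments."""
    w_prev = seg_normal(c_prev, p)                # normal of the incoming segment
    w_prev = w_prev / np.linalg.norm(w_prev)
    w_next = seg_normal(p, c_next)                # normal of the outgoing segment
    w_next = w_next / np.linalg.norm(w_next)
    u = (q - p) / np.linalg.norm(q - p)
    alpha = np.arccos(np.clip(np.dot(w_prev, w_next), -1.0, 1.0))   # angle between the two normals
    theta = np.arccos(np.clip(np.dot(w_prev, u), -1.0, 1.0))        # angle between w_prev and q - p
    t = 0.5 * (1.0 - np.cos(np.pi * theta / max(alpha, eps)))
    n_t = (1.0 - t) * w_prev + t * w_next         # convex combination; normalizing it would not change the sign
    return float(np.sign(np.dot(n_t, q - p)))
```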
Now, in Algorithm \[alg2\], for a landmark point $q\in \mathbb{R}^2$ and a trajectory $\gamma$ we provide steps to compute the sketch vector $v_q(\gamma)$.
1. For each $0 \leq i \leq n-1$, find $d_i$, the distance of $q$ from the segment $S_i = \overrightarrow{c_i c_{i+1}}$, and $l_i$, the signed distance of $q$ to the line through the segment $S_i$.
2. Set $j = \argmin\{d_i: 0 \leq i \leq n-1\}$.
3. Set $p = \argmin\{\|q-x\|: x \in S_j\}$.
4. Using Algorithm \[alg1\], compute and return $v_q(\gamma)$.
It can be readily seen that the algorithm can be run in linear time in terms of the size of $\gamma$. Details are included in Appendix \[B3\]. Turning to the memory usage, it is easy to observe that $O(n)$ space is enough.
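For completeness, a self-contained Python sketch of the most common case (the closest point of $\gamma$ lies in the interior of a segment) is shown below; critical points and endpoints additionally require the sign and $l^1$ rules above, so this is an approximation of, not a substitute for, Algorithm \[alg2\].

```python
import numpy as np

def v_q_interior(q, pts, sigma):
    """Feature value v_q^sigma(gamma) for a polyline with vertex array pts of shape (n+1, 2).

    Only the generic case is handled: the nearest point of gamma is assumed to lie in
    the interior of some segment, so its signed line distance can be used directly.
    """
    q = np.asarray(q, dtype=float)
    pts = np.asarray(pts, dtype=float)
    best = (np.inf, 0.0)                                     # (distance, signed distance)
    for a, b in zip(pts[:-1], pts[1:]):
        d = b - a
        t = np.clip(np.dot(q - a, d) / np.dot(d, d), 0.0, 1.0)
        proj = a + t * d                                     # closest point of the segment to q
        dist = np.linalg.norm(q - proj)
        if dist < best[0]:
            n = np.array([d[1], -d[0]]) / np.linalg.norm(d)  # normal of the segment, as in w_i above
            best = (dist, np.dot(q - proj, n))
    dist, signed = best
    return (signed / sigma) * np.exp(-dist ** 2 / sigma ** 2)
```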
Defining the Normal: A Computational Approach
---------------------------------------------
If we look at self-crossing curves, for instance, we will notice that the landmark point $q$ will opt for the crossing point $p$ only if the tangent lines of $\gamma$ at $p$ make an angle $\beta \geq \pi$ on the side where $q$ lies (see Figure \[Fig11\]). Therefore, there is no need to define a normal vector at crossing points with $\beta < \pi$, and without loss of generality we may assume that $\beta \geq \pi$.
![Choice of self-crossing point[]{data-label="Fig11"}](self-cross)
Let $\gamma$ be a curve with $\beta \geq \pi$ at a crossing point $p$ (Figure \[Fig11\](b),(c)). For $t\in [0,1]$ we consider $n_t = \frac{(1-t) n + t n'}{\|(1-t) n + t n'\|}$, where $n$ and $n'$ are normal vectors to the curve at a crossing point $p$. It is necessary to agree that $n_{1/2} = 0$ if $n'= -n$ which is possible when $\beta = \pi$. Now the question is how to choose the parameter $t$? Let $\alpha$ be the angle between $n$ and $n'$ and $\theta$ be the angle between $n$ and $q-p$ (as shown in Figure \[Fig11\](b),(c)), i.e. $$\alpha = \arccos(\langle n, n'\rangle) \quad {\rm and} \quad \theta = \arccos(\langle n, \frac{q-p}{\|q-p\|}\rangle).$$ Then $0 < \alpha \leq \pi$ and $0 \leq \theta \leq \alpha$ and thus $0 \leq \frac{\pi}{\alpha} \theta \leq \pi$. Now we can set $t = \dfrac{1-\cos(\frac{\pi}{\alpha} \theta)}{2}.$ Therefore, the following hold:
1. If $\theta = 0$, then $t=0$, $q$ is on the left dashed green line in Figure \[Fig11\](c) and $n_t = n = \frac{q-p}{\|q-p\|}$.
2. Moving towards the bisector, $n_t$ rotates towards $n'$ and so $\theta$ increases (but still $\theta < \pi/2$). Hence $\langle n_t, q-p \rangle$ is positive and is decreasing as a function of $\theta$.
3. When $\theta = \frac{\alpha}{2}$, $t=\frac{1}{2}$ and $q$ is on the bisector of $\theta$ and $\langle n_t, q-p \rangle = 0$ (Figure \[Fig11\](c)).
4. Moving from bisector towards the other side of the green dashed angle, $\theta$ increases and $\theta > \pi/2$. Thus $\langle n_t, q-p \rangle$ is negative and decreases.
5. If $\theta = \alpha$, then $t=1$, $q$ is on the right dashed green line in Figure \[Fig11\](c) and $n_t = n' = - \frac{q-p}{\|q-p\|}$.
However, we will only need to compute the inner product of $n_t$ and $q-p$, which can easily be obtained by $$\langle n_t, q-p \rangle = \|q-p\| \cos(\frac{\pi}{\alpha} \theta).$$
The way we defined $n_t$ is a general rule for any crossing or critical point. Now we clarify how to obtain $n_t$ in different situations.
1. Let $p$ be a critical point which is not a crossing point of $\gamma$. Then as above a landmark point $q$ will choose $p$ as an $\argmin$ point only if tangent lines at $p$ constitute an angle $\beta \geq \pi$ and $q$ is inside of that area (Figure \[Fig15\](a)). In this case we can easily see that $n_t = \frac{q-p}{\|q-p\|}$ for any $t$.
2. In self-crossing case of Figure \[Fig11\](b), again we can observe that $n_t = \frac{q-p}{\|q-p\|}$ for any $t$.
3. If $p$ is an end point, we consider $n$ as the normal vector at $p$ to the tangent line at $p$ to $\gamma$ and we set $n' = -n$. Then for $t \in [0,\frac{1}{2})$, $n_t = n$, $n_{1/2} = 0$ and for $t \in (\frac{1}{2},1]$, $n_t = -n$ (see Figure \[Fig15\](b)).
As we saw above, $n_t$ depends upon the landmark point $q$, that is, a critical point $p$ can be an $\argmin$ point for many landmark points $q$. Therefore, we will use the notation $n_p(q)$ instead of $n_t$. For $p \in \reg(\gamma)$ we set $n_p(q) = n_p$ for any $q$ such that $p = \argmin_{p' \in \gamma} \|q - p'\|$.
The Algorithmic Steps {#B3}
---------------------
Detailed versions of Algorithms \[alg1\] and \[alg2\] are included here.
1. Set $w_i = (b_{i+1}-b_i,a_i - a_{i+1})$ and $w_i' = w_{i-1}$.
2. Set $w_0 = (b_0-b_1,a_1 - a_0)$ and $w_0' = - w_0$; set $w_n = w_{n-1}$ and $w_n' = - w_{n-1}$.
3. Set $\alpha = \arccos(\langle w_i, w_i' \rangle)$ and $\theta = \arccos(\langle w_i, \frac{q-p}{\|q-p\|} \rangle)$.
4. Set $t = \frac{1}{2}(1 - \cos(\frac{\pi \theta}{\alpha}))$ and $n_t = ((1-t) w_i + t w_i')/\|(1-t) w_i + t w_i'\|$.
5. Return $\sign(\langle n_t, q-p \rangle)$.
1. For each $0 \leq i \leq n-1$, set $S_i = {\rm segment}(c_i,c_{i+1}) = \overrightarrow{c_i c_{i+1}}$ and let $L_i$ be the line passing through $c_i, c_{i+1}$ with normal vector $w_i = (b_{i+1}-b_i,a_i - a_{i+1})$.
2. Set $l_i = {\rm signeddist}(q,L_i)$ and $d_i = \dist(q,S_i) = \min\{|l_i|, \|q-c_i\|, \|q-c_{i+1}\|\}$.
3. Set $j = \argmin\{d_i: 0 \leq i \leq n-1\}$ and $p = \argmin\{\|q-x\|: x \in S_j\}$.
4. Depending on whether $p$ lies in the interior of $S_j$, is a critical point of $\gamma$, or is the endpoint $c_0$ or $c_n$, set respectively $v= \frac{1}{\sigma} l_j e^{ - l_j^2/ \sigma^2}$, or $v = \frac{1}{\sigma} \sign(q,p,\gamma) d_j e^{ - d_j^2/ \sigma^2}$, or $v = \frac{1}{\sigma} \sign(q,c_0,\gamma) \langle q-c_0, \frac{w_0}{\|w_0\|} \rangle (|\langle q-c_0, \frac{w_0}{\|w_0\|} \rangle| + |\langle q-c_0, \frac{c_1 - c_0}{\|c_1 - c_0\|} \rangle|) e^{- d_0^2/ \sigma^2}$, or $v = \frac{1}{\sigma} \sign(q,c_n,\gamma) \langle q-c_n, \frac{w_{n-1}}{\|w_{n-1}\|} \rangle (|\langle q-c_n, \frac{w_{n-1}}{\|w_{n-1}\|} \rangle| + |\langle q-c_n, \frac{c_{n}-c_{n-1}}{\|c_{n}-c_{n-1}\|} \rangle|) e^{ - d_{n-1}^2/ \sigma^2}$.
5. Return $v$.
Since every step in the for-loop of Algorithm \[alg4\] needs $O(1)$ time to be computed, the for-loop only needs $O(n)$ time where $n$ is the number of critical points of $\gamma$. Note that $l_i$ can be computed by $$l_i = \frac{(b_{i+1}-b_i) x_0 + (a_i - a_{i+1}) y_0 + a_{i+1}b_i - b_{i+1} a_i}{a^2+b^2},$$ where $a = a_i - a_{i+1}$, $b = b_{i+1}-b_i$. Finding the minimum of an array takes linear time in the size of the array, so $j$ needs $O(n)$ time to be computed. Obviously, $p$ needs only constant time as it is $p_i$ or $p_{i+1}$ or $$\Big(\frac{a(ax_0 - by_0)-bc}{a^2+b^2}, \frac{b(by_0 - ax_0)-bc}{a^2+b^2}\Big),$$ where $c = b_{i+1} a_i - a_{i+1}b_i$. The rest of the algorithm requires $O(1)$ time considering the fact that calculating $\sign(q,p,\gamma)$, utilizing Algorithm \[alg1\], takes a constant time. Therefore, Algorithm \[alg2\] runs in linear time in terms of the number of critical points of $\gamma$. Turning to the memory usage, first we need to save $\gamma$ as $n$ points. Inside the for-loop, we do not need to save $S_i$ and $L_i$. It is necessary to save $l_i$ and $d_i$. Hence, $O(n)$ space is enough for the for-loop. Then we only need 4 spaces to save 4 variables $j,p, \sigma, v$. Therefore, the algorithm requires $O(n)$ space to be run, where $n$ is the number of critical points of $\gamma$.
---
abstract: 'The first-principles full-potential linearized augmented plane-wave method based on density functional theory is used to investigate electronic structure and magnetic properties of hypothetical binary compounds of I$^{A}$ subgroup elements with nitrogen (LiN, NaN, KN and RbN) in three assumed types of crystalline structure (rock salt, wurtzite and zinc-blende). We find that, due to the spin polarized *p* orbitals of N, all four compounds are half-metallic ferromagnets with wide energy bandgaps (up to 2.0 eV). The calculated total magnetic moment in all investigated compounds for all three types of crystal structure is exactly 2.00 $\mu _{\text{B}}$ per formula unit. The predicted half-metallicity is robust with respect to lattice-constant contraction. In all cases the ferromagnetic phase is energetically favored with respect to the paramagnetic one. The mechanism leading to half-metallic ferromagnetism and synthesis possibilities are discussed.'
author:
- 'Krzysztof Zberecki, Leszek Adamowicz and Michał Wierzbicki'
title: |
Half-metallic ferromagnetism in binary compounds\
of alkali metals with nitrogen: *Ab initio* calculations
---
Introduction
============
Half-metallic (HM) ferromagnets are materials in which, due to the ferromagnetic decoupling, one of the spin subbands is metallic, whereas the Fermi level falls into a gap of the other subband. The concept of HM ferromagnet was first introduced by de Groot *et al.* in 1983 [@degroot] on the basis of band structure calculations for NiMnSb and PtMnSb semi-Heusler alloys. HM ferromagnets are considered as promising materials to exploit the spin of charge carriers in new generations of transistors and other integrated spintronic devices [@spintro], in particular, as a source of spin-polarized carriers injected into semiconductors, since only one spin channel is active during charge transport, thus leading to 100% spin-polarized electric current.
Since 1983 many HM ferromagnets have been theoretically predicted, but very few of them have found experimental confirmation, such as the metallic oxides CrO$_{2}$ [@oxid1] and Fe$_{3}$O$_{4}$ [@oxid2] or the manganese perovskite La$_{0.7}$Sr$_{0.3}$MnO$_{3}$ [@perov]. Many HM ferromagnetic materials were predicted in transition-metal pnictides [@pni1; @pni2] and chalcogenides [@chalco1; @chalco2] by means of first-principles calculations.
Recently, an unusual class of ferromagnetic materials [@hmf1; @hmf2; @hmf3; @hmf4; @hmf5; @chang], which do not contain transition-metal or rare-earth atoms, has been predicted and analysed theoretically by *ab initio* calculations. In Ref. [@hmf1] the authors present *ab initio* calculations for CaAs in the zinc-blende structure, where magnetic order is created with the main contribution of the anion ${p}$ electrons (“${p}$-electron” ferromagnetism). A more comprehensive study was made in [@hmf2], where the authors investigate ${p}$-electron ferromagnetism in a number of tetrahedrally coordinated binary compounds of I/II-V elements. The characteristic feature of this class of materials is the integer value of the total magnetic moment per formula unit, which, in some combinations of elements, can be as large as 3 $\mu_{\text{B}}$ [@chang]. Results presented in Ref. [@hmf3], where magnetic and structural properties of II$^{A}$-V nitrides have been investigated using *ab initio* methods, motivated us to check the possibility of finding half-metallic ferromagnetism in I$^{A}$-N binary compounds. Another motivation was that the atomic bonds in I$^{A}$-N nitrides are supposed to be mostly ionic in nature (due to the significant difference in electronegativity between I$^{A}$ atoms and the nitrogen atom), which was essential for the appearance of ${p}$-electron ferromagnetism in all previously considered HM cases ([@hmf1]-[@hmf5]).
In this paper we present electronic structure and magnetic properties of hypothetical I$^{A}$-N binary compounds (LiN, NaN, KN and RbN) with the rock salt (RS), wurtzite (WZ) and zinc-blende (ZB) crystalline structure, calculated by means of the first-principles full-potential linearized augmented plane-wave method. We find that all four compounds in all three types of structures are HM ferromagnets with robust half-metallicity against lattice compression in the range from 2% for LiN (RS) up to 50% for NaN (WZ). This is a crucial parameter for the practice of epitaxial growth, *e.g.* by means of MBE or MOCVD, to fabricate magnetic ultrathin layer structures for spin injection into a suitable semiconductor substrate.
The paper is organized as follows. Section II shows details of our calculation method, while section III presents calculated total energy and band structure. In section IV we analyse density of states and the origin of half-metallic ferromagnetism. The paper is concluded with section V.
Computational method
====================
All calculations were performed using the WIEN2k code [@wien1] which implements the full-potential linearized augmented plane wave (FLAPW) method [@flapw1]. Exchange and correlation were treated in the local spin density approximation (LSDA) by adding gradient terms. This GGA approximation was used in the Perdew-Burke-Ernzerhof [@gga1] parametrization. It should be mentioned that LSDA gives the magnetic moments in a very good agreement with experiment but underestimates the lattice constants in the case of transition metals. On the other hand, gradient corrections significantly reduce this error and give the correct phase stability but tend to overestimate the magnetic moment. We checked this deficiency of GGA in our calculations and it turned out that it has a small effect on the considered systems.
The convergence of the basis set was controlled by the cut-off parameter $RK_{\text{max}}$ = 8 together with a 5000 k-point mesh for the integration over the Brillouin zone. The angular momentum expansion up to $l$ = 10 and $G_{\text{max}}$ =12 a.u.$^{-1}$ ($G_{\text{max}}$=14 a.u.$^{-1}$ in the case of RbN) for the potential and charge density was employed in the calculations. Self-consistency was considered to be achieved when the total energy difference between successive iterations is less than 10$^{-5}$ Ry per formula unit. Geometry optimization was performed allowing all atoms in the unit cell to relax, constrained to the initially assumed crystal symmetry.
Total energy and band structure
===============================
For all four compounds investigated by us the total energies *E* have been calculated as a function of the lattice parameter *a* for three crystal structures, namely RS, WZ and ZB, in the ferromagnetic and paramagnetic states. We also optimized the lattice constants *a* by minimization of the total energy *E*. Figure 1 presents the total energy *E* versus lattice constant *a* for all three crystal structures. As one can see, in all compounds the RS structure is the most energetically favored, thus indicating ionic bonding as the prevailing coupling mechanism. Also, in all the cases, the paramagnetic phase is higher in energy than the corresponding ferromagnetic one. Calculated equilibrium lattice constants *a*$_{0}$ can be found in Table I as well as the differences between total equilibrium energies of the ferromagnetic and paramagnetic states $\Delta E_{\text{tot}}^{\text{f-p}}$.
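As an illustration of this optimization step, the equilibrium lattice constant can be estimated from a handful of computed $(a, E)$ points, for instance with a simple quadratic fit near the minimum; the procedure below is only a sketch, and the numerical values are placeholders rather than the computed data.

```python
import numpy as np

# placeholder (a, E) samples around the minimum; real values come from the FLAPW total-energy runs
a = np.array([5.6, 5.7, 5.8, 5.9, 6.0])                  # lattice constant (angstrom)
E = np.array([-0.010, -0.021, -0.025, -0.022, -0.012])   # total energy relative to a reference (Ry)

c2, c1, c0 = np.polyfit(a, E, 2)       # fit E(a) ~ c2*a^2 + c1*a + c0 near the minimum
a0 = -c1 / (2.0 * c2)                  # minimizer of the fitted parabola
E0 = np.polyval([c2, c1, c0], a0)
print(f"estimated a0 = {a0:.3f} angstrom, E(a0) = {E0:.4f} Ry")
```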
The calculated total magnetic moment in all investigated compounds for all three types of crystal structures is exactly 2.00 $\mu _{\text{B}}$ per formula unit. An integer value of the magnetic moment is a characteristic feature of HM ferromagnetism [@hmf2]. Table I shows total values of magnetic moments for all compounds as well as contributions from the I$^{A}$ atoms (Li, Na, K and Rb), the N atom and the interstitial region to the total magnetic moment. The main contribution in all cases comes from the nitrogen anion, ranging from 1.51 $\mu _{\text{B}}$ for LiN in ZB to 1.85 $\mu _{\text{B}}$ for KN in RS and ZB, which confirms the general feature that in HM ferromagnetic compounds most of the resulting global magnetic moment is carried by the anion electrons. To verify that the integer value of the total magnetic moment stabilizes the crystal structure, we have calculated the total energy *E* as a function of the total magnetic moment $\mu _{\text{tot}}$. Indeed, Figure 2 shows that the creation of a magnetic moment equal to 2.00 $\mu _{\text{B}}$ leads to the minimization of the total energy in all the cases.
From the application point of view it is important to study the robustness of the half-metallicity with respect to the lattice constant. Figure 3 shows the magnetic moment as a function of the lattice constant for all four compounds in three crystal structures. In the case of LiN the total magnetic moment remains integer until the lattice constant is compressed to a critical value of 2.75$\ {\rm\AA}$, 3.75$\ {\rm\AA}$ and 4.25$\ {\rm\AA}$ for WZ, ZB and RS structures, respectively. In the case of NaN, KN and RbN the values can be read from Figure 3. In all studied compounds the most promising is the wurtzite structure, where half-metallicity is maintained up to a contraction of the lattice parameter of 9% (LiN), 54% (NaN), 22% (KN) and 21% (RbN). Wide bandgap semiconductors AlN, GaN and ZnO in the WZ crystal structure are considered as strong potential materials for spintronic applications. The spin scattering relaxation time of charge carriers during the transport process in these materials is strongly increased as compared to GaAs [@relax]. This means a longer lifetime of electrons in a particular spin state and a longer spin “memory” of electrons when injected into these materials. Lattice constants of these wide bandgap semiconductors are close to 3.2$\ {\rm\AA}$, which is in the range of LiN and NaN HM ferromagnetism in the WZ structure with 3% and 18% lattice mismatch, respectively. Therefore, having in mind the state of the art in producing artificial structures, at least some of the HM ferromagnets studied here seem to be potential candidates for epitaxial growth on AlN, GaN and ZnO substrates as layers for injection of 100% spin-polarized electrons. Most ZB III-V semiconductors as well as narrow bandgap IV-VI semiconductors in the RS structure (binary compounds of Pb with S, Se and Te) also satisfy lattice matching conditions as possible substrates for bulk growth of the I$^{A}$-N compounds proposed here in the HM ferromagnetic phase.
Figures 4 and 5 illustrate the calculated spin resolved electron band structure of RbN at the equilibrium lattice constant for the three types of crystal structures. In all cases the minority spin subband is metallic, while the majority spin subbands are separated from the Fermi level by wide bandgaps of 4.22 eV, 3.48 eV and 3.45 eV in the RS, WZ and ZB crystal structures, respectively. This is in contrast with the 3*d* pnictides, in which the bandgap appears in the minority spin subband as a consequence of the $d$ band splitting [@pni2]. The three remaining compounds (LiN, NaN, and KN) also demonstrate a half-metallic nature with wide bandgaps. The HM bandgaps, separating the allowed energies for majority spin states from the Fermi level, are wide, ranging from about 0.2 eV for LiN (RS) to no less than 2.0 eV in the case of KN and RbN for all three considered crystal structures. The last value is, to our knowledge, the largest one ever obtained for a binary compound HM ferromagnet, thus making KN and RbN very promising materials for spintronic applications. The bands of RbN are flat and nearly dispersionless close to the Fermi level. The mechanism leading to this kind of energy bands has been discussed by other authors [@hmf2; @hmf3] as an important condition for the stability of HM ferromagnetism.
In the case of NaN and KN the bands look very similar to those presented in Figures 4 and 5 for RbN. For LiN the situation is slightly different, because the bands near the Fermi level are more dispersive, but far from breaking down the HM ferromagnetism. Table II presents the calculated values of the main bandgaps of the majority spin-up ($E_{\text{g}}^{\text{up}}$) and minority spin-down ($E_{\text{g}}^{\text{dn}}$) subbands, as well as the HM bandgaps $E_{\text{g}}^{\text{HM}}$ in the majority-spin subband. A decrease of the main bandgaps (of direct character only in the case of the RS structure) and an increase of the HM bandgaps with increasing atomic number of the alkali metal element is observed.
$a_{0}$(${\rm \AA}$) $\mu_{\text{tot}}(\mu_{\text{B}})$ $\mu_{\text{I}^A}(\mu_{\text{B}})$ $\mu_{N}(\mu_{\text{B}})$ $\mu_{\text{int}}(\mu_{\text{B}})$ $\Delta{E}_{\text{tot}}^{\text{f-p}}$(Ry)
------ ---------------------- ------------------------------------ ------------------------------------ ---------------------------- ------------------------------------ ------------------------------------------- --
LiN
(RS) 4.36 2.00 0.06 1.72 0.21 $-0.025$
(WZ) 3.31 2.00 0.03 1.17 0.80 $-0.102$
(ZB) 4.68 2.00 0.04 1.51 0.44 $-0.052$
NaN
(RS) 5.03 2.00 0.01 1.73 0.26 $-0.061$
(WZ) 3.91 2.00 0.02 1.47 0.51 $-0.141$
(ZB) 5.49 2.00 0.01 1.67 0.31 $-0.074$
KN
(RS) 5.82 2.00 0.02 1.85 0.12 $-0.075$
(WZ) 4.51 2.00 0.03 1.75 0.22 $-0.160$
(ZB) 6.38 2.00 0.03 1.85 0.11 $-0.080$
RbN
(RS) 6.12 2.00 0.03 1.83 0.13 $-0.074$
(WZ) 4.75 2.00 0.03 1.73 0.24 $-0.156$
(ZB) 6.70 2.00 0.03 1.83 0.13 $-0.077$
: Equilibrium lattice constants *a*$_{0}$ (in $\ {\rm\AA}$) together with the total $\protect\mu_{\text{tot}}$, partial atomic-site resolved $\protect\mu_{\text{I}^{A}}$, $\protect\mu_{\text{N}}$ and interstitial $\protect\mu_{\text{int}}$ magnetic moments (in $\protect\mu_{\text{B}}$), as well as the total energy difference $\Delta{E}_{\text{tot}}^{\text{f-p}}$ (in Ry) between the ferromagnetic and paramagnetic states, for the RS, WZ and ZB crystal structures.
$E_{\text{g}}^{\text{up}}$ (eV) $E_{\text{g}}^{\text{dn}}$ (eV) $E_{\text{g}}^{\text{HM}}$ (eV)
---------- --------------------------------- ---------------------------------- -----------------------------------
LiN (RS) 8.05 6.26 0.2
LiN (WZ) 6.56 4.53 0.9
LiN (ZB) 6.55 4.52 1.2
NaN (RS) 5.17 3.12 1.0
NaN (WZ) 4.38 2.20 1.7
NaN (ZB) 4.44 2.37 1.6
KN (RS) 4.54 2.37 2.0
KN (WZ) 3.81 1.72 2.0
KN (ZB) 3.77 1.77 2.0
RbN (RS) 4.22 2.06 2.0
RbN (WZ) 3.48 1.42 2.0
RbN (ZB) 3.45 1.39 2.0
: Energy bandgaps for majority up-spin ($E_{\text{g}}^{\text{up}}$) and minority down-spin ($E_{\text{g}}^{\text{dn}}$) states together with half-metallic bandgaps ($E_{\text{g}}^{\text{HM}}$) for three crystal structures.
Density of states and HM ferromagnetism
=======================================
It is instructive to compare the band structure (Figs. 4 and 5) with the spin resolved total and partial, site and symmetry projected density of states (DOS) presented in Figure 6. There are six valence electrons in RbN (Rb: $5s^{1}$; N: $2s^{2}2p^{3}$). Two of them occupy the low-energy N $s$ states, about 11 eV below the Fermi level, and the remaining four electrons are mainly involved in filling the anion N $p$ states. The crystal field of cubic or hexagonal symmetry caused by the surrounding N anions splits the Rb cation 4${d}$ states, with the three $t_{2g}$ states ($d_{xy}, d_{yz}, d_{zx}$) lying lower in energy and the two $e_{g}$ states ($d_{x^{2}-y^{2}}, d_{3z^{2}-r^{2}}$) lying higher. Hybridization between the N *p* states and the Rb $t_{2g}$ or $e_{g}$ 4*d* states (depending on the type of crystal lattice) creates both bonding and antibonding hybrid orbitals. The bonding orbital, lower in energy, appears at the edge of the normally occupied region of N *p* states. The antibonding hybrid orbital remains in the Rb $t_{2g}$ or $e_{g}$ manifold but is pushed up in energy relative to the nonbonding state.
The bands around the Fermi level are formed mainly of N *p* states with a small admixture of Rb ${d}$ states, with predominant $e_{g}$ symmetry in the RS structure and $t_{2g}$ symmetry in the ZB and WZ structures, for both spin orientations. In contrast to the *p* states, the $s$ states are placed well below or high above the Fermi level, giving no net spin polarization. Similarly, the bands located at higher energies (from 4.5 to 12.5 eV above the Fermi level) are mainly composed of *d* states belonging to the $t_{2g}$ and $e_{g}$ representations. Hence, the *s* and *d* states are not directly involved in the creation of spin polarization and ferromagnetism in RbN and the other I$^{A}$-N binary compounds.
The existence of narrow bands near the Fermi level translates into peaks in the DOS diagrams. According to the qualitative Stoner criterion, a high density of electron states at the Fermi level should stabilize the ferromagnetic order. Pressure-induced lattice contraction modifies the band structure: the band widths increase with increasing kinetic energy, the spin splitting decreases, and ferromagnetism is eventually destroyed.
In the ground-state total energy results presented here we can see the influence of the Coulomb intra-atomic electron repulsion. It produces an energy shift $\Delta = 2\ \text{eV}$ between the narrow majority and minority spin energy band states, visible as peaks at and below the Fermi level in the DOS diagrams shown in Figure 6. This type of band structure with well separated up and down spin states can be attributed to a Hubbard-like interaction $U=\Delta/2$. The parameter $U=1\ \text{eV}$ is a measure of the formation energy of the magnetic moment localized on the nitrogen ion. According to Hund’s rule and the Pauli principle the limiting value of the magnetic moment for $p$ electrons is 3 $\mu _{\text{B}}$. The four valence $p$ electrons present in the system (three spin-up and one spin-down) can therefore produce a magnetic moment not greater than $(3-1)~\mu _{\text{B}}=2~\mu _{\text{B}}$, which agrees with our calculations.
Estimation of the thermodynamic stability of HM ferromagnetism requires additional calculations to determine the strength of the inter-atomic exchange coupling [@kubler]. This task requires supercell calculations for planar antiferromagnetic structures of the first (AF1) and second kind (AF2) as particular cases of a spiral structure with the **q** vector $[001]$ and $\frac{1}{2}[111]$, respectively [@hmf2]. We have performed the relevant calculations only for LiN in the WZ crystal structure. The energy differences are $E_{\text{AFM1}}-E_{\text{FM}} = 0.247$ eV and $E_{\text{PARA}}-E_{\text{FM}} = 0.701$ eV. The estimated lower limit for the paramagnetic Curie temperature is about 464 K, which makes the material promising for experimental investigation.
Conclusions
===========
On the basis of *ab initio* calculations employing density functional theory we have investigated half-metallic ferromagnetism in rock salt, wurtzite and zinc-blende compounds composed of group I$^{A}$ alkali metals as cations and nitrogen as the anion. We find that, due to the spin polarized *p* orbitals of N, all four compounds are half-metallic ferromagnets with wide energy bandgaps (up to 2.0 eV). The calculated total magnetic moment in all investigated compounds is exactly 2.00 $\mu _{\text{B}}$ per formula unit. Our calculations show that the predicted half-metallicity is robust with respect to lattice constant contraction. The formation of ferromagnetic order requires large lattice constants, high ionicity, empty $d$ orbitals and slight hybridization between the N anion $p$ states and the I$^{A}$ cation $d$ states with energies in the vicinity of the Fermi level. It is interesting to note that palladium (with its eight 4$d$ electrons), as a dopant replacing a Ga atom in the WZ GaN semiconductor, interacts with the surrounding N atoms in a similar way to the alkali atoms, but the hybridization between the Pd 4${d}$ and N $p$ orbitals leads to the formation of ferromagnetic order with Pd (not N) as the main contributor to the total magnetic moment [@osuch].
The ferromagnetic order demonstrated here is always energetically more stable than the antiferromagnetic and paramagnetic states, which makes these materials possible candidates for spin injection in spintronic devices. Calculations of the total energy indicate that this class of materials can exist in a stable or metastable phase. Their highly interesting magnetic properties should encourage experimentalists to stabilize these materials in properly coordinated structures via vacuum molecular beam epitaxy, chemical transport or laser deposition on suitable substrates.
[99]{} R. A. de Groot, F. M. Mueller, P. G. van Engen, and K. H. J. Buschow, Phys. Rev. Lett. **50**, 2024 (1983).
I. Žutić, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. **76**, 323-410 (2004).
S. P. Lewis, P. B. Allen, and T. Sasaki, Phys. Rev. B **55**, 10253 (1997). Yu. S. Dedkov, U. Rüdiger, and G. Güntherodt, Phys. Rev. B **65**, 064417 (2002).
J.-H. Park, E. Vescovo, H.-J. Kim, C. Kwon, R. Ramesh, and T. Venkatesan, Nature (London) **392**, 794 (1998).
Y.-Q. Xu, B.-G. Liu, and D.G. Pettifor, Phys. Rev. B **66**, 184435 (2002).
B.-G. Liu, Phys. Rev. B **67**, 172411 (2003).
W.-H. Xie, Y.-Q. Xu, B.-G. Liu, and D. G. Pettifor, Phys. Rev. Lett. **91**, 037204 (2003).
I. Galanakis and P. Mavropoulos, Phys. Rev. B **67**, 104417 (2003).
K. Kusakabe, M. Geshi, H. Tsukamoto, and N. Suzuki, J. Phys.: Condens. Matter **16** (2004) S5639.
M. Sieberer, J. Redinger, S. Khmelevskyi and P. Mohn, Phys. Rev. B **73**, 024404 (2006).
O. Volnianska, and P. Bogusławski, Phys. Rev. B **75**, 224418 (2007).
G. Y. Gao, K. L. Yao, E. Şaş[i]{}oğlu, L. M. Sandratskii, Z. L. Liu and J. L. Jiang, Phys. Rev. B **75**, 174442 (2007).
Chang-wen Zhang, Shi-shen Yan, and Hua Li, phys. stat. sol. (b) **245**, 201 (2008).
Chang-Wen Zhang, J. Phys. D: Appl. Phys. **41**, 085006 (2008).
P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnicka, and J. Luitz, in WIEN2k, An Augmented Plane Wave Plus Local Orbitals Program for Calculating Crystal Properties, edited by K. Schwarz, Techn. Universität Wien, Austria, 2001.
E. Wimmer, H. Krakauer, M. Weinert, and A. J. Freeman, Phys. Rev. B **24**, 864 (1981).
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. **77**, 3865 (1996).
E. A. Barry, A. A. Kiselev, and K. W. Kim, Appl. Phys. Lett. **82**, 3686 (2003); L. Adamowicz, P. Borowik, A. Duduś, and M. Kiecana, Molecular Physics Reports **39**, 13 (2004).
J. Kübler, A. R. Wiliams and C. B. Sommers, Phys. Rev. B **48**, 1745 (1983); J. Kübler, J. Phys.: Condens. Matter **18**, 9795 (2006);
K. Osuch, E. B. Lombardi, and L. Adamowicz, Phys. Rev. B **71**, 165213 (2005).
---
abstract: 'The $Z b \bar{b}$ coupling determined from the $Z$-pole measurements at LEP/SLD shows a deviation of about $3\sigma$ from the SM prediction, which would signal the presence of new physics in association with the $Zb\bar b$ coupling. In this work we give a comprehensive study of the full one-loop supersymmetric effects on the $Z b \bar{b}$ coupling in both the MSSM and the NMSSM, considering all current constraints, which are from the precision electroweak measurements, the direct search for sparticles and Higgs bosons, the stability of the Higgs potential, the dark matter relic density, and the muon $g-2$ measurement. We analyze the features of each type of correction and search for the SUSY parameter regions where the corrections could be sizable. We find that sizable corrections may come from the Higgs sector with light $m_A$ and large $\tan \beta$, reaching $-2\%$ and $-6\%$ for $\rho_b $ and $\sin^2 \theta_{eff}^b$, respectively. However, such sizable negative corrections are just opposite to what is needed to solve the anomaly. We also scan over the allowed parameter space and investigate to what extent supersymmetry can narrow the discrepancy. We find that under all current constraints, the supersymmetric effects are quite restrained and cannot significantly ameliorate the anomaly of the $Zb\bar b$ coupling. Compared with $\chi^2/dof = 9.62/2$ in the SM, the MSSM and NMSSM can only improve it to $\chi^2/dof = 8.77/2$ in the allowed parameter space. Therefore, if the anomaly of the $Zb\bar b$ coupling is not a statistical or systematic problem, it would suggest new physics beyond the MSSM or NMSSM.'
author:
- |
\
Junjie Cao$^1$, Jin Min Yang$^2$
title: ' Anomaly of $Zb\bar b$ coupling revisited in MSSM and NMSSM'
---
Introduction
============
Although most of the electroweak data are consistent with the Standard Model (SM) to a remarkable precision, there are still some experimental results that are difficult to accommodate in the SM framework. A well known example is that the effective electroweak mixing angle $\sin^2 \theta_{eff}$ determined from the leptonic asymmetry measurements is much lower than the value determined from the hadronic asymmetry measurements [@2005ema; @Grunewald], and the averaged value over all these asymmetries has a $\chi^2/dof$ of $11.8/5$, corresponding to a probability of only $3.7\%$ for the asymmetry data to be consistent with the SM hypothesis. Such a large discrepancy mainly stems from the two most precise determinations of $\sin^2 \theta_{eff}$, namely the measurement of $A_{LR}$ by SLD and the measurement of the bottom forward-backward asymmetry $A_{FB}^{b}$ at LEP, which give values on opposite sides of the average and differ by $3.2$ standard deviations. It is interesting to note that if such a discrepancy is attributed to an experimental origin, so that the hadronic asymmetry measurements are not included in the global fit, then a rather light Higgs boson around 50 GeV is indicated by the fit [@Chanowitz; @electro-data], which is in sharp contrast with the LEP II direct search limit of 114 GeV [@Barate] and results in a compatibility probability as low as $3\%$. If we resort to new physics to solve this discrepancy, the new physics effects must significantly modify the $Z b \bar{b}$ coupling while keeping the $Z$-boson couplings to other fermions basically unchanged. In this work we focus on the $Z b \bar{b}$ coupling and scrutinize the supersymmetric effects.
In our analysis we choose to parameterize the $Z f \bar{f} $ interaction at the $Z$-pole in terms of the parameter $\rho_f$ and the effective electroweak mixing angle $\sin^2
\theta^f_{eff} $ [@Veltman; @Jegerlehner]: $$\begin{aligned}
\Gamma_{Z f \bar{f} }^\mu &=& (\sqrt{2} G_\mu \rho_f )^\frac{1}{2}
m_Z \gamma^\mu \left[ - 2 Q_f \sin^2 \theta_{eff}^f + I_3^f ( 1 -
\gamma_5 ) \right] \label{redefined}\end{aligned}$$ This parametrization is preferred from the experimental point of view because all the measured asymmetries depend only on $\sin^2 \theta_{eff}^f$, so that their precise measurements directly determine the value of $\sin^2 \theta_{eff}^f$. From the combined LEP and SLD data analysis, the fitted values of $\rho_f$ and $\sin^2 \theta_{eff}^f$ agree well with their SM predictions for leptons and light quarks, but for the bottom quark the fitted values are respectively $1.059 \pm 0.021$ and $0.281 \pm 0.016$ (with correlation coefficient 0.99), which deviate significantly from the SM predictions of 0.994 and 0.233 (for $m_t =174$ GeV and $m_h = 115$ GeV) and lead to $\chi^2/dof = 9.62/2$ (corresponding to a compatibility probability of $0.8\%$). To best fit the experimental data, $\rho_b$ and $\sin^2\theta^b_{eff}$ should be enhanced by about $6.5\%$ and $20\%$, respectively. Although we can envisage that the supersymmetric effects are usually not so large, we want to figure out to what extent supersymmetry can improve the situation. For this purpose, we choose two popular supersymmetric models: the minimal supersymmetric model (MSSM) [@Haber] and the next-to-minimal supersymmetric model (NMSSM) [@Franke].
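For orientation, the quoted $\chi^2/dof$ can be reproduced approximately from the correlated two-dimensional Gaussian; the minimal sketch below uses the rounded central values given above, so it is only an illustration, not the fit code behind the numbers quoted in this paper.

```python
import numpy as np

# LEP/SLD fit for (rho_b, sin^2 theta_eff^b) as quoted in the text (rounded values).
exp_val = np.array([1.059, 0.281])
exp_err = np.array([0.021, 0.016])
corr = 0.99                                       # correlation coefficient

cov = np.array([[exp_err[0]**2,                  corr * exp_err[0] * exp_err[1]],
                [corr * exp_err[0] * exp_err[1], exp_err[1]**2]])

def chi2(pred):
    """Correlated chi^2 of a prediction (rho_b, sin^2 theta_eff^b)."""
    d = np.asarray(pred, dtype=float) - exp_val
    return float(d @ np.linalg.inv(cov) @ d)

# SM point: gives ~9.8 with these rounded inputs (quoted value: 9.62 for dof = 2).
print(chi2([0.994, 0.233]))
```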
For the NMSSM effects on the $Z b \bar{b}$ coupling, which have not been studied in the literature, we perform the calculation at one-loop level. For the MSSM effects, which have been studied by many authors [@Djouadi; @Boulware; @Cao], we update the study in the parametrization of $\rho_b$ and $\sin^2 \theta_{eff}^b$ (the previous studies usually examined the effects on the $Z$-width, the ratio $R_b$ and the asymmetry $A_{FB}^b$). For both the MSSM and NMSSM, we consider various current experimental constraints on the parameter space, which are from the precision electroweak measurements, the direct search for sparticles and Higgs bosons, the stability of the Higgs potential, the cosmic dark matter relic density, and the muon $g-2$ measurement.
This paper is organized as follows. In Sec. II we introduce the general formulas for the calculation of $\rho_f$ and $\sin^2 \theta_{eff}^f $ and apply them to the MSSM and NMSSM. In Sec. III we summarize the constraints considered in this work and briefly discuss their features. In Sec. IV and Sec. V we perform the numerical study of the corrections to $\rho_b$ and $\sin^2 \theta_{eff}^b $ in the MSSM and NMSSM, respectively. We first show the features of the different types of corrections, and then we scan the whole SUSY parameter space to investigate the compatibility of the supersymmetric predictions of $\rho_b$ and $\sin^2 \theta_{eff}^b $ with the experimental results. Finally, in Sec. VI we conclude our work with an outlook on the possibility of solving the $Z b \bar{b}$ anomaly.
General formula to calculate $\rho_f$ and $\sin^2
\theta_{eff}^f$
=================================================
In the SM with the input parameters the Fermi constant $G_F$, the fine-structure constant $\alpha$, $Z$-boson mass $m_Z$ and fermion masses $m_f$, the electroweak mixing angle $ s_W = \sin \theta_W $ is determined at loop level by [@Sirlin; @Hollik; @Denner] $$\begin{aligned}
s_W^2= \frac{1}{2} \left ( 1 - \sqrt{ 1 - \frac{ 4 \pi
\alpha}{\sqrt{2} G_\mu m_Z^2} \frac{1}{ 1 - \Delta r } } \ \right
) \label{sw2}\end{aligned}$$ where $\Delta r $ is given by $$\begin{aligned}
\Delta r = \frac{\hat{\Sigma}^W (0)}{m_W^2} + \frac{\alpha}{4 \pi
s_W^2} \left( 6 + \frac{7 - 4 s_W^2}{2 s_W^2} \ln ( 1 - s_W^2)
\right) + 2 \delta^v + \delta^b \label{deltar}\end{aligned}$$ with $\hat{\Sigma}^W $ denoting the renormalized $W$-boson self-energy, $ \delta^v $ and $\delta^b $ being the vertex correction and box diagram correction to $\mu$ decay $\mu \to
\nu_\mu e \bar{\nu}_e$, respectively. To get a more precise numerical result for $s_W^2$, one can iterate Eqs.(\[sw2\]) and (\[deltar\]) a few times.
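The iteration of Eqs.(\[sw2\]) and (\[deltar\]) is a simple fixed-point problem; the sketch below illustrates only its structure. The stub `delta_r` returning a constant is an assumption made for illustration, since the true $\Delta r$ requires the renormalized self-energies together with the vertex and box corrections.

```python
import math

ALPHA, G_MU, MZ = 1 / 128.93, 1.16637e-5, 91.1876   # SM inputs used in this work

def delta_r(sw2):
    # Stub: a real calculation evaluates Eq.(deltar) from the self-energies and the
    # vertex/box corrections at the current s_W^2; 0.03 is only a placeholder value.
    return 0.03

def solve_sw2(n_iter=5, sw2=0.23):
    a = 4 * math.pi * ALPHA / (math.sqrt(2) * G_MU * MZ**2)
    for _ in range(n_iter):                          # iterate Eq.(sw2) a few times
        sw2 = 0.5 * (1 - math.sqrt(1 - a / (1 - delta_r(sw2))))
    return sw2

print(solve_sw2())   # converges after a couple of iterations for a smooth delta_r
```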
With the $s_W$ defined above, the effective $Z f \bar{f}$ coupling at Z-pole takes the following form [@Jegerlehner; @Hollik] $$\begin{aligned}
\Gamma_{Z f \bar{f} }^\mu &=& \left(\sqrt{2} G_\mu ( 1 - \Delta r)\right)^{\frac{1}{2}}
m_Z \gamma^\mu \left \{ v_f - a_f \gamma_5 + \delta
v_f - \delta a_f \gamma_5 \right. \nonumber \\
&& \left. - \frac{1}{2} \left[ \Sigma_Z^\prime (m_Z^2) + \delta
Z_2^Z \right] ( v_f - a_f \gamma_5 ) - 2 Q_f s_W^2 \Delta \kappa
\right \}, \label{original}\end{aligned}$$ where $v_f = I_3^f - 2 Q_f s_W^2 $ and $a_f = I_3^f $ are respectively the vector and axial vector coupling coefficients of $Z f \bar{f}$ interaction at tree level, and $\delta v_f$ and $\delta a_f$ are their corresponding corrections. $\Sigma_Z^\prime $ is the derivative of the unrenormalized $Z$-boson self-energy $\Sigma_Z$ with respect to the squared momentum $p^2$, and $\delta Z_2^Z$ is the field renormalization constant of $Z$-boson given by $$\begin{aligned}
\delta Z_2^Z = - \Sigma^\prime_\gamma (0) - 2 \frac{c_W^2 - s_W^2 }{
s_W c_W} \frac{\Sigma_{\gamma Z}(0)}{m_Z^2} + \frac{c_W^2 -
s_W^2}{s_W^2} \left ( \frac{Re \Sigma_Z (m_Z^2)}{m_Z^2} - \frac{Re
\Sigma_W (m_W^2)}{m_W^2} \right ),\end{aligned}$$ and $\Delta \kappa $ is given by $$\begin{aligned}
\Delta \kappa &=& \frac{c_W^2}{s_W^2} \left \{ \frac{\Sigma_Z
(m_Z^2) }{m_Z^2} - \frac{\Sigma_W (m_W^2) }{m_W^2} -
\frac{s_W}{c_W} \frac{\Sigma_{\gamma Z} (m_Z^2) + \Sigma_{\gamma Z}
(0) }{m_Z^2} \right \}.\end{aligned}$$ In Eq.(\[original\]) the factor $\frac{1}{2} ( \Sigma_Z^\prime
(m_Z^2) + \delta Z_2^Z ) $ comes from the fact that the residue of the renormalized Z propagator is different from 1, while the last term enters due to $Z-\gamma$ mixing at $Z$-pole.
If we re-express $\Gamma_{Z f \bar{f} }^\mu $ in Eq.(\[original\]) in term of $\rho_f$ and $ \sin \theta_{eff}^f$ as in Eq.(\[redefined\]), we get $$\begin{aligned}
\rho_f &=& 1 + \delta \rho_{se} + \delta \rho_{f, v}, \\
\sin^2 \theta_{eff}^f & =& ( 1 + \delta \kappa_{se} + \delta
\kappa_{f,v} ) s_W^2,\end{aligned}$$ with $\delta \kappa_{se}= \Delta \kappa $ and $$\begin{aligned}
\delta \rho_{se} & =&
\frac{\Sigma_Z (0) }{m_Z^2} - \frac{\Sigma_W (0) }{m_W^2}
- 2 \frac{s_W}{c_W} \frac{\Sigma_{\gamma Z} (0)}{m_Z^2}
+ \frac{\Sigma_Z (m_Z^2) - \Sigma_Z (0)}{m_Z^2} -
\Sigma^\prime_Z(m_Z^2); \nonumber \\
\delta \rho_{f, v}& =& 2 \frac{\delta a_f}{a_f} - 2 \delta^v - \delta^b; \nonumber \\
\delta \kappa_{f, v} &=& \frac{a_f \delta v_f - v_f \delta a_f}{- 2
Q_f a_f s_W^2}. \label{definition}\end{aligned}$$ In the above equations the subscript ‘$se$’ denotes the contribution from the gauge boson self-energies, which is flavor independent, and ‘$f,v$’ denotes the contribution from the vertex correction to the $ Z f \bar{f}$ interaction. In practice, it is convenient to express $\delta \rho_{f,v}$ and $\delta \kappa_{f,v}$ in terms of $\delta g_L^f $ and $\delta g_R^f$, respectively: $$\begin{aligned}
\delta \rho_{f, v}& =& \frac{\delta g_L^f - \delta g_R^f }{a_f} - 2
\delta^v - \delta^b; \quad \quad \delta \kappa_{f, v} = \frac{ (
a_f - v_f ) \delta g_L^f + ( a_f + v_f ) \delta g_R^f}{- 4 Q_f a_f
s_W^2 } \label{drb}\end{aligned}$$ where $ \delta g_{L,R}^f = \delta v_f \pm \delta a_f $ are the corrections to the $Z f_L \bar{f}_L $ and $Z f_R \bar{f}_R $ interactions, respectively. From the above equations one can see that $\delta \rho_{f,v}$ is determined by the competition between $\delta g_L^f$ and $\delta g_R^f$, while $\delta \kappa_{f,v}$ is mainly determined by $\delta g_R^f$ because $ (a_f + v_f)/(a_f - v_f) \simeq 5.4$.
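A small numerical illustration of Eq.(\[drb\]) for the bottom quark makes this point explicit; the value of $s_W^2$ and the test values of $\delta g_{L,R}^b$ below are assumptions chosen only to exhibit the relative weights.

```python
SW2 = 0.2315                       # assumed effective value of sin^2 theta_W
QB, I3B = -1.0 / 3.0, -0.5
a_b = I3B
v_b = I3B - 2 * QB * SW2

def vertex_pieces(dgL, dgR):
    """delta rho_{b,v} and delta kappa_{b,v} of Eq.(drb), dropping the universal
    mu-decay terms delta^v and delta^b."""
    drho = (dgL - dgR) / a_b
    dkappa = ((a_b - v_b) * dgL + (a_b + v_b) * dgR) / (-4 * QB * a_b * SW2)
    return drho, dkappa

print((a_b + v_b) / (a_b - v_b))   # ~5.5 with this s_W^2, quoted as ~5.4 in the text
print(vertex_pieces(1e-3, 1e-3))   # equal shifts: drho cancels, dkappa does not
```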
Noting that the Feynman rules for $Z$-boson couplings in SUSY models usually differ from their corresponding rules in the SM by a minus sign [@Haber; @Franke], $\Sigma_{\gamma Z}$ and $\delta
\kappa_{f,v} $ in the above formula should change sign if one uses the Feynman rules in SUSY models. The self-energies and the vertex corrections in SUSY models then include both the SM-particle loop contributions and SUSY-particle loop contributions. Since the SM-particle contributions are well known, in Appendix A and B we only list the one-loop expressions for the SUSY contributions. The only subtlety one should note is to avoid the double-counting of the Higgs contributions. This problem arises due to the following reason. On the one hand, the SM values of $\rho_{b}$ and $\sin^2
\theta_{eff}^b $ are known to higher orders, and one usually incorporates such high-order SM effects when performing numerical calculations in SUSY models. On the other hand, because the SUSY Higgs sector is quite different from the SM, one cannot get the SUSY Higgs contributions simply by adding some additional terms to the SM Higgs contributions. In our calculation in SUSY models, to avoid the double-counting of the Higgs contributions, we first subtract the SM Higgs contributions from their SM values (calculated by the codes TOPAZ0 [@Montagna] and ZFITTER [@Bardin]), and then we add the full one-loop contributions from the SUSY Higgs bosons and sparticles.
Constraints on SUSY parameters
==============================
Before we proceed to discuss the SUSY corrections to the $Zb\bar b$ coupling in the MSSM and NMSSM, we take a look at the SUSY parameters involved in our calculations. From the expressions of the $Z f \bar{f}$ vertex corrections listed in Appendix B, one can see that the SUSY-EW correction depends on the masses and the mixings of top squarks, bottom squarks, charginos and neutralinos, the SUSY-QCD vertex correction depends on the gluino mass and on the masses and the chiral mixing of bottom squarks, and the Higgs-mediated vertex correction depends on the masses and the mixings of the Higgs bosons. The expressions of the gauge boson self-energies listed in Appendix A indicate that the SUSY correction also depends on the masses of the sleptons and the first-two generation squarks. For these SUSY parameters, we consider the following constraints:
- Constraints from the direct search for the sparticles at LEP and Tevatron [@Yao] $$\begin{aligned}
&& m_{\tilde{\chi}_1^0} > 41 {\rm ~GeV}, \quad m_{\tilde{\chi}_2^0} > 62.4 {\rm ~GeV},
\quad m_{\tilde{\chi}_3^0} > 99.9 {\rm ~GeV}, \quad m_{\tilde{\chi}^\pm} > 94 {\rm ~GeV}, \\
&& m_{\tilde{e}} > 73 {\rm ~GeV}, \quad m_{\tilde{\mu}} > 94 {\rm ~GeV},
\quad m_{\tilde{\tau}} > 81.9 {\rm GeV}, \quad m_{\tilde{q}} > 250 {\rm ~GeV}, \\
&& m_{\tilde{t}} > 89 {\rm ~GeV}, \quad m_{\tilde{b}} > 95.7 {\rm ~GeV},
\quad m_{\tilde{g}} > 195 {\rm ~GeV},\end{aligned}$$ where $m_{\tilde{\chi}^0_i}$ denote the masses of the neutralinos and $m_{\tilde{q}}$ denotes the masses for the first two generation squarks.
- Constraint from the direct search for Higgs boson at LEP [@Higgs]. This constraint can limit the values of $m_A$, $\tan \beta$ and the masses and the chiral mixing of top squarks. In case of large $\tan \beta$, it can also put constraints on the masses and the mixing of bottom squarks. Generally speaking, this constraint requires the product of two top squark masses, $m_{\tilde{t}_1}
m_{\tilde{t}_2}$, should be much larger than $m_t^2$ [@Higgs-theory].
- Constraint from the theoretical requirements that there is no Landau pole for the running Yukawa couplings $Y_b$ and $Y_t$ below the GUT scale, and that the physical minimum of the Higgs potential with non-vanishing $ \langle H_u \rangle$ and $ \langle H_d \rangle $ is lower than the local minima with vanishing $ \langle H_u \rangle$ and $ \langle H_d \rangle $.
- Constraints from precision electroweak observables such as $\rho_{lept}$, $\sin^2 \theta_{eff}^{lept}$, $\rho_c$, $\sin^2
\theta_{eff}^c$ and $M_W$. These constraints are equivalent to those from the well known $\epsilon_i (i=1,2,3) $ parameters [@Altarelli] or $S$, $T$ and $U$ parameters [@Peskin]. The measured values of these observables are [@2005ema] $$\begin{aligned}
&&\rho_{lept} = 1.0050 \pm 0.0010, \quad \sin^2
\theta_{eff}^{lept} = 0.23153 \pm 0.00016, \nonumber \\
&&\rho_c = 1.013 \pm 0.021, \quad \sin^2 \theta_{eff}^{c} =
0.2355 \pm 0.0059, \quad M_W = 80.403 \pm 0.029 {\rm ~GeV}, \nonumber\end{aligned}$$ and their SM fitted values are $\rho_{lept}^{SM} = 1.0051 $, $
\sin^2 \theta_{eff}^{lept, SM} = 0.23149 $, $ \rho_c^{SM} = 1.0058
$, $ \sin^2 \theta_{eff}^{c} = 0.2314 $ and $M_W = 80.36$ GeV for $m_t = 173$ GeV and $m_h = 111$ GeV. In our calculations we require the theoretical predictions to agree with the experimental values at $2\sigma$ level.
- Constraint from $R_b = \Gamma (Z \to b \bar{b} ) / \Gamma
( Z \to hadrons ) $. The measured value of $R_b$ is $R_b^{exp} =
0.21629 \pm 0.00066 $ and its SM prediction is $R_b^{SM} = 0.21578 $ for $m_t = 173$ GeV [@Yao]. In our analysis, we require $R_b^{SUSY}$ is within the $2 \sigma$ range of its experimental value.
- Constraint from the relic density of cosmic dark matter, i.e. $ 0.0945 < \Omega h^2 < 0.1287 $ [@dmconstr]. This constraint can rule out a broad parameter region for the gaugino masses $M_{1,2}$, the $\mu$ parameter, $m_A$ and $\tan \beta$ [@darkmatter].
- Constraint from the muon anomalous magnetic moment, $a_\mu$. Both the theoretical prediction and the experimental measurement of $a_\mu$ have now reached a remarkable precision, and they show a significant deviation $a_\mu^{exp} - a_\mu^{SM} = ( 29.5 \pm 8.8 ) \times 10^{-10} $ [@Miller]. In our analysis we require the SUSY effects to account for such a difference at the $2 \sigma$ level.
Note that in our analysis we do not include the constraints from $B$ physics, like $b \to s \gamma$ [@bsr] and $B_s-\bar{B_s}$ mixing [@Ball], because these constraints are sensitive to squark flavor mixings which are irrelevant to our discussion.
Among the constraints listed above, the constraints (4) and (5), especially the observables $M_W$, $\rho_{lept}$, $\sin^2 \theta_{eff}^{lept}$ and $R_b$, are the most relevant to our study of $\rho_b$ and $\sin^2 \theta_{eff}^b$. Let us look at these constraints in more detail.
First, the precise measurements of $M_W$, $\rho_{lept}$ and $\sin^2
\theta_{eff}^{lept}$ stringently constrain $\delta \rho_{se}$, $\delta \kappa_{se}$ and the gaugino loop contributions to $\delta
\rho_{b,v}$ and $\delta \kappa_{b,v}$. The approximate forms of the SUSY corrections to $M_W$, $\delta \rho_{se} $ and $\delta
\kappa_{se}$ [@Heinemeyer] in case of heavy sparticles are given by $$\begin{aligned}
\frac{\delta M_W}{M_W} &= & \frac{s_W^2}{c_W^2 - s_W^2}
\frac{\delta (\Delta r)}{ 2 ( 1 - \Delta r ) } \simeq -
\frac{c_W^2}{c_W^2 - s_W^2} \frac{\Delta \rho}{2}, \nonumber \\
\delta \rho_{se} & \simeq & \Delta \rho, \nonumber \\
\delta \kappa_{se} & \simeq & \frac{c_W^2}{s_W^2} \Delta \rho,\end{aligned}$$ where $$\begin{aligned}
\Delta \rho & =& \frac{\Sigma_Z (0) }{m_Z^2} - \frac{\Sigma_W (0)
}{m_W^2} - 2 \frac{\sin \theta_W}{\cos \theta_W}
\frac{\Sigma_{\gamma Z} (0) }{m_Z^2}\end{aligned}$$ is the correction to the classical $\rho $ parameter [@Veltman] and is only sensitive to the mass spectrum of the third generation squarks. Through the above relations the precisely measured $M_W$ then stringently restricts $\Delta \rho$ (of order $O(10^{-4})$) and subsequently restricts $\delta \rho_{se} $ and $\delta \kappa_{se}$. This restriction together with the precisely determined $\rho_{lept}$ and $\sin^2 \theta_{eff}^{lept}$ stringently constrains the magnitude of $\delta \rho_{l,v}$ and $\delta
\kappa_{l,v}$ defined in Eq.(\[definition\]) to be below $O(10^{-4})$. Since the gaugino loop effects in $\delta \rho_{b,v} $ and $\delta \kappa_{b,v}$ are strongly correlated with $\delta
\rho_{l,v}$ and $\delta \kappa_{l,v}$ (the main difference is caused by the mass difference between sleptons and squarks), the gaugino loop contributions to $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ are also suppressed, which are found to be below $5 \times 10^{-4}$ from our numerical calculations.
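To get a feeling for the strength of the $M_W$ constraint, one can translate a given $\Delta\rho$ into a shift of $M_W$ using the first relation above; the numbers below are a rough illustration with an on-shell $s_W^2$ computed from $M_W$ and $M_Z$, not a result of this paper.

```python
MW, MZ = 80.403, 91.1876
sw2 = 1 - MW**2 / MZ**2                     # on-shell s_W^2 (assumed here)
cw2 = 1 - sw2
for drho in (1e-4, 1e-3):
    dmw = -cw2 / (cw2 - sw2) * drho / 2 * MW
    # ~ -6 MeV for 1e-4 and ~ -56 MeV for 1e-3, to be compared with the
    # +-29 MeV experimental error on M_W quoted in constraint (4).
    print(drho, round(dmw * 1e3, 1), "MeV")
```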
For the constraint from the precision observable $R_b$, an interesting feature is that it does not stringently constrain the magnitude of $\delta v_b $ and $\delta a_b$, but rather favors the relation $\delta v_b \sim -1.44 \delta a_b$, as can be seen from the expression of the radiative correction to $R_b$ [@Djouadi; @Boulware; @Cao] $$\begin{aligned}
\delta R_b& \simeq & \frac{2 R_b^{SM}(1-R_b^{SM})
}{v_b^2(3-\beta^2)+2a_b^2\beta^2} \big[v_b(3-\beta^2) \delta v_b
+ 2a_b\beta^2 \delta a_b \big] \propto ( \delta v_b + 1.44 \delta a_b )\end{aligned}$$ with $\beta = \sqrt{ 1 - m_b^2/m_Z^2} $ being the velocity of bottom quark in $Z$ decay.
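The coefficient 1.44 follows directly from the tree-level couplings; the one-line check below assumes $s_W^2 \simeq 0.2315$ and uses the definition of $\beta$ given above.

```python
SW2, MB, MZ = 0.2315, 4.2, 91.1876             # assumed s_W^2; m_b and m_Z as in the inputs
v_b, a_b = -0.5 + (2.0 / 3.0) * SW2, -0.5
beta2 = 1 - MB**2 / MZ**2                      # beta^2 with the definition in the text
print(2 * a_b * beta2 / (v_b * (3 - beta2)))   # ~1.44
```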
Now we turn to the constraint from the muon anomalous magnetic moment. To get an intuitive understanding of this constraint, we look at a simple case of the MSSM in which all the gaugino masses and the soft-breaking masses in the smuon sector have a common scale $M$. In this case, $a_\mu^{SUSY} $ is approximated by [@Ibrahim] $$\begin{aligned}
a_\mu^{SUSY} \simeq 13 \times 10^{-10} \left ( \frac{100 {\rm ~GeV}}{M}
\right )^2 \tan \beta\ sign(\mu).\end{aligned}$$ The gap between $a_\mu^{SM}$ and $a_\mu^{exp}$ then prefers a positive $\mu$, and constrains the product $ \left ( \frac{100 {\rm ~GeV}}{M}
\right )^2 \tan \beta$ in the range \[1.0,3.6\] at $2\sigma$ level. So the SUSY scale can be higher for larger $\tan \beta$.
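The quoted window $[1.0, 3.6]$ is just the $2\sigma$ range of the measured deviation divided by the prefactor $13\times10^{-10}$; a back-of-envelope check (not from the paper's code) is:

```python
da, err, prefac = 29.5e-10, 8.8e-10, 13e-10   # a_mu deviation, its error, prefactor
lo, hi = da - 2 * err, da + 2 * err           # 2-sigma window (positive mu assumed)
print(lo / prefac, hi / prefac)               # ~0.9 and ~3.6, quoted as [1.0, 3.6]
```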
In our calculations we use the code NMSSMTools [@Ellwanger] to generate the masses and the mixings of all sparticles and Higgs bosons in the framework of the NMSSM with all known radiative corrections included. There are two advantages in using this code. One is that all the masses and mixings of the MSSM can be easily recovered by setting the parameters $\lambda = \kappa \simeq 0 $ and $A_{\kappa}$ small and negative. The other is that it incorporates the code MicrOMEGAs [@Belanger], which calculates the relic density of cosmic dark matter. It should be noted that the current version of NMSSMTools only includes the constraints (1), (2), (3) and (6), and we extend it by including the constraints (4), (5) and (7). We note that the muon anomalous magnetic moment was recently calculated in the NMSSM [@Domingo] and our calculations agree with theirs.
One-loop corrections to $\rho_b$ and $\sin^2 \theta_{eff}^b$ in MSSM
======================================================================
In this section we investigate $\rho_b$ and $\sin^2 \theta_{eff}^b$ at one-loop level in the MSSM. As discussed above, the self-energy corrections to these two observables are generally small, and thus we mainly scrutinize the vertex corrections, which include the SUSY-EW corrections, the SUSY-QCD corrections and the Higgs-mediated vertex corrections. We pay special attention to the cases where the magnitudes of the corrections are large, and show that $\tan \beta$ is crucial in enhancing the vertex corrections. Our analysis is organized as follows: we first investigate the features of the vertex corrections to get an intuitive understanding of them, and then, by scanning over the MSSM parameter space, we study the compatibility of the MSSM predictions for $\rho_b$ and $\sin^2 \theta_{eff}^b$ with the experimental results.
The SM input parameters involved in our calculations are taken from [@Yao]: $\alpha = 1/128.93$, $G_F = 1.16637 \times 10^{-5} {\rm ~GeV}^{-2}$, $\alpha_s (m_Z) = 0.1172$, $m_Z = 91.1876$ GeV, $m_b (m_b )= 4.2$ GeV and $m_t = 172.5$ GeV.
Features of the vertex corrections in the MSSM
----------------------------------------
As for the SUSY-EW contribution to $\delta \rho_{b,v}$ and $\delta
\kappa_{b,v}$, the parameters involved are the gaugino masses $M_{1,2}$, the Higgsino mass $\mu$, $\tan \beta =v_2/v_1$ with $v_{1,2}$ being the vacuum expectation values of the Higgs fields, the soft-breaking masses $M_{Q_3}$, $M_{U_3}$, $M_{D_3}$, and the coefficients of the trilinear terms $A_t$ and $A_b$. The first four parameters enter the mass matrices of the neutralinos and charginos, and the last seven affect the masses and the chiral mixings of the third generation squarks [@Haber].
As discussed in the preceding section, the gaugino loop contribution is small, and hence we only discuss the Higgsino loop contribution. The magnitude of the Higgsino loop contribution is sensitive to $\tan \beta $, the Higgsino mass $\mu$, and the masses and the chiral mixings of the third generation squarks. This contribution has two notable features. One is that, because the bottom Yukawa coupling $Y_b$ is proportional to $ 1/\cos \beta $, the contribution can be potentially large for large $\tan \beta$ and small $\mu$. The other is that the contribution is moderately sensitive to the chiral mixings of the third generation squarks, and a potentially large contribution arises when the mixing is small and the lighter squark is dominated by its left-handed component [@Boulware]. To illustrate these features we consider three cases in the squark sector:
- $M_S= M_{Q_3} = M_{U_3} = M_{D_3} = 400$ GeV, $A_t = A_b = 800$ GeV;
- $M_{Q_3} = 200$ GeV, $M_{U_3} = M_{D_3} = 600$ GeV, $A_t = A_b = 800$ GeV;
- $M_{Q_3} = 600$ GeV, $M_{U_3} = M_{D_3} = 200$ GeV, $A_t = A_b = 800$ GeV,
and fix other SUSY parameters as $$\begin{aligned}
M_1 = 75 {\rm ~GeV}, \quad M_2 = 150 {\rm ~GeV}, \quad m_A = 500 {\rm ~GeV}, \quad
M_{SUSY} = 1 {\rm ~TeV}, \label{parameters}\end{aligned}$$ where $M_{SUSY}$ denotes the soft-breaking parameters for the sleptons and the first-two generation squarks. Case-I corresponds to the maximal chiral mixing case, Case-II is a small mixing case in which the lighter squark is dominated by its left-handed component, and Case-III is also a small mixing case but with the lighter squark dominated by its right-handed component.
In Fig.\[SUSY-EW1\] we show the dependence of the SUSY-EW contribution to $\delta \rho_{b,v} $ and $\delta \kappa_{b, v}$ on $\tan \beta $ in the three cases. One can see that both $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ are sensitive to $\tan \beta$. As $\tan \beta$ increases, $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ receive more negative contributions and, for small $\mu$, they become negative with sizable magnitudes. This behavior can be understood as follows. As $\tan \beta$ gets large, the bottom Yukawa coupling increases and the correction to the right-handed $Z b \bar{b}$ coupling $\delta g_R^b$ increases positively, so $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ receive a more negative contribution from the increasing $\delta g_R^b$ (see Eq.(\[drb\]) and also $\delta g_R^b$ in Appendix B). One can also see from these figures that the magnitude of $\delta \kappa_{b,v}$ is usually larger than that of $\delta \rho_{b,v}$; the factor $\sin^2 \theta_W $ in the denominator of $\delta \kappa_{b,v} $ (see Eq.(\[definition\])) largely accounts for this.
Note that in these figures we only plot our results within the range of $\tan \beta $ that survives the constraints (1-5). The constraint (7), i.e. the muon anomalous magnetic moment, can in principle also limit $\tan \beta $, but it relies on the smuon mass scale, $M_{SUSY}$ in Eq.(\[parameters\]), to which $\rho_b$ and $\sin^2 \theta_{eff}^b $ are not sensitive, so we do not apply it in plotting these figures. Our numerical results indicate that the muon anomalous magnetic moment allows a vast region of $M_{SUSY}$ and $\mu$ where $\tan \beta $ can be as large as 60, and hence sizable SUSY-EW corrections to $\rho_b$ and $\sin^2 \theta_{eff}^b$ are possible. For example, with the parameters in Eq.(\[parameters\]), the range of $\tan \beta $ allowed by the muon $g-2$ is $\tan \beta \geq 25 $ for $\mu = 200$ GeV, $\tan \beta \geq 33$ for $\mu = 500$ GeV, and $\tan \beta \geq
44 $ for $\mu = 800$ GeV. If we choose $M_{SUSY} = 0.5$ TeV, these allowed ranges are correspondingly given by $ 7 \leq \tan \beta \leq
57 $, $12 \ \leq \tan \beta \leq 71 $ and $\tan \beta \geq 14$.
Next we discuss the SUSY-QCD corrections. The relevant parameters are the gluino mass, $M_{Q_3}$, $M_{D_3}$ and $X_b = ( A_b - \mu \tan \beta ) $, which enter the mass matrix of the bottom squarks. From the large strength of the strong coupling, $g_s (m_Z ) \simeq 1.2 \simeq 50 \times Y_b^{SM} $, one might naively postulate that the SUSY-QCD contributions to $\delta \rho_{b,v}$ and $\delta \kappa_{b,v} $ should be much larger than the Higgsino loop contributions in the case of $m_{\tilde{g}} \simeq \mu $ and $\tan \beta \ll 50$. However, our numerical results show that for small sbottom chiral mixing the SUSY-QCD contributions to $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ are negligibly small. The underlying reason is that for the SUSY-QCD corrections there is a strong cancellation between different diagrams when the sbottom chiral mixing is small, which can be seen from the expressions of $\delta g_{L,R}^b $ listed in Appendix B. It should be noted that such a cancellation can be alleviated by a large sbottom mixing, or equivalently, a large term $\mu \tan \beta $ appearing in the off-diagonal elements of the sbottom mass matrix (we checked this numerically). So the contribution may be sizable in the case of large $\mu \tan \beta $, as shown in Fig.\[SUSY-QCD1\].
Compared with the Higgsino loop corrections, the SUSY-QCD contributions in Fig.\[SUSY-QCD1\] exhibit a similar behavior with respect to $\tan \beta$. The difference is that the most sizable effects come from Case-I (maximal sbottom mixing case) with large $\mu$, instead of Case-II with small $\mu$ for the Higgsino loop corrections.
Finally, we consider the Higgs loop contributions to $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ [@Logan]. To calculate this part of the contribution, we need to know the masses and the mixings of the Higgs bosons, which are determined by $m_A$ and $\tan \beta $ at tree level, and also by the soft-breaking masses of the third generation squarks if the important loop corrections to the Higgs boson masses are taken into account. As shown in Fig.\[SUSY-Higgs\], the contributions exhibit a similar dependence on $\tan \beta$, and the significant contribution comes from the case of small $m_A$ and large $\tan \beta$. We checked that the results in Fig.\[SUSY-Higgs\] are not sensitive to $\mu $ or $M_S$, nor to the choice of case (Case-I, Case-II or Case-III).
From the above figures one can infer that among the three types of corrections, the potentially largest correction comes from the Higgs loops, which can reach $2\%$ for $\rho_b$ and $6\%$ for $\sin^2 \theta_{eff}^b$. Such large corrections reach the current experimental sensitivity since the current experimental measurements are $\rho_b^{exp} = 1.059 \pm 0.021$ and $ \sin^2
\theta_{eff}^{b, exp} = 0.281 \pm 0.016 $.
Before we end this section, we would like to point out that in the large $\tan \beta $ limit the relic density of cosmic dark matter allows the possibility of small $\mu$ or small $m_A$ (but not both small). This can be seen from Fig.\[dark matter\], where we show the allowed regions in the plane of $\tan \beta $ versus $\mu$ for different $m_A$. In plotting this figure, we choose Case-I and fix the other related parameters as in Eq.(\[parameters\]). Fig.\[dark matter\] implies that the SUSY-EW contribution and the Higgs-loop contribution to $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ cannot simultaneously reach their maximal values.
MSSM predictions for $\rho_b$ and $\sin^2 \theta_{eff}^b$
---------------------------------------------------------
As mentioned above, the extracted values of $\rho_b$ and $\sin^2
\theta_{eff}^b$ from combined LEP and SLD data analysis are respectively $1.059 \pm 0.021$ and $0.281 \pm 0.016$ with correlation coefficient 0.99 [@2005ema]. This result is shown in Fig.\[contours\] with the three ellipses corresponding to $68\%$, $95.5\%$ and $ 99.5\%$ confidence level (CL), respectively. Noting that the SM predictions are $\rho_b^{SM}=
0.994 $ and $\sin^2 \theta_{eff}^{b SM} = 0.233 $, one may infer that large positive corrections to $\rho_b $ and $\sin^2
\theta_{eff}^b$ are needed to narrow the gap between the experimental data and the SM prediction. As discussed in the preceding section, the MSSM corrections can be sizable for large $\tan\beta$, which, however, are negative and thus cannot narrow the gap. To figure out to what extent the MSSM predictions can agree with the experiment, we consider all the constraints discussed in Sec. III and scan over the SUSY parameter space: $$\begin{aligned}
&&0 < M_{1}, M_{2}, M_3, \mu, M_{Q_3}, M_{U_3}, M_{D_3}, M_A,
M_{SUSY} \leq 1 {\rm ~TeV}, \nonumber \\
&& -3 {\rm ~TeV} \leq A_t, A_b \leq 3 {\rm ~TeV}, \quad \quad 1 < \tan
\beta \leq 60, \label{region}\end{aligned}$$ Based on a sample of twenty billion points, we find that the best MSSM predictions are $ \rho_b = 0.9960$ and $\sin^2 \theta_{eff}^b = 0.2328$, which give $\chi^2/dof = 9.07/2$ when compared with the experimental data. If we do not consider the dark matter constraint, the best MSSM predictions are $\rho_b = 0.99737$ and $\sin^2 \theta_{eff}^b =0.2336 $, which give $\chi^2/dof = 8.77/2$. Moreover, we find that this best case occurs when $\mu, m_A, m_{\tilde{g}} \sim 1$ TeV, so that the three types of vertex corrections are all suppressed.
One-loop predictions for $\rho_b$ and $\sin^2 \theta_{eff}^b$ in NMSSM
========================================================================
Introduction to the NMSSM
-------------------------
As a popular extension of the MSSM, the NMSSM provides an elegant solution to the $\mu$-problem by introducing a singlet Higgs superfield $\hat{S}$, which naturally develops a vacuum expectation value of the order of the SUSY breaking scale and gives rise to the required $\mu$ term. Another virtue of the NMSSM is that it can alleviate the little hierarchy problem, since the theoretical upper bound on the SM-like Higgs boson mass is pushed up and the LEP II lower bound on the Higgs boson mass is relaxed due to the suppressed $ZZh$ coupling or the suppressed decay $h \to b\bar b $ [@Dermisek]. Since the NMSSM is so well motivated, its phenomenology has been intensively studied in recent years, including its effects in Higgs physics [@NMSSM-Higgs], neutralino physics [@NMSSM-Neutralino], B-physics [@NMSSM-B] as well as squark physics [@NMSSM-Sq]. In the following we recapitulate the basics of the NMSSM with emphasis on its differences from the MSSM.
The superpotential of the NMSSM takes the form [@Franke; @Ellwanger] $$\begin{aligned}
W & = & \lambda \varepsilon_{ij} \hat{H}_u^i \hat{H}_d^j \hat{S} +
\frac{1}{3} \kappa \hat{S}^3 + h_u \varepsilon_{ij}
\hat{Q}^i \hat{U} \hat{H}_u^j - h_d \varepsilon_{ij}
\hat{Q}^i \hat{D} \hat{H}_d^j -h_e \varepsilon_{ij}
\hat{L}^i \hat{E} \hat{H}_d^j
\label{Superpotential}\end{aligned}$$ where $\hat S$ is the singlet Higgs superfield, and $\varepsilon_{12} = - \varepsilon_{21} = 1 $. For the soft SUSY breaking terms, we take $$\begin{aligned}
\label{vs} V_{\mbox{soft}} & = &
\frac{1}{2} M_2 \lambda^a \lambda^a +\frac{1}{2} M_1 \lambda '\lambda '
+m_d^2 |H_d|^2 + m_u^2 |H_u|^2+m_S^2 |S|^2 \nonumber \\
& & + m_Q^2 |\tilde{Q}|^2 + m_U^2 |\tilde{U}|^2 + m_D^2 | \tilde{D}|^2
+m_L^2 |\tilde{L}|^2 + m_E^2 |\tilde{E}|^2 \nonumber \\
& & + (\lambda A_\lambda \varepsilon_{ij} H_u^i H_d^j S + \mbox{h.c.})
+ (\frac{1}{3} \kappa A_\kappa S^3 + \mbox{h.c.}) \nonumber \\
& & + (h_u A_U \varepsilon_{ij} \tilde{Q}^i \tilde{U} H_u^j
-h_d A_D \varepsilon_{ij} \tilde{Q}^i \tilde{D} H_d^j
-h_e A_E \varepsilon_{ij} \tilde{L}^i \tilde{E} H_d^j +\mbox{h.c.})\end{aligned}$$ With the above configuration of the model, the $\mu$ parameter is given by $\mu = \lambda \langle S \rangle $ with $ \langle S \rangle
$ being the vacuum expectation value of the $S$ field, and the $m_A$ parameter of the MSSM corresponds to the combination $m_A^2 =
\frac{2 \mu}{\sin 2 \beta } ( A_\lambda + \frac{\kappa \mu}{\lambda}
) $ (see Eq.(\[CP-odd\])). So compared with the MSSM, the NMSSM has three additional input parameters: $\lambda$, $\kappa$ and $A_\kappa$. These three parameters are subject to the constraints listed in Sec. III, and the requirement that the NMSSM remain perturbative up to the Planck scale restricts $\lambda $ and $\kappa$ to values smaller than 0.7.
The differences between the NMSSM and the MSSM arise in the Higgs sector and the neutralino sector. In the Higgs sector, we now have three CP-even and two CP-odd Higgs bosons. In the basis $[Re(H_u^0),Re(H_d^0), Re(S)]$, the mass-squared matrix entries for the CP-even Higgs bosons are [@Franke; @Ellwanger] $$\begin{aligned}
{\cal M}_{S,11}^2 & = & m_A^2 \cos^2 \beta + m_Z^2 \sin^2 \beta, \nonumber \\
{\cal M}_{S,22}^2 & = & m_A^2 \sin^2 \beta + m_Z^2 \cos^2 \beta, \nonumber \\
{\cal M}_{S,33}^2 & = & \frac{\lambda^2 v^2}{4 \mu^2} m_A^2 \sin^2 2 \beta
-\frac{\lambda \kappa}{2} v^2 \sin 2 \beta
+ \frac{\kappa}{\lambda^2} \mu ( \lambda A_\kappa + 4 \kappa \mu ), \nonumber \\
{\cal M}_{S,12}^2 & = & (2 \lambda^2 v^2 - m_Z^2 - m_A^2 ) \sin \beta \cos \beta, \nonumber \\
{\cal M}_{S,13}^2 & = & 2 \lambda \mu v \sin \beta
- \frac{\lambda v}{2 \mu} m_A^2 \sin 2 \beta \cos \beta - \kappa \mu v \cos \beta, \nonumber \\
{\cal M}_{S,23}^2 & = & 2 \lambda \mu v \cos \beta
- \frac{\lambda v}{2 \mu} m_A^2 \sin \beta \sin 2 \beta - \kappa \mu v \sin \beta,
\label{CP-even}\end{aligned}$$ and for the CP-odd Higgs bosons, their mass-squared matrix entries in the basis $[\tilde{A}, Im(S)]$ with $\tilde{A} = \cos \beta~ Im(H_u^0) + \sin \beta~ Im(H_d^0)$ are $$\begin{aligned}
{\cal M}_{P,11}^2 & = & \frac{2
\mu}{\sin 2 \beta } ( A_\lambda + \frac{\kappa \mu}{\lambda} ) \equiv m_A^2, \nonumber \\
{\cal M}_{P,22}^2 & = & \frac{3}{2} \lambda \kappa v^2 \sin 2 \beta
+ \frac{\lambda^2 v^2}{4 \mu^2} m_A^2 \sin^2 2 \beta - 3 \frac{\kappa}{\lambda} \mu A_\kappa, \nonumber \\
{\cal M}_{P,12}^2 & = & \frac{\lambda v}{2 \mu} m_A^2 \sin 2 \beta
- 3 \kappa \mu v. \label{CP-odd}\end{aligned}$$ Eqs.(\[CP-even\]) and (\[CP-odd\]) indicate that the parameters $\lambda $ and $\kappa \mu $ control the mixings of the doublet fields with the singlet field, $A_\kappa$ only affects the squared mass of the singlet field, and in the limit $\lambda, \kappa \to 0 $ the NMSSM recovers the MSSM. One can also see that when $\lambda$ and $\kappa$ are small, so that the mixings are small, the physical state dominated by the singlet component couples weakly to bottom quarks, and thus its loop contribution to $\rho_b$ and $\sin^2 \theta_{eff}^b$ should be small.
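As a concrete illustration of Eq.(\[CP-even\]), the sketch below builds and diagonalizes the tree-level CP-even mass matrix numerically; the parameter values and the normalization $v \simeq 174$ GeV are assumptions chosen only for illustration, and the radiative corrections included in NMSSMTools are omitted.

```python
import numpy as np

mz, v = 91.1876, 174.0                     # GeV; v normalization assumed here
lam, kap, tanb = 0.2, 0.4, 40.0            # illustrative NMSSM parameters
mu, mA, A_kap = 300.0, 500.0, -100.0

beta = np.arctan(tanb)
sb, cb = np.sin(beta), np.cos(beta)
s2b = 2 * sb * cb

M = np.zeros((3, 3))                       # entries of Eq.(CP-even)
M[0, 0] = mA**2 * cb**2 + mz**2 * sb**2
M[1, 1] = mA**2 * sb**2 + mz**2 * cb**2
M[2, 2] = (lam**2 * v**2 / (4 * mu**2)) * mA**2 * s2b**2 \
          - 0.5 * lam * kap * v**2 * s2b + (kap / lam**2) * mu * (lam * A_kap + 4 * kap * mu)
M[0, 1] = (2 * lam**2 * v**2 - mz**2 - mA**2) * sb * cb
M[0, 2] = 2 * lam * mu * v * sb - (lam * v / (2 * mu)) * mA**2 * s2b * cb - kap * mu * v * cb
M[1, 2] = 2 * lam * mu * v * cb - (lam * v / (2 * mu)) * mA**2 * s2b * sb - kap * mu * v * sb
M = M + M.T - np.diag(M.diagonal())        # symmetrize (only the upper triangle was filled)

print(np.sqrt(np.linalg.eigvalsh(M)))      # tree-level CP-even Higgs masses in GeV
```

Sending $\lambda, \kappa \to 0$ decouples the third row and column, so the upper $2\times2$ block reproduces the tree-level MSSM CP-even mass matrix.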
The NMSSM predicts five neutralinos, and in the basis $(- i\lambda_1, - i \lambda_2, \psi_u^0, \psi_d^0, \psi_s )$ their mass matrix is given by [@Franke; @Ellwanger] $$\begin{aligned}
\left( \begin{array}{ccccc}
M_1 & 0 & m_Z \sin \theta_W \sin \beta & - m_Z \sin \theta_W \cos \beta & 0 \\
& M_2 & -m_Z \cos \theta_W \sin \beta & m_Z \cos \theta_W \cos \beta & 0 \\
& & 0 & -\mu & -\lambda v \cos \beta \\
& & & 0 & - \lambda v \sin \beta \\
& & & & 2 \frac{\kappa}{\lambda} \mu \end{array} \right) . \label{mass
matrix}\end{aligned}$$ This mass matrix is independent of $A_\kappa$; the role of $\lambda$ is to introduce the mixings of $\psi_s $ with $\psi_u^0 $ and $\psi_d^0$, while $\kappa \mu $ sets the mass of $\psi_s$. Quite similarly to the discussion of the Higgs bosons, for small $\lambda$ the corrections to $\rho_b$ and $\sin^2 \theta_{eff}^b$ should be insensitive to the value of $\kappa \mu$.
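A companion sketch for Eq.(\[mass matrix\]) makes this decoupling explicit; again the parameter values are purely illustrative, and only the upper triangle of the symmetric matrix is written out, as in the equation above.

```python
import numpy as np

mz, v = 91.1876, 174.0
sw = np.sqrt(0.2315)                      # assumed sin theta_W
cw = np.sqrt(1 - sw**2)
M1, M2, mu, lam, kap, tanb = 75.0, 150.0, 200.0, 0.2, 0.4, 40.0
beta = np.arctan(tanb)
sb, cb = np.sin(beta), np.cos(beta)

N = np.array([
    [M1,  0.0,  mz * sw * sb, -mz * sw * cb,  0.0],
    [0.0, M2,  -mz * cw * sb,  mz * cw * cb,  0.0],
    [0.0, 0.0,  0.0,          -mu,           -lam * v * cb],
    [0.0, 0.0,  0.0,           0.0,          -lam * v * sb],
    [0.0, 0.0,  0.0,           0.0,           2 * (kap / lam) * mu],
])
N = N + np.triu(N, 1).T                   # fill the lower triangle by symmetry

# For lam -> 0 the singlino (last row/column) decouples and the MSSM 4x4 case is recovered.
print(np.sort(np.abs(np.linalg.eigvalsh(N))))
```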
NMSSM correction to $\rho_b $ and $\sin^2 \theta_{eff}^b$
---------------------------------------------------------
We first look at the SUSY-EW corrections in the NMSSM. Compared with the corresponding MSSM corrections, the NMSSM effects involve two additional parameters $\lambda$ and $\kappa$. As discussed below Eq.(\[mass matrix\]), in case of small $\lambda$, the corrections are insensitive to $\kappa$ (our numerical results verified this conclusion), and thus here we mainly study the dependence on $\lambda$. We choose a value for $\kappa$ so that the allowed range of $\lambda$ is wide.
In Fig.\[NMSSM-EW\] we show the SUSY-EW contributions to $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ as a function of $\lambda$, in which $\tan \beta = 40$, $\kappa=0.4$, $A_\kappa = -100$ GeV and the other parameters are the same as in Fig.\[SUSY-EW1\]. One feature of this figure is that both $\delta \rho_{b,v}$ and $\delta \kappa_{b,v}$ become more negative as $\lambda$ increases, which enlarges the gap between the theoretical values and the experimental data. Another feature is that the contributions are less sensitive to $\lambda$ when $\mu$ becomes large. This can be explained from Eq.(\[mass matrix\]), which shows that the mixings between $\psi_s $ and the doublets $(\psi_u^0, \psi_d^0)$ become negligibly small for sufficiently large $\mu$ and thus reduce the sensitivity of the contributions to $\lambda$.
We now turn to the Higgs loop contributions to $\delta \rho_{b,v} $ and $\delta \kappa_{b,v}$ in the NMSSM. For these contributions, besides $m_A$ and $\tan \beta$, the parameters $\lambda $, $\kappa$ and $A_\kappa$ are also involved. Noting that these contributions are more sensitive to $\lambda$ and $\kappa$ than to $A_\kappa$, we only study their dependence on $\lambda$ and $\kappa$.
In Fig.\[NMSSM-Higgs1\] we show the contributions versus $\lambda $, where $\tan \beta = 40 $, $ \kappa = 0.4 $, $A_\kappa = - 100$ GeV and the other parameters are the same as in Fig.\[SUSY-Higgs\]. This figure shows the same behavior as Fig.\[NMSSM-EW\], and the dependence on $\lambda$ becomes rather weak for large $m_A$.
In Fig.\[NMSSM-Higgs2\] we show the dependence of the contributions on $\kappa$. This figure exhibits a behavior similar to Fig.\[NMSSM-Higgs1\]. Comparing Fig.\[NMSSM-Higgs1\] with Fig.\[NMSSM-Higgs2\], one can see that the contributions depend more strongly on $\lambda $ than on $\kappa$.
Like in Fig.\[contours\], we also investigate the extent to which the NMSSM predictions can agree with the experiment by scanning over the SUSY parameter space in the region of Eq.(\[region\]) and $$\begin{aligned}
\lambda, \kappa \leq 0.7, \quad \quad -1 {\rm~ TeV} < A_\kappa < 1 {\rm~ TeV}.\end{aligned}$$ Our result is shown in Fig.\[contours1\]. Compared with Fig.\[contours\], one can see that the NMSSM cannot improve the agreement and may instead worsen it in a large part of the allowed parameter space.
If we define the quantity $F(\lambda, \kappa) - F(0, 0 )$, where $F$ denotes either $\delta \rho_{b,v}$ or $\delta \kappa_{b,v}$, $F(\lambda, \kappa) $ is its value in the NMSSM for arbitrary $\lambda $ and $\kappa$, and $F(0,0)$ is its value in the MSSM limit, then by studying various cases we find that this quantity is generally smaller than $5 \times 10^{-3}$. This means that in the allowed region of $\lambda$ and $\kappa$ the NMSSM only slightly modifies the MSSM predictions for $\rho_b$ and $\sin^2 \theta_{eff}^b$.
Conclusions
===========
The $Z b \bar{b}$ coupling determined from the $Z$-pole measurements at LEP/SLD deviates significantly from the SM prediction. In terms of $\rho_b $ and $\sin^2 \theta_{eff}^b$, the SM prediction is about $3\sigma$ below the experimental data. If this anomaly is not a statistical or systematic effect, it would signal the presence of new physics in association with the $Zb\bar b$ coupling. In this work we scrutinized the full one-loop supersymmetric effects on the $Z b \bar{b}$ coupling in both the MSSM and the NMSSM, considering all current constraints, which are from the precision electroweak measurements, the direct search for sparticles and Higgs bosons, the stability of the Higgs potential, the dark matter relic density, and the muon $g-2$ measurement. We analyzed the features of each type of correction and searched for the SUSY parameter regions where the corrections could be sizable. We found that the potentially sizable corrections come from the Higgs sector with light $m_A$ and large $\tan \beta$, and can reach $-2\%$ and $-6\%$ for $\rho_b $ and $\sin^2 \theta_{eff}^b$, respectively. However, such sizable negative corrections are just opposite to what is needed to solve the anomaly. We also scanned over the allowed parameter space and investigated to what extent supersymmetry can narrow the discrepancy between the theoretical predictions and the experimental values. We found that under all current constraints the supersymmetric effects are quite restrained and cannot significantly ameliorate the anomaly of the $Zb\bar b$ coupling. Compared with $\chi^2/dof = 9.62/2$ in the SM, the MSSM and NMSSM can only improve it to $\chi^2/dof = 8.77/2$ in the allowed parameter space.
In the future the GigaZ option at the proposed International Linear Collider (ILC) with an integrated luminosity of 30 fb$^{-1}$ is expected to produce more than $10^9$ $Z$-bosons [@ILC] and will give a more precise measurement of $Z b \bar{b}$ coupling, which will allow for a test of new physics models. If the anomaly of $Z b \bar{b}$ coupling persists, it would suggest new physics beyond the MSSM and NMSSM. One possible form of such new physics is the model with additional right-handed gauge bosons which couple predominantly to the third generation quarks [@He]. These new gauge bosons usually mix with $Z$ and $W$ so that the $Z b_R \bar{b}_R $ and $W b_R
\bar{t}_R$ couplings in the SM may be greatly changed. A careful investigation of top quark processes at the LHC, such as top quark decays into polarized $W$ bosons [@Fischer], may test this model in the near future.
Acknowledgement {#acknowledgement .unnumbered}
===============
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada, by the National Natural Science Foundation of China (NNSFC) under grant Nos. 10505007, 10725526 and 10635030, and by HASTIT under grant No. 2009HASTIT004.
Gauge boson self-energy in NMSSM
================================
In the NMSSM the contributions to the vector boson self-energies come from the loops mediated by the SM fermions, gauge bosons, Higgs bosons, sfermions, charginos and neutralinos. In the following we list the expressions for the pure new physics contributions, namely those from the loops of Higgs bosons, sfermions, charginos and neutralinos. We adopt the convention of [@Ellwanger] for the SUSY parameters.
- Higgs contribution:
The NMSSM has an extended Higgs sector with a pair of charged Higgs bosons $H^\pm$, two CP-odd Higgs bosons $a_i$ and three CP-even Higgs bosons $h_i$. The Higgs contribution to the gauge boson self-energy arises from $VHH$, $VVHH$ and $VVH$ interactions, and because we choose the ’t Hooft-Feynman gauge to calculate the contribution, the gauge boson contribution and the Higgs contribution are in general entangled. In our calculation we are actually interested in the difference between the contribution from the NMSSM Higgs sector and that from the SM Higgs sector (see the discussion in the last paragraph of Sect. II). Since the SM contribution is well known [@Hollik; @Denner], we only list the NMSSM contribution. $$\begin{aligned}
\Sigma^{T}_{\gamma \gamma} (p^2) &=& \frac{e^2}{16 \pi^2} B_5 (p, m_{H^+}, m_{H^+}), \\
\Sigma^{T}_{\gamma Z} (p^2) &=& \frac{1}{16 \pi^2} \frac{e g \cos 2
\theta_W}{2 \cos \theta_W} B_5 (p, m_{H^+}, m_{H^+}),\end{aligned}$$ $$\begin{aligned}
\Sigma^{T}_{ZZ} (p^2) &=& \frac{1}{16 \pi^2} \frac{g^2}{4 \cos^2
\theta_W} \biggl\{ \biggl[ ( |S_{i1}|^2 + |S_{i2}|^2 ) A(m_{h_i}) +
|P_{i1}^\prime|^2 A(m_{a_i}) + A(m_Z )
\nonumber \\
& & - 4 | \sin
\beta S_{i2} -\cos \beta S_{i1} |^2 | P_{j1}^\prime|^2 B_{22}(p, m_{a_j}, m_{h_i}) \nonumber \\
& & - 4 | \cos
\beta S_{i2} + \sin \beta S_{i1} |^2 B_{22}(p, m_{Z}, m_{h_i}) \biggr] \nonumber \\
&& +2 \cos^2 2 \theta_W \biggl[ A(m_{H^+}) - 2
B_{22} (p, m_{H^+}, m_{H^+} ) \biggr ] \nonumber \\
&& + 4 m_Z^2 |\cos\beta S_{i2} + \sin \beta S_{i1}|^2 B_0 (p, m_Z,
m_{h_i}) \biggr\},\\
\Sigma^{T}_{WW} (p^2) &=& \frac{1}{16 \pi^2} \frac{g^2}{4} \biggl\{
\biggl[ A(m_{H^+}) + ( |S_{i1}|^2 + |S_{i2}|^2 ) A(m_{h_i}) + A(m_W) \nonumber \\
& & - 4 |\sin \beta S_{i2} -\cos \beta S_{i1}|^2 B_{22}(p, m_{H^+}, m_{h_i}) \nonumber \\
& & - 4 |\cos \beta S_{i2} + \sin \beta S_{i1}|^2 B_{22}(p, m_{W}, m_{h_i}) \biggr]\nonumber \\
&& + \biggl[ A(m_{H^+}) + | P_{i1}^\prime |^2 A(m_{a_i})
- 4 | P_{i1}^\prime |^2 B_{22} (p, m_{H^+}, m_{a_i} ) \biggr ] \nonumber \\
&& + 4 m_W^2 |\cos\beta S_{i2} + \sin \beta S_{i1} |^2 B_0 (p, m_W,
m_{h_i} ) \biggr\},\end{aligned}$$ In the above equations, $g$ is the SU(2) gauge coupling, and $S$ and $P^\prime$ are the rotation matrices defined in Appendix A of [@Ellwanger] that diagonalize the CP-even and CP-odd Higgs mass matrices, respectively. $A$ and $B_{22}$ are the standard one- and two-point loop functions first defined in [@Passarino]. $B_5$ is related to the standard loop functions by [@Hagiwara] $$\begin{aligned}
B_5(p, m_1, m_2) = A(m_1) + A(m_2) - 4 B_{22}(p, m_1, m_2).\end{aligned}$$
- Sfermion contribution:
The sfermion contributions are given by $$\begin{aligned}
\Sigma^T_{WW} (p^2) &=& \frac{1}{16 \pi^2} \frac{g^2}{2} C_f
R^{\tilde{u} \ast}_{\alpha 1} R^{\tilde{u}}_{\alpha 1} R^{\tilde{d}
\ast}_{\beta 1} R^{\tilde{d}}_{\beta 1} B_5(p, m_{\tilde{u}_\alpha},
m_{\tilde{d}_\beta} ) ,\end{aligned}$$ $$\begin{aligned}
\Sigma^T_{ZZ} (p^2) &=& \frac{1}{16 \pi^2}
\frac{g^2}{\cos^2 \theta_W} C_f
\biggl\{ I_{3f}^2 R^{\tilde{f} \ast}_{\alpha 1}
R^{\tilde{f}}_{\alpha 1} R^{\tilde{f} \ast}_{\beta 1}
R^{\tilde{f}}_{\beta 1} B_5(p, m_{\tilde{f}_\alpha}, m_{\tilde{f}_\beta} ) \nonumber \\
&& - 2 s_W^2 I_{3f} Q_f R^{\tilde{f} \ast}_{\alpha 1}
R^{\tilde{f}}_{\alpha 1} B_5(p, m_{\tilde{f}_\alpha},
m_{\tilde{f}_\alpha} ) + s_W^4 Q_f^2 B_5(p,
m_{\tilde{f}_\alpha}, m_{\tilde{f}_\alpha} ) \biggr\}, \\
\Sigma^T_{\gamma \gamma} (p^2) &=& \frac{e^2}{16 \pi^2} C_f Q_f^2
B_5(p, m_{\tilde{f}_\alpha}, m_{\tilde{f}_\alpha} ),\\
\Sigma^T_{\gamma Z} (p^2) &=& \frac{e}{16 \pi^2} \frac{g}{\cos\theta_W}
C_f \biggl\{ I_{3f} Q_f R^{\tilde{f} \ast}_{\alpha 1}
R^{\tilde{f}}_{\alpha 1} -Q_f^2 s_W^2 \biggr\}
B_5(p, m_{\tilde{f}_\alpha}, m_{\tilde{f}_\alpha} ),\end{aligned}$$ where the color factor $C_f$ is 3 for squarks and 1 for sleptons. The electric charge $Q_f$ is given by $2/3, -1/3, 0, -1$ for $\tilde{u}, \tilde{d}, \tilde{\nu}_l, \tilde{l}$, respectively. $I_{3f}$ denotes the third component of the weak isospin, which is $+1/2$ and $-1/2$ for the up- and down-type sfermions, respectively. $R$ is the rotation matrix that diagonalizes the sfermion mass matrix.
- Chargino and neutralino contribution:
For a generic interaction between a vector boson and two fermions, it contributes to vector boson self-energy in the form: $$\begin{aligned}
\Sigma_{V^\prime V}^T (p^2) &= & \frac{2}{16 \pi^2} \biggl\{ (
g_L^{\bar{\psi}_j \psi_i V^\prime } g_L^{\bar{\psi}_i \psi_j V^\ast}
+ g_R^{\bar{\psi}_j \psi_i V^\prime } g_R^{\bar{\psi}_i \psi_j
V^\ast} ) ( 2 p^2 B_3 - B_4 )(p, m_{\psi_i}, m_{\psi_j} ) \nonumber
\\ && + ( g_L^{\bar{\psi}_j \psi_i V^\prime } g_R^{\bar{\psi}_i \psi_j
V^\ast} + g_R^{\bar{\psi}_j \psi_i V^\prime} g_L^{\bar{\psi}_i
\psi_j V^\ast} ) m_{\psi_i} m_{\psi_j} B_0 (p, m_{\psi_i},
m_{\psi_j} ) \biggr\}, \label{fermion}\end{aligned}$$ where $g_{L,R}^{\bar{\psi}_i \psi_j V} $ is the coupling strength of the vector boson to left-handed or right-handed fermions. The functions $B_3$ and $B_4$ are related to the standard two-point functions by [@Hagiwara] $$\begin{aligned}
B_3(p, m_1, m_2) &=& - B_1(p, m_1, m_2) - B_{21}(p, m_1, m_2), \\
B_4(p, m_1, m_2) &=& - m_1^2 B_1(p, m_2, m_1) - m_2^2 B_1(p, m_1, m_2).\end{aligned}$$ For the charginos and neutralinos, the coefficients of their interactions with vector bosons take following forms: $$\begin{aligned}
&& g_L^{\bar{\tilde{\chi}}^0_i \tilde{\chi}^+_j W^-} = g ( -
\frac{1}{\sqrt{2}} N_{i3} V_{j2}^\ast + N_{i2} V_{j1}^\ast ), \quad
g_R^{\bar{\tilde{\chi}}^0_i \tilde{\chi}^+_j W^-} = g (
\frac{1}{\sqrt{2}} N_{i4}^\ast U_{j2} + N_{i2}^\ast U_{j1} ), \\
&& g_L^{\bar{\tilde{\chi}}^0_i \tilde{\chi}^0_j Z} =
\frac{g}{2 \cos\theta_W} ( - N_{i4} N_{j4}^\ast
+ N_{i3} N_{j3}^\ast ), \quad
g_R^{\bar{\tilde{\chi}}^0_i \tilde{\chi}^0_j Z} =
\frac{g}{2\cos\theta_W} ( N_{i4}^\ast N_{j4} - N_{i3}^\ast N_{j3} ), \\
&& g_L^{\bar{\tilde{\chi}}^+_i \tilde{\chi}^+_j Z}
= \frac{g}{\cos\theta_W} ( - V_{i1} V_{j1}^\ast
- \frac{1}{2} V_{i2} V_{j2}^\ast + \delta_{ij} \sin^2 \theta_W ), \quad
g_L^{\bar{\tilde{\chi}}^+_i \tilde{\chi}^+_j \gamma} = - e \delta_{ij}, \\
&& g_R^{\bar{\tilde{\chi}}^+_i \tilde{\chi}^+_j Z} =
\frac{g}{\cos\theta_W} ( - U_{i1}^\ast U_{j1} - \frac{1}{2}
U_{i2}^\ast U_{j2} + \delta_{ij} \sin^2 \theta_W ), \quad
g_R^{\bar{\tilde{\chi}}^+_i \tilde{\chi}^+_j \gamma} = - e \delta_{ij}.\end{aligned}$$ As for the contribution from the neutralino sector, one should note that, due to the Majorana nature of the neutralinos, an additional factor of $\frac{1}{2}$ must be included when using the above formulae to obtain the neutralino contribution to the $Z$-boson self-energy.
Vertex corrections to $Z\to f \bar{f}$ in NMSSM
===============================================
In this section we present the expressions for the radiative corrections to the $Z \bar{f} f$ vertex in the NMSSM, namely $\delta v_f$ and $\delta a_f$ defined in Eq.(\[original\]). In our calculation we neglect terms proportional to the fermion mass, except for $f= b$ (the bottom quark), where we keep terms proportional to the bottom quark Yukawa coupling, $Y_b \sim \frac{m_b}{\cos \beta}$, since these terms may be enhanced by large $\tan \beta$. Throughout this section all $Z$-boson coupling coefficients, such as $\delta v_f$ and $\delta a_f$, are defined so that the common factor $e/(2 \sin \theta_W \cos \theta_W )$ has been extracted.
To present $\delta v_f$ and $\delta a_f$ neatly, it is convenient to introduce the quantities $\delta g_{\lambda}^f$ with $\lambda =L, R$, which denote the vertex corrections to the $Z \bar{f}_\lambda f_\lambda $ interaction and are related to $\delta v_f$ and $\delta a_f$ by $\delta v_f = ( \delta g_L^f + \delta g_R^f )/2 $ and $\delta a_f = ( \delta g_L^f - \delta g_R^f )/2$, respectively. $\delta g_\lambda^f$ is given by [@Hollik] $$\begin{aligned}
\delta g_\lambda^f &=& \Gamma_{f_\lambda}(m_Z^2) - g_\lambda^{Z \bar{f}f} \Sigma_{f_\lambda}
(m_f^2) - 2 \delta_{\lambda L} a_f \frac{\cos \theta_W}{\sin \theta_W}
\frac{\Sigma^{\gamma Z} (0)}{m_Z^2}, \label{vertex correction}\end{aligned}$$ where $\Gamma_{f_\lambda}$ is the unrenormalized vertex correction to $Z \bar{f}_\lambda f_\lambda$ interaction, the second term on the RHS denotes the counter term arising from the fermion $f_\lambda$ self-energy, and the last term is the counter term from the vector boson self-energy.
Assuming the interaction of the scalars $\phi_i$ with the $Z$ boson takes the form $\Gamma^{\phi^\ast_i \phi_j Z} = g^{\phi^\ast_i
\phi_j Z} ( p_{\phi_i} + p_{\phi_j} ) $, we can write down $\Sigma_{f_\lambda}(m_f^2)$ and the vertex function $\Gamma_{f_\lambda}(q^2)$ mediated by a fermion $\psi$ and a scalar $\phi$ in compact generic notation as $$\begin{aligned}
(4\pi)^2 \Sigma_{f_\lambda}(p_f^2) &=&
C_g \biggl| g^{\bar{\psi}_j f \phi_i^\ast}_\lambda \biggr|^2
\biggl( B_0 + B_1 \biggr) (p_f, m_{\phi_i}, m_{\psi_j}), \\
(4\pi)^2 \Gamma_{f_\lambda}(q^2) &=&
- C_g \Biggl\{
\biggl( g_\lambda^{\bar{\psi}_j f \phi_k^\ast} \biggr)^*
g_\lambda^{\bar{\psi}_i f \phi_k^\ast}
\biggl[
g_\lambda^{\bar{\psi}_j \psi_i Z} m_{\psi_i} m_{\psi_j} C_0
\nonumber \\
&&
+ g_{-\lambda}^{\bar{\psi}_j \psi_i Z}
\biggl\{-q^2 (C_{12} + C_{23}) - 2 C_{24} + \frac{1}{2} \biggr\}
\biggr] (p_{\bar{f}}, p_{f}, m_{\psi_i}, m_{\phi_k}, m_{\psi_j})
\nonumber \\
&&
- \biggl( g^{\bar{\psi}_k f \phi_i^\ast}_\lambda \biggr)^*
g^{\bar{\psi}_k f \phi_j^\ast}_\lambda
g^{\phi_i^\ast \phi_j Z}
2 C_{24}(p_{\bar{f}}, p_f, m_{\phi_j}, m_{\psi_k}, m_{\phi_i})
\Biggr\}.\end{aligned}$$ Here $C_g$ is $4/3$ for the gluino contribution ($\psi=\tilde{g}$) and 1 for the others. The chirality index $-\lambda$ follows the rule $-L=R$, $-R=L$.
If $f$ is a lepton, the following combinations of $\{ \psi, \phi \}$ contribute to the vertex:
- [Chargino correction]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ \tilde{\chi}^- , \tilde{\nu} \}:
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^-_j l \tilde{\nu}^\ast} = - g
V_{j1}^\ast; \quad \quad g_R^{\bar{\tilde{\chi}}^-_j l
\tilde{\nu}^\ast} = 0 ; \quad \quad g^{\tilde{\nu}^\ast \tilde{\nu}
Z} = - 1 ; \nonumber \\
&& g_L^{\bar{\tilde{\chi}}^-_j \tilde{\chi}^-_i Z} = 2 ( U_{i1}^\ast
U_{j1} + \frac{1}{2}
U_{i2}^\ast U_{j2} - \delta_{ij} \sin^2 \theta_W ); \nonumber \\
&&g_R^{\bar{\tilde{\chi}}^-_j \tilde{\chi}^-_i Z} = 2 ( V_{i1}
V_{j1}^\ast + \frac{1}{2} V_{i2} V_{j2}^\ast - \delta_{ij} \sin^2
\theta_W );\end{aligned}$$
- [Neutralino correction]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ \tilde{\chi}^0 , \tilde{l} \}:
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^0_j l \tilde{l}_\alpha^\ast} =
\frac{g}{\sqrt{2}} R^{\tilde{l}}_{\alpha 1} ( N_{j2}^\ast + \tan
\theta_W N_{j1}^\ast ) ; \quad \quad g_R^{\bar{\tilde{\chi}}^0_j l
\tilde{l}_\alpha^\ast} = - \sqrt{2} g R^{\tilde{l}}_{\alpha 2} \tan
\theta_W N_{j1} ;
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^0_j \tilde{\chi}^0_i Z} = - N_{j4}
N_{i4}^\ast + N_{j3} N_{i3}^\ast ; \quad \quad
g_R^{\bar{\tilde{\chi}}^0_j \tilde{\chi}^0_i Z} = N_{j4}^\ast N_{i4} - N_{j3}^\ast N_{i3}; \nonumber \\
&&g^{\tilde{l}_\alpha^\ast \tilde{l}_\beta Z} = ( 1 - 2 \sin^2
\theta_W ) R^{\tilde{l}}_{\alpha 1} R^{\tilde{l} \ast}_{\beta 1} - 2
\sin^2 \theta_W R^{\tilde{l}}_{\alpha 2} R^{\tilde{l} \ast}_{\beta
2};\end{aligned}$$
If $f$ is the bottom quark, the following combinations of $\{ \psi, \phi \}$ contribute to the vertex:
- [Chargino correction]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ \tilde{\chi}^- , \tilde{t} \}:
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^-_j b \tilde{t}_\alpha^\ast} = g ( -
R^{\tilde{t}}_{\alpha 1} V_{j1}^\ast + Y_t R^{\tilde{t}}_{\alpha 2}
V_{j2}^\ast ); \quad g_R^{\bar{\tilde{\chi}}^-_j b
\tilde{t}_\alpha^\ast} = g R^{\tilde{t}}_{\alpha 1} Y_b U_{j2};
\nonumber \\
&&g^{\tilde{t}_\alpha^\ast \tilde{t}_\beta Z} = ( -1 + \frac{4}{3}
\sin^2 \theta_W ) R^{\tilde{t}}_{\alpha 1} R^{\tilde{t} \ast}_{\beta
1} + \frac{4}{3} \sin^2 \theta_W R^{\tilde{t}}_{\alpha 2}
R^{\tilde{t} \ast}_{\beta 2};\end{aligned}$$ Note that, in order to write the couplings in a neat form, we define $Y_t = m_t/(\sqrt{2} m_W \sin \beta)$ and $Y_b = m_b/(\sqrt{2} m_W \cos \beta)$. These definitions differ from the conventional ones by a factor of $g$. We adopt this convention throughout the paper.
- [Neutralino correction]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ \tilde{\chi}^0 , \tilde{b} \}:
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^0_j b \tilde{b}_\alpha^\ast} = g \biggl(
\frac{\sqrt{2}}{2} R^{\tilde{b}}_{\alpha 1} ( N_{j2}^\ast -
\frac{1}{3} \tan \theta_W N_{j1}^\ast ) - Y_b
R^{\tilde{b}}_{\alpha 2} N_{j4}^\ast
\biggr); \nonumber \\
&& g_R^{\bar{\tilde{\chi}}^0_j b \tilde{b}_\alpha^\ast} = - g (
R^{\tilde{b}}_{\alpha 1} Y_b N_{j4} + \frac{\sqrt{2}}{3}
R^{\tilde{b}}_{\alpha 2} \tan \theta_W N_{j1} );
\nonumber \\
&&g^{\tilde{b}_\alpha^\ast \tilde{b}_\beta Z} = ( 1 - \frac{2}{3}
\sin^2 \theta_W ) R^{\tilde{b}}_{\alpha 1} R^{\tilde{b} \ast}_{\beta
1} - \frac{2}{3} \sin^2 \theta_W R^{\tilde{b}}_{\alpha 2}
R^{\tilde{b} \ast}_{\beta 2};\end{aligned}$$
- [Gluino correction]{}: $$\begin{aligned}
&& \{\psi, \phi \} =\{ \tilde{g} , \tilde{b} \}:
\nonumber \\
&& g_L^{\bar{\tilde{g}} b \tilde{b}_\alpha^\ast} = -\sqrt{2} g_s
R^{\tilde{b}}_{\alpha 1} ; \quad g_R^{\bar{\tilde{g}} b
\tilde{b}_\alpha^\ast} = \sqrt{2} g_s R^{\tilde{b}}_{\alpha 2} ;\end{aligned}$$
- [Charged Higgs contribution]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ t , H^- \}:
\nonumber \\
&& g_L^{\bar{t} b (H^-)^\ast} = \frac{g m_t}{\sqrt{2} m_W} \cot
\beta; \quad
g_R^{\bar{t} b (H^-)^\ast} = \frac{g m_b}{\sqrt{2} m_W} \tan \beta;
\nonumber \\
&& g_L^{\bar{t}tZ}= - ( 1 - \frac{4}{3} \sin^2 \theta_W ); \quad
g_R^{\bar{t}tZ}= \frac{4}{3} \sin^2 \theta_W; \nonumber
\\ && g^{(H^-)^\ast H^- Z} = \cos 2
\theta_W\end{aligned}$$
- [Neutral Higgs contribution]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ b , (h, a, G^0)
\}:
\nonumber \\
&& g_L^{\bar{b} b h_i} = -\frac{g m_b}{2 m_W \cos \beta} S_{i2};
\quad
g_R^{\bar{b} b h_i} = -\frac{g m_b}{2 m_W \cos \beta} S_{i2};
\nonumber \\
&& g_L^{\bar{b} b a_i} = -\frac{i g m_b}{2 m_W \cos \beta} P_{i2} =
-\frac{i g m_b }{2 m_W } P_{i1}^\prime \tan \beta ; \nonumber \\
&& g_R^{\bar{b} b a_i} = \frac{i g m_b}{2 m_W \cos \beta} P_{i2} =
\frac{i g m_b}{2 m_W } P_{i1}^\prime \tan \beta;
\nonumber \\
&& g_L^{\bar{b} b G^0} = - \frac{i g m_b}{2 m_W} ; \quad
g_R^{\bar{b} b G^0} = \frac{i g m_b}{2 m_W};
\nonumber \\
&& g_L^{\bar{b}b Z}= ( 1 - \frac{2}{3} \sin^2 \theta_W ); \quad
g_R^{\bar{b}b Z}= - \frac{2}{3} \sin^2 \theta_W; \nonumber
\\ && g^{h_i^\ast a_j Z} = - i ( S_{i2}
P_{j2} - S_{i1} P_{j1} ) = - i (S_{i2} \sin \beta - S_{i1} \cos
\beta )
P_{j1}^\prime, \nonumber \\
&& g^{a_j^\ast h_i Z} = i ( S_{i2} P_{j2} - S_{i1} P_{j1} ) = i
(S_{i2} \sin \beta
- S_{i1} \cos \beta ) P_{j1}^\prime, \nonumber \\
&& g^{h_i^\ast G^0 Z} = - i ( S_{i2}
\cos \beta + S_{i1} \sin \beta ), \nonumber \\
&& g^{G^{0 \ast} h_i Z} = i ( S_{i2} \cos \beta + S_{i1} \sin \beta).\end{aligned}$$ Note that in the above formulas we did not include the contribution to $\delta g_\lambda$ from the $\{t, G^-\}$ loop. That contribution alone is UV-finite and should be attributed to the SM radiative effects. The situation is quite different for the neutral Higgs contribution, where the $\{b, G^0\}$ loops are UV divergent and must be combined with the other neutral Higgs contributions to obtain a finite result.
If $f$ is the charm quark, the following combinations of $\{ \psi, \phi \}$ contribute to the vertex:
- [Chargino correction]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ \tilde{\chi}^+ , \tilde{s} \}:
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^+_j c \tilde{s}_\alpha^\ast} = - g
R^{\tilde{s}}_{\alpha 1} U_{j1}^\ast; \quad \quad \quad \quad
g_R^{\bar{\tilde{\chi}}^+_j c \tilde{s}_\alpha^\ast} = 0 ;
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^+_j \tilde{\chi}^+_i Z} = - 2 (
V_{i1}^\ast V_{j1} + \frac{1}{2}
V_{i2}^\ast V_{j2} - \delta_{ij} \sin^2 \theta_W ); \nonumber \\
&&g_R^{\bar{\tilde{\chi}}^+_j \tilde{\chi}^+_i Z} = - 2 ( U_{i1}
U_{j1}^\ast + \frac{1}{2} U_{i2} U_{j2}^\ast - \delta_{ij} \sin^2
\theta_W ); \nonumber \\
&&g^{\tilde{s}_\alpha^\ast \tilde{s}_\beta Z} = ( 1 - \frac{2}{3}
\sin^2 \theta_W ) R^{\tilde{s}}_{\alpha 1} R^{\tilde{s} \ast}_{\beta
1} - \frac{2}{3} \sin^2 \theta_W R^{\tilde{s}}_{\alpha 2}
R^{\tilde{s} \ast}_{\beta 2};\end{aligned}$$
- [Neutralino correction]{}: $$\begin{aligned}
&& \{\psi , \phi \} =\{ \tilde{\chi}^0 , \tilde{c} \}:
\nonumber \\
&& g_L^{\bar{\tilde{\chi}}^0_j c \tilde{c}_\alpha^\ast} = - \frac{ g
}{\sqrt{2}} R^{\tilde{c}}_{\alpha 1} ( N_{j2}^\ast +
\frac{1}{3} \tan \theta_W N_{j1}^\ast ); \nonumber \\
&& g_R^{\bar{\tilde{\chi}}^0_j c \tilde{c}_\alpha^\ast} = \frac{2
\sqrt{2} g}{3} R^{\tilde{c}}_{\alpha 2} \tan \theta_W N_{j1} ;
\nonumber \\
&&g^{\tilde{c}_\alpha^\ast \tilde{c}_\beta Z} = ( - 1 + \frac{4}{3}
\sin^2 \theta_W ) R^{\tilde{c}}_{\alpha 1} R^{\tilde{c} \ast}_{\beta
1} + \frac{4}{3} \sin^2 \theta_W R^{\tilde{c}}_{\alpha 2}
R^{\tilde{c} \ast}_{\beta 2};\end{aligned}$$
- [Gluino correction]{}: $$\begin{aligned}
&& \{\psi, \phi \} =\{ \tilde{g} , \tilde{c} \}:
\nonumber \\
&& g_L^{\bar{\tilde{g}} c \tilde{c}_\alpha^\ast} = -\sqrt{2} g_s
R^{\tilde{c}}_{\alpha 1} ; \quad g_R^{\bar{\tilde{g}} c
\tilde{c}_\alpha^\ast} = \sqrt{2} g_s R^{\tilde{c}}_{\alpha 2} ;\end{aligned}$$
The above expressions then suffice to calculate all the $Z \bar{f}_\lambda f_\lambda$ vertex corrections $\delta g_\lambda^f$. Summation should be taken over all non-vanishing coupling combinations, i.e. over the indices of the sfermions, charginos, neutralinos, and the scalar and pseudoscalar Higgs bosons.
NMSSM contributions to the $\mu$-decay
======================================
In the NMSSM the flavor-dependent correction to the decay $\mu \to \nu_\mu e \bar{\nu}_e$ mainly comes from the loops mediated by gauginos, and the corrected amplitude can be written as [@Heinemeyer] $$\begin{aligned}
M = M_B \left( 1 + 2 \delta^{(v)} + \delta^{(b)} \right),\end{aligned}$$ where $M_B$ is the Born amplitude, $\delta^{(v)}$ is the vertex correction to either the $\bar{e} \nu_e W$ interaction or the $\bar{\mu} \nu_\mu W$ interaction (since we assume mass degeneracy for the first two generations of sleptons, the two corrections are the same), and $\delta^{(b)}$ denotes the box diagram correction.
\(1) Vertex corrections
Similar to Eq.(\[vertex correction\]), the correction to $\bar{f}_1 f_2 W$ interaction can be expressed as $$\begin{aligned}
g_L^{\bar{f}_1 f_2 W} \delta^{(v)}
&=& \Gamma^{\bar{f}_1 f_2 W}(q^2) - \frac{1}{2} g_L^{\bar{f}_1 f_2 W}
\biggl \{ \Sigma_{f_1}(m_{f_1}^2) + \Sigma_{f_2}(m_{f_2}^2)
\biggr\}.\end{aligned}$$ For the $\bar{e} \nu_e W$ interaction, we have $g_L^{\bar{e} \nu_e
W^-} = - \frac{g}{\sqrt{2}}$, $$\begin{aligned}
(4 \pi)^2 \Sigma_{e_L}(m_e^2) &=& | g_L^{\bar{\tilde{\chi}}^0_i e
\tilde{e}_L^\ast} |^2 (B_0 + B_1) (m_e^2, m_{\tilde{e}_L},
m_{\tilde{\chi}_i^0} ) + | g_L^{\bar{\tilde{\chi}}_j^- e
\tilde{\nu}_e^\ast } |^2
(B_0 + B_1) (m_e^2, m_{\tilde{\nu}_e}, m_{\tilde{\chi}_j^-}),
\nonumber \\
(4 \pi)^2 \Sigma_{\nu_e}(m_{\nu_e}^2) &=& |
g_L^{\bar{\tilde{\chi}}^0_i \nu_e \tilde{\nu}_e^\ast} |^2 (B_0 +
B_1) (m_{\nu_e}^2, m_{\tilde{\nu}_e}, m_{\tilde{\chi}_i^0} ) + |
g_L^{\bar{\tilde{\chi}}_j^+ \nu_e \tilde{e}_L^\ast } |^2
(B_0 + B_1) (m_{\nu_e}^2, m_{\tilde{e}_L}, m_{\tilde{\chi}_j^+}),
\nonumber \\
(4 \pi)^2 \Gamma_{\bar{e} \nu_e W^-} &=&
- ( g_L^{\bar{\tilde{\chi}}_{i}^0 e \tilde{e}_L^\ast} )^\ast
g_L^{\bar{\tilde{\chi}}^+_j \nu_e \tilde{e}_L^\ast }
\nonumber \\
&& \times \left \{
g_L^{\bar{\tilde{\chi}}_i^0 \tilde{\chi}^+_j W } m_{\tilde{\chi}_i^0}
m_{\tilde{\chi}_j^+} C_0
+ g_R^{\bar{\tilde{\chi}}_i^0 \tilde{\chi}^+_j W }
(-2 C_{24} + \frac{1}{2} ) \right \} (p_{\nu_e}, p_e, m_{\tilde{\chi}_j^+}, m_{\tilde{e}_L}, m_{\tilde{\chi}_i^0}
)
\nonumber \\
&& - ( g_L^{\bar{\tilde{\chi}}_{j}^- e \tilde{\nu}_e^\ast} )^\ast
g_L^{\bar{\tilde{\chi}}^0_i \nu_e \tilde{\nu}_e^\ast }
\nonumber \\
&& \times \left \{
g_L^{\bar{\tilde{\chi}}_j^- \tilde{\chi}^0_i W } m_{\tilde{\chi}_i^0}
m_{\tilde{\chi}_j^-} C_0
+ g_R^{\bar{\tilde{\chi}}_j^- \tilde{\chi}^0_i W }
(-2 C_{24} + \frac{1}{2} ) \right \} (p_{\nu_e}, p_e, m_{\tilde{\chi}_i^0}, m_{\tilde{\nu}_e}, m_{\tilde{\chi}_j^-}
)
\nonumber \\
&& + 2 ( g_L^{\tilde{\chi}_{i}^0 e \tilde{e}_L^\ast } )^\ast
g_L^{\tilde{\chi}_{i}^0 \nu_e \tilde{\nu}_e^\ast }
g^{\tilde{e}_L^\ast \tilde{\nu}_e W} C_{24}(p_{\nu_e}, p_e, m_{\tilde{\nu}_e},
m_{\tilde{\chi}_{i}^0}, m_{\tilde{e}_L}).\end{aligned}$$ In the above equations, summation over $i=1$ to $5$ $(\tilde{\chi}_i^0)$ and $j=1$ to $2$ $(\tilde{\chi}_j^\pm)$ is implied. The relevant couplings take the following forms $$\begin{aligned}
&&g_L^{\bar{\tilde{\chi}}^0_i \nu_e \tilde{\nu}_e^\ast} =
\frac{g}{\sqrt{2}} ( N_{i1}^\ast \tan \theta_W - N_{i2}^\ast );
\quad \quad g_L^{\bar{\tilde{\chi}}^0_i e \tilde{e}_L^\ast} =
\frac{g}{\sqrt{2}}
( N_{i1}^\ast \tan \theta_W + N_{i2}^\ast ); \nonumber \\
&& g_L^{\bar{\tilde{\chi}}^+_j \nu_e \tilde{e}_L^\ast} = - g
U_{j1}^\ast; \quad \quad \quad \ \ \quad \quad \quad \quad \quad
g_L^{\bar{\tilde{\chi}}^-_j e \tilde{\nu}_e^\ast} = - g V_{j1}^\ast \nonumber \\
&& g_L^{\bar{\tilde{\chi}}_i^0 \tilde{\chi}^+_j W } =
\frac{g}{\sqrt{2}} ( \sqrt{2} V_{j1}^\ast N_{i2} - V_{j2}^\ast
N_{i3} ); \quad g_R^{\bar{\tilde{\chi}}_i^0 \tilde{\chi}^+_j W } =
\frac{g}{\sqrt{2}} ( \sqrt{2} U_{j1} N_{i2}^\ast +
U_{j2} N_{i4}^\ast ); \nonumber \\
&& g_L^{\bar{\tilde{\chi}}_j^- \tilde{\chi}^0_i W } = -
g_R^{\bar{\tilde{\chi}}_i^0 \tilde{\chi}^+_j W }; \quad \quad
g_R^{\bar{\tilde{\chi}}_j^- \tilde{\chi}^0_i W } = -
g_L^{\bar{\tilde{\chi}}_i^0 \tilde{\chi}^+_j W }; \quad \quad
g^{\tilde{e}_L^\ast \tilde{\nu}_e W} = - \frac{g}{\sqrt{2}},
\nonumber\end{aligned}$$ and for the three-point loop functions, since we take their external momentum to be zero, their expressions are greatly simplified: $$\begin{aligned}
C_0 (m_1,m_2,m_3) & =& -\frac{1}{m_3^2} \left \{ - \frac{(1+a)
\ln(1+a)}{a b} + \frac{(1+a+b) \ln(1+a+b)}{(a+b) b} \right \} \nonumber \\
C_{24} (m_1,m_2,m_3) & =& \frac{\Delta}{4} - \frac{1}{4} \ln
\frac{m_3^2}{\mu^2} - \frac{1}{2} \left \{ \frac{-2(1+a)^2
\ln(1+a)}{4 a b} \right . \nonumber \\
&& \left . + \frac{- 3 b (a +b ) + 2 (1+ a+b)^2 \ln (1+a+b)}{4 b
(a+b)} \right \} \nonumber\end{aligned}$$ with $a = \frac{m_2^2 -m_3^2}{m_3^2} $ and $b = \frac{m_1^2 -
m_2^2}{m_3^2}$.
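For numerical work it can be convenient to transcribe these zero-momentum three-point functions directly into code. The following Python sketch is only an illustration (the sample masses and the treatment of the divergence $\Delta$ and scale $\mu$ as explicit inputs are our choices, not part of the calculation described in the text); in the degenerate-mass limit it reproduces the standard result $C_0 \to -1/(2 m^2)$.

```python
import numpy as np

def _ab(m1, m2, m3):
    """Dimensionless mass ratios entering the zero-momentum expressions."""
    a = (m2**2 - m3**2) / m3**2
    b = (m1**2 - m2**2) / m3**2
    return a, b

def C0_zero_momentum(m1, m2, m3):
    """C_0(m1, m2, m3) at vanishing external momenta, per the expression above."""
    a, b = _ab(m1, m2, m3)
    return -(1.0 / m3**2) * (-(1 + a) * np.log(1 + a) / (a * b)
                             + (1 + a + b) * np.log(1 + a + b) / ((a + b) * b))

def C24_zero_momentum(m1, m2, m3, Delta=0.0, mu=1.0e3):
    """C_24 at vanishing external momenta; Delta (UV divergence) and the
    renormalization scale mu are kept as explicit inputs since they cancel
    in physical combinations."""
    a, b = _ab(m1, m2, m3)
    brace = (-2 * (1 + a)**2 * np.log(1 + a) / (4 * a * b)
             + (-3 * b * (a + b) + 2 * (1 + a + b)**2 * np.log(1 + a + b))
             / (4 * b * (a + b)))
    return Delta / 4.0 - 0.25 * np.log(m3**2 / mu**2) - 0.5 * brace

# degenerate-mass check: C_0 -> -1/(2 m^2) as the three masses become equal
print(C0_zero_momentum(500.002, 500.001, 500.0))   # ~ -2.0e-6 GeV^-2
```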
\(2) Box corrections
The box diagram contributions to the $\mu \to \nu_\mu e \bar{\nu}_e$ amplitude can be expressed as $$\begin{aligned}
i T & = & i \left \{
M(1) + M(2) + M(3) + M(4) \right \}
\bar{u}_e \gamma^\mu P_L v_{\nu_e}
\bar{u}_{\nu_\mu} \gamma_\mu P_L u_{\mu}.\end{aligned}$$ Taking into account the normalization of the tree-level amplitude, $- g^2/(2 M_W^2)$, the box diagram contributions can be written as $$\begin{aligned}
\delta^{(b)} &=& -\frac{2
M_W^2}{g^2} \sum_{i=1}^4 M(i),\end{aligned}$$ where each $M(i)$ is given by $$\begin{aligned}
16 \pi^2 M(1) &=& (g_L^{\bar{\tilde{\chi}}_i^0 e
\tilde{e}_L^\ast})^*
g_L^{\bar{\tilde{\chi}}_i^0 \mu \tilde{\mu}_L^\ast }
(g_L^{\bar{\tilde{\chi}}^+_j \nu_\mu \tilde{\mu}_L^\ast})^*
g_L^{\bar{\tilde{\chi}}^+_j \nu_e \tilde{e}_L^\ast}
D_{27}(m_{\tilde{\mu}_L}, m_{\tilde{e}_L}, m_{\tilde{\chi}^+_j},
m_{\tilde{\chi}_i^0}),
\nonumber \\
16 \pi^2 M(2) &=& (g_L^{\bar{\tilde{\chi}}_j^- e
\tilde{\nu}_e^\ast})^*
g_L^{\bar{\tilde{\chi}}_j^- \mu \tilde{\nu}_\mu^\ast }
(g_L^{\bar{\tilde{\chi}}_i^0 \nu_\mu \tilde{\nu}_\mu^\ast})^*
g_L^{\bar{\tilde{\chi}}_i^0 \nu_e \tilde{\nu}_e^\ast}
D_{27}(m_{\tilde{\nu}_\mu}, m_{\tilde{\nu}_e},
m_{\tilde{\chi}_j^-}, m_{\tilde{\chi}_i^0}), \nonumber
\\
16 \pi^2 M(3) &=& \frac{1}{2} m_{\tilde{\chi}_i^0}
m_{\tilde{\chi}_j^-}
g_L^{\bar{\tilde{\chi}}^+_j \nu_e \tilde{e}_L^\ast}
g_L^{\bar{\tilde{\chi}}_j^- \mu \tilde{\nu}_\mu^\ast }
(g_L^{\bar{\tilde{\chi}}_i^0 \nu_\mu \tilde{\nu}_\mu^\ast})^*
(g_L^{\bar{\tilde{\chi}}_i^0 e \tilde{e}_L^\ast})^*
D_0(m_{\tilde{\nu}_\mu}, m_{\tilde{e}_L}, m_{\tilde{\chi}_j^-},
m_{\tilde{\chi}_i^0}),
\nonumber \\
16 \pi^2 M(4) &=& \frac{1}{2} m_{\tilde{\chi}_i^0}
m_{\tilde{\chi}_j^-}
g_L^{\bar{\tilde{\chi}}_i^0 \nu_e \tilde{\nu}_e^\ast}
g_L^{\bar{\tilde{\chi}}_i^0 \mu \tilde{\mu}_L^\ast}
(g_L^{\bar{\tilde{\chi}}^+_j \nu_\mu \tilde{\mu}_L^\ast})^*
(g_L^{\bar{\tilde{\chi}}_j^- e \tilde{\nu}_e^\ast } )^*
D_0(m_{\tilde{\mu}_L}, m_{\tilde{\nu}_e}, m_{\tilde{\chi}_j^-},
m_{\tilde{\chi}_i^0}).
\nonumber\end{aligned}$$ Here all the $D$-functions are evaluated at the zero momentum-transfer limit. Noting the fact that $m_{\tilde{\mu}_L} \simeq
m_{\tilde{e}_L} \simeq m_{\tilde{\nu}_\mu} \simeq m_{\tilde{\nu}_e}
$, we may write the $D$ functions as $$\begin{aligned}
D_0 (m_1, m_1, m_2, m_3 )&=& \frac{1}{m_3^4} \left \{
\frac{-(1+a)\ln(1+a)}{ab^2} \right.\nonumber \\
&& \left. + \frac{ -b(a+b) +
((a+b)(1+a+b)+ b)\ln(1+a+b)}{b^2(a+b)^2} \right\}, \nonumber \\
D_{27} (m_1, m_1, m_2, m_3 )&=& - \frac{1}{2 m_3^2} \left \{
\frac{(1+a)^2 \ln(1+a)}{2 a b^2} \right.\nonumber \\
&& \left. - \frac{ (1+a+b) ( - b(a+b) + ((a+b)(1+a )+ b)\ln(1+a+b))
}{2 b^2(a+b)^2} \right\}. \nonumber\end{aligned}$$
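As with the three-point functions, these degenerate-mass $D$ functions can be coded directly from the expressions above. The short Python sketch below is an illustration only, with $a$ and $b$ defined exactly as for $C_0$ and $C_{24}$; the example masses are arbitrary.

```python
import numpy as np

def D0_degenerate(m1, m2, m3):
    """D_0(m1, m1, m2, m3) at zero momentum transfer, with
    a = (m2^2 - m3^2)/m3^2 and b = (m1^2 - m2^2)/m3^2 as above."""
    a = (m2**2 - m3**2) / m3**2
    b = (m1**2 - m2**2) / m3**2
    term1 = -(1 + a) * np.log(1 + a) / (a * b**2)
    term2 = (-b * (a + b)
             + ((a + b) * (1 + a + b) + b) * np.log(1 + a + b)) / (b**2 * (a + b)**2)
    return (term1 + term2) / m3**4

def D27_degenerate(m1, m2, m3):
    """D_27(m1, m1, m2, m3) at zero momentum transfer."""
    a = (m2**2 - m3**2) / m3**2
    b = (m1**2 - m2**2) / m3**2
    term1 = (1 + a)**2 * np.log(1 + a) / (2 * a * b**2)
    term2 = (1 + a + b) * (-b * (a + b)
             + ((a + b) * (1 + a) + b) * np.log(1 + a + b)) / (2 * b**2 * (a + b)**2)
    return -(term1 - term2) / (2 * m3**2)

# example call with arbitrary sparticle masses (GeV)
print(D0_degenerate(300.0, 200.0, 100.0), D27_degenerate(300.0, 200.0, 100.0))
```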
[99]{} LEP and SLD Collaborations, Phys. Rept. [**427**]{}, 257 (2006). M. W. Grunewald, arXiv:0710.2838 \[hep-ex\]. M. S. Chanowitz, Phys. Rev. Lett. [**87**]{}, 231802 (2001); M. S. Chanowitz, Phys. Rev. D [**66**]{}, 073002 (2002). For a recent discussion of this subject, see F. del Aguila, J. de Blas and M. Perez-Victoria, arXiv:0803.4008 \[hep-ph\], M. S. Chanowitz, arXiv:0806.0890 \[hep-ph\]. R. Barate [*et al.*]{}, Phys. Lett. B [**565**]{}, 61 (2003). D. A. Ross and M. J. G. Veltman, Nucl. Phys. B [**95**]{}, 135 (1975); M. J. G. Veltman, Nucl. Phys. B [**123**]{}, 89 (1977). F. Jegerlehner, Prog. Part. Nucl. Phys. [**27**]{}, 1 (1991). H. E. Haber and G. L. Kane, Phys. Rept. [**117**]{}, 75 (1985). J. F. Gunion and H. E. Haber, Nucl. Phys. B [**272**]{}, 1 (1986) \[Erratum-ibid. B [**402**]{}, 567 (1993)\]. J. R. Ellis, J. F. Gunion, H. E. Haber, L. Roszkowski and F. Zwirner, Phys. Rev. D [**39**]{} (1989) 844. M. Drees, Int. J. Mod. Phys. A [**4**]{} (1989) 3635. U. Ellwanger, M. Rausch de Traubenberg and C. A. Savoy, Phys. Lett. B [**315**]{} (1993) 331; Nucl. Phys. B [**492**]{} (1997) 21; S. F. King and P. L. White, Phys. Rev. D [**52**]{} (1995) 4183; F. Franke and H. Fraas, Int. J. Mod. Phys. A [**12**]{} (1997) 479; B. A. Dobrescu, K.T. Matchev, JHEP 0009 (2000) 031. D.J. Miller, R. Nevzorov, P.M. Zerwas, 681, 3 (2004).
A. Djouadi, J. L. Kneur and G. Moultaka, Phys. Lett. B [**242**]{}, 265 (1990); A. Djouadi, et al., Nucl. Phys. B[**349**]{}, 48 (1991); C. S. Li, [*et al.*]{}, Commun. Theor. Phys. [**20**]{}, 213 (1993); J. Phys. G 19, L13 (1993); X. Wang,J. L. Lopez and D. V. Nanopoulos, Phys. Rev D[**52**]{}, 4116 (1995); M. Boulware and D. Finnell, Phys. Rev D[**44**]{}, 2054 (1991). J. j. Cao, Z. h. Xiong and J. M. Yang, Phys. Rev. Lett. [**88**]{}, 111802 (2002). A. Sirlin, Phys. Rev. D [**22**]{}, 971 (1980). M. Bohm, H. Spiesberger and W. Hollik, Fortsch. Phys. [**34**]{} (1986) 687; W. F. L. Hollik, Fortsch. Phys. [**38**]{}, 165 (1990). A. Denner, Fortsch. Phys. [**41**]{}, 307 (1993). G. Montagna, F. Piccinini, O. Nicrosini, G. Passarino and R. Pittau, Nucl. Phys. B [**401**]{}, 3 (1993); G. Montagna, F. Piccinini, O. Nicrosini, G. Passarino and R. Pittau, Comput. Phys. Commun. [**76**]{}, 328 (1993). D. Y. Bardin, M. S. Bilenky, G. Mitselmakher, G. Mitselmakher, T. Riemann and M. Sachwitz, Z. Phys. C [**44**]{}, 493 (1989); D. Y. Bardin, M. S. Bilenky, T. Riemann, M. Sachwitz and H. Vogt, Comput. Phys. Commun. [**59**]{}, 303 (1990);D. Y. Bardin [*et al.*]{}, Nucl. Phys. B [**351**]{}, 1 (1991). W. M. Yao [*et al.*]{}, Particle Data Group, J. Phys. G [**33**]{} (2006) 1. S. Schael, [*et al.*]{}, Eur. Phys. J. C [**47**]{}, 547 (2006). H.E. Haber, R. Hempfling, 66, 1815 (1991); Y. Okada, M. Yamaguchi, T. Yanagida, Prog. Theor. Phys. [**85**]{}, 1 (1991); 262,54(1991); J. Ellis, G. Ridolfi, F. Zwirner, 257, 83 (1991); 262, 477 (1991); J. R. Espinosa and R. J. Zhang, JHEP [**0003**]{} (2000) 026; A. Dabelstein, Z. Phys. C[**67**]{}, 495 (1995).
C. L. Bennett [*et al.*]{}, Astrophys. J. Suppl. [**148**]{} (2003)1; D. N. Spergel [*et al.*]{}, Astrophys.J. Suppl. [**148**]{} (2003) 175.
A. Menon, D. E. Morrissey and C. E. M. Wagner, Phys. Rev. D [**70**]{}, 035005 (2004); G. Belanger, F. Boudjema, C. Hugonie, A. Pukhov and A. Semenov, JCAP [**0509**]{}, 001 (2005); V. Barger, P. Langacker and H. S. Lee, Phys. Lett. B [**630**]{}, 85 (2005). G. Altarelli and R. Barbieri, Phys. Lett. B [**253**]{}, 161 (1991); G. Altarelli, R. Barbieri and S. Jadach, Nucl. Phys. B [**369**]{}, 3 (1992) \[Erratum-ibid. B [**376**]{}, 444 (1992)\]; G. Altarelli, R. Barbieri and F. Caravaglios, Nucl. Phys. B [**405**]{}, 3 (1993); G. Altarelli, R. Barbieri and F. Caravaglios, Phys. Lett. B [**314**]{}, 357 (1993). M. E. Peskin and T. Takeuchi, Phys. Rev. Lett. [**65**]{}, 964 (1990); M. E. Peskin and T. Takeuchi, Phys. Rev. D [**46**]{}, 381 (1992).
For a recent review, see J. P. Miller, E. de Rafael and B. L. Roberts, Rept. Prog. Phys. [**70**]{}, 795 (2007); D. Stockinger, arXiv:0710.2429 \[hep-ph\]. See, e.g., P. Chankowski, [*et al.*]{}, 417, 101 (1994); D. Garcia and J. Solà, Mod. Phys. Lett. [**A 9**]{}, 211 (1994); S. Heinemeyer, W. Hollik and G. Weiglein, Phys. Rept. [**425**]{}, 265 (2006). See, for example, T. Ibrahim and P. Nath, Phys. Rev. D [**62**]{}, 015004 (2000); S. P. Martin and J. D. Wells, Phys. Rev. D [**64**]{}, 035003 (2001). See for example, T. Besmer, C. Greub, T.Hurth, 609, 359 (2001); F. Borzumati, [*et al.*]{}, 62, 075005(2000). See for example, P. Ball, S. Khalil and E. Kou, 69, 115011 (2004); M. Ciuchini, L. Silvestrini, hep-ph/0603114.
U. Ellwanger, J. F. Gunion and C. Hugonie, JHEP [**0502**]{}, 066 (2005); U. Ellwanger and C. Hugonie, Comput. Phys. Commun. [**175**]{}, 290 (2006). G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, Comput. Phys. Commun. [**176**]{}, 367 (2007). F. Domingo and U. Ellwanger, arXiv:0806.0733 \[hep-ph\]. H. E. Logan, hep-ph/9906332; H. E. Haber and H. E. Logan, Phys. Rev. D [**62**]{}, 015011 (2000). G. Passarino and M. J. G. Veltman, Nucl. Phys. B [**160**]{}, 151 (1979). K. Hagiwara, S. Matsumoto, D. Haidt and C. S. Kim, Z. Phys. C [**64**]{}, 559 (1994) \[Erratum-ibid. C [**68**]{}, 352 (1995)\]; G. C. Cho and K. Hagiwara, Nucl. Phys. B [**574**]{}, 623 (2000). R. Dermisek and J. F. Gunion, Phys. Rev. Lett. [**95**]{}, 041801 (2005); Phys. Rev. D [**73**]{}, 111701 (2006); Phys. Rev. D [**75**]{}, 075019 (2007). See for example, U. Ellwanger, J. F. Gunion and C. Hugonie, JHEP [**0507**]{}, 041 (2005); V. Barger, P. Langacker, H. S. Lee and G. Shaughnessy, Phys. Rev. D [**73**]{}, 115010 (2006); U. Ellwanger and C. Hugonie, Phys. Lett. B [**623**]{}, 93 (2005); A. Arhrib, K. Cheung, T. J. Hou and K. W. Song, JHEP [**0703**]{}, 073 (2007); S. Moretti, S. Munir and P. Poulose, Phys. Lett. B [**644**]{}, 241 (2007); V. Barger, P. Langacker and G. Shaughnessy, Phys. Rev. D [**75**]{}, 055013 (2007); K. Cheung, J. Song and Q. S. Yan, Phys. Rev. Lett. [**99**]{}, 031801 (2007); M. Carena, T. Han, G. Y. Huang and C. E. M. Wagner, JHEP [**0804**]{}, 092 (2008); J. R. Forshaw, [*et al.*]{}, JHEP [**0804**]{}, 090 (2008); A. G. Akeroyd, A. Arhrib and Q. S. Yan, Eur. Phys. J. C [**55**]{}, 653 (2008); X. G. He, J. Tandean and G. Valencia, JHEP [**0806**]{}, 002 (2008); A. Djouadi [*et al.*]{}, JHEP [**0807**]{}, 002 (2008); A. Belyaev [*et al.*]{}, arXiv:0805.3505 \[hep-ph\]. J. F. Gunion, D. Hooper and B. McElrath, Phys. Rev. D [**73**]{}, 015011 (2006); V. Barger, P. Langacker and G. Shaughnessy, Phys. Lett. B [**644**]{}, 361 (2007); V. Barger, [*et al.*]{}, Phys. Rev. D [**75**]{}, 115002 (2007); F. Ferrer, L. M. Krauss and S. Profumo, Phys. Rev. D [**74**]{}, 115007 (2006). G. Hiller, Phys. Rev. D [**70**]{}, 034018 (2004); X. G. He, J. Tandean and G. Valencia, Phys. Rev. Lett. [**98**]{}, 081802 (2007); F. Domingo and U. Ellwanger, JHEP [**0712**]{}, 090 (2007); Z. Heng, [*et al.*]{}, Phys. Rev. D [**77**]{}, 095012 (2008); R. N. Hodgkinson, Phys. Lett. B [**665**]{}, 219 (2008). S. Kraml and W. Porod, Phys. Lett. B [**626**]{}, 175 (2005). J. A. Aguilar-Saavedra, [*et al.*]{}, hep-ph/0106315.
X. G. He and G. Valencia, Phys. Rev. D [**68**]{}, 033011 (2003). M. Fischer, S. Groote, J. G. Korner and M. C. Mauser, Phys. Rev. D [**65**]{}, 054036 (2002); J. j. Cao, R. J. Oakes, F. Wang and J. M. Yang, Phys. Rev. D [**68**]{}, 054019 (2003).
---
abstract: 'Through photometric monitoring of the extended transit window of HD 97658b with the MOST space telescope, we have found that this exoplanet transits with an ephemeris consistent with that predicted from radial velocity measurements. The mid-transit times are $5.6\sigma$ earlier than those of the unverified transit-like signals reported in 2011, and we find no connection between the two sets of events. The transit depth together with our determined stellar radius ($R_\star = 0.703^{+0.039}_{-0.034} R_\odot$) indicates a 2.34$^{+0.18}_{-0.15}$ $R_\earth$ super-Earth. When combined with the radial velocity determined mass of 7.86 $\pm 0.73$ $M_\earth$, our radius measurement allows us to derive a planet density of 3.44$^{+0.91}_{-0.82}$ g cm$^{-3}$. Models suggest that a planet with our measured density has a rocky core that is enveloped in an atmosphere composed of lighter elements. HD 97658 is the second brightest star known to host a transiting super-Earth, facilitating follow-up studies of this warm and likely volatile-rich exoplanet.'
author:
- 'Diana Dragomir, Jaymie M. Matthews, Jason D. Eastman, Chris Cameron, Andrew W. Howard, David B. Guenther, Rainer Kuschnig, Anthony F. J. Moffat, Jason F. Rowe, Slavek M. Rucinski, Dimitar Sasselov, Werner W. Weiss'
title: 'MOST[^1] detects transits of HD 97658, a warm, likely volatile-rich super-Earth'
---
Introduction
============
Transiting super-Earth exoplanets are an important and interesting class of planets to study for two main reasons: no super-Earths exist in the Solar System, and their masses and radii generally allow for a significant range of compositions. The most common way to home in on the composition of a super-Earth is by precisely determining its mass and radius. The [*Kepler*]{} mission has been extremely successful in finding super-Earths and measuring their radii with unprecedented precision. However, the majority of the stars hosting these planets are too faint to allow for the precise radial velocity (RV) measurements that most effectively determine the mass of an exoplanet. Spectroscopic observations of these exoplanets’ atmospheres also require bright host stars. For these reasons, super-Earths transiting bright stars like HD 97658 are essential for the characterization of this class of exoplanet.
The planet orbiting the K1 dwarf HD 97658 was announced by [@How11] with a minimum mass of 8.2 $\pm$ 1.2 $M_\earth$ and an orbital period of 9.494 $\pm$ 0.005 days. Photometric searches for transits of this potential super-Earth have been carried out since its discovery. Transits announced in 2011 [@Hen11] were later shown to be spurious [@Dragomir97] using high-precision MOST (@Mat04, @Wal03) photometry. G. Henry was also unable to confirm the transits with additional Automated Photometric Telescope (APT) photometry acquired during the 2012 observing season (private communication).
The MOST photometry that was used to reject those events only covered the RV transit window between +0.55 and +3.6$\sigma$ of the predicted mid-transit time[^2] (i.e., 71% of the mid-transit time’s posterior probability). We completed the coverage of the 3$\sigma$ transit window by scheduling another set of MOST observations in April 2012, covering -3.7 to +1.5$\sigma$ of the predicted RV transit window (i.e., 99.97% of the mid-transit time’s posterior probability, when combined with the previous results). We noticed an intriguing dip in this light curve, but were unable to follow it up because the star had left the satellite’s Continuous Viewing Zone (CVZ). We were able to confirm that the candidate signal is real by re-visiting the system in 2013 and observing the signal at the expected time during four additional consecutive transit windows.
In this Letter we announce the discovery of HD 97658b transits, the depth of which indicates, together with the mass obtained from the RVs, that the planet is a super-Earth. We describe our data reduction procedure, our analysis of the photometry and our conclusions in the sections that follow.
Observations
============
For consistency, all our times for both the RV and photometric data sets are in BJD$_{TDB}$ [@East10]. Below we describe the two sets of observations.
Keck Radial Velocity Measurements
---------------------------------
Since the publication of [@Dragomir97], we have obtained four new Keck HIRES RV measurements. These were acquired and reduced using the same techniques as in [@How11]. We combined the new measurements with the existing radial velocities and excluded the same three outliers as in [@Dragomir97]. In total, we used 171 radial velocities for the analysis described in Section 3. The full set of RVs are listed in Table 2.
MOST Photometry
---------------
In an effort to monitor as much of the radial-velocity predicted transit window as possible and wrap up the search for transits of HD 97658b, we have acquired MOST observations of the system in addition to those published in [@Dragomir97]. The first of these new data were acquired on April 11-12, 2012 and cover the RV transit window between approximately -3.7 and +1.5$\sigma$. This transit window was computed from the ephemeris reported in [@Dragomir97]. A shallow dip can be seen at a BJD of about 2456029.7, or approximately 1.1$\sigma$ before the predicted mid-transit time. It was not possible to obtain further MOST photometry of the system in order to verify the repeatability of this candidate before the star left the satellite’s CVZ. As soon as HD 97658 re-entered the MOST CVZ in 2013 and we were able to interrupt primary target observations, we re-observed it during four transit windows based on the mid-transit time of the 2012 candidate. Those data were acquired on March 10, 19, 29 and April 7, 2013. The exposure times were 1.5 s, and the observations were stacked on board the satellite in groups of 21 for a total integration time of 32 s per data point.
The light curves covering each transit window were reduced individually. The raw photometry was extracted using aperture photometry [@Row08]. Outliers more than 3$\sigma$ from the mean of each light curve were clipped. The resulting magnitudes were then de-correlated from the sky background using 4th or 5th order polynomials, and from x and y position on the CCD using 2nd or 3rd order polynomials. After these steps, a straylight variation at the orbital period of the satellite remains. This variation is filtered by folding each light curve on this 101.4-minute period, computing a running average from this phased photometry, and removing the resulting waveform from the corresponding light curve.
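For readers who wish to apply a similar reduction to their own photometry, the Python sketch below outlines the last two steps (polynomial decorrelation and removal of the phased straylight waveform). It is a schematic of the procedure described above, not the actual MOST pipeline; the bin width of the running average and the variable names are our own choices.

```python
import numpy as np

MOST_PERIOD = 101.4 / (60.0 * 24.0)   # satellite orbital period in days

def decorrelate(mag, regressor, order=4):
    """Remove a low-order polynomial trend of magnitude versus a regressor
    (sky background, or x/y position on the CCD), preserving the median level."""
    coeffs = np.polyfit(regressor, mag, order)
    return mag - np.polyval(coeffs, regressor) + np.median(mag)

def remove_straylight(time, mag, nbins=50):
    """Fold on the satellite orbital period, build a running average of the
    phased photometry, and subtract the resulting waveform."""
    phase = np.mod(time, MOST_PERIOD) / MOST_PERIOD
    waveform = np.empty_like(mag)
    half_width = 0.5 / nbins
    for i, p in enumerate(phase):
        dphi = np.abs(phase - p)
        dphi = np.minimum(dphi, 1.0 - dphi)    # wrap the phase around 0/1
        waveform[i] = np.mean(mag[dphi < half_width])
    return mag - waveform + np.median(mag)
```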
The five light curves are shown in Figure \[fig:transits1\]. The 2012 observations and the last set of 2013 observations were acquired when the star was on the edge of or slightly outside the CVZ. Therefore, a star in the CVZ had to be used as a switch target during part of every MOST orbit, leading to the gaps that are visible in each of those two light curves. The increasing flux portions at the beginning of the three middle light curves correlate with a sudden change in temperature of the pre-amp board. This occurs when the satellite switches between two targets that are far apart from each other on the sky.
![image](HD97658_individual.pdf)
Analysis
========
Every one of the five light curves shows a transit-like event, spaced by $\approx$9.49 d and occurring about 1.2 - 1.3$\sigma$ earlier than the radial-velocity predicted mid-transit time (the solid grey vertical bars). The extent of the 1$\sigma$ RV-predicted transit window is shown (enclosed by pairs of dotted grey vertical bars), as well as the predicted time of the [@Hen11] events (if they were real), propagated to the epochs of our transits [*using our more precise estimate of the orbital period*]{} listed in Table 1. The first and last light curves suffered from increased scatter from straylight, and the instrument’s pointing stability from one HD 97658 visit to the next (within a given light curve) is not optimized because of the alternating target setup. In addition, the last step of our reduction routine (the removal of straylight artifacts) is not as effective for interrupted light curves. Photometry during part of every MOST orbit is missing, and the re-constructed waveform is not as accurate as for continuous light curves. The effect also depends on the phase and fraction of the MOST orbit that is missing. This in turn affects the shape of shallow signals with durations on the order of one or a couple of satellite orbits.
As an additional check of the transits’ authenticity, we have inspected the light curves of the two other stars in MOST’s field of view of HD 97658. We do not observe any brightness variations in those two stars resembling the HD 97658b transits in either duration or phase.
Before fitting the data, we quantified the correlated noise present in the light curves by following the method described in [@Win11] with minor modifications. We binned the out-of-transit photometry using bin sizes between 5 and 60 minutes and compared the rms of each binned light curve to what we would expect it to be if the light curve only contained white noise. We multiplied the photometric uncertainties (determined during the aperture photometry extraction) by 2.5, the largest value of the scaling factor found during these comparison tests.
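The time-averaging test can be summarized in a few lines of code. The sketch below is a simplified version of the procedure (in the spirit of the [@Win11] comparison); the exact binning, edge handling and array names (`t_oot`, `f_oot` for the out-of-transit times and residuals) are ours.

```python
import numpy as np

def beta_factor(time, resid, bin_minutes):
    """Ratio of the rms of binned residuals to the white-noise expectation
    for a given bin size (a schematic of the test described in the text)."""
    width = bin_minutes / (60.0 * 24.0)                  # bin width in days
    edges = np.arange(time.min(), time.max() + width, width)
    idx = np.digitize(time, edges)
    groups = [resid[idx == k] for k in np.unique(idx) if np.sum(idx == k) > 1]
    means = np.array([g.mean() for g in groups])
    n_avg = np.mean([len(g) for g in groups])
    m = len(groups)
    expected = resid.std() / np.sqrt(n_avg) * np.sqrt(m / (m - 1.0))
    return means.std() / expected

# the largest value over 5-60 minute bins sets the error-bar scaling (2.5 here)
# scale = max(beta_factor(t_oot, f_oot, b) for b in range(5, 65, 5))
```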
We fit the data with EXOFAST [@East13], a MCMC algorithm that can simultaneously model RVs and photometry. We used a modified version of the algorithm which employs Yonsei-Yale isochrones [@Yi; @Dem] together with spectroscopically derived values for stellar effective temperature ($T_{eff}$) and metallicity ($[Fe/H]$), as well as the transit photometry to constrain the stellar parameters. We used $T_{eff}=5119 \pm 44 K$ and $[Fe/H]=-0.3 \pm 0.03$ from [@Hen11]. The algorithm uses the values and uncertainties of these two parameters to determine the stellar mass and radius via isochrone analysis, the uncertainties of which then propagate to the planetary mass and radius. Therefore it is important that the uncertainties on $T_{eff}$ and $[Fe/H]$ are not underestimated. When comparing values from different catalogs, there is evidence that uncertainties on individual metallicity estimates are often underestimated [@Hinkel]. Further, we found that the values of these two parameters differ by 1-2$\sigma$ between [@How11] and [@Hen11]. Therefore, we scaled these uncertainties upwards, to 50 K for the effective temperature and to 0.08 dex for the metallicity [@Buch], for the EXOFAST fit. We used quadratic limb darkening coefficients for the MOST bandpass of $u_{1}=0.621 \pm 0.050$ and $u_{2}=0.141 \pm 0.050$ generated using the models of [@CK] (A. Prsa, private communication).
![\[fig:transits2\] MOST photometry of the three HD 97658b continuous transit light curves, folded on the best-fit (median) period from the EXOFAST fit (9.4909 days) and averaged in 5-min bins. The red curve is the best-fitting transit model based on the EXOFAST fit of the three continuous transits.](HD97658_foldbinmiddle.pdf)
We fit the photometry together with the RVs to ensure the two data sets were consistent with each other. We carried out one run using all five transits, and another using only the three continuous transits. The former run resulted in a planetary radius just over 1$\sigma$ smaller than that from the latter run. For the reasons discussed at the beginning of this section, we chose to use the results from the run based only on the continuous transits for the remainder of this Letter.
It has been shown (@Win11, @Dragomir97) that the MOST reduction pipeline can sometimes suppress the depth of a transit signal. We have carried out a transit injection and recovery test to quantify this effect for the HD 97658 time series. Simulated limb-darkened transits corresponding to a planet with radius 2.23 $R_\earth$ (the value obtained from the EXOFAST fit) were inserted in the raw photometry at 100 randomly distributed orbital phases overlapping with out-of-transit sections of the three continuous light curves. The photometry containing the simulated transits was then reduced following the same steps as for the unmodified photometry. The depths of the recovered transits were on average suppressed by 10%, corresponding to a 5% suppression in the planetary radius. We increased the planetary radius output by EXOFAST by this percentage, and adjusted the planetary density and surface gravity accordingly.
The final values for the stellar and planetary parameters of this fit are listed in Table 1. The folded transit based on the three continuous light curves, binned in 5-min bins, is shown in Figure \[fig:transits2\]. To produce the phased time series, we omitted the first 0.1 days of each light curve which were affected by the temperature change described at the end of Section 2.2.
We note that the uncertainties on the planetary radius are only 13% larger than if the unscaled photometric uncertainties were used. This indicates that the uncertainty in the stellar radius is the dominant factor limiting the precision to which we can measure the planet’s size with the current data. The stellar mass and radius obtained from the EXOFAST fit are in excellent agreement with those quoted in [@Hen11]. Our value of $a/R_\star$ is consistent with the value of this ratio derived using $a$ from the RVs alone and $R_\star$. Finally, we compare the value of log$g$ determined by EXOFAST from the photometry and stellar parameters (4.618$^{+0.036}_{-0.041}$) with the spectroscopic log$g$ from [@Hen11] (4.52 $\pm$ 0.06). [@Buch] show that the noise floor for spectroscopic log$g$, dominated by uncertainties in stellar models, is 0.1 dex. Using this value as the uncertainty on the spectroscopic log$g$ for this star, we find that the photometric and spectroscopic log$g$ values agree to within 1$\sigma$.
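As a quick consistency check, the photometric log$g$ quoted above follows directly from the tabulated stellar mass and radius; a short illustrative calculation (constants in cgs units, input values rounded as in Table 1):

```python
import numpy as np

G_cgs = 6.674e-8                       # gravitational constant [cm^3 g^-1 s^-2]
M_sun, R_sun = 1.989e33, 6.957e10      # solar mass [g] and radius [cm]

M_star = 0.747 * M_sun                 # from Table 1
R_star = 0.703 * R_sun
print(round(np.log10(G_cgs * M_star / R_star**2), 3))   # -> 4.618
```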
![Mass-radius diagram for currently known transiting super-Earths with masses measured either by RVs or TTVs. Planetary parameters were obtained from the Exoplanet Orbit Database at exoplanets.org [@Wri11]. Density model curves are shown for 100% water, 50% water/40% silicate mantle/6% iron core, and rock (silicate) planets [@Seager3]. The maximum iron fraction curve corresponds to planets with minimum radius defined by the maximum mantle stripping limit [@Marcus].[]{data-label="fig:mr"}](dim.pdf)
![The orbital periods of known transiting planets as a function of their host star V magnitude. Planets with intermediate or long periods ($\gtrapprox$ 6 days) orbiting stars brighter than V = 9 are shown in blue. HD 97658b (in red) now joins their ranks.[]{data-label="fig:magp"}](magper.pdf)
Discussion
==========
We have carried out a search for transits of HD 97658b throughout its 3$\sigma$ RV-predicted transit window. We have discovered that the planet does cross the disk of its host star, allowing us to measure its size and therefore its density. The transits we have detected occur approximately 6$\sigma$ earlier than the transit-like signals reported in [@Hen11]. Propagating our mid-transit time backward to spring of 2011 (the epoch of the @Hen11 observations) indicates that transits are predicted to have occurred 16 $\pm$ 3 hours earlier than the transit-like signals observed by [@Hen11], so our 3$\sigma$ transit window does not overlap with theirs. Further, our derived planetary radius is $>$3$\sigma$ smaller. For these reasons, we conclude that the transits announced in this paper bear no connection to the previously announced transit-like signals.
HD 97658b has a radius of $2.34^{+0.18}_{-0.15} R_{\oplus}$, slightly larger than that of 55 Cnc e [@Win11]. Figure \[fig:mr\] shows the mass and radius of HD 97658b relative to those of other known transiting super-Earths. Its density of $3.44^{+0.91}_{-0.82}$ g cm$^{-3}$ suggests the planet is probably not solely rocky. If it is composed of a rocky core, this core is most likely surrounded by an atmosphere of volatiles, by which we mean planetary ingredients lighter than just rock and iron. The mass and radius of HD 97658b are very similar to those of Kepler-68b, a planet in a multiple system with a period of 5.4 days [@Gilli]. Of the two, HD 97658b is significantly less irradiated, a characteristic which supports the existence of light elements such as hydrogen or helium in its atmosphere. Indeed, its zero-albedo equilibrium temperature is $\sim$1030 K assuming no heat redistribution, and $\sim$ 730 K for even heat redistribution. However, the measured density of this super-Earth is also consistent with a water planet.
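These quoted values follow from simple estimates. Assuming one common convention for the zero-albedo equilibrium temperature (dayside-only versus full heat redistribution), the sketch below roughly reproduces the planet density and the two temperatures; small offsets with respect to the quoted numbers come from rounding of the input parameters.

```python
import numpy as np

M_earth, R_earth = 5.972e27, 6.371e8        # [g], [cm]
R_sun, AU = 6.957e10, 1.496e13              # [cm]

Mp, Rp = 7.86 * M_earth, 2.34 * R_earth
rho = Mp / (4.0 / 3.0 * np.pi * Rp**3)      # ~3.4 g cm^-3

Teff, Rstar, a = 5119.0, 0.703 * R_sun, 0.0796 * AU
T_dayside = Teff * np.sqrt(Rstar / a)           # no heat redistribution, ~1040 K
T_uniform = Teff * np.sqrt(Rstar / (2.0 * a))   # full redistribution, ~730 K
print(round(rho, 2), int(T_dayside), int(T_uniform))
```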
HD 97658b is the second super-Earth known to transit a very bright ($V = 7.7$) star. Figure \[fig:magp\] shows the orbital period of known transiting planets as a function of the magnitude of their host star. Of the now three exoplanets in the sparsely populated upper right area of the diagram, HD 97658b is the only super-Earth. It is enlightening to study how the structure and composition of warm super-Earths differ from those of their hotter counterparts. The brightness of HD 97658 makes this exoplanet system ideal for such investigations. We encourage follow-up observations of this system to more precisely constrain the planet’s physical parameters and to begin probing its atmosphere. In fact, HD 97658b is an ideal candidate for atmospheric characterization with the James Webb Space Telescope (JWST; @Seager4 [@Shab11]).
More exoplanets will be found to transit bright stars through systematic photometric monitoring of known RV planets by projects such as the MOST and Spitzer super-Earth transit searches. The Transit Ephemeris Refinement and Monitoring Survey (TERMS; @Kane09 [@Drag12]) in particular will help further populate the upper right corner of Figure \[fig:magp\] by searching for transits of RV planets with intermediate and long periods.
We have learned from the Kepler mission that super-Earths exist in multiple planet systems. We believe it is worthwhile to continue the radial velocity monitoring of the HD 97658 system in order to search for additional planetary companions.
To close, the 4% [*a priori*]{} transit probability of HD 97658b reminds us of the impartial nature of statistics: all probabilities, no matter how small, count in the race toward 100 percent.
Acknowledgments
===============
We are grateful to Michael Gillon, Kaspar von Braun, Dan Fabrycky, Darin Ragozzine, Jean Schneider and Josh Winn for useful suggestions and assistance with Figures 2 and 3. We thank Peter McCullough, Heather Knutson and especially the anonymous referee for feedback which has helped improve this manuscript. We also thank Geoff Marcy, Debra Fischer, John Johnson, Jason Wright, Howard Isaacson and other Keck-HIRES observers from the California Planet Search. Finally, the authors wish to extend special thanks to those of Hawai‘ian ancestry on whose sacred mountain of Mauna Kea we are privileged to be guests. Without their generous hospitality, the Keck observations presented herein would not have been possible.
The Natural Sciences and Engineering Research Council of Canada supports the research of DBG, JMM, AFJM and SMR. Additional support for AFJM comes from FQRNT (Québec). RK and WWW were supported by the Austrian Science Fund (P22691-N16) and by the Austrian Research Promotion Agency-ALR.
Buchhave, L. A. et al., 2012, Nature, 486, 375 Castelli, F., Kurucz, R. L., 2004, arXiv:0405087 Demarque, P. et al., 2004, ApJS, 155, 667 Demory, B.-O. et al., 2011, A$\&$A, 533, 114 Dragomir, D. et al., 2012, ApJ, 754, 37 Dragomir, D. et al., 2012, ApJ, 759, L41 Eastman, J. et al., 2010, PASP, 122, 935 Eastman, J. et al., 2013, PASP, 125, 83 Gilliland, R. L. et al., 2013, ApJ, 766, 40 Henry, G. W. et al., 2011, ApJ, withdrawn (arXiv:1109.2549v1) Hinkel, N. R., Kane, S. R., 2013, ApJ, accepted (arXiv:1304.0450) Howard, A. W. et al., 2011, ApJ, 730, 10 Kane, S. R. et al., 2009, PASP, 121, 1386 Marcus, R. A. et al., 2010, ApJ, 712, L73 Matthews, J. M. et al., 2004, Nature, 430, 921 Rowe, J. F. et al., 2008, ApJ, 689, 1345 Seager, S. et al., 2007, ApJ, 669, 1279 Seager, S. et al., 2009, Transiting Exoplanets with JWST, Astrophysics in the Next Decade, Astrophysics and Space Science Proceedings, Springer (Netherlands), p.123 Shabram, M. et al., 2011, ApJ, 727, 65 Walker, G. et al., 2003, PASP, 115, 1023 Winn, J. N. et al., 2011, ApJ, 737, L18 Wright, J. T. et al., 2011, PASP, 123, 412 Yi, S. et al., 2001, ApJS, 136, 417
[lcc]{} ${\it V}$ [*mag*]{} &[*Apparent V magnitude*]{}& ${\it 7.7}$\
${\it u_1}$&[*linear limb-darkening coeff*]{}& ${\it 0.621}\pm{\it 0.050}$\
${\it u_2}$&[*quadratic limb-darkening coeff*]{}& ${\it 0.141}\pm{\it 0.050}$\
${\ensuremath{{\it T}_{\rm {\it eff}}}}$&[*Effective temperature (K)*]{}& ${\it 5119}\pm{\it 50}$\
${\ensuremath{\left[{\rm {\it Fe}}/{\rm {\it H}}\right]}}$&[*Metallicity*]{}& ${\it-0.30}_{{\it-0.08}}^{{\it+0.08}}$\
$M_{*}$&Mass ([$\,M_\Sun$]{})& $0.747_{-0.030}^{+0.031}$\
$R_{*}$&Radius ([$\,R_\Sun$]{})& $0.703_{-0.030}^{+0.035}$\
$\rho_*$&Density (cgs)& $3.04_{-0.39}^{+0.38}$\
$\log(g_*)$&Surface gravity (cgs)& $4.618_{-0.039}^{+0.034}$\
$e$&Eccentricity& $0.063_{-0.044}^{+0.059}$\
$\omega_*$&Argument of periastron (degrees)& $-9_{-63}^{+67}$\
$P$&Period (days)& $9.4909_{-0.0015}^{+0.0016}$\
$a$&Semi-major axis (AU)& $0.0796\pm0.0011$\
$M_{P}$&Mass ($M_{\oplus}$)& $7.86\pm0.73$\
$R_{P}$&Radius ($R_{\oplus}$)& $2.341_{-0.15}^{+0.17}$\
$\rho_{P}$&Density (cgs)& $3.35_{-0.65}^{+0.76}$\
$\log(g_{P})$&Surface gravity& $3.146_{-0.069}^{+0.065}$\
$K$&RV semi-amplitude (m/s)& $2.90\pm0.25$\
$M_P\sin i$&Minimum mass ($M_{\oplus}$)& $7.86\pm0.73$\
$T_C$&Time of mid-transit ([$\rm {BJD_{TDB}}$]{})& $2456361.8050_{-0.0033}^{+0.0030}$\
$R_{P}/R_{*}$&Radius of planet in stellar radii& $0.0306\pm0.0014$\
$a/R_{*}$&Semi-major axis in stellar radii& $24.36_{-1.1}^{+0.97}$\
$i$&Inclination (degrees)& $89.45_{-0.42}^{+0.37}$\
$b$&Impact Parameter& $0.23_{-0.16}^{+0.18}$\
$\delta$&Transit depth& $0.000934_{-0.000084}^{+0.000090}$\
$\tau$&Ingress/egress duration (days)& $0.00391_{-0.00030}^{+0.00054}$\
$T_{14}$&Total duration (days)& $0.1238_{-0.0053}^{+0.0052}$
\[tab:HD97658\]
[lcc]{} 2453398.041747 & 7.40 & 0.65\
2453748.036160 & 4.76 & 0.71\
2453806.962215 & 2.51 & 0.71\
2454085.159590 & -4.83 & 0.79\
2454246.878923 & -2.33 & 0.72\
2454247.840558 & -4.86 & 0.94\
2454248.945454 & -2.82 & 1.08\
2454249.803197 & 0.22 & 1.07\
2454250.840581 & 1.56 & 0.94\
2454251.895304 & -0.07 & 0.96\
2454255.872627 & -0.87 & 0.72\
2454277.818152 & -0.91 & 0.98\
2454278.839136 & -0.02 & 0.97\
2454279.830756 & 2.28 & 0.99\
2454294.764264 & -5.63 & 1.10\
2454300.742505 & -0.60 & 1.02\
2454304.762991 & -5.95 & 1.17\
2454305.759223 & -5.06 & 0.80\
2454306.772505 & -4.31 & 0.97\
2454307.747998 & -0.93 & 0.77\
2454308.749850 & 2.39 & 0.75\
2454309.748488 & 1.32 & 1.04\
2454310.744183 & -0.21 & 1.03\
2454311.744669 & 4.26 & 1.12\
2454312.743176 & -2.24 & 1.04\
2454313.744948 & -1.35 & 1.19\
2454314.751499 & -0.07 & 1.14\
2454455.155080 & -5.29 & 1.08\
2454635.798361 & 0.75 & 0.96\
2454780.126213 & -3.91 & 1.10\
2454807.091282 & -7.71 & 1.20\
2454808.158585 & -4.91 & 1.25\
2454809.144268 & -1.59 & 1.14\
2454810.025842 & 1.52 & 1.23\
2454811.115461 & -2.72 & 1.30\
2454847.118970 & -2.42 & 1.34\
2454927.899109 & -1.44 & 1.19\
2454928.963980 & -9.31 & 1.12\
2454929.842494 & -7.99 & 1.26\
2454934.959381 & -5.61 & 1.63\
2454954.970889 & 0.00 & 1.08\
2454955.924351 & 0.73 & 0.58\
2454956.906013 & 1.26 & 0.58\
2454963.966588 & 1.83 & 0.61\
2454983.873218 & -1.21 & 0.64\
2454984.903863 & -0.58 & 0.65\
2454985.846210 & -4.03 & 0.64\
2454986.888827 & -3.76 & 0.64\
2454987.896299 & -4.52 & 0.63\
2454988.844192 & -5.88 & 0.64\
2455041.753229 & 0.80 & 1.32\
2455164.116578 & 0.93 & 1.20\
2455188.160043 & -3.03 & 0.73\
2455190.133815 & -6.54 & 0.64\
2455191.162113 & -4.92 & 0.68\
2455192.130000 & -1.20 & 0.63\
2455193.117044 & 0.90 & 0.65\
2455197.145482 & -2.04 & 0.65\
2455198.064452 & -3.58 & 0.67\
2455199.090244 & -3.69 & 0.65\
2455256.958605 & 2.93 & 0.66\
2455285.941646 & -2.31 & 0.69\
2455289.831867 & -1.33 & 0.61\
2455311.785608 & -5.80 & 0.67\
2455312.860423 & -4.16 & 0.56\
2455313.768309 & 0.20 & 0.64\
2455314.782223 & 0.82 & 0.66\
2455317.962910 & -0.08 & 0.64\
2455318.944742 & -3.03 & 0.65\
2455319.903509 & -4.99 & 0.58\
2455320.861260 & -4.83 & 0.57\
2455321.834736 & -1.17 & 0.58\
2455342.878772 & -2.10 & 0.61\
2455343.830638 & -1.23 & 0.63\
2455344.880812 & 1.12 & 0.65\
2455350.781814 & -4.60 & 0.56\
2455351.884649 & 0.72 & 0.59\
2455372.756478 & 2.28 & 0.58\
2455373.784809 & -0.70 & 0.56\
2455374.759589 & 0.04 & 0.57\
2455375.776842 & -0.64 & 0.59\
2455376.744926 & -2.63 & 0.56\
2455377.741425 & -0.96 & 0.55\
2455378.743929 & 2.85 & 0.61\
2455379.791225 & 0.98 & 0.62\
2455380.744606 & 5.58 & 0.59\
2455400.743251 & -0.31 & 0.66\
2455401.770164 & -1.35 & 1.38\
2455403.738397 & -3.92 & 0.70\
2455404.737600 & -4.22 & 0.64\
2455405.740598 & -5.58 & 0.63\
2455406.738423 & -3.22 & 0.60\
2455407.758497 & 0.37 & 0.75\
2455410.738763 & 5.05 & 0.64\
2455411.734694 & 2.03 & 0.65\
2455412.732758 & 2.06 & 1.15\
2455413.736205 & 1.52 & 0.71\
2455501.151034 & -2.46 & 0.64\
2455522.132828 & 1.48 & 0.63\
2455529.170850 & -12.62 & 1.53\
2455543.149154 & 1.28 & 0.64\
2455546.125032 & -2.30 & 0.65\
2455556.136099 & 12.63 & 0.70\
2455557.076967 & -3.11 & 0.67\
2455559.127467 & -2.44 & 0.77\
2455585.099940 & -3.61 & 0.64\
2455605.987844 & -5.75 & 0.66\
2455606.982970 & -3.25 & 0.65\
2455607.982479 & -2.00 & 0.74\
2455614.039397 & -0.19 & 0.64\
2455614.875472 & 0.94 & 0.78\
2455633.993494 & -4.99 & 0.64\
2455635.051246 & -1.85 & 0.68\
2455635.997009 & -1.25 & 0.62\
2455636.758083 & -1.68 & 0.59\
2455663.885585 & -5.65 & 0.68\
2455667.968022 & -0.67 & 0.60\
2455668.935707 & 1.36 & 0.62\
2455670.839673 & -3.71 & 0.61\
2455671.811836 & -2.62 & 0.62\
2455672.799190 & -1.02 & 0.64\
2455673.805866 & -0.23 & 0.60\
2455696.875840 & -1.48 & 0.63\
2455697.796128 & -2.55 & 0.60\
2455698.800020 & -3.67 & 0.59\
2455699.806142 & -6.66 & 0.61\
2455700.825166 & -4.76 & 0.62\
2455703.777981 & -1.62 & 0.60\
2455704.749841 & 0.40 & 0.58\
2455705.750495 & -2.57 & 0.58\
2455706.809245 & -2.11 & 0.59\
2455707.799643 & -2.35 & 0.63\
2455723.768188 & 1.27 & 0.42\
2455728.755558 & -3.21 & 0.60\
2455731.795592 & 1.39 & 0.87\
2455733.760561 & 0.91 & 0.58\
2455734.783228 & -2.46 & 0.55\
2455735.790021 & -2.10 & 0.54\
2455738.754067 & -4.23 & 0.58\
2455751.746336 & 3.76 & 0.65\
2455752.741618 & 1.77 & 0.62\
2455759.747774 & -0.60 & 0.61\
2455760.738878 & -0.60 & 0.63\
2455761.744163 & 1.19 & 0.59\
2455762.753715 & -0.61 & 0.68\
2455763.748343 & -2.38 & 0.64\
2455768.736141 & 0.15 & 0.62\
2455770.749400 & 3.34 & 0.70\
2455871.127171 & -4.76 & 0.70\
2455878.144281 & -4.84 & 0.64\
2455879.098924 & -7.54 & 0.69\
2455880.152597 & -9.09 & 0.67\
2455882.154960 & -3.74 & 0.62\
2455902.045678 & 0.46 & 0.63\
2455903.044030 & 4.20 & 0.66\
2455904.134955 & 5.19 & 0.62\
2455905.066792 & 3.94 & 0.56\
2455929.146269 & -5.11 & 0.62\
2455932.049104 & -1.17 & 0.62\
2455945.099002 & -0.97 & 0.76\
2455961.023265 & 6.60 & 0.65\
2455967.944966 & -0.47 & 0.67\
2455972.956960 & -0.51 & 0.61\
2455990.905528 & 5.94 & 0.71\
2455991.904565 & 2.46 & 0.70\
2455997.010626 & -0.79 & 0.62\
2455999.795911 & 3.83 & 0.67\
2456018.905001 & 6.66 & 0.66\
2456019.968956 & 2.26 & 0.62\
2456027.799773 & 2.93 & 0.66\
2456102.749354 & 4.11 & 0.57\
2456111.744838 & 4.65 & 0.64\
2456145.735429 & 0.73 & 0.75\
2456266.103384 & 2.22 & 0.70\
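The radial velocities listed above can be checked against the tabulated orbital solution; a minimal sketch (Python, assuming standard CGS constants and the approximation $M_P \ll M_*$) recovers the tabulated semi-amplitude from the quoted masses, period and eccentricity:

```python
import math

G       = 6.674e-8                   # CGS gravitational constant (assumed nominal value)
M_SUN   = 1.989e33                   # g
M_EARTH = 5.972e27                   # g

P      = 9.4909 * 86400.0            # orbital period [s]
Mstar  = 0.747 * M_SUN               # stellar mass [g]
Mpsini = 7.86 * M_EARTH              # minimum planet mass [g]
e      = 0.063

# K = (2*pi*G/P)^(1/3) * Mp*sin(i) / (Mstar^(2/3) * sqrt(1 - e^2)), valid for Mp << Mstar
K = (2.0 * math.pi * G / P)**(1.0 / 3.0) * Mpsini / (Mstar**(2.0 / 3.0) * math.sqrt(1.0 - e**2))
print(f"K ~ {K / 100.0:.2f} m/s")    # ~2.9 m/s, consistent with the tabulated 2.90 +/- 0.25 m/s
```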
[^1]: Based on data from the MOST satellite, a Canadian Space Agency mission operated by Microsatellite Systems Canada Inc. (MSCI; former Dynacon Inc.) and the Universities of Toronto and British Columbia, with the assistance of the University of Vienna.
[^2]: The transit window is the time span during which a transit is predicted to occur, calculated from the uncertainties on the orbital period and those on the predicted mid-transit time.
---
abstract: 'The dynamical system behaviour and thermal evolution of a homogeneous and isotropic dissipative universe are analyzed. The dissipation is driven by the bulk viscosity $\xi = \alpha \rho^s $ and the evolution of bulk viscous pressure is described using the full causal Israel-Stewart theory. We find that for $s=1/2$ the model possesses a prior decelerated epoch which is unstable and a stable future accelerated epoch. From the thermodynamic analysis, we have verified that the local as well as the generalised second law of thermodynamics are satisfied throughout the evolution of the universe. We also show that the convexity condition $S''<0$ is satisfied at the end stage of the universe, which implies an upper bound to the evolution of the entropy. For $s\neq1/2,$ the case $s<1/2$ is ruled out since it does not predict the conventional evolutionary stages of the universe. On the other hand, the case $s>1/2$ does imply prior decelerated and late de Sitter epochs, but both of them are unstable fixed points. The thermal evolution corresponding to the same case implies that the GSL is satisfied at both epochs but the convexity condition is violated by both, so that entropy growth is unbounded. Hence for $s>1/2$ the model does not give a stable evolution of the universe.'
author:
- |
Jerin Mohan N D, Krishna P B, Athira Sasidharan and Titus K Mathew\
Department of Physics, Cochin University of Science and Technology,\
Kochi-22, India.\
jerinmohandk@cusat.ac.in;krishnapb@cusat.ac.in,\
athirasnair91@cusat.ac.in,titus@cusat.ac.in
title: '[**Dynamical system analysis and thermal evolution of the causal dissipative model**]{}'
---
Introduction
============
Astronomical observations ([@Riess; @Perlmutter; @Bennett; @Riess2; @Tegmark; @Seljak; @Komatsu]) have shown that the current universe is expanding at an accelerating rate. The most successful model which explains this recent acceleration is the $\Lambda$CDM, which assumes the cosmological constant $\Lambda,$ with equation of state $\omega_{\Lambda}=-1$ as the cosmic component responsible for the acceleration. But due to the huge difference between the predicted and observed values of the cosmological constant and also due to the surprising coincidence between the present densities of the dark matter and dark energy [@Sami1], attention has been turned towards dynamical dark energy models [@Wang; @Caldwell; @Bamba]. However the nature and composition of dark energy is still a mystery and its possible coupling with matter [@wang1] is also unknown. Modified gravity theories [@Dvali; @Freese] have been proposed as alternative solutions. Another interesting approach is to invoke viscosity in the dark matter sector which can produce adequate negative pressure to cause the late acceleration [@Brevik1; @Brevik2; @Avelino1; @Athira]. In the Weinberg formalism [@Misner; @Weinberg; @Weinberg1] of the imperfect fluid, the bulk viscous fluid can act as a source in Einstein field equation. Very recently it has been shown that the viscosity of the dark matter can alleviate the discrepancy in the values of the cosmological parameters when one use the large scale structure (LSS) and Planck data [@anand1] to constrain the parameters in the respective cosmological models.
Physically, bulk viscosity can be generated whenever a system deviates from the local thermodynamic equilibrium [@Wilson]. In cosmic evolution, the viscosity arises as an effective pressure to restore the system back into the thermal equilibrium whenever the universe undergoes fast expansion or contraction [@Okumura]. The bulk viscosity thus generated can cause a negative pressure similar to the cosmological constant or quintessence [@Mathews; @Avelino2]. Even though this is a possible realistic picture for the generation of bulk viscosity, its origin in the expanding universe is still not clearly understood. Some authors have shown that different cooling rates of components of the cosmic medium can produce bulk viscosity [@Weinberg2; @Schweizer; @Udey; @Zimdahl3]. Another proposal is that bulk viscosity of the cosmic fluid may be the result of the particle number non-conserving interactions [@Murphy; @Turok; @Zimdahl4].
The simplest way of accounting for the bulk viscosity in the expanding universe is through the Eckart theory [@Eckart], which gives a linear relationship between the bulk viscous pressure and the expansion rate of the universe. Since it is limited to the first order deviation from equilibrium, the Eckart theory suffers from serious shortcomings like the violation of causality [@Coley2; @Israel1] and the occurrence of unstable equilibrium states [@Hiscock1]. But it has been used by several authors to model the bulk viscosity in explaining the late acceleration of the universe [@Brevik1; @Fabris; @Barrow1; @Colistete; @Avelino1; @Avelino2; @Athira; @Athira2], primarily due to its simplicity. Such cosmological models lead to a reasonably good description of the background evolution of the universe, but become problematic when considering the structure formation scenario.
A more general theory, consistent with the relativistic second order evolution of the bulk viscous pressure, was suggested by Israel and Stewart [@Israel1; @IsraelStewart; @IsraelStewart2] and is free from the shortcomings of the Eckart formalism. The inclusion of the dissipative second order terms ensures causality in the Israel-Stewart model and also accounts for the stability of the corresponding solutions. In the limit of vanishing relaxation time, the Israel-Stewart theory reduces to the Eckart theory. In some recent dissipative cosmological models [@Piattella], a truncated version of the Israel-Stewart theory has been used, in which one omits the divergence terms in the expression for the evolution of the bulk viscous pressure. Strictly speaking, such an approximation is valid only when the cosmic fluid is very close to the equilibrium state.
It was noted in [@Padmanabhan; @Prisco] that both causal and non-causal dissipative models in the context of the early inflation of the universe have some critical issues, which makes the role of viscosity in the early universe rather unlikely. But in the context of the late evolution of the universe the bulk viscous models are promising. Based on the Eckart approach, the late acceleration can be explained without invoking any fictitious dark energy component [@Brevik1; @Brevik2; @Avelino1; @Athira]. A dynamical system analysis of the same model can predict the conventional evolution of the late universe if the bulk viscous coefficient is a constant [@Athira2]. The background evolution of the bulk viscous universe using the full Israel-Stewart theory has been analyzed in our previous work [@EPJC1], where we have obtained analytical solutions which explain the late acceleration of the universe with a transition redshift $z_T\sim0.52,$ showing the feasibility of describing a late accelerating universe. The current status of the viscous models is described in the review [@Brevikrev1].
In the present work, our aim is twofold: firstly, to perform a dynamical system analysis of the dissipative model of the late universe and, secondly, to study the thermodynamic evolution of the model based on the IS theory. In both analyses we choose the viscosity as $\xi \propto \rho^s,$ where the parameter takes the values $s=1/2$ or $s \neq 1/2.$ The first method is aimed at finding the critical points of the autonomous differential equations which are obtained from the Friedmann equations consistent with the conservation conditions. The sign and properties of the eigenvalues corresponding to these critical points will then determine the asymptotic stability of the model. Our analysis shows that, for $s=1/2,$ there exists an unstable critical point corresponding to a prior decelerated universe and an asymptotically stable critical point corresponding to a future accelerating epoch. We also explore the status of the energy conditions, both the strong and dominant energy conditions, to check the physical feasibility of the solutions corresponding to the respective critical points. Further, we analyse the thermal evolution of the model, where we check the status of the generalized second law (GSL) and the convexity condition, $S^{\prime\prime} <0,$ where $S$ is the entropy and the prime denotes a derivative with respect to a suitable cosmic variable. In this context we found that the end stage in this model is thermodynamically stable with an upper bound for entropy when $s=1/2,$ which indicates that our universe behaves like an ordinary macroscopic system [@Pavon1]. The authors of reference [@Cruz1] have analysed the viscous model following the Israel-Stewart approach, by considering an ansatz for the Hubble parameter with a varying barotropic equation of state, and have shown, on the contrary, that the end stage violates the convexity condition. However, for $s\neq 1/2$ the results in the present model are not in favour of an evolution towards a stable epoch of the universe.
The paper is organized as follows. In section (\[sec:1\]), the Hubble parameter from the full causal Israel-Stewart theory is obtained. The dynamical behaviour of the bulk viscous model for $s=1/2$ and $s\neq1/2$ is studied in section (\[sec:2\]). Section (\[sec:3\]) deals with the analysis of the thermodynamic conditions during the evolution of the present model of the universe and our conclusions are given in section (\[sec:4\]). The possibility of attaining a pure de Sitter phase for $s=1/2$ is discussed in the Appendix.
The causal viscous model {#sec:1}
========================
We consider a flat FLRW universe with viscous matter as the cosmic component. The basic equations governing the evolution of the universe are, $$\label{eqn:F1}
3H^{2}={\rho_m},$$ $$\label{eqn:Hdot1}
\dot H = -H^2 - \frac{1}{6} \left(\rho_m+3P_{eff} \right),$$ where $H=\frac{\dot{a}}{a}$ is the Hubble parameter with $a$ the scale factor, $ \rho_{m} $ is the matter density and $$\label{eqn:effectivep}
P_{eff}=p+\Pi,$$ is the effective pressure, $ p=(\gamma-1)\rho $ is the normal kinetic pressure with $ \gamma $ as the barotropic index and $ \Pi $ is the bulk viscous pressure. The evolution of the density of the viscous fluid satisfies the conservation equation, $$\label{eqn:con1}
\dot{\rho}_{m}+3H(\rho_{m}+P_{eff})=0.$$ In the full causal IS theory, the evolution of the viscous pressure is given by, $$\label{eqn:IS1}
\tau\dot\Pi+\Pi=-3\xi H-\frac{1}{2}\tau\Pi\left(3H+\frac{\dot\tau}{\tau}-\frac{\dot\xi}{\xi}-
\frac{\dot{T}}{T}\right),$$ where $\tau$, $ \xi $ and $ T $ are the relaxation time, bulk viscosity and temperature respectively and are generally functions of the density of the fluid, defined by the following equations [@Maartens], $$\label{eqn:tau}
\tau=\alpha\rho^{s-1}, \, \, \,
\xi=\alpha\rho^{s}, \, \, \,
T=\beta\rho^{r},$$ Here $\alpha$, $\beta$ and $s$ are all positive constant parameters and $ r=\frac{\gamma-1}{\gamma} $. For $\tau=0$, the differential equation (\[eqn:IS1\]) reduces to the simple Eckart equation, $\Pi=-3\xi H.$ Friedmann equation (\[eqn:F1\]) can be combined with (\[eqn:con1\]) and (\[eqn:effectivep\]) to express the bulk viscous pressure $\Pi$ as, $$\label{eqn:pi}
\Pi=-\left[2\dot{H}+3H^2+(\gamma-1)\rho\right].$$ Following this, the bulk viscosity evolution in (\[eqn:IS1\]) can be expressed as, $$\begin{aligned}
\ddot H + \frac{3}{2}\left[1+(1-\gamma)\right] H \dot H + 3^{1-s} \alpha^{-1} H^{2-2s} \dot H \nonumber\\
- (1+r)H^{-1} {\dot H}^2 + \frac{9}{4}(\gamma -2)H^3+
\frac{1}{2}3^{2-s}\alpha^{-1}\gamma H^{4-2s} =0.\end{aligned}$$ For $\gamma=1$ corresponding to non-relativistic matter and taking $s=\frac{1}{2}$ [@ChimentoJacubi], the above equation admits solution [@EPJC1] of the form, $$\label{eqn:Hubbleparameter}
H=H_0\left(C_1 a^{-m_1}+C_2a^{-m_2}\right),$$ where $ H_0$ is the present Hubble parameter and the other constants are [@EPJC1], $$\label{consatntC1}
C_{1;2}=\frac{\pm 1+\sqrt{1+6\alpha^{2}} \mp \sqrt{3}\alpha\tilde{\Pi}_0}{2\sqrt{1+6\alpha^{2}}},$$ $$\label{constantm1}
m_{1;2}=\frac{\sqrt{3}}{2\alpha}\left(\sqrt{3}\alpha+1 \mp \sqrt{1+6\alpha^{2}}\right).$$ Here $ \tilde{\Pi}_0=\frac{\Pi_0}{3H_{0}^{2}} $ is the dimensionless bulk viscous pressure parameter, with $\Pi_0$ as the present value of $\Pi.$ The model parameters up to the 1$\sigma$ level were estimated by contrasting the model with the supernovae data [@EPJC1] and are given in table \[Table:ParametersFIS\]. We find $m_1=0.31,$ $ m_2=5.29.$ Since $m_1<1 \, \textrm{and} \, m_2>1,$ the expansion rate will be dominated by $a^{-m_2}$ in the early epoch, while the term $a^{-m_1}$ dominates in the late epoch. Hence in the limit $a\rightarrow 0,$ the deceleration parameter $q$ becomes $q=-1-\dot H/H^2 \to -1+m_2 > 0,$ which implies a prior decelerated expansion phase. But in the limit $a\rightarrow\infty,$ it turns out that $q\to -1+m_1<0,$ which implies a late accelerating phase of expansion and therefore the model predicts a transition into the late accelerating epoch. However, since $m_1$ is a positive quantity, the deceleration parameter will in general be greater than $-1,$ but owing to the smallness of $m_1$ it can approach a value near to $-1$ corresponding to a pure de Sitter epoch [@EPJC1]. We will look into this point in a later section. Further, since the model assumes a single cosmic component, it follows that $\Omega_{total} \sim \Omega_{darkmatter}.$ From (\[eqn:Hubbleparameter\]) the matter density parameter $\Omega_m$ is obtained as $$\Omega_m=\frac{\rho_m}{\rho_{critical}}=\frac{ H^2}{H_0^2}=(C_1 a^{-m_1}+C_2 a^{-m_2})^2.$$ The matter density parameter at the present time, $\Omega_{m0},$ corresponding to $a=1$, is $$\Omega_{m0}=(C_1+C_2)^2=1.$$
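The coefficients and exponents above are straightforward to evaluate numerically; the following sketch (Python) implements (\[consatntC1\]) and (\[constantm1\]). The $\alpha$ used below is simply the value implied by the quoted exponents via $m_1+m_2=3+\sqrt{3}/\alpha,$ and the $\tilde{\Pi}_0$ entry is an illustrative placeholder rather than the fitted value:

```python
import math

def exponents(alpha):
    """m_1, m_2 of the solution H = H0*(C1*a^-m1 + C2*a^-m2)."""
    root = math.sqrt(1.0 + 6.0 * alpha**2)
    m1 = math.sqrt(3.0) / (2.0 * alpha) * (math.sqrt(3.0) * alpha + 1.0 - root)
    m2 = math.sqrt(3.0) / (2.0 * alpha) * (math.sqrt(3.0) * alpha + 1.0 + root)
    return m1, m2

def coefficients(alpha, pi0):
    """C_1, C_2; they satisfy C1 + C2 = 1 identically, so Omega_m0 = 1."""
    root = math.sqrt(1.0 + 6.0 * alpha**2)
    c1 = (1.0 + root - math.sqrt(3.0) * alpha * pi0) / (2.0 * root)
    c2 = (-1.0 + root + math.sqrt(3.0) * alpha * pi0) / (2.0 * root)
    return c1, c2

# alpha implied by the quoted exponents: m1 + m2 = 3 + sqrt(3)/alpha
alpha = math.sqrt(3.0) / (0.31 + 5.29 - 3.0)          # ~0.67
m1, m2 = exponents(alpha)                              # ~0.31, ~5.29
c1, c2 = coefficients(alpha, pi0=-0.15)                # pi0 here is illustrative only
print(f"alpha ~ {alpha:.2f}, m1 ~ {m1:.2f}, m2 ~ {m2:.2f}, C1+C2 = {c1 + c2:.2f}")
```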
\
Dynamical system analysis {#sec:2}
=========================
We will now consider the dynamical system analysis [@Ellis] of the model. For this we define the following dimensionless variables, $$\label{eqn:dimensionlessdensityphasespace}
\Omega=\frac{\rho_m}{3H^2}, \, \, \,
\tilde{\Pi}=\frac{\Pi}{3H^2}, \, \, \, \textrm{and} \, \, \,
H(t)dt=d\tilde{\tau},$$ where the last relation defines a new time variable. Equations (\[eqn:Hdot1\]), (\[eqn:con1\]) and the IS equation (\[eqn:IS1\]) can then be rewritten as, $$\label{eqn:H'}
H'=-H\left[1+\frac{1}{2}(\Omega+3\tilde{\Pi})\right],$$ $$\label{eqn:omega'}
\Omega'=(\Omega-1)(\Omega+3\tilde{\Pi}),$$ and $$\label{eqn:Pi'}
\tilde{\Pi'}=-3\Omega- \tilde{\Pi}\left[\frac{3}{2}\left(2+\frac{\tilde{\Pi}}{\Omega}\right) +
\frac{H^{1-2s}}{\alpha (3\Omega)^{s-1}}-\Omega-3\tilde{\Pi}-2\right],$$ where the $'prime'$ denotes a derivative with respect to the new variable $\tilde{\tau}.$ Since $H$ is always positive for an expanding flat universe, the above equations are well defined. The above three dynamical equations constitute the evolution of the system in a phase space described by the variables $(H,\Omega, \Pi).$ We are considering a universe with a single component, the viscous matter, implying that $\Omega=1.$ Then the phase space becomes two dimensional with variables $(H, \tilde{\Pi}).$ The critical parameter in studying the evolution is $s.$ We have found exact solutions for $s=1/2$ in the previous section. However, for analyzing the dynamical system behaviour, we will also consider the choice $s\neq1/2$ in accounting for the bulk viscosity.
Choice 1. $s=1/2$
-----------------
For this choice, (\[eqn:H'\]) and (\[eqn:Pi'\]) decouple from each other and, as a result, the phase space effectively reduces to one dimension, with (\[eqn:Pi'\]) representing the evolution in this one-dimensional phase space. Moreover, in the present case, since $\Omega=1,$ (\[eqn:Pi'\]) can be expressed in a much simpler form in terms of the equation of state, $\omega=\tilde{\Pi}/\Omega,$ as, $$\label{eqn:autoeq1}
\omega'=\frac{3}{2}(\omega-\omega^+)(\omega-\omega^-),$$ where $$\label{eqn:omegapm}
\omega^{\pm}=\frac{1}{\sqrt{3}\alpha}[1\pm\sqrt{1+6\alpha^2}]$$ are the fixed points. Equation (\[eqn:omegapm\]) implies that $\omega^+>0$ and $\omega^-<0$ for all $\alpha>0.$ The early phase corresponding to $\omega^+$ is decelerating. If $\alpha$ is sufficiently large then $\omega^-< -1/3$ and consequently the late epoch of the universe will be accelerating. So, depending on the value of the parameter $\alpha$, the equation of state can assume values accordingly. For $\alpha$ in the range $\frac{2\sqrt{3}}{17} \leq \alpha \leq \frac{2}{\sqrt{3}}$ the late-epoch equation of state varies in the range $-1 \leq \omega^- < -1/3.$ So an asymptotic de Sitter epoch ($\omega \to -1$) is possible only if $\alpha$ assumes the upper limit value around $\frac{2}{\sqrt{3}}.$ For the best estimated values of the model parameters, we have obtained $\omega^+=2.52$ and $\omega^-=-0.79,$ corresponding respectively to a prior decelerated phase, in which the viscous matter assumes a stiff-fluid nature, and a late accelerated epoch, in which the matter assumes a quintessence nature. So for the case with $\gamma=1$ (the barotropic index) and $\epsilon=1$ ($\gamma$ and $\epsilon$ appear in the general equation of the relaxation time (\[eqn:relaxationtimegeneral\]) given in the Appendix) the late universe with bulk viscous matter can be accelerating but it will not approach a pure de Sitter epoch like the standard $\Lambda$CDM.
The deceleration parameter corresponding to the equilibrium points can be obtained using the relation $1+q=\frac{3}{2}(1+\omega),$ through which we arrive at $q^+\sim4.28$ and $q^-\sim-0.69.$ Taking account of these facts, it is possible to re-write the general solution given in (\[eqn:Hubbleparameter\]) as, $$\label{eqn:Hubbleparameterwithq}
H=H_0\left(C_1 a^{-(1+q^-)} + C_2 a^{-(1+q^+)} \right).$$ Using this, the transition from the decelerated to the current accelerated phase of expansion can easily be explained. The transition redshift $z_T$ can be obtained using (\[eqn:Hubbleparameterwithq\]) as, $$\label{Transition redshift}
z_{T}=\left(-\frac{C_1 q^+}{C_2 q^-}\right)^{-\frac{1}{q^+ - q^-}}\sim 0.52,$$ where the numerical value is corresponding to the best estimated values of the model parameters and is found to be in the WMAP range $z_T=(0.45-0.73)$ [@U.Alam].
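A quick numerical check of these fixed points (Python; the value $\alpha\approx0.67$ implied by the quoted exponents is used for illustration) reproduces the quoted $\omega^{\pm}$ and $q^{\pm}$ and the end points of the accelerating range of $\alpha$:

```python
import math

def omega_pm(alpha):
    """Fixed points of the one-dimensional flow, eq. (eqn:omegapm)."""
    root = math.sqrt(1.0 + 6.0 * alpha**2)
    return (1.0 + root) / (math.sqrt(3.0) * alpha), (1.0 - root) / (math.sqrt(3.0) * alpha)

def q_of(omega):
    """Deceleration parameter from 1 + q = (3/2)(1 + omega)."""
    return -1.0 + 1.5 * (1.0 + omega)

alpha = 0.67                                    # implied by m1, m2; treated as illustrative
w_plus, w_minus = omega_pm(alpha)               # ~2.52, ~-0.79
print(w_plus, w_minus, q_of(w_plus), q_of(w_minus))   # ~2.5, ~-0.8, ~4.3, ~-0.7

# End points of the accelerating range quoted in the text:
print(omega_pm(2.0 * math.sqrt(3.0) / 17.0)[1])   # ~ -1/3
print(omega_pm(2.0 / math.sqrt(3.0))[1])          # = -1
```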
Without knowing the analytical solution, it is possible to analyze the cosmic evolution from (\[eqn:autoeq1\]) in a transparent way by drawing the phase diagram of $\omega,$ namely plotting $\omega^{\prime}$ versus $\omega.$ Since the phase space is one dimensional, we interpret (\[eqn:autoeq1\]) as a vector field on a single line [@Awad1]. The evolution of $\omega$ is represented by the direction of the change of $\omega$ along the axis and is determined by the sign of $\omega^{\prime}.$ A small variation in $\omega$ is expressed as $\delta\omega=\omega'\delta\tilde{\tau},$ so that $\omega$ flows towards the increasing direction of $\tilde{\tau}$ (right) if $\omega'>0$ and flows towards the decreasing direction (left) if $\omega'<0.$ Perturbations in the $\omega$ space around the critical point, $\omega_c \, \, (\omega^+ \, \textrm{or} \,
\omega^-),$ propagate at a rate\
$$\frac{d}{d\tilde{\tau}}\left(\delta\omega\right)=\omega'=f(\omega)=f(\omega_c+\delta \omega).$$ The Taylor series expansion around $\omega_c$ can be written as, $f(\omega_c+\delta \omega)=f(\omega_c)+\delta \omega f'(\omega_c)+O(\delta \omega^2), $ where $f'(\omega_c)=\frac{d}{d\omega}f(\omega)|_{\omega_c}.$ Since $f(\omega_c)=0$ at a critical point, we get $\frac{d}{d\tilde{\tau}} (\delta \omega)=\delta \omega f'(\omega_c).$ By linearising $\delta \omega$ about the critical point $\omega_c$ we get, $$\label{eqn:stability1D}
\delta \omega(\tilde{\tau}) \propto e^{f'(\omega_c)\tilde{\tau}}.$$ The above equation tells us that the stability of the critical points is determined by the slope, $f^{\prime}(\omega_c).$ If $f'(\omega_c)>0,$ then any small disturbance around the critical point grows exponentially and hence the point becomes unstable (a repeller). On the other hand, if $f'(\omega_c)<0,$ all small disturbances around the critical point decay exponentially and it will be a stable one (an attractor). The critical point will be semi-stable if the slope $f'(\omega_c)$ changes its sign at the critical point.
The slope corresponding to (\[eqn:autoeq1\]) can be obtained as, $$\label{eqn:omega''1D}
f'(\omega)=\frac{3}{2}\left[2\omega-\omega^+ - \omega^-\right].$$ For the best estimated values of the model parameters, it is clear that the condition $\omega^- < \omega < \omega^+$ is always satisfied. Then, at the critical points $\omega^{\pm}$ the slope will satisfy the conditions, $$\label{eqn:omega''+}
f'(\omega^+)=\frac{3}{2}\left[\omega^+ - \omega^-\right]>0 \,\, for \,\,\alpha>0,$$ $$\label{eqn:omega''-}
f'(\omega^-)=\frac{3}{2}\left[\omega^- - \omega^+\right]<0 \,\,for \,\,\alpha>0,$$ which indicates that $\omega^+$ is an unstable fixed point while $\omega^-$ is a stable fixed point. Hence the universe will evolves from an unstable decelerated epoch to the stable accelerated epoch. So in effect we get a qualitative description of the behaviour of the cosmological evolution without relying on the exact solution. The phase portrait is shown in figure \[plot:onedimensionalplot\].
The exact solutions corresponding to the fixed points $\omega^\pm$ follow from (\[eqn:H'\]) as, $$\label{eqn:H1D}
H_{\omega^\pm}=\frac{1}{(1+q^{\pm})t}, \quad
a_{\omega^\pm}=a_0 t^{\frac{1}{(1+q^\pm)}}.$$ For $\omega^+ $ we have $\frac{1}{(1+q^{+})}<1,$ indicating a decelerating solution, while for $\omega^-$ we have $\frac{1}{(1+q^{-})}>1$, implying an accelerating solution. The density and pressure then follow the evolution, $$\label{eqn:densityandpressure1D}
\rho_{\omega^\pm}=\frac{3}{(1+q^\pm)^2 t^2}, \,\,\,\,\, \tilde{\Pi}_{\omega^\pm}=\frac{3\omega^\pm}{(1+q^\pm)^2 t^2}.$$ For $\omega^+$ the pressure $\tilde\Pi >0$ and for $\omega^-$ it becomes negative, $\tilde\Pi <0,$ implying the generation of negative pressure in the late acceleration epoch.\
![The one dimensional phase portrait of evolution of $\omega'$ versus $\omega$ in the bulk viscous matter dominated universe using the full causal IS theory for best estimated value of the model parameter, when $s=1/2.$[]{data-label="plot:onedimensionalplot"}](Phaseplot1D.pdf)
It is essential to know the status of the energy conditions [@Visser], which characterize the feasibility of different solutions. The strong energy condition (SEC) implies that $\rho+3P_{eff}\geq0$. The violation of SEC indicates an accelerating expansion of the universe. The dominant energy condition (DEC) implies that $\rho+P_{eff}\geq0.$ A violation of DEC causes the breakdown of the generalised second law of thermodynamics in the normal case. However, if there occur dissipative effects in the cosmic fluid, the GSL can still be satisfied even when DEC is violated [@Pavon12]. In terms of the equation of state, SEC and DEC translate to $1+3\,\omega\geq0$ and $1+\omega\geq0$ respectively. For the best estimated values of the parameters, it can easily be seen that both SEC and DEC are satisfied by $\omega^+.$ In the case of the fixed point $\omega^-,$ SEC is violated, since it represents an accelerating solution, but DEC is satisfied as it is a physically feasible epoch. All these facts are summarized in table \[Table:FIS1/2\].\
\
A similar case of the non-validity of the strong energy condition for a future accelerating epoch was also pointed out by Barrow [@Barrow2] using the Eckart formalism to account for the viscosity.
Choice 2. $s\neq1/2$
--------------------
Unlike the case of $s=1/2,$ a complete description of the dynamic evolution is difficult for $s\neq1/2.$ However, we can get a qualitative description of the evolution by extracting the information from the equilibrium points. In this case we have a two dimensional space $(H,\Pi).$ For simplicity we define a variable, $$\label{eqn:h}
h=H^{1-2s}.$$ The dynamical equations (\[eqn:H’\]) and (\[eqn:Pi’\]) then become, $$\label{eqn:h'}
h'=-\frac{3}{2}(1-2s)(1+\omega)h,$$ $$\label{eqn:omega'2D}
\omega'=-3\left[1+\omega\left(\frac{3^{(-s)}}{\alpha}h-\frac{\omega}{2}\right)\right].$$ having two critical points, $$\label{eqn:P2}
P_1:\quad h=0,\quad\qquad \quad \omega=\sqrt{2},$$ $$\label{eqn:P3}
P_2:\quad h=\frac{3^s\alpha}{2}, \qquad \quad \omega=-1.$$
For $s<1/2$ the critical point $P_1$ with $h=0$ implies a static universe with the Hubble parameter $H=0.$ The critical point $P_2$ corresponds to a de Sitter epoch at which the Hubble parameter is a non-zero constant. Since the first phase $P_1$ is a static one, it does not allow any further evolution. Hence the case $s<1/2$ fails to predict a prior decelerated epoch and is not worth exploring any further. On the other hand, for $s>1/2,$ the fixed point $P_1$ represents a prior decelerated epoch with an infinitely large Hubble parameter and $P_2$ corresponds to a late de Sitter epoch. However, the equation of state corresponding to the prior decelerated epoch is greater than one, implying that the matter is of a stiff nature at this epoch. We will restrict to the case $s>1/2$ in our further analysis. Regarding the energy conditions, it is found that both SEC and DEC are satisfied at $P_1,$ and hence it corresponds to a physically feasible decelerating epoch. The fixed point $P_2$ satisfies DEC but violates SEC as it corresponds to an accelerating epoch.\
To determine the stability property of the critical points, we first linearize (\[eqn:h’\]) and (\[eqn:omega’2D\]) about the critical points and obtain the Jacobian matrix as, $$J(h, \omega)=\left[ \begin{array}{cc}
-\frac{3}{2}(1-2s)h & -\frac{3}{2}(1-2s)(1+\omega) \\
-3\left(\frac{3^{-s} h}{\alpha}-\omega\right) & -\frac{3^{1-s}\omega}{\alpha}
\end{array} \right]$$
Diagonalising the Jacobian matrix, we obtain the eigenvalues $$\label{eqn:eigenvalueP2}
\lambda_1^{\pm}=\frac{3^{1-s}}{\sqrt{2}\alpha}\left[-1\pm \sqrt{1+9^s (\sqrt{2}+2)(2s-1)\alpha^2}\right],$$ $$\label{eqn:eigenvalueP3}
\lambda_2^{+}=\frac{3^{1-s}}{\alpha}, \qquad \lambda_2^{-}=\frac{3^{1+s}}{4}(2s-1)\alpha,$$ for $P_1$ and $P_2$ respectively. Here we restrict $\alpha$ to the range $0<\alpha<1.$ The fixed point $P_1$ is a saddle point, since its eigenvalues satisfy $\lambda_1^+>0,$ $\lambda_1^-<0,$ while $P_2$ is found to be unstable, since its eigenvalues are both positive, $\lambda_2^+>0,$ $\lambda_2^->0.$ The saddle nature of the early decelerated phase implies that the system will continue the evolution further. For the sake of completeness, it may be noted that, for $s<1/2,$ the fixed point $P_1,$ which corresponds to a static universe, is found to be stable since $\lambda_1^+<0$ and $\lambda_1^-<0,$ and $P_2$ is a saddle point as its eigenvalues satisfy $\lambda_2^+>0$ and $\lambda_2^-<0.$ All these facts are summarised in table \[Table:FIS3/21/4\].
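The signs quoted for these eigenvalues can be verified by evaluating (\[eqn:eigenvalueP2\]) and (\[eqn:eigenvalueP3\]) numerically; the sketch below (Python) uses the illustrative values $s=0.75$ and $\alpha=0.5$, and any $s>1/2$ with $0<\alpha<1$ gives the same signs:

```python
import math

def eigen_P1(s, alpha):
    """lambda_1^+/- at the fixed point P1, eq. (eqn:eigenvalueP2)."""
    pre = 3.0**(1.0 - s) / (math.sqrt(2.0) * alpha)
    root = math.sqrt(1.0 + 9.0**s * (math.sqrt(2.0) + 2.0) * (2.0 * s - 1.0) * alpha**2)
    return pre * (-1.0 + root), pre * (-1.0 - root)

def eigen_P2(s, alpha):
    """lambda_2^+/- at the fixed point P2, eq. (eqn:eigenvalueP3)."""
    return 3.0**(1.0 - s) / alpha, 3.0**(1.0 + s) / 4.0 * (2.0 * s - 1.0) * alpha

s, alpha = 0.75, 0.5                       # illustrative values with s > 1/2
l1p, l1m = eigen_P1(s, alpha)              # ~ +1.5, -5.2  -> opposite signs: saddle
l2p, l2m = eigen_P2(s, alpha)              # ~ +2.6, +0.4  -> both positive: unstable
print(l1p, l1m, l2p, l2m)
```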
The fixed point $P_2$ corresponds to a solution given by, $$a=a_0 e^{\bar{H}_0 t},$$ where $\bar{H}_0=\left(\frac{3^s \alpha}{2}\right)^{\frac{1}{1-2s}}$ with $s>1/2.$ This is a de Sitter type solution, ensuring accelerated expansion. It is not possible to get any corresponding exact solution for $P_1$ as the Hubble parameter in this case is infinite. Even though an exact solution for $P_1$ is impossible, an approximate solution can be obtained. For this, first express (\[eqn:h'\]) in terms of $H$ and then, through a simple integration, we arrive at, $$\label{eqn:HP1P2}
H \sim e^{-(1+q) \tilde{\tau}},$$ from which it is evident that as $\tilde{\tau}\rightarrow-\infty,$ $H\rightarrow\infty.$ Integrating the above equation by changing the variable from $\tilde{\tau}$ to $t$ using (\[eqn:dimensionlessdensityphasespace\]), we get the scale factor as $a\sim t^{\frac{1}{(1+q)}}$ and the corresponding pressure is $\tilde{\Pi}\sim \frac{3\,\omega}{(1+q)^2 t^2}$.
\
Thermodynamic analysis {#sec:3}
======================
This section is devoted to the analysis of the evolution of entropy. Viscosity can cause entropy generation and the local entropy thus generated can be obtained as [@Weinberg], $$\label{eqn:local entropy}
T\nabla_{\nu}S^{\nu}=\xi(\nabla_{\nu}u^{\nu})^{2}=9H^{2}\xi,$$ where $ T $ is the temperature and $ \nabla_{\nu}S^{\nu} $ is the rate of generation of entropy per unit volume. According to the second law of thermodynamics, the entropy must always increase, i.e. $T\nabla_{\nu}S^{\nu}\geq0,$ which implies that $ \xi\geq 0.$ For $s=1/2,$ from (\[eqn:tau\]) and (\[eqn:Hubbleparameter\]), it follows that $\xi=\sqrt{3} \alpha H.$ Since both $\alpha$ and $H$ are always positive definite in the present case, the local second law will be satisfied. Then it is easy to conclude that the local second law will be satisfied at the critical points $\omega^+$ and $\omega^-$ since they are the critical points corresponding to the case $s=1/2.$ As there are no analytical solutions for $s\neq1/2,$ it is impossible to make a similar analysis.
Now we turn to the more general aspects of the entropy evolution, namely the status of the generalised second law (GSL) and the behaviour of the second order derivative of entropy. An ordinary macroscopic system evolving towards a state of stable thermodynamic equilibrium must satisfy the conditions, $$\label{eqn:ddots}
S^{\prime} \geq 0, \hspace{0.24in} \textrm{and} \hspace{0.24in} S^{\prime\prime}<0, \, \, \textrm{at least in the long run}$$ where the $'prime'$ denotes a derivative with respect to a suitable cosmological variable like cosmic time or scale factor. The first condition refers to the GSL and the second one is the convexity condition, implying an upper bound to the growth of entropy. In reference [@Pavon1], the authors have shown that our universe seems to behave like an ordinary macroscopic system which obeys the above conditions. The consideration of the entropy evolution in the standard $\Lambda$CDM model also supports this [@Krishna].
According to GSL, the total entropy must always increase, i.e., $$S^{\prime}=S^{\prime}_m + S^{\prime}_h \geq 0,$$ where $S_m$ and $S_h$ are the matter entropy and horizon entropy respectively and the $'prime'$ denotes the derivative with respect to scale factor. The entropy of the Hubble horizon is defined as [@Davis], $$\label{eqn:S_h}
S_{h}=\frac{A}{4l_{p}^2}k_B=\frac{\pi c^2}{{l_{p}^2}H^2}k_B,$$ where $A=4\pi c^2/H^2$ is the area of the Hubble horizon of a spatially flat FLRW universe, $k_B$ is the Boltzmann constant, $l_p$ is the Planck length and $c$ is the velocity of light. We have the derivative of the horizon entropy with respect to the scale factor as, $$\label{eqn:S'_h}
{S'_{h}}=\frac{-2{\pi}c^2{H'}}{{l_{p}^2}H^3}k_B.$$ The variation in the entropy of matter, $S'_m,$ can be obtained from the Gibbs relation, $$T_{m}S'_{m}=E'+P_{eff}V',$$ where $T_m$ is the temperature of the viscous matter, $E=\rho_m V$ is its total energy and $V=\frac{4\pi c^3}{3H^3}$ is the volume enclosed by the Hubble horizon. Using the Friedmann equation and assuming thermal equilibrium so that $T_m=T_h,$ where $T_{h}=\frac{\hbar H}{2\pi k_B}$ is the Hawking temperature of the horizon, we get $$\label{eqn:S'_m}
S'_m=-\frac{c^5H'}{GH^2}\frac{q}{T_h}.$$ Adding (\[eqn:S’\_h\]) and (\[eqn:S’\_m\]), we get the rate of change of total entropy as, $$\label{eqn:S'_general}
S'=\frac{-2{\pi}c^2{H'}}{{l_{p}^2}H^3}\left( q + 1 \right).$$
![The evolution of $S' $ in units of $k_B$ with scale factor $a$ in the bulk viscous matter dominated universe using the full causal IS theory for best estimated values of the model parameters, when $s=1/2$.[]{data-label="plot:S’_general"}](EntropyS1FIS.pdf)
For $s=1/2,$ it is evident from the general solution (\[eqn:Hubbleparameter\]) that $H^{\prime}<0$ and also $(1+q)>0$, and hence the GSL is valid. The evolution of $S^{\prime}$ is shown in figure \[plot:S’\_general\]: it first increases during the decelerated epoch and then decreases during the accelerated epoch. The figure shows that the slope of the curve changes drastically around the transition redshift. The maximum of $S^{\prime}$ corresponds to the transition from the decelerated to the accelerated epoch. It is then quite natural to expect that the GSL will be satisfied at the corresponding critical points, $\omega^+$ and $\omega^-.$ The Hubble parameter corresponding to these fixed points is $H_{\omega^\pm}=\frac{1}{(1+q^\pm)a^{1+{q^\pm}}},$ implying that $H_{\omega^\pm}^{\prime}<0$ and hence the GSL is valid at both points as expected.\
For finding the status of the GSL for $s>1/2$ (we restrict to this case since, as noted earlier, $H=0$ for the critical point corresponding to $s<1/2$), we change the variable from the scale factor to the newly defined time, $\tilde \tau.$ Following (\[eqn:HP1P2\]) satisfied by $P_1$ we can rewrite the entropy derivative in (\[eqn:S'\_general\]) as, $$\label{eqn:S'_P1P2}
\frac{dS}{d\tilde{\tau}}=\frac{2{\pi}c^2}{{l_{p}^2}}\left(1+ q\right)^2e^{2(1+q)\tilde{\tau}},$$ which is always greater than zero. Hence the GSL is satisfied at $P_1.$ The validity of the GSL at $P_{2}$ is straightforward since it represents a de Sitter epoch at which the Hubble parameter is a constant, implying $S^{\prime}=0.$
Now we will check the status of the convexity condition of entropy, $S^{\prime\prime}<0,$ in this model. This condition should be satisfied at least in the final stage of the evolution for the maximisation of entropy [@Pavon1]. Taking the derivative of $S^{\prime}$ in (\[eqn:S'\_general\]) with respect to the scale factor, we get $$\label{eqn:S''_general}
S''=\frac{-2\pi c^2}{{l_p}^2}\frac{H'}{H^3}\left[q'+\left(q+1\right)\left( \frac{H''}{H'}-\frac{3H'}{H}\right)\right].$$
![The evolution of $S'' $ in units of $k_B$ with scale factor $a$ in the bulk viscous matter dominated universe using the full causal IS theory for estimated values of the model parameters, when $s=1/2.$[]{data-label="plot:S”_general"}](EntropyS2FIS.pdf)
For $s=1/2$ the evolution of $S^{\prime\prime}$ can be obtained by substituting the Hubble parameter from (\[eqn:Hubbleparameter\]). The net result is plotted in figure \[plot:S”\_general\]. It shows that $S''>0$ during the early phase of evolution, while $S^{\prime\prime}<0$ in the later epoch, asymptotically approaching zero from below. $S''$ changes its sign around the transition period. Hence the convexity condition is fulfilled in the long run of the expansion of the universe. This indicates the maximisation of the entropy of the universe and hence the entropy is bounded. The boundedness of the entropy rules out the presence of any instabilities at the end stage [@Callen1]. The behaviour of $S''$ at the critical points $\omega^+$ and $\omega^-$ is evident from the above analysis. The fixed point $\omega^+$ represents the earlier epoch and $\omega^-$ represents the later epoch for $s=1/2.$ However, as a matter of simple academic interest, the evolution equation of $S''$ at the critical points can be expressed as, $$\label{eqn:S''_P+}
S''_{\omega^\pm}=\frac{2\pi c^2}{l_P^2 } (1+2q^\pm )
(1+q^\pm)^4 a^{2q^\pm}.$$
![The evolution of $S'' $ in units of $k_B$ with scale factor $a$ at the critical points $\omega^\pm$ for the best estimated value of the model parameters, when $s=1/2$[]{data-label="plot:S”_omega+-"}](EntropyCPFIS.pdf)
From figure \[plot:S”\_omega+-\], it is clear that the convexity condition is violated at the critical point $\omega^+$ but satisfied at $\omega^-$ as expected. This indicates that the first critical point $\omega^+$ is an unstable thermodynamic equilibrium and the second point $\omega^-$ is thermodynamically stable.
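Since the prefactor in (\[eqn:S''_P+\]) is positive, the sign of $S''_{\omega^\pm}$ is fixed entirely by the factor $(1+2q^\pm)$; a short check (Python, with the values of $q^\pm$ quoted earlier) makes this explicit:

```python
# Sign of S'' at the fixed points is the sign of (1 + 2q), eq. (eqn:S''_P+).
for label, q in (("omega^+", 4.28), ("omega^-", -0.69)):
    sign = 1.0 + 2.0 * q
    verdict = "S'' > 0: convexity violated" if sign > 0 else "S'' < 0: convexity satisfied"
    print(f"{label}: 1 + 2q = {sign:+.2f} -> {verdict}")
```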
For $s>1/2$ the second derivative of entropy at the critical point $P_1$ can be obtained using (\[eqn:HP1P2\]) and (\[eqn:S''\_general\]) as, $$\label{eqn:S''_P1P2}
\frac{d^2 S}{d{\tilde{\tau}}^2}=\frac{4\pi c^2}{l_p^2}(1+q)^3 e^{2(1+q)\tilde{\tau}}.$$ For $P_1,$ for which the deceleration parameter $q>0,$ the term $\frac{d^2 S}{d{\tilde{\tau}}^2}>0,$ and the convexity condition is hence violated. At the equilibrium point $P_2,$ representing a de Sitter epoch, we observe that $S''$ vanishes, hence the convexity condition is not strictly satisfied. These results are summarised in table \[Table:FISThermalproperties\].
\
A causal dissipative model for $s=1/2,$ having a barotropic equation of state for matter, $p=\omega \rho,$ with $\omega$ varying in the range $0<\omega<1,$ has been analysed in reference [@Cruz1]. The authors assumed an ansatz for the Hubble parameter of the form, $H(t>t_s)=|A|/(t-t_s),$ where $|A|$ is a positive coefficient depending on $\omega$ and the viscous coefficient $\xi.$ The IS transport equation will then give rise to a quadratic equation for $|A|,$ with two possible solutions, say $|A|_+$ and $|A|_-.$ By considering only the solution corresponding to $|A|_+,$ the authors have argued that the GSL is satisfied in both the prior decelerated and later accelerated phases, while the convexity condition is satisfied in the early phase but violated in the later accelerated epoch. In contrast to this, the analytical solutions that we have obtained for the IS equation with $s=1/2$ and zero barotropic pressure, i.e. $\omega=0,$ predict an early decelerated epoch which satisfies the GSL but violates the convexity condition, and a late accelerated phase which satisfies both the GSL and the convexity condition.
Conclusions {#sec:4}
===========
In this work we have analysed the dynamical system behaviour and thermodynamic characteristics of the late universe with a dissipative fluid using the full Israel-Stewart theory. Assuming the bulk viscosity as $\xi = \alpha \rho^s,$ we consider two separate cases, one with $s=1/2$ and the other with $s\neq 1/2.$
For $s=1/2$ we could obtain an analytical solution for the Hubble parameter, by which the model implies a prior decelerated epoch and a late accelerated epoch. The corresponding phase space is found to reduce to a one dimensional one with two fixed points, $\omega^+$ and $\omega^-,$ corresponding to the early decelerated and late accelerating phases respectively. It emerges from our analysis that $\omega^+$ is a past attractor, hence unstable, while the late accelerating epoch corresponding to $\omega^-$ is a stable one. We have seen that the effective equation of state indicates a stiff nature for the viscous matter in the neighbourhood of the fixed point $\omega^+.$ At $\omega^-,$ corresponding to the late accelerating phase, the equation of state becomes **$\omega \sim -0.79,$** implying a quintessence nature but not a pure de Sitter one. Regarding energy conditions, it is easy to see that both fixed points satisfy the dominant energy condition, but the strong energy condition is satisfied only by $\omega^+$ as a consequence of its decelerating nature. When $s=1/2,$ a general behaviour of the bulk viscous model has been analyzed using a general relaxation time expression (\[eqn:relaxationtimegeneral\]), by varying $\epsilon$ and the barotropic index $\gamma.$ For the best estimated parameter values, the model exhibits quintessence evolution; however, the value at which the equation of state stabilizes in the late phase is close to $-1.$
The next choice is $s\neq1/2.$ When $s<1/2$ there exist two critical points, of which the first corresponds to a static universe, while the second gives a de Sitter epoch. Since the first, static solution prohibits any further evolution, this case fails to explain the conventional evolution of the universe. As a result the case $s<1/2$ is not worth studying and can be ruled out. For the case $s>1/2$ the phase space becomes two dimensional with coordinates $h=H^{1-2s}$ and $\omega$ and has two critical points, out of which the first one, $P_1,$ represents a decelerated epoch and the second one, $P_2,$ indicates the de Sitter epoch. Our analysis of stability shows that $P_1$ is a saddle point and $P_2$ is a repeller, hence unstable. Hence a stable evolution towards an end de Sitter epoch is unlikely for $s>1/2.$ In the energy condition analysis, for the case $s> 1/2,$ we found that both SEC and DEC are satisfied at $P_1,$ which corresponds to a prior decelerated phase of expansion. In the case of $P_2,$ DEC is satisfied while SEC is violated as it represents a late accelerated epoch.
In the analysis of the thermodynamic characteristics, we have shown that, for $s=1/2,$ the model satisfies the GSL, $S^{\prime}\geq 0,$ throughout the evolution and obeys the convexity condition $S^{\prime\prime}<0$ in the long run of the expansion. Then, as a matter of fact, we verified, in the case of the corresponding fixed points, that the GSL is valid at both the critical points $\omega^+$ and $\omega^-$ but the convexity condition is satisfied only by the later critical point $\omega^-.$ This indicates that the expansion is tending towards a state of maximum entropy, as in the evolution of an ordinary macroscopic system.
For $s\neq1/2$ we restrict the thermodynamic analysis to the case $s>1/2.$ The GSL is valid at both the critical points in this case. Among these, we already noted that $P_1$ represents a prior decelerated epoch and $P_2$ corresponds to the future de Sitter epoch. Regarding the convexity condition, our result is that it is violated at both the fixed points, prohibiting an upper bound for the growth of entropy. Hence the case $s>1/2$ does not imply a stable thermodynamic evolution. This is in line with the dynamical system behaviour of this case as well, by which we found that both critical points are unstable.
To summarise, for the choice $s=1/2,$ the present dissipative model described using the Israel-Stewart theory predicts a stable evolution of the late universe with a prior decelerated epoch followed by an accelerated epoch. The GSL is valid throughout the evolution and the entropy is bounded in the end phase, with the convexity condition satisfied. We can also infer that the thermal properties of the bulk viscous universe, especially the entropy, exhibit a drastic change during the phase transition period. For the choice $s\neq1/2,$ the case with $s<1/2$ can be ruled out since it predicts an evolution not in conformity with the conventional evolution of the universe. The case $s>1/2$ predicts a prior decelerated and a late accelerated phase, but fails to predict a stable evolution. Finally, we would like to comment that, apart from explaining the late acceleration, the viscous models are found successful in certain other areas too. For example, in reference [@anand1] the authors have shown that a very small viscosity of the order of $ 10^{-6}$ Pa sec (1$\sigma$ level) in the dark matter sector can cure the $\sigma_8-\Omega_m$ tension ($\sigma_8$ is the r.m.s. fluctuation of perturbations at the $8h^{-1}$ Mpc scale) and the $H_0 -\Omega_m$ tension that occur when one analyses the Planck CMB parameters using the standard $\Lambda$CDM model.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are thankful to IUCAA, Pune for the hospitality during the visits. We are also thankful to the referees for the comments, which helped to improve the manuscript. The authors are grateful to Prof. M. Sabir for the careful reading of the manuscript. Author JMND acknowledges UGC - BSR for the fellowship, author KPB acknowledges KSCSTE, Government of Kerala for financial assistance and author AS is thankful to DST for fellowship through the INSPIRE fellowship.\
[**Appendix**]{}\
**The possibilities of attaining a pure de Sitter epoch for $s=1/2$**
So far our analysis has shown that the model allows an asymptotic value of the equation of state around $\omega = -0.79.$ Now we check the possibility of improving this value so that the model can predict a pure de Sitter epoch with $\omega=-1$ in the long run. Previously we took the relaxation time as $\tau = \alpha \rho^{s-1}$ with $s=1/2.$ Since this does not give an asymptotic de Sitter epoch, let us relax this condition by assuming a more general relation for the relaxation time as [@Maartens2], $$\label{eqn:relaxationtimegeneral}
\tau = \frac{\alpha}{\epsilon \gamma (2 - \gamma)} \rho^{s-1}.$$ We first fix the barotropic index as $\gamma=1$ and allow the parameter $\epsilon$ to vary in the range $0 < \epsilon \leq 1.$\
The solution for the Hubble parameter is obtained in [@Cruz2], and has the same form as in the previous case, $$H=H_0(C_1 a^{-m_1}+C_2a^{-m_2}),$$ but with different coefficients, $$C_{1;2}=\frac{\pm \epsilon+\sqrt{6\epsilon\alpha^2+\epsilon^2}\mp\sqrt{3}\alpha\tilde{\Pi_0}}{2\sqrt{6\epsilon\alpha^2+\epsilon^2}},$$ $$m_{1;2}=\frac{\sqrt{3}}{2\alpha}\left(\sqrt{3}\alpha+\epsilon\mp\sqrt{6\epsilon\alpha^2+\epsilon^2}\right),$$ which satisfy the conditions $C_1+C_2=1$ and $m_2>m_1.$ Using the Supernovae type Ia data we have extracted the parameter values in the present case as $\alpha=169.50,$ $\tilde{\Pi_0}=-0.70,$ $\epsilon=0.39$ and $H_0=69.99$ with $\chi^2_{d.o.f.}=0.97.$ We have then concentrated on the evolution of the equation of state, which can be analytically obtained as, $$\label{eqn:eqnofstateFISwithe}
\omega=-1+\frac{2(C_1 m_1 a^{-m_1}+C_2 m_2 a^{-m_2})}{3(C_1 a^{-m_1}+C_2a^{-m_2})}.$$ To get the late phase behaviour, consider the asymptotic limit of the equation of state parameter (\[eqn:eqnofstateFISwithe\]) when the scale factor $a\rightarrow\infty.$ In this limit, the equation of state parameter (\[eqn:eqnofstateFISwithe\]) takes the form, $$\omega\sim -1+\frac{2}{3}m_1.$$ For the new best estimated parameter values the constant $m_1=0.18,$ and hence the equation of state parameter will stabilize around $\omega\sim -0.88.$ So the value has improved slightly but still does not represent a pure de Sitter case.
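This asymptotic value can be reproduced numerically from the quoted best-fit parameters; a minimal sketch (Python) of the $\epsilon$-dependent exponents gives:

```python
import math

def exponents(alpha, eps):
    """m_1, m_2 for tau = alpha/(eps*gamma*(2-gamma))*rho^(s-1), with gamma = 1."""
    root = math.sqrt(6.0 * eps * alpha**2 + eps**2)
    pre = math.sqrt(3.0) / (2.0 * alpha)
    return pre * (math.sqrt(3.0) * alpha + eps - root), pre * (math.sqrt(3.0) * alpha + eps + root)

alpha, eps = 169.50, 0.39          # best-fit values quoted in the text
m1, m2 = exponents(alpha, eps)
omega_late = -1.0 + 2.0 / 3.0 * m1
print(f"m1 ~ {m1:.2f}, late-time omega ~ {omega_late:.2f}")   # ~0.18 and ~ -0.88
```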
As a further move we extend the analysis by varying the parameter $\gamma$ also. By considering this, a more general solution for the Hubble parameter can be obtained as discussed in [@Cruz3] as, $$\label{eqn:Hubbleparameterwithgamma }
H=C_3 (1+z)^{\alpha'} cosh^\gamma\left[\beta(ln(1+z)+C_4)\right],$$ where $$\nonumber
C_3=H_0\left[1-\frac{(q_0+1-\alpha')^2}{\gamma^2 \beta^2}\right]^{\gamma/2},$$ $$\nonumber
C_4=\frac{1}{\beta}arctanh\left[\frac{(q_0+1)-\alpha'}{\gamma\beta}\right],$$ $$\nonumber
\alpha'=\frac{\sqrt{3}\gamma}{2 \xi_0}\left[\sqrt{3}\xi_0+\epsilon\gamma(2-\gamma)\right],$$ $$\nonumber
\beta=\frac{\sqrt{3}}{2\xi_0}\sqrt{6\xi_0^2\epsilon(2-\gamma)+\epsilon^2\gamma^2(2-\gamma)^2}.$$ where $q_0$ is the present value of deceleration parameter and $\xi_0$ is the viscosity constant parameter ($\xi_0=\alpha$ in our analysis). Following reference [@Cruz3], the model parameters take the values as $\xi_0=245.2,$ $\epsilon=0.601$ and $\gamma=1.26$ with $\chi^2_{d.o.f.}=1.07$ for $H_0=70km/Mpcs$ and $q_0=-0.60.$ Using the equation of parameter evaluating equation [@EPJC1], we have obtained the the asymptotic limit of equation of state parameter for the estimated parameter values, when $a\rightarrow \infty,$ the equation of state $\omega\sim -0.93,$ and is very close to the de Sitter epoch value. Therefore, we have conclude that, even though the model will not attain the pure de Sitter epoch ($\omega=-1$) as the end phase, it attains a quintessence epoch which very close to the de Sitter phase.\
A. G. Riess et al. (Supernova Search Team Collaboration) 1998 [ *Astron. J.*]{} [ **116**]{} 1009 S. Perlmutter et al. (Supernova Cosmology Project Collaboration) 1999 [ *Astrophys. J.*]{} [ **517**]{} 565 C. L. Bennett et. al. (WMAP Collaboration) 2003 [ *Astrophys. J. Suppl.*]{} [ **148**]{} 1 A. G. Riess et. al. (Supernova Search Team Collaboration) 2004 [ *Astrophys. J.*]{} [ **607**]{} 665 M. Tegmark et. al. (SDSS Collaboration) 2004 [ *Phys. Rev. D*]{} [ **69**]{} 103501 U. Seljak et. al. (SDSS Collaboration) 2005 [ *Phys. Rev. D*]{} [ **71**]{} 103515 E. Komatsu et. al. (WMAP Collaboration) 2011 [ *Astrophys. J. Suppl.*]{} [ **192**]{} 18 E. J. Copeland, M. Sami and S. Tsujikawa, 2006 [ *Int. J. Mod. Phys. D*]{} [ **15**]{} 1753 L. Wang, R. R. Caldwell, J. P. Ostriker and P. J. Steinhardt, 2000 [ *Astrophys. J.*]{} [ **530**]{} 17 R. R. Caldwell, 2002 [ *Phys. Lett. B*]{} [ **545**]{} 23 K. Bamba, K. Capozziello, S. Nojiri and S. D. Odintsov, 2012 [ *Astrophys. Space Sci.*]{} [ **342**]{} 155 B. Wang, E. Abdalla, F. Atrio-Barandela and D. Pavon, 2016 [ *Rep. Prog. Phys.*]{} [ **79**]{} 096901 G. R. Dvali, G. Gabadadze and M. Porrati, 2000 [ *Phys. Lett. B*]{} [ **484**]{} 112 K. Freese and M. Lewis, 2002 [ *Phys. Lett. B*]{} [ **540**]{} 1 I. Brevik and O. Gorbunova, 2005 [ *Gen. Rel. Grav.*]{} [ **37**]{} 2039 I. Brevik, O. Gorbunova and Y. A. Shaido, 2005 [ *Int. J. Mod. Phys. D*]{} [ **14**]{} 1899 A. Avelino and U. Nucamendi, 2009 [ *JCAP*]{} [ **04**]{} 006 Athira Sasidharan and Titus K. Mathew, 2015 [ *Eur. Phys. J. C.*]{} [ **75**]{} 348 C. W. Misner, K. S. Thorne and J. A. Wheeler, 1973 [*Gravitation*]{} ( U.S.A: W. H. Freeman and Company) S. Weinberg, 1972 [*[Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity]{}*]{} (New York: Wiley) S. Weinberg, 1989 [ *Rev. Mod. Phys.*]{} [ **61**]{} 1 S. Anand, P. Chaubal, A. Mazundar and S. Mohanty, 2017 [ *J. Cosmol. Astropart. Phys.,*]{} [ **11**]{} 005 J. R. Wilson, G. J. Mathews and G. M. Fuller, 2007 [ *Phys. Rev. D*]{} [ **75**]{} 043521 H. Okumura and F. Yonezawa, 2003 [ *Physica A*]{} [ **321**]{} 207 G. J. Mathews, N. Q. Lan and C. Kolda, 2008 [ *Phys. Rev. D*]{} [ **78**]{} 043525 A. Avelino and U. Nucamendi, 2010 [ *JCAP*]{} [ **08**]{} 009
S. Weinberg, 1971 [ *Astophys. J*]{} [ **168**]{} 175 M. A. Schweizer, 1982 [ *Astrophys. J*]{} [ **258**]{} 798 N. Udey and W. Israel, 1982 [ *Mon. Not. R. Astron. Soc.*]{} [ **199**]{} 1137 W. Zimdahl, 1996 [ *Mon. Not. R. Astron. Soc.*]{} [ **280**]{} 1239 G.L. Murphy, 1973 [ *Phys. Rev. D*]{} [ **8**]{} 4231 N. Turok, 1988 [ *Phys. Rev. Lett.*]{} [ **60**]{} 549 W. Zimdahl and D. Pavon, 1993 [ *Phys. Lett. A*]{} [ **175**]{} 57 C. Eckart, 1940 [ *Phys. Rev.*]{} [ **58**]{} 919 A. A. Coley and R. J. van den Hoogen, 1995 [ *Class. Quantum Grav.*]{} [ **12**]{} 1977 W. Israel, 1976 [ *Ann. Phys. (N. Y.)*]{} [ **100**]{} 310 W. A. Hiscock and L. Lindblom, 1985 [ *Phys. Rev. D*]{} [ **31**]{} 725 J. C. Fabris, S. V. B. Goncalves and R. de Sa Ribeiro, 2006 [ *Gen. Relativ. Gravit.*]{} [ **38**]{} 495 J. D. Barrow, 1986 [ *Phys. Lett. B*]{} [ **180**]{} 335 R. Colistete, J. C. Fabris, J. Tossa and W. Zimdahl, 2007 [ *Phys. Rev. D*]{} [ **76**]{} 103516 Athira Sasidharan and Titus K. Mathew, 2016 [ *JHEP*]{} [ **06**]{} 138 W. Israel and J. M. Stewart, 1979 [ *Annals Phys.*]{} [ **118**]{} 341 W. Israel and J. M. Stewart, 1979 [ *Proc. Roy. Soc. Lond. A*]{} [ **365**]{} 43 O. F. Piattella, J. C. Fabris and W. Zimdahl, 2011 [ *JCAP,*]{} [ **05**]{} 029 T. Padmanabhan and S. M. Chitre, 1987 [ *Phys. Lett. A*]{} [ **120**]{} 443 A. Di Prisco, L. Herrera, and Ibáñez, J., 2000 [ *Phys. Rev. D*]{} [ **63**]{} 023501 Jerin Mohan N D, Athira Sasidharan and Titus K. Mathew, 2017 [ *Euro. Phys. J. C.*]{} [ **77**]{} 849 I. Brevik, O. Green, J. de Haro, S. D. Odintsov and E. N. Saridakis, 2017 [ *Int. Nat. J. Mod. Phys. D*]{} [ **26**]{} 1730024 Diego Pavon and Ninfa Radicella, 2013 [ *Gen. Relativ. Gravit*]{} [ **45**]{} 63 M. Cruz, N. Cruz and S. Lepe, 2017 [ *Phys. Rev. D*]{} [ **96**]{} 124020 R. Maartens, 1995 [ *Class. Quantum Grav.*]{} [ **12**]{} 1455 L. P. Chimento and A. S. Jacubi, 1997 [ *Class. Quantum Grav.*]{} [ **14**]{} 1811 Wainwright J and Ellis G F R, 1997 [*[Dynamical Systems in Cosmology]{}*]{} (Cambridge: Cambridge University Press) U. Alam, V. Sahini, A. A. Starobinsky, 2004 [ *JCAP*]{},[ **0406**]{} 008 A. Awad, W. E. Hanafy, G. Nashed, and E. N. Saridakis, 2018 [ *JCAP*]{} [ **2018**]{} 052 M. Visser, 1997 [ *Science*]{} [ **276**]{} 88 D. Pavon, 1990 [ *Classical and Quantum Gravity*]{} [ **7**]{} 487 J. D. Barrow, 1987 [ *Phys. Lett. B*]{} [ **183**]{} 285 Krishna P. B. and Titus K. Mathew, 2017 [ *Phys. Rev. D*]{} [ **96**]{} 063513 P. C. W. Davis, 1987 [ *Class. Quantum Gravity*]{} [ **4**]{}, L225 H B Callen, 1985 [*Thermodynamics and an Introduction to Thermostatistics*]{} (New York: John Wiley) R. Maartens, [*arXiv:astro-ph/9609119*]{} N. Cruz, E. Gonzalez, G. Palma, 2018 [*arXive:1812.05009v3*]{} N. Cruz, E. Gonzalez, G. Palma, 2019 [*arXive:1906.04570*]{}\
\
---
abstract: 'Traditionally, Internet Access Providers (APs) only charge end-users for Internet access services; however, to recoup infrastructure costs and increase revenues, some APs have recently adopted two-sided pricing schemes under which both end-users and content providers are charged. Meanwhile, with the rapid growth of traffic, network congestion could seriously degrade user experiences and influence providers’ utility. To optimize profit and social welfare, APs and regulators need to design appropriate pricing strategies and regulatory policies that take the effects of network congestion into consideration. In this paper, we model two-sided networks under which users’ traffic demands are influenced by exogenous pricing and endogenous congestion parameters and derive the system congestion under an equilibrium. We characterize the structures and sensitivities of profit- and welfare-optimal two-sided pricing schemes and reveal that 1) the elasticity of system throughput plays a crucial role in determining the structures of optimal pricing, 2) the changes of optimal pricing under varying AP’s capacity and users’ congestion sensitivity are largely driven by the type of data traffic, e.g., text or video, and 3) APs and regulators will be incentivized to shift from one-sided to two-sided pricing when APs’ capacities and user demand for video traffic grow. Our results can help APs design optimal two-sided pricing and guide regulators to legislate desirable policies.'
author:
- Xin Wang
- 'Richard T. B. Ma'
- Yinlong Xu
bibliography:
- 'ref.bib'
title: 'On Optimal Two-Sided Pricing of Congested Networks'
---
Introduction
============
Internet Access Providers (APs) build massive network platforms by which end-users and Content Providers (CPs) can connect and transmit data to each other. Traditionally, APs use one-sided pricing schemes and obtain revenues mainly from end-users. With the growing popularity of data-intensive services, e.g., online video streaming and cloud-based applications, Internet traffic has been growing more than $50\%$ per annum [@craig10internet], causing serious network congestion, especially during peak hours. To sustain such rapid traffic growth and enhance user experiences, APs need to upgrade network infrastructures and expand capacities. However, they feel that the revenues from end-users are insufficient to recoup the corresponding costs. Consequently, some APs have recently shifted towards two-sided pricing schemes, i.e., they have started to impose termination fees on CPs’ data traffic in addition to charging the end-users. For example, Comcast[^1] and Netflix[^2] reached a paid peering agreement in 2014 [@Wyatt-deal], under which Comcast provides a direct connection to Netflix and improves its content delivery quality for a fee. Another example is [*sponsored data*]{} proposed by AT&T[^3], under which CPs are allowed to subsidize end-users for the fees induced by their data traffic. Since subsidizations indirectly transfer value to the APs, sponsored data is really a two-sided pricing scheme in disguise. Although charging CPs directly or indirectly may increase APs’ revenues and thus motivate APs to deploy network capacities and alleviate congestion, it has raised concerns over net neutrality [@wu2003network], whose advocates argue that zero-pricing [@lee2009subsidizing] on CPs is needed to protect content innovations of the Internet. Although the U.S. FCC has recently passed the Open Internet Order[^4] to protect net neutrality, existing two-sided schemes such as paid peering and sponsored data are exempt from the ruling, because these pricing schemes are common practices in the Internet transit context and the FCC is not yet clear about the policy implications on the utilities of various market participants and social welfare.
Although prior work [@Musacchio2009; @Altman2011; @njoroge2013investment] has studied the economics of two-sided pricing in network markets, the resulting network congestion and its impacts on the utilities of different parties were often overlooked. However, the explosive traffic growth has caused severe congestion in many regional and global networks, which degrades end-users’ experiences and reduces their data demand. This will strongly affect the profits of APs and the utilities of end-users and CPs. To optimize individual and social utilities, APs and regulators need to revisit the design of pricing strategies and regulatory policies accordingly. So far, little is known about 1) the optimal two-sided pricing structure in a congested network and its changes under varying system parameters, e.g., the users’ congestion sensitivity and the APs’ capacities, and 2) potential regulations on two-sided pricing for protecting social welfare from monopolistic providers. To address these questions, one challenge is to accurately capture the endogenous congestion in the network: the level of congestion is influenced by network throughput, while the users’ traffic demand and the resulting throughput are in turn influenced by the congestion. It is crucial to capture this endogenous congestion so as to faithfully characterize the impacts of two-sided pricing in congested networks.
In this paper, we propose a novel model of a two-sided congested network built by an AP, which transmits data traffic between end-users and CPs. We model network congestion as a function of the AP’s capacity and the system throughput, which is in turn affected by the congestion level. We capture users’ population and traffic demand under pricing and congestion parameters and derive an endogenous system congestion under an equilibrium. Based on the equilibrium model, we analyze the structures of profit-optimal and welfare-optimal two-sided pricing and their sensitivities under varying system environments, e.g., congestion sensitivity of users and capacity of the AP. We also compare the two types of optimal pricing and derive regulatory implications. Our main contributions and findings include the following.
- We derive the congestion equilibrium of two-sided networks, identify a property of elasticity (Theorem \[theorem:unique-congestion\] and \[theorem:elasticity\]), and study the equilibrium dynamics under varying pricing and system parameters (Proposition \[proposition:elasticity\] and \[proposition:pricing-effect\]).
- We characterize the structures of optimal two-sided pricing (Theorem \[theorem:KKT-Lerner\] and \[theorem:social-welfare\]) and show that the profit-optimal pricing equalizes the demand hazard rates on the user and CP sides; however, the welfare-optimal counterpart differentiates them based on the elasticity of throughput and per-unit traffic welfare of both sides.
- We analyze the sensitivities of optimal two-sided prices under varying capacity of APs (Corollary \[corollary:profit capacity\] and \[corollary:welfare capacity\]) and congestion sensitivity of users (Corollary \[corollary:profit sensitivity\] and \[corollary:welfare sensitivity\]). The results imply that when network traffic is mainly for online video, APs would increase two-sided prices under expanded capacity, while regulators may want to tighten the price regulation on the side of higher market power. However, when network traffic is mostly for text content, they should take the opposite actions.
- We compare two-sided pricing with the traditional one-sided counterpart. We find that with the growing capacities of APs and demand for video traffic, APs and regulators will have strong incentives to shift from one-sided to two-sided pricing because the benefits of increased profits and social welfare will continue to grow.
We believe that our model and analysis could help APs design two-sided pricing schemes in congested networks and guide regulatory authorities to legislate desirable policies.
Related Work
============
Several works have studied two-sided pricing in the Internet markets. Njoroge et al. [@njoroge2013investment] showed that two-sided pricing could help APs extract higher profits and maintain higher investment levels than one-sided pricing. Altman et al. [@Altman2011] analyzed the impacts of CP’s revenue models, either subscription or advertisement, on AP’s pricing strategies. Choi and Kim [@pil2010net] found that expanding capacity will decrease the CP-side price. In [@Musacchio2009], Musacchio et al. concluded that two-sided pricing is more favorable in terms of social welfare than one-sided pricing when the ratio between CPs’ advertising rates and user price sensitivity is extreme. None of the above works considers the impact of network congestion on user throughput, which strongly influences the AP’s pricing strategy. In this paper, we characterize the interactions among network congestion, throughput, and price, based on which we analyze both the profit-optimal and welfare-optimal pricing.
Whether APs should be allowed to charge CPs for their content traffic has been a focus of debate on net neutrality [@wu2003network]. To sidestep this debate and extract revenue from CPs, some APs, e.g., AT&T, have recently provided sponsored data plans, which allow CPs to partially or fully subsidize users for the fees induced by their data traffic in order to increase market share. Since the sponsorship offers a way for CPs to subsidize their users and indirectly transfer value to APs, it could be seen as a variant of the two-sided pricing model. Several works [@andrews2013economic; @ma2014subsidization; @zhang2015sponsored] have studied this variant and shown that it benefits both CPs and APs. Because APs charge users different prices for content traffic from different CPs, the sponsored data plan is also considered as a type of price discrimination in disguise and has raised concerns from the FCC, which says that it will be monitoring and is prepared to intervene if necessary [@sponsorfcc]. In this paper, rather than pursuing the price differentiation, we focus on the two-sided pricing under which users are charged entirely based on their traffic volumes.
From the perspective of modeling and analysis, our model extends Rochet and Tirole [@Rochet2003], in which the two-sided network platforms do not incur congestion. To capture the endogenous network congestion and its effect in Internet markets, we introduce a system congestion under a market equilibrium and use a gain function to model the degree to which network throughput declines under different congestion levels. Besides, we analyze the sensitivities of two-sided pricing under varying network environments, which can guide APs and regulators in adjusting pricing schemes and regulatory policies as the Internet evolves. Ma [@richard2014pay] and Chander and Leruth [@chander1989optimal] also consider service markets with congestion externalities. Ma [@richard2014pay] analyzed the pay-as-you-go pricing and competition among multiple ISPs. Chander and Leruth [@chander1989optimal] studied the quality differentiation strategy of a monopoly provider. Both of them focused on one-sided markets, while we consider the more general two-sided markets.
Macroscopic Network Model {#sec:model}
=========================
We consider a two-sided network platform built by an AP which transmits data traffic between users and CPs. Unlike physical commodities, the quality of network service is intricately influenced by a negative network effect: higher traffic will induce a more congested network with worse performance. To characterize the congestion, we start with a macroscopic model that captures the physical and economic dynamics among the AP, users, and CPs in this section.
Basic Terms and Definitions
---------------------------
As a preliminary, we first introduce two basic economic and statistical terms that will be used in our model.
\[def:elasticity\] For two variables $x$ and $y$, the elasticity of $y$ with respect to $x$, or $x$-elasticity of $y$, is defined by $\displaystyle\epsilon_x^y \triangleq \Big|\frac{x}{y}\frac{\partial y}{\partial x}\Big|$.
In economics, elasticity captures the responsiveness of a variable $y$ to a change in another variable $x$. Specifically, it can be equivalently expressed as $\epsilon_x^y = |({\partial y}/y)/({\partial x}/x)|$ and interpreted as the absolute value of the percentage change in $y$ (numerator) in response to the percentage change in $x$ (denominator). Intuitively, when the value $\epsilon_x^y$ is higher, $y$ responds to the change of $x$ more strongly and we say that $y$ is more elastic to $x$.
\[def:hazard-rate\] For a function $y(x)$, the hazard rate of $y$ with respect to $x$ is defined by $\displaystyle \tilde y^x \triangleq -\frac{1}{y}\frac{\partial y}{\partial x}$.
In statistics, the hazard rate is used to measure the rate of decrease in the function $y$ with respect to the variable $x$. In particular, it can be expressed as $\tilde y^x = -(\partial y/y)/\partial x$ and interpreted as the proportion of $y$ that is reduced due to a marginal change of $x$. Note that the function $y$ can have different values of its hazard rate at different starting points of the variable $x$.
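As a concrete numerical illustration of Definitions \[def:elasticity\] and \[def:hazard-rate\], the following sketch (in Python) evaluates both quantities for a hypothetical linear demand $m(p)=1-p$; the demand form and the price value are our own illustrative choices and are not part of the model introduced below.

```python
# Minimal illustration of Definition 1 (elasticity) and Definition 2 (hazard rate).
# The linear demand m(p) = 1 - p is a hypothetical example, not prescribed by the model.

def elasticity(x, y, dy_dx):
    """x-elasticity of y: |x/y * dy/dx|."""
    return abs(x / y * dy_dx)

def hazard_rate(y, dy_dx):
    """Hazard rate of y with respect to x: -(1/y) * dy/dx."""
    return -dy_dx / y

p = 0.25
m = 1.0 - p        # demand at price p
dm_dp = -1.0       # derivative of the demand with respect to p

print(elasticity(p, m, dm_dp))   # ~0.33: a 1% price increase cuts demand by about 0.33%
print(hazard_rate(m, dm_dp))     # ~1.33: fraction of demand lost per unit increase of p
```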
Baseline Physical Model
-----------------------
We denote a metric of congestion, e.g., delay or loss rate, by $\phi$ to model the congestion level of the AP’s network. We denote the AP’s user population by $m$ and the users’ average desirable throughput by $n$, i.e., the maximum amount of data rate consumed under a congestion-free network with $\phi = 0$. When network congestion exists, i.e., $\phi > 0$, the users’ desirable throughput might not be fulfilled; and therefore, their achievable throughput is lower than their desirable throughput. We define the users’ average achievable throughput under a congestion level $\phi$ by $l(\phi) \triangleq n\rho(\phi)$, i.e., the desirable throughput $n$ multiplied by a gain factor $\rho(\phi)$.
\[ass:gain\] $\rho(\phi)\colon \mathbb{R}_+ \mapsto (0,1]$ is a continuously differentiable, decreasing function of $\phi$. It has an upper bound $\rho(0) = 1$ and satisfies that $\displaystyle\lim_{\phi\rightarrow +\infty} \rho(\phi) = 0$.
Assumption \[ass:gain\] states that the [*throughput gain*]{} or simply the [*gain*]{} decreases monotonically when the network congestion $\phi$ deteriorates. In particular, the gain and the users’ achievable throughput reach the maximum under no congestion.
In practice, users’ throughput is usually a mixture of multiple types of traffic flows. Based on the characteristics of applications and protocols, different types of traffic throughput may have dissimilar responses to network congestion. For instance, inelastic traffic such as online video streaming cannot tolerate high delays and loss rates, and therefore its throughput gain declines sharply with the deterioration of congestion; however, the throughput gain of elastic traffic [@scott95fundamental] such as e-mail does not respond to congestion drastically. Notice that the terms [*inelastic traffic*]{} and [*elastic traffic*]{} in networking refer to the traffic whose throughput gains are sensitive and insensitive to congestion, respectively; however, based on the classic definition of elasticity in economics (i.e., Definition \[def:elasticity\]), traffic throughput has higher congestion elasticity if it is more sensitive to congestion, and therefore the throughput gain of inelastic traffic is more elastic to congestion than that of elastic traffic.
In this paper, we adopt the elasticity defined in economics to characterize how the throughput gain responds to congestion. By Definition \[def:elasticity\], the elasticity $\epsilon_{\phi}^{\rho}$ characterizes the percentage decrease in the gain $\rho$ in response to the percentage increase in the congestion $\phi$. Based on this characterization, different forms of the gain function $\rho(\phi)$ with different elasticities $\epsilon_{\phi}^{\rho}$ can be used to model responses of different mixtures of traffic types to congestion. For example, when users’ traffic throughput includes more online video or file sharing traffic, gain functions with higher or lower elasticities can be adopted, respectively.
Given a fixed level of congestion $\phi$, we define the aggregate network throughput by $\lambda(\phi)\triangleq ml(\phi) = mn\rho(\phi)$, i.e., the product of the number of users $m$ and the users’ average achievable throughput $n\rho(\phi)$ under the congestion level $\phi$. On the one hand, under the given level $\phi$ of congestion, the network accommodates certain throughput $\lambda(\phi)$; on the other hand, the network congestion $\phi$ is also influenced by this throughput $\lambda$. We denote the AP’s capacity by $\mu$ and characterize the induced system congestion as a function $\phi \triangleq \Phi(\lambda,\mu)$ of the system throughput $\lambda$ and capacity $\mu$.
\[ass:congestion\_function\] $\Phi(\lambda,\mu):\mathbb{R}^2_+\mapsto \mathbb{R}_+$ is continuously differentiable, increasing in $\lambda$, decreasing in $\mu$.
Assumption \[ass:congestion\_function\] characterizes the physics of network congestion: the congestion level is higher when the system accommodates larger throughput or has less capacity, and vice-versa. Besides, different forms of the congestion function $\Phi$ can be adopted to capture the congestion metric based on different models of network services. For example, the function $\Phi(\lambda,\mu) = 1/(\mu-\lambda)$ models the M/M/1 queueing delay for network services and $\Phi(\lambda,\mu) =\lambda/\mu$ captures the [*capacity sharing*]{} [@chau2010viability] nature of general network congestion.
By far, we have described the system by a triple $(m, n, \mu)$. Because the congestion increases with the network throughput, which decreases with the deteriorated congestion, the resulting system congestion is defined under an equilibrium.
\[def:congestion\] $\phi$ is an induced equilibrium congestion of a system $(m, n,\mu)$ if and only if it satisfies $\phi = \Phi\big(\lambda(\phi),\mu\big)$.
Definition \[def:congestion\] states that the system congestion $\phi$ should induce the aggregate throughput $\lambda(\phi)$ such that it leads to exactly the same level of congestion $\phi = \Phi(\lambda,\mu)$.
We define the inverse function of $\Phi(\lambda, \mu)$ with respect to $\lambda$ by $\Lambda(\phi, \mu) \triangleq \Phi^{-1}(\phi, \mu)$. $\Lambda(\phi, \mu)$ can be interpreted as the implied amount of throughput that induces a congestion level $\phi$ for a system with a capacity $\mu$. By Assumption \[ass:congestion\_function\], $\Lambda(\phi, \mu)$ is strictly increasing in both $\phi$ and $\mu$. To characterize the system congestion, we define a [*gap*]{} function $g(\phi)$ between the supply and demand of throughput under a fixed level of congestion $\phi$ by $$\label{equation:gap}
g(\phi) \triangleq \Lambda\left(\phi,\mu\right) - \lambda(\phi).$$
\[theorem:unique-congestion\] For any system $(m, n,\mu)$, $g(\phi)$ is an increasing function of $\phi$. The system operates at a unique level of equilibrium congestion $\phi$, which solves $g(\phi)=0$.
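For intuition, the sketch below locates this equilibrium numerically by solving $g(\phi)=0$. It assumes the illustrative forms $\rho(\phi)=e^{-\phi}$ and $\Phi(\lambda,\mu)=\lambda/\mu$ (so $\Lambda(\phi,\mu)=\mu\phi$) that are also used in later examples; the numerical values of $(m,n,\mu)$ are hypothetical.

```python
# Sketch: find the unique equilibrium congestion of Theorem [unique-congestion] as the
# root of the gap function g(phi) = Lambda(phi, mu) - m*n*rho(phi).
# Assumed forms: rho(phi) = exp(-phi), Lambda(phi, mu) = mu*phi (capacity sharing).
import math
from scipy.optimize import brentq

def equilibrium_congestion(m, n, mu):
    gap = lambda phi: mu * phi - m * n * math.exp(-phi)   # increasing in phi, gap(0) < 0
    hi = 1.0
    while gap(hi) < 0.0:                                  # grow the bracket until g turns positive
        hi *= 2.0
    return brentq(gap, 0.0, hi)

phi = equilibrium_congestion(m=0.6, n=0.5, mu=1.0)        # hypothetical system (m, n, mu)
print(phi, 0.6 * 0.5 * math.exp(-phi))                    # equilibrium congestion and throughput
```

Since $g$ is increasing with $g(0)=-mn<0$, any bracketing root finder converges to the unique equilibrium.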
Theorem \[theorem:unique-congestion\] characterizes the uniqueness of the system equilibrium congestion $\phi$ under which the throughput supply $\Lambda(\phi,\mu)$ equals the aggregate throughput demand $\lambda(\phi)$. Based on Theorem \[theorem:unique-congestion\], we denote the unique equilibrium congestion and the corresponding aggregate throughput of the system $(m, n, \mu)$ by $\varphi = \varphi(m, n, \mu)$ and $\lambda = \lambda(m, n, \mu)$, respectively. We define the marginal change of the throughput gap $g$ due to a marginal change in the congestion $\phi$ by $$\frac{\partial g}{\partial \phi} \triangleq \frac{\partial \Lambda(\phi,\mu)}{\partial \phi} - m n \frac{d\rho(\phi)}{d\phi} > 0,\label{equation:dg}$$ where the first (second) term captures the change of throughput in supply (demand). Next, we characterize the impacts of the user population $m$, users’ average desirable throughput $n$ and system capacity $\mu$ on the system congestion $\varphi$ and throughput $\lambda$ as functions of $\partial g/\partial \varphi$ and $\partial \Lambda/\partial \varphi$ as follows.
\[proposition:elasticity\] The user population $m$’s impacts on the system congestion $\varphi$ and throughput $\lambda$ are $$\frac{\partial \varphi}{\partial m} = \left(\frac{\partial g}{\partial \varphi}\right)^{-1} \frac{\lambda}{m} > 0 \quad \text{and} \quad \frac{\partial \lambda}{\partial m} = \frac{\partial \Lambda}{\partial \varphi} \frac{\partial \varphi}{\partial m} > 0.$$ The desirable throughput $n$’s impacts on $\varphi$ and $\lambda$ are $$\frac{\partial \varphi}{\partial n} = \left(\frac{\partial g}{\partial \varphi}\right)^{-1} \frac{\lambda}{n} > 0 \quad \text{and} \quad \frac{\partial \lambda}{\partial n} = \frac{\partial \Lambda}{\partial \varphi} \frac{\partial \varphi}{\partial n} > 0.$$ The system capacity $\mu$’s impacts on $\varphi$ and $\lambda$ are $$\frac{\partial \varphi}{\partial \mu} = -\frac{\partial \Lambda}{\partial \mu}\left(\frac{\partial g}{\partial \varphi}\right)^{-1} < 0 \quad \text{and} \quad \frac{\partial \lambda}{\partial \mu} = mn\frac{d \rho}{d \varphi}\frac{\partial \varphi}{\partial \mu} > 0.$$
Proposition \[proposition:elasticity\] derives the impacts of $m$, $n$ and $\mu$ on the induced system congestion $\varphi$ and throughput $\lambda$. It states that 1) if the user population $m$ or the desirable throughput $n$ increases, the system congestion and throughput will increase, and 2) if the AP extends its capacity $\mu$, the system congestion will decrease and the system throughput will increase.
\[theorem:elasticity\] Under the equilibrium of any system $(m,n,\mu)$, the elasticities of the system throughput $\lambda$ with respect to the user population $m$ and the desirable throughput $n$ are equal and both satisfy $$\epsilon^{\lambda}_{m} = \epsilon^{\lambda}_{n} = \left(1+\frac{|{\partial \lambda}/{\partial \varphi}|}{{\partial \Lambda}/{\partial \varphi}}\right)^{-1} \in \left(0,1\right].$$
Theorem \[theorem:elasticity\] shows that under an equilibrium of any physical system $(m,n,\mu)$, the marginal impact of user population on the system throughput $\epsilon_m^{\lambda}$ equals that of the user’s desirable throughput on the system throughput $\epsilon_{n}^{\lambda}$. Fundamentally, this result is due to the product form of network throughput $\lambda = m n \rho(\varphi)$ that is symmetric in $m$ and $n$, although both quantities have very different physical natures. Thus, we define this elasticity of system throughput by $$\label{equation:system elasticity}
\epsilon^\lambda \triangleq \left(1+\frac{|{\partial \lambda}/{\partial \varphi}|}{{\partial \Lambda}/{\partial \varphi}}\right)^{-1} = \left(1+mn\frac{\left|{d \rho}/{d \varphi}\right|}{{\partial \Lambda}/{\partial \varphi}}\right)^{-1},$$ where $|{\partial \lambda}/{\partial \varphi}|$ and ${\partial \Lambda}/{\partial \varphi}$ measure the marginal decrease and increase in the throughput demand and supply with respect to congestion, respectively. This $\epsilon^\lambda$ can be interpreted as a metric of [*relative congestion elasticity of throughput demand*]{}: when the traffic demand is elastic, i.e., $|{\partial \lambda}/{\partial \varphi}|$ is large, and the throughput supply is inelastic, i.e., ${\partial \Lambda}/{\partial \varphi}$ is small, $\epsilon^{\lambda} \rightarrow 0$ and the system can accommodate higher throughput mostly due to the elasticity of the demand; otherwise, $\epsilon^{\lambda} \rightarrow 1$ and the elasticity of the supply plays a bigger role in accommodating higher throughput. Notice that our model generalizes that of Rochet and Tirole [@Rochet2003], under which congestion does not exist and the throughput supply can be regarded as infinitely elastic, i.e., ${\partial \Lambda}/{\partial \varphi}=+\infty$, and therefore, the elasticity of throughput always satisfies $\epsilon^\lambda=1$.
Two-Sided Pricing Model
-----------------------
We consider usage-based pricing schemes [@hande2010pricing] imposed on users and CPs, which are adopted by most wireless APs, e.g., T-Mobile[^5] and AT&T, and some wired APs, e.g., Comcast [@comcastusage]. In our two-sided pricing model, we assume that the AP charges prices of $p$ and $q$ per-unit data traffic to users and CPs, respectively.
On the user side, we model each user by her value $v_u$ of per-unit traffic and denote the number of the users of value $v_u$ by $f_u(v_u)$, which can be regarded as a value density function of users. We assume that a user subscribes to the AP’s access service if and only if she can obtain a positive utility, i.e., her value $v_u$ of per-unit traffic is higher than the price $p$. As a result, the population $m$ of active users of the AP is a function of $p$, defined by $$\label{equation:m}
m(p) \triangleq \int_p^{+\infty} f_u(v_u)dv_u.$$
On the CP side, we model each CP by a tuple $(u_c,v_c)$. $u_c$ is the average desirable throughput of end-users for the CP’s content. $v_c$ is the CP’s per-unit traffic value, which models the CP’s profit obtained by charging its customers or advertisers. Similar to the user side, we denote the density of the CPs of characteristic $(u_c,v_c)$ by $f_c(u_c,v_c)$ and assume that a CP uses the AP’s access service if and only if it can obtain a positive utility, i.e., its value $v_c$ of per-unit traffic is higher than the price $q$. Consequently, the average desirable throughput $n$ of end-users for all active CPs is a function of $q$, defined by $$\label{equation:n}
n(q) \triangleq \int_q^{+\infty}\!\!\!\!\int_0^{+\infty}u_cf_c(u_c,v_c)du_cdv_c.$$
From Equation (\[equation:m\]) and (\[equation:n\]), we know that the population $m(p)$ of users decreases with the user-side price $p$, and the average desirable throughput $n(q)$ of users decreases with the CP-side price $q$. Since the two-sided prices $p$ and $q$ impact the network throughput $\lambda$ via the user population $m$ and the average desirable throughput $n$, respectively, $m(p)$ and $n(q)$ can be interpreted as [*demand*]{} functions on the user and CP sides. Intuitively, the higher price $p$ ($q$) results in the smaller demand $m$ ($n$) on the user (CP) side. Furthermore, by Definition \[def:hazard-rate\], we adopt the hazard rate $\tilde m^p$ ($\tilde n^q$) to measure the decreasing rate of the demand $m$ ($n$) with respect to the price $p$ ($q$). In addition, the inverse of the hazard rate $1/\tilde m^p$ ($1/\tilde n^q$) is often regarded as the AP’s [*market power*]{} [@weyl2010price] on the user (CP) side, which reflects the AP’s ability to profitably raise the price of its network service. Intuitively, if the AP has higher market power on the user (CP) side, i.e., the demand hazard rate $\tilde m^p$ ($\tilde n^q$) is lower, the AP can set a higher price $p$ ($q$) to optimize its profit, since the demand $m$ ($n$) decreases slower with the price.
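To make the two demand functions concrete, the sketch below recovers the pair $m(p)=1-p$ and $n(q)=(1-q)^2$, which reappears in the running example of the next section, from value densities by numerical integration; the densities $f_u$ and $f_c$ are our own hypothetical choices and are not specified in the paper.

```python
# Sketch: evaluate the demands of Equations (m) and (n) by integrating hypothetical value
# densities: f_u uniform on [0,1] and f_c(u,v) = 4*(1-v) on the unit square, which
# reproduce m(p) = 1 - p and n(q) = (1 - q)^2.
from scipy import integrate

def f_u(v_u):                      # user value density (hypothetical)
    return 1.0 if 0.0 <= v_u <= 1.0 else 0.0

def f_c(u_c, v_c):                 # CP density over (desirable throughput, per-unit value)
    return 4.0 * (1.0 - v_c) if (0.0 <= u_c <= 1.0 and 0.0 <= v_c <= 1.0) else 0.0

def m(p):                          # Equation (m): population of users with value above p
    val, _ = integrate.quad(f_u, p, 1.0)
    return val

def n(q):                          # Equation (n): average desirable throughput of active CPs
    val, _ = integrate.dblquad(lambda u, v: u * f_c(u, v), q, 1.0,
                               lambda v: 0.0, lambda v: 1.0)
    return val

print(m(0.3), n(0.3))              # ~0.70 and ~0.49, i.e., 1-p and (1-q)^2 at p = q = 0.3
```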
Under a two-sided pricing $(p,q)$, the aggregate network throughput can be represented as $\lambda(p,q,\phi) \triangleq m(p) n(q)\rho(\phi)$, i.e., the product of the AP’s user population $m(p)$, the users’ average desirable throughput $n(q)$, and the throughput gain factor $\rho(\phi)$ under the congestion level $\phi$. Furthermore, we can write the unique equilibrium congestion as $\varphi(p,q,\mu)\triangleq \varphi(m(p),n(q),\mu)$ and define the corresponding system throughput by $\lambda(p,q,\mu) \triangleq \lambda(p,q,\varphi(p,q,\mu))$. As the prices $p$ and $q$ determine the congestion $\varphi(p,q,\mu)$ and throughput $\lambda(p,q,\mu)$ under any fixed capacity $\mu$, we investigate their impacts on the congestion and the throughput as follows.
\[proposition:pricing-effect\] For a system with a fixed capacity, the system congestion $\varphi$ and throughput $\lambda$ under two-sided prices $p$ and $q$ satisfy $$\begin{aligned}
&\frac{\partial \varphi}{\partial p} = -\left(\frac{\partial g}{\partial \varphi}\right)^{-1} \lambda \tilde m^p < 0 \ \ \; \text{and} \quad \frac{\partial \lambda}{\partial p} = \frac{\partial \Lambda}{\partial \varphi} \frac{\partial \varphi}{\partial p}<0;\\
&\frac{\partial \varphi}{\partial q} = -\left(\frac{\partial g}{\partial \varphi}\right)^{-1} \lambda \tilde n^q < 0 \quad \text{and} \quad \frac{\partial \lambda}{\partial q} = \frac{\partial \Lambda}{\partial \varphi} \frac{\partial \varphi}{\partial q}<0.\end{aligned}$$ Furthermore, the price elasticities of throughput $\lambda$ satisfy $$\epsilon^{\lambda}_{p}:\epsilon^{\lambda}_{q} = \epsilon^{m}_{p}:\epsilon^{n}_{q}.$$
Proposition \[proposition:pricing-effect\] explicitly shows the impacts of prices $p$ and $q$ on the system congestion $\varphi$ and throughput $\lambda$ and states that the congestion and throughput will decrease if higher prices are charged. Intuitively, higher prices reduce the demands in terms of $m$ and $n$, which further reduce the equilibrium congestion $\varphi$ by Proposition \[proposition:elasticity\]. As the system has a fixed capacity, the aggregate throughput $\lambda = \Lambda(\varphi, \mu)$ would decrease consequently. Proposition \[proposition:pricing-effect\] also shows that the price elasticity of throughput is proportional to that of the corresponding demand. This again shows that the demands $m$ and $n$ play similar roles on the two sides of the market.
Structure of Optimal Pricing
============================
In the previous section, we modeled a two-sided network in which the effect of congestion is taken into consideration. In this section, we further explore the structures of the profit-optimal and welfare-optimal pricing in the network. In particular, we will show the impact of system congestion on the structures of optimal pricing.
Structure of Profit-Optimal Pricing
-----------------------------------
We first study the optimal two-sided pricing used by the AP to maximize its profit. We assume that the AP incurs a per-unit traffic cost of $c$, which models the recurring maintenance and utility costs like electricity. We define the AP’s profit by $U(p,q,\mu) \triangleq (p+q-c)\lambda(p,q,\mu)$, i.e., the per unit traffic profit $p+q-c$ multiplied by the aggregate throughput $\lambda$. Under any fixed capacity $\mu$, the AP can maximize its profit by determining the optimal prices that solve the following optimization problem.
$$\begin{aligned}
& \underset{p, q}{\text{maximize}}
& & U(p,q,\mu)=(p+q-c)\lambda(p,q,\mu).
\end{aligned}$$
Before solving the profit maximization problem, we first characterize the impacts of the AP’s capacity and prices on its profit as the following result.
\[proposition:profit\] The impact of the capacity $\mu$ on the profit $U$ is $$\begin{aligned}
&\dfrac{\partial U}{\partial \mu} = (p + q -c)\dfrac{\partial \Lambda}{\partial \mu}(1-\epsilon^\lambda)> 0,\end{aligned}$$ and the impacts of the prices $p$ and $q$ on the profit $U$ are $$\begin{aligned}
\begin{cases}
\dfrac{\partial U}{\partial p} = \lambda - (p+q-c)\epsilon^\lambda\lambda \tilde m^p;\vspace{0.05in}\\
\dfrac{\partial U}{\partial q} = \lambda - (p+q-c)\epsilon^\lambda\lambda \tilde n^q.
\end{cases}\end{aligned}$$
Proposition \[proposition:profit\] intuitively shows that the AP’s profit increases with its capacity under fixed prices; however, increasing prices might reduce demands, which could either increase or decrease the profit. Next, we characterize the optimal two-sided prices of the AP that maximize its profit.
\[theorem:KKT-Lerner\] If prices $p$ and $q$ maximize the AP’s profit, the following condition must hold: $$\label{equation:KKT-necesary}
\displaystyle {\tilde m^p} = {\tilde n^q} = \frac{1}{ (p+q-c)\epsilon^\lambda}.$$ Furthermore, the total price $p+q$ satisfies $$\label{equation:Lerner-formula}
\frac{p+q-c}{p+q} = \frac{1}{\epsilon_p^\lambda + \epsilon_q^\lambda} = \frac{1}{\epsilon^\lambda (\epsilon_p^m + \epsilon_q^n)}.$$
Theorem \[theorem:KKT-Lerner\] provides necessary conditions for prices to be profit-optimal. Equation (\[equation:KKT-necesary\]) shows the connections among the optimal prices, the demand hazard rates on both sides, and the elasticity of system throughput. In particular, the optimal prices will equalize the hazard rates of demands on both sides, which equals the inverse of the product of the profit margin $p+q-c$ and the elasticity of system throughput $\epsilon^\lambda$. This implies that the AP will always balance its market power on both sides of the market so as to maximize its profit. Notice that the formula of Equation (\[equation:KKT-necesary\]) also generalizes the result of Rochet and Tirole [@Rochet2003] for the structure of the profit-optimal two-sided pricing under endogenously congested networks. Besides, Equation (\[equation:Lerner-formula\]) characterizes the relationship between the elasticities and price margins of the profit-maximizing AP, where the total price $p+q$ follows a form of the Lerner index [@Lerner34], i.e., the ratio of profit margin to price equals the inverse of the total elasticities of the system throughput with respect to the prices.
By Equation (\[equation:KKT-necesary\]), we see that the profit-optimal two-sided prices are related to the elasticity of system throughput $\epsilon^\lambda$, which depends on the level of network congestion by Equation (\[equation:system elasticity\]). To see more clearly, we illustrate it with an example where the gain function is $\rho(\phi)=e^{-\phi}$, the congestion function is $\Phi(\lambda,\mu) = \lambda/\mu$ and the demand functions are $m(p) = 1-p$ and $n(q) = (1-q)^2$ ($p,q\in [0,1]$). For this example, we can derive the explicit profit-optimal prices based on Theorem \[theorem:KKT-Lerner\]: $$\label{equation:example profit}
p = \frac{\varphi+c+2}{\varphi+4} \quad \text{and} \quad q = \frac{\varphi+2c}{\varphi+4}$$ where $\varphi$ is the equilibrium congestion of the system. Equation (\[equation:example profit\]) shows that the congestion level directly affects the profit-optimal prices. Therefore, the AP should fully take the congestion effect into consideration when designing two-sided pricing schemes.
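Since $\varphi$ in Equation (\[equation:example profit\]) is itself the equilibrium congestion induced by the chosen prices, the prices and the congestion must be solved jointly. The sketch below does so with a simple fixed-point iteration; the iteration scheme is our own numerical choice, and the cost $c$ and capacity $\mu$ are hypothetical.

```python
# Sketch: jointly solve Equation (example profit) and the equilibrium congestion for the
# running example rho(phi) = exp(-phi), Phi(lambda, mu) = lambda/mu, m(p) = 1-p, n(q) = (1-q)^2.
# The cost c and capacity mu are hypothetical values.
import math
from scipy.optimize import brentq

def profit_optimal_prices(c=0.5, mu=1.0, iters=100):
    phi = 0.0
    for _ in range(iters):
        p = (phi + c + 2.0) / (phi + 4.0)                 # Equation (example profit)
        q = (phi + 2.0 * c) / (phi + 4.0)
        m, n = 1.0 - p, (1.0 - q) ** 2                    # demands under these prices
        # re-solve the induced equilibrium congestion: mu*phi = m*n*exp(-phi)
        phi = brentq(lambda x: mu * x - m * n * math.exp(-x), 0.0, 10.0)
    return p, q, phi

print(profit_optimal_prices())     # profit-optimal (p, q) and the induced congestion
```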
Structure of Welfare-Optimal Pricing
------------------------------------
We next analyze the welfare-optimal pricing structure that maximizes social welfare and contrast it with the profit-optimal counterpart. On the user side, given any fixed price $p$, a user of value $v_u$ obtains the surplus $(v_u-p)$ for per-unit traffic, and therefore the total surplus of all users, when each of them consumes one unit traffic, can be defined by $$\begin{aligned}
\label{equation:def_sm}
S_m(p) \triangleq \int^{+\infty}_p (v_u-p)f_u(v_u)dv_u,\end{aligned}$$ where $f_u(v_u)$ measures the population of the users of value $v_u$ for per-unit traffic. Thus the per-user average surplus for per-unit traffic can be defined by $s_m(p) \triangleq S_m(p)/m(p)$, where $m(p)$ is the user population. Accordingly, we define the total user welfare for the aggregate throughput by $$W_m (p, q,\mu) \triangleq s_m(p)\lambda(p,q,\mu) = S_m(p) n(q) \rho\big(\varphi(p,q,\mu)\big),$$ i.e., the users’ average per-unit traffic surplus $s_m(p)$ multiplied by the aggregate traffic throughput $\lambda(p,q,\mu)$. Similarly, on the CP side, given any fixed termination fee $q$, a CP of characteristic $(u_c,v_c)$ generates a surplus $(v_c-q)$ for per-unit traffic and the average desirable throughput per user on this CP is $u_c$; and therefore, the CP can obtain a surplus of $(v_c-q)u_c$ per user. As a result, the total surplus of all active CPs, when each of them accommodates their users’ average desirable throughput, can be defined by $$\begin{aligned}
S_n(q) \triangleq &\int_q^{+\infty}\!\!\!\!\int_0^{+\infty}(v_c-q)u_cf_c(u_c,v_c)du_cdv_c,\end{aligned}$$ where $f_c(u_c,v_c)$ measures the population of the CPs of characteristic $(u_c,v_c)$. Thus, the total surplus of all active CPs for per-unit traffic can be defined by $s_n(q) \triangleq S_n(q)/n(q)$, where $n(q)$ is the end-users’ average desirable throughput on all active CPs. Accordingly, we define the total CP welfare generated from all end-users by $$W_n (p, q,\mu) \triangleq s_n(q)\lambda(p,q,\mu) = S_n(q) m(p) \rho\big(\varphi(p,q,\mu)\big),$$ i.e., the aggregate per-unit traffic CP surplus $s_n(q)$ multiplied by the aggregate traffic throughput $\lambda(p,q,\mu)$. Notice that because the welfare $W_m(p,q,\mu)$ and $W_n(p,q,\mu)$ of the two sides are the system throughput $\lambda(p,q,\mu)$ multiplied by $s_m(p)$ and $s_n(q)$, respectively, $s_m$ and $s_n$ can also be interpreted as the [*per-unit traffic welfare*]{} of the user and CP sides. Based on the above definitions, social welfare can be denoted as the summation of the end-users’ welfare, the CPs’ welfare, and the AP’s profit, defined by $$W(p, q,\mu) \triangleq W_m (p, q,\mu)+W_n (p, q,\mu)+ U(p,q,\mu).$$
Because social welfare usually increases as the two-sided prices decrease, it might be maximized at a point where the AP incurs a loss. To ensure that the AP does not incur a loss and the welfare-optimal pricing scheme is practically feasible, we consider the framework of Ramsey-Boiteux pricing [@ramsey1927contribution] which tries to maximize social welfare, subject to a constraint on the AP’s profit. In our context, we typically constrain the AP’s profit to be zero, i.e., $U=0$, under which the total price $p+q$ is equal to the traffic cost $c$. We formulate the welfare optimization problem as follows. $$\begin{aligned}
& \underset{p, q}{\text{maximize}} \quad\, W(p, q,\mu) = W_m(p,q,\mu) + W_n(p,q,\mu)\\
& \text{subject to} \quad\ p+q=c.\end{aligned}$$
Before we solve the welfare maximization problem, we first characterize the impacts of the capacity and prices on social welfare as follows.
The impact of the AP’s capacity $\mu$ on social welfare $W$ is $$\begin{aligned}
&\dfrac{\partial W}{\partial \mu} = W \tilde{\rho}^\varphi \frac{\partial \Lambda}{\partial \mu} \left(\frac{\partial g}{\partial \varphi}\right)^{-1}> 0.\end{aligned}$$ The impacts of the AP’s prices $p$ and $q$ on social welfare $W$ are $$\begin{aligned}
\begin{cases}
\dfrac{\partial W}{\partial p} = -\lambda - \tilde{m}^p\big[W_n -W(1-\epsilon^\lambda)\big]; \vspace{0.05in}\\
\dfrac{\partial W}{\partial q} = -\lambda - \tilde{n}^q \big[W_m -W(1-\epsilon^\lambda)\big].
\end{cases}\end{aligned}$$ In particular, if $\tilde{S}_m^p$ and $\tilde{S}_n^q$ increase with $p$ and $q$, respectively, social welfare $W$ decreases with $p$ and $q$. \[proposition:welfare\]
Proposition \[proposition:welfare\] intuitively shows that 1) under fixed prices, social welfare increases with the AP’s capacity, and 2) under the commonly assumed monotone conditions [@Barlow63] on the hazard rates $\tilde{S}_m^p$ and $\tilde{S}_n^q$ of surpluses, social welfare decreases as the two-sided prices increase. It implies that to protect social welfare, regulators should encourage APs to expand capacity and regulate the prices of both sides. Next, we characterize the optimal two-sided prices which maximize social welfare.
\[theorem:social-welfare\] If prices $p$ and $q$ maximize the social welfare $W(p,q)$, the following condition must hold: $$\label{equation:welfare}
{\tilde{m}^p}: {\tilde{n}^q} = \left(\epsilon^\lambda-1+\frac{s_m}{s_m+s_n}\right) : \left(\epsilon^\lambda-1+\frac{s_n}{s_m+s_n}\right) .$$
Theorem \[theorem:social-welfare\] provides a necessary condition for the prices to be welfare-optimal. Equation (\[equation:welfare\]) describes the relationship among the elasticity of system throughput, the demand hazard rates and per-unit traffic welfares of the two sides. Compared to the result of Theorem \[theorem:KKT-Lerner\] where the profit-optimal pricing always equalizes the demand hazard rates on both sides, the welfare-optimal counterpart differentiates them based on the per-unit traffic welfares of the two sides as well as the elasticity of system throughput. To see this more clearly, we consider a special case of $\epsilon^\lambda = 1$, i.e., the model of Rochet and Tirole [@Rochet2003] where network congestion does not exist. Under this case, Equation (\[equation:welfare\]) simplifies to the condition in Proposition 2 of [@Rochet2003], i.e., $$\label{equation:case 1}
\frac{s_m}{\tilde{m}^p} = \frac{s_n}{\tilde{n}^q}.$$
Equation (\[equation:case 1\]) shows that when the congestion does not exist, the demand hazard rate is proportional to the per-unit traffic welfare of the same side under the welfare-optimal pricing. This implies that the welfare-optimal prices result in a higher demand hazard rate, and therefore induce lower market power, on the side of higher welfare. When the network becomes mildly congested, the elasticity of system throughput $\epsilon^\lambda$ will slightly decline below $1$. If the AP has higher per-unit traffic welfare on the user side, i.e., $s_m>s_n$, the right-hand side of Equation (\[equation:welfare\]) will increase. Since Equation (\[equation:welfare\]) has to hold under the welfare-optimal pricing, the prices need to adjust to balance the equation again, i.e., to increase the ratio of demand hazard rates on the left-hand side of Equation (\[equation:welfare\]). This implies that the market power of the side of high per-unit traffic welfare will decrease, and vice versa. In general, we can expect that when the network becomes more congested, the elasticity of throughput decreases, and under the welfare-optimal pricing, the ratio of the demand hazard rates of the sides with higher and lower per-unit traffic welfare will further increase.
Similar to the profit-optimal prices, the welfare-optimal prices are also related with the elasticity of throughput $\epsilon^\lambda$ by Equation (\[equation:welfare\]), which depends on the level of network congestion by Equation (\[equation:system elasticity\]). Under the same example where the gain function is $\rho(\phi)=e^{-\phi}$, the congestion function is $\Phi(\lambda,\mu) = \lambda/\mu$ and the demand functions are $m(p) = 1-p$ and $n(q) = (1-q)^2$ ($p,q\in [0,1]$), the explicit welfare-optimal prices can be derived based on Theorem \[theorem:social-welfare\]: $$\label{equation:example welfare}
p = \frac{3k(\varphi)+2c-2}{3k(\varphi)+2} \quad \text{and} \quad q = \frac{3(c-1)k(\varphi)+2}{3k(\varphi)+2}$$ where $k(\varphi)$ is the positive solution of $3k^2 + \varphi k - 4 =0$ and $\varphi$ is the equilibrium congestion of the system. Equation (\[equation:example welfare\]) shows that the congestion level also directly affects the welfare-optimal prices. Thus, regulators need to consider the congestion effect adequately when making the regulatory policies on the two-sided pricing.
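The welfare-optimal prices of Equation (\[equation:example welfare\]) can be computed in the same manner; the sketch below reuses the hypothetical cost and capacity from the profit-optimal example and confirms the Ramsey constraint $p+q=c$.

```python
# Sketch: jointly solve Equation (example welfare) and the equilibrium congestion for the
# same running example; k(phi) is the positive root of 3k^2 + phi*k - 4 = 0.
import math
from scipy.optimize import brentq

def welfare_optimal_prices(c=0.5, mu=1.0, iters=100):
    phi = 0.0
    for _ in range(iters):
        k = (-phi + math.sqrt(phi ** 2 + 48.0)) / 6.0     # positive root of 3k^2 + phi*k - 4 = 0
        p = (3.0 * k + 2.0 * c - 2.0) / (3.0 * k + 2.0)   # Equation (example welfare)
        q = (3.0 * (c - 1.0) * k + 2.0) / (3.0 * k + 2.0)
        m, n = 1.0 - p, (1.0 - q) ** 2
        phi = brentq(lambda x: mu * x - m * n * math.exp(-x), 0.0, 10.0)
    return p, q, phi

p, q, phi = welfare_optimal_prices()
print(p, q, p + q, phi)            # p + q equals the cost c (the zero-profit constraint)
```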
**Summary of Implications:** The theoretical results in this section could help APs and regulatory authorities to design two-sided pricing strategies and the corresponding regulatory policies. First, to optimize their profits, APs should set two-sided prices to equalize the demand hazard rates at both sides, whose optimal value changes with the elasticity of system throughput (by Theorem \[theorem:KKT-Lerner\]). Second, to protect social welfare, regulators might want to regulate the prices of both sides (by Proposition \[proposition:welfare\]). In particular, the prices should be regulated such that the difference in the demand hazard rates of the two sides will enlarge as the elasticity of system throughput decreases (by Theorem \[theorem:social-welfare\]).
Sensitivity of Optimal Pricing
==============================
With the rapid development of the Internet, the characteristics of APs, CPs, and end-users are continuously changing. For example, APs are using new wireless technologies, e.g., 4G or 5G, to expand their capacities, and as real-time video traffic grows rapidly, users often become more sensitive to network congestion. In this section, we explore the sensitivities of the profit-optimal and welfare-optimal pricing under these varying characteristics of the market participants.
Because Theorem \[theorem:KKT-Lerner\] and \[theorem:social-welfare\] in the previous section only provide necessary conditions for the two-sided prices $p$ and $q$ to be profit- and welfare-optimal, they are not sufficient to guarantee the optimality. To analyze how the optimal prices change with varying parameters, we make the following assumption on the hazard rates of the demands $m$ and $n$ so as to guarantee the (local) optimality of the prices $p$ and $q$.
\[ass:hazard rate\] The demand hazard rates $\tilde{m}^p$ and $\tilde{n}^q$ are increasing in the prices $p$ and $q$, respectively.
The monotonicity conditions on the demand hazard rates $\tilde{m}^p$ and $\tilde{n}^q$ stated in Assumption \[ass:hazard rate\] indicate that at a higher price level $p$ ($q$), the proportion of demand $m$ ($n$) reduced due to a marginal increase in price is larger. These monotone conditions are widely assumed in various contexts in statistics [@Barlow63] and economics, e.g., Myerson’s optimal auction [@myerson81]. In particular, the hazard rate of any concave function satisfies the monotone property, e.g., a concave demand $m(p)$ implies that the hazard rate $\tilde{m}^p$ must be increasing in $p$. Under Assumption \[ass:hazard rate\], we consider a system that has unique profit-optimal and welfare-optimal two-sided prices, denoted by $(p^*,q^*)$ and $(p^\circ,q^\circ)$, respectively.
The previous section showed that the structure of optimal pricing depends on the elasticity of throughput $\epsilon^\lambda$; in this section, we will see that the sensitivities of optimal prices largely depend on the changing direction of $\epsilon^\lambda$ as congestion increases, i.e., the sign of $\partial \epsilon^\lambda/ \partial \varphi$. This quantity is determined by the type of data traffic: if the network traffic is mostly video (text), its throughput gain $\rho(\varphi)$ often decreases convexly (concavely) in $\varphi$, because such traffic is quite sensitive (insensitive) to mild congestion. As a result, we will show that $\epsilon^\lambda$ often increases (decreases) when congestion increases.
Impact of AP’s Capacity {#section:5.1}
-----------------------
In this subsection, we first show how the sensitivity of optimal pricing under varying AP’s capacity is impacted by the changing direction of $\epsilon^\lambda$ under changing congestion. We then explain why this changing direction is determined by the type of data traffic. The following two corollaries show the impacts of the AP’s capacity $\mu$ on the optimal prices.
\[corollary:profit capacity\] The derivatives of the profit-optimal prices $p^*$ and $q^*$ with respect to the capacity $\mu$ satisfy that $$\begin{aligned}
\operatorname{sgn}\Big(\frac{\partial p^*}{\partial \mu}\Big)=\operatorname{sgn}\Big(\frac{\partial q^*}{\partial \mu}\Big) = \operatorname{sgn}\Big(\frac{\partial\epsilon^\lambda}{\partial\varphi}\Big).\end{aligned}$$ Furthermore, their ratio satisfies that $$\label{equation:proportion capacity}
\frac{\partial p^*}{\partial \mu}: \frac{\partial q^*}{\partial \mu} = \frac{d \tilde n^q}{d q}:\frac{d \tilde m^p}{d p}.$$
Corollary \[corollary:profit capacity\] shows that the signs of the marginal profit-optimal prices ${\partial p^*}/{\partial \mu}$ and ${\partial q^*}/{\partial \mu}$ (with respect to capacity) are the same as that of the marginal elasticity of system throughput ${\partial\epsilon^\lambda}/{\partial\varphi}$ (with respect to congestion). This result implies that if the elasticity of system throughput increases with deteriorated congestion, the AP’s profit-optimal prices will increase with its capacity, and vice-versa. In general, the system congestion $\varphi$ will be alleviated with the expanded capacity $\mu$ by Proposition \[proposition:elasticity\]. If the elasticity of throughput decreases (increases) with alleviated congestion, i.e., $\partial\epsilon^\lambda/\partial\varphi>0$ ($\partial\epsilon^\lambda/\partial\varphi<0$), the marginal increase of throughput demand $|\partial \lambda/\partial\varphi|$ increases faster (slower) than that of throughput supply $\partial \Lambda/\partial\varphi$ by Equation (\[equation:system elasticity\]). As a result, the profit-optimal prices will increase (decrease) as the basic economics principle of demand and supply implies.
Furthermore, Equation (\[equation:proportion capacity\]) shows that the marginal profit-optimal prices (with respect to capacity) are proportional to the marginal demand hazard rate (with respect to price) on the opposite sides. Because the profit-optimal prices always equalize the demand hazard rates of both sides by Theorem \[theorem:KKT-Lerner\], i.e., $\tilde{m}^p(p^*) = \tilde{n}^q(q^*)$, the changes in demand hazard rates with respect to capacity $\mu$ should also be the same at both sides. Mathematically, this balanced marginal effect can be used to deduce Equation (\[equation:proportion capacity\]) and expressed as $$\begin{aligned}
\displaystyle\frac{\partial p^*}{\partial \mu}\frac{d \tilde m^p}{d p} = \frac{\partial \tilde m^p}{\partial \mu}= \frac{\partial \tilde n^q}{\partial \mu}=\frac{\partial q^*}{\partial \mu}\frac{d \tilde n^q}{d q}.\end{aligned}$$
\[corollary:welfare capacity\] The derivatives of the welfare-optimal prices $p^\circ$ and $q^\circ$ with respect to the capacity $\mu$ satisfy that $$\begin{aligned}
\operatorname{sgn}\Big(\frac{\partial p^\circ}{\partial \mu}\Big) = -\operatorname{sgn}\Big(\frac{\partial q^\circ}{\partial \mu}\Big) = \operatorname{sgn}\left(\tilde m^p - \tilde n^q\right)\cdot\operatorname{sgn}\left(\frac{\partial \epsilon^\lambda}{\partial \varphi}\right).\end{aligned}$$
Corollary \[corollary:welfare capacity\] shows that the marginal prices $\partial p^\circ /\partial \mu$ and $\partial q^\circ /\partial \mu$ with respect to capacity have opposite signs. This is due to the constraint of fixed total price of the two sides, which further implies that the values of $\partial p^\circ/\partial \mu$ and $\partial q^\circ/\partial \mu$ are always opposite. Corollary \[corollary:welfare capacity\] also shows that the sign of marginal price of the side of higher (lower) demand hazard rate will be the same as (opposite to) that of the marginal elasticity of throughput. This implies that if the elasticity of throughput increases (decreases) with congestion, the welfare-optimal price of the side whose demand hazard rate is higher would increase (decrease) with the capacity. As explained earlier, if the elasticity of throughput increases with expanded capacity and alleviated congestion, i.e., $\partial \epsilon^\lambda/\partial \varphi<0$, the marginal increase of throughput demand $|\partial \lambda /\partial \varphi|$ increases slower than that of throughput supply $\partial \Lambda /\partial \varphi$. As a result, the network would be under-utilized if the two-sided prices are unchanged, and therefore under fixed total price, the price of the side of higher demand hazard rate should be reduced to increase system throughput so as to maximize social welfare. Similarly, if $\partial \epsilon^\lambda/\partial \varphi>0$, the network would be overloaded and the price of the side of higher demand hazard rate should be increased to reduce the throughput.
By comparing the results of Corollary \[corollary:profit capacity\] and \[corollary:welfare capacity\], we see that if the elasticity of throughput $\epsilon^\lambda$ increases (decreases) with congestion $\varphi$, the price of the side of lower demand hazard rate will decrease (increase) with capacity under the welfare-optimal pricing but will increase (decrease) with capacity under the profit-optimal counterpart. This suggests that, to protect social welfare, if $\epsilon^\lambda$ increases (decreases) with $\varphi$, regulators might want to tighten (relax) the price regulation on the side of higher AP market power, i.e., the side of lower demand hazard rate, when APs expand capacities.
From Corollary \[corollary:profit capacity\] and \[corollary:welfare capacity\], we have seen that under expanded capacity, the sensitivities of both profit- and welfare-optimal prices really depend on the changing direction of $\epsilon^\lambda$ with deteriorated congestion, i.e., the sign of $\partial \epsilon^\lambda/\partial \varphi$. Next, we explain why it is determined by the type of data traffic. From Equation (\[equation:system elasticity\]), the elasticity of throughput $\epsilon^\lambda$ is a function of the marginal gain $|\partial \rho/\partial \varphi|$ and supply $\partial \Lambda/\partial \varphi$ of throughput. The former and the latter are determined by the type of data traffic, e.g., text or video, and the congestion model of network service, e.g., capacity sharing [@chau2010viability] or M/M/1 queue, respectively. Since the congestion model is usually fixed, while the data type changes rapidly with the emerging new applications and contents, we focus on the impact of data traffic type. For example, we consider the congestion function $\Phi(\lambda,\mu) = \lambda/\mu$ that captures the capacity sharing nature of network services. As mentioned at the beginning of this section, when the data traffic is mostly for online video (text content), the throughput gain $\rho(\varphi)$ often decreases convexly (concavely) in congestion $\varphi$ and thus is more (less) elastic as the congestion is milder. Consequently, the congestion elasticity of gain $\epsilon^\rho_\varphi$ usually decreases (increases) with congestion. The next proposition builds the relationship between $\epsilon^\rho_\varphi$ and the elasticity of system throughput $\epsilon^\lambda$.
\[proposition:elasticity relation\] If the network congestion meets the form $\Phi(\lambda,\mu)=\lambda/\mu$, the elasticity of system throughput satisfies that $$\label{equation:elasticity relation}
\epsilon^\lambda = \frac{1}{1+\epsilon^\rho_\varphi}\in (0,1].$$
Proposition \[proposition:elasticity relation\] implies that under capacity sharing, the elasticity of throughput $\epsilon^\lambda$ will increase as the congestion elasticity of gain $\epsilon^\rho_\varphi$ decreases. Furthermore, if the network traffic is mostly for text content (online video), the congestion elasticity of gain $\epsilon^\rho_\varphi$ usually increases (decreases) with congestion, resulting in ${\partial \epsilon^\lambda}/{\partial \varphi}<0$ (${\partial \epsilon^\lambda}/{\partial \varphi}>0$). Although this relationship is established under the capacity sharing scenario, other service models, e.g., M/M/1 queue, can be studied in the same way and have the similar conclusion about the impact of traffic type on the elasticity of throughput[^6]. Based on the above discussions, the results of Corollary \[corollary:profit capacity\] and \[corollary:welfare capacity\] can provide important implications for APs and regulators to choose pricing strategies and regulatory policies under expanded system capacities.
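The relation in Equation (\[equation:elasticity relation\]) can also be checked numerically; the sketch below does so for the illustrative exponential gain $\rho(\varphi)=e^{-\varphi}$, whose congestion elasticity is simply $\epsilon^\rho_\varphi=\varphi$, with hypothetical system parameters.

```python
# Sketch: numerical check of eps_lambda = 1/(1 + eps^rho_phi) under Phi(lambda, mu) = lambda/mu
# and rho(phi) = exp(-phi), for which the congestion elasticity of the gain equals phi.
import math
from scipy.optimize import brentq

m, n, mu = 0.6, 0.5, 1.0                                          # hypothetical system
phi = brentq(lambda x: mu * x - m * n * math.exp(-x), 0.0, 10.0)  # equilibrium congestion

eps_lambda = 1.0 / (1.0 + m * n * math.exp(-phi) / mu)            # Equation (system elasticity)
eps_rho = phi                                                     # |phi/rho * d rho/d phi| for exp(-phi)
print(eps_lambda, 1.0 / (1.0 + eps_rho))                          # both sides of Equation (elasticity relation)
```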
**Summary of Implications:** From Corollary \[corollary:profit capacity\] and \[corollary:welfare capacity\], whether an AP would decrease or increase its prices and whether regulators should relax or enhance price regulation largely depends on the changing direction of the elasticity of system throughput with changing congestion, which is influenced by the type of data traffic. In particular, in the early years of the Internet, data traffic was mainly for text content under which the elasticity of throughput often decreases with congestion. Under this case, when APs expand their capacities, they would lower the two-sided prices and regulators should relax the price regulation on the side of high market power. However, in recent years, data traffic is mostly for online video streaming under which the elasticity of throughput usually increases with congestion. Under this case, APs would increase the prices on both sides with expanded capacity, while regulators might want to tighten the price regulation on the side of high market power.
Impact of Users’ Sensitivity
----------------------------
In this subsection, we study the sensitivity of optimal pricing under varying users’ sensitivity to congestion. To model how sensitive end-users are to network congestion, we extend the gain function to be $\rho(\phi,s)$, where the parameter $s$ measures the congestion sensitivity of users. Because when users become more sensitive to congestion, their throughput gain decreases more sharply with deteriorated congestion, we assume that $\partial \rho(\phi,s_1)/\partial \phi>\partial \rho(\phi,s_2)/\partial \phi$ for all $s_1<s_2$, which indicates that under any fixed level of congestion $\phi$, if users become more sensitive to congestion, the marginal change in their throughput gain $|\partial \rho/\partial \phi|$ increases. Besides, we assume that the inverse $\Lambda(\phi,\mu)$ of the congestion function satisfies $\partial \Lambda(\phi,\mu_1)/\partial \phi \le \partial \Lambda(\phi,\mu_2)/\partial \phi$ for all $\mu_1<\mu_2$, which intuitively states that if the AP’s capacity is more abundant, the marginal change in the implied throughput $\partial \Lambda/\partial \phi$ will not decrease. Both the congestion functions $\Phi = \lambda/\mu$ and $\Phi = 1/(\mu-\lambda)$ used before satisfy this assumption. The following two corollaries show the impacts of the congestion sensitivity $s$ of users on the profit- and welfare-optimal prices, respectively.
\[corollary:profit sensitivity\] If the elasticity of system throughput increases with congestion, i.e., $\partial \epsilon^\lambda/\partial \varphi>0$, the prices $p^*$ and $q^*$ both increase with the users’ sensitivity to congestion $s$, i.e., $$\frac{\partial p^*}{\partial s}>0\ \ \text{and} \ \ \frac{\partial q^*}{\partial s}>0.$$ Furthermore, it satisfies that $$\label{equation:proportion sensitivity}
\frac{\partial p^*}{\partial s}: \frac{\partial q^*}{\partial s} = \frac{d \tilde n^q}{d q}:\frac{d \tilde m^p}{d p}.$$
Corollary \[corollary:profit sensitivity\] states that if the elasticity of throughput $\epsilon^\lambda$ increases with congestion $\varphi$, the profit-optimal prices $p^*$ and $q^*$ both increase with the users’ sensitivity $s$. As explained before, the monotonicity condition $\partial \epsilon^\lambda/\partial \varphi>0$ often holds when network throughput is mostly constituted of inelastic traffic, e.g., online video streaming. This result implies that as video traffic continues to grow, users will become more sensitive to congestion and as a result, APs are expected to increase the prices on both sides so as to optimize their profits, which also leads to alleviated congestion and improved users’ experiences. Similar to Equation (\[equation:proportion capacity\]), Equation (\[equation:proportion sensitivity\]) shows that the marginal profit-optimal prices (with respect to sensitivity) are proportional to the marginal demand hazard rates (with respect to price) on the opposite sides.
\[corollary:welfare sensitivity\] If the elasticity of system throughput increases with congestion, i.e., $\partial \epsilon^\lambda/\partial \varphi>0$, the derivatives of the prices $p^\circ$ and $q^\circ$ with respect to the users’ congestion sensitivity $s$ satisfy that $$\begin{aligned}
\operatorname{sgn}\Big(\frac{\partial p^\circ}{\partial s}\Big) = -\operatorname{sgn}\Big(\frac{\partial q^\circ}{\partial s}\Big) = \operatorname{sgn}\left(\tilde m^p - \tilde n^q\right).\end{aligned}$$
Corollary \[corollary:welfare sensitivity\] shows that under the monotonicity condition $\partial \epsilon^\lambda/\partial \varphi>0$, if the demand hazard rate of the user side $\tilde m^p$ is higher than that of the CP side $\tilde n^q$, the derivative of $p^\circ$ ($q^\circ$) with respect to the congestion sensitivity $s$ is positive (negative), and vice versa. This result implies that when users become more sensitive to congestion, the welfare-optimal price of the side whose demand hazard rate is higher (lower) would increase (decrease). By comparing the results of Corollary \[corollary:profit sensitivity\] and \[corollary:welfare sensitivity\], we find that when the users’ sensitivity to congestion increases, the price of the side with the lower demand hazard rate needs to be reduced under the welfare-optimal pricing, but would be raised under the profit-optimal counterpart. This suggests that, to protect social welfare, when users become more sensitive to congestion, more stringent price regulation might need to be imposed on the side where the AP has higher market power and a lower demand hazard rate.
**Summary of Implications:** Corollary \[corollary:profit sensitivity\] and \[corollary:welfare sensitivity\] show how APs and regulators should adjust pricing strategies and regulatory policies under increasing congestion sensitivity of users, when data traffic is mostly for inelastic applications. In particular, as video traffic keeps growing rapidly, end-users will become more sensitive to the network congestion, and consequently APs would increase the two-sided prices to alleviate the congestion and improve users’ experiences so as to maximize profits. From a perspective of social welfare, regulators might want to impose more stringent price regulation on the side of high market power.
Evaluation of Optimal Pricing {#sec:evaluation}
=============================
In the previous sections, we studied the structures and sensitivities of the profit-optimal and welfare-optimal pricing through theoretical analysis. In this section, we further evaluate the pricing schemes by numerical simulations[^7].
Setup of Model Parameters
-------------------------
We first choose the forms of congestion function $\Phi(\lambda,\mu)$ and gain function $\rho(\phi,s)$ to capture detailed characteristics of network services. In particular, we adopt the congestion functions $\Phi(\lambda,\mu) =\lambda/\mu$ and $\Phi(\lambda,\mu) = 1/(\mu-\lambda)$. The former models the capacity sharing [@chau2010viability] nature of network services and was used in much prior work [@gibbens2000internet; @jain2001analysis]; the latter models the M/M/1 queueing delay, which was also widely used in prior work [@ros2004mathematical; @chau2010viability]. We adopt the gain functions $\rho(\phi,s) = 1/(\phi s+1)$ and $\rho(\phi,s) = (s+1)^{-\phi}$ for $s>0$, which were used in prior work [@richard2014pay; @ma2013public]. Their congestion elasticities are $\epsilon^\rho_\phi = 1 - 1/(\phi s +1)$ and $\epsilon^\rho_\phi = \phi \ln(s +1)$ and both increase with the congestion sensitivity of users $s$. This indicates that the throughput gains will be more elastic to congestion if users become more sensitive to congestion.
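As an illustrative aid (not part of the original simulation code; all function and variable names below are ours), the following Python sketch implements the two gain functions and checks their congestion elasticities numerically against the closed forms $1 - 1/(\phi s+1)$ and $\phi\ln(s+1)$ quoted above; the printed rows also show that both elasticities grow with the sensitivity $s$.

```python
import numpy as np

def rho_rational(phi, s):
    """Gain function rho(phi, s) = 1 / (phi*s + 1)."""
    return 1.0 / (phi * s + 1.0)

def rho_exponential(phi, s):
    """Gain function rho(phi, s) = (s + 1)^(-phi)."""
    return (s + 1.0) ** (-phi)

def congestion_elasticity(rho, phi, s, h=1e-6):
    """Numerical congestion elasticity eps^rho_phi = -(phi / rho) * d rho / d phi."""
    drho = (rho(phi + h, s) - rho(phi - h, s)) / (2.0 * h)
    return -phi / rho(phi, s) * drho

if __name__ == "__main__":
    phi = 0.8
    for s in (0.5, 1.0, 2.0):
        num_r = congestion_elasticity(rho_rational, phi, s)
        num_e = congestion_elasticity(rho_exponential, phi, s)
        # Closed forms quoted in the text: 1 - 1/(phi*s + 1) and phi * ln(s + 1).
        print(s, num_r, 1.0 - 1.0 / (phi * s + 1.0), num_e, phi * np.log(s + 1.0))
```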
Although our analyses have focused on a single AP, market competition among multiple APs could be captured by the function $m(p)$ of user population. In particular, we choose a family of population functions $m$ parameterized by $\alpha$: $m(p,\alpha) \triangleq 1 - p^{\frac{1}{\alpha}}$ for $0\le p \le 1,\alpha>0$, which satisfies that $m(p,\alpha_1)>m(p,\alpha_2)$ for all $\alpha_1<\alpha_2$. The parameter $\alpha$ can be regarded as a metric of competition in the user market, i.e., given the same price $p$, if the competition level $\alpha$ increases, the user population $m$ will fall.
The users’ desirable throughput may change rapidly as the traffic demand of content services changes. For example, a new SuperHD video format launched by Netflix requires a $50\%$ increase in traffic flows per video over 1080p content [@reed2014current]. Similarly, we extend the function $n(q)$ of average desirable throughput to capture these dynamics. In particular, we choose a family of throughput functions $n$ parameterized by $\beta$: $n(q,\beta) \triangleq 1-q^\beta$ for $0\le q \le 1,\beta>0$, which satisfies $n(q,\beta_1) < n(q,\beta_2)$ for all $\beta_1 < \beta_2$. The parameter $\beta$ can be regarded as a metric of traffic demand of content services, i.e., given the same price $q$, if the traffic demand $\beta$ increases, the desirable throughput will increase.
In the following simulations, we will compare various scenarios with a static baseline with $\mu = s = \alpha=\beta=1$ and $c=0.7$, under which the AP’s capacity, the users’ congestion sensitivity, the level of market competition and the traffic demand of content services are all normalized to one.
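By Theorem \[theorem:unique-congestion\], the equilibrium congestion induced by a price pair $(p,q)$ is the unique root of $g(\phi) = \Lambda(\phi,\mu) - m(p)n(q)\rho(\phi,s)$, which is increasing in $\phi$. The sketch below is an illustrative reimplementation under the capacity sharing congestion function and the rational gain function (the names, the bisection bounds, and the example prices are our own choices, not taken from the paper); it computes this root and the induced throughput at the baseline parameters.

```python
def m_pop(p, alpha=1.0):
    """User population m(p, alpha) = 1 - p^(1/alpha) for 0 <= p <= 1."""
    return 1.0 - p ** (1.0 / alpha)

def n_thr(q, beta=1.0):
    """Average desirable throughput n(q, beta) = 1 - q^beta for 0 <= q <= 1."""
    return 1.0 - q ** beta

def rho(phi, s=1.0):
    """Throughput gain rho(phi, s) = 1 / (phi*s + 1)."""
    return 1.0 / (phi * s + 1.0)

def Lambda_sharing(phi, mu):
    """Inverse of the capacity sharing congestion Phi = lambda/mu with respect to lambda."""
    return phi * mu

def equilibrium_congestion(p, q, mu=1.0, s=1.0, alpha=1.0, beta=1.0, iters=200):
    """Unique root of g(phi) = Lambda(phi, mu) - m*n*rho(phi, s), found by bisection
    (g is continuous and increasing in phi, cf. the uniqueness theorem)."""
    mn = m_pop(p, alpha) * n_thr(q, beta)
    lo, hi = 0.0, 1e6
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if Lambda_sharing(mid, mu) - mn * rho(mid, s) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Baseline: mu = s = alpha = beta = 1 and c = 0.7; example prices p = q = 0.4.
    phi = equilibrium_congestion(p=0.4, q=0.4)
    lam = m_pop(0.4) * n_thr(0.4) * rho(phi)   # induced equilibrium throughput
    print("equilibrium congestion:", phi, " throughput:", lam)
```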
Comparison with One-Sided Pricing
---------------------------------
Now we evaluate the AP’s incentive to adopt the two-sided pricing instead of the traditional one-sided pricing that only charges on the user side. To this end, we denote the AP’s profits by $U^*_{one}$ and $U^*_{two}$ under the profit-optimal one-sided and two-sided pricing, respectively. We define the growth rate of the profit under the two-sided pricing over the one-sided pricing as $r^* \triangleq (U^*_{two} - U^*_{one})/U^*_{one}$. A larger value of $r^*$ corresponds to a higher profit growth for the AP and therefore a stronger incentive to adopt the two-sided scheme.
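A minimal numerical stand-in for this comparison is an exhaustive grid search over the price space, evaluating the profit at the induced equilibrium for every candidate pair and treating the one-sided scheme as the special case $q=0$. The sketch below is illustrative only (the paper’s actual simulation procedure is not reproduced here, and all names are ours); it estimates $r^*$ under the baseline parameters with the capacity sharing congestion function and the rational gain function.

```python
import numpy as np

c, mu, s, alpha, beta = 0.7, 1.0, 1.0, 1.0, 1.0   # baseline parameters

def throughput(p, q, iters=100):
    """Equilibrium throughput under Phi = lambda/mu and rho = 1/(phi*s + 1):
    bisection on g(phi) = phi*mu - m(p)*n(q)*rho(phi)."""
    mn = (1.0 - p ** (1.0 / alpha)) * (1.0 - q ** beta)
    lo, hi = 0.0, 1e6
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid * mu < mn / (mid * s + 1.0) else (lo, mid)
    phi = 0.5 * (lo + hi)
    return mn / (phi * s + 1.0)

def best_profit(two_sided, grid=np.linspace(0.0, 1.0, 101)):
    """Optimal profit by grid search; one-sided pricing charges only the user side (q = 0)."""
    qs = grid if two_sided else np.array([0.0])
    return max((p + q - c) * throughput(p, q) for p in grid for q in qs)

if __name__ == "__main__":
    U_two, U_one = best_profit(True), best_profit(False)
    print("profit growth rate r* =", (U_two - U_one) / U_one)
```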
Figures \[figure:profit ratio sharing\] and \[figure:profit ratio mm1\] plot the profit growth rate $r^*$ as a function of the competition level $\alpha$ and the AP’s capacity $\mu$ under different traffic demand $\beta$ and users’ sensitivity $s$, respectively. The subfigures (a) are under the congestion function $\Phi = \lambda/\mu$ and the gain function $\rho = 1/(\phi s+1)$. From Subfigures (a) to (b), the gain function is changed to $\rho = (s+1)^{-\phi}$. From Subfigures (a) to (c), the congestion function is changed to $\Phi= 1/(\mu-\lambda)$. In Figure \[figure:profit ratio sharing\], we observe that $r^*$ increases with $\alpha$ when the competition level on the user side becomes more intense, and larger values of the traffic demand $\beta$ induce higher values of $r^*$. In Figure \[figure:profit ratio mm1\], we observe that $r^*$ increases with $\mu$ when the AP expands its capacity. Although $r^*$ decreases with the sensitivity $s$ in general, it increases with $s$ under small capacity $\mu$ when $\Phi = 1/(\mu-\lambda)$. This is because under the M/M/1 queueing delay $1/(\mu-\lambda)$, the network congestion is severe when the capacity is scarce; the increase of the users’ sensitivity to congestion then makes the network service more valuable for both users and CPs, and therefore the AP can obtain a higher profit growth by adopting the two-sided pricing. In fact, the non-monotonic trends in $r^*$ happen only when the capacity $\mu$ is scarce. To see this more clearly, we plot the case of $\mu>2$ in Figure \[figure:other a\]. From all of Figures \[figure:profit ratio sharing b\], \[figure:profit exponential b\] and \[figure:other a\], we observe that a higher value of $s$ induces a lower value of $r^*$. In summary, our observations imply that the AP gets a higher profit growth and has a stronger incentive to adopt the two-sided pricing if 1) the AP expands its capacity, 2) the competition level on the user side becomes fiercer, and 3) the content services require higher traffic throughput.
Next, we evaluate how regulators should deal with the shift from the one-sided pricing to the two-sided pricing. For this purpose, we denote the social welfare under the welfare-optimal one-sided and two-sided pricing by $W^\circ_{one}$ and $W^\circ_{two}$, respectively. Similarly, we define the growth rate of social welfare by the two-sided pricing over the one-sided pricing by $r^\circ \triangleq (W^\circ_{two}-W^\circ_{one})/W^\circ_{one}$. A larger value of $r^\circ$ indicates that social welfare gets a higher growth when shifting from the one-sided to the two-sided pricing, and thus regulators might want to encourage this transformation.
Figures \[figure:welfare ratio sharing\] and \[figure:welfare ratio mm1\] plot the welfare growth rate $r^\circ$ as a function of the competition level $\alpha$ and the AP’s capacity $\mu$ under different traffic demand $\beta$ and users’ sensitivity $s$, respectively. As a complement to Figure \[figure:welfare ratio mm1 b\], Figure \[figure:other b\] plots the case of large capacity under the M/M/1 queueing setting. By comparing Figures \[figure:welfare ratio sharing\], \[figure:welfare ratio mm1\] and \[figure:other b\] with Figures \[figure:profit ratio sharing\], \[figure:profit ratio mm1\] and \[figure:other a\], respectively, we observe that the welfare growth rate $r^\circ$ has the same changing trend as the profit growth rate $r^*$ when the parameter $\mu,s,\alpha$ or $\beta$ varies. This observation provides justifications for regulators to encourage the AP to shift from the one-sided to the two-sided pricing, especially when the AP has strong incentive to do so.
Impacts of Competition and Demand
---------------------------------
Since we have studied the changes of the optimal pricing under varying capacity $\mu$ and sensitivity $s$ and provided analytical results in the previous section, we next focus on understanding the impacts of the competition level $\alpha$ and traffic demand $\beta$ on the optimal prices. Figure \[figure:varying\_capacitysharing\] plots the profit-optimal prices $(p^*,q^*)$ and the welfare-optimal prices $(p^\circ,q^\circ)$ as functions of $\alpha$ and $\beta$ under the congestion function $\Phi = \lambda/\mu$ and the gain function $\rho = 1/(\phi s+1)$. From Figure \[figure:varying\_capacitysharing\], we observe that 1) when $\alpha$ or $\beta$ increases, the user-side prices $p^*,p^\circ$ decrease and the CP-side prices $q^*,q^\circ$ increase, and 2) the welfare-optimal prices $p^\circ$ and $q^\circ$ are always lower than the profit-optimal prices $p^*$ and $q^*$, respectively. The first observation implies that as the APs’ competition in the user market becomes more intense or the content services have larger traffic demand, the user-side price will decrease and the CP-side price will increase, regardless of whether the objective is the AP’s profit or social welfare. The second observation indicates that, to optimize social welfare, regulators might want to regulate the prices on both the user and CP sides, which coincides with the implication of Proposition \[proposition:welfare\]. Note that we omit the cases in which the congestion function is $\Phi = 1/(\mu-\lambda)$ and/or the gain function is $\rho = (s+1)^{-\phi}$, because in those cases the changing trends of the optimal prices under varying $\alpha$ and $\beta$ are the same as those shown in Figure \[figure:varying\_capacitysharing\].
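The first observation can be reproduced qualitatively with a simple parameter sweep using the same brute-force optimizer; the sketch below is illustrative only and is restricted to the profit-optimal prices, since computing the welfare-optimal prices would additionally require the surplus functions $S_m$ and $S_n$, which are not reimplemented here. All names and the grid resolution are our own choices.

```python
import numpy as np

def optimal_prices(alpha=1.0, beta=1.0, mu=1.0, s=1.0, c=0.7, grid=np.linspace(0.0, 1.0, 51)):
    """Profit-optimal (p*, q*) by grid search; equilibrium congestion solved by bisection."""
    def throughput(p, q, iters=100):
        mn = (1.0 - p ** (1.0 / alpha)) * (1.0 - q ** beta)   # m(p, alpha) * n(q, beta)
        lo, hi = 0.0, 1e6
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if mid * mu < mn / (mid * s + 1.0) else (lo, mid)
        phi = 0.5 * (lo + hi)
        return mn / (phi * s + 1.0)
    return max(((p + q - c) * throughput(p, q), (p, q)) for p in grid for q in grid)[1]

if __name__ == "__main__":
    for alpha in (0.5, 1.0, 2.0):   # fiercer competition on the user side as alpha grows
        print("alpha =", alpha, "->", optimal_prices(alpha=alpha))
    for beta in (0.5, 1.0, 2.0):    # heavier traffic demand of content services as beta grows
        print("beta =", beta, "->", optimal_prices(beta=beta))
```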
**Summary of Implications**: Our observations in this section imply that as the capacities of APs and the demand for video traffic grow in the current Internet, APs will have increasing incentives to transform from the traditional one-sided pricing on the user side to the two-sided pricing. Under these cases, regulators might want to encourage this transformation, since it will bring higher growth rates for both social welfare and APs’ profits.
Conclusions
===========
In this paper, we study the optimal two-sided pricing for congested networks. We present a novel model to capture end-users’ population and throughput demand under pricing and congestion parameters and derive the system congestion under an equilibrium. Based on this model, we characterize the structures of the profit-optimal and welfare-optimal pricing schemes. Our results reveal that the profit-optimal pricing always equalizes the demand hazard rates at both sides, while the welfare-optimal counterpart differentiates them based on the elasticity of system throughput. We also explore the sensitivities of the optimal pricing under varying system parameters. We find that with the growth of APs’ capacities and end-users’ traffic demand in the current Internet, APs will have increasing incentives to shift from the traditional one-sided pricing to the two-sided pricing, because it brings higher growth rates for their profits. Furthermore, as online video streaming becomes the main source of network traffic and keeps growing rapidly, end-users become more sensitive to the network congestion and APs are expanding their capacities. Under such scenarios, APs may increase two-sided prices to alleviate the congestion and improve users’ experiences so as to maximize profits. From the perspective of social welfare, regulators might want to tighten the price regulation on the side where APs have higher market power and lower demand hazard rate.
![The profit-optimal and welfare-optimal pricing schemes under varying $\alpha$ and $\beta$.[]{data-label="figure:varying_capacitysharing"}](figure/a_p_cropped.pdf "fig:"){width="20.50000%"} ![The profit-optimal and welfare-optimal pricing schemes under varying $\alpha$ and $\beta$.[]{data-label="figure:varying_capacitysharing"}](figure/a_q_cropped.pdf "fig:"){width="20.50000%"} ![The profit-optimal and welfare-optimal pricing schemes under varying $\alpha$ and $\beta$.[]{data-label="figure:varying_capacitysharing"}](figure/b_p_cropped.pdf "fig:"){width="20.50000%"} ![The profit-optimal and welfare-optimal pricing schemes under varying $\alpha$ and $\beta$.[]{data-label="figure:varying_capacitysharing"}](figure/b_q_cropped.pdf "fig:"){width="20.50000%"}
The Case of the M/M/1 Queue {#appendix:a}
=======================
In Section \[section:5.1\], we showed that for the capacity sharing scenario, if the data traffic is for text (online video), the elasticity of throughput $\epsilon^\lambda$ decreases (increases) with congestion under a reasonable condition. The condition is that the congestion elasticity of the throughput gain of text (online video) traffic increases (decreases) with congestion. In this section, we show that for the congestion model of the M/M/1 queue, we can derive a similar result under another reasonable condition.
For the scenario of M/M/1 queue, the congestion function is $\Phi(\lambda,\mu) = 1/(\mu-\lambda)$ and its inverse function with respect to the throughput $\lambda$ is $\Lambda(\phi,\mu) = \mu - 1/\phi$. By Equation (\[equation:system elasticity\]), the elasticity of system throughput $\epsilon^\lambda$ can be expressed as $$\epsilon^\lambda(\varphi) = \frac{1}{1+mnG(\varphi)}
\label{equation:MM1}$$ where $G(\varphi) \triangleq -\varphi^2d\rho(\varphi)/d\varphi$. By Equation (\[equation:MM1\]), the elasticity of throughput $\epsilon^\lambda$ decreases with $G$. When the data traffic is for text, the throughput gain $\rho(\varphi)$ usually decreases concavely in congestion $\varphi$, i.e., $d^2 \rho(\varphi)/d \varphi^2<0$, as text traffic is insensitive to mild congestion. In such a case, $G(\varphi)$ increases with congestion $\varphi$, since $dG(\varphi)/d\varphi = -2\varphi\, d\rho(\varphi)/d\varphi - \varphi^2 d^2\rho(\varphi)/d\varphi^2 > 0$. Conversely, when the data traffic is for online video, we consider that $G(\varphi)$ often decreases with congestion $\varphi$. Therefore, under the condition that the function $G(\varphi)$ for text (online video) traffic increases (decreases) with congestion, we conclude from Equation (\[equation:MM1\]) that if the data traffic is for text (online video), the elasticity of throughput $\epsilon^\lambda$ decreases (increases) with congestion.
Proofs of Theoretical Results
=============================
In this section, we provide the proofs of the theoretical results mentioned in our paper.
**Proof of Theorem \[theorem:unique-congestion\]:** By Assumption \[ass:congestion\_function\], $\Phi(\lambda,\mu)$ is increasing in $\lambda$ and thus its inverse function $\Lambda(\phi,\mu)$ is increasing in $\phi$. By Assumption \[ass:gain\], $\lambda(\phi)$ is decreasing in $\phi$ and therefore $g(\phi) = \Lambda(\phi,\mu) - \lambda(\phi)$ is increasing in $\phi$. Because $g(\phi)$ is a continuously increasing function of $\phi$ and it satisfies that $$\begin{aligned}
& g\big(\Phi(0,\mu)\big) = \Lambda\big(\Phi(0,\mu),\mu\big) - \lambda\big(\Phi(0,\mu)\big) = - \lambda\big(\Phi(0,\mu)\big)<0 \quad \text{and}\vspace{0.06in}\\
&\displaystyle\lim_{\phi\rightarrow +\infty} g(\phi) = \lim_{\phi\rightarrow +\infty} \left[\Lambda(\phi,\mu)-\lambda(\phi)\right]= \lim_{\phi\rightarrow +\infty} \Lambda(\phi,\mu)>0,\end{aligned}$$ the equation $g(\phi) = 0$ must have a unique solution on the interval $\big[\Phi(0,\mu), +\infty\big)$ by the intermediate value theorem. Because the minimum of $\phi$ is $\Phi(0,\mu)$ under which the aggregate throughput $\lambda$ is zero, there always exists a unique congestion $\phi$ which solves $g(\phi) = 0$, i.e., satisfies $\phi = \Phi(\lambda(\phi),\mu)$ and is an equilibrium congestion by Definition \[def:congestion\]. [$\square$]{}\
**Proof of Proposition \[proposition:elasticity\]:** Under the equilibrium of the system $(m,n,\mu)$, $\Lambda$ and $\lambda$ are functions of $m,n,\mu$ and we denote them by $\Lambda(m,n,\mu) \triangleq \Lambda\left(\varphi(m,n,\mu),\mu\right)$ and $\lambda(m,n,\mu) \triangleq \lambda\left(m,n,\varphi(m,n,\mu)\right) = mn\rho\left(\varphi(m,n,\mu)\right)$. Correspondingly, we denote the throughput gap by $g(m,n,\mu) \triangleq \Lambda(m,n,\mu) - \lambda(m,n,\mu)$. By Theorem \[theorem:unique-congestion\], under the equilibrium, the throughput gap $g(m,n,\mu)$ is zero for any $m,n,$ and $\mu$. Thus, for the user population $m$, we have the identity that $$\frac{\partial g(m,n,\mu)}{\partial m} = \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} \frac{\partial \varphi(m,n,\mu)}{\partial m} - \frac{\partial \lambda(m,n,\mu)}{\partial m} = 0 \ \ \text{where} \ \ \frac{\partial \lambda(m,n,\mu)}{\partial m} = \frac{\partial \lambda(m,n,\varphi)}{\partial m} + \frac{\partial \lambda(m,n,\varphi)}{\partial \varphi}\frac{\partial \varphi(m,n,\mu)}{\partial m}.$$ Based on this identity, we can derive that $$\frac{\partial \varphi(m,n,\mu)}{\partial m} = \left(\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} - \frac{\partial \lambda(m,n,\varphi)}{\partial \varphi}\right)^{-1} \frac{\partial \lambda(m,n,\varphi)}{\partial m} = \left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1}\frac{\lambda}{m} > 0$$ where we denote $g(m,n,\mu,\varphi) \triangleq \Lambda(\varphi,\mu)-\lambda(m,n,\varphi)$. Furthermore, we have that $$\frac{\partial \lambda(m,n,\mu)}{\partial m} =\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} \frac{\partial \varphi(m,n,\mu)}{\partial m} > 0.$$ Similarly, for the average throughput $n$, we can derive that $$\frac{\partial \varphi(m,n,\mu)}{\partial n}= \left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1}\frac{\lambda}{n} > 0 \quad \text{and} \quad \frac{\partial \lambda(m,n,\mu)}{\partial n} =\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} \frac{\partial \varphi(m,n,\mu)}{\partial n} > 0.$$ For the AP’s capacity $\mu$, we have the identity that $$\frac{\partial g(m,n,\mu)}{\partial \mu} = \left(\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} \frac{\partial \varphi(m,n,\mu)}{\partial \mu}+\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}\right) - \frac{\partial \lambda(m,n,\varphi)}{\partial \varphi}\frac{\partial \varphi(m,n,\mu)}{\partial \mu} = 0$$ from which we can derive that $$\frac{\partial \varphi(m,n,\mu)}{\partial \mu} = -\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}\left(\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} - \frac{\partial \lambda(m,n,\varphi)}{\partial \varphi}\right)^{-1} = -\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1}<0.$$ Furthermore, it satisfies that $$\frac{\partial \lambda(m,n,\mu)}{\partial \mu} = \frac{\partial \lambda(m,n,\varphi)}{\partial \varphi}\frac{\partial \varphi(m,n,\mu)}{\partial \mu} = mn\frac{d \rho(\varphi)}{d \varphi}\frac{\partial \varphi(m,n,\mu)}{\partial \mu}> 0.\tag*{{\hfill\ensuremath{\square}}}$$
**Proof of Theorem \[theorem:elasticity\]:** Based on Definition \[def:elasticity\] and Proposition \[proposition:elasticity\], we have the identities that $$\begin{aligned}
&\epsilon^\lambda_m = \frac{\partial \lambda(m,n,\mu)}{\partial m} \left(\frac{\lambda}{m}\right)^{-1} = \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1}\quad \text{and} \\
&\epsilon^\lambda_n = \frac{\partial \lambda(m,n,\mu)}{\partial n} \left(\frac{\lambda}{n}\right)^{-1} = \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1}\end{aligned}$$ where it satisfies that $$\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} = \frac{{\partial \Lambda(\varphi,\mu)}/{\partial \varphi}}{{\partial \Lambda(\varphi,\mu)}/{\partial \varphi} - {\partial \lambda(m,n,\varphi)}/{\partial \varphi}} = \left( 1 + \frac{|\partial \lambda(m,n,\varphi)/\partial \varphi|}{{\partial \Lambda(\varphi,\mu)}/{\partial \varphi}} \right)^{ -1} \in (0,1].\tag*{{\hfill\ensuremath{\square}}}$$
**Proof of Proposition \[proposition:pricing-effect\]:** From Proposition \[proposition:elasticity\], we can derive that $$\frac{\partial \varphi(p,q,\mu)}{\partial p} = \frac{\partial \varphi(m,n,\mu)}{\partial m}\frac{d m(p)}{d p} = \left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} \frac{\lambda}{m} \frac{d m(p)}{d p} = -\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} \lambda \tilde m^p < 0,$$ and therefore we have that $$\frac{\partial \lambda(p,q,\mu)}{\partial p} = \frac{\partial \Lambda(p,q,\mu)}{\partial p} = \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} \frac{\partial \varphi(p,q,\mu)}{\partial p} < 0$$ where we denote $\Lambda(p,q,\mu) \triangleq \Lambda\left(\varphi(p,q,\mu),\mu\right)$. Similarly, we can derive that $$\frac{\partial \varphi(p,q,\mu)}{\partial q} = -\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} \lambda \tilde n^q < 0 \quad \text{and} \quad \frac{\partial \lambda(p,q,\mu)}{\partial q} = \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi} \frac{\partial \varphi(p,q,\mu)}{\partial q}<0.$$ Besides, by Theorem \[theorem:elasticity\], it satisfies that $$\epsilon_p^\lambda:\epsilon_q^\lambda = \left(\epsilon_m^\lambda\epsilon_p^m\right) : \left(\epsilon_n^\lambda\epsilon_q^n\right)= \epsilon_p^m:\epsilon_q^n.\tag*{{\hfill\ensuremath{\square}}}$$
**Proof of Proposition \[proposition:profit\]:** From Proposition \[proposition:elasticity\] and Equation (\[equation:system elasticity\]), the impact of the capacity $\mu$ on the profit is $$\begin{aligned}
\frac{\partial U(p,q,\mu)}{\partial \mu} &= (p+q-c)\frac{\partial \lambda(p,q,\mu)}{\partial \mu} = (p+q-c)\frac{\partial \lambda(m(p),n(q),\mu)}{\partial \mu}\\
&= -(p+q-c)\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}mn\frac{d \rho(\varphi)}{d \varphi}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1}=(p + q -c)\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}(1-\epsilon^\lambda)>0.\end{aligned}$$ From Proposition \[proposition:pricing-effect\] and Equation (\[equation:system elasticity\]), the impacts of the two-sided prices on the profit are $$\begin{aligned}
\frac{\partial U(p,q,\mu)}{\partial p} &\!=\! \lambda \!+\! (p+q-c)\frac{\partial \lambda(p,q,\mu)}{\partial p} \!=\! \lambda \!-\! (p+q-c)\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} \!\!\lambda \tilde{m}^p \!=\! \lambda \!-\!(p+q-c)\epsilon^\lambda \lambda \tilde{m}^p, \\
\frac{\partial U(p,q,\mu)}{\partial q} &\!=\! \lambda \!+\! (p+q-c)\frac{\partial \lambda(p,q,\mu)}{\partial q} \!=\! \lambda \!-\! (p+q-c)\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} \!\!\lambda \tilde{n}^q \!=\! \lambda \!-\! (p+q-c)\epsilon^\lambda \lambda \tilde{n}^q.\tag*{{\hfill\ensuremath{\square}}}\end{aligned}$$
**Proof of Theorem \[theorem:KKT-Lerner\]:** By the Karush-Kuhn-Tucker (KKT) necessary conditions and Proposition \[proposition:profit\], if the two-sided prices $(p,q)$ maximize the profit $U$, we have the relations that $$\begin{aligned}
\label{equation:KKT proof}
\begin{cases}
\displaystyle\frac{\partial U(p,q,\mu)}{\partial p} = \lambda - (p+q-c)\epsilon^\lambda\lambda \tilde m^p = 0\vspace{0.03in}\\
\displaystyle\frac{\partial U(p,q,\mu)}{\partial q} = \lambda - (p+q-c)\epsilon^\lambda\lambda \tilde n^q = 0
\end{cases}\end{aligned}$$ from which we can derive Equation (\[equation:KKT-necesary\]), i.e., $$\tilde m^p = \tilde n^q = \frac{1}{(p+q-c)\epsilon^\lambda}.$$ Furthermore, by Definition \[def:elasticity\] and \[def:hazard-rate\], it satisfies that $$\begin{aligned}
\begin{cases}
\displaystyle\epsilon^\lambda\epsilon^m_p = \epsilon^\lambda p\tilde m^p = \frac{p}{p+q-c}\vspace{0.03in}\\
\displaystyle\epsilon^\lambda\epsilon^n_q = \epsilon^\lambda q\tilde n^q = \frac{q}{p+q-c}
\end{cases}\end{aligned}$$ from which the total price $p+q$ satisfies that $$\displaystyle\frac{p+q}{p+q-c} = \epsilon^\lambda\epsilon^m_p + \epsilon^\lambda\epsilon^n_q= \epsilon^\lambda_p + \epsilon^\lambda_q$$ and thus Equation (\[equation:Lerner-formula\]) holds. [$\square$]{}\
**Proof of Proposition \[proposition:welfare\]:** From Proposition \[proposition:elasticity\], the impact of the capacity $\mu$ on social welfare $W$ is $$\begin{aligned}
\frac{\partial W(p,q,\mu)}{\partial \mu} &= (s_m+s_n)\frac{\partial \lambda(m(p),n(q),\mu)}{\partial \mu} = \frac{W}{\lambda}\frac{\partial \lambda(m(p),n(q),\mu)}{\partial \mu} \\
&= -\frac{W}{mn\rho(\varphi)}mn\frac{d\rho(\varphi)}{d\varphi}\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} = W\tilde \rho^\varphi\frac{\partial \Lambda(\varphi,\mu)}{\partial \mu}\left(\frac{\partial g(m,n,\mu,\varphi)}{\partial \varphi}\right)^{-1} >0.\end{aligned}$$ From Proposition \[proposition:pricing-effect\] and Equation (\[equation:system elasticity\]), the impact of the user-side price $p$ on social welfare $W$ is $$\begin{aligned}
\displaystyle\frac{\partial W(p,q,\mu)}{\partial p}&= \frac{d s_m(p)}{d p}\lambda + \big(s_m + s_n\big)\frac{\partial \lambda(p,q,\mu)}{\partial p}=\frac{d}{d p} \left( \frac{S_m}{m} \right)\lambda - \big(s_m + s_n\big)\lambda\epsilon^\lambda\tilde{m}^p \\
&\displaystyle=\left(-\frac{S_m}{m^2}\frac{d m(p)}{d p}+\frac{1}{m}\frac{d S_m(p)}{d p}\right)\lambda - \big(s_m + s_n\big)\lambda\epsilon^\lambda\tilde{m}^p \displaystyle=\left(s_m\tilde{m}^p-1\right)\lambda - \big(s_m + s_n\big)\lambda\epsilon^\lambda\tilde{m}^p \\
&\displaystyle= \left(W_m \tilde{m}^p -\lambda\right) - W\epsilon^\lambda\tilde{m}^p = -\lambda - \tilde{m}^p\left[W_n-W(1-\epsilon^\lambda)\right]\end{aligned}$$ where $\displaystyle\frac{d S_m(p)}{d p} = -m(p)$ by the definitions of $S_m(p)$ and $m(p)$ in Equation (\[equation:def\_sm\]) and (\[equation:m\]). Similarly, the impact of the CP-side price $q$ on social welfare $W$ is $$\frac{\partial W(p,q,\mu)}{\partial q} = -\lambda - \tilde{n}^q\left[W_m-W(1-\epsilon^\lambda)\right].$$ When $\tilde{S}^p_m$ increases with the price $p$, we have that $$\frac{d \tilde S^p_m(p)}{d p}=\frac{d}{d p} \left( \frac{m}{S_m} \right)=\frac{\partial}{\partial p} \left( \frac{1}{s_m} \right) = -\frac{d s_m(p)}{d p}\left(\frac{1}{s_m}\right)^2>0$$ and thus $\displaystyle\frac{d s_m(p)}{d p}<0$ holds. Based on this condition, we have that $$\begin{aligned}
\frac{\partial W(p,q,\mu)}{\partial p} = \frac{d s_m(p)}{d p}\lambda + \big(s_m + s_n\big)\frac{\partial \lambda(p,q,\mu)}{\partial p} < \frac{d s_m(p)}{d p}\lambda <0\end{aligned}$$ implying that $W$ decreases with $p$. Similarly, we can prove that when $\tilde{S}^q_n$ increases with $q$, $W$ decreases with $q$. [$\square$]{}\
**Proof of Theorem \[theorem:social-welfare\]:** If prices $p$ and $q$ maximize social welfare $W$ and have a fixed total value, we have the first order condition $$\frac{\partial W(p,q,\mu)}{\partial p} - \frac{\partial W(p,q,\mu)}{\partial q} = 0.$$ By Proposition \[proposition:welfare\], we can derive that $$\frac{\tilde{m}^p}{W_m- W(1 - \epsilon^\lambda)} =\frac{\tilde{n}^q}{W_n- W(1 - \epsilon^\lambda)}.$$ Substituting the relations $W_m = s_m\lambda$ and $W_n = s_n\lambda$, we can get the equation that $$\frac{\tilde{m}^p}{s_m-(s_m+s_n)(1 - \epsilon^\lambda)} =\frac{\tilde{n}^q}{s_n-(s_m+s_n)(1 - \epsilon^\lambda)}$$ and thus Equation (\[equation:welfare\]) holds. [$\square$]{}\
**Proof of Corollary \[corollary:profit capacity\]:** Under any fixed capacity $\mu$, the two-sided price $(p,q)$ determines the AP’s profit $U$. Thus the profit is a function of the price and we can denote it by $U(p,q) \triangleq U(p,q,\mu)$. We use $H(p,q)$ to denote the Hessian matrix of the profit function $U(p,q)$, defined by $$\begin{aligned}
H(p,q)\triangleq
\begin{bmatrix}
\displaystyle\frac{\partial^2 U(p,q)}{\partial p^2} & \displaystyle\frac{\partial^2 U(p,q)}{\partial p \partial q}\vspace{0.05in}\\
\displaystyle\frac{\partial^2 U(p,q)}{\partial q \partial p} & \displaystyle\frac{\partial^2 U(p,q)}{\partial q^2}
\end{bmatrix}=
\begin{bmatrix}
\displaystyle\frac{\partial^2 U(p,q,\mu)}{\partial p^2} & \displaystyle\frac{\partial^2 U(p,q,\mu)}{\partial p \partial q}\vspace{0.05in}\\
\displaystyle\frac{\partial^2 U(p,q,\mu)}{\partial q \partial p} & \displaystyle\frac{\partial^2 U(p,q,\mu)}{\partial q^2}
\end{bmatrix}.\end{aligned}$$ We denote the determinant of $H(p,q)$ by $$D(p,q) \triangleq det\big(H(p,q)\big) = \frac{\partial^2 U(p,q,\mu)}{\partial p^2}\frac{\partial^2 U(p,q,\mu)}{\partial q^2}-\frac{\partial^2 U(p,q,\mu)}{\partial p \partial q}\frac{\partial^2 U(p,q,\mu)}{\partial q \partial p}.$$ Because the profit-optimal price $(p^*,q^*)$ is unique for any given capacity $\mu$, the prices $p^*$ and $q^*$ are functions of $\mu$ and thus we can write them as $p^*(\mu)$ and $q^*(\mu)$. Moreover, the unique profit-optimal price $(p^*,q^*)$ satisfies the first order condition $$\label{equation:first pqmu}
\frac{\partial U\big(p^*,q^*,\mu\big)}{\partial p^*} = \frac{\partial U\big(p^*,q^*,\mu\big)}{\partial q^*} = 0,$$ thus it is a critical point of the profit function $U$. Because $(p^*,q^*)$ is also a strict local maximum of $U$, we have that $$D(p^*,q^*)= \frac{\partial^2 U(p^*,q^*,\mu)}{\partial (p^*)^2}\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial q^* \partial p^*}\frac{\partial^2 U(p^*,q^*,\mu)}{\partial p^*\partial q^*}\ge 0$$ by the second partial derivative test.
Next, we prove the impact of the capacity $\mu$ on the optimal prices $p^*$ and $q^*$. By the identity in Equation (\[equation:first pqmu\]), there exists the identity that $$\begin{aligned}
\begin{cases}
\displaystyle\frac{d}{d \mu}\left(\frac{\partial U(p^*,q^*,\mu)}{\partial p^*}\right)= \frac{\partial^2 U(p^*,q^*,\mu)}{\partial (p^*)^2}\frac{\partial p^*(\mu)}{\partial \mu} + \frac{\partial^2 U(p^*,q^*,\mu)}{\partial q^* \partial p^*}\frac{\partial q^*(\mu)}{\partial \mu} + \frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial p^*} = 0\vspace{0.05in}\\
\displaystyle\frac{d}{d \mu}\Big(\frac{\partial U(p^*,q^*,\mu)}{\partial q^*}\Big) = \frac{\partial^2 U(p^*,q^*,\mu)}{\partial p^*\partial q^*}\frac{\partial p^*(\mu)}{\partial \mu} + \frac{\partial^2 U(p^*,q^*,\mu)}{\partial (q^*)^2}\frac{\partial q^*(\mu)}{\partial \mu} + \frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial q^*} = 0
\end{cases}\end{aligned}$$ from which we can derive that $$\begin{aligned}
\label{equation:basic expression mu}
\begin{cases}
\displaystyle\frac{\partial p^*(\mu)}{\partial \mu} = \frac{-1}{D(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (q^*)^2}\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial p^*} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial q^* \partial p^*}\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial q^*} \right)\vspace{0.05in}\\
\displaystyle\frac{\partial q^*(\mu)}{\partial \mu} = \frac{-1}{D(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (p^*)^2}\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial q^*} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial p^* \partial q^*}\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial p^*} \right).
\end{cases}\end{aligned}$$ We denote the inverse function of system congestion $\varphi\big(m(p),n(q),\mu\big)$ with respect to $\mu$ by $\Xi\big(m(p),n(q),\varphi\big)$. By Equation (\[equation:system elasticity\]), the elasticity $\epsilon^\lambda$ of system throughput can be written as a function of $p,q,$ and $\varphi$: $$\epsilon^\lambda(p,q,\varphi) = \left(1-\frac{m(p)n(q)d\rho(\varphi)/d\varphi}{\partial \Lambda(\varphi,\Xi)/\partial \varphi}\right)^{-1}.$$ By Equation (\[equation:KKT proof\]), we can derive that $$\begin{aligned}
&\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial p^*} = \frac{\partial}{\partial \mu}\left[\lambda(p^*,q^*,\mu) -(p^*+q^*-c)\epsilon^\lambda(p^*,q^*,\varphi) \lambda(p^*,q^*,\mu) \tilde{m}^p(p^*)\right] \\
&=\frac{\partial \lambda(p^*,q^*,\mu)}{\partial \mu}\left[1\!-\!(p^*\!+\!q^*\!-\!c)\epsilon^\lambda(p^*,q^*,\varphi)\tilde{m}^p(p^*)\right] - (p^*\!+\!q^*\!-\!c)\frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi}\frac{\partial \varphi(p^*,q^*,\mu)}{\partial \mu} \lambda(p^*,q^*,\mu) \tilde{m}^p(p^*)\\
&= - (p^*+q^*-c)\frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi}\frac{\partial \varphi(p^*,q^*,\mu)}{\partial \mu} \lambda(p^*,q^*,\mu) \tilde{m}^p(p^*).\end{aligned}$$ Similarly, we can derive that $$\begin{aligned}
\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial q^*} = - (p^*+q^*-c)\frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi}\frac{\partial \varphi(p^*,q^*,\mu)}{\partial \mu} \lambda(p^*,q^*,\mu) \tilde{n}^q(q^*). \end{aligned}$$ By Theorem \[theorem:KKT-Lerner\], $\tilde{m}^p(p^*)= \tilde{n}^q(q^*)$ and thus it satisfies that $\displaystyle\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial p^*} = \frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial q^*}$. Because $\displaystyle\frac{\partial \varphi(p^*,q^*,\mu)}{\partial \mu}<0$ by Proposition \[proposition:elasticity\], we have that $$\label{equation:sign mu}
\operatorname{sgn}\left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial p^*}\right) = \operatorname{sgn}\left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial \mu\partial q^*}\right) = \operatorname{sgn}\left(\frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi}\right).$$ Besides, by the monotone conditions in Assumption \[ass:hazard rate\], we have $\displaystyle\frac{d \tilde m^p(p^*)}{d p},\displaystyle\frac{d \tilde n^q(q^*)}{d q}>0$, and therefore, by Proposition \[proposition:profit\], we can derive that $$\begin{aligned}
\begin{cases}
\displaystyle\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial q^* \partial p^*} = - (p^*+q^*-c)\lambda(p^*,q^*,\mu)\epsilon^\lambda(p^*,q^*,\varphi)\frac{d \tilde n^q(q^*)}{d q} <0\vspace{0.05in}\\
\displaystyle\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (p^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial p^* \partial q^*} = - (p^*+q^*-c)\lambda(p^*,q^*,\mu)\epsilon^\lambda(p^*,q^*,\varphi)\frac{d \tilde m^p(p^*)}{d p} < 0.
\end{cases}
\label{equation:hess}\end{aligned}$$ Combining Equations (\[equation:basic expression mu\]), (\[equation:sign mu\]) and (\[equation:hess\]), we can derive that the signs of the marginal prices satisfy $$\begin{aligned}
\begin{cases}
\displaystyle\operatorname{sgn}\left(\frac{\partial p^*(\mu)}{\partial \mu} \right) = \operatorname{sgn}\left[ \frac{-1}{D(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (q^*)^2}- \frac{\partial^2 U(p^*,q^*,\mu)}{\partial q^* \partial p^*}\right)\frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi} \right] = \operatorname{sgn}\left( \frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi} \right)\vspace{0.05in}\\
\displaystyle\operatorname{sgn}\left(\frac{\partial q^*(\mu)}{\partial \mu} \right) = \operatorname{sgn}\left[ \frac{-1}{D(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (p^*)^2}- \frac{\partial^2 U(p^*,q^*,\mu)}{\partial p^* \partial q^*}\right)\frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi} \right] = \operatorname{sgn}\left( \frac{\partial \epsilon^\lambda(p^*,q^*,\varphi)}{\partial \varphi} \right).
\end{cases}\end{aligned}$$ and the ratio of the marginal prices satisfies $$\frac{\partial p^*(\mu)}{\partial \mu} \!:\! \frac{\partial q^*(\mu)}{\partial \mu} \! =\! \left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial q^* \partial p^*}\right) \!:\! \left(\frac{\partial^2 U(p^*,q^*,\mu)}{\partial (p^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu)}{\partial p^* \partial q^*}\right) \!=\! \frac{d \tilde{n}^q(q^*)}{d q} \!:\! \frac{d \tilde{m}^p(p^*)}{d p}.\tag*{{\hfill\ensuremath{\square}}}$$
**Proof of Corollary \[corollary:welfare capacity\]:** Under the constraint $p+q=c$, the social welfare $W$ can be denoted by $W(p,\mu)\triangleq W(p,c-p,\mu) = W(p,q,\mu)$. Because $p^\circ$ is the unique welfare-optimal price for any given capacity $\mu$, the price $p^\circ$ is a function of $\mu$ and thus we can write it as $p^\circ(\mu)$. Moreover, the unique welfare-optimal price $p^\circ$ must be a strict local maximum of the welfare function $W$, thus we have the first order condition for any capacity $\mu$: $$\begin{aligned}
\frac{\partial W(p^\circ,\mu)}{\partial p^\circ} = &\frac{\partial W(p^\circ,q^\circ,\mu)}{\partial p^\circ} - \frac{\partial W(p^\circ,q^\circ,\mu)}{\partial q^\circ}=\tilde{n}^q(q^\circ)\left[W_m(p^\circ,q^\circ,\mu)-W(p^\circ,q^\circ,\mu)(1-\epsilon^\lambda(p^\circ,q^\circ,\varphi))\right]\notag\\
&-\tilde{m}^p(p^\circ)\left[W_n(p^\circ,q^\circ,\mu)-W(p^\circ,q^\circ,\mu)(1-\epsilon^\lambda(p^\circ,q^\circ,\varphi))\right]=0\label{equation:use}\end{aligned}$$ by Proposition \[proposition:welfare\]. So there exists the identity that $$\frac{d}{d\mu}\left(\frac{\partial W(p^\circ,\mu)}{\partial p^\circ}\right) = \frac{\partial^2 W(p^\circ,\mu)}{\partial (p^\circ)^2}\frac{\partial p^\circ(\mu)}{\partial \mu} + \frac{\partial^2 W(p^\circ,\mu)}{\partial \mu\partial p^\circ} = 0$$ from which we can derive that $$\label{equation:p1}
\frac{\partial p^\circ(\mu)}{\partial \mu} = -\left(\frac{\partial^2 W(p^\circ,\mu)}{\partial (p^\circ)^2}\right)^{-1}\frac{\partial^2 W(p^\circ,\mu)}{\partial \mu\partial p^\circ}.$$ By Equation (\[equation:use\]), it satisfies that $$\begin{aligned}
\frac{\partial^2 W(p^\circ,\mu)}{\partial \mu\partial p^\circ} &=\frac{1}{\lambda(p^\circ,q^\circ,\mu)}\frac{\partial \lambda(p^\circ,q^\circ,\mu)}{\partial \mu}\frac{\partial W(p^\circ,\mu)}{\partial p^\circ}-\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big)W(p^\circ,q^\circ,\mu)\frac{\partial \epsilon^\lambda(p^\circ,q^\circ,\varphi)}{\partial \varphi}\frac{\partial \varphi(p^\circ,q^\circ,\mu)}{\partial \mu}\\
&=-\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big)W(p^\circ,q^\circ,\mu)\frac{\partial \epsilon^\lambda(p^\circ,q^\circ,\varphi)}{\partial \varphi}\frac{\partial \varphi(p^\circ,q^\circ,\mu)}{\partial \mu}.\end{aligned}$$ Because $\displaystyle\frac{\partial \varphi(p^\circ,q^\circ,\mu)}{\partial \mu}<0$ from Proposition \[proposition:elasticity\], it satisfies that $$\operatorname{sgn}\left(\frac{\partial^2 W(p^\circ,\mu)}{\partial \mu\partial p^\circ}\right) = \operatorname{sgn}\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big)\cdot\operatorname{sgn}\left(\frac{\partial \epsilon^\lambda(p^\circ,q^\circ,\varphi)}{\partial \varphi}\right).$$ Because $p^\circ$ is a strict local maximum of $W$, we have the second order condition $\displaystyle\frac{\partial^2 W(p^\circ,\mu)}{\partial (p^\circ)^2}<0$. Furthermore, by Equation (\[equation:p1\]), the sign of the marginal price is $$\operatorname{sgn}\left(\frac{\partial p^\circ(\mu)}{\partial \mu}\right)= \operatorname{sgn}\left(\frac{\partial^2 W(p^\circ,\mu)}{\partial \mu\partial p^\circ}\right)= \operatorname{sgn}\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big)\cdot\operatorname{sgn}\left(\frac{\partial \epsilon^\lambda(p^\circ,q^\circ,\varphi)}{\partial \varphi}\right).$$ Besides, under the constraint $p^\circ+q^\circ =c$, we have $q^\circ(\mu) = c - p^\circ(\mu)$ for any capacity $\mu$ and thus there exists the identity that $\displaystyle\operatorname{sgn}\left(\frac{\partial p^\circ(\mu)}{\partial \mu}\right)= -\operatorname{sgn}\left(\frac{\partial q^\circ(\mu)}{\partial \mu}\right)$. [$\square$]{}\
**Proof of Proposition \[equation:elasticity relation\]:** If the congestion function is $\Phi(\lambda,\mu)=\lambda/\mu$, its inverse function with respect to $\lambda$ is $\Lambda(\phi,\mu) = \phi\mu$. From Equation (\[equation:system elasticity\]) and Definition \[def:elasticity\], the elasticity of system throughput satisfies that $$\begin{aligned}
\epsilon^\lambda &= \left(1 - \frac{mnd \rho(\varphi)/d\varphi}{\partial \Lambda(\varphi,\mu)/\partial \varphi}\right)^{-1} = \left(1 - \frac{mnd\rho(\varphi)/d \varphi}{\mu}\right)^{-1} \\
&= \left(1 - \frac{mn\varphi}{\lambda}\frac{d \rho(\varphi)}{d \varphi}\right)^{-1} = \left(1 - \frac{mn\varphi}{mn\rho}\frac{d \rho(\varphi)}{d \varphi}\right)^{-1} = \frac{1}{1 + \epsilon^\rho_\varphi}.\tag*{{\hfill\ensuremath{\square}}}\end{aligned}$$
**Proof of Corollary \[corollary:profit sensitivity\]:** When the gain function is extended to be $\rho(\phi,s)$, the system congestion and throughput are extended to be $\varphi(p,q,\mu,s)$ and $$\label{equation:extension_lambda}
\lambda(p,q,\mu,s)\triangleq \lambda\big(p,q,s,\varphi(p,q,\mu,s)\big) = m(p)n(q)\rho\big(\varphi(p,q,\mu,s),s\big),$$ respectively. Correspondingly, we can denote the AP’s profit by $U(p,q,\mu,s) \triangleq (p+q-c)\lambda(p,q,\mu,s)$. Under any fixed capacity $\mu$ and sensitivity $s$, the two-sided price $(p,q)$ determines the AP’s profit $U$. Thus the profit is a function of the price and we can denote it by $U(p,q) \triangleq U(p,q,\mu,s)$. We use $H_s(p,q)$ to denote the Hessian matrix of the profit function $U(p,q)$, defined by $$\begin{aligned}
H_s(p,q)\triangleq
\begin{bmatrix}
\displaystyle\frac{\partial^2 U(p,q)}{\partial p^2} & \displaystyle\frac{\partial^2 U(p,q)}{\partial p \partial q}\vspace{0.05in}\\
\displaystyle\frac{\partial^2 U(p,q)}{\partial q \partial p} & \displaystyle\frac{\partial^2 U(p,q)}{\partial q^2}
\end{bmatrix}=
\begin{bmatrix}
\displaystyle\frac{\partial^2 U(p,q,\mu,s)}{\partial p^2} & \displaystyle\frac{\partial^2 U(p,q,\mu,s)}{\partial p \partial q}\vspace{0.05in}\\
\displaystyle\frac{\partial^2 U(p,q,\mu,s)}{\partial q \partial p} & \displaystyle\frac{\partial^2 U(p,q,\mu,s)}{\partial q^2}
\end{bmatrix}\end{aligned}$$ We denote the determinant of $H_s(p,q)$ by $$D_s(p,q) \triangleq det\big(H_s(p,q)\big) = \frac{\partial^2 U(p,q,\mu,s)}{\partial p^2}\frac{\partial^2 U(p,q,\mu,s)}{\partial q^2}-\frac{\partial^2 U(p,q,\mu,s)}{\partial p \partial q}\frac{\partial^2 U(p,q,\mu,s)}{\partial q \partial p}.$$ Because the profit-optimal price $(p^*,q^*)$ is unique for any given capacity $\mu$ and sensitivity $s$, the prices $p^*$ and $q^*$ are functions of $\mu$ and $s$ and thus we can write them as $p^*(\mu,s)$ and $q^*(\mu,s)$. Moreover, the unique profit-optimal price $(p^*,q^*)$ satisfies the first order condition $$\label{equation:first pqmus}
\frac{\partial U\big(p^*,q^*,\mu,s\big)}{\partial p^*} = \frac{\partial U\big(p^*,q^*,\mu,s\big)}{\partial q^*} = 0,$$ thus it is a critical point of the profit function $U$. Because $(p^*,q^*)$ is also a strict local maximum of $U$, we have that $$D_s(p^*,q^*)= \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (p^*)^2}\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial q^* \partial p^*}\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial p^*\partial q^*}\ge 0$$ by the second partial derivative test. Similar to Equation (\[equation:basic expression mu\]), we can derive that $$\begin{aligned}
\label{equation:basic expression s}
\begin{cases}
\displaystyle\frac{\partial p^*(\mu,s)}{\partial s} = \frac{-1}{D_s(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (q^*)^2}\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial p^*} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial q^* \partial p^*}\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial q^*} \right)\vspace{0.05in}\\
\displaystyle\frac{\partial q^*(\mu,s)}{\partial s} = \frac{-1}{D_s(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (p^*)^2}\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial q^*} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial p^* \partial q^*}\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial p^*} \right).
\end{cases}\end{aligned}$$ Because the gain function satisfies $\displaystyle\frac{\partial \rho(\phi,s_1)}{\partial \phi}>\frac{\partial \rho(\phi,s_2)}{\partial \phi}$ for all $s_1<s_2$, i.e., $\displaystyle\frac{\partial^2 \rho}{\partial\phi\partial s}<0$, we can derive that $$\label{eq:rho_phi_s}
\frac{\partial \rho(\phi,s)}{\partial s} = \int_0^\phi \frac{\partial^2 \rho(t,s)}{\partial t\partial s} dt<0.$$ Furthermore, under the identity $g(p,q,\mu,s)\triangleq \Lambda\big(\varphi(p,q,\mu,s),\mu\big) - \lambda(p,q,\mu,s)=0$, we can derive that $$\begin{aligned}
&\frac{\partial g(p,q,\mu,s)}{\partial s} = \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\frac{\partial \varphi(p,q,\mu,s)}{\partial s}-\frac{\partial \lambda(p,q,\mu,s)}{\partial s} \\
&= \frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\frac{\partial \varphi(p,q,\mu,s)}{\partial s}-\frac{\partial \lambda(p,q,s,\varphi)}{\partial s}-\frac{\partial \lambda(p,q,s,\varphi)}{\partial \varphi}\frac{\partial \varphi(p,q,\mu,s)}{\partial s}=0,\end{aligned}$$ from which we have $$\label{eq:phi_s}
\frac{\partial \varphi(p,q,\mu,s)}{\partial s} \!=\! \frac{\partial \lambda(p,q,s,\varphi)}{\partial s} \left(\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\!-\!\frac{\partial \lambda(p,q,s,\varphi)}{\partial \varphi}\right)^{-1}\!\!\! =\! mn\frac{\partial \rho(\varphi,s)}{\partial s} \left(\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\!-\!\frac{\partial \lambda(p,q,s,\varphi)}{\partial \varphi}\right)^{-1}\!\!< 0.$$ Furthermore, by Equation (\[equation:system elasticity\]), it satisfies that $$\label{eq:epsilon_s}
\frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial s}= m(p)n(q)(\epsilon^\lambda)^2\frac{\partial^2 \rho(\varphi,s)/\partial \varphi \partial s}{\partial \Lambda(\varphi,\mu)/\partial \varphi}<0.$$ Because the inverse $\Lambda$ of the congestion function satisfies $\displaystyle\frac{\partial \Lambda(\phi,\mu_1)}{\partial \phi} \le \frac{\partial \Lambda(\phi,\mu_2)}{\partial \phi}$ for all $\mu_1<\mu_2$, i.e., $\displaystyle\frac{\partial^2 \Lambda}{\partial\phi\partial\mu}\ge 0$, by Equation (\[equation:system elasticity\]), we have that $$\label{eq:epsilon_mu}
\begin{aligned}
\frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial \mu}= -mn(\epsilon^\lambda)^2\frac{\partial \rho(\varphi,s)}{\partial \varphi}\frac{\partial^2\Lambda(\varphi,\mu)}{\partial \varphi\partial \mu}\left(\frac{\partial \Lambda(\varphi,\mu)}{\partial \varphi}\right)^{-2}\ge0.
\end{aligned}$$ We denote the inverse of the equilibrium congestion function $\varphi(p,q,\mu,s)$ with respect to $\mu$ by $\Xi(p,q,\varphi,s)$ and denote $\epsilon^\lambda(p,q,\varphi,s) \triangleq \epsilon^\lambda\big(p,q,\Xi(p,q,\varphi,s),s,\varphi\big)$. By Equation (\[eq:epsilon\_mu\]) and the assumption $\displaystyle\frac{\partial \epsilon^\lambda(p,q,\varphi,s)}{\partial \varphi}>0$ in Corollary \[corollary:profit sensitivity\], we can derive that $$\begin{aligned}
\frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial \varphi} &= \frac{\partial \epsilon^\lambda(p,q,\varphi,s)}{\partial \varphi} - \frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial \mu} \frac{\partial \Xi(p,q,\varphi,s)}{\partial \varphi}\notag\\
&> - \frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial \mu} \frac{\partial \Xi(p,q,\varphi,s)}{\partial \varphi} = -\frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial \mu}\left(\frac{\partial\varphi(p,q,\mu,s)}{\partial \mu}\right)^{ -1}\ge0.\label{eq:epsilon_phi}\end{aligned}$$ Furthermore, by Equation (\[eq:phi\_s\]), (\[eq:epsilon\_s\]) and (\[eq:epsilon\_phi\]), we derive $$\frac{\partial \epsilon^\lambda(p,q,\mu,s)}{\partial s} = \frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial s} + \frac{\partial \epsilon^\lambda(p,q,\mu,s,\varphi)}{\partial \varphi}\frac{\partial\varphi(p,q,\mu,s)}{\partial s}<0.
\label{equation:congestion_s}$$ Based on Proposition \[proposition:profit\], Theorem \[theorem:KKT-Lerner\] and Equation (\[equation:congestion\_s\]), we have the relation that $$\begin{aligned}
\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial p^*} &= -(p^*+q^*-c)\lambda(p^*,q^*,\mu,s)\frac{\partial \epsilon^\lambda(p^*,q^*,\mu,s)}{\partial s}\tilde{m}^p(p^*)\notag\\
&= -(p^*+q^*-c)\lambda(p^*,q^*,\mu,s)\frac{\partial \epsilon^\lambda(p^*,q^*,\mu,s)}{\partial s}\tilde{n}^q(q^*) = \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial q^*}>0.\label{equation:pqmus}\end{aligned}$$ Besides, by the monotone conditions in Assumption \[ass:hazard rate\], we have $\displaystyle\frac{d \tilde m^p(p^*)}{d p},\displaystyle\frac{d \tilde n^q(q^*)}{d q}>0$, and therefore, by Proposition \[proposition:profit\], we can derive that $$\begin{aligned}
\begin{cases}
\displaystyle\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial q^* \partial p^*} = - (p^*+q^*-c)\lambda(p^*,q^*,\mu,s)\epsilon^\lambda(p^*,q^*,\mu, s)\frac{d \tilde n^q(q^*)}{d q} <0\vspace{0.05in}\\
\displaystyle\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (p^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial p^* \partial q^*} = - (p^*+q^*-c)\lambda(p^*,q^*,\mu,s)\epsilon^\lambda(p^*,q^*,\mu,s)\frac{d \tilde m^p(p^*)}{d p} < 0.
\end{cases}\end{aligned}$$ Combining it with Equation (\[equation:basic expression s\]) and (\[equation:pqmus\]), we further have $$\begin{aligned}
\begin{cases}
\displaystyle\frac{\partial p^*(\mu,s)}{\partial s} = \frac{-1}{D_s(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial q^* \partial p^*}\right)\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial p^*}>0\vspace{0.05in}\\
\displaystyle\frac{\partial q^*(\mu,s)}{\partial s} = \frac{-1}{D_s(p^*,q^*)}\left(\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (p^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial p^* \partial q^*}\right)\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial s\partial q^*}>0
\end{cases}\end{aligned}$$ showing that the optimal prices $p^*$ and $q^*$ both increase with the congestion sensitivity $s$ and the ratio of the marginal prices satisfies that $$\begin{aligned}
\frac{\partial p^*(\mu,s)}{\partial s} : \frac{\partial q^*(\mu,s)}{\partial s} &= \left(\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (q^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial q^* \partial p^*} \right) : \left(\frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial (p^*)^2} - \frac{\partial^2 U(p^*,q^*,\mu,s)}{\partial p^* \partial q^*}\right)\\
& = \frac{d \tilde{n}^q(q^*)}{d q} : \frac{d \tilde{m}^p(p^*)}{d p}.\tag*{{\hfill\ensuremath{\square}}}
\end{aligned}$$
**Proof of Corollary \[corollary:welfare sensitivity\]:** When the gain function is extended to be $\rho(\phi,s)$, we can denote the social welfare by $W(p,q,\mu,s) \triangleq \big(s_m(p)+s_n(q)\big)\lambda(p,q,\mu,s)$ based on Equation (\[equation:extension\_lambda\]). Under the constraint $p+q=c$, the social welfare $W$ can be denoted by $W(p,\mu,s)\triangleq W(p,c-p,\mu,s) = W(p,q,\mu,s)$. Because $p^\circ$ is the unique welfare-optimal price for any given capacity $\mu$ and sensitivity $s$, the price $p^\circ$ is a function of $\mu$ and $s$ and thus we can write it as $p^\circ(\mu,s)$. Moreover, the unique welfare-optimal price $p^\circ$ must be a strict local maximum of the welfare function $W$, thus we have the first order condition for any capacity $\mu$ and sensitivity $s$: $$\begin{aligned}
\frac{\partial W(p^\circ,\mu,s)}{\partial p^\circ} \!= &\frac{\partial W(p^\circ,q^\circ,\mu,s)}{\partial p^\circ}\! -\! \frac{\partial W(p^\circ,q^\circ,\mu,s)}{\partial q^\circ}=\tilde{n}^q(q^\circ)\Big[W_m(p^\circ,q^\circ,\mu,s)\!-\!W(p^\circ,q^\circ,\mu,s)\big(1-\epsilon^\lambda(p^\circ,q^\circ,\mu,s)\big)\Big]\notag\\
&-\tilde{m}^p(p^\circ)\Big[W_n(p^\circ,q^\circ,\mu,s)-W(p^\circ,q^\circ,\mu,s)\big(1-\epsilon^\lambda(p^\circ,q^\circ,\mu,s)\big)\Big]=0\label{equation:use2}\end{aligned}$$ by Proposition \[proposition:welfare\]. Similar to Equation (\[equation:p1\]), we can derive that $$\label{equation:p2}
\frac{\partial p^\circ(\mu,s)}{\partial s} = -\left(\frac{\partial^2 W(p^\circ,\mu,s)}{\partial (p^\circ)^2}\right)^{-1} \frac{\partial^2 W(p^\circ,\mu,s)}{\partial s\partial p^\circ}.$$ By Equation (\[equation:use2\]), we can derive that $$\begin{aligned}
\frac{\partial^2 W(p^\circ,\mu,s)}{\partial s\partial p^\circ}& =\frac{1}{\lambda}\frac{\partial \lambda(p^\circ,q^\circ,\mu,s)}{\partial s}\frac{\partial W(p^\circ,\mu,s)}{\partial p^\circ}-\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big)W(p^\circ,\mu,s)\frac{\partial \epsilon^\lambda(p^\circ,q^\circ,\mu,s)}{\partial s}\\
&= -\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big)W(p^\circ,\mu,s)\frac{\partial \epsilon^\lambda(p^\circ,q^\circ,\mu,s)}{\partial s}.\end{aligned}$$ By Equation (\[equation:congestion\_s\]), it satisfies that $$\operatorname{sgn}\left(\frac{\partial^2 W(p^\circ,\mu,s)}{\partial s\partial p^\circ}\right) = \operatorname{sgn}\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big).$$ Because $p^\circ$ is a strict local maximum of $W$, we have the second order condition $\displaystyle\frac{\partial^2 W(p^\circ,\mu,s)}{\partial (p^\circ)^2}<0$. Furthermore, by Equation (\[equation:p2\]), the sign of the marginal price satisfies that $$\operatorname{sgn}\left(\frac{\partial p^\circ(\mu,s)}{\partial s}\right)= \operatorname{sgn}\left(\frac{\partial^2 W(p^\circ,\mu,s)}{\partial s\partial p^\circ}\right) = \operatorname{sgn}\big(\tilde{m}^p(p^\circ)-\tilde{n}^q(q^\circ)\big).$$ Besides, under the constraint $p^\circ+q^\circ =c$, we have $q^\circ(\mu,s) = c - p^\circ(\mu,s)$ for any $\mu,s$ and thus there exists the identity that $\displaystyle\operatorname{sgn}\left(\frac{\partial p^\circ(\mu,s)}{\partial s}\right)= -\operatorname{sgn}\left(\frac{\partial q^\circ(\mu,s)}{\partial s}\right)$. [$\square$]{}
[^1]: ComcastXFINITY, http://www.xfinity.com
[^2]: Netflix, https://www.netflix.com
[^3]: AT&T Sponsored Data, www.att.com/att/sponsoreddata
[^4]: The U.S. FCC’s Open Internet Order, https://www.fcc.gov/document/fcc-releases-open-internet-order
[^5]: T-Mobile, http://www.t-mobile.com
[^6]: Interested readers are referred to Appendix \[appendix:a\] for details of the case of M/M/1 queuing delay.
[^7]: Presently, APs usually implement the two-sided schemes in the form of paid peering or sponsored data plan on the CP side, whose pricing information has remained trade secrets. So there is no public data trace yet to evaluate our two-sided pricing model.
---
author:
- |
Fabio Dominguez\
Department of Physics, Columbia University, New York, NY, 10027, USA\
Email:
title: Particle production in DIS off a shockwave in AdS
---
Introduction
============
In recent years there have been numerous attempts to explain the results of heavy ion collisions suggesting the existence of a strongly interacting deconfined QCD plasma. The complications of dealing with a strongly coupled theory in the framework of familiar quantum field theories have led to the use of methods from string theory via the AdS/CFT correspondence [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj; @Son:2007vk]. Most of the calculations performed to date are set in the supergravity scenario, where the ’t Hooft coupling $\lambda=g^{2}N$ of the gauge theory is assumed to be large by taking the number of colors to infinity while the gauge coupling $g$ remains small. This scenario is chosen for practical purposes, since in that approximation the string theory side of the correspondence is reduced to a supergravity theory (string excitations are suppressed).
The conformally symmetric $\mathcal{N}=4$ SYM is not expected to reproduce accurately all regimes of QCD, but for the case of the strongly coupled plasma created in heavy ion collisions it is widely believed to give sensible results. To be able to use the possible similarities between the two different theories within this context, it is important to find a suitable gravity dual for a fast moving nucleus. A natural choice would be a highly boosted slab of matter, as the one described in [@Mueller:2008bt], but this scenario has the inconvenience of using a metric which is not an exact solution of Einstein’s equations. Taking into account that in the limit of an infinite boost the finite slab of matter is supposed to look like a shockwave, it is more convenient to work with a shockwave metric (proved to be an exact solution to the equations of motion). This choice was first suggested in [@Janik:2005zt] but since then different alternatives to define more general scenarios have been developed [@Beuf:2009mk; @Avsar:2009xf; @Albacete:2008ze; @Gubser:2008pc].
Different attempts to explore the properties of the shockwave metric in AdS$_{5}$ have led to interesting results. The first computation of the scattering of a field in a shock wave in AdS was done in [@Cornalba:2006xk; @Cornalba:2006xm]. Subsequent studies on the resummation of graviton exchanges and the computation of structure functions include [@Cornalba:2007zb; @Brower:2007qh; @Albacete:2008ze; @Avsar:2009xf]. In particular we focus on the deep inelastic scattering analysis presented in [@Avsar:2009xf], where explicit expressions for the fields representing the probes are calculated but the main focus is on the calculation of the structure functions, which are obtained relying on the optical theorem and using a forward scattering amplitude. In this paper we choose to take a different path: using the $\mathcal{R}$-current as a probe, we follow the propagation of the gravity wave after the scattering process. Using the explicit expressions of the fields calculated in [@Avsar:2009xf], we calculate the energy-momentum tensor associated with the produced states after the collision and give a physical picture of the results.
The original projectile considered is assumed to be a space-like $\mathcal{R}$-current. The scattered field is calculated from a multiple scattering approach which takes the form of an eikonal phase when the shockwave is considered to be a $\delta$-function in $x^{-}$. From the point of view of the shockwave as a finite slab of matter under a large boost, the eikonal picture is consistent only if the coherence length of the incoming probe is larger than the width of the target. This condition was observed in [@Mueller:2008bt]; it also agrees with the analysis of the validity of the $\delta$-function approximation in [@Avsar:2009xf], and it is consistent with what we find in Eq. (\[cohlength\]) in our attempt to localize the energy flow. When we look at the scattered field we notice it can be regarded as a combination of space-like and time-like vacuum modes. The appearance of the time-like modes after the scattering resembles the known process of particle production in deep inelastic scattering, where real (time-like) particles appear in the final state after a collision involving a highly space-like probe. Based on this observation, we follow the propagation of these produced time-like states and argue that the picture of particle production is applicable in this context.
Given that the calculations are performed in the classical approximation, by solving the classical equations of motion, the particle production picture is not completely clear in the usual sense. Previous calculations regarding scattering processes with various targets [@Cornalba:2007zb; @Brower:2007qh; @Polchinski:2002jw; @Levin:2009vj; @Hatta:2007cs; @'tHooft:1987rb] show that the main contribution to the scattering amplitudes, in the strong coupling limit, comes from elastic or quasi-elastic processes, where particle production doesn’t play an important role since at high energy they are dominated by graviton exchange. Other mechanisms for multiparticle production have been suggested [@Kharzeev:2009pa] as an attempt to get a consistent picture for the creation of a quark-gluon plasma. We argue that this is not necessary to get a consistent picture of particle production in deep inelastic scattering. In this classical calculation, particle production is manifest through the presence of time-like modes in the expansion of the fields after the scattering.
Our argument to support this particle production picture is based on the calculation of the energy-momentum tensor of the $\mathcal{R}$-current. The inelastic part of the scattering, which is associated with the imaginary part of the action, is directly related to the energy flow in the fifth dimension, which is caused by the presence of the time-like modes in the scattered field. As a comparison with previous calculations of the structure functions, this can be seen as an analog of the optical theorem. Our calculation gives the sum over final states of the amplitude squared, which has to be proportional to the imaginary part of the forward current-current correlator (see Fig. \[figot\]). The energy flow in the fifth dimension is also compared to the initial state, where the incoming probe is assumed to be carrying energy in the $x^{-}$ direction. This comparison gives a clear picture of the scattering process in the saturation regime and shows a clear difference in the behavior of the transverse and longitudinal components.
![The photon line attached to the boundary represents a space-like state, like the incoming wave. The photon line going down the fifth dimension represents a time-like state. The region in between the dashed lines is the shockwave where the graviton exchanges take place.[]{data-label="figot"}](opthe)
Keeping in mind the picture of the energy flow in the fifth dimension representing the inelastic part of the collision and, therefore, the produced particles, we turn our attention to determining the momentum of the outgoing particles. The explicit expressions for the scattered fields written in terms of vacuum states suggest that the coefficient functions should be considered as the probabilities for the different modes to be produced in any given scattering process. This picture is not completely clear since the basis of vacuum states is not a complete set of generators. To lift this ambiguity we rely on our calculation of the energy-momentum tensor to find which states (labeled by their virtualities) are the ones contributing the most to the energy flow. Given the separation between transverse and longitudinal components in the calculation of the energy-momentum tensor, we can find different regions for both cases and find a direct relation with the transverse and longitudinal structure functions found in [@Mueller:2008bt; @Avsar:2009xf].
The last section is devoted to finding an approximate trajectory of the energy flow in the fifth dimension. Since the incoming probe is assumed to be moving along the $x^{-}$ direction, we attempt to estimate where in the $x^{-}$ coordinate the energy flow is localized for any given value of $z$. For large values of $z$ we find a linear relation between $x^{-}$ and $z$ that doesn’t depend on the properties of the target. It is also interesting to see how the coherence length of the projectile shows up as a restriction on where the different modes can be resolved, supporting the fact that the eikonal approximation is valid only as long as this coherence length is larger than the size of the target.
General setup and scattered wave
================================
The general form of the shock wave metric, in Fefferman-Graham coordinates, is [@Janik:2005zt] $$ds^{2}=\frac{R^{2}}{z^{2}}\left(dz^{2}-2dx^{+}dx^{-}+dx_{\perp}^{2}+h(z,x^{-},x_{\perp})(dx^{-})^{2}\right).\label{metric}$$ There are different procedures used in the literature [@Avsar:2009xf; @Beuf:2009mk; @Janik:2005zt; @Albacete:2008ze; @Gubser:2008pc] to determine and physically motivate the appropriate shape of the shock wave, which is encoded in the function $h(z,x^{-},x_{\perp})$. Here, for simplicity, we assume no dependence on the transverse coordinates (a homogeneous infinite wall), in which case the function $h$ takes the simple form $$h(z,x^{-})=\frac{2\pi^{2}}{N^{2}}z^{4}\langle T_{--}(x^{-})\rangle\, ,\label{homsw}$$ where $T_{--}$ on the right hand side corresponds to the gauge-theory energy-momentum tensor which will represent a fast moving nucleus. The $z^{4}$ factor is required for the metric to satisfy Einstein’s equation, and the relation with the energy-momentum tensor is given by holographic renormalization [@deHaro:2000xn; @Skenderis:2002wp].
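As a cross-check of the statement that the $z^{4}$ profile is what Einstein’s equations require, the following sympy sketch builds the homogeneous-wall metric with $h=\mu z^{4}f(x^{-})$ for an arbitrary profile $f$ (standing in for $\frac{2\pi^{2}}{N^{2}}\langle T_{--}\rangle$; the constant $\mu$ is just bookkeeping) and verifies symbolically that it satisfies the AdS$_{5}$ vacuum equation $R_{MN}=-\frac{4}{R^{2}}g_{MN}$.

```python
# Sketch: symbolic check that the homogeneous-wall shockwave of Eqs. (metric)-(homsw),
# with h = mu * z**4 * f(x^-), satisfies R_MN = -(4/R^2) g_MN (AdS_5 Einstein equations).
# Coordinates ordered as (x^+, x^-, x1, x2, z); f(x^-) is an arbitrary profile.
import sympy as sp

xp, xm, x1, x2, z = sp.symbols('x_p x_m x_1 x_2 z')
R, mu = sp.symbols('R mu', positive=True)
f = sp.Function('f')(xm)
h = mu * z**4 * f

coords = [xp, xm, x1, x2, z]
pref = R**2 / z**2
g = sp.zeros(5, 5)
g[0, 1] = g[1, 0] = -pref           # -2 dx^+ dx^-  ->  g_{+-} = -R^2/z^2
g[1, 1] = pref * h                  # h (dx^-)^2
g[2, 2] = g[3, 3] = g[4, 4] = pref  # dx_perp^2 + dz^2

ginv = g.inv()
d = lambda expr, a: sp.diff(expr, coords[a])

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, e] * (d(g[e, b], c) + d(g[e, c], b) - d(g[b, c], e))
                         for e in range(5)) / 2)
         for c in range(5)] for b in range(5)] for a in range(5)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma^a_{ae}Gamma^e_{bc} - Gamma^a_{ce}Gamma^e_{ba}
Ric = sp.zeros(5, 5)
for b in range(5):
    for c in range(5):
        Ric[b, c] = sp.simplify(
            sum(d(Gam[a][b][c], a) - d(Gam[a][b][a], c) for a in range(5))
            + sum(Gam[a][a][e] * Gam[e][b][c] - Gam[a][c][e] * Gam[e][b][a]
                  for a in range(5) for e in range(5)))

print(sp.simplify(Ric + 4 * g / R**2))   # prints the 5x5 zero matrix
```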
This form of the metric corresponds to the target infinite momentum reference frame with the target moving in the positive $x^{3}$ direction (right mover). The projectile to be considered is an $\mathcal{R}$-current, represented by a gauge vector field in AdS$_{5}$ with space-like momentum $q^{\mu}$, moving in the negative $x^{3}$ direction (left mover). Taking $q_{\perp}=0$, we have $q^{-}>0$, $q^{+}<0$, and the virtuality $Q^{2}=-2q^{+}q^{-}>0$.
We are interested in the explicit form of the classical field after the scattering. This can be found by using a multiple scattering approach to solve the equations of motion [@Avsar:2009xf], which are derived from the 5-dimensional Yang-Mills action $$S=-\frac{N^{2}}{64\pi^{2}R}\int d^{4}x\,dz\;\sqrt{-g}g^{mp}g^{nq}F_{mn}F_{pq}\, .\label{action}$$ Taking into account that the shockwave metric in Eq. (\[metric\]) does not depend on $x^{+}$, the $q^{-}$ component of the initial momentum of the projectile will be conserved and, therefore, we can restrict our analysis to fields of the form $$A_{\mu}(x^{-},x^{+},z)=e^{-iq^{-}x^{+}}A_{\mu}(x^{-},z)\, .\label{planeA}$$
For convenience we use the gauge condition $A_{z}=0$. Under these circumstances, the relevant equations of motion are [@Avsar:2009xf] $$\begin{aligned}
(z\partial_{z}z^{-1}\partial_{z}+iq^{-}\partial_{-})A_{+}=(q^{-})^{2}A_{-}\, ,\label{leom1} \\
(\partial_{-}-iq^{-}h)A'_{+}=iq^{-}A'_{-}\, ,\label{leom2} \\
(z\partial_{z}z^{-1}\partial_{z}+2iq^{-}\partial_{-})A_{i}=-(q^{-})^{2}hA_{i}\, ,\label{treom}\end{aligned}$$ where a prime denotes differentiation with respect to $z$.
Let’s focus on the longitudinal components first. Eqs. (\[leom1\]) and (\[leom2\]) can be combined into a single differential equation for $A'_{+}$, $$(\partial_{z}z\partial_{z}z^{-1}+2iq^{-}\partial_{-})A'_{+}=-(q^{-})^{2}hA'_{+}\, .\label{comblongeom}$$ This is equivalent to the integral equation $$A'_{+}(x^{-},z)=A_{+}^{\prime(0)}(x^{-},z)+\int\frac{dz'}{z'}dy^{-}G_{L}(z,z';x^{-}-y^{-})\left[-(q^{-})^{2}\right]h(z',y^{-})A'_{+}(y^{-},z')\, ,\label{intlong}$$ where $A_{+}^{\prime(0)}$ is the vacuum solution, and the Green’s function $G_{L}$ satisfies the equation $$(\partial_{z}z\partial_{z}z^{-1}+2iq^{-}\partial_{-})G_{L}(z,z';x^{-}-y^{-})=z\delta(z-z')\delta(x^{-}-y^{-})\, .\label{greeneq}$$
In Eq. (\[intlong\]) the field is written as a vacuum piece and a scattering piece. Following Ref. [@Avsar:2009xf] we choose to impose the boundary condition at $z=0$ (plane wave state) on the vacuum solution, or equivalently, require the Green’s function to vanish at the boundary. More specifically, we write the field as $$A_{\mu}(x^{-},z)=A_{\mu}^{(0)}(x^{-},z)+A_{\mu}^{(s)}(x^{-},z)\, ,$$ and the boundary condition reads $$\begin{aligned}
\lim_{z\to0}A_{\mu}^{(0)}(x^{-},z)&=\mathcal{A}_{\mu}(x^{-})\, ,\\
\lim_{z\to0}A_{\mu}^{(s)}(x^{-},z)&=0\, .\end{aligned}$$ The boundary field is assumed to be a plane wave and therefore can be written as $$\mathcal{A}_{\mu}(x^{-})=e^{-iq^{+}x^{-}}\tilde{\mathcal{A}}_{\mu}\, ,$$ with $\tilde{\mathcal{A}}_{\mu}$ pure numbers.
These boundary conditions together with Eq. (\[leom1\]) imply the following boundary condition for $A'_{+}$ $$\lim_{z\to0}z\partial_{z}(z^{-1}A'_{+}(x^{-},z))=\frac{Q^{2}}{2}\mathcal{A}_{+}(x^{-})+(q^{-})^{2}\mathcal{A}_{-}(x^{-})\equiv \frac{Q^{2}}{2}\mathcal{A}_{L}(x^{-})\, .\label{boundarylong}$$
The longitudinal vacuum solution is obtained from the homogeneous version of Eq. (\[comblongeom\]) and the boundary condition (\[boundarylong\]) [@Hatta:2007cs] $$A^{\prime(0)}_{+}(x^{-},z)=-\frac{1}{2}Q^{2}\mathcal{A}_{L}(x^{-})z{\text{K}}_{0}(Qz)\, .\label{vacsol}$$
Here we made use of the fact that the initial momentum of the incoming wave is assumed to be space-like by taking the solution with a K-function (for time-like momentum the appropriate solution involves a Hankel function instead). The distinction between time-like and space-like vacuum solutions will play an important role in our discussion because of their different behavior at large $z$. As will be seen below, the scattering piece of the field can be written in terms of a superposition of time-like and space-like vacuum states, but by taking the large $z$ limit we are able to isolate the contribution from time-like modes only since the space-like ones go exponentially to zero in that region.
Now let’s turn our attention to the scattering piece of the field, which will be obtained by means of a multiple scattering approach. In particular we will consider the simpler case where the shockwave is represented by a delta function in the $x^{-}$ coordinate, namely $h(x^{-},z)=\delta(x^{-})\tilde{h}(z)$. By considering multiple iterations of Eq. (\[intlong\]) and taking into account that the Green’s function is retarded in the $x^{-}$ variable and satisfies [@Avsar:2009xf] $$G_{L}(z,z';x^{-}-y^{-}\to+0)=-\frac{i}{2q^{-}}z\delta(z-z')\, ,\label{greenzeroright}$$ we can construct a series that exponentiates into an eikonal phase. This procedure gives the result $$A'_{+}(x^{-},z)=A_{+}^{\prime(0)}(x^{-},z)-2q^{-}\int \frac{dz'}{z'}G_{L}(z,z';x^{-})\mathcal{T}(z')A_{+}^{\prime(0)}(0,z')\, ,\label{fullsollg}$$ where $\mathcal{T}(z')$ is the scattering amplitude defined by $$-i\mathcal{T}(z)=1-\exp\left(\frac{iq^{-}}{2}\tilde{h}(z)\right).\label{scatamp}$$
Evaluating (\[fullsollg\]) at $x^{-}=+0$ and using (\[greenzeroright\]), we can easily see that the fields pick up a phase when going through the shockwave $$A'_{+}(+0,z)=\exp\left(\frac{iq^{-}}{2}\tilde{h}(z)\right)A'_{+}(-0,z)\, .\label{discphase}$$ When this is inserted in Eq. (\[planeA\]), it can be seen as a jump in the light-cone time $x^{+}$ which corresponds to the capture time already observed in Refs. [@Mueller:2008bt; @Kancheli:2002nw].
Using Eq. (\[homsw\]) and the fact that we are using the shockwave as a gravity dual for a fast moving nucleus, a sensible choice for $\tilde{h}$ would be $$\tilde{h}(z)=2\pi^{2}z^{4}\gamma L\Lambda^{4}\, ,$$ where $\gamma$ is the boost factor, $L$ the size of the target on its rest frame, and $\Lambda$ the characteristic momentum scale in the target rest frame. Introducing the corresponding Bjorken-$x$ variable $$x=\frac{Q^{2}}{2q^{-}\gamma\Lambda}\, ,$$ the exponent in (\[scatamp\]) can be written as $$\frac{iq^{-}}{2}\tilde{h}(z)=iQ^{2}\frac{\pi^{2}L\Lambda^{3}}{2x}z^{4}\, .$$
The values of $z$ entering the scattering amplitude are typically of order $1/Q$, and therefore the momentum scale which determines when multiple scattering becomes important is $Q_{s}^{2}=\pi^{2}L\Lambda^{3}/2x$. From now on we will write the scattering amplitude as[^1] $$\mathcal{T}(z)=i\left(1-e^{iQ^{2}Q_{s}^{2}z^{4}}\right),\label{scattampQs}$$ and will use only the saturation momentum $Q_{s}$ to refer to the properties of the target. In general, we will be interested in the case where the medium appears dense to the projectile and multiple scatterings have to be considered. Specifically, $Q^{2}\ll Q_{s}^{2}$.
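For orientation, here is a short numerical sketch of the eikonal amplitude (\[scattampQs\]) with illustrative (assumed) values of $Q$ and $Q_{s}$: it checks the identity $|\mathcal{T}|^{2}=2\,\text{Im}\,\mathcal{T}$ used later and shows that the amplitude turns on around $z\sim1/\sqrt{QQ_{s}}$.

```python
# Minimal numerical sketch of T(z) = i(1 - exp(i Q^2 Qs^2 z^4)) from Eq. (scattampQs),
# with hypothetical values Q << Qs.  It verifies |T|^2 = 2 Im T (used when relating the
# energy flux to Im S_cl) and locates the scale where the amplitude becomes O(1).
import numpy as np

Q, Qs = 1.0, 10.0                                 # illustrative probe virtuality and saturation scale
z = np.linspace(1e-3, 2.0 / np.sqrt(Q * Qs), 2000)

phase = Q**2 * Qs**2 * z**4
T = 1j * (1.0 - np.exp(1j * phase))

# |T|^2 = 2(1 - cos(phase)) = 2 Im T holds identically for a pure eikonal phase
assert np.allclose(np.abs(T)**2, 2.0 * T.imag)

# the amplitude becomes of order one around z ~ 1/sqrt(Q Qs)
z_on = z[np.argmax(np.abs(T)**2 > 1.0)]
print("amplitude O(1) near z =", z_on, " vs 1/sqrt(Q*Qs) =", 1.0 / np.sqrt(Q * Qs))
```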
In order to get further insight into the scattering piece of the field, let’s take a closer look at the Green’s function in (\[fullsollg\]). Considering the Fourier transform in the $x^{-}$ variable $$G_{L}(z,z';x^{-}-y^{-})=\int\frac{dk^{+}}{2\pi}e^{-ik^{+}(x^{-}-y^{-})}G_{L}(z,z';K^{2})\, ,\label{greenfourier}$$ with $K^{2}=-2q^{-}k^{+}$, we can solve Eq. (\[greeneq\]) in momentum space. The space-like case $K^{2}>0$ is given by [@Avsar:2009xf] $$G_{L}(z,z';K^{2})=-zz'({\text{K}}_{0}(Kz'){\text{I}}_{0}(Kz)\Theta(z'-z)+{\text{K}}_{0}(Kz){\text{I}}_{0}(Kz')\Theta(z-z'))\, ,$$ and the time-like case $K^{2}<0$ by its analytic continuation $$G_{L}(z,z';K^{2})=-\frac{i\pi zz'}{2}({\text{H}}^{(1)}_{0}(|K|z'){\text{J}}_{0}(|K|z)\Theta(z'-z)+{\text{H}}^{(1)}_{0}(|K|z){\text{J}}_{0}(|K|z')\Theta(z-z'))\, .\label{tlgreenmom}$$
Inserting Eq. (\[greenfourier\]) into Eq. (\[fullsollg\]), we get an explicit expansion of the scattering piece in terms of definite momentum states. For large $z$, the terms with $\Theta(z'-z)$ can be neglected, in which case this expansion is written as a superposition of vacuum states. Moreover, the space-like states can be neglected as well since they fall off exponentially. The resulting expression is much simpler and involves only time-like states.
The relevant scattering part to be used at large $z$ is $$\begin{aligned}
A^{\prime(s)}_{+}(x^{-},z)&\simeq -2q^{-}\int_{0}^{\infty}\frac{dk^{+}}{2\pi}e^{-ik^{+}x^{-}}\int\frac{dz'}{z'}\left(-\frac{i\pi zz'}{2}{\text{H}}_{0}^{(1)}(|K|z){\text{J}}_{0}(|K|z')\right)\mathcal{T}(z')A_{+}^{\prime(0)}(0,z')\, ,\\
&=-\frac{i}{4}q^{-}Q^{2}\tilde{\mathcal{A}}_{L}\int_{0}^{\infty}dk^{+}e^{-ik^{+}x^{-}}\left(\int dz'\; z'{\text{J}}_{0}(|K|z')\mathcal{T}(z'){\text{K}}_{0}(Qz')\right)z{\text{H}}_{0}^{(1)}(|K|z)\, .\label{exptlmodes}\end{aligned}$$
A similar analysis can be applied to the transverse components. The solution to Eq. (\[treom\]) can be written as $$A_{i}(x^{-},z)=A_{i}^{(0)}(x^{-},z)-2q^{-}\int \frac{dz'}{z'}G_{T}(z,z';x^{-})\mathcal{T}(z')A_{i}^{(0)}(0,z')\, ,\label{fullsoltr}$$ where the vacuum solution is $$A^{(0)}_{i}(x^{-},z)=Q\mathcal{A}_{i}(x^{-})z{\text{K}}_{1}(Qz)\, .\label{vacsoltr},$$ and the transverse Green’s function satisfies $$(z\partial_{z}z^{-1}\partial_{z}+2iq^{-}\partial_{-})G_{T}(z,z';x^{-}-y^{-})=z\delta(z-z')\delta(x^{-}-y^{-})\, .\label{greeneqtr}$$ The momentum space Green’s function is given by $$G_{T}(z,z';K^{2})=-zz'({\text{K}}_{1}(Kz'){\text{I}}_{1}(Kz)\Theta(z'-z)+{\text{K}}_{1}(Kz){\text{I}}_{1}(Kz')\Theta(z-z'))\, ,\label{slgreentr}$$ for the space-like case $K^{2}>0$, and $$G_{T}(z,z';K^{2})=-\frac{i\pi zz'}{2}({\text{H}}^{(1)}_{1}(|K|z'){\text{J}}_{1}(|K|z)\Theta(z'-z)+{\text{H}}^{(1)}_{1}(|K|z){\text{J}}_{1}(|K|z')\Theta(z-z'))\, ,$$ for the time-like case $K^{2}<0$.
Following the same argument as in the longitudinal case, the scattering piece we will use for large $z$ is $$A^{(s)}_{i}(x^{-},z)\simeq \frac{i}{2}q^{-}Q\tilde{\mathcal{A}}_{i}\int_{0}^{\infty}dk^{+}e^{-ik^{+}x^{-}}\left(\int dz'\; z'{\text{J}}_{1}(|K|z')\mathcal{T}(z'){\text{K}}_{1}(Qz')\right)z{\text{H}}_{1}^{(1)}(|K|z)\, .\label{exptlmodestr}$$
Energy-momentum tensor for the gravity wave and relation with the structure functions
=====================================================================================
In order to get a physical picture of the production of the different time-like modes, let’s take a look at the flow of energy in different directions. The easiest way to do this is to calculate specific components of the energy-momentum tensor associated with the gravity wave. Our starting point will be the following relation between the energy-momentum tensor and the action [@landau], $$\frac{1}{2}\sqrt{-g}T_{\mu\nu}=\frac{\delta S}{\delta g^{\mu\nu}}\, .\label{Tmunu}$$
Energy flow in the fifth dimension {#totalflux}
----------------------------------
In particular we are interested in the flux of energy propagating down the fifth dimension, since this is the energy carried away by the particles produced in the collision. For that purpose, the relevant component to calculate is $T_{+z}$. From Eq. (\[action\]) we get $$\begin{aligned}
\frac{1}{2}\sqrt{-g}T_{+z}&=-\frac{N^{2}}{32\pi^{2}R}\sqrt{-g}F_{+\alpha}F_{z\beta}g^{\alpha\beta}\\
&=-\frac{N^{2}}{32\pi^{2}R}\sqrt{-g}\left(F_{+-}F_{z+}g^{-+}+F_{+i}F_{zj}g^{ij}\right).\end{aligned}$$ Taking into account the gauge condition $A_{z}=0$, assuming there is no $x_{\perp}$ dependence, and plugging in the AdS$_{5}$ metric, we can write an explicit expression for the mixed component $T_{+}^{z}$ in terms of the fields, $$\begin{aligned}
\sqrt{-g}T_{+}^{z}&=\sqrt{-g}T_{+z}g^{zz}\\
&=-\frac{N^{2}}{16\pi^{2}z}\left(-(\partial_{+}A_{-}-\partial_{-}A_{+})\partial_{z}A_{+}+\partial_{+}A_{i}\partial_{z}A_{i}\right).\end{aligned}$$ Using the complex representation of the plane wave fields as in Eq. (\[planeA\]), we calculate the $x^{+}$-averaged energy flow. This is given by $$\begin{aligned}
\sqrt{-g}T_{+}^{z}&=-\frac{N^{2}}{32\pi^{2}z}\text{Re}\left((iq^{-}A_{-}+\partial_{-}A_{+})\partial_{z}A_{+}^{*}-iq^{-}A_{i}\partial_{z}A_{i}^{*}\right)\label{emtensor1}\\
&=\frac{N^{2}}{32\pi^{2}z}\text{Im}\left(\frac{1}{q^{-}}z\partial_{z}(z^{-1}A'_{+})A_{+}^{\prime*}-q^{-}A_{i}\partial_{z}A_{i}^{*}\right),\label{emtensor}\end{aligned}$$ where we have used one of the equations of motion (Eq. (\[leom1\])) in the last step.
To calculate the total energy flow associated with the produced particles we would have to integrate the expression above over positive values of $x^{-}$ (where the scattering piece is localized). By going to large $z$ we can safely extend the $x^{-}$ integration to all values since the incoming field is very small in that region. As will be seen in the following, being able to integrate over all values of $x^{-}$, instead of only positive values, greatly simplifies the calculation.
Again, let’s focus on the longitudinal components first. If we write the $+$ component of the field, as in Eq. (\[exptlmodes\]), in the form $$A'_{+}(x^{-},z)=\int_{0}^{\infty}dk^{+}\; e^{-ik^{+}x^{-}}a(k^{+})z{\text{H}}_{0}^{(1)}(|K|z)\, ,\label{fourierA}$$ we can easily see that integrating over $x^{-}$ Eq. (\[emtensor\]) will identify the $k^{+}$ modes from the field and the conjugate field. Taking the large $z$ form of the Hankel functions, the contribution from the longitudinal part to the energy-momentum tensor is $$\int dx^{-}(\sqrt{-g}T_{+}^{z})_{L}=\frac{N^{2}}{8\pi^{2}q^{-}}\int_{0}^{\infty}dk^{+}\;|a(k^{+})|^{2}\, ,\label{summodes}$$ which is independent of $z$ (as it should be). Comparing Eqs. (\[fourierA\]) and (\[exptlmodes\]) we get the explicit form of $a(k^{+})$ $$a(k^{+})=-\frac{i}{4}q^{-}Q^{2}\tilde{\mathcal{A}}_{L}\int dz'\; z'{\text{J}}_{0}(|K|z')\mathcal{T}(z'){\text{K}}_{0}(Qz')\, .\label{fcoeff}$$ Inserting this into (\[summodes\]) and changing the momentum integration to a virtuality integration we get $$\begin{aligned}
\int dx^{-}(\sqrt{-g}T_{+}^{z})_{L}=\frac{N^{2}}{128\pi^{2}}Q^{4}|\tilde{\mathcal{A}}_{L}|^{2}\int dz'\,dz''\,d|K|\;&|K|{\text{J}}_{0}(|K|z'){\text{J}}_{0}(|K|z'')\nonumber\\
&\times z'{\text{K}}_{0}(Qz')z''{\text{K}}_{0}(Qz'')\mathcal{T}(z')\mathcal{T}(z'')\, .\end{aligned}$$ The virtuality integration gives a $\delta$-function identifying $z'$ with $z''$. Finally, we get $$\int dx^{-}(\sqrt{-g}T_{+}^{z})_{L}=\frac{N^{2}}{128\pi^{2}}Q^{4}|\tilde{\mathcal{A}}_{L}|^{2}\int dz'\;z' {\text{K}}_{0}^{2}(Qz')|\mathcal{T}(z')|^{2}\, .$$ A similar analysis can be done with the transverse components of the field. The total result for the energy flux is $$\int dx^{-}\sqrt{-g}T_{+}^{z}=\frac{N^{2}}{32\pi^{2}}Q^{2}\int dz'\; z' |\mathcal{T}(z')|^{2}\left(\frac{Q^{2}}{4}|\tilde{\mathcal{A}}_{L}|^{2}{\text{K}}_{0}^{2}(Qz')+(q^{-})^{2}|\tilde{\mathcal{A}}_{i}|^{2}{\text{K}}_{1}^{2}(Qz')\right).\label{Tplusz}$$
Relation with imaginary part of the action and optical theorem
--------------------------------------------------------------
As first noted in [@yoshinotes] for the case of an infinite plasma, the total flux calculated above is proportional to the imaginary part of the complex classical action. To see this, consider the classical action in the AdS$_{5}$ metric [@Avsar:2009xf; @yoshinotes] (the shockwave part of the metric doesn’t appear explicitly since the integrand is evaluated at $z=0$) $$\begin{aligned}
S_{cl}&=\frac{N^{2}}{32\pi^{2}}\int d^{4}x \left.\frac{1}{z}\left(-A^{*}_{+}A'_{-}-A^{*}_{-}A'_{+}+A^{*}_{i}A'_{i}\right)\right|_{z=0}\label{action1}\\
&=\frac{N^{2}}{32\pi^{2}}\int d^{4}x \left.\frac{1}{z}\left(-\frac{1}{iq^{-}}A^{*}_{+}\partial_{-}A'_{+}-A^{*}_{-}A'_{+}+A^{*}_{i}A'_{i}\right)\right|_{z=0}\, ,\label{action2}\end{aligned}$$ where we have made use of the equations of motion to get rid of the term with $A'_{-}$. Here the relation with the energy-momentum tensor is already visible when comparing Eqs. (\[action2\]) and (\[emtensor1\]) since the imaginary part of the action should be $z$ independent (see Ref. [@Son:2002sd]). Nevertheless, let us work out the explicit form of the imaginary part of the action in terms of the fields found in the previous section. Taking into account that the scattering piece of the solution to the equations of motion is zero at $z=0$, and that the vacuum classical action is real, we see that the only contribution to the imaginary part comes from $-A^{*(0)}_{+}A^{\prime(s)}_{-}-A^{*(0)}_{-}A^{\prime(s)}_{+}+A^{*(0)}_{i}A^{\prime(s)}_{i}$ in Eq. (\[action1\]).
Let’s consider first the term with the transverse components only. The scattering piece to be considered takes the form $$A_{i}^{(s)}(x^{-},z)=-2q^{-}\int\frac{dz'}{z'}\frac{dk^{+}}{2\pi}e^{-ik^{+}x^{-}}G_{T}(z,z';K^{2})\mathcal{T}(z')A_{i}^{(0)}(0,z')\, .$$ The contribution to the classical action we are interested in is $$\begin{aligned}
(S_{cl}-S_{0})_{T}&=\frac{N^{2}}{32\pi^{2}}\int d^{4}x \left.\frac{1}{z}A^{*(0)}_{i}A^{\prime(s)}_{i}\right|_{z=0}\\
&=-\frac{N^{2}q^{-}}{16\pi^{2}}\int d^{4}x\left[\frac{1}{z}e^{iq^{+}x^{-}}\tilde{\mathcal{A}}^{*}_{i}\int\frac{dz'}{z'}\frac{dk^{+}}{2\pi}e^{-ik^{+}x^{-}}\partial_{z}G_{T}(z,z';K^{2})\mathcal{T}(z')A_{i}^{(0)}(0,z')\right]_{z=0}\, .\end{aligned}$$
The $x^{-}$ integration gives a $\delta$-function which combined with the $k^{+}$ integration picks up the $q^{+}$ component in the Green’s function in the scattering piece. The remaining integrand is independent of the rest of the coordinates and therefore we only get an extra factor of volume $V=L_{x}L_{y}L_{+}$. We then arrive at $$(S_{cl}-S_{0})_{T}=-\frac{N^{2}q^{-}V}{16\pi^{2}}\tilde{\mathcal{A}}^{*}_{i}\int\frac{dz'}{z'}\left[\frac{1}{z}\partial_{z}G_{T}(z,z';Q^{2})\right]_{z=0}\mathcal{T}(z')A^{(0)}_{i}(0,z')\, .$$ Inserting the vacuum solution (\[vacsoltr\]) and using Eq. (\[slgreentr\]) to evaluate $$\lim_{z\to0}\frac{1}{z}\partial_{z}G_{T}(z,z';Q^{2})=-z'Q{\text{K}}_{1}(Qz')\, ,$$ we finally find $$(S_{cl}-S_{0})_{T}=\frac{N^{2}q^{-}Q^{2}V}{16\pi^{2}}|\tilde{\mathcal{A}}_{i}|^{2}\int dz'\; z'{\text{K}}_{1}^{2}(Qz')\mathcal{T}(z')\, .$$
Similarly, the first two terms of Eq. (\[action2\]) give the contribution from the longitudinal components. Carrying out the corresponding calculation, we get as the total scattering contribution to the classical action $$S_{cl}-S_{0}=\frac{N^{2}Q^{2}V}{16\pi^{2}q^{-}}\int dz'\; z'\mathcal{T}(z')\left(\frac{Q^{2}}{4}|\tilde{\mathcal{A}}_{L}|^{2}{\text{K}}_{0}^{2}(Qz')+(q^{-})^{2}|\tilde{\mathcal{A}}_{i}|^{2}{\text{K}}_{1}^{2}(Qz')\right).\label{claction}$$ From the explicit form of the scattering amplitude we know it satisfies the following relation $$|\mathcal{T}(z')|^{2}=2\text{Im}\mathcal{T}(z')\, .$$ Therefore, comparing Eqs. (\[Tplusz\]) and (\[claction\]), $$q^{-}\text{Im} S_{cl}=V\int dx^{-}\;\sqrt{-g}T_{+}^{z}\, .$$
Presumably, this relation is valid regardless of the specific target since it can be seen as an illustration of the optical theorem in this supergravity context. By following the propagation of the time-like modes in (\[Tplusz\]) we are performing a sum over final states instead of calculating the forward scattering amplitude. This relation supports our picture of the time-like modes representing real particles produced in the scattering process and therefore making the scattering inelastic, even though the so-called scattering amplitude has the form characteristic of an elastic interaction; this is a consequence of the series defining the scattering piece of the field being written in terms of diffractive contributions from graviton exchanges. This apparent confusion is due to the fact that the scattering amplitude appears when working in a coordinate basis where the produced particles are not directly visible except for the decoherence introduced by the $z$ dependence of the phase picked up when going through the shockwave (Eq. (\[discphase\])).
Comparison with initial flux in the $x^{-}$ direction
-----------------------------------------------------
The flux just calculated is to be compared with the flux of the incoming gravity wave. Since the incoming probe is assumed to be a left-mover coming from negative large values of $x^{-}$, the relevant component of the energy-momentum tensor is $T_{+}^{-}$. From Eq. (\[Tmunu\]) we get $$\begin{aligned}
\sqrt{-g}T_{+}^{-}&=\sqrt{-g}T_{++}g^{+-}\\
&=\frac{N^{2}}{8\pi^{2}z}\left((A'_{+})^{2}+(\partial_{+}A_{i})^{2}\right).\end{aligned}$$ In complex notation and using Eq. (\[planeA\]), $$\sqrt{-g}T_{+}^{-}=\frac{N^{2}}{16\pi^{2}z}\left(|A'_{+}|^{2}+(q^{-})^{2}|A_{i}|^{2}\right).$$ This quantity is to be calculated for the incoming wave and therefore only the vacuum piece should be used. Using Eqs. (\[vacsol\]) and (\[vacsoltr\]), $$\sqrt{-g}T_{+}^{-}=\frac{N^{2}}{16\pi^{2}}Q^{2}z\left(\frac{Q^{2}}{4}|\tilde{\mathcal{A}}_{L}|^{2}{\text{K}}_{0}^{2}(Qz)+(q^{-})^{2}|\tilde{\mathcal{A}}_{i}|^{2}{\text{K}}_{1}^{2}(Qz)\right).$$
To obtain the total energy flux in the $x^{-}$ direction this result should be integrated with respect to $z$. In that case the only difference between this flux and the one calculated before in the positive $z$ direction (Eq. (\[Tplusz\])) is a factor of $\frac{1}{2}|\mathcal{T}(z)|^{2}$. In the case of interest $Q\ll Q_{s}$, there is a simple interpretation for this relation between the fluxes. For $z\ll 1/\sqrt{QQ_{s}}$ the scattering amplitude $\mathcal{T}(z)$ is very small, which means that the incoming wave doesn’t see the shock wave and goes through unscattered. For large values of $z$ the eikonal phase becomes rapidly oscillating and the factor $\frac{1}{2}|\mathcal{T}(z)|^{2}$ can be safely replaced by 1 in the integration over $z$, giving a total scattering where all the incoming energy flows down the fifth dimension spread among the time-like modes.
There is a subtle distinction between the longitudinal and the transverse components regarding what fraction of the energy goes unscattered. This fraction is concentrated in the small $z$ region where the two contributions (longitudinal and transverse) have very different behaviors: the longitudinal contribution goes like $z\ln^{2}z$ while the transverse contribution goes like $1/z$. Most of the energy carried by the transverse components will continue its propagation in the $x^{-}$ direction, contrary to what happens with the longitudinal components where essentially all the energy impinging on the target goes toward $z=\infty$ after passing through the shockwave.
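The different small-$z$ behaviors just quoted can be seen directly in the vacuum profiles: $z{\text{K}}_{0}^{2}(Qz)$ is integrable down to $z=0$ (its full integral is $1/2Q^{2}$), while $z{\text{K}}_{1}^{2}(Qz)\sim1/(Q^{2}z)$, so its integral grows logarithmically as the lower cut-off is removed. A minimal scipy sketch, with an illustrative value $Q=1$:

```python
# Sketch: small-z behavior of the longitudinal (z K_0^2) vs transverse (z K_1^2)
# flux profiles discussed above, for an illustrative value Q = 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

Q = 1.0
long_total, _ = quad(lambda z: z * kv(0, Q * z) ** 2, 0.0, np.inf)
print(long_total, 1.0 / (2 * Q**2))          # longitudinal profile integrates to 1/(2 Q^2)

for eps in (1e-2, 1e-3, 1e-4):
    trans, _ = quad(lambda z: z * kv(1, Q * z) ** 2, eps, np.inf)
    # z K_1^2(Qz) ~ 1/(Q^2 z) at small z, so this integral grows like ln(1/(Q*eps))
    print(eps, trans, np.log(1.0 / (Q * eps)))
```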
Virtuality distribution
=======================
Taking as a starting point Eq. (\[summodes\]), and the equivalent expression for the transverse components, it is interesting to ask which modes carry most of the energy, or, in other words, which virtualities are favored in the particle production from this DIS process. In order to address those issues, let us refrain from performing the $k^{+}$ integration and instead obtain an (approximate) expression for the coefficient function $a(k^{+})$.
The $z'$ integration in Eq. (\[fcoeff\]) can’t be performed explicitly. In the limit where $Q\ll Q_{s}$ we can make a rough estimate by taking the scattering amplitude $\mathcal{T}(z')$ to be one for $z'>1/\sqrt{QQ_{s}}$ and zero otherwise. In that case, the $z'$ integration takes the form $$\begin{aligned}
\int_{\frac{1}{\sqrt{QQ_{s}}}}^{\infty}dz'\; z'{\text{J}}_{0}(|K|z'){\text{K}}_{0}(Qz')=&\frac{1}{Q^{2}+|K|^{2}}\left(\sqrt{\frac{Q}{Q_{s}}}{\text{J}}_{0}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right){\text{K}}_{1}\left(\sqrt{\frac{Q}{Q_{s}}}\right)\right.\nonumber \\
&\left.-\frac{|K|}{\sqrt{QQ_{s}}}{\text{J}}_{1}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right){\text{K}}_{0}\left(\sqrt{\frac{Q}{Q_{s}}}\right)\right) \\
\simeq&\frac{1}{Q^{2}+|K|^{2}}\left({\text{J}}_{0}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)-\frac{|K|}{\sqrt{QQ_{s}}}\ln\sqrt{\frac{Q_{s}}{Q}}{\text{J}}_{1}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)\right),\end{aligned}$$ where we have used the small argument form of the K functions.
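The first (exact) equality above is the standard Lommel-type cross-product integral of a ${\text{J}}_{0}$ and a ${\text{K}}_{0}$; it can be checked numerically, for instance with the following scipy sketch (the values of $Q$, $Q_{s}$ and $|K|$ are illustrative assumptions):

```python
# Sketch: numerical check of the exact truncated integral
#   int_a^infty dz' z' J_0(|K| z') K_0(Q z')
#     = [ Qa J_0(|K|a) K_1(Qa) - |K|a J_1(|K|a) K_0(Qa) ] / (Q^2 + |K|^2),  a = 1/sqrt(Q Qs).
# The a -> 0 limit reproduces the well-known result 1/(Q^2 + |K|^2).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, kv

Q, Qs, K = 1.0, 25.0, 3.0                     # illustrative values with Q << Qs
a = 1.0 / np.sqrt(Q * Qs)

lhs, _ = quad(lambda z: z * jv(0, K * z) * kv(0, Q * z), a, np.inf, limit=400)
rhs = (Q * a * jv(0, K * a) * kv(1, Q * a)
       - K * a * jv(1, K * a) * kv(0, Q * a)) / (Q**2 + K**2)
print(lhs, rhs)                               # the two numbers agree to quad's tolerance
```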
Plugging this back into Eq. (\[summodes\]), $$\begin{aligned}
\int dx^{-}(\sqrt{-g}T_{+}^{z})_{L}=\frac{N^{2}Q^{4}}{128\pi^{2}}|\tilde{\mathcal{A}}_{L}|^{2}&\int d|K|\; \frac{|K|}{(Q^{2}+|K|^{2})^{2}}\nonumber \\
&\times\left({\text{J}}_{0}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)-\frac{|K|}{\sqrt{QQ_{s}}}\ln\sqrt{\frac{Q_{s}}{Q}}{\text{J}}_{1}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)\right)^{2}\, .\end{aligned}$$ Instead of trying to perform this integration, we will determine which is the dominant region and therefore determine which virtualities play an important role in the scattering process. The first factor in the integrand is large for $|K|\sim Q$ and falls rapidly for large $|K|$. The second factor is of order 1 for small $|K|$ and becomes large only for $|K| \gg \sqrt{QQ_{s}}$. In that region it grows linearly with $|K|$ but can’t compensate for the rapid falling of the first factor which goes as $1/|K|^{3}$. Therefore, the integral is dominated by the region with $|K|\sim Q$.
Now let’s turn our attention to the transverse components. From Eq. (\[exptlmodestr\]) we see that the relevant $z'$ integration takes the form $$\begin{aligned}
\int_{\frac{1}{\sqrt{QQ_{s}}}}^{\infty}dz'\; z'{\text{J}}_{1}(|K|z'){\text{K}}_{1}(Qz')=&\frac{1}{Q^{2}+|K|^{2}}\left(\sqrt{\frac{Q}{Q_{s}}}{\text{J}}_{1}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right){\text{K}}_{2}\left(\sqrt{\frac{Q}{Q_{s}}}\right)\right.\nonumber \\
&\left.-\frac{|K|}{\sqrt{QQ_{s}}}{\text{J}}_{2}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right){\text{K}}_{1}\left(\sqrt{\frac{Q}{Q_{s}}}\right)\right) \\
\simeq&\frac{1}{Q^{2}+|K|^{2}}\left(2\sqrt{\frac{Q_{s}}{Q}}{\text{J}}_{1}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)-\frac{|K|}{Q}{\text{J}}_{2}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)\right),\\
=&\frac{|K|}{Q(Q^{2}+|K|^{2})}{\text{J}}_{0}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right).\end{aligned}$$ When this expression is used to calculate the energy flow in the fifth dimension we get $$\begin{aligned}
\int dx^{-}(\sqrt{-g}T_{+}^{z})_{T}&=\frac{N^{2}(q^{-})^{2}Q^{2}}{32\pi^{2}}|\tilde{\mathcal{A}}_{i}|^{2}\int d|K|\; \frac{|K|^{3}}{(Q^{2}+|K|^{2})^{2}}{\text{J}}_{0}^{2}\left(\frac{|K|}{\sqrt{QQ_{s}}}\right)\\
&\simeq\frac{N^{2}(q^{-})^{2}Q^{2}}{128\pi^{2}}|\tilde{\mathcal{A}}_{i}|^{2}\ln\frac{Q_{s}^{2}}{Q^{2}}\, ,\label{trlog}\end{aligned}$$ where in the last step we used that the integration is dominated by the region $Q<|K|<\sqrt{QQ_{s}}$. Since the integration is logarithmic, the location of the boundaries of the dominant region doesn’t change the parametric dependence of the final result, and therefore the exact location of the cut-off used for the $z'$ integration doesn’t play an important role. Unlike the longitudinal case, for the transverse components the produced modes are distributed over a wide range of virtualities determined by the incoming wave and the properties of the target.
The logarithm found in Eq. (\[trlog\]) is in agreement with the calculations of the structure functions in [@Avsar:2009xf; @Mueller:2008bt].
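The logarithmic growth can also be checked numerically. Writing the transverse virtuality integral in units $Q=1$ as $I(r)=\int_{0}^{\infty}d|K|\,\frac{|K|^{3}}{(1+|K|^{2})^{2}}{\text{J}}_{0}^{2}\big(|K|/\sqrt{r}\big)$ with $r=Q_{s}/Q$, the difference $I(r_{2})-I(r_{1})$ should approach $\frac{1}{2}\ln(r_{2}/r_{1})$ at large $r$, which is the growth rate in Eq. (\[trlog\]). A minimal sketch with illustrative values:

```python
# Sketch: the transverse virtuality integral grows like (1/4) ln(Qs^2/Q^2) = (1/2) ln(Qs/Q).
# We check the growth rate by comparing two illustrative values of r = Qs/Q (with Q = 1).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def I(r, Q=1.0):
    Qs = r * Q
    integrand = lambda K: K**3 / (Q**2 + K**2) ** 2 * jv(0, K / np.sqrt(Q * Qs)) ** 2
    # J_0^2 cuts the integrand off around |K| ~ sqrt(Q Qs); the tail beyond the upper
    # limit used here contributes only at the per-mille level
    val, _ = quad(integrand, 0.0, 200.0 * np.sqrt(Q * Qs), limit=4000)
    return val

r1, r2 = 100.0, 1000.0
print(I(r2) - I(r1), 0.5 * np.log(r2 / r1))   # both close to (1/2) ln 10 ~ 1.15
```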
Localization of the energy flow in coordinate space
===================================================
In Section \[totalflux\] we integrated over $x^{-}$ in order to get an expression for the total flux in the fifth dimension. Instead of performing that integration, we can find the region in $x^{-}$ which contributes the most to this total flux and determine if this flux is localized in coordinate space.
Since the $x^{-}$ integration was what allowed us to identify the $k^{+}$ component of the field and the conjugate field, the expansion in terms of definite time-like modes is no longer useful. In order to focus on the $x^{-}$ dependence of the fields we chose to perform the $k^{+}$ integration at the level of the Green’s function.
Consider the longitudinal component first. To be able to perform the $k^{+}$ integration first, it is convenient to write the momentum space Green’s function in a general way valid for both time-like and space-like momenta. Specifically [@Avsar:2009xf], $$G_{L}(z,z';K^{2})=-\int_{0}^{\infty}d\omega\;\frac{1}{\omega^{2}+K^{2}}z{\text{J}}_{0}(\omega z)z'{\text{J}}_{0}(\omega z')\, .$$ The coordinate space Green’s function is then given by $$G_{L}(z,z';x^{-}-y^{-})=-\int d\omega\frac{dk^{+}}{2\pi}e^{-ik^{+}(x^{-}-y^{-})}\frac{1}{\omega^{2}+K^{2}-i\epsilon}z{\text{J}}_{0}(\omega z)z'{\text{J}}_{0}(\omega z')\, ,$$ where the $i\epsilon$ prescription is chosen to obtain a Green’s function retarded with respect to $x^{-}$. This choice reproduces correctly the initial condition where we assume a pure plane wave coming from negative values of $x^{-}$ and is also consistent with the momentum space Green’s function in Eq. (\[tlgreenmom\]).
Integrating first over $k^{+}$ and then over $\omega$, we get for the longitudinal Green’s function $$G_{L}(z,z';x^{-}-y^{-})=-\frac{1}{2}\frac{\Theta(x^{-}-y^{-})}{x^{-}-y^{-}}zz'{\text{J}}_{0}\left(\tfrac{q^{-}}{x^{-}-y^{-}}zz'\right)\exp\left[\frac{iq^{-}(z^{2}+z^{\prime2})}{2(x^{-}-y^{-})}\right].\label{explgreen}$$ Inserting (\[explgreen\]) in (\[fullsollg\]), we see that the scattering piece of the $A'_{+}$ field takes the form $$A^{\prime(s)}_{+}(x^{-},z)=-\frac{q^{-}Q^{2}}{2}\frac{\Theta(x^{-})}{x^{-}}\tilde{\mathcal{A}}_{L}\;ze^{iq^{-}z^{2}/2x^{-}}\int dz'\; z'{\text{J}}_{0}\left(\tfrac{q^{-}}{x^{-}}zz'\right)e^{iq^{-}z^{\prime2}/2x^{-}}\mathcal{T}(z'){\text{K}}_{0}(Qz')\, .$$ The $z'$ integration is dominated by the region with $z'\sim 1/Q$ (the integrand is small for small $z'$ and falls exponentially for $z'\gg 1/Q$). In this region, the Bessel function is of order one as long as $$x^{-}\gtrsim\frac{q^{-}}{Q}z\, .\label{xminusz}$$
The phase factor $e^{iq^{-}z^{\prime2}/2x^{-}}$ oscillates rapidly for $x^{-}\ll\frac{q^{-}}{Q^{2}}$ in the dominant region of integration and therefore forces the condition $$x^{-}\gtrsim\frac{q^{-}}{Q^{2}}\, .\label{cohlength}$$ This illustrates the coherence length of the initial system where the produced states cannot be resolved. The small $z$ region is not relevant for our analysis since there we are not able to isolate the contributions from the time-like modes (which flow down the fifth dimension) from the space-like modes (which are non-zero only for small $z$). For large $z$ we only have to consider condition (\[xminusz\]).
Taking into account that the $x^{-}$ integration of the flux in the fifth dimension is convergent (and therefore the flux becomes small for large values of $x^{-}$), Eq. (\[xminusz\]) implies that the important region in $x^{-}$ for large $z$ is $$x^{-}\sim\frac{q^{-}}{Q}z\, .\label{trajec}$$
The results just derived are also applicable to the transverse components.
Via the UV/IR correspondence, Eq. (\[trajec\]) can be interpreted as a relation between the longitudinal velocity and the transverse velocity of the produced particles (since the coordinate $z$ is dual to the transverse size). This relation is consistent with the results of [@Hatta:2008tx] where it is also shown this relation implies the particles are massless.
I would like to thank A.H. Mueller for his guidance and illuminating discussions. I am also grateful to E. Iancu and Y. Hatta for useful comments. This work is supported by the US Department of Energy.
[99]{}
E. Avsar, E. Iancu, L. McLerran and D. N. Triantafyllopoulos, [*Shockwaves and deep inelastic scattering within the gauge/gravity duality, JHEP*]{} [**0911**]{} (2009) 105, \[[arXiv:0907.4604]{}\]. J. M. Maldacena, [*The large N limit of superconformal field theories and supergravity, Adv. Theor. Math. Phys. *]{} [**2**]{} (1998) 231, \[[hep-th/9711200]{}\]. S. S. Gubser, I. R. Klebanov and A. M. Polyakov, [*Gauge theory correlators from non-critical string theory, Phys. Lett. *]{} [**B428**]{} (1998) 105, \[[hep-th/9802109]{}\]. E. Witten, [*Anti-de Sitter space and holography, Adv. Theor. Math. Phys. *]{} [**2**]{} (1998) 253, \[[hep-th/9802150]{}\]. D. T. Son and A. O. Starinets, [*Viscosity, Black Holes, and Quantum Field Theory, Ann. Rev. Nucl. Part. Sci. *]{} [**57**]{} (2007) 95, \[[arXiv:0704.0240]{}\]. A. H. Mueller, A. I. Shoshi and B. W. Xiao, [*Deep inelastic and dipole scattering on finite length hot $\mathcal{N}=4$ SYM matter, Nucl. Phys. *]{} [**A822**]{} (2009) 20, \[[arXiv:0812.2897]{}\]. R. A. Janik and R. B. Peschanski, [*Asymptotic perfect fluid dynamics as a consequence of AdS/CFT, Phys. Rev. *]{} [**D73**]{} (2006) 045013, \[[hep-th/0512162]{}\]. G. Beuf, [*Gravity dual of N=4 SYM theory with fast moving sources, Phys. Lett. *]{} [**B686**]{} (2010) 55, \[[arXiv:0903.1047]{}\]. J. L. Albacete, Y. V. Kovchegov and A. Taliotis, [*DIS on a Large Nucleus in AdS/CFT, JHEP*]{} [**0807**]{} (2008) 074, \[[arXiv:0806.1484]{}\]. S. S. Gubser, S. S. Pufu and A. Yarom, [*Entropy production in collisions of gravitational shock waves and of heavy ions, Phys. Rev. *]{} [**D78**]{} (2008) 066014, \[[arXiv:0805.1551]{}\]. L. Cornalba, M. S. Costa, J. Penedones and R. Schiappa, [*Eikonal approximation in AdS/CFT: From shock waves to four-point functions, JHEP*]{} [**0708**]{} (2007) 019, \[[hep-th/0611122]{}\]. L. Cornalba, M. S. Costa, J. Penedones and R. Schiappa, [*Eikonal approximation in AdS/CFT: Conformal partial waves and finite N four-point functions, Nucl. Phys. *]{} [**B767**]{} (2007) 327, \[[hep-th/0611123]{}\]. L. Cornalba, M. S. Costa and J. Penedones, [*Eikonal Approximation in AdS/CFT: Resumming the Gravitational Loop Expansion, JHEP*]{} [**0709**]{} (2007) 037, \[[arXiv:0707.0120]{}\]. R. C. Brower, M. J. Strassler and C. I. Tan, [*On the Eikonal Approximation in AdS Space, JHEP*]{} [**0903**]{} (2009) 050, \[[arXiv:0707.2408]{}\]. J. Polchinski and M. J. Strassler, [*Deep inelastic scattering and gauge/string duality, JHEP*]{} [**0305**]{} (2003) 012, \[[hep-th/0209211]{}\]. E. Levin, J. Miller, B. Z. Kopeliovich and I. Schmidt, [*Glauber - Gribov approach for DIS on nuclei in N=4 SYM, JHEP*]{} [**0902**]{} (2009) 048, \[[arXiv:0811.3586]{}\]. Y. Hatta, E. Iancu and A. H. Mueller, [*Deep inelastic scattering off a N=4 SYM plasma at strong coupling, JHEP*]{} [**0801**]{} (2008) 063, \[[arXiv:0710.5297]{}\]. G. ’t Hooft, [*Graviton Dominance in Ultrahigh-Energy Scattering, Phys. Lett. *]{} [**B198**]{} (1987) 61. D. E. Kharzeev and E. M. Levin, [*D-instantons and multiparticle production in N=4 SYM, JHEP*]{} [**1010**]{} (2010) 046, \[[arXiv:0910.3355]{}\]. S. de Haro, S. N. Solodukhin and K. Skenderis, [*Holographic reconstruction of spacetime and renormalization in the AdS/CFT correspondence, Commun. Math. Phys. *]{} [**217**]{} (2001) 595, \[[hep-th/0002230]{}\]. K. Skenderis, [*Lecture notes on holographic renormalization, Class. Quant. Grav. *]{} [**19**]{} (2002) 5849, \[[hep-th/0209067]{}\]. O. V. 
Kancheli, [*Parton picture of inelastic collisions at transplanckian energies,*]{} [hep-ph/0208021]{}. Y. Hatta, E. Iancu and A. H. Mueller, [*Deep inelastic scattering at strong coupling from gauge/string duality: the saturation line, JHEP*]{} [**0801**]{} (2008) 026, \[[arXiv:0710.2148]{}\]. L. Cornalba and M. S. Costa, [*Saturation in Deep Inelastic Scattering from AdS/CFT, Phys. Rev. *]{} [**D78**]{} (2008) 096010, \[[arXiv:0804.1562]{}\]. L. Cornalba, M. S. Costa and J. Penedones, [*Deep Inelastic Scattering in Conformal QCD, JHEP*]{} [**1003**]{} (2010) 133, \[[arXiv:0911.0043]{}\]. L. D. Landau and E. M. Lifshitz, [*The Classical Theory of Fields*]{} (Pergamon Press, 1975).
Y. Hatta, [*unpublished notes*]{}.
D. T. Son and A. O. Starinets, [*Minkowski-space correlators in AdS/CFT correspondence: Recipe and applications, JHEP*]{} [**0209**]{} (2002) 042, \[[hep-th/0205051]{}\]. Y. Hatta, E. Iancu and A. H. Mueller, [*Jet evolution in the N=4 SYM plasma at strong coupling, JHEP*]{} [**0805**]{} (2008) 037, \[[arXiv:0803.2481]{}\].
[^1]: Here we are following closely the derivation in [@Avsar:2009xf]. Saturation effects in an AdS/CFT context were first considered in [@Hatta:2007he; @Cornalba:2008sp] and subsequently in a more general context in [@Cornalba:2009ax].
|
---
abstract: 'In the present paper we study pseudo-Riemannian submanifolds which have 3-planar geodesic normal sections. We consider W-curves (helices) on pseudo-Riemannian submanifolds. Finally, we give a necessary and sufficient condition for a normal section to be a W-curve on a pseudo-Riemannian submanifold.'
author:
- 'Kadri ARSLAN, Betül BULCA and Günay ÖZTÜRK'
title: 'PSEUDO-RIEMANNIAN SUBMANIFOLDS WITH $3$-PLANAR GEODESICS '
---
**Introduction**
================
[^1] In a Riemannian manifold, a regular curve is called a helix if its first and second curvatures are constant and the third curvature is zero. In 1980 Ikawa investigated the condition that every helix in a Riemannian submanifold is a helix in the ambient space [@Ik1]. In a pseudo-Riemannian manifold, helices are defined in almost the same way as in the Riemannian case. The same author also characterized the helices in Lorentzian submanifolds [@Ik2].
An isometric immersion $f:M_{r}^{n}\rightarrow \mathbb{R}_{s}^{N}$ is said to be planar geodesic if the image of each geodesic of $M_{r}$ lies in a $2$-plane of $\mathbb{R}_{s}^{N}.$ In the Riemannian case such immersions were studied and classified by Hong [@Ho], Little [@Li], Sakamoto [@Sa], Ferus [@Fe] and others. Further, Blomstrom classified planar geodesic immersions with indefinite metric [@Bl]. It has been shown that all parallel, planar geodesic surfaces in $\mathbb{R}_{s}^{N}$ are the pseudo-Riemannian spheres, the Veronese surfaces and certain flat quadratic surfaces. Recently Kim studied minimal surfaces of pseudo-Euclidean spaces with geodesic normal sections. He proved that complete connected minimal surfaces in a $5$-dimensional pseudo-Euclidean space with geodesic normal sections are totally geodesic or flat quadrics [@Ki1].
In the present work, we give some results toward a characterization of $3$-planar geodesic immersions $f:M_{r}\rightarrow \mathbb{N}_{s}$ from an $n$-dimensional, connected pseudo-Riemannian manifold $M_{r}$ into an $m$-dimensional pseudo-Riemannian manifold $\mathbb{N}_{s}.$ Further, we consider $W$-curves (helices) on pseudo-Riemannian submanifolds. Finally, we give a necessary and sufficient condition for a normal section to be a $W$-curve on a pseudo-Riemannian submanifold.
**Basic Concepts**
==================
Let $f:M_{r}\rightarrow \mathbb{N}_{s}$ be an isometric immersion from an $n$-dimensional, connected pseudo-Riemannian manifold $M_{r}$ of index $r$ $(0\leq r\leq n)$ into an $m$-dimensional pseudo-Riemannian manifold $\mathbb{N}_{s}$ of index $s$. Let $\nabla $ and $\widetilde{\nabla }$ denote the covariant derivatives of $M_{r}$ and $\mathbb{N}_{s}$, respectively. Thus $\widetilde{\nabla }_{X}$ is just the directional derivative in the direction $X$ in $\mathbb{N}_{s}.$ Then for tangent vector fields $X$, $Y$ the *second fundamental form* $h$ of the immersion $f$ is defined by $$h(X,Y)=\overset{\sim }{\nabla }_{X}Y-\nabla _{X}Y. \tag{1.1} \label{A1}$$
For a vector field $\xi $ normal to $M_{r}$ we put $$\widetilde{\nabla }_{X}\xi =-A_{\xi }X+D_{X}\xi , \tag{1.2} \label{A2}$$where $A_{\xi }$ is the shape operator of $M_{r}$ and $D$ is the normal connection of $M_{r}$. We have the following relation $$<A_{\xi }X,Y>=<h(X,Y),\xi >\text{.} \tag{1.3} \label{A3}$$
The covariant derivatives of $h$, denoted respectively by $\overline{\nabla }h$ and $\overline{\nabla }\,\overline{\nabla }h$, are defined by
$$(\overline{\nabla }_{X}h)(Y,Z)=D_{X}h(Y,Z)-h(\nabla _{X}Y,Z)-h(Y,\nabla
_{X}Z), \tag{1.4} \label{A4}$$
and
$$\begin{aligned}
(\overline{\nabla }_{W}\overline{\nabla }_{X}h)(Y,Z) &=&D_{W}((\overline{\nabla }_{X}h)(Y,Z))-(\overline{\nabla }_{\nabla _{W}X}h)(Y,Z)- \TCItag{1.5}
\label{A5} \\
&&-(\overline{\nabla }_{X}h)(\nabla _{W}Y,Z)-(\overline{\nabla }_{X}h)(Y,\nabla _{W}Z), \notag\end{aligned}$$
where $X,Y$, $Z$ and $W$ are tangent vector fields on $M_{r}$ and $\overline{\nabla }$ is the van der Waerden-Bortolotti connection [@Ch1]. Then we obtain the Codazzi equation $$(\overline{\nabla }_{X}h)(Y,Z)=(\overline{\nabla }_{Y}h)(X,Z)=(\overline{\nabla }_{Z}h)(X,Y)\text{.} \tag{1.6} \label{A6}$$
It is a well-known property that $\overline{\nabla }h$ is a trilinear symmetric form on $M_{r}$ with values in the normal bundle $N(M_{r})$, and it is called the *third fundamental form*. If $\overline{\nabla }h=0,$ then the second fundamental form is said to be *parallel* [@FS] (i.e. $M$ is *1-parallel* [@ALMO]). If $\overline{\nabla }\,\overline{\nabla }h=0,$ then the third fundamental form is said to be *parallel* [@Lu] (i.e. $M$ is *2-parallel* [@ALMO]).
The mean curvature vector field $H$ of $M_{r}$ is defined by $$H=\frac{1}{n}\sum <e_{i},e_{i}>h(e_{i},e_{i}),i=1,...,n. \tag{1.7}
\label{A7}$$ where $\left \{ e_{1},e_{2},...,e_{n}\right \} $ is an orthonormal frame field of $M_{r}.$ $H$ is said to be parallel when $DH=0$ holds.
If the second fundamental form $h$ satisfies $$g(X,Y)H=h(X,Y), \tag{1.8} \label{A8}$$for any tangent vector fields $X,Y$ of $M_{r},$ then $M_{r}$ is called totally umbilical. A totally umbilical submanifold with parallel mean curvature vector field is said to be an *extrinsic sphere* [@Nak].
**Helices in a Pseudo-Riemannian Manifold**
===========================================
Let $\gamma $ be a regular curve in a pseudo-Riemannian manifold $M_{r}.$ We denote the tangent vector field $\gamma ^{\prime }(s)$ by the letter $X$. When $\left \langle X,X\right \rangle =+1$ or $-1,$ $\gamma $ is called a *unit speed curve*. The curve $\gamma $ is called a *Frenet curve of osculating order* $d$ (see [@FS]) if its derivatives $\gamma ^{^{\prime }}(s),\gamma ^{^{\prime \prime }}(s),...,\gamma ^{(d)}(s)$ are linearly independent and $\gamma ^{^{\prime }}(s),\gamma ^{^{\prime \prime }}(s),...,\gamma ^{(d+1)}(s)$ are no longer linearly independent for all $s\in I.$
To each Frenet curve of order $d$ we can associate an orthonormal $d$-frame $\left \{ V_{1},V_{2},...,V_{d}\right \} $ along $\gamma $, called the *Frenet frame*; the functions $k_{1},k_{2},...,k_{d-1}$ are the *curvature functions* of $\gamma $.
[@Mu]. If $\gamma :I\longrightarrow M_{r}$ is a non-null differentiable curve of an $n$-dimensional pseudo-Riemannian manifold $M_{r}$ of osculating order $d$ $(0\leq d\leq n)$ and $\left \{ V_{1}=X,V_{2},...,V_{d}\right \} $ is the Frenet frame of $\gamma $ then $$V_{1}^{^{\prime }}=\nabla _{X}X=\varepsilon _{2}k_{1}V_{2}, \tag{2.1}
\label{B1}$$$$V_{2}^{^{\prime }}=\nabla _{X}V_{2}=-\varepsilon _{1}k_{1}V_{1}+\varepsilon
_{3}k_{2}V_{3}, \tag{2.2} \label{B2}$$$$\vdots$$$$V_{d-1}^{^{\prime }}=\nabla _{X}V_{d-1}=-\varepsilon
_{(d-2)}k_{(d-2)}V_{(d-2)}+\varepsilon _{d}k_{(d-1)}V_{d}, \tag{2.3}
\label{B3}$$$$V_{d}^{^{\prime }}=\nabla _{X}V_{d}=-\varepsilon _{(d-1)}k_{(d-1)}V_{(d-1)},
\tag{2.4} \label{B4}$$where $\varepsilon _{i}=\left \langle V_{i},V_{i}\right \rangle =\pm 1,\
k_{i}$, $1\leq i\leq (d-1)$ are curvature functions of $\gamma .$
Let $\gamma $ be a smooth curve of osculating order $d$ on $M_{r}.$ The curve $\gamma $ is called a $\mathit{W}$*-curve (or a helix)* of rank $d$ if $k_{1},k_{2},...,k_{d-1}$ are constant and $k_{d}=0.$ In particular, a $W$-curve of rank $2$ is called a *geodesic circle*. A $W$-curve of rank $3$ is a *right circular helix* [@FS].
Let $\gamma $ be a non-null $W$-curve in $M_{r}$. If $\gamma $ is of rank $2$ then $\gamma ^{^{\prime \prime \prime }}$ is a scalar multiple of $\gamma
^{^{\prime }}.$ In this case necessarily$$\gamma ^{^{\prime \prime \prime }}(s)=-\varepsilon _{1}\varepsilon
_{2}k_{1}^{2}\gamma ^{^{\prime }}(s). \tag{2.5} \label{B5a}$$
By the use of (2.1) we have $\gamma ^{^{\prime \prime }}(s)=\varepsilon
_{2}k_{1}V_{2}(s).$ Furthermore, differentiating this equation with respect to $s$ and using (2.2) we obtain $$\gamma ^{^{\prime \prime \prime }}(s)=-\varepsilon _{1}\varepsilon
_{2}k_{1}^{2}X+\varepsilon _{2}k_{1}^{^{\prime }}V_{2}(s)+\varepsilon
_{2}\varepsilon _{3}k_{1}k_{2}V_{3}(s). \tag{2.6} \label{B8}$$Since $\gamma $ is a $W$-curve of rank $2$, by definition $k_{1}$ is constant and $k_{2}=0$, so we get the result.
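This relation is easy to check on a concrete example. In the Minkowski plane $\mathbb{E}_{1}^{2}$ with metric $\mathrm{diag}(-1,+1)$, the unit-speed timelike curve $\gamma(s)=(r\sinh(s/r),r\cosh(s/r))$ has $\varepsilon_{1}=-1$, $\varepsilon_{2}=+1$ and $k_{1}=1/r$; the following sympy sketch verifies (2.5) for it.

```python
# Sketch: check gamma''' = -eps1*eps2*k1^2 * gamma' for a timelike "circle" (hyperbola)
# in the Minkowski plane E_1^2 with metric diag(-1, +1); here eps1 = -1, eps2 = +1, k1 = 1/r.
import sympy as sp

s, r = sp.symbols('s r', positive=True)
eta = sp.diag(-1, 1)                                    # flat metric of E_1^2
gamma = sp.Matrix([r * sp.sinh(s / r), r * sp.cosh(s / r)])

d1, d2, d3 = (gamma.diff(s, k) for k in (1, 2, 3))
inner = lambda u, v: (u.T * eta * v)[0, 0]

eps1 = sp.simplify(inner(d1, d1))                       # <gamma', gamma'> = -1
k1_sq = sp.simplify(inner(d2, d2))                      # eps2 * k1^2 = 1/r^2, so eps2 = +1
eps2 = 1

print(eps1, k1_sq)                                      # -1 and 1/r**2
print(sp.simplify(d3 + eps1 * eps2 * k1_sq * d1))       # zero vector: relation (2.5) holds
```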
Let $\gamma $ be a non-null $W$-curve of $M_{r}$. If $\gamma $ is of osculating order $3$ then $$\gamma ^{^{\prime \prime \prime \prime }}(s)=-\varepsilon _{2}(\varepsilon
_{1}k_{1}^{2}+\varepsilon _{3}k_{2}^{2})\gamma ^{^{\prime \prime }}(s).
\tag{2.7} \label{B9}$$
Differentiating (2.6) and using the fact that $k_{1},k_{2}$ are constant and $k_{3}=0$ we get the result.
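For the Riemannian signature ($\varepsilon_{1}=\varepsilon_{2}=\varepsilon_{3}=1$) relation (2.7) can likewise be verified on an ordinary right circular helix, for which $k_{1}=a\omega^{2}$ and $k_{2}=b\omega^{2}$ with $\omega=1/\sqrt{a^{2}+b^{2}}$; a short sympy sketch:

```python
# Sketch: check gamma'''' = -(k1^2 + k2^2) gamma'' for a unit-speed circular helix in E^3,
# with k1 = a*w^2, k2 = b*w^2 and w = 1/sqrt(a^2 + b^2) (all eps_i = +1 in this case).
import sympy as sp

s, a, b = sp.symbols('s a b', positive=True)
w = 1 / sp.sqrt(a**2 + b**2)
gamma = sp.Matrix([a * sp.cos(w * s), a * sp.sin(w * s), b * w * s])

d2, d4 = gamma.diff(s, 2), gamma.diff(s, 4)
k1, k2 = a * w**2, b * w**2

print(sp.simplify(gamma.diff(s).dot(gamma.diff(s))))    # 1: unit speed
print(sp.simplify(d4 + (k1**2 + k2**2) * d2))           # zero vector: relation (2.7) holds
```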
**Planar Geodesic Immersions**
==============================
Let $f:M_{r}\rightarrow \mathbb{N}_{s}$ be an isometric immersion from an $n$-dimensional, connected pseudo-Riemannian manifold $M_{r}$ of index $r$ $(0\leq r\leq n)$ into an $m$-dimensional pseudo-Riemannian manifold $\mathbb{N}_{s}$ of index $s.$ For a point $p\in M_{r}$ and a unit vector $X\in T_{p}(M_{r})$ the vector $X$ and the normal space $T_{p}^{\bot }(M_{r})$ determine an $(m-n+1)$-dimensional subspace $E(p,X)$ of $T_{f(p)}(\mathbb{N}_{s})$, which determines an $(m-n+1)$-dimensional totally geodesic submanifold $W$ of $\mathbb{N}_{s}$. The intersection of $M_{r}$ with $W$ gives rise to a curve $\gamma $ (in a neighborhood of $p$) called the *normal section* of $M_{r}$ at the point $p$ in the direction of $X$ [@Ch2].
The submanifold $M_{r}$ (or the isometric immersion $f$) is said to have *$d$-planar normal sections* if for each normal section $\gamma $ the first, second and higher order derivatives $\gamma ^{^{\prime }}(s),\gamma ^{^{\prime \prime }}(s),...,\gamma ^{(d)}(s),\gamma ^{(d+1)}(s)$, $(1\leq d\leq m-n+1)$, are linearly dependent as vectors in $W$ [@Ch2].
The submanifold $M_{r}$ is said to have *$d$-planar geodesic normal sections* if each normal section of $M_{r}$ is a geodesic of $M_{r}.$
In [@Bl] immersions in pseudo-Euclidean space with $2$-planar geodesic normal sections have been studied by Blomstrom (see also [@Ho]).
We have the following result.
Let $\gamma $ be a non-null geodesic normal section of $M_{r}.$ If $\gamma
^{\prime }(s)=X(s)$, then we have $$\gamma ^{^{\prime \prime }}(s)=h(X,X), \tag{3.1} \label{C1}$$$$\gamma ^{^{^{\prime \prime \prime }}}(s)=-A_{h(X,X)}X+(\overline{\nabla }_{X}h)(X,X), \tag{3.2} \label{C.2}$$$$\begin{array}{c}
\gamma ^{^{\prime \prime \prime \prime }}(s)=-\nabla
_{X}(A_{h(X,X)}X)-h(A_{h(X,X)}X,X) \\
\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ }-A_{(\overline{\nabla }_{X}h)(X,X)}X+(\overline{\nabla }_{X}\overline{\nabla }_{X}h)(X,X).\end{array}
\tag{3.3} \label{C.3}$$
[@Bl] Pseudo-Riemannian sphere$$S_{r}^{n}(c)=\left \{ p\in \mathbb{E}_{r}^{n+1}:<p-a,p-a>=\frac{1}{c}\right
\} ,c>0, \tag{3.4} \label{C4}$$and pseudo-Riemannian hyperbolic space$$H_{r}^{n}(c)=\left \{ p\in \mathbb{E}_{r+1}^{n+1}:<p-a,p-a>=\frac{1}{c}\right \} ,c<0, \tag{3.5} \label{C5}$$both have $2$-planar geodesic normal sections.
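One way to see this is that a non-null unit-speed geodesic of $S_{r}^{n}(c)$ or $H_{r}^{n}(c)$ satisfies $\gamma ''=-c\left \langle \gamma ',\gamma '\right \rangle (\gamma -a)$ in the ambient pseudo-Euclidean space, so it never leaves the $2$-plane through $a$ spanned by $\gamma (0)-a$ and $\gamma '(0)$. The following numerical sketch integrates this equation for $S_{1}^{2}(1)\subset \mathbb{E}_{1}^{3}$ with hypothetical initial data and confirms planarity.

```python
# Sketch: geodesics of the pseudo-Riemannian sphere S_1^2(1) in E_1^3 (metric diag(-1,1,1))
# obey gamma'' = -c <gamma',gamma'> gamma (c = 1, center a = 0), hence stay in the 2-plane
# spanned by gamma(0) and gamma'(0).  Hypothetical initial data; planarity checked numerically.
import numpy as np
from scipy.integrate import solve_ivp

eta = np.diag([-1.0, 1.0, 1.0])
inner = lambda u, v: u @ eta @ v

c = 1.0
p0 = np.array([0.0, 1.0, 0.0])                  # <p0,p0> = 1 = 1/c, so p0 lies on S_1^2(1)
v0 = np.array([1.0, 0.0, np.sqrt(2.0)])         # unit spacelike tangent vector: <p0,v0> = 0

def rhs(s, y):
    gamma, dgamma = y[:3], y[3:]
    return np.concatenate([dgamma, -c * inner(dgamma, dgamma) * gamma])

sol = solve_ivp(rhs, [0.0, 10.0], np.concatenate([p0, v0]),
                dense_output=True, rtol=1e-10, atol=1e-12)

normal = np.cross(p0, v0)                       # Euclidean normal of the plane span{p0, v0}
ss = np.linspace(0.0, 10.0, 200)
off_plane = np.max(np.abs(sol.sol(ss)[:3].T @ normal))
print("max deviation from the 2-plane:", off_plane)   # of the order of roundoff, i.e. planar
```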
The submanifold $M_{r}$ (or the isometric immersion $f$) is said to be *pseudo-isotropic* at $p$ if $$L=<h(X,X),h(X,X)>$$ is independent of the choice of unit vector $X$ tangent to $M_{r}$ at $p$. In particular, if $L$ is also independent of the point then $M_{r}$ is said to be constant pseudo-isotropic.
The submanifold $M_{r}$ is pseudo-isotropic if and only if $$<h(X,X),h(X,Y)>=0,$$ for any orthonormal vectors $X$ and $Y$ [@Bl].
The following results are well-known.
[@Bl]. If the immersion $f:M_{r}^{2}\rightarrow \mathbb{E}_{s}^{m}$ has $2$-planar geodesic normal sections, then $f(M)$ is a submanifold with zero mean curvature in a hypersphere $S_{s-1}^{m-1}$ or $H_{s-1}^{m-1}$ if and only if $L$ is a non-zero constant.
[@Ki1]. The immersion $f:M_{r}^{2}\rightarrow \mathbb{E}_{s}^{m}$ with $2 $-planar geodesic normal sections is constant pseudo-isotropic.
[@Ki2]. Let $M_{r}$ be a pseudo-Riemannian submanifold of index $r$ of a pseudo-Euclidean space $\mathbb{E}_{s}^{m}$ of index $s$ with geodesic normal sections. Then $$\left \langle (\overline{\nabla }_{X}h)(X,X),(\overline{\nabla }_{X}h)(X,X)\right \rangle , \tag{3.6} \label{C6}$$is constant on the unit tangent bundle $UM$ of $M_{r}.$
[@Ki2]. Let $M_{r}$ be a minimal surface of $\mathbb{E}_{s}^{5}$ with geodesic normal sections. Then we have
$i)$ $M_{r}$ is $1$-parallel and $0$-pseudo-isotropic (i.e. $L=0$),
$ii)$ $M_{r}$ has $2$-planar geodesic normal sections,
$iii)$ $M_{r}$ is flat.
Submanifolds $M$ in $\mathbb{R}^{n+d}$ with $3$-planar normal sections have been studied by S. J. Li for the cases where $M$ is isotropic [@Li1] or sphered [@Li2]. See also [@AW] for the case where $M$ is a product manifold in $\mathbb{R}^{n+d}.$ In [@AC] the authors consider submanifolds of a real space form $\mathbb{N}^{n+d}(c)$ with $3$-planar geodesic normal sections.
We prove the following results.
Let $f:M_{r}\rightarrow \mathbb{N}_{s}$ be an isometric immersion with $3$-planar geodesic normal sections. Then $f$ is constant pseudo-isotropic.
The proof is similar to that of Lemma 4.1 in [@Na].
Let $f:M_{r}\rightarrow \mathbb{N}_{s}$ be an isometric immersion with $3$-planar geodesic normal sections. Then we have $$(\overline{\nabla }_{X}h)(X,X)=\varepsilon _{2}(Xk_{1})V_{2}+\varepsilon
_{2}\varepsilon _{3}k_{1}k_{2}V_{3}, \tag{3.7} \label{C7}$$$$A_{h(X,X)}X=\varepsilon _{1}\varepsilon _{2}k_{1}^{2}X. \tag{3.8}
\label{C8}$$
Let $\gamma $ be a normal section of $M_{r}$ at point $p=\gamma (s)$ in the direction of $X$. We suppose that $k_{1}(s)$ is positive. Then $k_{1}$ is also smooth and there exists a unit vector field $V_{2}$ along $\gamma $ normal to $M_{r}$ such that $$h(X,X)=\left \langle V_{2},V_{2}\right \rangle k_{1}V_{2}. \tag{3.9}
\label{C9}$$
Since $\overline{\nabla }_{X}V_{2}$ is also tangent to $M_{r}$, there exists a vector field $V_{3}$ normal to $M_{r}$ and mutually orthogonal to $X$ and $V_{2}$ such that $$\widetilde{\nabla }_{X}V_{2}=-\left \langle X,X\right \rangle k_{1}X+\left
\langle V_{3},V_{3}\right \rangle k_{2}V_{3}. \tag{3.10} \label{C10}$$
Differentiating (3.9) covariantly and using (3.10) we get $$(\overline{\nabla }_{X}h)(X,X)=-\varepsilon _{1}\varepsilon
_{2}k_{1}^{2}X+\varepsilon _{2}(Xk_{1})V_{2}+\varepsilon _{2}\varepsilon
_{3}k_{1}k_{2}V_{3}, \tag{3.11} \label{C11}$$where $\left \langle V_{i},V_{i}\right \rangle =\varepsilon _{i}=\pm 1.$ Comparing (3.11) with (3.2) we get the result.
Let $\gamma $ be a normal section of $M_{r}$ at point $p=\gamma (s)$ in the direction of $X.$ $\gamma $ is a non-null $W$-curve of rank $2$ in $M_{r}$ if and only if $$\nabla _{X}\nabla _{X}X+g(\nabla _{X}X,\nabla _{X}X)g(X,X)X=0. \tag{3.12}
\label{C12}$$
Since $\gamma ^{\prime }(s)=X(s)$, we have $\gamma ^{^{\prime \prime }}(s)=\nabla _{X}X$, $\gamma ^{^{\prime \prime \prime }}(s)=\nabla _{X}\nabla _{X}X$ and $$g(X,X)=\varepsilon _{1},\qquad g(\nabla _{X}X,\nabla _{X}X)=\varepsilon _{2}k_{1}^{2}.$$So, by the use of the equality $\gamma ^{^{\prime \prime }}(s)=\varepsilon _{2}k_{1}V_{2}(s)$ we get the result.
Let $M_{r}$ be a totally umbilical submanifold of $\mathbb{N}_{s}$ with parallel mean curvature vector field. If the normal section $\gamma $ is a W-curve of osculating order $2$, then $\gamma $ is also a W-curve of $\mathbb{N}_{s}$ of the same order.
Suppose $\gamma $ is a W-curve of rank $2$ in $M_{r}$; then it satisfies the equality (3.12). Further, by the use of (1.1) we get $$\gamma ^{^{\prime \prime }}=\widetilde{\nabla }_{X}X=\nabla _{X}X+h(X,X).
\tag{3.13} \label{C13}$$Since $M_{r}$ is totally umbilical, $g(X,X)H=h(X,X).$ So, the equation (3.13) reduces to $$\gamma ^{^{\prime \prime }}=\widetilde{\nabla }_{X}X=\nabla _{X}X+g(X,X)H.
\tag{3.14} \label{C14}$$Differentiating the equation (3.14) with respect to $X$ we obtain$$\begin{aligned}
\gamma ^{^{\prime \prime \prime }} &=&\widetilde{\nabla }_{X}\widetilde{\nabla }_{X}X=\nabla _{X}\nabla _{X}X+g(X,\nabla _{X}X)H \TCItag{3.15}
\label{C15} \\
&&+g(X,X)(-A_{H}X+D_{X}H). \notag\end{aligned}$$Further, using $DH=0$, the total umbilicity of $M_{r}$ and (3.13)-(3.15), we get $$\begin{aligned}
&&\widetilde{\nabla }_{X}\widetilde{\nabla }_{X}X+g(\widetilde{\nabla }_{X}X,\widetilde{\nabla }_{X}X)g(X,X)X \\
&=&\nabla _{X}\nabla _{X}X-g(H,H)g(X,X)X+\left \{ g(\nabla _{X}X,\nabla _{X}X)+g(H,H)\right \} g(X,X)X \\
&=&\nabla _{X}\nabla _{X}X+g(\nabla _{X}X,\nabla _{X}X)g(X,X)X.\end{aligned}$$So, by the previous proposition, $\gamma $ is a W-curve of rank $2$ in $\mathbb{N}_{s}.$
[99]{} Arslan K. and Celik Y., *Submanifolds in real space form with 3-planar geodesic normal sections*, Far East J. Math. Sci. 5(1) (1997), 113-120.
Arslan K., Celik Y. and Deszcz R., *A note on geodesic circles on Riemannian manifolds*, Far East J.Math.Sci. 5(3) (1997), 453-459.
Arslan K., Lumiste Ü., Murathan C. and Özgür C., *2-Semiparallel surfaces in space forms I, two particular cases*, Proc. Estonian Acad. Sci. Phys. Math. 49(2000), 3, 139-148.
Arslan K. and West A., *Product submanifolds with P.3-PNS.*, Glasgow J. Math. 37(1995), 73-81.
Blomstrom C., *Planar geodesic immersions in pseudo-Euclidean space*, Math. Ann. 274(1986), 585-598.
Chen B. Y., *Geometry of submanifolds*, Marcel-Dekker, 1973.
Chen B. Y., *Geometry of submanifolds and its applications*, Science University of Tokyo 1981.
Ferus, D., *Immersions with parallel second fundamental form*, Math. Z 140 (1974), 87-92.
Ferus D. and Schirrmacher S., *Submanifolds in Euclidean space with simple geodesics*, Math. Ann. 260(1982), 57-62.
Hong S.L., *Isometric immersions of manifolds with plane geodesics in Euclidean space*, J. Dif. Geo. 8(1973), 259-278.
Ikawa, T., *On some curves in Riemannian geometry*, Soochow J. Math. 7 (1980), 37-44.
Ikawa, T., *On curves and submanifolds in an indefinite-Riemannian manifold*, Tsukuba J. Math. 9(1985), 353-371.
Kim Y.H., *Minimal surface of pseudo-Euclidean spaces with geodesic normal sections*, Dif. Geo. and its App. 5(1995), 321-329.
Kim Y.H., *Surfaces in a pseudo-Euclidean space with planar normal sections*, J. Geom. 35 (1989), 120-131.
Li S.J., *Isotropic submanifolds with pointwise 3-planar normal sections*, Boll. U. M. I. 7(1987), 373-385.
Li S.J., *Spherical submanifolds with pointwise 3 or 4-planar normal sections*, Yokohoma Math. J. 35(1987), 21-31.
Little J. A., *Manifolds with planar geodesics*, J. Diff. Geom. 11 (1976) 265-285.
Lumiste Ü., *Submanifolds with Vander Waerden-Bortolotti plane connection and parallelism of the third fundamental form*, Izv.Vuzov.Mat. 30(1987), 18-27.
Murathan C., *Pointwise k-planar normal sections with Immersions in warped products of Riemannian manifolds*, PhD Thesis, Uludağ University,1995, Bursa-Turkey.
Nakagawa H., *On a certain minimal immersion of a Riemannian manifold into a sphere*, Kodai Math. 3(1980), 321-340.
Nakanishi Y., *On helices and pseudo-Riemannian submanifolds*, Tsukuba J. Math. 12(1988), 459-476.
Sakamoto K., *Helical minimal immersions of compact Riemannian manifolds into a unit sphere*, Trans. American Math. Soc. 288(1985), 765-790.
*Kadri Arslan & Betül BULCA*
*Uludag University*
*Faculty of Art and Sciences*
*Department of Mathematics*
*16059, Bursa, TURKEY.*
*arslan@uludag.edu.tr*
*bbulca@uludag.edu.tr*
*Günay ÖZTÜRK*
*Kocaeli University*
*Faculty of Art and Sciences*
*Department of Mathematics*
*41310, Kocaeli, TURKEY.*
*ogunay@kocaeli.edu.tr*
[^1]: 2000 *Mathematics Subject Classification*. 53C40, 53C42
*Key words and phrases*: Pseudo-Riemannian submanifold, geodesic normal section.
---
address:
- 'Département de Mathématiques et Statistique, Université de Montréal, Montréal, H3T 1J4, Canada'
- 'Laboratoire de Probabilités et Modèles Aléatoires, CNRS UMR 7599, Université Paris 6, 4 place Jussieu, 75252 Paris Cedex 05, France'
author:
- 'Louis-Pierre ARGUIN'
- Olivier ZINDY
date: '6 October, 2013'
title: 'Poisson-Dirichlet Statistics for the extremes of the two-dimensional discrete Gaussian Free Field'
---
[[***Abstract.***]{} In a previous paper, the authors introduced an approach to prove that the statistics of the extremes of a log-correlated Gaussian field converge to a Poisson-Dirichlet variable at the level of the Gibbs measure at low temperature and under suitable test functions. The method is based on showing that the model admits a one-step replica symmetry breaking in spin glass terminology. This implies Poisson-Dirichlet statistics by general spin glass arguments. In this note, this approach is used to prove Poisson-Dirichlet statistics for the two-dimensional discrete Gaussian free field, where boundary effects demand a more delicate analysis. ]{}
Introduction
============
The model
---------
Consider a finite box $A$ of $\Z^2$. The Gaussian free field (GFF) on $A$ with Dirichlet boundary condition is the centered Gaussian field $(\phi_v, v\in A)$ with the covariance matrix $$\label{eqn: cov}
G_A(v,v'):= E_v\left[\sum_{k=0}^{\tau_{A}} 1_{v'}(S_k)\right]\ ,$$ where $(S_k, k\geq 0)$ is a simple random walk with $S_0=v$ of law $P_v$ killed at the first exit time of $A$, $\tau_{A}$, i.e. the first time where the walk reaches the boundary $\partial A$. Throughout the paper, for any $A\subset \Z^2$, $\partial A$ will denote the set of vertices in $A^c$ that share an edge with a vertex of $A$. We will write $\p$ for the law of the Gaussian field and $\E$ for the expectation. For $B\subset A$, we denote the $\sigma$-algebra generated by $\{\phi_v, v\in B\}$ by $\F_B$.
We are interested in the case where $A=V_N:=\{1,\dots, N\}^2$ in the limit $N\to\infty$. For $0\leq \delta < 1/2$, we denote by $V_N^\delta$ the set of the points of $V_N$ whose distance to the boundary $\partial V_N$ is greater than $\delta N$. In this set, the variance of the field diverges logarithmically with $N$, cf. Lemma \[lem: green estimate\] in the appendix, $$\label{eqn: variance}
\E[\phi^2_v]=G_{V_N}(v,v)= \frac{1}{\pi} \log N^2 + O_N(1), \qquad \forall v \in V_N^\delta,$$ where $O_N(1)$ will always be a term which is uniformly bounded in $N$ and in $v\in V_N$. (The term $o_N(1)$ will denote throughout a term which goes to $0$ as $N\to\infty$ uniformly in all other parameters.) Equation (\[eqn: variance\]) follows from the fact that for $v\in V_N^\delta$ and $u\in \partial V_N$, $\delta N \leq \| v-u \| \leq \sqrt{2}(1-\delta) N$, where $\|\cdot\|$ denotes the Euclidean norm on $\Z^2$. A similar estimate yields an estimate on the covariance $$\label{eqn: covariance}
\E[\phi_v\phi_{v'}]=G_{V_N}(v,v')=\frac{1}{\pi} \log \frac{N^2}{\|v-v'\|^{2}} +O_N(1), \qquad \forall v,v'\in V_N^\delta .$$ In view of (\[eqn: variance\]) and (\[eqn: covariance\]), the Gaussian field $(\phi_v,v\in V_N)$ is said to be [*log-correlated*]{}. On the other hand, there are many points that are outside $V_N^{\delta}$ (of the order of $N^2$ points) for which the estimates (\[eqn: variance\]) and (\[eqn: covariance\]) are not correct. Essentially, the closer a point is to the boundary, the smaller its variance and covariance are, since the simple random walk in (\[eqn: cov\]) has a higher probability of exiting $V_N$ early. This decoupling effect close to the boundary complicates the analysis of the extrema of the GFF by comparison with log-correlated Gaussian fields with a stationary distribution.
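For readers who wish to experiment with the field numerically, the following short sketch (an illustration added here, not used in any proof) samples $(\phi_v,v\in V_N)$ directly from the covariance (\[eqn: cov\]): the Green function is computed as $(I-P)^{-1}$, with $P$ the transition matrix of the simple random walk killed on $\partial V_N$, and the field is obtained from a Cholesky factorization. This is only practical for small $N$, but it already exhibits the logarithmic growth of the variance in the bulk.

```python
import numpy as np

def sample_gff(N, rng=None):
    """Sample the discrete GFF on V_N = {1,...,N}^2 with Dirichlet boundary
    conditions.  The covariance is the Green function of the simple random
    walk killed at the boundary, G = (I - P)^{-1}, as in (eqn: cov)."""
    rng = rng or np.random.default_rng(0)
    sites = [(i, j) for i in range(N) for j in range(N)]
    index = {v: k for k, v in enumerate(sites)}
    P = np.zeros((N * N, N * N))
    for (i, j), k in index.items():
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) in index:      # steps leaving V_N are killed
                P[k, index[(i + di, j + dj)]] = 0.25
    G = np.linalg.inv(np.eye(N * N) - P)       # covariance matrix G_{V_N}
    L = np.linalg.cholesky(G + 1e-10 * np.eye(N * N))
    phi = L @ rng.standard_normal(N * N)
    return sites, G, phi

sites, G, phi = sample_gff(20)
centre = sites.index((10, 10))
# the two numbers below agree up to an O(1) term, cf. (eqn: variance)
print(G[centre, centre], np.log(20 ** 2) / np.pi)
```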
Main results
------------
It was shown by Bolthausen, Deuschel, and Giacomin [@bolthausen-deuschel-giacomin] that the maximum of the GFF in $V_N^\delta$ satisfies
$$\lim_{N\to\infty} \frac{\max_{v\in V_N^\delta} \phi_v }{\log N^2}= \sqrt{\frac{2}{\pi}}, \qquad \text{ in probability.}$$
A comparison argument using Slepian’s lemma can be used to extend the result to the whole box $V_N$. Their technique was later refined by Daviaud [@daviaud] who computed the [*log-number of high points*]{} in $V_N^\delta$: for $0<\lambda<1$,
$$\label{eqn: daviaud}
\lim_{N\to\infty} \frac{1}{\log N^2} \log \#\{v\in V_N^\delta: \phi_v \, \geq \, \lambda \sqrt{\frac{2}{\pi}} \log N^2\}= 1-\lambda^2, \qquad \text{ in probability.}$$
It is a simple exercise to show using the above results that the [*free energy*]{} in $V_N$ of the model is given by $$\label{eqn: free energy GFF}
f(\beta):=\lim_{N\to\infty}\frac{1}{\log N^2}\log \sum_{v\in V_N}e^{\beta \phi_v}=
\begin{cases}
1+\frac{\beta^2 }{2\pi}, &\text{ if $\beta\leq \sqrt{2\pi}$,}\\
\sqrt{\frac{2}{\pi}}\beta , &\text{ if $\beta\geq \sqrt{2\pi}$,}
\end{cases}
\qquad
\text{ a.s. and in $L^1$. }$$ Indeed, there is the clear lower bound $\log \sum_{v\in V_N}e^{\beta \phi_v} \geq \log \sum_{v\in V_N^\delta}e^{\beta \phi_v}$, which can be evaluated using the log-number of high points by Laplace’s method. The upper bound is obtained using a comparison argument with i.i.d. centered Gaussians.
A striking fact is that the three above results correspond to the expressions for $N^2$ independent Gaussian variables of variance $\frac{1}{\pi}\log N^2$. In other words, correlations have no effects on the above observables of the extremes. The purpose of the paper is to extend this correspondence to observables related to the Gibbs measure.
To this aim, consider the [*normalized Gibbs weights*]{} or [*Gibbs measure*]{} $${\mathcal G}_{\beta, N}(\{v\}):=\frac{\ee^{\beta \phi_v}}{Z_N(\beta)}, \qquad v \in V_N ,$$ where $Z_N(\beta):= \sum_{v\in V_N} \ee^{\beta \phi_v}$. We consider the normalized covariance or [*overlap*]{} $$\label{eqn: q}
q(v,v'): =\frac{\E[\phi_v\phi_{v'}]}{\frac{1}{\pi}\log N^2}, \qquad \forall v,v' \in V_N.$$ This is the covariance divided by the dominant term of the variance in the bulk.
In spin glasses, the relevant object to classify the extreme value statistics of strongly correlated variables is the [*two-overlap distribution function*]{} $$\label{eqn: x}
x_{\beta,N}(q):= \e \left[ {\mathcal G}_{\beta, N}^{\times 2} \left\{q(v,v') \le q \right\} \right], \qquad 0\leq q\leq 1 .$$ The main result shows that the 2D GFF falls within the class of models that exhibit a [*one-step replica symmetry breaking*]{} at low temperature.
\[thm: overlap\] For $\beta > \beta_c=\sqrt{2\pi}$, $$\lim_{N\to\infty }x_{\beta,N}(r):= \lim_{N\to\infty } \e \left[ {\mathcal G}_{\beta, N}^{\times 2} \left\{q(v,v') \le r \right\} \right]=
\begin{cases}
\frac{\beta_c}{\beta} &\text{ for $0 \le r <1$,}\\
1 &\text{ for $r=1$.}
\end{cases}$$
Note that for $\beta\leq \beta_c$ the overlap is $0$ almost surely. The result is the analogue for the 2D GFF of the results obtained by Derrida & Spohn [@derrida-spohn] and Bovier & Kurkova [@bovier-kurkova1; @bovier-kurkova2] for branching Brownian motion and for GREM-type models. In [@arguin-zindy], such a result was proved for a non-hierarchical log-correlated Gaussian field constructed from the multifractal random measure of Bacry & Muzy [@bacry-muzy], see also [@bouchaud-fyodorov] for a closely related model. This type of result was conjectured by Carpentier & Le Doussal [@carpentier-ledoussal]. We also remark that Theorem \[thm: overlap\] shows that at low temperature two points sampled from the Gibbs measure have overlap $0$ or $1$. This is consistent with the result of Ding & Zeitouni [@ding-zeitouni], who showed that the extremal values of the GFF are at distance of order one or of order $N$ from each other.
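As a purely numerical illustration of Theorem \[thm: overlap\] (our addition, not part of the argument), one can sample the field on a small box, draw pairs of points from the Gibbs measure, and look at the empirical distribution of $q(v,v')$; finite-size effects are strong at accessible values of $N$, so this only shows how the quantities above are computed.

```python
import numpy as np

def overlap_samples(N=16, beta=4.0, n_fields=100, n_pairs=1000, seed=1):
    """Monte Carlo sketch of the two-overlap distribution x_{beta,N}:
    sample the GFF, draw pairs (v,v') from the Gibbs measure, and record
    q(v,v') = G(v,v') / ((1/pi) log N^2)."""
    rng = np.random.default_rng(seed)
    sites = [(i, j) for i in range(N) for j in range(N)]
    index = {v: k for k, v in enumerate(sites)}
    P = np.zeros((N * N, N * N))
    for (i, j), k in index.items():
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) in index:
                P[k, index[(i + di, j + dj)]] = 0.25
    G = np.linalg.inv(np.eye(N * N) - P)
    L = np.linalg.cholesky(G + 1e-10 * np.eye(N * N))
    norm = np.log(N ** 2) / np.pi
    q_values = []
    for _ in range(n_fields):
        phi = L @ rng.standard_normal(N * N)
        w = np.exp(beta * (phi - phi.max()))   # Gibbs weights, stabilized
        w /= w.sum()
        pairs = rng.choice(N * N, size=(n_pairs, 2), p=w)
        q_values.extend(G[pairs[:, 0], pairs[:, 1]] / norm)
    return np.array(q_values)   # Theorem [thm: overlap]: mass near 0 tends to beta_c/beta
```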
A general method to prove Poisson-Dirichlet statistics for the distribution of the overlaps from the one-step replica symmetry breaking was laid down in [@arguin-zindy]. This connection is made via the (now fundamental) Ghirlanda-Guerra identities. Another equivalent approach would be using [*stochastic stability*]{} as developed in [@aizenman-contucci; @arguin; @arguin-chatterjee]. The reader is referred to Section 2.3 of [@arguin-zindy] where the connection is explained in detail for general Gaussian fields. For the sake of conciseness, we simply state the consequence for the 2D GFF.
Consider the product measure ${\mathcal G}_{\beta,N}^{\times s}$ on $s$ [*replicas*]{} $(v_1,\dots,v_s)\in V_N^{\times s}$. Let $F:[0,1]^{\frac{s(s-1)}{2}}\to \R$ be a continuous function. Write $F(q_{ll'})$ for the function evaluated at $q_{ll'}:=q(v_l,v_{l'})$, $l\neq l'$, for $(v_1,\dots ,v_s)\in V_N^{\times s}$. We write $\E {\mathcal G}_{\beta,N}^{\times s}\big(F(q_{ll'})\big)$ for the averaged expectation. Recall that a [*Poisson-Dirichlet variable*]{} $\xi$ of parameter $\alpha$ is a random variable on the space of decreasing weights $\vec{s}=(s_1,s_2,\dots)$ with $1\geq s_1\geq s_2\geq \dots\geq 0$ and $\sum_{i}s_i\leq 1$ which has the same law as $\left(\eta_i/\sum_j\eta_j, i\in \N\right)_\downarrow$ where $\downarrow$ stands for the decreasing rearrangement and $\eta=(\eta_i,i\in\N)$ are the atoms of a Poisson random measure on $(0,\infty)$ of intensity measure $s^{-\alpha-1} ~ ds$.
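A Poisson-Dirichlet variable is easy to simulate, which may help the reader's intuition for the statement below (this sketch is our addition; the representation $\eta_i=\Gamma_i^{-1/\alpha}$, with $\Gamma_i$ the arrival times of a rate-one Poisson process, gives the atoms of the above Poisson random measure up to a multiplicative constant that cancels after normalization).

```python
import numpy as np

def poisson_dirichlet(alpha, n_atoms=100_000, seed=0):
    """Approximate sample of a Poisson-Dirichlet variable of parameter
    alpha in (0,1), truncated to the n_atoms largest atoms."""
    rng = np.random.default_rng(seed)
    gammas = np.cumsum(rng.exponential(size=n_atoms))  # arrival times Gamma_i
    eta = gammas ** (-1.0 / alpha)                      # atoms, already decreasing
    return eta / eta.sum()

# For two replicas, E[ sum_k xi_k^2 ] = 1 - alpha; with alpha = beta_c/beta this
# matches the limiting probability that q(v,v') = 1 in Theorem [thm: overlap].
xi = poisson_dirichlet(0.5)
print((xi ** 2).sum())   # fluctuates around 0.5 over realizations
```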
The theorem below is a direct consequence of Theorem \[thm: overlap\], the differentiability of the free energy, as well as Corollary 2.5 and Theorem 2.6 of [@arguin-zindy].
\[thm: PD\] Let $\beta>\beta_c$ and $\xi=(\xi_k,k\in\N)$ be a Poisson-Dirichlet variable of parameter $\beta_c/\beta$. Denote by $E$ the expectation with respect to $\xi$. For any continuous function $F: [0,1]^{\frac{s(s-1)}{2}}\to \R$ of the overlaps of $s$ replicas: $$\lim_{N\to\infty}
\e \left[ {\mathcal G}_{\beta, N}^{\times s} \left( F(q_{ll'}) \right)\right]
=
E \left[ \sum_{k_1\in\N,...,k_s\in\N} \xi_{k_1}\dots \xi_{k_s} ~ F(\delta_{k_lk_{l'}}) \right].$$
The above is one of the few rigorous results known on the Gibbs measure of log-correlated fields at low temperature. Theorem \[thm: PD\] is a step closer to the conjecture of Duplantier, Rhodes, Sheffield & Vargas (see Conjecture 11 in [@drsv] and Conjecture 6.3 in [@rhodes-vargas]) that the Gibbs measure, as a random probability measure on $V_N$, should be atomic in the limit with the size of the atoms being Poisson-Dirichlet. Theorem \[thm: PD\] falls short of the full conjecture because only test-functions of the overlaps are considered. Finally, it is expected that the Poisson-Dirichlet statistics emerging here is related to the Poissonian statistics of the thinned extrema of the 2D GFF proved by Biskup & Louidor in [@biskup-louidor] based on the convergence of the maximum established by Bramson, Ding & Zeitouni [@bramson-ding-zeitouni]. To recover the Gibbs measure from the extremal process, some properties of the cluster of points near the maxima must be known.
The rest of this paper is dedicated to the proof of Theorem \[thm: overlap\]. In Section \[sect: generalized\], a generalized version of the GFF (whose variance is scale-dependent) is introduced. It is a kind of non-hierarchical GREM and is related to a model studied by Fyodorov & Bouchaud in [@fyodorov-bouchaud]. The proof of Theorem \[thm: overlap\] is given in Section \[sect: overlap\]. It relates the overlap distribution of the 2D GFF to the free energy of the generalized GFF. The free energy of the generalized GFF needed in the proof is computed in Section \[sect: free energy\].
The multiscale decomposition and a generalized GFF {#sect: generalized}
==================================================
In this section, we construct a Gaussian field from the GFF whose variance is scale-dependent. The construction uses a multiscale decomposition along each vertex. The construction is analogous to a [*Generalized Random Energy Model*]{} of Derrida [@derrida], but where correlations are non-hierarchical. Here, only two different values of the variance will be needed though the construction can be directly generalized to any finite number of values.
Consider $0<t<1$. We assume to simplify the notation that $N^{1-t}$ is an even integer and that $N^{t}$ divides $N$. The case of general $t$’s can also be done by making trivial corrections along the construction.
For $v\in V_N$, we write $[v]_t$ for the unique box with $N^{1-t}$ points on each side and centered at $v$. If $[v]_t$ is not entirely contained in $V_N$, we take the convention that $[v]_t$ is the intersection of the square box with $V_N$. For $t=1$, take $[v]_1=v$. The $\sigma$-algebra $\F_{[v]_t^c}$ is the $\sigma$-algebra generated by the field outside $[v]_t$. We define $$\phi_{[v]_t}:= \E\left[ \phi_v ~\big| ~ \F_{[v]_t^c} \right]=\E\left[ \phi_v ~\big| ~ \F_{\partial [v]_t}\right]\ ,$$ where the second equality holds by the Markov property of the Gaussian free field, see Lemma \[lem: GFF\]. Clearly, for any $v\in V_N$, the random variable $\phi_{[v]_t}$ is Gaussian. Moreover, by Lemma \[lem: GFF\], $$\label{eqn: field decomp}
\phi_{[v]_t}=\sum_{u\in \partial [v]_t} p_{t,v}(u) \phi_u \ ,$$ where $p_{t,v}(u)=P_v(S_{\tau_{[v]_t}}=u)$ is the probability that a simple random walk starting at $v$ hits $u$ at the first exit time of $[v]_t$.
The following [*multiscale decomposition*]{} holds trivially $$\phi_v= \phi_{[v]_t} + \left(\phi_v- \phi_{[v]_t} \right)\ .$$ The decomposition suggests the following scale-dependent perturbation of the field. For $0<\alpha<1$ and $\vec{\sigma}=(\sigma_1, \sigma_2) \in \R_+^2$, consider for $v\in V_N$, $$\label{eqn: psi}
\psi_v:=\sigma_1 \phi_{[v]_\alpha} + \sigma_2 \left(\phi_v- \phi_{[v]_\alpha} \right)\ .$$ The Gaussian field $(\psi_v, v\in V_N)$ will be called the $(\alpha,\vec{\sigma})$-GFF on $V_N$.\
To control the boundary effects, it is necessary to consider the field in a box slightly smaller than $V_N$. For $\rho \in (0,1)$, let $$\label{eqn:Arho}
A_{N,\rho}:= \{v \in V_N : d_1(v,\partial V_N) \ge N^{1-\rho}\}\ ,$$ where $d_1(v,B):=\inf \{ \| v-u\| \, ; \, u \in B\}$ for any set $B \subset \Z^2.$ We always take $\rho<\alpha$ so that $[v]_\alpha$ is contained in $V_N$ for any $v\in A_{N,\rho}$. We write ${\mathcal G}_{\beta, N,\rho}^{(\alpha,\vec{\sigma})}(\cdot)$ for the Gibbs measure of $(\alpha,\vec{\sigma})$-GFF restricted to $A_{N,\rho}$ $${\mathcal G}_{\beta, N,\rho}^{(\alpha,\vec{\sigma})}(\{v\}):=\frac{\ee^{\beta \psi_v}}{Z_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta)}, \qquad v \in A_{N,\rho} ,$$ where $Z_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta):= \sum_{v\in A_{N,\rho}} \ee^{\beta \psi_v}\ .$
The associated free energy is given by $$f_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta):= \frac{1}{\log N^2} \log Z_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta), \qquad \forall \beta >0.$$ (Note that $\log \#A_{N,\rho}=(1+o_N(1))\log N^2$.) Its $L^1$-limit is a central quantity needed to apply the Bovier-Kurkova technique. This limit is better expressed in terms of the free energy of the REM model consisting of $N^2$ i.i.d. Gaussian variables of variance $\frac{\sigma^2}{\pi} \log N^2$: $$\label{eqn: rem free}
f(\beta; \sigma^2):=
\begin{cases}
1+\frac{\beta^2 \sigma^2}{2 \pi}, &\text{ if $\beta\leq \beta_c(\sigma^2):= \frac{\sqrt{2 \pi}}{{\sigma}},$}\\
\sqrt{\frac{2}{\pi}}\sigma \beta, &\text{ if $\beta\geq \beta_c(\sigma^2).$}
\end{cases}$$
\[thm:freeenergyperturbed\] Fix $\alpha \in (0,1)$ and $\vec{\sigma}= (\sigma_1,\sigma_2) \in \R_+^2$ and let $V_{12}:=\sigma_1^2\alpha+\sigma_2^2(1-\alpha)$. Then, for any $\rho < \alpha,$ and for all $\beta>0$ $$\label{eqn: free energy pert}
\lim_{N\to\infty}f_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta)=f^{(\alpha,\vec{\sigma})}(\beta):=
\begin{cases}
f(\beta; V_{12}), \qquad &\text{ if $\sigma_1\leq \sigma_2$,}\\
\alpha f(\beta; \sigma_1^2) + (1-\alpha)f(\beta; \sigma_2^2), \qquad &\text{ if $\sigma_1\geq \sigma_2$,}
\end{cases}$$ where the convergence holds almost surely and in $L^1$.
Note that the limit does not depend on $\rho$.
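The limiting free energy $f^{(\alpha,\vec{\sigma})}$ is explicit, so it is straightforward to evaluate numerically. The following sketch (our addition) implements (\[eqn: rem free\]) and (\[eqn: free energy pert\]) and checks by finite differences the one-sided derivatives in the parameter $u$, for $\vec\sigma=(1,1+u)$, that enter the Bovier-Kurkova argument of Section \[sect: overlap\].

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def f_rem(beta, sigma2):
    """REM free energy f(beta; sigma^2) of (eqn: rem free)."""
    beta_c = SQRT_2PI / np.sqrt(sigma2)
    if beta <= beta_c:
        return 1.0 + beta ** 2 * sigma2 / (2.0 * np.pi)
    return np.sqrt(2.0 / np.pi) * np.sqrt(sigma2) * beta

def f_perturbed(beta, alpha, s1, s2):
    """Limiting free energy of the (alpha, sigma)-GFF, (eqn: free energy pert)."""
    v12 = s1 ** 2 * alpha + s2 ** 2 * (1.0 - alpha)
    if s1 <= s2:
        return f_rem(beta, v12)
    return alpha * f_rem(beta, s1 ** 2) + (1.0 - alpha) * f_rem(beta, s2 ** 2)

# one-sided derivatives at u = 0 for sigma = (1, 1+u), beta > sqrt(2*pi):
beta, alpha, h = 3.0, 0.4, 1e-6
right = (f_perturbed(beta, alpha, 1.0, 1.0 + h) - f_perturbed(beta, alpha, 1.0, 1.0)) / h
left = (f_perturbed(beta, alpha, 1.0, 1.0) - f_perturbed(beta, alpha, 1.0, 1.0 - h)) / h
# (pi/beta^2) times each derivative should equal sqrt(2*pi)*(1-alpha)/beta,
# as in the Bovier-Kurkova computation below
print(np.pi / beta ** 2 * right, np.pi / beta ** 2 * left, SQRT_2PI * (1.0 - alpha) / beta)
```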
Proof of Theorem \[thm: overlap\] {#sect: overlap}
=================================
The Gibbs measure close to the boundary
---------------------------------------
The first step in the proof of Theorem \[thm: overlap\] is to show that points close to the boundary do not carry any weight in the Gibbs measure of the GFF in $V_N$. The result would not necessarily hold if we considered instead the outside of $V_N^\delta$ which is much larger than the outside of $A_{N,\rho}$.
\[lem: boundary\] For any $\rho>0$, $$\lim_{N\to\infty}\mathcal G_{\beta, N}(A^c_{N,\rho}) =0, \qquad \text{ in $\p$-probability.}$$
Before turning to the proof, we claim that the lemma implies that, for any $r\in [0,1]$ and $\rho\in(0,1)$, $$\label{eqn: x rho}
\lim_{N\to\infty}\big|x_{\beta,N} (r)-x_{\beta,N,\rho} (r)\big|=0\ ,$$ where $$x_{\beta,N,\rho}(r ):= \E\mathcal G_{\beta,N,\rho}^{\times 2}\{q(v,v')\leq r\}, \qquad \text{ $r\in [0,1]$}\ .$$ is the two-overlap distribution of the Gibbs measure of the GFF $(\phi_v,v\in V_N)$ restricted to $A_{N,\rho}$ $${\mathcal G}_{\beta, N, \rho}(\{v\}):=\frac{\ee^{\beta \phi_v}}{Z_{N,\rho}(\beta)}, \qquad v \in A_{N,\rho} ,$$ for $Z_{N,\rho}(\beta):= \sum_{v\in A_{N,\rho}} \ee^{\beta \phi_v}$. Indeed, introducing an auxiliary term $$\begin{aligned}
\big|x_{\beta,N} (r)-x_{\beta,N,\rho} (r)\big|&\leq
\big| \E\mathcal G_{\beta,N}^{\times 2}\big\{q(v,v')\leq r\big\} - \E\mathcal G_{\beta,N}^{\times 2}\big\{q(v,v')\leq r; v,v'\in A_{N,\rho}\big\}\big|\\
&+
\big| \E\mathcal G_{\beta,N}^{\times 2}\big\{q(v,v')\leq r; v,v'\in A_{N,\rho}\big\}-\E\mathcal G_{\beta,N,\rho}^{\times 2}\big\{q(v,v')\leq r\big\} \big|\ .
\end{aligned}$$ The first term is smaller than $2\ \E\mathcal G_{\beta,N}(A_{N,\rho}^c)$. The second term equals $$\begin{aligned}
&\E\mathcal G_{\beta,N,\rho}^{\times 2}\big\{q(v,v')\leq r\big\}- \E\mathcal G_{\beta,N}^{\times 2}\big\{q(v,v')\leq r; v,v'\in A_{N,\rho}\big\}\\
&\hspace{4cm}=
\E\left[\frac{\mathcal G_{\beta,N}^{\times 2}\big\{q(v,v')\leq r; v,v'\in A_{N,\rho}\big\}}{\mathcal G_{\beta,N}^{\times 2}\big\{ v,v'\in A_{N,\rho}\big\}}
\left(1-\mathcal G_{\beta,N}^{\times 2}\big\{ v,v'\in A_{N,\rho} \big\} \right)\right],
\end{aligned}$$ which is also smaller than $2\ \E\mathcal G_{\beta,N}(A_{N,\rho}^c)$. Lemma \[lem: boundary\] then implies (\[eqn: x rho\]) as claimed.
Let $\epsilon>0$ and $\lambda>0$. The probability can be split as follows $$\begin{aligned}
\p\left(\mathcal G_{\beta, N}(A^c_{N,\rho}) > \epsilon \right)
&\leq \p\left(\mathcal G_{\beta, N}(A^c_{N,\rho}) > \epsilon, \left|\frac{1}{\log N^2}\log Z_N(\beta) - f(\beta)\right|\leq \lambda \right)\\\
&+
\p\left( \left|\frac{1}{\log N^2}\log Z_N(\beta) - f(\beta) \right|> \lambda \right)\ ,
\end{aligned}$$ where $f(\beta)$ is defined in (\[eqn: free energy GFF\]). The second term converges to zero by (\[eqn: free energy GFF\]). The first term is smaller than $$\label{eqn: self-overlap}
\begin{aligned}
\p\left( \frac{1}{\log N^2}\log \sum_{v\in A^c_{N,\rho}} \exp\beta \phi_v > f(\beta)-\lambda+\frac{\log \epsilon}{\log N^2}\right) .
\end{aligned}$$ Since the free energy is a Lipschitz function of the variables $\phi_v$, see e.g. Theorem 2.2.4 in [@talagrand], the free energy self-averages, that is for any $t>0$ $$\lim_{N\to\infty}\p\left( \left|\frac{1}{\log N^2}\log \sum_{v\in A_{N,\rho}^c} \exp\beta \phi_v -\frac{1}{\log N^2} \E\left[\log \sum_{v\in A_{N,\rho}^c} \exp\beta \phi_v\right]\right|\geq t\right)= 0\ .$$ To conclude the proof, it remains to show that for some $C<1$ (independent of $N$ but dependent on $\rho$) $$\label{eqn: expect bound}
\limsup_{N\to\infty} \frac{1}{\log N^2} \E\left[\log \sum_{v\in A_{N,\rho}^c} \exp\beta \phi_v\right] < C f(\beta).$$ Note that by Lemma \[lem: GFF\], the maximal variance of $\phi_v$ in $V_N$ is $ \frac{1}{\pi}\log N^{2}+O_N(1)$. Pick $(g_v,v\in A_{N,\rho}^c)$ independent centered Gaussians (and independent of $(\phi_v)_{v \in A_{N,\rho}^c}$) with variance given by $\E[g_v^2]= \frac{1}{\pi}\log N^{2}+O_N(1) - \E[\phi_v^2]$. Jensen’s inequality applied to the Gibbs measure implies that $\E[\log \sum_{v\in A_{N,\rho}^c} \exp \beta (\phi_v+g_v)]\geq \E[\log \sum_{v\in A_{N,\rho}^c} \exp \beta \phi_v]$. Moreover, by a standard comparison argument (see Lemma \[lem: slepian\] in the Appendix), $\E[\log \sum_{v\in A_{N,\rho}^c} \exp \beta (\phi_v+g_v)]$ is smaller than the expectation for i.i.d. variables with identical variances. The two last observations imply that $$\frac{1}{\log N^2} \E\left[\log \sum_{v\in A_{N,\rho}^c} \exp\beta \phi_v\right] \leq \frac{1}{\log N^2} \E\left[\log \sum_{v\in A_{N,\rho}^c}\exp\beta \widetilde\phi_{ v} \right],$$ where $(\widetilde \phi_v, v\in A_{N,\rho}^c)$ are i.i.d. centered Gaussians of variance $\frac{1}{\pi}\log N^{2}+O_N(1)$. Since $\#A_{N,\rho}^c=N^2-|A_{N,\rho}|= 4N^{2-\rho}(1+o_N(1))$, the free energy of these i.i.d. Gaussians in the limit $N\to\infty$ is given by $$\lim_{N\to\infty} \frac{1}{\log 4N^{2-\rho}}\E\left[\log \sum_{ v\in A_{N,\rho}^{ c}} \exp\beta \widetilde\phi_{ v}\right] =
\begin{cases}
1+ \frac{\beta^2}{2\pi}\left(1-\frac{\rho}{2}\right)^{-1}, & \text{ $\beta< \sqrt{2\pi}\left(1-\frac{\rho}{2}\right)^{1/2}$},\\
\sqrt{\frac{2}{\pi}} \left(1-\frac{\rho}{2}\right)^{-1/2} \beta, &\text{ $\beta\geq \sqrt{2\pi}\left(1-\frac{\rho}{2}\right)^{1/2}$}\ .
\end{cases}$$ The last two equations then imply $$\limsup_{N\to\infty} \frac{1}{\log N^2} \E\left[\log \sum_{v\in A_{N,\rho}^{{ c}}} \exp\beta \phi_v\right] \leq
\begin{cases}
\left(1-\frac{\rho}{2}\right) + \frac{\beta^2}{2\pi}, & \text{ $\beta< \sqrt{2\pi}\left(1-\frac{\rho}{2}\right)^{1/2}$},\\
\sqrt{\frac{2}{\pi}} \left(1-\frac{\rho}{2}\right)^{1/2} \beta, &\text{ $\beta\geq \sqrt{2\pi}\left(1-\frac{\rho}{2}\right)^{1/2}$}\ .
\end{cases}$$ It is then straightforward to check that, for every $\beta$, the right side is strictly smaller than $f(\beta)$ as claimed.
An adaptation of the Bovier-Kurkova technique
---------------------------------------------
Theorem \[thm: overlap\] follows from Equation (\[eqn: x rho\]) and the following proposition.
For $\beta >\beta_c= \sqrt{2\pi}$, $$\lim_{\rho \to 0}\lim_{N\to\infty} x_{\beta,N,\rho} (r)
=
\begin{cases}
\frac{\beta_c}{\beta}, &\text{ for $0 \le r <1$,}\\
1, &\text{ for $r=1$.}
\end{cases}$$
Without loss of generality, we suppose that $\lim_{\rho\to 0} \lim_{N\to\infty}x_{\beta, N,\rho}=x_\beta$ in the sense of weak convergence. Uniqueness of the limit $x_\beta$ will then ensure the convergence for the whole sequence by compactness. Note also that by right-continuity and monotonicity of $x_\beta$, it suffices to show $$\label{eqn: to prove}
\int_\alpha^1 x_\beta (r) dr = \frac{\beta_c}{\beta} (1-\alpha) , \qquad \text{ for a dense set of $\alpha$'s in $[0,1]$.}$$ We can choose a dense set of $\alpha$ such that none of them are atoms of $x_\beta$, that is $x_\beta(\alpha)-x_\beta(\alpha^-)=0$.
Now recall Theorem \[thm:freeenergyperturbed\]. Pick $\vec{\sigma}=(1,1+u)$ for some parameter $|u|\leq 1$. Since $\beta>\sqrt{2\pi}$, $u$ can be taken small enough so that $\beta$ is larger than the critical $\beta$’s of the limit. The goal is to establish the following equality: $$\label{eqn: equality}
\int_\alpha^1 x_\beta (r) dr =\lim_{\rho\to 0} \lim_{N\to\infty} \frac{\pi}{\beta^2} \frac{\partial}{\partial u} f_{N,\rho}^{( \alpha, \vec{\sigma})}(\beta) \Big|_{u=0}\ .$$ The conclusion follows from this equality. Indeed, by construction, the function $u\mapsto f_{N,\rho}^{( \alpha, \vec{\sigma})}(\beta)$ is convex. In particular, the limit of the derivatives is the derivative of the limit at any point of differentiability. Therefore, a straightforward calculation from (\[eqn: free energy pert\]) with $\sigma_1=1$ and $\sigma_2=1+u$ gives: $$\lim_{N\to\infty} \frac{\pi}{\beta^2} \frac{\partial}{\partial u} f_{N,\rho}^{( \alpha, \vec{\sigma})}(\beta) =
\begin{cases}
\frac{\sqrt{2\pi}}{\beta} \frac{(1-\alpha)(1+u)}{\sqrt{\alpha + (1-\alpha)(1+u)^2}}, & \text{ if $u>0$,}\\
\frac{\sqrt{2\pi}}{ \beta} (1-\alpha), & \text{ if $u<0$.}
\end{cases}$$ This gives (\[eqn: to prove\]) at $u=0$.
We introduce the notation for the [*overlap at scale $\alpha$*]{}: $$\label{eqn: q alpha}
q_\alpha(v,v'):=\frac{1}{\frac{1}{\pi}\log N^2}\E\left[\left(\phi_v- \phi_{[v]_\alpha} \right)\left(\phi_{v'}- \phi_{[v']_\alpha} \right)\right].$$ Equality (\[eqn: equality\]) is proved via two identities: $$\begin{aligned}
\label{eqn: BK1}
\int_\alpha^1 x_{\beta, N,\rho} (r) dr &= &(1-\alpha)- \E \mathcal G_{\beta, N, \rho}^{\times 2} \big[ q(v,v') -\alpha ; q(v,v')\geq \alpha\big],\\
\label{eqn: BK2}
\frac{\pi}{\beta^2} \frac{\partial}{\partial u}{f_{N,\rho}^{( \alpha, \vec{\sigma})}}(\beta) \Big|_{u=0} &=&\E \mathcal G_{\beta, N, \rho} \big[ q_\alpha(v,v) \big] -
\E \mathcal G_{\beta, N, \rho}^{\times 2} \big[q_\alpha(v,v') ; v'\in [v]_\alpha\big]\ .\end{aligned}$$ The first identity holds since by Fubini’s theorem $$\begin{aligned}
\int_\alpha^1 x_{\beta, N,\rho} (r) dr&=
\E \mathcal G_{\beta,N,\rho}^{\times 2} \left[ \int_\alpha^1 1_{\{ r\geq q(v,v')\}} dr\right]\\
&= \E \mathcal G_{\beta,N,\rho}^{\times 2} \big[ 1-\alpha; q(v,v') <\alpha \big] + \E \mathcal G_{\beta,N,\rho}^{\times 2} \big[ 1-q(v,v'); q(v,v') \geq\alpha \big]\ .
\end{aligned}$$
For the second identity, direct differentiation gives $$\frac{\pi}{\beta^2} \frac{\partial}{\partial u} {f_{N,\rho}^{( \alpha, \vec{\sigma})}}(\beta) \Big|_{u=0}=\frac{1}{\frac{1}{\pi}\log N^2}\E\mathcal G_{\beta,N,\rho}\big[ \phi_v-\phi_{[v]_\alpha}\big]\ .$$ The identity is then obtained by Gaussian integration by parts.
To prove (\[eqn: equality\]), we need to relate the overlap at scale $\alpha$ to the overlap $q(v,v')$, as well as the event $\{q(v,v')\geq \alpha\}$ to the event $\{v'\in[v]_\alpha\}$. This is slightly complicated by the boundary effect present in the GFF. The equality in the limit $N\to\infty$ between the first terms of (\[eqn: BK1\]) and (\[eqn: BK2\]) is easy. Because $(\phi_u- \E[\phi_u | \F_{[v]_\alpha^c}], u \in [v]_\alpha)$ has the law of a GFF in $[v]_\alpha$, it follows from Lemma \[lem: green estimate\] that $$\E\big[(\phi_v-\phi_{[v]_\alpha})^2\big]=\frac{(1-\alpha)}{\pi} \log N^{2} + O_N(1)\ .$$ Therefore, we have for $v\in A_{N,\rho}$ $$\lim_{N\to\infty} \E \mathcal G_{\beta, N, \rho} \left[ q_\alpha(v,v) \right] = 1-\alpha \ .$$ It remains to establish the equality between the second terms of (\[eqn: BK1\]) and (\[eqn: BK2\]). Here, a control of the boundary effect is necessary. The following observation is useful to relate the overlaps and the distances: if $v,v'\in A_{N,\rho}$, Lemma \[lem: green estimate\] gives $$\label{eqn: q in Arho}
1-\rho-\frac{\log \|v-v'\|^2}{\log N^2} + o_N(1)\leq q(v,v') \leq 1-\frac{\log \|v-v'\|^2}{\log N^2} + o_N(1)\ .$$ On one hand, the right inequality proves the following implication $$\label{eqn: right}
\text{$q(v,v')\geq \alpha+\varepsilon$ for some $\varepsilon>0$} \Longrightarrow \|v-v'\|^{ 2} \leq c N^{2(1-\alpha-\varepsilon)}\ ,$$ for some constant $c$ independent of $N$ and $\rho$. On the other hand, the left inequality gives: $$\label{eqn: left}
v'\in [v]_\alpha \Longrightarrow \text{$q(v,v')\geq \alpha-2\rho$.}$$
Using this, we show $$\begin{aligned}
\label{eqn: show1}
\Delta_1(N,\rho)&:= \Big|\E \mathcal G_{\beta, N, \rho}^{\times 2} \left[ q(v,v') -\alpha ; q(v,v')\geq \alpha\right]-\E \mathcal G_{\beta, N, \rho}^{\times 2} \left[ q_\alpha(v,v') ; q(v,v')\geq \alpha\right]\Big| \to 0\ ,\\
\Delta_2(N,\rho)&:=\Big|\E \mathcal G_{\beta, N, \rho}^{\times 2} \left[ q_\alpha(v,v') ; q(v,v')\geq \alpha\right]-\E \mathcal G_{\beta, N, \rho}^{\times 2} \left[ q_\alpha(v,v') ; v'\in[v]_\alpha\right]\Big| \to 0\ ,
\end{aligned}$$ in the limit $N\to\infty$ and $\rho\to 0$. Let $\varepsilon>0$. Remark that $$\label{eqn: rem}
0\leq \E \mathcal G_{\beta, N, \rho}^{\times 2} \left[ q(v,v') -\alpha ; q(v,v')\geq \alpha\right]- \E \mathcal G_{\beta, N, \rho}^{\times 2} \left[ q(v,v') -\alpha ; q(v,v')\geq \alpha+\varepsilon\right]\leq \varepsilon\ .$$ To establish the equality of the overlaps on the event $\{q(v,v')\geq \alpha+\varepsilon\}$, consider the decomposition, $$\label{eqn: fusion1}
\begin{aligned}
&\E\left[\left(\phi_v- \phi_{[v]_\alpha} \right)\left(\phi_{v'}- \phi_{[v']_\alpha} \right)\right]=\\
&\E\left[\left(\phi_v- \E[\phi_{ v}|\F_{[v']^c_\alpha}] \right)\left(\phi_{v'}- \phi_{[v']_\alpha} \right)\right]
+
\E\left[\left(\E[\phi_{v}|\F_{[v']^c_\alpha}]- \phi_{[v]_\alpha} \right)\left(\phi_{v'}- \phi_{[v']_\alpha} \right)\right].
\end{aligned}$$ On the event $\{q(v,v')\geq \alpha+\varepsilon\}$, (\[eqn: right\]) implies $\|v-v'\|^{2} \leq c N^{2(1-\alpha-\varepsilon)}$. Therefore, the first term of the right side of (\[eqn: fusion1\]) is, by Lemma \[lem: green estimate\], $$\label{eqn: 1stterm}
\E\left[\left(\phi_v- \E[\phi_{v}|\F_{[v']^c_\alpha}] \right)\left(\phi_{v'}- \phi_{[v']_\alpha} \right)\right]=\frac{2}{\pi}\log \frac{N^{(1-\alpha)}}{\|v-v'\|} + O_N(1)\ .$$ The second term is negligible. Indeed, by the Cauchy-Schwarz inequality, it suffices to prove that $$\label{eqn: fusion2}
\E\left[\left(\E[\phi_{v}|\F_{[v']^c_\alpha}]- \phi_{[v]_\alpha} \right)^2\right]=O_N(1)\ .$$ For this, write $\widetilde B$ for the box $[v]_\alpha\cap [v']_\alpha$. We have $$\phi_v -\phi_{[v]_\alpha} =(\phi_v-\E[\phi_v| \F_{\widetilde B^c}])+(\E[\phi_v| \F_{\widetilde B^c}]-\phi_{[v]_\alpha})\ .$$ Since $\phi_v-\E[\phi_v| \F_{\widetilde B^c}]$ is independent of $\F_{\widetilde B^c}$ and $\E[\phi_v| \F_{\widetilde B^c}]-\phi_{[v]_\alpha}$ is $\F_{\widetilde B^c}$-measurable (observe that $\F_{\widetilde B^c}\supset \F_{[v]_\alpha^c}$), we get $$\E[(\phi_v-\phi_{[v]_\alpha})^2]= \E[(\phi_v-\E[\phi_v| \F_{\widetilde B^c}])^2]+ \E[(\E[\phi_v| \F_{\widetilde B^c}]-\phi_{[v]_\alpha})^2]\ .$$ Moreover, $\E[(\phi_v-\E[\phi_v| \F_{\widetilde B^c}])^2]$ and $\E[(\phi_v-\phi_{[v]_\alpha})^2]$ are both equal to $\frac{1-\alpha}{\pi}\log N^2 + O_N(1)$ by Lemma \[lem: green estimate\] and the fact that distances of $v$ to vertices in $\partial\widetilde B$ and $\partial[v]_\alpha$ are both proportional to $N^{1-\alpha}$. Therefore $\E[(\E[\phi_v| \F_{\widetilde B^c}]-\phi_{[v]_\alpha})^2]=O_N(1)$. The same argument with $\phi_{[v]_\alpha}$ replaced by $\E[\phi_v|\F_{[v']_\alpha^c}]$ shows that $\E[(\E[\phi_v| \F_{\widetilde B^c}]-\E[\phi_v|\F_{[v']_\alpha^c}])^2]=O_N(1)$. The two equalities imply . Equations and give $$\label{eqn: qalpha}
q_\alpha(v,v')= 1-\alpha-\frac{\log \|v-v'\|^2}{\log N^2} + o_N(1), \qquad \text{ on $\{q(v,v')\geq \alpha+\varepsilon\}$.}$$ Combining this with (\[eqn: q in Arho\]) and (\[eqn: rem\]) yields $\Delta_1(N,\rho)\to 0$ in the limit $N\to\infty$, $\rho\to 0$ and $\varepsilon\to 0$.
For $\Delta_2(N,\rho)$, let $\varepsilon'>2\rho$. For $v'\in [v]_\alpha$, (\[eqn: left\]) implies $q(v,v')\geq \alpha - 2\rho$. On the other hand, by (\[eqn: right\]), $q(v,v')\geq \alpha+\varepsilon'$ implies $v'\in[v]_\alpha$. These two observations give the estimate $$\Delta_2(N,\rho)
\leq \E \mathcal G_{\beta, N, \rho}^{\times 2} \big[ q_\alpha(v,v') ; q(v,v')\in [\alpha-\varepsilon',\alpha+\varepsilon']\big]\ .$$ The right side is clearly smaller than $$x_{\beta,N,\rho}(\alpha+\varepsilon')-x_{\beta,N,\rho}(\alpha-\varepsilon')\ .$$ Under the successive limits $N\to\infty$, $\rho\to 0$, then $\varepsilon'\to 0$, the right side becomes $x_\beta(\alpha)-x_\beta(\alpha-)$. This is zero since $\alpha$ was chosen not to be an atom of $x_\beta$.
The free energy of the $(\alpha,\vec{\sigma})$-GFF: proof of Theorem \[thm:freeenergyperturbed\] {#sect: free energy}
================================================================================================
The computation of the free energy of the $(\alpha,\vec{\sigma})$-GFF is divided in two steps. First, an upper bound is found by comparing the field $\psi$ in $A_{N,\rho}$ with a “non-homogeneous” GREM having the same free energy as a standard 2-level GREM. Second, we get a matching lower bound using the trivial inequality $f_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta) \ge \ \frac{1}{\log N^2} \log \sum_{v \in V_N^\delta} \ee^{\beta \psi_v} $. The limit of the right term is computed following the method of Daviaud [@daviaud].
Proof of the upper bound {#sect: upperbound}
------------------------
For conciseness, we only prove the case $\sigma_1\geq \sigma_2$, by a comparison argument with a $2$-level GREM. The case $\sigma_1\leq \sigma_2$ is done similarly by comparing with a REM. The comparison argument will have to be done in two steps to account for boundary effects.
Divide the set $A_{N,\rho}$ into square boxes of side-length $N^{1-\alpha}/100$. (The factor $1/100$ is a choice. We simply need these boxes to be smaller than the neighborhoods $[v]_\alpha$, yet of the same order of length in $N$.) Pick the boxes in such a way that each $v\in A_{N,\rho}$ belongs to one and only one of these boxes. The collection of boxes is denoted by $\mathcal B_\alpha$ and $\partial \mathcal B_\alpha$ denotes $\bigcup_{B\in \mathcal B_\alpha} \partial B$. For $v\in A_{N,\rho}$, we write $B(v)$ for the box of $\mathcal B_\alpha$ to which $v$ belongs. For $B\in \mathcal B_{ \alpha}$, denote by $\widetilde B\supset B$ the square box given by the intersections of all $[u]_\alpha$, $u\in B$, see figure \[fig: boxes\]. Remark that the side-length of $\widetilde B$ is $cN^{1-\alpha}$, for some constant $c$. For short, write $\phi_{\widetilde B}:=\E[\phi_{v_B}|\F_{\widetilde B^c}]$ where $v_B$ is the center of the box $B$. The idea in constructing the GREM is to associate to each point $v \in B$ the same contribution at scale $\alpha$, namely $\phi_{\widetilde B}$. One problem is that $\phi_{\widetilde B}$ will not have the same variance for every $B$ since it depends on the distance to the boundary. This is the reason why the comparison will need to be done in two steps.
![The box $B\in \mathcal B_\alpha$ and the corresponding box $\widetilde B$ which is the intersection of all the neighborhoods $[v]_\alpha$, $v\in B$.[]{data-label="fig: boxes"}](box_upper.pdf){height="4cm"}
First, consider the hierarchical Gaussian field $(\widetilde\psi_v, v\in A_{N,\rho})$: $$\label{eqn: tilde psi}
\widetilde \psi_v= g^{(1)}_{B(v)} + g^{(2)}_v,$$ where $(g^{(2)}_v, v\in A_{N,\rho} )$ are independent centered Gaussians (also independent of $(g^{(1)}_B, B\in \mathcal B_\alpha)$) with variance $$\E[(g^{(2)}_v)^2]= \E[\psi_v^2] - \E[ (g^{(1)}_{B(v)})^2]\ .$$ This ensures that $\E[\psi_v^2]= \E[\widetilde\psi_v^2]$ for all $v\in A_{N,\rho}$. The variables $(g^{(1)}_B, B\in \mathcal B_\alpha)$ are also independent centered Gaussians with variance chosen to be $\sigma_1^2\E[\phi_{\widetilde B}^2]+C$ for some constant $C\in \R$ independent of $B$ in $\mathcal B_{\alpha}$ and independent of $N$. The next lemma ensures that $$\label{eqn: corr bound}
\E[\psi_v\psi_{v'}]\geq \E[\widetilde\psi_v\widetilde \psi_{v'}]\ .$$
\[lem: correlations\] Consider the field $(\psi_v, v\in A_{N,\rho})$ as in (\[eqn: psi\]). Then $\E[\psi_v\psi_{v'}]\geq 0$. Moreover, if $v$ and $v'$ both belong to $B\in \mathcal B_\alpha$, then $$\E[\psi_v\psi_{v'}]\geq \sigma_1^2\E[\phi_{\widetilde B}^2]+C\ ,$$ for some constant $C\in\R$ independent of $N$.
For the first assertion, write $$\psi_v=(\sigma_1-\sigma_2) \phi_{[v]_\alpha}+\sigma_2 \phi_v\ .$$ The representation $\phi_{[v]_\alpha}=\sum_{u\in \partial [v]_\alpha} p_{\alpha,v}(u)~\phi_u$ of Lemma \[lem: GFF\] and the fact that $\sigma_1>\sigma_2$ imply that $\E[\psi_v\psi_{v'}] \ge 0$ since the field $\phi$ is positively correlated by (\[eqn: cov\]).
Suppose now that $v,v'\in B$ where $B\in \mathcal B_\alpha$. The covariance can be written as $$\label{eqn: covariance psi}
\begin{aligned}
\E[\psi_v\psi_{v'}]&= \sigma_1^2\E\left[\phi_{[v]_\alpha}\phi_{[v']_\alpha}\right] + \sigma_2^2\E\left[(\phi_v-\phi_{[v]_\alpha})(\phi_{v'}-\phi_{[v']_\alpha})\right] \\
&+ \sigma_1\sigma_2 \E\left[\phi_{[v]_\alpha}(\phi_{v'}-\phi_{[v']_\alpha})\right]+\sigma_1\sigma_2 \E\left[\phi_{[v']_\alpha}(\phi_v-\phi_{[v]_\alpha})\right]\ .
\end{aligned}$$ We first prove that the last two terms of (\[eqn: covariance psi\]) are positive. By Lemma \[lem: GFF\], we can write $\phi_{[v]_\alpha}=\sum_{u\in\partial [v]_\alpha} p_{\alpha,v}(u)\ \phi_u$. Note that the vertices $u$ that are in $[v']_\alpha^c$ will not contribute to the covariance $ \E\left[\phi_{[v]_\alpha}(\phi_{v'}-\phi_{[v']_\alpha})\right]$ by conditioning. Thus $$\begin{aligned}
\E\left[\phi_{[v]_\alpha}(\phi_{v'}-\phi_{[v']_\alpha})\right]&=\sum_{u\in\partial [v]_\alpha\cap [v']_\alpha} p_{\alpha,v}(u)\ \E\left[\phi_u(\phi_{v'}-\phi_{[v']_\alpha})\right]\\
&=\sum_{u\in\partial [v]_\alpha\cap [v']_\alpha} p_{\alpha,v}(u)\ \E\left[(\phi_u-\E[\phi_u|\F_{[v']_\alpha^c}])(\phi_{v'}-\E[\phi_{ v'}|\F_{[v']_\alpha^c}])\right]\ .
\end{aligned}$$ Lemma \[lem: green estimate\] ensures that the correlations in the sum are positive.
For the first term of (\[eqn: covariance psi\]), the idea is to show that $\phi_{[v]_\alpha}$ and $\phi_{\widetilde B}$ are close in the $L^2$-sense. The same argument used to prove (\[eqn: fusion2\]) shows that $$\label{eqn: lem12}
\E\left[\left(\phi_{[v]_\alpha} - \E[\phi_v| \F_{\widetilde B^c}]\right)^2\right]= O_N(1)\ .$$ Moreover, since $v$ and $v_B$ are also at a distance smaller than $N^{1-\alpha}/100$ from each other, Lemma 12 in [@bolthausen-deuschel-giacomin] implies that $$\label{eqn: lem12b}
\E\left[\left(\phi_{\widetilde B}- \E[\phi_v| \F_{\widetilde B^c}]\right)^2\right]= O_N(1)\ .$$ Equations (\[eqn: lem12\]) and (\[eqn: lem12b\]) give $\E[(\phi_{\widetilde B}-\phi_{[v]_\alpha})^2]= O_N(1)$ and similarly for $v'$. Altogether, this gives $$\sigma_1^2\E\left[\phi_{[v]_\alpha}\phi_{[v']_\alpha}\right]=\sigma_1^2\E[\phi_{\widetilde B}^2]+O_N(1)\ .$$
It remains to show that the second term of (\[eqn: covariance psi\]) is bounded below by an $O_N(1)$ term. Since $\phi_{[v]_\alpha}$ and $\phi_{[v']_\alpha}$ are $\F_{\widetilde B^c}$-measurable by definition of the box $\widetilde B$, we have the decomposition $$\begin{aligned}
\E\left[(\phi_v-\phi_{[v]_\alpha})(\phi_{v'}-\phi_{[v']_\alpha})\right]&=\E[(\phi_v-\E[\phi_v| \F_{\widetilde B^c}])(\phi_{v'}-\E[\phi_{v'}| \F_{\widetilde B^c}])] \\
&+ \E[(\E[\phi_v| \F_{\widetilde B^c}]-\phi_{[v]_\alpha})(\E[\phi_{v'}| \F_{\widetilde B^c}]-\phi_{[v']_\alpha})]\ .
\end{aligned}$$ The first term is positive by Lemma \[lem: GFF\]. As for the second, Equation (\[eqn: lem12\]) together with the Cauchy-Schwarz inequality shows that $$\E\Big[\left(\E[\phi_v| \F_{\widetilde B^c}]-\phi_{[v]_\alpha}\right)\left(\E[\phi_{v'}|\F_{\widetilde B^c}]-\phi_{[v']_\alpha}\right)\Big]=O_N(1)\ .$$ This concludes the proof of the lemma.
Equation (\[eqn: corr bound\]) implies that the free energy of $\psi$ is smaller than the one of $\widetilde \psi$ by a standard comparison lemma, see Lemma \[lem: slepian\] in the Appendix. It remains to prove an upper bound for the free energy of $\widetilde \psi$.
Note that the field $\widetilde \psi$ is not a GREM [*per se*]{} because the variances of $g^{(1)}_B$, $B\in \mathcal B_\alpha$, are not the same for every $B$, as it depends on the distance of $B$ to the boundary. However, the variances of $\phi_{\widetilde B}$, $B\in \mathcal B_\alpha$, are uniformly bounded by $\frac{\alpha}{\pi}\log N^2+O_N(1)$; indeed $$\begin{aligned}
\E\left[\phi_{\widetilde B}^2\right]
&=\E\left[\phi_{v_B}^2\right] - \E\left[(\phi_{v_B}-\phi_{\widetilde B})^2\right]\\
&= \E\left[\phi_{v_B}^2\right] - \frac{1-\alpha}{\pi} \log N^2 + O_N(1)\\
&\leq \frac{1}{\pi}\log N^2 - \frac{1-\alpha}{\pi} \log N^2 + O_N(1)= \frac{\alpha}{\pi} \log N^2 + O_N(1),
\end{aligned}$$ where we used Lemmas \[lem: GFF\] and \[lem: green estimate\] in the second line and Lemma \[lem: green estimate\] in the third.
Moreover, note that for $v \in B$, $$\E[(g^{(2)}_v)^2]= \E[\psi_v^2] - \E[ (g^{(1)}_B)^2]= \sigma_1^2\big(\E[\phi_{[v]_\alpha}^2]- \E[ \phi_{\widetilde B}^2]\big)+ \sigma_2^2\frac{1-\alpha}{\pi} \log N^2 - C\sigma_1^2\ .$$ The first term is of order $O_N(1)$ by Equations and . Thus one has $$\E[(g^{(2)}_v)^2]= \sigma_2^2\frac{1-\alpha}{\pi} \log N^2 +O_N(1)\ .$$ The important point is that the variance of $g_v^{(2)}$ of $\widetilde \psi$ is uniform in $v$, up to lower order terms. Now consider the $2$-level GREM $(\bar \psi_v, v\in A_{N,\rho})$ $$\bar \psi_v= \bar g^{(1)}_B + g^{(2)}_v$$ where $(g^{(2)}_v, v\in A_{N,\rho} )$ are as before and $(\bar g^{(1)}_B, B\in \mathcal B_\alpha)$ are i.i.d. Gaussians of variance $\frac{\alpha}{\pi}\log N^2+O_N(1)$. This field differs from $\widetilde \psi$ only from the fact that the variance of $ \bar g^{(1)}_B$ is the same for all $B$ and is the maximal variance of $(g^{(1)}_B, B\in \mathcal B_\alpha)$. The calculation of the free energy of $(\bar \psi_v, v\in A_{N,\rho})$ is a standard computation and gives the correct upper bound in the statement of Theorem \[thm:freeenergyperturbed\]. (We refer to [@bolthausen-sznitman] for the detailed computation of the free energy of the GREM.) The fact that the free energy of $\bar \psi$ is larger than the one of $\widetilde \psi$ follows from the next lemma showing that the free energy of a hierarchical field is an increasing function of the variance of each point at the first level.
Consider $N_1,N_2\in\N$. Let $(X^{(1)}_{v_1} , v_1\leq N_1)$ and $(X^{(2)}_{v_1,v_2} ; v_1\leq N_1, v_2\leq N_2)$ be independent centered standard Gaussian random variables. Consider the Gaussian field of the form $$X_v=\sigma_1(v_1) X^{(1)}_{v_1}+\sigma_2 X^{(2)}_{v_1,v_2}\ , \ \ v=(v_1,v_2)$$ where $\sigma_2>0$ and $\sigma_1(v_1)>0$, $v_1\leq N_1$, might depend on $v_1$. Then $\E\left[\log \sum_v e^{\beta X_v}\right]$ is an increasing function in each variable $\sigma_1(v_1)$.
Direct differentiation gives $$\frac{\partial}{\partial \sigma_1(v_1)} \E\left[\log \sum_v e^{\beta X_v}\right]= \beta\E \left[\frac{\sum_{v_2} X_{v_1}e^{\beta X_{v_1,v_2}}}{Z_N(\beta)}\right]\ ,$$ where $Z_N(\beta)=\sum_v e^{\beta X_v}$. Gaussian integration by parts then yields $$\beta\E \left[\frac{\sum_{v_2} X_{v_1}e^{\beta X_{v_1,v_2}}}{\sum_v e^{\beta X_v}}\right]
=\beta^2 \sigma_1(v_1)\E \left[ \frac{\sum_{v_2} e^{\beta X_{v_1,v_2}}}{Z_N(\beta)} - \frac{\sum_{v_2,v_2'} e^{\beta X_{v_1,v_2} }e^{\beta X_{v_1,v'_2}}}{Z_N(\beta)^2}\right]\ .$$ The right side is clearly positive, hence proving the lemma.
Proof of the lower bound
------------------------
Recall the definition of $V_{N}^\delta$ given in the introduction. The two following propositions are used to compute the log-number of high points of the field $\psi$ in $V_{N}^\delta$. The treatment follows the treatment of Daviaud [@daviaud] for the standard GFF. The lower bound for the free energy is then computed using Laplace’s method. Define for simplicity $V_{12}:=\sigma_1^2\alpha+\sigma_2^2(1-\alpha)$.
\[prop:perturbed-maximum\] $$\lim_{N \to \infty} \p\left( \max_{v \in V_N^\delta} \psi_v \ge \sqrt{\frac{2}{\pi}} \gamma_{max} \log N^2 \right) =0,$$ where $$\gamma_{max}=\gamma_{max}(\alpha,\vec{\sigma}):=
\begin{cases}
\sqrt{V_{12}}, \ \ &\text{ if $\sigma_1\leq \sigma_2$,}
\\
\sigma_1\alpha+\sigma_2(1-\alpha), \ \ &\text{ if $\sigma_1\geq \sigma_2$. }
\end{cases}$$
The case $\sigma_1\leq \sigma_2$ is direct by a union bound. In the case $\sigma_1\geq \sigma_2$, note that the field $\widetilde \psi$ defined in (\[eqn: tilde psi\]) but restricted to $V_N^\delta$ is a 2-level GREM with $cN^{2\alpha}$ (for some $c>0$) Gaussian variables of variance $\frac{\sigma_1^2\alpha}{\pi}\log N^2 + O_N(1)$ at the first level. Indeed, for the field restricted to $V_N^\delta$, the variance $\sigma_1^2\E[\phi_{\widetilde B}^2]$ of the first-level variables is $\frac{\sigma_1^2\alpha}{\pi}\log N^2 + O_N(1)$ by Lemma \[lem: green estimate\] since the distance to the boundary is a constant times $N$. Therefore, by Lemma \[lem: slepian\] and Equation (\[eqn: corr bound\]), we have $$\p\left( \max_{v \in V_N^\delta} \psi_v \ge \sqrt{\frac{2}{\pi}} \gamma_{max} \log N^2 \right) \leq \p\left( \max_{v \in V_N^\delta} \widetilde \psi_v \ge \sqrt{\frac{2}{\pi}} \gamma_{max} \log N^2 \right) \ .$$ The result then follows from the maximal displacement of the 2-level GREM. We refer the reader to Theorem 1.1 in [@bovier-kurkova1] for the details.
\[prop:perturbed-highpoints\] Let $\mathcal{H}_N^{\psi,\delta}(\gamma):=\left\{ v \in V_N^\delta: \, \psi_v \ge \sqrt{\frac{2}{\pi}} \gamma \log N^2 \right\}$ be the set of $\gamma$-high points within $V_N^\delta$ and define $$\begin{aligned}
&\text{if $\sigma_1\leq \sigma_2$} \qquad \mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma):=1- \frac{\gamma^2}{V_{12}};\\
&\text{if $\sigma_1\geq \sigma_2$} \qquad \mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma):=
\begin{cases}
1- \frac{\gamma^2}{V_{12}},
\ &\text{ if $ \gamma < \frac{V_{12}}{\sigma_1}$},
\\
(1-\alpha) - \frac{(\gamma -\sigma_1\alpha)^2}{\sigma_2^2(1-\alpha)},
\ &\text{ if $ \gamma \geq \frac{V_{12}}{\sigma_1}$.}
\end{cases}
\end{aligned}$$ Then, for all $0< \gamma < \gamma_{max},$ and for any $\mathcal{E}<\mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma)$, there exists $c$ such that $$\label{eqn: lower-}
\p\left(\vert \mathcal{H}_N^{\psi,\delta}(\gamma) \vert \le N^{2\mathcal{E}} \right) \le \exp \{- c (\log N)^2\}.$$
Proposition \[prop:perturbed-highpoints\] is obtained by a two-step recursion. Two lemmas are needed. The first is a straightforward generalization of the lower bound in Daviaud’s theorem (see Theorem 1.2 in [@daviaud] and its proof). For all $0<\alpha<1,$ denote by $\Pi_\alpha$ the centers of the square boxes in $\mathcal B_\alpha$ (as defined in Section \[sect: upperbound\]) which also belong to $V_N^\delta$.
\[lem:recurrence\] Let $\alpha',\alpha'' \in (0,1]$ such that $0<\alpha'<\alpha'' \le \alpha$ or $\alpha \le \alpha'<\alpha'' \le 1.$ Denote by $\sigma$ the parameter $\sigma_1$ if $0<\alpha'<\alpha'' \le \alpha$ and by $\sigma$ the parameter $\sigma_2$ if $\alpha \le \alpha'<\alpha'' \le 1.$ Assume that the event $$\Xi:=\left\{\#\{v\in\Pi_{ \alpha'}: \psi_v ( \alpha') \geq \gamma' \sqrt{\frac{2}{\pi}} \log N^2 \} \ge N^{\mathcal{E}'} \right\},$$ is such that $$\p(\Xi^c) \le \exp\{-c' (\log N)^2 \},$$ for some $\gamma' \ge 0$, $\mathcal{E}'>0$ and $c'>0$.
Let $$\mathcal{E}(\gamma):= \mathcal{E}' + (\alpha''-\alpha') - \frac{(\gamma-\gamma')^2}{ \sigma^2 (\alpha''-\alpha')}>0.$$ Then, for any $\gamma''$ such that $\mathcal{E}(\gamma'')>0$ and any $\mathcal{E}< \mathcal{E}(\gamma'')$, there exists $c$ such that $$\p\left(\#\{v\in\Pi_{ \alpha''}: \psi_v ( \alpha'') \geq \gamma'' \sqrt{\frac{2}{\pi}} \log N^2 \} \le N^{2\mathcal{E}}\right) \le \exp\{-c (\log N)^2 \} .$$
We stress that $\gamma''$ may be such that $\mathcal{E}(\gamma'')<\mathcal{E}'$. The second lemma, which follows, serves as the starting point of the recursion and is proved in [@bolthausen-deuschel-giacomin] (see Lemma 8 in [@bolthausen-deuschel-giacomin]).
\[lem:init\] For any $\alpha_0$ such that $0<\alpha_0<\alpha$, there exists $\mathcal E_0=\mathcal E_0(\alpha_0)>0$ and $c=c(\alpha_0)$ such that $$\p\left(\#\{v\in\Pi_{ \alpha_0}: \psi_v ( \alpha_0) \geq 0\}\leq N^{\mathcal E_0}\right)\leq \exp\{-c (\log N)^2 \} .$$
Let $\gamma$ such that $0< \gamma < \gamma_{max}$ and choose $\mathcal{E}$ such that $\mathcal{E}<\mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma)$. By Lemma \[lem:init\], for $\alpha_0<\alpha$ arbitrarily close to $0$, there exists $\mathcal E_0=\mathcal E_0(\alpha_0)>0$ and $c_0=c_0(\alpha_0)>0$, such that $$\label{eqn: lower}
\p\left(\#\{v\in\Pi_{ \alpha_0}: \psi_v ( \alpha_0) \geq 0\}\leq N^{2 \mathcal E_0}\right) \leq \exp\{-c_0 (\log N)^2 \} .$$ Moreover, let $$\label{eqn: E_1}
\mathcal{E}_1(\gamma_1):= \mathcal{E}_0 + (\alpha-\alpha_0) - \frac{\gamma_1^2}{ \sigma_1^2 (\alpha-\alpha_0)} .$$ Lemma \[lem:recurrence\] is applied from $\alpha_0$ to $\alpha$. For any $\gamma_1$ with $\mathcal{E}_1(\gamma_1)>0$ and any $\mathcal{E}_1<\mathcal{E}_1(\gamma_1)$, there exists $c_1>0$ such that $$\p\left(\#\{v\in\Pi_{ \alpha}: \psi_v ( \alpha) \geq \gamma_1 \sqrt{\frac{2}{\pi}} \log N^2 \} \leq N^{2 \mathcal{E}_1} \right)\leq \exp\{-c_1 (\log N)^2 \}.$$ Therefore, Lemma \[lem:recurrence\] can be applied again from $\alpha$ to $1$ for any $\gamma_1$ with $\mathcal{E}_1(\gamma_1)>0$. Define similarly $
\mathcal{E}_2(\gamma_1,\gamma_2):= \mathcal{E}_1(\gamma_1) + (1-\alpha) - (\gamma_2-\gamma_1)^2/ \sigma_2^2 (1-\alpha).
$ Then, for any $\gamma_2$ with $\mathcal{E}_2(\gamma_1,\gamma_2)>0$, and $\mathcal{E}_2<\mathcal{E}_2(\gamma_1,\gamma_2)$, there exists $c_2>0$ such that $$\label{eqn: lower prob}
\p\left(\#\{v\in V_N^\delta: \psi_v \geq \gamma_2 \sqrt{\frac{2}{\pi}} \log N^2 \} \leq N^{2 \mathcal{E}_2} \right)\leq \exp\{-c_2 (\log N)^2 \} .$$ Observing that $0 \le \mathcal E_0 \le \alpha_0,$ Equation (\[eqn: lower-\]) follows from (\[eqn: lower prob\]) if it is proved that $\lim_{\alpha_0 \to 0}\mathcal{E}_2(\gamma_1,\gamma) = \mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma)$ for an appropriate choice of $\gamma_1$ (in particular such that $\mathcal{E}_1(\gamma_1)>0$). It is easily verified that, for a given $\gamma$, the quantity $\mathcal{E}_2(\gamma_1,\gamma)$ is maximized at $
\gamma_1^*= \gamma \sigma_1^2(\alpha-\alpha_0)/(V_{12}-\sigma_1^2\alpha_0).
$ Plugging these back in shows that $\mathcal{E}_1(\gamma_1^*)>0$ provided that $
\gamma< V_{12}/\sigma_1=:\gamma_{crit},
$ with $\alpha_0$ small enough (depending on $\gamma$). Furthermore, since $
\mathcal{E}_2(\gamma_1^*,\gamma)= \mathcal{E}_0 +(1-\alpha_0)- \gamma^2/(V_{12}-\sigma_1^2\alpha_0),
$ we obtain $
\lim_{\alpha_0 \to 0} \mathcal{E}_2(\gamma_1^*,\gamma) = \mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma),
$ which concludes the proof in the case $0<\gamma < \gamma_{crit}.$
If $\gamma_{crit} \leq \gamma < \gamma_{max}$, the condition $\mathcal{E}_1(\gamma_1^*)>0$ is violated as $\alpha_0$ goes to zero. However, the previous arguments can easily be adapted and we refer to subsection 3.1.2 in [@arguin-zindy] for more details.
We will prove that for any $\nu>0$ $$\p\left( f_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta) \le f^{(\alpha,\vec{\sigma})}(\beta) - \nu \right) \longrightarrow 0, \qquad N \to \infty.$$ Define $\gamma_i:=i \gamma_{\max}/M$ for $0 \le i \le M$ ($M$ will be chosen large enough). Notice that Proposition \[prop:perturbed-maximum\], Proposition \[prop:perturbed-highpoints\] and the symmetry property of centered Gaussian random variables imply that the event $$\begin{aligned}
\nonumber
B_{N,M,\nu}&:=&\bigcap_{i=0}^{M-1} \left\{ \vert \mathcal{H}_N^{\psi,\delta}(\gamma_i) \vert \ge N^{2 \mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma_{i})-\nu/3} \right\}
\bigcap \left\{ \max_{v \in V_N^\delta} \vert \psi_v \vert \le \sqrt{\frac{2}{\pi}} \gamma_{\max} \log N^2 \right\}\end{aligned}$$ satisfies $$\p( B_{N,M,\nu}) \longrightarrow 1, \qquad N \to \infty,$$ for all $M \in \n^*$ and all $\nu>0.$ Then, observe that on $B_{N,M,\nu}$ $$\begin{aligned}
Z_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta) &\ge& \sum_{v \in V_N^\delta} \ee^{\beta \psi_v} \ge
\sum_{i=1}^{M} ( \vert \mathcal{H}_N^{\psi,\delta}(\gamma_{i-1}) \vert- \vert \mathcal{H}_N^{\psi,\delta}(\gamma_i) \vert) N^{2 \sqrt{\frac{2}{\pi}} \gamma_{i-1} \beta}
\\
&=& \vert \mathcal{H}_N^{\psi,\delta}(0) \vert + \Big(2 \sqrt{\frac{2}{\pi}} \frac{\gamma_{\max}}{M} \beta \log N\Big) \int_1^{M} \vert \mathcal{H}_N^{\psi,\delta}(\frac{\lfloor u \rfloor \gamma_{\max}}{M}) \vert N^{2 \sqrt{\frac{2}{\pi}} \frac{u-1}{M}\gamma_{\max} \beta} d u
\\
&\ge& \Big(2\sqrt{\frac{2}{\pi}} \frac{\gamma_{\max}}{M} \beta \log N\Big) \sum_{i=1}^{M-1} \vert \mathcal{H}_N^{\psi,\delta}( \gamma_i) \vert N^{2 \sqrt{\frac{2}{\pi}} \gamma_{i-1} \beta},\end{aligned}$$ where we used Abel’s summation by parts formula. Writing $\gamma_{i-1}=\gamma_{i}-\gamma_{\max}/M$ and $P_\beta(\gamma):= \mathcal{E}^{(\alpha,\vec{\sigma})}(\gamma) + \sqrt{\frac{2}{\pi}} \beta \gamma,$ we get on $B_{N,M,\nu}$ $$\label{eqn: lower bound}
f_{N,\rho}^{(\alpha,\vec{\sigma})}(\beta) \ge
\frac{1}{\log N^2}\log \left(\sum_{i=1}^{M-1} N^{2 P_\beta(\gamma_{i})}\right) -\frac{\nu}{6}- \frac{ \sqrt{\frac{2}{\pi}} \gamma_{\max}\beta}{M}+o_N(1)\ .$$
Using the expression of $ \mathcal{E}^{(\alpha,\vec{\sigma})}$ in Proposition \[prop:perturbed-highpoints\] on the different intervals, it is easily checked by differentiation that $
\max_{\gamma \in \left[0,\gamma_{\max} \right]} P_{\beta}(\gamma)=f^{(\alpha,\vec{\sigma})}(\beta).
$ Furthermore, the continuity of $\gamma \mapsto P_{\beta}(\gamma)$ on $\left[0,\gamma_{\max} \right]$ yields $$\max_{1 \le i \le M-1} P_{\beta}(\gamma_i) \longrightarrow \max_{\gamma \in \left[0,\gamma_{\max} \right]} P_{\beta}(\gamma)=f^{(\alpha,\vec{\sigma})}(\beta), \qquad M \to \infty.$$ Therefore, choosing $M$ large enough and applying Laplace’s method in yield the result.
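The role played by Laplace's method here is elementary and can be checked numerically: for any bounded function $P$ evaluated on a grid of $M$ points, $(\log N^2)^{-1}\log\sum_i N^{2P(\gamma_i)}\to\max_i P(\gamma_i)$ as $N\to\infty$. A minimal sketch, with a toy concave function standing in for $P_\beta$ (whose explicit form depends on $\mathcal{E}^{(\alpha,\vec{\sigma})}$ and is not reproduced here):

```python
import numpy as np

# Toy stand-in for P_beta(gamma); any bounded function illustrates the point.
def P(gamma):
    return 1.0 - (gamma - 0.3) ** 2

M = 50
gammas = np.linspace(0.0, 1.0, M)          # grid gamma_i = i * gamma_max / M
for N in [10, 10**3, 10**6, 10**9]:
    lhs = np.log(np.sum(N ** (2.0 * P(gammas)))) / np.log(N ** 2)
    print(N, round(lhs, 4), "->", round(P(gammas).max(), 4))
# The grid error max_i P(gamma_i) - max_gamma P(gamma) is controlled by taking
# M large, which is precisely the role of M in the proof above.
```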
Appendix
========
The conditional expectation of the GFF has nice features such as the Markov property, see e.g. Theorems 1.2.1 and 1.2.2 in [@dynkin] for a general statement on Markov fields constructed from symmetric Markov processes.
\[lem: GFF\] Let $B\subset A$ be subsets of $\Z^2$. Let $(\phi_v,v\in A)$ be a GFF on $A$. Then $$\E[\phi_v | \F_{B^c}]=\E[\phi_v | \F_{\partial B}], \qquad \forall v\in B,$$ and $$(\phi_v -\E[\phi_v | \F_{\partial B}], v\in B)$$ has the law of a GFF on $B$. Moreover, if $P_v$ is the law of a simple random walk starting at $v$ and $\tau_B$ is the first exit time of $B$, we have $$\E[\phi_v | \F_{\partial B}]=\sum_{u\in \partial B} P_v(S_{\tau_B}=u) ~\phi_u\ .$$
The following estimate on the Green function can be found as Lemma 2.2 in [@ding] and is a combination of Proposition 4.6.2 and Theorem 4.4.4 in [@lawler-limic].
\[lem: green estimate\] There exists a function $a: \z^2 \times \z^2 \mapsto [0,\infty)$ of the form $$a(v,v')= \frac{2}{\pi}\log \|v-v'\| +\frac{2\gamma_0 \log 8}{\pi} + O(\|v-v'\|^{-2})$$ (where $\gamma_0$ denotes Euler’s constant) such that $a(v,v)=0$ and $$G_{A}(v,v')=E_{v}\left[a(v', S_{\tau_A})\right] - a(v,v')\ .$$
Slepian’s comparison lemma can be found in [@ledoux-talagrand] and in [@kahane] for the result on log-partition function.
\[lem: slepian\] Let $(X_1,\cdots, X_N)$ and $(Y_1,\cdots, Y_N)$ be two centered Gaussian vectors in $N$ variables such that $$\E[X_i^2]= \E[Y_i^2]\ \forall i, \qquad \E[X_iX_j]\geq \E[Y_iY_j] \ \forall i\neq j\ .$$ Then for all $\beta>0$ $$\E\left[\log \sum_{i=1}^N e^{\beta X_i}\right]\leq \E\left[\log \sum_{i=1}^N e^{\beta Y_i}\right]\ ,$$ and for all $\lambda>0$, $$\p\left(\max_{i=1,\dots,N} X_i > \lambda\right) \leq \p\left(\max_{i=1,\dots,N} Y_i > \lambda\right) \ .$$
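Both statements are easy to probe by a quick simulation (not part of any proof): take $X$ equicorrelated with off-diagonal covariance $1/2$ and $Y$ with independent entries, so that the variances match and $\E[X_iX_j]\geq \E[Y_iY_j]$ for $i\neq j$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, trials = 50, 1.0, 20_000

# X: equicorrelated Gaussians (variance 1, covariance 1/2); Y: i.i.d. standard Gaussians.
Z0 = rng.standard_normal(trials)
X = np.sqrt(0.5) * Z0[:, None] + np.sqrt(0.5) * rng.standard_normal((trials, n))
Y = rng.standard_normal((trials, n))

log_Z = lambda V: np.log(np.exp(beta * V).sum(axis=1)).mean()
print("E log Z :", round(log_Z(X), 3), "(X)  <=", round(log_Z(Y), 3), "(Y)")

lam = 3.0
tail = lambda V: (V.max(axis=1) > lam).mean()
print("P(max>3):", round(tail(X), 3), "(X)  <=", round(tail(Y), 3), "(Y)")
```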
[**Acknowledgements:**]{} The authors would like to thank the Centre International de Rencontres Mathématiques in Luminy for hospitality and financial support during part of this work.
[99]{}
Aizenman, M. and Contucci, P. (1998). On the stability of the quenched state in mean-field spin-glass models. [*J. Statist. Phys.*]{} [**92**]{}, 765–783.
Arguin, L.-P. (2007). A dynamical characterization of Poisson-Dirichlet distributions. [*Electron. Comm. Probab.*]{} [**12**]{}, 283–290.
Arguin, L.-P. and Chatterjee, S. (2013). Random Overlap Structures: Properties and Applications to Spin Glasses. [*Probab. Theory Relat. Fields*]{} [**156**]{}, 375–413.
Arguin, L.-P. and Zindy, O. (2012). Poisson-Dirichlet Statistics for the extremes of a log-correlated Gaussian field. To appear in [*Ann. Appl. Probab.*]{} [Arxiv:1203.4216v1]{}.
Bacry, E. and Muzy, J.-F. (2003). Log-infinitely divisible multifractal processes. [*Comm. Math. Phys.*]{} [**236**]{}, 449–475.
Biskup, M. and Louidor, O. (2013). Extreme local extrema of two-dimensional discrete Gaussian free field. Preprint. [Arxiv:1306.2602]{}.
Bolthausen, E., Deuschel, J.-D. and Giacomin, G. (2001). Entropic repulsion and the maximum of the two-dimensional harmonic crystal. [*Ann. Probab.*]{} [**29**]{}, 1670–1692.
Bolthausen, E., Deuschel, J.-D. and Zeitouni, O. (2011). Recursions and tightness for the maximum of the discrete, two dimensional Gaussian Free Field. [*Elec. Comm. Probab.*]{} [**16**]{}, 114–119.
Bolthausen, E. and Sznitman, A.-S. (2002). [*Ten Lectures on Random Media*]{}. Birkhäuser.
Bovier, A. and Kurkova, I. (2004). Derrida’s generalised random energy models. I. Models with finitely many hierarchies. [*Ann. Inst. H. Poincaré Probab. Statist.*]{} [**40**]{}, 439–480.
Bovier, A. and Kurkova, I. (2004). Derrida’s generalised random energy models. II. Models with continuous hierarchies. [*Ann. Inst. H. Poincaré Probab. Statist.*]{} [**40**]{}, 481–495.
Bramson, M., Ding, J. and Zeitouni, O. (2013). Convergence in law of the maximum of the two-dimensional discrete Gaussian free field. Preprint. [ArXiv:1301.6669v2]{}.
Bramson, M. and Zeitouni, O. (2012). Tightness of the recentered maximum of the two-dimensional discrete Gaussian Free Field. [*Comm. Pure Appl. Math.*]{} [**65**]{}, 1–20.
Carpentier, D. and Le Doussal, P. (2001). Glass transition for a particle in a random potential, front selection in nonlinear renormalization group, and entropic phenomena in Liouville and Sinh-Gordon models. [*Phys. Rev. E*]{} [**63**]{}, 026110.
Daviaud, O. (2006). Extremes of the discrete two-dimensional Gaussian free field. [*Ann. Probab.*]{} [**34**]{}, 962–986.
Derrida, B. (1985). A generalisation of the random energy model that includes correlations between the energies. [*J. Phys. Lett.*]{} [**46**]{}, 401–407.
Derrida, B. and Spohn, H. (1988). Polymers on disordered trees, spin glasses, and traveling waves. [*J. Statist. Phys.*]{} [**51**]{}, 817–840.
Ding, J. (2011). Exponential and double exponential tails for maximum of two-dimensional discrete Gaussian free field. Preprint. [ArXiv:1105.5833]{}.
Ding, J. and Zeitouni, O. (2012). Extreme values for two-dimensional discrete Gaussian free field . Preprint. [ArXiv:1206.0346]{}.
Duplantier, B., Rhodes, R., Sheffield, S. and Vargas, V. (2012). Critical Gaussian Multiplicative Chaos: Convergence of the Derivative Martingale. Preprint. [Arxiv:1206.1671]{}.
Dynkin, E. (1980) Markov Processes and Random Fields. [*Bull. Amer. Math. Soc.*]{} [**3**]{} 975–999.
Fyodorov, Y. V. and Bouchaud, J.-P. (2008). Freezing and extreme value statistics in a Random Energy Model with logarithmically correlated potential. [*J. Phys. A: Math. Theor.*]{} [**41**]{}, 372001 (12pp).
Fyodorov, Y. V. and Bouchaud, J.-P. (2008). Statistical mechanics of a single particle in a multiscale random potential: Parisi landscapes in finite-dimensional Euclidean spaces [*J. Phys. A: Math. Theor.*]{} [**41**]{}, 324009.
Fyodorov, Y. V., Le Doussal, P. and Rosso, A. (2009). Statistical mechanics of logarithmic REM: duality, freezing and extreme value statistics of $1/f$ noises generated by Gaussian free fields. [*J. Stat. Mech.*]{} P10005 (32pp).
Ghirlanda, S. and Guerra, F. (1998). General Properties of overlap probability distributions in disordered spin systems. [*J.Phys. A*]{} [**31**]{}, 9149–9155.
Kahane, J.-P. (1985). Sur le chaos multiplicatif. [*Ann. Sci. Math. Québec*]{} [**9**]{}, 105–150.
Ledoux, M. and Talagrand, M. (1991). [*Probability in Banach Spaces*]{}. Springer-Verlag.
Lawler, G.F. and Limic, V. (2010). [*Random Walk: a modern introduction.*]{} Vol. 123 Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, 376pp.
Panchenko, D. (2010). The Ghirlanda-Guerra identities for mixed p-spin model. [*C.R. Acad. Sci. Paris*]{} Ser. I [**348**]{}, 189–192.
Rhodes, R. and Vargas, V. (2013). Gaussian multiplicative chaos and applications: a review. Preprint. [Arxiv:1305.6221]{}.
Talagrand, M. (2003). [*Spin glasses: a challenge for mathematicians. Cavity and mean field models.*]{} Springer-Verlag.
---
abstract: |
A phenomenological approach is presented that allows one to model, and thereby interpret, photoemission spectra of strongly correlated electron systems. A simple analytical formula for the self-energy is proposed. This self-energy describes both coherent and incoherent parts of the spectrum (quasiparticle and Hubbard peaks, respectively). Free parameters in the expression are determined by fitting the density of states to experimental photoemission data. An explicit fitting is presented for the La$_{1-x}$Sr$_x$TiO$_3$ system with $0.08 \le x \le
0.38$. In general, our phenomenological approach provides information on the effective mass, the Hubbard interaction, and the spectral weight distribution in different parts of the spectrum. Limitations of this approach are also discussed.
address: |
(a) Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg, D-86135 Augsburg, Germany\
(b) Institute of Theoretical Physics, Warsaw University, ul. Hoża 69, 00-681 Warszawa, Poland\
(c) Experimental Physics II, Institute of Physics, University of Augsburg, D-86135 Augsburg, Germany
author:
- 'Krzysztof Byczuk,$^{a,b}$ Ralf Bulla,$^a$ Ralph Claessen,$^c$ and Dieter Vollhardt$^a$'
title: Phenomenological Modeling of Photoemission Spectra in Strongly Correlated Electron Systems
---
Introduction
============
Photoemission experiments provide important information about the electronic single-particle excitation spectrum of solids.[@photo] For weakly correlated materials this is essentially given by the density of states (DOS) obtained by, e.g., density functional theory in combination with the local density approximation (LDA).[@jones89] In many cases the agreement between LDA and experiment turns out to be very good. However, there is a class of materials where the discrepancy between the measured and calculated spectra is significant.[@fujimori92; @robery93; @inue95; @morikawa95; @morikawa96; @kim98; @yoshida99; @schrame00; @kim01] For these strongly correlated electron systems there is a clear demand for new theoretical and computational approaches. The recently developed LDA+DMFT method, a combination of the LDA and the dynamical mean field theory (DMFT), has proved to be very successful in this respect.[@anisimov97; @lichtenstein98; @zolfl00; @nekrasov00; @held0; @held00; @held01] The LDA+DMFT method supplements the LDA by local correlations between $d$- or $f$-electrons.[@held0; @held01] In the simplest case, namely, in the absence of long-range order and when the correlated bands at the Fermi level are sufficiently separated from other bands,[@held0; @held01] the DOS of a correlated system is well represented by the integral (Hilbert transform) $$\rho_{\rm LDA+DMFT}(\omega) = -\frac{1}{\pi}{\rm Im} \int d \omega'
\frac{\rho_{\rm LDA}(\omega')}{\omega-\omega'-\Sigma_{\rm DMFT}(\omega)+i0^+},
\label{1}$$ where $\rho_{\rm LDA}(\omega) $ is the DOS from the LDA calculation, and $\Sigma_{\rm DMFT}(\omega) $ is the local self-energy calculated self-consistently within the DMFT scheme which includes correlation effects missing in the LDA approach.[@anisimov97; @lichtenstein98; @zolfl00; @nekrasov00; @held0; @held00; @held01] Non-local contributions to the self-energy cannot yet be implemented in this scheme. This will become possible in extensions of the DMFT, e.g., in the Dynamical Cluster Approximation (DCA) and related computational schemes.[@dca]
The aim of this paper is to describe another approach, phenomenological in nature, to model photoemission spectra of strongly correlated electrons. The motivation is the following: the LDA+DMFT method is microscopic but requires an extensive numerical effort to calculate $\rho_{\rm LDA}(\omega) $ and $\Sigma_{\rm DMFT}(\omega) $. On the other hand, an analysis of various models of strongly correlated electrons within the DMFT has shown that certain features of the self-energy do not depend on the details of the model. Fermi liquid behaviour is seen in, e.g., Numerical Renormalization Group (NRG) calculations for the Hubbard model both at and away from half-filling (see Fig. (\[fig2\]) below) and for the Periodic Anderson Model in the heavy-fermion regime.[@bulla99; @pbj] In these systems, the imaginary part of the self-energy consists of the $\omega^2$-dependence for $\omega \to 0$ and $T\to 0$ and two pronounced peaks at finite $\omega$. On increasing the temperature, Fermi liquid behaviour can be rapidly destroyed, in particular close to the Mott transition,[@bcv] or in general for systems with a very high effective mass. The imaginary part of the self-energy then goes over to a [*single*]{} and very broad peak centered approximately at the Fermi level (see, e.g., Fig. 5 in Ref.\[22\]). This model-independence of the self-energy suggests the use of a universal form for $\Sigma(\omega)$ which depends only on a small number of phenomenological parameters. This $\Sigma(\omega)$ replaces $\Sigma_{\rm DMFT}(\omega) $ in Eq. (\[1\]).
Although the proposed scheme is phenomenological (the parameters in the self-energy being determined by fitting to the experimental data) we believe it to be useful for the qualitative interpretation and understanding of the experimental results. The phenomenological self-energy $\Sigma(\omega)$ obtained in this way can be used to deduce other quantities for the specific material, such as a linear specific heat coefficient and the dynamical conductivity (under the assumption that vertex corrections are negligible). This approach would then serve as a unifying phenomenological description of a variety of experimental results.
Conceptually, such an approach is not new. It was used previously to fit and interpret, for example, the integrated photoemission data for Ca$_{1-x}$Sr$_x$VO$_3$,[@inue95] and the angular resolved photoemission data in prototype Fermi liquid metals[@newRef1; @newRef2] and high-temperature superconductors.[@htc] However, only the quasiparticle peak was fitted in these approaches. Here we provide an analytical expression for the self-energy which is appropriate for fitting the whole spectrum of correlated $d$- or $f$-electrons, where the Hubbard subbands and the quasiparticle resonance are present simultaneously. Our approach is based on a sum of Lorentz functions. Matho has proposed another route based on the multi-pole expansion of the phenomenological self-energy.[@matho97] It turns out that these two approaches are mathematically equivalent. In our approach it is possible to present a simple physical motivation for the form of the self-energy.
Our phenomenological approach encounters certain difficulties which should be mentioned here. The most serious problem is connected with the description of multi-band systems. The strong electronic correlations originate from the localized nature of $f$- or $d$-orbitals. Hence, several bands might cross the Fermi level or be very close to it even when they are split by, e.g., a crystal field. In V$_2$O$_3$, for instance, the splitting between the $t_{2g}$ and $e_g$ bands is rather small. In such a case, each band which lies in the vicinity of the Fermi level would require a separate self-energy, which makes the number of fitting parameters twice or three times larger. Since the photoemission results are not orbitally resolved, an unambiguous fitting cannot be guaranteed in these cases. Without additional experimental input, a phenomenological approach for these cases is not adequate.
In our paper we therefore concentrate on the experimental data for La$_{1-x}$Sr$_x$TiO$_3$, a system with degenerate $t_{2g}$ bands (see Sec. 4). Before that — in Secs. 2 and 3 — the phenomenological expression for $\Sigma(\omega)$ is introduced. The results of our paper are summarized in Sec. 5.
Self-Energy
===========
We start with a heuristic derivation of the retarded self-energy for strongly correlated electrons (e.g., $d$-electrons) in the metallic phase which form a Fermi liquid state at low energies and temperatures. The DOS $\rho(\omega)$, calculated with this self-energy should consist of three parts: two wide incoherent parts (upper and lower Hubbard bands) and a coherent peak at or close to the Fermi level $\mu$.
The construction of a suitable self-energy expression is guided by the following idea. Let us start with a model for the spectral function $A_{\rm mod}(\omega)$ which is a sum of three Lorentz curves $$A_{\rm mod}(\omega) = \frac{Q}{\pi}\frac{\gamma}{(\omega-\omega_0)^2+\gamma^2}+
\frac{(1-Q)}{\pi}
\left[
\frac{q}{2}\frac{\Gamma}{(\omega-\frac{I}{2})^2+\Gamma^2}
+
\left(1-\frac{q}{2}\right)\frac{\Gamma}{(\omega+\frac{I}{2})^2+\Gamma^2}
\right].
\label{2}$$ One peak (the quasiparticle peak) is centered at $\omega=\omega_0$ with spectral weight $Q\geq0$ and width $\gamma\geq0$. The other two peaks (upper and lower Hubbard peaks) are centered at $\omega=\pm I/2$ and their widths are assumed to be both equal to $\Gamma\geq0$ (see Fig. (\[fig1\])). The total weight of these two peaks is $1-Q$ with the relative weights $q/2\geq 0$ and $1-q/2\geq 0$ respectively. $A_{\rm mod}(\omega)$ is normalized to unity. Apart from assuming the same width for both upper and lower Hubbard peaks, Eq. (\[2\]) is the most general sum of three Lorentz peaks. Since we have not yet specified the position of the chemical potential $\mu$, Eq. (\[2\]) describes both symmetric and asymmetric cases.
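For orientation, the model spectral function of Eq. (\[2\]) is straightforward to evaluate numerically; a minimal sketch, with arbitrary parameter values chosen only for illustration:

```python
import numpy as np

def lorentz(w, w0, width):
    """Unit-weight Lorentzian centered at w0."""
    return (width / np.pi) / ((w - w0) ** 2 + width ** 2)

def A_mod(w, Q, w0, gamma, q, I, Gamma):
    """Three-peak model spectral function: quasiparticle peak plus two Hubbard peaks."""
    return (Q * lorentz(w, w0, gamma)
            + (1 - Q) * (q / 2 * lorentz(w, I / 2, Gamma)
                         + (1 - q / 2) * lorentz(w, -I / 2, Gamma)))

w = np.linspace(-6, 6, 4001)
A = A_mod(w, Q=0.3, w0=0.0, gamma=0.1, q=1.0, I=4.0, Gamma=0.5)
print("norm =", (A * (w[1] - w[0])).sum())   # close to 1; Lorentzian tails are cut by the grid
```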
![Schematic plot of the three peaks of the model spectral function $A_{\rm mod}(\omega)$. []{data-label="fig1"}](fig1.eps){width="12.5cm"}
There is, of course, some arbitrariness in the choice of Eq. (\[2\]) for $A_{\rm mod}(\omega)$. Lorentzians are used here to obtain simple analytical expressions. One can try to use, for example, Gauss or semi-elliptic forms of the DOS as well. However, in these cases $\Sigma(\omega)$ either cannot be expressed analytically or is not a smooth function of $\omega$.
The retarded local Green function $g_{\rm mod}(\omega)$ corresponding to the spectral function (\[2\]) is $$g_{\rm mod}(\omega)=
\frac{Q}{\omega-\omega_0+i\gamma}+
(1-Q) \left[
\frac{q}{2}\frac{1}{\omega-\frac{I}{2}+i\Gamma} +
\left(1-\frac{q}{2}\right)\frac{1}{\omega+\frac{I}{2}+i\Gamma}
\right].
\label{3}$$ We also introduce a Green function $$g_{\rm mod}^0(\omega)=\frac{1}{\omega-\omega_1+i\Delta},
\label{G2}$$ which corresponds to a spectral function $A_{\rm mod}^0(\omega)$ with a single peak centered at $\omega=\omega_1$ and with the width $\Delta\geq 0$. Note that all the above quantities (i.e. those with an index ‘mod’) do not correspond to any physical quantity but are introduced for the construction of a suitable self-energy only.
Using the Green functions (\[3\]) and (\[G2\]) together with the Dyson equation for the self-energy $\Sigma_{\rm mod}(\omega) = g_{\rm mod}^0(\omega)^{-1}-
g_{\rm mod}(\omega)^{-1}$ we find $$\Sigma_{\rm mod}(\omega)=\omega-\omega_1 +i \Delta -
\left[
\frac{Q}{\omega-\omega_0+i\gamma} + (1-Q)
\frac{\omega+i \Gamma - (1-q) \frac{I}{2}}{(\omega+i\Gamma)^2-\left(
\frac{I}{2}\right)^2}
\right]^{-1}.
\label{5}$$ This self-energy contains 8 parameters. The number of parameters may be reduced by imposing additional conditions which are discussed below.
In order to preserve the Fermi liquid properties at low energy we have to supplement the self-energy (\[5\]) by the condition ${\rm Im} \Sigma(\omega=\mu)=0$.[@remark1] Then, as $\omega\rightarrow 0$, ${\rm Re} \Sigma (\omega) \sim -\omega$, and ${\rm Im} \Sigma (\omega) \sim -\omega^2$. In the high energy limit $\omega\rightarrow \infty$, ${\rm Re} \Sigma (\omega) \sim 1/\omega$, but ${\rm Im} \Sigma (\omega) \sim a -1/\omega^2$ with a constant $a\geq 0$, which means that the imaginary part of the self-energy $\Sigma(\omega)$ changes sign. We have checked that this artefact becomes important only if $I$ is much smaller than $\gamma$. However, in this weakly correlated limit $\rho_{\rm LDA}$ usually reproduces the experimental data reliably and the corrections due to $\Sigma(\omega)$ are not necessary. In the strongly correlated limit the change of the sign in ${\rm Im}
\Sigma(\omega)$ appears at high energies. We have therefore introduced a cut-off, setting ${\rm Im}\, \Sigma(\omega)=0$ whenever it becomes positive.
The self-energy (\[5\]) is temperature independent. As noted in the introduction, the self-energy develops a peak [*at*]{} the chemical potential $\mu$ for finite temperature, in particular close to the Mott transition. This effect is described phenomenologically by introducing a scattering part $$\Sigma_{\rm scatt}(\omega)=\frac{s}{\omega-\mu+i\gamma_s},
\label{5b}$$ with two fitting parameters $s$ and $\gamma_s$. The scattering part is not used in this paper but it might be important in systems far away from the Fermi liquid regime.
Hence, the phenomenological self-energy takes the form $$\Sigma_{\rm fit}(\omega)=\Sigma_{\rm mod}(\omega) + \Sigma_{\rm scatt}(\omega)
\label{5c} \ .$$
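A minimal numerical sketch of this construction is given below: the three-peak Green function, the single-peak Green function, the Dyson equation, the high-energy cutoff on ${\rm Im}\,\Sigma(\omega)$, and the optional scattering part. All parameter values are purely illustrative, and neither the Fermi liquid condition nor any filling constraint is imposed here.

```python
import numpy as np

def g_mod(w, Q, w0, gamma, q, I, Gamma):
    """Three-peak retarded local Green function corresponding to A_mod."""
    return (Q / (w - w0 + 1j * gamma)
            + (1 - Q) * (q / 2 / (w - I / 2 + 1j * Gamma)
                         + (1 - q / 2) / (w + I / 2 + 1j * Gamma)))

def sigma_fit(w, Q, w0, gamma, q, I, Gamma, w1, Delta, s=0.0, gamma_s=1.0, mu=0.0):
    """Phenomenological self-energy: Dyson equation Sigma = 1/g0 - 1/g, cutoff, scattering."""
    g0 = 1.0 / (w - w1 + 1j * Delta)                          # single-peak Green function
    sig = 1.0 / g0 - 1.0 / g_mod(w, Q, w0, gamma, q, I, Gamma)
    sig = np.where(sig.imag > 0, sig.real + 0j, sig)          # cutoff: enforce Im(Sigma) <= 0
    return sig + s / (w - mu + 1j * gamma_s)                  # scattering part (zero if s = 0)

w = np.linspace(-8, 8, 4001)
Sigma = sigma_fit(w, Q=0.3, w0=0.0, gamma=0.1, q=1.0, I=4.0, Gamma=0.5, w1=0.0, Delta=0.5)
```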
Before applying Eq. (\[5c\]) to model experimental data, we check whether this form of the self-energy can reproduce the self-energies obtained numerically from the DMFT equations of the Hubbard model $$H = -t\sum_{<ij>\sigma} (c^\dagger_{i\sigma} c_{j\sigma} +
c^\dagger_{j\sigma} c_{i\sigma}) +
U\sum_i c^\dagger_{i\uparrow} c_{i\uparrow}
c^\dagger_{i\downarrow} c_{i\downarrow}, \
\label{eq:H}$$ where $t$ is the hopping matrix element between nearest neighbour sites and $U$ is the local interaction energy for the electrons with antiparallel spins $\sigma$. Figure \[fig2\] shows the result of the fit to the spectral function, calculated by NRG with the microscopic parameters $U=4$, $\mu=-1.4$ and $T=0$. We imposed the Fermi liquid condition $\Sigma_{\rm fit}(\omega=\mu)=0$. The scattering part $\Sigma_{\rm scatt} (\omega)$ was set to zero. The parameters determined from the fit-procedure are $\omega_0=0.02$, $\delta=0.005$, $\gamma=0.005$, $\Gamma=0.51$, $Q=0.3$, $q=0.72$, and $I=4.5$. We used the same bare DOS as in the NRG calculation, i.e. a semielliptic DOS with the width $W=2$. This width sets the energy units in this fitting. The comparison shows both the possibilities and limitations of our phenomenological approach. Although the three-peak structure and the overall distribution of the spectral weight are described correctly, there are significant deviations regarding the form of the peaks. The main reason for this is that the peaks in the numerical data are not Lorentzian (see, e.g., the discussion of the form of the Kondo resonance in the single impurity Anderson model in \[28\]). The fit-procedure therefore cannot recover the dip at $\omega\approx 1.5$, and compensates this by underestimating the width of the upper Hubbard peak. Also, the band filling determined from the fitted spectral function is larger by about $7\%$ as compared to the NRG result.
![Upper panel: The fitted spectral function using the self-energy (\[5\]) (solid line) and the numerical spectra from NRG (dashed line). Lower panel: The phenomenological self-energy corresponding to the fitting in the upper panel (solid line) and the self-energy from NRG (dashed line). Scales on the horizontal axis are the same. []{data-label="fig2"}](Figure_NRG_final.eps){width="12.5cm"}
The same holds for the structures obtained for the self-energy as is seen in the lower panel of Fig. (\[fig2\]). These are not, according to the NRG result, given by Lorentz peaks. Nevertheless, the general structure, i.e., the Fermi-liquid behavior for small frequencies and the two peaks at higher frequencies ($\omega-\mu\approx-0.5\;\mbox{and}\;2.8$) is reproduced correctly. The relative difference in the weight of these latter peaks is due to the particle-hole asymmetry in the parameters used for this particular calculation.
Modeling Photoemission Spectra
==============================
The phenomenological form for the self-energy Eq.(\[5c\]) discussed in the previous section can now be inserted in the Hilbert transformation for the DOS $$\rho_{\rm fit}(\omega) = -\frac{1}{\pi} {\rm Im} \int d \omega'
\frac{\rho_{\rm LDA}(\omega')}{\omega-\omega'-\Sigma_{\rm fit}(\omega)+i0^+}.
\label{6}$$ The direct ($S_{\rm direct}$) and inverse ($S_{\rm inverse}$) photoemission intensities, within a constant transfer matrix approximation, are given by $$S_{\rm direct}(\omega) = S_0 \int_{-\infty}^{\infty}d \omega'
R_{\sigma}(\omega-\omega') f\left( \frac{\omega'-\mu}{kT}\right) \rho_{\rm fit}(\omega'),$$ $$S_{\rm inverse}(\omega) = S_0' \int_{-\infty}^{\infty}d \omega'
R_{\sigma'}(\omega-\omega')\left[1- f\left( \frac{\omega'-\mu}{kT}\right)
\right]
\rho_{\rm fit}(\omega'),$$ where $f[(\omega-\mu)/k_{\rm B}T]$ is the Fermi-Dirac function with the chemical potential $\mu$ at the temperature $k_{\rm B}T$ (in energy units) and $R_{\sigma}(\omega)=\exp(-\omega^2/2\sigma^2)
/\sqrt{2\pi}\sigma$ is the apparatus function with the resolution $\sigma$. Note that the resolution in direct photoemission experiments is typically about one or two orders of magnitude better than in inverse photoemission experiments. $S_0$ and $S_0'$ are constant prefactors.
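A minimal numerical sketch of these last three expressions is given below. It reuses the function sigma_fit from the sketch above, replaces $\rho_{\rm LDA}$ by an illustrative semielliptic bare DOS, drops the constant prefactor $S_0$, and evaluates the integrals by a crude rectangle rule; none of these choices is essential.

```python
import numpy as np

def rho_bare(w, W=2.0):
    """Illustrative semielliptic bare DOS of half-width W, normalized to one."""
    return np.where(np.abs(w) < W,
                    2.0 / (np.pi * W ** 2) * np.sqrt(np.clip(W ** 2 - w ** 2, 0, None)), 0.0)

def rho_fit(w, sigma_w, wp, rho_p):
    """Hilbert transform of the bare DOS with the self-energy sigma_w evaluated on the grid w."""
    dwp = wp[1] - wp[0]
    integrand = rho_p[None, :] / (w[:, None] - wp[None, :] - sigma_w[:, None] + 1e-12j)
    return -integrand.sum(axis=1).imag * dwp / np.pi

def S_direct(w, rho, mu=0.0, kT=0.002, res=0.035):
    """Direct photoemission intensity (up to S_0): Fermi cutoff plus Gaussian broadening."""
    dw = w[1] - w[0]
    occupied = rho * 0.5 * (1.0 - np.tanh((w - mu) / (2.0 * kT)))   # Fermi function via tanh
    kernel = np.exp(-(w[:, None] - w[None, :]) ** 2 / (2 * res ** 2)) / (np.sqrt(2 * np.pi) * res)
    return kernel @ occupied * dw

w = np.linspace(-8, 8, 1601)
Sigma = sigma_fit(w, Q=0.3, w0=0.0, gamma=0.1, q=1.0, I=4.0, Gamma=0.5, w1=0.0, Delta=0.5)
rho = rho_fit(w, Sigma, w, rho_bare(w))
spectrum = S_direct(w, rho)
```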
With good quality data for the direct and the inverse photoemission spectra on the same sample under the same conditions one can now determine the phenomenological parameters in the self-energy (\[5c\]) and determine, for example, the effective mass, the magnitude of the Hubbard interaction and the full frequency dependence of the self-energy.
Example of Fitting
==================
We now turn to the phenomenological modeling of experimental photoemission data, as exemplified in the case of La$_{1-x}$Sr$_x$TiO$_3$. In this compound, the $3d^1$ electrons occupy degenerate $t_{2g}$ orbitals for which the crystal and Jahn-Teller splittings are very small. Moreover, the $t_{2g}$ band is well separated from the $e_g$ and $p$-bands. Our phenomenological modeling can therefore be restricted to a single degenerate band.
The available direct photoemission spectra[@yoshida99] show the typical features of a strongly correlated metal, i.e., a quasiparticle peak and a lower Hubbard band. At low temperature La$_{1-x}$Sr$_x$TiO$_3$ is an antiferromagnetic insulator for $x=0$ and a band insulator with an empty $d$-band for $x=1$. Consequently, the filling of the three-fold degenerate $d$-band decreases from $n=1$ to $n=0$ in going from $x=0$ to $x=1$. The antiferromagnetic insulator is stable up to $x \approx 0.05$ for $T<T_N$ ($T_N = 112\; K$ for $x=0$), and for $0.05<x<0.08$ an antiferromagnetic metallic phase appears with decreasing $T_N$ with increasing $x$.
In the photoemission measurement, the quasiparticle peak is clearly visible in Fig. (\[fig3\]) for all paramagnetic samples with $x=0.08,\;0.18,\;0.28,\;0.35$.[@yoshida99] This coherent peak is suppressed in the antiferromagnetic metallic phase ($x=0.06$), and vanishes in the insulating regime ($x=0.04$). The wide incoherent peak — the lower Hubbard band — is present for all $x$-values. These features, in particular the lower Hubbard band, cannot be explained within the LDA approach.[@lda]
![ Upper panel: Four curves from the photoemission experiments[@yoshida99] for La$_{1-x}$Sr$_x$TiO$_3$. Black dots show digitized points and small wiggles are due to digitization errors. The inset shows the DOS from LDA calculations for this system.[@lda] Lower panel: Example of fitting the self-energy (\[5c\]) (solid line) and the self-energy (\[9\]) (dot-dashed line) to the experimental result (dashed thick line) with $x=0.08$. []{data-label="fig3"}](Figure_experiment_final.eps){width="12.5cm"}
Here we have used the experimental data[@yoshida99] to determine the phenomenological parameters in the self-energy (\[5c\]). However, since we only know the occupied part of the spectrum it is impossible to determine unambiguously the absolute value of $I$, corresponding to the distance between the lower (occupied) and the upper (unoccupied) Hubbard bands. Only a relative value $I/2+\mu$ can be determined. In order to make the problem tractable we have to reduce the number of parameters.
To this end we set $I=5\; eV$ since this is the value found in other theoretical studies.[@nekrasov00] Also we assume that the $t_{2g}$ band is three-fold degenerate and that $x=1$ corresponds to $1/3$ filling. With decreasing $x$ the filling of this band is lowered and is found to be $n=(1-x)/3$, after normalizing the DOS to unity. This filling for a given $x$ is used as another constraint on the parameters in our self-energy (\[5c\]). In other words, the parameters are adjusted such as to obtain the correct filling $n=(1-x)/3$ of the $d$-band. We do not use the Fermi liquid constraint because the experiment was performed at finite temperature and close to the Mott transition, so that deviations from ${\rm Im}\Sigma(\omega=0)=0$ are expected to be significant.
Within these assumptions the best fit is obtained by minimizing the mean square deviation between the theoretical and experimental data.[@yoshida99] The apparatus resolution function is taken to be a Gauss function with the width $\sigma=0.035$ eV, and the temperature is $T=20$ K. The bare DOS is taken to be that of LaTiO$_3$ from Ref.\[29\]. One example is shown in the lower panel of Fig. (\[fig3\]).
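Schematically, the fitting procedure amounts to a least-squares problem; a minimal sketch is given below, reusing sigma_fit, rho_fit, rho_bare, and S_direct from the previous sketches. The data arrays are placeholders rather than the digitized spectra, the bare DOS is not that of LaTiO$_3$, and the filling constraint $n=(1-x)/3$ is omitted for brevity, so the sketch only indicates the structure of the calculation.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder "experimental" curve; the real input would be the digitized spectra.
w_exp = np.linspace(-3.0, 0.2, 200)
s_exp = np.exp(-((w_exp + 1.5) / 0.6) ** 2) + 0.5 * np.exp(-(w_exp / 0.2) ** 2)

w_int = np.linspace(-8.0, 8.0, 801)        # integration grid for the Hilbert transform

def residuals(p):
    Q, w0, gamma, q, Gamma, w1, Delta = p
    Sigma = sigma_fit(w_exp, Q, w0, gamma, q, 5.0, Gamma, w1, Delta)   # I fixed to 5 eV
    rho = rho_fit(w_exp, Sigma, w_int, rho_bare(w_int))
    model = S_direct(w_exp, rho, mu=0.0, kT=20 * 8.617e-5, res=0.035)
    return model / model.max() - s_exp / s_exp.max()

p0 = [0.3, 0.0, 0.1, 1.0, 0.5, 0.0, 0.5]
lower = [0.0, -1.0, 1e-3, 0.0, 1e-3, -1.0, 1e-3]
upper = [1.0,  1.0, 2.0,  2.0, 5.0,   1.0, 5.0]
fit = least_squares(residuals, p0, bounds=(lower, upper))
print(fit.x)
```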
For comparison, we have also fitted the coherent part of the spectrum using the phenomenological self-energy[@inue95] $$\Sigma_{\rm QP}(\omega)=g \frac{a \omega}{\omega+ia}\cdot \frac{b}{\omega+ib}.
\label{9}$$ This form of $\Sigma(\omega)$ is useful to fit the quasiparticle peak but does not describe the Hubbard bands. In situations with no clear separation of these two structures, a fit using Eq.(\[9\]) obviously involves some arbitrariness. This is in contrast to our fit formula for the self-energy which does not require the two structures to be well separated.
![Upper panel: The spectral functions for different $x$ obtained from fitting to experimental data, as described in the text. Lower panel: Imaginary part of the self-energy for different $x$. The inset shows the behavior of $1/Z$, extracted from the real part of the self-energy, as a function of $x$. []{data-label="fig4"}](Figure_fits_final.eps){width="12.5cm"}
As we can see in Fig. (\[fig3\]) both self-energies (\[5c\]) and (\[9\]) provide a similar description of the quasiparticle peak. However, the self-energy (\[5c\]) also fits the high-energy feature of the spectrum. In Fig. (\[fig4\]) we show the total spectral functions (upper panel) for four different values of $x$ obtained with this procedure. The lower panel presents the imaginary parts of the self-energy (\[5\]). As we can see, with decreasing $x$ the number of states below the Fermi level is reduced and more spectral weight is pushed into the upper Hubbard peak. Furthermore, the quasiparticle peak is suppressed for higher $x$. These trends are also mirrored in the behavior of the self-energy for different $x$. The absolute value of ${\rm Im}\Sigma$ at the Fermi level increases with $x$. This increase might be attributed to the enhanced scattering due to the randomness introduced by the Sr atoms in the system. We also calculated the $Z$-factor ($1/Z=1-\partial {\rm Re}\Sigma(\omega=\mu)/\partial\omega$) which in Fermi liquid theory is related to the effective mass of the quasiparticles ($1/Z\sim m^{\star}/m$). In the inset to Fig. (\[fig4\]) we see a reduction of $1/Z$ with increasing $x$.
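The quasiparticle weight quoted here follows from the fitted self-energy by a simple numerical derivative at the chemical potential; a minimal sketch, again reusing sigma_fit with purely illustrative parameters:

```python
import numpy as np

def inverse_Z(sigma_func, mu=0.0, h=1e-4, **params):
    """1/Z = 1 - d Re(Sigma)/d omega at omega = mu, by a central finite difference."""
    w = np.array([mu - h, mu + h])
    re = sigma_func(w, **params).real
    return 1.0 - (re[1] - re[0]) / (2 * h)

print(inverse_Z(sigma_fit, Q=0.3, w0=0.0, gamma=0.1, q=1.0, I=4.0,
                Gamma=0.5, w1=0.0, Delta=0.5))
```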
From these results, obtained phenomenologically, we are able to conclude that La$_{1-x}$Sr$_x$TiO$_3$ for $x>0.06$ is a correlated metal which can be modeled with a local self-energy. The [*origin*]{} of the narrow quasiparticle peak and the lower Hubbard band can only be explained within a microscopic approach, e.g., DMFT. This is indeed possible, as has been shown recently using the LDA+DMFT approach.[@nekrasov00]
The $Z$-factor determined above is of the same order for both forms of the phenomenological self-energy \[(\[5c\]) and (\[9\])\]. Nevertheless, we find that the particular value of $Z$ depends on the precise form of $\Sigma(\omega)$. The dependence of $Z$ on $x$ also turns out to be different for the two self-energies. Using Eq. (\[5\]) its behavior is shown in the inset to Fig. (\[fig4\]). From Eq. (\[9\]) we obtained values between $3.5$ and $4$ for the corresponding $x$. This is because the phenomenological self-energy is used to fit the spectrum in a finite frequency window around $\omega=\mu$, which might be much larger than the actual frequency window for which Fermi liquid theory is valid. One should therefore be cautious in the interpretation of the actual values obtained for the effective mass $m^{\star}$.
We also note an intriguing feature of the experimental data for $x>0.06$.[@yoshida99] When the curves are normalized to unity and plotted in a single figure, they are almost identical. In particular, the distance between the coherent peak and the top of the Hubbard band does not depend on $x$. This is in striking contrast to numerical calculations for a doped Hubbard model where doping inevitably leads to a shift of the chemical potential towards the lower Hubbard band (at least under the assumption of a homogeneous phase). Such a discrepancy has recently also been observed for the related compound Gd$_{1-x}$Sr$_x$TiO$_3$ where it was attributed to an inhomogeneous sample composition due to a chemical phase separation into strongly and poorly doped domains.[@sing02] Such a phase separation may be restricted to the surface, as bulk-sensitive experiments did not observe it. A possible difference between surface and bulk electronic structure in oxidic perovskites has indeed been reported for doped vanadates.[@maiti01; @suga]
Conclusions and Final Remarks
=============================
In this paper we presented a simple analytical expression (\[5\]) for the self-energy which can be used to fit experimental data for strongly correlated electron systems. As an application we presented fits to the photoemission data for La$_{1-x}$Sr$_x$TiO$_3$, for which the phenomenological fit parameters all take reasonable values.
There are certain difficulties in carrying out such a program, mainly due to the limited spectroscopic information which is currently available for correlated electron systems. Direct and inverse photoemission data would help to enhance the quality of the fits, in particular, if they allow one to fit different bands. Angular resolved photoemission experiments would give further information, and could provide insights into the validity of assuming a purely local self-energy.
At the moment, our approach is particularly useful to fit data for systems with partially filled, degenerate bands which are well separated from other completely filled or empty bands as, e.g., in La$_{1-x}$Sr$_x$TiO$_3$.
Acknowledgements {#acknowledgements .unnumbered}
================
It is a pleasure to acknowledge M. Potthoff for discussions. This research was supported in part by the Deutsche Forschungsgemeinschaft through the Sonderforschungsbereich 484. The work of K.B. was sponsored by the Alexander von Humboldt Foundation through its scholarship program.
[0]{} S. H[ü]{}fner, [*Photoemission Spectroscopy*]{} (Springer, Berlin, 1995).
R.O. Jones, O. Gunnarsson, Rev. Mod. Phys. [**61**]{}, 689 (1989).
A. Fujimori, I. Hase, H. Namatame, Y. Fujishima, Y. Tokura, H. Eisaki, S. Uchida, K. Takegahara and F.M.F. de Groot, Phys. Rev. Lett. [**69**]{}, 1769 (1992); Phys. Rev. [**B 46**]{}, 9841 (1992).
S.W. Robery, L.T. Hudson, C. Eylem and B. Eichorn, Phys. Rev. [**B 48**]{}, 562 (1993).
I.H. Inoue, I. Hase, Y. Aiura, A. Fujimori, Y. Haruyama, T. Maruyama and Y. Nishihara, Phys. Rev. Lett. [**74**]{}, 2539 (1995).
K. Morikawa, T. Mizokawa, K. Kobayashi, A. Fujimori, H. Eisaki, S. Uchida, F. Iga and Y. Nishihara, Phys. Rev. [**B 52**]{}, 13711 (1995).
K. Morikawa, T. Nizokawa, A. Fujimori, Y. Taguchi and Y. Tokura, Phys. Rev. [**B 54**]{}, 8446 (1996).
H. Kim, H. Kumigashira, A. Ashihara and T. Takahashi, Phys. Rev. [**B 57**]{}, 1316 (1998).
T. Yoshida, A. Ino, T. Mizokawa, A. Fujimori, Y. Taguchi, T. Katsufuji and Y. Tokura, cond-mat/9911446 (unpublished); A. Fujimori, T. Yoshida, K. Okazaki, T. Tsujioka, K. Kobayashi, T. Mizokawa, M. Onoda, T. Katsufuji, Y. Taguchi, Y. Tokura, J. of Electron Spectroscopy and Related Phenomena [**117-118**]{}, 277 (2001).
M. Schramme, Ph. D. Thesis - Augsburg University (2000).
Hyeong-Do Kim, J.-H. Park, J. W. Allen, A. Sekiyama, A. Yamasaki, K. Kadono, S. Suga, Y. Saitoh, T. Muro, P. Metcalf, cond-mat/0108044 (unpublished).
V.I. Anisimov, A.I. Poteryaev, M.A. Korotin, A.O. Anokhin, G. Kotliar, J. Phys.: Condens. Matt. [**9**]{}, 7359 (1997).
A.I. Lichtenstein, M.I. Katsnelson, Phys. Rev. [**B 57**]{}, 6884 (1998).
M.B. Zölfl, Th. Pruschke, J. Keller, A.I. Poteryaev, I.A. Nekrasov, V.I. Anisimov, Phys. Rev. [**B61**]{}, 6884 (2000).
I.A. Nekrasov, K. Held, N. Blümer, A.I. Poteryaev, V.I. Anisimov, and D. Vollhardt, Euro. Phys. J. [**B 18**]{}, 55 (2000).
K. Held, I.A. Nekrasov, N. Blümer, V.I. Anisimov, and D. Vollhardt, Int. J. Mod. Phys. [**B 15**]{}, 2611 (2001).
K. Held, G. Keller, V. Eyert, D. Vollhardt, and V.I. Anisimov, Phys. Rev. Lett. [**86**]{}, 5345 (2001).
K. Held, I.A. Nekrasov, G. Keller, V. Eyert, N. Blümer, A.K. McMahan, R.T. Scalettar, T.Pruschke, V.I. Anisimov, and D. Vollhardt, in [*Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms*]{}, eds. J. Grotendorst, D. Marx and A. Muramatsu, NIC Series Vol. 10 (NIC Directors, Forschungszentrum Jülich, 2002), p. 175.
M. Jarrell and H.R. Krishnamurthy, Phys. Rev. [**B 63**]{}, 125102 (2001); M. Jarrell, Th. Maier, M. H. Hettler, A.N. Tahvildarzadeh, Euro. Phy. Letters [**56**]{}, 563 (2001); M. Jarrell, Th. Maier, C. Huscroft, and S. Moukouri, Phys. Rev. [**B 64**]{}, 195130 (2001); G. Biroli and G. Kotliar, Phys. Rev. [**B 65**]{}, 155112 (2002); G. Kotliar, S. Y. Savrasov, G. Pálsson, and G. Biroli, Phys. Rev. Lett. [**87**]{}, 186401 (2001).
R. Bulla, Phys. Rev. Lett. [**84**]{}, 136 (1999).
Th. Pruschke, R. Bulla, and M. Jarrell, Phys. Rev. B [**61**]{}, 12799 (2000).
R. Bulla, T.A. Costi, and D. Vollhardt, Phys. Rev. B [**64**]{}, 045103 (2001).
R. Claessen, R.O. Anderson, J.W. Allen, C.G. Olson, C. Janowitz, W.P. Ellis, S. Harm, M. Kalning, R. Manzke, and M. Skibowski, Phys. Rev. Lett. [**69**]{}, 808 (1992).
J.W. Allen, G.-H. Gweon, R. Claessen, and K. Matho, J. Phys. Chem. Solids [**56**]{}, 1849 (1995).
See e.g. M.R. Norman, M. Randeria, H. Ding, J. C. Campuzano, Phys. Rev. [**B 57**]{}, 11093 (1998).
K. Matho, Molecular Phys. Reports [**17**]{}, 141 (1997); J. Electron Spectroscopy and Related Phenomena [**117-118**]{}, 13 (2001).
The equation expressing the Fermi liquid constraint is quite lengthy and is not printed here.
R. Bulla, M.T. Glossop, D.E. Logan, and Th. Pruschke, J. Phys.: Condens. Matter [**12**]{}, 4899 (2000).
K. Takegahara, J. of Electron Spectroscopy and Related Phenomena [**66**]{}, 303 (1994).
M. Sing, M. Karlsson, D. Schrupp, R. Claessen, M. Heinrich, V. Fritsch, H.-A. Krug von Nidda, A. Loidl, R. Bulla, cond-mat/0205067 (unpublished).
K. Maiti, D.D. Sarma, M.J. Rosenberg, I.H. Inoue, H. Makino, O. Goto, M. Pedio, and R. Cimino, Euro. Phys. Lett. [**55**]{}, 246(2001).
S. Suga and A. Sekiyama, private communication.
---
abstract: 'The importance of electrostatic interactions in soft matter and biological systems can often be traced to non-uniform charge effects, which are commonly described using a multipole expansion of the corresponding charge distribution. The standard approach when extracting the charge distribution of a given system is to treat the constituent charges as points. This can, however, lead to an overestimation of multipole moments of high order, such as dipole, quadrupole, and higher moments. Focusing on distributions of charges located on a spherical surface – characteristic of numerous biological macromolecules, such as globular proteins and viral capsids, as well as of inverse patchy colloids – we develop a novel way of representing spherical surface charge distributions based on the von Mises-Fisher distribution. This approach takes into account the finite spatial extension of individual charges, and leads to a simple yet powerful way of describing surface charge distributions and their multipole expansions. In this manner, we analyze charge distributions and the derived multipole moments of a number of different spherical configurations of identical charges with various degrees of symmetry. We show how the number of charges, their size, and the geometry of their configuration influence the behavior and relative importance of multipole magnitudes of different order. Importantly, we clearly demonstrate how neglecting the effect of charge size leads to an overestimation of high-order multipoles. The results of our work can be applied to construct analytical models of electrostatic interactions and multipole expansion of charged particles in diverse soft matter and biological systems.'
author:
- Anže Lošdorfer Božič
title: From discrete to continuous description of spherical surface charge distributions
---
=1
\[sec:intro\]Introduction
=========================
It is hard to overstate the importance of charge and the resulting electrostatic interactions in various soft matter and biological systems. These include protein-protein and protein-polyelectrolyte interactions, viral capsid assembly and stability, interactions and crystallization of inverse patchy colloids, and drug delivery and cellular uptake of nanoparticles [@Holm2012; @Bianchi2017; @Bianchi2017b; @Siber2012; @Bai2016]. Electrostatic interactions are also highly tunable, enabling a controllable assembly of charged particles [@Bianchi2017; @Bianchi2014]. The control over electrostatic effects can be achieved either by varying particle size and the size of their patches of charge, or by changing the properties of the surrounding electrolyte – most importantly, its salt concentration and $pH$ value [@Bianchi2014; @Barisik2014; @Kusters2015; @Sabapathy2017; @ALB2017a; @Krishnan2017; @Nap2014; @Abrikosov2017]. What is more, the charge on biological macromolecules can in principle also be regulated via induced mutations, changing the nature and charge of their amino acid composition [@Ni2012].
Experimental observations and numerical simulations of electrostatic effects in these systems are often supplemented by analytical models [@Holm2012; @Bianchi2017; @Bianchi2017b; @Siber2012; @Warshel2006]. In a first approximation, the total charge on a particle – be it a colloid or a macromolecule – can account for a large amount of its electrostatic behavior. But while such treatment of particles as homogeneously charged is customary, non-uniform charge effects often play a significant role that cannot be neglected [@Adar2017; @Grant2001]. For instance, both charge heterogeneity (patchiness) and charge fluctuation reduce the electrostatic repulsion between proteins or protein aggregates, eventually even giving way to attraction [@Grant2001; @ALB2013a; @Li2017; @Li2015; @Vega2016]. Likewise, heterogeneity of charge is a determining factor in the aggregation and crystallization of inverse patchy colloids, as well as in their interaction with polyelectrolytes [@Bianchi2017b; @Blanco2016; @Bianchi2014; @Dempster2016; @Yigit2015; @Yigit2017]. Recent experiments have also revealed a long-range attraction between overall neutral surfaces, locally charged in a mosaic-like structure of positively and negatively charged patches [@Silbert2012; @Perkin2006; @Meyer2005].
Due to the typical size of colloidal and molecular systems, and the sheer number of atoms and charges involved in them, effective coarse-grained representations of their interaction potentials are vital for the modeling of such systems [@Hoppe2013]. A common way of describing charge heterogeneities in particles and reducing their complexity is the multipole expansion of particles’ surface charge distributions. This approach presents an efficient way of describing surface charges as continuous patches, easing the description of the rich set of surface charge patterns embedded in proteins and charged patchy particles [@Blanco2016; @Stipsitz2015]. In addition to determining and classifying the electrostatic multipole moments of different proteins [@ALB2017a; @Felder2007; @Nakamura1985], multipole expansion has also been widely used to explore protein-protein, protein-ligand, and colloidal interactions [@Abrikosov2017; @Blanco2016; @Hoppe2013; @Paulini2005; @Parimal2014], predict the electrophoretic mobility of proteins [@Kim2006], and to provide a representation of both the protein structure [@Gramada2006] and of the symmetry of viral capsids [@Lorman2007; @Lorman2008].
In obtaining a multipole representation of a particle’s surface charge distribution, the charges on the particle are typically treated as point charges represented by Dirac $\delta$ functions. Similarly, the patches of charge on inverse patchy colloids are often considered to cover an exact surface area of the colloid, defined by sharp edges. While common, both descriptions are known to have multipole expansions where it is difficult to achieve an accurate representation of a surface charge distribution with a finite number of multipole terms, due to the Dirac $\delta$ and Heaviside step functions involved [@ALB2013a; @ALB2011]. And while an arbitrary cutoff can in principle be chosen, e.g., by representing a surface charge distribution only by its monopole, dipole, and quadrupole moments, this leaves open the question of accuracy and relevance of high-order multipole moments.
Here, we present a novel way of constructing spherical surface charge distributions based on the von Mises-Fisher distribution, taking into account the finite extent of individual charges on a given particle. We derive the expression for the multipole moments of thusly constructed distributions, yielding a simple yet elegant form which can be used to study how the number and size of charges as well as the geometry of their configuration on a particle influences the relative relevance of multipole moments of different order. The derived model presents an improvement in the description of the multipole representation of any number of charges on a spherical particle, with a simplicity which nonetheless allows it to serve as a more accurate input for analytical models of electrostatic effects in systems of globular proteins, viral capsids, and charged patchy colloids.
Constructing spherical surface charge distributions
===================================================
We consider a point charge $q_k e_0$, located on a unit sphere of radius $R=1$ at a position $\mathbf{r}_k=(R,\vartheta_k,\varphi_k)=(R,\Omega_k)$, written in spherical coordinates; $e_0$ is the elementary charge. The contribution of the point charge to the total surface charge distribution on the sphere, when written in terms of the Dirac $\delta$ function, is $$\label{eq:delta}
\sigma_\delta(\Omega)=\frac{q_k e_0}{R^2}\times\delta(\Omega-\Omega_k),$$ normalized so that $\int\sigma_\delta(\Omega)\,\mathrm{d}V=q_k e_0$. Such a description, while standard, can cause difficulties when describing a contribution of many charges to the surface charge distribution and then expanding it in terms of multipoles. Specifically, the multipole coefficients of the distribution converge poorly, as an infinite sum over spherical harmonics is required to accurately represent the Dirac $\delta$ function.
In order to remedy this, we now represent a point charge $q_k e_0$ with a normal distribution on a sphere instead, writing its contribution to the total surface charge distribution as $$\label{eq:vmf}
\sigma_\mathrm{vMF}(\Omega)=\frac{q_k e_0}{R^2}\times f(\Omega\,|\,\Omega_k,\lambda_k),$$ where the function $f(\Omega\,|\,\Omega_k,\lambda)$ is the von Mises-Fisher (vMF) distribution on a unit sphere in three dimensions [@Mardia2009], $$f(\Omega\,|\,\Omega_k,\lambda)=\frac{\lambda}{4\pi\sinh\lambda}\,\exp(\lambda\cos\gamma_k).
\vspace*{0.1cm}$$ Here, $\cos\gamma_k$ denotes the great-circle distance between points $\Omega$ and $\Omega_k$ on the sphere. The vMF distribution is a normal distribution on a sphere, centered around a mean direction $\Omega_k$ with a concentration parameter $\lambda$ – the higher its value, the higher the concentration of the distribution around the mean direction (see Fig. \[fig:A1\] in Appendix \[sec:vmf\]). We write the normalization factor $1/R^2$ in Eq. in analogy with the spherical expression of the Dirac $\delta$ function \[Eq. \].
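A minimal numerical sketch of the vMF density and of its normalization on the unit sphere (the mean direction and the values of $\lambda$ below are arbitrary):

```python
import numpy as np

def vmf(theta, phi, theta_k, phi_k, lam):
    """von Mises-Fisher density on the unit sphere with mean direction (theta_k, phi_k)."""
    cos_gamma = (np.cos(theta) * np.cos(theta_k)
                 + np.sin(theta) * np.sin(theta_k) * np.cos(phi - phi_k))
    return lam / (4 * np.pi * np.sinh(lam)) * np.exp(lam * cos_gamma)

# check normalization: the integral over the full solid angle equals 1 for any lambda
theta, phi = np.meshgrid(np.linspace(0, np.pi, 400), np.linspace(0, 2 * np.pi, 800),
                         indexing="ij")
dS = np.sin(theta) * (np.pi / 399) * (2 * np.pi / 799)
for lam in [0.1, 1.0, 10.0, 100.0]:
    print(lam, round((vmf(theta, phi, 0.7, 1.3, lam) * dS).sum(), 4))
```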
Given $N$ charges on a sphere, the total surface charge distribution can thus be written as a sum of contributions from individual charges: $$\label{eq:sig-vmf}
\sigma(\Omega)=\frac{e_0}{4\pi R^2}\sum_{k=1}^N\frac{q_k\lambda_k}{\sinh\lambda_k}\exp(\lambda_k\cos\gamma_k),$$ where each charge is represented by its own vMF distribution characterized by the mean direction $\Omega_k$, coinciding with the position of the charge projected onto the unit sphere, and the charge’s concentration parameter $\lambda_k$, describing its spatial extension around the mean position. The surface charge distribution, Eq. , can in turn be expanded in terms of its multipole moments $$\label{eq:multipole}
\sigma(\Omega)=\frac{e_0}{4\pi R^2}\sum_{l,m}\sigma_{lm}Y_{lm}(\Omega).$$ A lengthy derivation, given in Appendix \[sec:derivation\], yields a very elegant expression for the multipole coefficients $\sigma_{lm}$, $$\label{eq:slm}
\sigma_{lm}=4\pi\sum_kq_k\,g_l(\lambda_k)\,Y_{lm}^*(\Omega_k),$$ where we have defined $$\label{eq:glk}
g_l(\lambda)=\frac{\lambda}{\sinh\lambda}\,i_l(\lambda).$$ Here, $i_l(x)$ are the modified spherical Bessel functions of the first kind [@Abramowitz]. Rather unexpectedly, the multipole coefficients are determined by a single function dependent on the multipole order $\ell$ and the concentration parameter $\lambda_k$. With the knowledge of multipole coefficients \[Eq. \], we can now also insert them back into Eq. to obtain the total surface charge distribution. Given an expansion of a surface charge distribution in terms of its multipole coefficients, we define the multipole magnitude $S_l$ of order $\ell$ as $$\label{eq:mag}
S_l=\sqrt{\frac{4\pi}{2l+1}\sum_m|\sigma_{lm}|^2}.$$ Inserting the expression for the multipole coefficients, Eq. , we obtain the normalized multipole magnitudes (Appendix \[sec:mags\])
$$\label{eq:Sl}
\frac{S_l}{|S_0|}=\left(\sum_k|q_k|\right)^{-1}\left[\sum_{k=t}q_k^2\,g_l^2(\lambda_k)+2\sum_{k>t}q_kq_t\,g_l(\lambda_k)\,g_l(\lambda_t)\,P_l(\cos\gamma_{kt})\right]^{1/2}.$$
The monopole moment $S_0$ relates of course to the total charge $Q$, whereas the multipole moments of the first and second order correspond to the dipole and quadrupole moment, respectively, and can be easily related to their Cartesian forms [@ALB2017a]. In order to enable an easy comparison between configurations with the same number of charges but different total charge, we have normalized the multipole magnitudes in Eq. with the absolute value of the monopole moment, $|S_0|=4\pi\sum_k|q_k|=4\pi|Q|$.
We have thus derived the multipole coefficients for an arbitrary distribution of $N$ charges on a unit sphere \[Eq. \], where we have assigned to them vMF distributions with given mean directions $\Omega_k$ and concentration parameters $\lambda_k$. Through this, we have obtained a very simple expression both for the resulting total surface charge distribution and its corresponding multipole moments \[Eq. \]. Such an approach ascribes a finite, continuous spatial extent to each charge, providing a more realistic description and at the same time avoiding the difficulties related to the multipole expansion of Dirac $\delta$ functions.
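A minimal numerical sketch of the multipole coefficients $\sigma_{lm}$, the function $g_l(\lambda)$, and the normalized magnitudes $S_l/|S_0|$ is given below. The charge positions are random and serve only as an example; note the azimuthal-first argument convention of SciPy's sph_harm.

```python
import numpy as np
from scipy.special import sph_harm, spherical_in

def g_l(l, lam):
    """g_l(lambda) = lambda * i_l(lambda) / sinh(lambda)."""
    return lam * spherical_in(l, lam) / np.sinh(lam)

def multipole_coefficients(l, q, theta, phi, lam):
    """sigma_lm for m = -l..l, for charges q_k at (theta_k, phi_k) with parameters lambda_k."""
    m = np.arange(-l, l + 1)
    Y = sph_harm(m[:, None], l, phi[None, :], theta[None, :])   # azimuth first, then polar
    return 4 * np.pi * np.sum(q * g_l(l, lam) * np.conj(Y), axis=1)

def multipole_magnitude(l, q, theta, phi, lam):
    """Normalized multipole magnitude S_l / |S_0|, with |S_0| = 4 pi sum_k |q_k|."""
    slm = multipole_coefficients(l, q, theta, phi, lam)
    S_l = np.sqrt(4 * np.pi / (2 * l + 1) * np.sum(np.abs(slm) ** 2))
    return S_l / (4 * np.pi * np.sum(np.abs(q)))

# example: N = 10 identical unit charges placed uniformly at random, lambda = 10
rng = np.random.default_rng(1)
N = 10
theta = np.arccos(rng.uniform(-1, 1, N))
phi = rng.uniform(0, 2 * np.pi, N)
q, lam = np.ones(N), 10.0 * np.ones(N)
print(np.round([multipole_magnitude(l, q, theta, phi, lam) for l in range(7)], 4))
```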
Configurations of identical charges
-----------------------------------
In order now to explore the consequences of the derived expressions for the surface charge distribution and its multipole moments, we will limit ourselves in the rest of the paper to configurations where all the charges possess identical properties, $q_k=q=1$ and $\lambda_k=\lambda$.
Such an assumption immediately enables us to study certain limiting cases of our results (Appendix \[sec:limits\]): When the concentration parameter $\lambda$ tends to $0$, the surface charge distribution of any configuration of charges expectedly becomes a uniform distribution on the sphere, described by its total charge. On the other hand, when $\lambda$ tends to infinity, the surface charge distribution reduces to a sum over Dirac $\delta$ functions of individual charges \[Eq. \].
More interestingly, the multipole magnitudes of a configuration of $N$ identical charges can be expressed as $$\begin{aligned}
\frac{S_l}{|S_0|}&=&g_l(\lambda)\times\left(\frac{1}{N}+\frac{2}{N^2}\sum_{k>t}P_l(\cos\gamma_{kt})\right)^{1/2}\\
&=&g_l(\lambda)\times\frac{S_l^\infty}{|S_0|}.
\label{eq:sinf}\end{aligned}$$ From here, we see that there are two major factors determining the relative contribution of a given multipole moment $S_l$ to the surface charge distribution. The first factor is given by the function $g_l(\lambda)$, and the second by the geometry of the configuration of the $N$ charges, given by their spherical distances $\cos\gamma_{kt}$. The latter are indeed all that determines the multipole magnitudes in the limit $\lambda\to\infty$, as $\lim_{\lambda\to\infty}g_l(\lambda)=1$ $\forall l$. On the other hand, when $\lambda\to0$, we have $g_l(\lambda)\propto\lambda^l$ and the low-order multipoles become increasingly dominant. A more detailed discussion of the different limiting cases is given in Appendix \[sec:limits\].
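A minimal sketch of this factorization, reusing g_l and the random positions theta, phi from the previous sketch (note that $\sinh\lambda$ overflows in double precision for $\lambda\gtrsim700$, so the $\lambda\to\infty$ limit is probed here with $\lambda=100$):

```python
import numpy as np
from scipy.special import eval_legendre

def geometric_factor(l, theta, phi):
    """S_l^infty / |S_0| for N identical charges; depends only on the configuration geometry."""
    N = len(theta)
    cosg = (np.cos(theta)[:, None] * np.cos(theta)[None, :]
            + np.sin(theta)[:, None] * np.sin(theta)[None, :] * np.cos(phi[:, None] - phi[None, :]))
    pairs = np.triu_indices(N, k=1)
    return np.sqrt(1.0 / N + 2.0 / N ** 2 * eval_legendre(l, cosg[pairs]).sum())

# S_l/|S_0| = g_l(lambda) * geometric factor; g_l -> 1 for large lambda, while
# g_l ~ lambda**l / (2l+1)!! for small lambda, which suppresses the high orders.
for lam in [0.1, 1.0, 10.0, 100.0]:
    print(lam, np.round([g_l(l, lam) * geometric_factor(l, theta, phi) for l in range(5)], 4))
```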
Random configurations of identical charges and the role of symmetry
===================================================================
We study the implications of our results by applying them to different configurations of $N$ identical charges on a unit sphere and analyzing the properties of the resulting surface charge distributions. All the charges share the same properties, $q_k=q=1$ and $\lambda_k=\lambda$, with $\lambda$ and $N$ being variable. For comparison, we use three different types of charge configurations on a sphere:
- Random distributions of charges, where the positions of charges are distributed uniformly on the sphere.
- Distributions of charges with some minimal distance between them. The positions of these charges are generated randomly and picked according to Mitchell’s best-candidate algorithm [@Mitchell1991] (approximating Poisson disc sampling and blue noise). In this manner, we prevent charges from being distributed too closely together, as can, for instance, happen in scheme [*(i)*]{}. The resulting positions of charges are, while random, no longer independent.
- Distributions of charges based on the solutions of the Thomson problem, which minimizes the electrostatic energy of such a configuration [@Wales2006]. Compared to schemes [*(i)*]{} and [*(ii)*]{}, the charges in these configurations are spaced the furthest apart, and the configurations exhibit various symmetries, including tetrahedral, octahedral, and icosahedral (depending on the number of charges $N$).
We will refer to these configurations as random, Mitchell, and Thomson configurations, respectively. While the solutions of the Thomson problem, [*(iii)*]{}, provide unique configurations for a given $N$, generating them randomly – either uniformly, [*(i)*]{}, or with Mitchell’s algorithm, [*(ii)*]{} – can yield many different configurations. In the latter two cases we thus generate, for each $N$, $M=5000$ different configurations, allowing us to operate in terms of average quantities, where the average is taken over all $M$ configurations.
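Minimal sketches of the first two construction schemes are given below (the Thomson configurations are tabulated minimizers and are not regenerated here). The number of candidates per step in Mitchell's algorithm is an arbitrary choice, and the resulting Cartesian coordinates can be converted to the spherical angles used above in the obvious way.

```python
import numpy as np

def random_on_sphere(N, rng):
    """Scheme (i): N points distributed uniformly on the unit sphere."""
    v = rng.standard_normal((N, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def mitchell_on_sphere(N, rng, n_candidates=50):
    """Scheme (ii): Mitchell's best-candidate sampling; each new point is the candidate
    farthest (in great-circle distance) from its nearest already placed point."""
    pts = random_on_sphere(1, rng)
    while len(pts) < N:
        cand = random_on_sphere(n_candidates, rng)
        nearest = np.arccos(np.clip(cand @ pts.T, -1.0, 1.0)).min(axis=1)
        pts = np.vstack([pts, cand[np.argmax(nearest)]])
    return pts

rng = np.random.default_rng(2)
for name, pts in [("random", random_on_sphere(10, rng)), ("Mitchell", mitchell_on_sphere(10, rng))]:
    ang = np.arccos(np.clip(pts @ pts.T, -1.0, 1.0))
    print(name, "smallest pairwise angle:",
          round(float(np.degrees(ang[np.triu_indices(10, k=1)].min())), 1), "deg")
```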
![image](figure1.png){width="1.5\columnwidth"}
In Fig. \[fig:1\], we use the three different schemes [*(i)*]{}-[*(iii)*]{} to obtain configurations of $N=10$ identical charges and their surface charge distributions, which have been projected from a sphere onto a plane using Mollweide projection [@Snyder]. In addition, the configurations are shown at two different values of the concentration parameter, $\lambda=1$ and $\lambda=10$ (cf. also Fig. \[fig:A1\] in Appendix \[sec:vmf\]). We see that, for small $\lambda$, the Thomson configuration is almost indistinguishable from a uniform charge distribution. Random and Mitchell configurations show more variation, especially if the charges are allowed to be located close to each other. At higher $\lambda$, where the influence of charges is more concentrated around their positions, the deviations from a uniform distribution become more prominent in all three configurations. Again, however, the relative positions of the charges determine the extent of this variation. These observations indicate that indeed $\lambda$ and the relative positions of the charges (given by $\cos\gamma_{kt}$) will determine the multipole characterization of a given configuration of charges. Mitchell’s algorithm, [*(ii)*]{}, positions the charges so that there is a minimal distance between them, leading to a “layered” distribution of distances between charges; on the other hand, random positioning of charges onto the sphere, [*(i)*]{}, tends to distribute them uniformly on average (see Fig. \[fig:E1\] in Appendix \[sec:extra\]). Lastly, Thomson configurations exhibit the largest distances between particles and the highest overall symmetry.
Multipole expansion
-------------------
Figure \[fig:1\] provides us with an insight into how a particular configuration of charges and their concentration parameter $\lambda$ influence the resulting surface charge distribution. However, as it is difficult to assess the general influence of the number of charges and their properties based on their surface charge distribution alone, we now turn our attention to their multipole magnitudes \[Eq. \].
Figure \[fig:2\] shows the distributions of the first $6$ normalized multipole moments for $5000$ different random and Mitchell configurations of $N=10$ identical charges. We can see that, in both cases, at small values of $\lambda$ the multipole moments of high order $\ell$ are quickly suppressed, and the surface charge distribution is thus dominated by its monopole moment. When $\lambda$ increases, the high-order multipoles drop off ever more slowly until they become comparable among each other in the limit $\lambda\to\infty$. On average, random configurations tend to have much larger low-order multipoles (dipole, quadrupole) compared to Mitchell configurations with a minimum distance between the charges; this difference disappears for high-order multipoles. All these observations stem from the mean values of multipoles obtained by averaging over the $5000$ different configurations; within these, there is still a significant amount of variation, especially for low $\ell$.
![Violin plot of the first $6$ multipole magnitudes for random and Mitchell configurations of $N=10$ identical charges. Each entry in the violin plot shows a (mirrored) distribution of normalized magnitudes of $5000$ different configurations, with the central symbols denoting the mean and the bars denoting the corresponding standard deviation. Star symbols show the multipole magnitudes of the corresponding Thomson configuration. The plot is shown for four different values of the concentration parameter $\lambda$. \[fig:2\]](figure2.pdf){width="\columnwidth"}
As the concentration parameter $\lambda$ is increased, multipoles of high order become less and less negligible (Fig. \[fig:2\]). This behavior becomes even more pronounced when we plot the normalized total power of order $\ell$, $P_l/|P_0|$, obtained by terminating the expression for total power \[Eq. \] at a given $\ell$. The total power consists of a sum of squared multipole magnitudes, and is shown in Fig. \[fig:3\] for configurations of $N=10$ identical charges. Again, we can see clearly that at low $\lambda$, the surface charge distribution of any configuration is dominated by its monopole moment. Upon a gradual increase in $\lambda$, the next few multipole moments become more important, while the majority of the multipoles still do not contribute to the total power. However, with a still further increase in $\lambda$, more and more multipoles need to be summed before the total power converges, and when $\lambda=1000$, we are far from convergence even when we truncate the sum only at $\ell=21$. This observation holds regardless of which of the three configuration schemes [*(i)*]{}-[*(iii)*]{} we choose. We can also observe that the total power of Mitchell configurations of charges, keeping a minimum distance between them, matches quite closely the total power of the corresponding Thomson configuration, while the total power of random configurations is always higher, especially due to the higher values of dipole and quadrupole moments.
![Normalized total power $P_l/|P_0|$ of random and Mitchell configurations of $N=10$ identical charges, obtained by summing the squares of multipole magnitudes up to order $\ell$. Full lines show the mean values, obtained by averaging over $5000$ different configurations, while the shaded regions denote the corresponding standard deviations. The latter are negligible for Mitchell configurations. Dashed lines and star symbols show the total power for the corresponding Thomson configuration. The plot is shown for four different values of the concentration parameter $\lambda$. \[fig:3\]](figure3.pdf){width="\columnwidth"}
Figures \[fig:E2\] and \[fig:E3\] in Appendix \[sec:extra\] show the results for configurations of $N=20$ identical charges, analogous to those presented in Figs. \[fig:2\] and \[fig:3\] for configurations of $N=10$ charges. We can see that the general behavior is similar in the two cases. Notably, though, a higher number of charges lowers the overall magnitudes of multipoles compared to the monopole moment, and in the case of Mitchell configurations, the first non-negligible multipole occurs at a higher value of $\ell$ than for $N=10$ charges ($\ell\sim8$ and $\ell\sim6$, respectively).
The first non-negligible multipole in an expansion of a surface charge distribution thus appears to be in large part determined by any symmetry a given configuration might possess. The Thomson configuration of $N=10$ identical charges possesses $D_{4d}$ symmetry [@Wales2006], and as such, its first non-vanishing multipole should be the quadrupole, $\ell=2$ [@Gelessus1995]. From Fig. \[fig:4\] we can observe that this is indeed the case. The multipole magnitudes, however, reach their first peak at $\ell=5$ and $\ell=6$. Compared with the corresponding random and Mitchell configurations, the Thomson configuration also shows the most variation between multipoles of different order – that is, while some multipoles are strongly represented in the expansion, others are completely absent; this is especially noticeable in the limit $\lambda\to\infty$, where the only contribution to the multipoles is due to the geometry of the configuration \[Eq. \]. We can see that the higher the order of the multipole, $\ell$, the more slowly this limiting value is reached.
![image](figure4.pdf){width="2\columnwidth"}
The results for configurations of $N=10$ identical charges indicate that the multipole description of a surface charge distribution and its deviations from a uniform distribution are influenced by several factors (Fig. \[fig:4\]). In configurations where high-order multipoles (large $\ell$) are dominant, the description of charges will approach the limit of Dirac $\delta$ functions ($\lambda\to\infty$) slowly, i.e., their surface charge distribution will be approximated well by a uniform distribution in a wider range of $\lambda$s. Another factor influencing the deviation from a uniform distribution is the limiting value of the multipole magnitudes, $S_l^\infty$, dependent solely on the geometrical distribution of the charges. A lower limiting value, typical of completely random distributions, implies that, in spite of how quickly this limit is attained with $\lambda$, the uniform distribution given by the monopole moment will remain dominant.
We now wish to generalize these observations to configurations with an arbitrary number of charges. In order to do that, we will characterize the behavior of multipole magnitudes $S_l$ with two parameters: first, with their limiting value $S_l^\infty$, and second, with the value of the concentration parameter where a multipole magnitude reaches $10\%$ of the monopole moment, which we will denote $\lambda_{0.1}$: $$\left.\frac{S_l}{|S_0|}\right|_{\lambda_{0.1}}=g_l(\lambda_{0.1})\times\frac{S_l^\infty}{|S_0|}=0.1.$$ The dependence of these two parameters on the number of identical charges in a configuration, $N$, is shown in Figs. \[fig:5\] and \[fig:6\] for a large number of multipole magnitudes. First of all, we see that, for random configurations of charges, the limiting value $S_l^\infty$ changes only slowly with $\ell$, while it decreases with $N$ (Fig. \[fig:5\]). However, the value of $\ell$ influences rather strongly the parameter $\lambda_{0.1}$ – the speed at which the limiting value $S_l^\infty$ is attained (Fig. \[fig:6\]). In contrast, $\lambda_{0.1}$ is not influenced much by the number of charges in a random configuration.
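Since all charges are identical, the normalized magnitudes factorize as $S_l/|S_0|=g_l(\lambda)\,S_l^\infty/|S_0|$, so $\lambda_{0.1}$ follows from a one-dimensional root search. Below is a minimal Python sketch of this procedure, assuming SciPy is available; $g_l$ is written in terms of the exponentially scaled Bessel function so that it stays finite at large $\lambda$, and the bracket $[10^{-6},1000]$ simply mirrors the range of $\lambda$s considered here.

```python
import numpy as np
from scipy.special import ive, eval_legendre
from scipy.optimize import brentq

def g(l, lam):
    """g_l(lambda) = lambda/sinh(lambda) * i_l(lambda), rewritten with the
    exponentially scaled Bessel function ive for numerical stability."""
    return np.sqrt(2.0 * np.pi * lam) * ive(l + 0.5, lam) / (1.0 - np.exp(-2.0 * lam))

def geometric_factor(l, pts):
    """Limiting value S_l^infty/|S_0| for N identical unit charges at unit-vector positions pts."""
    n = len(pts)
    cosg = np.clip(pts @ pts.T, -1.0, 1.0)        # cos(gamma_kt) from dot products
    iu = np.triu_indices(n, 1)                    # pairs k > t
    s2 = 1.0 / n + 2.0 / n ** 2 * eval_legendre(l, cosg[iu]).sum()
    return np.sqrt(max(s2, 0.0))

def lambda_01(l, pts, lam_max=1000.0):
    """Concentration parameter at which S_l/|S_0| reaches 10% of the monopole (l >= 1)."""
    f = lambda lam: g(l, lam) * geometric_factor(l, pts) - 0.1
    if f(lam_max) < 0.0:
        return np.inf        # the multipole never reaches 10% in the considered range
    return brentq(f, 1e-6, lam_max)
```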
![Heatmap showing the limiting value of the multipole magnitudes $S_l^\infty/|S_0|$ as a function of $N$ and $\ell$. The value of $S_l^\infty/|S_0|$ is obtained in the limit $\lambda\to\infty$, and depends solely on the distances between charges in a given configuration. In the case of random and Mitchell configurations, the heatmap shows the mean values obtained by averaging over $5000$ different configurations. \[fig:5\]](figure5.pdf){width="\columnwidth"}
![Heatmap showing the value of the concentration parameter where a multipole magnitude reaches $10\%$ of the monopole magnitude (total charge), $\lambda_{0.1}$, as a function of $N$ and $\ell$. A value of $\lambda_{0.1}=1000$ indicates that a multipole does not reach the $10\%$ value of the monopole in the considered range of $\lambda$s. In the case of random and Mitchell configurations, the heatmap shows the mean values obtained by averaging over $5000$ different configurations. \[fig:6\]](figure6.pdf){width="\columnwidth"}
The number of charges $N$ has a much bigger influence on the behavior of Thomson configurations. Partially, this is to be expected, as they exhibit various symmetries at different $N$, resulting in a strong presence of multipoles of given $\ell$ and in the almost complete vanishing of other multipoles. For instance, it is known that for configurations with tetrahedral, octahedral, and icosahedral symmetries, only certain values of $\ell$ are permitted in the multipole expansion of the surface charge distribution [@Lorman2007; @ALB2013a]: $\ell_\mathrm{tet}=4i+6j\,(+3)$, $\ell_\mathrm{oct}=4i+6j\,(+9)$, and $\ell_\mathrm{ico}=6i+10j\,(+15)$; the odd values of $\ell$ (given in parentheses) are permitted only for configurations which lack inversion symmetry. The symmetries of different configurations and the permitted multipoles in their expansion are reflected in a checkered pattern in the heatmaps in Figs. \[fig:5\] and \[fig:6\], a pattern which is completely absent in the case of random configurations.
In addition to this, Thomson configurations exhibit vanishing multipole moments of low order $\ell$, the amount of which increases with increasing $N$. For example, while the dipole and quadrupole moment are negligible for a Thomson configuration with $N=10$ charges, all of the first $10$ multipole moments are negligible for a Thomson configuration with $N=60$ charges. What is more, these vanishing multipole moments appear to occur periodically in “islands” of $\ell$ numbers, the extent of which increases with $N$ (see also Fig. \[fig:E4\] in Appendix \[sec:extra\]).
Mitchell configurations present a middle ground between random and Thomson configurations. The variation of $S_l^\infty$ and $\lambda_{0.1}$ occurs gradually with $N$, in a similar fashion to random configurations, yet at the same time we can observe the same vanishing multipoles of low order at higher numbers of charges as we saw for Thomson configurations. Individual multipoles also tend to be more pronounced in Mitchell configurations compared to the multipoles of random configurations, yet not so strikingly as in the symmetric Thomson configurations.
Taken together, our results for configurations of identical charges show that the multipole magnitudes of their surface charge distributions depend strongly on the exact geometry of the configuration, with clear differences between randomly positioned charges, configurations where charges are placed a minimum distance apart, and configurations of very high symmetry. In addition, both the number of charges and their size – given by the concentration parameter – place the surface charge distribution of a configuration of charges in different regimes, where the distribution can either behave solely as a uniform distribution, or needs a large number of high-order multipoles to be accurately represented.
Discussion and conclusions
==========================
In this work, we have presented a novel way of constructing continuous surface charge distributions of spherical particles composed of numerous charges. Our approach is based on the description of individual charges with a vMF distribution on a sphere, taking into account the finite extent of the charges. With this, we were able to extract the electrostatic multipoles of such surface charge distributions and analyze their behavior as a function of the multipole order $\ell$, and the number $N$ and size (concentration parameter $\lambda$) of the charges. Analytically, we have derived the precise relation of the multipole magnitudes to the size of the charges and the geometry of their configuration on the sphere. We have explored the predictions of our approach on different configurations of identical charges, generated either randomly or by using Mitchell’s algorithm, or extracted from the solutions of the Thomson problem.
While we have considered configurations of charges with identical properties, the results derived in this paper allow an easy generalization to arbitrary configurations of fractional charges $q_k$ with concentration parameters $\lambda_k$ (which take into account their relative extension on a unit sphere with $R=1$). In addition, given a “physical” size of a charge $a_k$, we can rewrite the concentration parameter for an arbitrary size of the sphere $R$ as $$\label{eq:elk}
\lambda_k=\frac{R}{a_k}.$$ This implies that, for a given size of a charge, the parameter $\lambda$ will be larger for larger spheres, since the same charge appears more localized on a larger sphere than on a smaller one. In this way, our approach can be used to study the surface charge distributions on biomolecules of different sizes, ranging from small globular proteins ($R\gtrsim1$ nm) to larger viral capsids ($R\gtrsim10$-$20$ nm) [@ALB2017a; @ALB2013b], thus spanning a range of $\lambda_k\gtrsim1$-$100$, depending on the size of the macromolecule in question. This of course makes it necessary to be able to estimate the value of the parameter $a_k$, which can be obtained from the biochemical nature of the molecules (such as different amino acids) carrying the charge in a given system.
An important conclusion we can draw from our results is that the relationship between the size of the charges relative to the size of the sphere they are located on plays a significant role in determining the resulting surface charge distributions. We have seen that going from very spread-out charges (small $\lambda$) to charges that can be treated as Dirac $\delta$ functions (large $\lambda$) results in a wildly different relative importance of the corresponding multipole magnitudes. Specifically, a surface charge distribution constructed out of point charges will need in principle an infinite sum of multipoles in order to be represented accurately, potentially masking the importance of low-order multipoles. Consequently, such a description could lead to an over- or underestimation of dipole and quadrupole moments. Our results also indicate that even in descriptions of general charge distributions of molecules, taking into account the finite extension of charges could have a pronounced effect on the determination of their electrostatic multipoles [@Stone1981; @Larsson1985].
In general, our approach also helps distinguish the regime where a given configuration of charges on a sphere can be described well by a uniform distribution from the regime where the charges are localized enough that their geometry and symmetry determine the largest multipoles in the expansion of the surface charge distribution. While the geometry of a particular configuration of charges turns out to play a large role, the surface charge distribution will nonetheless tend to a uniform one when $\lambda\ll1$, whereas the multipole magnitudes will be determined solely by the geometry of the configuration when $\lambda\gg1$. In the intermediate regime of $\lambda$s, increasing the number of charges in a configuration will in general reduce the importance of high-order multipoles, the more so the less symmetric the configuration. At the same time, multipoles of low order are prominent at small values of $\lambda$, and the high-order multipoles become comparable only when $\lambda$ is increased.
The parameter space of biological macromolecules and colloids can in fact span a large range of values studied in this work. The charge of both small globular proteins and large capsid assemblies is carried by the same amino acids, meaning that the concentration parameters of charges will be smaller for the globular proteins than for viral capsids. On the other hand, viral capsids can carry several hundreds or thousands of individual charges, while the smaller proteins are often composed of only a few tens of charges. Consequently, we can expect a large variation in the multipole behavior of the surface charge distributions in different systems.
A particular observation that should be of importance when describing the surface charge distributions in viral capsids is that the order $\ell$ of the dominant multipole in symmetric distributions increases with an increasing number of charges in a configuration. This is in contrast to random configurations of charges, which tend to be dominated by low-order multipoles, no matter the number of charges. As viral capsids possess very high symmetry – typically icosahedral – our approach can be used to extract the dominant multipole describing the symmetry of a particular configuration, which was recently shown to play a role in orientational phase transitions in capsids [@Dharmavaram2017].
The approach presented in this work enables a simple yet powerful construction of continuous surface charge distributions from individual charges on spherical particles, taking into account the finite size of each charge. This allows for a construction of various analytical models based on multipole expansion that can be used in describing systems of inverse patchy colloids, small globular proteins, and viral capsids of different sizes. In addition, the approach presented here can help elucidate the relative relevance of multipole magnitudes in a given system, and can help distinguish the cases where total charge provides a sufficient description of the electrostatic properties from the cases where a more detailed multipole expansion is needed.
I thank S. Čopar and R. Podgornik for numerous helpful discussions and comments. This work was supported by the Slovenian Research Agency (under Research Core Funding grant No. P1-0055).
\[sec:vmf\] von Mises-Fisher distribution
=========================================
The von Mises-Fisher (vMF) distribution is the analogue of a normal probability distribution on the $(p-1)$-dimensional sphere in $\mathcal{R}^p$ [@Mardia2009]. The vMF distribution for a random $p$-dimensional vector $\mathbf{r}$ is given by $$f_p(\mathbf{r}\,|\,\mathbf{r}_0,\lambda)=C_p(\lambda)\,\exp(\lambda\,\mathbf{r}_0^T\mathbf{r}).$$ Here, $\lambda\geqslant0$, $|\mathbf{r}_0|=1$, and the normalization constant $C_p$ is equal to $$C_p(\lambda)=\frac{\lambda^{p/2-1}}{(2\pi)^{p/2}I_{p/2-1}(\lambda)},$$ where $I_\nu$ denotes the modified Bessel functions of the first kind [@Abramowitz]. The vMF distribution is thus a normal distribution on a sphere, where the parameter $\mathbf{r}_0$ is the mean direction of the distribution, and the parameter $\lambda$ is the concentration parameter – the higher its value, the higher the concentration of the distribution around the mean direction. A generalization of the vMF distribution to a bivariate normal distribution with an unconstrained covariance matrix is called the spherical Fisher-Bingham or Kent distribution [@Kent1982].
![image](figure7.png){width="1.5\columnwidth"}
In three dimensions – on a unit sphere $\mathcal{S}_2$ – the normalization constant of the vMF distribution reduces to $$C_3=\frac{\lambda}{4\pi\sinh\lambda}=\frac{\lambda}{2\pi(e^\lambda-e^{-\lambda})},$$ and we can thus write $$f_3(\mathbf{r}\,|\,\mathbf{r}_0,\lambda)=\frac{\lambda}{4\pi\sinh\lambda}\,\exp(\lambda\,\mathbf{r}_0^T\mathbf{r}).$$ Since any vector on the unit sphere can be represented in spherical coordinates as $$\mathbf{r}=(\cos\varphi\sin\vartheta,\sin\varphi\sin\vartheta,\cos\vartheta),$$ the exponent of the vMF distribution becomes $$\exp(\lambda\,\mathbf{r}_0^T\mathbf{r})=\exp(\lambda\cos\gamma_0),$$ where $\gamma_0$ denotes the great-circle distance between points $\Omega$ and $\Omega_0$, $$\cos\gamma_0=\cos\vartheta\cos\vartheta_0+\cos(\varphi-\varphi_0)\sin\vartheta\sin\vartheta_0.$$ With this, we can write the vMF distribution on a unit sphere centered around a point $\Omega_0$ as $$f_3(\Omega\,|\,\Omega_0,\lambda)=\frac{\lambda}{4\pi\sinh\lambda}\exp(\lambda\cos\gamma_0).$$ The distribution is normalized so that $$\oint_{\mathcal{S}_2}\mathrm{d}\Omega\,f_3(\Omega\,|\,\Omega_0,\lambda)=1.$$ The parameter $\lambda$ determines the concentration of the distribution centered around $\Omega_0$. For $\lambda=0$, the distribution is uniform on the sphere, while for $\lambda\to\infty$, the distribution tends to a Dirac $\delta$ function. Applying this to a distribution of a single point charge on a unit sphere \[Eq. \], Fig. \[fig:A1\] shows the distribution of a charge with $q=1$ located on the $x$-axis, for three different values of $\lambda$. When the concentration parameter is small, $\lambda=1$, the distribution extends across most of the unit sphere; however, with increasing $\lambda$, the influence of the charge becomes more and more localized.
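As an illustration of the expressions above, here is a short Python sketch that evaluates $f_3(\Omega\,|\,\Omega_0,\lambda)$ in the numerically stable form $\lambda\,e^{\lambda(\cos\gamma_0-1)}/[2\pi(1-e^{-2\lambda})]$ and checks its normalization on a simple midpoint grid; the grid resolution is an arbitrary choice.

```python
import numpy as np

def vmf_density(theta, phi, theta0, phi0, lam):
    """von Mises-Fisher density f_3 on the unit sphere, centered at (theta0, phi0).
    Written with exp(lam*(cos(gamma)-1)) so that it stays finite for large lam."""
    cosg = (np.cos(theta) * np.cos(theta0)
            + np.cos(phi - phi0) * np.sin(theta) * np.sin(theta0))
    return lam * np.exp(lam * (cosg - 1.0)) / (2.0 * np.pi * (1.0 - np.exp(-2.0 * lam)))

# normalization check with a midpoint rule and the spherical area element sin(theta)
n_t, n_p = 400, 800
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = (np.arange(n_p) + 0.5) * 2.0 * np.pi / n_p
T, P = np.meshgrid(theta, phi, indexing="ij")
f = vmf_density(T, P, theta0=np.pi / 2, phi0=0.0, lam=10.0)
integral = (f * np.sin(T)).sum() * (np.pi / n_t) * (2.0 * np.pi / n_p)
print(integral)   # close to 1
```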
\[sec:derivation\] Derivation of multipole coefficients of the vMF surface charge distribution
==============================================================================================
We start with the vMF surface charge distribution of a number of point charges $q_k$ with concentration parameters $\lambda_k$ and centered on positions $\Omega_k$ \[Eq. \]. From the corresponding multipole expansion, Eq. , we obtain for the multipole coefficients $$\sigma_{lm}=\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\,\exp(\lambda_k\cos\gamma_k).$$ The exponential function can be written as a power series, wherefrom we get $$\label{eq:tmp}
\sigma_{lm}=\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\sum_n\frac{\lambda_k^n}{n!}\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\,\cos^n\gamma_k.$$ Introducing $x_k=\cos\gamma_k$, we split the sum over $n$ into even and odd terms: $$\begin{aligned}
\label{eq:split}
\nonumber\sigma_{lm}&=&\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\left\{\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\sum_n\frac{\lambda_k^{2n}}{(2n)!}x_k^{2n}\right.\\
&&+\left.\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\sum_n\frac{\lambda_k^{2n+1}}{(2n+1)!}x_k^{2n+1}\right\}.\end{aligned}$$ Based on Ref. [@Arfken], we postulate that $$\label{eq:lemma}
\sum_{n=0}^{\infty}\alpha_nx_k^n=\sum_{m=0}^{\infty}a_mP_m(x_k),$$ where $P_m(x)$ are the Legendre polynomials, and the sum runs either over $m=n=\mathrm{even}$ or $m=n=\mathrm{odd}$. Using the orthogonality of the Legendre polynomials, we can write $$\sum_n\alpha_n\int_{-1}^1x^nP_m(x)\mathrm{d}x=\frac{2}{2m+1}\,a_m.$$ The integral can be split into two parts, and taking into account $P_m(-x)=(-1)^mP_m(x)$, we see that
$$\label{eq:am}
a_m=\frac{2m+1}{2}\sum_n\alpha_n\times\begin{dcases}
2\int_0^1x^nP_m(x)\mathrm{d}x & \text{,\quad$n+m$ even}\\
0 & \text{,\quad$n+m$ odd}
\end{dcases}\quad,$$
and the integral can be expressed in terms of $\Gamma$ functions [@Abramowitz]: $$\int_0^1x^nP_m(x)\mathrm{d}x=\frac{\sqrt{\pi}\,2^{-n-1}\Gamma(1+n)}{\Gamma(1+n/2-m/2)\Gamma(3/2+n/2+m/2)}.$$ By writing $a_m=\sum_n A_{mn}\alpha_n$, we get from Eq. $$A_{mn}=\frac{\sqrt{\pi}\,2^{-n-1}\,(2m+1)\Gamma(1+n)}{\Gamma(1+n/2-m/2)\Gamma(3/2+n/2+m/2)}\quad,\quad n+m\textrm{ even;}\quad m\leqslant n,$$ and $0$ otherwise. We immediately see that when $n=\mathrm{even}$, so is $m$, and conversely, when $n=\mathrm{odd}$, so is again $m$. Summing over even powers of $x$ in Eq. will thus yield only $P_n(x)$ of even order, and similarly for the sum over odd powers. In addition, the coefficients $A_{mn}$ are nonzero only when $m\leqslant n$. From Eq. we have $\alpha_n=\lambda^n/n!$, and so it follows that $$\label{eq:am2}
a_m=\sum_{n\geqslant m}^\infty A_{mn}\,\frac{\lambda^n}{n!}=\sum_{n\geqslant m}^\infty\frac{\lambda^n\sqrt{\pi}\,2^{-n-1}\,(2m+1)}{\Gamma(1+n/2-m/2)\,\Gamma(3/2+n/2+m/2)}\quad,\quad n+m\textrm{ even.}$$ Inserting now the theorem in Eq. into Eq. , we obtain $$\sigma_{lm}=\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\left\{\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\sum_sa_{2s}P_{2s}(x_k)+\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\sum_sa_{2s+1}P_{2s+1}(x_k)\right\}.$$ Next, we use the addition theorem for the spherical harmonics to write $$\begin{aligned}
\nonumber\sigma_{lm}&=&\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\left\{\sum_sa_{2s}\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\,\frac{4\pi}{2(2s)+1}\sum_{t=-2s}^{2s}Y_{2s,t}(\Omega)\,Y_{2s,t}^*(\Omega_k)+\right.\\
&&\phantom{\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\left\{\right.}\left.\sum_sa_{2s+1}\oint\mathrm{d}\Omega\,Y_{lm}^*(\Omega)\,\frac{4\pi}{2(2s+1)+1}\sum_{t=-2s+1}^{2s+1}Y_{2s+1,t}(\Omega)\,Y_{2s+1,t}^*(\Omega_k)\right\}.\end{aligned}$$ Rearranging the order of summation and integration, the integrals evaluate into Dirac $\delta$ functions. By virtue of this, the sums over $s$ and $t$ disappear, yielding $$\label{eq:slm1}
\sigma_{lm}=\sum_k\frac{q_k\lambda_k}{\sinh\lambda_k}\times\begin{dcases}
a_l\,\frac{4\pi}{2l+1}\,Y_{l,m}^*(\Omega_k) & \text{,\quad$l$ even}\\
a_l\,\frac{4\pi}{2l+1}\,Y_{l,m}^*(\Omega_k) & \text{,\quad$l$ odd}
\end{dcases}\quad.$$ Using the expression for the coefficients $a_m$ in Eq. , we obtain from Eq. : $$\label{eq:slm2}
\sigma_{lm}=4\pi\sum_kq_k\,Y_{lm}^*(\Omega_k)\sum_{s\geqslant l}^\infty\frac{\lambda^{s+1}_k}{\sinh\lambda_k}\,\frac{\sqrt{\pi}\,2^{-s-1}}{\Gamma(s/2-l/2+1)\,\Gamma(s/2+l/2+3/2)}\quad,\quad l+s\textrm{ even},$$ which holds true for both even and odd $l$. Introducing $$g_l(\lambda)=\sum_{s\geqslant l}^\infty\frac{\lambda^{s+1}}{\sinh\lambda}\,\frac{\sqrt{\pi}\,2^{-s-1}}{\Gamma(s/2-l/2+1)\,\Gamma(s/2+l/2+3/2)}\quad,\quad l+s\textrm{ even},$$
we can write the multipole coefficients as $$\label{eq:slm3}
\sigma_{lm}=4\pi\sum_kq_k\,g_l(\lambda_k)\,Y_{lm}^*(\Omega_k).$$ What is more, using Mathematica software [@Mathematica] we can show that the function $g_l(\lambda)$ evaluates to $$g_l(\lambda)=\frac{\lambda}{\sinh\lambda}\,i_l(\lambda),$$ where $$i_l(x)=\sqrt{\frac{\pi}{2x}}\,I_{l+1/2}(x)$$ are the modified spherical Bessel functions of the first kind [@Abramowitz]. Thus, we finally obtain the result of Eq. .
\[sec:mags\]Multipole magnitudes, total power, and bond order parameters
========================================================================
In order to obtain the multipole magnitudes from Eq. , we insert the expression for the multipole coefficients, Eq. , into the squared form of the magnitudes. Thus, we get $$\begin{aligned}
\nonumber S_l^2&=&\frac{(4\pi)^3}{2l+1}\sum_m\left[\left(\sum_kq_k\,g_l(\lambda_k)\,Y_{lm}^*(\Omega_k)\right)\right.\\
&&\times\left.\left(\sum_tq_t\,g_l(\lambda_t)\,Y_{lm}(\Omega_t)\right)\right].\end{aligned}$$ Using the addition theorem for spherical harmonics, we then obtain $$S_l^2=(4\pi)^2\sum_{k,t}q_k\,q_t\,g_l(\lambda_k)\,g_l(\lambda_t)\,P_l(\cos\gamma_{kt}).$$ Taking into account that $P_l(\cos\gamma_{kk}=1)=1$ and that $\cos\gamma_{kt}=\cos\gamma_{tk}$, this expression simplifies to $$\begin{aligned}
\nonumber S_l^2&=&(4\pi)^2\left[\sum_{k=t}q_k^2\,g_l^2(\lambda_k)\right.\\
&&+\left.2\sum_{k>t}q_k\,q_t\,g_l(\lambda_k)\,g_l(\lambda_t)\,P_l(\cos\gamma_{kt})\right].\end{aligned}$$ Normalizing this expression with $|S_0|=4\pi\sum_k|q_k|$, we obtain Eq. \[eq:Sl\]. Knowing the multipole magnitudes, we can also immediately write down the total power $$\label{eq:pow}
P=\int_\Omega|\sigma(\Omega)|^2\mathrm{d}\Omega=\sum_l\frac{4\pi}{2l+1}\sum_m|\sigma_{lm}|^2=\sum_l S_l^2.$$
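These final expressions lend themselves directly to numerical evaluation. A minimal Python sketch computing the normalized magnitudes $S_l/|S_0|$ and the truncated total power for an arbitrary configuration of charges $q_k$ with concentration parameters $\lambda_k>0$ (positions are given as unit vectors; SciPy is assumed):

```python
import numpy as np
from scipy.special import ive, eval_legendre

def g(l, lam):
    """g_l(lambda) = lambda/sinh(lambda) * i_l(lambda), in exponentially scaled form."""
    return np.sqrt(2.0 * np.pi * lam) * ive(l + 0.5, lam) / (1.0 - np.exp(-2.0 * lam))

def multipole_magnitude(l, q, lam, pts):
    """Normalized magnitude S_l/|S_0| for charges q_k, parameters lam_k,
    and unit-vector positions pts (shape N x 3)."""
    w = np.asarray(q) * g(l, np.asarray(lam))       # q_k * g_l(lambda_k)
    cosg = np.clip(pts @ pts.T, -1.0, 1.0)
    s2 = np.outer(w, w) * eval_legendre(l, cosg)    # full double sum over k and t
    return np.sqrt(max(s2.sum(), 0.0)) / np.abs(q).sum()

def total_power(q, lam, pts, l_max=30):
    """Normalized total power P/|P_0| truncated at l_max."""
    return sum(multipole_magnitude(l, q, lam, pts) ** 2 for l in range(l_max + 1))
```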
\[sec:limits\]Limiting cases
============================
In the limit where $\lambda\to0$, the asymptotic behavior of $g_l(\lambda)$ is given by $$\lim_{\lambda\to0}g_l(\lambda)\propto\lambda^l+\mathcal{O}(\lambda^{l+2}),$$ and specifically, $\lim_{\lambda\to0}g_0(\lambda)=1$. In this limit, the term with $l=0$ becomes dominant, and thus the only non-zero multipole coefficient $\sigma_{lm}$ is that with $l=m=0$. There, $\lim_{\lambda\to0}\sigma_{00}=\sqrt{4\pi}\,Q$, with $Q=\sum_kq_k$ the total charge on the sphere. Inserting this into the expression for the multipole expansion of the surface charge density, Eq. , we get indeed that $$\lim_{\lambda\to0}\sigma(\Omega)=\frac{Qe_0}{4\pi R^2},$$ a uniform distribution on a sphere.
In the other limit where $\lambda\to\infty$, the function $g_l(\lambda)$ always tends to $1$, independent of $l$: $$\lim_{\lambda\to\infty}g_l(\lambda)=1\quad\forall l.$$ The multipole coefficients $\sigma_{lm}$ simplify to $\lim_{\lambda\to\infty}\sigma_{lm}=4\pi\sum_kq_k\,Y_{lm}^*(\Omega_k)$, yielding $$\begin{aligned}
\nonumber\lim_{\lambda\to\infty}\sigma(\Omega)&=&\frac{e_0}{R^2}\sum_k q_k\sum_{l,m}Y_{lm}^*(\Omega_k)Y_{lm}(\Omega)\\
&=&\frac{e_0}{R^2}\sum_k q_k\,\delta(\Omega-\Omega_k),\end{aligned}$$ which is indeed a surface charge distribution composed of Dirac $\delta$ functions centered at $\Omega_k$. Here we also see why, when using Dirac $\delta$ functions for the description of point charges, in principle an infinite sum over $\ell$ is needed to represent the distribution correctly.
As for the multipole magnitudes, the only non-zero moment in the limit of $\lambda\to0$ is of course $S_0$, with all higher multipoles tending to zero, $\lim_{\lambda\to0}S_l/|S_0|=0$ for $l\geqslant1$. On the other hand, the multipole magnitudes in the limit of $\lambda\to\infty$ are determined purely by their geometrical factor $$\lim_{\lambda\to\infty}\frac{S_l}{|S_0|}=\frac{S_l^\infty}{|S_0|}=\sqrt{\frac{1}{N}+\frac{2}{N^2}\sum_{k>t}P_l(\cos\gamma_{kt})},$$ which is dependent on the spherical distances between the charges on the sphere, $\cos\gamma_{kt}$.
\[sec:extra\]Additional figures
===============================
Here, we show several additional figures complementing the results presented in the main text.
![Histogram of distances between charges, given by $\cos\gamma_{kt}$, and of the closest distance between two charges, $\cos\min\gamma_{kt}$ with $k\neq t$. The histograms were obtained from $5000$ different random and Mitchell configurations of $N=10$ identical charges. The dashed vertical line shows the minimum distance of the corresponding Thomson configuration. \[fig:E1\]](figure8.pdf){width="\columnwidth"}
![image](figure11.pdf){width="2\columnwidth"}
![Violin plot of the first $12$ multipole magnitudes for configurations of $N=20$ identical charges, generated either randomly or by using Mitchell’s algorithm. Each entry in the violin plot shows a (mirrored) distribution of normalized magnitudes of $5000$ different configurations, with the central symbols denoting the mean and the bars denoting the corresponding standard deviation. Star symbols show the multipole magnitudes of the corresponding Thomson configuration. The plot is shown for four different values of the concentration parameter $\lambda$. \[fig:E2\]](figure9.pdf){width="\columnwidth"}
![Normalized total power $P_l/|P_0|$ of random and Mitchell configurations of $N=20$ identical charges, obtained by summing the squares of multipole magnitudes up to order $\ell$. Full lines show the mean values, obtained by averaging over $5000$ different configurations, while the shaded regions denote the corresponding standard deviations. The latter are negligible for Mitchell configurations. Dashed lines and star symbols show the total power for the corresponding Thomson configuration. The plot is shown for four different values of the concentration parameter $\lambda$. \[fig:E3\]](figure10.pdf){width="\columnwidth"}
[10]{}
C. Holm, P. K[é]{}kicheff, and R. Podgornik, eds., [*Electrostatic Effects in Soft Matter and Biophysics*]{}, vol. 46 of [*NATO Science Series II – Mathematics, Physics and Chemistry*]{}. Springer, 2012.
E. Bianchi, B. Capone, I. Coluzza, L. Rovigatti, and P. D. J. van Oostrum, [*Limiting the valence: advancements and new perspectives on patchy colloids[,]{} soft functionalized nanoparticles and biomolecules*]{}, Phys. Chem. Chem. Phys. [**19**]{}, 19847–19868 (2017).
E. Bianchi, P. D. van Oostrum, C. N. Likos, and G. Kahl, [*Inverse patchy colloids: Synthesis, modeling and self-organization*]{}, Curr. Op. Colloid Interface Sci. [**30**]{}, 8–15 (2017).
A. Šiber, A. [Lošdorfer Božič]{}, and R. Podgornik, [ *Energies and pressures in viruses: contribution of nonspecific electrostatic interactions*]{}, Phys. Chem. Chem. Phys. [**14**]{}, 3746–3765 (2012).
Y. Bai, Q. Luo, and J. Liu, [*Protein self-assembly via supramolecular strategies*]{}, Chem. Soc. Rev. [**45**]{}, 2756–2767 (2016).
E. Bianchi, C. N. Likos, and G. Kahl, [*Tunable assembly of heterogeneously charged colloids*]{}, Nano Lett. [**14**]{}, 3412 (2014).
M. Barisik, S. Atalay, A. Beskok, and S. Qian, [*Size dependent surface charge properties of silica nanoparticles*]{}, J. Phys. Chem. C [**118**]{}, 1836–1842 (2014).
R. Kusters, H.-K. Lin, R. Zandi, I. Tsvetkova, B. Dragnea, and P. van der Schoot, [*Role of charge regulation and size polydispersity in nanoparticle encapsulation by viral coat proteins*]{}, J. Phys. Chem. B [**119**]{}, 1869–80 (2015).
M. Sabapathy, R. A. M. K, and E. Mani, [*Self-assembly of inverse patchy colloids with tunable patch coverage*]{}, Phys. Chem. Chem. Phys. [ **19**]{}, 13122–13132 (2017).
A. [Lošdorfer Božič]{} and R. Podgornik, [*[pH]{} dependence of charge multipole moments in proteins*]{}, Biophys. J. [**113**]{}, 1454–1465 (2017).
M. Krishnan, [*A simple model for electrical charge in globular macromolecules and linear polyelectrolytes in solution*]{}, J. Chem. Phys. [**146**]{}, 205101 (2017).
R. J. Nap, A. [Lošdorfer Božič]{}, I. Szleifer, and R. Podgornik, [*The role of solution conditions in the bacteriophage [PP7]{} capsid charge regulation*]{}, Biophys. J. [**107**]{}, 1970–1979 (2014).
A. I. Abrikosov, B. Stenqvist, and M. Lund, [*Steering patchy particles using multivalent electrolytes*]{}, Soft Matter (2017).
P. Ni, Z. Wang, X. Ma, N. C. Das, P. Sokol, W. Chiu, B. Dragnea, M. Hagan, and C. C. Kao, [*An examination of the electrostatic interactions between the N-terminal tail of the brome mosaic virus coat protein and encapsidated RNAs*]{}, J. Mol. Biol. [**419**]{}, 284–300 (2012).
A. Warshel, P. K. Sharma, M. Kato, and W. W. Parson, [*Modeling electrostatic effects in proteins*]{}, Biochim. Biophys. Acta Proteins Proteomics [**1764**]{}, 1647–1676 (2006).
R. M. Adar, D. Andelman, and H. Diamant, [*Electrostatics of patchy surfaces*]{}, Adv. Colloid Interface Sci. [**247**]{}, 198–207 (2017).
M. Grant, [*Nonuniform charge effects in protein- protein interactions*]{}, J. Phys. Chem. B [**105**]{}, 2858–2863 (2001).
A. [Lošdorfer Božič]{} and R. Podgornik, [*Symmetry effects in electrostatic interactions between two arbitrarily charged shells in the [D]{}ebye-[H]{}ückel approximation*]{}, J. Chem. Phys. [**138**]{}, 074902 (2013).
S. Li, G. Erdemci-Tandogan, J. Wagner, P. van der Schoot, and R. Zandi, [ *Impact of a nonuniform charge distribution on virus assembly*]{}, Phys. Rev. E [**96**]{}, 022401 (2017).
W. Li, B. A. Persson, M. Morin, M. A. Behrens, M. Lund, and M. Zackrisson Oskolkova, [*Charge-induced patchy attractions between proteins*]{}, J. Phys. Chem. B [**119**]{}, 503–508 (2015).
J. F. Vega, E. Vicente-Alique, R. Núñez-Ramírez, Y. Wang, and J. Martínez-Salazar, [*Evidences of Changes in Surface Electrostatic Charge Distribution during Stabilization of HPV16 Virus-Like Particles*]{}, PLoS ONE [**11**]{}, 1–17 (2016).
M. A. Blanco and V. K. Shen, [*Effect of the surface charge distribution on the fluid phase behavior of charged colloids and proteins*]{}, J. Chem. Phys. [**145**]{}, 155102 (2016).
J. M. Dempster and M. Olvera de la Cruz, [*Aggregation of heterogeneously charged colloids*]{}, ACS Nano [**10**]{}, 5909–5915 (2016).
C. Yigit, J. Heyda, and J. Dzubiella, [*Charged patchy particle models in explicit salt: Ion distributions, electrostatic potentials, and effective interactions*]{}, J. Chem. Phys. [**143**]{}, 064904 (2015).
C. Yigit, M. Kanduč, M. Ballauff, and J. Dzubiella, [*Interaction of Charged Patchy Protein Models with Like-Charged Polyelectrolyte Brushes*]{}, Langmuir [**33**]{}, 417–427 (2017).
G. Silbert, D. Ben-Yaakov, Y. Dror, S. Perkin, N. Kampf, and J. Klein, [ *Long-Ranged Attraction between Disordered Heterogeneous Surfaces*]{}, Phys. Rev. Lett. [**109**]{}, 168305 (2012).
S. Perkin, N. Kampf, and J. Klein, [*Long-Range Attraction between Charge-Mosaic Surfaces across Water*]{}, Phys. Rev. Lett. [**96**]{}, 038301 (2006).
E. E. Meyer, Q. Lin, T. Hassenkam, E. Oroudjev, and J. N. Israelachvili, [ *Origin of the long-range attraction between surfactant-coated surfaces*]{}, Proc. Natl. Acad. Sci. USA [**102**]{}, 6839–6842 (2005).
T. Hoppe, [*A simplified representation of anisotropic charge distributions within proteins*]{}, J. Chem. Phys. [**138**]{}, 174110 (2013).
M. Stipsitz, G. Kahl, and E. Bianchi, [*Generalized inverse patchy colloid model*]{}, J. Chem. Phys. [**143**]{}, 114905 (2015).
C. E. Felder, J. Prilusky, I. Silman, and J. L. Sussman, [*A server and database for dipole moments of proteins*]{}, Nucleic Acids Res. [**35**]{}, W512–W521 (2007).
H. Nakamura and A. Wada, [*Nature of the charge distribution in proteins. III. Electric multipole structures*]{}, J. Phys. Soc. Jpn. [**54**]{}, 4047–4052 (1985).
R. Paulini, K. M[ü]{}ller, and F. Diederich, [*Orthogonal multipolar interactions in structural chemistry and biology*]{}, Angew. Chem. Int. Ed. [**44**]{}, 1788–1805 (2005).
S. Parimal, S. M. Cramer, and S. Garde, [*Application of a spherical harmonics expansion approach for calculating ligand density distributions around proteins*]{}, J. Phys. Chem. B [**118**]{}, 13066–13076 (2014).
J. Y. Kim, S. H. Ahn, S. T. Kang, and B. J. Yoon, [*Electrophoretic mobility equation for protein with molecular shape and charge multipole effects*]{}, J. Colloid Interface Sci. [**299**]{}, 486–492 (2006).
A. Gramada and P. E. Bourne, [*Multipolar representation of protein structure*]{}, BMC Bioinformatics [**7**]{}, 242 (2006).
V. Lorman and S. Rochal, [*Density-wave theory of the capsid structure of small icosahedral viruses*]{}, Phys. Rev. Lett. [**98**]{}, 185502 (2007).
V. Lorman and S. Rochal, [*Landau theory of crystallization and the capsid structures of small icosahedral viruses*]{}, Phys. Rev. B [**77**]{}, 224109 (2008).
A. [Lošdorfer Božič]{}, A. Šiber, and R. Podgornik, [ *Electrostatic self-energy of a partially formed spherical shell in salt solution: Application to stability of tethered and fluid shells as models for viruses and vesicles*]{}, Phys. Rev. E [**83**]{}, 041916 (2011).
K. V. Mardia and P. E. Jupp, [*Directional statistics*]{}, vol. 494. John Wiley & Sons, 2009.
M. Abramowitz and I. A. Stegun, [*Handbook of mathematical functions*]{}, vol. 55. Dover Publications, 1964.
D. P. Mitchell, [*Spectrally Optimal Sampling for Distribution Ray Tracing*]{}, SIGGRAPH Comput. Graph. [**25**]{}, 157–164 (1991).
D. J. Wales and S. Ulker, [*Structure and dynamics of spherical crystals characterized for the Thomson problem*]{}, Phys. Rev. B [**74**]{}, 212101 (2006).
J. P. Snyder, [*Map projections – A working manual*]{}, vol. 1395. US Government Printing Office, Washington, DC, 1987.
A. Gelessus, W. Thiel, and W. Weber, [*Multipoles and symmetry*]{}, J. Chem. Ed. [**72**]{}, 505 (1995).
A. [Lošdorfer Božič]{}, A. Šiber, and R. Podgornik, [ *Statistical analysis of sizes and shapes of virus capsids and their resulting elastic properties*]{}, J. Biol. Phys. [**39**]{}, 215–228 (2013).
A. Stone, [*Distributed multipole analysis, or how to describe a molecular charge distribution*]{}, Chem. Phys. Lett. [**83**]{}, 233–239 (1981).
S. Larsson and M. Braga, [*Atomic charges based on spherical harmonics expansion at the atomic centers*]{}, Theor. Chim. Acta [**68**]{}, 291–300 (1985).
S. Dharmavaram, F. Xie, W. Klug, J. Rudnick, and R. Bruinsma, [ *Orientational phase transitions and the assembly of viral capsids*]{}, Phys. Rev. E [**95**]{}, 062402 (2017).
J. T. Kent, [*The Fisher-Bingham distribution on the sphere*]{}, J. R. Stat. Soc. Series B Stat. Methodol. [**44**]{}, 71–80 (1982).
G. B. Arfken and H. J. Weber, [*Mathematical methods for physicists*]{}. Academic Press, San Diego, CA, 4th ed., 1995.
Wolfram Research, Inc., [*Mathematica 8.0*]{}, 2010.
---
abstract: |
Consider a dynamic programming scheme for a decision problem in which all subproblems involved are also decision problems. An implementation of such a scheme is [*positive-instance driven*]{} (PID), if it generates positive subproblem instances, but not negative ones, building each on smaller positive instances.
We take the dynamic programming scheme due to Bouchitté and Todinca for treewidth computation, which is based on minimal separators and potential maximal cliques, and design a variant (for the decision version of the problem) with a natural PID implementation. The resulting algorithm performs extremely well: it solves a number of standard benchmark instances for which the optimal solutions have not previously been known. Incorporating a new heuristic algorithm for detecting safe separators, it also solves all of the 100 public instances posed by the exact treewidth track in PACE 2017, a competition on algorithm implementation.
We describe the algorithm, prove its correctness, and give a running time bound in terms of the number of positive subproblem instances. We perform an experimental analysis which supports the practical importance of such a bound.
author:
- Hisao Tamaki
date: 'Received: date / Accepted: date'
title: 'Positive-instance driven dynamic programming for treewidth [^1]'
---
Introduction
============
Suppose we design a dynamic programming algorithm for some decision problem, formulating subproblems, which are decision problems as well, and recurrences among those subproblems. A standard approach is to list all subproblem instances and scan the list from “small" ones to “large" ones, deciding the answer, positive or negative, to each instance by means of these recurrences. When the number of positive subproblem instances is expected to be much smaller than the total number of subproblem instances, a natural alternative is to generate positive instances only, using recurrences to combine positive instances to generate a “larger" positive instance. We call such a mode of dynamic programming execution [*positive-instance driven*]{} or [*PID*]{} for short. One goal of this paper is to demonstrate that PID is not simply a low-level implementation strategy but can be a paradigm of algorithm design for some problems.
The decision problem we consider is that of deciding, given graph $G$ and positive integer $k$, if the treewidth of $G$ is at most $k$. This graph parameter was introduced by Robertson and Seymour [@RS86] and has had a tremendous impact on graph theory and on the design of graph algorithms (see, for example, a survey [@BK08]). The treewidth problem is NP-complete [@ACP87] but fixed-parameter tractable: it has an $f(k)n^{O(1)}$ time algorithm for some fixed function $f(k)$ as implied by the graph minor theorem of Robertson and Seymour [@RS04], and an explicit $O(f(k)n)$ time algorithm was given by Bodlaender [@Bod96]. A classical dynamic programming algorithm due to Arnborg, Corneil, and Proskurowski (ACP algorithm) [@ACP87] runs in $n^{k + O(1)}$ time. Bouchitté and Todinca [@BT02] developed a more refined dynamic programming algorithm (BT algorithm) based on the notions of minimal separators and potential maximal cliques, which led to algorithms running in $O(1.7549^n)$ time or in $O(n^5 \tbinom{\lceil (2n + k + 8)/3 \rceil}{k + 2})$ time [@FKTV08; @FV12]. Another important approach to treewidth computation is based on the perfect elimination order (PEO) of minimal chordal completions of the given graph. PEO-based dynamic programming algorithms run in $O^*(2^n)$ time with exponential space and in $O^*(4^n)$ time with polynomial space [@BFKKT12], where $O^*(f(n))$ means $O(n^c f(n))$ for some constant $c$.
There has been a considerable amount of effort on implementing treewidth algorithms to be used in practice and, prior to this work, the most successful implementations for exact treewidth computation are all based on PEO. The authors of [@BFKKT12] implemented the $O^*(2^n)$ time dynamic programming algorithm and experimented on its performance, showing that it works well for small instances. For larger instances, PEO-based branch-and-bound algorithms are known to work well in practice [@GD04]. Recent proposals for reducing treewidth computation to SAT solving are also based on PEO [@SH09; @BJ14]. From the PID perspective, this situation is somewhat surprising, since it can be shown that each positive subproblem instance in the PEO-based dynamic programming scheme corresponds to a combination of an indefinite number of positive subproblem instances in the ACP algorithm, and hence the number of positive subproblem instances can be exponentially larger than that in the ACP algorithm. Indeed, a PID variant of the ACP algorithm was implemented by the present author and has won the first place in the exact treewidth track of PACE 2016 [@DHJKKR17], a competition on algorithm implementations, outperforming other submissions based on PEO. Given this success, a natural next step is to design a PID variant of the BT algorithm, which is tackled in this paper.
The resulting algorithm performs extremely well, as reported in Section \[sec:performance\]. It is tested on DIMACS graph-coloring instances [@JT93], which have been used in the literature on treewidth computation as standard benchmark instances [@GD04; @BK06; @Musliu08; @SH09; @BFKKT12; @BJ14]. Our implementation of the algorithm solves all the instances that have been previously solved (that is, with matching upper and lower bounds known) within 10 seconds per instance on a typical desktop computer and solves 13 out of the 42 previously unsolved instances. For nearly half of the instances which it leaves unsolved, it significantly reduces the gap between the lower and upper bounds. It is interesting to note that this is done by improving the lower bound. Since the number of positive subproblem instances is much smaller when $k < {{\mathop{\rm tw}}}(G)$ than when $k = {{\mathop{\rm tw}}}(G)$, the PID approach is particularly good at establishing strong lower bounds.
We also adopt the notion of safe separators due to Bodlaender and Koster [@BK06] in our preprocessing and design a new heuristic algorithm for detecting safe separators. With this preprocessing, our implementation also solves all of the 100 public instances posed by PACE 2017 [@PACE17], the successor of PACE 2016. It should be noted that these test instances of PACE 2017 are much harder than those of PACE 2016: the winning implementation of PACE 2016 mentioned above, which solved 199 of the 200 instances therein, solves only 62 of these 100 instances of PACE 2017 in the given time of 30 minutes per instance.
Adapting the BT algorithm to work in PID mode has turned out to be non-trivial. Each subproblem instance in the BT algorithm for given graph $G$ and positive integer $k$ takes the form of a connected set $C$ of $G$ such that $N_G(C)$, the open neighborhood of $C$ in $G$, is a minimal separator of $G$ with cardinality at most $k$. For each such $C$, we ask if $C$ is [*feasible*]{}, in the sense that there is a tree decomposition of the subgraph of $G$ induced by $C \cup N_G(C)$ of width at most $k$ that has a bag containing $N_G(C)$ (see Section \[sec:prelim\] for the definition of a tree decomposition of a graph). The difficulty of making the BT algorithm PID comes from the fact that the recurrence for deciding if $C$ is feasible may involve an indefinite number of connected sets $C'$ such that $C' \subset C$. Thus, even if the number of positive instances is small, there is a possibility that the running time is exponential in that number. We approach this issue by introducing an auxiliary structure we call O-blocks (see Section \[sec:oriented\]) and formulate recurrences that are binary: a combination of a feasible connected set and a feasible O-block may yield either a larger feasible connected set or a larger feasible O-block. Due to this binary recurrence, we obtain an upper bound on the running time of our algorithm which is sensitive to the number of subproblem instances (Observation \[obs:run\_time\] in Section \[sec:time\]). To support the significance of such a bound, we perform an experimental analysis which shows the existence of huge gaps between the actual number of combinatorial objects corresponding to subproblems and the known theoretical upper bounds.
The rest of this paper is organized as follows. In Section \[sec:prelim\], we introduce notation, define basic concepts and review facts in the literature. In Section \[sec:oriented\], we precisely define the subproblems in our dynamic programming algorithm and formulate recurrences. We describe our algorithm and prove its correctness in Section \[sec:algorithm\] and then analyze its running time in Section \[sec:time\]. In Section \[sec:experimental\], we describe our experimental analysis. In Section \[sec:implementation\], we describe some implementation details. Finally, in Section \[sec:performance\], we give details of the performance results sketched above.
Preliminaries {#sec:prelim}
=============
In this paper, all graphs are simple, that is, without self loops or parallel edges. Let $G$ be a graph. We denote by $V(G)$ the vertex set of $G$ and by $E(G)$ the edge set of $G$. For each $v \in V(G)$, $N_G(v)$ denotes the set of neighbors of $v$ in $G$: $N_G(v) = \{u \in V(G) \mid \{u, v\} \in E(G)\}$. For $U \subseteq V(G)$, the [*open neighborhood of $U$ in $G$*]{}, denoted by $N_G(U)$, is the set of vertices adjacent to some vertex in $U$ but not belonging to $U$ itself: $N_G(U) = (\bigcup_{v \in U} N_G(v)) \setminus U$. The [*closed neighborhood of $U$ in $G$*]{}, denoted by $N_G[U]$, is defined by $N_G[U] = U \cup N_G(U)$. We also write $N_G[v]$ for $N_G[\{v\}] = N_G(v)
\cup \{v\}$. We denote by $G[U]$ the subgraph of $G$ induced by $U$: $V(G[U]) = U$ and $E(G[U]) = \{\{u, v\} \in E(G) \mid u, v \in U\}$. In the above notation, as well as in the notation further introduced below, we will often drop the subscript $G$ when the graph is clear from the context.
We say that vertex set $C \subseteq V(G)$ is [*connected in*]{} $G$ if, for every $u, v \in C$, there is a path in $G[C]$ between $u$ and $v$. It is a [*connected component*]{} of $G$ if it is connected and is inclusion-wise maximal subject to this condition. A vertex set $C$ in $G$ is a [*component associated with $S \subseteq V(G)$*]{}, if $C$ is a connected component of $G[V(G) \setminus S]$. For each $S \subseteq V(G)$, we denote by ${{\mathcal C}}_G(S)$ the set of all components associated with $S$. A vertex set $S \subseteq V(G)$ is a [*separator*]{} of $G$ if $|{{\mathcal C}}_G(S)| > |{{\mathcal C}}_G(\emptyset)|$, that is, if its removal increases the number of connected components of $G$. A component $C$ associated with separator $S$ of $G$ is a [*full component*]{} if $N_G(C) = S$. A separator $S$ is a [*minimal separator*]{} if there are at least two full components associated with $S$. This term is justified by this fact: if $S$ is a minimal separator and $a$, $b$ are vertices belonging to two distinct full components associated with $S$, then for every proper subset $S'$ of $S$, $a$ and $b$ belong to the same component associated with $S'$; $S$ is a minimal set of vertices that separates $a$ from $b$. A [*block*]{} is a pair $(S, C)$, where $S$ is a separator and $C$ is a component associated with $S$; it is a [*full block*]{} if $C$ is a full component, that is, $S = N(C)$.
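The definitions of components, full components, and minimal separators translate directly into code. The following is a minimal Python sketch, assuming for illustration that the graph is stored as a dictionary mapping each vertex to its set of neighbors:

```python
def components(G, S):
    """Connected components of G - S; G maps each vertex to a set of neighbors."""
    seen, comps = set(S), []
    for v in G:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(w for w in G[u] if w not in seen)
        comps.append(comp)
    return comps

def neighborhood(G, C):
    """Open neighborhood N_G(C)."""
    return set().union(*(G[v] for v in C)) - set(C)

def is_minimal_separator(G, S):
    """S is a minimal separator iff at least two full components are associated with S."""
    full = [C for C in components(G, S) if neighborhood(G, C) == set(S)]
    return len(full) >= 2

# small example: in the 4-cycle 0-1-2-3, {0, 2} separates 1 from 3 and is minimal
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(components(G, {0, 2}), is_minimal_separator(G, {0, 2}))   # [{1}, {3}] True
```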
Graph $H$ is [*chordal*]{} if every induced cycle of $H$ has length exactly three. $H$ is a [*minimal chordal completion of $G$*]{} if it is chordal, $V(H) = V(G)$, $E(G) \subseteq E(H)$, and $E(H)$ is minimal subject to these conditions. A vertex set $\Omega \subseteq V(G)$ is a [*potential maximal clique*]{} of $G$ if $\Omega$ is a clique in some minimal chordal completion of $G$.
A [*tree-decomposition*]{} of $G$ is a pair $(T, {{\mathcal X}})$ where $T$ is a tree and ${{\mathcal X}}$ is a family $\{X_i\}_{i \in V(T)}$ of vertex sets of $G$ such that the following three conditions are satisfied. We call members of $V(T)$ [*nodes*]{} of $T$ and each $X_i$ the [*bag*]{} at node $i$.
1. $\bigcup_{i \in V(T)} X_i = V(G)$.
2. For each edge $\{u, v\} \in E(G)$, there is some $i \in V(T)$ such that $u, v \in X_i$.
3. The set of nodes $I_v = \{i \in V(T) \mid v \in X_i\}$ of $V(T)$ induces a connected subtree of $T$.
The [*width*]{} of this tree-decomposition is $\max_{i \in V(T)} |X_i| - 1$. The [*treewidth*]{} of $G$, denoted by ${{\mathop{\rm tw}}}(G)$ is the minimum width of all tree-decompositions of $G$. We may assume that the bags $X_i$ and $X_j$ are distinct from each other for $i \neq j$ and, under this assumption, we will often regard a tree-decomposition as a tree $T$ in which each node is a bag.
We call a tree-decomposition $T$ of $G$ [*canonical*]{} if each bag of $T$ is a potential maximal clique of $G$ and, for every pair $X$, $Y$ of adjacent bags in $T$, $X \cap Y$ is a minimal separator of $G$. The following fact is well-known. It easily follows, for example, from Proposition 2.4 in [@BT01].
\[lem:pmc\_decompose\] Let $G$ be an arbitrary graph. There is a tree-decomposition $T$ of $G$ of width ${{\mathop{\rm tw}}}(G)$ that is canonical.
The following local characterization of a potential maximal clique is crucial. We say that a vertex set $S \subseteq V(G)$ is [*cliquish*]{} in $G$ if, for every pair of distinct vertices $u$ and $v$ in $S$, either $u$ and $v$ are adjacent to each other or there is some $C \in {{\mathcal C}}(S)$ such that $u, v \in N(C)$. In other words, $S$ is cliquish if completing $N(C)$ for every $C \in {{\mathcal C}}(S)$ into a clique makes $S$ a clique.
\[lem:pmc\_charact\] (Theorem 3.15 in [@BT01]) A separator $S$ of $G$ is a potential maximal clique of $G$ if and only if (1) $S$ has no full-component associated with it and (2) $S$ is cliquish.
It is also shown in [@BT01] that if $\Omega$ is a potential maximal clique of $G$ and $S$ is a minimal separator contained in $\Omega$, then there is a unique component $C_S$ associated with $S$ that contains $\Omega \setminus S$. We need an explicit way of forming $C_S$ from $\Omega$ and $S$.
Let $K \subseteq V(G)$ be an arbitrary vertex set and $S$ an arbitrary proper subset of $K$. We say that a component $C \in {{\mathcal C}}(K)$ is [*confined to $S$*]{} if $N(C) \subseteq S$; otherwise it is [*unconfined to $S$*]{}. Let ${{\mathop{\rm unconf}}}(S, K)$ denote the set of components associated with $K$ that are unconfined to $S$. Define the [*crib*]{} of $S$ with respect to $K$, denoted by ${{\mathop{\rm crib}}}(S, K)$, to be $(K \setminus S)
\cup \bigcup_{C \in {{\mathop{\rm unconf}}}(S, K)} C$: it is the union of $K \setminus S$ and all those components associated with $K$ that have neighborhoods intersecting $K \setminus S$.
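In code, the crib is obtained directly from this definition; a small sketch, again reusing the `components` and `neighborhood` helpers from above:

```python
def crib(G, S, K):
    """crib(S, K): the union of K \\ S with all components associated with K
    that are unconfined to S (i.e. have a neighbor outside S)."""
    S, K = set(S), set(K)
    result = set(K - S)
    for C in components(G, K):
        if not neighborhood(G, C) <= S:    # unconfined to S
            result |= C
    return result
```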
The following lemma relies only on the second property of potential maximal cliques, namely that they are cliquish, and will be applied not only to potential maximal cliques but also to separators with full components, which are trivially cliquish.
\[lem:crib\] Let $K \subseteq V(G)$ be a cliquish vertex set. Let $S$ be an arbitrary proper subset of $K$. Then, ${{\mathop{\rm crib}}}(S, K)$ is a full component associated with $S$.
Let $C = {{\mathop{\rm crib}}}(S, K)$. We first show that $G[C]$ is connected. Suppose $K \setminus S$ has two distinct vertices $u$ and $v$. Since $K$ is cliquish, either $u$ and $v$ are adjacent to each other or there is some component $C' \in {{\mathcal C}}(K)$ such that $u, v \in N(C')$. In the latter case, as $C'$ is unconfined to $S$, we have $C' \subseteq C$. Therefore, $u$ and $v$ belong to the same connected component of $G[C]$. As this applies to every pair of vertices in $K \setminus S$, $K \setminus S$ is contained in a single connected component of $G[C]$. Moreover, each component $C' \in {{\mathcal C}}(K)$ contained in $C$ is unconfined to $S$, by the definition of ${{\mathop{\rm crib}}}(S, K)$, and hence has a neighbor in $K \setminus S$. Therefore, we conclude that $G[C]$ is connected. Each vertex $v$ not in $S \cup C$ belongs to some component in ${{\mathcal C}}(K)$ that is confined to $S$ and hence does not have a neighbor in $C$. Therefore, $C$ is a component associated with $S$.
To see that $C$ is a full component, let $u \in S$ and $v \in K \setminus S$ be arbitrary. Since $K$ is cliquish, either $u$ and $v$ are adjacent to each other or there is some $C' \in {{\mathcal C}}(K)$ such that $u, v \in N(C')$. As such $C'$ is unconfined to $S$ in the latter case, we conclude that $u \in N(C)$ in either case. Since this holds for arbitrary $u \in S$, we conclude that $C$ is a full component associated with $S$.
As ${{\mathop{\rm crib}}}(S, K)$ contains $K \setminus S$, it is clear that it is the only component associated with $S$ that intersects $K$. Therefore, the above mentioned assertion on potential maximal cliques is a corollary to this Lemma.
Recurrences on oriented minimal separators {#sec:oriented}
==========================================
In this section, we fix graph $G$ and positive integer $k$ that are given in the problem instance: we are to decide if the treewidth of $G$ is at most $k$. We assume that $G$ is connected.
For connected set $C \subseteq V(G)$, we denote by $G\langle C \rangle$ the graph obtained from $G[N[C]]$ by completing $N(C)$ into a clique: $V(G\langle C \rangle) = N[C]$ and $E(G\langle C \rangle) = E(G[N[C]]) \cup \{\{u, v\} \mid u,v \in N(C),
u \neq v\}$. We say $C$ is [*feasible*]{} if ${{\mathop{\rm tw}}}(G\langle C \rangle) \leq k$. Equivalently, $C$ is feasible if $G[N[C]]$ has a tree-decomposition of width $k$ or smaller that has a bag containing $N(C)$.
Let us first review the BT algorithm [@BT01], adapting it to our decision problem. We first list all minimal separators of cardinality $k$ or smaller and all potential maximal cliques of cardinality $k + 1$ or smaller. Then, for each pair of a potential maximal clique $\Omega$ and a minimal separator $S$ such that $S \subset \Omega$, place a link from $S$ to $\Omega$. To understand the difficulty of formulating a PID variant of the algorithm, it is important to note that the pair $(\Omega, S)$ to be linked is easy to find from the side of $\Omega$, but not the other way round. Then, we scan the full blocks $(N(C), C)$ of minimal separators in the increasing order of $|C|$ to decide if $C$ is feasible, using the following recurrence: $C$ is feasible if and only if there is some potential maximal clique $\Omega$ such that $N(C) \subset \Omega$, $C = {{\mathop{\rm crib}}}(N(C), \Omega)$, and every component $D \in {{\mathop{\rm unconf}}}(N(C), \Omega)$ is feasible. Finally, we have ${{\mathop{\rm tw}}}(G) \leq k$ if and only if there is a potential maximal clique $\Omega$ with $|\Omega| \leq k + 1$ such that every component associated with $\Omega$ is feasible.
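To illustrate this recurrence, and only the recurrence, the following hedged Java sketch decides feasibility of a full block by memoized recursion, under the assumption that the relevant potential maximal cliques (of cardinality at most $k + 1$) are supplied as input; listing them, which is the expensive part of the BT algorithm, is not shown. The helpers come from the `SeparatorBasics` sketch above, and the class name is ours.

```java
import java.util.*;

// Sketch of the BT recurrence: C is feasible iff some potential maximal
// clique Omega properly containing N(C) satisfies crib(N(C), Omega) = C and
// every component associated with Omega that is unconfined to N(C) is
// feasible.  The list of relevant potential maximal cliques is assumed given.
public class BTRecurrence {
    final List<Set<Integer>> adj;
    final List<Set<Integer>> pmcs;                  // assumed precomputed
    final Map<Set<Integer>, Boolean> memo = new HashMap<>();

    BTRecurrence(List<Set<Integer>> adj, List<Set<Integer>> pmcs) {
        this.adj = adj;
        this.pmcs = pmcs;
    }

    // Feasibility of the full block (N(C), C).
    boolean feasible(Set<Integer> c) {
        Boolean cached = memo.get(c);
        if (cached != null) return cached;
        Set<Integer> nc = SeparatorBasics.neighborhood(adj, c);
        boolean ok = false;
        for (Set<Integer> omega : pmcs) {
            // N(C) must be a proper subset of Omega and crib(N(C), Omega) = C.
            if (!omega.containsAll(nc) || omega.equals(nc)) continue;
            if (!SeparatorBasics.crib(adj, nc, omega).equals(c)) continue;
            boolean allFeasible = true;
            for (Set<Integer> d : SeparatorBasics.components(adj, omega)) {
                boolean unconfined = !nc.containsAll(SeparatorBasics.neighborhood(adj, d));
                if (unconfined && !feasible(d)) { allFeasible = false; break; }
            }
            if (allFeasible) { ok = true; break; }
        }
        memo.put(c, ok);
        return ok;
    }
}
```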
To facilitate the PID construction, we orient minimal separators as follows. We assume a total order $<$ on $V(G)$. For each vertex set $U \subseteq V(G)$, the [*minimum element*]{} of $U$, denoted by $\min(U)$, is the smallest element of $U$ under $<$. For vertex sets $U$ and $W$, we say [*$U$ precedes $W$*]{} and write $U \prec W$ if $\min(U) < \min(W)$.
We say that a connected set $C$ is [*inbound*]{} if there is some full block associated with $N(C)$ that precedes $C$; otherwise, it is [*outbound*]{}. Observe that if $C$ is inbound then $N(C)$ is a minimal separator, since $N(C)$ has another full component associated with it and, contrapositively, if $N(C)$ is not a minimal separator then $C$ is necessarily outbound. We say a full block $(N(C), C)$ is [*inbound*]{} ([*outbound*]{}) if $C$ is inbound (outbound, respectively).
\[lem:pmc\_outbounds\] Let $K$ be a cliquish vertex set and let $A_1, A_2$ be two components associated with $K$. Suppose that $A_1$ and $A_2$ are outbound. Then, either $N(A_1) \subseteq N(A_2)$ or $N(A_2) \subseteq N(A_1)$.
Let $K$, $A_1$, and $A_2$ be as above and suppose neither of $N(A_1)$ and $N(A_2)$ is a subset of the other. For $i = 1, 2$, let $C_i = {{\mathop{\rm crib}}}(N(A_i), K)$. Since $N(A_2) \setminus N(A_1)$ is non-empty and contained in $K \setminus N(A_1)$, $A_2$ is contained in $C_1$. We have $A_1 \prec C_1$ as $A_1$ is outbound and, since $A_2 \subseteq C_1$ implies $\min(C_1) \leq \min(A_2)$, it follows that $A_1 \prec A_2$. By the symmetric argument we also have $A_2 \prec A_1$, a contradiction.
Let $K$ be a cliquish vertex set. Based on the above lemma, we define the [*outlet*]{} of $K$, denoted by ${{\mathop{\rm outlet}}}(K)$, as follows. If no non-full component associated with $K$ is outbound, then we let ${{\mathop{\rm outlet}}}(K) = \emptyset$. Otherwise, ${{\mathop{\rm outlet}}}(K) = N(A)$, where $A$ is a non-full component associated with $K$ that is outbound, chosen so that $N(A)$ is maximal. We define ${{\mathop{\rm support}}}(K) = {{\mathop{\rm unconf}}}({{\mathop{\rm outlet}}}(K), K)$, the set of components associated with $K$ that are not confined to ${{\mathop{\rm outlet}}}(K)$. By Lemma \[lem:pmc\_outbounds\], every member of ${{\mathop{\rm support}}}(K)$ is inbound.
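The orientation machinery just introduced can be summarized by the following Java sketch, which reuses the helpers of the `SeparatorBasics` sketch above; the vertex order is simply the natural order on $0, \ldots, n-1$ and the code is illustrative rather than the implementation described later.

```java
import java.util.*;

// Sketch of the orientation machinery: the outbound test for a connected set
// and outlet(K), support(K) for a cliquish set K.
public class Orientation {

    static int minVertex(Set<Integer> c) { return Collections.min(c); }

    // C is outbound iff no full component associated with N(C) precedes C.
    static boolean isOutbound(List<Set<Integer>> adj, Set<Integer> c) {
        Set<Integer> nc = SeparatorBasics.neighborhood(adj, c);
        for (Set<Integer> a : SeparatorBasics.components(adj, nc))
            if (SeparatorBasics.neighborhood(adj, a).equals(nc)   // full component
                    && minVertex(a) < minVertex(c)) return false;
        return true;
    }

    // outlet(K): N(A) for an outbound non-full component A associated with K,
    // chosen with maximal neighborhood, or the empty set if there is none.
    // By the preceding lemma these neighborhoods are nested, so picking the
    // largest one suffices.
    static Set<Integer> outlet(List<Set<Integer>> adj, Set<Integer> k) {
        Set<Integer> best = new HashSet<>();
        for (Set<Integer> a : SeparatorBasics.components(adj, k)) {
            Set<Integer> na = SeparatorBasics.neighborhood(adj, a);
            if (na.equals(k)) continue;               // skip full components
            if (isOutbound(adj, a) && na.size() > best.size()) best = na;
        }
        return best;
    }

    // support(K): components associated with K unconfined to outlet(K).
    static List<Set<Integer>> support(List<Set<Integer>> adj, Set<Integer> k) {
        Set<Integer> out = outlet(adj, k);
        List<Set<Integer>> result = new ArrayList<>();
        for (Set<Integer> c : SeparatorBasics.components(adj, k))
            if (!out.containsAll(SeparatorBasics.neighborhood(adj, c))) result.add(c);
        return result;
    }
}
```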
We call a full block $(N(C), C)$ an [*I-block*]{} if $C$ is inbound and $|N(C)| \leq k$. We call it an [*O-block*]{} if $C$ is outbound and $|N(C)| \leq k$.
We say that an I-block $(N(C), C)$ is [*feasible*]{} if $C$ is feasible. We say that an O-block $(N(A), A)$ is feasible if $N(A) = \bigcup_{C \in {{\mathcal C}}} N(C)$ for some set ${{\mathcal C}}$ of feasible inbound components. Note that this definition of feasibility of an O-block is somewhat weak in the sense that we do not require every inbound component associated with $N(A)$ to be feasible.
We say that a potential maximal clique $\Omega$ is [*feasible*]{} if $|\Omega| \leq k + 1$ and every $C \in {{\mathop{\rm support}}}(\Omega)$ is feasible.
In order to formulate mutual recurrences among feasible I-blocks, O-blocks, and potential maximal cliques, we need the following auxiliary notion of [*buildable*]{} potential maximal cliques.
Let $\Omega$ be a potential maximal clique with $|\Omega| \leq k + 1$. For each $C \in {{\mathop{\rm support}}}(\Omega)$, block $(N(C), C)$ is an I-block, since $C$ is inbound as observed above and we have $|N(C)| \leq k$ by our assumption that $|\Omega| \leq k + 1$. We say that $\Omega$ is [*buildable*]{} if $|\Omega| \leq k + 1$ and either
1. $\Omega = N[v]$ for some $v \in V(G)$,
2. there is some subset ${{\mathcal C}}$ of ${{\mathop{\rm support}}}(\Omega)$ such that $\Omega = \bigcup_{D \in {{\mathcal C}}} N(D)$ and every member of ${{\mathcal C}}$ is feasible, or
3. $\Omega = N(A) \cup (N(v) \cap A)$ for some feasible O-block $(N(A), A)$ and a vertex $v \in N(A)$.
It will turn out that every feasible potential maximal clique is buildable (Lemma \[lem:buildable\_feasible\]).
\[lem:PMC\_feasible\] We have ${{\mathop{\rm tw}}}(G) \leq k$ if and only if $G$ has a feasible potential maximal clique $\Omega$ with ${{\mathop{\rm outlet}}}(\Omega) = \emptyset$.
Suppose first that $G$ has a feasible potential maximal clique $\Omega$ with ${{\mathop{\rm outlet}}}(\Omega) = \emptyset$. Note that ${{\mathop{\rm support}}}(\Omega) = {{\mathcal C}}(\Omega)$, as every $C \in {{\mathcal C}}(\Omega)$ is unconfined to an empty set. For each component $C \in {{\mathop{\rm support}}}(\Omega)$, let $T_C$ be the tree-decomposition of $G\langle C \rangle$ of width $k$ or smaller, which exists since $C$ is feasible by the definition of a feasible potential maximal clique. Let $X_C$ be a bag of $T_C$ such that $N(C) \subseteq X_C$. Combine these tree-decompositions into a tree $T$ by adding bag $\Omega$ and letting each $X_C$ in $T_C$ be adjacent to $\Omega$. That $T$ satisfies the first two conditions for tree decomposition is trivial. The third condition is also satisfied, since, if a vertex $v$ appears in $N[C]$ for two or more members $C$ in ${{\mathop{\rm support}}}(\Omega)$, then $v$ appears in $X_C$ for each such $C$ and in $\Omega$. Therefore, $T$ is a tree decomposition of $G$ of width $k$ or smaller and hence ${{\mathop{\rm tw}}}(G) \leq k$.
For the converse, suppose the treewidth of $G$ is $k$ or smaller. Let $T$ be a canonical tree-decomposition of $G$ of width $k$ or smaller: each bag of $T$ is a potential maximal clique and the intersection of each pair of adjacent bags of $T$ is a minimal separator. Orient each edge of $T$ as follows. Let $X$ and $Y$ be adjacent bags in $T$ and let $S = X \cap Y$. Let $C$ be the outbound full component associated with the minimal separator $S$. Then, $C$ intersects exactly one of $X$ and $Y$. If $C$ intersects $X$ then we orient the edge between $X$ and $Y$ from $Y$ to $X$; otherwise from $X$ to $Y$. Since $T$ is a tree, the resulting directed tree has a sink $X_0$. Then, each component $C$ associated with $X_0$ is inbound and hence ${{\mathop{\rm outlet}}}(X_0) = \emptyset$. We show that each such $C$ is moreover feasible. Indeed, the required tree-decomposition of $G\langle C \rangle$ may be obtained from $T$ by taking the intersection of every bag with $N[C]$: the resulting tree is a tree-decomposition of $G[N[C]]$ and contains the bag $X_0 \cap N[C] \supseteq N(C)$. The width of the tree-decomposition is not greater than that of $T$ and hence is $k$ or smaller. Therefore, I-block $(N(C), C)$ for each component $C$ associated with $X_0$ is feasible and hence the potential maximal clique $X_0$ is feasible.
\[lem:PMC\_original\] Let $C$ be a connected set of $G$ such that $N(C)$ is a minimal separator. Let $\Omega$ be a potential maximal clique of $G\langle C \rangle$. Then, $\Omega$ is a potential maximal clique of $G$.
For each component $D$ associated with $N(C)$, let $H_D$ be a minimal chordal completion of $G\langle D \rangle$. In particular, choose $H_C$ so that $\Omega$ is a clique in $H_C$. Let $H$ be the union of these graphs: $V(H) = V(G)$ and $E(H) = \bigcup_{D \in {{\mathcal C}}(N(C))} E(H_D)$. It is clear that $H$ is chordal. Let $H'$ be a minimal chordal completion of $G$ contained in $H$. It is well-known that every minimal separator is a clique in every chordal completion and hence $N(C)$ is a clique in $H'$. Therefore, the minimality of $H_D$ for each $D$ implies that $H' = H$. As $\Omega$ is a clique in $H_C$, it is a clique in $H$ and hence is a potential maximal clique of $G$.
The following is our oriented version of the recurrence in the BT algorithm described in the beginning of this section.
\[lem:I-block-feasible\] An I-block $(N(C), C)$ is feasible if and only if there is some feasible potential maximal clique $\Omega$ with ${{\mathop{\rm outlet}}}(\Omega) = N(C)$ and ${{\mathop{\rm crib}}}(N(C), \Omega) = C$.
Suppose first that there is a feasible potential maximal clique $\Omega$ as in the lemma. For each component $D \in {{\mathop{\rm support}}}(\Omega)$, let $T_D$ be a tree-decomposition of $G\langle D \rangle$ of width $k$ or smaller and $X_D$ be a bag in $T_D$ containing $N(D)$. Combine these tree-decompositions $T_D$, $D \in {{\mathop{\rm support}}}(\Omega)$, into a tree $T$ by adding bag $\Omega$ and let it be adjacent to $X_D$ for each $D \in {{\mathop{\rm support}}}(\Omega)$. We confirm that $T$ is a tree-decomposition of $G[N[C]]$. Every vertex $v \in N[C]$ appears in some bag of $T$ since $C = {{\mathop{\rm crib}}}(N(C), \Omega)$ is the union of $\Omega \setminus N(C)$ and the components $D \in {{\mathop{\rm support}}}(\Omega)$, and the bag $\Omega$ contains $N(C)$. Every edge of $G[N[C]]$ appears in some bag of $T$ for the same reason. The third condition for $T$ being a tree decomposition is also satisfied, since, if a vertex $v$ appears in $N[D]$ for two or more members $D$ in ${{\mathop{\rm support}}}(\Omega)$, then $v$ appears in $X_D$ for each such $D$ and in $\Omega$. Therefore, $T$ is a tree decomposition of $G[N[C]]$ of width $k$ or smaller and, since the bag $\Omega$ in $T$ contains $N(C)$, $T$ attests the feasibility of the I-block $(N(C), C)$.
For the converse, suppose that I-block $(N(C), C)$ is feasible. Let $T$ be a canonical tree-decomposition of $G \langle C \rangle$ of width $k$ or smaller. Orient the edges of $T$ as in the proof of Lemma \[lem:PMC\_feasible\]: orient the edge from $X$ to $Y$ if and only if $Y$ intersects the outbound full component associated with $X \cap Y$. We need to stress here that the notion of outbound components used in this orientation is with respect to the entire graph $G$ and not with respect to $G\langle C \rangle$, the graph of which $T$ is a tree-decomposition. As $N(C)$ is a clique in $G \langle C \rangle$, $T$ contains a bag that contains $N(C)$. In the subtree of $T$ induced by those bags containing $N(C)$, let $X_0$ be a sink with respect to the above orientation. As $T$ is canonical, $X_0$ is a potential maximal clique of $G \langle C \rangle$ and hence of $G$ by Lemma \[lem:PMC\_original\]. We show below that $X_0$ is feasible.
Let $A$ be the outbound full component associated with $N(C)$. As $N(C) \subseteq X_0$ and $A \cap N[C] = \emptyset$, $A$ is a component associated with $X_0$. We claim that $N(C) = {{\mathop{\rm outlet}}}(X_0)$. Suppose otherwise that there is some outbound component $A'$ associated with $X_0$ such that $N(C)$ is a proper subset of $N(A')$. Then, as $A'$ is not confined to $N(C)$, $C = {{\mathop{\rm crib}}}(N(C), X_0)$ contains $A'$. Therefore, there is some bag $X$ adjacent to $X_0$ in $T$ such that $X \cap A' \neq \emptyset$. Since $N(C)$ is a minimal separator that separates $A$ from $A'$, $X$ must contain $N(C)$. But, since $A'$ is an outbound component associated with $X_0$, the edge between $X_0$ and $X$ is oriented from $X_0$ to $X$. This contradicts the choice of $X_0$ and we conclude that $N(C) = {{\mathop{\rm outlet}}}(X_0)$.
It remains to verify that each $D \in {{\mathop{\rm support}}}(X_0)$ is feasible. This is true since the tree of bags obtained from $T$ by intersecting each bag with $N[D]$ is a tree-decomposition of $G \langle D \rangle$ required for the feasibility of $D$.
\[lem:support-outbound\] Let $K$ be a cliquish vertex set, ${{\mathcal C}}$ a non-empty subset of ${{\mathop{\rm support}}}(K)$, and $S = \bigcup_{C \in {{\mathcal C}}} N(C)$. If $S$ is a proper subset of $K$ then ${{\mathop{\rm crib}}}(S, K)$ is outbound.
Let $K$, ${{\mathcal C}}$ and $S$ be as in the lemma. Since $K$ is cliquish, ${{\mathop{\rm crib}}}(S, K)$ is a full component associated with $S$ that contains $K \setminus S$, by Lemma \[lem:crib\]. To show that it is outbound, it suffices to show that no other full component associated with $S$ is outbound. Let $A$ be an arbitrary full component associated with $S$ that is distinct from ${{\mathop{\rm crib}}}(S, K)$. As $A$ does not intersect $K$, it is a component associated with $K$. Let $C$ be an arbitrary member of ${{\mathcal C}}$. Then, $C$ is confined to $S$ by the definition of $S$. On the other hand $C$ is not confined to ${{\mathop{\rm outlet}}}(K)$ since $C \in {{\mathop{\rm support}}}(K)$. Therefore, $S$ is not a subset of ${{\mathop{\rm outlet}}}(K)$. $A$ cannot be outbound, since it would imply that $S = N(A) \subseteq {{\mathop{\rm outlet}}}(K)$. Therefore, $A$ is inbound and, since this holds for every full component associated with $S$ other than ${{\mathop{\rm crib}}}(S, K)$, ${{\mathop{\rm crib}}}(S, K)$ is outbound.
The following lemma is crucial for our PID result: the algorithm described in the next section generates all buildable potential maximal cliques and we need to guarantee that all feasible potential maximal cliques are among them.
\[lem:buildable\_feasible\] Let $\Omega$ be a feasible potential maximal clique. Then, $\Omega$ is buildable.
Let $S = \bigcup_{C \in {{\mathop{\rm support}}}(\Omega)} N(C)$.
Suppose first that $S \cup {{\mathop{\rm outlet}}}(\Omega) \neq \Omega$ and let $v$ be an arbitrary member of $\Omega \setminus (S \cup {{\mathop{\rm outlet}}}(\Omega))$. Since $\Omega$ is cliquish and $v$ is not in $N(C)$ for any component $C$ associated with $\Omega$, $v$ is adjacent to every other vertex in $\Omega$. Therefore, $\Omega \subseteq N[v]$. Let $C$ be an arbitrary component associated with $\Omega$. If $C$ is confined to ${{\mathop{\rm outlet}}}(\Omega)$ then $v \not\in N(C)$ since $v \not\in {{\mathop{\rm outlet}}}(\Omega)$. Otherwise, $C \in {{\mathop{\rm support}}}(\Omega)$ and hence $v \not\in N(C)$ as $v \not\in S$. Therefore, $N(v) \setminus \Omega$ is empty and hence we have $\Omega = N[v]$. Thus, $\Omega$ is buildable by the first case of the definition.
Suppose next that $S \cup {{\mathop{\rm outlet}}}(\Omega) = \Omega$. We have two cases to consider: $S = \Omega$ and $S \neq \Omega$.
Consider the case where $S = \Omega$. Let ${{\mathcal C}}_0$ be an arbitrary minimal subset of ${{\mathop{\rm support}}}(\Omega)$ such that $\bigcup_{C \in {{\mathcal C}}_0} N(C) = \Omega$. Since $\Omega$ does not have a full component associated with it, ${{\mathcal C}}_0$ has at least two members. Let $C_0$ be an arbitrary member of ${{\mathcal C}}_0$ and let ${{\mathcal C}}_1 = {{\mathcal C}}_0 \setminus \{C_0\}$. From the minimality of ${{\mathcal C}}_0$, $S_1 = \bigcup_{C \in {{\mathcal C}}_1} N(C)$ is a proper subset of $\Omega$. By Lemmas \[lem:crib\] and \[lem:support-outbound\], $A_1 = {{\mathop{\rm crib}}}(S_1, \Omega)$ is a full component associated with $S_1$ and is outbound. Therefore, $(S_1, A_1)$ is an O-block and is feasible since every member of ${{\mathcal C}}_1 \subseteq {{\mathop{\rm support}}}(\Omega)$ is feasible as potential maximal clique $\Omega$ is feasible. Thus, the second case in the definition of buildable potential maximal cliques applies.
Finally, suppose that $S \neq \Omega$. Let $A = {{\mathop{\rm crib}}}(S, \Omega)$. Then, $A$ is a full component associated with $S$ and is outbound, by Lemmas \[lem:crib\] and \[lem:support-outbound\]. Since $S = \bigcup_{C \in {{\mathop{\rm support}}}(\Omega)} N(C)$ and $\Omega$ is feasible, the O-block $(S, A)$ is feasible. Let $x$ be an arbitrary vertex in $\Omega \setminus S$. Since we are assuming that $S \cup {{\mathop{\rm outlet}}}(\Omega) = \Omega$ we have $x \in {{\mathop{\rm outlet}}}(\Omega) \setminus S$. Let $v$ be an arbitrary vertex in $\Omega \setminus {{\mathop{\rm outlet}}}(\Omega)$. Observe that there is no component $C$ associated with $\Omega$ such that $N(C)$ contains both $x$ and $v$: $x \not\in N(C)$ for every $C \in {{\mathop{\rm support}}}(\Omega)$ and $v \not\in N(C)$ for every $C$ that is confined to ${{\mathop{\rm outlet}}}(\Omega)$. Since $\Omega$ is cliquish, it follows that $x$ and $v$ are adjacent to each other. Therefore, we have $\Omega \setminus S \subseteq N(v)$. Moreover, $A$ contains $\Omega \setminus S$ by Lemma \[lem:crib\]. Finally, $A \setminus \Omega$ is disjoint from $N(v)$: every component $D$ associated with $\Omega$ with $v \in N(D)$ belongs to ${{\mathop{\rm support}}}(\Omega)$ and is therefore confined to $S$, whereas every component contained in $A$ is unconfined to $S$. Therefore, we have $\Omega = S \cup (N(v) \cap A)$, and the third case in the definition of buildable potential maximal cliques applies.
Algorithm {#sec:algorithm}
=========
Given graph $G$ and positive integer $k$, our algorithm generates all I-blocks, O-blocks, and potential maximal cliques that are feasible. In the algorithm description below, the following variables, with suffixes, are used: ${{\mathcal I}}$ for listing feasible I-blocks, ${{\mathcal O}}$ for feasible O-blocks, ${{\mathcal P}}$ for buildable potential maximal cliques, and ${{\mathcal S}}$ for feasible potential maximal cliques. We note that each member of ${{\mathcal I}}$ and ${{\mathcal O}}$ is actually the component part of an I- or O-block.
**Algorithm PID-BT**
**Input:** Graph $G$ and positive integer $k$

**Output:** “YES” if ${{\mathop{\rm tw}}}(G) \leq k$; “NO” otherwise

**Procedure:**
1. Let ${{\mathcal I}}_0 = \emptyset$ and ${{\mathcal O}}_0 = \emptyset$.
2. Initialize ${{\mathcal P}}_0$ and ${{\mathcal S}}_0$ to $\emptyset$.
3. Set $j = 0$.
4. For each $v \in V(G)$, if $N[v]$ is a potential maximal clique with $|N[v]| \leq k + 1$ then add $N[v]$ to ${{\mathcal P}}_0$ and if, moreover, ${{\mathop{\rm support}}}(N[v]) = \emptyset$ then do the following.
1. Add $N[v]$ to ${{\mathcal S}}_0$.
2. If ${{\mathop{\rm outlet}}}(N[v]) \neq \emptyset$ then let $C = {{\mathop{\rm crib}}}({{\mathop{\rm outlet}}}(N[v]),
N[v])$ and, provided that $C \neq C_h$ for $1 \leq h \leq j$, increment $j$ and let $C_j = C$.
5. Set $i = 0$.
6. Repeat the following and stop repetition when $j$ is not incremented during the iteration step.
1. While $i < j$, do the following.
1. Increment $i$ and let ${{\mathcal I}}_i$ be ${{\mathcal I}}_{i - 1} \cup
\{C_i\}$.
2. Initialize ${{\mathcal O}}_i$ to ${{\mathcal O}}_{i - 1}$, ${{\mathcal P}}_i$ to ${{\mathcal P}}_{i - 1}$, and ${{\mathcal S}}_i$ to ${{\mathcal S}}_{i - 1}$.
3. For each $B \in {{\mathcal O}}_{i - 1}$ such that $C_i \subseteq B$ and $|N(C_i) \cup N(B)| \leq k + 1$, let $K = N(C_i) \cup N(B)$ and do the following.
1. If $K$ is a potential maximal clique, then add $K$ to ${{\mathcal P}}_i$.
2. If $|K| \leq k$ and there is a full component $A$ associated with $K$ (which is unique), then add $A$ to ${{\mathcal O}}_i$.
4. Let $A$ be the outbound full component associated with $N(C_i)$ and add $A$ to ${{\mathcal O}}_i$.
5. For each $A \in {{\mathcal O}}_i \setminus {{\mathcal O}}_{i - 1}$ and $v \in N(A)$, let $K = N(A) \cup (N(v) \cap A)$ and if $|K| \leq k + 1$ and $K$ is a potential maximal clique then add $K$ to ${{\mathcal P}}_i$.
6. For each $K \in {{\mathcal P}}_i \setminus {{\mathcal S}}_{i - 1}$, if ${{\mathop{\rm support}}}(K) \subseteq {{\mathcal I}}_i$ then add $K$ to ${{\mathcal S}}_i$ and do the following: if ${{\mathop{\rm outlet}}}(K) \neq \emptyset$ then let $C = {{\mathop{\rm crib}}}({{\mathop{\rm outlet}}}(K), K)$ and, provided that $C \neq C_h$ for $1 \leq
h \leq j$, increment $j$ and let $C_j = C$.
7. If there is some $K \in {{\mathcal S}}_j$ such that ${{\mathop{\rm outlet}}}(K) = \emptyset$, then answer “YES”; otherwise, answer “NO”.
\[thm:correctness\] Algorithm PID-BT, given $G$ and $k$, answers “YES” if and only if ${{\mathop{\rm tw}}}(G) \leq k$.
We show that ${{\mathcal S}}_J$ computed by the algorithm, where $J$ denotes the final value of $j$, is exactly the set of feasible potential maximal cliques for the given $G$ and $k$. The theorem then follows by Lemma \[lem:PMC\_feasible\].
In the following proof, ${{\mathcal O}}_i$, ${{\mathcal P}}_i$, and ${{\mathcal S}}_i$ for each $i$ stand for the final values of these program variables.
We first show by induction on $i$ that the following conditions are satisfied.
1. For every $1 \leq h \leq i$, $(N(C_h), C_h)$ is a feasible I-block.
2. ${{\mathcal I}}_i = \{C_h \mid 1 \leq h \leq i\}$.
3. For every $A \in {{\mathcal O}}_i$, $(N(A), A)$ is a feasible O-block.
4. Every $K \in {{\mathcal P}}_i$ is a buildable potential maximal clique.
5. Every $K \in {{\mathcal S}}_i$ is a feasible potential maximal clique.
Consider the base case $i = 0$. Condition 1 vacuously holds. Conditions 2 and 3 also hold since ${{\mathcal I}}_0 = {{\mathcal O}}_0 = \emptyset$. Condition 4 holds: $N[v]$ is confirmed to be a potential maximal clique before it is added to ${{\mathcal P}}_0$ and is buildable by the definition of buildability (case 1). Condition 5 holds since ${{\mathop{\rm support}}}(N[v]) = \emptyset$ implies that the potential maximal clique $N[v]$ is feasible.
Suppose $i > 0$ and that the above conditions are satisfied for smaller values of $i$.
1. When $C_i$ is defined, there is some $i' < i$ and $K \in
{{\mathcal S}}_{i'}$ such that ${{\mathop{\rm outlet}}}(K) \neq \emptyset$ and $C_i = {{\mathop{\rm crib}}}({{\mathop{\rm outlet}}}(K), K)$. By the induction hypothesis, $K$ is a feasible potential maximal clique and hence, by Lemma \[lem:I-block-feasible\], $(N(C_i), C_i)$ is a feasible I-block.
2. As ${{\mathcal I}}_{i - 1} = \{C_h \mid 1 \leq h \leq i - 1\}$ and ${{\mathcal I}}_{i} = {{\mathcal I}}_{i - 1} \cup \{C_i\}$, ${{\mathcal I}}_i = \{C_h \mid 1 \leq h \leq i\}$ holds.
3. Let $A \in {{\mathcal O}}_i \setminus {{\mathcal O}}_{i - 1}$. Then $A$ is added to ${{\mathcal O}}_i$ either at step 6-(a)-iii-B or at step 6-(a)-iv. In the latter case, $A$ is the outbound full component associated with $N(C_i)$, so $(N(A), A)$ is an O-block with $N(A) = N(C_i)$, and it is feasible since $C_i$ is feasible by 1 above. In the former case, there is some $B \in {{\mathcal O}}_{i - 1}$ such that $A$ is outbound, $|N(A)| \leq k$, and $N(A) = N(C_i) \cup N(B)$. From the first two conditions, $(N(A), A)$ is an O-block. By the induction hypothesis, $(N(B), B)$ is a feasible O-block and hence $N(B) = \bigcup_{D \in {{\mathcal C}}} N(D)$ for some set ${{\mathcal C}}$ of feasible inbound components. As $C_i$ is feasible by 1 above and $N(A) = \bigcup_{D \in {{\mathcal C}}\cup \{C_i\}} N(D)$, O-block $(N(A), A)$ is feasible.
4. Let $K \in {{\mathcal P}}_i \setminus {{\mathcal P}}_{i - 1}$. Then, $K$ is added to ${{\mathcal P}}_i$ either at step 6-(a)-iii-A or at step 6-(a)-v. Consider the first case. Then, $K = N(B) \cup N(C_i)$ where $(N(B), B)$ is a feasible O-block and hence $N(B) = \bigcup_{D \in {{\mathcal C}}} N(D)$ for some set ${{\mathcal C}}$ of feasible inbound components. As $C_i$ is feasible, $K$ satisfies all the conditions in the second case of the definition of buildable potential maximal cliques. Consider next the second case, in which $K$ is obtained at step 6-(a)-v. Then, $K = N(A) \cup (N(v) \cap A)$, where $(N(A), A)$ is a feasible O-block, and the third case in the definition of buildable potential maximal cliques applies.
5. Let $K \in {{\mathcal S}}_i \setminus {{\mathcal S}}_{i-1}$. Then, $K \in {{\mathcal P}}_i$ and is a buildable potential maximal clique by 4 above. The confirmed condition ${{\mathop{\rm support}}}(K) \subseteq {{\mathcal I}}_i$ ensures that $K$ is feasible, since every member of ${{\mathcal I}}_i$ is feasible by 1 and 2 above.
We conclude that every member of ${{\mathcal S}}_J$ is a feasible potential maximal clique.
In showing the converse, the following observation is crucial. Let $(N(A), A)$ be a feasible O-block such that $N(A) = \bigcup_{C \in {{\mathcal C}}} N(C)$ for some set ${{\mathcal C}}$ of feasible components and suppose ${{\mathcal C}}\subseteq {{\mathcal I}}_i$. Then, $A \in {{\mathcal O}}_i$. The proof is a straightforward induction on $i$.
The proof of the converse consists in showing the following by induction on $m$.
1. For each feasible I-block $(N(C), C)$, with $|C| = m$, there is some $i$ such that $C = C_i$.
2. For each feasible O-block $(N(A), A)$ with $|A| = |V(G)| - m$, there is some $i$ such that $A \in {{\mathcal O}}_i$.
3. For each buildable potential maximal clique $\Omega$ such that $|\bigcup_{C \in {{\mathop{\rm support}}}(\Omega)} C| = m$, there is some $i$ such that $\Omega \in {{\mathcal P}}_i$.
4. For each feasible potential maximal clique $\Omega$ such that $|\bigcup_{C \in {{\mathop{\rm support}}}(\Omega)} C| = m$, there is some $i$ such that $\Omega \in {{\mathcal S}}_i$.
The base case $m = 0$ is vacuously true. Suppose $m > 0$ and the statements hold for smaller values of $m$.
1. Let $(N(C), C)$ be a feasible I-block with $|C| = m$. Then, by Lemma \[lem:I-block-feasible\], there is some feasible potential maximal clique $\Omega$ such that $N(C) = {{\mathop{\rm outlet}}}(\Omega)$ and $C = {{\mathop{\rm crib}}}(N(C), \Omega)$. We have $|\bigcup_{D \in {{\mathop{\rm support}}}(\Omega)} D| < m$, since this union is a subset of $C \setminus (\Omega \setminus N(C))$. Therefore, by the induction hypothesis, there is some $i$ such that $\Omega \in {{\mathcal S}}_i$, and hence $C$ is constructed as $C_j$ either at step 4-(b) or at step 6-(a)-vi.
2. Let $(N(A), A)$ be a feasible O-block with $|A| = |V(G)| - m$. Let ${{\mathcal C}}$ be a set of feasible components such that $N(A) = \bigcup_{C \in {{\mathcal C}}} N(C)$ and let $C$ be an arbitrary member of ${{\mathcal C}}$. As $C$, $A$, and $N(C)$ are pairwise disjoint, we have $|C| < m$. Therefore, there is some $i_C$ such that $C_{i_C} = C$. Set $i = \max\{i_C \mid C \in {{\mathcal C}}\}$. Then, ${{\mathcal C}}\subseteq {{\mathcal I}}_i$ and hence $A \in {{\mathcal O}}_i$, by the observation above.
3. Let $\Omega$ be a buildable potential maximal clique with $|\bigcup_{C \in {{\mathop{\rm support}}}(\Omega)} C| = m$. In the first case of the definition of buildability, $\Omega$ is added to ${{\mathcal P}}_0$ at step 4. In the second case, we have $\Omega = \bigcup_{C \in {{\mathcal C}}} N(C)$ for some ${{\mathcal C}}\subseteq {{\mathop{\rm support}}}(\Omega)$ such that every member of ${{\mathcal C}}$ is feasible. Choose ${{\mathcal C}}$ to be minimal subject to these conditions. Let $C$ be an arbitrary member of ${{\mathcal C}}$. As $|C| \leq m$, by the induction hypothesis and 1 above, there is some $i_C$ such that $C = C_{i_C}$. Choose $C \in {{\mathcal C}}$ so that $i_C$ is the largest and let the chosen be $D$. Let ${{\mathcal C}}' = {{\mathcal C}}\setminus \{D\}$ and let $S = \bigcup_{C \in {{\mathcal C}}'} N(C)$. By the minimality of ${{\mathcal C}}$, $S$ is a proper subset of $\Omega$. Therefore, ${{\mathop{\rm crib}}}(S, \Omega)$ is a full component associated with $S$ and there is an outbound full component $A$ associated with $S$. As all members of ${{\mathcal C}}'$ are feasible and $|S| \leq k$, $(S, A)$ is a feasible O-block. By the choice of $D$, we have ${{\mathcal C}}' \subseteq {{\mathcal I}}_{i_D - 1}$ and hence $A \in {{\mathcal O}}_{i_D - 1}$ by the observation above. At step 6-(a)-iii-A in the iteration for $i = i_D$, $\Omega$ is put into ${{\mathcal P}}_{i_D}$.
4. Let $\Omega$ be a feasible potential maximal clique with $|\bigcup_{C \in {{\mathop{\rm support}}}(\Omega)} C| = m$. Then, by 3 above, there is some $i_1$ such that $\Omega \in {{\mathcal P}}_{i_1}$. Furthermore, as every member $C$ of ${{\mathop{\rm support}}}(\Omega)$ is feasible and $|C| \leq m$, there is some $i_2$ such that ${{\mathop{\rm support}}}(\Omega) \subseteq {{\mathcal I}}_{i_2}$, by 1 above. At step 6-(a)-vi in the iteration for $i = \max\{i_1, i_2\}$, $\Omega$ is put into ${{\mathcal S}}_i$.
We conclude that every feasible potential maximal clique is in ${{\mathcal S}}_J$. This completes the proof.
Running time analysis {#sec:time}
=====================
The running time of our algorithm is stated in terms of the number of positive subproblem instances. Given $G$ and $k > 0$, let ${{\mathcal I}}_G^k$ denote the set of feasible I-blocks and ${{\mathcal O}}_G^k$ the set of feasible O-blocks.
\[obs:run\_time\] Given $G$ and $k > 0$, algorithm PID-BT runs in $O^*(|{{\mathcal I}}_G^k|\cdot|{{\mathcal O}}_G^k|)$ time.
The number of iterations of the loop in step 6, in each of which $i$ is incremented, is $|{{\mathcal I}}_G^k|$. In each iteration, every computation step may be charged to some element of ${{\mathcal O}}_{i - 1}$, and the total number of steps charged to a single element of ${{\mathcal O}}_{i - 1}$ is $n^{O(1)}$. Since $|{{\mathcal O}}_{i - 1}| \leq |{{\mathcal O}}_G^k|$, we have the claimed time bound.
The bound in this observation is incomparable to the previous bounds on non-PID versions of the BT algorithm, which run in $O^*(|\Pi_G|)$ time when $\Pi_G$, the set of potential maximal cliques in $G$, is given. In [@FV12], in addition to a combinatorial bound of $|\Pi_G|= O(1.7549^n)$, it was shown that $\Pi_G$ can be computed in $O^*(|\Pi_G|)$ time.
It should be emphasized, however, that it is not known whether the decision problem version of the treewidth problem with given $k$ can be solved in $O^*(|\Pi_G^{k+1}|)$ time, where $\Pi_G^k$ is the set of potential maximal cliques of cardinality at most $k$ in $G$. The bottleneck here is the time to list all members of $\Pi_G^{k+1}$. Although a nontrivial upper bound on $|\Pi_G^{k+1}|$ in terms of $n$ and $k$, together with a running time bound based on it, is given in [@FV12], a huge gap between the actual value of $|\Pi_G^{k+1}|$ and the upper bound is observed in practice, as shown in the next section. This is the gap that makes the bound in Observation \[obs:run\_time\] interesting.
Experimental analysis {#sec:experimental}
=====================
To study the strength of the running time bound of Observation \[obs:run\_time\] from a practical viewpoint, we have performed some experiments, in which we count the number of combinatorial objects involved in the treewidth computation. We first compare the actual number of relevant potential maximal cliques (that is, of cardinality at most $k + 1$ where $k$ is the treewidth) with the theoretical upper bounds on that number: the naive bound of $\tbinom{n}{k + 1}$ and an asymptotically stronger bound of $n (\tbinom{\lceil (2n + k + 7)/3 \rceil}{k + 2} + \tbinom{\lceil (2n + k + 4)/2 \rceil}{k + 1})$ given in [@FV12]. Table \[tab:pmcs\] shows the results on some random instances, where the number of vertices $n$ is 20, 30, 40, or 50, the number of edges $m$ is $2n$, $3n$, $4n$ or $5n$, and the graph for each pair $(n, m)$ is chosen uniformly at random from the set of all graphs with $n$ vertices and $m$ edges. Huge gaps between the actual number and the upper bounds are apparent.
-----------------------------------------------------------------------------------------------------------------------------------------
$n = |V|$ $|E|$ $k = {{\mathop{\rm tw}}}$ PMCs ($\leq k + 1$) $\tbinom{n}{k + 1}$ $n (\tbinom{\lceil (2n + k + 7)/3 \rceil}{k
+ 2} +
\tbinom{\lceil (n + k + 4)/2 \rceil}
{k + 1})$
----------- ------- --------------------------- --------------------- --------------------- ---------------------------------------------
20 40 6 115 77520 1003860
20 60 8 96 167960 2076360
20 80 11 121 125970 1921680
20 100 11 37 125970 1921680
30 60 7 559 5852925 67393950
30 90 11 682 86493225 352580340
30 120 14 1137 155117520 430361970
30 150 16 768 119759850 426140550
40 80 8 5341 273438880 2705471600
40 120 14 10372 40225345056 91260807600
40 160 18 17360 131282408400 135562547400
40 200 20 6820 131282408400 157012867200
50 100 10 6029 37353738800 201991095800
50 150 16 48068 9847379391150 10332510412500
50 200 20 36388 67327446062800 53246262826500
50 250 24 47729 126410606437752 52230760068000
-----------------------------------------------------------------------------------------------------------------------------------------
: The numbers of relevant potential maximal cliques and their upper bounds[]{data-label="tab:pmcs"}
Since the running time bound in Observation \[obs:run\_time\] involves the quantity $|{{\mathcal O}}_G^k|$ which is not theoretically upper-bounded by a function of $|\Pi_G^{k + 1}|$, the gaps observed in Table \[tab:pmcs\] alone may not be sufficient to support the importance of this running time bound. To address this issue, we have counted more combinatorial objects involved in our PID computation on the same graph instances: in addition to relevant potential maximal cliques counted above, all potential maximal cliques, relevant minimal separators, all minimal separators, feasible I-blocks, feasible O-blocks and feasible potential maximal cliques. Here, the input $k$ to the decision problem is set to the treewidth of the graph.
Table \[tab:numObjects\] shows the result. We see that the number of feasible O-blocks is smaller than the number of relevant potential maximal cliques, as far as these instances are concerned. This, together with what we have observed in Table \[tab:pmcs\], provides evidence that the running time bound of Observation \[obs:run\_time\] is more relevant from a practical point of view than the running time bounds of known theoretical algorithms.
We also see that the number of all potential maximal cliques grows much faster than the number of relevant potential maximal cliques. This shows the advantage of our algorithm which avoids generating all potential maximal cliques.
To summarize, our PID algorithm has advantages over the standard BT algorithms because the running time upper bounds of those algorithms are either in terms of a combinatorial [*upper bound*]{} on the number of relevant potential maximal cliques or in terms of the actual number of [*all*]{} potential maximal cliques: our experiments reveal huge gaps between the actual number of relevant potential maximal cliques and both of these quantities. Note that, if there is an efficient method of generating relevant potential maximal cliques, a non-PID version of the BT algorithm might outperform our PID version.
------- ------- ----------------------- --------- --------------------------- ------------ ------------------------------- ---------- ---------- -------
$|V|$ $|E|$ ${{\mathop{\rm tw}}}$ min. seps. all min. seps. $\leq{{\mathop{\rm tw}}}$ PMCs all PMCs $\leq {{\mathop{\rm tw}}}+ 1$ feasible I-blocks feasible O-blocks feasible PMCs
20 40 6 98 51 376 115 19 26 37
20 60 8 191 48 796 96 46 108 93
20 80 11 185 122 698 376 121 158 370
20 100 11 107 25 354 37 24 32 36
30 60 7 535 185 3122 559 114 170 334
30 90 11 2983 247 20154 682 228 708 618
30 120 14 2713 376 16736 1137 352 804 1055
30 150 16 1913 281 10535 768 240 498 647
40 80 8 14842 1070 178661 5341 840 2965 4154
40 120 14 164773 2356 1740644 10372 2080 8637 8577
40 160 18 134485 3952 1251656 17360 3289 10023 13646
40 200 20 52182 1790 423691 6820 1502 4749 5347
50 100 10 96499 1361 1123621 6029 779 2171 2914
50 150 16 1792713 9152 $>$2000000 48068 8099 36881 39803
50 200 20 2130811 7878 $>$2000000 36388 6956 28247 29842
50 250 24 1452449 10571 $>$2000000 47729 8949 30834 37115
------- ------- ----------------------- --------- --------------------------- ------------ ------------------------------- ---------- ---------- -------
: The numbers of principal objects in treewidth computation[]{data-label="tab:numObjects"}
Implementation {#sec:implementation}
==============
In this section, we sketch two important ingredients of our implementation. Although both are crucial in obtaining the result reported in Section \[sec:performance\], our work on this part is preliminary and improvements are the subject of future research.
Data structures
---------------
The crucial elementary operation in our algorithm is the following. We have a set ${{\mathcal O}}$ of feasible O-blocks obtained so far and, given a new feasible I-block $(N(C), C)$, need to find all members $(N(A), A)$ of ${{\mathcal O}}$ such that $C \subseteq A$ and $|N(C) \cup N(A)| \leq k + 1$. As the experimental analysis in the previous section shows, there are only a few such $A$ on average for the tested instances even though ${{\mathcal O}}$ is usually huge. To support efficient query processing, we introduce an abstract data structure we call a block sieve.
Let $G$ be a graph and $k$ a positive integer. A [*block sieve*]{} for graph $G$ and width $k$ is a data structure storing vertex sets of $V(G)$ which supports the following operations.
store($U$)
: : store vertex set $U$ in the block sieve.
supersets($U$)
: : return the list of entries $W$ stored in the block sieve such that $U \subseteq W$ and $|N(U) \cup N(W)| \leq k + 1$.
Data structures for superset query have been studied [@Savnik13]. The second condition above on the retrieved sets, however, appears to make this data structure new. For each $U \subseteq V(G)$, we define the [*margin*]{} of $U$ to be $k + 1 - |N(U)|$. Our implementation of block sieves described below exploits an upper bound on the margins of vertex sets stored in the sieve.
We first describe how such block sieves with upper bounds on margins are used in our algorithm. Let ${{\mathcal O}}$ be the current set of O-blocks. We use $t$ block sieves ${{\mathcal B}}_1$, …, ${{\mathcal B}}_t$, each ${{\mathcal B}}_i$ having a predetermined upper bound $m_i$ on the margins of the sets stored. We have $0 < m_1 < m_2 < \ldots < m_t = k$. We set $m_0 = 0$ for notational ease below. In our implementation, we choose roughly $t = \log_2 k$ and $m_i = 2^i$ for $0 < i < t$. For each $(N(A), A)$ in ${{\mathcal O}}$, $A$ is stored in ${{\mathcal B}}_i$ such that the margin $k + 1 - |N(A)|$ is $m_i$ or smaller but larger than $m_{i - 1}$. When we are given an I-block $(N(C), C)$ and are to list relevant blocks in ${{\mathcal O}}$, we query each of the $t$ block sieves with the operation $\mathop{\rm supersets}(C)$. These queries as a whole return the list of all vertex sets $A$ such that $(N(A), A) \in {{\mathcal O}}$, $C \subseteq A$, and $|N(A)\cup N(C)| \leq k + 1$.
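For concreteness, the following Java sketch shows the margin-based bucketing and the semantics of the query using a plain linear scan; the actual block sieve replaces each bucket by the trie described next. The bucket bounds follow the roughly doubling scheme mentioned above, and all names are illustrative.

```java
import java.util.*;

// Simplified, linear-scan stand-in for the block sieve: stored components A
// are grouped into buckets by their margin k + 1 - |N(A)|, with bounds
// m_1 < m_2 < ... < m_t = k that roughly double.  supersets(U) returns every
// stored A with U contained in A and |N(U) union N(A)| at most k + 1.
public class NaiveBlockSieve {
    final int k;
    final int[] bounds;                               // m_1, ..., m_t
    final List<List<Set<Integer>>> buckets;           // stored components per bucket
    final Map<Set<Integer>, Set<Integer>> nbOf = new HashMap<>();  // A -> N(A)

    NaiveBlockSieve(int k) {
        this.k = k;
        List<Integer> bs = new ArrayList<>();
        for (int m = 2; m < k; m *= 2) bs.add(m);     // 2, 4, 8, ...
        bs.add(k);                                    // m_t = k
        bounds = new int[bs.size()];
        for (int i = 0; i < bounds.length; i++) bounds[i] = bs.get(i);
        buckets = new ArrayList<>();
        for (int i = 0; i < bounds.length; i++) buckets.add(new ArrayList<>());
    }

    // Store component A with neighborhood na; 1 <= |na| <= k is assumed.
    void store(Set<Integer> a, Set<Integer> na) {
        int margin = k + 1 - na.size();
        int i = 0;
        while (bounds[i] < margin) i++;               // first bucket covering the margin
        buckets.get(i).add(a);
        nbOf.put(a, na);
    }

    // All stored A such that U is a subset of A and |N(U) union N(A)| <= k + 1.
    List<Set<Integer>> supersets(Set<Integer> u, Set<Integer> nu) {
        List<Set<Integer>> result = new ArrayList<>();
        for (List<Set<Integer>> bucket : buckets) {
            for (Set<Integer> a : bucket) {
                if (!a.containsAll(u)) continue;
                Set<Integer> union = new HashSet<>(nu);
                union.addAll(nbOf.get(a));
                if (union.size() <= k + 1) result.add(a);
            }
        }
        return result;
    }
}
```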
We implement a block sieve by a trie ${{\mathcal T}}$. The upper bound $m$ on margin is not used in the construction of the sieve; it is used at query time. In the following, we assume $V(G) = \{1, \ldots, n\}$ and, by an interval $[i, j]$, $1 \leq i \leq j \leq n$, we mean the set $\{v: i \leq v \leq j\}$ of vertices. Each non-leaf node $p$ of ${{\mathcal T}}$ is labelled with a non-empty interval $[s_p, f_p]$, such that $s_r = 1$ for the root $r$, $s_p = f_q + 1$ if $p$ is a child of $q$, and $f_p = n$ if $p$ is a parent of a leaf. Each edge $(p, q)$, which connects a node $p$ and a child $q$ of $p$, is labelled with a subset $S_{(p, q)}$ of the interval $[s_p, f_p]$. Thus, for each node $p$, the union of the labels of the edges along the path from the root to $p$ is a subset of the interval $[1, s_p - 1]$, or $[1, n]$ when $p$ is a leaf, which we denote by $S_p$. The choice of interval $[s_p, f_p]$ for each node $p$ is heuristic. It is chosen so that the number of descendants of $p$ is neither too large nor too small. In our implementation, the interval size is adaptively chosen from $8$, $16$, $32$, and $64$.
Each leaf $q$ of trie ${{\mathcal T}}$ represents a single set stored at this leaf, namely $S_q$ as defined above. We denote by $S({{\mathcal T}})$ the set of all sets stored in ${{\mathcal T}}$. Then, for each non-leaf node $p$ of ${{\mathcal T}}$, the set of sets stored under $p$ is $\{U \in S({{\mathcal T}}) \mid U \cap [1, s_p - 1] = S_p\}$.
We now describe how a query is processed against this data structure. Suppose query $U$ is given. The goal is to visit all leaves $q$ such that $U \subseteq S_q$ and $|N(U) \cup N(S_q)| \leq k + 1$. This is done by a depth-first traversal of the trie ${{\mathcal T}}$. When we visit a node $p$, we maintain the invariant that $U \cap [1, s_p - 1] \subseteq S_p$, since otherwise no leaf in the subtree rooted at $p$ stores a superset of $U$. Therefore, we descend from $p$ to a child $p'$ of $p$ only if this invariant holds at $p'$. Moreover, we keep track of the quantity $i(p, U) = |N(U) \cap S_p|$ in order to make further pruning of the search possible. For each leaf $q$ below $p$ such that $U \subseteq S_q$, we have $i(q, U) \geq i(p, U)$. Combining this with the equality $|N(U) \setminus N(S_q)| = |N(U) \cap S_q| = i(q, U)$, we have $|N(U) \cup N(S_q)| \geq |N(S_q)| + i(p, U)$. Since we know an upper bound $m$ on the margin $k + 1 - |N(S_q)|$ of $S_q$, or a lower bound $k + 1 - m$ on $|N(S_q)|$, we may prune the search under node $p$ if $i(p, U) > m$, since this inequality implies $|N(U) \cup N(S_q)| > k + 1$ for every leaf $q$ under $p$. When we reach a leaf $q$, we test if $|N(U) \cup N(S_q)| \leq k + 1$ indeed holds.
Safe separators
---------------
The notion of safe separators for treewidth was introduced by Bodlaender and Koster [@BK06]: a separator $S$ of $G$ is [*safe*]{} if completing $S$ into a clique does not change the treewidth of $G$. If we find a safe separator $S$ then the problem of deciding the treewidth of $G$ reduces to that of deciding the treewidth of $G\langle C \rangle$ for each component $C$ associated with $S$. Preprocessing $G$ into such independent subproblems is highly desirable whenever possible.
The above authors observed that a powerful sufficient condition for safeness can be formulated based on graph minors. A [*labelled minor*]{} of $G$ is a graph obtained from $G$ by zero or more applications of the following operations. (1) Edge contraction: choose an edge $\{u, v\}$, replace $u$ and $v$ by a single new vertex and let all neighbors of $u$ and $v$ be adjacent to this new vertex; name the new vertex as either $u$ or $v$. (2) Vertex deletion: delete a vertex together with all incident edges. (3) Edge deletion.
\[lem:minoar-safe\] ([@BK06]) A separator $S$ of $G$ is safe if, for every component $C$ associated with $S$, $G[V(G) \setminus C]$ contains clique $S$ as a labelled minor.
Call a separator [*minor-safe*]{} if it satisfies the sufficient condition for safeness stated in this lemma. Bodlaender and Koster [@BK06] showed that if $S$ is a minimal separator and is an almost clique (deleting some single vertex makes it a clique) then $S$ is minor-safe and moreover that the set of all almost clique minimal separators can be found in $O(n^2 m)$ time, where $n$ is the number of vertices and $m$ is the number of edges.
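The almost-clique condition itself is simple to test; the following hedged Java sketch, reusing the `SeparatorBasics` helpers from earlier, checks whether a given separator is a minimal separator and an almost clique. It is a straightforward restatement of the definitions, not the $O(n^2 m)$ enumeration algorithm of Bodlaender and Koster.

```java
import java.util.*;

// Sketch of the almost-clique test: S is an almost clique if S itself, or S
// minus some single vertex, induces a clique.  Combined with S being a
// minimal separator (at least two full components), this is sufficient for
// minor-safeness.
public class AlmostCliqueSeparators {

    static boolean isClique(List<Set<Integer>> adj, Set<Integer> s) {
        for (int u : s) for (int v : s)
            if (u < v && !adj.get(u).contains(v)) return false;
        return true;
    }

    static boolean isAlmostClique(List<Set<Integer>> adj, Set<Integer> s) {
        if (isClique(adj, s)) return true;
        for (int x : s) {
            Set<Integer> rest = new HashSet<>(s);
            rest.remove(x);
            if (isClique(adj, rest)) return true;
        }
        return false;
    }

    // S is a minimal separator iff at least two components of G - S are full.
    static boolean isMinimalSeparator(List<Set<Integer>> adj, Set<Integer> s) {
        int full = 0;
        for (Set<Integer> c : SeparatorBasics.components(adj, s))
            if (SeparatorBasics.neighborhood(adj, c).equals(s)) full++;
        return full >= 2;
    }

    static boolean isAlmostCliqueMinimalSeparator(List<Set<Integer>> adj, Set<Integer> s) {
        return isMinimalSeparator(adj, s) && isAlmostClique(adj, s);
    }
}
```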
We aim at capturing as many minor-safe separators as possible, at the expense of theoretical running time bounds on the algorithm for finding them. Thus, in our approach, both the algorithm for generating candidate separators and the algorithm for deciding minor-safeness are heuristic. For candidate generation, we use greedy heuristics for treewidth, such as min-fill and min-degree: the separators in the resulting tree-decomposition are all candidates for safe separators.
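As an illustration of the candidate generation step, the following Java sketch runs the min-degree greedy heuristic and records, for each eliminated vertex $v$, the bag $N[v]$ at elimination time; these bags form a (generally suboptimal) tree-decomposition whose separators are then handed to the minor-safeness test. The sketch is ours and omits min-fill and all engineering details.

```java
import java.util.*;

// Sketch of the min-degree greedy heuristic used for candidate generation:
// repeatedly eliminate a vertex of minimum degree, completing its current
// neighborhood into a clique.  The recorded bags N[v] (at elimination time)
// are the bags of a tree-decomposition of the input graph.
public class MinDegreeHeuristic {

    // Returns the bag N[v] recorded when each vertex v is eliminated, in order.
    static List<Set<Integer>> eliminationBags(List<Set<Integer>> original) {
        List<Set<Integer>> adj = new ArrayList<>();
        for (Set<Integer> nb : original) adj.add(new HashSet<>(nb));   // working copy
        int n = adj.size();
        boolean[] eliminated = new boolean[n];
        List<Set<Integer>> result = new ArrayList<>();
        for (int step = 0; step < n; step++) {
            int best = -1;
            for (int v = 0; v < n; v++)
                if (!eliminated[v] && (best < 0 || adj.get(v).size() < adj.get(best).size()))
                    best = v;
            Set<Integer> nb = new HashSet<>(adj.get(best));
            Set<Integer> bag = new HashSet<>(nb);
            bag.add(best);
            result.add(bag);                            // bag N[best] at elimination time
            for (int u : nb) for (int w : nb)           // complete N(best) into a clique
                if (u != w) adj.get(u).add(w);
            for (int u : nb) adj.get(u).remove(best);   // detach best from the graph
            adj.get(best).clear();
            eliminated[best] = true;
        }
        return result;
    }
}
```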
When we apply our heuristic decision algorithm for minor-safeness to candidate separator $S$, one of the following occurs.
1. The algorithm answers “YES”. In this case, a required labelled clique minor has been found for every component associated with $S$ and hence $S$ is minor-safe.
2. The algorithm answers “DON’T KNOW”. In this case, the algorithm has failed to find a labelled clique minor for at least one component, and hence it is not known if $S$ is minor-safe or not.
3. The algorithm aborts, after reaching the prescribed number of execution steps.
Our heuristic decision algorithm works in two phases. Let $S$ be a separator, $C$ a component associated with $S$, and $R = V(G)
\setminus (S \cup C)$. In the first phase, we contract edges in $R$ and obtain a graph $B$ on vertex set $S \cup R'$, where each vertex of $R'$ is a contraction of some vertex set of $R$ and $B$ has no edge between vertices in $R'$. For each pair $u, v$ of distinct vertices in $S$, let $N(u, v)$ denote the common neighbors of $u$ and $v$ in graph $B$. The contractions are performed with the goal of making $|N(u, v) \cap R'|$ large for each missing edge $\{u, v\}$ in $S$. In the second phase, for each missing edge $\{u, v\}$, we choose a common neighbor $w \in N(u, v) \cap R'$ and contract either $\{u, w\}$ or $\{v, w\}$. The choice of the next missing edge to be processed and the choice of the common neighbor are done as follows. Suppose the contractions in the second phase are done for some missing edges in $S$. For each missing edge $\{u, v\}$ not yet “processed”, let $N'(u, v)$ be the set of common neighbors of $u$ and $v$ that are not yet contracted with any vertex in $S$. We choose $\{u, v\}$ with the smallest $|N'(u, v) \cap R'|$ to be processed next. Ties, as well as the choice of the common neighbor $w \in N'(u, v) \cap R'$ to be contracted with $u$ or $v$, are resolved so that the minimum of $|(N'(x, y) \cap R') \setminus \{w\}|$ over all remaining missing edges $\{x, y\}$ in $S$ is maximized.
The performance of these heuristics strongly depends on the instances. For PACE 2017 public instances, they work quite well. Table \[tab:PACE-safe\] shows the preprocessing result on the last 10 of those instances. See Section \[sec:performance\] for the description of those instances and the computational environment for the experiment. For each instance, the number of safe separators found and the maximum subproblem size in terms of the number of vertices, after the graph is decomposed by the safe separators found, are listed. The results show that these instances, which are deemed the hardest among all the 100 public instances, are quickly decomposed into manageable subproblems by our preprocessing.
name $|V|$ $|E|$ $tw(G)$ safe separators found max subproblem time(secs)
------- ------- ------- --------- ----------------------- ---------------- ------------
ex181 109 732 18 18 89 0.078
ex183 265 471 11 173 76 0.031
ex185 237 793 14 142 52 0.046
ex187 240 453 10 138 81 0.031
ex189 178 4517 70 6 161 0.062
ex191 492 1608 15 184 132 0.171
ex193 1391 3012 10 791 119 3.17
ex195 216 382 10 114 84 0.015
ex197 303 1158 15 176 56 0.062
ex199 310 537 9 157 131 0.046
: Safe separator preprocessing on PACE 2017 instances[]{data-label="tab:PACE-safe"}
On the other hand, these heuristics turned out to be useless for most of the DIMACS graph coloring instances: no safe separators were found for those instances. We suspect that this is not a limitation of the heuristics but simply reflects the fact that those instances lack minor-safe separators. Further study is needed, however, to reach a firm conclusion.
Performance results {#sec:performance}
===================
We have tested our implementation on two sets of instances. The first set comes from the DIMACS graph coloring challenge [@JT93] and has served as a standard benchmark suite for treewidth in the literature [@GD04; @BK06; @Musliu08; @SH09; @BFKKT12; @BJ14]. The other is the set of public instances posed by the exact treewidth track of PACE 2017 [@PACE17].
The computing environment for the experiment is as follows. CPU: Intel Core i7-7700K, 4.20GHz; RAM: 32GB; Operating system: Windows 10, 64bit; Programming language: Java 1.8; JVM: jre1.8.0\_121. The maximum heap space size is 6GB by default and is 24GB where it is stated so. The implementation is single threaded, except that multiple threads may be invoked for garbage collection by JVM. The time measured is the CPU time, which includes the garbage collection time.
To determine the treewidth of a given instance we use our decision procedure with $k$ being incremented one by one, starting from the obvious lower bound, namely the minimum degree of the graph. Binary search is not used because the cost of overshooting the exact treewidth can be huge. We do not feel the need to use stronger lower bounds either, since the cost of executing the decision procedure for $k$ below such lower bounds is usually quite small.
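The following small Java sketch shows only this outer search strategy; the decision procedure itself (our PID-BT algorithm in the actual implementation) is abstracted as a predicate passed in by the caller, so both the predicate and the method names here are illustrative.

```java
import java.util.*;
import java.util.function.IntPredicate;

// Outer driver: start from the minimum degree of the graph, a simple lower
// bound on treewidth, and increment k until the decision procedure accepts.
public class TreewidthDriver {

    static int treewidth(List<Set<Integer>> adj, IntPredicate treewidthAtMost) {
        int lb = Integer.MAX_VALUE;
        for (Set<Integer> nb : adj) lb = Math.min(lb, nb.size());   // minimum degree
        int k = Math.max(1, lb);
        while (!treewidthAtMost.test(k)) k++;   // no binary search: overshooting is costly
        return k;
    }
}
```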
Table \[tab:DIMACS\] shows the results on DIMACS graph coloring instances. Each row shows the name of the instance, the number of vertices, the number of edges, the exact treewidth computed by our algorithm, CPU time in seconds, and the previously best known upper and lower bounds on the treewidth. Rows in bold face show the newly solved instances. For all but three of them, the previous best upper bound has turned out optimal: only the lower bound was weaker. In this experiment, however, no knowledge of previous bounds is used and our algorithm independently determines the exact treewidth.
The results on “queen" instances illustrate how far our algorithm has extended the practical limit of exact treewidth computation. Queen7\_7 with 49 vertices is the largest instance previously solved, while queen10\_10 with 100 vertices is now solved. Also note that all previously solved instances are fairly easy for our algorithm: all of them are solved within 10 seconds per instance and many of them within a second.
name $|V|$ $|E|$ ${{\mathop{\rm tw}}}$ time(secs) prev UB prev LB
---------------------------- ------------- ---------------- ----------------------- --------------- ------------- -------------
anna 138 493 12 0.078 12 12
david 87 406 13 0.031 13 13
[**DSJC125.5**]{} [**125**]{} [**3891**]{} [**108**]{} [**459**]{} [**108**]{} [**56**]{}
DSJC125.9 125 6961 119 0.062 119 119
[**DSJC250.9**]{} [**250**]{} [**27897**]{} [**243**]{} [**0.44**]{} [**243**]{} [**212**]{}
[**DSJC500.9**]{} [**500**]{} [**112437**]{} [**492**]{} [**14**]{} [**492**]{} [**433**]{}
[**DSJR500.5**]{} [**500**]{} [**58862**]{} [**246**]{} [**546**]{} [**-**]{} [**-**]{}
DSJR500.1c 500 121275 485 2.12 485 485
fpsol2.i.1 496 11654 66 3.30 66 66
fpsol2.i.2 451 8691 31 5.66 31 31
fpsol2.i.3 425 8688 31 5.68 31 31
[**games120**]{}$^\dagger$ [**120**]{} [**638**]{} [**32**]{} [**94738**]{} [**32**]{} [**24**]{}
[**homer**]{}$^\dagger$ [**561**]{} [**1628**]{} [**30**]{} [**2765**]{} [**31**]{} [**26**]{}
huck 74 301 10 0.012 10 10
inithx.i.1 864 18707 56 8.10 56 56
inithx.i.2 645 13979 31 8.14 31 31
inithx.i.3 621 13969 31 10 31 31
jean 80 254 9 0.031 9 9
miles250 128 387 9 0.000 9 9
miles500 128 1170 22 0.11 22 22
[**miles750**]{} [**128**]{} [**2113**]{} [**36**]{} [**0.23**]{} [**36**]{} [**35**]{}
miles1000 128 3216 49 0.33 49 49
miles1500 128 5198 77 0.45 77 77
mulsol.i.1 197 3925 50 1.41 50 50
mulsol.i.2 188 3885 32 1.77 32 32
mulsol.i.3 184 3916 32 1.80 32 32
mulsol.i.4 185 3946 32 1.78 32 32
mulsol.i.5 186 3973 31 1.80 31 31
myciel2 5 5 2 0.000 2 2
myciel3 11 20 5 0.000 5 5
myciel4 23 71 10 0.015 10 10
myciel5 47 236 19 0.33 19 19
[**myciel6**]{} [**95**]{} [**755**]{} [**35**]{} [**419**]{} [**35**]{} [**29**]{}
queen5\_5 25 160 18 0.000 18 18
queen6\_6 36 290 25 0.031 25 25
queen7\_7 49 476 35 0.19 35 35
[**queen8\_8**]{} [**64**]{} [**728**]{} [**45**]{} [**4.16**]{} [**45**]{} [**25**]{}
[**queen9\_9**]{} [**81**]{} [**1056**]{} [**58**]{} [**274**]{} [**58**]{} [**35**]{}
[**queen8\_12**]{} [**96**]{} [**1368**]{} [**65**]{} [**649**]{} [**-**]{} [**39**]{}
[**queen10\_10**]{} [**100**]{} [**1470**]{} [**72**]{} [**20934**]{} [**72**]{} [**39**]{}
zeroin.i.1 211 4100 50 1.09 50 50
zeroin.i.2 211 3541 32 1.64 32 32
zeroin.i.3 206 3540 32 1.55 32 31
: Results on the DIMACS graph coloring instances[]{data-label="tab:DIMACS"}
\
Previous upper bounds from [@GD04] and [@Musliu08]; previous lower bounds from [@GD04] and [@BWK06].\
$^\dagger$ 24GB heap space is used for these instances.
Table \[tab:DIMACS-LB\] shows the lower bounds obtained by our algorithm on unsolved DIMACS graph coloring instances. Lower bound entries in bold face are improvements over the previously known lower bounds. Computation time of the previously best lower bounds ranges from a few minutes to a week [@BWK06]. Detailed comparison of lower bound methods, which requires the normalization of machine speeds, is not intended here. Rather, the table is meant to show the potential of our algorithm as a lower bound procedure.
For many of the instances the improvements are significant. It can also be seen from this table that our algorithm performs rather poorly on relatively sparse graphs with a large number of vertices.
-------------- ------- -------- ------------- ------------- -------------- ------- -------
name $|V|$ $|E|$ 1 sec 1 min 30 min lower upper
DSJC125.1 125 736 [**25**]{} [**30**]{} [**36** ]{} 20 60
DSJC250.1 250 3218 [**45**]{} [**57**]{} [**66** ]{} 43 167
DSJC250.5 250 15668 [**180**]{} [**197**]{} [**211**]{} 114 229
DSJC500.1 500 12458 - [**94**]{} [**115**]{} 87 409
DSJC500.5 500 62624 - [**360**]{} [**388**]{} 231 479
DSJC1000.1 1000 49629 - 172 [**189**]{} 183 896
DSJC1000.5 1000 249826 - [**724**]{} [**742**]{} 469 977
DSJC1000.9 1000 449449 - [**983**]{} [**987**]{} 872 991
le450\_5a 450 5714 29 50 59 79 243
le450\_5b 450 5734 - 49 57 - 246
le450\_5c 450 9803 - 84 100 106 265
le450\_5d 450 9757 - 94 99 - 265
le450\_15a 450 8168 24 40 49$^\dagger$ 94 262
le450\_15b 450 8169 23 32 47$^\dagger$ - 258
le450\_15c 450 16680 - 114 132 139 350
le450\_15d 450 16750 - 112 131 - 353
le450\_25a 450 8260 11 23 25$^\dagger$ 96 216
le450\_25b 450 8263 16 26 30$^\dagger$ - 219
le450\_25c 450 17343 43 89 109 144 320
le450\_25d 450 17425 - 93 112 - 327
myciel7 191 2360 22 31 35 52 66
queen11\_11 121 1980 [**61**]{} [**70**]{} [**77**]{} 40 87
queen12\_12 144 2596 [**71**]{} [**76**]{} [**84**]{} 55 103
queen13\_13 169 3328 [**70**]{} [**82**]{} [**91**]{} 51 121
queen14\_14 196 4186 [**74**]{} [**87**]{} [**98**]{} 55 140
queen15\_15 225 5180 [**78**]{} [**93**]{} [**104**]{} 73 162
queen16\_16 256 6320 [**83**]{} [**99**]{} [**110**]{} 79 186
school1 385 19095 73 112 125 149 178
school1\_nsh 352 14612 78 105 118 132 152
-------------- ------- -------- ------------- ------------- -------------- ------- -------
: New lower bounds on the treewidth of unsolved DIMACS graph coloring instances[]{data-label="tab:DIMACS-LB"}
Previous upper bounds from [@Musliu08]; previous lower bounds from [@BWK06].\
$^\dagger$ out of memory before time out
Table \[tab:PACE2017\] shows the results on PACE 2017 instances. The prefix “ex" in the instance names means that they are for the exact treewidth track. Odd numbers mean that they are public instances disclosed prior to the competition for testing and experimenting. Even numbered instances, not in the list, are secret and to be used in evaluating submissions. The time allowed to be spent for each instance is 30 minutes. As can be seen from the table, our algorithm solves all of the public instances with a large margin in time.
name $|V|$ $|E|$ ${{\mathop{\rm tw}}}$ time (secs) name $|V|$ $|E|$ ${{\mathop{\rm tw}}}$ time (secs)
------- ------- ------- ----------------------- ------------- ------- ------- -------- ----------------------- -------------
ex001 262 648 10 1.48 ex101 1038 291034 540 12
ex003 92 2113 44 8.92 ex103 237 419 10 3.01
ex005 377 597 7 14 ex105 1038 291037 540 12
ex007 137 451 12 0.046 ex107 166 396 12 1.44
ex009 466 662 7 13 ex109 1212 1794 7 43
ex011 465 1004 9 0.50 ex111 395 668 9 4.33
ex013 56 280 29 15 ex113 93 488 14 0.046
ex015 177 669 15 0.046 ex115 963 419877 908 18
ex017 330 571 9 1.11 ex117 77 181 13 18
ex019 291 752 11 40 ex119 84 479 23 16
ex021 318 572 9 2.80 ex121 204 1164 34 76
ex023 690 1355 8 0.91 ex123 122 635 35 14
ex025 92 472 20 1.61 ex125 320 8862 70 8.19
ex027 274 715 11 51 ex127 228 527 10 0.20
ex029 238 411 9 1.33 ex129 737 2826 14 0.97
ex031 219 382 8 12 ex131 292 1386 18 0.17
ex033 363 541 7 50 ex133 522 1296 11 3.94
ex035 247 804 14 3.60 ex135 2822 129474 87 49
ex037 272 615 10 3.43 ex137 196 1098 19 0.34
ex039 56 280 32 58 ex139 334 568 9 8.34
ex041 205 341 9 0.63 ex141 226 1168 34 117
ex043 279 513 9 3.34 ex143 130 660 35 52
ex045 600 865 7 7.80 ex145 48 96 12 18
ex047 1854 21118 21 140 ex147 101 606 16 0.093
ex049 117 332 13 0.078 ex149 698 2604 12 0.75
ex051 136 254 10 0.62 ex151 279 733 12 210
ex053 218 383 9 1.98 ex153 772 11654 47 57
ex055 197 813 18 0.078 ex155 758 11580 47 103
ex057 281 9075 117 0.093 ex157 260 467 9 6.42
ex059 298 780 10 0.47 ex159 582 2772 18 2.37
ex061 158 1058 22 9.59 ex161 1046 3906 12 2.84
ex063 103 582 34 4.76 ex163 244 445 10 4.69
ex065 50 175 25 79 ex165 222 742 14 0.23
ex067 235 424 10 2.70 ex167 509 969 10 7.96
ex069 235 441 9 1.43 ex169 3706 42236 22 530
ex071 253 434 9 2.42 ex171 647 2175 14 0.77
ex073 712 1085 7 15 ex173 536 1011 10 5.05
ex075 111 360 8 0.28 ex175 227 1000 17 113
ex077 237 423 10 2.70 ex177 227 759 14 0.23
ex079 314 4943 42 1.64 ex179 187 346 10 14
ex081 188 638 6 0.55 ex181 109 732 18 0.20
ex083 213 380 10 3.05 ex183 265 471 11 8.61
ex085 229 370 8 11 ex185 237 793 14 0.33
ex087 380 5790 47 46 ex187 240 453 10 2.80
ex089 318 576 9 11 ex189 178 4517 70 3.59
ex091 193 336 9 31 ex191 492 1608 15 21
ex093 454 664 7 27 ex193 1391 3012 10 3.80
ex095 220 555 11 0.59 ex195 216 382 10 6.11
ex097 286 4079 48 2.01 ex197 303 1158 15 0.36
ex099 616 923 7 88 ex199 310 537 9 23
: Results on the PACE 2017 public instances[]{data-label="tab:PACE2017"}
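For concreteness, the following is a minimal sketch, in Java, of a reader for the PACE `.gr` input format in which these instances are distributed. It is not the code of our submission, and the class name `GrReader` is chosen here only for illustration. The sketch assumes the standard PACE convention: comment lines start with `c`, a single header line `p tw <n> <m>` announces the numbers of vertices and edges, and every remaining non-empty line lists one edge as a pair of 1-based vertex indices.

```java
// Minimal sketch of a PACE .gr reader (illustration only, not the submission code).
// Assumed format: lines starting with "c" are comments, one header line
// "p tw <n> <m>" gives the numbers of vertices and edges, and each remaining
// non-empty line "u v" lists one edge with 1-based vertex indices.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class GrReader {
    public static List<int[]> readEdges(BufferedReader in) throws IOException {
        int n = -1, m = -1;                       // as announced in the header
        List<int[]> edges = new ArrayList<>();
        String line;
        while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("c")) continue;   // skip comments
            String[] tok = line.split("\\s+");
            if (tok[0].equals("p")) {             // header: p tw <n> <m>
                n = Integer.parseInt(tok[2]);
                m = Integer.parseInt(tok[3]);
            } else {                              // edge line: u v
                edges.add(new int[] { Integer.parseInt(tok[0]), Integer.parseInt(tok[1]) });
            }
        }
        if (edges.size() != m) {
            System.err.println("warning: header announced " + m + " edges but " + edges.size() + " were read");
        }
        System.err.println("graph with " + n + " vertices and " + edges.size() + " edges");
        return edges;
    }

    public static void main(String[] args) throws IOException {
        readEdges(new BufferedReader(new InputStreamReader(System.in)));
    }
}
```

Under the competition rules, a solver reads such a graph from standard input and writes a tree decomposition to standard output in the companion `.td` format, whose reported width is the size of the largest bag minus one; the times in Table \[tab:PACE2017\] are measured against the 30-minute limit mentioned above.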
Acknowledgment {#acknowledgment .unnumbered}
==============
The author thanks Hiromu Ohtsuka for his help in implementing the block sieve data structure. He also thanks Yasuaki Kobayashi for helpful discussions and especially for drawing his attention to the notion of safe separators. This work would not have existed were it not for the motivation provided by the timely challenges of PACE 2016 and 2017. The author is deeply indebted to their organizers, especially Holger Dell, for their dedication and excellent work.
S. Arnborg, D. G. Corneil, and A. Proskurowski: Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic Discrete Methods 8, 277-284, 1987
J. Berg and M. Järvisalo: SAT-based approaches to treewidth computation: an evaluation. Proceedings of the IEEE 26th International Conference on Tools with Artificial Intelligence, 328-335, 2014
H. L. Bodlaender: A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM Journal on Computing 25(6), 1305-1317, 1996
H. L. Bodlaender, F. V. Fomin, A. M. C. A. Koster, D. Kratsch, and D. M. Thilikos: On exact algorithms for treewidth. ACM Transactions on Algorithms 9(1), 12, 2012
H. L. Bodlaender and A. M. C. A. Koster: Safe separators for treewidth. Discrete Mathematics 306(3), 337-350, 2006
H. L. Bodlaender, T. Wolle, and A. M. C. A. Koster: Contraction and Treewidth Lower Bounds. Journal of Graph Algorithms and Applications 10(1), 5-49, 2006
H. L. Bodlaender and A. M. C. A. Koster: Combinatorial Optimization on Graphs of Bounded Treewidth. The Computer Journal 51(3), 255-269, 2008
V. Bouchitté and I. Todinca: Treewidth and minimum fill-in: Grouping the minimal separators. SIAM Journal on Computing 31(1), 212-232, 2001
V. Bouchitté and I. Todinca: Listing all potential maximal cliques of a graph. Theoretical Computer Science 276, 17-32, 2002
H. Dell, T. Husfeldt, B. M. Jansen, P. Kaski, C. Komusiewicz, and F. A. Rosamond: The First Parameterized Algorithms and Computational Experiments Challenge. LIPIcs - Leibniz International Proceedings in Informatics 63, 2017
F. V. Fomin, D. Kratsch, I. Todinca, and Y. Villanger: Exact algorithms for treewidth and minimum fill-in. SIAM Journal on Computing, 38(3), 1058-1079, 2008
F. Fomin and Y. Villanger: Treewidth computation and extremal combinatorics. Combinatorica 32(3), 289-308, 2012
V. Gogate and R. Dechter: A complete anytime algorithm for treewidth. Proceedings of the 20th conference on Uncertainty in artificial intelligence, AUAI Press, 2004
D. S. Johnson and M. A. Trick (eds.): Cliques, coloring, and satisfiability: second DIMACS implementation challenge. Series in Discrete Mathematics and Theoretical Computer Science, Vol. 26, American Mathematical Society, 1996
N. Musliu: An iterative heuristic algorithm for tree decomposition. Recent Advances in Evolutionary Computation for Combinatorial Optimization, 133-150, 2008
PACE 2017 website: https://pacechallenge.wordpress.com/
N. Robertson and P. D. Seymour: Graph minors. II. Algorithmic aspects of tree-width. Journal of Algorithms 7, 309-322, 1986
N. Robertson and P. D. Seymour: Graph minors. XX. Wagner’s conjecture. Journal of Combinatorial Theory, Series B 92(2), 325-357, 2004
M. Samer and H. Veith: Encoding treewidth into SAT. Proceedings of International Conference on Theory and Applications of Satisfiability Testing, 45-50, 2009
I. Savnik: Index data structure for fast subset and superset queries. Proceedings of International Conference on Availability, Reliability, and Security, 134-148, 2013
Github repository: https://github.com/TCS-Meiji/PACE2017-TrackA
[^1]: A preliminary and abridged version of this paper was presented at the 25th European Symposium on Algorithms.
|