{"text":"\\section{Introduction} \nDetection of cosmic rays arriving at the Earth with energies above \n$10^{20} $ eV questions the presence of the GZK cutoff~\\cite{gzk}.\nThis cutoff determines the energy where the cosmic ray spectrum\nis expected to abruptly steepen. Cosmic rays with ultra high energies (above \n$\\sim 5 \\times 10^{19} $ eV) lose energy through photoproduction of pions\nwhen traversing the Cosmic Microwave Background Radiation (CMB). As\nthe CMB attenuates ultra high energy cosmic rays (UHECR) on a 50 Mpc\nscale (or characteristic distance) at $10^{20}$ eV,\none can determine its maximum production distance. An event\nof $10^{20} {\\rm eV}$ has to be produced within $\\sim 100$ Mpc, unless it is\na non-standard particle~\\cite{cfk,afk}.\nThe absence of any powerful source located within this \nrange~\\cite{sommers} --- that could accelerate a cosmic ray to such an \nenergy --- turns the existence \nof these events into a mystery, the so-called GZK puzzle. \n\nThe results of two important cosmic ray experiments, AGASA~\\cite{agasa} and \nHiRes~\\cite{hires}, are not consistent. Not only is the energy \nspectrum measured by HiRes systematically below the one measured by AGASA, \nbut also the \nHiRes spectrum steepens around $10^{20} $ eV while AGASA's spectrum flattens \naround this energy region. The steepening in the HiRes spectrum may be in \nagreement with a GZK cutoff, while AGASA's is thought not to be. \n\nThere are many possible ways to understand this \ndiscrepancy~\\cite{demarco,stanev}. 
\nThe Pierre \nAuger Observatory~\\cite{auger} will soon have a statistically significant \ndata sample and\nwill certainly shed light on these events.\n\nIn this article we focus on the role\nof the shape of the error distribution in the energy determination. \nWe show that the intrinsic \nfeatures of an air shower result in a lognormal error distribution on the\nenergy determination. \nThe minimum standard deviation of this distribution ($\\sigma$) is set by \nphysical properties of the shower. \nIf additional errors due to detection --\nwhich increase $\\sigma$ -- are not kept to a minimum, the end of the\nenergy spectrum will be smeared in a way that the GZK feature might not be seen.\n\nUnderstanding the energy error is crucial in order to determine whether \nor not the GZK cutoff is present. \nA lognormal error distribution on the reconstructed primary cosmic ray\nenergy is to be expected due to fluctuations\nboth in the shower starting point and in the cascade development \n\\cite{gaisser}. According to simulations by the AUGER collaboration \n\\cite{desrep}, the depth of first interaction affects the rate of development\nof the particle cascade of the shower, which results in a fluctuation\nof about 15\\% on the number of muons and about 5\\% on the electromagnetic\ncomponent. Auger also predicts that the number of muons in a proton-induced \nshower increases with primary energy as E$^{0.85}$ \\cite{desrep}. \nThe 15\\% fluctuation will then contribute as a fixed fractional error and\nthe fluctuation on the number of muons on the ground will be \n$N = (1 \\pm 0.15) N_0 (E\\,\/\\,E_0)^{0.85}$. 
Therefore one has to add a 15\\% \ncontribution to the error in estimating the primary energy in addition \nto the $\\sqrt{N}$ error factor.\nSince this shower-starting fluctuation error is a fixed percentage of the \nenergy, it results in a lognormal error distribution.\n\nThere are mainly two ways of determining the energy: ground detectors reconstruct\nthe energy based on the particle density at a certain distance from the\nshower core, while fluorescence detectors \ndetermine the energy through the shower longitudinal profile~\\cite{hires}.\nThe longitudinal profile gives the number of particles in the shower\nper unit depth and is well known to have large fluctuations. \nAs mentioned above, the fluctuations arise both from the shower starting point \nand from the cascade development. \nThe same is expected for the energy determination in ground detectors, \nsince the particle density depends on the number of particles.\n\nThe inherent fluctuations and resulting lognormal error distribution will\ncrucially affect the analysis of data collected in ground arrays since their data\nsample is collected at one particular depth. They also affect fluorescence\ndata, but as the energy reconstruction uses the full longitudinal profile of\nthe shower, there is more potential information to estimate the original \nenergy.\n\nFigure~\\ref{fig:grdp} shows the distribution\nof particles at ground level for $2 \\times 10^4$ simulated \nshowers~(using \\cite{aires}) \nfrom $10^{20}$ eV protons. A lognormal fit with $\\sigma = 0.08$ is\nsuperimposed, and it is clear that the distribution has a lognormal shape.\nThe same distribution for showers from $10^{18}$~eV\nprotons is shown in Figure~\\ref{fig:grdp18}. The poor fit is due to an excess\nof simulated events relative to the lognormal at the high end.\nThe standard deviation of the fit is 0.14. 
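The multiplicative mechanism described above, a fixed fractional fluctuation acting on the particle number at each stage of the cascade, can be illustrated with a toy Monte Carlo (the branching-factor range, event count, and number of generations below are hypothetical, not taken from Aires): the logarithm of the ground particle number is a sum of independent terms, so the number itself is approximately lognormal.

```python
import math
import random
import statistics

random.seed(0)

def simulate_ground_particles(n_events=20000, n_generations=12):
    # Toy cascade: each generation multiplies the particle number by an
    # independent positive factor, so log(N) is a sum of i.i.d. terms and
    # N is approximately lognormal (central limit theorem in log space).
    samples = []
    for _ in range(n_events):
        log_n = sum(math.log(random.uniform(1.5, 2.5))
                    for _ in range(n_generations))
        samples.append(math.exp(log_n))
    return samples

def skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 3 for x in xs)

samples = simulate_ground_particles()
log_samples = [math.log(x) for x in samples]
# log(N) is nearly symmetric while N itself has a heavy right tail
print(round(skewness(log_samples), 2), round(skewness(samples), 2))
```

The near-zero skewness of $\log N$ against the strongly right-skewed $N$ is the signature of the lognormal shape seen in the simulated ground-particle distributions.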
Effects due to errors with\nasymmetrical and non-gaussian tails are shown in \\cite{vaz}.\n\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=220pt \\epsfbox{pgrd_e20.eps}\n\\caption{Distribution of the total number of particles at ground level.\nThe ratio of the number of particles over the average is shown.\n$2\\times 10^4$ showers were simulated with the Aires~\\cite{aires} package.\nPrimary particles are $10^{20}$ eV protons and $\\langle {\\rm N_{part}}\n\\rangle = 2.7 \\times 10^{10}$. Superimposed is a lognormal curve with $\\sigma = 0.08$.}\n\\label{fig:grdp}\n\\end{figure}\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=220pt \\epsfbox{pgrd_e18.eps}\n\\caption{Same as Figure~\\ref{fig:grdp} but for $10^{18}$ eV protons\nas the primaries. Here $\\langle {\\rm N_{part}} \\rangle = 14.7 \\times 10^7$.\nSuperimposed is a lognormal curve with $\\sigma = 0.14$. The poor fit is due to \nan excess of simulated events relative to the lognormal at the high end.}\n\\label{fig:grdp18}\n\\end{figure}\n\nThe simulated showers used the Sibyll interaction model and assumed that the ground \nwas at sea level (defined in Aires \\cite{aires} as 0\\,m or 1036\\,g\/cm$^2$). A more\nthorough analysis is under way to understand why the error distribution for lower\nenergies (as in Figure~\\ref{fig:grdp18}) deviates from the lognormal shape.\nHowever, it is clear that most of the excess events come from the tail of the\nshower maximum depth (XMAX) distribution. In Figure~\\ref{fig:xmax18} we\nshow the XMAX distribution for the same events used in Figure~\\ref{fig:grdp18}.\nIf we cut events with XMAX $>$ 890 g\/cm$^2$ from this data set, \nthe ground particle distribution loses part of the excess events. This \ndistribution is shown in Figure~\\ref{fig:cut18}. 
\nThese excess events, if included, would only exaggerate the effect we discuss here.\n\n\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=220pt \\epsfbox{xmax_e18.eps}\n\\caption{Maximum shower depth (XMAX) distribution for $2 \\times 10^{4}$ showers \nwith $10^{18}$ eV protons as the primaries. The arrow on the XMAX axis indicates\nwhere an analysis cut will be applied.}\n\\label{fig:xmax18}\n\\end{figure}\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=220pt \\epsfbox{pgrde18cut.eps}\n\\caption{Same as Figure~\\ref{fig:grdp18} but with events with XMAX $>$ 890 g\/cm$^2$\nremoved. Superimposed is a lognormal curve with $\\sigma = 0.13$. \nThe fit improves in relation to Figure~\\ref{fig:grdp18}.}\n\\label{fig:cut18}\n\\end{figure}\n\n\nThe results shown in \nFigures~\\ref{fig:grdp} and~\\ref{fig:grdp18} also depend on the location of\nthe ground level. We have also simulated events with the ground above\nsea level at 950\\,g\/cm$^2$. The lognormal continues to fit the\n$10^{20}$ eV distribution well, and its $\\sigma$ improves to 0.05. The $10^{18}$ eV \ndistribution still has an excess, but the chi-square improves to 4 and the $\\sigma$\nto 0.10.\n\nBelow we describe how we determine the UHECR spectrum\nassuming a power-law injection spectrum from cosmologically distributed sources. \nWe account for energy loss due to propagation through the CMB. \nWe then describe how the energy error is\nevaluated and how it affects the energy reconstruction and the determination\nof the GZK cutoff.\n\n\\section{Analytical determination of UHECR propagation and energy \nspectrum}\nOur analytical approach assumes a cosmological cosmic ray flux. We \nassume extragalactic sources isotropically distributed at different \nredshifts~\\cite{blan}. 
These\nsources produce a power law energy spectrum (injection spectrum) which is\nassumed to be:\n\\begin{equation}\nF(E) = k E^{-\\alpha} \\exp\\left(-\\frac{E}{E_{max}}\\right)\n\\label{eq:flux}\n\\end{equation}\nwhere $E$ is the cosmic ray energy, $k$ is a normalization factor, $\\alpha$ \nis the spectral index and $E_{max}$ is the maximum energy at the source.\n\nThe energy degradation of protons through the CMB includes losses \ndue to pair production~\\cite{blu,geddes}, expansion of the universe~\\cite{bere} \nand photopion production~\\cite{bere}. These losses at the present\nepoch are shown in Figure~\\ref{fig:enloss}.\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=250pt \\epsfbox{enloss_t.eps}\n\\caption{Energy losses (as labeled) of a proton traversing the CMB at \nthe present epoch.}\n\\label{fig:enloss}\n\\end{figure}\nWe include current values\nfor the matter and dark energy densities ($\\Omega_M$ and $\\Omega_\\Lambda$)\nwhen considering the energy loss due to expansion of the universe \n($\\beta_z$):\n\n\\begin{equation}\n\\beta_z(E,z) = H_0 \\sqrt{\\Omega_M (1 + z)^3 + \\Omega_\\Lambda}\n\\end{equation}\nwhere $\\beta$ is defined as $\\beta = 1\/E \\times dE\/dt$ and \n$\\Omega_M = 0.3$, $\\Omega_\\Lambda = 0.7$ and $H_0 = 75 $\n~km~s$^{-1}$~Mpc$^{-1}$.\n\nThe energy losses due to pair and photopion production ($\\beta(E,z)$)\nat an epoch with redshift $z$ are also corrected. Since the number density\nof the cosmic background photons varies as $n = n_0 \\, (1+z)^3$, the energy loss\nat $z$ differs from the energy loss today ($\\beta_0(E)$) in the following way:\n\n\\begin{equation}\n\\beta(E,z) = (1 + z)^3 \\beta_0((1+z)E)\n\\end{equation}\n\nOnce all energy loss mechanisms are known, the energy with which a\nproton has to be generated in order to account for the energy observed\ntoday can be determined. The generated energy depends on the distance\nor epoch (redshift) from today. 
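The two scaling rules above translate directly into code. In the sketch below, `toy_beta0` is a hypothetical present-epoch loss rate standing in for the curves of Figure~\ref{fig:enloss}, and the units are arbitrary; only the functional forms are taken from the text.

```python
import math

H0 = 75.0          # km s^-1 Mpc^-1 (value used in the text; units arbitrary here)
OMEGA_M, OMEGA_L = 0.3, 0.7

def beta_z(z):
    # Adiabatic loss rate from the expansion of the universe:
    # beta_z(z) = H0 * sqrt(Omega_M (1+z)^3 + Omega_Lambda)
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def beta_at_z(beta_0, E, z):
    # Pair/photopion loss rate at redshift z, from the (1+z)^3 photon
    # density scaling: beta(E, z) = (1+z)^3 * beta_0((1+z) E)
    return (1.0 + z) ** 3 * beta_0((1.0 + z) * E)

# Hypothetical present-epoch loss rate, beta_0(E) ~ E, for illustration only
def toy_beta0(E):
    return 1.0e-6 * E

print(beta_z(0.0) / H0)            # equals 1 at z = 0
print(beta_at_z(toy_beta0, 2.0, 1.0))
```

Note that at $z=0$ the adiabatic rate reduces to $H_0$, since $\Omega_M+\Omega_\Lambda=1$.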
This can be well determined by a\nmodification factor $\\eta(E,z)$~\\cite{bere}, which relates the generated\nenergy spectrum to the modified (and measured) one.\n\nThe cosmological flux assumes the observer at the center of a sphere\nof large radius and an isotropic density of sources \\cite{blan}.\nThe flux at the Earth is given by:\n\\begin{eqnarray*}\nj(E) & = & \\frac{c}{4 \\pi H_0} \\int_0^z F(E_g) \n\\left(\\frac{E_g}{E}\\right)^{-\\alpha} (1 + z)^m \n\\frac{dE_g}{dE} \\\\\n& & \\times \\frac{dz}\n{(1 + z)\\left[\\Omega_M (1 + z)^3 + \\Omega_\\Lambda\\right]^{1\/2}}\n\\end{eqnarray*}\nwhere $E_g$ is the generated cosmic ray energy (at a source located at\nredshift $z$); $F(E)$ is given by Equation~\\ref{eq:flux}; $E$ is the cosmic \nray energy determined at the Earth; $\\alpha$ is the same spectral index as\nin Equation~\\ref{eq:flux}; $m$ accounts for the luminosity evolution of the\nsources and $c$ is the speed of light. We assume $m = 0$ and therefore do not\ntake luminosity evolution into account. \nThe modification factor $\\eta$ is given by:\n\\begin{equation}\n\\eta = \\int_0^z \\left(\\frac{E_g}{E}\\right)^{-\\alpha} \\frac{dE_g}{dE} \\frac{dz}\n{(1 + z)\\left[\\Omega_M (1 + z)^3 + \\Omega_\\Lambda\\right]^{1\/2}}\n\\end{equation}\n\nFor comparison, we determine the modification factor for arbitrary\nredshifts and assuming no cosmological constant. Our results match those\nof~\\cite{bere,demarco} and are shown in Figure~\\ref{fig:mod}.\n\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=250pt \\epsfbox{mf.eps}\n\\caption{Modification factor~\\cite{bere} from our analytical calculation\nversus cosmic ray energy. 
Curves are for different redshifts (top: \n$z = 0.002$; middle: $z = 0.02$; bottom: $z = 0.2$)\nand assume no cosmological constant in order to compare results to \n\\cite{bere,demarco}.}\n\\label{fig:mod}\n\\end{figure}\n\nFigure~\\ref{fig:specg} shows (black solid curve) the expected cosmic ray\nflux versus energy ($E$) at Earth, multiplied by $E^3$, \nfrom a cosmological injection spectrum with $\\alpha = 2.6$. The expected GZK\nfeature is present.\n\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=250pt \\epsfbox{fluxg.8.eps}\n\\caption{Cosmic ray energy spectrum ($\\times E^3$) from a cosmological \nflux (solid black line) with spectral index $\\alpha = 2.6$. The other curves \nare the energy spectrum convoluted with a lognormal\nerror with standard deviations $\\sigma$ as shown.}\n\\label{fig:specg}\n\\end{figure}\n\n\\section{Error effects on the energy reconstruction}\nWe now assume that the cosmic ray energy spectrum from a cosmological\nisotropic distribution of sources \nis the true spectrum. To understand how an error in the reconstructed energy\naffects the spectrum, we convolute the cosmological flux assuming a lognormal\nerror on the energy.\n\nThe lognormal distribution is given by\n\\begin{equation}\n\\frac{dP(E',E)}{d\\ln E} = k 
\\exp\\left[-\\frac{1}{2\\sigma^2} \\log^2\\frac{E'}{E}\\right]\n\\end{equation}\nwhere $k = 1\\, \/\\, \\sqrt{2 \\pi}\\sigma$ is a normalization to unit area and \n$\\sigma$ is the standard deviation of $\\log_{10}E$. \nWhen a lognormal error in the energy reconstruction is assumed, the flux is \nconvoluted in the following way:\n\\begin{equation}\ndF'(E) = F(E') \\frac{dP(E',E)}{dE} dE' \n\\end{equation}\nwhere $F$ is given by Equation~\\ref{eq:flux}.\n\nThe expected flux ($\\times E^3$) for energies reconstructed with a lognormal\nerror distribution is shown in Figure~\\ref{fig:specg}. 
The curves are for \na spectral index\n$\\alpha = 2.6$ and $\\sigma = 0.08$, 0.14, 0.25 and 0.5 as labeled.\n\nIt is very clear that not only does the flux increase by a constant factor, but\nthe GZK feature is also smeared. As shown in Figures~\\ref{fig:grdp}\nand \\ref{fig:grdp18}, the standard deviation of the lognormal \ndistribution that would be obtained in an ideal case, where thousands of events \nare detected, depends on the energy of the primary particle. It is\n0.08 for a $10^{20}$ eV proton and 0.14 for a $10^{18}$ eV proton.\nFigure~\\ref{fig:specg} shows that if the standard deviation is above\n0.14, the GZK cutoff will show up at higher energies than in the true \nspectrum.\n\n\\section{Results and Conclusions}\nFigure~\\ref{fig:specg} shows how the energy spectrum from a cosmological\nflux is smeared due to a lognormal error in the energy reconstruction.\nIntrinsic shower fluctuations lead to a lognormal distribution of\nthe observed energy deposition and number of particles in the shower.\nA standard deviation of $\\log_{10}E$ equal to 0.25 is enough to modify not only the \nshape but\nalso the normalization of the spectrum measured at the Earth.\nAs a consequence, the GZK feature will be smeared and might not be detected \nat all. Such a $\\sigma$ (standard deviation of $\\log_{10}E$) can\neasily result from a detector that only samples a small portion of\nthe total number of particles. This is more critical\nfor ground detectors since their particle sample is detected all at one \nheight. 
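The smearing mechanism can be reproduced with a toy Monte Carlo: draw events from a steep power law with a hard cutoff (standing in for the GZK suppression; units, cutoff value, and event counts below are arbitrary), smear $\log_{10}E$ with a Gaussian of width $\sigma$, and count events reconstructed above the cutoff.

```python
import math
import random

random.seed(1)

def sample_power_law(alpha=2.6, e_min=1.0, e_max=1.0e4):
    # Inverse-transform sampling from a spectrum ~ E^-alpha on [e_min, e_max]
    u = random.random()
    a = 1.0 - alpha
    return (e_min ** a + u * (e_max ** a - e_min ** a)) ** (1.0 / a)

E_CUT = 100.0  # toy "GZK" cutoff: no true events above this energy
true_events = [E for E in (sample_power_law() for _ in range(200000))
               if E <= E_CUT]

def smear(E, sigma=0.25):
    # Lognormal reconstruction error: Gaussian in log10(E) with width sigma
    return 10.0 ** (math.log10(E) + random.gauss(0.0, sigma))

observed = [smear(E) for E in true_events]
n_true_above = sum(E > E_CUT for E in true_events)   # zero by construction
n_obs_above = sum(E > E_CUT for E in observed)
print(n_true_above, n_obs_above)
```

Events just below the cutoff are up-scattered past it, so the reconstructed spectrum extends beyond the true endpoint: this is the smearing of the GZK feature discussed above.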
The standard deviation of the intrinsic \nenergy error distribution for ground detectors is expected to be larger \nthan for fluorescence detectors.\n\nThe air fluorescence\ndetectors will have a lower intrinsic lognormal standard deviation as they\nobserve the full development of the shower within their field of view.\nThey miss observing only what goes into the ground or is out of the field\nof view.\n\nAs the Pierre Auger Observatory will not only increase the data sample\nto a significant level, but also combine both ground and fluorescence\ntechniques, it will have additional constraints with which to understand and\nbetter control the errors in the energy reconstruction. In this way it is\npossible to keep the standard deviation of the intrinsic lognormal\nenergy error to its minimum value.\n\nThe lognormal curves shown in \nFigures~\\ref{fig:grdp} and \\ref{fig:grdp18} have $\\sigma = 0.08$ and 0.14,\nrespectively. However, one can expect a larger value from an observed distribution\nsince the detectors sample only a portion of the total number of particles.\nOn the other hand,\nthe standard deviation of the distribution depends on the ground level altitude,\nand\ntherefore an analysis equivalent to ours has to be done for a specific\ndepth.\n\nFigure~\\ref{fig:2spec} shows that the effect of the lognormal energy error\nis also affected by the spectral index of the injection spectra. However,\nthe error in the energy reconstruction will smear the flux in a\nsignificant way independently of the spectral index.\n\n\\begin{figure} \n\\centering\\leavevmode \\epsfxsize=300pt \\epsfbox{flux2.eps}\n\\vspace*{-2.5cm}\n\\caption{Same as Figure~\\ref{fig:specg} but with spectral index $\\alpha = 3.0$\n(top) and $\\alpha = 2.3$ (bottom). 
The curve with crosses (x) has \n$\\sigma = 0.5$; circles (o), $\\sigma = 0.25$; and squares, $\\sigma = 0.1$.}\n\\label{fig:2spec}\n\\end{figure}\n\nWe have shown that a lognormal error in the energy reconstruction of\nthe UHECR spectra will affect not only the shape but also the normalization\nof the measured energy spectra. A standard deviation equal to or greater \nthan 0.25 will smear the GZK feature. As a consequence, this feature will \nnot be seen. \nThis result is independent of the spectral index of the injection spectra.\n\nIn conclusion, establishing the presence or absence of the GZK cutoff in \nthe \nUHECR spectrum depends not only on a larger data sample but also on the \ndetermination of the shape of the energy error distribution. The standard \ndeviation of this distribution has to be kept to its intrinsic value. If\nit is equal to or greater than 0.25, the GZK feature will be smeared and not be\ndetected.\n\n{\\em Acknowledgements --} \nWe thank Don Groom for useful comments.\nThis work was partially supported by NSF Grant Physics\/Polar Programs\nNo. 0071886 and in part\nby the Director, Office of Energy Research, Office of High Energy and\nNuclear Physics, Division of High Energy Physics of the U.S. Department\nof Energy under Contract Num. DE-AC03-76SF00098 through the Lawrence\nBerkeley National Laboratory.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $\\sigma\\in(0,1)$ with $\\sigma\\neq\\frac{1}{2}$. We consider the Cauchy problem for the fractional nonlinear Schr\\\"odinger equation\n\\begin{equation}\\tag{$\\textup{NLS}_\\sigma$}\ni\\partial_tu+(-\\Delta)^\\sigma u+\\mu|u|^{p-1}u=0,\\ u(0)=u_0\\in H^s,\n\\end{equation}\nwhere $\\mu= \\pm 1$ depending on the focusing or defocusing case. The operator $(-\\Delta)^{\\sigma}$ is the so-called fractional laplacian, the Fourier multiplier with symbol $|\\xi|^{2\\sigma}$. The fractional laplacian is the infinitesimal generator of some L\\'evy processes \\cite{B}. 
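As a concrete illustration of the multiplier definition (a toy sketch using a direct discrete Fourier transform on a periodic grid; it plays no role in the analysis below), $(-\Delta)^\sigma$ acts diagonally on Fourier modes, so $\sin(3x)$ is an eigenfunction with eigenvalue $3^{2\sigma}$:

```python
import cmath
import math

def frac_laplacian_periodic(samples, sigma):
    # Apply (-Delta)^sigma, the Fourier multiplier |xi|^(2*sigma), to
    # samples of a 2*pi-periodic function via a direct DFT (O(n^2), toy).
    n = len(samples)
    coeffs = [sum(samples[j] * cmath.exp(-2j * math.pi * k * j / n)
                  for j in range(n)) / n for k in range(n)]
    out = []
    for j in range(n):
        val = 0.0
        for k, c in enumerate(coeffs):
            kk = k if k <= n // 2 else k - n   # signed wavenumber
            val += abs(kk) ** (2 * sigma) * c * cmath.exp(2j * math.pi * k * j / n)
        out.append(val.real)
    return out

n, sigma = 64, 0.7
grid = [2 * math.pi * j / n for j in range(n)]
u = [math.sin(3 * x) for x in grid]
v = frac_laplacian_periodic(u, sigma)
# Eigenfunction check: (-Delta)^sigma sin(3x) = 3^(2*sigma) sin(3x)
err = max(abs(a - 3 ** (2 * sigma) * b) for a, b in zip(v, u))
print(err < 1e-9)
```

For $\sigma=1$ the same routine reduces to the usual discrete Laplacian multiplier $|\xi|^2$.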
A rather extensive study of the potential theoretic aspects of this operator can be found in \\cite{landkof}. \n\n\nThe previous equation is a fundamental equation of fractional quantum mechanics, a generalization of standard quantum mechanics extending the Feynman path integral to L\\'evy processes \\cite{laskin}. \n\nThe purpose of the present paper is to develop a general well-posedness and ill-posedness theory in Sobolev spaces. The one-dimensional case has been treated in \\cite{CHKL} for cubic nonlinearities, i.e. $p=3$, and $\\sigma\\in(\\frac{1}{2},1)$. Here, we consider a higher-dimensional version and other types of nonlinear terms. We also include all $\\sigma\\in (0,1)$ except $\\sigma=\\frac{1}{2}$; furthermore, contrary to \\cite{CHKL}, where the use of Bourgain spaces was crucial (since the main goal of their paper was to derive well-posedness theory on the flat torus), we rely only on standard Strichartz estimates and functional inequalities in $\\mathbb{R}^d.$ In the case of Hartree-type nonlinearities, local well-posedness and blow-up have been investigated in \\cite{ozawa}. \n\nIn the present paper, we will not consider global aspects with large data. For that, we refer the reader to \\cite{sire2} for a study of the energy-critical equation in the radial case, following the seminal work of Kenig and Merle \\cite{km1,km2}. 
As a consequence, we do not consider blow-up phenomena, an aspect we will treat in a forthcoming work.\n\nWe introduce two important exponents for our purposes: \n$$\ns_c=\\frac{d}{2}-\\frac{2\\sigma}{p-1}\n$$\nand \n$$s_g=\\frac{1-\\sigma}{2}.$$\n\nHere, $s_c$ is the scaling-critical regularity exponent in the following sense: for $\\lambda>0$, the transformation\n\\[u(t,x)\\mapsto \\frac{1}{\\lambda^{2\\sigma\/(p-1)}} u\\Big(\\frac{t}{\\lambda^{2\\sigma}},\\frac{x}{\\lambda}\\Big), \\quad u_0(x)\\mapsto \\frac{1}{\\lambda^{2\\sigma\/(p-1)}} u_0\\Big(\\frac{x}{\\lambda}\\Big)\\]\nkeeps the equation invariant, and one can expect local well-posedness for $s \\geq s_c$, since the scaling leaves the $\\dot H^{s_c}$ norm invariant. On the other hand, $s_g$ is the critical regularity for the ``pseudo\"-Galilean invariance (see the proof of ill-posedness below). \nUnder the flow of the equation ($\\textup{NLS}_\\sigma$), the following\nquantities are conserved:\n\\begin{align*}\nM[u]=&\\int_{\\mathbb R^d} |u(t,x)|^2dx&&\\textup{(mass)},\\\\\nE[u]=&\\int_{\\mathbb R^d}\\frac{1}{2}|\\,|\\nabla|^{\\sigma}u(t,x)|^2+\\frac{\\mu\n}{p+1}|u(t,x)|^{p+1}dx&&\\textup{(energy)}.\n\\end{align*}\n\nAn important feature of the equation under study is a loss of derivatives in the Strichartz estimates, as proved in \\cite{COX}. Unless additional assumptions are met, such as radiality as in \\cite{zihua}, one has a loss of $d(1-\\sigma)$ derivatives in the dispersion (see \\eqref{dispersive estimate with loss}). This happens to be an issue in several arguments. \n\n\n\\subsection*{Main results} The goal of this paper is to show that $(\\textup{NLS}_\\sigma)$ is locally well-posed in $H^s$ for $s\\geq \\max(s_c,s_g, 0)$, and it is ill-posed in $H^s$ for $s\\in (s_c, 0)$. We start with well-posedness results. 
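Before stating them, it may help to record the elementary scaling computation behind the criticality of $s_c$: since $\widehat{u_0(\cdot/\lambda)}(\xi)=\lambda^d\hat{u}_0(\lambda\xi)$, a change of variables gives

```latex
\|u_0(\cdot/\lambda)\|_{\dot H^s}^2
  = \int_{\mathbb{R}^d} |\xi|^{2s}\,\lambda^{2d}\,|\hat{u}_0(\lambda\xi)|^2\,d\xi
  = \lambda^{d-2s}\,\|u_0\|_{\dot H^s}^2,
\qquad\text{hence}\qquad
\Big\|\lambda^{-\frac{2\sigma}{p-1}}u_0(\cdot/\lambda)\Big\|_{\dot H^s}
  = \lambda^{\frac{d}{2}-s-\frac{2\sigma}{p-1}}\|u_0\|_{\dot H^s},
```

and the exponent vanishes exactly when $s=\frac{d}{2}-\frac{2\sigma}{p-1}=s_c$.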
\n\\begin{theorem}[Local well-posedness in subcritical cases]\\label{subcritical LWP}\nLet\n\\begin{align*}\n\\left\\{\\begin{aligned}\n&s\\geq s_g&&\\text{ when }d=1\\textup{ and }2\\leq p<5,\\\\\n&s>s_c&&\\text{ when }d=1\\textup{ and }p\\geq 5,\\\\\n&s>s_c&&\\text{ when }d\\geq 2\\textup{ and }p\\geq3.\n\\end{aligned}\n\\right.\n\\end{align*}\nThen, $(\\textup{NLS}_\\sigma)$ is locally well-posed in $H^s$. \n\\end{theorem}\n\n\n\n\\begin{theorem}[Local well-posedness in critical cases]\\label{critical LWP}\nSuppose that\n\\begin{align*}\n\\left\\{\\begin{aligned}\n&p>5&&\\text{ when }d=1,\\\\\n&p>3&&\\text{ when }d\\geq 2.\n\\end{aligned}\n\\right.\n\\end{align*}\nThen, $(\\textup{NLS}_\\sigma)$ is locally well-posed in $H^{s_c}$. \n\\end{theorem}\n\nThe proof of Theorem \\ref{critical LWP} is based on a new method, improving on estimates in \\cite{CKSTT}. This improvement, based on controlling the nonlinearity in a suitable space, is necessary due to the loss of derivatives in the Strichartz estimates. \n\nAs a by-product, we also prove small data scattering. \n\\begin{theorem}[Small data scattering]\\label{scattering}\nSuppose that\n\\begin{align*}\n\\left\\{\\begin{aligned}\n&p>5&&\\text{ when }d=1,\\\\\n&p>3&&\\text{ when }d\\geq 2.\n\\end{aligned}\n\\right.\n\\end{align*}\nThen, there exists $\\delta>0$ such that if $\\|u_0\\|_{H^{s_c}}<\\delta$, then $u(t)$ scatters in $H^{s_c}$. Precisely, there exist $u_\\pm \\in H^{s_c}$ such that\n$$\\lim_{t\\to\\pm\\infty}\\|u(t)-e^{it(-\\Delta)^\\sigma}u_\\pm\\|_{H^{s_c}}=0.$$\n\\end{theorem}\n\n\\begin{remark}\nContrary to the case $\\sigma\\neq\\frac{1}{2}$, when $\\sigma=\\frac{1}{2}$, the fractional NLS does not have small data scattering. See \\cite{krieger}. \n\\end{remark}\n\nFinally, our last theorem is the ill-posedness result. Note that our result is not optimal, since one should expect ill-posedness in $H^s$ up to $s_g=\\frac{1-\\sigma}{2}$, which is nonnegative. 
We hope to come back to this issue in a forthcoming work. \n\n\\begin{theorem}[Ill-posedness]\\label{ill-posedness}\nLet $d=1,2$ or $3$ and $\\sigma\\in(\\frac{d}{4},1)$. If $p$ is not an odd integer, we further assume that $p\\geq k+1$, where $k$ is an integer larger than $\\frac{d}{2}$. Then, $(\\textup{NLS}_\\sigma)$ is ill-posed in $H^s$ for $s\\in (s_c,0)$.\n\\end{theorem}\n\nAn interesting feature of the previous ill-posedness result is the fact that, contrary to the standard NLS equation ($\\sigma=1$), there is no exact Galilean invariance. However, one can introduce a new ``pseudo-Galilean invariance\" which is enough for our purposes. More precisely, for $v\\in\\mathbb{R}^d$, we define the transformation\n$$\\mathcal{G}_vu(t,x)=e^{-iv\\cdot x}e^{it|v|^{2\\sigma}}u(t,x-2t\\sigma|v|^{2(\\sigma-1)}v).$$\nNote that when $\\sigma=1$, $\\mathcal{G}_v$ is simply a Galilean transformation, and that NLS is invariant under this transformation; that is, if $u(t)$ solves NLS, so does $\\mathcal{G}_vu(t)$. However, when $\\sigma\\neq1$, $(\\textup{NLS}_\\sigma)$ is not exactly symmetric with respect to pseudo-Galilean transformations. This opens the way to the construction of solitons for $(\\textup{NLS}_\\sigma)$ which happen to be different from the ones constructed in the standard case $\\sigma=1$. 
Indeed, if we search for exact solutions of the type\n\\begin{equation}\nu(t,x)=e^{it(|v|^{2\\sigma}-\\omega^{2\\sigma})} e^{-iv\\cdot x}Q_\\omega(x-2t\\sigma|v|^{2(\\sigma-1)}v),\n\\end{equation}\nthen the profile $Q_\\omega$ solves the pseudo-differential equation\n\\begin{equation}\\label{Q equation}\n\\mathcal{P}_vQ_\\omega+\\omega^{2\\sigma}Q_\\omega-|Q_\\omega|^{p-1}Q_\\omega=0,\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{P}_v=e^{iv\\cdot x}(-\\Delta)^\\sigma e^{-iv\\cdot x}-|v|^{2\\sigma}-2i\\sigma|v|^{2\\sigma-2}v\\cdot\\nabla,\n\\end{equation}\ni.e., $\\mathcal{P}_v$ is a Fourier multiplier $\\widehat{\\mathcal{P}_v f}(\\xi)=p_v(\\xi)\\hat{f}(\\xi)$, with symbol\n\\begin{equation}\np_v(\\xi)=|\\xi-v|^{2\\sigma}-|v|^{2\\sigma}+2\\sigma|v|^{2\\sigma-2}v\\cdot\\xi.\n\\end{equation}\nWe plan to come back to this issue in future works. \n\n\n\n\n\n\n\\section{Strichartz Estimates}\n\nIn this section, we review Strichartz estimates for the linear fractional Schr\\\"odinger operator. \nWe say that $(q,r)$ is \\textit{admissible} if\n$$\\frac{2}{q}+\\frac{d}{r}=\\frac{d}{2},\\quad 2\\leq q,r\\leq\\infty,\\quad (q,r,d)\\neq(2,\\infty,2).$$\nWe define the Strichartz norm by\n$$\\|u\\|_{S_{q,r}^s(I)}:=\\||\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}u\\|_{L_{t\\in I}^qW_x^{s,r}},$$\nwhere $I=[0,T)$. Let $\\psi: \\mathbb{R}^d\\to [0,1]$ be a compactly supported smooth function such that $\\sum_{N\\in 2^{\\mathbb{Z}}}\\psi_N=1$, where $\\psi_N(\\xi)=\\psi(\\frac{\\xi}{N})$. For dyadic $N\\in 2^{\\mathbb{Z}}$, let $P_N$ be a Littlewood-Paley projection, that is, $\\widehat{P_N f}(\\xi)=\\psi(\\frac{\\xi}{N})\\hat{f}(\\xi)$. 
Then, we define a slightly stronger Strichartz norm by \n$$\\|u\\|_{\\tilde{S}_{q,r}^s(I)}:=\\Big(\\sum_{N\\in2^\\mathbb{Z}}\\|P_N(|\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}u)\\|_{L_{t\\in I}^qW_x^{s,r}}^2\\Big)^{1\/2}.$$\n\n\\begin{proposition}[Strichartz estimates \\cite{COX}]\\label{Strichartz}\nFor an admissible pair $(q,r)$, we have\n\\begin{align*}\n\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{S_{q,r}^s(I)}, \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{q,r}^s(I)}&\\lesssim\\|u_0\\|_{H^s},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{S_{q,r}^s(I)}&\\lesssim \\|F\\|_{L_{t\\in I}^1H_x^{s}},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{\\tilde{S}_{q,r}^s(I)}&\\lesssim \\|F\\|_{L_{t\\in I}^1H_x^{s}}.\n\\end{align*}\n\\end{proposition}\n\n\\begin{proof}[Sketch of Proof]\nBy the standard stationary phase estimate, one can show that\n$$\\|e^{it(-\\Delta)^\\sigma}P_1\\|_{L^1\\to L^\\infty}\\lesssim|t|^{-\\frac{d}{2}},$$\nand by scaling,\n\\begin{equation}\\label{dispersive estimate with loss}\n\\|e^{it(-\\Delta)^\\sigma}P_N\\|_{L^1\\to L^\\infty}\\lesssim N^{d(1-\\sigma)}|t|^{-\\frac{d}{2}}.\n\\end{equation}\nThen, it follows from the argument of Keel-Tao \\cite{KT} that for any $I\\subset\\mathbb{R}$,\n\\begin{align*}\n\\|e^{it(-\\Delta)^\\sigma}P_N(|\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}u_0)\\|_{L_{t\\in I}^q W_x^{s,r}}&\\lesssim\\|P_Nu_0\\|_{H^s},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}P_N(|\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}F)(s)ds\\Big\\|_{L_{t\\in I}^qW_x^{s,r}}&\\lesssim \\|P_NF\\|_{L_{t\\in I}^1H_x^{s}}.\n\\end{align*}\nSquaring the above inequalities and summing them over all dyadic numbers in $2^{\\mathbb{Z}}$, we prove Strichartz estimates.\n\\end{proof}\n\n\nThe loss of derivatives is due to the Knapp phenomenon (see \\cite{zihua}). 
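For the reader's convenience, the scaling step behind \eqref{dispersive estimate with loss} is the kernel identity $K_N(t,x)=N^dK_1(N^{2\sigma}t,Nx)$, obtained by substituting $\xi=N\eta$ in the oscillatory integral defining the kernel of $e^{it(-\Delta)^\sigma}P_N$; together with the unit-frequency bound it yields

```latex
\|e^{it(-\Delta)^\sigma}P_N\|_{L^1\to L^\infty}
  = \|K_N(t,\cdot)\|_{L^\infty}
  \lesssim N^d\,\big(N^{2\sigma}|t|\big)^{-\frac{d}{2}}
  = N^{d(1-\sigma)}\,|t|^{-\frac{d}{2}}.
```

For $\sigma=1$ the factor $N^{d(1-\sigma)}$ disappears, recovering the classical Schr\"odinger dispersive estimate.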
However, in the radial case, one can overcome this loss, as proved in \\cite{zihua}, thereby restricting the admissible powers of the fractional laplacian. Indeed, in \\cite{zihua}, it is proved that one has optimal Strichartz estimates if $\\sigma \\in (d\/(2d-1),1)$. In particular, the number $d\/(2d-1)$ is larger than $1\/2$, and there is a gap between the Strichartz estimates for the wave operator $\\sigma=1\/2$ and the ones occurring for higher powers. This suggests that a new phenomenon might occur for this range of powers. \n\n\n\n\n\\section{Local Well-posedness}\n\nWe establish local well-posedness of the fractional NLS by the standard contraction mapping argument based on Strichartz estimates. Due to the loss of regularity in the Strichartz estimates, our proof relies on $L_x^\\infty$ bounds (see Lemmas 3.2 and 3.3).\n\n\\subsection{Subcritical cases}\n\nFirst, we consider the case $d=1$ and $2\\leq p<5$. In this case, the equation is scaling-subcritical in $H^s$ for $s>s_g$, since $s_g>s_c$. We remark that in the proof, we control the $L_{t\\in I}^4L_x^\\infty$ norm simply by Strichartz estimates (see \\eqref{1d Strichartz 1} and \\eqref{1d Strichartz 2}).\n\n\\begin{proof}[Proof of Theorem \\ref{subcritical LWP} when $d=1$ and $2\\leq p<5$]\nWe define\n$$\\Phi_{u_0}(u):=e^{it(-\\Delta)^\\sigma}u_0+ i\\mu\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}(|u|^{p-1}u)(s)ds.$$\nLet\n$$\\|u\\|_{X^s}:=\\|u\\|_{L_{t\\in I}^\\infty H_x^s\\cap L_{t\\in I}^4 L_x^\\infty},$$\nwhere $I=[0,T)$. 
Then, applying the 1d Strichartz estimates\n\\begin{align}\n\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^4 L_x^\\infty}&\\lesssim\\|u_0\\|_{\\dot{H}^{s_g}}\\label{1d Strichartz 1},\\\\\n\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^\\infty H_x^s}&\\lesssim\\|u_0\\|_{H^s},\\nonumber\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{L_{t\\in I}^4L_x^\\infty}&\\lesssim \\|F\\|_{L_{t\\in I}^1\\dot{H}_x^{s_g}}\\label{1d Strichartz 2},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{L_{t\\in I}^\\infty H_x^s}&\\lesssim \\|F\\|_{L_{t\\in I}^1H_x^s}\\nonumber,\n\\end{align}\nwe get\n$$\\|\\Phi_{u_0}(u)\\|_{X^s}\\lesssim \\|u_0\\|_{H^s}+\\||u|^{p-1}u\\|_{L_{t\\in I}^1H_x^s}.$$\nBy the fractional chain rule\n\\begin{equation}\\label{chain rule}\n\\||\\nabla|^s F(u)\\|_{L^q}\\lesssim \\|F'(u)\\|_{L^{p_1}}\\||\\nabla|^s u\\|_{L^{p_2}},\n\\end{equation}\nwhere $s>0$ and $\\frac{1}{q}=\\frac{1}{p_1}+\\frac{1}{p_2}$, and the H\\\"older inequality, we obtain\n$$\\||u|^{p-1}u\\|_{L_{t\\in I}^1H_x^s}\\lesssim \\Big\\| \\||u|^{p-1}\\|_{L_x^\\infty}\\|u\\|_{H_x^s}\\Big\\|_{L_{t\\in I}^1} \\leq T^{\\frac{5-p}{4}}\\|u\\|_{L_{t\\in I}^4L_x^\\infty}^{p-1}\\|u\\|_{L_{t\\in I}^\\infty H_x^s}.$$\nFor the fractional chain rule \\eqref{chain rule}, we refer to \\cite{CW}, for example. We remark that one can choose $p_1=\\infty$ in \\eqref{chain rule}. Indeed, this can be proved by a small modification of the last step in the proof of Proposition 3.1 in \\cite{CW}. 
Thus, we have\n$$\\|\\Phi_{u_0}(u)\\|_{X^s}\\lesssim \\|u_0\\|_{H^s}+T^{\\frac{5-p}{4}} \\|u\\|_{X^s}^p.$$\nSimilarly, by Strichartz estimates,\n$$\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{X^s}\\lesssim \\||u|^{p-1}u-|v|^{p-1}v\\|_{L_{t\\in I}^1H_x^s}.$$\nThen, applying the fractional Leibniz rule and the fractional chain rule in \\cite{CW}, we get \n\\begin{align*}\n\\||u|^{p-1}u-|v|^{p-1}v\\|_{H_x^s}&=\\Big\\|\\int_0^1 p|v+t(u-v)|^{p-1}(u-v) dt\\Big\\|_{H_x^s}\\\\\n&\\leq p\\int_0^1\\||v+t(u-v)|^{p-1}(u-v) \\|_{H_x^s}dt\\\\\n&\\lesssim\\int_0^1\\|v+t(u-v)\\|_{L_x^\\infty}^{p-1}\\|u-v\\|_{H_x^s}\\\\\n&\\ \\ \\ +\\||v+t(u-v)|^{p-1}\\|_{H_x^s}\\|u-v\\|_{L_x^\\infty}dt\\\\\n&\\lesssim\\int_0^1\\|v+t(u-v)\\|_{L_x^\\infty}^{p-1}\\|u-v\\|_{H_x^s}\\\\\n&\\ \\ \\ +\\|v+t(u-v)\\|_{L_x^\\infty}^{p-2}\\|v+t(u-v)\\|_{H_x^s}\\|u-v\\|_{L_x^\\infty}dt\\\\\n&\\leq (\\|u\\|_{L_x^\\infty}^{p-1}+\\|v\\|_{L_x^\\infty}^{p-1})\\|u-v\\|_{H_x^s}\\\\\n&\\ \\ \\ +(\\|u\\|_{L_x^\\infty}^{p-2}+\\|v\\|_{L_x^\\infty}^{p-2})(\\|u\\|_{H_x^s}+\\|v\\|_{H_x^s})\\|u-v\\|_{L_x^\\infty}.\n\\end{align*}\nThus, it follows that\n\\begin{align*}\n&\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{X^s}\\\\\n&\\lesssim T^{\\frac{5-p}{4}}\\Big\\{(\\|u\\|_{L_{t\\in I}^4L_x^\\infty}^{p-1}+\\|v\\|_{L_{t\\in I}^4L_x^\\infty}^{p-1})\\|u-v\\|_{L_{t\\in I}^\\infty H_x^s}\\\\\n&\\quad\\quad\\quad+(\\|u\\|_{L_{t\\in I}^4L_x^\\infty}^{p-2}+\\|v\\|_{L_{t\\in I}^4L_x^\\infty}^{p-2})(\\|u\\|_{L_{t\\in I}^\\infty H_x^s}+\\|v\\|_{L_{t\\in I}^\\infty H_x^s})\\|u-v\\|_{L_{t\\in I}^4L_x^\\infty}\\Big\\}\\\\\n&\\lesssim T^{\\frac{5-p}{4}} (\\|u\\|_{X^s}^{p-1}+\\|v\\|_{X^s}^{p-1})\\|u-v\\|_{X^s}.\n\\end{align*}\nChoosing sufficiently small $T>0$, we conclude that $\\Phi_{u_0}$ is a contraction on a ball\n$$B:=\\{u: \\|u\\|_{X^s}\\leq 2\\|u_0\\|_{H^s}\\}$$\nequipped with the norm $\\|\\cdot\\|_{X^s}$.\n\\end{proof}\n\nNext, we will prove Theorem \\ref{subcritical LWP} when $d=1$ and $p\\geq 5$, or $d\\geq 2$ and $p\\geq3$. 
In this case, we do not have a good control on the $L_x^\\infty$ norm from Strichartz estimates. Instead, we make use of Sobolev embedding.\n\n\\begin{lemma}[$L_{t\\in I}^{p-1}L_x^\\infty$ bound]\nSuppose that $d=1$ and $p\\geq5$, or $d\\geq 2$ and $p\\geq3$. Let $s>s_c$. Then, we have\n\\begin{equation}\\label{subcritical Lx-infty bound}\n\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}\\lesssim T^{0+}\\|u\\|_{S_{q_0,r_0}^{s}(I)},\n\\end{equation}\nwhere $(q_0,r_0) =((p-1)^+,\\Big(\\tfrac{2d(p-1)}{d(p-1)-4}\\Big)^-)$ is an admissible pair. Here, we denote by $c^+$ a number larger than $c$ but arbitrarily close to $c$, and similarly for $c^-$.\n\\end{lemma}\n\n\\begin{proof}\nWe observe that\n$$\\frac{1}{r_0}-\\frac{s-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r_0})}{d}<0.$$\nThus, by Sobolev inequality,\n$$\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}\\leq T^{0+}\\|u\\|_{L_{t\\in I}^{q_0}L_x^\\infty}\\lesssim\\||\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r_0})}u\\|_{L_{t\\in I}^{q_0}W_x^{s,r_0}}=\\|u\\|_{S_{q_0,r_0}^s(I)}.$$\n\\end{proof}\n\nWe also employ a standard persistence of regularity argument.\n\\begin{lemma}[Persistence of regularity]\\label{persistence of regularity}\nLet $10$, $\\Phi_{u_0}$ is contractive on a ball\n$$B:=\\{u: \\|u\\|_{X^s}\\leq 2\\|u_0\\|_{H^s}\\}$$\nequipped with the norm $\\|\\cdot\\|_{X^0}$, which is complete by Lemma \\ref{persistence of regularity}.\n\\end{proof}\n\n\\begin{remark}\nThe standard persistence of regularity argument allows us to avoid derivatives in \\eqref{difference in LWP proof}. Indeed, for $u\\in B$, $\\|\\langle\\nabla\\rangle^su\\|_{L_{t\\in I}^{p-1}L_x^\\infty}$ is not necessarily bounded.\n\\end{remark}\n\n\\subsection{Scaling-critical cases}\nIn the scaling-critical case, we use the following lemma, which plays the same role as \\eqref{subcritical Lx-infty bound}. 
We note that the norms in the lemma are defined via the Littlewood-Paley projection in order to overcome the failure of the Sobolev embedding $W^{s,p}\\hookrightarrow L^q$, $\\frac{1}{q}=\\frac{1}{p}-\\frac{s}{d}$, when $q=\\infty$. Lemma 3.3 generalizes \\cite[Lemma 3.1]{CKSTT}.\n\n\\begin{lemma}[Scaling-critical $L_{t\\in I}^{p-1}L_x^\\infty$ bound]\n\\begin{equation}\\label{critical Lx-infty bound}\n\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\lesssim \\left\\{\\begin{aligned}\n&\\|u\\|_{\\tilde{S}_{4,\\infty}^{s_c}(I)}^4\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-5}&&\\text{ when }d=1\\textup{ and }p>5,\\\\\n&\\|u\\|_{\\tilde{S}_{2+,\\infty-}^{s_c}(I)}^{2}\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}&&\\text{ when }d=2\\textup{ and }p>3,\\\\\n&\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^{2}\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}&&\\text{ when }d\\geq 3\\textup{ and }p>3.\n\\end{aligned}\n\\right.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nWe will prove the lemma only when $d\\geq3$. By interpolation $\\|f\\|_{L^{p_\\theta}}\\leq \\|f\\|_{L^{p_0}}^\\theta \\|f\\|_{L^{p_1}}^{1-\\theta}$, $\\frac{1}{p_\\theta}=\\frac{\\theta}{p_0}+\\frac{1-\\theta}{p_1}$, $0<\\theta<1$, it suffices to show the lemma for rational $(p-1)=\\frac{m}{n}>2$ with $\\gcd(m,n)=1$. 
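To make the reduction explicit, we sketch the exponent bookkeeping, assuming (as in the argument below) that bounds of the form $\\|u\\|_{L_{t\\in I}^{q_i}L_x^\\infty}^{q_i}\\lesssim X^2Y^{q_i-2}$ hold for rational exponents $q_0<p-1<q_1$ close to $p-1$, with the fixed norms $X=\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}$ and $Y=\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}$. Choosing $\\theta$ with $\\frac{1}{p-1}=\\frac{\\theta}{q_0}+\\frac{1-\\theta}{q_1}$, the interpolation inequality applied to $t\\mapsto\\|u(t)\\|_{L_x^\\infty}$ gives\n$$\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\leq\\|u\\|_{L_{t\\in I}^{q_0}L_x^\\infty}^{\\theta(p-1)}\\|u\\|_{L_{t\\in I}^{q_1}L_x^\\infty}^{(1-\\theta)(p-1)}\\lesssim X^{2(p-1)(\\frac{\\theta}{q_0}+\\frac{1-\\theta}{q_1})}Y^{(p-1)-2}=X^2Y^{p-3},$$\nsince $2(p-1)(\\frac{\\theta}{q_0}+\\frac{1-\\theta}{q_1})=2$ and the exponents of $Y$ add up to $(p-1)-2=p-3$. 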
First, we estimate\n$$A(t)=\\Big[\\sum_N\\|P_{N}u(t)\\|_{L_x^\\infty}\\Big]^m\\sim\\sum_{N_1\\geq\\cdots\\geq N_m}\\prod_{i=1}^m\\|P_{N_i}u(t)\\|_{L_x^\\infty}.$$\nObserve from Bernstein's inequality that \n\\begin{align}\n\\|P_Nu(t)\\|_{L_x^\\infty}&\\lesssim N^{-\\frac{\\sigma(p-3)}{p-1}}d_N\\label{Bernstein1},\\\\\n\\|P_Nu(t)\\|_{L_x^\\infty}&\\lesssim N^{\\frac{2\\sigma}{p-1}}d_N'\\label{Bernstein2},\n\\end{align}\nwhere\n$$d_N=\\|P_Nu(t)\\|_{\\dot{W}_x^{s_c-(1-\\sigma),\\frac{2d}{d-2}}},\\ d_N'=\\|P_Nu(t)\\|_{\\dot{H}_x^{s_c}}.$$\nAs a consequence, we have\n\\begin{equation}\\label{Bernstein3}\n\\|P_Nu(t)\\|_{L_x^\\infty}\\lesssim\\Big(N^{-\\frac{\\sigma(p-3)}{p-1}}d_N\\Big)^\\theta\\Big(N^{\\frac{2\\sigma}{p-1}}d_N'\\Big)^{1-\\theta}=N^{\\frac{\\sigma(p-3)}{(p-1)(p-2)}}(d_N)^\\theta (d_N')^{1-\\theta},\n\\end{equation}\nwhere $\\theta=\\frac{1}{p-2}$. Hence, applying \\eqref{Bernstein1} for $i=1,\\cdots,n$ and \\eqref{Bernstein3} for $i=n+1,\\cdots,m$, we bound $A(t)$ by\n$$\\lesssim\\sum_{N_1\\geq \\cdots\\geq N_m} \\Big(\\prod_{i=1}^nN_i^{-\\frac{\\sigma(p-3)}{p-1}}d_{N_i}\\Big) \\Big(\\prod_{i=n+1}^m N_i^{\\frac{\\sigma(p-3)}{(p-1)(p-2)}}(d_{N_i})^\\theta (d_{N_i}')^{1-\\theta}\\Big).$$\nFor an arbitrarily small $\\epsilon>0$, we let\n$$\\tilde{d}_N=\\sum_{N'\\in 2^{\\mathbb{Z}}} \\min\\Big(\\frac{N}{N'},\\frac{N'}{N}\\Big)^\\epsilon d_{N'},\\quad \\tilde{d}_N'=\\sum_{N'\\in 2^{\\mathbb{Z}}} \\min\\Big(\\frac{N}{N'},\\frac{N'}{N}\\Big)^\\epsilon d_{N'}'.$$\nThen, since $d_N\\leq \\tilde{d}_N$ and $\\tilde{d}_{N_i}\\leq (\\frac{N_1}{N_{i}})^\\epsilon \\tilde{d}_{N_1}$ and similarly for primes, $A(t)$ is bounded by\n$$\\lesssim\\sum_{N_1\\geq \\cdots\\geq N_m} \\Big(\\prod_{i=1}^n N_i^{-\\frac{\\sigma(p-3)}{p-1}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon \\tilde{d}_{N_1}\\Big) \\Big(\\prod_{i=n+1}^m N_i^{\\frac{\\sigma(p-3)}{(p-1)(p-2)}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon (\\tilde{d}_{N_1})^\\theta (\\tilde{d}_{N_1}')^{1-\\theta}\\Big).\n$$\nSumming in 
$N_m, N_{m-1}, ..., N_{n+1}$ and using $m-n=(p-2)n$,\n\\begin{align*}\nA(t)&\\lesssim\\sum_{N_1\\geq \\cdots\\geq N_n} \\Big(\\prod_{i=1}^n N_i^{-\\frac{\\sigma(p-3)}{p-1}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon \\tilde{d}_{N_1}\\Big)\\\\\n&\\quad\\quad\\quad\\quad\\times N_n^{\\frac{\\sigma(p-3)(m-n)}{(p-1)(p-2)}}\\Big(\\frac{N_1}{N_{n}}\\Big)^{(m-n)\\epsilon} (\\tilde{d}_{N_1})^{(m-n)\\theta} (\\tilde{d}_{N_1}')^{(m-n)(1-\\theta)}\\\\\n&=\\sum_{N_1\\geq \\cdots\\geq N_n} \\Big(\\prod_{i=1}^n N_i^{-\\frac{\\sigma(p-3)}{p-1}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon \\tilde{d}_{N_1}\\Big)\\\\\n&\\quad\\quad\\quad\\quad\\times N_n^{\\frac{\\sigma(p-3)n}{p-1}}\\Big(\\frac{N_1}{N_{n}}\\Big)^{(p-2)n\\epsilon} (\\tilde{d}_{N_1})^{(m-n)\\theta} (\\tilde{d}_{N_1}')^{(m-n)(1-\\theta)},\n\\end{align*}\nand then summing in $N_n, N_{n-1}, ..., N_1$, we obtain that\n$$A(t)\\lesssim\\sum_{N_1} (\\tilde{d}_{N_1})^{n+(m-n)\\theta} (\\tilde{d}_{N_1}')^{(m-n)(1-\\theta)}=\\sum_{N_1} (\\tilde{d}_{N_1})^{2n} (\\tilde{d}_{N_1}')^{m-2n},$$\nwhich is, by H\\\"older's and Young's inequalities, bounded by\n\\begin{align*}\n&\\lesssim\\|(\\tilde{d}_{N})^{2n} \\|_{\\ell_N^2}\\|(\\tilde{d}_{N}')^{m-2n}\\|_{\\ell_N^{2}}=\\|\\tilde{d}_{N}\\|_{\\ell_N^{4n}}^{2n}\\|\\tilde{d}_{N}'\\|_{\\ell_N^{2(m-2n)}}^{m-2n}\\\\\n&\\leq \\|\\tilde{d}_{N}\\|_{\\ell_N^2}^{2n}\\|\\tilde{d}_{N}'\\|_{\\ell_N^2}^{m-2n}\\lesssim \\|d_{N}\\|_{\\ell_N^2}^{2n}\\|d_{N}'\\|_{\\ell_N^2}^{m-2n}=\\|d_{N}\\|_{\\ell_N^2}^{2n}\\|d_{N}'\\|_{\\ell_N^2}^{(p-3)n}.\n\\end{align*}\nFinally, since $\\|u(t)\\|_{L_x^\\infty}^{p-1}\\leq A(t)^{\\frac{1}{n}}$, the estimate for $A(t)$ yields\n\\begin{align*}\n\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}&\\leq \\int_I A(t)^{\\frac{1}{n}}dt\\\\\n&\\lesssim\\int_I \\|d_N\\|_{\\ell_N^2}^2\\|d_N'\\|_{\\ell_N^2}^{p-3} dt\\leq \\|d_N\\|_{L_{t\\in I}^2\\ell_N^2}^2\\|d_N'\\|_{L_{t\\in I}^\\infty\\ell_N^2}^{p-3}\\\\\n&\\leq 
\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^{2}\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}.\n\\end{align*}\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{critical LWP}]\nFor simplicity, we assume that $d\\geq3$. Indeed, with minor modifications, one can prove the theorem when $d=1,2$. We define $\\Phi_{u_0}(u)$ as in the proof of Theorem \\ref{subcritical LWP}. Then, by Strichartz estimates, the fractional chain rule and \\eqref{critical Lx-infty bound}, we have\n\\begin{align*}\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}&\\leq \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}+c_0\\||u|^{p-1}u\\|_{L_{t\\in I}^1H^{s_c}}\\\\\n&\\leq \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}+c_1\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\|u\\|_{L_{t\\in I}^\\infty H^{s_c}}\\\\\n&\\leq \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}+c\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-2}.\n\\end{align*}\nSimilarly, one can show that\n\\begin{align*}\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}&\\leq c\\|u_0\\|_{H^{s_c}}+c\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-2}\n\\end{align*}\nand\n\\begin{align*}\n&\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^0(I)}+\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{\\tilde{S}_{\\infty, 2}^0(I)}\\\\\n&\\leq c_0(\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}+\\|v\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1})\\|u-v\\|_{L_{t\\in I}^\\infty L_x^2}\\\\\n&\\leq c(\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}+\\|v\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|v\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3})\\|u-v\\|_{L_{t\\in I}^\\infty L_x^2}.\n\\end{align*}\nNow we let $\\delta=\\delta(c,\\|u_0\\|_{H^{s_c}})>0$ be a sufficiently small number to be chosen later, 
and then we pick $T=T(u_0,\\delta)>0$ such that\n$$\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}\\leq\\delta.$$\nDefine \n$$B=\\Big\\{u: \\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}\\leq 2\\delta\\textup{ and }\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}\\leq 2c\\|u_0\\|_{H^{s_c}}\\Big\\}$$\nequipped with the norm\n$$\\|u\\|_X:=\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{0}(I)}+\\|u\\|_{\\tilde{S}_{\\infty,2}^{0}(I)}.$$\nThen, for $u\\in B$, we have\n\\begin{align*}\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}&\\leq \\delta+c (2\\delta)^2(2c\\|u_0\\|_{{H}^{s_c}})^{p-2}\\leq 2\\delta,\\\\\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}&\\leq c\\|u_0\\|_{{H}^{s_c}}+c (2\\delta)^2(2c\\|u_0\\|_{{H}^{s_c}})^{p-2}\\leq 2c\\|u_0\\|_{{H}^{s_c}}.\n\\end{align*}\nChoosing sufficiently small $\\delta>0$, we prove that $\\Phi_{u_0}$ maps $B$ into itself. Similarly, one can show \n$$\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_X\\leq\\frac{1}{2}\\|u-v\\|_X.$$\nTherefore, it follows that $\\Phi_{u_0}$ is a contraction mapping on $B$.\n\\end{proof}\n\n\\begin{remark}\n$(i)$ In the proofs, the $L_x^\\infty$ norm bounds are crucial for the following reason. In Proposition \\ref{Strichartz}, there is a loss of regularity except for the trivial ones,\n$$\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^\\infty L_x^2}=\\|u_0\\|_{L^2}$$\nand\n$$\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{L_{t\\in I}^\\infty L_x^2}\\leq \\|F\\|_{L_{t\\in I}^1L_x^2}.$$\nHence, when we estimate the $L_{t\\in I}^\\infty H_x^s$ norm of the integral term in $\\Phi_{u_0}(u)$, we are forced to use the trivial one\n\\begin{equation}\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}(|u|^{p-1}u)(s)ds\\Big\\|_{L_{t\\in I}^\\infty H_x^s}\\leq \\||u|^{p-1}u\\|_{L_{t\\in I}^1H_x^s}.\n\\end{equation}\nIndeed, otherwise, we have a higher regularity norm on the right hand side. Then, we cannot close the contraction mapping argument. 
Moreover, if $u_0\\in H^s$, there is no good bound for $\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^qW_x^{s,r}}$ except the trivial one $(q,r)=(\\infty,2)$. Thus, we are forced to bound the right hand side of (3.10) by\n$$\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\|u\\|_{L_{t\\in I}^\\infty H_x^s}.$$\nTherefore, we should have good control of $\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}$.\\\\\n$(ii)$ When $p<3$, the $L_{t\\in I}^{p-1}L_x^\\infty$ norm is scaling-supercritical. Thus, based on our method, the assumptions on $p$ in Theorems \\ref{subcritical LWP} and \\ref{critical LWP} are optimal except for $p=3$ in the critical case.\n\\end{remark}\n\n\n\n\n\n\n\n\n\\section{Small Data Scattering}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{scattering}]\nFor simplicity, we consider the case $d\\geq 3$ only. It follows from the estimates in the proof of Theorem \\ref{critical LWP} that if $\\|u_0\\|_{H^s}$ is small enough, then\n$$\\|u(t)\\|_{L_{t\\in\\mathbb{R}}^{p-1}L_x^\\infty}+\\|u(t)\\|_{L_{t\\in\\mathbb{R}}^\\infty H_x^{s_c}}\\lesssim\\|u(t)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(\\mathbb{R})}+\\|u(t)\\|_{\\tilde{S}_{\\infty,2}^{s_c}(\\mathbb{R})}\\lesssim\\|u_0\\|_{H^{s_c}}<\\infty.$$\nBy Strichartz estimates, the fractional chain rule and \\eqref{critical Lx-infty bound}, we prove that\n\\begin{align*}\n&\\|e^{-iT_1(-\\Delta)^\\sigma}u(T_1)-e^{-iT_2(-\\Delta)^\\sigma}u(T_2)\\|_{H^{s_c}}\\\\\n&=\\Big\\|\\int_{T_1}^{T_2} e^{-is(-\\Delta)^\\sigma}(|u|^{p-1}u)(s)ds\\Big\\|_{H^{s_c}}\\\\\n&\\lesssim \\|u(t)\\|_{L_{t\\in[T_1,T_2)}^{p-1}L_x^\\infty}^{p-1}\\|u(t)\\|_{L_{t\\in[T_1,T_2)}^\\infty H_x^{s_c}}\\to 0\n\\end{align*}\nas $T_1,T_2\\to\\pm\\infty$. Thus, the limits\n$$u_\\pm=\\lim_{t\\to\\pm\\infty} e^{-it(-\\Delta)^\\sigma}u(t)$$\nexist in $H^{s_c}$. 
Repeating the above estimates, we show that\n$$\\|u(t)-e^{it(-\\Delta)^\\sigma}u_\\pm\\|_{H^{s_c}}=\\|e^{-it(-\\Delta)^\\sigma}u(t)-u_\\pm\\|_{H^{s_c}}\\to 0$$ \nas $t\\to\\pm\\infty$.\n\\end{proof}\n\n\\section{Ill-posedness}\n\nWe will prove Theorem \\ref{ill-posedness} following the strategy in \\cite{CCT}. Throughout this section, we assume that $d=1,2$ or $3$ and $\\frac{d}{4}<\\sigma<1$. If $p$ is not an odd integer, we further assume that $p\\geq k+1$, where $k$ is the smallest integer greater than $\\frac{d}{2}$. \n\nFirst, we construct an almost non-dispersive solution by small dispersion analysis.\n\n\\begin{lemma}[Small dispersion analysis]\\label{small dispersion analysis}\nGiven a Schwartz function $\\phi_0$, let $\\phi^{(\\nu)}(t,x)$ be the solution to the fractional NLS\n\\begin{equation}\\label{small dispersion}\ni\\partial_t u+\\nu^{2\\sigma}(-\\Delta)^\\sigma u+\\mu|u|^{p-1}u=0,\\ u(0)=\\phi_0,\n\\end{equation}\nand $\\phi^{(0)}(t,x)$ be the solution to the ODE with no dispersion\n$$i\\partial_t u+\\mu|u|^{p-1}u=0,\\ u(0)=\\phi_0,$$\nthat is,\n\\begin{equation}\\label{no dispersion}\n\\phi^{(0)}(t,x)=\\phi_0(x)e^{it\\mu |\\phi_0(x)|^{p-1}}.\n\\end{equation}\nThen there exist $C, c>0$ such that if $0<\\nu\\leq c$ is sufficiently small, then\n\\begin{equation}\\label{small dispersion estimate}\n\\|\\phi^{(\\nu)}(t)-\\phi^{(0)}(t)\\|_{H^k}\\leq C\\nu^{2\\sigma}\n\\end{equation}\nfor all $|t|\\leq c |\\log \\nu|^c$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof closely follows the proof of Lemma 2.1 in \\cite{CCT}.\n\\end{proof}\n\nObviously, $\\phi^{(\\nu)}(t,\\nu x)$ is a solution to $(\\textup{NLS}_\\sigma)$. Moreover, $\\phi^{(\\nu)}(t,\\nu x)$ is bounded and almost flat in the following sense.\n\n\\begin{corollary}\\label{small dispersion corollary}\nLet $\\phi^{(\\nu)}$, $\\nu$ and $c$ be as in Lemma \\ref{small dispersion analysis}. Let $s\\geq 0$. 
Then,\n\\begin{equation}\\label{L^infty control}\n\\|\\phi^{(\\nu)}(t,\\nu x)\\|_{L_x^\\infty}\\sim 1\n\\end{equation}\nand\n\\begin{equation}\\label{H^s control}\n\\|\\phi^{(\\nu)}(t,\\nu x)\\|_{\\dot{H}_x^s}\\sim \\nu^{s-\\frac{d}{2}}(c |\\log \\nu|^c)^s\n\\end{equation}\nfor all $|t|\\leq c |\\log \\nu|^c$.\n\\end{corollary}\n\n\\begin{proof}\nSince $k>\\frac{d}{2}$, by the Sobolev inequality, we have\n\\begin{align*}\n\\|\\phi^{(\\nu)}(t,\\nu x)-\\phi^{(0)}(t,\\nu x)\\|_{L_x^\\infty}&=\\|\\phi^{(\\nu)}(t, x)-\\phi^{(0)}(t,x)\\|_{L_x^\\infty}\\\\\n&\\lesssim\\|\\phi^{(\\nu)}(t)-\\phi^{(0)}(t)\\|_{H^k}\\lesssim\\nu^{2\\sigma}.\n\\end{align*}\nThen, \\eqref{L^infty control} follows from the explicit formula \\eqref{no dispersion} for $\\phi^{(0)}(t,x)$. \nIt follows from \\eqref{small dispersion estimate} and \\eqref{no dispersion} that\n$$\\|\\phi^{(\\nu)}(t, \\nu x)\\|_{\\dot{H}_x^s}\\leq \\nu^{s-\\frac{d}{2}}(\\|\\phi^{(0)}(t)\\|_{\\dot{H}^s}+\\|\\phi^{(\\nu)}(t)-\\phi^{(0)}(t)\\|_{\\dot{H}^s})\\sim \\nu^{s-\\frac{d}{2}}(c |\\log \\nu|^c)^s.$$\n\\end{proof}\n\nFor $v\\in\\mathbb{R}^d$, we define the pseudo-Galilean transformation by\n$$\\mathcal{G}_vu(t,x)=e^{-iv\\cdot x}e^{it|v|^{2\\sigma}}u(t,x-2t\\sigma|v|^{2(\\sigma-1)}v).$$\nNote that when $\\sigma=1$, $\\mathcal{G}_v$ is simply a Galilean transformation, and that NLS is invariant under this transformation, that is, if $u(t)$ solves NLS, so does $\\mathcal{G}_vu(t)$. However, when $\\sigma\\neq1$, $(\\textup{NLS}_\\sigma)$ is not exactly symmetric with respect to pseudo-Galilean transformations. 
Indeed, if $u(t)$ solves $(\\textup{NLS}_\\sigma)$, then $\\tilde{u}(t)=\\mathcal{G}_vu(t)$ obeys $(\\textup{NLS}_\\sigma)$ with an error term\n\\begin{equation}\\label{fNLS with error}\ni\\partial_t\\tilde{u}+(-\\Delta)^\\sigma\\tilde{u}+\\mu|\\tilde{u}|^{p-1}\\tilde{u}=e^{it|v|^{2\\sigma}}e^{-iv\\cdot x}(\\mathcal{E}u)(t,x-2\\sigma t|v|^{2(\\sigma-1)}v),\n\\end{equation}\nwhere\n$$\\widehat{\\mathcal{E}u}(\\xi)=E(\\xi)\\hat{u}(\\xi)$$\nwith \n$$E(\\xi)=|\\xi-v|^{2\\sigma}-|\\xi|^{2\\sigma}-|v|^{2\\sigma}+2\\sigma|v|^{2(\\sigma-1)}v\\cdot\\xi.$$\nHowever, we note that\n\\begin{equation}\\label{E bound}\n|E(\\xi)|\\lesssim |\\xi|^{2\\sigma}.\n\\end{equation}\nIndeed, if $|\\xi|\\leq\\frac{|v|}{100}$, then\n$$|E(\\xi)|=\\Big||v|^{2\\sigma}\\Big(|\\tfrac{v}{|v|}-\\tfrac{\\xi}{|v|}|^{2\\sigma}-1+2\\sigma \\tfrac{v}{|v|}\\cdot\\tfrac{\\xi}{|v|}\\Big)-|\\xi|^{2\\sigma}\\Big|\\lesssim|v|^{2\\sigma}\\tfrac{|\\xi|^2}{|v|^2}+|\\xi|^{2\\sigma}\\lesssim|\\xi|^{2\\sigma}.$$\nOtherwise, \n$$|E(\\xi)|\\lesssim|\\xi|^{2\\sigma}+|v|^{2\\sigma}+|\\xi|^{2\\sigma}+|v|^{2\\sigma}+2\\sigma|v|^{2\\sigma-1}|\\xi|\\lesssim|\\xi|^{2\\sigma}.$$\nTherefore, one would expect an \\textit{almost} symmetry for an almost flat solution $u(t)$, such as $\\phi^{(\\nu)}(t,\\nu x)$ in Lemma \\ref{small dispersion analysis}. More precisely, we have the following lemma.\n\n\\begin{lemma}[Pseudo-Galilean transformation]\\label{Pseudo-Galilean transformation}\nLet $\\phi^{(\\nu)}$, $\\nu$ and $c$ be as in Lemma \\ref{small dispersion analysis}. 
For $v\\in\\mathbb{R}^d$, we define\n$$\\tilde{u}(t,x)=(\\mathcal{G}_v\\phi^{(\\nu)}(\\cdot, \\nu\\cdot))(t,x)=e^{-iv\\cdot x}e^{it|v|^{2\\sigma}}\\phi^{(\\nu)}\\big(t,\\nu(x-2t\\sigma|v|^{2(\\sigma-1)}v)\\big),$$\nand let $u(t,x)$ be the solution to $(\\textup{NLS}_\\sigma)$ with the same initial data\n\\begin{equation}\\label{initial data}\ne^{-iv\\cdot x}\\phi^{(\\nu)}(0,\\nu x)=e^{-iv\\cdot x}\\phi_0(\\nu x).\n\\end{equation}\nThen, there exists $\\delta>0$ such that\n\\begin{equation}\\label{estim}\n\\|e^{iv\\cdot x}(u(t)-\\tilde{u}(t))\\|_{H_x^k}\\lesssim \\nu^{\\delta}\n\\end{equation}\nfor all $|t|\\leq c |\\log \\nu|^c$.\n\\end{lemma}\n\n\\begin{remark}\nWhen $p=3$, the authors of \\cite{CHKL} could use the counterexample in \\cite{CCTamer}, which is constructed by the pseudo-conformal symmetry and the Galilean transformation. Crucially, this solution is also very small in high Sobolev norms, and this smallness allows \\cite{CHKL} to show that the error in the pseudo-Galilean transformation is small as well. However, when $p>3$, the counterexample in \\cite{CCTamer} does not work. Later, Christ, Colliander and Tao \\cite{CCT} constructed \na different counterexample which works for more general $p$. \nUnfortunately, this counterexample is not small in high Sobolev norms; on the contrary, it is very large. In particular, for our purposes, it is hard to control the error from the \npseudo-Galilean transformation. However, our new \ncounterexample still has a small high Sobolev norm after translating it \nto its frequency center; this translation is the factor $e^{iv\\cdot x}$ in \\eqref{estim}. Using this smallness, we can prove that the equation is almost invariant under the pseudo-Galilean \ntransformation. We also remark that the condition $\\sigma>\\frac{d}{4}$ is to guarantee smallness of the error (see \\eqref{II(s) estimate}).\n\\end{remark}\n\n\\begin{proof}[Proof of Lemma \\ref{Pseudo-Galilean transformation}]\nLet $R(t)=(u-\\tilde{u})(t)$. 
Then, $R(t)$ satisfies\n$$i\\partial_tR+(-\\Delta)^\\sigma R=\\mu\\big(|\\tilde{u}|^{p-1}\\tilde{u}-|u|^{p-1}u\\big)-e^{it|v|^{2\\sigma}}(\\mathcal{E}\\phi^{(\\nu)})\\big(t,\\nu(x-2\\sigma t|v|^{2(\\sigma-1)}v)\\big),$$\nor equivalently\n\\begin{align*}\nR(t)&=i\\int_0^t e^{i(t-s)(-\\Delta)^{\\sigma}}\\Big\\{\\mu\\big(|u|^{p-1}u-|\\tilde{u}|^{p-1}\\tilde{u}\\big)(s)\\\\\n&\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad+e^{is|v|^{2\\sigma}}(\\mathcal{E}\\phi^{(\\nu)})\\big(s,\\nu(x-2\\sigma s|v|^{2(\\sigma-1)}v)\\big)\\Big\\}ds.\n\\end{align*}\nHence, by a trivial estimate, we get\n\\begin{align*}\n\\|e^{iv\\cdot x}R(t)\\|_{H^k}&\\leq\\int_0^t \\big\\|e^{iv\\cdot x}\\big(|u|^{p-1}u-|\\tilde{u}|^{p-1}\\tilde{u}\\big)(s)\\big\\|_{H^k}+\\|\\mathcal{E}\\phi^{(\\nu)}(s,\\nu\\cdot)\\|_{H^k}ds\\\\\n&=\\int_0^t I(s)+II(s)ds.\n\\end{align*}\nFirst, by \\eqref{E bound} and \\eqref{H^s control}, we show that\n\\begin{equation}\\label{II(s) estimate}\n\\int_0^tII(s)ds\\lesssim \\int_0^t\\sum_{j=0}^k\\|\\phi^{(\\nu)}(s,\\nu\\cdot)\\|_{\\dot{H}^{j+2\\sigma}}ds\\sim (c |\\log \\nu|^c)^{1+2\\sigma-\\frac{d}{2}}\\nu^{2\\sigma-\\frac{d}{2}}.\n\\end{equation}\nFor $I(s)$, expanding $u=\\tilde{u}+R$ and then applying H\\\"older's and Sobolev inequalities, we bound $I(s)$ by\n\\begin{equation}\\label{I(s) bound}\n\\lesssim\\sum_{j=1}^p\\|e^{iv\\cdot x}R\\|_{H^k}^j.\n\\end{equation}\nFor example, when $p=3$,\n\\begin{align*}\nI(s)&\\leq2\\| |e^{iv\\cdot x}\\tilde{u}|^2e^{iv\\cdot x}R\\|_{H^k}+\\|(e^{iv\\cdot x}\\tilde{u})^2\\overline{e^{iv\\cdot x}R}\\|_{H^k}+2\\|e^{iv\\cdot x}\\tilde{u}|e^{iv\\cdot x}R|^2\\|_{H^k}\\\\\n&\\quad+\\|\\overline{e^{iv\\cdot x}\\tilde{u}}(e^{iv\\cdot x}R)^2\\|_{H^k}+\\||e^{iv\\cdot x}R|^2e^{iv\\cdot x}R\\|_{H^k}\\\\\n&=:I_1(s)+I_2(s)+I_3(s)+I_4(s)+I_5(s).\n\\end{align*}\nConsider\n$$I_1(s)=\\sum_{|\\alpha|\\leq k}\\|\\nabla_{x_1}^{\\alpha_1}\\cdots\\nabla_{x_d}^{\\alpha_d}(|e^{iv\\cdot x}\\tilde{u}|^2e^{iv\\cdot x}R)(s)\\|_{L^2}=:\\sum_{|\\alpha|\\leq 
k}I_{1,\\alpha}(s),$$\nwhere $\\alpha=(\\alpha_1,\\alpha_2,\\cdots,\\alpha_d)$ is a multi-index with $|\\alpha|=\\sum_{i=1}^d\\alpha_i$. Observe that whenever a derivative hits\n$$e^{iv\\cdot x}\\tilde{u}(s)=e^{is|v|^{2\\sigma}}\\phi^{(\\nu)}\\big(s,\\nu(x-2s\\sigma|v|^{2(\\sigma-1)}v)\\big),$$\nwe get a small factor $\\nu$. Hence, after distributing derivatives by the Leibniz rule, the worst term we have in $I_{1,\\alpha}(s)$ is\n$$\\| |e^{iv\\cdot x}\\tilde{u}(s)|^2\\nabla^\\alpha e^{iv\\cdot x}R(s)\\|_{L^2},$$\nwhich is, by \\eqref{L^infty control}, bounded by \n$$\\| e^{iv\\cdot x}\\tilde{u}(s)\\|_{L^\\infty}^2\\|\\nabla^\\alpha e^{iv\\cdot x}R(s)\\|_{L^2}\\sim \\|\\nabla^\\alpha e^{iv\\cdot x}R(s)\\|_{L^2}.$$\nWe estimate the other terms likewise.\n\nCollecting all the estimates, we obtain\n$$\\|e^{iv\\cdot x}R(t)\\|_{H^k}\\lesssim (c |\\log \\nu|^c)^{1+2\\sigma-\\frac{d}{2}}\\nu^{2\\sigma-\\frac{d}{2}}+\\int_0^t \\sum_{j=1}^p\\|e^{iv\\cdot x}R(s)\\|_{H^k}^jds$$\nfor $|t|\\leq c|\\log \\nu|^c$. Then, by the standard nonlinear iteration argument, we prove the lemma.\n\\end{proof}\n\nSince we have solutions that are almost symmetric with respect to the pseudo-Galilean transformations, we can make use of the following decoherence lemma to construct counterexamples for local well-posedness.\n\n\\begin{lemma}[Decoherence]\\label{Decoherence}\nLet $s<0$. Fix a nonzero Schwartz function $w$. For $a,a'\\in[\\frac{1}{2},1]$, $0<\\nu\\leq\\lambda\\ll 1$ and $v\\in\\mathbb{R}^d$ with $|v|\\geq 1$, we define \n$$\\tilde{u}^{(a,\\nu,\\lambda,v)}(t,x):=\\mathcal{G}_v\\Big(\\lambda^{-\\frac{2\\sigma}{p-1}}\\phi^{(a,\\nu)}(\\lambda^{-2\\sigma}\\cdot,\\lambda^{-1}\\nu\\cdot)\\Big)(t,x),$$\nwhere $\\phi^{(a,\\nu)}$ is the solution to \\eqref{small dispersion} with initial data $aw$. 
Then, we have\n\\begin{align*}\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)\\|_{H^s}, \\|\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C|v|^s\\lambda^{-\\frac{2\\sigma}{p-1}}(\\tfrac{\\lambda}{\\nu})^{d\/2},\\\\\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C|v|^s\\lambda^{-\\frac{2\\sigma}{p-1}}(\\tfrac{\\lambda}{\\nu})^{d\/2}|a-a'|\n\\end{align*}\nand \n\\begin{align*}\n&\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(t)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(t)\\|_{H^s}\\\\\n&\\geq c|v|^s\\lambda^{-\\frac{2\\sigma}{p-1}}(\\tfrac{\\lambda}{\\nu})^{d\/2}\\Big\\{\\|\\phi^{(a,\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})-\\phi^{(a',\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})\\|_{L^2}-C|\\log \\nu|^C(\\tfrac{\\lambda}{\\nu})^{-k}|v|^{-s-k}\\Big\\}\n\\end{align*}\nfor all $|t|\\leq c|\\log \\nu|^c\\lambda^{2\\sigma}$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof closely follows the proof of Lemma 3.1 in \\cite{CCT}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{ill-posedness}]\nThe proof is very similar to that of Theorem 1 in \\cite{CCT} except that in the last step, we need to use Lemma \\ref{Pseudo-Galilean transformation} due to the lack of exact symmetry. We give the proof for the readers' convenience.\n\nLet $\\epsilon>0$ be a given but arbitrarily small number. Let $\\lambda=\\nu^{\\alpha}$, where $\\alpha>0$ is a small number to be chosen later. Then, we pick $v\\in \\mathbb{R}^d$ such that\n$$\\lambda^{-\\frac{2\\sigma}{p-1}}|v|^s(\\lambda\/\\nu)^{d\/2}=\\epsilon\\Leftrightarrow |v|=\\nu^{\\frac{1}{s}(\\frac{d(1-\\alpha)}{2}+\\frac{2\\alpha\\sigma}{p-1})}\\epsilon^{1\/s}.$$\nNote that since $s<0$, $\\frac{1}{s}(\\frac{d(1-\\alpha)}{2}+\\frac{2\\alpha\\sigma}{p-1})=\\frac{1}{s}(\\frac{d}{2}-\\alpha s_c)<0$ for sufficiently small $\\alpha$, and thus $|v|\\geq 1$. 
Hence, it follows from Lemma \\ref{Decoherence} that\n \\begin{align}\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)\\|_{H^s}, \\|\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C\\epsilon,\\\\\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C\\epsilon|a-a'|,\n\\end{align}\nand \n\\begin{align*}\n&\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(t)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(t)\\|_{H^s}\\\\\n&\\geq c\\epsilon\\Big\\{\\|\\phi^{(a,\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})-\\phi^{(a',\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})\\|_{L^2}-C|\\log \\nu|^C(\\tfrac{\\lambda}{\\nu})^{-k}|v|^{-s-k}\\Big\\}\n\\end{align*}\nfor all $|t|\\leq c|\\log \\nu|^c\\lambda^{2\\sigma}$. Now we observe from the explicit formula \\eqref{no dispersion} for $\\phi^{(a,0)}$ and \\eqref{small dispersion estimate} that there exists $T>0$ such that $\\|\\phi^{(a,\\nu)}(T)-\\phi^{(a',\\nu)}(T)\\|_{L^2}\\geq c$. Moreover, if $\\alpha>0$ is sufficiently small, $C|\\log \\nu|^C(\\tfrac{\\lambda}{\\nu})^{-k}|v|^{-s-k}\\to 0$ as $\\nu \\to0$. Therefore, for $\\nu$ small enough, we have\n\\begin{equation}\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(\\lambda^{2\\sigma}T)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(\\lambda^{2\\sigma}T)\\|_{H^s}\\geq c\\epsilon.\n\\end{equation}\nNext, using Lemma \\ref{Pseudo-Galilean transformation}, we replace $\\tilde{u}^{(a,\\nu,\\lambda, v)}$ and $\\tilde{u}^{(a',\\nu,\\lambda, v)}$ in $(6.11)$, $(6.12)$ and $(6.13)$ by $u^{(a,\\nu,\\lambda, v)}$ and $u^{(a',\\nu,\\lambda, v)}$ at the cost of an $O(\\nu^{\\delta})$ error. Then, making $|a-a'|$ arbitrarily small and then sending $\\nu\\to 0$ (so, $\\lambda^{2\\sigma}T\\to 0$), we complete the proof.\n\\end{proof}\n\n\\section*{Acknowledgements}\nY.H. would like to thank IH\\'ES for their hospitality and support while he visited in the summer of 2014. Y.S. would like to thank the Department of Mathematics at the University of Texas at Austin, where part of the work was initiated, for its hospitality. Y.S. 
acknowledges the support of ANR grants \"HAB\" and \"NONLOCAL\". \n\n\\bibliographystyle{alpha} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe study of learning systems with concepts borrowed from statistical mechanics and thermodynamics has a long history reaching back to Maxwell's demon and the ensuing debate on the relation between physics and information \\cite{parrondo2015thermodynamics}. Over the last 20 years, the informational view of thermodynamics has experienced great developments, which has made it possible to broaden its scope from equilibrium to non-equilibrium phenomena \\cite{jarzynski2011equalities,de2013non}. Of particular importance are the so-called fluctuation theorems \\cite{seifert2012stochastic,jarzynski2000hamiltonian,crooks1999entropy}, which relate equilibrium quantities to non-equilibrium trajectories, thus allowing one to approximate equilibrium quantities via experimental realizations of non-equilibrium processes \\cite{ytreberg2004efficient,park2003free}. Among the fluctuation theorems, two results stand out: Jarzynski's equality \\cite{jarzynski1997equilibrium,cohen2004note,jarzynski2004nonequilibrium} and Crooks' fluctuation theorem \\cite{crooks1998nonequilibrium,crooks2000path}, as they aim to bridge the apparent chasm between reversible microscopic laws \nand irreversible macroscopic phenomena \\cite{loschmidt1876ueber}.\n\nThe advances in non-equilibrium thermodynamics have recently also led to new theoretical insights into simple learning systems \\cite{goldt2017stochastic,perunov2016statistical,england2015dissipative,still2012thermodynamics,ortega2013thermodynamics,grau2018non}.\nAbstractly, thermodynamic quantities like energy, entropy or free energy can be thought to define order relations between states \\cite{lieb1991,gottwald2019}, which makes them applicable to a wide range of problems. 
In the economic sciences, for example, such order relations are typically used to define a decision-maker's preferences over states \\cite{mascollel1995}. Accordingly, a decision-maker or a learning system can be thought to maximize a utility function, analogous to a physical system that aims to minimize an energy function.\nMoreover, in the presence of uncertainty in stochastic choice, such decision-makers can be thought to operate under entropy constraints reflecting the decision-maker's precision \\cite{ortega2013thermodynamics,parrondo2015thermodynamics}, resulting in soft-maximizing the corresponding utility function instead of perfectly maximizing it. This is formally equivalent to following a Boltzmann distribution with energy given by the utility. Therefore, in this picture, the physical concept of work corresponds to utility changes caused by the environment, whereas the physical concept of heat corresponds to utility gains due to internal adaptation \\cite{still2012thermodynamics}. Just as a thermodynamic system is driven by work, such learning systems are driven by changes in the utility landscape (e.g. changes in an error signal). By exposing learning systems to varying environmental conditions, it has been hypothesized that adaptive behavior can be studied in terms of fluctuation theorems \\cite{grau2018non,england2015dissipative}, which are not necessarily tied to physical processes but are broadly applicable to stochastic processes satisfying certain constraints \\cite{hack2022jarzyskis}.\n\nAlthough fluctuation theorems have been empirically observed in numerous experiments in the physical sciences \\cite{douarche2005experimental,collin2005verification,saira2012test,liphardt2002equilibrium,an2015experimental,smith2018verification}, there have been no reported experimental results relating fluctuation theorems to adaptive behavior in humans or other living beings. 
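For reference, in their standard thermodynamic form, Jarzynski's equality and Crooks' fluctuation theorem read\n$$\\big\\langle e^{-\\beta W}\\big\\rangle=e^{-\\beta\\Delta F}\\quad\\textup{(Jarzynski)},\\qquad \\frac{\\rho_F(W)}{\\rho_B(-W)}=e^{\\beta(W-\\Delta F)}\\quad\\textup{(Crooks)},$$\nwhere $\\beta$ is the inverse temperature, $W$ is the work performed along a non-equilibrium trajectory, $\\Delta F$ is the equilibrium free energy difference between the final and initial states, and $\\rho_F$, $\\rho_B$ denote the work distributions of the forward and backward (time-reversed) protocols. 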
Here, we test Jarzynski's equality and Crooks' fluctuation theorem experimentally in a human sensorimotor adaptation task.\nIn this context, the fluctuation theorem establishes a linear relationship between the externally imposed utility changes driving the learning process (which are directly related to non-predicted information and energy dissipation \\cite{still2012thermodynamics}) and the log-probability ratio between forward and backward adaptation trajectories, when exposing participants to the sequence of environments either in the forward or reverse order. Accordingly, such learners can be quantitatively characterized by a hysteresis effect that can also be observed in simple physical systems.\n\n\\section{Results}\n\\label{sect:results}\n\nIn a visuomotor adaptation task, human participants controlled a cursor on a screen towards a single stationary target by moving a mechanical manipulandum that was obscured from their vision under an overlaid screen---see Figure~\\ref{methods exp}\\textbf{A}. Crucially, in each trial $n$, the position of the cursor could be rotated with angle $\\theta_n$ relative to the actual hand position so that participants had to adapt when moving the cursor from the start position to the target. To measure participants' adaptive state, we recorded their movement position at the time of crossing a certain distance from the start position, so that their response could be characterized by an angle $x_n$. \nThe deviation between participants' response and the required movement incurs a sensorimotor loss \\cite{kording2004loss} that can be quantified as an exponential quadratic error \n\\begin{equation}\n\\label{utility}\n E_n(x)= 1 - e^{- (x-(\\theta_n+b))^2},\n\\end{equation}\nwhere $\\theta_n$ is the true rotation angle set in trial $n$ and $b$ is a participant-specific bias parameter---see Figure \\ref{methods exp}\\textbf{D}.
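The loss \eqref{utility} is approximately quadratic near the target and saturates at $1$ for large deviations, which a minimal Python sketch makes explicit (this is our own illustration, not the study's analysis code):

```python
import numpy as np

def sensorimotor_loss(x, theta, b=0.0):
    """Exponential quadratic error: 1 - exp(-(x - (theta + b))^2)."""
    return 1.0 - np.exp(-(x - (theta + b)) ** 2)

print(sensorimotor_loss(0.0, 0.0))   # exact hit: loss is 0
print(sensorimotor_loss(0.1, 0.0))   # near the target: approximately quadratic, ~0.01
print(sensorimotor_loss(30.0, 0.0))  # far from the target: saturates near 1
```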
This loss is taken to be the energy (or negative utility) of a participant's stochastic response $X_n = x_n$. Therefore, the pointing behavior after a suitably long adaptation time can be described by a Boltzmann equilibrium distribution $p_n^{eq}$ of the form\n\\begin{equation}\n\\label{Boltzmann}\n p_{n}^{eq}(x_n) = \\exp\\big(-\\beta( E_n(x_n) - F_n )\\big),\n\\end{equation}\nfor all $x_n\\in A_n$, where the sensorimotor error $E_n(x_n)$ plays the role of an energy, the free energy term $F_n = -\\frac{1}{\\beta} \\log \\int_{A_n} \\exp\\left( -\\beta E_n(x_n) \\right) dx_n$ arises from the normalization, and $A_n$ is the support of the equilibrium distribution $p_{n}^{eq}$, which will vary for each participant, as we explain in Section \\ref{exp design}. Moreover, the softness parameter $\\beta$, also known as \\textit{inverse temperature}, controls the trade-off between entropy maximization and energy minimization, essentially interpolating between a purely stochastic choice ($\\beta = 0$) and a purely rational choice ($\\beta \\to \\infty$) minimizing the energy perfectly. \n\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=.32]{lastI.jpg}\n \\caption{\\textbf{A} Schematic representation of an experimental trial with deviation angle $\\theta$. The dotted line represents the participant's hand movement and the continuous line represents the rotated movement observed on the screen. \\textbf{B} Experimental protocol. The continuous line represents the deviation angles $\\theta$ imposed during the first experimental cycle, where trials 1 to 25 constitute the forward process and trials 34 to 58 constitute the backward process. The dotted line represents the second cycle. 
\\textbf{C} Illustration of the equilibrium distributions \\eqref{Boltzmann} with $b,\\theta_n=0$ resulting from the exponential quadratic error \\eqref{utility} and, respectively, $\\beta=1,1.5,2$.\n The shaded area represents the target, which tolerates, at most, an error of $2 \\degree$. \\textbf{D} As participants have to equilibrate between the forward and backward protocols, we compare their performance in the $0 \\degree$ plateaus between protocols with the equilibrium distribution recorded before the start of the protocol, here shown exemplarily for participant 7 (red: normalized error histogram for the in-between plateaus, green: equilibrium histogram). The same comparison for each participant can be found in Figure \\ref{plateaus all}.}\n \\label{methods exp}\n\\end{figure}\n\nThe task consisted of a sequence of target-reaching trials, where the rotation angle $\\theta_n$ changed from one trial $n$ to the next trial $n+1$ according to a given up-down protocol---see Figure~\\ref{methods exp}\\textbf{B}---so that participants' responses could be represented by a trajectory $\\vect{x}=(x_0,x_1,..,x_N)$.\nWhen the environment is changing over many time steps, we can distinguish error changes $\\Delta E_{ext}(\\vect{x}) \\coloneqq \\sum_{n=0}^{N-1} (E_{n+1}(x_n)-E_{n}(x_n))$ that are induced externally by changes in the environment, from error changes $\\Delta E_{int}(\\vect{x}) \\coloneqq \\sum_{n=1}^{N} (E_{n}(x_n)-E_{n}(x_{n-1}))$ due to internal adaptation when changing from $x_{n-1}$ to $x_{n}$. Crucially, it is exactly the externally induced changes in error, $\\Delta E_{ext}(\\vect{x})$, analogous to the physical concept of work, that drive the adaptation process: if $\\Delta E_{ext}(\\vect{x})$ is large, the system is more surprised and has to adapt more. In the following, we thus refer to $\\Delta E_{ext}(\\vect{x})$ as \\emph{driving error} or \\emph{driving signal}.
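The two sums obey a first-law-like bookkeeping identity, $\Delta E_{ext}(\vect{x}) + \Delta E_{int}(\vect{x}) = E_N(x_N) - E_0(x_0)$, which the following Python sketch verifies on a made-up trajectory (the protocol and response values are illustrative, not participant data):

```python
import numpy as np

def energy(x, theta):
    # exponential quadratic error with b = 0
    return 1.0 - np.exp(-(x - theta) ** 2)

# illustrative protocol and response trajectory in degrees (made up, not real data)
thetas = np.array([0.0, 5.0, 10.0, 5.0, 0.0])
xs = np.array([0.3, 3.8, 9.1, 6.0, 0.5])
N = len(thetas) - 1

# externally induced error changes: the environment switches under a fixed response
dE_ext = sum(energy(xs[n], thetas[n + 1]) - energy(xs[n], thetas[n]) for n in range(N))
# internally induced error changes: the response adapts within a fixed environment
dE_int = sum(energy(xs[n], thetas[n]) - energy(xs[n - 1], thetas[n]) for n in range(1, N + 1))

# the two contributions telescope to the total change in error
total = energy(xs[N], thetas[N]) - energy(xs[0], thetas[0])
assert np.isclose(dE_ext + dE_int, total)
```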
When applying Crooks' fluctuation theorem for general adaptive systems \\cite{hack2022jarzyskis} to the above setting, we obtain the linear relation\n\\begin{equation}\n\\label{prediction}\n \\Delta E_{ext}(\\vect{x}) - \\Delta F = \\frac{1}{\\beta}\\log \\left( \\frac{\\rho^F(\\Delta E_{ext}(\\vect{x}))}{\\rho^B(-\\Delta E_{ext}(\\vect{x}))} \\right),\n\\end{equation}\nwhere $\\Delta F$ denotes the free energy difference $F_N-F_0$, and $\\rho^F$ and $\\rho^B$ are the learner's probability densities over possible driving errors after sequentially exposing the learning system to the $N+1$ environments in forward and reverse order, respectively. In equation \\eqref{prediction}, these densities are evaluated at the actual driving errors $\\Delta E_{ext}(\\vect{x})$ and $-\\Delta E_{ext}(\\vect{x})$, respectively, for a particular adaptive trajectory $\\vect{x}$.\n\nA direct consequence of \\eqref{prediction} is Jarzynski's equality \\cite{crooks1998nonequilibrium}, which states that\n\\begin{equation}\n\\label{prediction II}\n \\big\\langle e^{-\\beta \\Delta E_{ext}(\\vect{X})}\\big\\rangle = e^{-\\beta \\Delta F},\n\\end{equation}\nwhere $\\langle ~ \\cdot ~ \\rangle \\coloneqq \\mathbb E[~\\cdot~]$ denotes the expectation operator, considering $\\vect{X} = (X_n)_{n=0}^N$ a Markov chain with transition densities $\\Pi_n$ that have $p^{eq}_n$ as stationary distributions. In our experiment, $\\vect{X}$ represents participants' responses recorded over multiple repetitions of the forward-backward protocol. \nIn the following, we will test the relationships \\eqref{prediction} and \\eqref{prediction II} experimentally with $\\Delta F = 0$, as our human learners start and end in the same environmental state.\n\nIn our experiment, the task is divided into 20 cycles of 66 trials each, following the protocol \\eqref{protocol} illustrated in Figure \\ref{methods exp}\\textbf{B}.
We refer to trials 1 to 25 of each cycle as a realization of the \\emph{forward process} and trials 34 to 58 as a realization of the \\emph{backward process}. Notice that the backward process consists of the same angles as the forward process, that is, the same utility functions, but in reversed order. Thus, we record for each participant 20 values for $\\Delta E_{ext}(\\vect{x})$ in both the forward and backward processes\nthat we use to estimate participants' probability densities of the forward and backward processes, $\\rho^F$ and $\\rho^B$, respectively, using kernel density estimation. As the amount of data available to test the linear relation in \\eqref{prediction} is limited, we will use simulation results in the following to compare against participants' behavior.\n\n\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=.2]{fig-2-last-IV.jpg}\n \\caption{Simulation of Crooks' fluctuation theorem. \\textbf{A} Simulation with 1000 cycles. In black, the theoretical prediction; in red, the linear regression for the simulated data and, in green, the simulated points. Since the simulated data set closely follows Crooks' fluctuation theorem \\eqref{prediction}, Jarzynski's equality \\eqref{prediction II} is fulfilled. \\textbf{B} Simulation with 20 cycles and bootstrapping.
The black line is the theoretical prediction \\eqref{prediction} while the red line and shaded area are, respectively, the mean and the 99 \\% confidence interval of \\eqref{prediction} after 1000 bootstraps of the driving error values obtained in a single run (which consists of 20 cycles).}\n \\label{2 simus}\n\\end{figure}\n\nWhen simulating an artificial decision-maker based on a stochastic optimization scheme with Markovian dynamics, for example a Metropolis-Hastings algorithm with target distribution $p_n^{eq}\\propto \\exp(-\\beta E_n)$, it is clear that we can recover the linear relationship \\eqref{prediction}, provided that sufficient samples are collected \\cite{hack2022jarzyskis}---see, for example, a simulation with 1000 cycles in Figure \\ref{2 simus}\\textbf{A}, where we can see good agreement between the theoretical prediction (in black) and the linear regression of the observed data (in red). As a result, \\eqref{prediction II} also holds in this scenario. The more critical question is what happens when only a few samples are available. To this end, we use the stochastic optimization algorithm to simulate the protocol of our experiment, that is, 20 cycles, and indicate confidence intervals using 1000 bootstraps. It can be seen in Figure \\ref{2 simus}\\textbf{B} that the theoretical prediction is consistent with the $99\\%$ confidence interval in the region where $|\\Delta E_{\\textrm{ext}}| \\leq 4$ (which is the region where our experimental data lie).\nUsing the same bootstrapped data, we obtain several estimates of $\\langle e^{- \\Delta E_{ext}(\\vect{X})}\\rangle$ (the mean of $e^{- \\Delta E_{ext}(\\vect{X})}$ for the observed values of $\\Delta E_{ext}(\\vect{X})$ at each bootstrap) which we use to calculate a confidence interval for it.
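The percentile-bootstrap construction of such confidence intervals can be sketched as follows (a generic implementation; the 20 synthetic driving-error values are placeholders, not our simulation output):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(driving_errors, n_boot=1000, level=0.99):
    """Percentile-bootstrap confidence interval for the mean of exp(-dE_ext), beta = 1."""
    w = np.asarray(driving_errors, dtype=float)
    means = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(w, size=w.size, replace=True)  # resample with repetition
        means[b] = np.mean(np.exp(-resample))
    alpha = (1.0 - level) / 2.0
    return tuple(np.quantile(means, [alpha, 1.0 - alpha]))

w = rng.normal(0.1, 0.5, size=20)   # 20 placeholder driving-error values
lo, hi = bootstrap_ci(w)
print(lo, hi)                       # the interval covers the plug-in estimate
```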
This results in the $99\\%$ confidence interval for $\\langle e^{-\\Delta E_{ext}(\\vect{X})}\\rangle$ being $(0.48,\\text{ }1.64)$, which is consistent with the theoretical prediction \\eqref{prediction II} for $\\Delta F = 0$. Accordingly, we expect similar behavior for our experimental data. Note that, for simplicity, we take $b=0$, $\\beta=1$ and, for all $n$, $A_n=[-90,90]$ in these simulations.\n\n\\begin{comment}\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=.35]{long-simu2.eps}\n \\caption{Linear relation long simulation with 1000 cycles. In black, the theoretical prediction; in red, the linear regression for the simulated data and, in blue, the simulated points. Since the simulated dataset adjusts pretty well to Crooks' fluctuation theorem, Jarzynski's equality is fulfilled.}\n \\label{long simu}\n\\end{figure}\n\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=.8]{short.simu-boot2.eps}\n \\caption{Linear relation short simulation. In black, the theoretical line; in blue, two thin lines stand for the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the driving error values obtained in one run of the short simulation and a thick line stands for the mean of these bootstraps. (we should color between the lines)}\n \\label{short simu}\n\\end{figure}\n\\end{comment}\n\n\\begin{figure}[!tb]\n\\centering\n \n \\includegraphics[scale=0.21]{lastIII-II.jpg}\n \\caption{Hysteresis effect. The filled triangles are the mean of the observed angles for every deviation in both the forward process, in green, and the backward process, in red. The black line is the forward protocol. Participants that achieve at least $50\\%$ adaptation are shaded by a green background color. Hysteresis can be observed between trials 1 and 5, 9 and 17, and 21 and 25.
Notice that, as expected, the forward means are below the backward means in the first region, above in the second, and below again in the third.}\n \\label{hyste plot}\n\\end{figure}\n\nParticipants' average adaptive responses can be seen in Figure \\ref{hyste plot} compared to the experimentally imposed true parameter values (the trial-by-trial responses can be seen in Figure \\ref{forward all}). The green and red lines distinguish the forward and backward trajectories, respectively, so that, from the contrast between the two curves, hysteresis becomes apparent, as is common in simple physical systems \\cite{jarzynski2011equalities} and as reported previously in similar experiments for sensorimotor adaptation \\cite{turnham2012facilitation}. Participants that achieve at least $50\\%$ adaptation are shaded by a green background color and are our participants of interest. The three participants that fail to achieve this minimum adaptation level are marked by a red shade. Instead of excluding these participants entirely from the analysis, we keep them in to show the contrast to the well-adapted participants and to highlight that the results reported for the well-adapted participants do not hold trivially for any participant producing inconsistent behavior.\n\nFigure \\ref{together} shows participants' data compared to the theoretical prediction from \\eqref{prediction} and the 99 \\% confidence interval after 1000 bootstraps, as in the case of the simulations in Figure \\ref{2 simus}\\textbf{B}.\nThere, we see that our data follow the trend of the theoretical prediction and lie within or close to the confidence interval bounds of the prediction over wide regions for several participants.\nThis is not a trivial result, as can easily be seen when randomizing the temporal order of the trajectory points or when replacing the utility function with another one that does not fit the setup.
Figure \\ref{togetherB}\\textbf{A} and \\ref{togetherB}\\textbf{B} show this, for example, for an inverted Mexican hat (\\eqref{mex hat} with $\\sigma=4$) that assigns low utility to the target region, and for resamples of the trajectory points in a random order, respectively. Both results are clearly incompatible with the theoretical prediction. \n\n\nWhen conducting an additional robustness analysis in Figure~\\ref{graph distances}, we found that, under the proposed utility function, participants' behavior is compatible with Crooks' fluctuation theorem for a broad neighbourhood of parameter settings, but that this compatibility breaks down when choosing implausible parameters. Regarding Jarzynski's equality \\eqref{prediction II}, the confidence intervals for the majority of participants are consistent with the theoretical prediction when using the bootstrapped values to calculate $\\langle e^{-\\beta \\Delta E_{ext}(\\vect{X})}\\rangle$ (cf. Table \\ref{jarz participants}). In contrast, when following the same procedure for both the inverted Mexican hat and the randomized procedure, we obtain consistency for a considerably smaller number of participants. In particular, for the inverted Mexican hat, we obtain consistency for only two participants. Moreover, these participants are $S_8$ and $S_9$, which belong to the group that did not reach at least $50\\%$ adaptation (indicated by the red background area in the figures). For the randomized procedure, the expected number of participants that show consistency is also close to two, although the specific subjects that are consistent vary with the realization of the randomized procedure. More specifically, after 1000 runs of the randomized procedure, the mean number of consistent subjects we observed was 2.33.
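For illustration, a self-contained re-implementation of such a Metropolis-Hastings simulation (our own minimal sketch: the protocol, step counts, and cycle number are simplified choices, not the exact experimental values) recovers Jarzynski's equality \eqref{prediction II} with $\Delta F = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0

def energy(x, theta):
    # exponential quadratic error with b = 0
    return 1.0 - np.exp(-(x - theta) ** 2)

# up-down rotation protocol (degrees) that returns to its start, so Delta F = 0
protocol = [0.0, 5.0, 10.0, 15.0, 20.0, 15.0, 10.0, 5.0, 0.0]

def equilibrium_sample(theta):
    # rejection sampling from the equilibrium distribution on the support [-90, 90]
    while True:
        x = rng.uniform(-90.0, 90.0)
        if rng.random() < np.exp(-beta * energy(x, theta)):
            return x

def relax(x, theta, steps=20):
    # Metropolis-Hastings steps; detailed balance w.r.t. p^eq holds by construction
    for _ in range(steps):
        prop = rng.uniform(-90.0, 90.0)
        if rng.random() < np.exp(-beta * (energy(prop, theta) - energy(x, theta))):
            x = prop
    return x

vals = []
for _ in range(1000):                    # far more cycles than the experiment's 20
    x = equilibrium_sample(protocol[0])  # start each cycle in equilibrium
    dE_ext = 0.0
    for n in range(len(protocol) - 1):
        # driving error: the environment switches while the response is held fixed ...
        dE_ext += energy(x, protocol[n + 1]) - energy(x, protocol[n])
        # ... then the learner adapts toward the new equilibrium
        x = relax(x, protocol[n + 1])
    vals.append(np.exp(-beta * dE_ext))

print(np.mean(vals))  # Jarzynski: converges to exp(-beta * Delta F) = 1
```

With only 20 cycles, the same estimator fluctuates considerably, which is why bootstrapped confidence intervals rather than point estimates are compared against the theory.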
\n\n\n\\begin{comment}\n\\begin{table}[!htb]\n\\centering\n \\begin{tabular}{||c| c| c ||} \n \\hline\n participant & Mean & Standard deviation \\\\ [0.5ex] \n \\hline\\hline\n 1 & 1.02 & 0.35 \\\\ \n 2 & 0.72 & 0.30 \\\\\n 3 & 0.43 & 0.09 \\\\\n 4 & 1.36 & 0.51 \\\\\n 5 & 0.57 & 0.27 \\\\\n 6 & 0.48 & 0.15 \\\\\n 7 & 0.31 & 0.10 \\\\\n 8 & 4.80 & 2.60 \\\\\n 9 & 1.55 & 0.51 \\\\\n 10 & 2.28 & 1.30 \\\\\n \\hline\n \\end{tabular}\n \\caption{Jarzynski's equality experimental results. Notice the results should be one according to the theoretical prediction. To get the results, we bootstrap the observed value of $\\Delta E_{ext}(\\vect{x})$ for the forward process 1000 times and calculate $\\int p(W) e^{-W}dW$, where $p(W)$ is obtained using kernel density estimation on each bootstrap. Aside from participants 3 and 7, all participants are almost consistent with 3 sigmas.}\n \\end{table}\n\\end{comment}\n \n \\begin{table}[!tb]\n\\centering\n \\begin{tabular}{||c| c|| c| c ||} \n \\hline\n participant & Confidence interval & participant & Confidence interval \\\\ [0.5ex] \n \\hline\\hline\n 1 & \\cellcolor[RGB]{175,234,180}(0.03,\\text{ }48.59) & 6 & \\cellcolor[RGB]{175,234,180} (0.04,\\text{ }3.75) \\\\ \n 2 & \\cellcolor[RGB]{175,234,180} (0.03,\\text{ }137.58) & 7 &\\cellcolor[RGB]{175,234,180} (0.01,\\text{ }0.50)\\\\\n 3 & \\cellcolor[RGB]{175,234,180} (0.01,\\text{ }3.63) & 8 &\\cellcolor[RGB]{255, 182, 193}(1.98,\\text{ }518130.21)\\\\\n 4 & \\cellcolor[RGB]{175,234,180}(0.49,\\text{ }63.48) & 9 & \\cellcolor[RGB]{255, 182, 193}(0.76,\\text{ }77.24)\\\\\n 5 &\\cellcolor[RGB]{175,234,180} (0.46,\\text{ }1.37) & 10 &\\cellcolor[RGB]{255, 182, 193}(0.26,\\text{ }48758.33)\\\\\n \\hline\n \\end{tabular}\n \\caption{Experimental results for Jarzynski's equality. 
We include the confidence intervals for the left-hand side of \\eqref{prediction II}, which we obtain after bootstrapping the observed values of $\\Delta E_{ext}(\\vect{x})$ for the forward process 1000 times and estimating $\\langle e^{-\\beta \\Delta E_{ext}(\\vect{X})}\\rangle$ by its mean for each set of bootstrapped data.\n\n Note that, in our setup, $\\Delta F=0$ on the right-hand side of \\eqref{prediction II}, so the theoretical prediction for this estimate is $1.0$. Participants that achieve at least $50\\%$ adaptation (cf. Figure \\ref{hyste plot}) are shaded by a green background color.}\n \\label{jarz participants}\n \\end{table}\n\n\\section{Discussion}\n\n\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.21]{lastIV.jpg}\n \\caption{Experimental results for Crooks' fluctuation theorem when the sensorimotor loss behaves as an exponential quadratic error \\eqref{utility}. The black line is the theoretical prediction of Crooks' fluctuation theorem \\eqref{prediction} while the curves stand for the mean path after 1000 bootstraps of the observed driving error values. Participants that achieve at least $50\\%$ adaptation (cf. Figure \\ref{hyste plot}) are shaded by a green background color. The shaded areas inside the graphs are the 99\\% confidence intervals which result from bootstrapping. Note that we fit the parameters for each participant according to Section \\ref{exp design}.}\n \\label{together}\n\\end{figure}\n\n\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.21]{lastV-II.jpg}\n \\caption{Control results for Crooks' fluctuation theorem in two scenarios: \\textbf{A} the sensorimotor loss behaves like a Mexican hat function and \\textbf{B} the sensorimotor loss behaves as an exponential quadratic error but we sample the observed angles randomly with repetition.
The black line is the theoretical prediction of Crooks' fluctuation theorem \\eqref{prediction} while the curves stand for the mean path after 1000 bootstraps of the observed driving error values. The shaded areas inside the graphs are the 99\\% confidence intervals which result from bootstrapping. Note that, for simplicity, we assume $\\beta=1$ for all participants when using the Mexican hat to demonstrate that the result in (A) does not trivially hold for any cost function. For \\textbf{B}, we fit the parameters for each participant according to Section \\ref{exp design}.}\n \\label{togetherB}\n\\end{figure}\n\nIn our experiment, we have investigated the hypothesis that human sensorimotor adaptation may be subject to the thermodynamic fluctuation theorems first reported by Crooks \\cite{crooks1999entropy} and Jarzynski \\cite{jarzynski2000hamiltonian}. In particular, we tested whether changes in sensorimotor error induced externally by an experimental protocol are linearly related to the log-ratio of the probabilities of behavioral trajectories under a given forward and time-reversed backward protocol of a sequence of visuomotor rotations. We found that participants' data, in all cases where participants showed an appropriate adaptive response, were consistent with this prediction\nor close to its confidence interval bounds, as expected from our simulations with finite sample size.\nMoreover, we found that the exponentiated error averaged over the path probabilities was statistically compatible with unity for these participants, in line with Jarzynski's theorem.
\n\nTogether these results not only extend the experimental evidence of \\linebreak Boltzmann-like relationships between the probabilities of behavior and the corresponding order-inducing functions---such as energy, utility, or sensorimotor error---from the equilibrium to the non-equilibrium domain, but also from simple physical systems to more complex learning systems when studying adaptation in changing environments, thus deepening the parallel between thermodynamics in physics and decision-making systems \\cite{ortega2013thermodynamics}.\n\n\\begin{comment}\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.65]{fig_5.png}\n \\caption{(NEED TO CHANGE) Close-up of Figure \\ref{together} \\textbf{A} in the region where the simulations of the experiment show a better fit regarding Crooks' fluctuation theorem \\eqref{prediction}, namely, the $\\Delta E_{\\textrm{ext}}$-interval $[-1,1]$ (cf. Figure \\ref{2 simus}\\textbf{B}).}\n \\label{together zoom}\n\\end{figure}\n\\end{comment}\n\nWhen testing for the validity of thermodynamic relations, one of the most critical issues is the choice of the energy function, that is, in our case, the error cost function. In physical systems, the energy function is usually hypothesized from simple models involving point masses, springs, rigid bodies, etc., and generally requires knowledge of the degrees of freedom of the system under consideration. Here we have used an exponential quadratic error as a loss function, as it has been suggested previously that human pointing behavior can be best captured by loss functions that approximately follow a negative parabola for small errors and then level off for large errors \\cite{kording2004loss}. In the absence of very large errors, many studies in the literature on sensorimotor learning have only used the quadratic loss term \\cite{wolpert1995internal,todorov2002optimal}. Thus, our assumptions are in line with the literature.
Crucially, the reported results fail when assuming nonsensical cost functions, like the Mexican hat.\n\n\nExperimental tests of both Jarzynski's equality \\eqref{prediction II} and Crooks' fluctuation theorem \\eqref{prediction} have been previously reported in classical physics \\cite{douarche2005experimental,collin2005verification,toyabe2010experimental,saira2012test,liphardt2002equilibrium} and also, in the case of Jarzynski's equality, in quantum physics \\cite{an2015experimental,smith2018verification}. Importantly, these results have been successfully tested in several contexts: unfolding and refolding processes involving RNA \\cite{collin2005verification,liphardt2002equilibrium}, electronic transitions between electrodes manipulating a charge parameter \\cite{saira2012test}, rotation of a macroscopic object inside a fluid surrounded by magnets where the current of a wire attached to the macroscopic object is manipulated \\cite{douarche2005experimental}, and a trapped ion \\cite{an2015experimental,smith2018verification}. Despite differences in physical realization, protocols, and energy functions (and thus work functions), all the above experiments follow the same basic design as the approach presented here. This supports the claim that \nfluctuation theorems do not necessarily rely on involved physical assumptions but are simple mathematical properties of certain stochastic processes \\cite{hack2022jarzyskis}, although originally they were derived in the context of non-equilibrium thermodynamics \\cite{jarzynski1997equilibrium,crooks1998nonequilibrium}. \n\nMathematically, Crooks theorem \\eqref{prediction} holds for any stochastic process that (i) is Markovian, (ii) has an initial distribution in equilibrium, and (iii) has transition probabilities satisfying detailed balance with respect to the corresponding equilibrium distributions \\cite{hack2022jarzyskis}.
\nOur experimental test of Equation~\\eqref{prediction} can thus be seen as a test for the hypothesis that human sensorimotor adaptation processes satisfy conditions (i), (ii), and (iii). Condition (i) requires adaptation to be Markovian, which is in line with most error-driven models of sensorimotor adaptation \\cite{shadmehr2012} that assume some internal state update of the form $x_{t+1}=f(x_t, e)$ with adaptive state $x$ and error $e$. While such models have proven fruitful for simple adaptation tasks like ours, they also have clear limitations, for example when it comes to meta-learning processes that have been reported in more complex learning scenarios \\cite{braun2010,lieder2019}. Condition (ii) is supported by our data in the second and last rows of Figure \\ref{plateaus all},\nwhere it can be seen that participants' behavior at the beginning of each cycle is roughly consistent with the equilibrium behavior recorded prior to the start of the experiment. Condition (iii) requires that the adaptive process converges to the equilibrium distribution \\eqref{Boltzmann} dictated by the environment and that the behaviour remains statistically unchanged when staying in that environment. Moreover, it requires that the equilibrium behavior at each energy level is time-reversible, that is, once adaptation has ceased, the trial-by-trial behavior would have the same statistics when played forward or backward in a video recording. Note, however, that this does not imply time-reversibility over the entire adaptation trajectory; it is only required locally for each transition step. \nThe usual noise-driven models of adaptation fulfil this requirement, like the Metropolis-Hastings scheme that has been proposed to simulate human sensorimotor adaptation \\cite{SANBORN2016883,grau2018non}.\n\nWhile Jarzynski's equality \\eqref{prediction II} directly follows from Crooks theorem, weaker assumptions are sufficient to derive it \\cite{hack2022jarzyskis,jarzynski1997equilibrium}.
In particular, condition (iii) regarding detailed balance is not necessary, as it is only required that the behavioral distribution does not change anymore once the equilibrium distribution is reached. Thus, Equation~\\eqref{prediction II} can be used as a test for the weaker hypothesis that human sensorimotor adaptation satisfies conditions (i), (ii) and stationarity after convergence. While Jarzynski's equality only requires samples from the forward process, Crooks theorem also tests the relation between the forward and the backward processes. In particular, Crooks theorem\ndecouples the information processing with respect to any particular environment from the biases introduced by the adaptation history, that is, it assumes the transition probabilities for any given environment are independent of the history. Hence, the observed difference in behaviour after having adapted to the same environment, the hysteresis, is solely explained in terms of the information processing history before encountering the environment. Such hysteresis effects are not only common in simple physical systems like magnets or elastic bands, but have also been reported for sensorimotor tasks \\cite{kelso1994,schack2011,turnham2012facilitation}. 
The hysteresis effects we report in Figure~\\ref{hyste plot} are in line with a system obeying Crooks theorem and can be replicated using Markov Chain Monte Carlo simulations of adaptation \\cite{grau2018non}.\n\nOur study is part of a number of recent studies that have tried to harness equilibrium and non-equilibrium thermodynamics to gain new theoretical insights into simple learning systems \\cite{goldt2017stochastic,perunov2016statistical,england2015dissipative,still2012thermodynamics,ortega2013thermodynamics}.\nFor example, the information that can be acquired by learning in simple feedforward neural networks has been shown to be bounded by thermodynamic costs given by the entropy change in the weights and the heat dissipated into the environment \\cite{seifert2012stochastic}. More generally, when interpreting a system's response to a stochastic driving signal in terms of computation, the amount of non-predictive information contained in the state about past environmental fluctuations is directly related to the amount of thermodynamic dissipation \\cite{still2012thermodynamics}. This suggests that thermodynamic fundamentals, like the second law, can be carried over to learning systems.\nConsider, for example, a Bayesian learner where the utility is given by the model log-likelihood and where the data are presented either in one chunk for a single update, or consecutively in small batches with many small updates. In the latter case, the cumulative surprise is much smaller and lower bounded by the log-likelihood of the data, which corresponds to the free energy difference before and after learning \\cite{grau2018non}.
Finally, it has even been suggested that the dissipation of absorbed work as it is studied in a generalized Crooks theorem may underlie a general thermodynamic mechanism for self-organization and adaptation in living matter \\cite{england2015dissipative}, raising the question of whether such a general principle of adaptive dissipation could also govern biological learning processes \\cite{perunov2016statistical}.\n\n\n\n\n\\begin{comment}\n\\begin{figure}[!htb]\n\\centering\n \\includegraphics[scale=.8]{jarz-simu.eps}\n \\caption{In blue, the histogram for Jarzynski's equality. The values are obtained after 1000 bootstraps of one run of the short simulation. The mean is 0.91 and the standard deviation is 0.15. In red, the best fit by a normal distribution. The Lilliefors normality test is not rejected. The best fit mean is 0.93.}\n \\label{jarzynski simu}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\section*{Other narrative}\n\n\\paragraph{Narrative} We aim to study quantitatively the difficulty of adaptation in human decision-making. Given a set of $N+1$ environments where, for each of them, the decision-maker has to choose, after a finite amount of time, some element of a set $S$, we are interested in the distribution of the choices $(x_0,x_1,..,x_N) \\in S^{N+1}$. We could, for example, measure somehow the discrepancy between the observed distribution and the distribution we would get if the decision-maker adapted perfectly. Alternatively, we could measure it by comparing the observed distribution $p$ with the one we get when presenting the decision-maker with the same set of environments in reversed order $p^\\dagger$. In case the decision-maker learned perfectly every environment, both distributions would coincide. The discrepancy between them is, thus, a way of measuring how well the decision-maker adapts to the presented environments. 
In physics, this discrepancy follows a specific equation, namely\n\\begin{equation}\n\\label{ave eq}\n \\langle W_d(\\vect{x})\\rangle_{\\rho(\\vect{x})} = \\frac{1}{\\beta} D_{KL}(p||p^\\dagger)\n\\end{equation}\nwhere $p(\\vect{x})$ is the probability of observing some vector $\\vect{x}$ when facing the sequence of environments and $p^\\dagger(\\vect{x})$ is the probability of observing the same vector, in reversed order, when facing the sequence of environments reversed and $W_d$ is the dissipated work (explain). Actually, \\eqref{ave eq} has a point wise counterpart (from which it can be derived) which states $\\forall \\vec{x} \\in S^{N+1}$\n\\begin{equation}\n\\label{point eq}\n W_d(\\vect{x})=\\frac{1}{\\beta} \\log \\frac{\\rho(X_N = x_N,..,X_0=x_0)}{\\rho(Y_N=x_0,..,Y_0=x_N)} \n\\end{equation}\nwhere $X_i$ are the random variables corresponding to the choices when the energies are given in the original order and $Y_i$ the random variables when the environment order in inverted. 
Given the fact we have a small sample, instead of testing directly \\eqref{point eq}, we test a consequence of it, namely,\n\\begin{equation}\n W_0= \\log \\Big( \\frac{\\rho^F(W=W_0)}{\\rho^B(W=-W_0)} \\Big)\n\\end{equation}\n$\\forall W_0 \\in W(S^{N+1}$ where $W$ is the work (explain).\n\\end{comment}\n\n\\begin{comment}\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{initial.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{initial.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{initial.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{initial.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{initial.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{initial.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{initial.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{initial.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{initial.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{initial.s10.eps}\n \\caption{In blue, histogram initial 100 trials (no deviation). 
In red, equilibrium distribution where the width and center are chosen to represent the histogram.}\n \\label{plateaus all}\n\\end{figure}\n\n\\clearpage \n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{plat.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s10.eps}\n \\caption{Histogram 0 deviation plateaus.}\n \\label{plateaus all}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{forw.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{fow.s10.eps}\n \\caption{Observed angles in the forward trajectories.}\n \\label{forward all}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\clearpage \n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.3]{hyste-s1.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s3.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s5.eps} \\hfill\n 
\\includegraphics[scale=0.3]{hyste-s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s7.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s9.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s10.eps}\n \\caption{Hysteresis plot. In green, the mean of the observed angles for the forward process. In red, the mean of the observed angles for the backward process. In black, the forward protocol. Hysteresis can be observed between trials 1 and 5, 9 and 17 and 21 and 25. Notice the forward means are below the backward in the first region, above in the second and below again in the third, as expected.}\n \\label{hyste plot}\n\\end{figure}\n\\end{comment}\n\n\n\\begin{comment}\n\\clearpage \n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.3]{jarz-s1.eps} \\hfill\n \\includegraphics[scale=0.3]{jarz-s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s3.eps} \\hfill\n \\includegraphics[scale=0.2]{jarz-s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s5.eps} \\hfill\n \\includegraphics[scale=0.2]{jarz-s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s7.eps} \\hfill\n \\includegraphics[scale=0.3]{jarz-s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s9.eps} \\hfill\n \\includegraphics[scale=0.3]{jarz-s10.eps}\n \\caption{Histogram Jarzynski's equation data. The (mean,standard deviation) pair for each participant are: (0.8,0.28), (0.57,0.24), (0.33,0.07), (1.05,0.42), (0.45,0.2), (0.38,0.12), (0.25,0.08), (3.83,1.97), (1.22,0.42) and (1.81,1.03). 
7 of the 10 participants show results consistent with the Jarzynski equality with 3 sigmas or less.}\n \\label{jarz data}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{back.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s10.eps}\n \\caption{Observed angles in the backward trajectories.}\n \\label{forward all}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{boot.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s10.eps}\n \\caption{Experimental linear relation. 
In black, the theoretical line, and, in blue, the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the measured driving error values.}\n \\label{result}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{mex.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s10.eps}\n \\caption{Experimental linear relation using a mexican hat as energy function. In black, the theoretical line, and, in blue, the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the measured driving error values.}\n \\label{mex}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{rando.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s10.eps}\n \\caption{Experimental linear relation sampling with repetition the observed angles. 
In black, the theoretical line, and, in blue, the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the measured driving error values.}\n \\label{random}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.6]{allImages.png}\n \\caption{Experimental linear relation for three cases: original utility function (rows 1 and 2), inverted Mexican hat utility function (rows 3 and 4) and original utility function after sampling with repetition the observed angles (rows 5 and 6). In black, the theoretical line, and, in blue, the mean path after 1000 bootstraps of the observed driving error values. The colored areas represent the 99\\% confidence interval after 1000 bootstraps of the measured driving error values in each case.}\n \\label{together}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.6]{fig_5.png}\n \\caption{Close-up of Figure \\ref{together} in the region where the simulations of the experiment show a better fit of the theoretical prediction: driving error values between 1 and -1.}\n \\label{together zoom}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.4]{mex.s1_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s2_zoom.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s3_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s4_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s5_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s6_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s7_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s8_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s9_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s10_zoom.eps}\n \\caption{Zoom mexican hat}\n \\label{mexa 
zoom}\n\end{figure}\n\n\begin{figure}[ht!]\n\centering\n \includegraphics[scale=0.4]{rando.s1_zoom.eps} \hfill\n \includegraphics[scale=0.4]{rando.s2_zoom.eps} \n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{rando.s3_zoom.eps} \hfill\n \includegraphics[scale=0.4]{rando.s4_zoom.eps}\n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{rando.s5_zoom.eps} \hfill\n \includegraphics[scale=0.4]{rando.s6_zoom.eps}\n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{rando.s7_zoom.eps} \hfill\n \includegraphics[scale=0.4]{rando.s8_zoom.eps}\n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{rando.s9_zoom.eps} \hfill\n \includegraphics[scale=0.4]{rando.s10_zoom.eps}\n \caption{Zoom resampling}\n \label{resamp zoom}\n\end{figure}\n\n\begin{figure}[ht!]\n\centering\n \includegraphics[scale=0.4]{boot.s1_zoom.eps} \hfill\n \includegraphics[scale=0.4]{boot.s2_zoom.eps} \n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{boot.s3_zoom.eps} \hfill\n \includegraphics[scale=0.4]{boot.s4_zoom.eps}\n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{boot.s5_zoom.eps} \hfill\n \includegraphics[scale=0.4]{boot.s6_zoom.eps}\n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{boot.s7_zoom.eps} \hfill\n \includegraphics[scale=0.4]{boot.s8_zoom.eps}\n \\\[\smallskipamount]\n \includegraphics[scale=0.4]{boot.s9_zoom.eps} \hfill\n \includegraphics[scale=0.4]{boot.s10_zoom.eps}\n \caption{Zoom original}\n \label{orig zoom}\n\end{figure}\n\end{comment}\n\n\n\n\n\n\bibliographystyle{plain}\n\n\section{Introduction}\n\nFederated learning~\citep{kairouz2021advances} enables collaborative learning from distributed data located at multiple clients without the need to share the data among the different clients or with a central server. 
Much progress has been made in recent work on various aspects of this problem setting, such as improved optimization at each client~\citep{li2020federatedheterogenous}, improved aggregation of client models at the server~\citep{chen2020fedbe}, handling the heterogeneity in clients' data distributions~\citep{zhu2021data}, and also efforts towards personalization of the client models~\citep{mansour2020three}.\n\nMost existing formulations of federated learning view it as an optimization problem where the global loss function is optimized over multiple rounds, with each round consisting of point estimation of a loss function defined over the client's local data, followed by an aggregation of the client models on a central server. Point estimation, however, is prone to overfitting, especially if the amount of training data on clients is very small. Moreover, crucially, such an approach ignores the uncertainty in the client models. Indeed, taking into account the model uncertainty has been shown to be useful not only for improved accuracy and robustness of predictions when the amount of training data is limited, but also in other tasks, such as out-of-distribution (OOD) detection~\citep{salehi2021unified} and active learning~\citep{ahn2022federated}. In this work, we present a Bayesian approach for federated learning which takes into account the model uncertainty, and also demonstrate its effectiveness for tasks in federated settings where accurate estimates of model uncertainty are crucial, such as OOD detection and active learning.\n\nDespite its importance, federated learning in a Bayesian setting is inherently a challenging problem. 
Unlike standard federated learning, in the Bayesian setting, each client needs to estimate the posterior distribution over its weights (and also the posterior predictive distribution, which is needed at the prediction stage), which is an intractable problem.\nTypical ways to address this intractability of Bayesian inference for deep learning models include (1) approximate Bayesian inference, where the posterior distribution of model parameters is usually estimated via approximate inference methods, such as MCMC~\citep{zhang2019cyclical,izmailov2021bayesian}, variational inference~\citep{zhang2018advances}, or other faster approximations such as modeling the posterior via a Gaussian distribution constructed using the SGD iterates~\citep{maddox2019simple}, or (2) ensemble methods, such as deep ensembles~\citep{lakshminarayanan2017simple}, where the model is trained using different initializations to yield an ensemble whose diversity represents the model uncertainty.\n\nThe other key challenge for Bayesian federated learning is efficiently communicating the client model parameters, which are represented by a probability distribution, to the server, and their aggregation at the server. Note that, unlike standard federated learning, in the Bayesian setting, each client would maintain either a probability distribution over its model weights or an ensemble over the model weights. Both of these approaches make it difficult to efficiently communicate the client models and aggregate them at the server. Some recent attempts towards Bayesian federated learning have relied on simplifications such as assuming that the posterior distribution of each client's weights is a Gaussian~\citep{al2020federated,linsner2021approaches}, which makes model communication and aggregation at the server somewhat easier. However, this severely restricts the expressiveness of the client models. In our work, we do not make any assumption on the form of the posterior distribution of the client weights. 
Another appealing aspect of our Bayesian federated learning model is that it does not require the Monte-Carlo averaging~\citep{bishop2006pattern,korattikara2015bayesian} that Bayesian methods (especially for non-conjugate models, such as deep learning models) usually require at test time, which makes them slow (essentially, using $S$ Monte-Carlo samples from the posterior makes prediction $S$ times slower). In contrast, our approach leverages ideas from distillation of the posterior predictive distribution~\citep{korattikara2015bayesian}, using which we are able to represent the entire posterior predictive distribution using a single deep neural network, resulting in fast predictions at test time.\n\nOur contributions are summarized below:\n\begin{itemize}\n \item We present a novel and efficient approach to Bayesian federated learning in which each client performs a distillation of its posterior predictive distribution into a single deep neural network. This allows solving the Bayesian federated learning problem using ideas developed for standard federated learning methods, while still capturing and leveraging model uncertainty. \n \item Our approach does not make any strict assumptions on the form of the clients' posterior distributions (e.g., Gaussian~\citep{al2020federated}) or predictive distributions. Moreover, despite being Bayesian, our approach is still fast at test time since it does not require Monte-Carlo averaging (which is akin to averaging over an ensemble) but uses the idea of distribution distillation to represent the PPD via a single deep neural network.\n \item We present various ways to aggregate the clients' predictive distributions at the server, both with and without requiring publicly available (unlabeled) data at the server. 
\n \item In addition to tasks such as classification and out-of-distribution (OOD) detection, we also show a use case of our approach for the problem of active learning in the federated setting~\citep{ahn2022federated}, where our approach outperforms existing methods.\n\end{itemize}\n\n\n\begin{comment}\nThe popularity of machine and deep learning approaches is a result of availability of better computation and storage mediums in the ecosystem. In many practical scenarios, the pool of data is readily available in the form of a repository or some other storage medium. However, there are also ample cases, where the sharing of data is of concern if it is of private in nature e.g. medical health records. This, coupled with the increasing trend of computational power in mobile and personal devices has brought attention to the concept of leveraging resources on local devices for computation of learning models. It has led an increase in interest in the field of \textit{federated learning}.\n\nFederated learning explores the possibility of training the learning models using local resources present on remote devices. It uses local computation resources, thereby increasing computation efficiency due to a distributed training fashion. Also, it uses data in a secure way by retaining it with the clients. This eliminates the need of a central data repository and also saves the storage cost and the expensive communication cost between any central server and the distributed systems. Federated learning process usually iterates through many rounds of communication alternating between the server and client level computations. Usually, in each round, the clients initializes their model with the global server model and trains it using local data, which is then sent to the server. 
The server then aggregates these models to form the global server model, which can be used for the next round.\n\nBesides the formulation stated above in simpler words, the problem becomes more challenging due to a few more aspects.\n\n\\begin{enumerate}\n \\item \\textbf{Imbalanced data} Different devices may be able to generate or store different volumes of data. This can be either due to storage characteristics of device or the variability in usage of service\/application generating the data. This means that based on the available data, local learning models may have a small or large base of data to learn from and so their performance can vary with huge gaps.\n \\item \\textbf{Non-IID data} The data generated by various devices may have different distribution depending on the usage. For ex: Depending on the genre of conversation, the data for next word prediction task may differ.\n \\item \\textbf{Inactive participation} Many devices do not participate actively because of their availability and data generation activity. This may lead to different number of devices and different active devices present in each round.\n\\end{enumerate}\n\\end{comment}\n\n\\section{Bayesian Federated Learning via Predictive Distribution Distillation}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.35]{figures\/FedDiagram.png}\n \\caption{\\small{The above figure summarizes our framework. Each client infers the (approximate) posterior distribution by generating the posterior samples (teacher models) which are distilled to give the PPD (student model parameterized by a deep neural network). 
Each client communicates its MAP teacher sample and the PPD to the server which aggregates them to yield a global teacher sample and the global PPD, both of which are sent back to the clients, which use these quantities for the next round of learning.}}\n \\label{fig:fedppd_diagram}\n\\end{figure}\n\nUnlike standard federated learning where the client model is represented by a single neural network whose weights are estimated by minimizing a loss function using client's data, we consider the Bayesian setting of federated learning where each client learns a posterior distribution over its weights. The goal is to efficiently communicate the clients' local posteriors to the server and aggregate these local posteriors to learn a global posterior.\n\nHowever, since we usually care about predictive tasks, the actual quantity of interest in the Bayesian setting is not the posterior distribution per se, but the posterior predictive distribution (PPD). Given a set of $S$ samples $\\theta^{(1)},\\ldots,\\theta^{(S)}$ from the posterior, estimated using some training data $\\mathcal{D}$, the (approximate) PPD of a model is defined as $p(y|x,\\mathcal{D}) = \\frac{1}{S}\\sum_{i=1}^S p(y|x,\\theta^{(i)})$. Note that the PPD can be thought of as an ensemble of $S$ models.\n\nSince the PPD is the actual quantity of interest, in our Bayesian federated learning setting, we aim to directly estimate the PPD at each client. However, even estimating and representing the PPD has challenges. In particular, since the PPD is essentially an ensemble of models, storing and communicating such an ensemble from each client to the server can be challenging. To address this issue, we leverage the idea of distribution\/ensemble distillation~\\citep{korattikara2015bayesian}, where the PPD of a deep learning model can be efficiently distilled and stored as a single deep neural network. 
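The Monte-Carlo approximation of the PPD defined above is just an average of the per-sample predictive distributions. A minimal numpy sketch, using a toy linear-softmax classifier in place of a client's deep network (the toy model and all names here are our illustrative choices, not the paper's):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ppd(x, thetas):
    # p(y|x, D) ~= (1/S) * sum_i p(y|x, theta_i), with the theta_i drawn
    # from the (approximate) posterior; here each theta_i is the weight
    # matrix of a toy linear-softmax model.
    return np.mean([softmax(x @ th) for th in thetas], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                            # 4 inputs, 3 features
thetas = [rng.normal(size=(3, 5)) for _ in range(10)]  # S = 10 posterior samples
p = ppd(x, thetas)
assert p.shape == (4, 5) and np.allclose(p.sum(axis=1), 1.0)
```

Note that evaluating this average naively requires keeping all $S$ models around at test time; the distillation step below exists precisely to compress this ensemble into one network.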
We leverage this distillation idea on each client to represent the client's PPD using a single neural network, which can then be communicated and aggregated at the server in much the same way as in standard federated learning.\n\n\nOur approach can be summarized as follows (and is illustrated in Fig.~\ref{fig:fedppd_diagram}):\n\n\begin{enumerate}\n \item For each client, we perform approximate Bayesian inference for the posterior distribution of the client model weights using Markov Chain Monte Carlo (MCMC) sampling. This gives us a set of samples from the client's posterior, and these samples will be used as teacher models which we will distill into a student model. We use stochastic gradient Langevin dynamics (SGLD) sampling~\citep{welling2011bayesian} since it gives us an online method to efficiently distill these posterior samples into a student model (step 2 below).\n \item For each client, we distill the MCMC samples (teacher models) directly into the posterior predictive distribution (PPD), which is the student model. Notably, in this distillation-based approach~\citep{korattikara2015bayesian}, the PPD for each client is represented succinctly by a \emph{single} deep neural network, instead of via an ensemble of deep neural networks. This makes the prediction stage much faster as compared to typical Bayesian approaches. \n \item For each client, the teacher model with the largest posterior probability (i.e., the MAP sample) from its posterior distribution and the student model representing the client's PPD (both of which are deep neural networks) are sent to the server. \n \item The server aggregates the teacher and student models it receives from all the clients. For the aggregation, we consider several approaches, which we describe in Sec.~\ref{sec:aggr}. 
\n \item The aggregated teacher and student models are sent back to each client, and the process continues for the next round.\n \item We continue steps 1-5 until convergence.\n\end{enumerate}\n\n\subsection{Posterior Inference and Distillation of Client's PPD}\n\label{sec:fedbdk-1}\nWe assume there are $K$ clients with labeled data $\mathcal{D}_1,\ldots,\mathcal{D}_K$, respectively. On each client, we take the Monte Carlo approximation of its posterior predictive distribution (PPD) and distill it into a single deep neural network using an online Bayesian inference algorithm, as done by the Bayesian Dark Knowledge (BDK) approach in~\citep{korattikara2015bayesian}. Each iteration of this distillation procedure first generates a sample from the client's posterior distribution using the stochastic gradient Langevin dynamics (SGLD) algorithm~\citep{welling2011bayesian} and incrementally ``injects'' it into a deep neural network $\mathcal{S}$ (referred to as the ``student'') with parameters $w$, representing a succinct form of the client's (approximate) PPD. This is illustrated by each of the client blocks shown in Fig.~\ref{fig:fedppd_diagram}. For client $k$, assuming the set of samples generated by SGLD to be $\theta_k^{(1)},\ldots,\theta_k^{(S)}$, this distillation procedure can be seen as learning the parameters $w_k$ of client $k$'s student model $\mathcal{S}_k$ by minimizing the following loss function~\citep{korattikara2015bayesian}\n\begin{equation}\n \hat{L}(w_k) = - \frac{1}{S} \sum_{i=1}^S \sum_{x^\prime \in \mathcal{D}^\prime_k} \mathbb{E}_{p(y | x^\prime, \theta_k^{(i)})} \log \mathcal{S}_k(y | x^\prime, w_k)\n\end{equation}\nNote that, in the above equation, to compute the loss, we use an unlabeled distillation dataset $\mathcal{D}_k^\prime$ at client $k$. This unlabeled dataset can be generated from the original labeled dataset $\mathcal{D}_k$ by adding perturbations to the inputs, as suggested in~\citep{korattikara2015bayesian}. 
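One client round of this procedure, i.e., an SGLD draw followed by an online distillation step into the student, can be sketched as follows. This is a sketch only, with a toy linear-softmax model; the step sizes, the fixed (non-decaying) SGLD step, the Gaussian prior, and all function names are our illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sgld_step(theta, X, y, n_total, eta=1e-2, prior_var=1.0):
    # One SGLD update on the log-posterior of a linear-softmax model:
    # half-step along the gradient plus N(0, eta) Gaussian noise.
    p = softmax(X @ theta)
    onehot = np.eye(theta.shape[1])[y]
    # Minibatch log-likelihood gradient rescaled to the full dataset size,
    # plus the gradient of a zero-mean Gaussian prior on theta.
    grad = (n_total / len(X)) * X.T @ (onehot - p) - theta / prior_var
    noise = rng.normal(scale=np.sqrt(eta), size=theta.shape)
    return theta + 0.5 * eta * grad + noise

def distill_step(w, theta, X_prime, lr=0.1):
    # Move the student w toward the teacher sample theta's predictive
    # distribution on the unlabeled set X_prime (cross-entropy to soft labels).
    target = softmax(X_prime @ theta)   # teacher soft labels
    pred = softmax(X_prime @ w)
    return w + lr * X_prime.T @ (target - pred) / len(X_prime)

# Toy data: the client's labeled set and a perturbed unlabeled distillation set.
X = rng.normal(size=(32, 3))
y = rng.integers(0, 4, size=32)
X_prime = X + 0.1 * rng.normal(size=X.shape)

theta = np.zeros((3, 4))   # current posterior sample (teacher)
w = np.zeros((3, 4))       # student parameters (distilled PPD)
for _ in range(200):
    theta = sgld_step(theta, X, y, n_total=len(X))
    w = distill_step(w, theta, X_prime)
```

Because the student is updated against each fresh SGLD sample, it ends up approximating the average of the teachers' predictive distributions, i.e., the PPD, without ever storing the sample set.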
\n\nWe sketch the full algorithm for optimizing $w_k$ in the Supplementary Material. We use this algorithm at each client to learn the student model $\mathcal{S}_k$, which represents a compact approximation of client $k$'s PPD in the form of a single deep neural network (as shown in the client block in Fig.~\ref{fig:fedppd_diagram}), and which can now be communicated to the server just like client models are communicated in standard federated learning algorithms. Note that, as shown in Fig.~\ref{fig:fedppd_diagram}, in our federated setting, in addition to the weights $w_k$ of its PPD approximation (the student model), each client $k$ also sends the posterior sample $\theta_k^{MAP}$ (the sample with the largest posterior probability) to the server.\n\subsection{Aggregation of Client Models}\n\label{sec:aggr}\nAs described in Sec.~\ref{sec:fedbdk-1}, the server receives two models from client $k$: the (approximate) MAP sample $\theta_k^{MAP}$ (the teacher) as well as the (approximate) PPD $w_k$ (the student). We denote the teacher models (approximate MAP samples) from the $K$ clients as $\{\theta_1^{MAP},\ldots,\theta_K^{MAP}\}$ and the respective student models (approximate PPDs) as $\{w_1,\ldots,w_K\}$. These models need to be aggregated and then sent back to each client for the next round. We denote the server-aggregated quantities for the teacher and student models as $\theta_g$ and $w_g$ (we use $g$ to refer to ``global'').\n\nIn this work, we consider and experiment with three aggregation schemes on the server.\n\n\textbf{Simple Aggregation of Client Models:} Our first aggregation scheme (shown in Algorithm~\ref{algo-agg}) computes dataset-size-weighted averages of all the teacher models and all the student models received at the server. 
Denoting the number of training examples at client $k$ as $n_k$ and $N = \\sum_{k=1}^K n_k$, we compute $\\theta_g = \\frac{1}{N}\\sum_{k=1}^K n_k\\theta_k^{MAP}$ and $w_g = \\frac{1}{N}\\sum_{k=1}^K n_k w_k$, similar to how FedAvg algorithm~\\citep{mcmahan2017communication} aggregates client models on the server.\n\n\\textbf{Client-entropy-based Aggregation of Client Models:} Our second aggregation scheme (shown in Algorithm~\\ref{algo-ent}) uses an estimate of each client model's uncertainty to perform an importance-weighted averaging of the student models from all the clients. For each client $k$, we apply its student model $w_k$ on an unlabeled dataset available at the server and compute the average entropy of predictions on this entire dataset. Denoting the average predictive entropy of client $k$ to be $e_k$, we calculate an importance weight for client $k$ as $I_k = n_k\/e_k$. Essentially, a client with larger predictive entropy will receive a smaller importance weight. Using these importance weights, the student models (PPD weights) are aggregated as $w_g = \\frac{1}{\\sum_{k=1}^K I_k}\\sum_{k=1}^K I_k w_k$. For teacher models, however, we still use the simple dataset-size-weighted average as $\\theta_g = \\frac{1}{N}\\sum_{k=1}^K n_k\\theta_k^{MAP}$.\n\n\\textbf{Distillation-based Aggregation of Client Models:} Our third aggregation scheme goes beyond computing (weighted) averages of models received only from the clients. The motivation behind this approach is that the client models (both teachers as well as students) received at the server may not be diverse enough to capture the diversity and heterogeneity of the clients~\\citep{chen2020fedbe}. To address this issue, this approach (shown in Algorithm~\\ref{algo-distill}) first fits two probability distributions, one over the $K$ teacher models and the other over the $K$ student models received from the clients. 
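The two weighted-averaging schemes above reduce to convex combinations of client parameter vectors. A minimal sketch, where the function names are ours and the per-client entropies $e_k$ are assumed to have already been computed by evaluating each student on the server's unlabeled data:

```python
import numpy as np

def simple_aggregate(models, n):
    # theta_g = (1/N) * sum_k n_k * theta_k  (dataset-size weighting, FedAvg-style).
    n = np.asarray(n, dtype=float)
    return sum((nk / n.sum()) * m for nk, m in zip(n, models))

def entropy_aggregate(students, n, entropies):
    # Importance weights I_k = n_k / e_k: a client with larger average
    # predictive entropy e_k on the server's unlabeled set counts less.
    I = np.asarray(n, dtype=float) / np.asarray(entropies, dtype=float)
    return sum((Ik / I.sum()) * w for Ik, w in zip(I, students))

# Two clients with scalar "models" for illustration.
models = [np.array([1.0]), np.array([3.0])]
theta_g = simple_aggregate(models, n=[1, 3])   # 0.25*1 + 0.75*3 = 2.5
w_g = entropy_aggregate(models, n=[1, 1], entropies=[0.5, 1.0])  # weights 2/3, 1/3
```

The third, distillation-based scheme described next goes beyond such closed-form averages and instead re-trains the global models on pseudo-labeled data.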
It then uses these distributions to generate $M$ \\emph{additional} client-\\emph{like} teacher models and student models. Using the actual teacher models (resp. student models) and the additionally \\emph{generated} teacher models (resp. student models), we perform knowledge distillation on the server to compute the global teacher model $\\theta_g$ and the global student model $w_g$. This server-side distillation procedure requires an \\emph{unlabeled} dataset $\\mathcal{U}$ on the server. Applying the actual and generated teacher models (resp. student models) to the unlabeled dataset $\\mathcal{U}$ gives us pseudo-labeled data $\\mathcal{T}$, where each pseudo-label is defined as the averaged prediction (softmax probability vector) obtained by applying the actual and generated teacher models (resp. student models) to an unlabeled input. For the distillation step, we finally run the Stochastic Weight Averaging (SWA) algorithm~\\citep{izmailov2018averaging} using the pseudo-labeled data $\\mathcal{T}$, with the simple aggregation of the client models as the initialization. Both $\\theta_g$ and $w_g$ can be obtained by following this procedure in an identical manner. Recently, this idea was also used in Federated Bayesian Ensemble (FedBE)~\\citep{chen2020fedbe}. However, FedBE is \\emph{not} Bayesian in the sense that the clients still perform point estimation; a distribution is fit over the client models only at the distillation step, to generate a more diverse ensemble of models that is distilled using the SWA algorithm to obtain the global model.\n\n\\begin{minipage}[t]{0.5\\textwidth}\n\\begin{algorithm}[H]\n\\caption{FedPPD}\n\\label{algo}\n\\begin{algorithmic}[1]\n\\Require {Number of communication rounds $T$, \\newline\nTotal clients $K$, \\newline Unlabeled dataset $\\mathcal{U} = \\{x_i\\}_{i=1}^P$ \\newline Server teacher model weights $\\theta_g$, \\newline Server student model weights $w_g$, \\newline Client teacher model weights $\\{\\theta_i\\}_{i=1}^{K}$, \\newline Client student model weights $\\{w_i\\}_{i=1}^K$ \\newline Number of training samples at client $\\{n_i\\}_{i=1}^K$ \\newline}\n\n\\For{ each round $t = 0, \\ldots, T-1$}\n \\State Server broadcasts $\\theta_g^{(t)}$ and $w_g^{(t)}$ \\newline \n \\For{ each client $i \\in \\{1, \\dots, K\\}$}\n \\State $\\theta_i = \\theta_g^{(t)}$\n \\State $w_i = w_g^{(t)}$\n \\State Update $\\theta_i$ and $w_i$ locally as per \\citep{korattikara2015bayesian}\n \\EndFor\n \\State Communicate $\\{\\theta_i^{MAP}\\}_{i=1}^K$ and $\\{w_i\\}_{i=1}^K$ to server \\newline\n \\State $\\theta_g^{(t+1)}$, $w_g^{(t+1)}$ = \\newline Server\\_Update($\\{\\theta_i^{MAP}\\}_{i=1}^K, \\{w_i\\}_{i=1}^K, \\{n_i\\}_{i=1}^K$) \\newline\n\\EndFor \\newline\n\\State \\Return $w_g$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[H]\n\\caption{Server\\_Update(Average)}\n\\label{algo-agg}\n\\begin{algorithmic}[1]\n\\Require $\\{\\theta_i^{MAP}\\}_{i=1}^{K}$, $\\{w_i\\}_{i=1}^K$, $\\{n_i\\}_{i=1}^K$ \\newline\n\\State $N = \\sum_{i=1}^K 
n_i$\n\\State \\Return $\\frac{1}{N}\\sum_{i=1}^K n_i \\theta_i^{MAP}, \\frac{1}{N}\\sum_{i=1}^K n_i w_i$ \\newline\n\\end{algorithmic}\n\\end{algorithm}\n\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.45\\textwidth}\n\n\n\\begin{algorithm}[H]\n\\caption{Server\\_Update(Entropy)}\n\\label{algo-ent}\n\\begin{algorithmic}[1]\n\\Require $\\mathcal{U}, \\{\\theta_i^{MAP}\\}_{i=1}^{K}, \\{w_i\\}_{i=1}^K, \\{n_i\\}_{i=1}^K$ \\newline\n\\State $\\theta_g = \\frac{1}{\\sum_{i=1}^K n_i}\\sum_{i=1}^K n_i \\theta_i^{MAP}$ \\newline\n\\State $I = [ ]$ \\Comment{Clients' importance weights}\n\\For {client $i = 1, \\dots, K$}\n \\State $I[i] = n_i\/Entropy(w_i, \\mathcal{U})$\n\\EndFor \\newline\n\\State $w_g = \\frac{1}{\\sum_{i=1}^K I[i]}\\sum_{i=1}^K I[i] w_i$ \\newline\n\\State \\Return $\\theta_g, w_g$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[H]\n\\caption{Server\\_Update(Distill)}\n\\label{algo-distill}\n\\begin{algorithmic}[1]\n\\Require $\\mathcal{U}, \\{\\theta_i^{MAP}\\}_{i=1}^{K}, \\{w_i\\}_{i=1}^K, \\{n_i\\}_{i=1}^K$ \\newline\n\\State $\\overline{\\theta}, \\overline{w} = $ Server\\_Update(Average)\n\\newline\n\n\\State Construct global teacher model distribution $p(\\theta | \\mathcal{D})$ from $\\{\\theta_i^{MAP}\\}_{i=1}^K$\n\\State Sample $M$ additional teachers and form teacher ensemble\n\\newline $E_T=\\{\\theta_m \\sim p(\\theta | \\mathcal{D})\\}_{m=1}^{M} \\cup \\{\\overline{\\theta}\\} \\cup \\{\\theta_i\\}_{i=1}^{K}$\n\\State Annotate $\\mathcal{U}$ using $E_T$ to generate pseudo-labeled dataset $\\mathcal{T}$ \n\\State Distill the knowledge of $E_T$ into $\\overline{\\theta}$ using SWA \n\\newline\n$\\theta_g = SWA(\\overline{\\theta}, E_T, \\mathcal{T})$ \n\\newline\n\n\\State Similarly follow steps 2-5 with $\\{w_i\\}_{i=1}^K$ to get $w_g$\n\\newline\n\\State \\Return $\\theta_g, w_g$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\end{minipage}\n\n\n\nThe overall sketch of our Bayesian federated learning procedure, which we call FedPPD (Federated 
Learning via Posterior Predictive Distributions), is shown in Algorithm~\\ref{algo}. The three aggregation schemes for server-side updates are shown in Algorithms~\\ref{algo-agg}, \\ref{algo-ent}, and \\ref{algo-distill}. Note that, among the three aggregation schemes, Algorithm~\\ref{algo-agg} does not require any unlabeled data at the server, whereas Algorithms~\\ref{algo-ent} and \\ref{algo-distill} assume that the server has access to unlabeled data. Also, owing to its high computation capacity, the server can compute $\\theta_g$ and $w_g$ in parallel for all the aggregation schemes, incurring no additional delay in the communication rounds.\n\n\\section{Related Work}\n\nFederated learning has received considerable research interest recently. The area is vast and we refer the reader to excellent surveys~\\citep{li2020federated,kairouz2021advances} on the topic for a more detailed overview. In this section, we discuss the works that are the most relevant to our work.\n\nWhile standard federated learning approaches assume that each client does point estimation of its model weights by optimizing a loss function over its own data, recent work has considered posing federated learning as a posterior inference problem where a global posterior distribution is inferred by aggregating local posteriors computed at each client. FedPA~\\citep{al2020federated} is one such recent approach, which performs approximate inference for the posterior distribution of each client's weights. However, it assumes a restrictive (Gaussian) form for the posterior. Moreover, the method needs to estimate the covariance matrix of the Gaussian posterior, which is difficult in general, and approximations are needed. Furthermore, although FedPA estimates the (approximate) posterior on each client, due to efficiency and communication concerns, it only computes, at the server, a point estimate (the mean) of the global posterior. 
Thus, even though the approach is motivated by a Bayesian setting, in the end, it does not provide a posterior distribution or a PPD for the global model.\n\nRecently, \\citet{linsner2021approaches} presented methods for uncertainty quantification in federated learning using a variety of posterior approximation methods for deep neural networks, such as Monte Carlo dropout~\\citep{gal2016dropout}, stochastic weight averaging Gaussian (SWAG)~\\citep{maddox2019simple}, and deep ensembles~\\citep{lakshminarayanan2017simple}. These approaches, however, also suffer from poor quality of the posterior approximation at each client. \\citet{lee2020bayesian} also propose a Bayesian approach for federated learning. However, their approach also makes restrictive assumptions, such as the distribution of the gradients at each of the clients being jointly Gaussian.\n\nInstead of a simple aggregation of client models at the server, FedBE~\\citep{chen2020fedbe} uses the client models to construct a distribution at the server and further distills this distribution into a single model. This model is then sent by the server to each client for the next round. Though maintaining a distribution over client models and distilling it into a single model is more robust than a simple aggregation like federated averaging, FedBE only performs point estimation, ignoring any uncertainty in the client models. Another probabilistic approach to federated learning~\\cite{thorgeirsson2020probabilistic} fits a Gaussian distribution using the client models, and sends the mean of this Gaussian to each client for the next round of client model training. This approach too does not estimate a posterior at each client, and thus ignores the uncertainty in the client models. 
\n\nIn the context of Bayesian learning, recent work has also explored federated versions of Markov Chain Monte Carlo sampling algorithms, such as stochastic gradient Langevin dynamics sampling~\\citep{lee2020bayesian,el2021federated}. While interesting in their own right in terms of performing MCMC sampling in federated settings, these methods are not designed for real-world applications of federated learning, where fast prediction and compact model sizes are essential.\n\nAmong other probabilistic approaches to federated learning, recent work has explored the use of latent variables in federated learning. In \\cite{louizos2021expectation}, a hierarchical prior is used on the client models' weights, where the prior's mean is set to the server's global model, and additional latent variables can also be used to impose other structure, such as sparsity of the client model weights. However, these approaches do not model the uncertainty in the client models.\n\nSome of the recent work on federated learning using knowledge distillation is also relevant. Note that our work leverages the idea of teacher-student distillation both at the clients (when learning a representation of the PPD using a single deep neural network) and in our third aggregation strategy, where server-side distillation is used for learning the global model. In federated learning, the idea of distillation has been used in other works as well, for example when the client models have different sizes\/architectures and (weighted) averaging is not meaningful~\\citep{zhu2021data}.\n\n\\section{Experiments}\nIn this section, we compare our Bayesian federated learning approach with various relevant baselines on several benchmark datasets. We report results on the following tasks: (1) classification in the federated setting, (2) active learning in the federated setting, and (3) OOD detection on each client. 
In this section, we refer to our approach with simple averaging on the server side as FedPPD, the variant with entropy-weighted averaging on the server side as FedPPD+Entropy, and the variant with distillation-based aggregation on the server side as FedPPD+Distill. \n\n\\subsection{Experimental Setup}\n\\subsubsection{Baselines} We compare our methods with the following baselines:\n\n(1) \\textbf{FedAvg}~\\citep{mcmahan2017communication} is the standard federated learning algorithm, in which the local models of the participating clients are aggregated at the server to compute a global model, which is then sent back to all the clients for initialization in the next round.\n\n(2) \\textbf{FedBE}~\\citep{chen2020fedbe} is another state-of-the-art baseline, which provides a more robust aggregation scheme: instead of only averaging the client models at the server, a probability distribution is fit to the client models and several additional models are generated from this distribution; the client models as well as the generated models are then distilled into a single global model at the server, which is sent to all the clients for initialization in the next round. Note, however, that the clients in FedBE only perform point estimation of their weights, unlike our approach, which estimates the posterior distribution and the PPD of each client. \n\n(3) \\textbf{Federated SWAG}~\\citep{linsner2021approaches} is a Bayesian federated learning algorithm based on a federated extension of SWAG~\\cite{maddox2019simple}, an efficient Bayesian inference algorithm for deep neural networks. However, Federated SWAG relies on a simplification: it executes standard federated averaging in all but the last round, and in the last round the SWAG algorithm is invoked at each client to yield a posterior. 
Also note that Federated SWAG requires Monte Carlo sampling at test time (thus relying on slow, ensemble-based prediction), unlike our method, which only requires a single neural network to make predictions.\n\nWe also considered a comparison with \\textbf{FedPA}~\\citep{al2020federated}, which estimates a posterior (assumed to be Gaussian) over the client weights. However, in our experiments (using the author-provided code and suggested experimental settings) on the benchmark datasets, FedPA performed comparably to or worse than FedAvg. We therefore omit those results from the main text and report them in the Supplementary Material. \n\n\\subsubsection{Datasets}\nWe evaluate and compare our approach with the baseline methods on four datasets: MNIST~\\citep{lecun-mnisthandwrittendigit-2010}, FEMNIST~\\citep{cohen2017emnist}, and CIFAR-10\/100~\\citep{krizhevsky2009learning}. MNIST comprises images of handwritten digits categorized into 10 classes. It has a total of 60,000 images for training and 10,000 images for testing. FEMNIST consists of images of handwritten characters (digits and lowercase and uppercase letters, resulting in a total of 62 classes) written by multiple users. It has a total of 80,523 images written by 3,550 users. CIFAR-10 consists of $32\\times32$ dimensional RGB images categorized into 10 different classes. It has a total of 50,000 images for training and 10,000 images for testing. CIFAR-100 is similar to CIFAR-10 but has 100 distinct classes.\n\n\\subsubsection{Model Architecture and Configurations}\n\\label{sec:config}\nIn all our experiments, the student model has a larger capacity than the teacher model, as it models the PPD by distilling multiple models drawn from the posterior distribution. We use a customized CNN architecture for both the teacher and student models on the MNIST, FEMNIST, and CIFAR-10 datasets, with the student model being deeper and\/or wider than its corresponding teacher model. 
For CIFAR-100, ResNet-18 and ResNet-34 are used as the teacher and student models, respectively.\n\nIn all our experiments, we consider $K=10$ clients with data heterogeneity. Each client holds a small non-i.i.d. subset of the training data: approximately 2,000 samples for FEMNIST, CIFAR-10, and CIFAR-100, and around 500 samples for MNIST. We use the Leaf~\\citep{caldas2018leaf} benchmark to distribute the FEMNIST data across clients based on the writer. We also exclude digits and consider only letters to increase the class imbalance. However, we let clients have data from multiple writers, ensuring that no two clients are assigned the same writer. This results in similar class distributions across clients, but with differently styled handwritten images. Also, this setting differs from the data distribution on the other datasets, where each client strictly maintains a small subset of all the classes. For MNIST and CIFAR-10, we ensure that there are at most 2 major classes per client, and up to 20 distinct major classes per client for CIFAR-100. For a fair comparison, we run our method and all the baselines for 200 rounds on all the datasets (except MNIST, where we run them for 100 rounds) and train the local client models for 10 epochs in each round. Also, we assume full client participation, i.e., all the clients participate in each round. However, we tune the learning rate, momentum, and weight decay for each method independently. For FedBE and FedPPD, we run an additional 20 and 50 epochs at the server for distillation on the CIFAR\/MNIST and FEMNIST datasets, respectively. 
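To make the label-skew setup concrete, the following is a minimal sketch of how a partition with a limited number of major classes per client could be generated. This is illustrative only, not the exact Leaf-based pipeline described above; the function name, the orphan-class handling, and the defaults are our own assumptions:

```python
import numpy as np

def skewed_partition(labels, n_clients=10, classes_per_client=2, seed=0):
    """Split sample indices across clients so each client mainly sees a few classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    # each client is assigned a small set of "major" classes
    client_classes = [rng.choice(classes, size=classes_per_client, replace=False)
                      for _ in range(n_clients)]
    parts = [[] for _ in range(n_clients)]
    for c in classes:
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        owners = [k for k in range(n_clients) if c in client_classes[k]]
        if not owners:  # class held by no client: hand it to one at random
            owners = [int(rng.integers(n_clients))]
        # split this class's samples evenly among the clients that hold it
        for k, chunk in zip(owners, np.array_split(idx, len(owners))):
            parts[k].extend(chunk.tolist())
    return parts

labels = np.repeat(np.arange(10), 500)  # toy stand-in for a dataset's label array
parts = skewed_partition(labels)        # one index list per client
```

In practice, Dirichlet-based splits are a common alternative for controlling the degree of heterogeneity with a single concentration parameter.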
\n\n\n\n\\begin{wrapfigure}{R}{0.4\\textwidth}\n\\vspace{-5em}\n \\centering\n \\includegraphics[scale=0.4]{figures\/CIFAR10_Acc_Plot.pdf}\n \\caption{Convergence of all the methods on CIFAR-10 dataset}\n \\label{fig:convergence_plot}\n \\vspace{-1em}\n \n \n \n \n \n \n \n \n\\end{wrapfigure}\n\n\\subsection{Tasks}\n\\textbf{Classification} \nWe evaluate FedPPD (its three variants) and the baselines on several classification datasets and report the accuracy on the respective test datasets. The results are shown in Table~\\ref{tab:classification_acc}. We also show the convergence of all the methods in on CIFAR-10 in Figure~\\ref{fig:convergence_plot} (we show similar plots for other datasets in Supplementary Material). All the three variants of FedPPD outperform the other baselines on all the datasets. As compared to the best performing baseline, our approach yields an improvement of $4.44\\%$ and $7.08\\%$ in accuracy on CIFAR-10 and CIFAR-100, respectively. On MNIST and FEMNIST datasets too, we observe noticeable improvements. The improvements across the board indicate that FedPPD and its variants are able to leverage model uncertainty to yield improved predictions especially when the amount of training data per client is small, which is the case with the experimental settings that we follow (as we mention in Sec.~\\ref{sec:config}). We also observe that in cases where there is a significant heterogeneity in the data distribution across the different clients (on CIFAR-10 and CIFAR-100), the performance gains offered by FedPPD and its variants are much higher as compared to the baselines. 
On the other datasets (MNIST and FEMNIST), the data distributions are roughly similar across clients; even though the accuracies are higher, the performance gains are not as large, but are reasonable nevertheless.\n\n\\begin{table}[!htbp]\n\\vspace{-2em}\n \\setlength\\tabcolsep{1pt}\n \\begin{minipage}{0.5\\linewidth}\n \\begin{tabular}{ccccc}\n \\toprule\n Model & MNIST & FEMNIST & CIFAR-10 & CIFAR-100 \\\\\n \\midrule\n FedAvg & 97.74 & 87.40 & 57.20 & 47.02 \\\\\n FedAvg+SWAG & 97.75 & 87.45 & 57.34 & 47.07 \\\\\n FedBE & 97.82 & 88.12 & 60.18 & 47.52 \\\\\n FedPPD & 97.85 & \\textbf{88.81} & 61.86 & 53.00 \\\\\n FedPPD+Entropy & 97.93 & 88.65 & 62.19 & 52.72 \\\\ \n FedPPD+Distill & \\textbf{98.08} & 88.80 & \\textbf{64.62} & \\textbf{54.60} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Federated classification test accuracies on the benchmark datasets}\n \\label{tab:classification_acc}\n \\end{minipage}\n \\hfill\n \\begin{minipage}{0.38\\textwidth}\n \\begin{figure}[H]\n \\small\n \\includegraphics[height=4cm,width=5cm]{figures\/AL_Acc_Plot.pdf}\n \\caption{Federated active learning on the CIFAR-10 dataset. Note: FedAvg+SWAG performed almost identically to FedAvg on this task as well, so we omit it from the plot.}\n \\label{fig:al_results}\n \\end{figure}\n \\end{minipage}\n \\vspace{-0.5cm}\n\\end{table}\n\n\\textbf{Federated Active Learning} We further demonstrate the usefulness of our Bayesian approach on the problem of federated active learning. In active learning, the goal of the learner is to iteratively request the labels of the most informative input instances, add these labeled instances to the training pool, retrain the model, and repeat the process while the labeling budget lasts. 
Following~\\citep{ahn2022federated}, we extend our method and the baselines to active learning in the federated setting, using the entropy of the predictive distribution of an input $x$, defined as $I(x) = -\\sum_{i=1}^{k} p(y=y_i|x) \\log p(y=y_i|x)$, as the acquisition function. In the federated active learning setting (we provide a detailed sketch of the federated active learning algorithm in the Supplementary Material), each client privately maintains a small amount of labeled data and a large pool of unlabeled examples. In each round of active learning, the clients participate in federated learning with their currently labeled pools of data until the global model has converged. Each client then uses the global model to identify a fixed number (the budget) of the most informative inputs among its pool of unlabeled inputs, based on the highest predictive entropies $I(x)$; these inputs are then annotated (locally, maintaining data privacy) and added to the pool of labeled examples. The next round of active learning then begins, with the clients again participating in federated learning and using the global model to expand their labeled pools. This process continues until either the unlabeled dataset has been exhausted completely or the desired accuracy has been achieved. For a fair comparison, we run federated active learning on the CIFAR-10 dataset with the same parameters for all the approaches. We start active learning with 400 labeled and 3,200 unlabeled samples at each client and use a budget of 400 samples in every round of active learning. For federated learning, we use the same hyperparameters as in the classification experiments. We stop federated active learning once all the clients have exhausted their unlabeled datasets and show the results in Figure~\\ref{fig:al_results}. 
FedPPD and its variants attain the best accuracies among all the methods compared, which shows that our Bayesian approach provides more robust estimates of model and predictive uncertainty than the other baselines, and thus outperforms them on the problem of federated active learning.\n\n\\textbf{Out-of-distribution (OOD) detection} We also evaluate FedPPD and its variants, and the other baselines, in terms of their ability to distinguish between out-of-distribution (OOD) data and the data used during the training phase (in-distribution data). For this, given any sample $x$ to be classified among $k$ distinct classes and model weights $\\theta$ (or the PPD for our approach), we compute the Shannon entropy of the model's predictive distribution for the input $x$ and use it as the score to compute the AUROC (Area Under the ROC Curve) metric. We use KMNIST as OOD data for models trained on FEMNIST, and SVHN for the CIFAR-10\/CIFAR-100 models. Note that, to avoid class imbalance, we sample an equal amount of data from both distributions (out and in) and repeat this 5 times. We report the results in Table~\\ref{tab:auroc_score}. FedPPD and its variants consistently achieve better AUROC scores on all the datasets, validating their robustness and accurate estimates of model uncertainty. In addition to OOD detection, we also apply all the methods to the task of distinguishing correct from incorrect predictions based on the predictive entropies. On this task too, FedPPD and its variants outperform the other baselines. 
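The entropy-based OOD score used here is straightforward to compute from softmax outputs. The sketch below is illustrative only (the probability arrays are toy placeholders, not actual model outputs); the AUROC is computed via the Mann-Whitney pairwise-comparison identity rather than a library call:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Shannon entropy of each row of an (n_samples, n_classes) probability matrix."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def auroc(ood_scores, ind_scores):
    """AUROC for separating OOD from in-distribution samples by score: the
    probability that a random OOD score exceeds a random in-distribution score
    (ties count half), i.e. the Mann-Whitney U formulation of AUROC."""
    ood = np.asarray(ood_scores, dtype=float)[:, None]
    ind = np.asarray(ind_scores, dtype=float)[None, :]
    return float(np.mean((ood > ind) + 0.5 * (ood == ind)))

# toy example: confident predictions in-distribution, diffuse predictions OOD
p_in = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
p_out = np.array([[0.4, 0.3, 0.3], [0.34, 0.33, 0.33]])
score = auroc(predictive_entropy(p_out), predictive_entropy(p_in))  # 1.0 here
```

The same entropy score also drives the client-entropy-based aggregation and the active-learning acquisition function discussed earlier, which is why a single student network representing the PPD is convenient: one forward pass yields the full predictive distribution.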
\n\n\\begin{table}[!htbp]\n\\setlength\\tabcolsep{4pt}\n \\centering\n \\scriptsize\n \\makebox[\\textwidth][c]{\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\toprule & \\multicolumn{3}{c|}{Out-of-Distribution Detection} & \\multicolumn{3}{c|}{Correct\/Incorrect Prediction}\\\\\n\\midrule\n Model & FEMNIST & CIFAR-10 & CIFAR-100 & FEMNIST & CIFAR-10 & CIFAR-100 \\\\\n \\hline\n FedAvg & $0.957 \\pm 0.003$ & $0.728 \\pm 0.013$ & $0.703 \\pm 0.011$ & $0.846 \\pm 0.011$ & $0.742 \\pm 0.011$ & $0.792 \\pm 0.003$ \\\\\n FedAvg+SWAG & $0.956 \\pm 0.003$ & $0.728 \\pm 0.013$ & $0.704 \\pm 0.011$ & $0.845 \\pm 0.009$ & $0.743 \\pm 0.010$ & $0.800 \\pm 0.004$\\\\\n FedBE & $0.959 \\pm 0.002$ & $0.728 \\pm 0.006$ & $0.669 \\pm 0.009$ & $\\mathbf{0.863 \\pm 0.005}$ & $0.753 \\pm 0.007$ & $0.789 \\pm 0.005$\\\\\n FedPPD & $\\mathbf{0.983 \\pm 0.003}$ & $0.701 \\pm 0.007$ & $0.698 \\pm 0.009$ & $0.862 \\pm 0.008$ & $0.755 \\pm 0.007$ & $0.814 \\pm 0.003$\\\\\n FedPPD+Entropy & $0.982 \\pm 0.002$ & $\\mathbf{0.768 \\pm 0.009}$ & $0.721 \\pm 0.014$ & $0.856 \\pm 0.006$ & $0.749 \\pm 0.007$ & $0.817 \\pm 0.004$\\\\ \n FedPPD+Distill & $0.975 \\pm 0.002$ & $0.765 \\pm 0.006$ & $\\mathbf{0.784 \\pm 0.008}$ & $0.853 \\pm 0.013$ & $\\mathbf{0.769 \\pm 0.006}$ & $\\mathbf{0.823 \\pm 0.002}$\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{AUROC scores for OOD detection and correct\/incorrect predictions}\n \\label{tab:auroc_score}\n\\end{table}\n\\vspace{-1cm}\n\n\\vspace{0.5em}\n\\section{Conclusion and Discussion}\n\\vspace{-0.5em}\nCapturing and leveraging model uncertainty in federated learning has several benefits, as we demonstrate in this work. To achieve this, we developed a Bayesian approach to federated learning by leveraging the idea of distilling the posterior predictive distribution into a single deep neural network. The Bayesian approach not only yields more accurate and robust predictions in federated learning in situations with limited training data at each client and heterogeneity across clients, but is also helpful for tasks such as OOD detection and active learning in the federated setting. Our work provides a general framework for Bayesian federated learning. In this work, we consider a specific scheme to distill the PPD at each client. However, other methods that can distill the posterior distribution into a single neural network~\\citep{wang2018adversarial,vadera2020generalized} are also worth leveraging for Bayesian federated learning. Another interesting direction for future work is to extend our approach to settings where different clients may have different model architectures. 
Finally, our approach first generates MCMC samples (using SGLD) and then uses these samples to obtain the PPD in the form of a single deep neural network. Recent work has shown that it is possible to distill an ensemble into a single model without explicitly generating samples from the distribution~\\citep{ratzlaff2019hypergan}. Using these ideas for Bayesian federated learning would also be interesting future work.\n\n\\section{Introduction}\n\nAt the frontier of computational statistics there is growing interest in parallel implementation of Monte Carlo algorithms using multi-processor and distributed architectures. However, the resampling step of sequential Monte Carlo (SMC) methods~\\citep{gordon1993novel} (see \\citep{kunsch2013particle} for a recent overview), which involves a degree of interaction between simulated ``particles'', hinders their parallelization. So, whilst multi-processor implementation offers some speed-up for SMC, the potential benefits of distributed computing are not fully realized \\citep{lee2010utility}. 
\n\nPerforming resampling only occasionally, a technique originally suggested\nfor the somewhat different reason of variance reduction \\citep{liu1995blind},\nalleviates this problem to some extent, but the collective nature\nof the resampling operation remains the computational bottleneck.\nOn the other hand, crude attempts to entirely do away with the resampling\nstep may result in unstable or even non-convergent algorithms. With\nthese issues in mind we seek a better understanding of the relationship\nbetween the interaction structure of SMC algorithms and theoretical\nproperties of the approximations they deliver. Our overall aim is\nto address the following question:\n\n\\smallskip{}\n\n\n\\emph{To what extent can the degree of interaction between particles\nbe reduced, whilst ensuring provable stability of the algorithm?}\n\n\\smallskip{}\nOur strategy is to introduce and study an unusually general type of\nSMC algorithm featuring a parameterized resampling mechanism. This\nprovides a flexible framework in which we are ultimately able to attach\nmeaning to \\emph{degree of interaction} in terms of graph-theoretic\nquantities. To address the matter of \\emph{provable stability}, we\nseek conditions under which the algorithm yields time-uniformly convergent\napproximations of prediction filters, and approximations of marginal\nlikelihoods whose relative variance can be controlled at a linear-in-time\ncost. \n\nThe general algorithm we study is defined in terms of a family of\nMarkov transition matrices, $\\alpha$, and we refer to the algorithm\nitself as $\\alpha$SMC. We shall see that through particular choices\nof $\\alpha$ one obtains, as instances of $\\alpha$SMC, well known\nalgorithms including sequential importance sampling (SIS), the bootstrap\nparticle filter (BPF) and the adaptive resampling particle filter\n(ARPF) in which resampling is triggered by monitoring some functional\ncriterion, such as the Effective Sample Size (ESS) \\citep{liu1995blind}. 
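In code, the ARPF's place in this scheme is easy to see: resampling or not at time $n$ amounts to picking $\alpha_n$ as either the matrix with every entry $1/N$ or the identity. The following is a minimal sketch of that choice (our own illustration, not code from the paper; the helper name and the threshold value are invented), assuming the unnormalized weights are available as a NumPy array.

```python
import numpy as np

def choose_alpha(w, tau=0.5):
    """ARPF-style selection of the interaction matrix, phrased as an
    alpha-SMC choice: complete interaction (every entry 1/N, i.e.
    multinomial resampling) when the normalized ESS falls below the
    threshold tau, and no interaction (the identity, i.e. an SIS step)
    otherwise.  `w` holds the current unnormalized particle weights."""
    N = len(w)
    ess = w.sum() ** 2 / (N * (w ** 2).sum())   # ESS / N, valued in (0, 1]
    if ess < tau:
        return np.full((N, N), 1.0 / N)   # resample: alpha_n = 1_{1/N}
    return np.eye(N)                      # carry on: alpha_n = Id

# Degenerate weights trigger resampling; balanced weights do not.
alpha = choose_alpha(np.array([1.0, 1e-6, 1e-6, 1e-6]))
alpha_id = choose_alpha(np.array([1.0, 1.0, 1.0, 1.0]))
```

Setting `tau=0` recovers SIS (the identity is always chosen), while `tau=1` recovers, up to ties in the weights, the bootstrap filter.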
\n\nAlthough the ESS does not necessarily appear in the definition of\nthe general $\\alpha$SMC algorithm, we find that it does appear quite\nnaturally from the inverse quadratic variation of certain martingale\nsequences in its analysis. This allows us to make precise a sense\nin which algorithmic control of the ESS can guarantee stability of\nthe algorithm. Our results apply immediately to the ARPF, but our\nstudy has wider-reaching methodological consequences: in our framework\nit becomes clear that the standard adaptive resampling strategy is\njust one of many possible ways of algorithmically controlling the\nESS, and we can immediately suggest new, alternative algorithms which\nare provably stable, but designed to avoid the type of complete interaction\nwhich is inherent to the ARPF and which hinders its parallelization.\nThe structure of this paper and our main contributions are as follows. \n\nSection~\\ref{sec:aSMC} introduces the general algorithm, $\\alpha$SMC.\nWe explain how it accommodates several standard algorithms as particular\ncases and comment on some other existing SMC methods.\n\nSection~\\ref{sec:Martingale-approximations-and} presents Theorem\n\\ref{thm:convergence}, a general convergence result for $\\alpha$SMC.\nWe give conditions which ensure unbiased approximation of marginal\nlikelihoods and we elucidate connections between certain invariance\nproperties of the matrices $\\alpha$ and the negligibility of increments\nin a martingale error decomposition, thus formulating simple sufficient\nconditions for weak and strong laws of large numbers. We also discuss\nsome related existing results.\n\nSection~\\ref{sec:stability} presents our second main result, Theorem\n\\ref{thm:L_R_mix}. 
We show, subject to regularity conditions on the\nhidden Markov model (HMM) under consideration, that enforcement of\na strictly positive lower bound on a certain coefficient associated\nwith ESS of $\\alpha$SMC is sufficient to guarantee non-asymptotic,\ntime-uniform bounds on: 1) the exponentially normalized relative second\nmoment of error in approximation of marginal likelihoods, and 2) the\n$L_{p}$ norm of error in approximation of prediction filters. The\nformer implies a linear-in-time variance bound and the latter implies\ntime-uniform convergence. These results apply immediately to the ARPF.\n\nSection~\\ref{sec:Discussion} houses discussion and application of\nour results. We point out the pitfalls of some naive approaches to\nparallelization of SMC and discuss what can go wrong if the conditions\nof Theorem~\\ref{thm:convergence} are not met. Three new algorithms,\nwhich adapt the degree of interaction in order to control the ESS\nand which are therefore provably stable, are then introduced. We discuss\ncomputational complexity and through numerical experiments examine\nthe degree of interaction involved in these algorithms and the quality\nof the approximations they deliver compared to the ARPF.\n\n\n\\section{$\\alpha$SMC\\label{sec:aSMC}}\n\nA hidden Markov model (HMM) with measurable state space $\\left(\\mathsf{X},\\mathcal{X}\\right)$\nand observation space $\\left(\\mathsf{Y},\\mathcal{Y}\\right)$ is a\nprocess $\\left\\{ \\left(X_{n},Y_{n}\\right);n\\geq0\\right\\} $ where\n$\\left\\{ X_{n};n\\geq0\\right\\} $ is a Markov chain on $\\mathsf{X}$,\nand each observation $Y_{n}$, valued in $\\mathsf{Y}$, is conditionally\nindependent of the rest of the process given $X_{n}$. 
Let $\\mu_{0}$\nand $f$ be respectively a probability distribution and a Markov kernel\non $\\left(\\mathsf{X},\\mathcal{X}\\right)$, and let $g$ be a Markov\nkernel acting from $\\left(\\mathsf{X},\\mathcal{X}\\right)$ to $\\left(\\mathsf{Y},\\mathcal{Y}\\right)$,\nwith $g(x,\\cdot)$ admitting a density, denoted similarly by $g(x,y)$,\nwith respect to some dominating $\\sigma$-finite measure. The HMM\nspecified by $\\mu_{0}$, $f$ and $g$, is\n\\begin{eqnarray}\n & & X_{0}\\sim\\mu_{0}(\\cdot),\\quad\\left.X_{n}\\right|\\{X_{n-1}=x_{n-1}\\}\\sim f(x_{n-1},\\cdot),\\quad n\\geq1,\\label{eq:HMM}\\\\\n & & \\;\\quad\\quad\\quad\\quad\\hspace{1.1em}\\quad\\left.Y_{n}\\right|\\left\\{ X_{n}=x_{n}\\right\\} \\sim g(x_{n},\\cdot),\\quad\\quad\\quad\\quad n\\geq0.\\nonumber \n\\end{eqnarray}\n\n\nWe shall assume throughout that we are presented with a fixed observation\nsequence $\\left\\{ y_{n};n\\geq0\\right\\} $ and write \n\\[\ng_{n}(x):=g(x,y_{n}),\\quad n\\geq0.\n\\]\nThe following assumption imposes some mild regularity which ensures\nthat various objects appearing below are well defined. 
It shall be\nassumed to hold throughout without further comment.\n\\begin{assumption*}\n$\\mathbf{\\mathbf{(A1)}}$ For each $n\\geq0$, $\\sup_{x}g_{n}(x)<+\\infty$\nand $g_{n}(x)>0$ for all $x\\in\\mathsf{X}$.\n\\end{assumption*}\nWe take as a recursive definition of the \\emph{prediction filters},\nthe sequence of distributions $\\left\\{ \\pi_{n};n\\geq0\\right\\} $ given\nby \n\\begin{eqnarray}\n & & \\pi_{0}:=\\mu_{0},\\nonumber \\\\\n & & \\pi_{n}\\left(A\\right):=\\frac{\\int_{\\mathsf{X}}\\pi_{n-1}\\left(dx\\right)g_{n-1}(x)f(x,A)}{\\int_{\\mathsf{X}}\\pi_{n-1}\\left(dx\\right)g_{n-1}(x)},\\quad A\\in\\mathcal{X},\\quad n\\geq1,\\label{eq:filtering_recursion}\n\\end{eqnarray}\nand let $\\left\\{ Z_{n};n\\geq0\\right\\} $ be defined by\n\\begin{equation}\nZ_{0}:=1,\\quad\\quad Z_{n}:=Z_{n-1}\\int_{\\mathsf{X}}\\pi_{n-1}\\left(dx\\right)g_{n-1}\\left(x\\right),\\quad n\\geq1.\\label{eq:Z_recusion}\n\\end{equation}\nDue to the conditional independence structure of the HMM, $\\pi_{n}$\nis the conditional distribution of $X_{n}$ given $Y_{0:n-1}=y_{0:n-1}$;\nand $Z_{n}$ is the marginal likelihood of the first $n$ observations,\nevaluated at the point $y_{0:n-1}$. Our main computational objectives\nare to approximate $\\left\\{ \\pi_{n};n\\geq0\\right\\} $ and $\\left\\{ Z_{n};n\\geq0\\right\\} $. \n\n\n\\subsection{The general algorithm}\n\nWith population size $N\\geq1$, we write $[N]:=\\{1,\\ldots,N\\}$. \\emph{To\nsimplify presentation, whenever a summation sign appears without the\nsummation set made explicit, the summation set is taken to be $[N]$,\nfor example we write $\\Sigma_{i}$ to mean $\\Sigma_{i=1}^{N}$. }\n\nThe $\\alpha$SMC algorithm involves simulating a sequence $\\left\\{ \\zeta_{n};n\\geq0\\right\\} $\nwith each $\\zeta_{n}=\\left\\{ \\zeta_{n}^{1},\\ldots,\\zeta_{n}^{N}\\right\\} $\nvalued in $\\mathsf{X}^{N}$. 
Denoting $\\mathbb{X}:=\\left(\\mathsf{X}^{N}\\right)^{\\mathbb{N}}$,\n$\\mathcal{F}^{\\mathbb{X}}:=\\left(\\mathcal{X}^{\\otimes N}\\right)^{\\otimes\\mathbb{N}}$,\nwe shall view $\\left\\{ \\zeta_{n};n\\geq0\\right\\} $ as the canonical\ncoordinate process on the measurable space $\\left(\\mathbb{X},\\mathcal{F}^{\\mathbb{X}}\\right)$,\nand write $\\mathcal{F}_{n}$ for the $\\sigma$-algebra generated by\n$\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n}\\right\\} $. By convention, we\nlet $\\mathcal{F}_{-1}:=\\{\\mathbb{X},\\emptyset\\}$ be the trivial $\\sigma$-algebra.\nThe sampling steps of the $\\alpha$SMC algorithm, described below,\namount to specifying a probability measure, say $\\mathbb{P}$, on\n$\\left(\\mathbb{X},\\mathcal{F}^{\\mathbb{X}}\\right)$. Expectation w.r.t.~$\\mathbb{P}$\nshall be denoted by $\\mathbb{E}$. \n\nLet $\\mathbb{A}_{N}$ be a non-empty set of Markov transition matrices,\neach of size $N\\times N$. For $n\\geq0$ let $\\alpha_{n}:\\mathbb{X}\\rightarrow\\mathbb{A}_{N}$\nbe a matrix-valued map, and write $\\alpha_{n}^{ij}$ for the $i$th\nrow, $j$th column entry, so that for each $i$ we have $\\sum_{j}\\alpha_{n}^{ij}=1$\n(with dependence on the $\\mathbb{X}$-valued argument suppressed).\nThe following assumption places a restriction on the relationship\nbetween $\\alpha$ and the particle system $\\left\\{ \\zeta_{n};n\\geq0\\right\\} $.\n\\begin{assumption*}\n\\textbf{\\emph{(A2)}} For each $n\\geq0$, the entries of $\\alpha_{n}$\nare all measurable with respect to $\\mathcal{F}_{n}$.\n\\end{assumption*}\nIntuitively, the members of $\\mathbb{A}_{N}$ will specify different\npossible interaction structures for the particle algorithm and, under\n\\textbf{(A2)}, each $\\alpha_{n}$ is a random matrix chosen from $\\mathbb{A}_{N}$\naccording to some deterministic function of $\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n}\\right\\} $.\nExamples are given below. 
We shall write $\\mathbf{1}_{1\/N}$ for the\n$N\\times N$ matrix which has $1\/N$ as every entry and write $Id$\nfor the identity matrix of size apparent from the context in which\nthis notation appears. We shall occasionally use $Id$ also to denote\nidentity operators in certain function space settings. Let $\\mathcal{M}$,\n$\\mathcal{P}$ and $\\mathcal{L}$ be respectively the collections\nof measures, probability measures and real-valued, bounded, $\\mathcal{X}$-measurable\nfunctions on $\\mathsf{X}$. We write\n\\[\n\\left\\Vert \\varphi\\right\\Vert :=\\sup_{x}\\left|\\varphi(x)\\right|,\\quad\\quad\\text{osc}(\\varphi):=\\sup_{x,y}\\left|\\varphi(x)-\\varphi(y)\\right|,\n\\]\nand\n\\begin{equation}\n\\mu(\\varphi):=\\int_{\\mathsf{X}}\\varphi(x)\\mu(dx),\\quad\\text{for any}\\quad\\varphi\\in\\mathcal{L},\\;\\mu\\in\\mathcal{M}.\\label{eq:mu(phi)_notation}\n\\end{equation}\n\n\\begin{rem*}\nNote that $\\mathbb{X}$, $\\mathcal{F}^{\\mathbb{X}}$, $\\mathcal{F}_{n}$,\n$\\mathbb{P}$, $\\alpha$ and various other objects depend on $N$,\nbut this dependence is suppressed from the notation. Unless specified\notherwise, any conditions which we impose on such objects should be\nunderstood as holding for all $N\\geq1$.\n\\end{rem*}\nLet $\\left\\{ W_{n}^{i};i\\in[N],n\\geq0\\right\\} $ be defined by the\nfollowing recursion:\n\n\\begin{equation}\nW_{0}^{i}:=1,\\quad\\quad W_{n}^{i}:=\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j}),\\quad i\\in[N],n\\geq1.\\label{eq:W_n_defn}\n\\end{equation}\n\n\nThe following algorithm implicitly specifies the law $\\mathbb{P}$\nof the $\\alpha$SMC particle system. 
For each $n\\geq1$, the ``Sample''\nstep should be understood as meaning that the variables $\\zeta_{n}=\\left\\{ \\zeta_{n}^{i}\\right\\} _{i\\in[N]}$\nare conditionally independent given $\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n-1}\\right\\} $.\nThe line of Algorithm~\\ref{alg:aSMC} marked $(\\star)$ is intentionally\ngeneric, it amounts to a practical, if imprecise restatement of \\textbf{(A2).\n}In the sequel we shall examine instances of $\\alpha$SMC which arise\nwhen we consider specific $\\mathbb{A}_{N}$ and impose more structure\nat line $(\\star)$.\n\n\\begin{algorithm}[H]\n\\begin{raggedright}\n\\qquad{}For $n=0$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}For $i=1,\\ldots,N$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Set\\quad{} $W_{0}^{i}=1$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Sample\\quad{} $\\left\\{ \\zeta_{0}^{i}\\right\\} _{i\\in[N]}\\iid\\mu_{0}$ \n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}For $n\\geq1$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n$(\\star)$\\qquad{}\\enskip{}\\hspace{0.25em}Select $\\alpha_{n-1}$\nfrom $\\mathbb{A}_{N}$ according to some functional of $\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n-1}\\right\\} $\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}For $i=1,\\ldots,N$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Set\\quad{} $W_{n}^{i}=\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Sample\\quad{} $\\zeta_{n}^{i}|\\mathcal{F}_{n-1}\\;\\sim\\;\\dfrac{\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})f(\\zeta_{n-1}^{j},\\cdot)}{W_{n}^{i}}$\n\\par\\end{raggedright}\n\n\\protect\\caption{$\\alpha$SMC\\label{alg:aSMC}}\n\\end{algorithm}\n\n\nWe shall study the objects 
\n\n\\begin{equation}\n\\pi_{n}^{N}:=\\frac{\\sum_{i}W_{n}^{i}\\;\\delta_{\\zeta_{n}^{i}}}{\\sum_{i}W_{n}^{i}},\\quad\\quad\\quad Z_{n}^{N}:=\\frac{1}{N}\\sum_{i}W_{n}^{i},\\quad n\\geq0,\\label{eq:pi^N_andZ^N}\n\\end{equation}\nwhich as the notation suggests, are to be regarded as approximations\nof $\\pi_{n}$ and $Z_{n}$, respectively. We shall also be centrally\nconcerned with the following coefficient, which is closely related\nto the ESS,\n\n\\begin{equation}\n\\mathcal{E}_{n}^{N}:=\\frac{\\left(N^{-1}\\sum_{i}W_{n}^{i}\\right)^{2}}{N^{-1}\\sum_{i}\\left(W_{n}^{i}\\right)^{2}}=\\frac{\\left(N^{-1}\\sum_{i}\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})\\right)^{2}}{N^{-1}\\sum_{i}\\left(\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})\\right)^{2}},\\quad n\\geq1,\\label{eq:ESS_defn_front}\n\\end{equation}\nand by convention $\\mathcal{E}_{0}^{N}:=1$. The second equality in\n(\\ref{eq:ESS_defn_front}) is immediate from the definition of $W_{n}^{i}$,\nsee (\\ref{eq:W_n_defn}). Note that $\\mathcal{E}_{n}^{N}$ is always\nvalued in $[0,1]$, and if we write\n\\begin{equation}\nN_{n}^{\\text{eff}}:=N\\mathcal{E}_{n}^{N},\\label{eq:N_eff}\n\\end{equation}\nwe obtain the ESS of \\citet{liu1995blind}, although of course in\na generalized form, since $\\mathcal{E}_{n}^{N}$ is defined in terms\nof the generic ingredients of $\\alpha$SMC. A few comments on generality\nare in order. Firstly, for ease of presentation, we have chosen to\nwork with a particularly simple version of $\\alpha$SMC, in which\nnew samples are proposed using the HMM Markov kernel $f$. The algorithm\nis easily generalized to accommodate other proposal kernels. 
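To make these quantities concrete, here is a minimal sketch of one sampling step of Algorithm~\ref{alg:aSMC} for a finite state space (our own illustration; the function name and the toy numbers are invented, and $\alpha_{n-1}$ is assumed already chosen). It updates the weights via (\ref{eq:W_n_defn}), draws each $\zeta_n^i$ from its conditional mixture, and reports $Z_n^N$ and the coefficient $\mathcal{E}_n^N$.

```python
import numpy as np

def alpha_smc_step(zeta, W, alpha, f, g_prev, rng):
    """One sampling step of alpha-SMC on a finite state space (a sketch,
    not the paper's pseudo-code verbatim).  zeta holds particle locations
    as state indices, W the weights W_{n-1}^j, alpha the chosen N x N
    transition matrix alpha_{n-1}, f the HMM transition matrix, and
    g_prev the potential g_{n-1} tabulated on the state space."""
    N = len(zeta)
    u = W * g_prev[zeta]                 # W_{n-1}^j g_{n-1}(zeta_{n-1}^j)
    W_new = alpha @ u                    # W_n^i = sum_j alpha_{n-1}^{ij} u_j
    zeta_new = np.empty(N, dtype=int)
    for i in range(N):
        # ancestor j drawn with probability alpha^{ij} u_j / W_n^i,
        # then a move through the HMM kernel f(zeta^j, .)
        j = rng.choice(N, p=alpha[i] * u / W_new[i])
        zeta_new[i] = rng.choice(f.shape[1], p=f[zeta[j]])
    ess = W_new.sum() ** 2 / (N * (W_new ** 2).sum())   # E_n^N in (0, 1]
    Z_est = W_new.mean()                 # Z_n^N = N^{-1} sum_i W_n^i
    return zeta_new, W_new, ess, Z_est

# With complete interaction (every entry 1/N) the new weights are all
# equal, so the coefficient E_n^N equals 1 exactly.
rng = np.random.default_rng(1)
f = np.array([[0.9, 0.1], [0.2, 0.8]])
zeta, W, ess, Z = alpha_smc_step(np.array([0, 1, 0, 1]),
                                 np.array([1.0, 2.0, 3.0, 4.0]),
                                 np.full((4, 4), 0.25),
                                 f, np.array([0.5, 2.0]), rng)
```

With `alpha = np.eye(4)` instead, the same call leaves the weights unequal and the coefficient drops below one, illustrating how the choice of $\alpha_{n-1}$ alone drives $\mathcal{E}_n^N$.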
Secondly,\nwhilst we focus on the application of SMC methods to HMM's, our results\nand methodological ideas are immediately transferable to other contexts,\nfor example via the framework of \\citep{smc:meth:DDJ06}.\n\n\n\\subsection{Instances of $\\alpha$SMC\\label{sub:Instances-of-SMC}}\n\nWe now show how $\\alpha$SMC admits SIS, the BPF and the ARPF, as\nspecial cases, through particular choices of $\\mathbb{A}_{N}$. Our\npresentation is intended to illustrate the structural generality of\n$\\alpha$SMC, thus setting the scene for the developments which follow.\nThe following lemma facilitates exposition by ``unwinding'' the\nquantities $\\left\\{ W_{n}^{i}\\right\\} _{i\\in[N]}$ defined recursively\nin (\\ref{eq:W_n_defn}). It is used throughout the remainder of the\npaper.\n\\begin{lem}\n\\label{lem:W_n_representation}For $n\\geq1$, $0\\leq p0$,\nor equivalently\n\\[\n\\inf_{n\\geq0}N_{n}^{\\text{eff}}\\geq N\\tau>0,\n\\]\n by construction. This seemingly trivial observation turns out to\nbe crucial when we address time-uniform convergence of the ARPF in\nSection~\\ref{sec:stability}, and the condition $\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}>0$\nwill appear repeatedly in discussions which lead to the formulation\nof new, provably stable algorithms in Section~\\ref{sec:Discussion}. \n\n\n\\subsubsection*{Comments on other algorithms}\n\nIn the engineering literature, a variety of algorithmic procedures\ninvolving distributed computing have been suggested \\citep{bolic2005resampling}.\n``Local'' particle approximations of Rao--Blackwellized filters\nhave been devised in \\citep{chen2011decentralized} and \\citep{johansen2012exact}\n. \\citet{Verge_island_particle} have recently suggested an ``island''\nparticle algorithm, designed for parallel implementation, in which\nthere are two levels of resampling and the total population size $N=N_{1}N_{2}$\nis defined in terms of the number of particles per island, $N_{1}$,\nand the number of islands, $N_{2}$. 
Interaction at both levels occurs\nby resampling, at the island level this means entire blocks of particles\nare replicated and\/or discarded. They investigate the trade-off between\n$N_{1}$ and $N_{2}$ and provide asymptotic results which validate\ntheir algorithms. In the present work, we provide some asymptotic\nresults in Section~\\ref{sec:Martingale-approximations-and} but it\nis really the non-asymptotic results in Section~\\ref{sec:stability}\nwhich lead us to suggest specific novel instances of $\\alpha$SMC\nin Section~\\ref{sec:Discussion}. Moreover, in general $\\alpha$SMC\nis distinct from all these algorithms and, other than in some uninteresting\nspecial cases, none of them coincide with the adaptive procedures\nwe suggest in Section~\\ref{sub:Algorithms-with-adaptive}. \n\n\n\\section{Convergence\\label{sec:Martingale-approximations-and}}\n\nIn this section our main objective is to investigate, for general\n$\\alpha$SMC (Algorithm~\\ref{alg:aSMC}), conditions for convergence\n\\begin{equation}\nZ_{n}^{N}-Z_{n}\\rightarrow0\\quad\\text{and}\\quad\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\rightarrow0,\\label{eq:as_convergence_intro}\n\\end{equation}\nat least in probability, as $N\\rightarrow\\infty$. \n\nIn the case of SIS, i.e.~$\\mathbb{A}_{N}=\\{Id\\}$, it is easy to\nestablish (\\ref{eq:as_convergence_intro}), since the processes $\\left\\{ \\zeta_{n}^{i};n\\geq0\\right\\} _{i\\in[N]}$\nare independent Markov chains, of identical law. On the other hand,\nfor the bootstrap filter, i.e.~$\\mathbb{A}_{N}=\\{\\mathbf{1}_{1\/N}\\}$,\nthe convergence $\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\rightarrow0$,\ncan be proved under very mild conditions, by decomposing $\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)$\nin terms of ``local'' sampling errors, see amongst others \\citep{smc:theory:Dm04,smc:the:DM08}\nfor this type of approach. 
For instance, for $A\\in\\mathcal{X}$ we\nmay write \n\\begin{eqnarray}\n\\pi_{1}^{N}(A)-\\pi_{1}(A) & = & \\frac{1}{N}\\sum_{i}\\delta_{\\zeta_{1}^{i}}(A)-\\frac{\\sum_{i}g_{0}(\\zeta_{0}^{i})f(\\zeta_{0}^{i},A)}{\\sum_{i}g_{0}(\\zeta_{0}^{i})}\\label{eq:intro_boot_decomp1}\\\\\n & + & \\frac{\\sum_{i}g_{0}(\\zeta_{0}^{i})f(\\zeta_{0}^{i},A)}{\\sum_{i}g_{0}(\\zeta_{0}^{i})}-\\pi_{1}(A).\\label{eq:intro_boot_decomp2}\n\\end{eqnarray}\nHeuristically, the term on the r.h.s.~of (\\ref{eq:intro_boot_decomp1})\nconverges to zero because, given $\\mathcal{F}_{0}$, the samples $\\left\\{ \\zeta_{1}^{i}\\right\\} _{i\\in[N]}$\nare conditionally i.i.d.~according to $\\frac{\\sum_{i}g_{0}(\\zeta_{0}^{i})f(\\zeta_{0}^{i},\\cdot)}{\\sum_{i}g_{0}(\\zeta_{0}^{i})}$,\nand the term in (\\ref{eq:intro_boot_decomp2}) converges to zero because\nthe samples $\\left\\{ \\zeta_{0}^{i}\\right\\} _{i\\in[N]}$ are i.i.d.~according\nto $\\mu_{0}$. A similar argument ensures that $\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\rightarrow0$\nfor any $n\\geq0$, and therefore, by the continuous mapping theorem,\n$Z_{n}^{N}-Z_{n}\\rightarrow0$, since \n\\[\nZ_{n}=\\prod_{p=0}^{n-1}\\pi_{p}(g_{p}),\\quad\\text{and}\\quad Z_{n}^{N}=\\prod_{p=0}^{n-1}\\pi_{p}^{N}(g_{p}).\n\\]\nIn the case of $\\alpha$SMC, $\\left\\{ \\zeta_{n}^{i}\\right\\} _{i\\in[N]}$\nare conditionally independent given $\\mathcal{F}_{n-1}$, but we do\nnot necessarily have either the unconditional independence structure\nof SIS, or the conditionally i.i.d.~structure of the BPF to work\nwith. \n\n\\citet{smc:the:DM08} have established a CLT for the ARPF using an\ninductive approach w.r.t.~deterministic time periods. \\citet{arnaud2009smc}\nhave obtained a CLT for the ARPF based on an alternative multiplicative\nfunctional representation of the algorithm. Convergence of the ARPF\nwas studied in \\citep{del2012adaptive} by coupling the adaptive algorithm\nto a reference particle system, for which resampling occurs at deterministic\ntimes. 
One of the benefits of their approach is that existing asymptotic\nresults for non-adaptive algorithms, such as central limit theorems\n(CLT), can then be transferred to the adaptive algorithm with little\nfurther work. Their analysis involves a technical assumption \\citep[Section 5.2]{del2012adaptive}\nto deal with the situation where the threshold parameters coincide\nwith the adaptive criteria. Our analysis of $\\alpha$SMC does not\nrest on any such technical assumption, and in some ways is more direct,\nbut we do not obtain concentration estimates or a CLT. Some more detailed\nremarks on this matter are given after the statement of Theorem~\\ref{thm:convergence}. \n\n\\citet{crisan2012particle} studied convergence and obtained a CLT\nfor an adaptive resampling particle filter in continuous time under\nconditions which they verify for the case of ESS-triggered resampling,\nwithout needing the type of technical assumption of \\citep{del2012adaptive}.\nTheir study focuses, in part, on the random times at which resampling\noccurs and dealing with the subtleties of the convergence in continuous\ntime. Our asymptotic $N\\rightarrow\\infty$ analysis is in some ways\nless refined, but in comparison to this and the other existing works,\nwe analyze a more general algorithm, and it is this generality which\nallows us to suggest new adaptive algorithms in Section~\\ref{sec:Discussion},\ninformed by the time-uniform non-asymptotic error bounds in our Theorem~\\ref{thm:L_R_mix}. \n\nTo proceed, we need some further notation involving $\\alpha$. 
Let\nus define the matrices: $\\alpha_{n,n}:=Id$ for $n\\geq0$, and recursively\n\\begin{equation}\n\\alpha_{p,n}^{ij}:=\\sum_{k}\\alpha_{p+1,n}^{ik}\\alpha_{p}^{kj},\\quad\\quad(i,j)\\in[N]^{2},\\;0\\leq p<n.\n\\end{equation}\nDue to the conditional independence structure of the HMM, it can easily\nbe checked that \n\\[\n\\pi_{n}=\\frac{\\gamma_{n}}{\\gamma_{n}(1)},\\quad\\quad Z_{n}=\\gamma_{n}(1),\\quad n\\geq0,\n\\]\nand\n\\[\n\\overline{Q}_{p,n}=\\frac{Q_{p,n}}{\\pi_{p}Q_{p,n}(1)}.\n\\]\n\n\nFor $i\\in[N]$ and $0\\leq p\\leq n$, introduce the random measures\n\\begin{equation}\n\\Gamma_{p,n}^{N}:=\\sum_{i}\\beta_{p,n}^{i}W_{p}^{i}\\delta_{\\zeta_{p}^{i}},\\quad\\quad\\overline{\\Gamma}_{p,n}^{N}:=\\frac{\\Gamma_{p,n}^{N}}{\\gamma_{p}(1)},\\label{eq:Gamma_defn}\n\\end{equation}\nwhere $W_{p}^{i}$ is as in (\\ref{eq:W_n_defn}). For simplicity of\nnotation, we shall write $\\Gamma_{n}^{N}:=\\Gamma_{n,n}^{N},\\;\\overline{\\Gamma}_{n}^{N}:=\\overline{\\Gamma}_{n,n}^{N}$.\nIf we define \n\\begin{equation}\n\\overline{W}_{n}^{i}:=\\frac{W_{n}^{i}}{\\gamma_{n}(1)},\\quad n\\geq0,\\label{eq:W_bar_defn}\n\\end{equation}\nthen we have from (\\ref{eq:Gamma_defn}), \n\\[\n\\overline{\\Gamma}_{p,n}^{N}=\\sum_{i}\\beta_{p,n}^{i}\\overline{W}_{p}^{i}\\delta_{\\zeta_{p}^{i}}.\n\\]\n\n\nFinally, we observe from (\\ref{eq:beta_n_n_defn}) that\n\\[\n\\Gamma_{n}^{N}=\\sum_{i}\\beta_{n,n}^{i}W_{n}^{i}\\delta_{\\zeta_{n}^{i}}=N^{-1}\\sum_{i}W_{n}^{i}\\delta_{\\zeta_{n}^{i}}.\n\\]\n\n\n\n\\subsection{Error decomposition}\n\nThroughout this section let $\\varphi\\in\\mathcal{L}$, $n\\geq0$ and\n$N\\geq1$ be arbitrarily chosen, but then fixed. 
Define, for $1\\leq p\\leq n$\nand $i\\in[N]$,\n\\[\n\\Delta_{p,n}^{i}:=\\overline{Q}_{p,n}(\\varphi)(\\zeta_{p}^{i})-\\frac{\\sum_{j}\\alpha_{p-1}^{ij}W_{p-1}^{j}\\overline{Q}_{p-1,n}(\\varphi)(\\zeta_{p-1}^{j})}{\\sum_{j}\\alpha_{p-1}^{ij}W_{p-1}^{j}\\overline{Q}_{p}(1)(\\zeta_{p-1}^{j})},\n\\]\nand $\\Delta_{0,n}^{i}:=\\overline{Q}_{0,n}(\\varphi)(\\zeta_{0}^{i})-\\mu_{0}\\overline{Q}_{0,n}(\\varphi)$,\nso that $\\mathbb{E}\\left[\\left.\\Delta_{p,n}^{i}\\right|\\mathcal{F}_{p-1}\\right]=0$\nfor any $i\\in[N]$ and $0\\leq p\\leq n$. Then for $0\\leq p\\leq n$\nand $i\\in[N]$ set $k:=pN+i$, and\n\n\\[\n\\xi_{k}^{N}:=\\sqrt{N}\\beta_{p,n}^{i}\\overline{W}_{p}^{i}\\Delta_{p,n}^{i},\n\\]\nso as to define a sequence $\\left\\{ \\xi_{k}^{N};k=1,\\ldots,(n+1)N\\right\\} $.\nFor $k=1,\\ldots,(n+1)N$, let $\\mathcal{F}^{(k)}$ be the $\\sigma$-algebra\ngenerated by $\\left\\{ \\zeta_{p}^{i};\\; pN+i\\leq k,\\; i\\in[N],0\\leq p\\leq n\\right\\} $.\nSet $\\mathcal{F}^{(-1)}:=\\{\\mathbb{X},\\emptyset\\}$.\n\nThe following proposition is the main result underlying Theorem~\\ref{thm:convergence}.\nThe proof is given in the appendix.\n\\begin{prop}\n\\label{prop:martingale} Assume $\\mathbf{(A2)}$ and $\\mathbf{(B)}$.\nWe have the decomposition\n\n\\begin{equation}\n\\sqrt{N}\\left[\\overline{\\Gamma}_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\right]=\\sum_{k=1}^{(n+1)N}\\xi_{k}^{N},\\label{eq:Gamma_telescope}\n\\end{equation}\nwhere for $k=1,\\ldots,(n+1)N$, the increment $\\xi_{k}^{N}$ is measurable\nw.r.t.~$\\mathcal{F}^{(k)}$ and satisfies \n\\begin{equation}\n\\mathbb{E}\\left[\\left.\\xi_{k}^{N}\\right|\\mathcal{F}^{(k-1)}\\right]=\\mathbb{E}\\left[\\left.\\xi_{k}^{N}\\right|\\mathcal{F}_{p-1}\\right]=0,\\quad\\text{with}\\quad p:=\\left\\lfloor (k-1)\/N\\right\\rfloor .\\label{eq:xi_cond_exp}\n\\end{equation}\nFor each $r\\geq1$ there exists a universal constant $B(r)$ such\nthat\n\\begin{eqnarray}\n & & 
\\mathbb{E}\\left[\\left|\\overline{\\Gamma}_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\right|^{r}\\right]^{1\/r}\\nonumber \\\\\n & & \\leq B(r)^{1\/r}\\sum_{p=0}^{n}\\mathrm{osc}\\left(\\overline{Q}_{p,n}(\\varphi)\\right)\\mathbb{E}\\left[\\left|\\sum_{i}\\left(\\beta_{p,n}^{i}\\overline{W}_{p}^{i}\\right)^{2}\\right|^{r\/2}\\right]^{1\/r}.\\label{eq:martingale_burkholder_bound}\n\\end{eqnarray}\n \n\\end{prop}\nThe proof of Theorem~\\ref{thm:convergence}, which is mostly technical,\nis given in the appendix. Here we briefly discuss our assumptions\nand sketch some of the main arguments. Part 1) of Theorem~\\ref{thm:convergence}\nfollows immediately from (\\ref{eq:Gamma_telescope}) and (\\ref{eq:xi_cond_exp})\napplied with $\\varphi=1$. In turn, the martingale structure of\n(\\ref{eq:Gamma_telescope}) and (\\ref{eq:xi_cond_exp}) is underpinned\nby the measurability conditions \\textbf{(A2)} and $\\mathbf{(B)}$.\nThe proofs of parts 2) and 3) of Theorem~\\ref{thm:convergence}\ninvolve applying Proposition~\\ref{prop:martingale} in conjunction\nwith the identities\n\\begin{eqnarray}\nZ_{n}^{N}-Z_{n} & = & \\Gamma_{n}^{N}(1)-\\gamma_{n}(1),\\nonumber \\\\\n\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi) & = & \\frac{\\Gamma_{n}^{N}(\\varphi)}{\\Gamma_{n}^{N}(1)}-\\frac{\\gamma_{n}(\\varphi)}{\\gamma_{n}(1)}.\\label{eq:convergence_sketch_id}\n\\end{eqnarray}\nIn order to prove that these errors converge to zero in probability,\nwe show that the quadratic variation term in (\\ref{eq:martingale_burkholder_bound})\nconverges to zero. 
In general, we cannot hope for the latter convergence\nwithout some sort of negligibility hypothesis on the product terms\n$\\left\\{ \\mathrm{osc}\\left(\\overline{Q}_{p,n}(\\varphi)\\right)\\beta_{p,n}^{i}\\overline{W}_{p}^{i};i\\in[N]\\right\\} $.\nAssumption $\\mathbf{\\mathbf{(A1)}}$ allows us to crudely upper-bound\n$\\mathrm{osc}\\left(\\overline{Q}_{p,n}(\\varphi)\\right)$ and $\\overline{W}_{p}^{i}$;\nthe measurability condition $\\mathbf{(B)}$ allows us to dispose of\nthe expectation in (\\ref{eq:martingale_burkholder_bound}); then via\nMarkov's inequality and the classical equivalence: \n\\[\n\\lim_{N\\rightarrow\\infty}\\max_{i\\in[N]}\\beta_{p,n}^{i}=0\\quad\\Leftrightarrow\\quad\\lim_{N\\rightarrow\\infty}\\sum_{i}\\left(\\beta_{p,n}^{i}\\right)^{2}=0,\n\\]\nwhich holds since $\\left(\\max_{i\\in[N]}\\beta_{p,n}^{i}\\right)^{2}\\leq\\sum_{i}\\left(\\beta_{p,n}^{i}\\right)^{2}\\leq\\max_{i\\in[N]}\\beta_{p,n}^{i}$,\nthe negligibility part of $\\mathbf{(B^{+})}$ guarantees that $\\left|\\Gamma_{n}^{N}(\\varphi)-\\gamma_{n}(\\varphi)\\right|$\nconverges to zero in probability. The stronger condition $\\mathbf{(B^{++})}$\nbuys us the $\\sqrt{N}$ scaling displayed in part 3). In Section~\\ref{sub:Ensuring-convergence}\nwe discuss what can go wrong when $\\mathbf{(B^{+})}$ does not hold.\n\n\n\\section{Stability\\label{sec:stability}}\n\nIn this section we study the stability of approximation errors under\nthe following regularity condition.\n\\begin{assumption*}\n$\\mathbf{(C)}$ There exists $\\left(\\delta,\\epsilon\\right)\\in[1,\\infty)^{2}$\nsuch that\n\\[\n\\sup_{n\\geq0}\\sup_{x,y}\\frac{g_{n}(x)}{g_{n}(y)}\\leq\\delta,\\quad\\quad f(x,\\cdot)\\leq\\epsilon f(y,\\cdot),\\quad(x,y)\\in\\mathsf{X}^{2}.\n\\]\n\\end{assumption*}\n\\begin{rem}\n\\label{rem:assumption_C}Assumption $\\mathbf{(C)}$ is a standard\nhypothesis in studies of non-asymptotic stability properties of SMC\nalgorithms. 
Similar conditions have been adopted in \\citep[Chapter 7]{smc:theory:Dm04}\nand \\citep{smc:the:LGO04}, amongst others. $\\mathbf{(C)}$ guarantees\nthat $Q_{p,n}$, and related objects, obey a variety of regularity\nconditions. In particular, we immediately obtain\n\n\\begin{equation}\n\\sup_{p,n}\\sup_{x}\\overline{Q}_{p,n}(1)(x)\\leq\\sup_{p,n}\\sup_{x,y}\\frac{Q_{p,n}(1)(x)}{Q_{p,n}(1)(y)}\\leq\\delta\\epsilon<+\\infty.\\label{eq:Q_p,n_bounded}\n\\end{equation}\nFurthermore, if we introduce the following operators on probability\nmeasures:\n\n\\begin{equation}\n\\Phi_{n}:\\mu\\in\\mathcal{P}\\mapsto\\frac{\\mu Q_{n}}{\\mu(g_{n-1})}\\in\\mathcal{P},\\quad\\quad n\\geq1,\\label{eq:Phi_defn}\n\\end{equation}\n\n\n\\begin{equation}\n\\Phi_{p,n}:=\\Phi_{n}\\circ\\cdots\\circ\\Phi_{p+1},\\quad0\\leq p<n.\n\\end{equation}\nThe argument\nis inductive. To initialize, note that since by definition $Z_{0}^{N}=Z_{0}=1$,\nwe have $v_{0}=0$. Now assume (\\ref{eq:v_n_induction_hyp}) holds\nat all ranks strictly less than some fixed $n\\geq1$. 
Using (\\ref{eq:v_n_recursion}),\nwe then have at rank $n$,\n\\begin{eqnarray*}\nv_{n} & \\leq & \\frac{C}{N\\tau}\\sum_{p=0}^{n-1}\\left(v_{p}+1\\right)\\\\\n & \\leq & \\frac{C}{N\\tau}\\sum_{p=0}^{n-1}\\left(1+\\frac{C}{N\\tau}\\right)^{p}\\\\\n & = & \\frac{C}{N\\tau}\\frac{\\left(1+\\frac{C}{N\\tau}\\right)^{n}-1}{\\left(1+\\frac{C}{N\\tau}\\right)-1}\\\\\n & = & \\left(1+\\frac{C}{N\\tau}\\right)^{n}-1.\n\\end{eqnarray*}\nThis completes the proof of (\\ref{eq:v_n_induction_hyp}), from which\nthe second equality on the right of (\\ref{eq:them_Stability_Statement})\nfollows immediately upon noting that by Theorem~\\ref{thm:convergence},\n$\\mathbb{E}[Z_{n}^{N}]=Z_{n}$.\n\nFor the second bound on the right of (\\ref{eq:them_Stability_Statement}),\nfirst note that as per Remark~\\ref{rem:assumption_C}, under $\\mathbf{(C)}$\\textbf{\n}we have \n\\begin{eqnarray*}\n\\left\\Vert P_{p,n}(\\bar{\\varphi})\\right\\Vert & = & \\sup_{x}\\left|P_{p,n}(\\varphi)(x)-\\pi_{n}(\\varphi)\\right|\\\\\n & = & \\sup_{x}\\left|\\Phi_{p,n}(\\delta_{x})(\\varphi)-\\Phi_{p,n}(\\pi_{p})(\\varphi)\\right|\\\\\n & \\leq & \\sup_{\\mu,\\nu\\in\\mathcal{P}}\\left\\Vert \\Phi_{p,n}(\\mu)-\\Phi_{p,n}(\\nu)\\right\\Vert \\left\\Vert \\varphi\\right\\Vert \\leq\\left\\Vert \\varphi\\right\\Vert C\\rho^{n-p},\n\\end{eqnarray*}\nand \n\\[\n\\sup_{n\\geq0}\\sup_{p\\leq n}\\;\\delta_{p,n}<+\\infty,\n\\]\n Using these upper bounds, the fact that under $\\mathbf{(B^{++})}$\nwe have $\\beta_{p,n}^{i}=1\/N$, and Proposition~\\ref{prop:L_p_bound_mixing},\nwe find that there exists a finite constant $\\widetilde{B}(r)$ such\nthat for any $N\\geq1$, $n\\geq0$, $\\varphi\\in\\mathcal{L}$, \n\n\\emph{\n\\[\n\\mathbb{E}\\left[\\left|\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\right|^{r}\\right]^{1\/r}\\leq\\left\\Vert \\varphi\\right\\Vert 
\\frac{\\tilde{B}(r)}{\\sqrt{N}}\\sum_{p=0}^{n}\\rho^{n-p}\\mathbb{E}\\left[\\left|\\mathcal{E}_{p}^{N}\\right|^{-r\/2}\\right]^{1\/r},\n\\]\n}where\n\\[\n\\mathcal{E}_{n}^{N}=\\frac{\\left(N^{-1}\\sum_{i}W_{n}^{i}\\right)^{2}}{N^{-1}\\sum_{i}\\left(W_{n}^{i}\\right)^{2}}.\n\\]\n\n\\end{proof}\n\n\\section{Discussion\\label{sec:Discussion}}\n\n\n\\subsection{Why not just run independent particle filters and average?\\label{sub:Why-not-just}}\n\nOne obvious approach to parallelization of SMC is to run a number\nof independent copies of a standard algorithm, such as the BPF, and\nthen in some sense simply average their outputs. Let us explain possible\nshortcomings of this approach. \n\nSuppose we want to run $s\\geq1$ independent copies of Algorithm~\\ref{alg:boot_pf},\neach with $q\\geq1$ particles. For purposes of exposition, it is helpful\nto express this collection of independent algorithms as a particular\ninstance of $\\alpha$SMC: for the remainder of Section~\\ref{sub:Why-not-just},\nwe set $N=sq$ and consider Algorithm~\\ref{alg:aSMC} with $\\mathbb{A}_{N}$\nchosen to consist only of the block diagonal matrix:\n\\begin{equation}\n\\left[\\begin{array}{cccc}\n\\mathbf{q^{-1}} & \\mathbf{0} & \\cdots & \\mathbf{0}\\\\\n\\mathbf{0} & \\mathbf{q^{-1}} & \\cdots & \\mathbf{0}\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\n\\mathbf{0} & \\mathbf{0} & \\cdots & \\mathbf{q^{-1}}\n\\end{array}\\right]\\label{eq:alpha_block}\n\\end{equation}\nwhere $\\mathbf{q}^{-1}$ is a $q\\times q$ submatrix with every entry\nequal to $q^{-1}$ and $\\mathbf{0}$ is a submatrix of zeros, of the\nsame size. 
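As a concrete sketch (using NumPy; the helper name \\texttt{block\\_alpha} is ours, not from any reference implementation), the block-diagonal matrix above is a Kronecker product of an identity matrix with a constant $q\\times q$ block:

```python
import numpy as np

# A sketch (ours) of the block-diagonal alpha matrix displayed above:
# s diagonal blocks of size q x q with every entry 1/q, so N = s * q.
def block_alpha(s, q):
    return np.kron(np.eye(s), np.full((q, q), 1.0 / q))

A = block_alpha(s=3, q=4)                    # N = 12
assert A.shape == (12, 12)
assert np.allclose(A.sum(axis=1), 1.0)       # rows sum to one
assert np.allclose(A, A.T)                   # symmetric, hence doubly stochastic
assert A[0, 0] == 0.25 and A[0, 4] == 0.0    # within-block vs cross-block entries
```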
In this situation, a simple application of Lemma~\\ref{lem:W_n_representation}
shows that for any $n\\geq1$ and $\\ell\\in[s]$, if we define $B(\\ell):=\\{(\\ell-1)q+1,(\\ell-1)q+2,\\ldots,\\ell q\\}$,
then
\\begin{equation}
\\text{for all}\\quad i_{n}\\in B(\\ell),\\quad\\quad W_{n}^{i_{n}}=\\prod_{p=0}^{n-1}\\left(q^{-1}\\sum_{i_{p}\\in B(\\ell)}g_{p}\\left(\\zeta_{p}^{i_{p}}\\right)\\right)=:\\mathbb{W}_{n}^{\\ell},\\label{eq:W_n^i_blocks}
\\end{equation}
cf.~(\\ref{eq:bootstrap_W_n^i})--(\\ref{eq:bootstrap_Z_n^N}), and
furthermore upon inspection of Algorithm~\\ref{alg:aSMC}, we find
\\begin{equation}
\\text{for all }\\ell\\in[s]\\text{ and }i\\in B(\\ell),\\quad\\quad\\mathbb{P}\\left(\\left.\\zeta_{n}^{i}\\in A\\right|\\mathcal{F}_{n-1}\\right)=\\frac{\\sum_{j\\in B(\\ell)}g_{n-1}\\left(\\zeta_{n-1}^{j}\\right)f\\left(\\zeta_{n-1}^{j},A\\right)}{\\sum_{j\\in B(\\ell)}g_{n-1}\\left(\\zeta_{n-1}^{j}\\right)},\\label{eq:blocks_law}
\\end{equation}
for any $A\\in\\mathcal{X}$. It follows that the blocks of particles
\\[
\\hat{\\zeta}_{n}^{\\ell}:=\\left\\{ \\zeta_{n}^{i}\\right\\} _{i\\in B(\\ell)},\\quad\\ell\\in[s],
\\]
are independent, and for each $\\ell\\in[s]$, the sequence $\\left\\{ \\hat{\\zeta}_{n}^{\\ell};n\\geq0\\right\\} $
evolves under the same law as a BPF, with $q$ particles. Furthermore
we notice 
\\[
\\pi_{n}^{N}=\\pi_{n}^{sq}=\\frac{\\sum_{i}W_{n}^{i}\\;\\delta_{\\zeta_{n}^{i}}}{\\sum_{i}W_{n}^{i}}=\\frac{\\sum_{\\ell\\in[s]}\\sum_{i\\in B(\\ell)}W_{n}^{i}\\;\\delta_{\\zeta_{n}^{i}}}{\\sum_{\\ell\\in[s]}\\sum_{i\\in B(\\ell)}W_{n}^{i}}=\\frac{\\sum_{\\ell\\in[s]}\\mathbb{W}_{n}^{\\ell}\\left(q^{-1}\\sum_{i\\in B(\\ell)}\\delta_{\\zeta_{n}^{i}}\\right)}{\\sum_{\\ell\\in[s]}\\mathbb{W}_{n}^{\\ell}},
\\]
where $q^{-1}\\sum_{i\\in B(\\ell)}\\delta_{\\zeta_{n}^{i}}$ may be regarded
as the approximation of $\\pi_{n}$ obtained from the $\\ell$th block
of particles.
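The last display is a purely algebraic identity, and can be sanity-checked numerically; the snippet below (plain Python, with arbitrary made-up weights, particles and test function) is only an illustration:

```python
import random

# Numerical check of the identity above: when the weights W_n^i are constant
# within blocks, the global self-normalized estimate equals the
# block-weighted mixture of per-block estimates. Weights, particles and the
# test function phi are arbitrary made-up values.
rng = random.Random(7)
s, q = 5, 4
WW = [rng.random() for _ in range(s)]                       # block weights
zeta = [[rng.gauss(0.0, 1.0) for _ in range(q)] for _ in range(s)]
phi = lambda x: x * x

lhs = (sum(WW[l] * phi(z) for l in range(s) for z in zeta[l])
       / (q * sum(WW)))                                     # global estimate
rhs = (sum(WW[l] * sum(map(phi, zeta[l])) / q for l in range(s))
       / sum(WW))                                           # block mixture
assert abs(lhs - rhs) < 1e-12
```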
Since we have assumed that $\\mathbb{A}_{N}$ consists
only of the matrix (\\ref{eq:alpha_block}), $\\mathbf{(A2)}$ and $\\mathbf{(B^{++})}$
hold, and by Theorem~\\ref{thm:convergence} we are assured of the
a.s.~convergence $\\pi_{n}^{sq}(\\varphi)\\rightarrow\\pi_{n}(\\varphi)$
when $q$ is fixed and $s\\rightarrow\\infty$. In words, we have convergence
as the total number of bootstrap algorithms tends to infinity, even
though the number of particles within each algorithm is fixed. On
the other hand, simple averaging of the output from the $s$ independent
algorithms would entail reporting:
\\begin{equation}
\\frac{1}{sq}\\sum_{i\\in[sq]}\\delta_{\\zeta_{n}^{i}}\\label{eq:naive}
\\end{equation}
as an approximation of $\\pi_{n}$; the problem is that (\\ref{eq:naive})
is biased, in the sense that in general it is not true that, with
$q$ fixed, $(sq)^{-1}\\sum_{i\\in[sq]}\\varphi(\\zeta_{n}^{i})\\rightarrow\\pi_{n}(\\varphi)$
as $s\\rightarrow\\infty$ (although obviously we do have convergence
if $q\\rightarrow\\infty$). In summary, simple averages across independent
particle filters do not, in general, converge as the number of algorithms
grows.
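To make the bias concrete, consider a hypothetical one-step example (ours, not from the text): $X$ uniform on $\\{0,1\\}$, potential $g(0)=1$, $g(1)=3$ and $\\varphi(x)=x$, so the target value is $\\mathbb{E}[g(X)\\varphi(X)]\/\\mathbb{E}[g(X)]=0.75$. With $q=1$ particle per filter, each filter's self-normalized estimate is just $\\varphi$ of its particle, so the naive average targets $\\mathbb{E}[\\varphi(X)]=0.5$, whereas weighting each filter by its normalizing weight recovers $0.75$:

```python
import random

# Hypothetical one-step illustration of the bias discussed above.
# Target: pi(phi) = E[g(X) phi(X)] / E[g(X)] with X uniform on {0, 1},
# g(0) = 1, g(1) = 3, phi(x) = x, so pi(phi) = 1.5 / 2 = 0.75.
random.seed(1)
g = {0: 1.0, 1: 3.0}
s = 200_000                     # number of independent one-particle "filters"
xs = [random.randint(0, 1) for _ in range(s)]

naive = sum(xs) / s                                            # targets 0.5
weighted = sum(g[x] * x for x in xs) / sum(g[x] for x in xs)   # targets 0.75

assert abs(naive - 0.5) < 0.01      # biased away from pi(phi) = 0.75
assert abs(weighted - 0.75) < 0.01  # consistent as s grows
```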
We can also discuss the quality of an approximation of $Z_{n}$ obtained
by simple averaging across the $s$ independent algorithms; let us
consider the quantities
\\[
\\mathbb{Z}_{n}^{(q,\\ell)}:=\\frac{1}{\\ell}\\sum_{j\\in[\\ell]}\\mathbb{W}_{n}^{j},\\quad\\ell\\in[s].
\\]
Comparing (\\ref{eq:W_n^i_blocks}) with (\\ref{eq:bootstrap_Z_n^N}),
and noting (\\ref{eq:blocks_law}) and the independence properties
described above, we have
\\begin{equation}
\\mathbb{E}\\left[\\mathbb{Z}_{n}^{(q,s)}\\right]=Z_{n},\\quad\\quad\\mathbb{E}\\left[\\left(\\frac{\\mathbb{Z}_{n}^{(q,s)}}{Z_{n}}-1\\right)^{2}\\right]=\\frac{1}{s}\\mathbb{E}\\left[\\left(\\frac{\\mathbb{Z}_{n}^{(q,1)}}{Z_{n}}-1\\right)^{2}\\right],\\label{eq:Z_naive_average}
\\end{equation}
where the first equality holds due to the first part of Theorem~\\ref{thm:convergence}:
in this context the well known lack-of-bias property of the BPF. Under
certain ergodicity and regularity conditions, \\citet[Proposition 4]{WhiteleyTPF}
establishes that $\\mathbb{E}\\left[\\left(\\mathbb{Z}_{n}^{(q,1)}\/Z_{n}\\right)^{2}\\right]$
grows exponentially fast along observation sample paths when $q$
is fixed and $n\\rightarrow\\infty$. When that occurs, it is clear
from (\\ref{eq:Z_naive_average}) that $s$ must be scaled up exponentially fast
with $n$ in order to control the relative variance of $\\mathbb{Z}_{n}^{(q,s)}$.
On the other hand, by Theorem~\\ref{thm:L_R_mix} and Remark~\\ref{rem:linear_variance},
it is apparent that if we design an instance of $\\alpha$SMC so as
to enforce $\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}>0$, then we can control
$\\mathbb{E}\\left[\\left(Z_{n}^{N}\/Z_{n}\\right)^{2}\\right]$ at a more
modest computational cost.
When $\\mathbb{A}_{N}$ consists only of
the matrix (\\ref{eq:alpha_block}) we do not have a guarantee that
$\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}>0$, but in Section~\\ref{sub:Algorithms-with-adaptive}
we shall suggest some novel algorithms which do guarantee this lower
bound and therefore enjoy the time-uniform convergence and linear-in-time
variance properties of Theorem~\\ref{thm:L_R_mix}. Before addressing
these stability issues we discuss the conditions under which the $\\alpha$SMC
algorithm converges.


\\subsection{Ensuring convergence\\label{sub:Ensuring-convergence}}

Throughout Section~\\ref{sub:Ensuring-convergence}, we consider the
generic Algorithm~\\ref{alg:aSMC}. We describe what can go wrong
if the conditions of Theorem~\\ref{thm:convergence} do not hold.
As an example of a situation in which $\\mathbf{(B^{+})}$ does not
hold, suppose that $\\mathbb{A}_{N}$ consists only of the transition
matrix of a simple random walk on the star graph with $N$ vertices,
call it $\\mathcal{S}_{N}$. That is, for $N>2$, $\\mathcal{S}_{N}$
is an undirected tree with one internal vertex and $N-1$ leaves,
and for $N\\leq2$, all vertices are leaves. Examples of $\\mathcal{S}_{N}$
are illustrated in Figure~\\ref{fig:Star-graphs-of}. It is elementary
that a simple random walk on $\\mathcal{S}_{N}$ has unique invariant
distribution given by 
\\[
\\frac{d_{N}^{i}}{\\sum_{j}d_{N}^{j}},\\quad i\\in[N],\\quad\\quad\\text{ where}\\quad d_{N}^{i}:=\\text{ degree of vertex }i\\text{ in }\\mathcal{S}_{N}.
\\]
Assuming that for every $N>2$ the internal vertex of $\\mathcal{S}_{N}$
is labelled vertex $1$, this invariant distribution assigns probability
$1\/2$ to vertex $1$, and consequently, for all $0\\leq p<n$, the
coefficient $\\beta_{p,n}^{1}$ does not vanish as $N\\rightarrow\\infty$
when $N>2$; hence $\\mathbf{(B^{+})}$ does not hold, and thus part
2) of Theorem~\\ref{thm:convergence} does not hold.

\\begin{figure}
\\hfill{}\\includegraphics[width=1\\textwidth]{stars2}\\hfill{}

\\protect\\caption{\\label{fig:Star-graphs-of}Star graphs.
}

\\end{figure}


As a more explicit example of convergence failure, suppose that $\\mathbb{A}_{N}$
consists only of the matrix which has $1$ for every entry in its
first column, and zeros for all other entries. This is the transition
matrix of a random walk on a directed graph of which all edges lead
to vertex $1$. It follows that for all $0\\leq p<n$, $\\beta_{p,n}^{i}=0$
whenever $i\\neq1$: all the weight is carried by the offspring of
a single particle, so $\\mathbf{(B^{+})}$ does not hold and convergence
cannot be guaranteed.


\\subsection{Algorithms with adaptive interaction\\label{sub:Algorithms-with-adaptive}}

We now seek instances of $\\alpha$SMC which satisfy two criteria:
\\begin{enumerate}
\\item\\label{enu:criterion1}the lower bound $\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}\\geq\\tau>0$
should be enforced, so as to ensure stability

\\item\\label{enu:criterion2}the computational complexity of associated
sampling, weight and ESS calculations should not be prohibitively
high

\\end{enumerate} The motivation for (\\ref{enu:criterion1}) is the
theoretical assurance given by Theorem~\\ref{thm:L_R_mix}. The motivation
for (\\ref{enu:criterion2}) is simply that we do not want an algorithm
which is much more expensive than any of the standard SMC methods,
Algorithms~\\ref{alg:SIS}--\\ref{alg:boot_pf} and the ARPF. It is
easily checked that the complexity of SIS is $O(N)$ per unit time
step, which is the same as the complexity of the BPF \\citep{carpenter1999improved}
and the ARPF.

Throughout the remainder of Section~\\ref{sub:Algorithms-with-adaptive}
we shall assume that $\\mathbb{A}_{N}$ consists only of transition
matrices of simple random walks on regular undirected graphs. We impose
a little structure in addition to this as per the following definition,
which identifies an object related to the standard notion of a block-diagonal
matrix.
\\begin{defn*}
A \\textbf{B-matrix} is a Markov transition matrix which specifies
a simple random walk on a regular undirected graph which has a self-loop
at every vertex and whose connected components are all complete subgraphs.
\\end{defn*}
Note that due to the graph regularity appearing in this definition,
if $\\mathbb{A}_{N}$ consists only of B-matrices, then $\\mathbf{(B^{++})}$
is immediately satisfied.
This regularity is also convenient for purposes
of interpretation: it seems natural to use graph degree to give a
precise meaning to ``degree of interaction''. Indeed $Id$ and $\\mathbf{1}_{1\/N}$
are both B-matrices, respectively specifying simple random walks on
$1$-regular and $N$-regular graphs, and recall that for the ARPF, $\\mathbb{A}_{N}=\\left\\{ Id,\\mathbf{1}_{1\/N}\\right\\} $.
The main idea behind the new algorithms below is to consider an instance
of $\\alpha$SMC in which $\\mathbb{A}_{N}$ is defined to consist of
B-matrices of various degrees $d\\in[N]$, and to define adaptive algorithms
which select the value of $\\alpha_{n-1}$ by searching through $\\mathbb{A}_{N}$
to find the graph with the smallest $d$ which achieves $\\mathcal{E}_{n}^{N}\\geq\\tau>0$,
hence satisfying criterion (\\ref{enu:criterion1}). In this way, we ensure provable stability
whilst trying to avoid the complete interaction which occurs when
$\\alpha_{n-1}=\\mathbf{1}_{1\/N}$. 

Another appealing property of B-matrices is formalized in the following
lemma; see criterion (\\ref{enu:criterion2}) above. The proof is given
in the appendix.
\\begin{lem}
\\label{lem:complexity}Suppose that $A=\\left(A^{ij}\\right)$ is a
B-matrix of size $N$. Then given the quantities $\\left\\{ W_{n-1}^{i}\\right\\} _{i\\in[N]}$
and $\\left\\{ g_{n-1}(\\zeta_{n-1}^{i})\\right\\} _{i\\in[N]}$, the computational
complexity of calculating $\\left\\{ W_{n}^{i}\\right\\} _{i\\in[N]}$
and simulating $\\left\\{ \\zeta_{n}^{i}\\right\\} _{i\\in[N]}$ as per
Algorithm~\\ref{alg:aSMC}, using $\\alpha_{n-1}=A$, is $O(N)$.
\\end{lem}
When calculating the overall complexity of Algorithm~\\ref{alg:aSMC}
we must also consider the complexity of line $(\\star)$, which in
general depends on $\\mathbb{A}_{N}$ and the particular functional
used to choose $\\alpha_{n}$. We resume this complexity discussion
after describing the specifics of some adaptive algorithms.
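To illustrate Lemma~\\ref{lem:complexity}, a B-matrix need never be stored densely: it is determined by its partition of $[N]$ into blocks, and both the weight update and the within-block resampling take a single pass over each block. The sketch below is ours (illustrative names; \\texttt{f\\_sample} stands for an arbitrary mutation kernel):

```python
import random

# Sketch (ours) of the O(N) update: a B-matrix is represented by its
# partition of {0, ..., N-1} into blocks (complete subgraphs), never as a
# dense N x N array. Within each block the new weights are equal to the
# block average of W_{n-1}^j * g_{n-1}(zeta_{n-1}^j), and each particle
# resamples an ancestor within its block proportionally to those values.
def b_matrix_update(blocks, W_prev, g_prev, particles, f_sample, rng):
    N = len(W_prev)
    W_new = [0.0] * N
    new_particles = [None] * N
    for block in blocks:                        # blocks partition range(N)
        vals = [W_prev[j] * g_prev[j] for j in block]
        w_bar = sum(vals) / len(block)          # common new weight in block
        for i in block:
            W_new[i] = w_bar
            j = rng.choices(block, weights=vals)[0]   # ancestor index
            new_particles[i] = f_sample(particles[j], rng)
    return W_new, new_particles

rng = random.Random(0)
blocks = [[0, 1], [2, 3]]                       # a degree-2 B-matrix, N = 4
W_prev, g_prev = [1.0, 1.0, 1.0, 1.0], [0.5, 1.5, 2.0, 2.0]
parts = [0.0, 1.0, 2.0, 3.0]
W_new, _ = b_matrix_update(blocks, W_prev, g_prev, parts,
                           lambda x, r: x + r.gauss(0.0, 1.0), rng)
assert W_new == [1.0, 1.0, 2.0, 2.0]            # block averages of W * g
```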
\\subsubsection*{Adaptive interaction}

Throughout this section we set $m\\in\\mathbb{N}$ and then $N=2^{m}$.
Consider Algorithm~\\ref{alg:aSMC} with $\\mathbb{A}_{N}$ chosen
to be the set of B-matrices of size $N$. We suggest three adaptation
rules at line $(\\star)$ of Algorithm~\\ref{alg:aSMC}: Simple, Random,
and Greedy, all implemented via Algorithm~\\ref{alg:generic adaptation}
(note that dependence of some quantities on $n$ is suppressed from
the notation there), but differing in the way they select the index
list $\\mathcal{I}_{k}$ which appears in the ``while'' loop of that
procedure. The methods for selecting $\\mathcal{I}_{k}$ are summarised
in Table~\\ref{tab:Choosing_I_k}: the Simple rule needs little explanation,
the Random rule implements an independent random shuffling of indices
and the Greedy rule is intended, heuristically, to pair large weights,
$\\mathbb{W}_{k}^{i}$, with small weights in order to terminate the
``while'' loop with as small a value of $k$ as possible. Note that,
formally, in order for our results for $\\alpha$SMC to apply when
the Random rule is used, the underlying probability space must be
appropriately extended, but the details are trivial so we omit them.
\n\n\\begin{algorithm}\n\\begin{raggedright}\n\\qquad{}at iteration $n$ and line $(\\star)$ of Algorithm~\\ref{alg:aSMC},\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}for $i=1,\\ldots,N$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $B(0,i)=\\{i\\}$, $\\mathbb{W}_{0}^{i}=W_{n-1}^{i}g_{n-1}(\\zeta_{n-1}^{i})$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $k=0$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $\\overline{\\mathbb{W}}_{0}=N^{-1}\\sum_{i}\\mathbb{W}_{0}^{i}$\n, $\\mathcal{E}=\\frac{\\left(\\overline{\\mathbb{W}}_{0}\\right)^{2}}{N^{-1}\\sum_{i}\\left(\\mathbb{W}_{0}^{i}\\right)^{2}}$, \n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}while $\\mathcal{E}<\\tau$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $\\mathcal{I}_{k}$ according to the\nSimple, Random or Greedy scheme of Table~\\ref{tab:Choosing_I_k}.\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}for $i=1,\\ldots,N\/2^{k+1}$ \n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}\\qquad{}set $B(k+1,i)=B(k,\\mathcal{I}_{k}(2i-1))\\cup B(k,\\mathcal{I}_{k}(2i))$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}\\qquad{}set $\\mathbb{W}_{k+1}^{i}=\\mathbb{W}_{k}^{\\mathcal{I}_{k}(2i-1)}\/2+\\mathbb{W}_{k}^{\\mathcal{I}_{k}(2i)}\/2$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $k=k+1$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $\\mathcal{E}=\\frac{\\left(\\overline{\\mathbb{W}}_{0}\\right)^{2}}{N^{-1}2^{k}\\sum_{i\\in[N\/2^{k}]}\\left(\\mathbb{W}_{k}^{i}\\right)^{2}}$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $K_{n-1}=k$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $\\alpha_{n-1}^{ij}=\\begin{cases}\n1\/2^{K_{n-1}}, & \\text{if 
}i\\sim j\\text{ according to \\ensuremath{\\left\\{ B(K_{n-1},i)\\right\\} _{i\\in[N\/2^{K_{n-1}}]}}}\\\\\n0, & \\text{otherwise}.\n\\end{cases}$\n\\par\\end{raggedright}\n\n\\protect\\caption{\\label{alg:generic adaptation}Adaptive selection of $\\alpha_{n-1}$}\n\\end{algorithm}\n\n\nFollowing the termination of the ``while'' loop, Algorithm~\\ref{alg:generic adaptation}\noutputs an integer $K_{n-1}$ and a partition $\\left\\{ B(K_{n-1},i)\\right\\} _{i\\in[N\/2^{K_{n-1}}]}$\nof $[N]$ into $N\/2^{K_{n-1}}$ subsets, each of cardinality $2^{K_{n-1}}$;\nthis partition specifies $\\alpha_{n-1}$ as a B-matrix and $2^{K_{n-1}}$\nis the degree of the corresponding graph (we keep track of $K_{n-1}$\nfor purposes of monitoring algorithm performance in Section~\\ref{sub:Numerical-illustrations}).\nProposition~\\ref{prop:Upon-termination-of} is a formal statement\nof its operation and completes our complexity considerations. The\nproof is given in the appendix. It can be checked by an inductive\nargument similar to the proof of Lemma~\\ref{lem:ARPF_A2}, also in\nthe appendix, that when $\\alpha_{n}$ is chosen according to Algorithm~\\ref{alg:generic adaptation}\ncombined with any of the adaptation rules in Table~\\ref{tab:Choosing_I_k},\n\\textbf{(A2)} is satisfied. 
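In code, the merging loop of Algorithm~\\ref{alg:generic adaptation} with the Simple rule reads roughly as follows (plain Python, our naming; the sampling step and the other rules are omitted). Upon termination the ESS-type quantity always meets the threshold, and the block weights agree with (\\ref{eq:W_k_explicit}):

```python
# Sketch (ours, illustrative) of the merging loop in the adaptive selection
# of alpha_{n-1}, using the Simple rule: starting from singleton blocks,
# adjacent pairs of blocks are merged, and their weights averaged, until
# the ESS-type quantity reaches the threshold tau.
def adapt_simple(w0, tau):
    """w0[i] = W_{n-1}^i * g_{n-1}(zeta_{n-1}^i); returns (k, blocks, W)."""
    N = len(w0)
    blocks = [[i] for i in range(N)]            # B(0, i) = {i}
    W = list(w0)                                # level-0 weights
    mean0 = sum(w0) / N                         # overall mean, fixed
    k = 0
    ess = mean0 ** 2 * len(W) / sum(w * w for w in W)
    while ess < tau and len(W) > 1:
        blocks = [blocks[2 * i] + blocks[2 * i + 1] for i in range(len(W) // 2)]
        W = [(W[2 * i] + W[2 * i + 1]) / 2 for i in range(len(W) // 2)]
        k += 1
        ess = mean0 ** 2 * len(W) / sum(w * w for w in W)
    return k, blocks, W

# One badly unbalanced weight forces full interaction (a single block):
k, blocks, W = adapt_simple([8.0, 0.0, 0.0, 0.0], tau=0.6)
assert (k, blocks, W) == (2, [[0, 1, 2, 3]], [2.0])   # W = 2^{-k} * sum(w0)
# Balanced weights need no interaction at all:
assert adapt_simple([1.0, 1.0, 1.0, 1.0], tau=0.6)[0] == 0
```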
\n\\begin{prop}\n\\label{prop:Upon-termination-of}The weights $\\left\\{ \\mathbb{W}_{k}^{i}\\right\\} _{i\\in[N\/2^{k}]}$\ncalculated in Algorithm~\\ref{alg:generic adaptation} obey the expression\n\\begin{equation}\n\\mathbb{W}_{k}^{i}=2^{-k}\\sum_{j\\in B(k,i)}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j}).\\label{eq:W_k_explicit}\n\\end{equation}\nMoreover, $\\alpha_{n-1}$ delivered by Algorithm~\\ref{alg:generic adaptation}\nis a B-matrix and when this procedure is used at line $(\\star)$ of\nAlgorithm~\\ref{alg:aSMC}, the weights calculated in Algorithm~\\ref{alg:aSMC}\nare given, for any $i\\in[N\/2^{K_{n-1}}]$, by\n\\begin{equation}\nW_{n}^{j}=\\mathbb{W}_{K_{n-1}}^{i},\\quad\\quad\\text{for all \\quad}j\\in B(K_{n-1},i)\\label{eq:W_equals_bb_W}\n\\end{equation}\nand $\\mathcal{E}_{n}^{N}\\geq\\tau$ always. The overall worst-case\ncomplexity of Algorithm~\\ref{alg:aSMC} is, for the three adaptation\nrules in Table~\\ref{tab:Choosing_I_k}, Simple: $O(N)$, Random:\n$O(N)$, and Greedy: $O(N\\log_{2}N)$. 
\\end{prop}
\\begin{table}[h]
\\begin{tabular}[c]{>{\\raggedright}p{1.2cm}l}
\\toprule 
\\addlinespace 
\\textbf{\\footnotesize{Simple}} & {\\footnotesize{set $\\mathcal{I}_{k}=(1,\\ldots,N\/2^{k})$}}\\tabularnewline\\addlinespace
\\midrule 
\\addlinespace 
\\textbf{\\footnotesize{Random}} & 
{\\footnotesize{if $k=0$, set $\\mathcal{I}_{k}$ to a random permutation of $[N\/2^{k}]$, otherwise $\\mathcal{I}_{k}=(1,\\ldots,N\/2^{k})$}}\\tabularnewline\\addlinespace
\\midrule 
\\addlinespace 
\\multirow{2}{1.2cm}{\\textbf{\\footnotesize{Greedy}}} &
{\\footnotesize{set $\\mathcal{I}_{k}$ such that}}\\tabularnewline &
{\\hspace{\\bigskipamount}\\footnotesize{$\\mathbb{W}_{k}^{\\mathcal{I}_{k}(1)}\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(3)}\\geq\\cdots\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(N\/2^{k}-1)}\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(N\/2^{k})}\\geq\\cdots\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(4)}\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(2)}$}}\\tabularnewline\\addlinespace \\bottomrule\\addlinespace
\\end{tabular}\\protect\\caption{\\label{tab:Choosing_I_k}Adaptation rules for choosing $\\mathcal{I}_{k}$}
\\end{table}



\\subsection{Numerical illustrations\\label{sub:Numerical-illustrations}}

We consider a stochastic volatility HMM: 
\\begin{eqnarray*}
 & & X_{0}\\sim\\mathcal{N}(0,1),\\quad X_{n}=aX_{n-1}+\\sigma V_{n},\\\\
 & & Y_{n}=\\varepsilon W_{n}\\exp(X_{n}\/2),
\\end{eqnarray*}
where $\\left\\{ V_{n}\\right\\} _{n\\in\\mathbb{N}}$ and $\\left\\{ W_{n}\\right\\} _{n\\in\\mathbb{N}}$
are sequences of mutually i.i.d.~$\\mathcal{N}(0,1)$ random variables,
$\\left|a\\right|<1$, and $\\sigma,\\varepsilon>0$. To study the behaviour
of the different adaptation rules in terms of effective sample size,
a sequence of $3\\cdot10^{4}$ observations was generated from the
model with $a=0.9$, $\\sigma=0.25$, and $\\varepsilon=0.1$.
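For reference, data from this model can be generated as follows (plain Python; the function name and seed are ours):

```python
import math
import random

# Simulate the stochastic volatility HMM above with the stated parameters
# a = 0.9, sigma = 0.25, epsilon = 0.1 (function name and seed are ours).
def simulate_sv(T, a=0.9, sigma=0.25, eps=0.1, seed=0):
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)                        # X_0 ~ N(0, 1)
    xs, ys = [], []
    for _ in range(T):
        x = a * x + sigma * rng.gauss(0.0, 1.0)    # X_n = a X_{n-1} + sigma V_n
        y = eps * rng.gauss(0.0, 1.0) * math.exp(x / 2.0)  # Y_n
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate_sv(30_000)
assert len(xs) == len(ys) == 30_000
```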
This
model obviously does not satisfy $\\mathbf{(C)}$, but $\\mathbf{(A1)}$
is satisfied as long as the observation record does not include the
value zero.

The ARPF and $\\alpha$SMC with the Simple, Random and Greedy adaptation
procedures specified in Section~\\ref{sub:Algorithms-with-adaptive}
were run on this data with $N=2^{10}$ and threshold $\\tau=0.6$.
To give some impression of ESS and interaction behaviour, Figure~\\ref{fig:ESS-and-interaction}
shows snapshots of $N_{n}^{\\text{eff}}$ and $K_{n}$ versus $n$,
for $575\\leq n\\leq825$. The sample path of $N_{n}^{\\text{eff}}$
for ARPF displays a familiar saw-tooth pattern, jumping back up to
$N=2^{10}$ when resampling, i.e.~when $K_{n}=10$. The Simple adaptation
scheme keeps $N_{n}^{\\text{eff}}$ just above the threshold $\\tau N=0.6\\times2^{10}$,
whereas the Greedy strategy is often able to keep $N_{n}^{\\text{eff}}$
well above this threshold, with smaller values of $K_{n}$, i.e.~with
a lower degree of interaction. The results for the Random adaptation
rule, not shown in this plot, were qualitatively similar to those
of the Greedy algorithm but slightly closer to the Simple adaptation. 

In order to examine the stationarity of the particle processes as
well as the statistical behavior of the degree of interaction over
time, Figure~\\ref{fig:histograms_and_E_vs_k} shows two histograms
of $K_{n}$ for each of the adaptation rules. One histogram is based
on the sample of $K_{n}$ where $100