\\section{Introduction}\n\nThe dynamics of a quantum system composed of $N$ particles in $\\bR^d$, $d=1,2,3$, interacting via a zero-range, two-body interaction is described by the formal Hamiltonian\n\\beq\\label{hamnd}\n\\mathcal H =- \\sum_{i=1}^N \\f{1}{2m_i} \\Delta_{\\xv_i} + \\sum_{\\underset{i < j}{i,j=1}}^N \\mu_{ij} \\,\\delta(\\xv_i - \\xv_j),\n\\eeq\nwhere $\\xv_i \\in \\bR^d$, $i=1,\\ldots,N$, is the coordinate of the $i$-th particle, $m_i$ is the corresponding mass, $\\Delta_{\\xv_i}$ is the Laplacian relative to $\\xv_i $, and $\\mu_{ij} \\in \\bR$ is the strength of the interaction between particles $i$ and $j$. To simplify the notation we set $\\hbar =1$. \nFormal Hamiltonians of the type \\eqref{hamnd} are widely used in physical applications. In particular they are relevant in the study of ultra-cold quantum gases, both in the bosonic and in the fermionic case, in the so-called unitary limit, i.e., for infinite two-body scattering length (see \\cite{bh},\\cite{wc1},\\cite{wc2},\\cite{cmp} and references therein).\n\nThe first step towards a rigorous approach to the analysis of the model is to give the mathematical definition of such a Hamiltonian as a self-adjoint operator on the appropriate $L^2$-space. One first notices that the interaction term in \\eqref{hamnd} is effective only on the hyperplanes $\\cup_{i<j} \\{\\xv_i = \\xv_j\\}$. It is then natural to consider the free Hamiltonian restricted to functions vanishing in a neighbourhood of these hyperplanes, denoted here by $\\dot{\\mathcal H}_0$, and to look for the self-adjoint extensions of this symmetric operator. For $N=2$ all the self-adjoint extensions of $\\dot{\\mathcal H}_0$ are known and each of them is characterized by a boundary condition satisfied by the wave function on the coincidence hyperplane. For $N>2$ the characterization of all possible self-adjoint extensions of $\\dot{\\mathcal H}_0$ is more involved. However, a class of extensions based on the analogy with the case $N=2$ can be explicitly constructed. 
More precisely, one considers the so-called Skornyakov-Ter-Martirosyan (STM) extension of $\\dot{\\mathcal H}_0$ which, roughly speaking, is a symmetric operator acting on functions $\\psi \\in L^2(\\bR^{3N}) \\cap H^2( \\bR^{3N} \\setminus \\cup_{i<j} \\{\\xv_i = \\xv_j\\})$ satisfying on each hyperplane a boundary condition of STM type, i.e., the natural generalization to the case $N>2$ of the condition that characterizes the two-body case. Unfortunately, unlike the two-body case, such an STM boundary condition does not necessarily define a self-adjoint operator. Indeed, for a system of three identical bosons it was shown in \\cite{fm} that the STM extension is not self-adjoint and all its self-adjoint extensions are unbounded from below owing to the presence of an infinite sequence of energy levels $E_k$ going to $-\\infty$ for $k \\rightarrow \\infty$. In \\cite{MM} this result was generalized to the case of three distinguishable particles with different masses. This kind of instability is known in the literature as the Thomas effect. \nIt should be stressed that the Thomas effect is strongly related to the well-known Efimov effect (see, e.g., \\cite{bh}, \\cite{adfgl}, \\cite{ahkw}) even if, to our knowledge, a rigorous mathematical investigation of this connection is still lacking. We also mention that if, instead of the STM boundary condition, one introduces a ``non-local'' boundary condition on the hyperplanes then it is possible to construct a positive Hamiltonian and to study its stability properties for $N$ large (see, e.g., \\cite{fs}). In this paper we do not consider Hamiltonians of this kind.\n \nIt is reasonable to expect that the Thomas effect does not occur if the Hilbert space of states is suitably restricted, e.g., by introducing symmetry constraints on the wave function. A remarkably important constraint is antisymmetry. In fact, a wave function that is antisymmetric under exchange of the coordinates of two particles necessarily vanishes at the coincidence points of those two particles, thus making their mutual zero-range interaction ineffective. 
Analogously, in a mixture of fermions of different species subject to pairwise zero-range interactions, fermions of the same species cannot ``feel'' the mutual zero-range interaction and therefore the interaction term in the Hamiltonian is less singular.\n\n\nIn this paper we consider the simplified model consisting of $N$ identical fermions, with unit mass, and a different particle with mass $m$, interacting with the fermions through a zero-range potential. For such a model only partial results are available and it is remarkable that they strongly depend on the parameters $N$ and $m$. \n\nConcerning the physical literature, we mention that for $N=2$ it is known (see, e.g., \\cite{bh} and references therein) that for $m < 0.0735 = (13.607)^{-1}$ the Thomas effect is present while for $m> 0.0735$ the STM extensions are expected to be bounded from below. More recently (see \\cite{cmp}), it was shown by means of analytical and numerical arguments that in the case $N=3$ the Thomas effect occurs for $m< 0.0747= (13.384)^{-1}$. This, in particular, indicates that for $0.0735 < m < 0.0747$ the occurrence of the Thomas effect depends on the number of fermions.\n\nOur approach is based on the renormalised quadratic form $\\mathcal F_{\\alpha}$ naturally associated with the model, where the parameter $\\alpha \\in \\bR$ is related to the inverse of the two-body scattering length (see Section \\ref{main results: sec}). The first question we address is a sufficient condition for the \\emph{stability} of the model. Our first main result (Theorem \\ref{clbou}) states that there exists $m^*(N)>0$ such that $\\mathcal F_{\\alpha}$ is closed and bounded from below if $m>m^*(N)$. This implies that $\\mathcal F_{\\alpha}$ is the quadratic form of a \\emph{unique} self-adjoint extension of the STM operator $H_{\\alpha}$, and this extension is bounded from below. It therefore describes a stable system, where the Thomas effect does not occur.\n\nSuch a critical mass was first conjectured in \\cite{m2} and is precisely the unique root of an explicit equation (see \\eqref{Lambda} and \\eqref{La=1} in Section \\ref{main results: sec}). It turns out that $m^*(N)$ is increasing with $N$ and that the condition $m>m^*(N)$ guarantees stability also in the limit of infinitely many fermions, provided that the mass of the extra particle scales as $m \\propto N$. \n\n\n\nThe second question we address is a sufficient condition for the \\emph{instability} of the model. 
This can be seen by plugging suitable trial functions into $\\mathcal F_{\\alpha}$. An attempt in this direction is in \\cite[Section 7]{DFT}, but with trial functions that do not satisfy the fermionic symmetry: thus, the result stated there on the unboundedness from below of the quadratic form for $m=1$ and $N$ sufficiently large cannot be considered valid. Our second main result (Theorem \\ref{ub1}) fills this gap: we prove that for any $N\\geq 2$ the quadratic form $\\mathcal F_{\\alpha}$ is unbounded from below for $m<m^*(2)$.\n\nSome comments on these results are in order. First, for $N=2$ they provide a complete picture: the system is stable for $m>m^*(2)$ and unstable for $m<m^*(2)$.\n\nSecond, for $N>2$ we expect the condition $m>m^*(N)$ to be far from optimal for stability. This is due to the crucial role played by the restriction to antisymmetric wave functions (see the discussion in Section \\ref{instability: sec}) so that the system might be stable also if our condition is violated. \n\nThird, the fact that the $(N\\!+\\!1)$-particle system is unstable at least when $m$ is below the \\emph{same} threshold $m^*(2)$ for the instability of the $(2\\!+\\!1)$-particle system has a rather natural interpretation: the instability of a subsystem made of two fermions plus the different particle is responsible for the instability of the whole system. \n\n\nWe want to mention that in the final stage of the preparation of this work we became aware of a recent paper \\cite{m4} where the case of two fermions plus a different particle is studied using the theory of self-adjoint extensions. We believe that a comparison with our methods and results would be of great interest for further developments of the subject.\n\nThe paper is organized as follows. In Section \\ref{main results: sec} we introduce the renormalised quadratic form $\\mathcal F_{\\alpha}$ and the STM extension $H_{\\alpha}$ and we formulate our main results. In Section \\ref{stability: sec} we give the proof of Theorem \\ref{clbou}. In Section \\ref{instability: sec} we give the proof of Theorem \\ref{ub1}. 
In the Appendix we briefly outline the formal renormalisation procedure to derive $\\form$.\n\n\\vs\n\nFor the convenience of the reader, we collect here some useful notation that will be used throughout the paper. We use the notation $ \\ldf(\\R^{d}) $ (resp. $ \\hf(\\R^{d}) $, $ \\hcf(\\R^{d}) $, etc.) for the space containing totally antisymmetric functions belonging to $ L^2(\\R^{d}) $ (resp. $ H^1(\\R^{d}) $, $ H^{-1\/2}(\\R^{d}) $, etc.). We often use the short-hand notation $\\|\\cdot\\|_{L^2}$, $\\|\\cdot\\|_{\\ldf}$, etc. for the associated norms $\\|\\cdot\\|_{L^2(\\bR^d)}$, $\\|\\cdot\\|_{\\ldf(\\R^{d})}$, etc.\n\\newline\nFor a vector $\\xv \\in \\bR^3$ we set $x = |\\xv|$. Moreover we define $ \\Kv := \\lf( \\kv_2,\\ldots , \\kv_{N-1} \\ri) $ and, for $ i = 1, \\ldots, N $,\n\t\\bdm\n\t\t\\breve{\\kkv}_i := \\lf( \\kv_1, \\ldots, \\kv_{i-1}, \\kv_{i+1}, \\ldots, \\kv_N \\ri).\n\t\\edm\nFor any $f \\in L^2(\\bR^d)$ the Fourier transform is defined by $ \\hat{f}(\\kv)=(2 \\pi)^{-d\/2} \\int_{\\bR^d} \\! 
\\diff \\xv \\, e^{-i \\kv \\cdot \\xv} f(\\xv)\\,.$\n\\newline\nThe functions $G_{\\lambda}$, $\\pot$, $L_{\\lambda}$, with $\\lambda >0$, and $D(\\Kv)$ are defined in \\eqref{Green function}, \\eqref{pot xi}, \\eqref{Lla} and \\eqref{Dk} respectively.\n\n\\vs\n\n\\section{Main results}\n\\label{main results: sec}\n\nIn this section we introduce the quadratic form $\\form$ and the STM extension $H_{\\alpha}$, and we formulate our main results.\n\n\n\\subsection{The quadratic form $\\form$}\n\n\n\\n\nFor a three-dimensional quantum system composed of $N$ identical fermions with mass one, plus a different particle with mass $m$, interacting via a two-body zero-range interaction, the formal Hamiltonian is \n\\beq\n\t\\label{formal Hamiltonian}\n\t\\tilde{H} : = - \\frac{1}{2 m}\\Delta_{\\xv_0} - \\f{1}{2} \\sum_{i =1}^N \\Delta_{\\xv_i} + \\mu \\sum_{i = 1}^N \\delta(\\xv_0 - \\xvi)\\, ,\n\\eeq\nwhere $ \\xv_i \\in \\RT $, $ i = 0, \\ldots, N $, and $\\mu \\in \\bR$. Introducing the centre of mass and relative coordinates \n\\beq\n\t\\lf\\{\n \t\t\\begin{array}{ll}\n\t\t\t\\mathbf{X} : =\\displaystyle\\frac{1}{m+N} \\Big( m\\xv_0 + \\sum_{i=1}^N \\xv_i \\Big), \t&\t\\mbox{}\t\\\\\n\t\t\t\\yv_i : = \\xv_0 - \\xv_i ,\t\t&\t\\mbox{for} \\:\\:\\: i = 1,\\ldots, N\\,,\n\t\t\\eay\n\t\\ri.\n\\eeq\none obtains\n\\beq\n\t\\tilde{H} = \\hcm + \\tx\\frac{m+1}{2m} H\\,, \n\\eeq\nwhere $ \\hcm : =- [2(m+N)]^{-1} \\Delta_{\\mathbf{X}} $ and\n\\beq\n\t\\label{cm Hamiltonian}\n\tH : = - \\sum_{i=1}^N \\Delta_{\\yv_i} - \\frac{2}{m+1} \\sum_{i < j} \\nabla_{\\yv_i} \\cdot \\nabla_{\\yv_j} + \\mu \\sum_{i = 1}^N \\delta(\\yvi)\\,,\n\\eeq\n$\\nabla_{\\yv_i} $ denoting the gradient with respect to $\\yv_i$. 
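As a quick sanity check of the reduction above (an illustration, not part of the original text): in Fourier variables the kinetic part of $H$ acts by multiplication by $\sum_i k_i^2 + \frac{2}{m+1}\sum_{i<j}\kv_i\cdot\kv_j$, whose matrix for each Cartesian component has eigenvalues $\frac{m}{m+1}$ (multiplicity $N-1$, eigenvectors $\mathbf e_i - \mathbf e_j$) and $\frac{m+N}{m+1}$ (eigenvector $(1,\ldots,1)$); in particular the symbol is positive definite for every $m>0$. A minimal self-contained sketch verifying this:

```python
def kinetic_matrix(N, m):
    # One Cartesian component of the form k -> sum_i k_i^2 + 2/(m+1) sum_{i<j} k_i . k_j:
    # diagonal entries 1, off-diagonal entries 1/(m+1).
    return [[1.0 if i == j else 1.0/(m + 1) for j in range(N)] for i in range(N)]

def matvec(A, v):
    return [sum(a*x for a, x in zip(row, v)) for row in A]

N, m = 5, 0.5
A = kinetic_matrix(N, m)

# eigenvector (1,...,1): eigenvalue (m+N)/(m+1)
v1 = [1.0]*N
assert all(abs(y - (m + N)/(m + 1)*x) < 1e-12 for x, y in zip(v1, matvec(A, v1)))

# eigenvector e_1 - e_2 (and its permutations): eigenvalue m/(m+1)
v2 = [1.0, -1.0] + [0.0]*(N - 2)
assert all(abs(y - m/(m + 1)*x) < 1e-12 for x, y in zip(v2, matvec(A, v2)))
```

The lower eigenvalue $\frac{m}{m+1}>0$ is what makes the free Hamiltonian below non-negative and the Green function $\green$ well defined for $\la>0$.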
\nWe also introduce the free Hamiltonian\n\\beq\n\t\\label{free Hamiltonian}\n\t\t\\dom(H_0) := H^2_{\\mathrm{f}}(\\bR^{3N}),\t\\hspace{1cm}\tH_0 : = - \\sum_{i=1}^N \\Delta_{\\yv_i} - \\frac{2}{m+1} \\sum_{i < j} \\nabla_{\\yv_i} \\cdot \\nabla_{\\yv_j}\\,,\n\\eeq\nand its restriction to functions vanishing in a neighbourhood of the hyperplanes \n$ \\{\\yv_i = 0 \\}$\n\\beq\n\t\\label{free2 Hamiltonian}\n\\dom(\\dot{H}_0) : = \\!\\bigg\\{ \\psi \\in \\hdf(\\R^{3N}) \\: \\bigg| \\! \\int_{\\R^3} \\!\\!\\diff \\kv_i \\: \\hat\\psi(\\kv_1, \\ldots, \\kv_N) = 0 \\,, \\;\\;\\;\\;i=1,\\ldots ,N \\bigg\\},\t\\hspace{1cm}\t\\dot{H}_0 : = H_0\\big|_{\\dom(\\dot{H}_0)}\\,.\n\\eeq\n\nIn order to give a rigorous meaning to the formal expression \\eqref{cm Hamiltonian} as a self-adjoint operator in $ \\ldf(\\R^{3N}) $, one can use Krein's theory to construct self-adjoint extensions of the operator \\eqref{free2 Hamiltonian} (see \\cite{m3}). Here, instead, we follow a different approach and we investigate the quadratic form associated with the expectation value $ \\bra{\\psi} H \\ket{\\psi} $. However, because of the singularity of the ``potential'' $ \\delta(\\yv_i) $ the quadratic form has to be defined via a renormalisation procedure in the Fourier space. 
The idea of the construction is given in the Appendix (see also \\cite{DFT},\\cite{FT}) and here we only give the final result.\n\n\\n\nLet us denote for any $\\lambda>0$\n\\beq\n\t\\label{Green function}\n\t\\green(\\kv_1, \\ldots, \\kv_N) : = \\bigg[ \\sum_{i=1}^N k_i^2 + \\frac{2}{m+1}\\sum_{i < j} \\kv_i \\cdot \\kv_j + \\la \\bigg]^{-1}\\,,\n\\eeq\n\\beq\\label{Lla}\nL_{\\lambda}(\\kv_1,\\ldots,\\kv_{N-1}) := 2 \\pi^2 \\bigg( \\frac{m(m+2)}{(m+1)^2} \\sum_{i = 1}^{N-1} k_i^2 + \\frac{2m}{(m+1)^2} \\sum_{i < j} \\kv_i \\cdot \\kv_j + \\la \\bigg)^{\\!1\/2}\\,,\n\\eeq\nand, for a charge $\\xi$ defined on $\\R^{3N-3}$, the potential\n\\beq\n\t\\label{pot xi}\n\t(\\widehat{\\pot \\xi})(\\kv_1, \\ldots, \\kv_N) : = \\green(\\kv_1, \\ldots, \\kv_N) \\sum_{i=1}^N (-1)^{i+1} \\hat{\\xi}(\\breve{\\kkv}_i)\\,.\n\\eeq\nThen the renormalised quadratic form is defined as\n\\bml{\n\t\\label{form domain}\n\t\\dom(\\form) : = \\Big\\{ \\psi \\in \\ldf(\\R^{3N}) \\: \\Big| \\: \\exists\\, \\xi \\in \\dom(\\qform) \\:\\; \\mathrm{s.t.} \\;\\: \\phi_{\\la} : = \\psi - \\pot \\xi \\in \\hf(\\R^{3N}) \\Big\\}\\,,\n}\n\\beq\n\t\\label{form def}\n\t\\form[\\psi] : = \\F_0[\\phi_{\\la}] + \\la \\, \\| \\phi_{\\la} \\|_{L^2}^2 - \\la\\, \\| \\psi \\|_{L^2}^2 + \\qform[\\xi]\\,,\n\\eeq\nwhere $\\lambda>0$, $ \\F_0[\\phi] : = \\bra{\\phi} H_0 \\ket{\\phi} $ and $\\Phi_{\\alpha}^{\\lambda}$ is the following form on the charge $ \\xi $ \n\\beq\\label{domqform}\n \\dom(\\qform) := \\hcfpiu (\\R^{3N - 3})\\,,\n\\eeq\n\\beq\n\t\\label{qform}\n\t\\qform[\\xi] : = \\dqform[\\xi] + \\oqform[\\xi]\\,,\n\\eeq\n\\beq\t\\label{dqform}\n\t\\dqform[\\xi] : = \\int_{\\R^{3N-3}} \\diff \\kv_1 \\cdots \\diff \\kv_{N-1} \\: \\big| \\hat\\xi(\\kv_1, \\ldots, \\kv_{N-1}) \\big|^2 \\lf[ \\al +\t\t\n\tL_{\\lambda}(\\kv_1,\\ldots,\\kv_{N-1}) \\ri],\n\\eeq\n\\bml{\n\t\\label{oqform}\n\t\\oqform[\\xi] : = (N-1) \\int_{\\R^{3N}} \\diff \\sv \\diff \\tv \\diff \\kv_2 \\cdots \\diff \\kv_{N-1} \\:\t \\hat\\xi^*(\\sv, \\kv_2, \\ldots, \\kv_{N-1}) \\hat\\xi (\\tv, \\kv_2, \\ldots, \\kv_{N-1}) \\cdot \t\\\\\n\t\\green(\\sv, \\tv, \\kv_2, \\ldots, \\kv_{N-1}) .\n}\n\n\\vspace{0.1cm}\n\n\\n\nWe notice that for $\\xi \\in \\dom(\\qform)$ we have $\\pot \\xi \\in \\ldf(\\R^{3N})$ and $\\pot \\xi \\notin \\hf(\\R^{3N})$. Therefore the decomposition in \\eqref{form domain} is meaningful. Moreover, as we shall see in Section \\ref{stability: sec}, the form $\\qform$ is well-defined on $\\dom (\\qform)$.\n\nIt is worth mentioning that the fermionic constraint implies not only that the wave function is totally antisymmetric but also that the form on the charges \\eqref{qform} differs from the bosonic case by a sign in front of the off-diagonal part \\eqref{oqform}. 
This fact results in a weaker effective interaction among the fermions and the stability problem is qualitatively different.\n\nIn order to formulate our main results on the form $\\form$ we first introduce our definition of stability parameter. Let us consider the following function\n\\beq\\label{Lambda}\n\\Lambda (m,N) : = 2\\pi^{-1} (N-1) (m+1)^2 \\bigg[ \\f{1}{ \\sqrt{m(m+2)}} - \\arcsin\\bigg(\\f{1}{m+1}\\bigg) \\bigg]\\,.\n\\eeq\nIt is easy to check that, for $N$ fixed, the function $\\Lambda(m,N)$ is decreasing in $m$ and \n\\beq\n\t\\lim_{m\\rightarrow 0} \\Lambda(m,N)=\\infty,\t\\hspace{1cm}\t\\lim_{m\\rightarrow \\infty} \\Lambda(m,N)=0\\,.\n\\eeq \nThen we have\n\n\\begin{definition}[Stability parameter $m^*(N)$]\n\t\\mbox{}\t\\\\ \nFor $N$ fixed, we define $m^*(N)$ as the unique solution to the equation \n\\beq\\label{La=1}\n\\Lambda(m,N)=1\\,.\n\\eeq\n\\end{definition}\n\\vspace{0.2cm}\n\\n\nIf we define $\\theta := \\arctan \\sqrt{1+\\f{2}{m}}$, where $\\theta \\in (\\f{\\pi}{4}, \\f{\\pi}{2})$, a direct computation shows that equation \\eqref{La=1} can be equivalently written as\n\\beq\\label{teta}\n \\cot 2 \\theta + 2 \\theta -\\f{\\pi}{2}\\bigg( 1 - \\f{1}{N-1} \\cos^2 2 \\theta \\bigg)=0\\,.\n\\eeq\nWe remark that \\eqref{teta} reduces for $N=2$ to the equation found for the critical mass in \\cite[p. 12871, note 10]{pc}. \nWe also notice that $m^*(N)$ is positive and increasing with $N$. In particular, the condition $\\Lambda(m,N) <1$ is equivalent to $m>m^*(N)$. This condition is crucial to guarantee closure and boundedness from below of the form $\\form$.\n\n\\begin{theorem}[Stability for $ m > m^*(N)$]\n\\label{clbou}\n\\mbox{}\t\\\\\nLet $N\\geq 2$ and $m>m^*(N)$. Then the quadratic form $\\form$ is closed and bounded from below. 
In particular, it is positive for $\\alpha \\geq 0$ and \n\\beq\\label{inff}\n\\form[\\psi] \\geq - \\f{\\alpha^2}{4 \\pi^4 \\big(1\\!-\\!\\Lambda(m,N) \\big)} \\, \\|\\psi\\|^2_{L^2},\t\\hspace{1cm} \\psi \\in \\dom(\\form)\\,,\n\\eeq\nfor $\\alpha <0$.\n\\end{theorem}\n\n\\n\nThe proof will be given in Section \\ref{stability: sec}. \n\nConcerning the instability problem, our result is the following.\n\n\\begin{theorem}[Instability for $ m < m^*(2) $]\n\\label{ub1}\n\\mbox{}\t\\\\\nLet $N\\geq 2$ and $m<m^*(2)$. Then the quadratic form $\\form$ is unbounded from below.\n\\end{theorem}\n\n\\n\nThe proof will be given in Section \\ref{instability: sec}. Notice that for $N=2$ the two theorems give a complete picture: the system is stable for $m>m^* (2)$ and unstable for $m<m^*(2)$. For $N>2$ the problem remains open for the intermediate masses $m^*(2)<m<m^*(N)$.\n\nWe conclude this section introducing the STM operator $H_{\\alpha}$, i.e., the symmetric operator naturally associated with the form $\\form$ (see \\cite{DFT}). It is defined by\n\\bml{\n\t\\label{domain Ha 2}\n\t\\dom(H_{\\al}) : = \\bigg\\{ \\psi \\in \\ldf(\\R^{3N}) \\: \\bigg| \\: \\exists \\, \\xi \\in H^{3\/2}_{\\mathrm{f}}(\\R^{3N-3}) \\;\\: \\mathrm{s.t.} \\;\\: \\phi_{\\la} := \\psi - \\pot \\xi \\in \\hdf(\\R^{3N})\\,,\t\\\\\n\t\\int_{\\R^3} \\diff \\kv_i \\: \\hat\\phi_{\\la}(\\kv_1, \\ldots, \\kv_N) = (\\mathcal A^{\\lambda}_{\\alpha,i} \\hat\\xi ) (\\breve{\\kkv}_i)\\,, \\;\\;\\;\\; i=1,\\ldots,N \\bigg\\}\\,,\n}\n\\beq\n\t\\label{action Ha}\n\t(H_{\\al} + \\la) \\psi : = (H_0 + \\la) \\phi_{\\la}\\,,\n\\eeq\nfor $\\lambda>0$\n and\n\\beq\t\\label{operator A}\n(\\mathcal A^{\\lambda}_{\\alpha,i} \\hat\\xi ) (\\breve{\\kkv}_i) \n : = (-1)^{i+1} \\bigg[ \\alpha + L_{\\lambda}(\\breve{\\kkv}_i) \\bigg] \\hat{\\xi}(\\breve{\\kkv}_i) - \\!\\! \\sum_{j=1, j\\neq i}^N \\!\\! (-1)^{j+1} \\!\\! \\int_{\\R^3} \\diff \\kv_i \\: G_{\\lambda} (\\kv_1,\\ldots, \\kv_N) \\, \\hat{\\xi}(\\breve{\\kkv}_j) \\,.\n\\eeq\nThe last equality in \\eqref{domain Ha 2} should be understood as the boundary condition satisfied by any $\\psi \\in \\dom(H_{\\alpha})$. In fact, \nby a straightforward computation, one verifies that such equality implies the following asymptotic condition for $R\\rightarrow \\infty$ \n\\beq\n \\int_{k_i<R} \\diff \\kv_i \\: \\hat\\psi(\\kv_1, \\ldots, \\kv_N) = (-1)^{i+1} \\big( 4\\pi R + \\alpha \\big)\\, \\hat{\\xi}(\\breve{\\kkv}_i) + o(1)\\,,\n\\eeq\nwhich is the Fourier transcription of the STM boundary condition on the hyperplane $\\{\\yv_i = 0\\}$.\n\nSumming up, we have proved the following alternatives.\n\\begin{itemize}\n\t\\item If $N \\geq 2$ and $ m > m^*(N)$, then $\\form$ defines a unique self-adjoint and bounded from below extension $\\hat{H}_{\\alpha}$ of the operator $H_{\\alpha}$. In particular, $\\hat{H}_{\\alpha}$ is positive for $\\alpha \\geq 0$ and \n\\beq\\label{infsp}\n\\inf \\sigma(\\hat{H}_{\\alpha}) \\geq - \\f{\\alpha^2}{4 \\pi^4 \\big(1 - \\Lambda(m,N) \\big)}, \t\\hspace{1cm}\t\\text{for} \\;\\;\\;\\alpha <0.\n\\eeq \n\t\\item If $N \\geq 2$ and $m<m^*(2)$, then the quadratic form $\\form$ is unbounded from below and the system is unstable.\n\\end{itemize}\n\n\\n\nIn the rest of this section we consider in more detail the case $N=2$. 
In particular we want to give the explicit characterization of our Hamiltonian $\\hat{H}_{\\alpha}$ in order to make more transparent a possible comparison of our result with the result in \\cite{m4}.\n\n\\n\nIn Section 3 we shall prove the estimate \\eqref{cC}, which in particular implies that the form $\\Phi^{\\lambda}_{\\alpha}$, defined on $H^{1\/2}(\\R^3)$, is closed and positive for $\\lambda$ sufficiently large. Therefore it defines a positive, self-adjoint operator $\\Gamma^{\\lambda}_{\\alpha}$ with domain $\\dom (\\Gamma^{\\lambda}_{\\alpha})$ in $L^2(\\R^3)$, explicitly given by\n\\bdn\\label{opGa}\n\\dom(\\Gamma^{\\lambda}_{\\alpha})= \\!\\lf\\{ \\xi \\in H^{1\/2}(\\R^3)\\; \\Big| \\; \\Gamma^{\\lambda}_{\\alpha} \\hat\\xi \\in L^2(\\R^3) \\ri\\},\t\\nonumber\t\\\\\n\\lf(\\Gamma^{\\lambda}_{\\alpha} \\hat\\xi \\ri)(\\qv)\\!= [\\alpha + L_{\\lambda}(\\qv) ] \\hat\\xi(\\qv) + \\!\\!\\int_{\\R^3}\\!\\!\\! \\diff \\pv \\, G_{\\lambda}(\\pv, \\qv) \\hat\\xi(\\pv).\n\\edn\nExploiting this fact and following the same lines as \\cite[Section 5]{DFT}, we obtain\n\\bml{\n\t\\label{domain Ha3}\n\t\\dom(\\hat{H}_{\\al}) = \\bigg\\{ \\psi \\in \\ldf(\\R^{6}) \\: \\bigg| \\: \\exists \\, \\xi \\in \\dom(\\Gamma^{\\lambda}_{\\alpha}) \\: \\;\\mathrm{s.t.} \\;\\: \\phi_{\\la} := \\psi - \\pot \\xi \\in \\hdf(\\R^{6}),\t\\\\\n\t\\int_{\\R^3} \\diff \\pv \\: \\hat\\phi_{\\la}(\\pv, \\qv) = (\\Gamma^{\\lambda}_{\\alpha} \\hat\\xi ) (\\qv) \\bigg\\}\\,,\n} \n\\beq\n\t\\label{action Ha3}\n\t(\\hat{H}_{\\al} + \\la) \\psi = (H_0 + \\la) \\phi_{\\la}.\n\\eeq\nWe notice that the above operator differs from the operator \\eqref{domain Ha 2}, \\eqref{action Ha} only in a larger class of admissible charges $ \\xi $, i.e., the domain $ \\dom(\\Gamma^{\\lambda}_{\\alpha}) $ strictly contains $ H^{3\/2}(\\R^3) $. 
We also underline that the boundary condition satisfied on the hyperplanes by an element of \\eqref{domain Ha3} is the standard STM boundary condition.\n\n\n\n\n\\section{Closure and boundedness from below of $ \\form $}\n\\label{stability: sec}\n\nThe proof of Theorem \\ref{clbou} is based on a careful estimate from below and from above of the form $\\qform$ on the charge $\\xi$. \nIf $ N > 2 $ (with an obvious modification in the case $ N = 2 $) we rewrite \nboth the diagonal and the off-diagonal terms of $\\Phi_{\\alpha}^{\\lambda}$, defined in \\eqref{dqform}, \\eqref{oqform}, in a more manageable form (see \\cite{m3}), by introducing the change of coordinates\n\\bdm\n\t\\sv \\longrightarrow \\siv : = \\sv + \\frac{1}{m+2} \\sum_{i=2}^{N-1} \\kv_i, \t\\hspace{1cm} \t\\tv \\longrightarrow \\tav : = \\tv + \\frac{1}{m+2} \\sum_{i=2}^{N-1} \\kv_i.\t\t\n\\edm\nThen\n\\beq\n\t\\label{dqform 1}\n\t\\dqform[\\xi] = \\alpha \\, \\| \\xi\\|_{L^2(\\R^{3N-3})\n\t}^2 +\t\n\t2 \\pi^2 \\int_{\\R^{3N-3}} \\diff \\sigma \\diff \\Kv \\:\n\t\\big| \\xit(\\siv, \\Kv\n\t) \\big|^2 \\sqrt{\\tx\\frac{m(m+2)}{(m+1)^2} \\sigma^2 + \\dk+ \\la}\\,,\n\\eeq\n\\beq\n\t\\label{oqform 1}\n\t\\oqform[\\xi] = (N-1) \\int_{\\R^{3N}} \\diff \\siv \\diff \\tav \\diff \\Kv \\:\n\t\\xit^*(\\siv, \\Kv\n\t) \\: \\xit (\\tav, \\Kv\n\t)\n\t\\lf( \\sigma^2 + \\tau^2 + \\tx\\frac{2}{m+1} \\siv \\cdot \\tav + \\dk + \\la \\ri)^{-1}\\,,\n\\eeq\nwhere\n\\beq\n\\Kv := \\kv_2, \\ldots , \\kv_{N-1}\\,,\n\\eeq\n\\beq\n\t\\label{xit}\n\t\\xit \\lf(\\siv, \\Kv\n\t\\ri) := \\hat\\xi\\bigg(\\siv - \\frac{1}{m+2} \\sum_{i=2}^{N-1} \\kv_i, \\kv_2, \\ldots, \\kv_{N-1}\n\t\\bigg)\\,,\n\\eeq\n\\beq\n\t\\label{Dk}\n\t\\dk : =\n\t\\frac{m}{(m+1)(m+2)} \\bigg( (m+3) \\sum_{i=2}^{N-1} k_i^2 + 2 \\sum_{i < j} \\kv_i \\cdot \\kv_j \\bigg)\\,.\n\\eeq\nNotice that $\\dk$ satisfies the bound (see \\eqref{el12} in the following)\n\\beq\n\t\\label{estimate Dk}\n\t\\frac{m}{m+1} \\sum_{i=2}^{N-1} k_i^2 \\leq \\dk \\leq 
\\frac{m(m+N+1)}{(m+1)(m+2)} \\sum_{i=2}^{N-1} k_i^2\\,.\n\\eeq\nLast, setting \n\\beq\n\t\\siv := \\sqrt{\\dk + \\la} \\: \\pv,\t\\hspace{1cm}\t\\tav : = \\sqrt{\\dk + \\la} \\: \\qv\\,,\n\\eeq\nand\n\\beq\n\t\\label{charge Qk}\n\tQ_{\\kkv}(\\pv) : = (\\dk + \\la)^{3\/4} \\: \\xit \\lf(\\sqrt{\\dk + \\la} \\: \\pv, \\Kv\n\t\\ri)\\,,\n\\eeq\nwe obtain\n\\beq\n\t\\label{qform 1}\n\t\\Phi_0^{\\lambda}[\\xi] = \\qform[\\xi] - \\alpha \\| \\xi \\|_{L^2(\\R^{3N-3})}^2 = \\int_{\\R^{3N-6}} \\diff \\K\n\t\\sqrt{\\dk \\!+\\! \\la} \\;\\, F_1 \\! \\lf[ Q_{\\kkv} \\ri]\\,,\n\\eeq\nwhere for any $\\zeta \\geq 0$ we introduced the quadratic form in $L^2(\\R^3)$\n\\beq\n\t\\label{fform}\n\t\\dom(F_{\\zeta})\\!:=\\!\\dom(F_1)\\!= \\! \\bigg\\{ \\! f \\! \\in \\! L^2(\\R^3)\\,\\bigg|\\! \\int_{\\R^3} \\!\\! \\diff \\pv \\sqrt{p^2 \\!+\\!1} \\, |f(\\pv)|^2 <\\infty \\bigg\\},\t\\quad\tF_{\\zeta} \\lf[f\\ri\n\t: = \\dform_{\\zeta} \\lf[f\\ri]\n\t+ \\oform_{\\zeta}\\lf[f\\ri]\n\\eeq\nand \n\\beq\n\t\\label{dform}\n\t\\dform_{\\zeta} \\lf[f \\ri]\n\t= 2 \\pi^2 \\int_{\\R^{3}} \\diff \\pv \\sqrt{\\tx\\frac{m(m+2)}{(m+1)^2} p^2 + \\zeta} \\: \\lf| f\n\t(\\pv) \\ri|^2 \\,,\n\\eeq\n\\beq\n\t\\label{oform}\n\t\\oform_{\\zeta} \\lf[f \\ri]\n\t= (N-1) \\int_{\\R^{6}} \\diff \\pv \\diff \\qv \\,\t\\frac{f\n\t^*(\\pv) f\n\t(\\qv)}{ p^2 + q^2 + \\tx\\frac{2}{m+1} \\pv \\cdot \\qv + \\zeta}\\, .\n\\eeq\n\n\\vspace{0.3cm}\n\\n\nUsing the representation \\eqref{qform 1}, \\eqref{fform}, \\eqref{dform}, \\eqref{oform} we obtain the following estimate for $\\Phi_0^{\\lambda}$, which is the crucial ingredient for the proof of Theorem \\ref{clbou}. 
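As an aside, since $\Lambda(m,N)$ enters the bounds below, it is useful to note how the stability parameter can be evaluated in practice (an illustration, not part of the paper): $\Lambda(\cdot\,,N)$ is strictly decreasing, with limits $+\infty$ at $0$ and $0$ at $+\infty$, so the root $m^*(N)$ of \eqref{La=1} is obtained by bisection from the explicit expression \eqref{Lambda}, recovering $m^*(2)\approx 0.0735 = (13.607)^{-1}$ as quoted in the Introduction, as well as the monotonicity of $m^*(N)$ in $N$:

```python
import math

def Lambda(m, N):
    # Expression (Lambda): (2/pi)(N-1)(m+1)^2 [ 1/sqrt(m(m+2)) - arcsin(1/(m+1)) ]
    return (2/math.pi)*(N - 1)*(m + 1)**2 * (
        1/math.sqrt(m*(m + 2)) - math.asin(1/(m + 1)))

def m_star(N, tol=1e-12):
    # Lambda(., N) is strictly decreasing, so Lambda(m, N) = 1 has a unique
    # root, bracketed here and found by bisection.
    lo, hi = 1e-9, 100.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if Lambda(mid, N) > 1:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

assert abs(m_star(2) - 0.0735) < 5e-4      # critical mass for 2 fermions + 1 particle
assert m_star(2) < m_star(3) < m_star(4)   # m*(N) is increasing in N
```

For large $m$ one has $\Lambda(m,N) \approx \frac{2(N-1)}{3\pi(m+1)}$, so $m^*(N)$ grows linearly in $N$, consistently with the scaling $m \propto N$ mentioned in the Introduction.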
\n\n\n\\begin{proposition}[Upper and lower bounds for $ \\Phi_0^{\\lambda} $]\n\\label{stiqform}\n\\mbox{}\t\\\\\nFor any $\\xi \\in \\dom(\\qform)$ we have\n\\beq\\label{abqform}\n\\Big(1-\\Lambda(m,N) \\Big)\\; \\Phi_{0,\\lambda}^{\\mathrm{diag}} [\\xi] \\; \\leq \\; \\Phi_0^{\\lambda} [\\xi] \\; \\leq \\; \\Big( 1 + \\Gamma(m,N)\\Big) \\;\\Phi_{0,\\lambda}^{\\mathrm{diag}} [\\xi]\\,, \n\\eeq\nwhere\n\\beq\\label{Gamma}\n\\Gamma(m,N) : = \\f{(N\\!-\\!1) (m\\!+\\!1)^2}{\\sqrt{m(m\\!+\\!2)}} \\arcsin \\lf( \\f{1}{m\\!+\\!1} \\ri)\\,.\n\\eeq\n\\end{proposition}\n\n\n\\vspace{0.3cm}\n\nThe proof is based on a careful analysis of the form $F_{\\zeta}$, $\\zeta \\in [0,1]$, reduced to each subspace with fixed angular momentum $l$ and it is postponed to the end of this section. First we introduce some useful notation and we prove some preliminary lemmas.\n\nFor any $f\\in L^2(\\R^3)$ we consider the expansion\n\\beq\\label{exsh}\nf(\\pv) = \\sum_{l=0}^{\\infty}\\sum_{m=-l}^l f_{lm}(p) Y_{l}^m(\\theta_p, \\phi_p)\\,,\n\\eeq\nwhere $\\pv=(p, \\theta_p,\\phi_p)$ in spherical coordinates and $Y_{l}^m$ denotes the spherical harmonics of order $l,m$. 
We notice that $f\\in \\dom(F_1)$ is equivalent to\n\\beq\n\\sum_{l=0}^{\\infty}\\sum_{m=-l}^l \\int_0^{\\infty}\\!\\!dp\\, p^2 \\sqrt{p^2 +1}\\, \\lf|f_{lm}(p)\\ri|^2 <\\infty\\,.\n\\eeq\nMoreover, we denote by $P_l$ the Legendre polynomial of order $l=0,1,\\ldots$ explicitly given by\n\\beq\\label{leg}\nP_l(y) = \\f{1}{2^l l!} \\f{d^l}{dy^l} (y^2 -1)^l \\,,\t\\hspace{1cm}\t y \\in [-1,1]\\,.\n\\eeq\n In the first lemma we decompose $F_{\\zeta}$ in each subspace of fixed angular momentum $l$.\n\n\\begin{lemma}[Decomposition of $ F_{\\zeta} $]\n\\label{lemma0}\n\\mbox{}\t\\\\\nFor $f\\in \\dom(F_{1})$ we have\n\\beq\t\\label{Fzl}\nF_{\\zeta}\\lf[f \\ri] \\; =\\; \t\\sum_{l=0}^{\\infty}\\sum_{m=-l}^l G_{\\zeta,l} \\lf[ f_{lm} \\ri] \\; =: \\; \\sum_{l=0}^{\\infty}\\sum_{m=-l}^l \\lf( G^{\\mathrm{diag}}_{\\zeta} \\lf[f_{lm} \\ri] + G^{\\mathrm{off}}_{\\zeta,l} \\lf[f_{lm} \\ri] \\ri)\\,,\n\\eeq \nwhere for $g \\in L^2((0,\\infty), \\, p^2 \\sqrt{p^2+1}\\,dp)$\n\\begin{eqnarray}\n\t\\label{G diag off}\n&&G^{\\mathrm{diag}}_{\\zeta} \\lf[ g \\ri] : = 2 \\pi^2 \\!\\! \\int_0^{\\infty} \\!\\!\\! dp \\, p^2 \\sqrt{\\frac{m(m+2)}{(m+1)^2} p^2 +\\zeta}\\, \\, |g(p)|^2\\,, \\\\\n&& G^{\\mathrm{off}}_{\\zeta,l} \\lf[ g\\ri] : = 2\\pi (N-1) \\!\\int_0^{\\infty} \\!\\!\\!\\! \\!dp\\!\\!\\int_0^{\\infty}\\!\\!\\!\\!\\! dq\\, p^2 g^*(p) \\, q^2 g(q) \\!\\int_{-1}^1 \\!\\!\\!\\! dy\\, \\frac{P_l(y)}{p^2 + q^2 +\\frac{2}{m+1} p q y +\\zeta}\\,.\n\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\nFor a given $f\\in \\dom(F_{1})$ we consider the expansion \\eqref{exsh}. From \\eqref{dform} we see that $\\dform_{\\zeta}[f]= \\sum_{l=0}^{\\infty} \\sum_{m=-l}^l G^{\\text{diag}}_{\\zeta}[f_{lm}]$. 
Concerning the off-diagonal term \\eqref{oform}, \nwe denote by $\\theta_{pq}$ the angle between the vectors $\\pv$ and $\\qv$ and we consider the following expansion in Legendre polynomials:\n\\begin{eqnarray}\\label{exle}\n&&\\f{1}{p^2 +q^2 + \\f{2}{m+1} pq \\cos \\theta_{pq} +\\zeta} = \\sum_{l=0}^{\\infty}\\f{2l+1}{2} \\int_{-1}^1 \\!\\!\\!\\! dy\\, \\frac{P_l(y)}{p^2 + q^2 +\\frac{2}{m+1} p q y +\\zeta} \\,\\, P_l(\\cos \\theta_{pq})\\nonumber\\\\\n&&= \\sum_{l=0}^{\\infty} 2\\pi \\int_{-1}^1 \\!\\!\\!\\! dy\\, \\frac{P_l(y)}{p^2 + q^2 +\\frac{2}{m+1} p q y +\\zeta} \\sum_{m=-l}^{l} Y_{l}^{m *} (\\theta_p,\\phi_p) Y_{l}^m(\\theta_q,\\phi_q)\\,.\n\\end{eqnarray}\nIn the last line we used the addition formula for spherical harmonics (see, e.g., \\cite[Eq. (8.814)]{GR}). From (\\ref{exle}) we obtain $\\oform_{\\zeta}[f]= \\sum_{l=0}^{\\infty} \\sum_{m=-l}^l G^{\\text{off}}_{\\zeta,l}[f_{lm}]$.\n\\end{proof}\n\n\nIn the next lemma we give a new representation of $G^{\\text{off}}_{\\zeta,l}$ which is particularly useful to control $G^{\\text{off}}_{\\zeta,l}$ in terms of $G^{\\text{off}}_{0,l}$ for any $\\zeta>0$. \n\n\\begin{lemma}[Estimates for $ G^{\\mathrm{off}}_{\\zeta,l}$]\n\\label{lemma01}\n\\mbox{}\t\\\\\nThe form $G^{\\mathrm{off}}_{\\zeta,l}$ can be written as\n\\begin{eqnarray}\\label{rapof}\n&&G^{\\mathrm{off}}_{\\zeta,l} \\lf[ g\\ri] = \\sum_{k=0}^{\\infty} B_{l,k} \\int_0^{\\infty}\\!\\!\\! d\\nu \\, \\nu^k e^{-\\zeta \\nu} \\lf| \\int_0^{\\infty}\\!\\!\\! dp \\, g(p) \\,p^{2+k} e^{-\\nu p^2} \\ri|^{2}\\,,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\\label{Blk}\n&&B_{l,k}=\n\t\t\\begin{cases}\n\t\t\t \\f{2\\pi (N-1)}{2^l l! \\,k!} \\lf(\\! \\frac{-2}{m\\!+ \\!1} \\!\\ri)^{\\!k} \\disp\\int_{-1}^1\\!\\!\\! 
dy \\, (1-y^2)^l \\frac{d^l}{dy^l} y^k \t& \\mbox{if} \\:\\:\\:\\: l \\leq k\\,,\t\\\\\n\t\t\t0\t&\t\\mbox{otherwise}\\,.\n\t\t\\end{cases}\n\\end{eqnarray}\n\n\\n \nMoreover for any $\\zeta > 0$ we have\n\\begin{eqnarray}\\label{stle}\n&&0\\leq G^{\\mathrm{off}}_{\\zeta,l} \\lf[ g\\ri] \\leq G^{\\mathrm{off}}_{0,l} \\lf[ g\\ri] \\hspace{1cm} \\text{for $l$ even}\\,,\\\\\n&&\\nonumber\\\\\n&&G^{\\mathrm{off}}_{0,l} \\lf[ g\\ri] \\leq G^{\\mathrm{off}}_{\\zeta,l} \\lf[ g\\ri] \\leq 0 \\hspace{1cm} \\text{for $l$ odd}\\,.\\label{stlo}\n\\end{eqnarray}\n\\end{lemma}\n\n\\begin{proof}\nUsing the expansion\n\\beq\n\\f{1}{p^2+q^2+ \\f{2}{m+1} pqy + \\zeta}=\n \\f{1}{p^2+q^2+\\zeta} \n\\sum_{k=0}^{\\infty} \\lf(\\! \\f{-2}{m+\\!1}\\, \\f{pqy}{p^2+q^2 +\\zeta} \\!\\ri)^{\\!\\!k}\n\\eeq\nand formula (\\ref{leg}), we obtain\n\\beq\\label{exse1}\nG^{\\mathrm{off}}_{\\zeta,l}\\lf[ g \\ri]= \\f{2\\pi (N-1)}{ 2^l l! } \\!\\sum_{k=0}^{\\infty} \\lf(\\! \\f{-2}{m+1} \\!\\ri)^{\\!\\!k} \\!\\! \\int_0^{\\infty}\\!\\!\\! \\!dp \\!\\int_0^{\\infty}\\!\\!\\!\\! dq\\, \\f{p^{2+k} g^*(p) q^{2+k} g(q) }{(p^2+q^2+\\zeta)^{k+1}}\\!\n\\int_{-1}^1\\!\\!\\!\\!dy\\, y^k \\f{d^l}{dy^l} (y^2-1)^l \\,.\n\\eeq\nIntegrating by parts $l$ times we find\n\\begin{eqnarray}\\label{exse2}\n&&G^{\\mathrm{off}}_{\\zeta,l}\\lf[ g \\ri]=\\sum_{k=0}^{\\infty} B_{l,k} \\, k! \\!\\int_0^{\\infty}\\!\\!\\! \\!dp \\!\\int_0^{\\infty}\\!\\!\\!\\! dq\\, \\f{p^{2+k} g^*(p) q^{2+k} g(q) }{(p^2+q^2+\\zeta)^{k+1}}\\,.\n\\end{eqnarray}\nFinally we use the identity\n\\beq\\label{iden}\n\\f{k!}{(p^2+q^2+\\zeta)^{k+1}} = \\int_0^{\\infty}\\!\\!\\!\\! d\\nu \\, \\nu^{k} \\,e^{-(p^2+q^2+\\zeta)\\nu}\n\\eeq\nin (\\ref{exse2}) and we obtain (\\ref{rapof}). Let us fix $l$ even. Then the integral in (\\ref{Blk}) is different from zero only if $k$ is even and this implies that $G^{\\mathrm{off}}_{\\zeta,l}$ is positive and the estimate (\\ref{stle}) holds. 
Analogously, when $l$ is odd the integral in (\\ref{Blk}) is different from zero only if $k$ is odd and therefore $G^{\\mathrm{off}}_{\\zeta,l}$ is negative and (\\ref{stlo}) holds.\n\\end{proof}\n\n\n\n\n\n\n\n\\noindent\nNow we study the form $G_{0,l} = G^{\\text{diag}}_{0} + G^{\\text{off}}_{0,l}$ and we show that it can be diagonalized for each $l$. \n\\begin{lemma}[Diagonalization of $G_{0,l}$]\n\\label{lemma1}\n\t\\mbox{}\t\\\\\n\t For any $g \\in L^2((0,\\infty), p^2\\sqrt{p^2+1}\\, dp)$ we have\n\\begin{eqnarray} \\label{formula0}\n&&G^{\\mathrm{diag}}_0 [g] = 2 \\pi^2 \\f{\\sqrt{m(m+2)}}{m+1} \\int_{\\R} \\diff k \\, \\big| g^{\\sharp} (k) \\big|^2\\,, \\\\\n&&G^{\\mathrm{off}}_{0,l}[g] = \\int_{\\R} \\diff k \\, S_l (k) \\big| g^{\\sharp} (k) \\big|^2\\,,\n\\label{formula1} \n\\end{eqnarray}\nwhere\n\\beq \\label{formula2}\n g^{\\sharp} (k) : = \\frac{1}{\\sqrt{2\\pi}} \\int_{\\R} \\diff x \\, e^{-ikx}\\, e^{2x} \\, g(e^x)\n\\eeq\nand \n\\beq \\label{formula3}\nS_l (k) = 2\\pi^2 (N-1) \\int_{-1}^1 \\diff y \\, P_l (y) \n\\frac{\\sinh \\lf( k \\arccos \\frac{y}{m+1} \\ri) }{ \\sin \\lf( \\arccos \\frac{y}{m+1} \\ri) \\, \\sinh(\\pi k)}\\,.\n\\eeq\n\\end{lemma}\n\\begin{proof}\nThe proof of (\\ref{formula0}) is straightforward. To prove (\\ref{formula1}) we first introduce the new integration variables\n $p=e^{x_1}$ and $q=e^{x_2}$, so that the form reads\n\\begin{equation}\\label{for2}\n\\begin{split}\nG^{\\mathrm{off}}_{0,l} \\lf[g \\ri]\\;&=\\;2 \\pi(N-1) \n\\int_{\\R} dx_1 dx_2\\,\\,e^{3 x_1}{g}^* (e^{x_1})e^{3 x_2} g(e^{x_2})\n\\int_{-1}^1 dy \\f{ P_l(y) }{ e^{2 x_1} + e^{2x_2} +\\f{2y}{m+1} e^{x_1 +x_2} } \\\\\n& = \\; \\pi(N-1) \n\\int_{\\R} dx_1 dx_2\\,\\,e^{2 x_1}{g}^* (e^{x_1})e^{2 x_2} g(e^{x_2})\n\\int_{-1}^1 dy \\f{ P_l(y) }{ \\cosh (x_1-x_2) +\\f{y}{m+1} }\\,.\n\\end{split}\n\\end{equation}\nThe kernel in (\\ref{for2}) is a convolution kernel and therefore it can be diagonalized by means of the Fourier transform. 
Using the explicit Fourier transform of the kernel (see, e.g., \\cite{erdely}) we finally arrive at \\eqref{formula1}.\n\\end{proof}\n\n\nOwing to the previous lemma, the problem of finding bounds for the form $G^{\\text{off}}_{0,l} $ is reduced to finding \nbounds for the function $S_l(k)$.\nTaking into account the identity $\\arccos z = \\f{\\pi}{2} - \\arcsin z$ and the parity of $P_l$, we represent $S_l(k)$ as\n\\begin{equation}\\label{reps}\nS_l(k) =\n\\begin{cases}\n \\displaystyle - \\pi^2 (N-1) \n\\int_{-1}^1 dy \\, P_l(y) \\f{\\sinh \\lf( k \\arcsin \\f{ y }{m+1}\\ri) }{ \\cos \\lf( \\arcsin \\f{ y }{m+1}\\ri)\\sinh \\lf(\\f{\\pi}{2} k\\ri) \\, } & \\text{for} \\; l \\text{ odd}, \\\\\n \\displaystyle \\pi^2(N-1) \n\\int_{-1}^1 dy \\, P_l(y) \\f{\\cosh \\lf( k \\arcsin \\f{ y }{m+1}\\ri) }{ \\cos \\lf( \\arcsin \\f{ y }{m+1}\\ri)\\cosh \\lf(\\f{\\pi}{2} k\\ri) \\, } & \\text{for} \\; l \\text{ even}\\,.\n\\end{cases}\n\\end{equation}\nIt is evident from this representation that $S_l (k)$ is for any $l \\geq 0 $ an even $C^{\\infty}$-function of $k$ with $\\lim_{k \\rightarrow \\infty} S_l (k)=0$. \nBefore discussing upper and lower bound of $S_l (k) $, we shall prove the following elementary lemma.\n\\begin{lemma}[Taylor expansions of $ \\wt S^{\\mathrm{o}}_k $ and $ \\wt S^{\\mathrm{e}}_k $]\n\\label{forever}\n\t\\mbox{}\t\\\\\nFor any fixed $k\\geq 0$ the following functions \n\\beq\\label{s0y}\n\\wt S^{\\mathrm{o}}_k (y) = \\f{\\sinh \\lf( k \\arcsin \\f{ y }{m+1}\\ri) }{ \\cos \\lf( \\arcsin \\f{ y }{m+1}\\ri) \\, } \\,,\t\\hspace{1cm}\t \\wt S^{\\mathrm{e}}_k (y) = \\f{\\cosh \\lf( k \\arcsin \\f{ y }{m+1}\\ri) }{ \\cos \\lf( \\arcsin \\f{ y }{m+1}\\ri) \\, }\n\\eeq\nhave a Taylor expansion in the variable $y \\in [-1,1]$ with positive coefficients.\n\\end{lemma}\n\\begin{proof}\nLet $\\cP$ be the set of functions whose Taylor expansion has positive coefficients.\nFirst note that $\\arcsin y$, $\\sinh y$ and $\\cosh y$ belong to $\\cP$. 
\nThe derivative maps $\\cP$ into itself and therefore \n\\[\n\\frac{d}{dy} \\arcsin y = \\frac{1}{\\sqrt{1-y^2} } = \\frac{1}{\\cos \\arcsin y} \\in \\cP\\,.\n\\]\nMoreover, $\\cP$ is invariant under dilations, multiplications and compositions of functions in $\\cP$.\nThus, $\\wt S^{\\mathrm{o}}_k$ and $\\wt S^{\\mathrm{e}}_k $ belong to $ \\cP$.\n\\end{proof}\n\n\\n\nIn the next lemma we compute lower and upper bounds for $S_l (k) $.\n\\begin{lemma}[Bounds for $S_l (k) $]\n \\label{lem4}\n\t\\mbox{}\t\\\\\nFor any $k \\in \\R$ we have\n\\begin{eqnarray}\\label{boe}\n0 \\; \\leq \\; S_l(k) \\; \\leq \\; 2 \\pi^2 (N\\!-\\!1)(m\\!+\\!1) \\arcsin \\lf(\\f{1}{m\\!+\\!1}\\ri) &\t\\hspace{0.7cm}\t & \\text{for $l$ even}\t\\\\\n\\label{boo}\n-4\\pi (N\\!-\\!1) (m\\!+\\!1) \\bigg[ 1- \\sqrt{m(m\\!+\\!2)} \\, \\arcsin \\lf(\\f{1}{m\\!+\\!1}\\ri) \\bigg]\n\\; \\leq S_l(k) \\; \\leq 0 &\t\\hspace{0.7cm} & \\text{for $l$ odd}\\,.\n\\end{eqnarray}\n\\end{lemma}\n\\begin{proof}\nLet us prove \\eqref{boo} first. The upper bound follows from \\eqref{stlo} and \\eqref{formula1}. \nAs for the lower bound, by means of \\eqref{s0y} we write\n\\beq \\label{tonno}\nS_l (k) = - \\frac{\\pi^2 (N-1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) }\\int_{-1}^{+1} \\diff y \\, P_l (y) \\, \\wt S^{\\mathrm{o}}_k (y) \\,.\n\\eeq\nThe first step is to prove that $S_l (k)$ is an increasing function of $l$ for any fixed $k$. 
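Recall that the Legendre polynomials $P_l$ admit the Rodrigues representation\n\\beq\nP_l(y) = \\f{1}{2^l \\, l!} \\f{d^l}{dy^l} (y^2-1)^l\\,.\n\\eeq\n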
\nFrom \\eqref{tonno}, using \\eqref{leg} and integrating by parts, we obtain\n\\begin{align} \\label{tonno2}\nS_{l+2}(k) &= - \\frac{\\pi^2 (N-1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) } \\frac{1}{2^{l+2} (l+2)!}\n\\int_{-1}^{+1} \\diff y \\, \\frac{d^{l+2}}{dy^{l+2} }(y^2-1)^{l+2} \\wt S^{\\mathrm{o}}_k (y) \\no \\\\\n& = \\frac{\\pi^2 (N-1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) } \\frac{1}{2^{l+2} (l+2)!}\n\\int_{-1}^{+1} \\diff y \\, \\frac{d^{2}}{dy^{2} }(y^2-1)^{l+2} \\frac{d^{l} \\wt S^{\\mathrm{o}}_k}{dy^{l} }(y) \\no \\\\\n& = \\frac{\\pi^2 (N-1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) } \\frac{1}{2^{l+2} (l+2)!}\n\\int_{-1}^{+1} \\diff y \\, \\lf[(l+2)(l+1)(y^2-1)^{l}4y^2 + 2(l+2)(y^2-1)^{l+1} \\ri] \n\\frac{d^{l} \\wt S^{\\mathrm{o}}_k}{dy^{l} } (y) \\no \\\\\n&= \\frac{\\pi^2 (N\\!-\\!1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) } \\frac{1}{2^{l} l!} \n\\int_{-1}^{+1} \\!\\!\\!\\diff y \\, (y^2-1)^{l} \n\\frac{d^{l} \\wt S^{\\mathrm{o}}_k}{dy^{l} } (y) + \\frac{\\pi^2 (N\\!-\\!1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) } \\f{1\\!+2(l\\!+\\!1)}{2^{l\\!+\\!1} (l\\!+\\!1)!} \\int_{-1}^{+1} \\!\\!\\!\\diff y \\, (y^2-1)^{l+1} \n\\frac{d^{l} \\wt S^{\\mathrm{o}}_k}{dy^{l} } (y)\n\\no\\\\\n&= S_l(k) + \\frac{\\pi^2 (N\\!-\\!1)}{ \\sinh \\lf(\\f{\\pi}{2} k\\ri) } \\f{1\\!+2(l\\!+\\!1)}{2^{l\\!+\\!1} (l\\!+\\!1)!} \\int_{-1}^{+1} \\!\\!\\!\\diff y \\, (y^2-1)^{l+1} \n\\frac{d^{l} \\wt S^{\\mathrm{o}}_k}{dy^{l} } (y) \\,. \n\\end{align}\nBy Lemma \\ref{forever}, and taking into account that $l+1$ is even, we deduce that the last integral in \\eqref{tonno2} is positive and therefore we conclude $S_{l+2} (k) \\geq S_l(k)$. 
\nThis means that it is sufficient to find the minimum of $S_1 (k)$.\nFrom \\eqref{reps} we have \n\\beq \\label{shotgun}\n S_1 (k) = \n-2\\pi^2 (N-1) \n\\int_{0}^1 \\diff y \\, \\f{y}{\\cos \\lf( \\arcsin \\f{ y }{m+1}\\ri)} \\f{\\sinh \\lf( k \\arcsin \\f{ y }{m+1}\\ri) }{\\sinh \\lf(\\f{\\pi}{2} k\\ri) \\, }\\,.\n\\eeq\nWe know that $S_1(0)<0$ and $\\lim_{k \\rightarrow \\infty} S_1(k)=0$. Moreover, the derivative of $S_1(k)$ does not vanish for $k>0$, which follows from the fact that the derivative of the function\n\\[\n\\f{\\sinh ak}{\\sinh bk}\\,,\t\\hspace{1cm} 0<a<b\\,,\n\\]\nis negative for $k>0$. Therefore, $S_1(k)$ is monotone increasing when $k>0$ and attains its minimum at $k=0$. Thus,\n\\begin{equation}\\label{S10}\n\\begin{split}\nS_l (k) \\;\\geq\\; S_1 (k) \\;\\geq\\; S_1 (0)\\;&=\\;-4\\pi (N-1) \\int_{0}^1 \\diff y \\, y\n\\f{ \\arcsin \\f{ y }{m+1} }{ \\cos \\lf( \\arcsin \\f{ y }{m+1}\\ri) } \\\\\n&=\\;-4\\pi (N-1) (m+1)^2 \\int_0^{\\arcsin \\f{ 1 }{m+1} } \\diff z \\, z \\sin z \\\\\n&=\\;-4\\pi (N-1) (m+1) \\bigg[1- \\sqrt{m(m+2)} \\, \\arcsin \\lf(\\f{1}{m+1}\\ri) \\bigg]\n\\end{split}\n\\end{equation}\nand \\eqref{boo} is proved. The proof of \\eqref{boe} is completely analogous. In this case the lower bound follows from \\eqref{stle} and \\eqref{formula1}. Using the representation\n\\beq\nS_l (k) = \\frac{\\pi^2 (N-1)}{ \\cosh \\lf(\\f{\\pi}{2} k\\ri) }\\int_{-1}^{1} \\diff y \\, P_l (y) \\, \\wt S^{\\mathrm{e}}_k (y)\n\\eeq\nobtained from \\eqref{s0y}, one sees that $S_{l+2}(k) \\leqslant S_l(k)$ and therefore it is enough to consider $S_0(k)$. Since\n\\beq\nS_0(0)= \\pi^2 (N\\!-\\!1) \\!\\int_{-1}^{1}\\!\\!\\! \\diff y \\f{1}{\\cos \\left(\\! \\arcsin \\f{y}{m+1} \\! 
\\right)} = 2 \\pi^2 (N\\!-\\!1)(m\\!+\\!1) \\arcsin \\lf( \\f{1}{m\\!+\\!1} \\ri) >0\\,,\n\\eeq\n$\\lim_{k \\rightarrow \\infty} S_0(k)=0$, and the derivative of $S_0(k)$ does not vanish for $k>0$, we deduce that $S_0(0)$ is the maximum of $S_0(k)$.\n\\end{proof}\n\n\nUsing the results of the previous lemmas we can finally \nprove Proposition \\ref{stiqform}. \n \n\\begin{proof}[Proof of Proposition \\ref{stiqform}] We prove first the estimate from below in \\eqref{abqform}. From Lemmas \\ref{lemma0}, \\ref{lemma01}, \\ref{lemma1}, \\ref{lem4} we have\n\\begin{equation}\\label{sotto}\n\\begin{split}\n\\oform_1[f]\\;&=\\;\\sum_{\\substack{l,m \\\\ l\\textrm{ even}}}\n G^{\\mathrm{off}}_{1,l} \\lf[f_{lm} \\ri] + \\sum_{\\substack{l,m \\\\ l\\textrm{ odd}}}\n G^{\\mathrm{off}}_{1,l} \\lf[f_{lm} \\ri] \\;\\geq\\;\\sum_{\\substack{l,m \\\\ l\\textrm{ odd}}}\n G^{\\mathrm{off}}_{0,l} \\lf[f_{lm} \\ri] \\;=\\;\\sum_{\\substack{l,m \\\\ l\\textrm{ odd}}} \\int_{\\R} dk\\, S_l(k) \\, \\big| f^{\\sharp}_{lm} (k) \\big|^2 \\\\\n&\\geq\\; -4\\pi (N\\!-\\!1) (m\\!+\\!1) \\bigg[ 1 - \\sqrt{m(m+2)} \\, \\arcsin \\lf( \\f{1}{m+1} \\ri) \\bigg]\n \\; \\sum_{l,m} \\int_{\\R} dk\\, \\big| f^{\\sharp}_{lm} (k) \\big|^2 \\\\\n&=\\;- \\Lambda(m,\\!N) \\, \\f{2 \\pi^2 \\sqrt{m(m+2)}}{m+1} \\sum_{l,m} \\int_{\\R} dk\\, \\big| f^{\\sharp}_{lm} (k) \\big|^2\n\\end{split}\n\\end{equation}\nwhere $\\Lambda(m,\\!N)$ is defined in \\eqref{Lambda}. Then, by \\eqref{formula0},\n\\beq\\label{sotto1}\n\\oform_1[f] \\geq -\\Lambda(m,\\!N) \\! \\sum_{l,m} G^{\\mathrm{diag}}_{0} [f_{lm}] \\geq -\\Lambda(m,\\!N) \\! \\sum_{l,m} G^{\\mathrm{diag}}_{1} [f_{lm} ] = -\\Lambda(m,\\!N) \\, \\dform_1[f]\\,.\n\\eeq\nFrom the representation \\eqref{qform 1} and from \\eqref{sotto1} we have\n\\beq\n\\Phi_0^{\\lambda} [\\xi] \\geq \\big(1-\\Lambda(m,N)\\big) \\int_{\\bR^{3N-6}} \\!\\!\\! 
\\diff \\Kv \\, \\sqrt{D(\\Kv) +\\lambda} \\, F_1^{\\mathrm{diag}} [Q_{\\Kv}] = \\big(1-\\Lambda(m,N)\\big)\\, \\Phi_{0,\\lambda}^{\\mathrm{diag}} [\\xi]\n\\eeq\nand the lower bound is proved. An analogous proof yields the upper bound in \\eqref{abqform}. We have\n\\begin{equation}\\label{sopra}\n\\begin{split}\n\\oform_1[f]\\;&\\leq\\;2 \\pi^2 (N\\!-\\!1)(m\\!+\\!1) \\arcsin \\lf( \\f{1}{m\\!+\\!1} \\ri) \\sum_{l,m} \\int\\!\\! dk\\, | f^{\\sharp}_{lm} (k)|^2 \\\\\n&=\\;\\f{(N\\!-\\!1) (m\\!+\\!1)^2}{\\sqrt{m(m\\!+\\!2)}} \\arcsin \\lf(\\f{1}{m\\!+\\!1} \\ri) \\sum_{l,m} G^{\\mathrm{diag}}_{0} [f_{lm}] \\\\\n&\\leq\\;\\f{(N\\!-\\!1) (m\\!+\\!1)^2}{\\sqrt{m(m\\!+\\!2)}} \\arcsin \\lf( \\f{1}{m\\!+\\!1} \\ri) \\sum_{l,m} G^{\\mathrm{diag}}_{1} [f_{lm}] \\\\\n&=\\;\\f{(N\\!-\\!1) (m\\!+\\!1)^2}{\\sqrt{m(m\\!+\\!2)}} \\arcsin \\lf( \\f{1}{m\\!+\\!1} \\ri) \\dform_1[f]\n\\end{split}\n\\end{equation}\nwhich, together with \\eqref{qform 1}, yields the upper bound for $\\Phi_0^{\\lambda}$. \n\\end{proof}\n\n\nLet us briefly comment on the result of Proposition \\ref{stiqform}. By means of the elementary estimate\n\\beq\\label{el12}\n-\\f{1}{2} \\sum_{i=1}^{N-1} k_i^2 \\leq \\sum_{i < j} \\kv_i \\cdot \\kv_j\\,,\n\\eeq\none can verify that $D(\\Kv) \\geq 0$, so that the diagonal form $\\Phi_{0,\\lambda}^{\\mathrm{diag}}$ is positive.\n\n\\begin{proof}\nBy the estimates above, if $\\alpha \\geq 0$ the form $\\form$ is positive and if $\\alpha<0$ the lower bound \\eqref{inff} holds. Let us now prove that $\\form$ is closed. We choose \n\\beq\n\\lambda >\n\t\\begin{cases} \n\t\t0\t& \\text{if} \\;\\; \\alpha \\geq 0, \t\\\\\n\t\t\\f{\\alpha^2}{4 \\pi^4 \\big(1-\\Lambda(m,N)\\big) }\t&\t\\text{if} \\;\\; \\alpha <0\\,,\n\t\\end{cases}\n\\eeq\nand consider the form $\\form^{\\lambda}[\\psi] :=\\form [\\psi] +\\lambda \\|\\psi\\|^2_{L^2_{\\mathrm{f}}}$ defined on $\\dom(\\form)$. 
Let $\\{\\psi_n \\}$ be a sequence in $\\dom(\\form)$ such that\n\\beq\n\\lim_{n \\rightarrow \\infty} \\|\\psi_n - \\psi \\|_{L^2_{\\mathrm{f}}} =0\\,, \\hspace{1cm}\t\\lim_{n,m \\rightarrow \\infty} \\form^{\\lambda}[\\psi_n - \\psi_m] =0\\,,\n\\eeq\nwhere $\\psi \\in L^2_{\\mathrm{f}}(\\bR^{3N})$. From the definition of $\\dom(\\form)$ and $\\form^{\\lambda}$ (see \\eqref{form}, \\eqref{form domain}) we have $\\psi_n = \\phi_n^{\\lambda} + \\pot \\xi_n$, with $\\phi_n^{\\lambda} \\in H^1_{\\mathrm{f}}(\\bR^{3N})$, $\\xi_n \\in H^{1\/2}_{\\mathrm{f}}(\\bR^{3N-3})$, and \n\\beq\n\\form^{\\lambda}[\\psi_n - \\psi_m]= \\mathcal{F}_0[\\phi_n^{\\lambda} - \\phi_m^{\\lambda}] + N \\qform[\\xi_n -\\xi_m]\\,.\n\\eeq\nThis, together with \\eqref{cC}, implies that $\\{\\phi_n^{\\lambda} \\}$ is a Cauchy sequence in $ H^1_{\\mathrm{f}}(\\bR^{3N})$ and $\\{ \\xi_n\\}$ is a Cauchy sequence in $H^{1\/2}_{\\mathrm{f}}(\\bR^{3N-3})$. Let us denote by $\\phi^{\\lambda}$ and $\\xi$ the corresponding limits. From the explicit expression of the potential \\eqref{pot xi} we notice that \n\\beq\n\\| \\pot \\xi \\|_{L^2_{\\mathrm{f}}} \\leq c \\, \\|\\xi\\|_{L^2_{\\mathrm{f}}}\\,,\n\\eeq\nwhere $c>0$. Hence,\n\\beq\n\\lim_{n \\rightarrow \\infty} \\|\\psi_n - (\\phi^{\\lambda} + \\pot \\xi) \\|_{L^2_{\\mathrm{f}}}= \\lim_{n \\rightarrow \\infty} \\| (\\phi_n^{\\lambda} - \\phi^{\\lambda}) + \\pot ( \\xi_n - \\xi) \\|_{L^2_{\\mathrm{f}}} =0\\,.\n\\eeq\nSince the limit of the $\\psi_n$'s is unique, $\\psi = \\phi^{\\lambda} + \\pot \\xi$. Therefore $\\psi \\in \\dom(\\form)$ and $\\lim_{n \\rightarrow \\infty} \\form^{\\lambda}[\\psi-\\psi_n]=0$. 
This shows that the form $\\form^{\\lambda}$ is closed and a fortiori $\\form$ is.\n\\end{proof}\n\n\n\n\n \n\n\n\n\n\\section{Unboundedness from Below of $ \\form $} \n\\label{instability: sec}\n\t\t\nThis section is devoted to the proof of Theorem \\ref{ub1}.\nAs we shall see, what makes an instability condition hard to prove is the restriction to antisymmetric wave functions. \n\n\nIn fact, the proof relies on the explicit evaluation of the charge form $ \\qform $ on a trial function, i.e., a convenient sequence of charges with energy going to $ -\\infty $. Identifying one such sequence is easy when $ N = 2 $ because $ \\qform $ is in practice the same as the reduced form $ F_{1} $ -- see \\eqref{michelebuliccio} below -- \nand the analysis performed in Section \\ref{stability: sec} suggests that a convenient $ Q_n(\\pv) $ has to be chosen\nin the subspace with angular momentum $ \\ell=1 $ and such that in the position representation it becomes peaked at the origin as $ n \\to \\infty $ (i.e., two identical fermions coming arbitrarily close).\n\nWhen $ N>2 $, on the other hand, a natural trial function satisfying the antisymmetry constraint would be the Slater determinant of $N-1$ one-particle charges, one of which is $ Q_n $ itself. A convenient choice is driven by the physical idea of an $N$-body configuration that contains precisely the $(2\\!+\\!1)$-body structure minimizing the energy with $ N = 2 $, whereas all remaining particles are placed far away in space so that there is no, or negligible, interference with the two-body state. This results in an $(N\\!-\\!1)$-particle Slater determinant of $ Q_n $ and $N-2$ copies of a different component (see \\eqref{min sequence} below). 
The fermionic character of the trial function is thus fulfilled by construction and optimising the choice of the second component produces only higher order symmetry correlations.\n\n\tThroughout this section we assume that $ \\la $ is a positive number such that $ C_1 \\leq \\la^{-1} \\leq C_2 $ for two finite constants $ C_1, C_2 < \\infty $, which in particular will allow us to incorporate error factors proportional to $ \\la^{-1} $ into a constant $ C $.\n \t\n\t\\begin{proof}[Proof of Theorem \\ref{ub1}]\n\t\tIn order to prove instability of the form $ \\form $, it is enough to produce a sequence of normalised charges $ \\xi_n \\in H_{\\mathrm{f}}^{1\/2}(\\R^{3(N-1)}) $ such that\n\t\t\\beq\n\t\t\t\\lim_{n \\to \\infty} \\qform[\\xi_n] = - \\infty,\n\t\t\\eeq\n\t\tsince the sequence of states $ \\pot \\xi_n $ then satisfies\n\t\t\\beq\n\t\t\t\\form[\\pot \\xi_n] = - \\la \\lf\\| \\pot \\xi_n \\ri\\|_{L^2(\\R^{3N})}^2 + N \\qform[\\xi_n] \\xrightarrow[\\;n\\to\\infty\\;]{} - \\infty.\n\t\t\\eeq\n\t\t\\underline{Case $ N = 2 $}. The result was already proved in \\cite{FT}, but we repeat here the argument for we use a slightly different trial function that turns out to be useful in the general case $ N > 2 $. Owing to \\eqref{qform 1}, \n\t\t\\beq\\label{michelebuliccio}\n\t\t\t\\qform[\\xi] - \\alpha \\| \\xi \\|_{L^2(\\R^{3})}^2 = \\sqrt{\\la} F_1 \\lf[ Q \\ri]\n\t\t\\eeq\n\t\twhere $ Q(\\pv) = \\la^{3\/4} \\xi(\\sqrt{\\la}\\pv) $ (recall \\eqref{charge Qk}). Then we only need to produce $ Q_n \\in L^2(\\RT) $ such that $ \\lim_{n \\to \\infty} F_1[Q_n] = - \\infty $. Note that no constraint is imposed on the symmetry properties of $ Q_n $. In fact, according to the discussion of Section \\ref{stability: sec} (see \\eqref{formula0}, \\eqref{formula1} and \\eqref{S10}), we can take each $ Q_n $ in the subspace with angular momentum $ l = 1 $ and such that the support of its $\\sharp-$transform defined in \\eqref{formula2} gets concentrated at the origin. 
Explicitly, we choose\n\t\t\\beq\n\t\t\t\\label{qnga}\n\t\t\t\\qnga(\\kv) : = n^{-3\/2} \\qga(n^{-1} k) Y_1^{0}(\\vartheta_k),\n\t\t\\eeq\n\t\t\t\t\\beq\n\t\t\t\\label{Qgamma}\n\t\t\t\\qga(p) : = \\pi^{-1\/4} c_{\\gamma} \\gamma^{1\/2} p^{-1} \\exp \\lf\\{ - \\hbox{$\\frac{1}{8\\gamma^{2} }$} \\ri\\} \\exp \\lf\\{ - \\half \\gamma^2 \\lf( \\log p \\ri)^2 \\ri\\} \\Theta(p-1),\n\t\t\\eeq\n\t\twhere $ \\gamma \\in (0, 1) $ is a variational parameter, $ \\Theta $ is the Heaviside function, i.e., $ \\Theta(p) = 1 $ if $ p \\geq0 $ and $ 0 $ otherwise, and $ c_{\\gamma} $ is a normalisation constant.\n\t\tBy direct computation,\n\t\t\\beq\n\t\t\t\\lf\\| \\qnga \\ri\\|^2_{L^2} = \\int_1^{\\infty} \\diff p \\: p^2 \\lf| \\qga(p) \\ri|^2 = \\tx\\frac{c^2_{\\gamma}}{\\sqrt{\\pi}} \\disp\\int_{0}^{\\infty} \\diff t \\: \\exp\\lf\\{ - t^2 + \\tx\\frac{t}{\\gamma} - \\tx\\frac{1}{4\\gamma^2} \\ri\\} = \\half c^2_{\\gamma} \\: \\lf( 1 + \\mathrm{erf}\\lf\\{ \\tx\\frac{1}{2\\gamma} \\ri\\} \\ri).\n\t\t\\eeq\n\t\tImposing $\\qnga$ to be normalised yields\n\t\t\\beq\n\t\t\t\\label{cgamma}\n\t\t\t1 \\leq c^2_{\\gamma} = 2 \\lf[ 1 + \\mathrm{erf}\\lf\\{ \\tx\\frac{1}{2\\gamma} \\ri\\} \\ri]^{-1} \\leq 1 + C\\gamma \\exp\\lf\\{-\\tx\\frac{1}{4\\gamma^2} \\ri\\}.\n\t\t\\eeq \n(see, e.g., \\cite[Eq. (7.1.13)]{AS}). 
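In the computation of $ \\lf\\| \\qnga \\ri\\|^2_{L^2} $ above we completed the square, $-t^2 + \\f{t}{\\gamma} - \\f{1}{4\\gamma^2} = -\\lf( t - \\f{1}{2\\gamma} \\ri)^2$, and used the identity\n\\beq\n\\int_0^{\\infty} \\diff t \\, e^{-(t-c)^2} = \\f{\\sqrt{\\pi}}{2} \\lf( 1 + \\mathrm{erf}(c) \\ri)\\,,\t\\hspace{1cm} c \\in \\R\\,.\n\\eeq\n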
It is also useful to compute the following integral for $ a \\geq -1 $:\n\t\t\\bml{\n\t\t\t\\label{qga est}\n\t\t\t\\int_0^{\\infty} \\diff p \\: p^{2+a} \\lf| \\qga(p) \\ri|^2 = \\tx{\\frac{1}{\\sqrt{\\pi}}} \\gamma c_{\\gamma}^2 \\exp \\lf\\{ -\\tx{\\frac{1}{4\\gamma^2}} \\ri\\} \\disp\\int_{0}^{\\infty} \\diff t \\: \\exp \\lf\\{ - \\gamma^2 t^2 + (1+a) t \\ri\\}\t\\\\\n\t\t\t =\\half c_{\\gamma}^2 \\lf( 1 + \\mathrm{erf} \\lf\\{\\tx\\frac{1+a}{2\\gamma} \\ri\\} \\ri) \\exp \\lf\\{ \\tx{\\frac{1}{4\\gamma^2}} (2a+a^2) \\ri\\} \\leq \\lf( 1 + C \\gamma \\ri) \\exp \\lf\\{ \\tx{\\frac{1}{4\\gamma^2}} (2a+a^2) \\ri\\},\t\n\t\t}\n\twhere we used\n\t\t\\bdm\n\t\t\t\\half c_{\\gamma}^2 \\lf( 1 + \\mathrm{erf} \\lf\\{\\tx\\frac{1+a}{2\\gamma} \\ri\\} \\ri) = \\frac{ 1 + \\mathrm{erf} \\lf\\{\\tx\\frac{1+a}{2\\gamma} \\ri\\}}{ 1 + \\mathrm{erf} \\lf\\{\\tx\\frac{1}{2\\gamma} \\ri\\}} \\leq 1 + \\frac{2}{\\sqrt{\\pi}} \\int_{\\frac{1}{2\\gamma}}^{\\frac{1+a}{2\\gamma}} \\diff t \\: e^{-t^2} \\leq 1 + C \\gamma^{-1} \\exp\\lf\\{- \\tx\\frac{1}{4 \\gamma^2} \\ri\\} \\leq 1 + \\OO(\\gamma).\n\t\t\\edm\n\t\tUsing the decomposition \\eqref{Fzl}, as well as the scaling law of $ \\qnga $ with $n$, we have\n\t\t\\beq\n\t\t\t\\label{est F1 1}\n\t\t\tF_1[\\qnga] = G_{1}^{\\mathrm{diag}}\\lf[n^{-3\/2} \\qga(n^{-1} p)\\ri] + G_{1,1}^{\\mathrm{off}}\\lf[n^{-3\/2} \\qga(n^{-1} p)\\ri] = n \\lf[ G_{n^{-2}}^{\\mathrm{diag}}\\lf[\\qga\\ri] + G_{n^{-2},1}^{\\mathrm{off}}\\lf[\\qga\\ri] \\ri].\n\t\t\\eeq\nWe estimate the diagonal term in \\eqref{est F1 1} as\n\\begin{equation}\\label{approx Goff}\n\\begin{split}\nG_{n^{-2}}^{\\mathrm{diag}}\\lf[\\qga\\ri]\\;&\\leq\\;2 \\pi^2 \\frac{\\sqrt{m(m+2)}}{m+1} \\lf(1 + \\OO(n^{-1}) \\ri) \\big\\| Q_{\\gamma}^{\\sharp} \\big\\|_{L^2}^2\t\\\\\n&\\leq\\;2 \\pi^2 \\frac{\\sqrt{m(m+2)}}{m+1} \\exp \\lf\\{ \\tx\\frac{3}{4\\gamma^2} \\ri\\} \\lf[1 + \\OO(n^{-1}) + \\OO(\\gamma) \\ri],\n\\end{split}\n\\end{equation}\nwhere we 
used\n\t\t\\beq\n\t\t\t\\label{tilde Q norm}\n\t\t\t\\big\\| Q^{\\sharp}_{\\gamma} \\big\\|_{L^2}^2 = \\int_{\\R} \\diff x \\: e^{4x} \\: \\big| Q_{\\gamma}\\big(e^x\\big) \\big|^2 = \\int_{0}^{\\infty} \\diff p \\: p^3 \\: | Q_{\\gamma}(p) |^2 \\leq (1 + C \\gamma ) \\exp \\lf\\{ \\tx{\\frac{3}{4\\gamma^2}} \\ri\\}.\n\t\t\\eeq\n\t\tAs for the off-diagonal term in \\eqref{est F1 1},\n\\beq\\label{Goff1}\nG^{\\mathrm{off}}_{n^{-2},1} \\lf[ \\qga \\ri] = G^{\\mathrm{off}}_{0,1} \\lf[ \\qga \\ri] + \\mathcal{R},\n\\eeq\nwhere\n\\bml{\n \t\t\t|\\mathcal{R}| \\leq C \\int_{\\R^3} \\diff \\sv \\int_{\\R^3} \\diff \\tv \\: \\bigg| \\lf(s^2 + t^2 + \\tx\\frac{2}{m+1} \\sv \\cdot \\tv + n^{-2}\\ri)^{-1} - \\lf(s^2 + t^2 + \\tx\\frac{2}{m+1} \\sv \\cdot \\tv\\ri)^{-1} \\bigg| \\lf| \\qga(s) \\ri| \\lf| \\qga(t) \\ri|\t\\\\\n\t\t\t\\leq C n^{-2} \\int_{\\R^3} \\diff \\sv \\int_{\\R^3} \\diff \\tv \\: \\frac{\\lf| \\qga(s) \\ri| \\lf| \\qga(t) \\ri|}{(s^2 + t^2)^{2}} \\leq C n^{-2} \\int_{\\R^3} \\diff \\sv \\: \\lf| \\qga(s) \\ri|^2 \\int_{1}^{\\infty} \\diff t \\: t^{-2} \\leq \\OO(n^{-2}). 
\\label{R1}\n\t\t}\nMoreover,\n\\begin{equation}\\label{Goff est}\n\\begin{split}\nG^{\\mathrm{off}}_{0,1} \\lf[ \\qga \\ri]\\;&=\\;\\int_{\\R} \\diff k \\: S_1(k) \\big| Q^{\\sharp}_{\\gamma}(k) \\big|^2 \\\\\n& \\leq\\; S_1(0) \\big\\| Q^{\\sharp}_{\\gamma} \\big\\|_{L^2}^2 + \\int_{\\R} \\diff k \\: \\lf( S_1(k) - S_1(0) \\ri) \\big| Q^{\\sharp}_{\\gamma}(k) \\big|^2 \\\\\n&\\leq\\; S_1(0) \\exp \\lf\\{ \\tx{\\frac{3}{4\\gamma^2}} \\ri\\} \\lf(1 + \\OO(\\gamma) \\ri) + C \\int_{\\R} \\diff k \\: \\sqrt{|k|} \\big| Q^{\\sharp}_{\\gamma}(k) \\big|^2 \\\\\n&=\\; - 2 \\pi^2 \\frac{\\sqrt{m(m+2)}}{m+1} \\Lambda(m,2) \\exp \\lf\\{ \\tx\\frac{3}{4\\gamma^2} \\ri\\} \\lf( 1 + \\OO(\\gamma) \\ri) + C \\int_{\\R} \\diff k \\: \\sqrt{|k|} \\big| Q^{\\sharp}_{\\gamma}(k) \\big|^2\n\\end{split}\n\\end{equation}\nwhere we used \\eqref{S10} and the elementary estimate $S_1(k)-S_1(0) \\leq C \\sqrt{|k|}$. To estimate the last integral in \\eqref{Goff est} we observe that\n\\begin{equation}\\label{Qshes}\n\\begin{split}\nQ^{\\sharp}_{\\gamma}(k)\\;&=\\;\\pi^{-1\/4} c_{\\gamma} \\gamma^{1\/2} \\exp\\lf\\{\\tx-\\f{1}{8 \\gamma^2} \\ri\\} \\f{1}{\\sqrt{2\\pi}} \\int_0^{\\infty} \\!\\!\\!dx\\, \n\\exp\\lf\\{\\tx-ikx -\\f{\\gamma^2}{2} x^2 +x\\ri\\} \\\\\n&=\\;\\pi^{-1\/4} c_{\\gamma} \\gamma^{-1\/2} \n\\exp\\lf\\{\\tx-\\f{1}{8 \\gamma^2}\\ri\\} \\bigg(\n\\exp\\lf\\{\\tx-\\f{k^2}{2\\gamma^2} -i\\f{k}{\\gamma^2} +\\f{1}{2\\gamma^2}\\ri\\} - \\f{1}{2 \\sqrt{\\pi} z}(1+r(z)) \\bigg),\n\\end{split}\n\\end{equation}\nwhere\n\\beq\nz= \\f{1-ik}{\\sqrt{2}\\, \\gamma}\\,, \\hspace{1cm} |r(z)|\\leq \\f{\\gamma^2}{\\sqrt{1+k^2}}\\,.\n\\eeq\nTherefore,\n\\begin{equation}\\label{sqrQ}\n\\begin{split}\n\\!\\!\\!\\!\\!\\!\\int_{\\bR}\\!\\!\\!dk\\, \\sqrt{|k|} |Q^{\\sharp}_{\\gamma} (k)|^2 \\;&\\leq\\;\\f{2 \\, c_{\\gamma}^2}{\\sqrt{\\pi} \\gamma} \n\\exp\\lf\\{\\tx\\f{3}{4\\gamma^2}\\ri\\} \\int_{\\bR}\\!\\!\\! 
dk\\, \\sqrt{|k|} \\, \n\\exp\\lf\\{\\tx-\\f{k^2}{\\gamma^2}\\ri\\} \n+ \\f{4 \\, c_{\\gamma}^2 \\, \\gamma}{\\pi^{3\/2}} \n\\exp\\lf\\{\\tx-\\f{1}{4\\gamma^2} \\ri\\}\n\\int_{\\bR} \\!\\!\\!dk \\, \\f{\\sqrt{|k|}}{1+\\, k^2} \\\\\n& \\leq\\;C \\, \\sqrt{\\gamma} \\; \\exp\\lf\\{\\tx\\f{3}{4 \\gamma^2} \\ri\\} \\Big( 1 + \\sqrt{\\gamma} \\, \n\\exp\\lf\\{\\tx-\\f{1}{\\gamma^2}\\ri\\} \\Big).\n\\end{split}\n\\end{equation}\nUsing \\eqref{approx Goff}, \\eqref{Goff1}, \\eqref{R1}, \\eqref{Goff est}, \\eqref{sqrQ} in \\eqref{est F1 1} we finally obtain\n\t\t\\beq\n\t\t\tF_1[\\qnga] \\leq 2 \\pi^2 \\frac{\\sqrt{m(m+2)}}{m+1} n \\exp \\lf\\{ \\tx\\frac{3}{4\\gamma^2} \\ri\\} \\lf[ 1 - \\Lambda(m,2) + \\OO(\\sqrt{\\gamma}) + \\OO(n^{-1}) \\ri] \\xrightarrow[\\;n\\to\\infty\\;]{} - \\infty,\n\t\t\\eeq\n\t\tif $ \\Lambda(m,2) > 1 $ and $ \\gamma $ is taken small enough (independent of $ n $).\n\n\\noindent \\underline{Case $N>2$}. \n\t\tAs mentioned at the beginning of this section, this case is more complicated, for the trial sequence $ \\xi_n $ must be antisymmetric under the exchange of any variable, i.e., $\\xi_n \\in \\ldf(\\R^{3N-3}) $, and at the same time we want $ \\hat\\xi_n(\\kv_1,\\ldots,\\kv_{N-1}) $ to behave like $ \\qnga(\\kv_1) $ once the other degrees of freedom are traced out. Looking for $\\xi_n$ matching these two requirements is an example of the well-known representability problem (see, e.g., \\cite{LS}), i.e., the search for sufficient conditions to impose on a one-particle density matrix so that it can be obtained as the reduced density matrix of a fermionic many-body state. We remark that the solution is known only in some special cases and is non-trivial. Our choice here is a trial state that is as close as possible to an uncorrelated state, which is given by an antisymmetric wave function containing $ \\qnga $. 
Explicitly,\n\t\t\\beq\n \t\t\t\\label{min sequence}\n\t\t\t\\hat\\xi_n(\\kv_1, \\ldots, \\kv_{N-1}) : = \\frac{1}{\\sqrt{(N-1)!}}\n\t\t\t\\lf|\n\t\t\t\\begin{array}{ccccc}\n\t\t\t\t\\qnga(\\kv_1)\t&\t\\Xi_{\\beta,2}(\\kv_1)\t&\t\\cdots \t&\t\\Xi_{\\beta,N-1}(\\kv_{1})\t\t\\\\\n\t\t\t\t\\qnga(\\kv_2)\t&\t\\Xi_{\\beta,2}(\\kv_2)\t&\t\\cdots \t&\t\\Xi_{\\beta,N-1}(\\kv_{2})\t\t\\\\\n\t\t\t\t\\vdots \t&\t\\vdots \t&\t\\mbox{} \t&\t\\vdots\t\t\\\\\n\t\t\t\t\\qnga(\\kv_{N-1})\t&\t\\Xi_{\\beta,2}(\\kv_{N-1})\t&\t\\cdots \t&\t\\Xi_{\\beta,N-1}(\\kv_{N-1})\t\n\t\t\t\\end{array}\n\t\t\t\\ri|\\,,\n\t\t\\eeq\n\t\twhere $ \\qnga $ is defined in \\eqref{qnga}, $ 0 < \\beta \\ll 1 $ is another variational parameter,\n\t\t\\beq\n\t\t\t\\label{chibm}\n\t\t\t\\chibm(\\kv) := (4\\pi)^{-1\/2} \\beta^{-3\/2} \\: \\Xi(\\beta^{-1} k) \\: \\exp\\lf\\{ i l \\varphi_k \\ri\\},\n\t\t\\eeq\n\t\t$ l \\in \\N $, $ \\Xi \\in C^{\\infty}_0(\\R^+) $ is real-valued, with support in $ (0,1) $, and such that\n\t\t\\beq\n\t\t\t\\label{G normalized}\n\t\t\t\\int_{0}^1 \\diff k \\: k^2 \\: \\Xi^2(k) = 1.\n\t\t\\eeq\n\t\tNote that, since the two functions $ \\qnga $ and $ \\chibm $, $ l > 0 $, are orthonormal by construction, the function \\eqref{min sequence} belongs to $ \\ldf(\\R^{3(N-1)}) $ and is normalised. Moreover, the supports of $ \\qga $ and $ \\Xi $ do not intersect, which implies that the supports of $ \\qnga $ and $ \\chibm $ are disjoint as well, provided $ \\beta \\leq n $, which follows from the assumptions on $ \\beta $. \n\t\t\n\t\tWe can now evaluate $\\dqform[\\xi_n]$. We start by estimating the diagonal part. 
Using the exchange symmetry and the definition of $ L_{\\la} $ in \\eqref{Lla},\n\t\t\\bml{\n \t\t\t\\label{energy est 0}\n\t\t\t\\dqform[\\xi_n] = \\al + \\frac{1}{(N-2)!} \\int_{\\R^{3(N-1)}} \\diff \\kv_1 \\diff \\kkv \\: L_{\\la}(\\kv_1,\\ldots,\\kv_{N-1}) \\lf| \\qnga(\\kv_1) \\ri|^2 \t\\cdot \\\\\n\t\t\t\\sum_{\\sigma,\\tau \\in \\mathcal{P}_{N-1}} \\prod_{l,j =2}^{N-1} \\sgn(\\sigma) \\sgn(\\tau) \\Xi_{\\beta,l}^*(\\kv_{\\sigma(l)}) \\Xi_{\\beta,j}(\\kv_{\\tau(j)}),\n\t\t\t}\n\twhere $ \\mathcal{P}_{N-1} $ is the group of permutations of the $ N-2 $ elements $2, \\ldots, N - 1 $ and $ \\sgn(\\sigma) $ denotes the sign of any $ \\sigma \\in \\mathcal{P}_{N-1} $. All the other terms vanish because of the integral of the product $ \\qnga(\\kv_i) \\chib(\\kv_i) $, which is pointwise zero thanks to the disjoint supports of the functions. Extracting the main factor\n\t\t\\bdm\n\t\t\t\\sqrt{\\tx\\frac{m(m+2)}{(m+1)^2} k_1^2 + \\la}\\,,\n\t\t\\edm\n\t\tand bounding the rest by means of the inequality\n\t\t\\beq\n\t\t\t\\label{useful ineq 1}\n\t\t\t\\sqrt{a + b} \\leq \\sqrt{|a|} + \\sqrt{|b|},\t\\hspace{1cm}\t\\mbox{for} \\:\\: a + b \\geq 0\\,,\t\n\t\t\\eeq\n\t\twe obtain \n\\begin{equation}\\label{Lla estimate}\n\\begin{split}\nL_{\\la}(\\kv_1, \\ldots, \\kv_{N-1}) \\;&\\leq\\; 2 \\pi^2 \\sqrt{\\tx\\frac{m(m+2)}{(m+1)^2} k_1^2 + \\la} \\,\\bigg\\{ 1 + \\lf(\\tx\\frac{m(m+2)}{(m+1)^2} k_1^2 + \\la\\ri)^{-1\/2} \\times \\\\\n& \\qquad\\qquad\\times\\bigg[ \\tx\\frac{m(m+2)}{(m+1)^2} \\disp\\sum_{i = 2}^{N-1} k_i^2 + \\tx\\frac{2m}{(m+1)^2} \\bigg| \\disp\\sum_{j > 1} \\kv_1 \\cdot \\kv_j + \\disp\\sum_{1 < i < j} \\kv_i \\cdot \\kv_j \\bigg| \\bigg]^{1\/2} \\bigg\\}\\,.\n\\end{split}\n\\end{equation}\nProceeding then with estimates analogous to those used in the case $ N = 2 $, one can show that, if $ \\Lambda(m,N) > 1 $, there exist suitable values of the variational parameters $ \\gamma, \\beta > 0 $, such that \n\t\t\\beq\n\t\t\t\\qform[\\xi_n]\n\t\t\t\\xrightarrow[\\;n\\to\\infty\\;]{} - \\infty\n\t\t\\eeq\nwhich concludes the proof.\n\\end{proof}\n\n\n\n\\vspace{1cm}\n\\n\t\t\n{\\bf Acknowledgments}. M.C. 
acknowledges the support of the European Research Council under the European Community Seventh Framework Program (FP7\/2007-2013 Grant Agreement CoMBos No. 239694).\n\n\n\n\\vs\n\n\\section*{Appendix}\n\n\n\\renewcommand{\\theequation}{A.\\arabic{equation}}\n\\setcounter{equation}{0}\n\\setcounter{subsection}{0}\n\\setcounter{pro}{0}\n\\renewcommand{\\thesection}{A}\n\n\n\\label{Appendix A}\n\nHere we describe the formal procedure for the construction of the quadratic form $\\form$. \nWe start from the Hamiltonian \\eqref{cm Hamiltonian} written in the Fourier space\n\\bml{\n(\\widehat{H \\psi})(\\kv_1,\\ldots, \\kv_N) = \\bigg(\\sum_{i=1}^N k_i^2 +\\f{2}{m\\!+\\!1}\\sum_{i < j} \\kv_i \\cdot \\kv_j \\bigg) \\hat{\\psi}(\\kv_1,\\ldots, \\kv_N) + \\ldots\n}\nwhere the dots stand for the interaction terms, which are regularised by introducing a cut-off $ R > 0 $.\nSetting\n\\beq\n\t\\label{ren regular part}\n\t\\hat{\\phi}_{\\la}^R : = \\hat{\\psi} - \\widehat{\\pot \\rho^R},\n\\eeq\nwe have\n\\beq\n\t\\label{ren form decomposition}\n \t\\renform[\\psi] = \\F_0\\lf[\\phi_{\\la}^R\\ri] + \\la \\lf\\| \\phi_{\\la}^R \\ri\\|_{L^2(\\R^{3N})}^2 - \\la \\lf\\| \\psi \\ri\\|_{L^2(\\R^{3N})}^2 + \\renqform\\lf[\\xi^R\\ri],\n\\eeq\nwith $ \\F_0[\\phi] : = \\bra{\\phi} H_0 \\ket{\\phi} $, and\n\\bml{\n\t\\label{ren qform}\n\t\\renqform\\lf[\\xi\\ri] : = - \\sum_{i=1}^N \\int_{\\R^{3N}} \\diff \\kv_1 \\cdots \\diff \\kv_N \\: \\chi_R(k_i) \\: \\hat\\xi_i^*(\\breve \\kkv_i) \\lf[ \\hat{\\psi}(\\kv_1, \\ldots, \\kv_N) + \\green(\\kv_1, \\ldots, \\kv_N) \\: \\hat\\xi_i(\\breve \\kkv_i) \\ri]\t\\\\\n\t- \\sum_{i < j} \\int_{\\R^{3N}} \\diff \\kv_1 \\cdots \\diff \\kv_N \\: \\chi_R(k_i) \\:\t\\hat\\xi_i^*(\\breve \\kkv_i) \\green(\\kv_1, \\ldots, \\kv_N) \\: \\chi_R(k_j) \\: \\hat\\xi_j(\\breve \\kkv_j)\\,.\n}\nIn the limit $ R \\to \\infty $ we assume that $\\rho^R_i , \\, \\xi^R_i \\rightarrow \\xi_i$. 
Moreover, we extract from the diagonal part of \\eqref{ren qform} only the terms not vanishing in that limit\n\\bmln{\n\t\\sum_{i=1}^N \\int_{\\R^{3N-3}} \\diff \\breve \\kkv_i \\: \\lf| \\hat\\xi_i (\\breve \\kkv_i) \\ri|^2 \\lf[ - \\frac{(2\\pi)^3}{\\mu(\\al,R)} - \\int_{\\RT} \\diff \\kv_i \\: \\chi_R(k_i) \\green(\\kv_1, \\ldots, \\kv_N) \\ri] \t\\\\\n\t= \\sum_{i=1}^N \\int_{\\R^{3N-3}} \\diff \\breve \\kkv_i \\: \\lf| \\hat\\xi_i (\\breve \\kkv_i) \\ri|^2 \\bigg[ - \\frac{(2\\pi)^3}{\\mu(\\al,R)} - 4 \\pi R \\\\\t\n\t+ 2 \\pi^2 \\bigg[ \\frac{m(m+2)}{(m+1)^2} \\sum_{j \\neq i} k_j^2 + \\frac{2m}{(m+1)^2} \\sum_{i \\neq j} \\kv_i \\cdot \\kv_j + \\la \\bigg]^{1\/2} + o(1) \\bigg].\n}\nIn order to remove the cut-off one is thus forced to set $ \\mu \\to 0 $ as $ R \\to \\infty $ and, although several choices are allowed, we set\n\\beq\n\t\\mu(\\al,R) : = - \\frac{(2\\pi)^3}{4 \\pi R + \\al},\n\\eeq\nin this way canceling the singular term proportional to $- 4\\pi R $ contained in the expression above. \n\nWe can now remove the cut-off by taking the limit $ R \\to \\infty $, thus recovering the expression \\eqref{form}. Note that we exploit at this stage the fermionic symmetry, which in particular implies that all charges can be expressed in terms of a single function $ \\xi $, i.e., \n\\beq\n\t\\xi_i(\\xv_1,\\ldots,\\xv_{N-1}) = (-1)^{i+1} \\xi(\\xv_1,\\ldots,\\xv_{N-1}),\n\\eeq\nand $ \\xi $ itself is totally antisymmetric under exchange of coordinates. This in turn implies that the sign in front of the off-diagonal term is the opposite of the one in the bosonic case, leading to a completely different behavior of the ground state. \n\n\n\n\n\\vs\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWith the fast-paced advancement of the Artificial Intelligence (AI) and Robotics fields, there is an increasing potential to resort to \\textit{robot assistants} (or \\textit{service robots}) to help with daily tasks. 
Service robots can take on many roles. They can operate as patient carers \\cite{bajones_hobbit_2018}, door-to-door garbage collectors \\cite{ferri_dustcart_2011}, Health and Safety monitors \\cite{dong_design_2018}, museum or tour guides \\cite{waldhart_reasoning_2019}, to name a few. Yet, even decades after the first robot vacuum cleaner was deployed (\\url{https:\/\/en.wikipedia.org\/wiki\/Robotic_vacuum_cleaner}), robot assistants still perform unreliably on more complex tasks. Succeeding in the real world is indeed a difficult challenge because it requires robots to make sense of the high-volume and diverse data coming through their perceptual sensors. Although different sensory modalities contribute to the robot's \\textit{sensemaking} abilities (e.g., touch, sound, temperature), in this work, we focus on the modality of vision. From this entry point, the problem then becomes one of enabling robots to correctly interpret the stimuli of their vision system, with the support of background knowledge sources, a capability also known as \\textit{Visual Intelligence} \\cite{chiatti_towards_2020}. The first prerequisite to Visual Intelligence is the ability to robustly recognise the different objects occupying the robot's environment (\\textit{object recognition}). Let us consider the case of HanS, the Health and Safety robot inspector currently under development at the Knowledge Media Institute (KMi). HanS is expected to monitor the Lab space in search of potentially dangerous situations, such as fire hazards. For instance, imagine that HanS was observing a flammable object (e.g., a paper cup) left on top of a portable heater. To conclude that it is in the presence of a potential fire hazard, the robot would first need to recognise that the cup and the portable heater are there. \n\nCurrently, the most common approach to tackling object recognition tasks is applying methods which are based on Machine Learning (ML). 
In particular, the state-of-the-art performance is defined by the latest approaches based on Deep Learning (DL) \\cite{liu_deep_2020,lecun_deep_2015}. Despite their popularity, these methods have received many critiques due to their brittleness and lack of transparency \\cite{marcus2018deep,parisi_continual_2019,pearl_theoretical_2018}. To compensate for these limitations, a more recent trend among AI researchers has been to combine ML with knowledge-based reasoning, thus adopting a \\textit{hybrid approach} \\cite{aditya_integrating_2019,gouidis_review_2020}. A question remains, however, on what type of knowledge resources and reasoning capabilities should be leveraged within this new class of hybrid methods \\cite{daruna2018sirok}.\n\nIn \\cite{chiatti_towards_2020}, we identified a set of \\textit{epistemic requirements}, i.e., a set of capabilities and knowledge properties, required for service robots to exhibit Visual Intelligence. We then mapped the identified requirements to the types of classification errors emerging from one of HanS' scouting rounds, where we relied solely on Machine Learning to recognise the objects. This error analysis highlighted that, in 74\\% of the cases, a more accurate object classification could in principle have been achieved if the relative size of objects was considered for their categorisation. For instance, back to HanS' case, the paper cup could be mistaken for a rubbish bin, due to its shape. However, rubbish bins are typically larger than cups. 
With this awareness, HanS would be able to rule out object categories which, albeit visually similar to the correct class, are \textit{implausible} from the standpoint of size.

These elements of \textit{typicality} and \textit{plausible reasoning} \cite{davis_commonsense_2015} link size reasoning to the broader AI objective of developing systems which exhibit \textit{common sense} \cite{levesque_common_nodate}, especially with respect to understanding a set of intuitive physics rules governing the environment \cite{hayes_second_1988,lake_building_2017}. This view is also supported by studies of human visual cognition, which suggest that our priors about the canonical size of objects play a role in how we categorise, draw and imagine objects \cite{rosch_principles_1999,hoffman_visual_2000,konkle_canonical_2011}.

On a more practical level, knowledge representations which encode object sizes have already been applied effectively to Natural Language Processing (NLP) tasks. These include answering questions such as ``is object A larger than object B?'' \cite{bagherinezhad_are_2016,elazar_how_2019}. However, despite this body of theoretical and empirical evidence, the role of size in object recognition has received little attention in the field of Computer Vision. To address this issue, in this paper we investigate the performance effects of augmenting an ML-based object recognition system both with background knowledge about the typical size of objects and with a method to reason about the size of the observed objects. Namely, we propose:

\begin{itemize}
\item A hybrid method to validate ML-based predictions based on the typical size of objects.
\item A novel representation for size, which categorises objects differently along different dimensions contributing to their size (i.e., their front surface area, depth and aspect ratio).
This representation also allows us to model object categories that include instances of varying size.
\end{itemize}

\section{Related work}

State-of-the-art object recognition methods rely heavily on Machine Learning, as further discussed in Section 2.1. Because of the limitations of ML-based methods, hybrid methods, which combine ML with background knowledge and knowledge-based reasoning, have recently been proposed (Section 2.2). In particular, our hypothesis is that awareness of object size has the potential to drastically improve the performance of hybrid object recognition methods \cite{chiatti_towards_2020}. Therefore, we will conclude our review of the literature by discussing existing approaches to representing the size of objects (Section 2.3).

\subsection{Machine Learning for Object Recognition}
The impressive performance exhibited by object recognition methods based on DL has led to significant advances on several Computer Vision benchmarks \cite{liu_deep_2020,krizhevsky_imagenet_2017,he_deep_2016}. Deep Neural Networks (NNs), however, come with their own limitations. These models (i) are notoriously data-hungry, i.e., they require thousands of annotated training examples to learn from, (ii) learn classification tasks offline, i.e., they assume a closed world with a fixed and pre-determined set of objects \cite{mancini_knowledge_2019}, and (iii) learn representational patterns automatically, by iterating over a raw input set \cite{lecun_deep_2015}. The latter trait can drastically reduce the start-up costs of feature engineering. However, it also complicates tasks such as explaining the obtained features and augmenting them with explicit knowledge statements \cite{marcus2018deep,pearl_theoretical_2018}.

The issue of learning robust object representations that adequately reflect changes in the environment, even from minimal training examples, has inspired the development of few-shot metric learning methods.
\textit{Metric learning} is the task of learning an embedding (or feature vector) space, where similar objects are mapped closer to one another than dissimilar objects. In this setup, even objects unseen at training time can be categorised, by matching the learned representations against a support (reference) image set. In particular, in a \textit{few-shot scenario}, the number of training examples and support images is kept to a minimum. Deep metric learning has been applied successfully to object recognition tasks \cite{koch_siamese_2015,hoffer_deep_2015,schroff_facenet_2015}, even in real-world, robotic scenarios \cite{zeng2018robotic}. Koch and colleagues \cite{koch_siamese_2015} originally proposed to train two identical Convolutional Neural Networks (CNN) fed with images to be matched by similarity. This twin architecture is also known as a Siamese Network. An extension of the Siamese architecture is the Triplet Network \cite{hoffer_deep_2015,schroff_facenet_2015}, where the input data are fed as triplets including: (i) one image depicting a certain object class (i.e., the \textit{anchor}), (ii) a \textit{positive example} of the same object, and (iii) a \textit{negative example}, depicting a different object. The winning team for the object stowing task at the latest Amazon Robotic Challenge further tested the effects of learning weights independently on each CNN branch \cite{zeng2018robotic}. Relaxing the weight-coupling constraint of Siamese and Triplet Networks was shown to benefit the matching of images across different visual domains, e.g., robot-collected images with product catalogue images. Hence, in what follows, we will use the two top-performing solutions in \cite{zeng2018robotic} as a baseline to evaluate the object recognition performance of solutions which are purely based on Machine Learning.
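To make the triplet setup concrete, the following minimal sketch (our illustration, not the implementation of \cite{hoffer_deep_2015,schroff_facenet_2015} or \cite{zeng2018robotic}; the embedding vectors and margin are toy values) computes the triplet margin loss on plain embedding vectors:

```python
# Illustrative sketch of the triplet margin loss used to train Triplet Networks.
# The embeddings below are toy 2D vectors; real networks output high-dimensional ones.

def l2(a, b):
    # Euclidean (L2) distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the positive example towards the anchor and push the negative away,
    # until they are separated by at least the margin.
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)

anchor   = [0.1, 0.9]
positive = [0.12, 0.88]  # same object class as the anchor
negative = [0.8, 0.2]    # different object class

assert triplet_loss(anchor, positive, negative) == 0.0  # already well separated
```

During training this loss shapes the embedding space; at recognition time only the learned L2 distance to the reference set is used to rank candidate classes.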
\n\n\\subsection{Hybrid Methods for Object Recognition}\nBroadly speaking, \\textit{hybrid reasoning methods} combine knowledge-based reasoning with Machine Learning. A detailed survey of hybrid methods developed to interpret the content of images can be found in \\cite{aditya_integrating_2019,gouidis_review_2020}. Many of these hybrid methods are specifically tailored on Deep NNs, which define the predominant approach to tackling object recognition problems. In this setup, background knowledge and knowledge-based reasoning can be integrated at four different levels of the NN \\cite{aditya_integrating_2019}: (i) in \\textbf{pre-processing}, to augment the training examples, (ii) within the \\textbf{intermediate layers}, (iii) as part of the \\textbf{architectural topology} or \\textbf{optimisation function}, and (iv) in the \\textbf{post-processing} stages, to validate the NN predictions.\n\nMethods in the first group rely on external knowledge to compensate for the lack of training examples. In \\cite{mancini_knowledge_2019}, auxiliary images depicting newly-encountered objects were first retrieved from the Web and then manually validated. As a result, significant supervision costs were introduced to compensate for the noisiness of data mined automatically.\n\nOther approaches have encoded the background knowledge directly in the inner layer representations of a Deep NN. In the RoboCSE framework, a set of knowledge properties of objects, (i.e., their typical location, fabrication material and affordances) were represented through multi-relational embeddings \\cite{daruna_robocse_2019}. 
This method proved effective for inferring the object's material, location and affordances from its class, but performed poorly on object categorisation tasks (i.e., when asked to infer the object's class from its properties).

More transparent and explainable than multi-relational embeddings, methods in the third group are either inspired by the topology of external knowledge graphs \cite{marino_more_2017} or introduce reasoning components which are trainable end-to-end \cite{serafini_logic_2016,manhaeve_deepproblog_2018,santoro_simple_2017,van_krieken_analyzing_2020}. Graph Search Neural Networks (GSNN) \cite{marino_more_2017} mirror the structure of an input knowledge graph, where search seeds are selected based on a separate object detection module. In Logic Tensor Networks (LTN) \cite{serafini_logic_2016}, entities are represented as distinct points in a vector space, based on a set of soft-logic assertions linking these entities. In this framework, symbolic rules (which may adhere to probabilistic logic \cite{manhaeve_deepproblog_2018}) are added as constraints to the NN's optimisation function. The authors of \cite{santoro_simple_2017} proposed to incorporate reasoning in the form of a trainable Relational Reasoning Layer. Similarly, in \cite{van_krieken_analyzing_2020}, differentiable knowledge statements (expressed in fuzzy logic) contribute to the training loss function, to aid digit classification on the MNIST dataset.

Finally, the fourth family of hybrid methods uses knowledge-based reasoning to validate the object predictions generated through ML. In \cite{young_towards_2016,young_semantic_2017} the results produced by the ML-based Vision module are first associated with the Qualitative Spatial Relationships extracted from the input image, and then also matched against the top-most related DBpedia concepts, if an unknown object is observed.
As in the case of \cite{santoro_simple_2017,van_krieken_analyzing_2020}, methods in this group can modularly interface with different NN architectures. Moreover, they make it possible to reason about objects unseen at training time, by querying external knowledge sources. For these reasons, in the approach proposed in this paper, knowledge-based reasoning is applied after generating the ML predictions. Because our focus is on reasoning about size, we cannot directly compare our approach against the methods in \cite{young_towards_2016,young_semantic_2017}, which focus on spatial reasoning and taxonomic reasoning (i.e., reasoning about the semantic relatedness of different object categories).

\subsection{Representing the Size of Objects}
Studies on human visual cognition have suggested that there is a set of preferred (or canonical) views which we use to mentally represent objects \cite{rosch_principles_1999,hoffman_visual_2000}. These views have recognisable colour and shape features. Similarly, our perception of object sizes is influenced by a set of prototypical priors. Specifically, the \textit{canonical size} we use to imagine and draw a certain object appears to be proportionally related to the logarithm of the object's assumed size, i.e., our ``prior knowledge about the size of objects in the world'' \cite{konkle_canonical_2011}. Inspired by these findings, Bagherinezhad et al. \cite{bagherinezhad_are_2016} modelled object sizes through a log-normal distribution. The resulting distributions were then used to populate nodes in a graph, where objects which co-occurred frequently across the YFCC100M dataset \cite{thomee_yfcc100m_2016} were linked together.

The size representation proposed in \cite{bagherinezhad_are_2016} is both \textit{quantitative} (i.e., expressed through statistical descriptors) and \textit{qualitative} (i.e., smaller than or larger than, as symbolised by the graph's edges).
Another quantitative representation was proposed in \cite{elazar_how_2019}, where sizes (namely the object's length or volume) are represented as Distributions over Quantities (DoQ). In \cite{zhu_reasoning_2014}, descriptors of the object's length are instead quantised with respect to three qualitative bins (i.e., $<$10in, 10--100in and $>$100in). Compared to \cite{zhu_reasoning_2014}, the statistical distributions in \cite{bagherinezhad_are_2016,elazar_how_2019} can more expressively model the size variety within the same object class. Indeed, the same object class can comprise many \textit{instantiations} or \textit{models} (a short novel and a dictionary are both books, although dictionaries are usually thicker). Moreover, the same object can be observed under different \textit{appearances} (e.g., the book could be open or closed). Therefore, relying on a knowledge representation which can capture this within-class variability is crucial to ensure reuse across different real-world scenarios.

All three approaches rely on data retrieved from the Web; this significantly reduces the cost of hardcoding a Knowledge Base about object sizes, but it is also more sensitive to noise. In addition, a limitation of the reviewed representations is that size is represented in one-dimensional terms, e.g., either through the object's volume or through its length. However, different physical dimensions (height, width and depth) contribute differently to characterising an object. For instance, recycling bins and coat stands may occupy a comparable volume, but bins are usually thicker than coat stands.

The size representations adopted in \cite{bagherinezhad_are_2016,elazar_how_2019} have made it possible to answer questions posed in natural language, such as ``are elephants bigger than butterflies?''.
Furthermore, in \\cite{zhu_reasoning_2014}, the size features were used, among others, as an intermediate representation to predict the objects' \\textit{affordances}, or typical uses. Nonetheless, the role of size in object recognition is yet to be evaluated. In this paper, we propose a novel qualitative representation for the object's typical size, which requires minimal manual annotations and controls for the presence of noisy measurements.\n\n\\section{Methodology}\n\n\\subsection{Representing qualitative sizes in a Knowledge Base}\nWe identified 60 object categories which are commonly found in KMi, the setting in which we aim to deploy our robotic Health and Safety monitor, HanS. These include not only objects which are common to most office spaces (e.g., chairs, desktop computers, keyboards), but also Health and Safety equipment (e.g., fire extinguishers, emergency exit signs, fire assembly point signs), and objects which are, to some extent, specific to KMi (e.g., a foosball table, colorful hats from previous gigs of the KMi rock band, a KMi-branded welcome pod at the main entrance). The objective was then to associate each category in this catalogue to a series of typical size features, represented in qualitative terms. To this aim, we isolated three features contributing to the size of objects, namely their (i) \\textbf{front surface area} (i.e., the product of their width by their height), (ii) \\textbf{depth} dimension, and (iii) \\textbf{Aspect Ratio (AR)}, i.e., the ratio of their width to their height. With respect to the first dimension, we can characterise objects as \\textit{extra-small}, \\textit{small}, \\textit{medium}, \\textit{large} or \\textit{extra-large} respectively. Secondly, objects can be categorised as \\textit{flat}, \\textit{thin}, \\textit{thick}, or \\textit{bulky}, based on their depth. 
Thirdly, we can further discriminate objects based on whether they are \textit{taller than wide} (\textit{ttw}), \textit{wider than tall} (\textit{wtt}), or \textit{equivalent} (\textit{eq}), i.e., of AR close to 1. If the first two qualitative dimensions were plotted on a Cartesian plane, a series of quadrants would emerge, as illustrated in Figure 1. Then, the AR can help to further separate the clusters of objects belonging to the same quadrant. For instance, doors and desks both belong to the extra-large and bulky group, but doors, unlike desks, are usually taller than wide.

Having defined the Cartesian plane of Figure 1, we can manually allocate the KMi objects to each quadrant and further sort the objects lying in the same quadrant. Sorting the objects manually ensures more reliable results than if the same information was retrieved automatically, especially given the paucity of resources encoding the relative size of objects \cite{chiatti_towards_2020,bagherinezhad_are_2016}.

\begin{figure}
 \centering
 \includegraphics[scale=0.2]{size_repr.pdf}
 \caption{Examples of object sorting across two dimensions: (i) the area of the front surface (the product of the width w by the height h), on the x axis; and (ii) the depth value d, on the y axis.}
\end{figure}

Moreover, in the proposed representation, bin membership is not mutually exclusive. Thus, with this representation, even classes which are extremely variable with respect to size, such as carton boxes and power cords, can be modelled. Indeed, boxes come in all sizes and power cords come in different lengths. Moreover, a box might lie completely flat, or appear bulkier, once assembled. Similarly, power cords, which are typically thinner than other pieces of IT equipment, might appear rolled up or tangled.
\n\n\n\n\\subsection{Hybrid Reasoning Architecture}\n\nWe propose a modular approach to combining knowledge-based reasoning with Machine Learning for object recognition. In the proposed workflow, the knowledge of the qualitative size of objects (Section 3.1) is integrated in post-processing, after generating the ML-based object predictions. The proposed hybrid architecture is outlined in Figure 2 and consists of a series of sub-components, organised as follows. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0 250 0 0, clip]{framework.pdf}\n \\caption{The proposed architecture for hybrid object recognition. The knowledge-based reasoning module, which is aware of the typical size features of objects, can be modularly queried to validate the ML-based predictions.}\n\\end{figure}\n\n\\textbf{ML-based object recognition.} We rely on the state-of-the-art, ML-based object recognition methods of \\cite{zeng2018robotic}, to classify a set of pre-segmented object regions. Specifically, we classify objects by similarity to a reference image set, through a multi-branch Network. In this deep metric learning setting, predictions are ranked by increasing Euclidean (or L2) distance between each target embedding and the reference embedding set. Nevertheless, this configuration can be easily replaced by any other algorithm that provides, for each detected object, (i) a set of class predictions with an associated measure of confidence (whether similarity-based or probability-based) and (ii) the segmented image region enclosing the object. \n\n\\textbf{Prediction selection.} This checkpoint is conceived for assessing whether a ML-based prediction needs to be corrected or not. At the time of writing, we achieved good results simply by retaining those predictions which the ML algorithm is most confident about, and by running the remaining predictions through the size-based reasoner. 
Specifically, we avoid the knowledge-based reasoning steps if the top-1 class in the ML ranking: (i) has a ranking score smaller than $\epsilon$ (i.e., the test image embedding lies near one of the reference embeddings, in terms of L2 distance); and also (ii) appears at least $i$ times in the top-K ranking. However, in Section 5, we also evaluate performance: (i) in the best-case scenario where the ground truth labels are known and we know exactly which predictions to select for correction; and (ii) in the case where all ML-based predictions are passed to the size-based reasoner, without a pre-selection.

\textbf{Size estimation.} At this stage, the input depth image corresponding to each RGB scene is first converted to a 3D PointCloud representation. Then, statistical outliers are filtered out to reduce the impact of noisy points and extract the dense 3D region which best approximates the volume of the observed object. Specifically, all points which lie farther away than two standard deviations ($2\sigma$) from their $n$ nearest neighbours are discarded. Because this outlier removal step is computationally expensive, especially for large 3D regions, we first downsample each input PointCloud so that one in every $\chi$ points is retained. Then, the Convex Hull algorithm is used to approximate the 3D box bounding the object region. Lastly, the x, y, z dimensions of the 3D bounding box are computed. Since the orientation of the object is not known a priori, we cannot unequivocally map any of the estimated dimensions to the object's real width and height. However, we can assume the object's depth to be the minimum of the returned x, y, z dimensions, due to the way depth measurements are collected through the sensor.
Indeed, since we do not apply any 3D reconstruction mechanisms, we can expect the measured depth to underestimate the real depth occupied by the object.

\textbf{Size quantization.} The three dimensions obtained at the previous step are here expressed in qualitative terms. First, the two dimensions which were not marked as depth are multiplied together, giving a proxy of the object's surface area. The object is then categorised as extra-small, small, medium, large or extra-large, based on a set of cutoff thresholds $T$. Second, with respect to the estimated depth dimension, the object is categorised both as flat/non-flat (based on a threshold $\lambda_0$), and as flat, thin, thick, or bulky (based on a second set of thresholds $\Lambda$, where $\lambda_0 \in \Lambda$). Third, hypotheses are made about whether the object is taller than wide (ttw), wider than tall (wtt), or equivalent (eq), based on a cutoff $\omega_0 \in \Omega$. It would be infeasible to predict the object's Aspect Ratio from the estimated 3D dimensions without knowing its current orientation.
Therefore, we estimate the object's AR based on the width (w) and height (h) of the 2D box bounding the object region, as follows:
\begin{equation}
AR = \begin{cases}
ttw &\text{if $h \geq w \land \frac{h}{w} \geq \omega_0$}\\
wtt &\text{if $h < w \land \frac{w}{h} \geq \omega_0$}\\
eq &\text{otherwise}
\end{cases}
\end{equation}

\begin{equation}\label{2}
P^{(M)}(x_1,\ldots,x_n)=\frac{1}{Z_n^{(M)}}\,\Delta_n(\{x\})\,\det\big[g_j^{(M)}(x_i)\big]_{i,j=1}^n,
\qquad x_i > 0 \: (i=1,\dots,n)
\end{equation}
where $Z_n^{(M)}$ is a known normalisation constant and $g_j^{(M)}(x)$ ($j=1,\ldots,n$) are given by certain Meijer $G$-functions.
Generally, such PDFs are known as polynomial ensembles~\cite{KS14}.

It seems natural to ask whether these product ensembles also have (at least approximately) an interpretation as a Gibbs measure of the form~\eqref{gibbs} and~\eqref{H1}.
However, unlike the Vandermonde determinant, the determinant in~\eqref{2} cannot be evaluated as a product (for $M \ge 2$).
This prohibits a literal interpretation of the eigenvalues of~\eqref{old-product} as a statistical mechanical system with only one- and two-body interactions. One could fear that this meant that there was no simple physical interpretation related to~\eqref{2}.
However, if we consider~\eqref{2} with each $x_j$ large, the Meijer $G$-functions can be replaced by their asymptotic approximation~\cite{Fi72}.
After a change of variables, the joint density~\eqref{2} to leading order in the asymptotic expansion becomes~\cite{FLZ15}
\begin{equation}\label{MB-laguerre}
\tilde P^{(M)}(x_1,\ldots,x_n)=
\frac1{\tilde Z_n^{(M)}}\Delta_n(\{x\})\Delta_n(\{x^M\})\prod_{k=1}^n x_k^a\,e^{-x_k},
\qquad x_k > 0 \: (k=1,\dots,n)
\end{equation}
where $a$ is a known non-negative constant.
This does correspond to the Boltzmann factor of a statistical mechanical system with one- and two-body interactions only.

A comparison between~\eqref{2} and~\eqref{MB-laguerre} can be done a posteriori.
A connection between the two ensembles was first noted by Kuijlaars and Stivigny~\cite{KS14}, who observed that the hard edge scaling limit of~\eqref{MB-laguerre} found
in~\\cite{Bo98} took the same functional form as the Meijer $G$-kernel found in the product ensemble~\\cite{KZ14}, albeit with a different choice of parameters. Due to recent progress, even more is known about the scaling limits of both models, and their similarities. Thus it has been established that the two ensembles also share the same global spectral distribution~\\cite{Mu02,BJLNS10,BBCC11,PZ11,FW15}. Furthermore, in both cases the local correlations in the bulk and near the soft edge are given by the familiar sine and Airy process, respectively~\\cite{LWZ14,Zh15}.\n\nThe ensemble~\\eqref{MB-laguerre} had, in fact, appeared in earlier random matrix literature.\nIt was first isolated by Muttalib~\\cite{Mu95}, who suggested it as a naive approximation to the transmission eigenvalues in a problem about quantum transport.\nA feature of the new interaction is that bi-orthogonal polynomials (rather than orthogonal polynomials) are needed in the study of correlation functions. Such bi-orthogonal ensembles were considered in greater generality by Borodin~\\cite{Bo98}, who devoted special attention to PDFs\n\\begin{equation}\\label{MB}\nP(x_1,\\ldots,x_n)=\\frac1{Z_n}\\prod_{j=1}^n w(x_l)\\prod_{1\\leq j0$ and $w(x)$ representing one of the three classical weight functions from Table~\\ref{table:weights}.\nFollowing \\cite{FW15}, we will henceforth refer to these ensembles as the (Jacobi, Laguerre, Hermite) Muttalib--Borodin ensembles.\nWe note that the awkward dependence of signs in the last factor in~\\eqref{MB} disappears when the eigenvalues are non-negative (e.g. 
for the Laguerre and Jacobi ensembles) or when $\theta$ is an odd integer as in~\eqref{MB-Hermite}.

At the time of their introduction, the Muttalib--Borodin ensembles had no obvious relation to any random matrix models defined in terms of PDFs on their entries (except for the trivial case $\theta=1$), and could merely be interpreted as a simple one-parameter generalisation of the classical ensembles.
However, we now see that the Laguerre Muttalib--Borodin ensemble has a close connection to products of complex Gaussian random matrices~\eqref{old-product} through the approximation~\eqref{MB-laguerre}.

Knowing that the Laguerre Muttalib--Borodin ensemble appears as an asymptotic approximation to the Gaussian product~\eqref{old-product}, it seems natural to ask the reverse question: \emph{Can we find product ensembles which reduce asymptotically to the Jacobi and Hermite Muttalib--Borodin ensembles?} If this is possible, it would be reasonable to say that we have completed a link between the Muttalib--Borodin ensembles with classical weights and the new family of product ensembles.

For the Jacobi Muttalib--Borodin ensemble a link to products of random matrices is provided by looking at the squared singular values of a product of truncated unitary matrices~\cite{KKS15,FW15}. In this paper, it is our aim to isolate a random matrix product structure for which the eigenvalue PDF reduces asymptotically to the functional form of the Hermite Muttalib--Borodin ensemble. This construction therefore completes the correspondence between product ensembles and the three Muttalib--Borodin ensembles with classical weights, i.e. Laguerre, Jacobi, Hermite.
Furthermore, the relevant product ensemble provides by itself an interesting new class of integrable models, which unlike all previous product ensembles (see~the review \cite{AI15}) allows for negative eigenvalues.

As the product ensemble in question must allow for negative eigenvalues, it is no longer sufficient to investigate Wishart-type matrices like~\eqref{old-product}, which are positive-definite by construction.
It turns out that the correct structure is the Hermitised product of a GUE matrix and $M$ complex Ginibre matrices given by~\eqref{W1}.
The case $M = 1$ of~\eqref{W1} has previously been isolated in the recent paper of Kumar~\cite{Ku15} as an example of a matrix ensemble which permits an explicit eigenvalue PDF.

\subsection{Second motivation: hyperbolic Harish-Chandra--Itzykson--Zuber integrals}

Another reason that the Hermitised random matrix product~\eqref{W1} is of particular interest is its relation to the so-called hyperbolic Harish-Chandra--Itzykson--Zuber (HCIZ) integral. By way of introduction on this point, we note that it is by now evident that the family of exactly solvable product ensembles is intimately linked to a family of exactly solvable group integrals sometimes referred to as integrals of HCIZ type. For the study of products of Ginibre matrices~\eqref{old-product} it was sufficient to know the familiar (and celebrated) HCIZ integral~\cite{HC57,IZ80}:
\begin{equation}\label{HCIZ}
 \int_{U(N)/U(1)^N}e^{-\tr AVBV^{-1}}\,(V^{-1}dV)=\pi^{N(N-1)/2}
 \frac{\det[e^{-a_ib_j}]_{i,j=1}^{N}}
 {\prod_{1\leq i<j\leq N}(a_j-a_i)(b_j-b_i)},
\end{equation}
where $a_1,\ldots,a_N$ and $b_1,\ldots,b_N$ denote the eigenvalues of the Hermitian matrices $A$ and $B$.

\begin{remark}
For $n>N$ the formula is no longer generally valid, depending on the properties of $w_j$.
It would be interesting to extend the above results to include the case $n>N$ more generally.
\end{remark}

With Lemma~\ref{C1} at hand, we are ready to write down the eigenvalue PDF for the product~\eqref{W1}.

\begin{thm}\label{cor-matrix} Let $\nu_0=0, \nu_1, \ldots, \nu_M$ be non-negative integers.
Suppose that
$H$ is an $n\times n$ GUE matrix and $G_1, \ldots, G_M$ are independent standard complex Gaussian matrices where $G_m$ is of size $(\nu_{m-1}+n) \times (\nu_{m} +n)$.
Then the joint PDF for the non-zero eigenvalues of the matrix~\eqref{W1} is given by
\begin{equation}\label{PDF-matrix}
P^{(M)}(x_1,\ldots,x_n)=\frac{1}{Z^{(M)}_n}\prod_{1\leq i<j\leq n}(x_j-x_i)\,\det\big[g_{j-1}(x_i)\big]_{i,j=1}^n,
\end{equation}
where $Z^{(M)}_n$ is a normalisation constant and $g_0,\ldots,g_{n-1}$ are certain functions expressible in terms of Meijer $G$-functions.
\end{thm}

\begin{prop}\label{prop:kernel-finite}
The even and odd kernels $K_{2n}^\textup{even}$ and $K_{2n}^\textup{odd}$ admit the double contour integral representations derived below, with the $t$-integration along the vertical line $\Re(t)=c$ and with $\Re(s)>c$ for all $s\in\Sigma$.
\end{prop}

\begin{proof}
As the proofs for the odd and even kernels are almost identical, we provide only the proof for the even case. The odd case is easily verified by the reader.

It follows from the definition of the even kernel~\eqref{kernel-even}, together with the contour integral representation of the bi-orthogonal functions from Proposition~\ref{prop:bi-func-int}, that
\begin{equation}
 K_{2n}^\text{even}(x,y)=\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,|x|^{2s}|y|^{-2t-1}
 \frac{\Gamma(-s)\Gamma(t+\frac12)}{\Gamma(-t)\Gamma(s+\frac12)}\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}
 \sum_{k=0}^{n-1}\frac{\Gamma(k-t)}{\Gamma(k+1-s)}.
\end{equation}
Following similar steps as in~\cite{KZ14}, we note that the sum allows a telescopic evaluation.
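The telescoping step can be spelled out; a short supplementary derivation in the notation above, using the functional equation $\Gamma(z+1)=z\Gamma(z)$ and setting $f(k)=\Gamma(k-t)/\Gamma(k-s)$:

```latex
\begin{equation*}
f(k+1)-f(k)=\frac{\Gamma(k-t)}{\Gamma(k-s)}\Big(\frac{k-t}{k-s}-1\Big)
=(s-t)\,\frac{\Gamma(k-t)}{\Gamma(k+1-s)},
\end{equation*}
so that the sum telescopes,
\begin{equation*}
\sum_{k=0}^{n-1}\frac{\Gamma(k-t)}{\Gamma(k+1-s)}
=\frac{f(n)-f(0)}{s-t}
=\frac{1}{s-t}\Big(\frac{\Gamma(n-t)}{\Gamma(n-s)}-\frac{\Gamma(-t)}{\Gamma(-s)}\Big).
\end{equation*}
```

The two terms on the right-hand side produce the two double integrals in the next display.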
This gives
\begin{multline}
 K_{2n}^\text{even}(x,y)=\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{2s}|y|^{-2t-1}}{s-t}
 \frac{\Gamma(-s)\Gamma(t+\frac12)}{\Gamma(-t)\Gamma(s+\frac12)}\frac{\Gamma(n-t)}{\Gamma(n-s)}
 \prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}\\
 -\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{2s}|y|^{-2t-1}}{s-t}
 \frac{\Gamma(t+\frac12)}{\Gamma(s+\frac12)}\prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}.
\end{multline}
Here, the integral on the second line vanishes, as its integrand has no poles encircled by the contour $\Sigma$, and thus
\begin{equation}
 K_{2n}^\text{even}(x,y)=\frac{1}{(2\pi i)^2}\int_{c-i\infty}^{c+i\infty} dt\oint_\Sigma ds\,\frac{|x|^{2s}|y|^{-2t-1}}{s-t}
 \frac{\Gamma(-s)\Gamma(t+\frac12)}{\Gamma(-t)\Gamma(s+\frac12)}\frac{\Gamma(n-t)}{\Gamma(n-s)}
 \prod_{m=1}^M\frac{\Gamma(\nu_m+2t+1)}{\Gamma(\nu_m+2s+1)}.
\end{equation}
Finally, the proposition follows by a change of variables $s\mapsto s/2$ and $t\mapsto t/2$.
\end{proof}


The above integral representations for the bi-orthogonal functions and the kernel are probably the most convenient form for further asymptotic analysis, as we will see in Section~\ref{sec:hard}. However, it is also often helpful to express these formulae in terms of special functions, as (for example) this allows for the use of pre-defined mathematical software. Furthermore, such reformulations often guide us to recognise patterns which otherwise would have been left unseen.

The integral representations for the bi-orthogonal functions given by Proposition~\ref{prop:bi-func-int} can also be recognised as several different types of special functions; these include generalised hypergeometric, Meijer $G$-, and Fox $H$-functions.
Here, we will restrict ourselves to their Meijer $G$-function formulation.

Let us first consider the bi-orthogonal polynomials, which may be written as
\begin{align}
p_{2n}(x)&=\frac{(-1)^n}{2^{2n}}\prod_{m=0}^M\frac{\Gamma(\nu_m+2n+1)}{2^{\,\nu_m}\pi^{-1/2}} \nonumber
\MeijerG{1}{0}{1}{2M+2}{n+1}{-\frac{\nu_0}2,-\frac{\nu_0}2+\frac12,\ldots,-\frac{\nu_M}2,-\frac{\nu_M}2+\frac12}{\frac{x^2}{2^{2M}}}, \\
\frac{p_{2n+1}(x)}x&=\frac{(-1)^n}{2^{2n}}\prod_{m=0}^M\frac{\Gamma(\nu_m+2n+2)}{2^{\,\nu_m+1}\pi^{-1/2}}
\MeijerG{1}{0}{1}{2M+2}{n+1}{-\frac{\nu_0}2,-\frac{\nu_0}2-\frac12,\ldots,-\frac{\nu_M}2,-\frac{\nu_M}2-\frac12}{\frac{x^2}{2^{2M}}}.
\label{p_2n-meijer}
\end{align}
It is worth comparing these polynomials with the polynomials found in the study of the Laguerre-like matrix product~\eqref{old-product}.
Akemann et al.~\cite{AIK13} found that in this case the bi-orthogonal polynomial is given by
\begin{equation}
P_n^{(M)}(x)=(-1)^n\prod_{m=0}^M\Gamma(\nu_m+n+1)
\MeijerG{1}{0}{0}{M+1}{n+1}{-\nu_0,-\nu_1,\ldots,-\nu_M}{x}.
\end{equation}
It is clear that the two families of polynomials are related as
\begin{equation}
p_{2n}(x)\propto P_n^{(2M+1)}\Big(\frac{x^2}{2^{2M}}\Big)
\qquad\text{and}\qquad
p_{2n+1}(x)\propto xP_n^{(2M+1)}\Big(\frac{x^2}{2^{2M}}\Big)
\end{equation}
with
\begin{equation}\label{nu-map}
\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}/2,({\nu_m}-1)/2\}_{m=0}^M
\qquad\text{and}\qquad
\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}/2,({\nu_m}+1)/2\}_{m=0}^M,
\end{equation}
respectively.
This is a generalisation of the relation between Hermite and Laguerre polynomials.
Recall that\n\\begin{equation}\\label{HL}\n\\tilde H_{2n}(x)=\\tilde L^{(-\\frac12)}_n(x^2)\n\\qquad\\text{and}\\qquad\n\\tilde H_{2n+1}(x)=x\\tilde L^{(+\\frac12)}_n(x^2),\n\\end{equation}\nwhere $\\tilde H_n(x)$ and $\\tilde L_n^{(\\alpha)}(x)$ denote the Hermite and Laguerre polynomials in monic normalisation.\n\nLikewise, the (non-polynomial) bi-orthogonal functions may be written as\n\\begin{align}\n\\frac{\\phi_{2n}(|x|)}{|x|}&=(-1)^n\\prod_{m=1}^M\\frac{2^{\\nu_m-2}}{\\pi^{1\/2}}\n\\MeijerG{2M+1}{1}{1}{2M+2}{-n}{\\frac{\\nu_M}2-\\frac12,\\frac{\\nu_M}2,\\ldots,\\frac{\\nu_0}2-\\frac12,\\frac{\\nu_0}2}{\\frac{x^2}{2^{2M}}}, \\\\\n\\phi_{2n+1}(|x|)&=(-1)^n\\prod_{m=1}^M\\frac{2^{\\nu_m-1}}{\\pi^{1\/2}}\n\\MeijerG{2M+1}{1}{1}{2M+2}{-n}{\\frac{\\nu_M}2+\\frac12,\\frac{\\nu_M}2,\\ldots,\\frac{\\nu_0}2+\\frac12,\\frac{\\nu_0}2}{\\frac{x^2}{2^{2M}}}.\n\\end{align}\nAgain, we want to compare to the formula in~\\cite{AIK13} which this time reads\n\\begin{equation}\n\\Phi_n^{(M)}(x)=(-1)^n\\MeijerG{M}{1}{1}{M+1}{-n}{\\nu_M,\\ldots,\\nu_1,\\nu_0}{x}.\n\\end{equation}\nEvidently, we have the following relations\n\\begin{equation}\\label{phi-relations}\n\\phi_{2n}(|x|)\\propto |x|\\Phi_n^{(2M+1)}\\Big(\\frac{x^2}{2^{2M}}\\Big)\n\\qquad\\text{and}\\qquad\n\\phi_{2n+1}(|x|)\\propto \\Phi_n^{(2M+1)}\\Big(\\frac{x^2}{2^{2M}}\\Big),\n\\end{equation}\nwith~\\eqref{nu-map} as before.\nYet again, this is a generalisation of the relation between Hermite and Laguerre polynomials. 
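As a quick numerical sanity check of the monic Hermite--Laguerre relation \eqref{HL} (a Python sketch; the three-term recurrences are the standard ones, and the evaluation point is arbitrary):

```python
import math

def hermite_monic(n, x):
    """Monic Hermite polynomial H_n(x)/2^n, via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h / 2.0 ** n

def laguerre_monic(n, alpha, x):
    """Monic Laguerre polynomial (-1)^n n! L_n^{(alpha)}(x), via the recurrence
    (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}."""
    if n == 0:
        return 1.0
    l_prev, l = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        l_prev, l = l, ((2 * k + 1 + alpha - x) * l - (k + alpha) * l_prev) / (k + 1)
    return (-1) ** n * math.factorial(n) * l

# tilde H_{2n}(x) = tilde L_n^{(-1/2)}(x^2), tilde H_{2n+1}(x) = x tilde L_n^{(1/2)}(x^2)
x = 0.7
for n in range(5):
    assert abs(hermite_monic(2 * n, x) - laguerre_monic(n, -0.5, x * x)) < 1e-9
    assert abs(hermite_monic(2 * n + 1, x) - x * laguerre_monic(n, 0.5, x * x)) < 1e-9
```

The same check applied to the relations~\eqref{nu-map} would require an evaluation of Meijer $G$-functions, which we do not attempt here.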
In the simplest case the relations~\eqref{phi-relations} reduce to\n\begin{equation}\n\tilde H_{2n}(x)w_\text{H}(x)=|x|\tilde L^{(-\frac12)}_n(x^2)w_\text{L}^{(-\frac12)}(x^2)\n\qquad\text{and}\qquad\n\tilde H_{2n+1}(|x|)w_\text{H}(x)=\tilde L^{(+\frac12)}_n(x^2)w_\text{L}^{(\frac12)}(x^2), \nonumber\n\end{equation}\nwhere $w_\text{H}(x)=e^{-x^2}$ and $w_\text{L}^{(\alpha)}(x)=x^\alpha e^{-x}$ are the Hermite and Laguerre weight functions.\n\nIt is, of course, well-known that there are relations between ensembles with reflection symmetry about the origin and ensembles on the half-line (albeit explicit formulae may be elusive). A general description of such relations in the Muttalib--Borodin ensembles can be found in~\cite{FI16}.\n\n\n\n\n\\section{Scaling limits at the origin in product and Muttalib--Borodin ensembles}\n\label{sec:hard}\n\n\nWith the integral representations of the correlation kernels established by Proposition~\ref{prop:kernel-finite}, we can turn to a study of asymptotic properties. Perhaps the most interesting scaling regime is that of the local correlations near the origin, referred to as the hard edge when the eigenvalues are strictly positive. For other product ensembles~\cite{KZ14,Fo14,KKS15,FL16}, it has been observed that the correlations at the hard edge are determined by the so-called Meijer $G$-kernel, which generalises the more familiar Bessel kernel. Below, we will see that the Meijer $G$-kernel appears once again, but this time\ninvolving a sum.\n\n\begin{thm}\label{thm:hard} Let $K_n(x,y)=K_n^\textup{even}(x,y)+K_n^\textup{odd}(x,y)$ with the even and odd kernels given by Proposition~\ref{prop:kernel-finite}. 
For $x,y\\in\\mathbb R\\setminus\\{0\\}$ and $\\nu_1\\ldots,\\nu_M$ fixed, the microscopic limit near the origin is\n\\begin{equation}\\label{hard-limit}\n\\lim_{n\\to\\infty}\\frac{1}{\\sqrt n}K_{2n}\\Big(\\frac x{\\sqrt n},\\frac y{\\sqrt n}\\Big)\n=K^\\textup{even}(x,y)+K^\\textup{odd}(x,y)\n\\end{equation}\nwith\n\\begin{align}\nK^\\textup{even}(x,y)\n&=\\frac{1}{2(2\\pi i)^2}\\int_{c-i\\infty}^{c+i\\infty} dt\\int_\\Sigma ds\\,\\frac{|x|^{s}|y|^{-t-1}}{s-t}\n \\frac{\\Gamma(-\\frac s2)\\Gamma(\\frac{t+1}2)}{\\Gamma(-\\frac t2)\\Gamma(\\frac{s+1}2)}\n \\prod_{m=1}^M\\frac{\\Gamma(\\nu_m+t+1)}{\\Gamma(\\nu_m+s+1)} \\label{hard-even} \\\\\nK^\\textup{odd}(x,y)&=\\frac{\\sgn(xy)}{2(2\\pi i)^2}\\int_{c-i\\infty}^{c+i\\infty} dt\\int_\\Sigma ds\\,\\frac{|x|^{s}\\,|y|^{-t-1}}{s-t}\\,\n \\frac{\\Gamma(\\frac{1-s}2)\\Gamma(\\frac{t+2}2)}{\\Gamma(\\frac{1-t}2)\\Gamma(\\frac{s+2}2)}\n \\prod_{m=1}^M\\frac{\\Gamma(\\nu_m+t+1)}{\\Gamma(\\nu_m+s+1)} \\label{hard-odd},\n\\end{align}\nwhere $-1c$ for all $s\\in\\Sigma$.\n\\end{thm}\n\n\\begin{proof}\nWe only consider the even kernel in Proposition~\\ref{prop:kernel-finite} since the odd case is very similar. After rescaling we rewrite the integral representation of the even kernel in Proposition~\\ref{prop:kernel-finite} as\n\\begin{align}\\label{5.4}\n\\frac 1{\\sqrt n} K_{2n}^\\textup{even}(\\frac x{\\sqrt n},\\frac y{\\sqrt n})= \\frac{1}{2(2\\pi i)^2}\\int_{c-i\\infty}^{c+i\\infty} dt\\int_\\Sigma ds\\,\\frac{|x|^{s}|y|^{-t-1}}{s-t} \\frac{f_{n}(s)}{f_{n}(t)} \\frac{g(t)}{g(s)},\n\\end{align}\n with \\begin{equation}\n f_{n}(s)= \\frac{\\Gamma(n)\\Gamma(-\\frac s2)}{n^{\\frac s2} \\Gamma(n-\\frac s2)}, \\qquad \n g(s)=\\Gamma\\big(\\frac{s+1}{2}\\big) \\prod_{m=1}^M \\Gamma(\\nu_m+s+1). \n \\end{equation}\n \n For any fixed $t\\in c+i \\mathbb{R}$ and $s \\in \\Sigma$, using \\cite[eq. 
5.11.13]{NIST} we see \n\begin{equation}\label{5.6}\nf_{n}(s)= \Gamma(-\frac s2) \big(1+O(\frac{1}{n})\big), \qquad f_{n}(t)= \Gamma(-\frac t2) \big(1+O(\frac{1}{n})\big). \end{equation}\nFormally, substituting (\ref{5.6}) in (\ref{5.4}) gives (\ref{hard-even}). To proceed rigorously, we need to verify a condition for the exchange of limit and integration. For this purpose, we will \nfind dominating functions for $1\/|f_{n}(t)|$ and $|f_{n}(s)|$, respectively.\n\nFirst, using \cite[eq. 5.11.13]{NIST} we have for sufficiently large $n$\n\begin{equation}\n \frac{1}{|f_{n}(t)|} \leq \frac{n^{\frac c2} \Gamma(n-\frac c2) }{ \Gamma(n)|\Gamma(-\frac t2)|} \leq \frac{2}{|\Gamma(-\frac t2)|}, \quad \forall t\in c+i \mathbb{R}. \label{t-bound}\n\end{equation}\n\nSecond, we require an upper bound for $ | f_{n}(s) |$. Noting the asymptotic expansion, valid as $z\rightarrow \infty$ in the sector $|\mathrm{arg}(z)|\leq \pi-\delta$ (with $0<\delta<\pi$),\n\begin{equation}\n\Gamma(z)=e^{-z}z^{z-\frac{1}{2}}\sqrt{2\pi} \big(1+O(\frac{1}{z})\big), \label{Agamma}\n\end{equation} \nit is easy to see that for a given $y_0>0$ we can choose the contour $\Sigma=\Sigma_{l}\cup \Sigma_{r}$ with \begin{equation} \Sigma_{l}=\big\{\frac{c}{2}+iy:|y|\leq y_{0}\big\}\cup \big\{x\pm iy_0: \frac{c}{2} \leq x\leq 1\big\}, \quad \Sigma_{r}= \big\{x\pm iy_0: x> 1\big\}.\end{equation}\nThus, we get from \eqref{Agamma} and the boundedness of $\Gamma(-s\/2)$ over $\Sigma_l$ that for large $n$ there exists a constant $C_1=C_1(y_0) > 0$ such that \n\begin{equation}\n |f_{n}(s)| \leq C_1, \qquad \forall s\in \Sigma_{l}. 
\label{upl}\n\end{equation}\nIn order to estimate $f_{n}(s)$ with $s\in \Sigma_{r}$, we use the integral representation \n \begin{equation}\n f_{n}(s)= \frac{n^{-\frac{s}{2}}}{ 2i \sin \frac{\pi s}{2}} \int_{\mathcal{C}_0} (1-u)^{n-1}(-u)^{-\frac{s}{2}-1}du,\n\end{equation}\nwhere $\mathcal{C}_0$ is a counter-clockwise path which begins and ends at $1$ and encircles the origin once; see e.g. \cite[eq. 5.12.10]{NIST}. Note that we choose $(-u)^{-1-s\/2}=e^{-(1+s\/2)\log(-u)}$ with $-\pi<\mathrm{arg}(-u)<\pi$. Replace $u$ by $u\/n$ and deform the resulting contour into the path which starts from $n$, proceeds along the (upper) real axis to $1$, describes a circle of radius one counter-clockwise around the origin, and returns to $n$ along the (lower) real axis. That is, \n \begin{equation}\n f_{n}(s)= \frac{1}{ 2i \sin \frac{\pi s}{2}} \int_{\mathcal{C}} (1-\frac{u}{n})^{n-1}(-u)^{-\frac{s}{2}-1}du.\n\end{equation}\nLet $s=v\pm iy_0$, $v>1$. On the unit circle of the $u$-integral above write $-u=e^{i\theta}$. Then we easily obtain for $n\geq1$\n \begin{equation}\n |f_{n}(s)|\leq \frac{1}{ 2 |\sin \frac{\pi s}{2}|} \int_{-\pi}^{\pi} \big(1+\frac{1}{n}\big)^{n-1} |e^{-(\frac{s}{2}+1)i\theta}|d\theta\leq \n \frac{\pi e^{ 1+\frac{\pi y_{0}}{2}}}{ |\sin \frac{\pi s}{2}|}. \label{ub1}\n\end{equation}\nOn the upper and lower real axis, we have \n \begin{align}\n |f_{n}(s)|&\leq \frac{1}{ 2 |\sin \frac{\pi s}{2}|} \int_{1}^{n} \big(1-\frac{u}{n}\big)^{n-1}|u^{-\frac{s}{2}-1} e^{-(\frac{s}{2}+1)(\mp i\pi)}|du \nonumber \\\n &\leq \frac{1}{ 2 |\sin \frac{\pi s}{2}|} \int_{1}^{n} u^{-\frac{v}{2}-1} e^{\frac{1}{2}\pi y_{0}} du \nonumber\\\n & =\frac{1}{ |\sin \frac{\pi s}{2}|} e^{\frac{1}{2}\pi y_{0}} \frac{1- n^{-\frac{v}{2}} }{v}\leq \frac{1}{ |\sin \frac{\pi s}{2}|} e^{\frac{1}{2}\pi y_{0}}. 
\label{ub2}\n\end{align} \nUsing the simple fact $|\sin \frac{\pi s}{2}| \geq |\sinh \frac{\pi }{2} \mathrm{Im}(s)|$, the combination of \eqref{ub1} and \eqref{ub2} shows that \nthere exists a constant $C_2=C_2(y_0)>0$ such that \n\begin{equation}\n |f_{n}(s)| \leq C_2, \qquad \forall s\in \Sigma_{r}.\n\end{equation}\nTogether with \eqref{upl} this gives us a bound $C>0$, that is, for large $n$ \n\begin{equation}\n |f_{n}(s)| \leq C, \qquad \forall s\in \Sigma. \label{s-bound}\n\end{equation}\n\n\nFinally, using \eqref{Agamma} and the asymptotic formula, valid as $y\rightarrow \pm \infty$ with bounded real part $x$ (see \cite[eq. 5.11.9]{NIST}),\n\begin{equation}\n| \Gamma(x + iy) |\sim \sqrt{2\pi} |y|^{x-\frac{1}{2}} e^{-\frac{1}{2}\pi |y|},\n\end{equation}\nit is easy to conclude that the function of the variables $s$ and $t$\n \begin{equation}\n \,\frac{||x|^{s}|y|^{-t-1}|}{|s-t|} \frac{2}{|\Gamma(-\frac t2)|} \frac{|g(t)|}{|g(s)|}, \n \end{equation}\n is integrable along the chosen contours, whenever $-1<c<0$ and $y_0>0$. This completes the proof.\n\end{proof}\n\nThe limiting kernel should be compared with the Meijer $G$-kernel, which describes the hard edge correlations of the product ensembles~\cite{KZ14,Fo14,KKS15,FL16} mentioned above. We note that this kernel is single-sided ($x,y\in\mathbb R_+$) while the kernel from Theorem~\ref{thm:hard} is double-sided ($x,y\in\mathbb R\setminus\{0\}$). However, it is also evident that our new kernel may be re-expressed in terms of the Meijer $G$-kernel. We have\n\begin{equation}\nK^\textup{even}(|x|,|y|)=\frac{|y|}{2^{2M}}K_\text{Meijer}^{2M+1}\Big(\frac{x^2}{2^{2M}},\frac{y^2}{2^{2M}}\Big)\n\quad\text{and}\quad\nK^\textup{odd}(|x|,|y|)=\frac{|x|}{2^{2M}}K_\text{Meijer}^{2M+1}\Big(\frac{x^2}{2^{2M}},\frac{y^2}{2^{2M}}\Big)\n\end{equation}\nwith\n\begin{equation}\n\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}\/2,({\nu_m}-1)\/2\}_{m=0}^M\n\qquad\text{and}\qquad\n\{\nu_m\}_{m=0}^M\mapsto\{{\nu_m}\/2,({\nu_m}+1)\/2\}_{m=0}^M,\n\end{equation}\nrespectively. 
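The gamma-ratio asymptotics underlying \eqref{5.6}, namely $\Gamma(n)\/(n^{a}\Gamma(n-a))\to1$ for fixed $a$ (cf.~\cite[eq.~5.11.13]{NIST}), is easily illustrated numerically. A minimal Python sketch for real $a$ (the sample values of $a$ and $n$ are arbitrary):

```python
import math

def gamma_ratio(n, a):
    """Gamma(n) / (n**a * Gamma(n - a)), computed via log-gamma to avoid overflow."""
    return math.exp(math.lgamma(n) - a * math.log(n) - math.lgamma(n - a))

# The ratio tends to 1 with an O(1/n) error, uniformly for bounded a.
for a in (-0.4, 0.3, 0.9):
    err_small = abs(gamma_ratio(50, a) - 1.0)
    err_large = abs(gamma_ratio(5000, a) - 1.0)
    assert err_large < err_small < 0.05
```

The same decay rate holds for complex $a$ with bounded real part, which is the situation used in the proof above.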
Thus, the random product matrix~\eqref{W1} provides yet another appearance of the Meijer $G$-kernel; albeit this time in a double-sided version. For a graphical representation of the Meijer $G$-kernel we refer to~\cite[Fig.~3.2]{Ip15}, which shows plots of the local density (i.e. the kernel with $x=y$) for different values of $M$.\n\nA double-sided hard edge scaling limit near the origin is also present in the Hermite Muttalib--Borodin ensemble.\nIn this case the kernel is found to be~\cite{Bo98}\n\begin{equation}\label{kernel-borodin}\nK^\text{even}(x,y)=K^{(\frac{\alpha-1}2,\theta)}(x^2,y^2)\n\qquad\text{and}\qquad\nK^\text{odd}(x,y)=\sgn(xy)|x|^\theta|y|\,K^{(\frac{\alpha+\theta}2,\theta)}(x^2,y^2)\n\end{equation}\nwhere\n\begin{equation}\label{kernel-wright-bessel}\nK^{(\alpha,\theta)}(x,y)=\theta\int_0^1du(xu)^\alpha J_{\frac{\alpha+1}\theta,\frac1\theta}(xu)J_{\alpha+1,\theta}((yu)^\theta)\n\end{equation}\nwith $J_{a,b}(x)$ denoting Wright's Bessel function. In the case relevant to us~\eqref{MB-Hermite}, we also have $\theta=2M+1$. Furthermore, it is known from~\cite{KS14} that the kernel~\eqref{kernel-wright-bessel} is a Meijer $G$-kernel whenever $\theta$ is a positive integer. In particular, we have\n\begin{equation}\n\Big(\frac{x^2}{2^{2M}}\Big)^{\frac{1}{2M+1}-1}\nK^{(\alpha,2M+1)}\Big((2M+1)\Big(\frac{x^2}{2^{2M}}\Big)^{\frac{1}{2M+1}},(2M+1)\Big(\frac{y^2}{2^{2M}}\Big)^{\frac{1}{2M+1}}\Big)\n=K_\text{Meijer}^{2M+1}\Big(\frac{y^2}{2^{2M}},\frac{x^2}{2^{2M}}\Big),\n\end{equation}\nwhere the Meijer $G$-kernel on the right-hand side has indices\n\begin{equation}\n\nu_m=\frac{\alpha+m-1}{2M+1},\qquad m=1,\ldots,2M+1,\n\end{equation}\nand as always $\nu_0=0$. 
It follows from~\\eqref{kernel-borodin} and~\\eqref{kernel-wright-bessel} that the hard edge correlations for the Hermite Muttalib--Borodin ensemble with appropriately chosen parameters may be expressed in terms of the Meijer $G$-kernel in a similar fashion as done for the product ensemble above. We note that the choice of variables in~\\eqref{kernel-wright-bessel} should be compared to the change of variables~\\eqref{change} performed in the derivation of the asymptotic reduction~\\eqref{hermite-MB}.\n\n\nIt is worth verifying consistency of the simplest scenario of $M=0$.\nWhen $M=0$ our matrix ensemble~\\eqref{W1} reduces to the GUE, hence the kernel given by Theorem~\\ref{thm:hard} must reduce to the sine kernel for $M=0$. To see this, we use \n\\begin{equation}\n\\MeijerG{1}{0}{0}{2}{-}{0,\\frac12}{\\frac{x^2}{4}}=\\frac{\\cos x}{\\sqrt\\pi}\n\\qquad\\text{and}\\qquad\n\\MeijerG{1}{0}{0}{2}{-}{\\frac12,0}{\\frac{x^2}{4}}=\\frac{\\sin |x|}{\\sqrt\\pi}.\n\\end{equation}\nIt follows that\n\\begin{align}\nK^\\textup{even}(x,y)&=\\frac{1}{\\pi}\\int_0^1\\frac{du}{\\sqrt u}\\cos(2x\\sqrt u)\\cos(2y\\sqrt u)\n=\\frac1{\\pi}\\Big(\\frac{\\sin 2(x-y)}{2(x-y)}+\\frac{\\sin 2(x+y)}{2(x+y)}\\Big), \\\\\nK^\\textup{odd}(x,y)&=\\frac{1}{\\pi}\\int_0^1\\frac{du}{\\sqrt u}\\sin(2x\\sqrt u)\\,\\sin(2y\\sqrt u)\n=\\frac1{\\pi}\\Big(\\frac{\\sin 2(x-y)}{2(x-y)}-\\frac{\\sin 2(x+y)}{2(x+y)}\\Big),\n\\end{align}\nwhich upon insertion into~\\eqref{hard-limit} indeed reproduces the sine kernel.\n\nIn the end of this section, let us emphasize that there also exists a contour integral representation of the limiting kernel in Theorem~\\ref{thm:hard} which combines the odd and even into a single formula.\n\n\\begin{prop}\\label{prop:kernelrep2} With the same notation as in Theorem \\ref{thm:hard}, the limiting kernel at the origin can be rewritten as\n\\begin{align}\n K^\\textup{even}(x,y)+ K^\\textup{odd}(x,y)=2\\, \\mathcal{K}_{\\nu_{1},\\ldots,\\nu_{M}}(2x,2y), \\label{equivalence}\n 
\\end{align}\nwhere the kernel on the right-hand side is defined as\n\\begin{align}\n\\mathcal{K}_{\\nu_{1},\\ldots,\\nu_{M}}(x,y)&\n=\\int_{C_{R}} \\frac{dv}{2\\pi i}\n\\,\\MeijerG{1}{0}{0}{M+1}{-}{0,-\\nu_1, \\ldots,-\\nu_M}{-\\sgn(y)xv}\\MeijerG{M+1}{0}{0}{M+1}{-}{0, \\nu_1,\\ldots,\\nu_M}{|y|v},\n\\label{doubleG-kernel}\n\\end{align}\nwith $C_{R}$ denoting a path in the right-half plane from $-i$ to $i$.\n\\end{prop}\n\n\\begin{proof}\nUsing Euler's reflection formula and duplication formula for the gamma function, we see that\n\\begin{equation*}\nK^\\textup{even}(x,y)+ K^\\textup{odd}(x,y)=\\frac{1}{(2\\pi i)^2}\\int dt\\int ds\\, (2|x|)^{s}(2|y|)^{-t-1}\n\\frac{ g(s,t)}{s-t} \\frac{\\Gamma(t+1)}{\\Gamma(s+1)}\n \\prod_{m=1}^M\\frac{\\Gamma(\\nu_m+t+1)}{\\Gamma(\\nu_m+s+1)},\n\\end{equation*}\nwhere\n\\begin{equation}\ng(s,t)=\\frac{\\sin\\frac{\\pi}{2}t}{\\sin\\frac{\\pi}{2}s}+\\sgn(xy) \\frac{\\cos\\frac{\\pi}{2}t}{\\cos\\frac{\\pi}{2}s}.\n\\end{equation}\nIn order to proceed, we will consider the cases $xy<0$ and $xy>0$ separately.\nFor $xy<0$, it is seen that\n\\begin{equation}\ng(s,t)= \\frac{2}{\\sin\\pi s}\\sin\\frac{\\pi}{2}(t-s)=-\\frac{2}{\\pi} \\Gamma(-s)\\Gamma(1+s) \\, \\sin\\frac{\\pi}{2}(t-s).\n\\end{equation}\nNow~\\eqref{equivalence} can be obtained using the integral representation\n\\begin{equation}\n \\frac{1}{\\pi i} \\int_{C_{R}}dv \\,v^{s-t-1}= \\frac{1}{t-s}\\sin\\frac{\\pi}{2}(t-s),\n\\end{equation}\nwith the contour $C_R$ as above,\ntogether with the definition of Meijer $G$-function. For $xy>0$, we note that\n\\begin{equation}\ne^{i\\pi s}g(s,t)=\n\\left(\\frac{\\sin\\frac{\\pi}{2}t}{\\sin\\frac{\\pi}{2}s}-\\frac{\\cos\\frac{\\pi}{2}t}{\\cos\\frac{\\pi}{2}s} \\right)+2e^{i\\frac{\\pi}{2}(t+s)}.\n\\end{equation}\nThe $s$-variable integrand in the second part has no pole within the contour $\\Sigma$. Thus, the problem reduces to the proven situation.\n\\end{proof}\n\nThe simplest non-trivial case is $M=1$. 
Here, we get\n \begin{equation}\n\mathcal{K}_{\nu}(x,y)\n=\left(\frac{y}{x}\right)^{\nu\/2} \frac{1}{\pi i} \int_{C_{R}}dv\n\, I_{\nu}\big(2\sqrt{\sgn(y)xv}\big)\, K_{\nu}\big(2\sqrt{|y|v}\big), \label{doubleM1-kernel}\n\end{equation}\nwith the modified Bessel functions $I_{\nu}$ and $K_{\nu}$, which follows immediately from the fact that\n \begin{align}\n\MeijerG{1}{0}{0}{2}{-}{0,-\nu}{-z}=z^{-\nu\/2} I_{\nu}(2\sqrt{z}), \qquad \MeijerG{2}{0}{0}{2}{-}{\nu,0}{z}=2 z^{\nu\/2} K_{\nu}(2\sqrt{z}).\n\end{align}\n\n\n\n\\section{Global spectra in product and Muttalib--Borodin ensembles}\n\label{sec:global}\n\nThe study of the scaling limit at the origin in the previous section introduces a scale in which the average spacing between eigenvalues is of order unity. A very different, but still well-defined, limiting process is the so-called global scaling regime. In this regime the average spacing between eigenvalues tends to zero in such a way that the spectral density tends to a quantity $\rho(x)$ with compact support $I\subset\mathbb R$ and $\int_I \rho(x)dx=1$. Here $\rho(x)$ is referred to as the global density.\nThroughout this section, the indices $\nu_1,\ldots,\nu_M$ are kept fixed.\n\nFor the Laguerre Muttalib--Borodin ensemble specified by the density~\eqref{MB-laguerre} the global scaling limit corresponds to a change of variables $x_j\mapsto nx_j$. Introducing the further change of variables $x_j\mapsto Mx_j^M$, the global density is known to be the so-called Fuss--Catalan density with parameter $M$ \cite{FW15}. It can be specified by the moment sequence\n\begin{equation}\n\text{FC}_M(k)=\frac1{Mk+1}\binom{(M+1)k}k,\qquad k=0,1,\ldots\,.\n\end{equation}\nThese are the Fuss--Catalan numbers (the Catalan numbers are the case $M=1$).\n\nNow, consider the product of $M$ standard complex Gaussian random matrices. 
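As a quick numerical aside (a Python sketch; the quadrature substitution $x=4\sin^2 t$ is ours), the moment sequence above can be sanity-checked: for $M=1$ the Fuss--Catalan numbers reduce to the Catalan numbers, which are the moments of the Mar\v cenko--Pastur density encountered below.

```python
import math

def fuss_catalan(M, k):
    """Fuss-Catalan number FC_M(k) = binom((M+1)k, k) / (Mk + 1)."""
    return math.comb((M + 1) * k, k) // (M * k + 1)

# M = 1 gives the Catalan numbers.
assert [fuss_catalan(1, k) for k in range(6)] == [1, 1, 2, 5, 14, 42]

# Moments of the Marchenko-Pastur density rho(x) = sqrt((4-x)/x)/(2*pi) on (0,4):
# substituting x = 4*sin(t)**2 turns the k-th moment into
# (4/pi) * int_0^{pi/2} (4*sin(t)**2)**k * cos(t)**2 dt (midpoint rule below).
N = 100_000
h = (math.pi / 2) / N
for k in range(4):
    mom = sum((4 * math.sin((j + 0.5) * h) ** 2) ** k
              * (4 / math.pi) * math.cos((j + 0.5) * h) ** 2 * h
              for j in range(N))
    assert abs(mom - fuss_catalan(1, k)) < 1e-6
```

The integer division in `fuss_catalan` is exact, since the Fuss--Catalan numbers are integers.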
Consistent with the discussion in Section~\ref{sec:motivation}, the corresponding global density is again the Fuss--Catalan density with parameter $M$~\cite{Mu02,AGT10,BBCC11,NS06}.\n\nIt is known that the Fuss--Catalan density, $\rho^{(M)}_\text{FC}(x)$ say, can also be characterised as the minimiser of the energy functional\n\begin{equation}\label{energy-laguerre}\nE[\rho]=M\int_0^Ldx\,\rho(x)x^{\frac1M}-\frac{1}{2}\int_0^Ldx\int_0^Ldy\,\rho(x)\rho(y)\log\big(|x-y||x^{\frac1M}-y^{\frac1M}|\big)\n\end{equation}\nwith $L=(M+1)^{M+1}\/M^M$; see~\cite{CR14,FL15,FLZ15}. Note that the energy functional~\eqref{energy-laguerre} relates to~\eqref{MB-laguerre} through the aforementioned change of variables. Similarly, the energy functional corresponding to~\eqref{MB-Hermite} is\n\begin{align}\n\tilde E[\tilde\rho]&=\theta\int_{-\tilde L}^{\tilde L}dx\,\tilde\rho(x)|x|^{\frac2\theta}\n-\frac{1}{2}\int_{-\tilde L}^{\tilde L}dx\int_{-\tilde L}^{\tilde L}dy\,\n\tilde\rho(x)\tilde\rho(y)\log\big(|x-y||\sgn x|x|^{\frac1\theta}-\sgn y|y|^{\frac1\theta}|\big) \nonumber \\\n&=2\theta\int_{0}^{\tilde L}dx\,\tilde\rho(x)x^{\frac2\theta}-\int_{0}^{\tilde L}dx\int_{0}^{\tilde L}dy\,\n\tilde\rho(x)\tilde\rho(y)\log\big(|x^2-y^2||(x^2)^{\frac1\theta}-(y^2)^{\frac1\theta}|\big)\n\label{energy-hermite}\n\end{align}\nwith $\theta=2M+1$.\nWe note that changing variables $x^2\mapsto x$ and $y^2\mapsto y$, then setting $\tilde\rho(x)=x\rho(x^2)$ reduces~\eqref{energy-hermite} to~\eqref{energy-laguerre} with $L=\tilde L^2$. Thus, the minimiser of~\eqref{energy-hermite} is given in terms of the Fuss--Catalan density\n\begin{equation}\label{double-sided-FC}\n\tilde\rho(x)=|x|\rho_\text{FC}^{(M)}(x^2)\n\end{equation}\nand is symmetric about the origin.\n\nAs an illustration, let us consider the simplest case, $M=1$. 
The Fuss--Catalan density becomes the celebrated Mar\v cenko--Pastur density,\n\begin{equation}\n\rho_\text{FC}^{(M=1)}(x)=\frac{1}{2\pi}\sqrt{\frac{4-x}{x}},\qquad 0<x<4.\n\end{equation}\n\nThe Drude and BCD terms scale as $O(\tau^{2})$ and $O(\tau)$, respectively, with the relaxation time $\tau>0$ and the BCD defined as\n\t\t\begin{equation}\n\t\t\mathcal{D}^{\,\mu\nu} = \int \frac{d\bm{k}}{\left( 2\pi \right)^d} \sum_{a} f(\epsilon_{\bm{k}a}) \partial_\mu \Omega^\nu_a.\n\t\t\end{equation}\nThese two terms are finite only in the metal state and divergent in the clean limit since both are Fermi surface terms. The remaining term $\sigma^{\mu;\nu \lambda}_\text{int}$ comes from interband transitions, and it is not divergent in the clean limit~\cite{Gao2014}. This term, therefore, gives a negligible contribution to the NLC in good metals.\n\nIn the group-theoretical classification of quantum phases, the parity-violating phases are classified into odd-parity electric\/magnetic multipole phases where \T{}\/\PT{}-symmetry is preserved~\cite{Watanabe2018grouptheoretical,Hayami2018Classification}. It is known that these preserved symmetries impose strong constraints on the response functions~\cite{hikaruwatanabe2017,Zelezny2017} in addition to equilibrium properties of the systems~\cite{cracknell2016magnetism}. Thus, the symmetry analysis enables us to classify the NLC allowed in either \T{}-symmetric or \PT{}-symmetric systems based on the relaxation time dependence. The result is shown in Table~\ref{Table_relaxation_time_dependence_2nd_conductivity}.\n\nIn \T{}-symmetric systems, all the terms in the NLC scale as odd powers $O(\tau^{2n+1})$. Recalling the linear response theory, the scattering rate $\gamma$ can be replaced by the adiabaticity parameter whose sign represents irreversibility due to external fields~\cite{Kubo1957}. Thus, the NLC should be accompanied by a dissipative response. This is consistent with previous theories~\cite{Morimoto2018NonreciprocalElectronCorrelation,Hamamoto2019RachetScaling}. 
In contrast to the familiar linear conductivity, the Drude term is prohibited because it is even-order with respect to $\tau$. For the transverse NLC, the leading-order term is the BCD term.\n\n On the other hand, the \T{}-symmetry is broken by the magnetic order in the parity-violating \PT{}-symmetric systems which we focus on. Therefore, the relaxation time dependence is even-order $O(\tau^{2n})$, and intrinsic contributions $O(\tau^{0})$ are allowed. The leading order term is the Drude term $O(\tau^{2})$. We will show that the Drude term is a measure of the hidden ASOC characteristic of locally-noncentrosymmetric systems. The BCD term is prohibited, consistent with the fact that the Berry curvature itself disappears due to the \PT{}-symmetry. Although the effect of the \T{}-symmetry breaking in acentric systems has been discussed in previous works~\cite{Gao2014,Sodemann2015,Nandy2019,Du2019}, our classification has clarified the contrasting role of \T{} and \PT{}-symmetries in NLC. Below we will see that the \PT{}-symmetry gives a clear insight into the NLC.\n\nIn our classification, extrinsic contributions such as the side jump and skew scattering are not taken into account~\cite{Nandy2019,Du2019,CXiao2019ModifiedSemiclassics,Du2020quantumTheoryofNHE}. We, however, note that the extrinsic contributions may be similarly classified by the symmetries. Indeed, for nonmagnetic impurities with $\delta$-function potential, we show that while extrinsic terms are allowed in the \T{}-symmetric systems~\cite{Du2019}, they are strongly suppressed by the \PT{}-symmetry (see Appendix~\ref{App_Sec_extrinsic}). This suppression is in sharp contrast to the fact that the impurities play an important role in the NLC in \T{}-symmetric materials such as WTe$_2$~\cite{Kang2019}. 
When we focus on the \\PT{}-symmetric magnetic systems, the classification in Table~\\ref{Table_relaxation_time_dependence_2nd_conductivity} is meaningful beyond the relaxation time approximation for impurity scattering.\n \n\t\t\\begin{table}[htbp]\n\t\t\\caption{Relaxation time dependence of the second-order NLC in \\T{}\/\\PT{}-symmetric systems. `N\/A' denotes that the component is forbidden by symmetry.}\n\t\t\\label{Table_relaxation_time_dependence_2nd_conductivity}\n\t\t\\centering\n\t\t$\n\t\t\t\t\\begin{array}{c|ccc}\n\t\t\t\t&\\sigma_\\text{D}\t&\\sigma_\\text{BCD}\t&\\sigma_\\text{int}\t\\\\ \\hline\n\t\t\t\t\\text{\\T{}}\t&\\text{N\/A}&O(\\tau)&O(\\tau^{-1})\\\\\n\t\t\t\t\\text{\\PT{}}&O(\\tau^2) &\\text{N\/A} &O(\\tau^{0})\n\t\t\t\t\\end{array}\n\t\t$\n\t\t\\end{table}\n\n\nAll the terms in NLC are allowed in the absence of both \\T{} and \\PT{}-symmetry. For instance, the Drude term becomes finite when we apply magnetic fields to originally \\T{}-symmetric systems~\\cite{Rikken2001magnetochiral_anisotropy,Rikken2005magnetoelectric_anisotropy,Tokura2018nonreciprocal_review,Ideue2017}, that is described as `magnetic Drude' in Table~\\ref{Table_NLC_MagneticField}. Similarly, we expect a magnetic-field-induced NLC in originally \\PT{}-symmetric systems; the BCD term indeed arises from the \\PT{}-symmetry breaking (called `magnetic BCD' in Table~\\ref{Table_NLC_MagneticField}). This term is clarified in this work below. In the following, we consider the \\PT{}-preserving antiferromagnetic metal with or without the magnetic field, and discuss the Drude and BCD terms which are dominant in clean metals.\n\n\\section{NLC in odd-parity magnetic multipole systems}\\label{Sec_NLC_odd-parity_magnetic_multipole}\n\nWe introduce a minimal model of \\bma{} which undergoes odd-parity magnetic multipole order~\\cite{hikaruwatanabe2017}. Many magnetic compounds in the list of Ref.~\\cite{Watanabe2018grouptheoretical} belong to the same class. 
The Hamiltonian reads\n\t\t\\begin{equation}\n\t\t\tH(\\bm{k}) =\n\t\t\t\t\t\\epsilon(\\bm{k}) \\, \\tau_0+ \\bm{g} \\left( \\bm{k} \\right) \\cdot \\bm{\\sigma} \\, \\tau_z +\\bm{h} \\cdot \\bm{\\sigma} \\, \\tau_0 + V_{\\rm AB} (\\bm{k}) \\, \\tau_x, \\label{BMA_model_Hamiltonian}\n\t\t\\end{equation}\nwhere $\\bm{\\sigma}$ and $\\bm{\\tau}$ are Pauli matrices representing the spin and sublattice degrees of freedom, respectively. In addition to the intra-sublattice and inter-sublattice hoppings, $\\epsilon (\\bm{k})$ and $V_\\text{AB} (\\bm{k})$, we introduce the staggered $g$-vector $\\bm{g} (\\bm{k})= \\bm{g}_0 (\\bm{k}) + \\bm{h}_\\text{AF}$ consisting of the sublattice-dependent ASOC $\\bm{g}_0 (\\bm{k})$~\\cite{Yanase2014zigzag,Zelezny2014NeelorbitTorque,Hayami2014h} and the molecular field $\\bm{h}_\\text{AF}= h_\\text{AF} \\hat{z}$ due to antiferromagnetic order in \\bma{} ~\\cite{Singh2009BaMn2As2_1,Singh2009BaMn2As2_2,Ramsal2013BMA_MagneticStructure}. The detailed material property of \\bma{} and expressions of $\\epsilon (\\bm{k})$, $V_\\text{AB} (\\bm{k})$, and $\\bm{g}_0 (\\bm{k})$ are available in Appendix~\\ref{App_Sec_model_hamiltonian}. We also consider an external magnetic field $\\bm{h}$ to discuss field-induced NLC. \n\n\\subsection{Field-free nonlinear Hall effect}\\label{Sec_nonlinear_Hall_no_field}\n\nFirst, we show the NLC at zero magnetic field ($\\bm{h} =\\bm{0}$). Then, the NLC is mainly given by the Drude term (see Table~\\ref{Table_relaxation_time_dependence_2nd_conductivity}), and it is determined by the anti-symmetric and anharmonic property of the energy dispersion [see Eq.~\\eqref{drude}]. Such dispersion is known to be a pronounced property of the odd-parity magnetic multipole systems~\\cite{Yanase2014zigzag,Hayami2014h,Sumita2016,hikaruwatanabe2017,Watanabe2018grouptheoretical}. In the case of \\bma{}, the anti-symmetric component was identified to be a cubic term $k_xk_yk_z$~\\cite{hikaruwatanabe2017}. 
Indeed, the energy spectrum of the model Eq.~\eqref{BMA_model_Hamiltonian} is obtained as\n\t\t\begin{equation}\n\t\tE^\pm_{\bm{k}}= \epsilon (\bm{k}) \pm \sqrt{V_{\rm AB}(\bm{k})^2 + \bm{g}(\bm{k})^2 }. \label{energyspectrum_no_magnetic_field}\n\t\t\end{equation}\nThe anti-symmetric distortion in the band structure arises from the coupling term $\bm{g}_0 (\bm{k}) \cdot \bm{h}_\text{AF}$ which is approximated by $\sim k_xk_yk_z$ near the time-reversal-invariant momenta. Thus, $\sigma^{z;xy}$ and its cyclic permutations of the NLC tensor are allowed. This indicates the nonlinear Hall effect, namely, the second-order electric current $J^z$ generated from the electric field $\bm{E} \parallel [110]$. For the strong antiferromagnet, $|\bm{h}_\text{AF}| \gg |\epsilon (\bm{k})|$, $|V_\text{AB} (\bm{k})|$, $|\bm{g}_0 (\bm{k})|$, the Drude component is analytically obtained as \n \t\t\begin{equation}\n\t\t\sigma^{z;xy}_\text{D} =\sigma^{x;yz}_\text{D} =\sigma^{y;zx}_\text{D} = \frac{e^3\alpha_{\parallel}n }{4\gamma^2} \,\text{sgn\,} (h_\text{AF}), \label{Drude_no_external_field}\n\t\t\end{equation}\nin the lightly-hole-doped region. Here $n$ denotes the carrier density of holes and $\alpha_{\parallel}$ represents the strength of ASOC parallel to the staggered magnetization $\bm{h}_\text{AF}$. It is noteworthy that Eq.~\eqref{Drude_no_external_field} does not depend on the antiferromagnetic molecular field and therefore it is useful to evaluate the \textit{sublattice-dependent} ASOC. Thus, the NLC provides a way to experimentally deduce the sublattice-dependent ASOC, although it was called \"hidden spin polarization\"~\cite{Zhang2014HiddenSpin,Gotlieb2018HiddenSpinInCuprate} because it is hard to measure. The equivalence $\sigma^{z;xy}_\text{D} =\sigma^{x;yz}_\text{D} =\sigma^{y;zx}_\text{D}$ holds independently of parameters and can be tested by experiments. 
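The doubly degenerate spectrum \eqref{energyspectrum_no_magnetic_field} is easily verified numerically. A minimal Python sketch (the numbers standing in for $\epsilon(\bm{k})$, $V_{\rm AB}(\bm{k})$ and $\bm{g}(\bm{k})$ at a single momentum point are placeholders, not actual \bma{} parameters):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_hamiltonian(eps, g, V, h):
    """4x4 matrix eps*tau0 + (g.sigma)*tauz + (h.sigma)*tau0 + V*taux,
    with tau (sublattice) the left and sigma (spin) the right Kron factor."""
    g_dot_s = g[0] * sx + g[1] * sy + g[2] * sz
    h_dot_s = h[0] * sx + h[1] * sy + h[2] * sz
    return (eps * np.kron(s0, s0) + np.kron(sz, g_dot_s)
            + np.kron(s0, h_dot_s) + V * np.kron(sx, s0))

# Placeholder values at one momentum; g includes the molecular field along z.
eps, V = 0.3, 0.8
g = np.array([0.2, -0.5, 1.1])
bands = np.linalg.eigvalsh(bloch_hamiltonian(eps, g, V, h=np.zeros(3)))
root = np.sqrt(V**2 + g @ g)
# Two Kramers-degenerate bands at eps -/+ sqrt(V_AB^2 + g^2).
assert np.allclose(bands, [eps - root, eps - root, eps + root, eps + root])
```

The two-fold degeneracy at every momentum reflects the Kramers doublets guaranteed by the \PT{}-symmetry.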
Numerical calculations of Eq.~\\eqref{drude} are consistent with the above-mentioned symmetry argument and analytic formula as shown in Appendix~\\ref{App_Sec_NLC_no_magnetic_field}. A typical value of the nonlinear Hall response is obtained as $\\sigma^{z;xy}_\\text{D}\/[(\\sigma^{xx})^2 \\sigma^{zz}] \\sim \\mr{10^{-17}}{[A^{-2} \\cdot V \\cdot m^3]}$ and it is much larger than the experimental value of bilayer WTe$_2$, $\\sigma^{y;xx}\/(\\sigma^{xx})^3 \\sim \\mr{10^{-19}}{[A^{-2} \\cdot V \\cdot m^3]}$~\\cite{Ma2019BCD_experiment_WTe2}. Because the Drude term is more divergent with respect to $\\tau$ than the BCD term, we may see a giant nonlinear Hall response in the \\PT{}-symmetric antiferromagnet.\n\nThe NLC is a useful quantity not only to evaluate the sublattice-dependent ASOC but also to detect domain states in antiferromagnetic metals~\\cite{Watanabe2018grouptheoretical}. Indeed, the sign of the NLC depends on the antiferromagnetic domain and hence it may promote developments in the antiferromagnetic spintronics~\\cite{Jungwirth2016,Manchon2019spin-orbit-torque_review}. In fact, the read-out of antiferromagnetic domains has been successfully demonstrated by making use of the NLC~\\cite{Godinho2018AFM_reading}. For \\bma{} and related materials listed in Ref.~\\cite{Watanabe2018grouptheoretical}, the nonlinear Hall effect can be used to identify antiferromagnetic domain states. So far we considered intrinsic contributions. We have shown that the extrinsic contributions from impurity scattering are suppressed due to the preserved \\PT{}-symmetry, and therefore, they are not relevant to the above discussions.\n\n\\subsection{Nematicity-assisted dichroism}\\label{Sec_nematicity_assisted}\n\nIn the absence of the external field, \\bma{}-type magnetic materials do not show the longitudinal NLC along the high symmetry axes, namely, $\\sigma^{\\mu;\\mu\\mu}=0$. Below, we show that the longitudinal NLC can be induced by magnetic fields. 
Since the BCD term contributes only to the transverse response, we need only consider the Drude term. Generally speaking, to obtain a finite longitudinal electronic dichroism, the system is required to possess an anti-symmetric dispersion such as $k_\mu^3$ or a higher-order term. According to the group-theoretical classification, the `polarization' in momentum space, denoted by $k_\mu$, may share the same symmetry as $k_\mu^3$~\cite{hikaruwatanabe2017,Watanabe2018grouptheoretical}. Thus, the momentum-space polarization is a key to realizing the longitudinal dichroism.\n\nIn \bma{} and related materials, the momentum-space polarization can be induced by the nematicity. We can understand this by the discussion of the magnetopiezoelectric effect~\cite{Varjas2016,hikaruwatanabe2017,Shiomi2019EuMnBi2_MPE,Shiomi2019CaMn2Bi2_MPE,Shiomi2020}. The magnetopiezoelectric effect means that the planar (electronic) nematicity is induced by the out-of-plane electric current. That is written as\n\t\t\begin{equation}\n\t\t\varepsilon^{xy} = e^{xy;z} J^z,\n\t\t\end{equation}\nwhere $\varepsilon^{\mu\nu}$ represents the strain tensor. It was experimentally discovered in EuMnBi$_2$~\cite{Shiomi2019EuMnBi2_MPE,Shiomi2020} and CaMn$_2$Bi$_2$~\cite{Shiomi2019CaMn2Bi2_MPE} in accordance with the theoretical prediction. The response is derived from the anti-symmetrically distorted Fermi surface and hence realizable in the odd-parity magnetic multipole systems. Similar to the conventional piezoelectric effect, we may expect an inverse effect. Given the in-plane nematic order or strain, the system should obtain the momentum-space polarization $P^{\,k_z}$ whose symmetry is the same as the electric current $J^z$,\n\t\t\begin{equation}\n\t\tP^{\,k_z} = \tilde{e}^{z;xy} \varepsilon^{xy}.\n\t\t\end{equation}\n Accordingly, the longitudinal dichroism $\sigma^{z;zz}$ is allowed. 
Thus, a nematicity-assisted dichroism, which is unique to the odd-parity magnetic multipole systems, is implied. \n\nThe nematicity can be induced by the magnetic field through the spin-orbit coupling. In the model for \\bma{}, the sublattice-dependent ASOC plays an essential role. For $\\bm{h} \\ne 0$, the energy spectrum of the lower bands $E_{\\bm{k}}^-$ in Eq.~\\eqref{energyspectrum_no_magnetic_field} is modified as\n\t\t\\begin{equation}\n\t\tE_{\\bm{k}}^- = \\epsilon (\\bm{k}) - \\sqrt{V_{\\rm AB}(\\bm{k})^2 + \\bm{g}(\\bm{k})^2+ \\bm{h}^2 \\pm 2 |\\lambda| }, \\label{energyspectrum_with_magnetic_field}\n\t\t\\end{equation}\nwhere $\\lambda^2 = V_{\\rm AB}(\\bm{k})^2\\,\\bm{h}^2+ \\left[ \\bm{g} (\\bm{k})\\cdot \\bm{h}\\right]^2 $. The magnetic field not only lifts the Kramers degeneracy but also causes the nematicity through the coupling $\\left[ \\bm{g}_0 (\\bm{k}) \\cdot \\bm{h}\\right]^2$ in $\\lambda$, although the linear terms in $\\bm{h}$ are canceled out between the sublattices, in sharp contrast to the acentric systems studied before~\\cite{Ideue2017}. For \\bma{} with the Dresselhaus-type staggered ASOC~\\cite{Manchon2019spin-orbit-torque_review}, the nematicity denoted by $\\varepsilon^{xy}$ is maximally induced by a magnetic field $\\bm{h}$ parallel to $[110]$ or $[1\\bar{1}0]$.\n\nFrom the above discussions, we expect the nematicity-assisted dichroism in \\bma{} under the magnetic field $\\bm{h} \\parallel [110]$. In the numerically calculated NLC $\\sigma_\\text{D}^{z;zz}$ obtained by rotating the magnetic field in the azimuthal plane, the dichroism with {\\it two-fold field-angle dependence} is clearly seen (Fig.~\\ref{Fig_drude_nematicity_assisted_azimuth_dependence}). In this case the magnetic field acts as a \\textit{bipolar field} rather than a vector field, in sharp contrast to the magnetic Drude term for which the observed field-angle dependence is one-fold~\\cite{Rikken2001magnetochiral_anisotropy,Rikken2005magnetoelectric_anisotropy,Ideue2017}. 
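The band-energy formula above can be verified numerically. The sketch below is our own illustration: it assumes a standard two-sublattice Bloch Hamiltonian of the form $H(\bm{k}) = \epsilon + V_{\rm AB}\,\tau_x + \bm{g}(\bm{k})\cdot\bm{\sigma}\,\tau_z + \bm{h}\cdot\bm{\sigma}\,\tau_0$ (this specific form is an assumption, not taken verbatim from the text) and checks that its eigenvalues reproduce $\epsilon \pm \sqrt{V_{\rm AB}^2 + \bm{g}^2 + \bm{h}^2 \pm 2|\lambda|}$ with $\lambda^2 = V_{\rm AB}^2\,\bm{h}^2 + (\bm{g}\cdot\bm{h})^2$.

```python
import numpy as np

# Pauli matrices (used for both spin sigma and sublattice tau).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
s0 = np.eye(2, dtype=complex)

# Hypothetical parameter values at a single momentum k.
eps, V = 0.3, 0.7
g = np.array([0.4, -0.2, 0.1])     # staggered ASOC vector g(k)
h = np.array([0.05, 0.05, 0.0])    # Zeeman field

gs = g[0] * sx + g[1] * sy + g[2] * sz     # g . sigma
hs = h[0] * sx + h[1] * sy + h[2] * sz     # h . sigma
H = (eps * np.kron(s0, s0)
     + V * np.kron(sx, s0)                 # V_AB tau_x
     + np.kron(sz, gs)                     # tau_z (g . sigma)
     + np.kron(s0, hs))                    # tau_0 (h . sigma)

lam = np.sqrt(V**2 * (h @ h) + (g @ h)**2)
s = V**2 + g @ g + h @ h
predicted = np.sort([eps - np.sqrt(s + 2*lam), eps - np.sqrt(s - 2*lam),
                     eps + np.sqrt(s - 2*lam), eps + np.sqrt(s + 2*lam)])
assert np.allclose(np.linalg.eigvalsh(H), predicted)
print("spectrum matches the closed-form expression")
```

The $\pm 2|\lambda|$ splitting inside the square root is exactly the lifting of the Kramers degeneracy by the field discussed above.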
Although the field-induced NLC is tiny, as evaluated in Appendix~\\ref{App_Sec_nematic_assisted_NLC}, it was actually detected in a recent experiment on \\bma{}~\\cite{KimataPrivate}.\n\n\t\t\\begin{figure}[htbp]\n\t\t\\centering \n\t\t\\includegraphics[width=75mm,clip]{nonlinear_response_chemipot_-500_mev_allindex_hargEle_90_azimuthsweep_BaMnAs_unit_135.pdf}\n\t\t\\caption{Drude term of the longitudinal NLC $\\sigma^{z;zz}_\\text{D}$ as a function of the azimuthal angle of the external magnetic field $\\bm{h}=h(\\cos \\phi, \\sin \\phi, 0)$. The strength of the magnetic field $h=0.01$, temperature $T=0.01$, chemical potential $\\mu = -0.5$, relaxation time $\\gamma^{-1}=10^3$, and Brillouin zone mesh $N = 135^3$ are adopted. The other parameters and the adopted energy scale are described in Appendix~\\ref{App_Sec_calc_NLC_Mn_magnet}.}\n\t\t\\label{Fig_drude_nematicity_assisted_azimuth_dependence} \n\t\t\\end{figure}\n\n\n\\subsection{Magnetic ASOC and Berry curvature dipole}\\label{Sec_magneticASOC_BCD}\n\nNow we consider the counterpart of the magnetic Drude term~\\cite{Ideue2017}, that is, the magnetic BCD term. The \\PT{}-symmetry ensures a Kramers doublet at each momentum $\\bm{k}$, and the Berry curvature is completely canceled in the odd-parity magnetic multipole systems. The doublet, however, should be split when the \\PT{}-symmetry is broken by an external magnetic field. As an example, let us consider a \\bma{}-type magnet under the magnetic field $\\bm{h}=h_z \\hat{z}$. Then, while the total Berry curvature $\\int d\\bm{k} \\,\\Omega^z$ is trivially induced, the BCD also emerges. Using the allowed symmetry operations, the induced BCD is identified as\n\t\t\\begin{equation}\n\t\t\\mathcal{D}^{\\,xy} = \\mathcal{D}^{\\,yx}. 
\\label{induced_BCD_in_BMA_with_zField}\n\t\t\\end{equation}\n\nBecause the BCD has the same symmetry as the ASOC~\\cite{Manchon2019spin-orbit-torque_review}, the emergence of one indicates the presence of the other.\nTherefore, the field-induced BCD can be understood by discussing the magnetically-induced ASOC in the following way. Although the sublattice-dependent ASOC is compensated at $\\bm{h}=0$, the combination of the staggered exchange splitting $\\bm{h}_\\text{AF}\\cdot \\bm{\\sigma}~\\tau_z$ and the uniform Zeeman field $\\bm{h} \\cdot \\bm{\\sigma}~\\tau_0$ leads to an imbalance between the sublattices without Brillouin zone folding (Fig.~\\ref{Fig_magnetic_asoc}). One of the sublattices obtains an increased carrier density, and consequently the sublattice-dependent ASOC is no longer compensated. The emergent ASOC has distinct properties compared to the conventional crystal ASOC since the former originates solely from magnetic effects. We therefore name this field-induced ASOC the `magnetic ASOC'. Interestingly, the magnetic ASOC is tunable by external magnetic fields. Thus, the concept of the magnetic ASOC may be useful to design spin-momentum locking in a more controllable way than the crystal ASOC, which is determined by the crystal structure~\\cite{magASOC}. In the model for \\bma{}, the magnetic ASOC and the BCD with the same symmetry as Eq.~\\eqref{induced_BCD_in_BMA_with_zField} are actually obtained. \n\n\t\t\\begin{figure}[htbp]\n\t\t\\centering \n\t\t\\includegraphics[width=75mm,clip]{magnetic_dresselhaus.pdf}\n\t\t\\caption{Mechanism of the magnetic ASOC and the field-induced BCD. The blue-colored arrows denote the spin polarization or Berry curvature at each $\\bm{k}$. (Left panel) A magnetic field along the $z$-axis splits the Fermi surface depending on the antiferromagnetic molecular field $\\bm{h}_\\text{AF}$. 
(Right panel) The split Fermi surface is viewed in the $xy$-plane, which indicates the Dresselhaus-type ASOC and BCD.}\n\t\t\\label{Fig_magnetic_asoc} \n\t\t\\end{figure}\n\nThe field-induced BCD allows a nonlinear Hall conductivity in accordance with Eq.~\\eqref{BCD_term}, which satisfies the relation \n\t\t\\begin{equation}\n\t\t\\sigma_\\text{BCD}^{z;xx} = -\\sigma_\\text{BCD}^{z;yy}= -2\\sigma_\\text{BCD}^{x;xz}= 2\\sigma_\\text{BCD}^{y;yz}.\n\t\t\\label{nonlinear_Hall_by_BCD}\n\t\t\\end{equation}\nFor example, we show the numerical result for $\\sigma_\\mathrm{BCD}^{z;xx}$ in Fig.~\\ref{Fig_BCD_with_z_field_elevation_dependence}, which reveals the dependence on the elevation angle of $\\bm{h}$. The induced BCD is inverted when the external field is flipped. Therefore, the field-angle dependence is one-fold, in contrast to the nematicity-assisted dichroism (Fig.~\\ref{Fig_drude_nematicity_assisted_azimuth_dependence}).\n\n\t\t\\begin{figure}[htbp]\n\t\t\\centering \n\t\t\\includegraphics[width=75mm,clip]{nonlinear_response_chemipot_-500_mev_allindex_elevatesweep_Azu_0_BaMnAs_unit_135.pdf}\n\t\t\\caption{BCD term of the nonlinear Hall conductivity $\\sigma^{z;xx}_\\text{BCD}$ as a function of the elevation angle of the external magnetic field $\\bm{h}=h(\\sin \\theta, 0, \\cos \\theta)$. Parameters and units are the same as in Fig.~\\ref{Fig_drude_nematicity_assisted_azimuth_dependence}.}\n\t\t\\label{Fig_BCD_with_z_field_elevation_dependence} \n\t\t\\end{figure}\n\nFinally, we comment on the linear Hall response. Because the system under an external magnetic field possesses neither the \\T{}- nor the \\PT{}-symmetry, a linear Hall response is also allowed. This is in contrast to the previously studied acentric systems~\\cite{Moore2010,Sodemann2015,Xu2018BCD_switchable,Ma2019BCD_experiment_WTe2}, where the linear Hall response is forbidden because of the \\T{}-symmetry. However, the nonlinear Hall response can be distinguished from the linear one by symmetry. 
For example, the NLC components $\\sigma_\\text{BCD}^{z;xx}$ and $\\sigma_\\text{BCD}^{z;yy}$ in Eq.~\\eqref{nonlinear_Hall_by_BCD} represent Hall responses for which the linear response is forbidden.\n\n\\section{Conclusion and Discussions}\n\nThis work presents a symmetry classification of the second-order NLC and explores the NLC of odd-parity magnetic multipole systems. The Drude term gives rise to a giant nonlinear Hall conductivity at zero magnetic field and provides an experimental tool to probe the sublattice-dependent ASOC. Thus, the hidden spin polarization in centrosymmetric crystals can be clarified. It also enables us to elucidate domain states in antiferromagnetic metals, and hence the NLC will be useful in the field of antiferromagnetic spintronics. Interestingly, the NLC induced by magnetic fields is significantly different from those studied in previous works. \nWe clarified the nematicity-assisted dichroism and the BCD-induced NLC due to the magnetic ASOC. \n\nIn accordance with our theoretical result, a recent experimental study actually detected the nematicity-assisted electric dichroism under a magnetic field~\\cite{KimataPrivate}. \nWe believe that further studies of nonlinear responses in parity-violated magnetic systems will be motivated by our work.\n\n\n\\textbf{Acknowledgments}---\n\nThe authors are grateful to A.~Shitade, A.~Daido, Y.~Michishita, M.~Kimata, and R.~Toshio for valuable comments and discussions. Especially, the authors thank M.~Kimata for providing experimental data and motivating this work. This work is supported by a Grant-in-Aid for Scientific Research on Innovative Areas ``J-Physics'' (Grant No.~JP15H05884) and ``Topological Materials Science'' (Grants No.~JP16H00991 and No.~JP18H04225) from the Japan Society for the Promotion of Science (JSPS), and by JSPS KAKENHI (Grants No.~JP15K05164, No.~JP15H05745, and No.~JP18H01178). H.W. 
is a JSPS research fellow and supported by JSPS KAKENHI (Grant No.~18J23115).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec::introduction}\n\nThis article develops a novel functional Bayesian network for modeling directed conditional independence and causal relationships of multivariate functional data, which arise in a wide range of applications. For example, learning brain effective connectivity networks from electroencephalogram (EEG) records is crucial for understanding brain activities and neuron responses. Another example is longitudinal medical studies where multiple clinical variables are recorded at possibly distinct time points across variables and\/or patients. Knowing causal dependence of these clinical variables may help physicians decide the right interventions. Functional data can also go beyond those defined on time domain e.g., spatial domain (environmental data, spatially-resolved genomics, etc).\n\nJoint analysis of multiple functional objects has attracted great attention in recent years with focuses mainly on reducing dimensionality and capturing functional dependence. For instance, \\cite{kowal2017bayesian} and \\cite{kowal2019integer} proposed to model time-ordered functional data through a time-varying parameterization for functional time series. Using basis transformation strategies, \\cite{zhang2016functional} built an autoregressive model for spatially correlated functional data, while \\cite{lee2018bayesian} modeled functional data in serial correlation semiparametrically. \\cite{chiou2014linear} developed a linear manifold model characterizing the functional dependence between multiple random processes.\n\n\\paragraph{Functional Graphical Models} In a similar but conceptually different manner, functional graphical models have been recently proposed to model conditional independence of multivariate functional data. 
Graphical models give rise to a compact probabilistic representation of high-dimensional data through the graph-encoded conditional independence constraints. One key challenge is that the graph is typically unknown and must be inferred from data. While graphical models have been extensively studied for vector- and matrix-variate data \\citep{yuan2007model, wang2009bayesian, leng2012sparse, ni2017sparse}, only recently have there been several developments for functional data. \\cite{zhu2016bayesian} extended Markov and hyper Markov laws of decomposable undirected graphs for random vectors to those for random functions. \\cite{qiao2019functional} adopted the group lasso penalty on the precision matrix of coefficients extracted from the basis expansion of functions. \\cite{zapata2022partial} introduced the idea of partial separability to reduce the computational cost of \\cite{qiao2019functional}. \\cite{qiao2020doubly} further extended \\cite{qiao2019functional} and proposed to characterize the time-varying conditional independence of random functions through smoothing techniques. To relax the Gaussian process assumption of the aforementioned methods, \\cite{li2018nonparametric}, \\cite{solea2022copula}, and \\cite{lee2022nonparametric} proposed models based on additive conditional independence and copula Gaussian models.\n\nDespite these exciting developments in functional undirected graphical models, the work on functional \\textit{directed} graphical models is sparse. Generally, undirected graphs admit a different set of conditional independence constraints from directed graphs. For example, the directed graph in Figure \\ref{ex1} implies $X_2 \\perp X_3$ but $X_2 \\not\\perp X_3 | X_1$, yet there exists no undirected counterpart that admits the same set of conditional (in)dependence assertions. 
More importantly, causal discovery (i.e., generation of plausible causal hypotheses) is only possible with directed graphs given additional causal assumptions \\citep{pearl2000causality}. To the best of our knowledge, the functional structural equation model recently proposed by \\cite{lee2022functional} is the only work that infers directional relationships from multivariate functional data. However, as will become evident in Section \\ref{sec::fbn} and \\ref{sec::inference}, our model differs from theirs in several significant aspects.\n\n\\paragraph{Causal Discovery} As hinted earlier, one of the two important problems we intend to address in this work is discovering causality from functional observations. Causal discovery is one of the first steps to investigate the physical mechanism that governs the operation and dynamics of an unknown system. Given the learned causal knowledge, subsequent causal inference (e.g., deriving the interventional and counterfactual distributions) can be conducted under the celebrated do-calculus framework \\citep{pearl2000causality}. Therefore, inferring causal relationships potentially has more significant scientific impacts than learning associations since it may help answer fundamental questions about the nature. Bayesian networks paired with causal assumptions are among the most popular approaches in identifying unknown causal structure represented by a directed acyclic graph (DAG). One pressing obstacle of using Bayesian networks to discover causality from purely observational data is that in general, only Markov equivalence classes (MEC) can be learned based on conditional independence constraints alone. Causal interpretations of members in the same MEC can be drastically different, and, generally, only bounds on causal effects can be calculated \\citep{maathuis2009estimating}. 
For example, the three DAGs in Figure \\ref{ex2} constitute an MEC with the only conditional independence $X_2 \\perp X_3 | X_1$, but the causal directions are completely reversed in the last graph compared to the first one.\n\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[h]{0.25\\textwidth}\n\\includegraphics[width=\\textwidth]{EX.pdf}\n\\caption{}\n\\label{ex1}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.72\\textwidth}\n\\includegraphics[width=\\textwidth]{MEC}\n\\caption{}\n\\label{ex2}\n\\end{subfigure}\n\\caption{Two Markov equivalence classes. (a) $X_2 \\perp X_3$. (b) $X_2 \\perp X_3 | X_1$.}\n\\end{figure}\n\nSince 2006, however, numerous researchers have found that causal discovery (unique causal structure identification) is indeed possible with additional distributional assumptions on the data generating process, at least for finite-dimensional data. Examples include but are not limited to linear non-Gaussian models (LiNGAM, \\citealt{shimizu2006linear}), non-linear additive noise models \\citep{hoyer2008nonlinear}, and linear Gaussian models with equal error variances \\citep{peters2014identifiability}. See more related methods in the recent book of \\cite{peters2017elements}. Although remarkable progress has been made in the causal discovery area for traditional finite-dimensional data, what remains lacking is a method capable of discovering causality from general, purely observational, multivariate functional data. We remark that given a known causal graph, there are existing approaches that can be used to infer causal effects. For example, \\cite{lindquist2012functional} developed a causal mediation analysis framework where the treatment and outcome are scalars and the mediator is a univariate random function. 
Our scope is substantially different from this line of work in that we do not assume the causal graph to be known; in fact, learning the causal graph structure is precisely the focus of this paper.\n\n\\paragraph{Proposed Functional Bayesian Networks} We propose a novel functional Bayesian network model for multivariate functional data for which the conditional independence and causal relationships are represented by a DAG. As one would expect, the proposed functional Bayesian network factorizes over the DAG and respects all directed Markov properties (i.e., conditional independence constraints) encoded in the DAG via the notion of d-separation. Then, for ease of exposition, we reformulate the proposed Bayesian network constructed in the functional space into an equivalent Bayesian network defined on the space of basis coefficients via basis expansion. Because in practice, functional data are almost always observed with noises, two essential ingredients are built into the proposed Bayesian networks to capture the functional dependence and to learn the causal structure. First, we capture the within-function dependence through a set of orthonormal basis functions chosen in a data-driven way. The resulting basis functions are interpretable and computationally efficient. Second, we encode the unknown causal structure by a structural equation model on the basis coefficients. Due to the equivalence of probability measures on the functional space and the space of basis coefficients, the conditional independence and causal relationships naturally transform back to the original random functions. To allow for unique DAG identification, we move away from the Gaussian process assumption often adopted by the existing functional graphical models and instead assume our random functions are generated from a discrete scale mixture of Gaussian distributions. 
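To build intuition for why non-Gaussianity permits causal identification, consider a scalar toy example in the spirit of LiNGAM (a minimal sketch of ours, not the model of this paper): with non-Gaussian noise, the regression residual is independent of the regressor only in the causal direction, so the two directions become distinguishable.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
# Hypothetical true model: X -> Y with uniform (non-Gaussian) exogenous noise.
x = rng.uniform(-1, 1, n)
y = 0.8 * x + rng.uniform(-1, 1, n)

def resid(a, b):
    # Residual of the least-squares regression of a on b (no intercept; both centered).
    return a - (np.dot(a, b) / np.dot(b, b)) * b

def dep(a, b):
    # Crude nonlinear-dependence proxy between the regressor b and the
    # residual of regressing a on b: correlation of their squares.
    r = resid(a, b)
    return abs(np.corrcoef(b**2, r**2)[0, 1])

# Causal direction (regress Y on X) leaves a residual independent of X;
# the anti-causal direction (regress X on Y) does not.
print(dep(y, x) < dep(x, y))   # True: X -> Y is preferred
```

Under Gaussian noise both directions yield independent residuals and this comparison carries no signal, which is exactly why the Gaussian case is unidentifiable.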
We theoretically prove and empirically verify that unique DAG identification is indeed possible even when the functions are observed with noises.\n\nTo conduct inference and uncertainty quantification from a finite amount of data, the proposed model is based on a Bayesian hierarchical formulation with carefully chosen prior distributions. Posterior inference is carried out through Markov chain Monte Carlo (MCMC). We perform simulation studies to demonstrate the capability of the proposed model in recovering causal structure and key parameters of interest. A real data analysis with brain EEG records illustrates the applicability of the proposed framework in the real world. We also apply the proposed model to a COVID-19 multivariate longitudinal dataset (shown in Section D of the Supplementary Material).\n\nThe rest of the paper is structured as follows. We provide an overview of Bayesian networks in Section \\ref{sec::overview}. The proposed functional Bayesian network is introduced in Section \\ref{sec::fbn}, which includes elaborations of the functional linear non-Gaussian model (Section \\ref{sec::FLiNG}) and the causal identifiability theory (Section \\ref{sec:ci}). Section \\ref{sec::inference} is devoted to Bayesian inference of the proposed model. We provide simulation studies and applications in Sections \\ref{sec::experiment} and \\ref{sec::eeg}, respectively. The main contributions of this paper are summarized in Section \\ref{sec::discussion} with some concluding remarks.\n \n\\section{Overview of Bayesian Networks} \\label{sec::overview}\n\nThroughout the paper, vectors and matrices are boldfaced whereas scalars and sets are not.\n\n\\paragraph{DAGs and Bayesian Networks} Let $\\bm{X} = (X_1, \\ldots, X_p)^T \\in \\mathcal{X}_1 \\times \\cdots \\times \\mathcal{X}_p$ denote a $p$-dimensional random vector. Denote $[m] := \\{1, \\ldots, m\\}$ for any integer $m \\geq 1$. Let $\\bm{X}_S = (X_j)_{j\\in S}$ be a subvector of $\\bm{X}$ with $S\\subseteq [p]$. 
A DAG $G = (V, E)$ consists of a set of nodes $V = [p]$ and a set of directed edges represented by a binary adjacency matrix $\\bm{E} = (E_{j\\ell})$ where $E_{j\\ell} = 1$ if and only if $\\ell\\rightarrow j$ for $\\ell \\neq j\\in V$. DAGs do not allow directed cycles $j_0 \\to j_1 \\to \\cdots \\to j_k = j_0$. Each node $j \\in V$ represents a random variable $X_j \\in \\mathcal{X}_j$; we may use $j$ and $X_j$ interchangeably when no ambiguity arises. Each directed edge $\\ell \\to j$ and the lack thereof represent conditional dependence and independence of $X_\\ell$ and $X_j$, respectively. Note that although $X_j$ is often a scalar, it does not need to be. In fact, $X_j$ is a random function or an infinite-dimensional random vector in this article. Denote $pa_G(j) = \\{\\ell\\in V: \\ell \\to j\\}$ the set of parents of $j$ in graph $G$. A Bayesian network (BN) $\\mathcal{B} = (G, P)$ on $\\bm{X}$ is a probability model where the joint probability distribution $P$ of $\\bm{X}$ factorizes with respect to $G$ in the following manner,\n\\begin{equation}\\label{eq:bnf}\nP(\\bm{X}) = \\prod_{j = 1}^p P_j(X_j | \\bm{X}_{pa_G(j)}),\n\\end{equation}\nwhere $P_j$ is the conditional distribution of $X_j$ given $\\bm{X}_{pa_G(j)}$ under $P$. Let $de_G(j) = \\{\\ell \\in V: j \\to \\cdots \\to \\ell\\}$ denote the descendants of $j$ in $G$ and let $nd_G(j) = V \\backslash de_G(j) \\backslash \\{j\\}$ denote the non-descendants of $j$. The BN factorization \\eqref{eq:bnf} directly implies the local directed Markov property -- any variable is conditionally independent of its non-descendants given its parents, $X_j \\perp \\bm{X}_{nd_G(j)\/pa_G(j)} | \\bm{X}_{pa_G(j)}, \\forall j \\in [p]$. In fact, the reverse is also true: if a distribution $P$ respects the local Markov property according to a DAG $G$, then $P$ must factorize over $G$ as in \\eqref{eq:bnf}. In summary, BN factorization and local Markov property are equivalent. 
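This equivalence can be illustrated on a small discrete BN. The sketch below is a toy numerical check of ours, with hypothetical conditional probability tables, for the DAG $X_1 \to X_2$, $X_1 \to X_3$ from Figure \ref{ex1}: the factorized joint satisfies $X_2 \perp X_3 | X_1$ while $X_2$ and $X_3$ remain marginally dependent.

```python
import numpy as np

# Toy BN on binary variables with DAG X1 -> X2, X1 -> X3, so the joint
# factorizes as P(x1, x2, x3) = P(x1) P(x2 | x1) P(x3 | x1).
p_x1 = np.array([0.6, 0.4])                         # P(X1)
p_x2_given_x1 = np.array([[0.7, 0.3], [0.2, 0.8]])  # rows indexed by x1
p_x3_given_x1 = np.array([[0.9, 0.1], [0.5, 0.5]])

joint = np.einsum('a,ab,ac->abc', p_x1, p_x2_given_x1, p_x3_given_x1)
assert np.isclose(joint.sum(), 1.0)

# Local Markov property X2 _||_ X3 | X1:
# P(x2, x3 | x1) = P(x2 | x1) P(x3 | x1) for every value of x1.
for a in range(2):
    cond = joint[a] / joint[a].sum()
    assert np.allclose(cond, np.outer(p_x2_given_x1[a], p_x3_given_x1[a]))

# Marginally, X2 and X3 are dependent (they share the common cause X1).
p23 = joint.sum(axis=0)
print(np.allclose(p23, np.outer(p23.sum(axis=1), p23.sum(axis=0))))  # False
```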
We may omit the subscript $G$ of $pa_G(j)$ and $nd_G(j)$ and simply write $pa(j)$ and $nd(j)$ instead when $G$ is clear from the context.\n\n\\paragraph{Causal DAGs and Causal Bayesian Networks} A causal DAG $G$ is a DAG except that the directed edges are now interpreted causally, i.e., we say $X_\\ell$ is a direct cause (with respect to $V$) of $X_j$ and $X_j$ is a direct effect of $X_\\ell$ if $\\ell \\to j$. For simplicity, we will overload $nd(j)$ and $pa(j)$ to denote the noneffects and direct causes of $j$ in a causal DAG. To define a causal BN, we begin by asserting the local causal Markov assumption \\citep{spirtes2000causation,pearl2000causality} -- given a causal DAG $G$, a variable is conditionally independent of its noneffects given its direct causes. By noting the correspondence between noneffects and non-descendants, and between direct causes and parents in DAGs and causal DAGs, the local causal Markov assumption simply states that the distribution $P$ of $\\bm{X}$ respects the local Markov property of the causal DAG $G$, which in turn implies that $P$ must also factorize over $G$ (recall the equivalence between BN factorization and local Markov property). Therefore, a causal BN $\\mathcal{B} = (G, P)$ is a probability model where $P$ factorizes with respect to a causal DAG $G$ in the same way as in \\eqref{eq:bnf}.\n\n\\paragraph{Structural Equation Representation of Bayesian Networks} A BN is often represented by a structural equation model (SEM),\n\\begin{equation*}\nX_j = f_j(\\bm{X}, \\epsilon_j), ~ \\forall j \\in [p],\n\\end{equation*}\nwhere the transformation $f_j$ depends on $\\bm{X}$ only through its parents\/direct causes $\\bm{X}_{pa(j)}$, and the exogenous variables $\\bm{\\epsilon}=(\\epsilon_1,\\dots,\\epsilon_p)^T\\sim P_\\epsilon$ are assumed to be mutually independent. Denote the set of transformation functions as $F = \\{f_1, \\ldots, f_p\\}$. 
Since $F$ and $P_\\epsilon$ induce the joint distribution $P$ of $\\bm{X}$, and it is not difficult to show that the induced distribution $P$ factorizes over $G$, we can, with a slight abuse of notation, rewrite the BN as $\\mathcal{B} = (G, F, P_\\epsilon)$.\n \n\\section{Functional Bayesian Networks} \\label{sec::fbn}\n\n\\subsection{General Framework} \\label{sec:gf}\n\nNow we introduce the construction of BNs for multivariate functional data. Denote the space of square integrable functions on domain $\\mathcal{D}$ with respect to measure $\\mu$ as $L^2(\\mathcal{D}) = \\{h: \\int_\\mathcal{D} h^2(\\omega) d\\mu(\\omega) < \\infty\\}$. We focus on a compact $\\mathcal{D} \\subset \\mathbb{R}$ (in fact, without loss of generality, $\\mathcal{D} = [0, 1]$) and the Lebesgue measure $\\mu$ for simplicity. Let $\\bm{Y} = (Y_1, \\ldots, Y_p)^T \\in L^2(\\mathcal{D}_1)\\times\\dots\\times L^2(\\mathcal{D}_p)$ be a collection of $p$ random functions. Denote $\\mathcal{H} = \\bigcup_{j = 1}^p \\{(\\omega, j): \\omega \\in \\mathcal{D}_j\\}$ the joint domain of $\\bm{Y}$ and $(L^2(\\mathcal{H}), \\mathcal{B}(L^2(\\mathcal{H})), P)$ its probability space. Similarly, for any subset $A \\subset [p]$, denote the joint domain $\\mathcal{H}_A = \\bigcup_{j \\in A} \\{(\\omega, j): \\omega \\in \\mathcal{D}_j\\}$ and $\\mathcal{B}(L^2(\\mathcal{H}_A))$ the Borel $\\sigma$-algebra on $L^2(\\mathcal{H}_A)$. Let $A, B, C$ be disjoint subsets of $[p]$. Following \\cite{zhu2016bayesian}, we say $\\bm{Y}_A$ is conditionally independent of $\\bm{Y}_B$ given $\\bm{Y}_C$ under $P$, if for any measurable set $D_A \\subset L^2(\\mathcal{H}_A)$, $P(\\bm{Y}_A \\in D_A | \\bm{Y}_B, \\bm{Y}_C)$ is $\\mathcal{B}(L^2(\\mathcal{H}_C))$ measurable and $P(\\bm{Y}_A \\in D_A | \\bm{Y}_B, \\bm{Y}_C) = P(\\bm{Y}_A \\in D_A | \\bm{Y}_C)$. We introduce a DAG $G = (V, E)$ where each node $j \\in V$ represents a random function $Y_j$. 
To begin with, we give the formal definition of a functional Bayesian network.\n\n\\begin{definition}[Functional Bayesian Networks]\nWe say $\\mathcal{B} = (G, P)$ is a functional Bayesian network for a set of random functions $\\bm{Y}$ if $P$ factorizes with respect to DAG $G$, \n\\begin{align*}\nP(Y_1 \\in D_1,\\dots,Y_p\\in D_p) = \\prod_{j = 1}^p P_j(Y_j \\in D_j | \\bm{Y}_{pa(j)} \\in D_{pa(j)}),\n\\end{align*}\nfor any measurable sets $D_j \\subset L^2(\\mathcal{D}_j), \\forall j \\in [p]$, where $P_j$ is the conditional probability measure of $Y_j$ given $\\bm{Y}_{pa(j)}$ under $P$.\n\\end{definition}\n\nJust like the ordinary finite-dimensional BN, the functional BN factorization implies the local Markov property and vice versa. \n\n\\begin{definition}[Functional Local Directed Markov Property]\nA probability measure $P$ of $\\bm{Y}$ satisfies the local directed Markov property with respect to $G$ if $Y_j \\perp \\bm{Y}_{nd(j)\/pa(j)} | \\bm{Y}_{pa(j)}$, i.e., $P(Y_j \\in D_j | \\bm{Y}_{nd(j)\/pa(j)}, \\bm{Y}_{pa(j)})$ is $\\mathcal{B}(L^2(\\mathcal{H}_{pa(j)}))$ measurable and $P(Y_j \\in D_j | \\bm{Y}_{nd(j)\/pa(j)}, \\bm{Y}_{pa(j)}) = P(Y_j \\in D_j | \\bm{Y}_{pa(j)})$ for any $D_j\\subset L^2(\\mathcal{D}_j)$.\n\\end{definition}\n\n\\begin{proposition}\nFunctional Bayesian network factorization is equivalent to functional local directed Markov property.\n\\end{proposition}\n\nProof is trivial. For modeling convenience, we use orthonormal basis expansion of random functions to (equivalently) redefine the functional BN in the space of basis coefficients. Let $\\{\\phi_{jk}\\}_{k = 1}^\\infty$ be a sequence of orthonormal basis functions of $L^2(\\mathcal{D}_j)$ and expand $Y_j = \\sum_{k = 1}^\\infty Z_{jk} \\phi_{jk}$, where $Z_{jk} = \\int_{\\mathcal{D}_j} Y_j(\\omega) \\phi_{jk}(\\omega) d\\omega$. 
The resulting coefficient sequence $\\bm{Z}_j = (Z_{jk})_{k = 1, \\ldots, \\infty}$ lies in the space of square summable sequences $\\ell_j^2 = \\{h_j: \\sum_{k = 1}^\\infty h_{jk}^2 < \\infty\\}$. The within-function and the between-function covariance can then be expressed in terms of the covariance of the coefficient sequences,\n\\begin{align*}\n\\text{cov}(Y_j(\\omega_j), Y_\\ell(\\omega_\\ell)) = \\sum_{k = 1}^\\infty \\sum_{h = 1}^\\infty \\phi_{jk}(\\omega_j) \\phi_{\\ell h}(\\omega_\\ell) \\text{cov}(Z_{jk}, Z_{\\ell h}), ~ \\forall \\omega_j \\in \\mathcal{D}_j, \\omega_\\ell \\in \\mathcal{D}_\\ell, ~ \\forall j, \\ell \\in [p].\n\\end{align*}\nBecause $L^2(\\mathcal{D}_j)$ and $\\ell_j^2$ are isometrically isomorphic for each $j$, for any disjoint subsets $A, B, C \\subset [p]$, $\\bm{Y}_A \\perp \\bm{Y}_B | \\bm{Y}_C$ if and only if $\\bm{Z}_A \\perp \\bm{Z}_B | \\bm{Z}_C$ where $\\bm{Z} = (\\bm{Z}_1, \\ldots, \\bm{Z}_p)^T$. Hence, if $\\bm{Y}$ follows the proposed BN model $\\mathcal{B} = (G, P)$, then the coefficient sequences $\\bm{Z}$ follows $\\mathcal{B}_Z = (G, P_Z)$ for some probability measure $P_Z$ of $\\bm{Z}$, and vice versa. Each node of the DAG $G$ either represents a random function $Y_j$ or, equivalently, its corresponding coefficient sequence $\\bm{Z}_j$. Moreover, the joint probability measure $P$ of $\\bm{Y}$ factorizes with respect to $G$ if and only if the joint probability measure $P_Z$ of $\\bm{Z}$ factorizes with respect to $G$. 
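The covariance identity above is, under a finite truncation, an algebraic fact that can be checked directly. The sketch below is our own illustration and assumes a truncated Fourier-type orthonormal basis; the paper instead selects its basis in a data-driven way.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, n = 5, 100, 200            # basis size, grid size, number of curves
omega = np.linspace(0, 1, m)

# Truncated Fourier-type basis {1, sqrt(2) cos(pi k w)} on [0, 1] (assumed choice).
Phi = np.vstack([np.ones(m)] + [np.sqrt(2) * np.cos(np.pi * k * omega)
                                for k in range(1, K)])

# Random basis coefficients with decaying scales, and the reconstructed curves.
Z = rng.standard_normal((n, K)) * np.array([1.0, 0.7, 0.5, 0.3, 0.2])
Y = Z @ Phi                      # each row is one curve evaluated on the grid

# Within-function covariance expressed through the coefficient covariance:
# cov(Y(s), Y(t)) = sum_{k,h} phi_k(s) phi_h(t) cov(Z_k, Z_h).
cov_Z = np.cov(Z, rowvar=False)
cov_Y = Phi.T @ cov_Z @ Phi
print(np.allclose(cov_Y, np.cov(Y, rowvar=False)))   # True
```

The same bilinear expansion with two different basis systems gives the between-function covariance $\text{cov}(Y_j(\omega_j), Y_\ell(\omega_\ell))$.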
\n\n\\begin{proposition}\nSuppose $\\bm{Y}\\sim P$ and let $\\bm{Z}$ be the corresponding coefficient sequences from orthonormal basis expansion.\nThen \n\\begin{align*}\nP(Y_1 \\in D_1,\\dots,Y_p\\in D_p) = \\prod_{j = 1}^p P_j(Y_j \\in D_j | \\bm{Y}_{pa(j)} \\in D_{pa(j)}),\n\\end{align*}\nfor any measurable sets $D_j \\subset L^2(\\mathcal{D}_j), \\forall j \\in [p]$ if and only if\n\\begin{align*}\nP_Z(\\bm{Z}_1 \\in D_1',\\dots,\\bm{Z}_p\\in D_p') = \\prod_{j = 1}^p P_{Z j}(\\bm{Z}_j \\in D_j' | \\bm{Z}_{pa(j)} \\in D_{pa(j)}'),\n\\end{align*}\nfor any measurable sets $D_j' \\subset \\ell_j^2, \\forall j \\in [p]$.\n\\end{proposition}\n\nThe proof directly follows the preceding paragraph. Again, just like the ordinary finite-dimensional BN, if one makes the causal Markov assumption, the DAG $G$ in the proposed functional BN can be interpreted causally. Hereafter, by default, we always make the causal Markov assumption (hence $G$ is a causal DAG, the edge strength is interpreted as direct causal effect, etc) but all the results are simply reduced to those of a directed conditional independence model when the causal Markov assumption is dropped.\n \n\\subsection{Functional Linear Non-Gaussian Bayesian Networks} \\label{sec::FLiNG}\n\nSection \\ref{sec:gf} introduces a general framework for modeling directed conditional independence and causal relationships for multivariate functional data. In this subsection, we discuss in detail one specific case of the proposed general framework, namely the \\underline{F}unctional \\underline{Li}near \\underline{N}on-\\underline{G}aussian (FLiNG) BNs. 
Specifically, the FLiNG-BN assumes $\\bm{Z}$ follows a linear SEM,\n\\begin{align} \\label{eq1}\n\\bm{Z}_j = \\sum_{\\ell = 1}^p \\bm{B}_{j\\ell} \\bm{Z}_\\ell + \\bm{\\epsilon}_j, ~ \\forall j \\in [p],\n\\end{align}\nwhere $\\bm{\\epsilon}_j$ is an infinite-dimensional exogenous vector, $\\bm{B}_{j\\ell} = (B_{j\\ell}(k_j, k_\\ell))_{k_j=1,k_\\ell=1}^{\\infty,\\infty}$ is an infinite-dimensional direct causal effect matrix from $\\bm{Z}_\\ell$ to $\\bm{Z}_j$, and $\\ell\\to j$ is present in $G$ (i.e., $\\bm{Z}_\\ell$ is a direct cause of $\\bm{Z}_j$) if there exist $k_\\ell$ and $k_j$ such that $B_{j\\ell}(k_j, k_\\ell) \\neq 0$. Neither the causal effects nor the causal graph is assumed to be known; therefore, the main goal of this article is precisely to infer them from observational data. Because $L^2(\\mathcal{D}_j)$ and $\\ell_j^2$ are isometrically isomorphic for all $j \\in [p]$, the causal relationships of $\\bm{Z}$ encoded in DAG $G$ directly transfer to the causal relationships of $\\bm{Y}$, i.e., $\\bm{Z}_\\ell$ is a direct cause of $\\bm{Z}_j$ if and only if $Y_\\ell$ is a direct cause of $Y_j$.\n\nIn practice, the random functions $\\bm{Y}$ can only be measured on finite grids with random noises. 
In other words, we do not observe realizations of $\\bm{Y}$ but instead we observe realizations of $\\bm{W}=(\\bm{W}_1,\\dots,\\bm{W}_p)^T$ where $\\bm{W}_{j} = (W_{j}(1), \\ldots, W_{j}(m_j))$, which is the set of measurements of $Y_{j}$ on a finite grid $D_{j} = \\{\\omega_{j}(1), \\ldots, \\omega_{j}(m_j)\\} \\subset \\mathcal{D}_j$ with independent white noise $e_{j}(m) \\sim N(0, \\sigma_j)$, $\\forall m \\in [m_j]$, \n\\begin{align} \\label{eq2}\nW_j(m) = Y_{j}(\\omega_{j}(m)) + e_{j}(m).\n\\end{align}\nNote that $D_{j}$ can be different across $j$ (and also across realizations).\n\nOne element of the FLiNG-BN that may seem inconsequential but turns out to be crucial for discovering causality is the specification of the probability distribution of the exogenous variables $\\bm{\\epsilon}_j= (\\epsilon_{jk})_{k=1}^\\infty$ in \\eqref{eq1}.\nA tempting choice may be Gaussian, but it is the non-Gaussianity of the $\\bm{\\epsilon}_j$'s that allows causal identification, as we will show in Section \\ref{sec:ci}. Specifically, we assume $\\epsilon_{jk}$ to follow a finite scale mixture of Gaussian distributions, $\\epsilon_{jk} \\sim \\sum_{m = 1}^{M_{jk}} \\pi_{jkm} N(0, \\tau_{jkm})$,\nwhere $M_{jk}$ is the number of mixture components. The non-Gaussian exogenous variables lead to non-Gaussian coefficient sequences $\\bm{Z}$, which in turn lead to non-Gaussian-process distributed random functions $\\bm{Y}$. In addition to enabling causal identification, non-Gaussian processes are robust against outlying curves \\citep{zhu2011robust}. For finite sample inference, we truncate the orthonormal basis at level $K$ such that $\\bm{\\phi}_j = (\\phi_{j1}, \\ldots, \\phi_{jK})^T$, as is commonly done in the functional data analysis literature. 
Consequently, \\eqref{eq2} becomes\n\\begin{align} \\label{eq3}\nW_{j}(m) = \\sum_{k = 1}^K Z_{jk} \\phi_{jk}(\\omega_{j}(m)) + e_{j}(m).\n\\end{align}\n \n\\subsection{Causal Identifiability} \\label{sec:ci}\n\nThe proposed functional BNs are useful representations of directed conditional independence and causal relationships for multivariate functional data. The key remaining question is how to learn the underlying (causal) DAGs from observational data. Constraint-based methods, which are often model-free, have been popular for DAG learning. For the proposed functional BNs, we could, in principle, also use constraint-based methods, which test for conditional independence of pairs of functions. However, conditional independence tests are notoriously difficult and inefficient even for scalar random variables. Furthermore, even if we had access to oracle conditional independence tests for random functions, we could, by definition, only hope to identify the MEC (recall that an MEC contains DAGs with exactly the same set of conditional independence relationships). This may be acceptable if one is only interested in learning conditional independence relationships. But as mentioned in Section \\ref{sec::introduction}, for causal discovery, this is clearly unsatisfactory because the directionality of a potentially large number of edges of Markov equivalent DAGs may be left undetermined and hence the causal interpretations of these edges are unclear. Because the proposed FLiNG-BN is a proper probability model, we can exploit a certain feature of the model, namely the non-Gaussianity, to uniquely identify the underlying causal DAG.\n \n\\begin{definition}[Causal Identifiability]\nSuppose $\\bm{Y}$ follows the FLiNG-BN $\\mathcal{B} = (G, P)$, and suppose $\\bm{W}$ is a noisy version of $\\bm{Y}$ with noise variances $\\bm{\\sigma}= (\\sigma_1, \\ldots, \\sigma_p)$ as defined in \\eqref{eq2}. 
Let $P_W$ denote the distribution of $\\bm{W}$ induced from the FLiNG-BN and the noises. We say that the causal DAG of the FLiNG-BN is identifiable from $\\bm{W}$ if there does not exist another BN $\\mathcal{B}' = (G', P')$ with $G'\\neq G$ and noise variances $\\bm{\\sigma}' = (\\sigma'_1, \\ldots, \\sigma'_p)$ such that the induced distribution on $\\bm{W}$, $P'_W$, is equivalent to $P_W$, i.e., $P_W(\\bm{W}) \\equiv P'_W(\\bm{W})$.\n\\end{definition}\n\n\\begin{theorem}[Causal Identifiability] \\label{them1}\nThe causal DAG of FLiNG-BN is identifiable if the number of Gaussian mixture components satisfies $M_{jk} > 1, \\forall j, k$.\n\\end{theorem}\n\nTheorem \\ref{them1} signifies that by examining the probability distribution $P_W$, to which we have access through the observational data alone, one can gauge the likelihood that a given causal DAG is the data generating DAG. With a finite dataset, we shall focus on weighing different candidate causal DAGs by their posterior probabilities. Here, we provide an outline of the proof; the complete proof is given in Section A of the Supplementary Material. Given a chosen set of basis functions, we show the result in the space of basis coefficients. The problem then reduces to proving that, given $\\bm{Z} = \\bm{B} \\bm{Z} + \\bm{\\epsilon}$ and observations $\\bm{W} = \\bm{Z} + \\bm{e}$, there does not exist another equivalent parameterization $\\bm{Z}' = \\bm{B}' \\bm{Z}' + \\bm{\\epsilon}'$ and $\\bm{W} = \\bm{Z}' + \\bm{e}'$. Since we assume each component of $\\bm{\\epsilon}$ follows a Gaussian scale mixture, the induced distribution on $\\bm{W}$ is a multivariate Gaussian mixture (with different precision matrices). We then prove that the causal effect matrix $\\bm{B}$ is uniquely identifiable from such a mixture model by combining the identification of the Gaussian mixture components, the uniqueness of the LDL decomposition, and the identification of the causal ordering. 
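The grouped-regression mechanism that drives this identification can be checked numerically. The following is an illustrative sketch (not the authors' code; the sample size and seed are arbitrary) using the bivariate setup of Example \ref{exp1}: within each mixture-variance group, the regression slope is nearly stable in the causal direction but varies markedly in the anti-causal direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # large n so group-wise slopes are estimated accurately

# Z_1 = eps_1, Z_2 = Z_1 + eps_2 (causal graph 1 -> 2), with each exogenous
# variance drawn from the two-component scale mixture {0.5, 1.0}.
var1 = rng.choice([0.5, 1.0], size=n)
var2 = rng.choice([0.5, 1.0], size=n)
z1 = rng.normal(0.0, np.sqrt(var1))
z2 = z1 + rng.normal(0.0, np.sqrt(var2))
w1 = z1 + rng.normal(0.0, np.sqrt(0.1), n)  # noisy observations, Var(e) = 0.1
w2 = z2 + rng.normal(0.0, np.sqrt(0.1), n)

def slope(x, y):
    # OLS slope of y on x (all variables are centered, so no intercept).
    return x @ y / (x @ x)

causal, anticausal = [], []
for v1 in (0.5, 1.0):        # the four groups C_1, ..., C_4
    for v2 in (0.5, 1.0):
        g = (var1 == v1) & (var2 == v2)
        causal.append(slope(w1[g], w2[g]))      # regress W_2 on W_1
        anticausal.append(slope(w2[g], w1[g]))  # regress W_1 on W_2

# Slopes are nearly common across groups only in the causal direction.
print(np.ptp(causal), np.ptp(anticausal))
```

In the causal direction the four slopes differ only through the small noise-attenuation factor $\tau_1/(\tau_1 + \sigma)$, whereas in the anti-causal direction each slope depends on both mixture variances, so the spread is an order of magnitude larger.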
We demonstrate the identifiability result with a toy example.\n\n\\begin{example} \\label{exp1}\nConsider a true functional causal graph $1\\to 2$ and the corresponding data generating model $Z_1 = \\epsilon_1$ with $\\epsilon_1 \\sim 0.5 N(0, 0.5) + 0.5 N(0, 1)$ and $Z_2 = Z_1 + \\epsilon_2$ with $\\epsilon_2 \\sim 0.5 N(0, 0.5) + 0.5 N(0, 1)$; note that, for simplicity, we assume in this example that the number of basis functions is $K=1$. Assume we observe with noise $W_1 = Z_1 + e_1$ and $W_2 = Z_2 + e_2$ with $e_1, e_2 \\sim N(0, 0.1)$. We sample $n = 1000$ observations from this model and index them by the subscript $i=1,\\dots,n$. For the purpose of illustration, suppose we know the mixture component assignment of each observation and define four groups of observations based on the combination of variances of $\\epsilon_1$ and $\\epsilon_2$,\n\\begin{equation*}\n\\begin{aligned}\n& C_1 = \\{i: \\text{Var}(\\epsilon_{i1}) = 0.5 ~\\text{and}~ \\text{Var}(\\epsilon_{i2}) = 0.5\\}, ~ C_2 = \\{i: \\text{Var}(\\epsilon_{i1}) = 0.5 ~\\text{and}~ \\text{Var}(\\epsilon_{i2}) = 1\\}, \\\\\n& C_3 = \\{i: \\text{Var}(\\epsilon_{i1}) = 1 ~\\text{and}~ \\text{Var}(\\epsilon_{i2}) = 0.5\\}, ~ C_4 = \\{i: \\text{Var}(\\epsilon_{i1}) = 1 ~\\text{and}~ \\text{Var}(\\epsilon_{i2}) = 1\\}.\n\\end{aligned}\n\\end{equation*}\nWe fit linear regressions separately to all observations and to the observations in each of the four groups, in the true causal direction $1 \\to 2$ (regressing $W_2$ on $W_1$) and in the anti-causal direction $2 \\to 1$ (regressing $W_1$ on $W_2$); the fits are shown in Figure \\ref{demo}. We observe that the fitted lines are almost identical across groups in the causal direction whereas they can be quite different across groups in the anti-causal direction. Therefore, only the true causal direction gives a regression coefficient that is essentially common to all groups. 
Notice that if there is only one mixture component (i.e., degenerating to the Gaussian case), no comparison can be made between the causal and anti-causal directions since there will be only one regression line.\n\\end{example}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=1\\linewidth]{DEMO}\n\\caption{A toy example for demonstration of causal identification. The left (right) panel shows the linear regression of $W_2$ ($W_1$) on $W_1$ ($W_2$). Data are simulated from the causal graph $1\\to 2$. Colored lines are the fitted linear regressions for all observations and for the observations in groups $C_1$--$C_4$.}\n\\label{demo}\n\\end{figure}\n\nThe next counterexample illustrates the necessity of the non-Gaussian assumption for causal identification.\n\n\\begin{example}\nConsider a bivariate case similar to Example \\ref{exp1} but now the exogenous variables are Gaussian instead of mixtures of Gaussians. Suppose the true functional causal graph is $1\\to 2$ with the corresponding data generating model $Z_1 = \\epsilon_1$ with $\\epsilon_1 \\sim N(0, \\tau_1)$ and $Z_2 = b Z_1 + \\epsilon_2$ with $\\epsilon_2 \\sim N(0, \\tau_2)$; note again that, for simplicity, we assume in this example that the number of basis functions is $K = 1$. Assume we observe with noise $W_1 = Z_1 + e_1$ and $W_2 = Z_2 + e_2$ with $e_1 \\sim N(0, \\sigma_1)$ and $e_2 \\sim N(0, \\sigma_2)$. The induced joint distribution on $\\bm{W} = (W_1, W_2)$ is then bivariate Gaussian with mean $0$ and covariance matrix\n$$\\begin{pmatrix}\n\\tau_1 + \\sigma_1 & b \\tau_1\\\\\nb \\tau_1 & b^2 \\tau_1 + \\tau_2 + \\sigma_2\n\\end{pmatrix}. $$\n\nFurther consider the anti-causal model $2 \\to 1$ where $Z_2' = \\epsilon_2'$ with $\\epsilon_2' \\sim N(0, \\tau_2')$ and $Z_1' = b' Z_2' + \\epsilon_1'$ with $\\epsilon_1' \\sim N(0, \\tau_1')$. Suppose $W_1 = Z_1' + e_1'$ and $W_2 = Z_2' + e_2'$ with $e_1' \\sim N(0, \\sigma_1')$ and $e_2' \\sim N(0, \\sigma_2')$. 
The induced joint distribution on $\\bm{W} = (W_1, W_2)$ is still bivariate Gaussian with mean 0 and covariance matrix\n$$\\begin{pmatrix}\nb^{'2} \\tau_2' + \\tau_1' + \\sigma_1' & b' \\tau_2'\\\\\nb' \\tau_2' & \\tau_2' + \\sigma_2'\n\\end{pmatrix}. $$ \nFor any chosen $\\tau_1', \\sigma_1' > 0$ such that $\\tau_1' + \\sigma_1' < \\tau_1 + \\sigma_1 - b^2 \\tau_1^2 \/ (b^2 \\tau_1 + \\tau_2 + \\sigma_2)$, if we set\n\\begin{equation*}\n\\begin{aligned}\n& b' = (\\tau_1 + \\sigma_1 - \\tau_1' - \\sigma_1') \/ (b \\tau_1), \\\\\n&\\tau_2' = b^2 \\tau_1^2 \/ (\\tau_1 + \\sigma_1 - \\tau_1' - \\sigma_1'), \\\\\n& \\sigma_2' = b^2 \\tau_1 + \\tau_2 + \\sigma_2 - b^2 \\tau_1^2 \/ (\\tau_1 + \\sigma_1 - \\tau_1' - \\sigma_1'),\n\\end{aligned}\n\\end{equation*}\nthen the induced distribution coincides with that under the true causal model (i.e., Gaussian with mean 0 and the same covariance). Therefore, causal identification fails in this case.\n\\end{example}\n\n\\section{Bayesian Inference} \\label{sec::inference}\n\nThe inference of the proposed FLiNG-BN framework can be carried out in either a frequentist (e.g., maximizing a penalized likelihood) or a Bayesian (e.g., sampling from the posterior distribution) fashion. Existing frequentist functional graphical models \\citep{qiao2019functional,qiao2020doubly,zapata2022partial,solea2022copula,lee2022nonparametric,lee2022functional} often estimate graphs in two separate steps -- (i) estimate the basis coefficient sequence of each function marginally via functional principal component analysis, and (ii) learn a directed\/undirected graph based on the estimated coefficient sequences. However, the eigenfunctions that marginally explain the most variation of each individual function do not necessarily explain well the conditional\/causal relationships among a set of functions. Moreover, the estimation uncertainty is not propagated from the first step to the second, which may result in overly confident inference. 
To mitigate these potential drawbacks of the two-step approaches, we propose a fully Bayesian inference procedure that jointly infers basis coefficient sequences and the DAG structure. This joint inference approach constructs orthonormal basis functions adaptive to their conditional\/causal relationships and allows for finite-sample inference and uncertainty quantification. \n\n\\subsection{Adaptive Orthonormal Basis Functions}\n\nWe assume the basis functions to be shared across all random functions \\citep{kowal2017bayesian,zapata2022partial}, $\\phi_{jk}(\\omega) := \\phi_k(\\omega),\\forall j\\in [p]$, which is more parsimonious than models based on function-specific basis functions. Moreover, the common basis functions put the basis coefficient sequences $\\bm{Z}_j,\\forall j\\in[p]$ on an equal footing (e.g., the magnitudes of basis coefficients are directly comparable) so that the BN on $\\bm{Z}$ has a more coherent interpretation. In this case, a non-zero matrix block $\\bm{B}_{j\\ell}$ corresponds to a causal connection from $\\bm{Y}_\\ell$ to $\\bm{Y}_j$. Loosely speaking, if we regard the basis functions as signal channels, then a significantly non-zero $B_{j\\ell}(k_j, k_\\ell)$ indicates that $\\bm{Y}_\\ell$ directly affects $\\bm{Y}_j$ through its signal transmission from the $k_\\ell$-th channel to the $k_j$-th channel. \n\nAs mentioned above, we do not pre-specify a fixed set of orthonormal basis functions; instead, they are learned adaptively from data by further expanding them with spline basis functions \\citep{kowal2017bayesian}, $\\phi_k(\\omega) = \\sum_{\\ell = 1}^L A_{k\\ell} b_\\ell(\\omega)$, where $\\bm{b} = (b_1, \\ldots, b_L)^T$ is a set of cubic B-spline basis functions with equally spaced knots and $\\bm{A}_k = (A_{k1}, \\ldots, A_{kL})^T, \\forall k \\in [K]$ are spline coefficients. 
Because the $\\bm{A}_k$'s are not fixed \\emph{a priori}, neither are the $\\phi_k$'s.\n\n\\subsection{Prior Model}\n\nWe summarize our model and its entailed parameters using a DAG shown in Figure \\ref{hmodel}. The prior distributions of the model parameters are introduced in this section. We simulate posterior samples through Markov chain Monte Carlo (MCMC). Details are given in Section B of the Supplementary Material. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=1\\linewidth]{HMODEL}\n\\caption{A DAG illustrating the model hierarchy. Single-line arrows are stochastic relationships and double-line arrows are deterministic relationships. The observed node $\\bm{W}_j$ is shown in a rectangle and the other nodes are shown in circles.}\n\\label{hmodel}\n\\end{figure}\n\n\\paragraph{Prior on B-spline Coefficients $\\bm{A}_k$} The prior on $\\bm{A}_k$ serves three purposes. First, it forces the $\\phi_k$'s to be orthonormal, i.e., $\\int \\phi_k(\\omega) \\phi_h(\\omega) d\\omega = I(k = h), \\forall k, h \\in[K]$. Second, it regularizes the roughness of the $\\phi_k$'s to prevent overfitting and sorts the orthonormal basis functions by increasing roughness. Third, it enables posterior inference on the orthonormal basis functions simultaneously with the graph estimation without having to fix them \\emph{a priori}.\n\nWe summarize the main steps of the prior specification and refer to \\cite{kowal2017bayesian} for details. First, to regularize the roughness of $\\phi_k$ in a frequentist framework, one would consider a penalized likelihood with the roughness penalty,\n$$\\lambda_k \\mathcal{P}(\\bm{A}_k) = \\lambda_k \\int [\\phi_k^{''}(\\omega)]^2 \\ d\\omega = \\lambda_k \\bm{A}_k^T \\bm{\\Omega} \\bm{A}_k,$$\nwhere $\\lambda_k > 0$ is the regularization parameter and $\\bm{\\Omega} = \\int \\bm{b}^{''}(\\omega)[\\bm{b}^{''}(\\omega)]^T \\ d\\omega$. 
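For concreteness, the penalty matrix $\bm{\Omega}$ can be assembled numerically. The sketch below is an illustration under an assumed small basis ($L = 8$ cubic B-splines with equally spaced knots on $[0,1]$), not the authors' code; it also checks that $\bm{\Omega}$ annihilates the coefficient vectors of constant and linear functions, the source of its rank deficiency of 2.

```python
import numpy as np
from scipy.interpolate import BSpline

# Assumed toy setting: L = 8 cubic B-splines with equally spaced knots on [0, 1].
L, deg = 8, 3
t = np.r_[[0.0] * deg, np.linspace(0, 1, L - deg + 1), [1.0] * deg]  # clamped knots

# Second derivatives b_l'' of all basis functions on a fine grid.
grid = np.linspace(0, 1, 4001)
D2 = np.stack([BSpline(t, np.eye(L)[l], deg).derivative(2)(grid) for l in range(L)])

# Omega_{lh} = \int b_l''(w) b_h''(w) dw via the trapezoidal rule.
w = np.full(grid.size, grid[1] - grid[0])
w[0] *= 0.5
w[-1] *= 0.5
Omega = (D2 * w) @ D2.T

# Constant and linear functions are unpenalized: their B-spline coefficient
# vectors (all ones, and the Greville abscissae for f(w) = w) lie in the null
# space of Omega, so Omega is rank-deficient by exactly 2.
greville = np.array([t[l + 1 : l + 1 + deg].mean() for l in range(L)])
print(np.abs(Omega @ np.ones(L)).max(), np.abs(Omega @ greville).max())
```

Both printed residuals are at quadrature-error level, confirming the two-dimensional null space used in the reparameterization that follows.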
As a Bayesian counterpart, the regularization term is equivalent to a prior on the B-spline coefficients $\\bm{A}_k \\sim N(\\bm{0}, \\lambda_k^{-1} \\bm{\\Omega}^{-})$, where $\\bm{\\Omega}^{-}$ is a pseudo-inverse (since $\\bm{\\Omega}$ is rank-deficient by 2). Let $\\bm{\\Omega} = \\bm{U} \\bm{D} \\bm{U}^T$ be the singular value decomposition of $\\bm{\\Omega}$. To facilitate efficient computation, we follow \\cite{wand2008semiparametric} and reparameterize $\\phi_k = \\sum_{\\ell=1}^L A_{k\\ell}b_\\ell = \\sum_{\\ell=1}^L \\tilde{A}_{k\\ell} \\tilde{b}_\\ell$ with $\\tilde{\\bm{b}}(\\omega) = (1, \\omega, \\bm{b}^T(\\omega) \\bm{U}_P \\bm{D}_P^{-1\/2})^T$ where $\\bm{D}_P$ is the $(L - 2) \\times (L - 2)$ submatrix of $\\bm{D}$ corresponding to non-zero singular values and $\\bm{U}_P$ is the corresponding $L \\times (L - 2)$ submatrix of $\\bm{U}$. The reparameterization induces a prior on $\\tilde{\\bm{A}}_k = (\\tilde{A}_{k1}, \\ldots, \\tilde{A}_{kL})^T\\sim N(0, S_k)$ where $S_k = \\mathrm{diag}(\\infty, \\infty, \\lambda_k^{-1}, \\ldots, \\lambda_k^{-1})$ with the first two dimensions corresponding to the unpenalized constant and linear terms. In practice, one can replace $\\infty$ by a large number, say $10^8$. \n\nSecond, we constrain the regularization parameters $\\lambda_1 > \\cdots > \\lambda_K > 0$ to identify the ordering of basis functions, which sorts the basis functions by decreasing smoothness. Unlike the functional principal component analysis (PCA) where the principal components are ordered by the proportion of variance explained, the adopted Bayesian approach is less prone to rough functions. 
Given the ordering constraint, a uniform prior is imposed such that $\\lambda_k \\sim U(L_k, U_k)$, where $U_1 = 10^8$, $L_k = \\lambda_{k + 1}$ for $k = 1, \\ldots, K - 1$, $U_k = \\lambda_{k - 1}$ for $k = 2, \\ldots, K$, and $L_K = 10^{-8}$.\n\nFinally, consider the orthonormality constraint\n\\begin{align} \\label{penmt}\n\\int \\phi_k(\\omega) \\phi_h(\\omega) \\ d\\omega = \\int \\tilde{\\bm{A}}_k^T \\tilde{\\bm{b}}(\\omega) \\tilde{\\bm{b}}^T(\\omega)\\tilde{\\bm{A}}_h \\ d\\omega = \\tilde{\\bm{A}}_k^T \\bm{J} \\tilde{\\bm{A}}_h = I(k = h),\n\\end{align}\nwith $\\bm{J} = \\int \\tilde{\\bm{b}}(\\omega) \\tilde{\\bm{b}}^T(\\omega) \\ d\\omega$. This constraint can be easily enforced by projection and normalization during the course of MCMC; see Section B of the Supplementary Material for details.\n\n\\paragraph{Priors on the DAG Adjacency Matrix $\\bm{E}$ and Direct Causal Effects $\\bm{B}$} The key problem we aim to address in this article is causal structure learning, i.e., inferring the adjacency matrix, $\\bm{E} = (E_{j\\ell})$ (recall $E_{j \\ell} = 1$ if and only if $\\ell \\to j$). We propose to use a beta-Bernoulli-like prior $E_{j \\ell} \\sim \\mathrm{Bernoulli}(r)$ with $r \\sim \\mathrm{Beta}(a_r, b_r)$, subject to the acyclicity constraint,\n$$P(\\bm{E} | r) \\propto \\prod_{j\\neq\\ell} r^{E_{j\\ell}} (1-r)^{1-E_{j\\ell}} I(G \\text{ is a DAG}).$$\nWe set $a_r = b_r = 1$. \\cite{scott2010bayes} showed that the beta-Bernoulli prior allows automatic multiplicity adjustment in sparse regression problems. In our context, the marginal distribution of $\\bm{E}$ with $r$ integrated out equals\n\\begin{align} \\label{mar}\nP(\\bm{E}) \\propto \\mathrm{Beta}\\left(\\sum_{j\\neq\\ell}E_{j\\ell}+1,\\sum_{j\\neq\\ell}(1-E_{j\\ell})+1\\right)I(G \\text{ is a DAG}).\n\\end{align}\nThe marginal distribution strongly prevents false discoveries by increasing the penalty against additional edges as the dimension $p$ grows. 
For example, the marginal \\eqref{mar} favors an empty graph over a graph with one edge by a factor of $p^2 - p$, which increases with $p$.\n\nConditional on $\\bm{E}$, we assume independent matrix-variate spike-and-slab priors on the direct causal effects,\n\\begin{align*}\n\\bm{B}_{j \\ell}|E_{j \\ell} \\sim (1 - E_{j \\ell}) \\delta_{\\bm{O}}(\\bm{B}_{j \\ell}) + E_{j \\ell} N(\\bm{B}_{j\\ell}|\\bm{O}, \\gamma \\bm{I}, \\bm{I}),\n\\end{align*}\nwhere $\\delta_{\\bm{O}}(\\cdot)$ is a point mass at a $K\\times K$ zero matrix $\\bm{O}$ and $N(\\cdot|\\bm{O}, \\gamma \\bm{I}, \\bm{I})$ is a centered matrix-variate normal distribution with row and column covariance matrices $\\gamma \\bm{I}$ and $\\bm{I}$ where $\\bm{I}$ is a $K\\times K$ identity matrix. The hyperparameter $\\gamma$ indicates the overall causal effect size and is assumed to follow a conjugate inverse-gamma prior, $\\gamma \\sim IG(a_\\gamma, b_\\gamma)$ with $a_\\gamma = b_\\gamma = 1$.\n\n\\paragraph{Prior on the Gaussian Scale Mixture} We choose conjugate priors,\n\\begin{align*}\n\\bm{\\pi}_{jk} = (\\pi_{jk1}, \\ldots, \\pi_{jkM}) \\sim \\text{Dirichlet}(\\alpha,\\dots,\\alpha), ~ \\tau_{jkm} \\sim IG(a_\\tau, b_\\tau), ~ \\forall j \\in [p], k \\in [K], m \\in [M],\n\\end{align*}\nwhich allows for straightforward Gibbs sampling. As default, we set $\\alpha = 1$ and $a_\\tau = b_\\tau = 1$. \n\n\\paragraph{Prior on Observation Noises} We complete the prior specification with a conjugate inverse-gamma prior on the variance of observation noises, $\\sigma_j \\sim IG(a_\\sigma, b_\\sigma), \\forall j \\in [p]$ with $a_\\sigma = b_\\sigma = 0.01$.\n\nFinally, we summarize the differences between the proposed FLiNG-BN and the work from \\cite{lee2022functional}. First, \\cite{lee2022functional} assume their functions to be noiseless whereas we consider the scenario where functions are observed with noises. The causal identifiability theory is significantly more complicated when functions are noisy. 
Second, they assume their functions to be Gaussian whereas our functions are non-Gaussian; this difference leads to different learning algorithms and identifiability theory. Third, their inference is a two-step procedure based on causal ordering identification and sparse function-on-function regression, while the proposed Bayesian hierarchical model admits a one-step inference procedure, which learns the graph structure by directly searching in the graph space without having to learn the causal ordering first.\n\n\\section{Simulation Studies} \\label{sec::experiment}\n\nWe conducted simulation studies to evaluate the proposed FLiNG-BN model. We considered two scenarios. In the first scenario, the functions were observed on an evenly spaced grid; this is the scenario commonly studied in the existing functional undirected graphical models \\citep{qiao2019functional} and is also similar to our later EEG application. In the second scenario, the functions were observed on an unevenly spaced grid, similar to the COVID-19 longitudinal application (the details are shown in Section C of the Supplementary Material). We compared the proposed FLiNG-BN with a functional undirected graphical model (FGLASSO; \\citealt{qiao2019functional}). We did not compare with \\cite{lee2022functional} due to the lack of publicly available code at the time of submission. In addition, we compared FLiNG-BN with approaches based on two-step estimation procedures. In the first step, we extracted basis coefficients obtained from functional PCA using the package \\texttt{fdapace} \\citep{carroll2021}. In the second step, given the estimated basis coefficients, we constructed causal graphs using either the LiNGAM \\citep{shimizu2006linear} algorithm (termed FPCA-LiNGAM) or the PC \\citep{spirtes1991algorithm} algorithm (termed FPCA-PC). 
LiNGAM estimates a causal DAG based on the linear non-Gaussian assumption whereas PC generally returns only an equivalence class of DAGs based on conditional independence tests. Their implementations are available from the R package \\texttt{pcalg} \\citep{kalisch2020overview}.\n\nTo mimic the EEG data application, we simulated data from FLiNG-BN with all the combinations of sample size $n \\in \\{50, 100, 200\\}$, number of functions $p \\in \\{30, 60, 90\\}$, and grid size $d \\in \\{125, 250\\}$. The grid spanned the unit interval $[0,1]$. We set the true number of basis functions to be $K = 5$. To generate basis functions, we first simulated the non-orthonormal functions $\\phi_k^U, \\forall k \\in [K]$ from a set of $L = 6$ cubic B-spline basis functions with evenly spaced knots, $\\phi_k^U = \\sum_{\\ell=1}^L A_{k\\ell}b_\\ell$, where the $A_{k\\ell}$'s were generated from a standard normal distribution. We then empirically orthonormalized $(\\phi_1^U, \\ldots, \\phi_K^U)$ to get the orthonormal basis functions $(\\phi_1, \\ldots, \\phi_K)$. The simulation true causal graph $G$ was generated from the Erd\\H{o}s-R\\'{e}nyi model with connection probability $2\/p$, subject to the acyclicity constraint. Given the true graph $G$, each block of non-zero direct causal effects $\\bm{B}_{j\\ell}$ was generated independently from a standard matrix-variate normal distribution. Then the basis coefficient sequences $\\bm{Z}$ were generated from \\eqref{eq1} where the exogenous variables $\\bm{\\epsilon}_j$'s were generated from a centered Laplace distribution with scale $b = 0.5$. Note that when we fit FLiNG-BN to the simulated data, we still assumed the exogenous variables to follow a discrete scale Gaussian mixture although the simulation true exogenous variables were Laplace (i.e., a continuous scale Gaussian mixture). 
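The generation steps described so far (random acyclic graph, matrix-normal effect blocks, Laplace exogenous variables, SEM solve) can be sketched as follows; this is an illustrative reimplementation under one $(n, p)$ setting, not the authors' simulation code, and it imposes acyclicity by drawing a random causal ordering first.

```python
import numpy as np

rng = np.random.default_rng(1)
p, K, n = 30, 5, 100

# Erdos-Renyi-type DAG with connection probability 2/p: draw a random causal
# ordering, then allow edges only from earlier to later nodes (acyclic by construction).
order = rng.permutation(p)
E = np.zeros((p, p), dtype=int)            # E[j, l] = 1 iff l -> j
for a in range(p):
    for b in range(a + 1, p):
        if rng.random() < 2 / p:
            E[order[b], order[a]] = 1

# Block direct-causal-effect matrix: each nonzero K x K block is standard normal.
B = np.zeros((p * K, p * K))
for j in range(p):
    for l in range(p):
        if E[j, l]:
            B[j * K:(j + 1) * K, l * K:(l + 1) * K] = rng.standard_normal((K, K))

# Exogenous variables: centered Laplace with scale 0.5 (a continuous Gaussian
# scale mixture); solve the linear SEM Z = B Z + eps for every sample at once.
eps = rng.laplace(0.0, 0.5, size=(n, p * K))
Z = np.linalg.solve(np.eye(p * K) - B, eps.T).T   # n x (pK) coefficient sequences
print(Z.shape)  # (100, 150)
```

Because the graph is acyclic, $\bm{I} - \bm{B}$ is permutation-similar to a unit triangular matrix and hence always invertible, so the SEM solve is well defined.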
Finally, noisy observations were simulated following \\eqref{eq3} with the signal-to-noise ratio, i.e., the mean value of $|y_j^{(i)}(\\omega_j^{(i)}(m))| \/ \\sigma_j$ across all samples $i \\in [n]$ and grid points $m \\in [m_{j}^{(i)}]$, set to 5.\n\nFor implementing the proposed FLiNG-BN, we set the number of mixture components to $M = 5$ and the number of B-spline basis functions to $L=20$ (note that the simulation truth was $L=6$), and ran MCMC for 5,000 iterations (discarding the first half as burn-in and retaining every 5th iteration after burn-in). The causal graph $G$ was estimated by thresholding the posterior probability of inclusion at 0.5 (i.e., the median probability model). Parameters of the competing methods were set to their default values. To assess the graph recovery performance, we calculated the true positive rate (TPR), false discovery rate (FDR), and Matthews correlation coefficient (MCC),\n\\begin{align*}\n& \\text{TPR} = \\text{TP}(\\text{TP} + \\text{FN})^{-1}, ~~~~~ \n\\text{FDR} = \\text{FP}(\\text{TP} + \\text{FP})^{-1}, \\\\\n& \\text{MCC} = (\\text{TP} \\times \\text{TN} - \\text{FP} \\times \\text{FN})\\left[(\\text{TP} + \\text{FP}) \\times (\\text{TP} + \\text{FN}) \\times (\\text{TN} + \\text{FP}) \\times (\\text{TN} + \\text{FN})\\right]^{-1\/2},\n\\end{align*}\nwhere TP, TN, FP, and FN stand for the numbers of true positives, true negatives, false positives, and false negatives, respectively. MCC ranges from $-$1 to 1 with $0$ indicating a random guess and $1$ a perfect recovery. Since FGLASSO learns an undirected graph, we compared it with a moralization of the true graph\\footnote{Graph moralization converts a DAG to an undirected graph by first marrying all the unmarried parents and then removing all the directions. A probability distribution that respects the Markov property of a DAG must respect the Markov property of its moral graph.}. 
Similarly, since the PC algorithm returns the MEC representation\\footnote{The MEC representation is shown as an essential graph, where an edge present between two nodes is directed if and only if it has the same direction in all members of the MEC. Otherwise, it is undirected.}, we compared it with the MEC of the true causal graph. \n\nThe results based on 50 repeated simulations are summarized in Table \\ref{s1}, from which we conclude that the proposed FLiNG-BN significantly outperformed all the competitors FGLASSO, FPCA-LiNGAM, and FPCA-PC across all combinations of $n$, $p$, and $d$. This is not surprising because (i) FGLASSO is not designed for learning directed graphs; it was compared with the proposed FLiNG-BN because of the lack of an alternative functional BN implementation. (ii) Although FPCA-LiNGAM and FPCA-PC are capable of learning directed graphs, they still performed poorly because they are implemented in a two-step procedure where there is little reason to believe that the basis coefficients extracted by the functional PCA in the first step are useful for capturing the functional dependence in the second step. (iii) Unlike the proposed approach, none of the competing methods controls for false discovery and some impose the stringent Gaussian assumption, resulting in high FDR and\/or low TPR. Our suggested method to determine $K$ also worked well.\n\n\\begin{table}[ht]\n\\caption{Functions observed on evenly spaced grid. Average operating characteristics based on 50 repetitions are reported; standard deviations are given within the parentheses. 
Since LiNGAM is not applicable to cases where $q > n$ with $q = pK$ being the total number of extracted basis coefficients across all functions, the results from those cases are not available and indicated by -.}\n\\resizebox{\\textwidth}{!}{\n\\centering\n\\begin{tabular}{ccc|ccc|ccc|ccc|ccc}\n\\toprule\n\\multirow{2}{*}{$p$} & \\multirow{2}{*}{$d$} & \\multirow{2}{*}{$n$} & \\multicolumn{3}{c|}{FLiNG-BN} & \\multicolumn{3}{c|}{FGLASSO} & \\multicolumn{3}{c|}{FPCA-LiNGAM} & \\multicolumn{3}{c}{FPCA-PC} \\\\\n\\cmidrule(lr){4-6} \\cmidrule(lr){7-9} \\cmidrule(lr){10-12} \\cmidrule(lr){13-15}\n& & & TPR & FDR & MCC & TPR & FDR & MCC & TPR & FDR & MCC & TPR & FDR & MCC \\\\\n\\midrule\n30 & 125 & 50 & 0.62 (0.07) & 0.14 (0.07) & 0.72 (0.07) & 0.58 (0.02) & 0.88 (0.02) & 0.16 (0.02) & - & - & - & 0.22 (0.03) & 0.89 (0.02) & 0.12 (0.02) \\\\\n30 & 125 & 100 & 0.71 (0.08) & 0.19 (0.05) & 0.75 (0.06) & 0.63 (0.03) & 0.85 (0.03) & 0.20 (0.03) & 0.84 (0.02) & 0.85 (0.01) & 0.31 (0.02) & 0.30 (0.01) & 0.89 (0.01) & 0.13 (0.01) \\\\\n30 & 125 & 200 & 0.73 (0.05) & 0.13 (0.08) & 0.79 (0.06) & 0.69 (0.03) & 0.84 (0.05) & 0.19 (0.03) & 0.92 (0.04) & 0.87 (0.01) & 0.30 (0.01) & 0.13 (0.02) & 0.96 (0.01) & 0.02 (0.01) \\\\\n\\midrule\n30 & 250 & 50 & 0.68 (0.05) & 0.25 (0.08) & 0.73 (0.06) & 0.57 (0.02) & 0.88 (0.04) & 0.16 (0.04) & - & - & - & 0.30 (0.02) & 0.87 (0.01) & 0.15 (0.01) \\\\\n30 & 250 & 100 & 0.75 (0.04) & 0.26 (0.03) & 0.74 (0.03) & 0.64 (0.03) & 0.85 (0.04) & 0.18 (0.03) & 0.88 (0.04) & 0.86 (0.02) & 0.34 (0.02) & 0.18 (0.02) &0.92 (0.02) & 0.08 (0.01) \\\\\n30 & 250 & 200 & 0.85 (0.01) & 0.30 (0.06) & 0.79 (0.04) & 0.69 (0.02) & 0.83 (0.04) & 0.21 (0.02) & 0.97 (0.05) & 0.85 (0.03) & 0.35 (0.03) & 0.22 (0.02) & 0.94 (0.01) & 0.08 (0.02) \\\\\n\\midrule\n60 & 125 & 50 & 0.68 (0.03) & 0.05 (0.03) & 0.80 (0.02) & 0.57 (0.04) & 0.89 (0.06) & 0.11 (0.05) & - & - & - & 0.28 (0.02) & 0.87 (0.01) & 0.16 (0.01) \\\\\n60 & 125 & 100 & 0.68 (0.04) & 0.12 (0.04) & 0.75 
(0.04) & 0.60 (0.03) & 0.85 (0.05) & 0.15 (0.04) & - & - & - & 0.28 (0.01) & 0.89 (0.01) & 0.15 (0.01) \\\\\n60 & 125 & 200 & 0.74 (0.02) & 0.11 (0.02) & 0.82 (0.02) & 0.61 (0.03) & 0.82 (0.04) & 0.17 (0.03) & 0.86 (0.03) & 0.89 (0.02) & 0.25 (0.02) & 0.22 (0.01) & 0.95 (0.01) & 0.11 (0.01) \\\\\n\\midrule\n60 & 250 & 50 & 0.70 (0.02) & 0.15 (0.02) & 0.77 (0.01) & 0.59 (0.04) & 0.82 (0.04) & 0.16 (0.03) & - & - & - & 0.35 (0.02) & 0.85 (0.02) & 0.21 (0.01) \\\\\n60 & 250 & 100 & 0.70 (0.01) & 0.13 (0.10) & 0.79 (0.05) & 0.62 (0.04) & 0.80 (0.04) & 0.17 (0.03) & - & - & - & 0.26 (0.01) & 0.89 (0.02) & 0.13 (0.01) \\\\\n60 & 250 & 200 & 0.76 (0.02) & 0.11 (0.01) & 0.85 (0.01) & 0.69 (0.05) & 0.80 (0.03) & 0.19 (0.04) & 0.91 (0.02) & 0.85 (0.02) & 0.33 (0.01) & 0.17 (0.01) & 0.84 (0.05) & 0.15 (0.03) \\\\\n\\midrule\n90 & 125 & 50 & 0.63 (0.04) & 0.10 (0.04) & 0.75 (0.03) & 0.52 (0.02) & 0.89 (0.03) & 0.10 (0.03) & - & - & - & 0.23 (0.01) & 0.88 (0.00) & 0.15 (0.01) \\\\\n90 & 125 & 100 & 0.66 (0.03) & 0.12 (0.03) & 0.74 (0.02) & 0.55 (0.04) & 0.87 (0.03) & 0.15 (0.02) & - & - & - & 0.18 (0.01) & 0.92 (0.01) & 0.10 (0.01) \\\\\n90 & 125 & 200 & 0.67 (0.02) & 0.13 (0.02) & 0.76 (0.01) & 0.57 (0.03) & 0.85 (0.04) & 0.17 (0.03) & - & - & - & 0.17 (0.01) & 0.94 (0.01) & 0.08 (0.01) \\\\\n\\midrule\n90 & 250 & 50 & 0.58 (0.03) & 0.09 (0.02) & 0.68 (0.03) & 0.54 (0.05) & 0.87 (0.04) & 0.11 (0.04) & - & - & - & 0.32 (0.01) & 0.84 (0.00) & 0.21 (0.01) \\\\\n90 & 250 & 100 & 0.65 (0.05) & 0.13 (0.04) & 0.73 (0.04) & 0.58 (0.04) & 0.82 (0.03) & 0.15 (0.03) & - & - & - & 0.18 (0.02) & 0.93 (0.01) & 0.11 (0.01) \\\\\n90 & 250 & 200 & 0.70 (0.02) & 0.12 (0.02) & 0.78 (0.01) & 0.61 (0.06) & 0.80 (0.04) & 0.18 (0.05) & - & - & - & 0.22 (0.01) & 0.89 (0.01) & 0.16 (0.01) \\\\\n\\bottomrule\t\t\t\t\t\n\\end{tabular}}\n\\label{s1}\n\\end{table}\n\nThe proposed FLiNG-BN has a few hyperparameters $L$, $M$, $\\alpha$, $(a_r, b_r)$, $(a_\\gamma, b_\\gamma)$, $(a_\\tau, b_\\tau)$, and 
$(a_\\sigma, b_\\sigma)$. We performed sensitivity analyses of these parameters at four different values with $(n, p, d) = (100, 30, 250)$. Results are summarized in Section C of the Supplementary Material. Our model appeared to be relatively robust within the tested ranges of hyperparameters.\n\n\\section{Applications} \\label{sec::eeg}\n\nWe applied the proposed FLiNG-BN to the brain EEG dataset downloaded from \\url{https:\/\/archive.ics.uci.edu\/ml\/datasets\/eeg+database} \\citep{zhang1995event}. The dataset consists of 122 subjects with 77 in the alcoholic group and 45 in the control group, and was previously used to demonstrate functional undirected graphical models by \\cite{zhu2016bayesian} and \\cite{qiao2019functional}. The 64 electrodes placed on each subject's scalp (standard positions) measured voltage values sampled at 256 Hz for one second. Each subject completed 120 trials under one stimulus or two stimuli. See \\cite{zhang1995event} for details of the data collection procedure. We averaged all trials for each subject under the one-stimulus condition. We analyzed the two groups separately to find commonalities and differences in their brain activity. Hence, we had $n = 77$ or $n = 45$ subjects and $p = 64$ functions representing the brain EEG signals at different scalp positions recorded at $d = 256$ time points. We focused on EEG signals filtered at the $\\alpha$ frequency band between 8 and 12.5 Hz using the \\texttt{eegfilt} function in the EEGLAB toolbox from MATLAB \\citep{delorme2004eeglab}.\n\nTo check the Gaussianity of the observed functions, we applied the Shapiro--Wilk normality test \\citep{shapiro1965analysis} to each of the $p = 64$ scalp positions at each of the $d = 256$ time points. 
The null hypothesis (i.e., that the observations are marginally Gaussian) was rejected for many combinations of scalp position and time point; the non-Gaussian assumption of the proposed model is therefore deemed appropriate.\n\nFive orthonormal basis functions were selected for both the alcoholic and the control group according to the procedure described in Section B of the Supplementary Material. We ran MCMC for 10,000 iterations, discarded the first half as burn-in, and retained every 10th iteration thereafter. The estimated basis functions are shown in Figure \\ref{pfunction}; as evident from the plots, they are very similar across the two groups. The causal networks estimated by thresholding the posterior probability of inclusion at 0.9 are shown in Figure \\ref{fnetwork}. The sparsity level is approximately 3.0\\% for the alcoholic group and 2.5\\% for the control group.\n\n\\begin{figure}[ht]\n\\centering\n\\begin{subfigure}[ht]{1 \\textwidth}\n\\centering\n\\includegraphics[width = 1 \\textwidth]{FALCOHOL}\n\\caption{Alcoholic group.}\n\\end{subfigure}\n\\begin{subfigure}[ht]{1 \\textwidth}\n\\centering\n\\includegraphics[width = 1 \\textwidth]{FCONTROL}\n\\caption{Control group.}\n\\end{subfigure}\n\\caption{Estimated basis functions from brain EEG records that explained 90\\% of the variation.}\n\\label{pfunction}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width = 1 \\textwidth]{COMBINEAC}\n\\caption{Estimated causal brain networks from EEG records by FLiNG-BN with posterior probability of inclusion $\\geq 0.9$, separately for the alcoholic (left) and control (right) group.}\n\\label{fnetwork}\n\\end{figure}\n\nOur results reveal several interesting patterns. First, connections are relatively dense in the frontal region for both groups. Second, the alcoholic group has more directed connections detected in the left temporal and occipital regions.
Third, most brain locations tend to connect to adjacent positions, whereas distant locations are much less connected. Figure \\ref{dnetwork} shows the common and differential networks for the two groups, where a substantial connectivity difference between the groups is observed.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width = 1 \\textwidth]{COMBINECD}\n\\caption{Common (left panel) and differential (right panel) connections for the two groups. Black arrows indicate common connections, red arrows indicate connections detected in the alcoholic group only, and green arrows indicate connections detected in the control group only.}\n\\label{dnetwork}\n\\end{figure}\n\nWe further demonstrated the proposed FLiNG-BN model with an application to COVID-19 multivariate longitudinal data, which have unevenly spaced measurements, in Section D of the Supplementary Material.\n\n\\section{Discussion} \\label{sec::discussion}\n\nIn this paper, we have proposed a functional Bayesian network model for causal discovery from multivariate functional data. We have discussed in detail a specific case of the functional Bayesian network, namely the functional linear non-Gaussian model, and proved that the underlying causal structure is identifiable even when the functions are purely observational and observed with noise. A fully Bayesian inference procedure has been proposed to implement our framework. Through simulation studies and real data applications, we have demonstrated the ability of our model to perform causal discovery.\n\nWe briefly discuss several possible directions for extending our current work. First, we may replace the underlying DAG with cyclic graphs, chain graphs, or ancestral graphs to accommodate more general causal and conditional independence structures. We have chosen a linear non-Gaussian SEM on the basis coefficients, but this model could be replaced with a nonlinear SEM.
Second, instead of fixing the number of basis functions, one could resort to increasing shrinkage priors \\citep{bhattacharya2011sparse, legramanti2020bayesian} to adaptively truncate redundant components. Finally, since we have two groups of observations in the EEG application, it would be interesting to jointly estimate the brain networks or to directly estimate the differential network.\n\n\\bibliographystyle{jasa} \n\\small 