diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzebkd" "b/data_all_eng_slimpj/shuffled/split2/finalzzebkd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzebkd" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nQuantum information science has undergone remarkable advances in the recent years, progressing in all its sub-fields of computing~\\cite{Nielsen_Chuang,preskill2018quantum}, communication~\\cite{ReviewQKD2020} and sensing~\\cite{Cappellaro_2012,ReviewSensing}, both with discrete- and continuous-variable systems~\\cite{BraunsteinREV,SerafiniBook,Weedbrook_2012}. In this wide scenario, quantum sensing is arguably one of the most mature areas for near-term technological deployment. Its theoretical and experimental developments have been strongly based on quantum metrology~\\cite{MetrologySAM,Paris09} and quantum hypothesis testing~\\cite{Helstrom_1976,Barnett_09,Chefles_2000,Janos2010}. Especially, the latter approach has allowed to show a quantum advantage over classical strategies in tasks of target detection~\\cite{Lloyd2008,Tan2008} and data readout~\\cite{Qreading}, modelled as binary problems of quantum channel discrimination. \n\nThe discrimination of quantum channels is an incredibly rich area of investigation~\\cite{zhuang2019physical,zhuang2020entanglement}, with unexplored consequences but also non-trivial difficulties. It represents a double optimization problem where both input states and output measurements need to be varied. Furthermore, in the bosonic setting, it has to be formulated as an energy-constrained problem, where the mean number of input photons is limited to some finite, small, value. In such a scenario, the central question is that of showing quantum advantage: Can truly-quantum states, e.g., entangled, lead to an advantage over classical, i.e. coherent, states? Addressing this question in the multi-ary case is difficult, since the theory is missing powerful tools that are instead available for the binary case.\n\n\n\nIn this work, we take a step forward by developing the theory of quantum hypothesis testing for the multi-ary setting of barcode decoding and pattern classification. We start from a general model of digital image, where each pixel is described by an ensemble of quantum channels defined over a finite alphabet. We specialize to the case where the single-pixel alphabet is binary, so that there are $2^n$ possible hypotheses or configurations for an $n$-pixel barcode. We then show how the use of quantum sources of light, based on entangled states, can clearly outperform classical strategies based on coherent states for the readout of the barcode configuration, i.e., the retrieval of its data. In particular, we derive an analytical condition for the maximum number of pixels or the minimum number of probings such that quantum advantage is obtained. This result holds not only for a uniform distribution of the possible configurations, but also when data is stored by the positions of $k$ white pixels among a grid of otherwise black pixels.\n\nBesides data readout or barcode decoding, we consider the general problem of pattern recognition, \nwhere the task is to classify an image, e.g.~a handwritten digit, without necessarily reconstructing it \npixel by pixel. Here the image distribution \nis not uniform and generally unknown, and optimal classification has to be approximated via a collection \nof correctly classified examples, following supervised learning strategies \\cite{murphy2012machine}. 
\nWe consider the ultimate limits of this procedure, where we may optimize over the optical circuit, measurements \nand subsequent classical post-processing algorithms. Introducing relevant bounds, we theoretically prove that this \nproblem is significantly simpler than that of barcode decoding, as long as the minimum Hamming distance between \nimages from different classes is large enough. Moreover, we show how a clear quantum advantage can be \nobtained as a function of the number of training examples. \nFinally, we consider a simplified scheme for recognizing black and white\npatterns, such as digital images of handwritten digits, by means of local\nmeasurements followed by a classical nearest neighbor classifier. More\nspecifically, we apply this classifier to the measurement outcomes that are\nobtained by either using entangled states or coherent states at the input of\nthe grid of pixels. We are able to show a clear quantum advantage that holds\neven when we employ sub-optimal photon counting measurements for the quantum\ncase, which are particularly relevant for near-term experiments. \nThe advantage becomes particularly evident at relatively\nsmall energies, where a total of a few hundred photons are irradiated over\neach pixel.\n\n\nThe paper is organized as follows. In Section~\\ref{s:barcode} we discuss the problem of \nbarcode decoding and show the possible advantage of using quantum detectors with \nentangled input states. In Section~\\ref{s:pattern} we discuss the related problem \nof pattern recognition, showing a similar advantage. Conclusions are drawn in Section~\\ref{s:conclusions}. \n\n\n\\section{Barcode Decoding}\\label{s:barcode}\n\n\\subsection{Quantum mechanical model of a digital image}\n\nA basic imaging system irradiates light over an array of pixels, which can be read in transmission or in reflection. From the ratios between input and output intensities, the system generates a corresponding array of grey-levels that constitutes a monochromatic image. In a quantum mechanical setting, each pixel can therefore be modeled as a bosonic lossy channel $\\mathcal E_i$ whose transmissivity depends on the grey-level $i$. This lossy channel can be probed by an input state (with some limited energy) and a corresponding output measurement, generally described by a positive operator-valued measure (POVM). Finally, the outcome is processed by a decision test that identifies the channel and, therefore, the grey-level $i$.\n\nLet us now formalize the problem in more mathematical detail; it can be seen as a multi-ary and multi-pixel generalization of the basic model of quantum reading~\\cite{Qreading}. Assume that each pixel is described by a channel ensemble $\\{\\mathcal{E}%\n_{i}\\}$ spanned by the label $0\\leq i\\leq C-1$, where $C$ is the discrete number of grey-levels that can \nbe assumed by the pixel. Let us define\nan {\\it image} over $n$\\ pixels as a sequence $\\bs{i}:=i_{0},\\cdots,i_{n-1}$, together with an associated probability distribution $\\pi_{\\bs{i}}$, which is simply $\\pi_{\\bs{i}}=C^{-n}$ in the uniform case. \nThe global channel describing the entire array of $n$ pixels is the tensor product \n$\\mathcal{E}_{\\bs{i}}^{n}:=\\mathcal{E}_{i_{0}}\\otimes\\cdots\\otimes\n\\mathcal{E}_{i_{n-1}}$. Thus, an image can equivalently be represented by an ensemble\nof multi-channels $\\{\\pi_{\\bs{i}},\\mathcal{E}_{\\bs{i}}^{n}\\}$. 
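\nAs a purely illustrative sketch of this model (the grid size, grey-levels and transmissivity values below are arbitrary choices, not taken from the rest of the paper), the multi-pixel ensemble $\\{\\pi_{\\bs{i}},\\mathcal{E}_{\\bs{i}}^{n}\\}$ can be enumerated in a few lines of Python, representing each single-pixel lossy channel simply by its transmissivity:
\\begin{verbatim}
from itertools import product

# Illustrative parameters (not from the paper): C grey-levels, n pixels,
# and a hypothetical transmissivity eta[i] assigned to each grey-level i.
C, n = 2, 3
eta = {0: 0.90, 1: 0.95}   # grey-level -> transmissivity of the lossy channel

# All C**n possible images i = (i_0, ..., i_{n-1}), uniform prior pi_i = C**-n.
images = list(product(range(C), repeat=n))
prior = 1.0 / C**n

# The global channel E_i^n is a tensor product of single-pixel channels; here
# we only keep track of the corresponding tuple of transmissivities.
ensemble = {img: tuple(eta[i] for i in img) for img in images}

print(len(images), "hypotheses, each with prior", prior)
print("transmissivities of image", images[0], "->", ensemble[images[0]])
\\end{verbatim}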
\n\n\n\nIn order to read the image, let us assume that we have a generic $2n$-mode state $\\tilde\\rho$ at the input: $n$ signal modes are sent through the pixels, while $n$ idler modes are used to help the measurement. At the output, there is an ensemble of possible states $\\{\\pi_{\\bs{i}}%\n,\\rho_{\\bs{i}}\\}$ where $\\rho_{\\bs{i}}:=\\mathcal I^{n}\\otimes \\mathcal{E}_{\\bs{i}}%\n^{n}(\\tilde\\rho)$. In the case of a classical transmitter, the signal modes are prepared in coherent states while the idler modes are in vacuum states. In the case of a quantum transmitter, signal and idler modes are entangled pairwise. In particular, each signal-idler pair is described by a two-mode squeezed vacuum (TMSV) state.\n\n\n\n\n\nIn general, we probe the image $\\bs{i}$ with identical inputs for $M$ times, leading to the overall input state $\\tilde\\rho^{\\otimes M}$ and corresponding output \n$\\rho_{\\bs{i}}^{\\otimes M}:=\\left[\\mathcal I^n\\otimes \\mathcal{E}_{\\bs{i}}^{n}%\n(\\tilde\\rho)\\right] ^{\\otimes M}$. We measure this output with a collective POVM,\nwhere each measurement operator $\\Pi_{\\bs{i}'}$ represents the decision that the image is $\\bs{i}'$. Because input states are energy-constrained, there will be readout errors described by the conditional probabilities\n\\begin{equation}\n\tp_{\\rm read}({\\bs i}'|\\bs i) = \\Tr\\left(\\Pi_{{\\bs i}'}\\rho_{\\bs i}^{\\otimes M}\\right)~.\n\t\\label{measupattern}\n\\end{equation}\nBy including the priors $\\{\\pi_i\\}$, we may therefore define the success probability or, equivalently, the error probability\n\\begin{align}\n\tp_{\\text{succ}}&:=\\sum_{\\bs{i}}\\pi_{\\bs{i}}p_{\\rm read}(\\bs{i}%\n|\\bs{i}),~~~p_{\\text{err}}=1-p_{\\text{succ}}.\n\\label{success}\n\\end{align}\n\n\nUsing Refs.~\\cite{Barnum,Montanaro} and the multiplicativity of the fidelity over tensor products, one finds that the minimum error probability (optimized over POVMs) satisfies\n\\begin{equation}\n\\frac{1}{2}\\sum_{\\bs{i}\\neq\\bs{j}}\\pi_{\\bs{i}%\n}\\pi_{\\bs{j}}F^{2M}_{{\\bs{i}}:{\\bs{j}}}\n\\leq\np_{\\text{err}}\\leq\\sum_{\\bs{i}\\neq\\bs{j}}\\sqrt{\\pi_{\\bs{i}}%\n\\pi_{\\bs{j}}}F^{M}_{{\\bs{i}}:{\\bs{j}}},\n \\label{BarnumMontanaro}%\n\\end{equation}\nwhere \n\\begin{equation}\nF_{{\\bs{i}}:{\\bs{j}}} := \nF(\\rho_{\\bs{i}},\\rho_{\\bs{j}})=\n\\Vert\\sqrt{\\rho_{\\bs i}}\\sqrt{\\rho_{\\bs j}}\\Vert_{1}%\n=\\mathrm{Tr}\\sqrt{\\sqrt{\\rho_{\\bs i}}\\rho_{\\bs j}\\sqrt{\\rho_{\\bs i}}}%\n\\end{equation}\nis the fidelity between two generic single-probing multi-pixel output states, $\\rho_{\\bs{i}}$ and $\\rho_{\\bs{j}}$. \nThe inequalities in Eq.~\\eqref{BarnumMontanaro} bound the performances of a pretty good measurement~\\cite{PGM1,PGM2,PGM3} and have no explicit dependence on the dimension of the Hilbert space, so that they hold for bosonic states as long as these states are energy-constrained. \nBelow, we build on these inequalities to derive our bounds for decoding barcodes. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Barcode discrimination}\n\n\n\\begin{figure}[th!]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{fig-barcode.pdf}\n\t\\caption{ { Barcode decoding.} Examples of 1D (a) and 2D (b) barcodes. (c) \n\t\tSchematic physical setup for decoding a 2D barcode with $n=5\\times5$ pixels. A source shines light on the grid of pixels, each modeled by a quantum channel, which is either $\\mathcal E_B$ or $\\mathcal E_W$ depending on the \n\t\tpixel grey-level, black (B) or white (W). 
The reflected\/scattered light is collected by a detector, which aims at recognizing {\\it all} pixel values depending on \n\t\tthe detected photons. In a quantum setup, the signal modes shined over the pixels are entangled with idler modes that are directly sent to the detector for a joint measurement. \n\t\t(d) General theoretical setup scheme for optimal barcode discrimination using entangled TMSV states, \n\t\twhere the mixing optical circuit, the measurements and following classical post-processing must be optimized. \n\t\t(e,f) Special setup to claim quantum advantage: we compare independent entangled TMSV states on each pixel followed \n\t\tby independent {\\it local} measurements (e) with classical coherent sources\n\t\tfollowed by global measurements. Quantum advantage is claimed whenever (e) beats (f).\n\t}\n\t\\label{fig:barcode}\n\\end{figure}\n\nAn important case of the general problem discussed in the previous section is barcode decoding, whose \nschematic setup is shown in Fig.~\\ref{fig:barcode}. \nA {\\it barcode} is either a \none-dimensional (1D) or two-dimensional (2D) grid \nof pixels with two possible colors, black (B) or white (W). \nWith a slight abuse of jargon, we call {\\it pixels} the elements \nof the 1D or 2D grid that define a barcode. In the 1D case (Fig.~\\ref{fig:barcode}a),\na pixel is a black or white vertical bar, while in the 2D case (Fig.~\\ref{fig:barcode}b)\na pixel is an elementary square. It is worth noting that many of the conclusions drawn for barcode decoding can be extended to more general images. Indeed, \na higher number of grey-levels $C>2$ can always be formally mapped into a barcode. For instance, $C=256$ corresponds to an 8-bit grey scale and each bit can be \nrepresented as a binary variable with two possible configurations (B or W, by convention). As such, images with $C>2$ can be mapped into a ``barcode'' image with a higher number of pixels. \n\n\n\n\nThe general problem of barcode discrimination can be depicted as in Fig.~\\ref{fig:barcode}c and \\ref{fig:barcode}d. \nAccording to our notation, each pixel of a barcode has two possible grey-levels $i\\in\\{B,W\\}$ and therefore corresponds to two possible quantum channels $\\mathcal E_B$ and $\\mathcal E_W$. For barcode decoding, we assume that the pixels are independently probed, so that the input state takes the tensor-product form $\\tilde \\rho =\\rho_0^{\\otimes n}$. Note that this assumption does not reduce the generality of our treatment. In fact, for the quantum source this leads to one of the best possible choices (tensor product of TMSV states). For the classical source, we know that independent and identical coherent states are able to saturate the lower bounds for general mixtures of multi-mode coherent states~\\cite{zhuang2020entanglement}.\nAs for detection, the general scheme to correctly distinguish the various configurations consists in choosing a mixing\noptical circuit, followed by measurements and classical post-processing algorithms \nas in Fig.~\\ref{fig:barcode}d.\nFrom an operational point of view, a suboptimal solution can be found for this problem by restricting to a cascade of beam splitters and phase shifters with tunable parameters, followed by independent measurements \n(e.g. homodyne or photodetection), similar to that of Ref.~\\cite{zhuang2019physical}; while for the classical post-processing we may employ statistical classification \nalgorithms commonly employed in machine learning applications, e.g. 
based on neural networks \\cite{murphy2012machine}.\nThe suboptimal solution is then numerically investigated by optimizing the parameters of the optical and neural \nnetworks in order to minimize $p_{\\rm err}$. \nWhen photodetection measurements are employed, analytic gradients can be computed following Ref.~\\cite{banchiGBS} to \nspeed up the optimization algorithm. \nIn this paper, however, we focus on the most general case and study the fundamental limits of barcode \ndecoding and pattern recognition, \nintroducing different theoretical limits that any possible scheme must satisfy. \nIndeed, the physical optical circuit and measurements, and also the classical post-processing algorithm \nin Fig.~\\ref{fig:barcode}d, can all be reabsorbed into an abstract POVM that must be optimized. \n\n\nWe start by considering the case where a barcode with $n$ pixels is prepared in one of all possible $2^n$ patterns, each with equal prior. Then, we will consider the case where the patterns are restricted to specific configurations, where $k$ white pixels are randomly positioned within a grid of otherwise black pixels.\n\n\nStarting from the input state\n$\\tilde \\rho =\\rho_0^{\\otimes n}$, the possible states at the output of the barcode, \n$\\rho_{\\bs i} = \\mathcal I^n \n\t\\otimes \\mathcal E_{i_1}\\otimes\\cdots\\otimes\\mathcal E_{i_n} (\\tilde \\rho)$,\ntake the product form \n\\begin{equation}\n\t\\rho_{\\bs i} = \\bigotimes_{k=1}^n \\rho_{i_k}.\n\t\\label{RhoIdlers}\n\\end{equation}\nCorrespondingly, the fidelities can be simplified as \n\\begin{equation}\n\tF(\\rho_{\\bs i},\\rho_{\\bs j}) = F(\\rho_W,\\rho_B)^{{\\rm hamming}(\\bs{i},\\bs{j})},\n\t\\label{FidIdlers}\n\\end{equation}\nwhere $\\rho_i = \\mathcal I\\otimes\\mathcal E_i(\\rho_{0})$ for $i\\in\\{B,W\\}$ and \n${\\rm hamming}(\\bs{i},\\bs{j})$ is the Hamming distance between the two binary images \n${\\bs i}$ and ${\\bs j}$,\nnamely the number of pixels in which the two images differ.\nUsing the properties of the Hamming distance, in Appendix~\\ref{a:bound} we show that, for uniform \n{\\it a priori} probabilities, the $M$-probing bounds~\\eqref{BarnumMontanaro} become \n\\begin{equation}\n\t\\frac{(F^{2M}_{\\rm max}+1)^n-1}{2^{n+1}} \\leq \n\tp_{\\rm err} \\leq \n\t1-\\left(1-\\frac{F^M_{\\rm max}}2\\right)^n,\n\t\\label{perrcomb}\n\\end{equation}\nwhere $F_{\\rm max}$ is the fidelity between any \ntwo (different) images with minimum Hamming distance [cf. Eq.~\\eqref{FidIdlers}].\n\n\nThe minimum Hamming distance\nis achieved when the two images differ by a single pixel. \nThus, we get $F_{\\rm max} = F(\\rho_B,\\rho_W)$, namely the maximum fidelity between any two images is given by the fidelity between the \nstates describing the grey-levels of a single pixel. \nUsing Bernoulli's inequality, we then simplify Eq.~\\eqref{perrcomb} as \n\\begin{equation}\n\\frac{n}{2^{n+1}} F_{\\rm max}^{2M} \\leq\t\np_{\\rm err} \\leq \\frac n2 F_{\\rm max}^M~.\n\\label{perrsimple}\n\\end{equation}\nIn Appendix~\\ref{a:locmeas}, we show that the upper bound can be achieved using {\\it local} measurements, \nnamely where each pixel is measured independently of the others and \n$\\Pi_{\\bs i} = \\bigotimes_{j=0}^{n-1} \\Pi_{i_j}$ in Eq.~\\eqref{measupattern}, though each \npixel and its respective idler may be measured together (see Fig.~\\ref{fig:barcode}e). 
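\nAs a quick numerical illustration (not part of the derivation, and with arbitrary example values of $F_{\\rm max}$, $n$, $M$ and target error), the bounds of Eqs.~\\eqref{perrcomb} and~\\eqref{perrsimple} are straightforward to evaluate, and the simplified upper bound can be inverted to estimate how many probings are needed for a given target error, making the $M=\\mathcal O(\\log n)$ behaviour discussed below explicit:
\\begin{verbatim}
import math

def perr_bounds(F_max, n, M):
    """Lower/upper bounds of Eq. (perrcomb) and their simplified forms, Eq. (perrsimple)."""
    lower = ((F_max**(2 * M) + 1)**n - 1) / 2**(n + 1)
    upper = 1 - (1 - F_max**M / 2)**n
    lower_simple = n / 2**(n + 1) * F_max**(2 * M)
    upper_simple = n / 2 * F_max**M
    return lower, upper, lower_simple, upper_simple

def probes_for_target(F_max, n, eps):
    """Smallest M with (n/2) F_max**M <= eps, showing the M = O(log n) scaling."""
    return math.ceil(math.log(n / (2 * eps)) / math.log(1 / F_max))

# Illustrative numbers only.
print(perr_bounds(F_max=0.9, n=25, M=100))
print(probes_for_target(F_max=0.9, n=25, eps=1e-3))
\\end{verbatim}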
\nOnce we restrict to local operations, the optimum is achieved by independent\nHelstrom measurements \\cite{helstrom1969quantum} and the upper bound in Eq.~\\eqref{perrsimple}\nfollows from the Fuchs\u2013van de Graaf inequalities \\cite{fuchs1999cryptographic}.\nA sub-optimal local measurement is obtained by combining the signal and idler via \na beam splitter followed by independent measurements \n\\cite{Qreading}.\nMoreover, in the supplementary material \\cite{suppmat} we also discuss different inequalities on\n$p_{\\rm err}$ based on the multiple quantum Chernoff bound~\\cite{QSDReview,MultiChernoff,GaussianQCB2008}. \n\n\nTwo interesting observations can be made from the bounds~\\eqref{perrsimple}.\nFirst, the upper bound for the error probability becomes small whenever\n$n F^M(\\rho_W,\\rho_B) \\ll 1$.\nThis implies that, although the set of images (namely barcode \nconfigurations) grows exponentially with the number of pixels as $2^n$, the required fidelities \nto accurately distinguish all configurations only need to decrease polynomially, as $1\/n$. \nIn particular, $M=\\mathcal O(\\log n)$ copies are needed for correct discrimination. \nThe second observation is that, due to the factor $2^{-n}$, \nthe lower bound in Eq.~\\eqref{perrsimple} decreases exponentially with $n$. \nAs we show in Appendix~\\ref{a:locmeas}, this factor disappears from the lower bound \nwhen local measurements are employed. \nIt is known that, in general,\noptimum mixed state discrimination requires a joint\nmeasurement~\\cite{calsamiglia2010,bandyopadhyay2011}, yet\nin our setting optimal global measurements may in principle \nexponentially reduce the probability of error. \nNonetheless, it is currently an open question whether an exponentially decreasing \nerror is achievable with optimal quantum measurements. In the next section we will claim \nquantum advantage whenever the upper bound on $p_{\\rm err}$ obtained with entangled states and local \nmeasurements is smaller than the lower bound on $p_{\\rm err}$ obtained with classical states and possibly global \nmeasurements, as schematically shown in Figs.~\\ref{fig:barcode}d and~\\ref{fig:barcode}e. \nTherefore, \nif the lower bound in \\eqref{perrcomb} is loose, the regimes for quantum advantage are larger. \n\n\nIn the previous bounds we considered a uniform distribution of black and white pixels \nin the barcode. We may also consider a different encoding with a fixed \nnumber of white pixels, generalizing the results of Ref.~\\cite{zhuang2020entanglement}. \nThe task is then to find the position of $k$ white pixels in a barcode with $n$ bars. \nThe number of possible configurations is $\\binom nk\\approx 2^{n H(k\/n)}$, where \n$H$ is the binary entropy function and the approximation holds when both $n$ and $k$ are large. \nTherefore, in that regime, the configuration space grows exponentially with $n$, as in the \nuniform case discussed above. In the asymptotic regime we obtain the following bounds \n\\begin{equation}\n\t\\frac{k(n-k)}{2^{nH(k\/n)+1}} F_{\\rm max}^{4M} \\lesssim\n\tp^{k-{\\rm whites}}_{\\rm err} \\lesssim\n\tk(n-k) F_{\\rm max}^{2M},\n\t\\label{kCPF}\n\\end{equation}\nwhile the exact expressions for finite $M$, $n$ and $k$ are discussed in Appendix~\\ref{a:kcpf}.\n\n\n\\subsection{Quantum enhancement}\\label{s:enhancement}\nWe now focus on photonic setups and model each pixel as a bosonic channel with \ntransmissivity $\\eta_i$, where $i\\in\\{B,W\\}$ is the pixel color. 
\nWe discuss the regime where we get an advantage from using entangled photons as input. \nWe compare the case where each input $\\rho_0$ is a TMSV state $\\ket{\\Phi_{N_S}}$ \nwith $N_S$ average photons and the case where the input is a coherent state with the same \nnumber of signal photons $\\ket{\\sqrt{N_S}}\\otimes \\ket{0}$ (where the vacuum state means that no idler is used).\nNote that one can replace the vacuum idler with an arbitrary state, such as a strong local oscillator in a coherent state, however that will not give a better performance when the optimum measurement is considered.\nAssuming $M$ probings of the barcode, we have a total of $N_{\\rm tot} = MN_S$ mean photons irradiated over each pixel. According to the analysis from the previous section, provable quantum advantage can be achieved\nwhenever the upper bound from Ineqs.~\\eqref{perrsimple}, obtained with TMSV input states, is less than the lower bound obtained with coherent state inputs. Since the upper bound in~\\eqref{perrsimple} is obtained with local measurements, what we call ``provable advantage'' \nmeans that possibly non-optimal local measurement strategies with entangled inputs \nbeat any strategy with coherent states, even when the latter is enhanced by\ncomplex global measurements. \nProvable quantum advantage may be more difficult for larger $n$, given \nthe exponentially decreasing factor in the lower bound of Eq.~\\eqref{perrsimple}, \nbut here we show that it can be achieved for every number of pixels $n$ with suitably large number of probings $M$. \n\n\nUsing the formula for the \nfidelity between two generally-mixed Gaussian states \\cite{Fidelity,marian2012uhlmann}, for TMSV states at the input, we compute (see Appendix~\\ref{a:fid})\n\\begin{equation}\n\tF_q(\\rho_W,\\rho_B)^M = \n\t\\left(\\frac{1}{1+N_S\\Delta_q}\\right)^M \\geq e^{-M N_S \\Delta_{\\rm q}},\n\t\\label{fidelityQ}\n\\end{equation}\nwhere the index $q$ stands for {\\it quantum} and \n\\begin{equation}\n\t\\Delta_{\\rm q} = \t1- \\sqrt{(1-\\eta_W) (1-\\eta_B)}-\\sqrt{\\eta_W \\eta_B}.\n\\end{equation}\nFor a coherent-state input, we instead have\n\\begin{align}\n\tF_{\\rm c}(\\rho_W,\\rho_B)^M &= \n\te^{-M N_S \\Delta_{\\rm c}},\n\n&\n\\Delta_{\\rm c} &= \\frac{(\\sqrt{\\eta_B}-\\sqrt{\\eta_W})^2}2.\n\\label{fidelityC}\n\\end{align}\nBy comparing Eqs.~\\eqref{fidelityQ} and \\eqref{fidelityC} we see that, for fixed $M$, the fidelity between coherent states \ndisplays an exponential decay as a function of $N_S$, while for \nquantum states we see a polynomial decay in $N_S$. \nNonetheless, \nfor large $M$ and small $N_S$, the inequality in Eq.~\\eqref{fidelityQ} becomes tight and, \nsince $\\Delta_{\\rm q} \\geq \\Delta_{\\rm c}$, in that limit\nwe find that quantum light always provides an advantage\nfor discrimination, irrespective of the values of $\\eta_W$ and $\\eta_B$. \nThe limits of small $N_S$ and large $M$ are widely employed to show quantum advantage \nand can be realized experimentally with little imperfections \\cite{zhang2013entanglement}.\nTherefore, from now on we will focus on such limits, $M\\to\\infty$ and $N_S\\to0$, while keeping\nfixed the total mean number of photons $M N_S$ irradiated over each pixel.\n\n\n\nTo properly demonstrate the advantage, we need to \nshow that the upper bound on the probability of error using quantum light is smaller than the \nlower bound on the probability of error using coherent states. 
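\nThis comparison is easy to check numerically. The following sketch (an illustration only; the transmissivities, energies and number of pixels are arbitrary example values) implements the single-pixel fidelities of Eqs.~\\eqref{fidelityQ} and~\\eqref{fidelityC} and tests whether the entangled upper bound of Ineqs.~\\eqref{perrsimple} falls below the corresponding classical lower bound:
\\begin{verbatim}
import math

def deltas(eta_W, eta_B):
    d_q = 1 - math.sqrt((1 - eta_W) * (1 - eta_B)) - math.sqrt(eta_W * eta_B)
    d_c = (math.sqrt(eta_B) - math.sqrt(eta_W))**2 / 2
    return d_q, d_c

def fidelities(eta_W, eta_B, N_S, M):
    """Single-pixel fidelities F_q^M (Eq. fidelityQ) and F_c^M (Eq. fidelityC)."""
    d_q, d_c = deltas(eta_W, eta_B)
    F_q_M = (1 / (1 + N_S * d_q))**M    # TMSV input, exact expression
    F_c_M = math.exp(-M * N_S * d_c)    # coherent-state input
    return F_q_M, F_c_M

def provable_advantage(eta_W, eta_B, N_S, M, n):
    """True when the TMSV upper bound (n/2) F_q^M beats the
    coherent-state lower bound (n/2**(n+1)) F_c^(2M) of Eq. (perrsimple)."""
    F_q_M, F_c_M = fidelities(eta_W, eta_B, N_S, M)
    return (n / 2) * F_q_M < (n / 2**(n + 1)) * F_c_M**2

# Illustrative values only: weak signals, many probings over each pixel.
print(provable_advantage(eta_W=0.95, eta_B=0.9, N_S=1e-3, M=10**7, n=25))
\\end{verbatim}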
From Ineqs.~\\eqref{perrsimple}, we see that this happens when \n$\tF_{\\rm c}^{2M} \\geq 2^n F_{\\rm q}^M$.\nSetting $ n =\\nu M N_S$, the previous inequality implies that quantum advantage is obtained for\n\\begin{equation}\n\t\\nu\\leq \\nu_{\\rm th} =\\frac{\\Delta_{\\rm q}-2\\Delta_{\\rm c}}{\\log 2},\n\t\\label{nuthreshold}\n\\end{equation}\nwhich is a barcode multi-pixel generalization of the ``threshold energy'' theorem proven in the context of single-cell quantum reading~\\cite{Qreading}. \n\nAccording to Eq.~(\\ref{nuthreshold}), whenever the number $n$ of pixels is smaller than a certain threshold, entangled light \nalways provides an advantage in the discrimination of barcode configurations (barcode decoding) with respect to the best classical strategy with the same signal energy, even when \nthe latter uses possibly complex global measurements. The behaviour of $\\nu_{\\rm th}$ as a function \nof $\\eta_W$ and $\\eta_B$ is numerically shown in Fig.~\\ref{fig:nuth}. \n\nQuantum advantage can also be proven when we consider a prior distribution for the barcode configurations that is non-uniform, more precisely for the case where the number $k$ of white pixels is fixed. Using Ineqs.~\\eqref{kCPF}, we find that there is a provable quantum advantage when \n$\tF_{\\rm c}^{4M} \\geq 2^{n H(k\/n)+1} F_{\\rm q}^{2M}$, namely when \n$n H(k\/n)+1 \\leq 2\\nu_{\\rm th} M N_S$. Therefore, as in the previous case, quantum advantage \nmay be observed when the number of pixels is sufficiently small or the number of probes $M$ \nis sufficiently large, as long as $\\nu_{\\rm th}\\geq 0$.\n\n\n\\begin{figure}[t]\n\t\\centering\n\n\t\\includegraphics[width=0.99\\linewidth]{nuthf-crop.pdf}\n\t\\caption{{ Regimes of provable quantum advantage}. a)\n\t\tThreshold value from Eq.~\\eqref{nuthreshold} as a function \n\t\tof $\\eta_W$ and $\\eta_B$. The threshold $\\nu_{\\rm th}$ is negative in \n\t\tthe filled gray area and positive for $\\eta_W>1-\\eta_B$. Contours \n\t\tare from 0.01 in steps of 0.02. b) Threshold $\\nu_{\\rm th}$ for \n\t\t$\\eta_W=1$. \n\t\tWhenever $n \\leq M N_S \\nu_{\\rm th}$, or similarly $M\\geq \\frac{n}{N_s\\nu_{\\rm th}}$, \n\t\tentangled light beats classical strategies based on coherent states.\n\t}%\n\t\\label{fig:nuth}\n\\end{figure}\n\n\n\n\n\nIt is currently an open question to prove whether or not the lower bound in~\\eqref{perrsimple} can be achieved when classical light is employed. Nonetheless, our \n\tanalysis shows that even assuming that such bound can be achieved with classical inputs, \n\ta strategy based on entangled light and the much simpler local measurements can beat \n\tany approach based on coherent states. \n\tOn the other hand, if only local measurements can be performed, then the factor $2^{-n}$ in \n\tthe lower bound ~\\eqref{perrsimple} disappears (see Appendix~\\ref{a:locmeas}). This corresponds \n\tto the case $\\nu=0$. Therefore, in that case, whenever $\\nu_{\\rm th} > 0$,\n\tnamely when $\\eta_W>1-\\eta_B$, \nquantum light provides an advantage for decoding uniformly-distributed barcodes, irrespective of $n$.\n\n\n\n\\section{Pattern recognition}\\label{s:pattern}\n\n\\subsection{Statistical pattern classification}\n\n\n\nWe now focus on the problem of pattern recognition. \nConsider the problem of recognizing handwritten digits as shown in Fig.~\\ref{fig:digits}a, whose images have been adapted from the MNIST dataset \\cite{lecun1998gradient}. 
Each image depicts a single handwritten digit and the task is to \nextract from the image the corresponding number 0-9. From an algorithmic perspective, this task is more complex than the mere decision of whether \na pixel is black or white but, from a physical point of view, this problem is actually simpler \nas errors are tolerated. Indeed, a human is able to instantly recognize all the numbers\nin Fig.~\\ref{fig:digits}a even when some of the pixels are randomly flipped. Therefore, \nfor reliable pattern recognition, it is not necessary to perfectly reconstruct the entire image. Compared to the barcode configurations of Fig.~\\ref{fig:barcode}, where each pixel provides important information, here the goal is to recognize a global property that is robust against individual pixel errors, which means that entirely different strategies are possible.\n\n\n\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{digits.pdf}\n\t\\caption{{ Pattern recognition.}\n\t\t(a) Images from the MNIST dataset without pixel recognition error ($p=0\\%$) \n\t\tand with pixel error probabilities $p=2\\%,4\\%,6\\%$, where each pixel is randomly flipped with probability $p$. \n\t\t(b) Probability $P_{cc'}(h)$ that one image from class $c$ has Hamming distance $h$, with $0\\leq h \\leq 28{\\times} 28$,\n\t\tfrom another image from class $c'$. The empirical histogram is evaluated for images from the MNIST dataset that \n\t\tcorrespond to digits 0 and 1. The Gaussian fit has mean $\\mu_{01} \\simeq 157$ and standard deviation $\\sigma\\simeq 27$. \n\t\tDifferent digits show a similar behaviour with $110\\lesssim \\mu_{cc'}\\lesssim 167$, where the minimum is achieved \n\t\tbetween 1 and 7.\n\t}%\n\t\\label{fig:digits}\n\\end{figure}\n\n\nIn statistical learning theory \\cite{hastie2009elements},\ndifferent learning tasks, such as image classification, can be modeled using probabilities. \nWe consider the abstract space of all possible images and define the probability $\\pi_{\\bs i}$ \nof getting the image $\\bs i$ -- this is unknown and generally not uniform. \nImage classification is a rule that attaches a certain label $c$, or class, to a given image \n$\\bs i$. If this rule is deterministic, then it can be modeled via a function $c=f(\\bs i)$ but, more generally, the strategy is stochastic: given a certain image, the rule\npredicts different possible classes $c$ with a probability distribution $P(c|\\bs i)$.\nLet us consider a pair $(c,\\bs i)$ and assume that, given our data, we have built \na classifier $\\tilde c(\\bs i)$ \nthat assigns a certain class $\\tilde c(\\bs i)$ to the image $\\bs i$. The error in our classification \ncan be described by a loss matrix with elements $L_{c\\tilde c}$ that models the error \nof misclassification. The common choice is the 0-1 loss \nwith $L_{cc}=0$ and $L_{c\\tilde c}=1$ for $\\tilde c\\neq c$. \nBy using the conditioning $P(c,\\bs i)=P(c|\\bs i)\\pi_{\\bs i} = P(\\bs i|c) P(c)$, the expected classification \nerror can be written as\n\\begin{equation}\n\tE = \\ave_{(c,\\bs i)\\sim P(c,\\bs i)}[L_{c,\\tilde c(\\bs i)}] = \n\n\t1-\\sum_{\\bs i} P(\\tilde c(\\bs i)|\\bs i)\\,\\pi_{\\bs i}~.\n\t\\label{experr}\n\\end{equation}\nFor known $P(c,\\bs i)$ the optimal classifier is then the one minimizing the\nexpected classification error, \n$\t\\tilde c_{\\rm B}(\\bs i) = \\argmax_c P(c|\\bs i)$,\nwhich is called {\\it Bayes classifier}, while \nthe resulting error from \\eqref{experr},\nis called {\\it Bayes rate}. 
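\nAs a toy illustration of these definitions (the joint distribution below is invented for the example and carries no physical meaning), the Bayes classifier and the corresponding error of Eq.~\\eqref{experr} under the 0-1 loss can be computed as follows:
\\begin{verbatim}
# Hypothetical joint distribution P(c, i) over two classes and three images
# (the values sum to 1 and are made up for this example).
P = {('cat', 'img0'): 0.30, ('dog', 'img0'): 0.10,
     ('cat', 'img1'): 0.05, ('dog', 'img1'): 0.25,
     ('cat', 'img2'): 0.10, ('dog', 'img2'): 0.20}

classes = {c for c, _ in P}
images = {i for _, i in P}

# Bayes classifier: pick the class maximising P(c|i), i.e. P(c, i) for fixed i.
bayes = {i: max(classes, key=lambda c: P[(c, i)]) for i in images}

# Expected 0-1 loss, Eq. (experr): E = 1 - sum_i P(bayes(i), i).
bayes_rate = 1 - sum(P[(bayes[i], i)] for i in images)
print(bayes, bayes_rate)
\\end{verbatim}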
\nThe Bayes rate represents the theoretical minimum error that can be expected with \nthe optimal classifier.\n\n\nWe now study the error of pattern classification when images are noisy, for instance \ndue to an imperfect detection. The setup is the same as in Fig.~\\ref{fig:barcode},\nwhere light, either quantum or classical, is used to illuminate the pattern (e.g.~a \nhandwritten digit as in Fig.~\\ref{fig:digits}a) and, from the detected output, the task \nis to find the correct class (e.g.~a number between 0 and 9). \nFor this purpose, we introduce the minimum error as a generalization of \nEqs.~\\eqref{experr} and \\eqref{measupattern},\n\\begin{equation}\n\tE^{\\rm Q} := \n\t\\min_{\\{\\Pi_c\\}} \\sum_{c\\neq \\tilde c}\\sum_{\\bs i} \\Tr[\\Pi_{\\tilde c}\\rho_{\\bs i}^{\\otimes M}] P(c,\\bs i)~,\n\t\\label{Equantum}\n\\end{equation}\nwhere the operators $\\{\\Pi_c\\}$ define a POVM whose measurement outcome $c$\npredicts the class of the image $\\bs i$ encoded into the quantum state $\\rho_{\\bs i}$. \nFor a two-class decision problem, the optimal POVM can be explicitly found by the Helstrom theorem \n\\cite{helstrom1969quantum}. When the number of classes is larger, a \n``pretty good'' approximation to the optimal POVM can be obtained with pretty good measurements. \nFor general measurements, \nwe may derive bounds similar to \\eqref{BarnumMontanaro}, generalizing \n\\cite{Barnum,Montanaro,montanaro2019pretty} (see Appendix~\\ref{s:pgm} for details):\n\\begin{align}\n\tB[F(\\rho_B,\\rho_W)^{2M}] \\leq E^{\\rm Q} \\leq 2K B[F(\\rho_B,\\rho_W)^M],\n\t\\label{PatternBounds}\n\\end{align}\nwhere $K$ \nis such that $K^{-1}$ is the minimum non-zero value of $\\sqrt{P(c,\\bs i)P(c',\\bs i')}$,\nwhich is independent of $M$, $\\rho_W$ and $\\rho_B$, and we have defined \n\\begin{equation}\n\tB[F] := \\frac12\\sum_{c\\neq c'} P(c)P(c') \\sum_{h=1}^n P_{cc'}(h) F^h, \n\t\\label{BF}\n\\end{equation}\nwhere $P_{cc'}(h)$ is the probability that two images from different classes $c$ and $c'$ have \nHamming distance $h$. \nFor large $M$, the term with minimum Hamming distance dominates and we may write\n\\begin{equation}\n\tB[F^M] \\propto F^{M h_{\\rm min}}~,\n\t\\label{Bscaling}\n\\end{equation}\nwhere $h_{\\rm min}$ is the minimum Hamming distance between two images from different classes.\nThe Ineqs.~\\eqref{PatternBounds} and the expansion \\eqref{Bscaling} represent the most \nimportant results of this section, generalizing Ineqs.~\\eqref{perrsimple} and \\eqref{kCPF} \nto the problem of pattern recognition. By comparing those bounds, we find that \nquantum-enhanced pattern recognition is significantly simpler than barcode discrimination \nwhen $h_{\\rm min}>1$, as the error decreases with the faster rate \\eqref{Bscaling}. \n\n\nThe error $E^{\\rm Q}$ is a quantum generalization of the Bayes rate,\nand quantifies the theoretical optimal performance of the classification rule.\nHowever, like the Bayes rate, it is difficult to compute, since the distribution \n$P(c,\\bs i)$ is typically unknown and no closed-form solutions to \\eqref{Equantum} \nexist beyond the two-class case. To solve these issues, \nin the next section we propose a supervised learning approach where \nan optimal classification measurement is estimated\nfrom a collection of correctly classified data. 
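\nTo illustrate how the quantity $B[F]$ of Eq.~\\eqref{BF} behaves, the short sketch below evaluates $B[F^M]$ for a hypothetical two-class example with invented class priors and Hamming-distance histograms; as $M$ grows, the smallest distance with non-zero weight visibly dominates, in agreement with Eq.~\\eqref{Bscaling}:
\\begin{verbatim}
def B(F_M, class_prior, hist):
    """Eq. (BF): B[F] = 1/2 sum_{c != c'} P(c) P(c') sum_h P_cc'(h) F^h, with F -> F^M."""
    total = 0.0
    for (c, cp), P_h in hist.items():
        if c == cp:
            continue
        total += 0.5 * class_prior[c] * class_prior[cp] * \
                 sum(p * F_M**h for h, p in P_h.items())
    return total

# Hypothetical two-class example: priors and Hamming-distance histograms P_cc'(h).
class_prior = {0: 0.5, 1: 0.5}
hist = {(0, 1): {3: 0.2, 10: 0.8}, (1, 0): {3: 0.2, 10: 0.8}}   # h_min = 3

F = 0.9
for M in (1, 10, 100):
    print(M, B(F**M, class_prior, hist), F**(M * 3))  # compare with F^(M h_min)
\\end{verbatim}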
\n\n\n\\subsection{Supervised quantum pattern recognition}\\label{s:supervi}\n\nIn data driven approaches the task is to approximate the optimal classifier \nvia a collection of already classified examples $(c^{\\mathcal T}_k,\\bs i^{\\mathcal T}_k)$. \nThe set $\\mathcal T= \\{(c^{\\mathcal T}_k,\\bs i^{\\mathcal T}_k)~{\\rm for}~k=1,\\dots,T\\}$ is\ncalled {\\it training set} and $T$ is its cardinality. \nIn the framework of statistical learning theory, we can treat \nthe elements of this set as \n{\\it samples} from the abstract and unknown joint probability distribution $P(c,\\bs i)$ introduced above.\nThen, in the limit of large $T$ we may approximate the averages with respect to $P(c,\\bs i)$ with \n{\\it empirical} averages over the training set. This allows us to explicitly compute the classification \nerror \\eqref{Equantum} and the theoretical bounds \\eqref{PatternBounds}.\nTherefore we define an empirical learning method, also called ``training'', as an \noptimization of the POVM $\\{\\Pi_c\\}$ to correctly classify, as much as possible, the \nknown samples from the training set $\\mathcal T$ \n\\begin{equation}\n\t{\\rm training:~} \n\t\\min_{\\{\\Pi_c\\}} \\frac1T \\sum_{k=1}^T \\sum_{c\\neq c_k^\\mathcal T }\n\t\\Tr[\t\\Pi_{c}\\rho_{\\bs i_k^\\mathcal T}^{\\otimes M}] =: E^{\\rm Q}_{\\mathcal T}.\n\t\\label{training}\n\\end{equation}\nFrom an operational point of view, a suboptimal solution to optimal detection $\\{\\Pi_c\\}$ \ncan be found for instance \nas shown Fig.~\\ref{fig:barcode}d and discussed in section \\ref{s:barcode}, by optimizing over the \navailable optical circuit, measurement schemes and classical post-processing. Here on the \nother hand we study the ultimate theoretical limits that any classification task must satisfy, \nstudying the minimum training error $E^{\\rm Q}_{\\mathcal T}$ via bounds like \\eqref{PatternBounds},\nwhile the ability to classify unseen data will be discussed in the next section. \nIndeed,\nupper and lower bounds \non $E^{\\rm Q}_{\\mathcal T}$ can be obtained (see Appendix~\\ref{s:pgm})\nas an average fidelity between states $\\rho_{\\bs i_k^\\mathcal T}$ and $ \\rho_{\\bs i_{k'}^{\\mathcal T}}$ \nwhose images from the training set have different classes, $c_k^{\\mathcal T}\\neq c_{k'}^{\\mathcal T}$.\nThanks to Eq.~\\eqref{FidIdlers} we finally get\n\\begin{align}\n\tB_{\\mathcal T}[F(\\rho_B,\\rho_W)^{2M}] \\leq\tE^{\\rm Q}_{\\mathcal T} \\leq 2T B_{\\mathcal T}[F(\\rho_B,\\rho_W)^M],\n\t\\label{PatternBounds1}\n\\end{align}\nwhere we have defined \n\\begin{equation}\n\tB_{\\mathcal T}[F] = \t\\sum_{k, k' : c_k^{\\mathcal T}\\neq c_{k'}^{\\mathcal T} } \n\t\\frac{F^{{\\rm hamming}({\\bs i_k^\\mathcal T},{\\bs i_{k'}^{\\mathcal T}})}}{2T^2}.\n\t\\label{Bfunc}\n\\end{equation}\nIt is simple to show that $B_{\\mathcal T}[F]$ is a particular case of $B[F]$\nfrom Eq.~\\eqref{BF} in which averages \nover the abstract distribution are substituted with averages over the empirical distribution. \nAs such, we may rewrite $B_{\\mathcal T}$ as in Eq.~\\eqref{BF} and obtain the large-$M$ scaling\n\\eqref{Bscaling}. \n\n\n\nAs a relevant example, we consider the problem of handwritten digit classification with the \nMNIST dataset \\cite{lecun1998gradient}. \nThe MNIST dataset is composed of a training set of $60000$ images and \ncorresponding classes, and a testing set of $10000$ images and \ncorresponding classes. Each original image is in grey scale and has $n=28{\\times}28$ pixels. 
For simplicity we first map each pixel to either black or white, depending on the closest grey-level. \nIn this way, every image can be seen as a 2D barcode. For the MNIST dataset \nwe see from Fig.~\\ref{fig:digits}b) that the probability $P_{cc'}(h)$ that two images from different classes have \nHamming distance $h$ resembles a Gaussian distribution with mean $\\mu_{cc'}$ \nand standard deviation $\\sigma_{cc'}$, and minimum non-zero value $h^{\\rm min}_{cc'}$. \nUsing this approximation, we find in Appendix~\\ref{s:pgm} analytical approximations for $B_{\\mathcal T}[F]$, \nrecovering the scaling \\eqref{Bscaling}, where $h_{\\rm min} = \\min_{c\\neq c'} h^{\\rm min}_{cc'}$. \nFor the MNIST dataset, we find $h_{\\rm min}=25$. \nTherefore, from \\eqref{PatternBounds1} we may \nget an error that decays as $E^{\\rm Q}_{\\mathcal T} \\approx F(\\rho_B,\\rho_W)^{\\alpha M h_{\\rm min}}$, \nindependently on the number of pixels $n$ and with \n$1\\leq \\alpha\\leq 2$. Moreover, thanks to Ineqs.~\\eqref{PatternBounds1} we may define a guaranteed quantum \nadvantage when the upper bound obtained with entangled states is smaller than the lower bound obtained with \nclassical data, namely when $2T F_q^{M h_{\\rm min}}\\leq F_c^{2 M h_{\\rm min}}$. Since the training set \nis normally very large, we may set $2T=2^{\\nu M h_{\\rm min} N_S}$ for some $\\nu$ and the above inequality \nbecomes equivalent to \\eqref{nuthreshold}, in the limit $M\\to \\infty$ and $N_S\\to0$. Therefore, \nwe may repeat the same analysis of Sec.~\\ref{s:enhancement}: \n whenever $\\nu_{\\rm th}>0$ (see Fig.~\\ref{fig:nuth}), quantum advantage can be proven \nfor training sets whose dimension is bounded as $2T\\leq 2^{\\nu_{\\rm th} M N_S h_{\\rm min} }$.\nIn other terms, setting $N_{\\rm tot}=MN_S$ we find a simple relation between the number of photons to show \nquantum advantage and the dimension of the training set as \n\\begin{equation}\n\tN_{\\rm tot} \\geq \\frac{\\log_2(2T)}{\\nu_{\\rm th}h_{\\rm min}} \\simeq 0.65\\, \\nu_{\\rm th}^{-1} ~.\n\\end{equation}\nIn the above expression the first inequality holds in general, while the approximated numerical \nvalue is for the MNIST dataset, where $h_{\\rm min}=25$ and $T=6{\\times} 10^4$. \n\n\nTo conclude this section we note that \nunlike \\eqref{perrcomb}, the upper bound in \\eqref{PatternBounds1} is achieved with \nglobal measurements, so a strategy like the one in Fig.~\\ref{fig:barcode}f may be needed to achieve \nsuch classification accuracy.\nBounds with local measurement errors are discussed in the \nnext section, where each pixel is detected independently.\n\n\\subsection{Independent on-pixel measurements}\nIn the previous section we have studied the ultimate physical limits for pattern recognition \nby optimizing over all the elements of the optical apparatus, namely the optical circuit, the \nmeasurements and the classical post-processing routines (Fig.~\\ref{fig:barcode}c). Together these can all be \ndescribed as an abstract global POVM, as in Eq.~\\eqref{Equantum}. \nHere we consider a simplified setup, similar to that of Fig.~\\ref{fig:barcode}c but without the optical \ncircuit and with local measurements $\\Pi_{\\bs i}=\\prod_{j=1}^N \\Pi_{i_j}$. \nHere a noisy image is reconstructed first, and then a classical algorithm is used to classify it. 
\nAs before, we call $\\bs i$ the real physical configuration of the $n$ pixels, \neach either black or white, $i_j\\in\\{B,W\\}$, and $\\tilde{\\bs i}$ the binary variables corresponding \nto the reconstructed image, read by the sensors. \nUsing $M$ copies to \nperform the detection, all possible reconstructed images can appear with\nprobability $p_{\\rm read}(\\tilde {\\bs i}|\\bs i)$ as in Eq.~\\eqref{measupattern}. \nConsidering also the classical classification routine, the local setup \nconsists of choosing a non-optimal POVM in Eqs.~\\eqref{Equantum} or \\eqref{training} as \n\\begin{equation}\n\t\\Pi_c = \\sum_{\\bs i} A(c|\\tilde{\\bs i}) \\prod_{j=1}^n \\Pi_{\\tilde{i}_j}~,\n\t\\label{localalgo}\n\\end{equation}\nwhere $A(c|\\bs i)$ is any reliable (possibly non-linear) \nmachine learning algorithm that can classify the reconstructed \nimages. The above equation defines a POVM as long as $\\sum_c A(c|\\bs i)=1$ for all $\\bs i$, which \nis an obvious requirement since every image must be assigned to some class. \n\nThe classical algorithm must be noise resilient, because some pixels \nmight not be properly reconstructed, see e.g. Fig.~\\ref{fig:digits}a.\nNoise naturally occurs in\nreadouts that are made in reflection, where the light is diffused back to the receiver.\nClassification in the presence of different forms of noise has a large literature \nin machine learning \\cite{angluin1988learning}. Here, we\nassume that our training set is composed of noiseless \nimages that are correctly classified,\nnamely that \n$c_k^{\\mathcal T}$ is \nthe true class of $i_k^{\\mathcal T}$. \nAlthough not explicitly discussed here, \nit is possible to extend our analysis to noisy training sets via the method of importance reweighting\n\\cite{liu2015classification,aslam1996sample,manwani2013noise}. \n\n\nAs for the classical algorithm in Eq.~\\eqref{localalgo}, \nthere are different strategies to define a classifier given the training set,\nall with different performances and ranges of applicability \\cite{hastie2009elements}. \nHere we focus on the nearest neighbor classifier \\cite{cover1967nearest}, defined as \n\\begin{align}\n\t\\tilde c^{\\mathcal T}_{\\rm NN}(\\bs i)&= c^{\\mathcal T}_{k_{\\rm min}}~,\n\t&\n\tk_{\\rm min} &= \\argmin_k D(\\bs i,\\bs i_k^{\\mathcal T})~,\n\t\\label{nnrule}\n\\end{align}\nwhere $D(\\bs i,\\bs i')$ is a suitable distance between two images. In other words, \nclassification of an unknown image $\\bs i$ is done by selecting the class \n$c_{k_{\\rm min}}^{\\mathcal T}$ of the image from the training set that \nis closest to $\\bs i$, according to the distance $D$. \nThe corresponding algorithm in \\eqref{localalgo} is \n$A(c|\\bs i) = \\delta_{c,\\tilde c^{\\mathcal T}_{\\rm NN}(\\bs i)}$.\nMore advanced neural-network based algorithms will be considered \nin another paper \\cite{cillian}.\n\nIn spite of being very simple, the nearest neighbor classifier \nhas many desirable features. Indeed, under mild conditions, it has \nbeen proven \\cite{cover1967nearest} that, for $T\\to\\infty$, \nthe classification error using the nearest neighbor classifier \nis at most twice the Bayes rate, irrespective of the number \nof classes. More details are shown in the Supplementary material~\\cite{suppmat}, where we study the\nperformance of this classifier for finite $T$, i.e., for finite training sets.\nAnother feature is the ability to choose the most appropriate distance $D$. 
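\nA minimal sketch of the rule in Eq.~\\eqref{nnrule} is given below; the distance $D$ is passed as an argument, and the tiny training set is invented purely for illustration:
\\begin{verbatim}
def nn_classify(img, train_imgs, train_classes, D):
    """Nearest neighbor rule, Eq. (nnrule): class of the closest training image."""
    k_min = min(range(len(train_imgs)), key=lambda k: D(img, train_imgs[k]))
    return train_classes[k_min]

def hamming(a, b):
    """Number of pixels in which two binary images (tuples of 0/1) differ."""
    return sum(x != y for x, y in zip(a, b))

# Toy training set of 4-pixel "images" with made-up class labels.
train_imgs = [(0, 0, 1, 1), (1, 1, 0, 0), (1, 1, 1, 1)]
train_classes = ['A', 'B', 'A']

print(nn_classify((0, 1, 1, 1), train_imgs, train_classes, D=hamming))
\\end{verbatim}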
\nHere we choose the Hamming distance, which allows us to exploit many results from\nprevious sections. \n\n\nIn this section we consider quantum sources and sensors, but classical algorithms for nearest neighbor \nclassification. Quantum computers can perform\nnearest neighbor classification quicker than any classical counterpart \\cite{wiebe2015quantum}, but \nhow to mix those quantum algorithms with optical detection schemes is still an open problem.\n\nInserting Eq.~\\eqref{localalgo} into \\eqref{Equantum} \nand employing the nearest neighbor classifier \nwe get \n\\begin{equation}\nE^{\\rm NN}_{\\mathcal T} := \n\\sum_{\\bs i,\\tilde{\\bs i}} \\sum_{c\\neq \\tilde{c}_{\\rm NN}^{\\mathcal T}(\\tilde{\\bs i})} P(c,\\bs i)\n\\prod_{j=1}^n \\Tr[\\Pi_{\\tilde{i}_j}\\rho_{i_j}^{\\otimes M} ].\n\\label{Ennt}\n\\end{equation}\nTo understand this error, suppose that the pixel error probability $p$ is independent of \nwhether the pixel is black or white. In this case, the probability that the reconstructed \nimage $\\tilde{\\bs i}$ differs from the true image $\\bs i$ in \n$k={\\rm hamming}(\\bs i,\\tilde{\\bs i})$ pixels is a binomial distribution $\\propto p^k(1-p)^{n-k}$,\nwith mean $np$. Thanks to the analysis shown in Fig.~\\ref{fig:digits}b we know that,\non average, as long as the number of wrongly detected pixels is smaller than the typical \nseparation in Hamming distance between different classes, the nearest neighbor classifier \nshould provide the correct result. For the transformed MNIST dataset, \n$n=28{\\times}28$ and the typical \nnumber of flips between different classes is $\\approx 160$ (see Fig.~\\ref{fig:digits}b), so \na pixel error probability up to $p\\simeq 160\/784\\simeq 20\\%$ should be tolerated by the algorithm. \n\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{digits2.pdf}\n\t\\caption{{ Pattern recognition with independent on-pixel measurements.}\n\t\t(a) Classification error, namely the empirical probability of recognizing the wrong digit \n\t\tusing the nearest neighbor classifier with Hamming distance, as a function of the pixel error probability. \n\t\t(b) Classification error when each pixel is probed using either coherent inputs or \n\t\tentangled TMSV states. \n\t\tThe plot is generated \n\t\tby combining the error coming from the single pixel error probability\n\t\t(see Appendix~\\ref{a:locmeas}) with\n\t\tthe classification error from (a). \n\t\tWe focus on the limit $M\\to\\infty$, $N_S\\to0$ while keeping fixed the total mean number of photons $M N_S$ irradiated over each pixel.\n\t\tThe colored areas represent the region between the upper and lower bounds assuming quantum (blue) or classical (red) sources combined with optimal local measurements. These bounds depend on the quantum and classical fidelities from Eqs.~\\eqref{fidelityQ} and~\\eqref{fidelityC}. \n\t\tThe cyan and orange lines represent the performance with \n\t\tquantum light (cyan) or classical coherent states (orange) using (non-optimal) photodetection measurements. \n\t\tWe set $\\eta_B=0.9$ and $\\eta_W=0.95$. 
\n\t}%\n\t\\label{fig:digits2}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:digits2} we study the robustness of the nearest neighbor classifier via a numerical \nanalysis with the transformed MNIST dataset, where each image \nis transformed into a 2D barcode as described in Sec.~\\ref{s:supervi}.\nWe use this transformed training set to build a nearest neighbor classifier, \nand then estimate the error \\eqref{Ennt} as an average over the testing set, namely \nas $N_{\\rm wrong}\/10000$, where \n$N_{\\rm wrong}$ is the number of times that, \nover the 10000 entries of the testing set, the predicted digit differs from the true one. \nSince the images from the testing set are samples from the \nabstract and unknown probability $P(c,\\bs i)$, \nin the limit of an infinitely large testing set this estimate\nconverges to $E^{\\rm NN}_{\\mathcal T}$ from Eq.~\\eqref{Ennt}. \nMoreover, since the images \nfrom the testing set are different from the ones in the training set, this error contains \ntwo terms: an error due to imperfect detection and a generalization error, since we \nare classifying previously unseen data. \n\nIn Fig.~\\ref{fig:digits2}a we study the classification error as a function of the \nprobability $p$ of wrong pixel detection. As we see,\neven for noiseless images, namely when $p=0$,\nthe classification error is still non-zero, as the nearest neighbor classifier may \nprovide wrong outcomes. Nonetheless, as predicted, \nFig.~\\ref{fig:digits2}a shows that the nearest neighbor classifier is remarkably robust against \nrelatively high pixel error probabilities $p$.\n\n\nIn Fig.~\\ref{fig:digits2}b we combine the bound on the pixel error probability \n(see Eq.~\\eqref{perrlocal}, for a single pixel $n=1$) with the theoretical curve that \npredicts the classification error from the pixel error probability in Fig.~\\ref{fig:digits2}a.\nThe bounds on the pixel error probabilities are obtained from the fidelities, \nEqs.~\\eqref{fidelityQ} and~\\eqref{fidelityC}, which consider either coherent \nstates or entangled TMSV states with the same average number of photons $MN_S$. \nThe results from Fig.~\\ref{fig:digits2}b show that the classification\nerror when we use quantum light is lower than the corresponding classical value. \nThese results are based on the assumption \nthat the detector performs the optimal Helstrom measurement, which \nmay be complex to implement experimentally. Therefore, \nin Fig.~\\ref{fig:digits2}b, we also \nconsider the simpler photodetection measurement, where the POVM in Eq.~\\eqref{measupattern} \nis a projection onto the Fock basis. The resulting pixel error probabilities \nwith both coherent states and TMSV inputs are studied in the Supplementary Material~\\cite{suppmat},\nadapting the analysis from Ref.~\\cite{ortolano2020experimental}. \nWe see that, even for this non-optimal measurement, entangled inputs always provide an \nadvantage over purely classical coherent states for all possible values of $MN_S$. \nThis advantage can be experimentally observed via a setup like that of Ref.~\\cite{ortolano2020experimental}.\n\n\n\\section{Discussion}\\label{s:conclusions}\nIn this work, we have investigated multi-pixel problems of quantum channel\ndiscrimination, namely the identification of barcode configurations (equivalent\nto readout of the stored data) and the classification of black and white\npatterns, e.g.~given by noisy digital images of handwritten digits. 
In both\ncases, we have shown that the use of quantum light based on entangled states\nclearly outperforms classical strategies based on coherent states. \n\nFor both quantum-enhanced \nbarcode decoding and pattern recognition, we have\nanalytically studied, via bounds, the physical limits to the classification error that we may get by optimizing \nover all optical elements, measurements and classical post-processing. \nThis allows us to derive explicit analytical conditions for the quantum advantage to hold. \nMoreover, \nthe analysis of our bounds allows us to rigorously prove that quantum-enhanced \npattern recognition can vastly reduce the classification error with respect to the mere \nindependent measurement of each pixel. \n\nNonetheless, since it is easier from an experimental point of view, \nwe have also \nconsidered a simplified setup where all pixels are probed independently and,\nfor the problem of pattern recognition, we found that \nphoton counting measurements are sufficient to show quantum advantage,\npaving the way for an experimental demonstration with state-of-the-art quantum\ntechnology.\n\n\\acknowledgements\n\nL.B. acknowledges support by the program ``Rita Levi Montalcini'' for young researchers. \nQ.Z. acknowledges support from the Defense Advanced Research Projects Agency (DARPA) under Young Faculty Award (YFA) Grant No. N660012014029 and the Craig M. Berge Dean's Faculty Fellowship of the University\nof Arizona.\nS.P. acknowledges funding from the EU Horizon 2020 Research and Innovation Action under grant agreement No. 862644 (Quantum Readout Techniques and Technologies, QUARTET). \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOver the past decade, access to knowledge has fundamentally changed. This process began around 2011, when Stanford professors Andrew Ng, Sebastian Thrun and others made their AI courses available to everyone through online courses \\cite{ngorigins}. This type of course is often referred to as a Massive Open Online Course (MOOC). Popular MOOC platforms include Coursera, Udacity, edX, Udemy and others. Until 2011, AI could generally only be studied in a limited number of available university courses or from books or papers. Furthermore, those resources were mainly available in developed countries. As a consequence, potential learners in emerging markets could not easily access such resources. Due to MOOCs, the so-called ``democratization of AI knowledge'' has begun to fundamentally change the way we learn and has given rise to new AI superpowers, such as China \\cite{lee2018ai}.\n\nWe argue in Section~\\ref{sec:moocs} that MOOCs have now given many universities and professors serious competitors. It can also be assumed that this competition will intensify even further in the coming years and decades. In order to justify the added value of three- to five-year degree programs to prospective students, universities must differentiate themselves from MOOCs in some way or other.\n\nIn this paper, we show how we address this challenge at Deggendorf Institute of Technology (DIT) in our AI courses. DIT is located in rural Bavaria, Germany, and has a diverse student body of different educational and cultural backgrounds. 
Concretely, we present two courses (that focus on ML and include slightly broader AI topics when needed):\n\\begin{itemize}[noitemsep,topsep=0pt]\n \\item Computer Vision (Section~\\ref{sec:computervision})\n \\item Innovation Management for AI (Section~\\ref{sec:innovation})\n\\end{itemize}\n\nIn addition to teaching theory, we put emphasis on real-world problems, hands-on projects and taking advantage of hardware. Particularly the latter is usually not directly possible in MOOCs. In this paper, we share our best practices. We also show how our courses contribute to DIT's ability to differentiate itself from MOOCs and other universities.\n\n\\section{MOOCs Have Become Game Changers}\n\\label{sec:moocs}\nIn addition to courses on AI, a variety of other courses on almost any topic have emerged on various MOOC platforms over the past decade. Those courses enable learners to study high-quality content from renowned professors, remotely, at their own speed and at little or no cost. Furthermore, collaborations with renowned universities and industry partners have emerged.\nSome MOOC platforms offer career coaching, too. Companies have also launched collaborative programs with MOOC platforms to train their employees.\n\nThere are plenty of examples of professionals who have found new, high-paying jobs in various industries within a short period of time after completing hands-on MOOCs, for example in the news \\cite{lohr2020remember} or on LinkedIn. This is particularly true for IT, a sector that has traditionally been open to lateral entrants and autodidacts. In recent years, MOOCs have therefore become steadily more established. This trend has also been further consolidated during the COVID-19 pandemic, as millions of people around the globe have been undergoing retraining \\cite{bylieva2020analysis}.\n\nIn summary, universities will be facing the following challenges in the coming years:\n\n\\begin{enumerate}[noitemsep,topsep=0pt]\n\\item In just a few years many very good high school graduates could decide against the traditional completion of a university degree program. They would then rather acquire all necessary practical skills through MOOCs within a few months or perhaps a year. In parallel, they could also gain practical experience by working part-time or founding startups. As a consequence, they could quickly get excellent jobs and outperform traditional university graduates on the jobs market.\n\\item Due to the demographic change \\cite{magnus2012age} and the potential lack of qualified new students, many universities in developed countries may become unable to maintain their current size. In view of the return on investment, politicians or administrators may thus probably sooner or later start thinking about closing individual departments or even entire universities.\n\\item Many (non-computer science) degree programs have so far only taught traditional content, with little or no link to the digital transformation and automation through AI. If this important content continues to go unnoticed in education, those degree programs will almost certainly train their students for unemployment.\n\\end{enumerate}\n\nUniversities must face up to these challenges, which also provide many opportunities, though. As a result, universities could emerge even stronger from that competition. Most importantly, universities must differentiate themselves from MOOCs. 
In the following sections, we show how we address these challenges by teaching cutting-edge real-world content and taking advantage of physical university infrastructure. We also actively promote our courses through social media, press releases and other channels in order to attract more prospective students. In addition, our courses are open to students of other departments, including electrical engineering, mechanical engineering, healthcare or business. This allows us to support them in learning the tools of the 21st century that they need in order to actively contribute to the digital transformation of their disciplines.\n\n\n\\section{Computer Vision Course}\n\\label{sec:computervision}\nPopular MOOC platforms offer a number of excellent courses\\footnote{These include, but are not limited to, the following courses: \\url{http:\/\/www.udacity.com\/course\/computer-vision-nanodegree--nd891}, \\url{http:\/\/www.coursera.org\/learn\/computer-vision-basics}.} on computer vision (CV). In order to survive in international competition, the content of a university CV course today must meaningfully differentiate itself from these offerings through unique selling propositions.\nBased on this principle, we started to teach this new course at DIT in 2020. Note that there is a separate deep learning course taught by a different professor in our department. Most students take both courses in parallel and have previously taken an introductory machine learning course.\n\n\\subsection{Content}\nWe provide students with a broad and deep background in CV. That is why we discuss both traditional and modern neural network-based CV methods. In practice, successful CV applications tend to combine both approaches \\cite{o2019deep}, in particular when only a limited number of training examples are available \\cite{ahmed2020deep}. 
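As a minimal illustration of such a hybrid pipeline (a sketch we add here for concreteness, not actual course material; it assumes the scikit-image and scikit-learn libraries and uses the small digits data set as a stand-in), classical edge features can be combined with a simple learned classifier:\n\n\\begin{verbatim}\n# Hybrid sketch: classical (Sobel) edge features + a learned classifier.\nimport numpy as np\nfrom skimage.filters import sobel\nfrom sklearn.datasets import load_digits\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\n\ndigits = load_digits()                        # 8x8 grayscale images\n# Classical step: hand-crafted edge-magnitude feature maps.\nedges = np.array([sobel(img) for img in digits.images])\nX = edges.reshape(len(edges), -1)             # flatten feature maps\ny = digits.target\nX_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)\n# Learned step: a simple classifier on top of the classical features.\nclf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)\nprint('test accuracy:', clf.score(X_te, y_te))\n\\end{verbatim}\n\nIn a full project, the learned component would typically be a convolutional network rather than a linear model, but the division of labor between hand-crafted features and a trained model is the same.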
Concretely, we discuss the following topics in the first half of the term:\n\n\\begin{itemize}[noitemsep,topsep=0pt]\n\\item Introduction: applications, computational models for vision, perception and prior knowledge, levels of vision, how humans see\n\\item Pixels and filters: digital cameras, image representations, noise, filters, edge detection\n\\item Regions of images: segmentation, perceptual grouping, Gestalt theory, segmentation approaches, image compression by learning clusters\n\\item Feature detection: RANSAC, Hough transform, Harris corner detector\n\\item Object recognition: challenges, template matching, histograms, machine learning\n\\item Convolutional neural networks: neural networks, loss functions and optimization, backpropagation, convolutions and pooling, hyperparameters, AutoML, efficient training, selected architectures\n\\item Image sequence processing: motion, tracking image sequences, temporal models, Kalman filter, correspondence problem, optical flow, recurrent neural networks\n\\item Foundations of mobile robotics: robot motion, sensors, probabilistic robotics, particle filters, SLAM\n\\item Advanced topics: 3D vision, generative adversarial networks, self-supervised learning\n\\end{itemize}\n\nIn the second half of the term, students work in groups of 1 to 4 members on a CV project.\n\n\\subsection{Unique Selling Propositions}\nThis course differentiates itself from other CV courses, in particular MOOCs, as follows:\n\\begin{enumerate}[noitemsep,topsep=0pt]\n\\item Most CV courses taught on MOOC platforms or at universities only include smaller, isolated problems that can be implemented on almost any commercially available computer or by using cloud services. This course includes a larger real-world project in the second half of the term instead. Students choose a CV project of their choice, in which they also apply agile project management and use respective tools. In order to provide students with a real added value of a physical university course, they are highly encouraged to use the NVIDIA Jetbot platform depicted in Figure~\\ref{fig:robot}. \n\n\\begin{figure}[ht]\n\\vskip 0.2in\n\\begin{center}\n\\centerline{\\includegraphics[width=0.85\\columnwidth]{img_robot.JPG}}\n\\caption{The NVIDIA Jetbot mobile robot platform used in the projects. Find more information at \\url{http:\/\/www.github.com\/NVIDIA-AI-IOT\/jetbot}.}\n\\label{fig:robot}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\nIt possesses a camera and efficiently executes CV algorithms on its NVIDIA Jetson GPU. By using this platform, students can not only better understand the course content. Rather it enables them to experience how these algorithms behave in the real world. During the COVID-19 pandemic, they could take the robot kits home in order to work on their projects remotely.\n\n\\item We cover challenging content that is more complex than in most available MOOCs: We first reviewed CV courses at introductory and advanced levels of international top universities, including Stanford, MIT and Imperial College London. We then selected the topics that we find most relevant to solving real-world problems. Furthermore, we present these topics in a more understandable way and include additional revisions of the underlying concepts. Like this, we also make this course more accessible to students of other disciplines.\n\n\\end{enumerate}\n\n\\subsection{Outcomes and Students' Feedback}\n19 students of different degree programs signed up for the first iteration of this course. 
In total, they implemented 10 projects in groups of 1 to 3 members.\n\nAbout half of the projects used a NVIDIA Jetbot. Those projects included object following and simultaneous localization and mapping (SLAM). The other projects included a face mask detector and a clothes classifier. We find the coin counter depicted in Figure~\\ref{fig:coins} particularly worth mentioning though.\n\n\\begin{figure}[ht]\n\\vskip 0.2in\n\\begin{center}\n\\centerline{\\includegraphics[width=0.85\\columnwidth]{img_coins.png}}\n\\caption{Coin counter. Image courtesy: Patrick Gawron and Achot Terterian.}\n\\label{fig:coins}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\nIt first applies object segmentation and detection to a photo that contains an arbitrary number of coins. It then aggregates the amounts of the individual coins. The underlying ML model also handles multiple currencies in the same photo. In the project presentation, the group also discussed how they solved the challenge of collecting a data set of coins that includes a variety of angles, conditions, reflections and currencies.\n\nWe received quantitative and qualitative feedback from students through a formal course evaluation. The overall feedback was a 1.3 on a scale 1 to 5 where 1 is the best. However, a few students suggested a longer introduction to deep learning frameworks for the first half of this course. They would then have been able to start working on their projects quicker in the second half. In the second iteration, we have therefore added an extended introduction to deep learning frameworks.\n\n\n\\section{Innovation Management for AI Course}\n\\label{sec:innovation}\n\nIn recent years, many companies have started to invest in ML and AI to stay competitive. However, the sad truth is that some 80\\% of all AI projects fail or do not result in any financial value \\cite{insights2019artificial}. That is a serious concern because there is clearly an acute need in industry for experts who have a comprehensive knowledge of what needs to be done so that AI adds value to businesses. In our view, one of the underlying causes is the way AI is taught in universities, as most courses cover only purely methodological and engineering aspects of AI. We are convinced that professors need to address this problem by also enabling students to think in a broader and business-oriented sense of AI innovation management. At DIT, we therefore started to teach this novel and internationally unique course in 2020. \n\n\\subsection{Content}\nWe discuss a range of challenges, both technical and managerial, that companies typically face when using AI \\cite{glauner2020unlocking}. We first look back at some of the historic promises, successes and failures of AI. We then contrast them to some of the advances of the deep learning era and contemporary challenges. Concretely, we discuss the following topics:\n\n\\begin{itemize}[noitemsep,topsep=0pt]\n\\item Introduction: how AI is changing our society, selected examples of successful and unsuccessful AI projects and transformations\n\\item History and promises of AI: Dartmouth conference, AI from 1955 to 2011, AI winters\n\\item Deep learning era: breakthroughs, DeepMind, promises and hypes, no free lunch theorem, AI innovation in China, technological singularity\n\\item Contemporary challenges: regulation, explainable AI, ethics\n\\item AI transformation of companies: opportunities, challenges, best practices, roles, data strategy, data governance\n\\end{itemize}\n\nWe offer this course as an intensive course. 
On day one, we teach the content above. On the following two days, students work on a case study on how to successfully implement AI in a company of their choice. They present the outcomes of their case study on day four.\n\n\\subsection{Unique Selling Propositions}\nThis course differentiates itself from other courses, in particular MOOCs, as follows:\n\n\\begin{enumerate}[noitemsep,topsep=0pt]\n\\item During an intensive online search for related courses, we only found introductions to AI for managers\\footnote{These include, but are not limited to, the following courses: \\url{http:\/\/www.udacity.com\/course\/ai-for-business-leaders--nd054}, \\url{http:\/\/www.udemy.com\/course\/intro-ai-for-managers\/}.}. However, we did not find any business-related courses for AI experts. In this course, we bridge that gap.\n\\item Students learn respective best practices along the entire AI value chain and how these lead to productively deployed applications that add real value. They work on a case study on how specific AI use cases are implemented in companies, what challenges may be encountered and how they may be solved.\n\\end{enumerate}\n\n\\subsection{Outcomes and Students' Feedback}\n21 students of different degree programs signed up for the first iteration of this course. In total, they worked on 11 case studies in groups of 2 to 4 members.\n\nMost of the students who took this course are computer scientists studying in a part-time continuing education AI degree program. We received very positive feedback from them as they could include in their case studies some of the current challenges they face at work. A few business students also took this course as they were eager to learn more about AI. They contributed their in-depth business knowledge to the case study presentations. This turned out to be a valuable experience for the computer scientists.\n\nWe could, however, not quantitatively assess this course yet. Our university's course evaluation scheme does not include intensive courses. We are planning to address this issue in the future with an unofficial course evaluation.\n\n\\section{Conclusions}\nUniversities are facing major challenges as a result of the rapidly advancing digital transformation of teaching. These include in particular competition from Massive Open Online Courses (MOOCs). This transformation is further being accelerated by the demographic change in developed countries and could result in a dwindling number of potential students in the near future. However, if universities address those challenges swiftly, ambitiously and sustainably, they can even emerge stronger from this situation by providing better and modern courses to their students. In this paper, we showed how we address those challenges in AI education at Deggendorf Institute of Technology. Concretely, we teach innovative and unique courses on computer vision and innovation management for AI. We shared our best practices and how our courses contribute to Deggendorf Institute of Technology's ability to differentiate itself from MOOCs and other universities.\n\nBoth courses are currently being offered again. The number of students that signed up has more than doubled. 
Our courses are thus positively perceived by students.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{The Campbell-Magaard theorem}\n\nConsider the Lorentzian metric of the five-dimensional space written in a\nGaussian form\n\\begin{equation}\nds^{2}=\\overline{g}_{ij}\\left( x,\\psi\\right) dx^{i}dx^{j}+d\\psi^{2},\n\\label{hds2}%\n\\end{equation}\nwhere $x=\\left( x^{1},...,x^{4}\\right) $, and Latin indices run from $1$ to\n$4$ while the Greek ones go from $1$ to $5.$\n\nBy splitting the vacuum Einstein equations in terms of the extrinsic and\nintrinsic curvatures of the slices $\\psi=const$, it can be shown that the\nequations have the following structure:%\n\n\\begin{align}\n\\frac{\\partial^{2}\\overline{g}_{ij}}{\\partial\\psi^{2}} & =F_{ij}\\left(\n\\overline{g},\\frac{\\partial\\overline{g}}{\\partial\\psi},\\frac{\\partial\n\\overline{g}}{\\partial x},\\frac{\\partial^{2}\\overline{g}}{\\partial x^{2}%\n},\\frac{\\partial^{2}\\overline{g}}{\\partial x\\partial\\psi}\\right)\n\\label{dyn}\\\\\n\\nabla_{j}\\left( \\Omega^{ij}-g^{ij}\\Omega\\right) & =0\\label{c1}\\\\\nR+\\Omega^{2}-\\Omega_{ij}\\Omega^{ij} & =0,\\label{c2}%\n\\end{align}\nwhere $F_{ij}$ are analytic functions of their arguments, $\\nabla_{j}$ is the\ncovariant derivative with respect to the induced metric $g_{ij}=\\overline\n{g}_{ij}\\left( x,\\psi=const\\right) $; $R$ and\\ $\\Omega_{ij}$\\ denote,\nrespectively, the scalar curvature and the extrinsic curvature of the\nhypersurface $\\psi=const;$ and $\\Omega=g^{ij}\\Omega_{ij}$. Recall that in the\ncoordinates adopted the extrinsic curvature assumes the simple form:\n\\begin{equation}\n\\Omega_{ij}=-\\frac{1}{2}\\frac{\\partial\\overline{g}_{ij}}{\\partial\\psi\n}.\\label{ext}%\n\\end{equation}\nIt is well known that, owing to the Bianchi identities, the second and third\nequations need to be imposed only on the hypersurface, since they are\npropagated by the first one. In this sense, it is said that the Einstein\nequations consist of the \\textit{dynamical }equation (\\ref{dyn}) plus\n\\textit{constraint equations }(\\ref{c1}) and (\\ref{c2}) for $\\Omega_{ij}$ and\n$g_{ij}$ .\n\nLet now consider the hypersurface $\\psi=0.$ According to the Cauchy-Kowalewski\ntheorem, for any point in this hypersurface, say the origin, there is an open\nset in five dimensions containing that point, where the equation (\\ref{dyn})\nalways has a unique analytic solution $\\overline{g}_{ik}\\left( x,\\psi\\right)\n$ provided that the following analytic initial conditions are specified:\n\\begin{align}\n\\overline{g}_{ij}\\left( x,0\\right) & =g_{ij}\\left( x\\right)\n\\label{cig}\\\\\n\\left. \\frac{\\partial\\overline{g}_{ij}}{\\partial\\psi}\\right\\vert _{\\psi=0} &\n=-2\\Omega_{ij}\\left( x\\right) .\\label{cih}%\n\\end{align}\n\n\nFrom the perspective of the embedding problem these initial conditions\nrepresent, respectively, the metric and the extrinsic curvature of the\nhypersurface $\\psi=0$, whereas the solution of equation (\\ref{dyn}) gives the\nmetric of the $\\left( n+1\\right) -$dimensional space. Thus, if there is a\nsolution for the constraint equations for any given metric $g_{ik}$, then the\ntheorem is proved, since the solution found $\\overline{g}_{ij}\\left(\nx,\\psi\\right) $ substituted in (\\ref{hds2}) will give rise to a metric that\nsatisfies the vacuum Einstein equation $R_{\\mu\\nu}=0$. 
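As a minimal illustrative check of this scheme (an example we add here for concreteness, not part of the original argument, with $\\eta_{ij}$ denoting the four-dimensional Minkowski metric), take flat initial data $g_{ij}=\\eta_{ij}$ and $\\Omega_{ij}=0$ on $\\psi=0$. The constraints (\\ref{c1}) and (\\ref{c2}) hold trivially, since $R=0$ and $\\Omega_{ij}=0$, while the dynamical equation (\\ref{dyn}) is solved by $\\overline{g}_{ij}\\left( x,\\psi\\right) =\\eta_{ij}$, so that (\\ref{hds2}) reduces to five-dimensional Minkowski space,\n\\[\nds^{2}=\\eta_{ij}dx^{i}dx^{j}+d\\psi^{2}.\n\\]\nIn the general case the same construction goes through with nontrivial data $g_{ij}$ and $\\Omega_{ij}$.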
Clearly, the embedding\nmap is then given by the equation $\\left( x,\\psi=0\\right) $.\n\nIt turns out, as Magaard has proved \\cite{magaard,dahia1}, that the constraint\nequations always have a solution. Indeed, by simple counting operation we can\nsee that there are $n(n+1)\/2$ unknown functions (the independent elements of\nextrinsic curvature) and $n+1$ constraint equations. The metric $g_{ij}\\left(\nx\\right) $\\ must be considered as a given datum. For $n\\geq2$, the number of\nvariables is equal or greater than the number of equations. Magaard has shown\nthat after the elimination of equation (\\ref{c2}), the first-order\ndifferential equation (\\ref{c1}) can be written in a canonical form (similar\nto (\\ref{dyn})) with respect to $n$ components of $\\Omega_{ij}$ conveniently\nchosen. Taking initial conditions for these components in such a way that the\nright-hand side of the mentioned equation is analytic at the origin, the\nCauchy-Kowalewski theorem can be applied once more to guarantee the existence\nof a solution for the constraint equations.\n\n\\section{Dynamically generated embedding}\n\nFrom the above we see that Magaard's proof of the CM theorem is formulated in\nterms of an initial value problem. Roughly speaking we can say that a\n$(3+1)$-spacetime is taken as part of the initial data and it is\n\\textquotedblleft propagated\\textquotedblright\\ along a spacelike extra\ndimension by the dynamical part of the Einstein vacuum equations to generate\nthe higher-dimensional space. Nevertheless, it is enough clear that, despite\nsome similarities, the CM theorem is not concerned with real dynamical\npropagation since the initial data \\textquotedblleft evolve\\textquotedblright%\n\\ along a spacelike direction. Therefore there is no reason why we should\nexpect a causality relation between different slices of the higher dimensional space.\n\nHowever, we can look at this picture from a different perspective. Indeed,\nsupported by the CM theorem, we know that given any point $p\\in$ $M$ there is\na five-dimensional vacuum space $(\\widetilde{M},\\widetilde{g})$ into which a\nneighborhood of $p$ in $M$ is embedded. Now we can determine an open subset of\n$\\widetilde{M}$, say, $\\widetilde{O}$, containing the point $p$ in which there\nexists a four-dimensional hypersurface $\\Sigma$, which is spacelike\neverywhere, acausal and that contains the point $p$ (see appendix I). The\nembedding of $\\Sigma$ into $\\left( \\widetilde{O},\\widetilde{g}\\right) $\ninduces a positive definite metric $h$ in the hypersurface. Let $K$ be the\nextrinsic curvature of $\\Sigma$ in $\\left( \\widetilde{O},\\widetilde\n{g}\\right) .$ The metric and the extrinsic curvature are analytic fields in\nthe hypersurface $\\Sigma$, thus they belong to the local Sobolev\nspace\\footnote{By writing $h\\in W^{m}\\left( \\Sigma\\right) $ we mean that the\nnorm of $h$ together with its covariant derivatives of order equal or less\nthan $m$ are square integrable in any open set $\\mathcal{U}$ of $\\Sigma$ with\ncompact closure. For the sake of simplicity, we shall assume that the norm and\nderivatives are calculated with respect to an Euclidean metric. Here we are\nadopting the notation of Sobolev spaces used in \\cite{HE}.}\\emph{ }%\n$W^{m}\\left( \\Sigma\\right) $ for any $m$. 
The set $\\left( h,K,\\Sigma\n\\right) $ constitutes appropriate initial data for the Einstein vacuum\nequations, since $h$ and $K$ satisfy the vacuum constraint equations in the\nhypersurface $\\Sigma$ and fulfill the required condition of regularity (see\n\\cite{HE}, page 248-249, and \\cite{CB}, for instance).\n\nConsider now $D\\left( \\Sigma\\right) $, the domain of dependence of $\\Sigma$\nrelative to $\\left( \\widetilde{O},\\widetilde{g}\\right) $ \\cite{wald}. Since\n$\\Sigma$ is an acausal hypersurface of $\\widetilde{O}$, then $D\\left(\n\\Sigma\\right) $ is open in $\\widetilde{O}$ (see \\cite{oneill}, page 425). Of\ncourse $D(\\Sigma)$ is a non-empty set, since $\\Sigma\\in D(\\Sigma).$ By\nconstruction, the five-dimensional manifold $(D\\left( \\Sigma\\right)\n,\\widetilde{g})$ is a solution of the Einstein vacuum equations, hence\n$(D\\left( \\Sigma\\right) ,\\widetilde{g})$ is a Cauchy development for the\nEinstein vacuum equations of the initial data $\\left( h,K,\\Sigma\\right) .$\n\nAs we have mentioned before $D(\\Sigma)$ is open, and thus the non-empty set\n$M\\cap D\\left( \\Sigma\\right) $ is a neighbourhood of $p$ in $M$ contained in\n$D\\left( \\Sigma\\right) $. Therefore $M\\cap D\\left( \\Sigma\\right) $ is\nembedded in $D\\left( \\Sigma\\right) $, i.e., in a Cauchy development of\n$\\left( h,K,\\Sigma\\right) $ (see figure 1). In other words the dynamical\nevolution of the initial data $\\left( h,K,\\Sigma\\right) $ generates a\nfive-dimensional \\ vacuum space into which the spacetime is locally embedded.\nIn this sense, we can say that this local embedding is dynamically generated\nby the physical propagation of those initial data.\n\nMore precisely this result can be stated as follows: \\textit{Consider an\nanalytic spacetime }$\\left( M,g\\right) .$\\textit{\\ For any }$p\\in\nM$\\textit{\\ there are initial data }$\\left( h,K,\\Sigma\\right) $%\n\\textit{\\ whose Cauchy development for the Einstein vacuum\\ equations is a\nfive-dimensional vacuum space into which a neighbourhood of }$p$\\textit{ in\n}$M$\\textit{\\ is analytically and isometrically embedded.}\n\n\\begin{figure}[ptb]\n\\begin{center}\n\\includegraphics[scale=0.4]\n{figure.ps}\n\\end{center}\n\\caption{{\\small {Sketch of the local embedding of spacetime $M$ into the\nCauchy development $D\\left( \\Sigma\\right) $ of initial data given in an\nacausal spacelike four dimensional hypersurface $\\Sigma.$}}}%\n\\end{figure}\n\nFurthermore, the Einstein vacuum\\ equations admit a well-posed initial value\nformulation with respect to the data $\\left( h,K,\\Sigma\\right) $ (see, for\nexample,\\cite{HE,CB,wald,Friedrich}). Therefore the general properties of\nsolutions of the vacuum Einstein equations, related to the hyperbolic\ncharacter of the differential equations, are applicable to our solution\n$\\left( D\\left( \\Sigma\\right) ,\\widetilde{g}\\right) $. This ensures that\nthe dependence of the solution $\\left( D\\left( \\Sigma\\right) ,\\widetilde\n{g}\\right) $ on the initial data $\\left( h,K,\\Sigma\\right) $ is continuous\n(Cauchy stability). As a consequence the spacetime embedded into $\\left(\nD\\left( \\Sigma\\right) ,\\widetilde{g}\\right) $ is stable in a similar sense\ntoo, as we describe in the next section.\n\nAnother important property is that any change of data outside $\\Sigma$ does\nnot affect the solution in the future domain of dependence (causality). 
Thus\nit follows that any perturbation outside $\\Sigma$ will not disturb the\nembedding of spacetime in $D\\left( \\Sigma\\right) $.\n\n\\section{Cauchy Stability}\n\nConsider an analytic spacetime $\\left( M,g\\right) $ and let $\\left(\nh,K,\\Sigma\\right) $ be a set of analytic initial data with a Cauchy\ndevelopment $\\left( D\\left( \\Sigma\\right) ,\\widetilde{g}\\right) $ in which\nthe spacetime is locally embedded, around $p\\in M.$ Additionally let us admit\nthat this initial data set satisfies the following property: the image of $p$\nthrough the embedding lies in $\\Sigma$. In other words, corresponding to the\nset $\\left( h,K,\\Sigma\\right) $ there are some neighbourhood $O$ of $p$ in\n$M$ and a map $\\varphi:O\\subset M\\rightarrow$ $D\\left( \\Sigma\\right) $ which\nis an embedding, with $\\varphi\\left( p\\right) \\in\\Sigma$ \\footnote{The\nexistence of the set $\\left( h,K,\\Sigma\\right) $ was shown in the previous\nsection. Possibly the initial data set is not unique. The results obtained in\nthis section are applicable separately to each one of all possible initial\ndata set.}.\n\nNow we denote by $\\left( h^{\\prime},K^{\\prime}\\right) $ a new set of initial\ndata which satisfies the vacuum Einstein constraint equation in $\\Sigma$. For\nthe sake of simplicity let us assume that the fields $h^{\\prime}$ and\n$K^{\\prime}$ are $C^{\\infty}$ in $\\Sigma$. In this case, the new generated\nmetric $g^{\\ast}$ is a $C^{\\infty}$ field.\n\nLet $V$ be an open set of $J^{+}\\left( \\Sigma\\right) $, the causal future of\n$\\Sigma$ in $\\left( D\\left( \\Sigma\\right) ,\\widetilde{g}\\right) $, with\ncompact closure and $\\mathcal{U}$ $\\subset\\Sigma$ some neighbourhood of\n$J^{-}\\left( \\overline{V}\\right) \\cap\\Sigma$, the causal past of\n$\\overline{V}$ (the closure of $V$) in $\\Sigma$, with compact closure in\n$\\Sigma$. According to the Cauchy stability theorem (see \\cite{HE}, page 253,\nand \\cite{CB}), for any $\\varepsilon>0$ there is some $\\delta>0$ such that any\ninitial data $\\left( h^{\\prime},K^{\\prime}\\right) $ on $\\Sigma$ close to\n$\\left( h,K\\right) $ in $\\mathcal{U}$ with respect to the local Sobolev\nnorm, i.e., $\\left\\| h^{\\prime}-h,\\mathcal{U}\\right\\| _{m}^{\\sim}<\\delta$\nand $\\left\\| K^{\\prime}-K,\\mathcal{U}\\right\\| _{m-1}^{\\sim}<\\delta$ ( $m>4)$\n\\footnote{By the symbol \\symbol{126}we mean that derivatives are taken only in\ntangent directions of $\\Sigma$.}, give rise to a new metric $g^{\\ast}$ which\nis near to the old one $\\widetilde{g}$ in $V$, i.e., $\\left\\| g^{\\ast\n}-\\widetilde{g},V\\right\\| _{m}<\\varepsilon$.\n\nNow let $V$ be such that $V$ $\\cap\\left( M\\cap D\\left( \\Sigma\\right)\n\\right) =N$ is a non-empty set, where $M\\cap D\\left( \\Sigma\\right) $ means\n$\\varphi\\left( O\\right) ,$ the image of $O$ through the embedding. And let\n$g^{\\prime}$ be the induced metric on $N$ by the embedding of $N$ in $\\left(\nV,g^{\\ast}\\right) $. We shall see that if $\\delta$ is sufficiently small then\n$g^{\\prime}$ will be a Lorentzian metric and it will be close to the spacetime\nmetric $g$ in $N$.\n\nFor the sake of simplicity, let us make some assumptions. First, we assume\nthat $D\\left( \\Sigma\\right) $ is covered by Gaussian coordinates\n(\\ref{hds2}) adapted to the embedding, in which the embedding map is $\\left(\nx,\\psi=0\\right) $. 
If this was not the case, we could find a neighbourhood\n$S$ of $p$ in $\\Sigma$ and a neighbourhood $\\widetilde{O}$ of $p$ in $D\\left(\n\\Sigma\\right) $ such that the domain of dependence of $S$ relative to\n$\\left( \\widetilde{O},\\widetilde{g}\\right) $, $D\\left( S,\\widetilde\n{O}\\right) ,$ is covered by (\\ref{hds2}). We would proceed in the following\nmanner. Since the embedding exists, we know that $M\\cap D\\left(\n\\Sigma\\right) $ is a timelike hypersurface of $\\left( D\\left(\n\\Sigma\\right) ,\\widetilde{g}\\right) .$ By the usual procedure we construct,\nfrom the geodesics that cross $M\\cap D\\left( \\Sigma\\right) $ orthogonally,\nGaussian normal coordinates in a neighbourhood $\\widetilde{O}$ of $p$ in\n$D\\left( \\Sigma\\right) .$ Now make $S=\\Sigma\\cap\\widetilde{O}$. We then\nconcentrate our analysis in the region $D\\left( S,\\widetilde{O}\\right) $.\n\nSecond, let us assume that the Sobolev norm is evaluated with respect to an\nEuclidean metric defined on $D\\left( \\Sigma\\right) $ and that in coordinates\n(\\ref{hds2}) the Euclidean metric has the canonical form, i.e., $diag\\left(\n+1,+1,+1,+1,+1\\right) $. Then, for example, $\\left\\| g^{\\ast}-\\widetilde\n{g},V\\right\\| _{m=0}=\\left[ \\int_{V}\\left| g^{\\ast}-\\widetilde{g}\\right|\n^{2}d^{4}xd\\psi\\right] ^{\\frac{1}{2}}$ where\n\\[\n\\left| g^{\\ast}-\\widetilde{g}\\right| =\\left[\n{\\textstyle\\sum\\limits_{i,j=1}^{4}}\n\\left( g_{ij}^{\\ast}-\\widetilde{g}_{ij}\\right) ^{2}+2%\n{\\textstyle\\sum\\limits_{i=1}^{4}}\n\\left( g_{i5}^{\\ast}\\right) ^{2}+\\left( g_{55}^{\\ast}-1\\right)\n^{2}\\right] ^{\\frac{1}{2}}%\n\\]\n\n\nAs we have mentioned, in the given coordinates, the embedding map is $\\left(\nx,\\psi=0\\right) $. Thus the induced metric in $N$ by the new solution\n$g^{\\ast}$ is given by $g_{ij}^{\\prime}\\left( x\\right) =g_{ij}^{\\ast}\\left(\nx,\\psi=0\\right) $. Let us show that if $\\varepsilon$ is sufficiently small\nthe induced metric $g^{\\prime}$ in $N$ is Lorentzian.\n\nMetrics which are $C^{\\infty}$ in the whole manifold belong to local Sobolev\nspaces $W^{m}$ for any $m$. Thus they obey some important inequalities which\nhold on Sobolev spaces. For example, according to lemma 7.4.1 in ref.\n\\cite{HE} (page 235), we have that for $m\\geq3$, $\\left| g^{\\ast}%\n-\\widetilde{g}\\right| \\leq P\\left\\| g^{\\ast}-\\widetilde{g},V\\right\\| _{m}$\non $V$, where $P$ is a positive constant (depending on $V$). Thus if $\\left\\|\ng^{\\ast}-\\widetilde{g},V\\right\\| _{m}<\\varepsilon$, it follows that all\ncomponents satisfy the inequality $\\left| g_{\\mu\\nu}^{\\ast}-\\widetilde\n{g}_{\\mu\\nu}\\right|
0$ which depends only on $\\widetilde{g}$\nand $V$ (an estimate of $\\xi$ is given in appendix II) such that for\n$\\varepsilon<\\xi$ we have\n\\[\n\\left| \\det g_{ij}^{\\ast}-\\det\\widetilde{g}_{ij}\\right| <\\inf_{V}\\left|\n\\det\\widetilde{g}_{ij}\\right|\n\\]\nThis means that $\\det g_{ij}^{\\ast}<0$ for all points in $V$, since\n$\\det\\left( \\widetilde{g}_{ij}\\right) $ is negative in $V$. This shows that\n$g^{\\prime}$ is Lorentzian for $\\varepsilon<\\xi$.\n\nNow let us compare the induced metric $g^{\\prime}$ with the original spacetime\nmetric $g$. It is known that for a field in a Sobolev space the norm of the\nrestriction of that field to a hypersurface is related to its norm in the\nmanifold (see \\cite{HE}, lemma 7.4.3, page 235). According to the mentioned\nlemma 7.4.3, there is a positive constant $Q$ (depending on $N$ and $V$ ) such\nthat $\\left\\Vert g^{\\ast}-\\widetilde{g},N\\right\\Vert _{m}\\leq Q\\left\\Vert\ng^{\\ast}-\\widetilde{g},V\\right\\Vert _{m+1}$, where $\\left\\Vert g^{\\ast\n}-\\widetilde{g},N\\right\\Vert _{m}$ is defined on $N$ with the induced\nEuclidean metric. Thus, for example,%\n\\[\n\\left\\Vert g^{\\ast}-\\widetilde{g},N\\right\\Vert _{0}=\\left( \\int_{N}\\left\\vert\ng^{\\ast}-\\widetilde{g}\\right\\vert ^{2}d^{4}x\\right) ^{\\frac{1}{2}}%\n\\]\nOn the other hand $\\left\\Vert g^{\\ast}-\\widetilde{g},N\\right\\Vert _{m}^{\\sim\n}\\leq\\left\\Vert g^{\\ast}-\\widetilde{g},N\\right\\Vert _{m}$ (by $\\sim$ we mean\nthat derivatives are taken only in directions tangent to $N$). Now consider\n$g^{\\prime},$ the induced metric on $\\ N$. In this special coordinates it is\neasy to see $\\left\\Vert g^{\\prime}-g,N\\right\\Vert _{m}^{\\sim}$ $\\leq\\left\\Vert\ng^{\\ast}-\\widetilde{g},N\\right\\Vert _{m}^{\\sim}$, since the induced metric has\nless components than the higher dimensional metric. Therefore $\\left\\Vert\ng^{\\prime}-g,N\\right\\Vert _{m}^{\\sim}\\leq Q\\left\\Vert g^{\\ast}-\\widetilde\n{g},V\\right\\Vert _{m+1}$.\n\nFrom the results obtained above we can now show that if the initial data are\nsufficiently close to $\\left( h,K\\right) $ on a neighbourhood of the causal\npast of $N$ in $\\Sigma$ then the induced metric by the new solution is\nLorentzian and close to $g$ in $N$.\n\nMore precisely, \\textit{let }$N$\\textit{ be an open set in }$D^{+}\\left(\n\\Sigma\\right) \\cap M$\\textit{ with compact closure in }$M$\\textit{ and }%\n$\\eta>0$\\textit{, there exist some neighborhood }$\\mathcal{U}$\\textit{ of\n}$J^{-}\\left( \\overline{N}\\right) \\cap\\Sigma$\\textit{ of compact closure in\n}$\\Sigma$\\textit{ and some }$\\delta>0$\\textit{ such that }$C^{\\infty}$\\textit{\ninitial data }$\\left( h^{\\prime},K^{\\prime}\\right) $\\textit{ close to\n}$\\left( h,K\\right) $\\textit{ in }$\\mathcal{U}$\\textit{, i.e., }$\\left\\Vert\nh^{\\prime}-h,\\mathcal{U}\\right\\Vert _{m}^{\\sim}<\\delta$\\textit{ and\n}$\\left\\Vert K^{\\prime}-K,\\mathcal{U}\\right\\Vert _{m-1}^{\\sim}<\\delta\n$\\textit{, give rise to a metric }$g^{\\ast}$\\textit{ which induces a\nLorentzian metric in }$N$\\textit{ which is near to }$g$\\textit{, i.e.,\n}$\\left\\Vert g^{\\prime}-g,N\\right\\Vert _{m-1}^{\\sim}<\\eta$\\textit{.}\n\nIn order to see this it suffices to take $m>4$ (\\cite{CB}) and to make\nappropriate choices for $\\varepsilon$ and $V$ in the Cauchy stability theorem.\nIndeed, let $V$ be any neighbourhood of $N$ in $D^{+}\\left( \\Sigma\\right) $\nwith compact closure. 
Since $\\overline{N}\\subset\\overline{V}$ then\n$J^{-}\\left( \\overline{N}\\right) \\subset J^{-}\\left( \\overline{V}\\right)\n$. Thus, taking $\\mathcal{U}$ to be some neighbourhood of $J^{-}\\left(\n\\overline{V}\\right) \\cap\\Sigma$ of compact closure in $\\Sigma$, the Cauchy\nstability theorem ensures that there exists $\\delta$ such that the new metric\ngenerated $g^{\\ast}$ satisfies $\\left\\| g^{\\ast}-\\widetilde{g},V\\right\\|\n_{m}<\\varepsilon$ for any $\\varepsilon>0$. Now if we take $\\varepsilon\n<\\min(\\xi$,$\\frac{\\eta}{Q})$, we guarantee that the induced metric $g^{\\prime\n}$ is Lorentzian in $N$ and that $\\left\\| g^{\\prime}-g,N\\right\\|\n_{m-1}^{\\sim}<\\eta$.\n\n\\section{Final Remarks}\n\nFrom this analysis we can conclude that for each ambient space whose existence\nis guaranteed by CM theorem there corresponds an initial data set\\emph{\n}$\\left( h,K,\\Sigma\\right) $\\emph{ }in respect to which it possess, in a\ncertain domain, the desirable physical properties of causality and stability.\nThis cannot be ensured by CM theorem itself, but by an indirect reasoning as\ndiscussed above. As a direct consequence of this result we have found that for\nany analytic spacetime there exist initial data in whose Cauchy development\nfor the vacuum Einstein equations it can be locally embedded. The embedding is\nCauchy stable and obeys the domain of dependence property with respect to\n$\\left( h,K,\\Sigma\\right) $.\n\nThe extension of the above result to the case where a cosmological constant is\nincluded in the field equations can easily be done. On the other hand the same\nanalysis cannot be applied to the case of a brane in a straightforward way.\nThe problem is that in this case we cannot guarantee the existence of a smooth\nspacelike hypersurface in an open set of $\\widetilde{M},$ at least by the\nmethod employed here, because, as it is known, the brane must be embedded in a\nspace whose metric has a discontinuity in the derivative along the normal\ndirection with respect to the brane. Thus this metric is not analytic in any\nopen set containing the point $p$ and for this reason we cannot use the\nCauchy-Kowalewski theorem to guarantee the existence of a function whose\ngradient is everywhere timelike as we have done in the appendix I.\nNevertheless, this question deserves further investigation.\n\nThe analyticity of the initial data in $\\Sigma$ is a restriction imposed by CM\ntheorem. However, they are not to be considered an unphysical condition.\nIndeed, it must be realized that a great part of the physical solutions, even\nin the relativistic regime, are analytic in a certain domain. The crucial\npoint here is the domain of convergence. What seems to be an unrealistic is a\nfield which is analytic in the whole manifold. Thus we can say that if we\ncould handle the initial data in the spacelike four-dimensional hypersurface,\nit would be physically feasible to prepare initial data which are analytic in\nthe interior of a compact domain $S\\subset\\Sigma$ containing the point $p$ in\norder to generate the desired embedding in the interior of $D(S)$.\n\nMoreover we have seen that $C^{\\infty}$ initial data sufficiently close to an\nanalytic initial data set give rise to embeddings of $C^{\\infty}$spacetimes\nwhich are near to the original analytic spacetime.\\emph{ }This may suggest the\npossibility of extending the CM theorem to a less restrictive differential\nclass of embedddings. 
We are currently investigating this possibility.\n\n\\section{Appendix I}\n\nConsider the five-dimensional vacuum space$\\left( \\widetilde{M},\\widetilde\n{g}\\right) $ into which the spacetime $\\left( M,g\\right) $ is locally\nembedded around the point $p\\in M$. Let us take the following equation for the\nunknown function $\\phi:$%\n\\begin{equation}\n\\widetilde{g}^{\\alpha\\beta}\\partial_{\\alpha}\\phi\\partial_{\\beta}\\phi=-1\n\\end{equation}\nIn Gaussian coordinates this equation can be written in the following form\n\\begin{equation}\n\\frac{\\partial\\phi}{\\partial\\psi}=\\pm\\sqrt{-1+\\widetilde{g}^{ij}\\partial\n_{i}\\phi\\partial_{j}\\phi}%\n\\end{equation}\nLet us consider the equation with the positive sign. According to the\nCauchy-Kowalewski theorem, there is an open set $\\widetilde{O}$ in\n$\\widetilde{M}$ containing the point $p$ where the equation has a solution\nprovided the initial condition\n\\begin{equation}\n\\left. \\phi\\right\\vert _{\\psi=0}=f\\left( x\\right)\n\\end{equation}\nensures that the right-hand side of that equation be an analytic function of\nits arguments $\\left( \\partial_{i}\\phi\\right) $ at the point $p.$ This can\nbe achieved by choosing $f\\left( x\\right) $ in such a way that the following\ninequality be satisfied at the point $p:$%\n\\begin{equation}\n\\left. \\left( g^{ij}\\partial_{i}f\\partial_{j}f\\right) \\right\\vert _{p}>1\n\\end{equation}\nThis can always be done. For example, take $f\\left( x\\right) =\\lambda\nV_{i}x^{i}$, where $\\lambda>1$ and $V_{i}$ is the component of a unit\nspacelike vector with respect to the spacetime metric at the point $p$.\n\nTherefore, the solution $\\phi$ is a function whose gradient is everywhere\ntimelike in $\\left( \\widetilde{O},\\widetilde{g}\\right) $, and this means\nthat stable causality condition holds on $\\left( \\widetilde{O},\\widetilde\n{g}\\right) $. Now let us assume without loss of generality that $\\phi\\left(\np\\right) =0.$ Considering that the gradient of $\\phi$ does not vanish in\n$\\widetilde{O}$, we know that the inverse image $\\phi^{-1}\\left( 0\\right) $\nis a hypersurface $\\Sigma$ of $\\widetilde{O}$. Since the gradient of $\\phi$\nwhich is orthogonal to $\\Sigma$ is everywhere timelike, then we can conclude\nthat $\\Sigma$ is a spacelike hypersurface. Moreover, $\\Sigma$ is achronal.\nIndeed, every future directed timelike curve which leaves $\\Sigma$ cannot\nreturn to $\\Sigma$ since $\\phi$ does not change sign along these curves. It\nhappens that an achronal spacelike hypersurface is acausal \\cite{oneill}.\nTherefore $\\Sigma$ is an acausal spacelike hypersurface in $\\left(\n\\widetilde{O},\\widetilde{g}\\right) $.\n\n\\section{\\bigskip Appendix II}\n\nIn this section we want to determine an estimate of how much near to\n$\\widetilde{g}$ the new metric $g^{\\ast}$ must be in order to induce a\nLorentzian metric in $N$. As described above, this is achieved if the\ncondition%\n\\[\n\\left| \\det g_{ij}^{\\ast}-\\det\\widetilde{g}_{ij}\\right| <\\inf_{V}\\left(\n\\det\\widetilde{g}_{ij}\\right)\n\\]\nholds on $V$. As we have seen, for $m\\geq3$, $\\left\\| g^{\\ast}-\\widetilde\n{g},V\\right\\| _{m}<\\varepsilon$ implies that $\\left| g_{\\mu\\nu}^{\\ast\n}-\\widetilde{g}_{\\mu\\nu}\\right|
>H$ we see that $A$ and $B$ vary very slowly with time \ncompared to other terms in $S_i$. The most rapidly varying terms are those containing\n$\\int\\omega_1 dt$ in the exponent. Due to these terms, $S_i$ fluctuates rapidly\nwith time. \n\nDue to the presence of a vector field $S_i$ in our theory, our cosmological\nsolution naturally contains a constant three dimensional unit vector \n$n_i$ defined in Eq. \\ref{eq:ni}. This vector defines a direction in space\nand hence breaks rotational invariance. However the background metric is\nstill isotropic since the vector $n_i$ does not contribute to Einstein's\nequations. Furthermore it is unlikely to lead to very large observable \nconsequences of the breakdown of isotropy. This is because the field $S_\\mu$\ndoes not directly interact with visible matter \\cite{Cheng,Cheng1}. \nNevertheless, it is extremely interesting to determine the cosmological \npredictions of this breakdown of isotropy in view of several observations\nwhich indicate a preferred direction in the universe \n\\cite{Birch,JP99,huts1,huts2,Oliveira,Ralston,Schwarz,Eriksen}.\nModels in which vector fields acquire nonzero vacuum or background \nvalues have also\nbeen considered by many authors \\cite{Ford,Lidsey,Armendariz,Alves,Benini,Wei1,Ackerman,Dulaney1,Yokoyama,Golovnev,Kanno,Koivisto,Bamba,Chiba1,Dulaney,Dimopoulos}. \nIt has been argued that many of these models, which lead to prolonged\nanisotropic accelerated expansion, are unstable \\cite{Himmetoglu}.\nIn our model the vector field does not directly lead to anisotropic \nexpansion, even though it acquires a non-zero background value.\n\n\\subsection{Leading Order Solution}\nAt leading order we can assume that $A$ and $B$ are time independent. The\nleading order solution for ${\\cal S}$ can then be written as\n\\begin{equation}\n{\\cal S} = {1\\over \\sqrt{a}} (A' \\cos\\theta + B'\\sin\\theta)\n\\end{equation}\nwhere $\\theta = \\int \\omega_1 dt$ and $A'$ \\& $B'$ are some real constants.\nWe also find\n\\begin{equation}\n\\dot {\\cal S} = {1\\over \\sqrt{a}} \\left(-{H\\over 2}p + \\omega_1 q\\right)\\, ,\n\\end{equation}\nwhere, $p = A'\\cos\\theta + B'\\sin\\theta$, $q=-A'\\sin\\theta + B'\\cos\\theta$.\nWe next substitute these into the 0-0 component of the Einstein's equation. \nWe define,\n\\begin{equation}\n\\rho_{S_i} = {1\\over 2a^2}\\dot S_i^2 + {1\\over 2a^2}\\omega^2 S_i^2\n\\label{eq:rhoSi}\n\\end{equation}\nThis essentially acts as the contribution to the energy density provided\nby the field $S_i$. We find,\n\\begin{equation}\n\\rho_{S_i} = {1\\over 2a^3}\\left[\\omega^2(p^2+q^2) + {H^2\\over 4}\n(p^2-q^2) - \\omega_1 H pq\\right] \n\\end{equation}\nwhere, $p^2+q^2 = A'^2+B'^2$, which is a constant. Since $\\omega_1$ is very large,\n$S_i$ is a rapidly oscillating function of time. Hence it is reasonable to\nreplace the oscillatory functions with their time averages. After averaging\nover time, $(p^2-q^2)\\rightarrow 0$ and $pq\\rightarrow 0$. Hence,\n\\begin{equation}\n\\rho_{S_i} = {1\\over 2a^3}\\left(A'^2 + B'^2\\right)\\omega^2\n\\end{equation}\nWe next consider the i-j component of the Einstein's equation. 
\nWe define,\n\\begin{equation}\n-3P_{S_i} = -{1\\over 2a^2}\\dot S_i^2 + {1\\over 2a^2}\\omega^2 S_i^2\n\\label{eq:PSi}\n\\end{equation}\nwhich effectively acts as the contribution of the $S_i$ field to pressure.\nAfter substituting the time averaged values for the oscillatory functions, we get\n\\begin{equation}\nP_{S_i} = 0 \n\\end{equation}\nHence, we find that the field $S_i$ essentially acts as the cold dark matter. \nIts energy density $\\rho_{S_i}$ varies as $1\/a^3$ and its pressure \n$P_{S_i}$ is zero at leading order. A similar phenomenon is seen in the\ncase of coherent axion oscillations \n\\cite{Preskill,Abbott,Dine,Steinhardt,KT}.\n\n\nThe modified Einstein's equations, at leading order, can now be written as\n\\begin{eqnarray}\nH^2 = {\\dot a^2\\over a^2} &=& {\\lambda\\over 3\\beta}\\eta^2 + {2\\over 3\\beta\\eta^2 a^3}\n(A'^2+B'^2)\\omega^2\n\\label{eq:H} \\\\\n2{\\ddot a\\over a} + {\\dot a^2\\over a^2} &=& {\\lambda\\over \\beta}\\eta^2\\,.\n\\label{eq:ddota}\n\\end{eqnarray}\nEq. (\\ref{eq:H}) generalizes the expression for the Hubble constant,\nEq. (\\ref{eq:hubble1}), for the case when the vector field is non-zero.\n\n\\subsection{Corrections to the leading order}\nWe next calculate the corrections to the leading order result by taking\ninto account the time dependence of the coefficients $A$ and $B$ in Eq. (\\ref{eqSisol}). Substituting for $A$ and $B$, from Eq. (\\ref{ABsol}), in Eq. (\\ref{eqSisol}), \nwe find,\n\\begin{eqnarray}\n{\\cal S} &=& {1 \\over \\sqrt{\\omega_1 a}}\\left[Q\\cos(\\theta-x) + P\\sin(\\theta-x)\n\\right]={1\\over \\sqrt{\\omega_1 a}} U\\\\\n\\dot {\\cal S} &=& {1 \\over \\sqrt{\\omega_1 a}}\\left[{H\\over 2}\\left({\\dot x\\over \n\\omega_1} -1\\right)U + (\\omega_1-\\dot x)V\\right]\n\\end{eqnarray}\nwhere, \n\\begin{eqnarray*}\nU &=& N\\cos\\theta + M\\sin\\theta\\ , V = -N\\sin\\theta+M\\cos\\theta \\\\\n\\textnormal{and}\\,\\,\\,M &=& P\\cos x + Q\\sin x\\ , N = -P\\sin x+Q\\cos x\\,.\n\\end{eqnarray*}\nHere, $x = {1 \\over 2}\\sin^{-1}{H \\over 2\\omega} = {1 \\over 2}\\cos^{-1}{\\omega_1\\over\n\\omega}$, $\\dot x = \\dot H\/4\\omega_1 = -\\dot\\omega_1\/H$ and $P$ \\& $Q$ \nare some real constants.\n\nSubstituting these in Eq. (\\ref{eq:rhoSi}), we get,\n\\begin{eqnarray}\n\\rho_{S_i} = {1\\over 2\\omega_1a^3}\\Bigg[(U^2+V^2)\\omega^2\n&+&{H^2\\over 4}(U^2-V^2) - \\omega_1H\\left(1-{\\dot x\\over \\omega_1}\\right)^2UV\n\\nonumber\\\\\n&+& (\\dot x^2-2\\omega_1\\dot x)\\left({H^2\\over 4\\omega_1^2}U^2 + V^2\\right)\n\\Bigg]\\, . \n\\end{eqnarray}\nThe third term on the right hand side reduces to $\\omega_1HUV$, since $\\dot x\/\n\\omega_1<<1$. The fourth term simplifies to $-\\dot H V^2\/2\\,$, if we\nneglect terms suppressed by factors of $H\/2\\omega_1$. Hence we find\n\\begin{equation}\n\\rho_{S_i} = {1\\over 2\\omega_1a^3}\\Bigg[(U^2+V^2)\\omega^2\n+{H^2\\over 4}(U^2-V^2) - \\omega_1HUV\n- {\\dot H\\over 2}V^2\n\\Bigg]\\, . \n\\end{equation}\nWe again substitute time averaged values for rapidly oscillating functions. \nThis sets $(U^2-V^2)\\rightarrow 0$, $UV\\rightarrow 0$ and $(U^2+V^2)\n=(M^2+N^2)= P^2+Q^2$, which is a constant. A leading order expression for\n$\\dot H$ can be computed using Eq. (\\ref{eq:H}). We find\n\\begin{equation}\n\\dot H = -{1\\over \\beta\\eta^2a^3}(A'^2+B'^2)\\omega^2\\,.\n\\end{equation}\nThus, we get,\n\\begin{equation}\n\\rho_{S_i} = {(P^2+Q^2)M_S\\over 2 a^3} + {(P^2+Q^2)\\lambda\\eta^2\n\\over 48\\beta a^3 M_S} \n+{(P^2+Q^2)(A'^2+B'^2)M_S \\over 6\\beta\\eta^2 a^6}\\,. 
\n\\end{equation}\nThe leading term varies as $a^{-3}$ as already found in the previous\nsection. Here, we also find two subleading terms. One of these falls as\n$1\/a^3$ and the second falls much faster, as\n$a^{-6}$, as the universe expands. \nWe similarly find the corrections to the pressure term $P_{S_i}$. We find that,\nusing Eq. (\\ref{eq:PSi}),\n\\begin{equation}\n-3P_{S_i} = {1\\over 2a^3\\omega_1}\\Bigg[(U^2-V^2)\\omega_1^2\n+\\omega_1HUV + {\\dot H\\over 2}V^2 \\Bigg]\\, . \n\\end{equation}\nAgain, substituting time averaged values for the rapidly oscillating functions,\nwe get\n\\begin{equation}\nP_{S_i} = -{1 \\over {6\\,\\omega_1 a^3}}{\\dot{H} \\over 4}(P^2+Q^2) \n= {(P^2+Q^2)(A'^2+B'^2)M_S \\over 24\\beta\\eta^2 a^6}\\,.\n\\end{equation}\nHence, we get a small correction term to $P_{S_i}$, which also decays rapidly\nas $a^{-6}$ as the universe expands.\n\nThe 0-0 and i-j component of the Einstein's equations can, now, be written as, \n\\begin{equation}\n{3\\beta \\over 4}\\eta^2H^2 = {\\lambda \\over 4}\\eta^4 + {(P^2+Q^2) \\over 2\\omega_1a^3} \\left(\\omega^2-{\\dot{H} \\over 4}\\right)\n\\label{eqMEE1sim}\n\\end{equation}\nand\n\\begin{equation}\n{3\\beta \\over 4}\\eta^2\\left(2{\\ddot{a} \\over a} + {\\dot{a}^2 \\over a^2}\\right)= \n{3\\lambda \\over 4}\\eta^4 + {(P^2+Q^2) \\over 2\\omega_1 a^3}{\\dot{H} \\over 4}\n\\label{eqMEE2sim}\n\\end{equation}\nrespectively. The first of the above two equations can be cast in the form,\n\\begin{equation}\n1 = \\Omega_\\Lambda + \\Omega_{S_i}\n\\label{eqSumComp}\n\\end{equation}\nwhere\n\\[\n\\Omega_\\Lambda = {\\rho_\\Lambda \\over \\rho_{cr}}\\,,\\Omega_{S_i} = {\\rho_{S_i} \\over \\rho_{cr}} \\,,\n\\]\n\\begin{eqnarray*}\n\\rho_\\Lambda = {\\lambda \\over 4}\\eta^4\\,\\,, \n\\rho_{S_i} = {(P^2+Q^2) \\over 2\\omega_1a^3}\\left(\\omega^2-{\\dot{H} \\over 4}\\right)\\,\\, {\\rm and} \\,\\, \\rho_{cr} = {3\\beta \\over 4}\\eta^2H^2\\,.\n\\end{eqnarray*}\nEq. (\\ref{eqSumComp}) looks like the $\\Lambda$CDM model with $\\Omega_M = \\Omega_{S_i}$.\n\nThus, the energy density $\\rho_{S_i}$ and the corresponding pressure $P_{S_i}$ of the vector field $S_i$, including the correction terms, are obtained as\n\\[\n\\rho_{S_i} = {c_1 \\over 2\\omega_1a^3}\\left(\\omega^2 + {c_2 \\over a^3}\\right)\n\\]\nand\n\\[\nP_{S_i}= {c_1c_2 \\over 6\\,\\omega_1a^6}\n\\]\nwhere, \n$$ c_1 = P^2+Q^2\\ ,\\ c_2={A'^2+B'^2\\over 4\\beta}\\left(1+{3\\beta\\over 2}\\right)f^2\\ .$$\nIn the limit $x\\rightarrow 0$ and $\\omega_1\\rightarrow \\omega$, we find\n$A'=Q\/\\sqrt{\\omega}$ and $B'=P\/\\sqrt{\\omega}$. \n\nWe can make an estimate of the term $(P^2+Q^2)$. Since the recent cosmological observations support a flat $\\Lambda$CDM model, we can equate the second term on right hand side of Eq. (\\ref{eqMEE1sim}), evaluated at present time, to $\\rho_{M,0}$. The contribution due to $\\dot{H}$ is negligible. Thus, we get,\n\\[\nP^2+Q^2 \\approx {3M_P^2 H_0^2\\Omega_{M} \\over 4 \\pi M_S }\n\\]\nwhere $\\Omega_M$ is computed at the current time.\n\\section{Including the contribution due to radiation}\nIn this section, we obtain a set of dynamical equations to study the evolution of different components of the universe since the beginning of the \nradiation dominated era. For this purpose we introduce, by hand, the contribution due to radiation. We expect to reproduce the usual Big Bang evolution where radiation dominates at early times, followed by dark matter and dark energy dominated eras, respectively, at late times. 
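For later reference (a standard relation which we recall here for completeness; it is consistent with the definition of $\\Omega_R$ given below), the radiation component introduced in the next paragraph obeys the equation of state $P_R=\\rho_R\/3$ and therefore dilutes as\n\\[\n\\rho_R = {\\rho_{R,0} \\over a^4}\\,,\n\\]\nwhere $\\rho_{R,0}$ denotes its value at the present epoch ($a=1$).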
We introduce a radiation term with energy density $\\rho_R $ and it's corresponding pressure term $P_R $ in the energy-momentum tensor $T_{\\mu \\nu}$. The resulting equations are solved numerically.\n\n\nIt is convenient to introduce the following variables,\n\\begin{eqnarray}\n&& X^2 = \\frac{\\lambda}{3\\beta}{\\eta^2 \\over H^2} = \\Omega_\\Lambda\\,, \\\\ \\nonumber\n&& Y^2 = {2 \\over 3\\beta}{{\\cal S}^2 \\over a^2\\eta^2} = \\Omega_1\\,, \\\\ \\nonumber\n&& Z^2 = {2 \\over 3\\beta}\\left(1+{3\\beta \\over 2}\\right){{f^2 {\\cal S}^2} \\over {a^2 H^2}} = \\Omega_2\\,, \\\\\n&& R = \\frac{4}{3\\beta}{\\rho_{R,0} \\over a^4\\eta^2H^2}=\\Omega_R \\,. \\nonumber\n\\end{eqnarray}\nHere, $\\Omega_{S_i} = \\Omega_1 + \\Omega_2$, $\\rho_{R,0}$ is the \nradiation energy density in the current era and \nthe prime denotes derivative with respect to $\\ln a$. \nHence for any function $f$,\n$$f' \\equiv {df\\over d\\ln a}={1\\over H}{df\\over dt}\\,.$$\nWith these variables, we can cast the equations (\\ref{eqMEE1}), (\\ref{eqEta}) and (\\ref{eqSi}), along with $\\rho_R$ and $P_R$, in a dimensionless form, to obtain the following set of equations :\n\\begin{eqnarray}\n&& X' = X(2-2X^2-Z^2) \\,, \\\\\\nonumber\n&& Y' = -Y(2X^2+Z^2)-{\\kappa \\over 2}XZ \\,, \\\\ \\nonumber\n&& Z' = Z(1-2X^2-Z^2)+{\\kappa \\over 2}XY \\,, \\\\ \\nonumber\n&& R' = -2R(2X^2+Z^2) \\,,\n\\end{eqnarray}\nwhere, $\\kappa =\\sqrt{{12\\beta \\over \\lambda}}{\\omega \\over \\eta} = \n \\sqrt{3}M_PM_S\/\\sqrt{2\\pi\\rho_V} $.\n\nWe studied the dynamical equations numerically from the beginning of \nradiation dominated era ($\\ln a = -29$) till today ($\\ln a = 0$).\nThe results are presented in Fig. (\\ref{omega-i}). In the graphs we only show\nresults for the range $\\ln a = [-14,0]$, as radiation is the only dominant \ncomponent in the omitted regions. The plots show the results for \nthree values of $\\kappa=50, 200, 500$. The initial conditions for these\nthree cases have been chosen so as to match the final observed values\nof $\\Omega_M$ and $\\Omega_\\Lambda$ \\cite{Dodelson,Essence,Kowalski,wmap}. \n\nAs is evident from the plots, varying $\\kappa$ varies the frequency of oscillations. Besides that the results are almost identical, as long as $\\kappa>>1$. \nThis can be understood from the expression of $\\kappa$. \nFor fixed values of $M_P$ and $\\rho_V$, \nincreasing $\\kappa$ increases $\\omega$ or\n$M_S$, which implies more rapid oscillations. Furthermore as seen\nfrom our analytic results, applicable when radiation energy density\nis negligible, we reproduce the standard $\\Lambda CDM$ model in the large\n$\\kappa$ limit. \n\n\n\\begin{center}\n\\begin{figure}\n\\includegraphics[angle=-90,width = 0.94\\textwidth]{test3.eps}\n\\caption{The ratio of energy density to the critical energy density, \n$\\Omega_i$, for different components as\n a function of $\\ln(a)$ for $\\kappa=50,200, 500$.}\n\\label{omega-i}\n\\end{figure}\n\\end{center}\n\n\\section{Conclusions}\nWe have analyzed a locally scale invariant generalization of Einstein's \ngravity. The theory requires introduction of a scalar and a vector \nfield. The scale invariance in the theory is broken by a recently introduced\nmechanism called the cosmological symmetry breaking. \nWe have shown that this theory naturally leads to both dark energy and \ndark matter. Due to scale invariance the cosmological constant term is\nabsent in the action. 
The solutions to the equations of motion admit a \nconstant, non-zero value of the scalar field, which leads to a small \ncosmological constant or dark energy. The cold dark matter arises in the\nform of vacuum oscillations of the vector field. \nWe have shown that the theory behaves very similar to the $\\Lambda CDM$ \nmodel with negligible corrections. The precise values of the energy densities\nof different components are fixed by the initial conditions. Some of the\nparameters in the model take very small values and it is necessary \nto find an explanation for such small values. Furthermore it is \nimportant to compute quantum corrections in this model since that\nwill determine whether the model suffers from fine tuning problems. \nThe model can be generalized to include the standard\nmodel fields. The \nscalar field may then be identified with the Higgs multiplet. In this\ncase the Higgs particle is absent from the particle spectrum and hence\nprovides a very interesting test of the model. Alternatively the scalar\nfield might be identified with a GUT scalar field multiplet. This possibility\nhas so far not been studied in the literature.\n\n\\bigskip\n\n{\\bf Acknowledgements :} Pavan Kumar Aluri and Naveen Kumar Singh thank \nthe Council of Scientific and Industrial Research(CSIR), India for providing \ntheir Ph.D. fellowships. Their fellowship numbers are \nF.No.09\/092(0413)\/2005-EMR-I and F.No.09\/092(0437)\/2005-EMR-I, respectively.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nProbabilistic ``paradoxes'' can have unexpected applications in computational problems,\nbut mathematical tools often do not exist to prove the reliability of the resulting computations, so instead practitioners have to rely on heuristics, intuition and experience.\nA case in point is the Kruskal Count, a probabilistic concept discovered by Martin Kruskal and popularized in a card trick by Martin Gardner, which exploits the property that for many Markov chains on $\\ZZ$ independent walks will intersect fairly quickly when started at nearby states.\nIn a 1978 paper John Pollard applied the same trick to a mathematical problem related to code breaking, the Discrete Logarithm Problem: solve for the exponent $x$, given the generator $g$ of a cyclic group $G$\nand an element $h\\in G$ such that $g^x=h$.\n\nPollard's Kangaroo method is based on running two independent random walks on a cyclic group $G$, one starting at a known state (the ``tame kangaroo'') and the other starting at the unknown but nearby value of the discrete logarithm $x$ (the ``wild kangaroo''), and terminates after the first intersection of the walks.\nAs such, in order to analyze the algorithm it suffices to develop probabilistic tools for examining the expected time until independent random walks on a cyclic group intersect, in terms of some measure of the initial distance between the walks.\n\nPast work on problems related to the Kruskal Count seem to be of little help here.\nPollard's argument of \\cite{Pol00.1} gives rigorous results for specific values of $(b-a)$, but the recurrence relations he uses can only be solved on a case-by-case basis by numerical computation.\nLagarias et.al. \\cite{LRV09.1} used probabilistic methods to study the {\\em distance traveled} before two walks intersect, but only for walks in which the number of steps until an intersection was simple to bound.\nAlthough our approach here borrows a few concepts from the study of the Rho algorithm in \\cite{KMPT07.1}, such as examining the expected number of intersections and some measure of its variance,\na significant complication in studying this algorithm is that when $b-a\\ll|G|$ the kangaroos will have proceeded only a small way around the cyclic group before the algorithm terminates.\nAs such, mixing time is no longer a useful notion, and instead a notion of convergence is required which occurs long before the mixing time.\nThe tools developed here to avoid this problem may prove of independent interest when examining other pre-mixing properties of Markov chains.\n\nThe key probabilistic results required are upper and lower bounds on expected time until intersection of independent walks on $\\ZZ$ started from nearby states.\nIn the specific case of the walk involved in the Kangaroo method these bounds are equal, and so the lead constants are sharp, which is quite rare among the analysis of algorithms based on Markov chains.\nMore specifically we have:\n\n\\begin{theorem} \\label{thm:main}\nSuppose $g,h\\in G$ are such that $h=g^x$ for some $x\\in[a,b]$.\nIf $x$ is a uniform random integer in $[a,b]$ then the expected number of group operations required by the Distinguished Points implementation of Pollard's Kangaroo method is\n$$\n(2+o(1))\\sqrt{b-a}\\,.\n$$\nThe expected number of group operations is maximized when $x=a$ or $x=b$, at\n$$\n(3+o(1))\\sqrt{b-a}\n$$\n\\end{theorem}\n\nPollard \\cite{Pol00.1} previously gave a convincing but not completely rigorous argument for the first bound,\nwhile the second 
\n\nPollard \\cite{Pol00.1} previously gave a convincing but not completely rigorous argument for the first bound, while the second was known only by a rough heuristic.\nGiven the practical significance of Pollard's Kangaroo method for solving the discrete logarithm problem, we find it surprising that there has been no fully rigorous analysis of this algorithm, particularly since it was first proposed 30 years ago in \\cite{Pol78.1}.\n\nThe paper proceeds as follows.\nA general framework for analyzing the intersection of independent walks on the integers is constructed in Section \\ref{sec:collision}.\nThis is followed in Section \\ref{sec:prelim} by a detailed description of the Kangaroo method, with analysis in Section \\ref{sec:kangaroo}.\nThe paper finishes in Section \\ref{sec:generalize} with an extension of the results to more general step sizes, resolving a conjecture of Pollard's.\n\n\n\\section{Uniform Intersection Time and a Collision Bound} \\label{sec:collision}\n\nGiven two independent instances $X_i$ and $Y_j$ of a Markov chain on $\\ZZ$, started at nearby states $X_0$ and $Y_0$ (as made precise below), we consider the expected number of steps required by the walks until they first intersect.\nObserve that if the walk is increasing, i.e. $\\P(u,v)>0$ only if $v>u$, then to examine the number of steps required by the $X_i$ walk it suffices to let $Y_j$ proceed an infinite number of steps and then evolve $X_i$ until $X_i=Y_j$ for some $i,j$.\nThus, rather than considering a specific probability $\\Pr{X_i=Y_j}$, it is better to look at $\\Pr{\\exists j:\\,X_i=Y_j}$.\nBy symmetry, the same approach will also bound the expected number of steps required by $Y_j$ before it reaches a state visited by the $X_i$ walk.\n\nFirst, however, because the walk is not ergodic, alternate notions resembling mixing time and a stationary distribution will be required.\nA heuristic argument suggests that after some warm-up period the $X_i$ walk will be sufficiently randomized that at each subsequent step the probability of colliding with the $Y_j$ walk is roughly the inverse of the average step size.\nOur replacement for mixing time will measure the number of steps required for this to become a rigorous statement:\n\n\\begin{definition}\nA {\\em stopping time} for a random walk $\\{X_i\\}_{i=0}^{\\infty}$ is a random variable $T\\in{\\mathbb N}$ such that the event $\\{T=t\\}$ depends only on $X_0,\\,X_1,\\,\\ldots,\\,X_t$.\nThe average time until stopping is $\\overline{T}=\\EE T$.\n\\end{definition}\n\n\\begin{definition} \\label{def:uniform-time}\nConsider a Markov chain $\\P$ on an infinite group $G$.\nA {\\em nearly uniform intersection time} $T(\\epsilon)$ is a stopping time such that for some $U>0$ and $\\epsilon\\geq 0$ the relation\n$$\n(1-\\epsilon)U \\leq \\Pr{\\exists j:\\,X_{T(\\epsilon)+\\Delta}=Y_j} \\leq (1+\\epsilon)U\n$$\nholds for every $\\Delta\\geq 0$ and every $(X_0,Y_0)$ in a designated set of initial states $\\Omega\\subset G\\times G$.\n\\end{definition}
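\n\nAs a concrete illustration of the quantity $U$ in this definition (an example chosen for exposition, not one drawn from the analysis below), consider the increasing walk on $\\ZZ$ which moves forward by $1$ or by $2$, each with probability $1/2$. By the discrete renewal theorem, an integer far ahead of the starting point is visited with probability approaching the inverse of the average step size, so here one expects a nearly uniform intersection time with\n$$\n\\mbox{average step size} \\;=\\; \\frac12\\cdot 1+\\frac12\\cdot 2 \\;=\\; \\frac32\\,, \\qquad U \\;=\\; \\frac{2}{3}\\,.\n$$\nThis is exactly the heuristic that the next paragraph states for general transitive, increasing, aperiodic walks.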
$gcd\\{k:\\,\\P(0,k)>0\\}=1$),\nthen one out of every $\\bar{S} = \\sum_{k=1}^\\infty k\\P(0,k)$ states is visited and a stopping time will exist satisfying\n$$\n\\frac{1-\\epsilon}{\\bar{S}} \\leq \\Pr{\\exists j:\\,X_{T(\\epsilon)+\\Delta}=Y_j} \\leq \\frac{1+\\epsilon}{\\bar{S}}\\,.\n$$\nAn obvious choice of starting states are all $Y_0\\leq X_0$, but for reasons that will be apparent later it better serves our purposes to expand to the case of $Y_0