\section{Introduction} Let $G$ be a simple undirected graph with \textit{vertex set} $V(G)$ and \textit{edge set} $E(G)$. A vertex of degree one is called a \textit{pendant vertex}. The distance between the vertices $u$ and $v$ in a graph $G$ is denoted by $d_G(u,v)$. A cycle $C$ is called \textit{chordless} if $C$ has no \textit{cycle chord}, that is, an edge not in $E(C)$ whose endpoints both lie in $V(C)$. The \textit{induced subgraph} on a vertex set $S$ is denoted by $\langle S\rangle$. A path that starts at $v$ and ends at $u$ is denoted by $\stackrel\frown{v u}$. A \textit{traceable} graph is a graph that possesses a Hamiltonian path. In a graph $G$, we say that a cycle $C$ is \textit{formed by the path} $Q$ if $ | E(C) \setminus E(Q) | = 1 $; in particular, every vertex of $C$ belongs to $V(Q)$. In 2011 the following conjecture was proposed: \begin{conjecture}(Hoffmann-Ostenhof \cite{hoffman}) Let $G$ be a connected cubic graph. Then $G$ has a decomposition into a spanning tree, a matching and a family of cycles. \end{conjecture} Conjecture \theconjecture$\,$ also appears as Problem 516 in \cite{cameron}. A few partial results are known for Conjecture \theconjecture. Kostochka \cite{kostocha} noticed that the Petersen graph, the prisms over cycles, and many other graphs admit the decomposition desired in Conjecture \theconjecture. Ozeki and Ye \cite{ozeki} proved that the conjecture holds for 3-connected cubic plane graphs. Furthermore, Bachstein \cite{bachstein} proved that Conjecture \theconjecture$\,$ is true for every 3-connected cubic graph embedded in the torus or the Klein bottle. Akbari, Jensen and Siggers \cite[Theorem 9]{akbari} showed that Conjecture \theconjecture$\,$ is true for Hamiltonian cubic graphs. In this paper, we show that Conjecture \theconjecture$\,$ holds for traceable cubic graphs. \section{Results} Before proving the main result, we need the following lemma. \begin{lemma} \label{lemma:1} Let $G$ be a cubic graph.
Suppose that $V(G)$ can be partitioned into a tree $T$ and finitely many cycles such that there is no edge between any pair of cycles (not necessarily distinct cycles), and every pendant vertex of $T$ is adjacent to at least one vertex of a cycle. Then Conjecture \theconjecture$\,$ holds for $G$. \end{lemma} \begin{proof} Since $G$ is cubic and there is no edge between any pair of cycles, every vertex of each cycle in the partition is adjacent to exactly one vertex of $T$. Denote by $Q$ the set of all edges with one endpoint in a cycle and the other endpoint in $T$. Clearly, the subgraph induced by the edge set $E(T) \cup Q$ is a spanning tree of $G$; we call it $T'$. Note that every edge between a pendant vertex of $T$ and the union of the cycles in the partition is also contained in $T'$. Thus, every pendant vertex of $T'$ is contained in a cycle of the partition. Now, consider the graph $H = G \setminus E(T')$. Every vertex of a cycle has degree $2$ in $H$, while $d_H(v) \leq 1$ for every $v \in V(T)$; hence $H$ is the disjoint union of the cycles of the partition and a matching. Together with the spanning tree $T'$, this gives the desired decomposition, so Conjecture \theconjecture$\,$ holds for $G$. \vspace{1em} \end{proof} \noindent\textbf{Remark 1.} \label{remark:1} Let $C$ be a cycle formed by the path $Q$. Then clearly there exists a chordless cycle formed by $Q$. Now, we are in
\section{Principle of nano strain-amplifier} \begin{figure*}[t!] \centering \includegraphics[width=5.4in]{Fig1} \vspace{-0.5em} \caption{Schematic sketches of nanowire strain sensors. (a), (b) Conventional non-released and released NW structures; (c), (d) the proposed nano strain-amplifier and its simplified physical model.} \label{fig:fig1} \vspace{-1em} \end{figure*} Figures \ref{fig:fig1}(a) and (b) show the conventional structures of piezoresistive sensors, in which the piezoresistive elements are either released from, or kept on, the substrate. The sensitivity ($S$) of the sensors is defined as the ratio between the relative resistance change ($\Delta R/R$) of the sensing element and the strain applied to the substrate ($\varepsilon_{sub}$): \begin{equation} S = (\Delta R/R)/\varepsilon_{sub} \label{eq:sensitivity} \end{equation} In addition, the relative resistance change $\Delta R/R$ can be calculated from the gauge factor ($GF$) of the material used to make the piezoresistive elements: $\Delta R/R = GF \varepsilon_{ind}$, where $\varepsilon_{ind}$ is the strain induced in the piezoresistor. In most conventional strain gauges, as shown in Fig. \ref{fig:fig1}(a,b), the thickness of the sensing layer is typically below a few hundred nanometers, much smaller than that of the substrate. Therefore, the strain induced in the piezoresistive elements is approximately the same as that of the substrate ($\varepsilon_{ind} \approx \varepsilon_{sub}$). Consequently, improving the sensitivity of strain sensors (i.e. enlarging $\Delta R/R$) requires electrical approaches that enlarge the gauge factor ($GF$). Nevertheless, as mentioned above, the existence of a large gauge factor in nanowires, attributed to quantum confinement or surface states, remains controversial. It is also evident from Eq.
\ref{eq:sensitivity} that the sensitivity of strain sensors can also be improved using a mechanical approach, which enlarges the strain induced in the piezoresistive element. Figure \ref{fig:fig1}(c) shows our proposed nano strain-amplifier structure, in which the piezoresistive nanowires are locally fabricated at the centre of a released bridge. The key idea of this structure is that, under a given strain applied to the substrate, a large strain is concentrated at the locally fabricated SiC nanowires. The working principle of the nano strain-amplifier is similar to that of the well-known dogbone structure, which is widely used to characterize the tensile strength of materials \cite{dogbone1,dogbone2}: when a stress is applied to a dogbone-shaped specimen, any crack will initiate in the narrow middle part, where the strain is concentrated relative to the wider outer regions. Qualitative and quantitative explanations of the nano strain-amplifier are presented as follows. For the sake of simplicity, the released micro frame and nanowire (single wire or array) of the nano strain-amplifier can be considered as solid springs, Fig. \ref{fig:fig1}(d). The stiffness of these springs is proportional to their width ($w$) and inversely proportional to their length ($l$): $K \propto w/l$. Consequently, the model of the released nanowire and micro frames can be simplified as a series of springs, where the springs with higher stiffness correspond to the micro frame, and the single spring with lower stiffness corresponds to the
\section{Introduction}\label{intro} Gas has a fundamental role in shaping the evolution of galaxies, from its accretion on to massive haloes, through cooling and the subsequent fuelling of star formation, to the triggering of extreme luminous activity around supermassive black holes. Determining how the physical state of gas in galaxies changes as a function of redshift is therefore crucial to understanding how these processes evolve over cosmological time. The standard model of the gaseous interstellar medium (ISM) in galaxies comprises a thermally bistable medium (\citealt*{Field:1969}) of dense ($n \sim 100$\,cm$^{-3}$) cold neutral medium (CNM) structures, with kinetic temperatures of $T_{\rm k} \sim 100$\,K, embedded within a lower-density ($n \sim 1$\,cm$^{-3}$) warm neutral medium (WNM) with $T_{\rm k} \sim 10^{4}$\,K. The WNM shields the cold gas and is in turn ionized by background cosmic rays and soft X-rays (e.g. \citealt{Wolfire:1995, Wolfire:2003}). A further hot ($T_{\rm k} \sim 10^{6}$\,K) ionized component was introduced into the model by \cite{McKee:1977}, to account for heating by supernova-driven shocks within the inter-cloud medium. In the local Universe, this paradigm has successfully withstood decades of observational scrutiny, although there is some evidence (e.g. \citealt{Heiles:2003b}; \citealt*{Roy:2013b}; \citealt{Murray:2015}) that a significant fraction of the WNM may exist at temperatures lower than expected for global conditions of stability, requiring additional dynamical processes to maintain local thermodynamic equilibrium. Since atomic hydrogen (\mbox{H\,{\sc i}}) is one of the most abundant components of the neutral ISM and readily detectable through either the 21\,cm or Lyman $\alpha$ lines, it is often used as a tracer of the large-scale distribution and physical state of neutral gas in galaxies. The 21\,cm line has successfully been employed in surveying the neutral ISM in the Milky Way (e.g.
\citealt{McClure-Griffiths:2009,Murray:2015}), the Local Group (e.g. \citealt{Kim:2003,Bruns:2005,Braun:2009,Gratier:2010}) and low-redshift Universe (see \citealt{Giovanelli:2016} for a review). However, beyond $z \sim 0.4$ (\citealt{Fernandez:2016}) \mbox{H\,{\sc i}} emission from individual galaxies becomes too faint to be detectable by current 21\,cm surveys and so we must rely on absorption against suitably bright background radio (21\,cm) or UV (Lyman-$\alpha$) continuum sources to probe the cosmological evolution of \mbox{H\,{\sc i}}. The bulk of neutral gas is contained in high-column-density damped Lyman-$\alpha$ absorbers (DLAs, $N_{\rm HI} \geq 2 \times 10^{20}$\,cm$^{-2}$; see \citealt*{Wolfe:2005} for a review), which at $z \gtrsim 1.7$ are detectable in the optical spectra of quasars. Studies of DLAs provide evidence that the atomic gas in the distant Universe appears to be consistent with a multi-phase neutral ISM similar to that seen in the Local Group (e.g. \citealt*{Lane:2000}; \citealt*{Kanekar:2001c}; \citealt*{Wolfe:2003b}). However, there is some variation in the cold and warm fractions measured throughout the DLA population (e.g. \citealt*{Howk:2005}; \citealt{Srianand:2005, Lehner:2008}; \citealt*{Jorgenson:2010}; \citealt{Carswell:2011, Carswell:2012, Kanekar:2014a}; \citealt*{Cooke:2015}; \citealt*{Neeleman:2015}). The 21\,cm spin temperature affords us an important line of enquiry in unravelling the physical state of high-redshift atomic gas. This quantity is sensitive to the processes that excite the ground state of \mbox{H\,{\sc i}} in the ISM (\citealt{Purcell:1956,Field:1958,Field:1959b,Bahcall:1969}) and therefore dictates the detectability of the 21\,cm line in absorption. In the CNM the spin temperature is governed by collisional excitation and so is driven to the
\section{Introduction} Given $\rho>0$, we consider the problem \begin{equation}\label{eq:main_prob_U} \begin{cases} -\Delta U + \lambda U = |U|^{p-1}U & \text{in }\Omega,\smallskip\\ \int_\Omega U^2\,dx = \rho, \quad U=0 & \text{on }\partial\Omega, \end{cases} \end{equation} where $\Omega\subset{\mathbb{R}}^N$ is a bounded Lipschitz domain, $1<p<2^*-1$, and both $U\in H^1_0(\Omega)$ and $\lambda\in{\mathbb{R}}$ are unknown. More precisely, we investigate conditions on $p$ and $\rho$ (and also on $\Omega$) for the solvability of the problem. The main interest in \eqref{eq:main_prob_U} lies in the investigation of standing wave solutions for the nonlinear Schr\"odinger equation \[ i\frac{\partial \Phi}{\partial t}+\Delta \Phi+ |\Phi|^{p-1}\Phi=0,\qquad (t,x)\in {\mathbb{R}}\times \Omega \] with Dirichlet boundary conditions on $\partial\Omega$. This equation appears in several different physical models, both in the case $\Omega={\mathbb{R}}^N$ \cite{MR2002047} and on bounded domains \cite{MR1837207}. In particular, the latter case arises in nonlinear optics and in the theory of Bose-Einstein condensation, also as a limiting case of the equation on ${\mathbb{R}}^N$ with a confining potential. When searching for solutions with the wave function $\Phi$ factorized as $\Phi(x,t)=e^{i\lambda t} U(x)$, one obtains that the real valued function $U$ must solve \begin{equation}\label{eq:NLS} -\Delta U + \lambda U = |U|^{p-1}U ,\qquad U\in H^1_0(\Omega), \end{equation} and two points of view are available. The first possibility is to assign the chemical potential $\lambda\in{\mathbb{R}}$ and search for solutions of \eqref{eq:NLS} as critical points of the related action functional. The literature concerning this approach is huge and we do not even make an attempt to summarize it here.
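For later reference, the action functional associated with this first point of view can be written (in a standard form, consistent with the sign conventions of \eqref{eq:NLS}) as
\[
\mathcal{A}_\lambda(U) := \frac12\int_\Omega|\nabla U|^2\,dx + \frac{\lambda}{2}\int_\Omega U^2\,dx - \frac{1}{p+1}\int_\Omega|U|^{p+1}\,dx,
\]
whose critical points on $H^1_0(\Omega)$ are precisely the solutions of \eqref{eq:NLS} with the prescribed $\lambda$.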
On the contrary, we focus on the second possibility, which consists in considering $\lambda$ as part of the unknown and prescribing the mass (or charge) $\|U\|_{L^2(\Omega)}^2$ as a natural additional condition. To the best of our knowledge, the only previous paper dealing with this case, in bounded domains, is \cite{MR3318740}, which we describe below. The problem of searching for normalized solutions in ${\mathbb{R}}^N$, with non-homogeneous nonlinearities, has been investigated more extensively \cite{MR3009665,MR1430506}, even though the methods used there cannot be easily extended to bounded domains, where dilations are not allowed. Very recently, also the case of partial confinement has been considered \cite{BeBoJeVi_2016}. Solutions of \eqref{eq:main_prob_U} can be identified with critical points of the associated energy functional \[ \mathcal{E}(U) = \frac12\int_\Omega|\nabla U|^2\,dx - \frac{1}{p+1} \int_\Omega|U|^{p+1}\,dx \] restricted to the mass constraint \[ {\mathcal{M}}_\rho=\{U\in H_0^1(\Omega) : \|U\|_{L^2(\Omega)}^2=\rho\}, \] with $\lambda$ playing the role of a Lagrange multiplier. A crucial role in the discussion of the above problem is played by the Gagliardo-Nirenberg inequality: for any $\Omega$ and for any $v\in H^1_0(\Omega)$, \begin{equation} \label{sobest} \|v\|^{p+1}_{L^{p+1}(\Omega)} \leq C_{N,p} \| \nabla v \|_{L^2(\Omega)}^{N(p-1)/2} \| v \|_{L^2(\Omega)} ^{(p+1)-N(p-1)/2}, \end{equation} the equality holding only when $\Omega={\mathbb{R}}^N$ and $v=Z_{N,p}$, the positive solution of $-\Delta Z + Z = Z^{p}$ (which is unique up to translations \cite{MR969899}). Accordingly, the exponent $p$ can be classified in relation with the so-called \emph{$L^2$-critical exponent} $1+4/N$ (throughout the paper, $p$ will always be Sobolev-subcritical, and its criticality will be understood in the $L^2$ sense). Indeed we have that ${\mathcal{E}}$ is bounded below and coercive on ${\mathcal{M}}_\rho$ if and only if either $p$ is subcritical, or it is
\section{Introduction} \label{sec:intro} Despite the immense popularity and availability of online video content via outlets such as YouTube and Facebook, most work on object detection focuses on static images. Given the breakthroughs of deep convolutional neural networks for detecting objects in static images, the application of these methods to video might seem straightforward. However, motion blur and compression artifacts cause substantial frame-to-frame variability, even in videos that appear smooth to the eye. These attributes complicate prediction tasks like classification and localization. Object-detection models trained on images tend not to perform competitively on videos owing to domain shift factors \cite{KalogeitonFS15}. Moreover, object-level annotations in popular video datasets can be extremely sparse, impeding the development of better video-based object detection models. Girshick \emph{et al}\bmvaOneDot \cite{RCNN_girshick14CVPR} demonstrate that even given scarce labeled training data, high-capacity convolutional neural networks can achieve state-of-the-art detection performance if first pre-trained on a related task with abundant training data, such as 1000-way ImageNet classification. Following pretraining, the networks can be fine-tuned to a related but distinct domain. Also relevant to our work, the recently introduced models Faster R-CNN \cite{Faster_RCNN_RenHG015} and You Only Look Once (YOLO) \cite{YOLO_RedmonDGF15} unify the tasks of classification and localization. These methods, which are accurate and efficient, solve both tasks through a single model, bypassing the separate object proposal methods used by R-CNN \cite{RCNN_girshick14CVPR}. In this paper, we introduce a method to extend unified object recognition and localization to the video domain. Our approach applies transfer learning from the image domain to video frames.
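The two-stage idea can be sketched as follows. This is a conceptual illustration only, not our actual architecture: the temporal model is replaced by a simple exponential moving average standing in for the recurrent network, and `detector` is any per-frame class scorer.

```python
import numpy as np

def pseudo_label(frames, detector):
    """Stage 1: a 'pseudo-labeler' assigns provisional class-score vectors,
    one per frame."""
    return [detector(f) for f in frames]

def refine(provisional, alpha=0.5):
    """Stage 2 stand-in: temporal smoothing of the provisional scores,
    carrying context across neighboring frames (the role played by the RNN)."""
    refined, state = [], provisional[0]
    for p in provisional:
        state = alpha * state + (1 - alpha) * p
        refined.append(state)
    return refined

def total_variation(seq):
    """Total frame-to-frame variation of a sequence of score vectors."""
    return sum(float(np.abs(a - b).sum()) for a, b in zip(seq[1:], seq[:-1]))
```

By construction, the refined sequence varies less between consecutive frames than the provisional one, which is the kind of temporal consistency the refinement stage is intended to provide.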
Additionally, we present a novel recurrent neural network (RNN) method that refines predictions by exploiting contextual information in neighboring frames. In summary, we contribute the following: \begin{itemize} \item A new method for refining video-based object detections, consisting of two parts: (i) a \emph{pseudo-labeler}, which assigns provisional labels to all available video frames, and (ii) a recurrent neural network, which reads in a sequence of provisionally labeled frames, using the contextual information to output refined predictions. \item An effective training strategy utilizing (i) category-level weak supervision at every time-step, (ii) localization-level strong supervision at the final time-step, (iii) a penalty encouraging prediction smoothness at consecutive time-steps, and (iv) similarity constraints between \emph{pseudo-labels} and the prediction output at every time-step. \item An extensive empirical investigation demonstrating that on the YouTube Objects \cite{youtube-Objects} dataset, our framework achieves a mean average precision (mAP) of $68.73$ on test data, compared to a best published result of $37.41$ \cite{Tripathi_WACV16} and $61.66$ for a domain-adapted YOLO network \cite{YOLO_RedmonDGF15}. \end{itemize} \section{Methods} \label{sec:method} In this work, we aim to refine object detection in video by utilizing contextual information from neighboring video frames. We accomplish this through a two-stage process. First, we train a \emph{pseudo-labeler}, that is, a domain-adapted convolutional neural network for object detection, trained individually on the labeled video frames. Specifically, we fine-tune the YOLO object detection network \cite{YOLO_RedmonDGF15}, which was originally trained for the 20-class PASCAL VOC \cite{PASCAL_VOC} dataset, to the YouTube Objects \cite{youtube-Objects} dataset. When fine-tuning to the 10 sub-categories present in the video dataset, our
\section{Introduction} The wave-particle duality is an alternative statement of the complementarity principle, and it establishes the relation between the corpuscular and the undulatory nature of quantum entities \cite{Bohr1928}. It can be illustrated in a two-way interferometer, where the apparatus can be set to observe the particle behavior, when a single path is taken, or the wave-like behavior, when the impossibility to define a path is shown by the interference. A modern approach to the wave-particle duality includes quantitative relations between quantities that represent the possible \textit{a priori} knowledge of the which-way information (predictability) and the ``quality'' of the interference fringes (visibility). Several publications in the literature \cite{Bohr1928, Wootters1979, Summhammer1987, Greenberger1988, Mandel1991} contributed to the formulation of the quantitative analysis of the wave-particle duality. For a bipartite system, entanglement can give extra which-way (path) information about the interferometric possibilities. The quantitative relations for systems composed of two particles were extensively studied in \cite{Jaeger1993, Jaeger1995, Englert1996, Englert2000, Scully1989, Scully1991, Mandel1995, Tessier2005, Jakob2010, Miatto2015,Bagan2016, Coles2016}. Therefore, understanding the behavior of such quantities, in various regimes and situations, is essential to answer fundamental and/or technological questions of quantum theory \cite{Greenberger1999}.
Concerning the study of the dynamical behavior of complementarity quantities, an example is the so-called \textit{quantum eraser}, where an increase or preservation of the visibility of an interferometer experiment is caused when the which-path information is erased. Since its proposal \cite{Scully1982}, it has been investigated carefully, both theoretically and experimentally (see for example Refs. \cite{Englert2000, Scully1991, Mandel1995, Storey1994, Wiseman1995, Mir2007, Luis1998, Busch2006, Rossi2013, Walborn2002, Teklemariam2001, Teklemariam2002, Kim2000, Salles2008, Heuer2015}). In Ref.~\cite{Rossi2013}, the authors explore the quantum eraser problem in a multipartite model. Initially, a bipartite qubit system is prepared in a maximally entangled state and interacts with $N$ other qubits. This model can be implemented by taking as the qubits of interest the modes of two cavities and the $N$ qubits as two-level atoms. In this work \cite{Rossi2013}, an increase of visibility is achieved by performing appropriate projective measurements.
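To make these quantities concrete, the following minimal sketch (ours, purely illustrative) evaluates, for a pure two-qubit state, the predictability $P=|\rho_{00}-\rho_{11}|$ and visibility $V=2|\rho_{01}|$ of the reduced single-qubit state, together with the concurrence $C$; for pure states these satisfy the complementarity relation $P^2+V^2+C^2=1$ of Jakob and Bergou \cite{Jakob2010}.

```python
import numpy as np

# Pauli-y matrix, used in the pure-state concurrence formula
sy = np.array([[0, -1j], [1j, 0]])

def complementarity(psi):
    """Predictability, visibility and concurrence of a pure two-qubit state
    psi (length-4 complex vector in the basis |00>,|01>,|10>,|11>)."""
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    # reduced state of qubit A: partial trace over qubit B
    rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    P = float(abs(rho_a[0, 0] - rho_a[1, 1]))   # predictability
    V = float(2 * abs(rho_a[0, 1]))             # visibility
    # concurrence of a pure state: |<psi| sigma_y x sigma_y |psi*>|
    C = float(abs(psi.conj() @ np.kron(sy, sy) @ psi.conj()))
    return P, V, C
```

For a maximally entangled state the reduced state is maximally mixed, so $P=V=0$ while $C=1$, consistent with the zero initial visibility of the model above.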
An intrinsic relation between the complementarity quantities and the performed measurements is outlined: since the measurements were made in order to obtain an increase of the visibility, the remaining quantities (entanglement, as measured by the concurrence, and the predictability) must obey a ``complementary'' behavior.
\newcommand{\sect}[1]{\section{#1}\setcounter{equation}{0}} \newcommand{\subsect}[1]{\subsection{#1}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \font\mbn=msbm10 scaled \magstep1 \font\mbs=msbm7 scaled \magstep1 \font\mbss=msbm5 scaled \magstep1 \newfam\mbff \textfont\mbff=\mbn \scriptfont\mbff=\mbs \scriptscriptfont\mbff=\mbss \def\mbf{\fam\mbff} \def{\mbf T}{{\fam\mbff T}} \def{\mbf P}{{\fam\mbff P}} \def{\mbf F}{{\fam\mbff F}} \newcommand{{\mbf D}} {\mathbb{D}} \newcommand{{\mbf R}} { \mathbb{R}} \newcommand{\cH} {{\mathcal H}} \newcommand{\cP} {{\mathcal P}} \newcommand{{\mbf N}} { \mathbb{N}} \newcommand{{\mbf Z}} {\mathbb{Z} } \newcommand{\mbf C} {{\mathbb C}} \newcommand {\mbf Q} {{\mathbb Q}} \newtheorem{Th}{Theorem}[section] \newtheorem{Lm}[Th]{Lemma} \newtheorem{C}[Th]{Corollary} \newtheorem{D}[Th]{Definition} \newtheorem{Proposition}[Th]{Proposition} \newtheorem{R}[Th]{Remark} \newtheorem{Problem}[Th]{Problem} \newtheorem{E}[Th]{Example} \newtheorem*{P1}{Problem 1} \newtheorem*{P2}{Problem 2} \newtheorem*{P3}{Problem 3} \begin{document} \title[On Properties of Geometric Preduals of ${\mathbf C^{k,\omega}}$ Spaces]{On Properties of Geometric Preduals of ${\mathbf C^{k,\omega}}$ Spaces} \author{Alexander Brudnyi} \address{Department of Mathematics and Statistics\newline \hspace*{1em} University of Calgary\newline \hspace*{1em} Calgary, Alberta\newline \hspace*{1em} T2N 1N4} \email{abrudnyi@ucalgary.ca} \keywords{Predual space, Whitney problems, Finiteness Principle, linear extension operator, approximation property, dual space, Jackson operator, weak$^*$ topology, weak Markov set} \subjclass[2010]{Primary 46B20; Secondary 46E15} \thanks{Research supported in part by NSERC} \date{} \begin{abstract} Let $C_b^{k,\omega}({\mbf R}^n)$ be the Banach space of $C^k$ functions on ${\mbf R}^n$ bounded together with all derivatives of order $\le k$ and with derivatives of order $k$ having moduli of continuity majorated by $c\cdot\omega$, $c\in{\mbf R}_+$, for some
$\omega\in C({\mbf R}_+)$. Let $C_b^{k,\omega}(S):=C_b^{k,\omega}({\mbf R}^n)|_S$ be the trace space to a closed subset $S\subset{\mbf R}^n$. The geometric predual $G_b^{k,\omega}(S)$ of $C_b^{k,\omega}(S)$ is the minimal closed subspace of the dual $\bigl(C_b^{k,\omega}({\mbf R}^n)\bigr)^*$ containing evaluation functionals of points in $S$. We study geometric properties of spaces $G_b^{k,\omega}(S)$ and their relations to the classical Whitney problems on the characterization of trace spaces of $C^k$ functions on ${\mbf R}^n$. \end{abstract} \maketitle \section{Formulation of Main Results} \subsection{Geometric Preduals of ${\mathbf C^{k,\omega}}$ Spaces} In what follows we use the standard notation of Differential Analysis. In particular, $\alpha=(\alpha_1,\dots,\alpha_n)\in \mathbb Z^n_+$ denotes a multi-index and $|\alpha|:=\sum^n_{i=1}\alpha_i$. Also, for $x=(x_1,\dots, x_n)\in\mathbb R^n$, \begin{equation}\label{eq1} x^\alpha:=\prod^n_{i=1}x^{\alpha_i}_i \ \ \text{ and} \ \ D^\alpha:=\prod^n_{i=1}D^{\alpha_i}_i,\quad {\rm where}\quad D_i:=\frac{\partial}{\partial x_i}. \end{equation} Let $\omega$ be a nonnegative function on $(0,\infty)$ (referred to as {\em modulus of continuity}) satisfying the following conditions: \begin{enumerate} \item[(i)] $\omega(t)$ and $\displaystyle \frac {t}{\omega( t)}$ are nondecreasing functions on $(0,\infty)$;\medskip \item[(ii)] $\displaystyle \lim_{t\rightarrow 0^+}\omega(t)=0$. 
\end{enumerate} \begin{D}\label{def1} $C^{k,\omega}_b(\mathbb R^n)$ is the Banach subspace of functions $f\in C^k(\mathbb R^n)$ with norm \begin{equation}\label{eq3} \|f\|_{C^{k,\omega}_b(\mathbb R^n)}:=\max\left(\|f\|_{C^k_b(\mathbb R^n)}, |f|_{C^{k,\omega}_b(\mathbb R^n)}\right) , \end{equation} where \begin{equation}\label{eq4} \|f\|_{C^k_b(\mathbb R^n)}:=\max_{|\alpha|\le k}\left\{\sup_{x\in\mathbb R^n}|D^\alpha f(x)|\right\} \end{equation} and \begin{equation}\label{eq5} |f|_{C^{k,\omega}_b(\mathbb R^n)}:=\max_{|\alpha|=k}\left\{\sup_{x,y\in\mathbb R^n,\, x\ne y}\frac{|D^\alpha f(x)-D^\alpha f(y)|}{\omega(\|x-y\|)}\right\}. \end{equation} Here $\|\cdot\|$ is the Euclidean norm of $\mathbb R^n$. \end{D} If $S\subset\mathbb R^n$ is a closed subset, then by $C_b^{k,\omega}(S)$ we denote the trace space $C_b^{k,\omega}(\mathbb R^n)|_{S}$, equipped with the quotient norm \begin{equation}\label{eq1.5} \|g\|_{C_b^{k,\omega}(S)}:=\inf\{\|\tilde g\|_{C_b^{k,\omega}(\mathbb R^n)}\, :\, \tilde g\in C_b^{k,\omega}({\mbf R}^n),\ \tilde g|_{S}=g\}. \end{equation} Let $\bigl(C_b^{k,\omega}(\mathbb R^n)\bigr)^*$ be the dual of $C_b^{k,\omega}(\mathbb R^n)$. Clearly, each evaluation functional $\delta_x^0$ at $x\in\mathbb R^n$ (i.e., $\delta_x^0(f):=f(x)$, $f\in C_b^{k,\omega}({\mbf R}^n)$) belongs to $\bigl(C_b^{k,\omega}(\mathbb R^n)\bigr)^*$ and has norm one. By $G^{k,\omega}_b(S)\subset \bigl(C_b^{k,\omega}(\mathbb R^n)\bigr)^*$ we denote the minimal closed subspace containing all $\delta_x^0$, $x\in S$. \begin{Th}\label{te1.2} The restriction map to the set $\{\delta_s^0\, :\, s\in S\}\subset G_b^{k,\omega}(S)$ determines an isometric isomorphism between the dual of $G^{k,\omega}_b(S)$ and $C^{k,\omega}_b(S)$. \end{Th} In what follows, $G_b^{k,\omega}(S)$ will be referred to as the {\em geometric predual}
\section{Introduction} \label{sec:introduction} In recent years there has been a resurgence of interest in the properties of metastable states, due mostly to the studies of the jammed states of hard sphere systems; see for reviews Refs. \onlinecite{charbonneau16, baule16}. There are many topics to study, including for example the spectrum of small perturbations around the metastable state, i.e. the phonon excitations and the existence of a boson peak, and whether the Edwards hypothesis works for these states. In this paper we shall study some of these topics in the context of classical Heisenberg spin glasses, both in the presence and absence of a random magnetic field. Here the metastable states which we study are just the minima of the Hamiltonian, and so are well-defined outside the mean-field limit. It has been known for some time that there are strong connections between spin glasses and structural glasses~\cite{tarzia2007glass,fullerton2013growing, moore06}. It has been argued in very recent work~\cite{baity2015soft} that the study of the excitations in classical Heisenberg spin glasses provides the opportunity to contrast with similar phenomenology in amorphous solids~\cite{wyart2005geometric, charbonneau15}. The minima and excitations about the minima in Heisenberg spin glasses have been studied for many years \cite{bm1981, yeo04, bm1982}, but only in the absence of external fields. In Sec. \ref{sec:models} we define the models to be studied as special cases of the long-range one-dimensional $m$-component vector spin glass, where the exchange interactions $J_{ij}$ decrease with the distance $r_{ij}$ between the spins at sites $i$ and $j$ as $1/r_{ij}^{\sigma}$. The spin $\mathbf{S}_i$ is an $m$-component unit vector: $m=1$ corresponds to the Ising model, $m=2$ to the XY model, and $m=3$ to the Heisenberg model.
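For concreteness, a minimal numerical sketch (ours, written for illustration, not the code used for the results in this paper) of this family of models and of the local-field alignment quench used below to locate minima; it assumes the standard Hamiltonian $H=-\sum_{i<j}J_{ij}\,\mathbf{S}_i\cdot\mathbf{S}_j-\sum_i \mathbf{h}_i\cdot\mathbf{S}_i$, and the system size, $\sigma$, and Gaussian couplings are arbitrary choices:

```python
import numpy as np

def make_couplings(n, sigma, rng):
    """Gaussian couplings J_ij decaying as 1/r_ij^sigma on a ring of n sites."""
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = min(abs(i - j), n - abs(i - j))  # chord distance on the ring
            J[i, j] = J[j, i] = rng.normal() / r**sigma
    return J

def energy(J, spins, h=None):
    """Energy per spin of an (n, m) array of m-component unit spins."""
    e = -0.5 * np.sum(spins * (J @ spins))
    if h is not None:
        e -= np.sum(h * spins)
    return e / len(spins)

def align_quench(J, spins, h=None, tol=1e-8, max_sweeps=2000):
    """Sequentially align each spin with its local field until convergence;
    this monotonically lowers the energy and ends in a local minimum."""
    n, m = spins.shape
    h = np.zeros((n, m)) if h is None else h
    for _ in range(max_sweeps):
        moved = 0.0
        for i in range(n):
            field = J[i] @ spins + h[i]          # local field on spin i
            norm = np.linalg.norm(field)
            if norm > 0:
                new = field / norm
                moved = max(moved, np.linalg.norm(new - spins[i]))
                spins[i] = new
        if moved < tol:
            break
    return spins
```

At a fixed point every spin is parallel to its local field, which is precisely the condition defining a minimum of the Hamiltonian reached by this greedy sequential dynamics.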
By tuning the parameter $\sigma$, one can have access to the Sherrington-Kirkpatrick (SK) model and, on dilution, to the Viana-Bray (VB) model, and indeed to a range of universality classes from mean-field type to short-range type \cite{leuzzi2008dilute}, although in this paper only two special cases are studied: the SK model and the VB model. We intend to study the cases that correspond to short-range models in a future publication. In Sec. \ref{sec:metastability} we have used numerical methods to learn about the metastable minima of the SK and VB models. Our main procedure for finding the minima is to start from a random configuration of spins and then align each spin with the local field produced by its neighbors and the external random field, if present. The process is continued until all spins are aligned with their local fields. This procedure finds local minima of the Hamiltonian. In the thermodynamic limit, the energy per spin $\varepsilon$ of these states reaches a characteristic value, which is the same for almost all realizations of the bonds and random external fields, but depends slightly on the dynamical algorithm used for selecting the spin to be updated, e.g. the ``polite'', ``greedy'' or Glauber dynamics, or the sequential algorithm used in the numerical work in this paper \cite{newman:99,parisi:95}. In the context of Ising spin glasses in zero random fields
\section{Introduction} \label{sec:intro} The production of hadrons and jets at a future Electron Ion Collider (EIC) will play a central role in understanding the structure of the protons and nuclei which comprise the visible matter in the universe. Measurements of inclusive jet and hadron production with transversely polarized protons probe novel phenomena within the proton such as the Sivers function~\cite{Kang:2011jw}, and address fundamental questions concerning the validity of QCD factorization. Event shapes in jet production can give insight into the nuclear medium and its effect on particle propagation~\cite{Kang:2012zr}. The precision study of these processes at a future EIC will provide a much sharper image of proton and nucleus structure than is currently available. Progress is needed on both the experimental and theoretical fronts to achieve this goal. Currently, much of our knowledge of proton spin phenomena, such as the global fit to helicity-dependent structure functions~\cite{deFlorian:2008mr}, comes from comparison to predictions at the next-to-leading order (NLO) in the strong coupling constant. Theoretical predictions at the NLO level for jet and hadron production in DIS suffer from large theoretical uncertainties from uncalculated higher-order QCD corrections~\cite{Hinderer:2015hra} that will eventually hinder the precision determination of proton structure. In some cases even NLO is unknown, and an LO analysis fails to describe the available data~\cite{Gamberg:2014eia}. Given the high luminosity and expected precision possible with an EIC, it is desirable to extend the theoretical precision beyond what is currently available. For many observables, a prediction to next-to-next-to-leading order (NNLO) in the perturbative QCD expansion will ultimately be needed. 
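Schematically, writing $\sigma$ for the single-inclusive cross section and $\sigma^{(i)}$ for the perturbative coefficients, the expansion in question is
\[
\sigma = \alpha^2\left[\sigma^{(0)} + \alpha_s\,\sigma^{(1)} + \alpha_s^2\,\sigma^{(2)} + {\cal O}\left(\alpha_s^3\right)\right],
\]
so that the NLO corrections enter at ${\cal O}(\alpha^2\alpha_s)$, while the NNLO contributions enter at ${\cal O}(\alpha^2\alpha_s^2)$.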
An important step toward improving the achievable precision for jet production in electron-nucleon collisions was taken in Ref.~\cite{Hinderer:2015hra}, where the full NLO ${\cal O}(\alpha^2\alpha_s)$ corrections to unpolarized $lN \to jX$ and $lN \to hX$ scattering were obtained. Focusing on single-inclusive jet production for this discussion, it was pointed out that two distinct processes contribute: the deep-inelastic scattering (DIS) process $lN \to ljX$, where the final-state lepton is resolved, and $\gamma N \to jX$, where the initial photon is almost on-shell and the final-state lepton is emitted collinear to the initial-state beam direction. Both processes were found to contribute for expected EIC parameters, and the shift of the leading-order prediction was found to be both large and dependent on the final-state jet kinematics. Our goal in this manuscript is to present the full ${\cal O}(\alpha^2\alpha_s^2)$ NNLO contributions to single-inclusive jet production in electron-nucleon collisions, including all the relevant partonic processes discussed above. Achieving NNLO precision for jet and hadron production is a formidable task. The relevant Feynman diagrams which give rise to the NNLO corrections consist of two-loop virtual corrections, one-loop real-emission diagrams, and double-real emission contributions. Since these three pieces are separately infrared divergent, some way of regularizing and canceling these divergences must be found. However, theoretical techniques for achieving this cancellation in the presence of final-state jets have seen great recent progress. The introduction of the $N$-jettiness subtraction scheme for higher-order QCD calculations~\cite{Boughezal:2015dva,Gaunt:2015pea} has led to the first complete NNLO descriptions of jet production processes in hadronic
\section{Introduction} \label{sec:intro} Compact stars have a large number of pulsation modes that have been extensively studied since the seminal work of Chandrasekhar on radial oscillations \cite{APJ140:417:1964,PRL12:114:1964}. In general, these modes are very difficult to observe in the electromagnetic spectrum; therefore most efforts have concentrated on gravitational wave asteroseismology in order to characterise the frequency and damping times of the modes that emit gravitational radiation. In particular, various works have focused on the oscillatory properties of pure hadronic stars, hybrid stars and strange quark stars, trying to find signatures of the equation of state of high density neutron star matter (see \cite{AA325:217:1997,IJMPD07:29:1998,AA366:565:2001,APJ579:374:2002,PRD82:063006:2010,EL105:39001:2014} and references therein). More recently, compact star oscillations have attracted attention in the context of Soft Gamma-ray Repeaters (SGRs), which are persistent X-ray emitters that sporadically emit short bursts of soft $\gamma$-rays. In the quiescent state, SGRs have an X-ray luminosity of $\sim 10^{35}$ erg/s, while during the short $\gamma$-bursts they release up to $10^{42}$ erg/s in episodes of about 0.1 s. Exceptionally, some of them have emitted very energetic giant flares which commenced with brief $\gamma$-ray spikes of $\sim 0.2$ s, followed by tails lasting hundreds of seconds. Hard spectra (up to 1 MeV) were observed during the spike, and the hard X-ray emission of the tail gradually faded, modulated at the neutron star (NS) rotation period. The analysis of X-ray data of the tails of the giant flares of SGR 0526-66, SGR 1900+14 and SGR 1806-20 revealed the presence of quasi-periodic oscillations (QPOs) with frequencies ranging from $\sim$ 18 to 1840 Hz \cite{APJ628:L53:2005,APJ632:L111:2005,AA528:A45:2011}.
There are also candidate QPOs at higher frequencies, up to $\sim 4$ kHz, in other bursts but with lower statistical significance \cite{ElMezeini2010}; in fact, according to a more recent analysis, only one burst shows a marginally significant signal, at a frequency of around 3706 Hz \cite{Huppenkothen2013}. Several characteristics of SGRs are usually explained in terms of the \textit{magnetar} model, which assumes that the object is a neutron star with an unusually strong magnetic field ($B \sim 10^{15}$ G) \cite{Woods2006}. In particular, giant flares are associated with catastrophic rearrangements of the magnetic field. Such violent phenomena are expected to excite a variety of oscillation modes in the stellar crust and core. In fact, recent studies have accounted for magnetic coupling between the crust and the core, and associate QPOs with global magneto-elastic oscillations of highly magnetized neutron stars \cite{Levin2007,CerdaDuran2009,Colaiuda2012,Gabler2014}. There has also been interest in the possible excitation of low-order $f$-modes because of their strong coupling to potentially detectable gravitational radiation \cite{Levin2011}. In the present paper we focus on radial oscillations of neutron stars permeated by ultra-strong magnetic fields. These modes might be relevant within the magnetar model because they could be excited during the violent events associated with gamma flares. Since they have higher frequencies than the already known QPOs, they cannot be directly linked to them at present. However, it is relevant to know the full variety of pulsation modes of strongly magnetized neutron stars because the number of observations is
\section{Introduction} Ultra-cold molecules can play important roles in studies of many-body quantum physics~\cite{pupillo2008cold}, quantum logic operations~\cite{demille2002quantum}, and ultra-cold chemistry~\cite{ni2010dipolar}. In our recent studies of LiRb, motivated largely by its large permanent dipole moment~\cite{aymar2005calculation}, we have explored the generation of these molecules in a dual-species MOT~\cite{dutta2014formation,dutta2014photoassociation}. In particular, we have found that the rate of generation of stable singlet ground state molecules and first excited triplet state molecules through photoassociation, followed by spontaneous emission decay, can be very large~\cite{v0paper,Adeel,lorenz2014formation}. There have been very few experimental studies of triplet states in LiRb~\cite{Adeel}, in part because they are difficult to access in thermally-distributed systems. Triplet states of bi-alkali molecules are important to study for two reasons: first, Feshbach molecules, which are triplet in nature, provide an important association gateway for the formation of stable molecules~\cite{marzok2009feshbach}; also, photoassociation (PA) of trapped colliding atoms is often strongest for triplet scattering states. Mixed singlet--triplet states are usually required to transfer these molecules to deeply bound singlet states. \begin{figure}[t!] \includegraphics[width=8.6cm]{PEC.png}\\ \caption{(Color on-line) Energy level diagram of the LiRb molecule, showing relevant PECs from Ref.~\protect\cite{Korek}.
Vertical lines show the various optical transitions, including {\bf (a)} photoassociation of atoms to molecular states below the D$_1$ asymptote; {\bf (b)} spontaneous decay of excited state molecules leading to the $a \: ^3 \Sigma ^+$ state; {\bf (c)} RE2PI to ionize LiRb molecules, ($\nu_{c}$ used later in this paper is the frequency of this laser source); and {\bf (d)} state-selective excitation of the $a \: ^3 \Sigma ^+$ state for depletion of the RE2PI signal (with laser frequency $\nu_{d}$). The black dashed line represents our PA states. The inset shows an expanded view of the different $d \ ^3\Pi$ spin-orbit split states as well as the perturbing neighbor $D \ ^1\Pi$.} \label{fig:PEC} \end{figure} \begin{figure*} [t!] \includegraphics[width=\textwidth]{v11Progression.png}\\ \caption{(Color on-line) Subsection of the RE2PI spectra. The PA laser is tuned to the $2(0^-) \ v=-11 \ J=1$ resonance, from which spontaneous decay is primarily to the $a \ ^3\Sigma^+ \ v^{\prime \prime}=11$ state. Most of these lines are $d \ ^3\Pi_{\Omega} \ v^{\prime} \leftarrow a \ ^3\Sigma^+ \ v^{\prime \prime}=11$ transitions, where $v^\prime$ is labeled on individual lines. From top to bottom: black solid lines label transitions to $\Omega=2$, blue dashed lines label transitions to $\Omega=1$, green dot-dashed lines label transitions to $\Omega=0$. Also shown (red dotted lines) are three $ D \ ^1\Pi \ v^\prime \leftarrow a \ ^3\Sigma^+ \ v^{\prime \prime}=11$ transitions.} \label{fig:v11progression} \end{figure*} We show an abbreviated set of potential energy curves (PEC), as calculated in Ref.~\cite{Korek}, in Fig.~\ref{fig:PEC}. The d$^3 \Pi$ - D$^1 \Pi$ complex in LiRb, asymptotic to the Li 2p $^2P_{3/2, 1/2}$ + Rb 5s $^2S_{1/2}$ free atom state, has several features that can promote its utility in stimulated-Raman-adiabatic-passage (STIRAP) and photoassociation. 
First, the \textit{ab initio} calculations of Ref.~\cite{Korek} predict mixing between low vibrational levels of the $d \ ^3\Pi_1$ and the D$^1 \Pi$ states. Second, both legs of a STIRAP transfer process from loosely bound triplet-character Feshbach molecules to the rovibronic ground state can
\section{Introduction}\label{sec:introduction} The use of GPS for localizing sensor nodes in a sensor network is considered to be excessively expensive and wasteful, and in some cases intractable \cite{bul:00,bis:06}. Instead, many solutions for the localization problem use inter-sensor distance or range measurements. In such a setting, the localization problem is to find the unknown locations of, say, $N$ sensors using existing noisy distance measurements among them and to sensors with known locations, also referred to as anchors. This problem is known to be NP-hard \cite{mor:97}, and there have been many efforts to solve it approximately \cite{kim:09,bis:04,wan:06,bis:06,gho:13,nar:14,cha:09,soa:15,sch:15,sri:08}. One of the major approaches for approximating the localization problem has been the use of convex relaxation techniques, namely semidefinite, second-order cone, and disk relaxations, see e.g., \cite{kim:09,bis:06,bis:04,wan:06,sri:08,gho:13,soa:15}. Although centralized algorithms based on these approximations reduce the computational complexity of solving the localization problem, they are still not scalable to large problems. Centralized algorithms are also generally communication-intensive and, more importantly, lack robustness to failures. Furthermore, the use of these algorithms can become impractical due to structural constraints resulting from, e.g., privacy requirements and physical separation, which generally prevent us from forming the localization problem in a centralized manner. One approach to evade such issues is the use of scalable and/or distributed algorithms for solving large localization problems. These algorithms enable us to solve the problem through the collaboration and communication of several computational agents, which could correspond to sensors, without the need for a centralized computational unit.
The design of distributed localization algorithms is commonly done by first reformulating the problem, by exploiting or imposing structure on it, and then employing efficient optimization algorithms for solving the reformulated problem; see, e.g., the recent papers \cite{sim:14,gho:13,soa:15,sri:08}. For instance, the authors in \cite{sri:08} put forth a solution to the localization problem based on minimizing the discrepancy between the squared distances and the range measurements. They then propose a second-order cone relaxation of this problem and apply a Gauss-Seidel scheme to the result, which enables them to solve the problem in a distributed fashion. The proposed algorithm does not have guaranteed convergence, and at each iteration each agent is required to solve a second-order cone program (SOCP), which can potentially be expensive. Furthermore, due to the considered formulation of the localization problem, the resulting algorithm is prone to amplifying measurement errors and is sensitive to outliers. In \cite{sim:14}, the authors consider an SDP relaxation of the maximum likelihood formulation of the localization problem. They further relax the problem to an edge-based formulation, as suggested in \cite{wan:06}. This then allows them to devise a distributed algorithm for solving the reformulated problem using the alternating direction method of multipliers (ADMM). Even though this algorithm has convergence guarantees, each agent is required to solve an SDP at every iteration of the algorithm. In order to alleviate this, the authors in \cite{gho:13} and \cite{soa:15} consider a disk relaxation of the localization problem, which corresponds to an under-estimator of the
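To make the squared-range formulation mentioned above concrete, the following toy sketch minimizes the discrepancy between squared inter-node distances and squared range measurements by plain gradient descent, on a hypothetical noiseless instance with three anchors and one sensor. The anchor positions, step size, and iteration count are illustrative assumptions; the cited works use SOCP/SDP relaxations and distributed solvers rather than this centralized descent.

```python
import numpy as np

# Hypothetical toy instance: three anchors with known positions, one sensor.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_pos = np.array([0.3, 0.4])
d2 = np.sum((anchors - true_pos) ** 2, axis=1)   # noiseless squared ranges

def objective(x):
    # Discrepancy between squared distances and squared range measurements
    r2 = np.sum((anchors - x) ** 2, axis=1)
    return np.sum((r2 - d2) ** 2)

def grad(x):
    r2 = np.sum((anchors - x) ** 2, axis=1)
    return 4.0 * np.sum((r2 - d2)[:, None] * (x - anchors), axis=0)

x = np.array([0.5, 0.5])        # initial guess
for _ in range(10000):          # plain gradient descent
    x -= 0.02 * grad(x)
# x now approximates true_pos on this well-posed noiseless instance
```

With noisy ranges, this squared-distance least-squares estimate is known to amplify measurement errors, which is exactly the sensitivity the text attributes to this formulation.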
\section{Introduction} A quantum theory of gravity is one of the most sought-after goals in physics. Despite continuous efforts to tackle this important problem, resulting in interesting proposals such as superstring theory and loop quantum gravity, there is still no clear sign of the theory of quantum gravity \cite{Giulini,Rovelli,Kiefer}. However, when the back-action of matter on the gravitational field is neglected, one can write down a theory of quantum fields in a background curved spacetime by extending quantum field theory in the Minkowski metric to a general metric \cite{Birrell, Fulling, Mukhanov, Parker}. This is analogous to treating external fields as c-numbers and predicts interesting new phenomena that are valid in appropriate regimes. Similarly to the prediction of pair creation in external electric fields \cite{Sauter, HeisenbergEuler1936, Schwinger}, external gravitational fields (as described by a background spacetime) induce particle creation \cite{Parker1971, Hawking1975, Unruh1976}. In the latter case, particle creation is caused by the change in the vacuum state itself under quite generic conditions. In this work, we propose a classical optical simulation of particle creation in binary waveguide arrays. There have been many proposals and experimental demonstrations of a plethora of interesting physics in coupled waveguide arrays \cite{Peschel, Morandotti, Lahini, Longhi11, Longhi12, Crespi, Keil, Rodriguez13, Rodriguez14, LeeAngelakis14, Marini14, RaiAngelakis15, Keil15}. In particular, optical simulation of the 1+1 dimensional Dirac equation in binary waveguide arrays has been proposed \cite{Longhi10a,Longhi10b} and experimentally demonstrated \cite{Dreisow10,Dreisow12}. We show that the setup can be generalised to also simulate the Dirac equation in 2-dimensional curved spacetime.
Particle creation is by definition a multi-particle phenomenon, and the full simulation of the effect requires quantum fields as a main ingredient (for example, see \cite{Boada} for a simulation of the Dirac equation in curved spacetime with cold atoms in optical lattices). However, light propagation in a waveguide array is an intrinsically classical phenomenon, so how can we simulate particle creation in a binary waveguide array? The short answer is that we will be looking at a single-particle analog of particle creation. As we will see, the fundamental reason behind particle creation is the difference in the vacuum state, which in turn is captured by different mode expansions of quantum fields. We can thus concentrate on a single mode at a time and simulate the effect. In fact, the well-known Klein paradox shows that the single-particle Dirac equation contains subtle hints of multi-particle effects, and the phenomenon of pair production in strong electric fields has been studied within the single-particle picture \cite{Ruf}. We study the time evolution of spinor wave packets and demonstrate that an analog of particle creation can be {\it visualised} in the light evolution in a binary waveguide array. Here, we stress that by using the phenomenon of `zitterbewegung' (the jittering motion of a Dirac particle), one can bypass the quantitative checks in `proving' the simulation of particle creation. This article is organised as follows. In section II, we provide a pedagogical introduction to the Dirac equation in curved spacetime, assuming familiarity with the conventional Dirac equation. In section III, we specialise to
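The binary-waveguide-array platform referred to above is commonly described by coupled-mode equations $i\,\mathrm{d}c_n/\mathrm{d}z = -\kappa(c_{n+1}+c_{n-1}) + (-1)^n\sigma c_n$, whose two-site unit cell mimics the two spinor components of the 1+1 dimensional Dirac equation, with the alternating mismatch $\sigma$ playing the role of the mass. Below is a minimal sketch of this flat-space version; the array size, $\kappa$, $\sigma$, and the input beam are illustrative assumptions, and sign conventions vary between references.

```python
import numpy as np

N, kappa, sigma = 101, 1.0, 0.3   # waveguides, coupling, alternating mismatch

# Coupled-mode "Hamiltonian": nearest-neighbour coupling plus (-1)^n sigma
H = np.zeros((N, N))
idx = np.arange(N - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -kappa
np.fill_diagonal(H, sigma * (-1.0) ** np.arange(N))

# Broad Gaussian input beam injected around the array centre
n = np.arange(N)
c0 = np.exp(-((n - N // 2) ** 2) / 100.0).astype(complex)
c0 /= np.linalg.norm(c0)

# Propagate: c(z) = exp(-i H z) c0, via the eigendecomposition of H
w, V = np.linalg.eigh(H)
z = 10.0
c = (V * np.exp(-1j * w * z)) @ (V.conj().T @ c0)
intensity = np.abs(c) ** 2   # the intensity pattern a camera would record
```

Plotting `intensity` against $n$ for increasing $z$ reproduces the discrete analog of Dirac wave-packet dynamics, including the zitterbewegung used here as a visual signature.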
\part{Background} \bigskip \addcontentsline{toc}{section}{1. Preliminaries} \textbf{\S 1. Preliminaries} \stepcounter{section} \bigskip The prerequisites for reading this paper are a background of one year of graduate level study in set theory, a working knowledge of forcing, and some basic familiarity with proper forcing and generalized stationarity. For a regular uncountable cardinal $\lambda$ and a set $X$ with $\lambda \subseteq X$, we let $P_\lambda(X)$ denote the set $\{ a \subseteq X : |a| < \lambda \}$. A set $S \subseteq P_\lambda(X)$ is stationary iff for any function $F : X^{<\omega} \to X$, there exists $b \in S$ such that $b \cap \lambda \in \lambda$ and $b$ is closed under $F$. \bigskip In this paper, a \emph{forcing poset} is a pair $(\mathbb{P},\le_\mathbb{P})$, where $\mathbb{P}$ is a nonempty set and $\le_\mathbb{P}$ is a reflexive and transitive relation on $\mathbb{P}$. To simplify notation, we usually refer to $\mathbb{P}$ itself as a forcing poset, with the relation $\le_\mathbb{P}$ being implicit. If $\mathbb{Q}$ is a forcing poset, we will write $\dot G_\mathbb{Q}$ for the canonical $\mathbb{Q}$-name for a generic filter on $\mathbb{Q}$. Let $\mathbb{P}$ and $\mathbb{Q}$ be forcing posets. Then $\mathbb{P}$ is a \emph{suborder} of $\mathbb{Q}$ if $\mathbb{P} \subseteq \mathbb{Q}$ and $\le_\mathbb{P} \ = \ \le_\mathbb{Q} \cap \ (\mathbb{P} \times \mathbb{P})$. Let $\mathbb{P}$ be a suborder of $\mathbb{Q}$. We say that $\mathbb{P}$ is a \emph{regular suborder} of $\mathbb{Q}$ if: \begin{enumerate} \item whenever $p$ and $q$ are in $\mathbb{P}$ and are incompatible in $\mathbb{P}$, then $p$ and $q$ are incompatible in $\mathbb{Q}$; \item if $A$ is a maximal antichain of $\mathbb{P}$, then $A$ is predense in $\mathbb{Q}$. \end{enumerate} \begin{lemma} Suppose that $\mathbb{P}_0$ is a suborder of $\mathbb{P}_1$, and $\mathbb{P}_1$ is a suborder of $\mathbb{Q}$. 
Assume, moreover, that $\mathbb{P}_0$ and $\mathbb{P}_1$ are both regular suborders of $\mathbb{Q}$. Then $\mathbb{P}_0$ is a regular suborder of $\mathbb{P}_1$. \end{lemma} \begin{proof} Straightforward. \end{proof} Let $\mathbb{P}$ be a regular suborder of $\mathbb{Q}$, and assume that $G$ is a generic filter on $\mathbb{P}$. In $V[G]$, define the forcing poset $\mathbb{Q} / G$ to consist of conditions $q \in \mathbb{Q}$ such that for all $s \in G$, $q$ and $s$ are compatible in $\mathbb{Q}$, with the same ordering as $\mathbb{Q}$. Then $\mathbb{Q}$ is forcing equivalent to the two-step iteration $\mathbb{P} * (\mathbb{Q} / \dot G_\mathbb{P})$. Moreover: \begin{lemma} Let $\mathbb{P}$ be a regular suborder of $\mathbb{Q}$. \begin{enumerate} \item Suppose that $H$ is a $V$-generic filter on $\mathbb{Q}$. Then $H \cap \mathbb{P}$ is a $V$-generic filter on $\mathbb{P}$, and $H$ is a $V[H \cap \mathbb{P}]$-generic filter on $\mathbb{Q} / (H \cap \mathbb{P})$. \item Suppose that $G$ is a $V$-generic filter on $\mathbb{P}$ and $H$ is a $V[G]$-generic filter on $\mathbb{Q} / G$. Then $H$ is a $V$-generic filter on $\mathbb{Q}$, $G = H \cap \mathbb{P}$, and $V[G][H] = V[H]$. \end{enumerate} \end{lemma} \begin{proof} See \cite[Lemma 1.6]{jk26}. \end{proof} Let $\mathbb{P}$ and $\mathbb{Q}$ be forcing posets with maximum conditions. A function $\pi : \mathbb{Q} \to \mathbb{P}$ is said to be a \emph{projection mapping} if: \begin{enumerate} \item $\pi$ maps the maximum condition in $\mathbb{Q}$ to the maximum condition in $\mathbb{P}$; \item if $q \le
\section{Introduction} The concept of synchronisation is based on the adjustment of rhythms of oscillating systems due to their interaction \cite{pikovsky01}. The synchronisation phenomenon was recognised by Huygens in the 17th century, when he performed experiments to understand it \cite{bennett02}. To date, several kinds of synchronisation among coupled systems have been reported, such as complete \cite{li16}, phase \cite{pereira07,batista10}, lag \cite{huang14}, and collective almost synchronisation \cite{baptista12}. Neuronal synchronous rhythms have been observed in a wide range of studies of cognitive functions \cite{wang10,hutcheon00}. Electroencephalography and magnetoencephalography studies have suggested that neuronal synchronisation in the gamma frequency band plays a functional role for memories in humans \cite{axmacher06,fell11}. Steinmetz et al. \cite{steinmetz00} investigated the synchronous behaviour of pairs of neurons in the secondary somatosensory cortex of monkeys. They found that attention modulates oscillatory neuronal synchronisation in the somatosensory cortex. Moreover, it has been proposed in the literature that there is a relationship between conscious perception and synchronisation of neuronal activity \cite{hipp11}. We study spiking and bursting synchronisation between neurons in a neuronal network model. A spike refers to the action potential generated by a neuron that rapidly rises and falls \cite{lange08}, while bursting refers to a sequence of spikes followed by a quiescent time \cite{wu12}. It has been demonstrated that spiking synchronisation is relevant in the olfactory bulb \cite{davison01} and is involved in motor cortical functions \cite{riehle97}. The characteristics and mechanisms of bursting synchronisation were studied in cultured cortical neurons by means of a planar electrode array \cite{maeda95}. Jefferys $\&$ Haas discovered synchronised bursting of CA1 hippocampal pyramidal cells \cite{jefferys82}.
There is a wide range of mathematical models used to describe neuronal activity, such as the cellular automaton \cite{viana14}, the Rulkov map \cite{rulkov01}, and differential equations \cite{hodgkin52,hindmarsh84}. One of the simplest and most widely used mathematical models of neuronal behaviour is the integrate-and-fire model \cite{lapicque07}, which is governed by a linear differential equation. A more realistic version of it is the adaptive exponential integrate-and-fire (aEIF) model, which we consider in this work to describe the local dynamics of the neurons in the network. The aEIF is a two-dimensional integrate-and-fire model introduced by Brette $\&$ Gerstner \cite{brette05}. This model has an exponential spike mechanism with an adaptation current. Touboul $\&$ Brette \cite{touboul08} studied the bifurcation diagram of the aEIF. They showed the existence of the Andronov-Hopf bifurcation and saddle-node bifurcations. The aEIF model can generate multiple firing patterns depending on its parameters, and it can fit experimental data from cortical neurons under current stimulation \cite{naud08}. In this work, we focus on the synchronisation phenomenon in a randomly connected network. In this kind of network, also called an Erd\"os-R\'enyi network \cite{erdos59}, each pair of nodes is connected with a given probability. Random neuronal networks have been used to study oscillations in cortico-thalamic circuits \cite{gelenbe98} and the dynamics of networks with synaptic depression \cite{senn96}. We built a random neuronal network with unidirectional connections that represent chemical synapses. We show that there are clearly separated ranges of parameters that lead to spiking or bursting synchronisation. In addition, we analyse the robustness to external
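The aEIF dynamics discussed above can be sketched for a single neuron using the standard parameter set of Brette $\&$ Gerstner and simple forward-Euler integration. The input current, step size, and simulation length below are illustrative assumptions; the network studied here adds synaptic coupling on top of these local dynamics.

```python
import numpy as np

# aEIF parameters (standard set of Brette & Gerstner); units: pF, nS, mV, ms, pA
C, gL, EL, VT, DT = 281.0, 30.0, -70.6, -50.4, 2.0
tau_w, a, b, Vr, Vpeak = 144.0, 4.0, 80.5, -70.6, 20.0

def simulate(I, T=500.0, dt=0.01):
    """Forward-Euler integration of one aEIF neuron; returns spike times (ms)."""
    V, w, spikes = EL, 0.0, []
    for k in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:        # spike: reset V and increment adaptation current
            V = Vr
            w += b
            spikes.append(k * dt)
    return spikes

spikes = simulate(I=800.0)    # illustrative suprathreshold step current
```

Varying the adaptation parameters $a$, $b$ and the reset voltage switches the model between firing patterns such as tonic spiking and bursting, which is the distinction exploited in the synchronisation analysis.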
\subsection{Power} \noindent PMTs that meet the required specifications in terms of pulse rise time, dark current and counting rates, and quantum efficiency require applied high voltages (HV) between $1-2.5$~kV and have maximum current ratings of $0.2-0.5$~mA. For the detector design using 12 read-out channels per module, the HV power supply (HVPS) must provide approximately 10~mA per module. In order to minimize costs, we aim to use one HV power supply to power 10 modules (120 channels), and thus we require a HVPS rated to approximately 100~mA and 500~W. For a 100 module detector, 10 HVPS are required and the total power requirement would thus be approximately 5~kW. Several commercial HVPS systems exist that meet these requirements. For example, the \href{http://theelectrostore.com/shopsite_sc/store/html/PsslashEK03R200-GK6-Glassman-New-refurb.html}{Glassman model number PS/EK03R200-GK6} provides an output of $\pm3$~kV with a maximum of 200 mA, and features controllable constant current / constant voltage operation. Regulation and monitoring of the power supplied to the detector will be required on both the module distribution boards and the front-end distribution boards. In both cases, over-current and over-voltage protection will be necessary both for safety and in order to protect the front-end electronics from damage. The monitoring may be accomplished by a measurement circuit that digitizes and transmits the measured voltages and currents over a serial bus to the slow control system for the detector by a generic, CERN built data acquisition board called an Embedded Local Monitoring Board (ELMB)~\cite{ELMB}. Energy calibration will be done in situ using an $^{241}$Am source, which yields a 60~keV $X$-ray. Calibration runs performed at specified intervals will track the PMT+scintillator response as a function of time. In addition to energy calibration, an LED pulser that can deliver a stable light pulse into each scintillator will also be deployed. 
The LED system will be used to monitor drift in response of the PMT+scintillator as a function of time in between $^{241}$Am source calibrations as well as detect any inefficient or non-functional readout channels.
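The power-budget chain quoted in the Power subsection is simple enough to verify with a few lines of arithmetic (all values taken directly from the text):

```python
# HV power-budget arithmetic from the text
ma_per_module = 10.0        # approximate HVPS load per module, mA
modules_per_hvps = 10       # modules powered by one supply
hvps_power_w = 500.0        # rating of one supply, W
n_modules = 100             # detector size considered in the text

hvps_current_ma = ma_per_module * modules_per_hvps     # required current rating
n_hvps = n_modules // modules_per_hvps                 # supplies needed
total_power_w = n_hvps * hvps_power_w                  # total power requirement

print(hvps_current_ma, n_hvps, total_power_w)   # 100.0 10 5000.0
```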
\section{Introduction} Throughout this paper, let $R$ denote a commutative Noetherian ring with identity and ${\mathfrak a}$ a proper ideal of $R$. For any non-zero $R$-module $M$, the $i$-th local cohomology module of $M$ is defined as $$H^{i}_{{\mathfrak a}}(M):=\lim_{\underset{n\geq 1}{\longrightarrow}} \operatorname{Ext}^{i}_{R}(R/{{\mathfrak a}^n},M).$$ $\operatorname{V}({\mathfrak a})$ denotes the set of all prime ideals of $R$ containing ${\mathfrak a}$. For an $R$-module $M$, the {\it cohomological dimension} of $M$ with respect to ${\mathfrak a}$ is defined as $\operatorname{cd}({\mathfrak a},M):=\sup \{i\in \mathbb{Z}\mid H^{i}_{{\mathfrak a}}(M)\neq 0\}$; it is known that for a local ring $(R,{\mathfrak m})$ and ${\mathfrak a}={\mathfrak m}$, this is equal to the dimension of $M$. For unexplained notation and terminology about local cohomology modules, we refer the reader to \cite{BH} and \cite{BSH}. The notion of {\it cohomological dimension filtration} (abbreviated as {\it cd-filtration}) of $M$ was introduced by A. Atazadeh et al. \cite{ASN}; it generalizes the concept of dimension filtration defined by P. Schenzel \cite{Sch} in the local case. For any integer $0\leq i\leq \operatorname{cd}({\mathfrak a},M)$, let $M_{i}$ denote the largest submodule of $M$ such that $\operatorname{cd}({\mathfrak a},M_{i})\leq i$. Because a Noetherian $R$-module satisfies the maximal condition on submodules, the submodules $M_{i}$ of $M$ are well-defined. Moreover, it follows that $M_{i-1}\subseteq M_{i}$ for all $1\leq i\leq\operatorname{cd}({\mathfrak a},M)$. In the present article, we will use the concept of relative Cohen-Macaulay modules. An $R$-module $M$ is {\it relative Cohen-Macaulay} w.r.t.\ ${\mathfrak a}$ whenever $H^{i}_{{\mathfrak a}}(M)=0$ for all $i\neq\operatorname{height}_{M}({\mathfrak a})$.
In other words, $M$ is relative Cohen-Macaulay w.r.t.\ ${\mathfrak a}$ if and only if $\operatorname{grade}({\mathfrak a},M)=\operatorname{cd}({\mathfrak a},M)$ (see \cite{Z}). Notice that this concept is connected to a notion which has been studied under the title of {\it cohomologically complete intersection ideals} in \cite{HSch}. It is well-known that $\operatorname{height}_R({\mathfrak a})\leq\operatorname{cd}({\mathfrak a},R)\leq\operatorname{ara}({\mathfrak a})$. The ideal ${\mathfrak a}$ is called a set-theoretic complete intersection ideal whenever $\operatorname{height}_R {\mathfrak a}=\operatorname{ara}({\mathfrak a})$. Every set-theoretic complete intersection ideal is a cohomologically complete intersection ideal. Relative Cohen-Macaulay modules have recently also been studied in \cite{HS}.\\ Sharp \cite{SH} and some other authors have shown that a Cohen-Macaulay local ring $R$ admits a canonical module if and only if it is the homomorphic image of a Gorenstein local ring. In particular, if $R$ is a complete Cohen-Macaulay local ring of dimension $n$, then $w_{R}=\operatorname{Hom}_{R}(H^{n}_{{\mathfrak m}}(R),E(R/ {\mathfrak m}))$ is a canonical module of $R$. The outline of the paper is as follows.\\ Section 2 is devoted to the main topics of this paper. We begin this section by showing that if $(R,{\mathfrak m})$ is a complete local ring and $M$ is relative Cohen-Macaulay w.r.t.\ ${\mathfrak a}$ with $\operatorname{cd}({\mathfrak a},M)=c>0$, and $\operatorname{Supp}_{R}(H^{c}_{{\mathfrak a}}(M))\subseteq\operatorname{V}({\mathfrak m})$, then ${\mathfrak a}$ contains a regular element on $D_{R}(H^{c}_{{\mathfrak a}}(M))$ (see Corollary~\ref{co}). In this direction, we prove the following theorem (see Theorem~\ref{p3}). \begin{thm}\label{p0} Let $(R,{\mathfrak m})$ be a local ring and let $M$ be a finite $R$-module with $\operatorname{cd}({\mathfrak a},M)=c>0$.
Assume that $\underline{x}=x_{1},\ldots ,x_{n}\in{\mathfrak a}$ is a regular sequence on both $M$ and $D_R(H^{c}_{{\mathfrak a}}(M))$. Then $$\operatorname{cd}({\mathfrak a},M/\underline{x}M)=\operatorname{cd}({\mathfrak a},M)-n.$$ \end{thm} As an application of Theorem~\ref{p0}, we present the next
\section{Introduction} To write high-quality program code for a Multi-Processor System-on-Chip (MPSoC), software developers must fully understand how their code will be executed on-chip. Debugging and tracing tools can help developers gain this understanding. They are a keyhole through which developers can peek and observe the software execution. Today, and even more in the future, this keyhole narrows as MPSoCs integrate more functionalities, while at the same time the amount of software increases dramatically. Furthermore, concurrency and deep interaction of software with hardware components beyond the instruction set architecture (ISA) boundary are on the rise. Therefore more, not less, insight into the system is needed to keep up or even increase developer productivity. Many of today's MPSoCs execute concurrent code on multiple cores, interact with the physical environment (cyber-physical systems), or must finish execution in a bounded amount of time (hard real-time). In these scenarios, a non-intrusive observation of the software execution is required, as provided by tracing. Instead of stopping the system for observation, as done in run-control debugging, the observed data is transferred off-chip for analysis. Unfortunately, observing a full system execution would generate data streams in the range of petabits per second~\cite[p.~16]{vermeulen_debugging_2014}. This is the most significant drawback of tracing: the system insight is limited by the off-chip bottleneck. Today's tracing systems, like ARM CoreSight~\cite{CoreSightWebsite} or NEXUS 5001~\cite{_nexus_2003}, are designed to efficiently capture the operation of a functional unit (like a CPU) as a compressed trace stream. Filters and triggers make it possible to configure which functional units are traced (observed), and when. The trace streams (or short, traces) are then transported across an off-chip interface (and possibly other intermediate devices) to a host PC.
Upon arrival, the compressed trace streams are first decompressed (reconstructed) using the program binary and other static information that was removed before. The reconstructed trace streams are then fed to a data analysis application, which extracts information from the data. This information can then be presented to a developer, or it can be used by other tools, e.g.\ for runtime verification. \medskip The \textbf{main idea} in this work is to move the data analysis (at least partially) from the host PC into the chip. Bringing the computation closer to the data sources reduces the off-chip bandwidth requirements and ultimately increases insight into software execution. To realize this idea, we introduce \emph{DiaSys}, a replacement for the tracing system in an MPSoC. DiaSys does not stream full execution traces off-chip for analysis. Instead, it first creates events from observations on the chip. Events can signal any interesting state change of the observed system, like the execution of a function in the program code, a change in interconnect load beyond a threshold, or the read of a data word from a certain memory address. A \emph{diagnosis application} then processes the observed events to give them ``meaning.'' Given an appropriate diagnosis application, a software developer might no longer be presented with a raw sequence of events like ``a read/write request was
\section{Introduction} \label{sec:intro} The Type~Ia supernova (SN~Ia) SN~2011fe was discovered on 2011 August 24, just 11~hr after explosion \citep{nugent11}. It is among the nearest ($\sim 6.9$~Mpc) and youngest ($\sim 11$~hr) SNe~Ia ever discovered. Extensive spectroscopic and photometric studies of SN~2011fe indicate that it is ``normal'' in nearly every sense: in luminosity, spectral and color evolution, abundance patterns, etc. \citep{parrent12,richmond12,roepke12,vinko12,munari13,pereira13}. Its unremarkable nature coupled with the wealth of observations made over its lifetime render it an ideal laboratory for understanding the physical processes which govern the evolution of normal SNe~Ia. Indeed, these data have allowed observers to place numerous and unprecedented constraints on the progenitor system of a particular SN~Ia \citep[e.g.,][]{li11,nugent11,bloom12,chomiuk12,horesh12,margutti12}. Equally as information-rich as observations taken at early times are those taken much later, when the supernova's photosphere has receded and spectrum formation occurs deep in the SN core. For example, \citet{shappee13} used late-time spectra to further constrain the progenitor system of SN~2011fe, namely that the amount of hydrogen stripped from the putative companion must be $< 0.001~M_\odot$. \citet{mcclelland13} found that the luminosity from SN~2011fe in the 3.6~$\mu$m channel of \textit{Spitzer}/IRAC fades almost twice as quickly as in the 4.5~$\mu$m channel, which they argue is a consequence of recombination from doubly ionized to singly ionized iron peak elements. In addition, \citet{kerzendorf14} used photometric observations near 930~d post-maximum light to construct a late-time quasi-bolometric light curve, and showed that the luminosity continues to trace the radioactive decay rate of $^{56}$Co quite closely, suggesting that positrons are fully trapped in the ejecta, disfavoring a radially combed or absent magnetic field in this SN. 
\citet{graham15} presented an optical spectrum at 981~d post-explosion and used constraints on both the mass of hydrogen as well as the luminosity of the putative secondary star as evidence against a single-degenerate explosion mechanism. \citet{taubenberger15} presented an optical spectrum at 1034~d post-explosion, and speculated about the presence of [\ion{O}{1}] lines near 6300~\AA, which, if confirmed, would provide strong constraints on the mass of unburned material near the center of the white dwarf progenitor of SN~2011fe. Non-detections of the H$\alpha$ line at both of these very late epochs also strengthened the constraints on the presence of hydrogen initially posed by \citet{shappee13}. Finally, \citet{mazzali15} used spectrum synthesis models of SN~2011fe from 192 to 364 days post-explosion to argue for a large central mass of stable iron and a small mass of stable nickel -- about 0.23~$M_\odot$ and 0.01~$M_\odot$, respectively. We complement these various late-time analyses with a series of radiative transfer models corresponding to a series of optical and ultraviolet (UV) spectra of SN~2011fe. \section{Observations} \label{sec:obs} \begin{table} \begin{tabular}{lll} \hline UT Date & Phase & Telescope \\ & (days) & $+$Instrument \\ \hline 2011 Dec 19 & $+$100 & WHT$+$ISIS \\ 2012 Apr 2 & $+$205 & Lick 3-m$+$KAST \\ 2012 Jul 17 & $+$311 & Lick 3-m$+$KAST \\ 2012 Aug 23 & $+$349 & Lick 3-m$+$KAST \\ 2013 Apr 8 & $+$578 & Lick 3-m$+$KAST \\ \hline \end{tabular} \caption{Observing log of spectra that appear here for the first time. The phase is with respect to maximum light.} \end{table}
\section{High-field instability from spin-wave theory} In the asymptotic high-field limit all spins are aligned along the field axis. Magnon excitations are suppressed by a large energy gap. As the field strength is lowered, the magnon gap decreases and eventually vanishes at some critical field strength $h_{\mathrm{c}0}$. Below $h_{\mathrm{c}0}$ the high-field state becomes unstable, indicating a transition towards one of the various intermediate-field phases. (This is true as long as the continuous transition is not preempted by a first-order transition at some higher field $h_{\mathrm c} > h_{\mathrm{c}0}$.) The wavevector at which the magnon gap closes then determines the ordering wavevector of the intermediate-field phase. We parameterize the magnon excitations above the polarized ground state at high field by Holstein-Primakoff bosons $a_i$ and $b_i$ on the A and B sublattices of the honeycomb lattice. It is convenient to use a spin-space frame obtained by rotating the cubic-axes basis $\vec e_x$, $\vec e_y$, $\vec e_z$ such that the magnetic field lies in the $3$-direction, \begin{align} \label{eq:spin-frame} \vec e_1 & = \frac{(\vec e_z \times \vec h) \times \vec h}{\lvert (\vec e_z \times \vec h) \times \vec h \rvert}, & \vec e_2 & = \frac{\vec e_z \times \vec h}{\lvert \vec e_z \times \vec h \rvert}, & \vec e_3 & = \frac{\vec h}{\lvert \vec h \rvert}. \end{align} For example, for a field in the diagonal $[111]$ direction we choose the new spin-basis vectors $\vec e_1 = (\vec e_x + \vec e_y - 2 \vec e_z)/\sqrt{6}$, $\vec e_2 = (-\vec e_x + \vec e_y)/\sqrt{2}$, and $\vec e_3 = (\vec e_x + \vec e_y + \vec e_z)/\sqrt{3}$.
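These expressions follow directly from Eq.~\eqref{eq:spin-frame}: for $\vec h \parallel \vec e_x + \vec e_y + \vec e_z$ one finds
\begin{align*}
\vec e_z \times \vec h &\propto (0,0,1) \times (1,1,1) = (-1,1,0), \\
(\vec e_z \times \vec h) \times \vec h &\propto (-1,1,0) \times (1,1,1) = (1,1,-2),
\end{align*}
which upon normalization reproduce $\vec e_2 = (-\vec e_x + \vec e_y)/\sqrt{2}$ and $\vec e_1 = (\vec e_x + \vec e_y - 2\vec e_z)/\sqrt{6}$.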
To leading order in the $1/S$ expansion the spin operators in this basis read: \begin{figure*}[!p] \includegraphics[scale=0.66]{magnon-dispersion_001_phi062_h2983.pdf}\hfill \includegraphics[scale=0.66]{magnon-dispersion_001_phi1687_h2217.pdf} \caption{Magnon excitation spectrum $\varepsilon_{\vec q}$ from linear spin-wave theory in the polarized phase for field in the $[001]$ direction and $h=h_{\mathrm{c}0}$ along high-symmetry lines in the Brillouin zone (see inset). The magnon gap vanishes at M$_1$ and M$_3$ for $\pi/2 < \varphi < \varphi_{\mathrm{c}2}$, when the transition is towards the canted zigzag phase (left panel), as well as for $3\pi/2 < \varphi < \varphi_{\mathrm{c}4}$, when the transition is towards the canted stripy phase (right panel).} \label{fig:magnon-spectrum-001} \end{figure*} \begin{figure*}[!p] \includegraphics[scale=0.65]{magnon-dispersion_111_phi062_h2615.pdf}\hfill \includegraphics[scale=0.65]{magnon-dispersion_111_phi1687_h1663.pdf}\hfill \includegraphics[scale=0.65]{magnon-dispersion_111_phi1922_h4850.pdf} \caption{Same as Fig.~\ref{fig:magnon-spectrum-001} for field in the $[111]$ direction. The magnon gap vanishes at the K points in the Brillouin zone for $\pi/2 < \varphi < \varphi_{\mathrm{c}1}$, when the transition is towards the AF vortex phase (left panel), as well as for $3\pi/2 < \varphi < 7\pi/4$, when the transition is towards the vortex phase (middle panel). For $\varphi_{\mathrm{c}3} < \varphi < 2\pi$ the instability wave vector is at the $\Gamma$ point, indicating the transition towards the canted N\'eel phase. Note that due to the unbroken $\mathbb{Z}_3$ symmetry the magnon spectrum is the same at all three $\mathrm M$ points, in contrast to the situation when $\vec h \nparallel [111]$. 
} \label{fig:magnon-spectrum-111} \end{figure*} \begin{figure*}[!p] \includegraphics[scale=0.65]{magnon-dispersion_1107_phi062_h2629.pdf}\hfill \includegraphics[scale=0.65]{magnon-dispersion_1107_phi1687_h1696.pdf} \caption{Same as Fig.~\ref{fig:magnon-spectrum-001} for field in the $[11\frac{7}{10}]$ direction, when the magnon gap vanishes at incommensurate wavevectors between M$_2$ and K, K'.} \label{fig:magnon-spectrum-1107} \end{figure*} \begin{table*}[!p] \caption{Instability field strength $h_{\mathrm{c}0}$,
\section{Introduction} The black hole information puzzle is the puzzle of whether black hole formation and evaporation is unitary, and debate on this issue has continued for more than 36 years \cite{Page:1993up, Giddings:2006sj, Mathur:2008wi}, since Hawking radiation was discovered \cite{Hawking:1974sw}. Hawking originally used local quantum field theory in the semiclassical spacetime background of an evaporating black hole to deduce \cite{Hawking:1976ra} that part of the information about the initial quantum state would be destroyed or leave our Universe at the singularity or quantum gravity region at or near the centre of the black hole, so that what remained outside after the black hole evaporated would not be given by unitary evolution from the initial state. However, this approach does not fully apply quantum theory to the gravitational field itself, so it was objected that the information-loss conclusion drawn from it might not apply in quantum gravity \cite{Page:1979tc}. Maldacena's AdS/CFT conjecture \cite{Maldacena:1997re} has perhaps provided the greatest impetus for the view that quantum gravity should be unitary within our Universe and give no loss of information. If one believes in local quantum field theory outside a black hole and also that one would not experience extreme harmful conditions (`drama') immediately upon falling into any black hole sufficiently large that the curvature at the surface would not be expected to be dangerous, then recent papers by Almheiri, Marolf, Polchinski, and Sully (AMPS) \cite{Almheiri:2012rt}, and by them and Stanford (AMPSS) \cite{Almheiri:2013hfa}, give a new challenge to unitarity, as they argued that unitarity, locality, and no drama are mutually inconsistent. It seems to us that locality is the most dubious of these three assumptions. 
Nevertheless, locality seems to be such a good approximation experimentally that we would like a much better understanding of how its violation in quantum gravity might be able to preserve unitarity and yet not lead to the drama of firewalls or to violations of locality so strong that they would be inconsistent with our observations. Giddings (occasionally with collaborators) has perhaps done the most to investigate unitary nonlocal models for quantum gravity \cite{Giddings:2006sj, Giddings:2006be, Giddings:2007ie, Giddings:2007pj, Giddings:2009ae, Giddings:2011ks, Giddings:2012bm, Giddings:2012dh, Giddings:2012gc, Giddings:2013kcj, Giddings:2013jra, Giddings:2013noa, Giddings:2014nla, Giddings:2014ova, Giddings:2015uzr, Donnelly:2016rvo, Giddings:2017mym, Donnelly:2017jcd}. For other black hole qubit models, see \cite{Terno:2005ff, Levay:2006pt, Levay:2007nm, Duff:2008eei, Levay:2008mi, Borsten:2008wd, Rubens:2009zz, Levay:2010ua, Duff:2010zz, Duff:2012nd, Borsten:2011is, Levay:2011bq, Avery:2011nb, Dvali:2011aa, Borsten:2012sga, Borsten:2012fx, Dvali:2012en, Duff:2013xna, Levay:2013epa, Verlinde:2013vja, Borsten:2013vea, Duff:2013rma, Borsten:2013uma, Dvali:2013lva, Prudencio:2014ypa, Pramodh:2014jha, Chatwin-Davies:2015hna, Dai:2015dqt, Belhaj:2016yyq, Belhaj:2016yfo}. Here we present a qubit toy model for how a black hole might evaporate unitarily and without firewalls, but with nonlocal gravitational degrees of freedom. We model radiation modes emitted by a black hole as localized qubits that interact locally with these nonlocal gravitational degrees of freedom. Similar models were first investigated by Giddings in the papers cited above, particularly in \cite{Giddings:2011ks,Giddings:2012bm,Giddings:2012dh}. Nomura and his colleagues also have a model \cite{Nomura:2014woa,Nomura:2014voa,Nomura:2016qum} with some similarities to ours.
In this way we can go from modes near the horizon that to an infalling observer appear to be close to a vacuum state (and hence without a firewall),
\section{Results} The actuation concept consists of two elements. First, the conversion of rotation into time-reversible translation is achieved by each particle separately and is hence a single-particle problem. Second, the breaking of time reversibility arises from the interaction between the particles in a cluster. In the following, these two elements are demonstrated qualitatively with a combination of analytic and numerical calculations using sd-particles. Afterwards, the investigations are used to explain our experimental observations of actuated clusters of magnetic Janus spheres, which move under oscillating fields. \subsection{Oscillation of single sd-particles} We access the role of magnetic asymmetry of a single dipolar particle by considering its reciprocal motion under oscillating fields for four different scenarios (Fig.~\ref{pics:sk-trans-sd}). If, in an isotropic medium, a sphere with a central dipole (case i, Fig.~\ref{pics:sk-trans-sd}) is exposed to an oscillating field $B^{z}=B_0^{z}\sin(\omega_B t)$, the dipole follows the field by rotating up and down through an angle $\theta\in [-\theta_A,\theta_A]$. Since a magnetic object rotates around its magnetic center, this results only in particle rotation. Asymmetry is introduced by sd-particles, where the magnetic center is shifted away from the geometric center (case ii, Fig.~\ref{pics:sk-trans-sd}). Here, the dipole is shifted radially outwards by $\xi\,\frac{d_p}{2}$ along the dipole orientation, where $\xi\in[0,1]$ is the shift parameter and $d_p$ is the particle diameter. Under oscillating fields, the rotation around the magnetic center displaces the particle center periodically. In an isotropic environment, the particle center moves on an arc \cite{Che09} with a radius determined by the shift $\xi$. The behavior changes in the proximity of a wall, where anisotropic drag is present.
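Before turning to the wall case, the arc motion in the isotropic environment can be made concrete with a short numerical sketch (the parameter values below are illustrative, not taken from the experiments):

```python
import numpy as np

# Sketch: when the dipole of an sd-particle rotates about the (fixed)
# magnetic center, the geometric particle center traces an arc of
# radius xi * d_p / 2.  All parameter values here are illustrative.
d_p = 1.0            # particle diameter (arbitrary units)
xi = 0.5             # dipole shift parameter, xi in [0, 1]
theta_A = np.pi / 6  # amplitude of the dipole oscillation angle

theta = np.linspace(-theta_A, theta_A, 201)
# unit vector along the instantaneous dipole orientation (oscillation plane)
m_hat = np.stack([np.sin(theta), np.cos(theta)], axis=1)
# geometric center relative to the magnetic center: displaced by xi*d_p/2
# opposite to the dipole direction, since the dipole sits radially outwards
r_center = -xi * (d_p / 2) * m_hat
# every position lies on a circle of radius xi*d_p/2 about the magnetic center
radii = np.linalg.norm(r_center, axis=1)
```

For $\xi = 0$ (central dipole, case i) the arc radius vanishes and the rotation produces no translation, consistent with the discussion above.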
We consider this case here because the experiments described later in this work are performed on particles moving close to a planar substrate. We emphasize that the presence of a wall is, however, not essential for the actuation mechanism presented here. \begin{figure} \centerline{\includegraphics[width=8.7cm]{sketch1.pdf} } \caption{Time-reversible motion of a single dipolar sphere under oscillating magnetic fields $B^{\text{z}}$. The scenarios depict a sphere with central dipole (i, iii) and shifted dipole (ii, iv) in an isotropic medium (i, ii) and close to a wall (iii, iv). The dipole (light red arrow) with moment $m$ oscillates up (green) and down (yellow), which results in a displacement of the sphere (dotted green and yellow circles) under asymmetric conditions (ii\,-\,iv). } \label{pics:sk-trans-sd} \end{figure} Already for a sphere with central dipole (case iii, Fig.~\ref{pics:sk-trans-sd}) in a liquid medium close to a wall, rotation is converted into reversible back and forth displacement. An sd-particle also rolls back and forth, and additionally performs a periodic displacement away from the wall and towards the wall (case iv, Fig.~\ref{pics:sk-trans-sd}) because the magnetic center is shifted away from the geometric center of the sphere. In contrast to case iii, this motion is not symmetric with respect to $\theta$ due to the hindrance by the wall. For an sd-particle in an isotropic medium, we can derive an analytic expression for the displacement of the particle center that arises from the rotation by an infinitesimal angle $\mathrm{d}\theta$ around the magnetic center (see SI Appendix). The infinitesimal displacement,
\section{Introduction} Currents of spin angular momentum play a central role in the field of spintronics. Spin currents have enabled significant advances, such as control of magnetization by spin-transfer torque,\citep{PhysRevLett.84.3149,demidov2011control,berger1996emission,ralph2008spin} transmission of electric signals through insulators,\citep{kajiwara2010transmission,cornelissen2015long,:/content/aip/journal/apl/107/17/10.1063/1.4935074} thermoelectric conversion,\citep{uchida2008observation,uchida2010spin,jaworski2010observation,kirihara2012spin,7452553} and electric probing of insulator magnetization.\citep{nakayama2013spin,chen2013theory,althammer2013quantitative} In order to detect and utilize these spin-based phenomena, conversion between spin and charge currents is necessary. For realizing efficient spin-to-charge current conversion, a wide range of materials is currently being investigated, including metals,\citep{saitoh2006conversion,kimura2007room,miao2013inverse,liu2012spin,mosendz2010quantifying,wang2014scaling,valenzuela2006direct,mendes2014large,niimi2015reciprocal,Azevedo:2005kw} semiconductors,\citep{wunderlich2005experimental,sinova2004universal,kato2004observation,ando2011electrically,ando2012observation,murakami2003dissipationless,chen2013direct} organic materials,\citep{SunSchootenKavandEtAl2016,ando2013solution} carbon-based materials,\citep{dushenko2015experimental} and topological insulators.\citep{shiomi2014spin,AndoHamasakiKurokawaEtAl2014,mellnik2014spin} Finding materials suitable for spin-to-charge conversion is thus indispensable for making spintronic devices. One of the popular methods of spin-to-charge conversion is the inverse spin Hall effect (ISHE),\citep{Azevedo:2005kw,saitoh2006conversion} which is the reciprocal of the spin Hall effect\citep{hirsch1999spin,kato2004observation,sinova2004universal,murakami2003dissipationless} caused by spin-orbit interaction.
In the ISHE, a spin current generates a transverse charge current in a conductor such as Pt. Since the first demonstration of the ISHE in Pt and Al,\citep{saitoh2006conversion,valenzuela2006direct} it has been extensively studied because of its versatility.\citep{hoffmann2013spin,RevModPhys.87.1213} Dynamical generation of spin currents can be achieved by spin pumping (SP).\citep{mizukami2002effect,Tserkovnyak:2005fr,tserkovnyak2002enhanced} At the interface between a normal conductor (N) and a ferromagnet (F), the SP causes emission of spin currents into the N layer from magnetization dynamics in the adjacent F layer. Such magnetization dynamics is typically triggered by applying a microwave field; at the ferromagnetic resonance (FMR) or the spin wave resonance (SWR) condition, the magnetization resonantly absorbs the microwave power and exhibits a coherent precessional motion. Part of the angular momentum stored in this precessional motion is the source of the spin current generated by the SP. The combination of the ISHE and the SP enables electric detection and generation of spin currents.\citep{saitoh2006conversion,Azevedo:2005kw} This is the setup commonly used to study the properties of spin-to-charge current conversion and spin transport in materials. A spin current is injected into an N layer by the SP and is converted to a measurable electromotive force by the ISHE.
The conversion efficiency between spin and charge currents in this process can be determined by estimating the density of the injected spin current from an analysis of the microwave spectra.\citep{ando2011inverse,mosendz2010quantifying} The spin transport property of a material can be investigated by constructing a heterostructure in which the material of interest is placed between spin-current injector and detector layers.\citep{shikoh2013spin,dushenko2015experimental,watanabe2014polaron} \begin{figure} \begin{centering} \includegraphics{Fig1SpandRectHeat} \par\end{centering} \caption{(Color online) A schematic illustration of the SP and ISHE processes. The SP induces a spin current, $\mathbf{j}_{{\rm s}}$, and the ISHE converts $\mathbf{j}_{{\rm s}}$ into a charge current, $\mathbf{j}_{{\rm c,ISHE}}$. A microwave driving the FMR induces an rf current, $\mathbf{j}_{{\rm rf}}$, causing a dc current $j_{{\rm rect}}$ via galvanomagnetic effects. The absorbed microwave power at FMR induces a temperature gradient, $\nabla T$, causing a thermoelectric voltage. These processes result in unwanted signals.\label{fig:A-schematic-illustration}} \end{figure} It should be noted that the voltage signal from the ISHE can be contaminated by other contributions in practical experiments. We thus need to extract the ISHE contribution by separating or minimizing the unwanted signals in
\section{Introduction}\label{introduction} \setcounter{equation}{0} \numberwithin{equation}{section} In the present paper, we consider the equation \begin{equation}\label{1.1} -y''(x)+q(x)y(x )=f(x),\quad x\in \mathbb R \end{equation} where $f\in L_p^{\loc}(\mathbb R)$, $p\in[1,\infty)$ and \begin{equation}\label{1.2} 0\le q \in L_1^{\loc}(\mathbb R). \end{equation} Our general goal is to determine a space frame within which equation \eqref{1.1} always has a unique stable solution. To state the problem in a more precise way, let us fix two positive continuous functions $\mu(x)$ and $\theta(x),$ $x\in\mathbb R,$ a number $p\in[1,\infty)$, and introduce the spaces $L_p(\mathbb R,\mu)$ and $L_p(\mathbb R,\theta):$ \begin{align} &L_p(\mathbb R,\mu)=\left\{f\in L_p^{\loc}(\mathbb R):\|f\|_{L_p(\mathbb R ,\mu)}^p=\int_{-\infty}^\infty|\mu(x)f(x)|^pdx<\infty\right\}\label{1.3}\\ &L_p(\mathbb R,\theta)=\left\{f\in L_p^{\loc}(\mathbb R):\|f\|_{L_p(\mathbb R ,\theta)}^p=\int_{-\infty}^\infty|\theta(x)f(x)|^pdx<\infty\right\}.\label{1.4} \end{align} For brevity, below we write $L_{p,\mu}$ and $L_{p,\theta},$ \ $\|\cdot\|_{p,\mu}$ and $\|\cdot\|_{p,\theta}$, instead of $L_p(\mathbb R,\mu),$ $L_p(\mathbb R,\theta)$ and $\|\cdot\|_{L_p(\mathbb R,\mu)}$, $\|\cdot\|_{L_p(\mathbb R,\theta)},$ respectively (for $\mu=1$ we use the standard notation $L_p$ $(L_p:=L_p(\mathbb R))$ and $\|\cdot\|_p$ $(\|\cdot\|_p:=\|\cdot\|_{L_p}).$ In addition, below by a solution of \eqref{1.1} we understand any function $y,$ absolutely continuous together with its derivative and satisfying equality \eqref{1.1} almost everywhere on $\mathbb R$. 
Let us introduce the following main definition (see \cite[Ch.5, \S50--51]{12}): \begin{defn}\label{defn1.1} We say that the spaces $L_{p,\mu}$ and $L_{p,\theta}$ make a pair $\{L_{p,\mu},L_{p,\theta}\}$ admissible for equation \eqref{1.1} if the following requirements hold: I) for every function $f\in L_{p,\theta}$ there exists a unique solution $y\in L_{p,\mu}$ of \eqref{1.1}; II) there is a constant $c(p)\in (0,\infty)$ such that regardless of the choice of a function $f\in L_{p,\theta}$ the solution $y\in L_{p,\mu}$ of \eqref{1.1} satisfies the inequality \begin{equation}\label{1.5} \|y\|_{p,\mu}\le c(p)\|f\|_{p,\theta}. \end{equation} \end{defn} In addition, let us make the following conventions: for brevity, we say ``problem I)--II)" or ``question on I)--II)" instead of ``problem (or question) on conditions for the functions $\mu$ and $\theta$ under which requirements I)--II) of Definition \ref{defn1.1} hold." We say ``the pair $\{L_{p,\mu};L_{p,\theta}\}$ admissible for \eqref{1.1}" instead of ``the pair of spaces $\{L_{p,\mu};L_{p,\theta}\}$ admissible for equation \eqref{1.1}", and we often omit the word ``equation" before \eqref{1.1}. By $c,\, c(\cdot)$ we denote absolute positive constants which are not essential for the exposition and may differ even within a single chain of calculations. Our general requirement \eqref{1.2} is assumed to be satisfied throughout the paper, is not referred to, and does not appear in the statements. Let us return to Definition \ref{defn1.1}. The question on the admissibility of the pair $\{L_p,L_p\}$ for \eqref{1.1} was studied in \cite{3,6} (there, with $\mu\equiv\theta\equiv1$, equation \eqref{1.1} was said to be correctly solvable in $L_p$ whenever I)--II) held; we maintain this terminology in the present paper). Let us quote the main result of \cite{3,6} (in terms of Definition \ref{defn1.1}).
\begin{thm} \label{thm1.2} \cite{3} The pair $\{L_p,L_p\}$ is admissible for \eqref{1.1} if and only if there is $a\in(0,\infty)$ such that $q_0(a)>0.$ Here \begin{equation}\label{1.6} q_0(a)=\inf_{x\in\mathbb R}\int_{x-a}^{x+a}q(t)dt. \end{equation} \end{thm} Below we continue the investigation started in \cite{3,6}. Our goal is as follows: given equation \eqref{1.1}, to determine requirements on the weights $\mu$ and $\theta$ under which the pair $\{L_{p,\mu};L_{p,\theta}\},$ $p\in[1,\infty),$ is admissible for \eqref{1.1}. Such an approach to the inversion of \eqref{1.1} allows us to study this equation also in the case where \thmref{thm1.2} is
\section{Introduction} Spectral embedding methods are based on analyzing Markov chains on a high-dimensional data set $\left\{x_i\right\}_{i=1}^{n} \subset \mathbb{R}^d$. There are a variety of different methods, see e.g. Belkin \& Niyogi \cite{belk}, Coifman \& Lafon \cite{coif1}, Coifman \& Maggioni \cite{coif2}, Donoho \& Grimes \cite{donoho}, Roweis \& Saul \cite{rs}, Tenenbaum, de Silva \& Langford \cite{ten}, and Sahai, Speranzon \& Banaszuk \cite{sahai}. A canonical choice for the weights of the graph is to declare that the probability $p_{ij}$ of moving from point $x_j$ to $x_i$ is $$ p_{ij} = \frac{ \exp\left(-\frac{1}{\varepsilon}\|x_i - x_j\|^2_{\ell^2(\mathbb{R}^d)}\right)}{\sum_{k=1}^{n}{ \exp\left(-\frac{1}{\varepsilon}\|x_k - x_j\|^2_{\ell^2(\mathbb{R}^d)}\right)}},$$ where $\varepsilon > 0$ is a parameter that needs to be suitably chosen. This Markov chain can also be interpreted as a weighted graph that arises as the natural discretization of the underlying `data-manifold'. Seminal results of Jones, Maggioni \& Schul \cite{jones} justify considering the solutions of $$ -\Delta \phi_n = \lambda_n^2 \phi_n$$ as measuring the intrinsic geometry of the weighted graph. Here we always assume Neumann boundary conditions whenever such a graph approximates a manifold. \begin{figure}[h!]
\begin{tikzpicture}[scale=0.2\textwidth, node distance=1.2cm,semithick] \node[origVertex] (0) {}; \node[origVertex] (1) [right of=0] {}; \node[origVertex] (2) [above of=0] {}; \node[origVertex] (3) [above of=1] {}; \node[origVertex] (4) [above of=2] {}; \path (0) edge[origEdge, out=-45, in=-135] node[newVertex] (m0) {} (1) edge[origEdge, out= 45, in= 135] node[newVertex] (m1) {} (1) edge[origEdge] node[newVertex] (m2) {} (2) (1) edge[origEdge] node[newVertex] (m3) {} (3) (2) edge[origEdge] node[newVertex] (m4) {} (3) edge[origEdge] node[newVertex] (m5) {} (4) (3) edge[origEdge, out=125, in= 55, looseness=30] node[newVertex] (m6) {} (3); \path (m0) edge[newEdge, out= 135, in=-135] (m1) edge[newEdge, out= 45, in= -45] (m1) edge[newEdge, out=-145, in=-135, looseness=1.7] (m2) edge[newEdge, out= -35, in= -45, looseness=1.7] (m3) (m1) edge[newEdge] (m2) edge[newEdge] (m3) (m2) edge[newEdge] (m4) edge[newEdge, out= 135, in=-135] (m5) (m3) edge[newEdge] (m4) edge[newEdge, out= 45, in= 15] (m6) (m4) edge[newEdge] (m5) edge[newEdge, out= 90, in= 165] (m6) ; \draw [thick, xshift=0.006cm,yshift=0.005cm] plot [smooth, tension=1] coordinates { (0.03,0.01) (0.04,-0.01) (0.06,0.01) (0.055,0.02) (0.05, 0.01) (0.04, 0.01) (0.035, 0.01) (0.03, 0.02) (0.03,0.01) }; \end{tikzpicture} \caption{Graphs that approximate smooth manifolds.} \end{figure} The cornerstone of spectral embedding is the realization that the map \begin{align*} \Phi: \left\{x_i\right\}_{i=1}^{n} &\rightarrow \mathbb{R}^k \\ x &\rightarrow (\phi_1(x), \phi_2(x), \dots, \phi_k(x)). \end{align*} can be used as an effective way of reducing the dimensionality. One useful explanation that is often given is to observe that the Feynman-Kac formula establishes a link between random walks on the weighted graph and the evolution of the heat equation. 
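Concretely, the weights $p_{ij}$ and the map $\Phi$ can be sketched in a few lines (a minimal illustration with our own parameter choices, not a reference implementation):

```python
import numpy as np

def spectral_embedding(X, eps, k):
    """Build the transition matrix p_ij from the Gaussian kernel and return
    the first k nontrivial eigenvector coordinates as the embedding Phi.
    A minimal sketch; eigenvector conventions vary between references."""
    # squared pairwise distances ||x_i - x_j||^2
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / eps)
    # column-normalize: p_ij = probability of moving from x_j to x_i
    P = K / K.sum(axis=0, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)       # sort eigenvalues descending
    # drop the top (trivial) eigenvector and keep the next k coordinates
    return vecs[:, order[1:k + 1]].real

# two well-separated Gaussian clusters: the first embedding coordinate
# takes opposite signs on the two clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(2.0, 0.1, (20, 2))])
phi = spectral_embedding(X, eps=1.0, k=2)
```

The sign pattern of the first coordinate of $\Phi$ recovers the two clusters, in line with the heat-equation intuition discussed in the text.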
We observe that random walks have a tendency to be trapped in clusters and are unlikely to cross over bottlenecks and, simultaneously, that the evolution of the heat equation can be explicitly given as $$ \left[e^{t \Delta} f\right](x) = \sum_{n=1}^{\infty}{e^{-\lambda_n^2 t} \left\langle f, \phi_n \right\rangle \phi_n(x)}.$$ The exponential decay $e^{-\lambda_n^2 t}$ implies that the long-time dynamics is really governed by the low-lying eigenfunctions, which then have to be able to somehow reconstruct the random walks' inclination for getting trapped in clusters and should thus be able to reconstruct the clusters. We believe this intuition to be useful and our further exposition will be
\section{Introduction}\label{sintro} \section{Background: CUR and low-rank approximation}\label{sbcgr} {\em Low-rank approximation} of an $m\times n$ matrix $W$ having a small numerical rank $r$, that is, having a well-conditioned rank-$r$ matrix nearby, is one of the most fundamental problems of numerical linear algebra \cite{HMT11} with a variety of applications to highly important areas of modern computing, which range from machine learning theory and neural networks \cite{DZBLCF14}, \cite{JVZ14} to numerous problems of data mining and analysis \cite{M11}. One of the most studied approaches to the solution of this problem is given by $CUR$ {\em approximation}, where $C$ and $R$ are a pair of $m\times l$ and $k\times n$ submatrices formed by $l$ columns and $k$ rows of the matrix $W$, respectively, and $U$ is an $l\times k$ matrix such that $W\approx CUR$. Every low-rank approximation allows very fast approximate multiplication of the matrix $W$ by a vector, but CUR approximation is particularly transparent and memory efficient. The algorithms for computing it are characterized by two main parameters: (i) their complexity and (ii) bounds on the error norms of the approximation. We assume that $r\ll \min\{m,n\}$, that is, the integer $r$ is much smaller than $\min\{m,n\}$, and we seek algorithms that use $o(mn)$ flops, that is, much fewer than the information lower bound $mn$. \section{State of the art and our progress}\label{ssartpr} The algorithms of \cite{GE96} and \cite{P00} compute CUR approximations by using order of $mn\min\{m,n\}$ flops.\footnote{Here and hereafter {\em ``flop"} stands for ``floating point arithmetic operation".} The authors of \cite{BW14} do this in $O(mn\log(mn))$ flops by using randomization. These are record upper bounds for computing a CUR approximation to {\em any input matrix} $W$, but the user may be quite happy with having close CUR approximations to the {\em many matrices} $W$ that make up the class of his/her interest.
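The shape of a CUR factorization can be illustrated with a generic, unoptimized construction: the uniformly random row/column selection below is our own simplification and not the careful selection used in \cite{GE96} and \cite{P00}:

```python
import numpy as np

def cur_approx(W, k, l, rng):
    """Generic CUR sketch: pick k rows and l columns of W and set
    U = pinv(C) @ W @ pinv(R).  For an exactly rank-r matrix with
    k = l = r and generically chosen rows/columns, C U R = W, since
    C pinv(C) and pinv(R) R project onto the column/row space of W."""
    rows = rng.choice(W.shape[0], size=k, replace=False)
    cols = rng.choice(W.shape[1], size=l, replace=False)
    C = W[:, cols]                                   # m x l column submatrix
    R = W[rows, :]                                   # k x n row submatrix
    U = np.linalg.pinv(C) @ W @ np.linalg.pinv(R)    # l x k core matrix
    return C, U, R

# rank-5 test matrix: CUR with k = l = 5 recovers it up to round-off
rng = np.random.default_rng(1)
W = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
C, U, R = cur_approx(W, k=5, l=5, rng=rng)
err = np.linalg.norm(W - C @ U @ R) / np.linalg.norm(W)
```

Note that forming $U$ this way reads all of $W$ and so costs order $mn$ flops; it only illustrates the factorization, not the sublinear-cost algorithms discussed below.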
The information lower bound $mn/2$ (a flop involves at most two entries) does not apply to such restricted input classes, and we go well below it in our paper \cite{PSZa} (we must refer to that paper for technical details because of the limitation on the size of this submission). We first formalize the problem of CUR approximation of an average $m\times n$ matrix of numerical rank $r\ll \min\{m,n\}$, assuming the customary Gaussian (normal) probability distribution for its $(m+n)r$ i.i.d. input parameters. Next we consider a two-stage approach: (i) first fix a pair of integers $k\le m$ and $l\le n$ and compute a CUR approximation (by using the algorithms of \cite{GE96} or \cite{P00}) to a random $k\times l$ submatrix and then (ii) extend it to computing a CUR approximation of the input matrix $W$ itself. We must keep the complexity of Stage (i) low and must extend the CUR approximation from the submatrix to the matrix $W$. We prove that for a specific class of input matrices $W$ these two tasks are in conflict (see Example 11 of \cite{PSZa}), but such a class of hard inputs is narrow, because we prove that our algorithm produces a close approximation to the average $m\times n$ input matrix $W$
\section{Introduction} NGC 4660 is an E5 elliptical galaxy with a strong embedded disk component, and has frequently been used as a `classic' example of this type of galaxy. As such, it has often appeared in the literature. It is located in the Virgo cluster, and has been identified as a member of the nearest compact group satisfying the Hickson criterion \citep{mam89}, along with other Virgo galaxies like M59 and M60, but \citet{mam08} concluded from surface brightness fluctuation measurements that M59 and NGC 4660 form a pair $\sim 1 - 2$ Mpc closer to us than the other three galaxies of the potential compact group. Hence we adopt a distance of 18 Mpc for this galaxy, giving a scale of 87 pc/arcsec and 5.2 kpc/arcmin. Generally it has been assumed that the disk in NGC 4660 is primordial and that the galaxy has not undergone a significant interaction or merger, but here we report the discovery of a long, curved filament apparently emanating from NGC 4660, in the direction of M59, which implies that it has experienced some kind of major event with a dynamical timescale of a few $\times 10^{8}$ years. The filament was discovered on a co-added array of scanned Kodak Technical Pan films of the SE region of the Virgo cluster taken with the UK Schmidt Telescope \citep{kat98, kat01}, which have previously been used to identify or confirm other filamentary structures, e.g. the filamentary `ellipse' of IC 3481 \citep{per09}, the filamentary connections between NGC 4410 A/B, C and D \citep{per08}, etc. Although such Schmidt plate/film studies have become almost obsolete in modern astronomy, the discovery of the present, previously unsuspected, filament of NGC 4660 shows that they can still be put to productive use.
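The quoted scale follows directly from the adopted distance via the small-angle conversion from arcseconds to radians:

```python
import math

# Angular scale at the adopted distance of 18 Mpc: one arcsecond in
# radians, multiplied by the distance, gives the projected size.
D_pc = 18.0e6                          # 18 Mpc expressed in parsecs
arcsec = math.pi / (180.0 * 3600.0)    # 1 arcsec in radians
pc_per_arcsec = D_pc * arcsec          # ~87 pc/arcsec
kpc_per_arcmin = pc_per_arcsec * 60.0 / 1000.0  # ~5.2 kpc/arcmin
```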
Here we report the discovery of the filament using the Schmidt data, and describe subsequent multiband CCD observations with the 2.1m telescope at the San Pedro M\'artir Observatory, which provide colour information for this galaxy and further imaging of the filament. In Section 2 we provide a summary of previous work on the galaxy NGC 4660, while in Section 3 we describe the observational data and their processing. Results of the photometry of NGC 4660 and of the filament are given in Section 4, and in Section 5 we discuss the age and the formation of the filament and give general conclusions. \section {Previous work on NGC 4660} NGC 4660 (VCC 2000) has been studied in various ways, and has formed part of many samples of early-type galaxies over the years, on account of its proximity to us in the Virgo cluster and its embedded disk component. It is located at RA = 12:44:32.0, Dec. = +11:11:26 (J2000), has a recessional velocity of 1083 km/s, and a redshift of 0.003612 (NASA/IPAC Extragalactic Database). Although its classification in the SIMBAD database is E5, the Virgo Cluster Catalogue \citep{bin85} gives a classification of E3/S0--1(3), which emphasises its transitional nature. There have been a few previous photometric and structural studies of NGC 4660, though three of them
\section*{Supplemental Material} In this Supplemental Material, we provide more numerical data for the ground-state entanglement entropy and entanglement spectrum. \subsection*{Ground-state entanglement entropy} In the main text, we have discussed the ground-state entanglement entropy $S(\overline{\rho})$ obtained by averaging the density matrices of the three ground states, i.e., $\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$. Now we compute the corresponding result $S(|\Psi_i\rangle)$ and its derivative $dS(|\Psi_i\rangle)/dW$ of the three individual states. The sample-averaged results are shown in Fig.~\ref{Spsi}. The data of three individual states have some differences, but are qualitatively the same: for all of them, the entanglement decreases with $W$, and the derivative with respect to $W$ has a single minimum that becomes deeper for larger system sizes. For the finite systems that we have studied, the location of the minimum does depend somewhat on the individual states, but the value does not deviate much from $W=0.6$. To incorporate the effects of all of the three states, we compute the mean $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$. This is an alternative averaging method to the one ($\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$) that we use in the main text. The sample-averaged results are shown in Fig.~\ref{Sbar}. The minimum of $\langle d\overline{S}/dW\rangle$ is located at $W\approx0.6$ for $N=5-9$ electrons [Fig.~\ref{Sbar}(c)], and its depth diverges as $h\propto N^{1.33}$ with the system size [Fig.~\ref{Sbar}(d)]. The scaling $d\overline{S}/dW\propto N^{\frac{1}{2}+\frac{1}{2\nu}}f'[N^{\frac{1}{2\nu}}(W-W_c)]$ suggests $\nu\approx 0.6$. $\langle\overline{S}\rangle$ agrees with an area law at all $W$'s, and the entanglement density starts to drop at $W\approx0.4$ [Fig.~\ref{Sbar}(b)]. All of these results are very similar to those shown in Figs.~1 and 2 in the main text. 
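The two averaging methods need not agree in general: since the partial trace is linear, concavity of the von Neumann entropy guarantees $S(\overline{\rho})\ge\overline{S}$. A minimal numerical illustration with hypothetical diagonal (classical) reduced density matrices, not the actual ground states:

```python
import math

def entropy(p):
    """Von Neumann entropy of a diagonal density matrix (eigenvalues p)."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

# three hypothetical "states", given by their eigenvalue distributions
rhos = [[1.0, 0.0], [0.5, 0.5], [0.25, 0.75]]
rho_bar = [sum(r[k] for r in rhos) / 3.0 for k in range(2)]

S_of_average = entropy(rho_bar)                      # S(rho_bar)
average_of_S = sum(entropy(r) for r in rhos) / 3.0   # S-bar
# concavity of the entropy guarantees S_of_average >= average_of_S
```

Both quantities nevertheless track the same qualitative behaviour, which is consistent with the two methods locating the same critical $W$.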
This means both averaging methods, i.e., $\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$ and $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$, can identify the ground-state phase transitions and give the same critical $W$. However, we observe larger finite-size effects of $h$ and error bars in $\langle d\overline{S}/dW\rangle$ (especially at small $W$). \begin{figure*} \centerline{\includegraphics[width=\linewidth]{entropy_psi.pdf}} \caption{$\langle S(|\Psi_i\rangle)\rangle$ and $\langle dS(|\Psi_i\rangle)/dW\rangle$ for (a,d) $|\Psi_1\rangle$, (b,e) $|\Psi_2\rangle$ and (c,f) $|\Psi_3\rangle$, where $|\Psi_1\rangle$, $|\Psi_2\rangle$ and $|\Psi_3\rangle$ are the three states with ascending energies in the ground-state manifold. Here we averaged $20000$ samples for $N=4-7$, $5000$ samples for $N=8$, and $800$ samples for $N=9$ electrons. The data at $W=\infty$, i.e., the noninteracting limit are also given.} \label{Spsi} \end{figure*} \begin{figure} \centerline{\includegraphics[width=\linewidth]{entropy_psi_avg.pdf}} \caption{We measure the ground-state entanglement by $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$. (a) $\langle\overline{S}\rangle$, (b) the entanglement density $\alpha$, and (c) $\langle d\overline{S}/dW\rangle$ versus the disorder strength $W$. (d) The depth $h$ of $\langle d\overline{S}/dW\rangle$ versus the number of electrons $N$ on a double logarithmic plot. The linear fit (dashed line) shows $h\propto N^{1.33}$. Here we averaged $20000$ samples for $N=4-7$, $5000$ samples for $N=8$, and $800$ samples for $N=9$ electrons. The data at $W=\infty$, i.e., the noninteracting limit are also given in (a) and (b).} \label{Sbar} \end{figure} \subsection*{Ground-state entanglement spectrum (ES)} In the main text, we consider the density of states (DOS) $\overline{D}(\xi)$ and level statistics $\overline{P}(s)$ of the ES averaged over three ground states. 
We find that the results for each individual state are almost the same as those obtained by averaging over the three ground states, which justifies the averaging procedure. Here, we demonstrate the results [$D_1(\xi)$
\section{Introduction} The shallow water equations (Saint-Venant \cite{Saint-Venant1871}) are a common $(d-1)$-dimensional approximation to the $d$-dimensional Navier-Stokes equations to model incompressible, free surface flows. Due to the ability of high-order Galerkin methods to keep dissipation and dispersion errors low (Ainsworth et al.\ \cite{ainsworthEtAl2006}) and their flexibility with arbitrary geometries and {\it hp}-adaptivity, these methods are proving their mettle for solving the shallow water equations (SW) in the modeling of non-linear waves in different geophysical flows (Ma \cite{ma1993}, Iskandarani \cite{iskandaraniEtAl1995}, Taylor et al.\ \cite{taylorTribbia1997}, Giraldo \cite{giraldo2001}, Giraldo et al.\ \cite{giraldoHesthavenWarburton2002}, Dawson and Aizinger \cite{dawsonAizinger2005}, Kubatko et al.\ \cite{kubatkoEtAl2006}, Nair et al.\ \cite{nairThomasLoft2007}, Giraldo and Restelli \cite{giraldoRestelli2010}, Xing et al.\ \cite{xingZhangShu2010}, K\"arn\"a et al.\ \cite{karnaEtAl2011}, Hendricks et al.\ \cite{hendricksKoperaGiraldo2015}, Hood \cite{karolineTHESIS2016}). Although it is typically assumed that high-order Galerkin methods are not strictly necessary, they do offer many advantages over their low-order counterparts. Examples include their ability to resolve fine scale structures, and to do so with fewer degrees of freedom, as well as their strong scaling properties on massively parallel computers (M\"uller et al.\ \cite{mullerEtAl2016}, Abdi et al.\ \cite{abdiEtAl2016}, Gandham et al.\ \cite{gandhamEtAl2015}). High-order methods also have some disadvantages. For example, they are constrained by small time-steps. To overcome this restriction, we follow K\"arn\"a et al.\ \cite{karnaEtAl2011} and implement an implicit Runge-Kutta scheme based on Giraldo et al.\ \cite{giraldoEtAl2013}.
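A second difficulty, addressed next, is stabilization near sharp wave fronts, which often relies on a gradient-dependent viscosity. As a minimal illustration of the idea behind Smagorinsky-type closures (the constant, grid spacing, and velocity field here are arbitrary placeholders, not the model developed in this work):

```python
# 1D sketch of a Smagorinsky-type eddy viscosity: nu_e = (Cs*dx)^2 |du/dx|.
# The viscosity switches on only where the resolved gradient is large,
# which is why such models act selectively near bores and sharp fronts.
Cs = 0.16          # nominal Smagorinsky constant (placeholder value)
dx = 0.1           # grid spacing
u = [0.0, 0.0, 1.0, 1.0, 1.0]   # a sharp front between cells 1 and 2

nu_e = [(Cs * dx) ** 2 * abs((u[i + 1] - u[i - 1]) / (2.0 * dx))
        for i in range(1, len(u) - 1)]
# nu_e is largest across the front and vanishes in the smooth region
```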
Furthermore, the numerical approximation of non-linear hyperbolic systems via high-order methods is susceptible to unphysical Gibbs oscillations that form in the proximity of strong gradients, such as those of sharp wave fronts (e.g. bores). Filters such as Vandeven's and Boyd's \cite{vandeven1991, boyd1996}, along with artificial diffusion of some sort, are the most common tools for handling this problem. However, we noticed that filtering may not be sufficient as the flow strengthens and the wave sharpness intensifies. We recently demonstrated this issue for the solution of the non-linear Burgers' equation by high-order spectral elements in \cite[\S 5]{marrasEtAl2015b}, and will show how a proper dynamic viscosity does indeed improve on filters for the shallow water system as well. To preserve numerical stability without compromising the overall quality of the solution, Pham Van et al.\ \cite{phanVanEtAl2014} and Rakowsky et al.\ \cite{rakowskyEtAl2013} utilized a Lilly-Smagorinsky model \cite{lilly1962, smagorinsky1963}. To account for sub-grid scale effects, viscosity was also utilized in the DG model described by Gourgue et al.\ \cite{gourgueEtal2009} to improve their inviscid simulations. Recently, Pasquetti et al.\ \cite{pasquettiGuermondPopoICOSAHOM2014} stabilized the high-order spectral element solution of the one-dimensional Saint-Venant equations via the entropy viscosity method. Michoski et al.\ \cite{michoskiEtAl2016} compared artificial viscosity, limiters, and filters for the (modal) DG solution of SW, concluding that a dynamically adaptive diffusion may be the most effective means of regularization at higher orders. The diffusion model that we present in this work stems from the sub-grid scale (SGS) model that was proposed by Nazarov and Hoffman \cite{nazarovHoffman2013} to stabilize the linear finite element solution of compressible flows with shock waves. This was later applied to stabilize high-order continuous and discontinuous Galerkin (CG/DG) in
\section{Introduction and Summary of Results} The problem of admissible functional classes has been of recent interest in the context of higher-spin (HS) theories \cite{Vasiliev:1990en}. In particular, in \cite{Kessel:2015kna,Boulanger:2015ova} the quadratic interaction term sourcing Fronsdal's equations was extracted from Vasiliev's equations, obtaining an expression of the schematic form:\footnote{Notice that in the following we use the schematic notation $\Box^l\sim \dots\nabla_{\mu(l)}\phi\ldots\nabla^{\mu(l)}\phi$. We give precise formulas for the above contractions in the spinor language in the following section in eq.~\eqref{dic}. In this section all formulas are schematic and provide some intuition on their generic structure.} \begin{equation}\label{back} \Box\phi_{\mu(s)}+\ldots=\sum_{l=0}^\infty\frac{j_l}{l!l!}\,\Box^l\left(\nabla\ldots\nabla\phi\ \nabla\ldots\nabla\phi\right)_{\mu(s)}\,, \end{equation} which is sometimes referred to in the literature as pseudo-local or quasi-local, meaning that it is a formal series in derivatives.\footnote{This terminology originates from the fact that formal series allow truncations to finitely many terms, which are always local.} The extracted Fronsdal currents have coefficients whose asymptotic behaviour is given by $j_l\sim \frac1{l^3}$ for $l\to \infty$ for any choice of the spin $s$. The asymptotic behaviour of the coefficients raised the important question of whether or not the extracted backreaction is strongly coupled\footnote{Preliminary questions of this type were raised in \cite{Boulanger:2008tg}.}. Furthermore, a key question whose study was undertaken in \cite{Skvortsov:2015lja,Taronna:2016ats} was whether it is possible to extract the coefficients of the canonical Metsaev vertices with finitely many derivatives \cite{Metsaev:2005ar,Joung:2011ww} from the above tails. Indeed, most of the coefficients at the cubic order are unphysical, since they can be removed by local field redefinitions.
In this respect, the full list of Metsaev-like couplings was indeed extracted holographically in \cite{Sleight:2016dba} and amounts to a finite number of coefficients for any triple of spins, to be contrasted to the above infinite set (see also \cite{Taronna:2010qq,Sagnotti:2010at} for the analogous string theory computation and corresponding cubic couplings).\footnote{It is important to stress that in a fully non-linear HS theory it is expected that the appropriate field frame which makes HS geometry manifest will entail \emph{all} of the above coefficients. The situation should be similar to the Einstein-Hilbert cubic couplings which are dressed by improvement terms that can be removed by a field redefinition at the cubic order. This is a further key motivation to understand these higher-derivative tails.} Remarkably, the pseudo-local nature of the above currents implies that the only way to relate them to their Metsaev-like counterparts is via a pseudo-local field redefinition of the same schematic form: \begin{align}\label{pseudored} \phi_{\mu(s)}\rightarrow\phi^\prime_{\mu(s)}=\phi_{\mu(s)}+\sum_{l=0}^\infty \frac{a_l}{l!l!}\,\Box^l\left(\nabla\ldots\nabla\phi\ \nabla\ldots\nabla\phi\right)_{\mu(s)}\,, \end{align} involving a sum over infinitely many terms unbounded in derivatives (i.e. pseudo-local). This result has motivated a renewed interest in the analysis of the admissible functional classes in HS theories. Indeed, an arbitrary pseudo-local redefinition defined in \eqref{pseudored} is sufficient to remove all pseudo-local current interactions \cite{Prokushkin:1999xq,Kessel:2015kna} and some further condition on the coefficients $a_l$ should be imposed on top of quasi-locality. A proposal\footnote{The proposal of \cite{Taronna:2016ats} is based on jet space and on the convergence of the infinite derivative expansion. 
It turns out that this proposal ensures the invariance of the Witten diagrams under the corresponding admissible field redefinitions.} based on the invariance of the holographic Witten-diagrams
\section{Introduction} \blfootnote{\hspace{-0.65cm} This work is licensed under a Creative Commons Attribution 4.0 International License. License details: \url{http://creativecommons.org/licenses/by/4.0/}} Argumentation mining is a relatively new challenge in corpus-based discourse analysis that involves automatically identifying argumentative structures within a corpus. Many tasks in argumentation mining~\cite{argumentsurvey2015} and debating technologies~\cite{colingdemo2014} involve categorizing a sequence in the context of another sequence. For example, in \emph{context dependent claim detection}~\cite{cdcd2014}, given a sentence, one task is to identify whether the sentence contains a claim relevant to a particular debatable topic (generally given as a context sentence). Similarly, in \emph{context dependent evidence detection}~\cite{evidence2015}, given a sequence (possibly multiple sentences), one task is to detect whether the sequence contains evidence relevant to a particular topic. We refer to such a class of problems in computational argumentation as \emph{bi-sequence classification} problems: given two sequences $s$ and $c$, we want to predict the label for the target sequence $s$ in the context of another sequence $c$\footnote{In this paper, we shall ignore the subtle distinction between sentence and sequence; both will mean simply a text segment composed of words.}. Apart from the debating tasks, several other natural language inference tasks fall under the same paradigm of having a pair of sequences. One example is \emph{recognizing textual entailment}~\cite{snli2015}, where the task is to predict whether the meaning of a sentence can be inferred from the meaning of another sentence. Another class of problems originates from question-answering systems and is known as \emph{answer selection}: given a question, a candidate answer needs to be classified as an answer to the question at hand or not.
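Two simple ways to combine $s$ and $c$ — concatenating the sequences before encoding, versus encoding them separately and concatenating the representations — can be sketched as follows. The toy embeddings and the mean-pooling "encoder" are hypothetical stand-ins for trained RNN/CNN encoders, not the models evaluated in this paper:

```python
# Toy sketch of two context-handling schemes for bi-sequence classification.
# The embedding table and mean-pooling encoder are placeholders.
emb = {"pets": [0.5, 0.5], "cats": [1.0, 0.0], "purr": [0.0, 1.0]}

def encode(tokens):
    """Stand-in encoder: mean-pool the token embeddings."""
    vecs = [emb[t] for t in tokens]
    return [sum(xs) / len(vecs) for xs in zip(*vecs)]

context = ["pets"]          # context sequence c
target = ["cats", "purr"]   # target sequence s

# (a) concatenate the sequences, then encode jointly
joint = encode(context + target)
# (b) encode each sequence separately, then concatenate representations
separate = encode(context) + encode(target)
# a downstream classifier head would operate on `joint` or on `separate`
```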
Recently, deep learning approaches have obtained very high performance across many different natural language processing tasks. These models can often be trained in an end-to-end fashion and do not require traditional, task-specific feature engineering. For many single-sequence classification tasks, the state-of-the-art approaches are based on recurrent neural networks (RNN variants like Long Short-Term Memory (LSTM)~\cite{lstm97} and Gated Recurrent Unit (GRU)~\cite{cho14}) and convolutional neural network (CNN) based models~\cite{Kim14}. For bi-sequence classification, by contrast, the context sentence $c$ has to be explicitly taken into account when performing the classification for the target sentence $s$. The context can be incorporated into the RNN and CNN based models in various ways. However, there is not much understanding in the current literature as to the best way to handle context in these deep learning based models. In this paper, we empirically evaluate (see Section~\ref{sec:experiments}) the performance of five different ways of handling context in conjunction with the target sentence (see Section~\ref{sec:models}) for multiple bi-sequence classification tasks (see Section~\ref{sec:tasks}) using architectures composed of RNNs and(/or) CNNs. In a nutshell, this paper makes two major novel contributions: \begin{enumerate} \item We establish the first deep learning based baselines for three bi-sequence classification tasks relevant to argumentation mining with zero feature engineering. \item We empirically compare the performance of several ways of handling context for bi-sequence classification problems in RNN and CNN based models. While some of these variants are used in various other tasks, there has been no
\section{Menger co-analytic groups} We shall assume all spaces are completely regular. \begin{defn} A topological space is \textbf{analytic} if it is a continuous image of $\mathbb{P}$, the space of irrationals. A space is \textbf{Lusin} if it is an injective continuous image of $\mathbb{P}$. (This is the terminology of \cite{RJ}. This term is currently used for a different concept.) A space is \textbf{$K$-analytic} if it is a continuous image of a Lindel\"of \v{C}ech-complete space. A space is \textbf{$K$-Lusin} if it is an injective continuous image of a Lindel\"of \v{C}ech-complete space. \end{defn} \begin{defn} A space is \textbf{co-analytic} if $\beta X\setminus X$ is analytic. In general, we call $\beta X\setminus X$ \textbf{the remainder} of $X$. $b X\setminus X$, for any compactification $b X$ of $X$, is called \textbf{a remainder} of $X$. \end{defn} \begin{defn} A space is \textbf{Menger} if whenever $\{\mathcal{U}_n : n < \omega\}$ is a sequence of open covers, there exist finite $\mathcal{V}_n$, $n < \omega$, such that $\mathcal{V}_n \subseteq \mathcal{U}_n$ and $\bigcup \{\mathcal{V}_n : n < \omega\}$ is a cover. \end{defn} Arhangel'ski\u\i ~\cite{Ar2} proved that Menger analytic spaces are $\sigma$-compact, generalizing Hurewicz's classic theorem that Menger completely metrizable spaces are $\sigma$-compact. Menger's conjecture was disproved in \cite{MF}, where Miller and Fremlin also showed it undecidable whether Menger co-analytic sets of reals are $\sigma$-compact. In \cite{TT} we proved that Menger \v{C}ech-complete spaces are $\sigma$-compact and obtained various sufficient conditions for Menger co-analytic topological spaces to be $\sigma$-compact. We continue that study here. In \cite{TT} we observed that $\mathbf{\Pi}_{1}^1$-determinacy -- which we also call \textbf{CD}: the \emph{Axiom of Co-analytic Determinacy} -- implies Menger co-analytic sets of reals are $\sigma$-compact. 
Indeed, \textbf{PD} (\emph{the Axiom of Projective Determinacy}) implies Menger projective sets of reals are $\sigma$-compact \cite{T2}, \cite{TT}. When one goes beyond co-analytic spaces in an attempt to generalize Arhangel'ski\u\i's theorem, one runs into ZFC counterexamples, but it is not clear whether there is a ZFC co-analytic counterexample. Assuming $V=L$, there is a counterexample which is a subset of $\mathbb{R}$ \cite{MF}, \cite{TT}. Here we prove: \begin{thm}\label{thm1} \textbf{CD} implies every Menger co-analytic topological group is $\sigma$-compact. \end{thm} \begin{rem} \textbf{CD} follows from the existence of a measurable cardinal \cite{M}. \end{rem} We first slightly generalize the \textbf{CD} result quoted above. \begin{lem}\label{lem1} \textbf{CD} implies every separable metrizable Menger co-analytic space is $\sigma$-compact. \end{lem} In order to prove this, we need some general facts about analytic spaces and perfect maps. \begin{lem}\label{lemA} Metrizable perfect pre-images of analytic spaces are analytic. \end{lem} \begin{proof} Rogers and Jayne \cite[5.8.9]{RJ} prove that perfect pre-images of metrizable analytic spaces are $K$-analytic, and that $K$-analytic metrizable spaces are analytic \cite[5.5.1]{RJ}. \end{proof} \begin{lem}[{ \cite[3.7.6]{E}}]\label{lemB} If $f:X\to Y$ is perfect, then for any $B\subseteq Y$, $f_B:f^{-1}(B)\to B$ is perfect. \end{lem} \begin{lem}[{ \cite[5.2.3]{RJ}}]\label{lemC} If $f$ is a continuous map of a compact Hausdorff $X$ onto a Hausdorff space $Y$ and the restriction of $f$ to a dense subspace $E$ of $X$ is perfect, then $f^{-1}\circ f(E)=E$. \end{lem} \begin{lem}\label{lemD} Metrizable perfect pre-images of co-analytic spaces are co-analytic. \end{lem} \begin{proof} Let $M$ be a metrizable perfect pre-image of a co-analytic $X$. Let $p$
\section{Introduction} There are many open questions regarding the strength and geometry of the magnetic field in radio galaxies and their relation to other properties of the radio source. The observed degree of polarization depends on intrinsic properties, such as the regularity and orientation of the source magnetic fields, as well as on the Faraday effects from the intervening regions of ionized gas along the line of sight. The largest current sample of polarized sources is the NRAO VLA Sky Survey (NVSS) at 1.4 GHz \citep{1998AJ....115.1693C}. It shows that the majority of extragalactic radio sources are only a few percent polarized. Polarization studies of small samples of extragalactic radio sources at other frequencies also show a similar weak average polarization, and suggest that the fractional polarization increases at frequencies higher than 1.4 GHz \citep[e.g.][]{2009A&A...502...61M}. It is not clear which mechanism is dominant in reducing the fractional polarization at lower frequencies and depolarizing the sources, although several models have been suggested \citep{1966MNRAS.133...67B,1991MNRAS.250..726T,1998MNRAS.299..189S,2008A&A...487..865R,2015MNRAS.450.3579S}. One key cause of depolarization is Faraday rotation, which can be characterized to first order by a change in the angle of the linear polarization: \begin{equation} \Delta \chi=\left(0.812 \int \frac{n_e{\bf B}}{(1+z)^2}\cdot \frac{d{\bf l}}{dz} \,dz\right) \lambda^2 \equiv \phi \lambda^2 \end{equation} where $\Delta \chi$ is the rotation of the polarization vector in radians, $\lambda$ is the observation wavelength in m, $z$ is the redshift of the Faraday screen, ${\bf B}$ is the magnetic field vector of the ionized medium in $\mu$G, $n_e$ is the number density of electrons in the medium in cm$^{-3}$, and $\,d{\bf l}$ is the distance element along the line of sight in pc. The term in parentheses is called the Faraday depth, $\phi$.
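For a single Faraday-thin screen, $\phi$ can be recovered from angle measurements at two wavelengths as the slope of $\chi$ against $\lambda^2$. A synthetic example, with an assumed Faraday depth of 40 rad m$^{-2}$ and the intrinsic polarization angle set to zero:

```python
# Synthetic two-wavelength rotation-measure estimate (assumed values).
phi_true = 40.0                 # Faraday depth in rad/m^2 (assumption)
lam1, lam2 = 0.21, 0.18         # observing wavelengths in metres

chi1 = phi_true * lam1 ** 2     # rotated angles; intrinsic angle taken as 0
chi2 = phi_true * lam2 ** 2

# RM = delta(chi) / delta(lambda^2) recovers the Faraday depth.
# (Real data also require resolving the n*pi ambiguity in chi.)
RM = (chi1 - chi2) / (lam1 ** 2 - lam2 ** 2)
```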
For a single line of sight through a thin ionized screen, this is equivalent to the rotation measure, $\textrm{RM}$, defined by $\textrm{RM} \equiv \frac{\Delta \chi}{\Delta \lambda^2}$ which can be measured observationally. Different lines of sight to the source all within the observing beam can have different values of $\phi$. Typically, this progressively depolarizes the source at longer wavelengths, but it can also lead to constructive interference and re-polarization, i.e., higher fractional polarizations at longer wavelengths. There are at least three separate possible Faraday screens with different $\textrm{RM}$ distributions along the line of sight: the Galactic component, intervening extragalactic ionized gas, and material local to the source. Multiple studies such as \cite{2005MNRAS.359.1456G,2008ApJ...676...70K,2010MNRAS.409L..99S,2012ApJ...761..144B,2012arXiv1209.1438H,2013ApJ...771..105B,2014ApJ...795...63F,2014MNRAS.444..700B,2014PASJ...66...65A,2015aska.confE.114V,2015arXiv150900747V} have tried to identify and distinguish these separate components and study the evolution of the magnetic field of galaxies through cosmic time. When many lines of sight each have independent single Faraday depths, this problem is approached statistically. Another long standing puzzle is the anti-correlation between the total intensity of radio sources and their degree of polarization, as observed by many groups such as \cite{2002A&A...396..463M}, \cite{2004MNRAS.349.1267T}, \cite{2006MNRAS.371..898S}, \cite{2007ApJ...666..201T}, \cite{2010ApJ...714.1689G}, \cite{2010MNRAS.402.2792S} and \cite{2014ApJ...787...99S}. The physical nature of this relation has been a mystery for almost a decade, and is confused by the dependency on other source properties. \cite{2010ApJ...714.1689G} found that most of their highly polarized sources are steep spectrum, show signs of resolved structure on
\section{Introduction}\label{sec:intro} The constraint satisfaction problem (CSP) provides a framework in which it is possible to express, in a natural way, many combinatorial problems encountered in computer science and AI~\cite{Cohen06:handbook,Creignou01:book,Feder98:monotone}. An instance of the CSP consists of a set of variables, a domain of values, and a set of constraints on combinations of values that can be taken by certain subsets of variables. The basic aim is then to find an assignment of values to the variables that satisfies the constraints (decision version) or that satisfies the maximum number of constraints (optimization version). Since CSP-related algorithmic tasks are usually hard in full generality, a major line of research in CSP studies how possible algorithmic solutions depend on the set of relations allowed to specify constraints, the so-called {\em constraint language}, (see, e.g.~\cite{Bulatov??:classifying,Cohen06:handbook,Creignou01:book,Feder98:monotone,Krokhin17:book}). The constraint language is denoted by $\Gamma$ and the corresponding CSP by $\CSP\Gamma$. For example, when one is interested in polynomial-time solvability (to optimality, for the optimization case), the ultimate sort of results are dichotomy results~\cite{Bulatov17:dichotomy,Bulatov??:classifying,Feder98:monotone,Kolmogorov17:complexity,Thapper16:finite,Zhuk17:dichotomy}, pioneered by~\cite{Schaefer78:complexity}, which characterise the tractable restrictions and show that the rest are NP-hard. Classifications with respect to other complexity classes or specific algorithms are also of interest (e.g.~\cite{Barto14:jacm,Barto12:NU,Kolmogorov15:power,Larose09:universal}). When approximating (optimization) CSPs, the goal is to improve, as much as possible, the quality of approximation that can be achieved in polynomial time, see e.g. surveys~\cite{Khot10:UGCsurvey,MM17:cspsurvey}. Throughout the paper we assume that P$\ne$NP. 
The study of {\em almost satisfiable} CSP instances features prominently in the approximability literature. On the hardness side, the notion of approximation resistance (which, intuitively, means that a problem cannot be approximated better than by just picking a random assignment, even on almost satisfiable instances) has been much studied recently, e.g.~\cite{Austrin13:usefulness,Chan13:resist,Hastad14:maxnot2,Khot14:strong}. Many exciting developments in approximability in the last decade were driven by the {\em Unique Games Conjecture} (UGC) of Khot, see survey~\cite{Khot10:UGCsurvey}. The UGC states that it is NP-hard to tell almost satisfiable instances of $\CSP\Gamma$ from those where only a small fraction of constraints can be satisfied, where $\Gamma$ is the constraint language consisting of all graphs of permutations over a large enough domain. This conjecture (if true) is known to imply optimal inapproximability results for many classical optimization problems~\cite{Khot10:UGCsurvey}. Moreover, if the UGC is true then a simple algorithm based on semidefinite programming (SDP) provides the best possible approximation for all optimization problems $\CSP\Gamma$~\cite{Prasad08:optimal}, though the exact quality of this approximation is unknown. On the positive side, Zwick~\cite{Zwick98:finding} initiated the systematic study of approximation algorithms which, given an almost satisfiable instance, find an almost satisfying assignment. Formally, call a polynomial-time algorithm for CSP {\em robust} if, for every $\eps>0$ and every $(1-\eps)$-satisfiable instance (i.e. at most an $\eps$-fraction of constraints can be removed to make the instance satisfiable), it outputs a $(1-g(\eps))$-satisfying assignment (i.e. one that fails to satisfy at most a $g(\eps)$-fraction of constraints). Here, the {\em loss} function $g$ must be such that $g(\eps)\rightarrow 0$ as $\eps\rightarrow 0$.
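To make the $(1-\eps)$-satisfiable notion concrete, here is a toy instance of our own construction (disequality constraints on Boolean variables) where even the optimum violates a fixed fraction of constraints:

```python
import itertools

# Toy CSP: Boolean variables 0, 1, 2 with "not-equal" constraints forming
# an odd cycle, so at most 2 of the 3 constraints can hold simultaneously.
constraints = [(0, 1), (1, 2), (0, 2)]

def satisfied(assignment):
    """Number of constraints satisfied by a 0/1 assignment."""
    return sum(assignment[i] != assignment[j] for i, j in constraints)

best = max(satisfied(a) for a in itertools.product([0, 1], repeat=3))
# best == 2: the instance is (1 - eps)-satisfiable with eps = 1/3, so a
# robust algorithm need only return a (1 - g(1/3))-satisfying assignment
```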
Note that one can without loss of generality assume that $g(0)=0$, that is, a robust algorithm must return a
\section{Introduction} \label{sec:Introduction} Topological insulators (TIs) form a class of materials with unique properties, associated with a non-trivial topology of their quasiparticle band structure (for a review, see Refs.~\cite{Zhang:rev, Hasan-Kane:rev, Hasan-Moore:rev, Ando:rev}). The key feature of two-dimensional (2D) and three-dimensional (3D) TIs is the existence of special gapless edge and surface states, respectively, while the bulk states of those materials are gapped. The hallmark property of the surface states is their topological protection. Mathematically, the nontrivial topological properties of time-reversal (TR) invariant TIs are generally described \cite{Moore:2007} by multiple copies of the $Z_2$ invariants found by Kane and Mele \cite{Kane-Mele}. This implies that the energy band gap should close at the boundary between topological and trivial insulator (e.g., vacuum) giving rise to the occurrence of the gapless interface states and the celebrated bulk-boundary correspondence. The discovery of the $Z_2$ topology in TIs is an important breakthrough because it showed that nontrivial topology can be embedded in the band structure and that the presence of an external magnetic field is not mandatory for the realization of topological phases. Another distinctive feature of the 3D TIs is a relativistic-like energy spectrum of the surface states, whose physical origin is related to a strong spin-orbit coupling \cite{Hsieh:2009}. Indeed, the surface states on each of the surfaces are described by 2D massless Dirac fermions in an irreducible 2$\times$2 representation, with a single Dirac point in the reciprocal space. For comparison, quasiparticles in graphene demonstrate similar properties, but have four inequivalent Dirac cones due to a spin and valley degeneracy \cite{Castro:2009} that makes certain aspects of their physics very different from those of the surface states in TIs. 
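For reference, a standard low-energy form of the surface Hamiltonian (conventions for the sign and basis vary in the literature) is
\[
H_{\rm surf}=\hbar v_F\left(\sigma_x k_y-\sigma_y k_x\right)=\hbar v_F\left(\boldsymbol{\sigma}\times\mathbf{k}\right)_z,
\qquad
E_{\pm}(\mathbf{k})=\pm\hbar v_F|\mathbf{k}|,
\]
whose eigenstates have the spin locked perpendicular to the momentum, realizing the single Dirac cone mentioned above.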
In our study below, we will concentrate only on the case of strong 3D TIs whose surface states are protected by the topology of the bulk bands in combination with the TR symmetry. This leads to the locking of momentum and spin degrees of freedom and, consequently, to the formation of a helical Dirac (semi)metal state \cite{Hsieh:2009}. Such a state is characterized by electron antilocalization and the absence of backscattering. The phenomenon of antilocalization has deep mathematical roots and is usually explained by an additional Berry's phase $\pi$ that is acquired when an electron circles a Dirac point. From the physical viewpoint, when scattering on an impurity, an electron must change its spin in order to preserve its chirality. Such a process is possible only in the case of magnetic impurities, which explicitly break the TR symmetry. Experimentally, a linear relativistic-like dispersion law of the surface states is observed in Bi$_{1-x}$Sb$_x$, Bi$_2$Se$_3$, Bi$_2$Te$_3$, Sb$_2$Te$_3$, Bi$_2$Te$_2$Se, and other materials by using angle-resolved photoemission spectroscopy (ARPES) \cite{Hsieh:2008, Zhang:2009, Hsieh:2009, Chen:2009, Cava-Hasan}. Furthermore, scanning tunneling microscopy and scanning tunneling spectroscopy provide additional information about the topological nature of the surface states, such as the quasiparticle interference patterns around impurities and defects. The Fourier analysis of these patterns has shown that the backscattering between $\mathbf{k}$ and $-\mathbf{k}$ is highly suppressed in Bi$_{1-x}$Sb$_x$ \cite{Roushan:2009} and Bi$_2$Te$_3$ \cite{Zhang-Cheng:2009} in accord with
\section{Gibbs' Canonical Ensemble} From Gibbs' 1902 text {\it Elementary Principles in Statistical Mechanics}, page 183: \begin{quotation} ``If a system of a great number of degrees of freedom is microcanonically distributed in phase, any very small part of it may be regarded as canonically distributed.'' \end{quotation} Thus J. Willard Gibbs pointed out that the energy states of a ``small'' system weakly coupled to a larger ``heat reservoir'' with a temperature $T$ have a ``canonical'' distribution: $$ f(q,p) \propto e^{-{\cal H}(q,p)/kT} \ , $$ with the Hamiltonian ${\cal H}(q,p)$ that of the small system. Here $(q,p)$ represents the set of coordinates and momenta of that system. ``{\it Canonical}'' means simplest or prototypical. The heat reservoir coupled to the small system and responsible for the canonical distribution of energies is best pictured as an ideal-gas thermometer characterized by an unchanging kinetic temperature $T$. The reservoir gas consists of many small-mass classical particles engaged in a chaotic and ergodic state of thermal and mechanical equilibrium with negligible fluctuations in its temperature and pressure. Equilibrium within this thermometric reservoir is maintained by collisions, as described by Boltzmann's equation. His ``H Theorem'' establishes the Maxwell-Boltzmann velocity distribution found in the gas. See Steve Brush's 1964 translation of Boltzmann's 1896 text {\it Vorlesungen \"uber Gastheorie}. Prior to fast computers, texts in statistical mechanics were relatively formal, with very few figures and only a handful of numerical results. In its more than 700 pages Tolman's 1938 tome {\it The Principles of Statistical Mechanics} includes only two figures. [ The more memorable one, a disk colliding with a triangle, appears on the cover of the Dover reprint volume. ] Today the results-oriented graphics situation is entirely different, as a glance inside any recent issue of {\it Science} confirms.
\section{Nos\'e-Hoover Canonical Dynamics -- Lack of Ergodicity} In 1984, with fast computers and packaged computer-graphics software readily available, Shuichi Nos\'e set himself the task of generalizing molecular dynamics to mimic Gibbs' canonical distribution\cite{b1,b2}. In the end his approach was revolutionary. It led to a new form of heat reservoir described by a single degree of freedom with a logarithmic potential, rather than the infinitely-many oscillators or gas particles discussed in textbooks. Although the theory underlying Nos\'e's approach was cumbersome, Hoover soon pointed out a useful simplification\cite{b3,b4}: Liouville's flow equation in the phase space provides a direct proof that the ``Nos\'e-Hoover'' motion equations are consistent with Gibbs' canonical distribution. Here are the motion equations for the simplest interesting system, a single one-dimensional harmonic oscillator: $$ \dot q = (p/m) \ ; \ \dot p = -\kappa q - \zeta p \ ; \ \dot \zeta = [ \ (p^2/mkT) - 1 \ ]/\tau^2 \ . $$ The ``friction coefficient'' $\zeta$ stabilizes the kinetic energy $(p^2/2m)$ through integral feedback, extracting or inserting energy as needed to ensure a time-averaged value of precisely $(kT/2)$. The parameter $\tau$ is a relaxation time governing the rate of the thermostat's response to thermal fluctuations.
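As a numerical check of this integral-feedback mechanism, the sketch below (our illustration; the parameter values $m=\kappa=kT=\tau=1$ and the initial condition are arbitrary choices) integrates the three Nos\'e-Hoover equations with a standard fourth-order Runge-Kutta scheme. Because $\dot\zeta$ is proportional to $(p^2/mkT)-1$ and $\zeta$ stays bounded, the time-averaged $p^2$ approaches $mkT$ even when the trajectory is not ergodic:

```python
import numpy as np

def nose_hoover_rhs(s, m=1.0, kappa=1.0, kT=1.0, tau=1.0):
    """Right-hand side of the Nose-Hoover oscillator equations above."""
    q, p, zeta = s
    return np.array([p / m,
                     -kappa * q - zeta * p,
                     (p * p / (m * kT) - 1.0) / tau**2])

def rk4_trajectory(s0, dt=0.01, steps=100_000):
    """Classic fourth-order Runge-Kutta integration; returns sampled momenta."""
    s = np.array(s0, dtype=float)
    ps = np.empty(steps)
    for i in range(steps):
        k1 = nose_hoover_rhs(s)
        k2 = nose_hoover_rhs(s + 0.5 * dt * k1)
        k3 = nose_hoover_rhs(s + 0.5 * dt * k2)
        k4 = nose_hoover_rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        ps[i] = s[1]
    return ps

ps = rk4_trajectory([1.0, 1.0, 0.0])
mean_p2 = np.mean(ps ** 2)   # integral feedback drives this toward m*kT = 1
```

Convergence of this time average does not imply that the full canonical distribution is sampled; a single invariant torus produces the same average, which is precisely the lack-of-ergodicity problem discussed in this section.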
\section{Introduction} In recent decades acoustic techniques in solid state physics have demonstrated serious progress, especially by moving into the previously unattainable high-frequency band, up to terahertz frequencies \cite{ps-ultrasonics}. Considerable efforts in this direction, often called picosecond ultrasonics, are stimulated by the short-wavelength character of acoustic waves in this band and, in some cases, by the efficient coupling of acoustic strain to electronic, optical, and magnetic excitations in solids. This allows the application of high-frequency acoustic signals for testing and control of nanodimensional solid-state structures. From a practical point of view, the most serious restriction of picosecond ultrasonics is the use of ultrafast (femtosecond) lasers for both excitation of acoustic signals and detection of their coupling to a solid-state nanostructure, usually with the use of the pump-probe technique. In spite of considerable improvement in the characteristics of such lasers and their growing availability, the development of a robust electrically controlled picosecond acoustic technique would be an essential breakthrough in the field. Concerning high-frequency acoustic wave excitation, terahertz sasers could be a solution to the problem \cite{saser1,saser2,saser3}. For detection purposes, several options are available. Superconductor-based detectors have been in use since the 1970s. Robust bolometers, once widely employed for acoustic spectroscopy, are currently less popular since, in contrast to optical methods, they are hardly sensitive to the spectrum of an acoustic signal. Superconductor contacts do possess spectral selectivity \cite{super-contacts}, but their fabrication is quite sophisticated. Semiconductor-based approaches are preferable.
Photo-electric acoustic wave detection by {\it p-i-n} diodes with a quantum well embedded into the {\it i} region has demonstrated high efficiency but, although based on electric current measurement, requires a femtosecond laser for temporal signal sampling \cite{pin}. An alternative, also semiconductor-based, method using Schottky diodes has been demonstrated recently \cite{schottky}. It is purely electrical and is based on the induction of a displacement current by a propagating acoustic wave. Considering such factors as the all-electrical detection principle, the use of a robust, well-studied device technology that can be integrated with various solid-state structures, and possible room-temperature applications, this method looks like an attractive candidate for wide use as a high-frequency acoustic detector. In this paper the main physical principles of Schottky diode acoustic detection are considered theoretically in detail. The developed model allows one to address such issues as the feasible magnitude of the electrical signal, fundamental restrictions on the detectable acoustic signal frequency, and possible ways of optimizing the diode structure. The paper is organized as follows. In section I the expression for the accumulated electrical charge due to the acoustic strain perturbation is obtained for the important cases of piezoelectric and deformation potential coupling. It is then used in section II for the analysis of the electrical response of the Schottky diode. Then, the conclusions follow. \section{Expression for the acoustic wave induced charge in a diode} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig1} \caption{The schematics of the energy diagram of an $n$-type Schottky diode with the used coordinate frame. $z=z_i$ corresponds to the metal-semiconductor interface. The insert shows the model electrical circuit which is used for the electrical detection of acoustic signals. } \label{fig:1} \end{figure} The energy diagram of the
\section{Introduction} \label{sec:intro} Atmospheric metals are not expected to be present in isolated white dwarfs (WDs) with effective temperatures below $\sim25,000$ K. At these temperatures, radiative forces become too weak \citep{chayer95} to significantly counteract the quick gravitational settling that sinks material heavier than helium on extremely short timescales compared to the typical cooling ages of WDs \citep{FM79,K09}. However, it has been found that $\sim25\%-50\%$ of all field WDs exhibit spectral lines that are indicative of the presence of metals in their atmospheres \citep{Z03,Z10,K14}. The high-metallicity material found in the atmospheres of most of these ``polluted'' WDs is consistent with the composition of rock-forming material \citep{Z07,G12,farihi13,JY14}. This observation suggests that pollution comes from minor rocky bodies (e.g., asteroids). One possibility is that these rocky bodies get very close to the WD so they can be tidally disrupted and then accreted. Further support for this picture comes from observations of circumstellar disks --revealed by infrared excess in the stellar spectrum-- around many polluted WDs (see \citealt{farihi16} for a recent review). These disks orbit within ${\sim}1R_\odot$, roughly the distance at which the material would reside after the tidal disruption (the Roche radius). All the WDs with detected disks have atmospheric pollution. More recently, this picture has been reinforced by the observation of minor bodies transiting the polluted WD 1145+017 \citep{vanderburg15,alonso16,G16,rap16,xu16}. Although the leading explanation for WD pollution --the accretion of tidally disrupted asteroids-- seems robust and well supported by observations, the underlying dynamical mechanism responsible for placing these rocky bodies in star-grazing orbits remains much less constrained and understood.
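The ${\sim}1R_\odot$ scale quoted above can be reproduced with a back-of-the-envelope estimate. The sketch below is our illustration only: the rigid-body Roche expression and the adopted WD mass of $0.6M_\odot$ and rock density of $3\,\mathrm{g\,cm^{-3}}$ are assumptions, not values taken from this paper. It evaluates the tidal disruption distance $d = [2M_{\rm WD}/(\tfrac{4}{3}\pi\rho)]^{1/3}$:

```python
import math

M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

def roche_radius(m_wd_solar=0.6, rho_rock=3000.0):
    """Rigid-body tidal disruption distance (metres) for a rocky body of
    density rho_rock around a WD of the given mass (assumed values)."""
    m_wd = m_wd_solar * M_SUN
    return (2.0 * m_wd / (4.0 * math.pi / 3.0 * rho_rock)) ** (1.0 / 3.0)

r = roche_radius()
r_in_rsun = r / R_SUN   # ~0.8 solar radii, consistent with the ~1 R_sun disks
```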
A better understanding of this mechanism can lead to new insights into the initial conditions leading to WD pollution, as well as into the long-term dynamics and evolution of planetary systems around WDs and/or their progenitors (typically A and F stars; see \citealt{veras16} for a recent review on this subject). A theoretical model to explain WD pollution from planetary dynamical instabilities was put forward by \citet{DS02}. According to their model, a planetary system that is marginally stable throughout the main sequence can become unstable due to stellar mass loss during post-MS evolution. This global instability can then promote some asteroids into star-grazing orbits. This idea has been explored in more detail using realistic numerical $N$-body integrations of multi-planet systems (no asteroids) and stellar evolution \citep{veras13,MVV14,VG15}. Similarly, the mass loss of the host star can widen the region around mean-motion resonances where chaotic diffusion of asteroids acts efficiently, leading to their subsequent tidal disruption \citep{B11,DWS12,FH14}. Likewise, mass loss in close binary systems can drive the outermost planetesimals into chaotic orbits, with one of the possible outcomes being collisions with either one of the stars \citep{kratter12}. Thus far, these proposed dynamical mechanisms rely on generally short-timescale instabilities (either scattering or mean-motion-resonance overlap) triggered (or enhanced) by mass loss or simply by the aging of the planetary systems, and still face some difficulties. In particular, these mechanisms are subject to the following constraints: \begin{enumerate} \item {\it the delivery of material must happen
\section{Introduction} In his book ``Proximal Flows''~\cite[Section~\RNum{2}.3, p.\ 19]{glasner1976proximal} Glasner defines the notion of a {\em strongly amenable group}: A group is strongly amenable if each of its proximal actions on a compact space has a fixed point. A continuous action $G \curvearrowright X$ of a topological group on a compact Hausdorff space is proximal if for every $x, y \in X$ there exists a net $\{g_n\}$ of elements of $G$ such that $\lim_n g_n x = \lim_n g_n y$. Glasner shows that virtually nilpotent groups are strongly amenable and that non-amenable groups are not strongly amenable. He also gives examples of amenable --- in fact, solvable --- groups that are not strongly amenable. Glasner and Weiss~\cite{glasner2002minimal} construct proximal minimal actions of the group of permutations of the integers, and Glasner constructs proximal flows of Lie groups~\cite{glasner1983proximal}. To the best of our knowledge there are no other such examples known. Furthermore, there are no other known examples of minimal proximal actions that are not also {\em strongly proximal}. An action $G \curvearrowright X$ is strongly proximal if the orbit closure of every Borel probability measure on $X$ contains a point mass measure. This notion, as well as that of the related Furstenberg boundary~\cites{furstenberg1963poisson, furstenberg1973boundary, furman2003minimal}, has been the object of a much larger research effort, in particular because a group is amenable if and only if all of its strongly proximal actions on compact spaces have fixed points. Richard Thompson's group $F$ has been alternatively ``proved'' to be amenable and non-amenable (see, e.g.,~\cite{cannon2011thompson}), and the question of its amenability is currently unresolved. In this paper we pursue the less ambitious goal of showing that it is not strongly amenable, and do so by directly constructing a proximal action that has no fixed points.
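Thompson's group $F$ acts on the real line by piecewise linear maps; a quick computational sanity check (our illustrative sketch, using the standard generators $a(x)=x-1$ and the piecewise map $b$ recalled in the next section) verifies that words of bounded length in the generators move $0$ only to dyadic rationals:

```python
from fractions import Fraction

def a(x):          # generator a: translation by -1
    return x - 1

def a_inv(x):
    return x + 1

def b(x):          # generator b: piecewise linear map
    if x <= 0:
        return x
    if x <= 2:
        return x / 2
    return x - 1

def b_inv(x):      # inverse of b
    if x <= 0:
        return x
    if x <= 1:
        return 2 * x
    return x + 1

def orbit_of_zero(depth=6):
    """BFS over words of bounded length in {a, a^-1, b, b^-1} applied to 0."""
    seen = {Fraction(0)}
    frontier = {Fraction(0)}
    for _ in range(depth):
        frontier = {g(x) for x in frontier for g in (a, a_inv, b, b_inv)} - seen
        seen |= frontier
    return seen

pts = orbit_of_zero()
# every point reached is dyadic: its denominator is a power of two
all_dyadic = all(p.denominator & (p.denominator - 1) == 0 for p in pts)
```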
This action does admit an invariant measure, and thus does not provide any information about the amenability of $F$. It is a new example of a proximal action which is not strongly proximal. \vspace{0.3in} The authors would like to thank Eli Glasner and Benjamin Weiss for enlightening and encouraging conversations. \section{Proofs} Let $F$ denote Thompson's group $F$. In the representation of $F$ as a group of piecewise linear transformations of $\mathbb{R}$ (see, e.g.,~\cite[Section 2.C]{kaimanovich2016thompson}), it is generated by $a$ and $b$, which are given by \begin{align*} a(x) &= x-1\\ b(x) &= \begin{cases} x& x \leq 0\\ x/2& 0 \leq x \leq 2\\ x-1& 2 \leq x. \end{cases} \end{align*} The set of dyadic rationals $\Gamma =\mathbb{Z}[\frac{1}{2}]$ is the orbit of $0$. The Schreier graph of the action $F \curvearrowright \Gamma$ with respect to the generating set $\{a,b\}$ is shown in Figure~\ref{fig:schreier} (see~\cite[Section 5.A, Figure 6]{kaimanovich2016thompson}). The solid lines denote the $a$ action and the dotted lines denote the $b$ action; self-loops (i.e., points stabilized by a generator) are omitted. This graph consists of a tree-like structure (the blue and white nodes) with infinite chains attached to each node (the red nodes). \begin{figure}[ht] \centering \includegraphics[scale=0.6]{schreier.pdf} \caption{\label{fig:schreier}The action of $F$ on $\Gamma$.} \end{figure} Equipped with the product topology, $\{-1,1\}^\Gamma$ is a
\section{Introduction} \label{sec:1} The influence of social groups in pedestrian dynamics, especially in evacuation scenarios, is an area of recent interest, see e.g. \cite{mueller2,mueller} and other contributions in these proceedings. The situations that are considered are widespread and well-known in everyday life. For example, many people visit concerts or soccer matches not alone, but together with family and friends in so-called social groups. In case of emergency, these groups will try to stay together during an evacuation. The strength of this cohesion depends on the composition of the social group. Several adult friends would form a loose group that is mainly connected via eye contact, whereas a mother would take her child's hand and form a strong or even fixed bond. In addition, even the size of the social groups could have an effect on the evacuation behaviour. In order to consider these phenomena in a more detailed way, a collaboration of researchers from the universities of Cologne and Wuppertal and the Forschungszentrum J\"ulich has performed several experiments aimed at determining the general influence of inhomogeneities on pedestrian dynamics. These comprised two series of experiments with pupils of different ages in two schools in Wuppertal. The first series focussed on the determination of the fundamental diagram of inhomogeneous groups, i.e. pedestrians of different size. The second series of experiments considered evacuation scenarios. In several runs the parameters of the crowd of evacuating pupils were varied, i.e. the size and structure of the social groups and the interaction between the group members. Here we present first results for these evacuation experiments. \section{Teaching units} \label{sec:2} The experiments were accompanied by teaching units for all involved students, providing an introduction to the topic of traffic and pedestrian dynamics.
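For readers unfamiliar with the Nagel-Schreckenberg model used in the teaching units, here is a minimal sketch (our illustration; the update rules are the standard acceleration, braking, random-slowdown, and movement steps of \cite{nasch}, while all parameter values and names are our own choices):

```python
import random

def nasch_step(pos, vel, length, vmax=5, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg traffic cellular
    automaton on a circular road of `length` cells."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_vel = list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % length   # empty cells ahead
        v = min(vel[i] + 1, vmax)                  # step 1: accelerate
        v = min(v, gap)                            # step 2: brake
        if v > 0 and random.random() < p_slow:
            v -= 1                                 # step 3: random slowdown
        new_vel[i] = v
    new_pos = [(pos[i] + new_vel[i]) % length for i in range(n)]  # step 4: move
    return new_pos, new_vel

random.seed(0)
pos, vel = [0, 3, 7, 12], [0, 0, 0, 0]
for _ in range(100):
    pos, vel = nasch_step(pos, vel, length=20)
# braking to the gap guarantees cars never collide: positions stay distinct
```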
In classes of the fifth and sixth grade, the focus was on the important quantities of pedestrian dynamics, especially density, time, and bottleneck situations. This introduction to crowd effects and pedestrian behaviour was intended to raise awareness of their relevance for the pupils' everyday lives and for safety issues. Therefore we arranged little experiments the students could perform themselves, e.g. the panic experiment according to Mintz \cite{mintz} (see Fig.~\ref{fig:1}). In small groups the pupils had to pull several wooden wedges out of a bottle with a narrow neck as fast as possible and observe the blocking of the wedges when every student pulls at the same time. This experiment was intended to show that coordination can lead to better results than selfish behaviour. The older pupils of classes 10 and 11 participated in an introduction to cellular automata and the physics of traffic. They received several worksheets on the Game of Life and other cellular automata, especially the Nagel-Schreckenberg model \cite{nasch}. The aim of these lessons was to obtain a first qualitative and quantitative understanding of the collective effects in traffic systems. This should help to increase the identification with the experiments they later participated in and raise awareness about the relevance of this kind of research for everyday life. \begin{figure}[t] \sidecaption
\section{Introduction} \label{sec:intro} Policy search algorithms based on supervised learning from a computational or human ``teacher'' have gained prominence in recent years due to their ability to optimize complex policies for autonomous flight \cite{rmswd-lmruc-13}, video game playing \cite{rgb-rilsp-11,gsllw-amcts-14}, and bipedal locomotion \cite{mlapt-icdcc-15}. Among these methods, guided policy search algorithms \cite{levine2015end} are particularly appealing due to their ability to adapt the teacher to produce data that is best suited for training the final policy with supervised learning. Such algorithms have been used to train complex deep neural network policies for vision-based robotic manipulation \cite{levine2015end}, as well as a variety of other tasks \cite{zkla-ldcpa-15,mlapt-icdcc-15}. However, convergence results for these methods typically follow by construction from their formulation as a constrained optimization, where the teacher is gradually constrained to match the learned policy, and guarantees on the performance of the final policy only hold at convergence if the constraint is enforced exactly. This is problematic in practical applications, where such algorithms are typically executed for a small number of iterations. In this paper, we show that guided policy search algorithms can be interpreted as approximate variants of mirror descent under constraints imposed by the policy parameterization, with supervised learning corresponding to a projection onto the constraint manifold. Based on this interpretation, we can derive a new, simplified variant of guided policy search, which corresponds exactly to mirror descent under linear dynamics and convex policy spaces. When these convexity and linearity assumptions do not hold, we can show that the projection step is approximate, up to a bound that depends on the step size of the algorithm, which suggests that for a small enough step size, we can achieve continuous improvement. 
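The mirror descent view can be illustrated in a toy setting. The sketch below is our illustration only, not the guided policy search algorithm itself: it shows the generic entropic-mirror-map update on the probability simplex (all names and parameter values are our assumptions), the simplest concrete instance of the mirror descent scheme referred to above:

```python
import numpy as np

def entropic_mirror_descent(loss_grad, x0, step=0.1, iters=200):
    """Mirror descent on the probability simplex with the KL (entropic)
    mirror map: x <- x * exp(-step * grad), then renormalise."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = loss_grad(x)
        x = x * np.exp(-step * g)   # multiplicative-weights update
        x /= x.sum()                # projection back onto the simplex
    return x

# minimise a linear cost <c, x> over the simplex: mass concentrates on argmin c
c = np.array([0.8, 0.2, 0.5])
x = entropic_mirror_descent(lambda x: c, np.ones(3) / 3)
```

In the guided policy search analogy, the gradient step plays the role of the trajectory-centric optimization and the projection plays the role of supervised learning onto the policy class.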
The form of this bound provides us with intuition about how to adjust the step size in practice, so as to obtain a simple algorithm with a small number of hyperparameters. The main contribution of this paper is a simple new guided policy search algorithm that can train complex, high-dimensional policies by alternating between trajectory-centric reinforcement learning and supervised learning, as well as a connection between guided policy search methods and mirror descent. We also extend previous work on bounding policy cost in terms of KL divergence \cite{rgb-rilsp-11,slmja-trpo-15} to derive a bound on the cost of the policy at each iteration, which provides guidance on how to adjust the step size of the method. We provide empirical results on several simulated robotic navigation and manipulation tasks that show that our method is stable and achieves similar or better performance when compared to prior guided policy search methods, with a simpler formulation and fewer hyperparameters. \section{Guided Policy Search Algorithms} \label{sec:gps} We first review guided policy search methods and background. Policy search algorithms aim to optimize a parameterized policy $\pi_\theta(\action_t|\state_t)$ over actions $\action_t$ conditioned on the state $\state_t$. Given stochastic dynamics $p(\mathbf{x}_{t+1}|\state_t,\action_t)$ and cost $\ell(\state_t,\action_t)$, the goal is to minimize the expected cost under the policy's trajectory distribution, given by \mbox{$J(\theta) = \sum_{t=1}^T E_{\pi_\theta(\state_t,\action_t)} [\ell(\state_t,\action_t)]$}, where we overload notation to use $\pi_\theta(\state_t,\action_t)$ to denote the marginals
\section{Introduction} Spatio--temporal models are widely used by practitioners. Explaining economic, environmental, social, or biological phenomena, such as peer influence, neighbourhood effects, contagion, epidemics, interdependent preferences, and climate change, is only one of the many interesting applications of such models. A widely used spatio--temporal model is the spatial dynamic panel data model (SDPD) proposed and analysed by \cite{LeeYu10a}. See \cite{LeeYu10b} for a survey. To improve the adaptivity of SDPD models, \cite{DouAlt15} recently proposed a generalized model that assigns different coefficients to different locations and assumes heteroskedastic and spatially correlated errors. The model is \begin{equation}\label{eqn1} {\mathbf y}_t = D(\boldsymbol{\lambda}_0){\mathbf W}{\mathbf y}_t + D({\boldsymbol{\lambda}_1}){\mathbf y}_{t-1} + D(\boldsymbol{\lambda}_2){\mathbf W}{\mathbf y}_{t-1} + \mbox{\boldmath$\varepsilon$}_t, \end{equation} where the vector ${\mathbf y}_t$ is of order $p$ and contains the observations at time $t$ from $p$ different locations; the errors $\mbox{\boldmath$\varepsilon$}_t$ are serially uncorrelated; the \emph{spatial matrix} ${\mathbf W}$ is a weight matrix with zero main diagonal and is assumed to be known; $D(\boldsymbol{\lambda}_j)$ with $j=0,1,2$ are diagonal matrices, and $\boldsymbol{\lambda}_j$ are the vectors with the spatial coefficients $\lambda_{ji}$ for $i=1,\ldots,p$. The \emph{generalized SDPD} model in (\ref{eqn1}) guarantees adaptivity by means of its $3p$ parameters. It is characterized by the sum of three terms: the \emph{spatial component}, driven by matrix ${\mathbf W}$ and the spatial parameter $\boldsymbol{\lambda}_0$; the \emph{dynamic component}, driven by the autoregressive parameter $\boldsymbol{\lambda}_1$; and the \emph{spatial--dynamic component}, driven by matrix ${\mathbf W}$ and the spatial--autoregressive parameter $\boldsymbol{\lambda}_2$.
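A small simulation sketch may clarify the structure of model (\ref{eqn1}). The block below is our illustration: the weight matrix, coefficient values, and noise level are arbitrary choices satisfying stationarity, not values from the paper. It solves the contemporaneous spatial term once and iterates the reduced form $\mathbf{y}_t = (I - D(\boldsymbol{\lambda}_0)\mathbf{W})^{-1}[(D(\boldsymbol{\lambda}_1)+D(\boldsymbol{\lambda}_2)\mathbf{W})\mathbf{y}_{t-1} + \boldsymbol{\varepsilon}_t]$:

```python
import numpy as np

def simulate_sdpd(W, lam0, lam1, lam2, T=200, sigma=0.1, seed=1):
    """Simulate y_t = D(l0) W y_t + D(l1) y_{t-1} + D(l2) W y_{t-1} + eps_t
    by inverting the contemporaneous spatial term once."""
    rng = np.random.default_rng(seed)
    p = W.shape[0]
    A = np.linalg.inv(np.eye(p) - np.diag(lam0) @ W)   # (I - D(l0) W)^{-1}
    B = np.diag(lam1) + np.diag(lam2) @ W              # reduced-form lag matrix
    y = np.zeros(p)
    out = np.empty((T, p))
    for t in range(T):
        y = A @ (B @ y + sigma * rng.standard_normal(p))
        out[t] = y
    return out

# 4 locations on a ring; row-normalised weights, zero main diagonal
W = np.array([[0, .5, 0, .5], [.5, 0, .5, 0],
              [0, .5, 0, .5], [.5, 0, .5, 0]])
ys = simulate_sdpd(W, [.2] * 4, [.3] * 4, [.1] * 4)
```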
If the vectors $\boldsymbol{\lambda}_j$ are scalars for all $j$, then model (\ref{eqn1}) reduces to the classic SDPD of \cite{LeeYu10a}. The errors $\mbox{\boldmath$\varepsilon$}_t$ in model (\ref{eqn1}) are serially uncorrelated and may show heteroskedasticity and cross-correlation over space, so that $\mathop{var}(\mbox{\boldmath$\varepsilon$}_t)$ is a full matrix. This is a novelty compared with the \emph{SDPD} model of \cite{LeeYu10a}, where the errors must be cross-uncorrelated and homoskedastic in order to obtain consistency of the estimators. A setup similar to ours for the errors has also been considered by \cite{KelPru10} and \cite{Su12}, but not for panel models. However, their estimators are generally based on the instrumental variables technique, in order to overcome the endogeneity of the \emph{zero-lag} regressor. For the \emph{generalized SDPD} model, instead, \cite{DouAlt15} propose a new estimation procedure based on a generalized Yule--Walker approach. They show the consistency of the estimators under regularity assumptions. They also derive the convergence rate and the conditions under which the estimation procedure does not suffer in high-dimensional setups, notwithstanding the large number of parameters to be estimated (which grows to infinity with the dimension $p$). In real data applications, it is important to check the validity of the assumptions required for the consistency of the estimation procedure. See, for example, the assumptions and asymptotic analysis in \cite{LeeYu10a} and \cite{DouAlt15} as well as the references therein. Checking such assumptions on real data is often not easy; at times, they are clearly violated. Moreover, the spatial matrix ${\mathbf W}$ is assumed to be known, but in many cases, this is not true, and it must be estimated. For example, the spatial weights can be associated with ``similarities'' between spatial units
\section*{Abstract}{\small Here we consider the full evolution of a spherical supernova remnant. We start by calculating the early ejecta-dominated stage, continue through the different phases of interaction with the circumstellar medium, and end with the dissipation and merger phase. The physical connection between the phases reveals new results. One is that the blast wave radius during the adiabatic phase is significantly smaller than it would be if one did not account for the blast wave's interaction with the ejecta. \vspace{10mm} \normalsize} \end{minipage} \section{Introduction} $\,\!$\indent A supernova remnant (SNR), the aftermath of a supernova explosion, is an important phenomenon of study in astrophysics. The typical $10^{51}$ erg of energy released in the explosion is transferred primarily into the interstellar medium during the course of evolution of a SNR. SNR are also valuable as tools to study the evolution of stars, the evolution of the Galaxy, and the evolution of the interstellar medium. A SNR emits in X-rays from its hot shocked gas, in infrared from heated dust, and in radio continuum. The latter is via synchrotron emission from relativistic electrons accelerated at the SNR shock. The evolution of a single SNR can be studied and calculated using a hydrodynamics code. However, to study the physical conditions of large numbers of SNR, it is desirable to have analytic methods to obtain the input parameters needed to run a detailed hydrodynamic simulation. This short paper describes the basic ideas behind the analytic methods, the creation of software to carry out the calculations, and some new results of the calculations. \section{Theory and calculation methods} $\,\!$\indent The general time sequence of events that occur after a supernova explosion, which comprise the supernova remnant, can be divided into a number of phases of evolution (Chevalier, 1977). These are summarized as follows.
The ejecta-dominated (ED) phase is the earliest phase, when the ejecta from the explosion are not yet strongly decelerated by interaction. Self-similar solutions were found for the ED phase for the case of a supernova with ejecta with a power-law density profile occurring in a circumstellar medium with a power-law density profile (Chevalier, 1982). Solutions were given for ejecta power-law indices of 7 and 12, and circumstellar medium power-law indices of 0 and 2. The latter correspond to a uniform circumstellar medium and to one produced by a stellar wind with a constant mass-loss rate. The non-self-similar evolution from the ED phase to the Sedov-Taylor (ST) self-similar phase was treated by Truelove and McKee (1999). They found the so-called unified solution for the evolution of the forward and reverse shock waves during this phase. The Sedov-Taylor (ST) self-similar phase is that for which the shocked ISM mass dominates over the shocked ejecta mass and for which radiative energy losses from the hot interior supernova remnant gas remain negligible. These solutions are reviewed in numerous works, and are based on the original work on blast waves initiated by instantaneous point energy injection in a uniform medium (Taylor, 1946; Sedov, 1946). The next stage occurs when radiative losses
\section{Introduction} The purpose of this paper is to provide a convenient operadic framework for the cumulants of free probability theory. In~\cite{DrummondColeParkTerilla:HPTI,DrummondColeParkTerilla:HPTII}, the author and his collaborators described an operadic framework for so-called Boolean and classical cumulants. In those papers, the fundamental object of study is an algebra $A$ equipped with a linear map $E$, called \emph{expectation}, to some fixed algebra $B$. The expectation is not assumed to be an algebra homomorphism; rather, one measures the degree to which $E$ fails to be an algebra homomorphism with a sequence of multilinear maps $\kappa_n$ from powers of $A$ to $B$, called cumulants. The cumulants, in many cases, can be defined recursively in terms of the expectation map via a formula of the form: \begin{equation}\label{outline of cumulants} E(x_1\cdots x_n)= \sum \kappa_{i_1}(\cdots)\cdots \kappa_{i_k}(\cdots). \end{equation} Depending on what kind of probability theory is under consideration, the summation on the right may be over a different index set. See, e.g.,~\cite{Speicher:OUP,Muraki:FIQUP,HasebeSaigo:JCNI}. In~\cite{DrummondColeParkTerilla:HPTI,DrummondColeParkTerilla:HPTII}, these recursive definitions for the collection of cumulants (in the Boolean and classical regimes, respectively) were reinterpreted as the collection of linear maps determining a coalgebraic map into a cofree object. In the Boolean case, the cofree object is the tensor coalgebra. In the classical case it is the symmetric coalgebra. This reformulation is intended as the background for a homotopical enrichment of probability theory; adding a grading, a filtration, and a differential to this coalgebraic picture leads to a rich theory with applications to quantum field theory~\cite{Park:HTPSICIHLA}. This application is motivational and will play no role in this paper.
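In the classical scalar-valued regime, a recursion of the shape of Equation~(\ref{outline of cumulants}) specializes to the familiar moment-cumulant relation $m_n=\sum_{k=1}^n\binom{n-1}{k-1}\kappa_k m_{n-k}$. The block below is our illustration of the scalar classical case only (the free case instead sums over non-crossing partitions); it computes moments from cumulants and recovers the Gaussian moments:

```python
from math import comb

def moments_from_cumulants(kappa):
    """Classical scalar moment-cumulant recursion:
    m_n = sum_{k=1}^{n} C(n-1, k-1) * kappa_k * m_{n-k}, with m_0 = 1."""
    n = len(kappa)
    m = [1.0]  # m_0 = 1
    for j in range(1, n + 1):
        m.append(sum(comb(j - 1, k - 1) * kappa[k - 1] * m[j - k]
                     for k in range(1, j + 1)))
    return m[1:]

# standard Gaussian: kappa = (0, 1, 0, 0) gives moments (0, 1, 0, 3)
moms = moments_from_cumulants([0.0, 1.0, 0.0, 0.0])
```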
None of the work mentioned above treats the case of \emph{free cumulants}, arguably the most important kind of cumulant in noncommutative probability theory. When the target algebra $B$ is commutative, there is a formula similar to those above and the framework outlined above can be used directly, employing a more exotic type of coalgebra than the tensor or symmetric coalgebra. This point of view is taken in~\cite{DrummondCole:NCWCFFHPT}. However, this point of view has a flaw: the assumption that the target is commutative is external to the theory; internally, it makes perfect sense for the target itself to be noncommutative. This is called \emph{operator-valued} free probability theory because the expectation is valued in a noncommutative algebra, such as an operator algebra. Operator-valued free cumulants, as defined by Speicher~\cite{Speicher:CTFPAOVFPT}, are somewhat more cumbersome to describe explicitly than in the commutative case using classical combinatorial methods. Consequently, Speicher develops an \emph{operator-valued $R$-transform} to collect the information concisely. In our setting, there is one evident obstruction to extending the framework developed in~\cite{DrummondColeParkTerilla:HPTI,DrummondColeParkTerilla:HPTII} to operator-valued free cumulants. The defining formulas for classical and Boolean cumulants and for free cumulants valued in a commutative algebra share a certain property: they are \emph{string-like}, meaning that the right-hand side of Equation~(\ref{outline of cumulants}) is a product of cumulants. However, the defining formulas for operator-valued free cumulants (that is, free cumulants valued in a not necessarily commutative algebra) contain terms like \[ \kappa_2(x_1\kappa_1(x_2)\otimes x_3) \] or more generally
\section{Introduction} The Navier-Stokes equations (NSEs) are a formulation of Newton's laws of motion for a continuous distribution of matter in a fluid state, characterized by an inability to support shear stresses; see \cite{Doering-Gibbon}. The NSEs allow one to determine the velocity field and the pressure of fluids confined in regions of space, and they are used to describe many different physical phenomena such as weather, water flow in tubes and ocean currents. Moreover, these equations are useful in several fields of knowledge such as the petroleum industry, plasma physics, meteorology and thermo-hydraulics (see \cite{RT} for instance). For this reason, these equations have attracted the attention of several mathematicians, since they play an important role in applications. See \cite{boldrini, Alexandre, Doering-Gibbon, GRR, GRR1, GRR2, Jiu-Wang-Xin, rosa, RT, Teman} and the references therein. On the other hand, the theory of impulsive dynamical systems has been shown to be a powerful tool to model real-world problems in physics, technology and biology, among other areas. Because of this, interest in the study of impulsive dynamical systems has increased considerably. For recent trends on this subject we indicate the works \cite{BonottoDemuner, bonotto1, BBCC, Cortes, Davis, Feroe, Yang, Zhao} and the references therein. However, the study of Navier-Stokes equations with impulse effects remains scarce.
Motivated by this fact, in this paper we investigate the existence and uniqueness of mild solutions for the impulsive NSEs \begin{equation}\label{Eq5} \displaystyle\left\{\begin{array}{ll} \displaystyle\frac{\partial u}{\partial t} + q(t)(u \cdot \nabla)u - \nu\Delta u +\nabla p = \phi(t,u), & (t,x) \in \left((0, +\infty)\setminus \displaystyle\bigcup_{k=1}^{+\infty}\{t_k\}\right) \times \Omega, \vspace{1mm}\\ {\rm div}\, u = 0, & (t,x) \in (0, +\infty) \times \Omega, \vspace{1mm}\\ u = 0, & (t,x) \in (0, +\infty) \times \partial\Omega, \vspace{1mm}\\ u(0, \cdot)= u_0, & x \in \Omega, \vspace{1mm}\\ u(t_k^+, \cdot) - u(t_k^-, \cdot) = I_k (u(t_k, \cdot)), & x\in\Omega, \; k=1, 2,\ldots , \end{array} \right. \end{equation} where $\Omega$ is a bounded smooth domain in $\mathbb{R}^2$. Here $u = (u_1,u_2)$ denotes the velocity field of a fluid filling $\Omega$, $p$ is its scalar pressure and $\nu > 0$ is its viscosity. We will assume that $q$ is a bounded function, $\phi$ is a nonlinearity which will be specified later, $\{t_k\}_{k \in \mathbb{N}} \subset (0, +\infty)$ is a sequence of impulse times such that $\displaystyle\lim_{k \rightarrow +\infty} t_k = +\infty$, $u(t_k, \cdot) = u(t_k^+, \cdot) = \displaystyle\lim_{\delta \rightarrow 0+}u(t_k + \delta, \cdot)$, $u(t_k^-, \cdot) = \displaystyle\lim_{\delta \rightarrow 0+}u(t_k - \delta, \cdot)$ and $I_k$, $k \in \mathbb{N}$, are the impulse operators. In addition to the impulsive actions in system \eqref{Eq5}, we allow the external force $\phi$ to be discontinuous and to depend on the solution $u$. We point out that the Navier-Stokes equations with impulses make sense physically and allow one to describe more precisely the phenomena modeled by these equations, since $u$ represents the velocity field of a fluid and, moreover, the external force $\phi$ need not be continuous in this setting. It is well known that phenomena occurring in the environment have impulsive behavior and
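A scalar toy version of such an impulsive evolution can make the jump structure concrete. The sketch below is ours: it keeps only the "flow between impulse times, then jump" pattern of \eqref{Eq5}, none of the PDE content, and for simplicity applies the jump to the left limit.

```python
# Scalar toy model of an impulsive evolution: integrate u' = -nu*u exactly
# between impulse times t_k, then apply the jump u(t_k^+) = u(t_k^-) + I(u).
# All concrete values (nu, I, t_k, T) are illustrative assumptions.
from math import exp

def solve(u0, nu, impulse_times, I, T):
    u, t = u0, 0.0
    for tk in sorted(s for s in impulse_times if s < T):
        u *= exp(-nu * (tk - t))   # exact flow of u' = -nu*u on (t, tk)
        u += I(u)                  # impulse: u(t_k^+) - u(t_k^-) = I(u(t_k^-))
        t = tk
    return u * exp(-nu * (T - t))  # final stretch up to time T

u_T = solve(1.0, 0.5, [1.0, 2.0], lambda u: 0.1 * u, 3.0)
print(u_T)  # closed form: 1.1**2 * exp(-1.5)
```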
\section{Introduction} Germanium is widely used as a detector material in experiments searching for rare processes like the interaction of weakly interacting massive particles (WIMPs)~\cite{dmreview}. It is possible to build detectors with very good energy resolution based on the measurement of the ionization produced in a particle interaction, or of the increase in temperature~\cite{gebolo}. In addition, the combination of the ionization and heat signals is a powerful tool to distinguish nuclear recoils from electron recoils. Moreover, the crystal-growing process used in the semiconductor industry purifies the material to a high level that matches well the stringent ultra-low radioactivity requirements of rare event searches. The potential of germanium detectors for achieving very low thresholds, below 1~keV, is particularly attractive for searches for WIMPs with masses below 10~GeV/c$^{2}$. The background at energies below 20~keV in such a detector is thus of particular interest. Notably, the contribution from tritium beta decays may have a significant impact on the sensitivity of the next generation of these detectors. The crystallization process removes all cosmogenically produced radioactive atoms, with the exception of unstable germanium isotopes like $^{68}$Ge (see below). Their populations grow back while the crystal is kept above ground, and therefore exposed to cosmic rays and the associated hadronic showers. Short-lived isotopes decay rapidly once the detectors are stored underground, deep enough to suppress the hadronic component of the cosmic rays~\cite{Farley2006451}. The isotopes that merit attention have lifetimes exceeding a few tens of days, since shorter-lived nuclei can be eliminated simply by storing the detectors in the underground site for a reasonable time before starting data taking.
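The suppression achieved by underground storage follows from simple exponential decay. For a sense of scale (the 40-day lifetime and 90-day storage below are illustrative numbers of ours, not values from the text):

```python
# Decay of a cosmogenically activated isotope once the detector is stored
# underground: A(t) = A_0 * exp(-t / tau). The lifetime and storage time
# used here are illustrative assumptions, not values from the text.
from math import exp

def remaining_fraction(tau_days, storage_days):
    """Fraction of the initial activity left after underground storage."""
    return exp(-storage_days / tau_days)

# An isotope with a lifetime of a few tens of days is strongly suppressed
# by a few months of storage:
print(f"{remaining_fraction(40.0, 90.0):.3f}")
```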
The cosmogenic products that have the most noticeable effect on the low-energy spectrum recorded in germanium detectors are those that decay via electron capture (EC). The capture is often followed by the emission of a $K$-shell X-ray with characteristic energy between 4 and 11~keV. $L$- and $M$-shell captures produce weaker lines at approximately 1 and 0.1~keV, respectively. The sharp line shapes and known $K$:$L$:$M$ intensity ratios can be used to identify and subtract the associated events. However, it is preferable to reduce their initial intensities to the lowest possible level. Measurements of the production rates of EC-decaying isotopes are helpful in designing a detector-production procedure that limits these backgrounds to acceptable levels and, more generally, in constraining models that predict the production rates of all isotopes, including those that may elude direct measurement. Another type of background of particular interest is the beta decay of tritium ($^{3}$H) originating from nuclear reactions induced by the interaction of the hadronic component of cosmic rays with atoms in the material~\cite{avignone}. The electron emitted in the beta decay of tritium has an end point $Q_{\beta}$ of only 18.6~keV, and thus contributes to the background of low-energy events over the entire energy range relevant for low-mass WIMP searches. The lifetime of $^{3}$H is particularly long ($\tau$ = 17.79~y), so the tritium activity can be expected to remain essentially constant throughout the life of the detector. The only
\section{Introduction.} Katz \cite[5.3.1]{Katz90ESDE} discovered that the hypergeometric $\sD$-modules on $\bA^1_{\bC}\setminus\{0\}$ can be described as the multiplicative convolution of hypergeometric $\sD$-modules of rank one. Precisely speaking, Katz proved statement (ii) in the following theorem (statement (i) is trivial, but is included for comparison with a later theorem). \begin{cvthm} Let $\balpha=(\alpha_1,\dots,\alpha_m)$ and $\bbeta=(\beta_1,\dots,\beta_n)$ be two sequences of complex numbers and assume that $\alpha_i-\beta_j$ is not an integer for any $i,j$. Let $\sHyp(\balpha;\bbeta)$ be the $\sD$-module on $\bG_{\rmm,\bC}$ defined by the hypergeometric operator \[ \Hyp(\balpha;\bbeta)=\prod_{i=1}^m(x\partial-\alpha_i)-x\prod_{j=1}^n(x\partial-\beta_j), \] that is, \[ \sHyp(\balpha;\bbeta)\defeq\sD_{\bA^1_{\bC}\setminus\{0\}}/\sD_{\bA^1_{\bC}\setminus\{0\}}\Hyp(\balpha;\bbeta). \] Then, $\sHyp(\balpha;\bbeta)$ has the following properties. \textup{(i)} If $m\neq n$, then $\sHyp(\balpha;\bbeta)$ is a free $\sO_{\bG_{\rmm,\bC}}$-module of rank $\max\{m,n\}$. If $m=n$, then the restriction of $\sHyp(\balpha;\bbeta)$ to $\bG_{\rmm,\bC}\setminus\{1\}$ is a free $\sO_{\bG_{\rmm,\bC}\setminus\{1\}}$-module of rank $m$. \textup{(ii)} We have an isomorphism \[ \sHyp(\balpha;\bbeta)\cong \sHyp(\alpha_1;\emptyset)\ast\dots\ast\sHyp(\alpha_m;\emptyset) \ast\sHyp(\emptyset;\beta_1)\ast\dots\ast\sHyp(\emptyset;\beta_n), \] where $\ast$ denotes the multiplicative convolution of $\sD_{\bG_{\rmm,\bC}}$-modules. \end{cvthm} Besides the hypergeometric $\sD$-modules over the complex numbers, Katz also studied the $\ell$-adic theory of hypergeometric sheaves. Let $k$ be a finite field with $q$ elements, let $\psi$ be a non-trivial additive character on $k$ and let $\bchi=(\chi_1,\dots,\chi_m), \brho=(\rho_1,\dots,\rho_n)$ be sequences of characters on $k^{\times}$ satisfying $\chi_i\neq\rho_j$ for all $i,j$.
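The operator in the Convolution Theorem can be sanity-checked on power series. With $\theta=x\partial$, the classical Gauss series $_2F_1(a,b;c;x)$ is annihilated by $\Hyp((0,1-c);(-a,-b))$, since on coefficients $c_k$ the equation reads $\prod_i(k-\alpha_i)c_k=\prod_j(k-1-\beta_j)c_{k-1}$, which is exactly the $_2F_1$ coefficient ratio. A small exact-arithmetic verification of ours:

```python
# Check on coefficients that Hyp((0, 1-c); (-a, -b)) annihilates the Gauss
# series 2F1(a,b;c;x): with theta = x d/dx, the operator equation becomes
#   k * (k - 1 + c) * c_k = (k - 1 + a) * (k - 1 + b) * c_{k-1}.
from fractions import Fraction

def gauss_coeffs(a, b, c, N):
    """Coefficients of 2F1(a,b;c;x) = sum c_k x^k, exact rational arithmetic."""
    cs = [Fraction(1)]
    for k in range(1, N):
        cs.append(cs[-1] * (a + k - 1) * (b + k - 1) / ((c + k - 1) * k))
    return cs

a, b, c = Fraction(1, 2), Fraction(1, 3), Fraction(5, 4)
alpha, beta = (Fraction(0), 1 - c), (-a, -b)
cs = gauss_coeffs(a, b, c, 20)
for k in range(1, 20):
    lhs = cs[k] * (k - alpha[0]) * (k - alpha[1])
    rhs = cs[k - 1] * (k - 1 - beta[0]) * (k - 1 - beta[1])
    assert lhs == rhs
print("Hyp((0,1-c); (-a,-b)) annihilates 2F1(a,b;c;x) to order 20")
```

Note the assumption $\alpha_i-\beta_j\notin\bZ$ holds here since $a$, $b$ are non-integral.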
Then, he \emph{defined} the $\ell$-adic hypergeometric sheaves $\sH_{\psi,!}^{\ell}(\bchi;\brho)$ on $\bG_{\rmm,k}$ by using the multiplicative convolution of the $\sH_{\psi,!}^{\ell}(\chi_i;\emptyset)$'s and $\sH_{\psi,!}^{\ell}(\emptyset;\rho_j)$'s, where these convolvends are defined by using Artin--Schreier sheaves and Kummer sheaves. This $\ell$-adic sheaf $\sH_{\psi,!}^{\ell}(\bchi;\brho)$ has a property similar to (i) in the above theorem. Namely, it is a smooth sheaf on $\bG_{\rmm,k}$ of rank $\max\{m,n\}$ if $m\neq n$, and its restriction to $\bG_{\rmm,k}\setminus\{1\}$ is a smooth sheaf of rank $m$ if $m=n$ \cite[Theorem 8.4.2]{Katz90ESDE}. Moreover, by definition, $\sH_{\psi,!}^{\ell}(\bchi;\brho)$ has a Frobenius structure. The Frobenius trace functions of the $\ell$-adic hypergeometric sheaves are called ``hypergeometric functions over finite fields''. These functions generalize the classical Kloosterman sums. Moreover, they have an intimate connection with the Frobenius action on the \'etale cohomology of a certain class of algebraic varieties (for example, Calabi--Yau varieties) over finite fields. (The hypergeometric function over a finite field is also called the ``Gaussian hypergeometric function'' by Greene \cite{Greene}, who found this function independently of Katz, based on a different motivation.) The purpose of this article is to develop a $p$-adic counterpart of these complex and $\ell$-adic hypergeometric objects. This $p$-adic hypergeometric object will have a presentation in terms of a ($p$-adic) differential equation, and will at the same time carry a Frobenius structure. For its formalisation, we exploit the theory of arithmetic $\sD$-modules introduced by Berthelot. The main theorem of this article is stated as follows. \begin{mainthm} Let $K$ be a complete discrete valuation field of mixed characteristic $(0,p)$ with residue field $k$, a finite field with $q$ elements.
Let $\pi$ be an element of $K$ that satisfies $\pi^{q-1}=(-p)^{(q-1)/(p-1)}$. Let $\balpha=(\alpha_1,\dots,\alpha_m)$ and $\bbeta=(\beta_1,\dots,\beta_n)$ be two sequences of elements of $\frac{1}{q-1}\bZ$, and assume that $\alpha_i-\beta_j$ is not an integer for any $i,j$. Let $\sHyp_{\pi}(\balpha;\bbeta)$ be the $\sD^{\dag}_{\widehat{\bP}^1,\bQ}(\pdag{\{0,\infty\}})$-module defined by the $p$-adic hypergeometric differential operator \[ \Hyp_{\pi}(\balpha;\bbeta)=\prod_{i=1}^m(x\partial-\alpha_i)-(-1)^{m+np}\pi^{m-n}x\prod_{j=1}^n(x\partial -\beta_j), \] that is, \[ \sHyp_{\pi}(\balpha;\bbeta)\defeq\sD^{\dag}_{\widehat{\bP^1_V},\bQ}(\pdag{\{0,\infty\}})/
\section{Introduction} Many problems, particularly in combinatorics, reduce to asking whether some graph with a given property exists, or alternatively, asking how many such non-isomorphic graphs exist. Such graph search and graph enumeration problems are notoriously difficult, in no small part due to the extremely large number of symmetries in graphs. In practical problem solving, it is often advantageous to eliminate these symmetries, which arise naturally due to graph isomorphism: typically, if a graph $G$ is a solution, then so is any other graph $G'$ that is isomorphic to $G$. General approaches to graph search problems typically involve either \emph{generate and test}, explicitly enumerating all (non-isomorphic) graphs and checking each for the given property, or \emph{constrain and generate}, encoding the problem for some general-purpose discrete satisfiability solver (e.g.\ SAT, integer programming, constraint programming), which performs the enumeration implicitly. % In the explicit approach, one typically iterates, repeatedly applying an extend-and-reduce step: first \emph{extend} the set of all non-isomorphic graphs with $n$ vertices, in all possible ways, to graphs with $n+1$ vertices; then \emph{reduce} the extensions to their non-isomorphic (canonical) representatives. % In the constraint-based approach, one typically first encodes the problem and then applies a constraint solver in order to produce solutions. The (unknown) graph is represented in terms of Boolean variables describing it as an adjacency matrix $A$. The encoding is a conjunction of constraints that constitute a model, $\varphi_A$, such that any satisfying assignment to $\varphi_A$ is a solution to the graph search problem. Typically, symmetry breaking constraints~\cite{Crawford96,CodishMPS13} are added to the model to reduce the number of isomorphic solutions, while maintaining the correctness of the model.
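A brute-force sketch of the generate-and-test approach, taking the lexicographically smallest relabeling as the canonical representative (the naive notion that structural canonizers like \texttt{nauty} improve upon; feasible only for tiny $n$):

```python
# Enumerate all labeled graphs on n vertices and reduce them to canonical
# representatives: the lexicographically smallest edge list over all vertex
# permutations. Exponential in both edges and vertices; a toy illustration.
from itertools import combinations, permutations

def canonical(n, edges):
    """Smallest relabeling of the edge set, as a sorted tuple of edges."""
    best = None
    for p in permutations(range(n)):
        key = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        if best is None or key < best:
            best = key
    return best

def nonisomorphic(n):
    """Number of non-isomorphic simple graphs on n vertices."""
    pairs = list(combinations(range(n), 2))
    seen = set()
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        seen.add(canonical(n, edges))
    return len(seen)

print(nonisomorphic(4))  # 11 non-isomorphic graphs on 4 vertices
```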
It remains unknown whether there exists a polynomial-time algorithm to decide the graph isomorphism problem. Nevertheless, finding good graph isomorphism algorithms is critical when exploring graph search and enumeration problems. Recently an algorithm was published by \citeN{Babai15} which runs in time $O\left( {\exp \left( \log^c(n) \right) } \right)$, for some constant $c>1$, and solves the graph isomorphism problem. Nevertheless, top-of-the-line graph isomorphism tools use different methods, which are, in practice, faster. \citeN{nauty} introduces an algorithm for graph canonization; its implementation, called \texttt{nauty}\ (which stands for \emph{no automorphisms, yes?}), is described in \cite{nauty_impl}. % In contrast to earlier works, where the canonical representation of a graph was typically defined to be the smallest graph isomorphic to it (in the lexicographic order), \texttt{nauty}\ introduced a notion which takes structural properties of the graph into account. For details on how \texttt{nauty}~defines canonicity and for the inner workings of the \texttt{nauty}\ algorithm see~\cite{nauty,nauty_impl,hartke_nauty,nautyII}. % In recent years \texttt{nauty}~has gained a great deal of popularity and success. Other similar tools are \textsf{bliss}~\cite{bliss} and \textsf{saucy}~\cite{saucy}. The \texttt{nauty}\ graph automorphism tool consists of two main components: (1) a C library, \texttt{nauty}, which may be linked to at runtime and contains functions for finding the canonical labeling of a graph, and (2) a collection of applications, \texttt{gtools}, that implement an assortment of common tasks that \texttt{nauty}\ is typically applied to. %
\section{\label{intro}Introduction} The ``instanton calculus'' is a common approach for studying non-perturbative semiclassical effects in gauge theories and sigma models. One of the first and perhaps the best-known illustrations of this approach is the $O(3)$ Non-Linear Sigma Model (NLSM) in two dimensions, where multi-instanton configurations admit a simple analytic form \cite{Polyakov:1975yp}. It is less known that the $O(3)$ NLSM provides an opportunity to explore a mechanism of exact summation of the instanton configurations in the path integral. In order to explain the purpose of this paper, we start with a brief overview of the main ideas behind this summation. The instanton contributions in the $O(3)$ NLSM were calculated in a semiclassical approximation in the paper \cite{Fateev:1979dc}. It was shown that the effect of instantons with positive topological charge can be described in terms of a non-interacting theory of Dirac fermions. Moreover, every instanton has an anti-instanton counterpart with the same action and opposite topological charge. Thus, neglecting the instanton-anti-instanton interaction, one arrives at a theory with two non-interacting fermions. Although the classical equation has no solutions containing both instantons and anti-instantons, such configurations must still be taken into account. In ref.\,\cite{Bukhvostov:1980sn} Bukhvostov and Lipatov (BL) found that the weak instanton-anti-instanton interaction is described by a theory of two Dirac fermions, $\psi_\sigma \ (\sigma=\pm)$, with the Lagrangian \begin{eqnarray}\label{Lagr1} {\cal L}= \sum_{\sigma=\pm }{\bar \psi}_\sigma \big({\rm i} \gamma^\mu\partial_\mu-M\big){ \psi}_\sigma- g\, \big({\bar \psi}_+\gamma^\mu{ \psi}_+\big) \big({\bar \psi}_- \gamma_\mu{ \psi}_-\big)\ . \end{eqnarray} The perturbative treatment of \eqref{Lagr1} leads to ultraviolet (UV) divergences and requires renormalization.
The renormalization can be performed by adding to the Lagrangian the following counterterms, which preserve the invariance w.r.t. two independent $U(1)$ rotations $\psi_\pm\mapsto \mbox{e}^{{\rm i}\alpha_\pm} \, \psi_\pm$, as well as the permutation $\psi_+\leftrightarrow\psi_-$: \begin{eqnarray}\label{Lagr2} {\cal L}_{\rm BL}={\cal L}-\sum_{\sigma=\pm}\Big(\,\delta M\, {\bar \psi}_\sigma{ \psi}_\sigma+ \frac{g_1}{2}\, \big({\bar \psi}_\sigma\gamma^\mu{ \psi}_\sigma\big)^2\Big)\ . \end{eqnarray} In fact, the cancellation of the UV divergences leaves one of the counterterm couplings undetermined. It is possible to use the renormalization scheme where the renormalized mass $M$, the bare mass $M_0=M+\delta M$ and the UV cut-off energy scale $\Lambda_{\rm UV}$ obey the relation \begin{eqnarray}\label{aoisasosa} \frac{M}{M_0}=\bigg(\frac{M}{\Lambda_{\rm UV}}\bigg)^\nu\ , \end{eqnarray} where the exponent $\nu$ is a renormalization group invariant parameter, as is the dimensionless coupling $g$. For $\nu=0$ the fermion mass does not require renormalization and the only divergent quantity is the zero-point energy. The theory, in a sense, turns out to be UV finite in this case. Then the specific {\it logarithmic} divergence of the zero-point energy can be interpreted as a ``small-instanton'' divergence in the context of the $O(3)$ NLSM. Recall that the standard lattice description of the $O(3)$ sigma model has problems -- for example, the lattice topological susceptibility does not obey naive scaling laws. L\"uscher has shown \cite{Luscher:1981tq} that this is due to the so-called ``small instantons'' -- field configurations such as windings of the $O(3)$ field around plaquettes of lattice size, which give rise to spurious contributions to quantities related to the zero-point energy. To the best of our knowledge, there is no indication that
\section{Introduction} The motivation for this note was the observation that the basic recursion relation for the modified Bessel function $K$ [1], $$K_0(z)+\left(\frac{2}{z}\right)K_1(z)=K_2(z)$$ can be expressed as the symmetry with respect to $m=0$ and $n=1$ of the sum $$\sum_{k=0}^n K_{k-m-1}(z)\left(\frac{z}{2}\right)^{k+m}.\eqno(1)$$ The attempt to generalize this to arbitrary $m$ and $n$ led to our principal result.\vskip .1in \noindent {\bf Theorem 1}\vskip .1in For positive integers $m$ and $n$ the expression $$(n+1)!\sum_{k=0}^n\frac{1}{k!}{m+k+1\choose{m}}K_{k-m-1}(z)\left(\frac{z}{2}\right)^{k+m}\eqno(2)$$ is symmetric with respect to $m$ and $n$.\vskip .1in \noindent This will be proven in the following section and some similar results are presented in the concluding section. \section{Calculation} Consider the sum $$F(n,p,q)=\frac{(n+q+1)!}{q!(q+1)!}\sum_{k=0}^p \frac{(q+k+1)!}{(k+1)!}\frac{(n+k)!}{k!}\eqno(3)$$ for $p,q,n\in {\cal{Z}}^+$. One finds, e.g., $$F(1,p,q)=\frac{(p+q+2)!}{p!q!}$$ $$F(2,p,q)=\frac{(p+q+2)!}{p!q!}[6+2(p+q)+pq]$$ and by induction on $n$ one obtains \vskip .1in \newpage \noindent {\bf Lemma 1}\vskip .1in $$\frac{p!q!}{(p+q+2)!}F(n,p,q)$$ is a polynomial $P(p,q)=P(q,p)$ of degree $n-1$ in $p$ and $q$. Next, by interchanging the order of summation and invoking Lemma 1, one has \vskip .1in \noindent {\bf Lemma 2}\vskip .1in $$G(p,q,z)=\sum_{n=0}^{\infty}\frac{1}{(n!)^2}F(n,p,q)z^n=\sum_{k=0}^p {q+k+1\choose{q}}\;_2F_1(k+1,q+2;1;z)$$ is analytic for $ |z|<1$ and symmetric with respect to $p$ and $q$. \vskip .1in Finally, noting that [2] $$\int_0^{\infty}J_0(z\sqrt{x})\;_2F_1(k+1,q+2;1;-x)dx=\frac{2^{-k-q} z^{k+q+1}}{k!(q+1)!}K_{k-q-1}(z)\eqno(4)$$ (and changing $q$ to $m$ and $p$ to $n$) we obtain Theorem 1.
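Theorem 1 is easy to confirm numerically. In the sketch below (ours), $K_\nu$ is evaluated from its standard integral representation $K_\nu(z)=\int_0^\infty e^{-z\cosh t}\cosh(\nu t)\,dt$ by the trapezoidal rule, which is very accurate here because the integrand is even in $t$ and decays superexponentially.

```python
# Numerical check of the symmetry of expression (2) in m and n.
from math import exp, cosh, comb, factorial

def besselk(nu, z, T=30.0, N=4000):
    """K_nu(z) via trapezoidal quadrature of exp(-z cosh t) cosh(nu t)."""
    h = T / N
    s = 0.5 * (exp(-z) + exp(-z * cosh(T)) * cosh(nu * T))
    for i in range(1, N):
        t = i * h
        s += exp(-z * cosh(t)) * cosh(nu * t)
    return h * s

def S(m, n, z):
    """Expression (2): (n+1)! sum_k C(m+k+1,m) K_{k-m-1}(z) (z/2)^(k+m) / k!."""
    return factorial(n + 1) * sum(
        comb(m + k + 1, m) / factorial(k)
        * besselk(k - m - 1, z) * (z / 2) ** (k + m)
        for k in range(n + 1))

z = 1.7
print(S(2, 5, z), S(5, 2, z))  # the two values should agree
```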
For example, with $m=0$ we get the possibly new summation $$\sum_{k=0}^n\frac{1}{k!}K_{k-1}(z)(z/2)^k=\frac{1}{n!}K_{n+1}(z)(z/2)^n.\eqno(5)$$ Setting $z=-ix$ in the relation $$K_{\nu}(z)= \frac{\pi}{2}i^{\nu+1}[J_{\nu}(iz)+iY_{\nu}(iz)]\eqno(6)$$ after a small manipulation one obtains \vskip .1in \noindent {\bf Theorem 2}\vskip .1in $$(-1)^m(n+1)!\sum_{k=0}^n\frac{1}{k!}{m+k+1\choose{m}}\, J_{k-m-1}(x)(x/2)^{k+m}\eqno(7)$$ $$(-1)^m(n+1)!\sum_{k=0}^n\frac{1}{k!} {m+k+1\choose{m}}\, Y_{k-m-1}(x)(x/2)^{k+m}\eqno(8)$$ are both symmetric with respect to $m$ and $n$.\vskip .1in \newpage \noindent {\bf Corollary }\vskip .1in $$\sum_{k=0}^{n}\frac{1}{k!}\, {\cal{C}}_{k-1}(x)(x/2)^k =-\frac{1}{n!}\, {\cal{C}}_{n+1}(x)(x/2)^n\eqno(9)$$ where ${\cal{C}}=aJ +b Y$. \vskip .2in \section{Discussion} Analogous sum relations can be obtained by other means. For example, let us start with the hypergeometric summation formula[3] $$\;_3F_2(-n,1,a;3-a,n+3;-1)=\frac{(n+2)n!}{2(a-1)\Gamma(a-2)}\left[\frac{\Gamma(a-1)}{(n+1)!}+(-1)^n\Gamma(a-n-2)\right].\eqno(9)$$ But, $$\;_3F_2(-n,1,a;3-a,n+3;-1)=\frac{n!(n+2)!}{\Gamma(a-2)\Gamma(a)}\sum_{k=1}^{n+1} (-1)^{k+1}\frac{\Gamma(a-1+k)\Gamma(a-1-k)}{\Gamma(n+k)\Gamma(n-k)}.\eqno(10)$$ With $n$ replaced by $n-1$ and $a=(s+n)/2+1$, the first term of (9) is half of what would be the $k=0$ term of the sum in (10) and one has $$\sum_{k=0}^n(-1)^k(2-\delta_{k,0})\frac{\Gamma\left(\frac{s+n}{2}-k\right)\Gamma\left(\frac{s+n}{2}+k\right)}{(n-k)!(n+k)!}=\frac{(-1)^n}{n!}\Gamma\left(\frac{s+n}{2}\right)\Gamma\left(\frac{s-n}{2}\right).\eqno(11)$$ Next we take the inverse Mellin transform of both sides, noting that $$\int_{c-i\infty}^{c+i\infty}\frac{ds}{2\pi i}(2/x)^s\Gamma\left(\frac{s+n}{2}-k\right)\Gamma\left(\frac{s+n}{2}+k\right)=4x^nK_{2k}(x)\eqno(12)$$ $$\int_{c-i\infty}^{c+i\infty}\frac{ds}{2\pi i}(2/x)^s\Gamma\left(\frac{s-n}{2}\right)\Gamma\left(\frac{s+n}{2}\right)=4K_n(x).\eqno(13)$$ Consequently, 
$$K_n(x)=\left(\frac{x}{2}\right)^n\sum_{k=0}^{n}(-1)^{k+n}n!\frac{(2-\delta_{k,0})}{(n-k)!(n+k)!}K_{2k}(x).\eqno(14)$$ Since many integrals of the Gauss hypergeometric function are known, one of the most extensive tabulations being [2], Lemma 2 is the gateway to a myriad of unexpected finite-sum identities involving various classes of special functions. We conclude by listing a small selection. From [2] $$\int_0^{\infty}(1-e^{-t})^{\lambda-1}e^{-xt}\;_2F_1(k+1,m+2;1;ze^{-t})dt$$ $$=B(x,\lambda)\;_3F_2(k+1,m+2,x;1,x+\lambda; z) \eqno(15)$$ and one has the symmetry with respect to $m$ and $n$ of $$ \sum_{k=0}^n {m+k+1\choose{m}}\;_3F_2(k+1,m+2,x;1,x+\lambda;z)\eqno(16)$$ For example, for $m=0$, $$\sum_{k=0}^n\;_3F_2(k+1,2,x;1,x+\lambda;z)=(n+1)\;_2F_1(n+2,x;x+\lambda;z).\eqno(17)$$ Similarly, $$\frac{n!(n+1)!}{\Gamma(n+2-a)}\sum_{k=0}^n\frac{(m+k+1)!\Gamma(k+1-a)}{k!(k+1)!}$$ $$=\frac{m!(m+1)!}{\Gamma(m+2-a)}\sum_{k=0}^m\frac{(n+k+1)!\Gamma(k+1-a)}{k!(k+1)!}\eqno(18)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}=\sum_{k=0}^m{n+k+1\choose{n}}.\eqno(19)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}\;_3F_2(k+1,m+2,a;1,a+b;z)$$ $$=\sum_{k=0}^m{n+k+1\choose{n}}\;_3F_2(k+1,n+2,a;1,a+b;z).\eqno(20)$$ $$\sum_{k=0}^n\;_3F_2(k+1,2,a;1,a+b;1)=\frac{(n+1)\Gamma(b-n-2)\Gamma(a+b)}{\Gamma(a+b-n-2)\Gamma(b)}.\eqno(21)$$ $$\sum_{k=0}^n\frac{(p+k)!}{k!}=\frac{(n+p+1)!}{(p+1)n!},\quad p=0,1,2,\cdots\eqno(22)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}z^{(k+m)/2}S_{-k-m-2,k-m-1}(z)$$ $$=\sum_{k=0}^m{n+k+1\choose{n}}z^{(k+n)/2}S_{-k-n-2,k-n-1}(z).\eqno(23)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}z^{(k+m)/2}W_{-k-m-2,k-m-1}(z)$$ $$=\sum_{k=0}^m{n+k+1\choose{n}}z^{(k+n)/2}W_{-k-n-2,k-n-1}(z).\eqno(24)$$ \section{References} \noindent [1] G.E. Andrews, R. Askey and R. Roy, {\it Special Functions} [Cambridge University Press, 1999] \noindent [2] A.P. Prudnikov, Yu.A. Brychkov and O.I. Marichev, {\it Integrals and Series, Vol. 3} [Gordon and Breach, NY 1986], Section 2.21.1. \noindent [3] Ibid., Section 2.4.1. \end{document}
