well-known that for Rayleigh [*flat*]{}-fading channels, the error rate decays only linearly with signal-to-noise ratio ($\snr$) [@Proakis:book]. For frequency-selective channels, however, proper exploitation of the available frequency diversity forces the error probability to decay at a possibly higher rate and, therefore, can potentially achieve higher diversity gains, depending on the detection scheme employed at the receiver. While maximum likelihood sequence detection (MLSD) [@forney:ML] achieves optimum performance over ISI channels, its complexity (as measured by the number of MLSD trellis states) grows *exponentially* with the spectral efficiency and the channel memory. As a low-complexity alternative, filtering-based symbol-by-symbol equalizers (both linear and decision feedback) have been widely used over the past four decades (see [@qureshi:adaptive] and [@vitetta] for excellent tutorials). Despite their long history and successful commercial deployment, the performance of symbol-by-symbol linear equalizers over wireless fading channels is not fully characterized. More specifically, it is not known whether their observed sub-optimum performance is due
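As a hedged illustration (our sketch, not from the excerpt) of the diversity-one behaviour cited above, the following Monte Carlo experiment shows the error rate of coherently detected BPSK over a Rayleigh flat-fading channel falling roughly as $1/(4\,\snr)$ at high $\snr$, rather than exponentially as on an AWGN channel:

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_rayleigh_bpsk(snr_db, n_bits=200_000):
    """Monte Carlo BER of BPSK over a Rayleigh flat-fading channel.

    Illustrates diversity-one behaviour: BER ~ 1/(4*SNR) at high SNR.
    """
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                                   # BPSK symbols in {-1, +1}
    h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)
    noise = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2 * snr)
    y = h * s + noise                                      # flat-fading observation
    detected = (np.real(np.conj(h) * y) > 0).astype(int)   # coherent detection
    return np.mean(detected != bits)

for snr_db in (10, 20, 30):
    print(snr_db, ber_rayleigh_bpsk(snr_db))   # roughly 2.3e-2, 2.5e-3, 2.5e-4
```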
the ForwArd Search ExpeRiment at the LHC (FASER), plans to measure the neutrino cross section at TeV energies [@Ariga:2019ufm]. The IceCube Neutrino Observatory is a cubic-kilometer neutrino detector installed in the ice at the geographic South Pole [@Aartsen:2016nxy], between depths of 1450 m and 2450 m, completed in 2010. Reconstruction of the direction, energy and flavor of the neutrinos relies on the optical detection of Cherenkov radiation emitted by charged particles produced in the interactions of neutrinos in the surrounding ice or the nearby bedrock. As the transmission probability through the Earth depends on the neutrino cross section, a change in the cross section affects the arrival flux of neutrinos at IceCube as a function of energy and zenith angle. Recently, IceCube performed the first measurement of the high-energy neutrino-nucleon cross section using a sample of upgoing muon neutrinos [@Aartsen:2017kpd]. In this paper, we present a measurement of the neutrino-nucleon cross section using the high-energy
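A minimal sketch of the attenuation effect described above (our illustration, assuming a uniform-density Earth; real analyses use the PREM density profile and full propagation): the survival probability along a chord scales as $e^{-\sigma N_A \rho L}$, so the upgoing flux is exponentially sensitive to the cross section:

```python
import numpy as np

N_A = 6.022e23          # nucleons per gram of matter (approximately)
R_EARTH = 6.371e8       # Earth radius [cm]
RHO = 5.5               # mean Earth density [g/cm^3]

def transmission_probability(sigma_cm2, cos_zenith):
    """Toy survival probability for a neutrino crossing the Earth.

    cos_zenith < 0 corresponds to upgoing neutrinos; downgoing ones see
    a negligible column depth in this toy model.
    """
    if cos_zenith >= 0:
        return 1.0
    chord = 2.0 * R_EARTH * abs(cos_zenith)     # chord length through Earth [cm]
    n_column = N_A * RHO * chord                # nucleons per cm^2 along the chord
    return np.exp(-sigma_cm2 * n_column)

# e.g. a hypothetical 1e-33 cm^2 cross section, vertically upgoing:
print(transmission_probability(1e-33, -1.0))    # ~0.015: strong absorption
```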
the cone of effective cycles has been explicitly computed is relatively small to date [@F], [@CLO], [@PP] etc. Let $E_1$ and $E_2$ be two vector bundles over a smooth curve $C$ and consider the fibre product $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$. Motivated by the results in [@F], in this paper we compute the cones of effective cycles on $X$ in the following cases. Case I: When both $E_1$ and $E_2$ are semistable vector bundles, of ranks $r_1$ and $r_2$ respectively, the cones of effective codimension-$k$ cycles are described in Theorem 3.2. Case II: When neither $E_1$ nor $E_2$ is semistable, the cones of low-dimensional effective cycles are computed in Theorem 3.3 and the remaining cases in Theorem 3.5.

Preliminaries
=============

Let $X$ be a smooth projective variety of dimension $n$. $N_k(X)$ denotes the real vector space of $k$-cycles on $X$ modulo numerical equivalence; for each $k$, it is finite-dimensional.
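For reference, the standard definition behind this notation (our addition, following the usual conventions in the literature): the effective cone in $N_k(X)$ is generated by classes of $k$-dimensional subvarieties, and its closure is the pseudoeffective cone,
$$\mathrm{Eff}_k(X)=\Big\{\textstyle\sum_i a_i\,[Z_i]\in N_k(X)\;:\;a_i\ge 0,\ Z_i\subseteq X\text{ a $k$-dimensional subvariety}\Big\},\qquad \overline{\mathrm{Eff}}_k(X)=\overline{\mathrm{Eff}_k(X)}.$$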
of velocity: usual measurements of momentum (velocity) distributions are performed globally, with no resolution in position, whereas the detection of the velocity field (or of the flux) implies a local measurement, which may provide values outside the domain of the global velocities, due to the non-commutativity of momentum and position [@local]. Despite its intriguing nature, obviously counterintuitive from a classical viewpoint, quantum backflow has not yet received as much attention as other quantum effects. First discovered by Allcock in 1969 [@allcock], it only started to be studied in the mid-1990s. Bracken and Melloy [@BM] provided a bound for the maximal fraction of probability that can undergo backflow. Then, additional bounds and analytic examples, and its implications for the definition of arrival times of quantum particles, were discussed by Muga *et al.* [@muga; @Leav; @Jus]. Recently, Berry [@BerryBack] analyzed the statistics of backflow for random wavefunctions,
and proxy models are examples of research fields where numerous contributions have been proposed. More specifically, global Sensitivity Analysis (SA) is a key method for investigating complex computer codes which model physical phenomena. It involves a set of techniques used to quantify the influence of uncertain input parameters on the variability in numerical model responses. Recently, sensitivity studies have been applied in a large variety of fields, ranging from chemistry [@CUK73; @T90] or oil recovery [@IMDR01] to space science [@Carra07] and nuclear safety [@IVD06].

In general, global SA refers to the probabilistic framework, meaning that the uncertain input parameters are modelled as a random vector. By propagation, every computer code output is itself a random variable. Global SA techniques then consist in comparing the probability distribution of the output with the conditional probability distribution of the output when some of the inputs are fixed. This yields in particular useful information on
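A minimal sketch of this comparison (our illustration with a hypothetical `toy_model`, not a method from the excerpt): the first-order Sobol index $S_i=\operatorname{Var}(\mathbb{E}[Y\mid X_i])/\operatorname{Var}(Y)$ can be estimated by the pick-freeze device, drawing two input samples that share only coordinate $i$:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(x):
    """Hypothetical computer code: Ishigami-type response of three inputs."""
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

def first_order_sobol(model, dim, i, n=100_000):
    """Pick-freeze estimate of S_i = Var(E[Y|X_i]) / Var(Y).

    The two samples agree only in column i, so the covariance of the two
    outputs isolates the output variance explained by X_i alone.
    """
    a = rng.uniform(-np.pi, np.pi, size=(n, dim))
    b = rng.uniform(-np.pi, np.pi, size=(n, dim))
    b[:, i] = a[:, i]                       # freeze input i across both samples
    ya, yb = model(a), model(b)
    return np.cov(ya, yb)[0, 1] / np.var(ya)

for i in range(3):
    print(f"S_{i+1} ~ {first_order_sobol(toy_model, 3, i):.3f}")
    # analytic values for this toy model: 0.314, 0.442, 0.0
```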
the CCD detectors, and that of the alignments of the interferometers in terms of matching of the wave fronts of the sources in the detection regions. Another scheme involving three interferometers and three BEC’s is discussed; it leads to Greenberger-Horne-Zeilinger (GHZ) sign contradictions, as in the usual GHZ case with three particles, but for an arbitrarily large number of them. Finally, generalizations of the Hardy impossibilities to an arbitrarily large number of particles are introduced. BEC’s provide a large versatility for observing violations of local realism in a variety of experimental arrangements.

author:
- |
  F. Laloë$^{a}$ and W. J. Mullin$^{b}$\
  $^{a}$Laboratoire Kastler Brossel, ENS, UPMC, CNRS; 24 rue Lhomond, 75005 Paris, France\
  $^{b}$Department of Physics, University of Massachusetts, Amherst, Massachusetts 01003 USA
title: 'Interferometry with independent Bose-Einstein condensates: parity as an EPR/Bell quantum variable'
---

PACS numbers: 03.65.Ud, 03.75.Gg, 42.50.Xa

The original Einstein-Podolsky-Rosen (EPR)
that there could be many equivalent representations or ‘pictures’ as they are often called in this context. For example the Schrödinger picture, where all operators are independent of time, while the wave function or ket carries the time dependence, is well known. In this picture the Hamiltonian is written in the form $$\begin{aligned} H_S=H(\hat q,\hat p)\hspace{0.5cm} \mbox{with the wave function }\hspace{0.1cm} \psi(q,t).\end{aligned}$$ Here the momentum operator is written in the form $\hat p=-i\hbar\partial/\partial q$. When considering quantum field theory, it is the Heisenberg picture that comes to the fore. In this picture all the time dependence is taken into the operators by introducing a unitary transform $U(t)$ so that $$\begin{aligned} H_H(t)=U^\dag H_SU\hspace{0.3cm}\mbox{with}\hspace{0.3cm} U(t)=e^{iHt/\hbar}\end{aligned}$$ where $H(q,p)$ is the Hamiltonian. Then we have the following relations $$\begin{aligned} \hat q_H(t)=U^\dag \hat q_SU\hspace{0.3cm} \mbox{and}\hspace{0.3cm} \hat p_H(t)= U^\dag \hat p_S U.\end{aligned}$$ Apart from the interaction picture and the Fock picture, a little-known picture was implicitly introduced by Dirac [@pd47] in
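For completeness (our addition; sign and ordering conventions for $U$ vary between texts, including the excerpt's), the pictures agree on all matrix elements because the evolution operator can be moved from the states onto the operators:
$$\langle\psi(t)|\,\hat A_S\,|\psi(t)\rangle=\langle\psi(0)|\,e^{iHt/\hbar}\,\hat A_S\,e^{-iHt/\hbar}\,|\psi(0)\rangle=\langle\psi(0)|\,\hat A_H(t)\,|\psi(0)\rangle,\qquad |\psi(t)\rangle=e^{-iHt/\hbar}|\psi(0)\rangle.$$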
as the initial conditions. Finally, a simulation study clarifies the proposed method and verifies its efficiency.'

address:
- |
  Control Systems Lab, Department of Mechanical Engineering, National Technical University of Athens, 9 Heroon Polytechniou Street, Zografou 15780.\
  E-mail: {shahab, kkyria}@mail.ntua.gr
- |
  ACCESS Linnaeus Center, School of Electrical Engineering and KTH Center for Autonomous Systems, KTH Royal Institute of Technology, SE-100 44, Stockholm, Sweden.\
  E-mail: {anikou, dimos}@kth.se
author:
- 'Shahab Heshmati-Alamdari, Alexandros Nikou and Kostas J. Kyriakopoulos[^1]'
- 'Shahab Heshmati-alamdari'
- Alexandros Nikou
- 'Kostas J. Kyriakopoulos'
- 'Dimos V. Dimarogonas'
bibliography:
- 'mybibfilealina.bib'
title: A Robust Force Control Approach for Underwater Vehicle Manipulator Systems
---

Underwater Vehicle Manipulator System, Nonlinear Control, Autonomous Underwater Vehicle, Marine Robotics, Force Control, Robust Control.

Introduction
============

In view of the development of autonomous underwater vehicles, the capability of such vehicles to interact with the environment using a robot manipulator has gained attention in the literature.
allocating the risk to individual institutions. This has led to further research on risk statistics.

In their seminal paper, Burgert and Rüschendorf (2006) first introduced the concepts of scalar multivariate coherent and convex risk measures; see also Rüschendorf (2013). However, traditional risk statistics fail to adequately capture regulator-based risk. Namely, regulators focus almost exclusively on investment losses rather than revenue. In particular, the axiom of translation invariance in coherent and convex risk statistics fails when one deals only with regulator-based risk. Thus, the study of regulator-based risk statistics is particularly interesting.

Evaluating the risk of a portfolio consisting of several financial positions, Jouini et al. (2004) pointed out that a set-valued risk measure is more appropriate than a scalar risk measure, especially in the case where several different kinds of currencies are involved when one is determining capital requirements for the portfolio. They
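For reference, the axioms in question (standard definitions, our addition; sign conventions vary across the literature): a coherent risk measure $\rho$ is monotone ($X\le Y\Rightarrow\rho(X)\ge\rho(Y)$) and satisfies
$$\rho(X+c)=\rho(X)-c\ \ (c\in\mathbb{R}),\qquad \rho(X+Y)\le\rho(X)+\rho(Y),\qquad \rho(\lambda X)=\lambda\,\rho(X)\ \ (\lambda\ge 0),$$
the first of which, translation invariance, is precisely the axiom that the regulator-based setting breaks.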
mixtures --- In 1977, J. W. Cahn predicted that “in any two-phase mixture of fluids near their critical point, contact angles against any third phase become zero in that one of the critical phases completely wets the third phase and excludes contact with the other critical phase” [@cahn]. This “critical point wetting” is a very general phenomenon [@cahn; @heady; @indekeu; @bonn]. We found an exception to it by studying helium mixtures in contact with a sapphire window [@exceptions]. In fact, de Gennes [@pgg] had noticed that long-range forces may prevent complete wetting. Nightingale and Indekeu [@night] further explained that if a long-range attraction is exerted by the third phase on the interface between the two critical phases, partial wetting may be observed up to the critical point. We propose that, in $^{3}$He-$^{4}$He mixtures near their tri-critical point, this attraction is provided by the confinement of the fluctuations of superfluidity, i.e. a critical Casimir effect [@pgg2; @night; @krech;
in the centres of weakly active galaxies, such as Sgr $A^{*}$.'

bibliography:
- 'Sukova.bib'
title: Shocks in the relativistic transonic accretion with low angular momentum
---

\[firstpage\]

accretion, accretion discs – hydrodynamics – shock waves – X-rays: binaries – stars: black holes – Galaxy: centre

Introduction {#s:Introduction}
============

The weakly active galaxies, where the central nucleus emits radiation at a moderate level with respect to the most luminous quasars, are frequently described by the so-called hot accretion flow model. In such flows, the plasma is virially hot and optically thin, and due to the advection of energy onto the black hole, the flow is radiatively inefficient. The prototype of an object which can be well described by such a model is the low-luminosity black hole in the centre of our Galaxy, the source Sgr $A^{*}$. Also, the black hole X-ray binaries in their hard and quiescent states can be good representatives of the hot mode of accretion.
focus of this paper is the asymptotic stability of the wave equation with so-called impedance boundary conditions (IBCs), also known as acoustic boundary conditions. Herein, the impedance operator, related to the Neumann-to-Dirichlet map, is assumed to be continuous, linear, and time-invariant, so that it reduces to a time-domain convolution. *Passive* convolution operators [@beltrami2014distributions § 3.5], the kernels of which have a positive-real Laplace transform, find applications in physics in the modeling of locally-reacting energy-absorbing materials, such as imperfect conductors in electromagnetism [@yuferev2010sibc] and liners in acoustics [@monteghetti2017tdibc]. As a result, IBCs are commonly used with Maxwell’s equations [@hiptmair2014FastQuadratureIBC], the linearized Euler equations [@monteghetti2017tdibc], or the wave equation [@sauter2017waveCQ]. Two classes of convolution operators are well known due to the ubiquity of the physical phenomena they model. Slowly decaying kernels, which yield so-called *long-memory* operators, arise from losses without propagation (due to e.g. viscosity or electrical/thermal resistance); they include fractional kernels. On the other
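Concretely, the positive-realness condition on an impedance kernel $z$ mentioned above reads (standard formulation, our addition): its Laplace transform $\hat z$ satisfies
$$\hat z\ \text{analytic on}\ \{\operatorname{Re}s>0\},\qquad \operatorname{Re}\hat z(s)\ge 0\ \ \text{for}\ \operatorname{Re}s>0,\qquad \hat z(\bar s)=\overline{\hat z(s)},$$
the last identity encoding that the kernel is real-valued.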
Applied Mathematics,*]{}\
[*University of Cape Town, 7700 Rondebosch,*]{}\
[*Republic of South Africa*]{}
title: The Geometry of classical change of signature
---

Introduction
============

Following on recent developments in quantum cosmology \[1-3\], a subject of some interest is the possibility of a change of signature in a classical space-time \[4-12\]. We discuss here in depth the geometry associated with such a classical change of signature. The results obtained differ depending on what smoothness assumptions one makes. We look at the most general case, resulting from concentrating on the 3-dimensional surface where the change of signature occurs, rather than on either the Lorentzian (hyperbolic) or Riemannian (positive definite) enveloping space (the latter is often referred to as Euclidean; however, we prefer Riemannian, as ‘Euclidean’ suggests that the space is flat).

In our approach we emphasize the initial value problem associated with signature change and the dynamical content of the theory, rather
a smooth projective surface over $\mathbb{C}$ with $p_g>0$ is very large. Building on the work of Mumford, in [@R1] and [@R2] Roĭtman studies the map $$\text{Sym}^k (X)\to CH_0(X)$$ for $X$ a smooth complex projective variety. He shows that fibers of this map, which we call orbits of degree $k$ for rational equivalence[^1], are countable unions of Zariski closed subsets. Moreover, he defines birational invariants $d(X)$ and $j(X)\in \mathbb{Z}_{\geq 0}$ such that for $k\gg 0$ the minimal dimension of orbits of degree $k$ for rational equivalence is $k(\dim X-d(X))-j(X)$. Roĭtman’s generalization of Mumford’s theorem is the following statement: $$\text{If }H^0(X,\Omega^q)\neq 0 \text{ then }d(X)\geq q.$$ In particular, if $X$ has a global holomorphic top form then a very general $x_1+\ldots+x_k\in \text{Sym}^kX$ is contained in a zero-dimensional orbit.

Abelian varieties are among the simplest examples of varieties admitting a global holomorphic top form. In this article we will focus our attention on this
show that this stray field can be approximated by a single dipole and estimate the NV-to-sample distance to a few tens of nanometers, which sets the achievable resolution of our scanning probes.'

author:
- Patrick Appel
- Elke Neu
- Marc Ganzhorn
- Arne Barfuss
- Marietta Batzer
- Micha Gratz
- Andreas Tschöpe
- Patrick Maletinsky
title: Fabrication of all diamond scanning probes for nanoscale magnetometry
---

Introduction \[sec:Int\]
========================

The negatively charged nitrogen vacancy center (NV center) in diamond forms a highly promising sensor: On the one hand, its unique combination of long spin coherence times and efficient optical spin readout enables the detection of magnetic [@Maze2008] and electric fields [@Dolde2011] as well as local temperature.[@Toyli2013a; @Acosta2010b] On the other hand, the NV center is a highly photostable single-photon source and therefore an ideal emitter for scanning near-field [@Tisler2013a] and single-photon microscopy.[@Sekatskii1996] Moreover, all properties relevant for sensing are sustained from cryogenic temperatures [@Thiel2015; @Pelliccione2014] up to $550\,$K,[@Toyli2012]
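A hedged sketch of the point-dipole model invoked in the abstract (our illustration; the moment and distance below are hypothetical numbers, not the paper's calibration):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(m, r):
    """Magnetic field of a point dipole with moment m (A*m^2) at displacement r (m).

    B(r) = mu0/(4*pi) * (3(m.rhat)rhat - m) / |r|^3, the single-dipole
    approximation used to model a localized stray-field source.
    """
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3

# e.g. field 50 nm above a dipole of 1e-17 A*m^2 (hypothetical values):
print(dipole_field([0, 0, 1e-17], [0, 0, 50e-9]))   # ~16 mT along z
```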
electric-magnetic duality” given by one of us (MH) at the 8th Workshop “Quantum Field Theory and Hamiltonian Systems” (QFTHS), 19-22 September 2012, Craiova, Romania. To appear in the Proceedings of the Conference (special issue of the Romanian Journal of Physics).

Introduction
============

Understanding gravitational duality is one of the important challenges for exhibiting the conjectured infinite-dimensional Kac-Moody algebras (or generalizations thereof) of hidden symmetries of supergravities and M-theory [@Julia:1982gx; @West:2001as; @Damour:2002cu; @Henneaux:2010ys]. Independently of the problem of uncovering these conjectured hidden symmetries, gravitational duality is important in itself, as it illuminates the structure of Einstein gravity. In [@Henneaux:2004jw], two of the present authors presented a formulation of linearized gravity in four space-time dimensions that was manifestly invariant under “duality rotations” in the internal space spanned by the graviton and its dual. This was followed by further developments covering higher spins [@Deser:2004xt], the inclusion of a cosmological constant [@Julia:2005ze] and supersymmetry [@Bunster:2012jp]. One crucial aspect of
exploited for control of the Néel vectors in this family of antiferromagnets.'

author:
- In Jun Park
- Taehwan Lee
- Protik Das
- Bishwajit Debnath
- 'Greg P. Carman'
- 'Roger K. Lake'
title: 'Strain control of the Néel vector in Mn-based antiferromagnets'
---

There has been rapidly increasing interest in the use of antiferromagnetic (AFM) materials as active device elements [@2018_Tserkovnyak_RMP; @2017_AFM_spintronics_Jungwirth_PSSR; @AFM_spintronics_Jungwirth_NNano16]. AFMs are insensitive to parasitic electromagnetic and magnetic interference. The dipolar coupling is minimal, since there is no net magnetic moment. Their lack of macroscopic magnetic fields allows AFM devices and interconnects to be highly scaled, with reduced cross talk and insensitivity to geometrical anisotropy effects. AFM resonant frequencies and magnon velocities are several orders of magnitude higher than those in ferromagnetic materials, and these velocities correlate with similarly higher switching speeds [@gomonay2014spintronics; @AFM_spintronics_Jungwirth_NNano16; @KWang_ULowSwitchingAFM_APL16]. AFM metals and insulators are plentiful, and many have Néel temperatures well above room temperature, a requirement
most mathematicians and physicists are concerned. After several centuries of meticulous study by people like Gauss, Faraday, and Maxwell (to name a few), one wonders if it is still possible to find surprising or new results in the field. Throughout this text, we address several seemingly classical electrostatics problems that have not been fully addressed in the literature, to the best of our knowledge. Let us begin by establishing some notations. For vectors $x,y \in \mathbb{R}^d, \, d \geq 3$, we define the function $K_{d}(x,y)$ by the formula $$\label{Riesz} K_{d}(x, y) = \frac{1}{(2-d) \omega_{d-1}}\frac{1}{|x-y|^{d-2}}.$$ Here, $\omega_{d-1}$ is the surface area of the unit sphere in ${\mathbb{R}}^d$. $K_{d}$ is the fundamental solution for the Laplace operator in $\mathbb{R}^d$ (i.e., $\Delta_y K_{d}(x, y) = \delta_x$). Furthermore, given a locally finite (signed) Borel measure $\mu$ with support $\Sigma$, we define the *Newtonian (or Coulomb) potential* of $\mu$ with
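The excerpt breaks off mid-definition; the formula it is presumably introducing (our reconstruction of the standard definition, hedged) is
$$U^{\mu}(x)\;=\;\int_{\Sigma}K_d(x,y)\,d\mu(y).$$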
both in accuracy and execution time, of the proposed algorithm and other state-of-the-art solutions.'

author:
- 'Ricardo Augusto Borsoi, , Tales Imbiriba, , José Carlos Moreira Bermudez,  [^1] [^2] [^3][^4] [^5]'
bibliography:
- 'references.bib'
- 'references\_revpaper.bib'
title: A Data Dependent Multiscale Model for Hyperspectral Unmixing With Spectral Variability
---

Hyperspectral data, spectral variability, spatial regularization, multiscale, superpixels.

Introduction
============

Hyperspectral devices acquire hundreds of contiguous reflectance samples from the observed electromagnetic spectra. This observed reflectance is often mixed at the pixel level and requires unmixing strategies to correctly unveil important information about the materials and their proportions in a target scene [@Keshava:2002p5667]. Hyperspectral unmixing (HU) aims at decomposing the observed reflectance into pure spectral components, *i.e.*, *endmembers*, and their proportions [@Keshava:2002p5667], commonly referred to as fractional *abundances*. Different models and strategies have been proposed to solve this problem [@Bioucas-Dias-2013-ID307; @Dobigeon-2014-ID322; @Zare-2014-ID324]. The vast majority of methods consider the Linear Mixing Model (LMM) [@Keshava:2002p5667], which assumes that the observed reflectance vector (*i.e.*, a hyperspectral image pixel) can be modeled as a convex combination of
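Written out in the standard LMM notation (our addition): each observed pixel $\mathbf{y}\in\mathbb{R}^{L}$ is a convex combination of $R$ endmember signatures, the columns of $\mathbf{M}\in\mathbb{R}^{L\times R}$, weighted by the abundance vector $\mathbf{a}$, plus noise,
$$\mathbf{y}=\mathbf{M}\mathbf{a}+\mathbf{n},\qquad a_r\ge 0,\quad \sum_{r=1}^{R}a_r=1.$$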
high-frequency components of data controlled by the kernel phase. With the phase-induced Gabor kernel, the proposed Gabor-Nets gain the ability to automatically adapt to the local harmonic characteristics of HSI data and thus yield more representative harmonic features. Also, this kernel can fulfill traditional complex-valued Gabor filtering in a real-valued manner, hence allowing Gabor-Nets to run easily within a standard CNN framework. We evaluated our newly developed Gabor-Nets on three well-known HSIs, and the results suggest that Gabor-Nets can significantly improve the performance of CNNs, particularly with a small training set.'

author:
- 'Chenying Liu, Jun Li, Lin He, Antonio Plaza, Shutao Li, and Bo Li [^1]'
bibliography:
- 'References.bib'
title: Naive Gabor Networks for Hyperspectral Image Classification
---

Hyperspectral images (HSIs), convolutional neural networks (CNNs), naive Gabor networks (Gabor-Nets).

Introduction {#sec:intro}
============

Over the past two decades, hyperspectral imaging has witnessed a surge of interest for Earth Observations due to its capability to detect subtle spectral information using hundreds of continuous and
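A minimal real-valued Gabor kernel sketch (the textbook form, our illustration; the paper's phase-induced parameterization may differ): a Gaussian envelope multiplying an oriented cosine carrier whose phase sets the even/odd symmetry:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta, phase):
    """Real-valued 2-D Gabor kernel: Gaussian envelope times a cosine carrier.

    The phase argument shifts the carrier between even and odd symmetry,
    the quantity that 'phase-induced' kernels expose as a parameter.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated carrier axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # isotropic Gaussian window
    carrier = np.cos(2 * np.pi * freq * xr + phase)     # oriented harmonic
    return envelope * carrier

k = gabor_kernel(size=7, sigma=2.0, freq=0.25, theta=np.pi / 4, phase=0.0)
print(k.shape)  # (7, 7), usable as a fixed or learnable conv filter
```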
On the other hand, a deci-Hertz observatory, like DECIGO or TianQin, would significantly enhance the chances of a joint detection and shed light on the formation channels of these binaries.'

author:
- Xian Chen
- 'Pau Amaro-Seoane'
title: |
  Revealing the formation of stellar-mass black hole binaries:\
  The need for deci-Hertz gravitational wave observatories
---

[*Introduction.*]{}–The first LIGO events, GW150914 and GW151226 [@ligo16a; @ligo16b], are consistent with mergers of general-relativistic black holes (BHs). Data analysis reveals that the orbits started at a semi-major axis of $a\sim10$ Schwarzschild radii ($R_S$) with an eccentricity of $e<0.1$. The BH masses are about $M_1\simeq36$ and $M_2\simeq29~M_\odot$ for GW150914, and $M_1\simeq14$ and $M_2\simeq7.5~M_\odot$ for GW151226. The detections can be used to infer new, more realistic event rates, of about $9-240~{\rm Gpc^{-3}~yr^{-1}}$ [@ligo16rate]. This rate agrees with two formation channels: (i) evolution of a binary of two stars in the field of the host galaxy,
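A hedged, Newtonian back-of-envelope sketch (our illustration) of why the same binaries radiate in the deci-Hertz band at wider separations: the quadrupole gravitational-wave frequency is twice the Keplerian orbital frequency, and falls as $a^{-3/2}$:

```python
import numpy as np

G = 6.674e-11          # [m^3 kg^-1 s^-2]
C = 2.998e8            # [m/s]
M_SUN = 1.989e30       # [kg]

def gw_frequency(m1_msun, m2_msun, a_schwarzschild_radii):
    """Quadrupole GW frequency (2 x Keplerian orbital frequency) of a
    circular binary, with the semi-major axis given in units of the
    total mass's Schwarzschild radius. Newtonian estimate only.
    """
    m = (m1_msun + m2_msun) * M_SUN
    r_s = 2 * G * m / C**2                       # Schwarzschild radius of total mass
    a = a_schwarzschild_radii * r_s
    f_orb = np.sqrt(G * m / a**3) / (2 * np.pi)  # Kepler's third law
    return 2 * f_orb

# GW150914-like binary near the quoted a ~ 10 R_S ...
print(gw_frequency(36, 29, 10))     # ~10 Hz: entering the LIGO band
# ... and at wider separation, where the signal sits in the deci-Hz band:
print(gw_frequency(36, 29, 200))    # ~0.1 Hz
```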
Felzenszwalb[^1]\
Brown University\
Providence, RI, USA\
[pff@brown.edu]{}
- |
  Benar F. Svaiter[^2]\
  IMPA\
  Rio de Janeiro, RJ, Brazil\
  [benar@impa.br]{}
bibliography:
- 'prop.bib'
title: Diffusion Methods for Classification with Pairwise Relationships
---

Introduction
============

In many classification problems there are relationships among a set of items to be classified. For example, in image reconstruction problems adjacent pixels are likely to belong to the same object or image segment. This leads to relationships between the labels of different pixels in an image. Energy minimization methods based on Markov random fields (MRF) address these problems in a common framework [@Besag74; @WJ08; @KF09]. Within this framework we introduce two new algorithms for classification with pairwise information. These algorithms are based on contraction maps and are related to non-linear diffusion and random walks on graphs. The setting under consideration is as follows. Let
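As a hedged illustration of this family of methods (a generic label-diffusion baseline, not the paper's algorithm): with `alpha < 1` and a row-stochastic walk matrix, each update is a contraction, so the iteration converges to a unique fixed point regardless of initialization:

```python
import numpy as np

def diffuse_labels(W, seed_probs, alpha=0.8, iters=100):
    """Generic graph-diffusion baseline for classification with pairwise ties.

    W          : (n, n) nonnegative affinity matrix over the items
    seed_probs : (n, k) initial per-item label distributions (the evidence)
    alpha      : diffusion weight; alpha < 1 makes the update a contraction
    """
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic random-walk matrix
    F = seed_probs.copy()
    for _ in range(iters):
        # average neighbours' labels, pulled back toward the seed evidence
        F = alpha * (P @ F) + (1 - alpha) * seed_probs
    return F / F.sum(axis=1, keepdims=True)

# tiny 3-node chain with two classes; the middle node is unlabeled
W = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1e-9]])
seeds = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(diffuse_labels(W, seeds).round(2))
```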
the overall ET kinetics and lead to adiabatic dynamics.[@Hynes] An excess electron appearing in the medium introduces local fluctuations of polarization, which in turn contribute to the change of Gibbs energy. Equilibration of those fluctuations leads to a new state with a localized position of the charge. In chemical reactions, the electron may change its location, passing from a donor to an acceptor molecule, giving rise to the same scenario of Gibbs energy changes that allows one to discriminate between the (equilibrium) states “before” and “after” the transfer (see Fig. \[et\_co\]). The free energy surfaces for “reactants” and “products” are usually multidimensional functions which intersect at the transition point. The deviation from it, or the Gibbs energy change, can be calculated from the reversible work done along the path that forms that state, so that by use of a simple thermodynamic argument, one is able to associate a change in the
the overall ET kinetics and lead to an adiabatic dynamics.[@Hynes] An excess electron appearing in the medium introduces local fluctuations of polarization that in turn contribute to the change of Gibbs energy. Equilibration of those fluctuations leads to a new state with a localized charge. In chemical reactions, the electron may change its location, passing from a donor to an acceptor molecule, giving rise to the same scenario of Gibbs energy changes, which makes it possible to discriminate between the (equilibrium) states “before” and “after” the transfer (see Fig. \[et\_co\]). The free energy surfaces for “reactants” and “products” are usually multidimensional functions which intersect at the transition point. The deviation from it, or the Gibbs energy change, can be calculated from the reversible work done along the path that forms that state, so that, by use of a simple thermodynamic argument, one is able to associate a change in the
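The intersecting-surface picture invoked here is the starting point of Marcus theory; as a reminder (our addition, not part of the original passage), for two parabolic free-energy surfaces of equal curvature, reorganization energy $\lambda$, and driving force $\Delta G^{0}$, the activation barrier at the crossing point is $$\Delta G^{\ddagger} = \frac{\left(\lambda + \Delta G^{0}\right)^{2}}{4\lambda},$$ which enters the nonadiabatic rate through $k_{\rm ET} \propto \exp\!\left(-\Delta G^{\ddagger}/k_{B}T\right)$.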
simply explode with a mini bang of a sort.' author: - 'R. K. Thakur' date: 'Received: date / Accepted: date' title: 'Can a Black Hole Collapse to a Space-time Singularity? ' --- Introduction ============ When all the thermonuclear sources of energy of a star are exhausted, the core of the star begins to contract gravitationally because, practically, there is no radiation pressure to arrest the contraction, the pressure of matter being inadequate for this purpose. If the mass of the core is less than the Chandrasekhar limit ($\sim 1.44 \msol$), the contraction stops once the density of matter in the core exceeds $\rho \sim 2 \times 10^{6} \gcmcui$; at this stage the pressure of the relativistically degenerate electron gas in the core is enough to withstand the force of gravitation. When this happens, the
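For orientation, the origin of the quoted limit can be recalled in one formula (our addition, not part of the original text): balancing gravity against relativistic electron degeneracy pressure gives $$M_{\rm Ch} = \frac{\omega_3^0\sqrt{3\pi}}{2}\left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_{\rm H})^2} \approx 1.44\,M_\odot \quad\text{for } \mu_e = 2,$$ where $\omega_3^0 \approx 2.018$ is the Lane-Emden constant for polytropic index $n=3$ and $\mu_e$ is the mean molecular weight per electron.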
Heavy Ion Accelerator, Lanzhou 730000, China' - '${}^{4}$The Institute of Physical and Chemical Research (RIKEN), Hirosawa 2-1, Wako-shi, Saitama 351-0198, Japan' author: - 'J. Meng$^{1-3}$[^1], S.-G. Zhou$^{1-3}$ and I. Tanihata$^{4}$' --- Recent progress in accelerator and detection techniques around the world has made it possible to produce and study nuclei far away from the stability line — so-called “exotic nuclei”. Based on measurements of interaction cross sections with radioactive beams at relativistic energies, novel and entirely unexpected features have appeared, e.g., the neutron halo and skin, seen as a rapid increase in the measured interaction cross sections of neutron-rich light nuclei [@THH.85b; @HJJ.95]. Systematic investigation of interaction cross sections for an isotope chain or an isotone chain can provide a good opportunity to study the density distributions over a wide range of isospin [@Suz.95; @MTY.97]. However, the contributions from protons and neutrons are coupled in the measurement of the interaction cross section. To
in the insulating barrier of the tunnel junction as well as in the dielectric material used to fabricate the circuit. It is believed that these TLS fluctuators lead to low-frequency $1/f$ charge noise $S_Q(f)$ [@Martinis1992; @Mooij1995; @Zorin1996; @Kenyon2000; @Astafiev2006]. However, at high frequencies, one experiment finds that the charge noise increases linearly with frequency [@Astafiev2004]. This has prompted some theorists to use a TLS density of states linear in energy [@Shnirman2005], which is contrary to the constant density of states that has been so successful in explaining the low-temperature properties of glasses, such as the specific heat that is linear in temperature [@Phillips]. A linear distribution has been proposed in conjunction with Cooper pairs tunneling into pairs of electron traps [@Faoro2005], and with electron hopping between Kondo-like traps to account for the charge noise [@Faoro2006]. However, these previous theoretical efforts have neglected the important issue of the
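To make the link between the TLS density of states $n(E)$ and the noise spectrum explicit, here is a schematic version of the standard tunneling-model argument (our addition, not a quotation of the works cited above). At low frequencies each fluctuator contributes a Lorentzian, and a broad, roughly log-uniform distribution of relaxation rates $\gamma$ integrates to a $1/f$ spectrum weighted by the thermally active TLS: $$S_Q(f) \propto \int dE\, n(E)\,{\rm sech}^2\!\left(\frac{E}{2k_BT}\right) \int \frac{d\gamma}{\gamma}\,\frac{\gamma}{\gamma^2 + (2\pi f)^2} \sim \frac{k_BT\,n(k_BT)}{f}.$$ Hence a constant $n$ yields $1/f$ noise (and the $T$-linear glassy specific heat), while at high frequencies, where the noise is dominated by absorption and emission by TLS with splitting $E = hf$, one expects $S_Q(f) \propto n(hf)$, so a density of states linear in energy reproduces the observed linear rise with $f$.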
networks can be viewed as the classical [*reversible*]{} multi-class queuing network, which has a product-form stationary distribution. In the language of [@harrison2014bandwidth], we establish that [*baseline*]{} performance for this model class is achievable. Indeed, as the key contribution of this work, we propose a method to [*emulate*]{} such a reversible queuing network while satisfying congestion control and scheduling constraints. Precisely, our policy is an emulation of Store-and-Forward (SFA) congestion control in conjunction with the Last-Come-First-Serve Preemptive-Resume (LCFS-PR) scheduling policy. address: - - author: - - bibliography: - 'datacenter.bib' title: Centralized Congestion Control and Scheduling in a Datacenter --- Introduction ============ With an increasing variety of applications and workloads being hosted in datacenters, it is highly desirable to design datacenters that provide high throughput and low latency. Current datacenter networks primarily employ the design principles of the Internet, where congestion control and packet scheduling decisions are distributed among endpoints and routers. While a distributed architecture provides scalability and fault tolerance, it
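The LCFS-PR discipline mentioned above is easy to state operationally: the newest job always preempts the one in service, and preempted work resumes once everything that arrived after it has finished. A minimal single-server sketch (our illustration; the event loop and variable names are invented for exposition):

```python
def lcfs_pr(jobs):
    """Simulate one LCFS-PR server.

    jobs: list of (arrival_time, service_requirement), sorted by arrival.
    Returns the completion time of each job, in input order.
    """
    stack = []          # preemption stack: (job_id, remaining_work)
    done = {}
    t, i, n = 0.0, 0, len(jobs)
    while len(done) < n:
        next_arrival = jobs[i][0] if i < n else float("inf")
        if not stack:                     # server idle: jump to next arrival
            t = next_arrival
            stack.append((i, jobs[i][1]))
            i += 1
            continue
        job_id, rem = stack[-1]
        if t + rem <= next_arrival:       # job on top of the stack finishes
            t += rem
            stack.pop()
            done[job_id] = t
        else:                             # new arrival preempts the top job
            stack[-1] = (job_id, rem - (next_arrival - t))
            t = next_arrival
            stack.append((i, jobs[i][1]))
            i += 1
    return [done[k] for k in range(n)]

print(lcfs_pr([(0.0, 3.0), (1.0, 1.0)]))  # job 1 preempts job 0 -> [4.0, 2.0]
```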
classification processes such as neural networks have seen broad adoption in the areas of art history and *Heritage Informatics* [@lang_AttestingSimilaritySupportingOrganizationStudy_2018]. These methods face several challenges, including the handling of comparatively small amounts of data as well as high-dimensional data in the Digital Humanities. In most cases, these methods map the classification task to a flat target space. This “flat” surface loses several relevant dimensions in the search for ontological uniqueness, including taxonomical, mereological, and associative relationships between classes, as well as the non-formal context. The solution proposed by @donig_VomBildTextUndWieder_2019a to expand the capabilities of visual classifiers is to take advantage of the greater expressiveness of text-based models. Here, a *Convolutional Neural Network* (CNN) is used whose output is not, as usual, a series of flat text labels, but a series of semantically loaded vectors. These vectors result from a *Distributional Semantic Model* (DSM) [@lenci_DistributionalModelsWordMeaning_2018a], which is generated from an in-domain text corpus. Here, we
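Concretely, such a model replaces the usual softmax classification head with a regression onto embedding vectors. A schematic PyTorch sketch (our own minimal illustration; the backbone, dimensions, and loss choice are placeholders, not the architecture of the cited work):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingCNN(nn.Module):
    """CNN that outputs a DSM-style semantic vector instead of class logits."""
    def __init__(self, emb_dim=300):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, emb_dim)   # regress into embedding space

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = EmbeddingCNN()
images = torch.randn(8, 3, 64, 64)   # dummy batch
targets = torch.randn(8, 300)        # DSM vectors of the gold labels
pred = model(images)
# Train by pulling predictions toward the label embeddings (cosine loss).
loss = 1.0 - F.cosine_similarity(pred, targets).mean()
loss.backward()
```

At inference time, the predicted vector is matched against the DSM vocabulary (e.g. by nearest neighbor under cosine similarity), which is what lets the classifier exploit taxonomic and associative structure in the embedding space.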
optimality properties and yields a mean waiting time that *vanishes* as $N$ grows large for any fixed subcritical load. However, a nominal implementation of the JSQ policy involves a prohibitive communication burden in large-scale deployments. In contrast, a simple random assignment policy ($d = 1$) does not entail any communication overhead, but the mean waiting time remains constant as $N$ grows large for any fixed positive load. In order to examine the fundamental trade-off between delay performance and implementation overhead, we consider an asymptotic regime where the diversity parameter $d(N)$ depends on $N$. We investigate what growth rate of $d(N)$ is required to match the optimal performance of the JSQ policy on fluid and diffusion scale, and achieve a vanishing waiting time in the limit. The results demonstrate that the asymptotics for the JSQ($d(N)$) policy are insensitive to the exact growth rate of $d(N)$, as long as the
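As a reference point for the policy under discussion, a JSQ($d$) dispatcher is a few lines of code: on each arrival, sample $d$ servers uniformly at random and join the one with the shortest queue, so $d=1$ recovers random assignment and $d=N$ recovers full JSQ. A minimal sketch (our illustration):

```python
import random

def jsq_d_assign(queue_lengths, d):
    """Pick a server for one arriving task under the JSQ(d) policy.

    queue_lengths: current queue length of each of the N servers.
    d: number of servers to probe (1 = random assignment, N = full JSQ).
    """
    candidates = random.sample(range(len(queue_lengths)), d)
    return min(candidates, key=lambda i: queue_lengths[i])

# Example: 10 servers, probe d = 2 of them for each of 5 arrivals.
queues = [0] * 10
for _ in range(5):
    s = jsq_d_assign(queues, d=2)
    queues[s] += 1
```

The communication overhead per arrival is exactly $d$ probes, which is what makes the growth rate of $d(N)$ the natural currency for the trade-off studied here.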
it is not possible to renormalize light-front Hamiltonians in any useful manner without developing a renormalization procedure that can produce these non-canonical counterterms. The line of investigation I discuss has been developed by a small group of theorists who are working or have worked at Ohio State University and Warsaw University. Ken Wilson provided the initial impetus for this work, and at a very early stage outlined much of the basic strategy we employ. I make no attempt to provide enough details to allow the reader to start doing light-front calculations. The introductory article by Harindranath is helpful in this regard. An earlier version of these lectures also provides many more details. ### A Constituent Approximation Depends on Tailored Renormalization If it is possible to derive a constituent approximation from QCD, we can formulate the hadronic bound state problem as a set of coupled few-body problems. We obtain the states and eigenvalues by solving $$H_\Lambda
(NOAO), Jean-Paul Kneib (EPFL, Switzerland), Ofer Lahav (UCL, UK), Dustin Lang (Perimeter Institute, Canada), Alexie Leauthaud (UC Santa Cruz), Betta Lusso (Durham University, UK), Axel de la Macorra (UNAM, Mexico), Marc Manera (IFAE, Spain), Paul Martini (Ohio State University), Shude Mao (Tsinghua University, China), Jeffrey A. Newman (University of Pittsburgh), Nathalie Palanque-Delabrouille (CEA, France), Will J. Percival (University of Waterloo, Canada), Carlos Allende Prieto (IAC, Spain), Constance M. Rockosi (UC Santa Cruz), Vanina Ruhlmann-Kleider (CEA, France), David Schlegel (LBNL), Hee-Jong Seo (Ohio University), Yong-Seon Song (KASI, South Korea), Greg Tarlé (University of Michigan), Risa Wechsler (Stanford University), David Weinberg (Ohio State University), Christophe Yèche (CEA, France), Ying Zu (Shanghai Jiao Tong University, China)\ **Abstract:** We present the status of the Dark Energy Spectroscopic Instrument (DESI) and its plans and opportunities for the coming decade. DESI construction and its initial five years of operations are an approved experiment of the U.S. Department of
respect to the scenario one is demonstrated via simulations, where the objective is the control of a fixed-wing UAV performing a monitoring mission over a sloped vineyard.' author: - 'Martina Mammarella$^1$, Teodoro Alamo$^2$, Fabrizio Dabbene$^{1}$, and Matthias Lorenzen$^{3}$ [^1] [^2] [^3] [^4]' bibliography: - 'main.bib' title: ' **Computationally efficient stochastic MPC: a probabilistic scaling approach**' --- Introduction {#sec:intro} ============ In recent years, the performance degradation of model predictive control (MPC) schemes in the presence of uncertainty has driven interest towards stochastic MPC (SMPC), to overcome the inherent conservativeness of robust approaches. A probabilistic description of the disturbance or uncertainty allows one to optimize the average performance or appropriate risk measures. Furthermore, allowing a (small) probability of constraint violation, by introducing so-called chance constraints, seems more appropriate in some applications. As highlighted in [@farina2016stochastic], current SMPC methods can be divided into two main groups, depending on the approach followed to solve the chance-constrained optimization problem: (i) analytic approximation methods; and (ii)
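For concreteness, a generic form of the chance-constrained optimal control problem behind SMPC (our addition for the reader, not a quotation of the cited works) reads $$\min_{u_0,\dots,u_{N-1}}\; \mathbb{E}\!\left[\sum_{k=0}^{N-1} \ell(x_k,u_k) + V_f(x_N)\right] \quad \text{s.t.} \quad x_{k+1} = f(x_k,u_k,w_k), \qquad \Pr\!\left[x_k \in \mathcal{X}\right] \ge 1-\varepsilon,$$ with $w_k$ a random disturbance and $\varepsilon$ the admissible violation level; the two method families differ in whether the probabilistic constraint is replaced by an analytic surrogate or enforced on sampled scenarios.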
in and around a spectral line poses strict limitations on the instrumentation. Night-time telescopes simply increase their aperture in order to collect more photons, with the currently largest aperture of 10.4 m at the Gran Telescopio Canarias (Grantecan) on La Palma, Spain. This is also desirable for solar telescopes, but the technical requirements are more complicated owing to the heat management and the required corrections for seeing variations by adaptive optics (AO). The construction of the world’s largest solar telescope, the 4 m DKIST, led by the National Solar Observatory, is currently underway with planned first light in 2019. Compared to the current largest solar telescope, the 1.6 m New Solar Telescope (NST) at the Big Bear Solar Observatory, the photon-collecting area will increase by a factor of more than six ($(4\,\mathrm{m}/1.6\,\mathrm{m})^2 = 6.25$). A selection of effective aperture sizes of solar telescopes is shown in Fig. \[telsize\]. Only recently, with the exception of
black hole in AdS$_4 \times S^7$ is shown to agree with the topologically twisted index of the 3d ABJM model [@Aharony:2008ug] with $k=1$. More recently, much work has been done regarding the entropy of black holes in AdS$_5$ [@Cabo-Bizet:2018ehj; @Choi:2018hmj; @Choi:2018vbz; @Benini:2018ywd; @Honda:2019cio; @ArabiArdehali:2019tdm; @Kim:2019yrz; @Cabo-Bizet:2019osg] using the refined 4d superconformal index. In this paper, our goal is to understand the microscopic origin of a magnetically charged black hole [@Nieder:2000kc] in AdS$_5$ using the twisted index of 4d $\mathcal{N}=2$ SCFTs on $S^1 \times M_3$, where $M_3$ is a closed hyperbolic 3-manifold. The entropy of the magnetically charged black holes of our interest is not easy to analyze quantitatively via localization techniques, as we consider the 4d SCFT on closed hyperbolic 3-manifolds. To circumvent this technical difficulty, we suggest an alternative way of computing the twisted index of a certain class of 4d $\mathcal{N}=2$ SCFTs on $S^1 \times M_3$. We start from the 6d $(2,0)$ theory on $S^1 \times M_3 \times
voids. The terminal void density is also reduced and the incubation period and terminal swelling rate can be greatly altered by cavity coalescence. Temperature-dependent trapping of voids/bubbles by precipitates and alterations in void surface diffusion from adsorbed impurities and internal gas pressure may give rise to intermediate swelling behavior through their effects on cavity mobility and coalescence. Introduction ============ Irradiation of metals has long been known to culminate in macroscopic property changes including void swelling [@CAWTHORNE:1967]. Characteristic stable voids and steady volumetric swelling develop for a range of temperatures and fluxes, independent of whether radiation bombardment damage occurs as disseminated Frenkel pairs or as small defect clusters. This can occur whether or not helium is generated along with atomic displacements. In either case, small, unstable voids, loops, and other defect clusters will develop almost immediately within the irradiated material. Their subsequent evolution determines the fluence required to create stable voids and achieve steady
Universe --- Over the past two decades, observations have lent powerful support to a very simple model of the early universe: a flat, radiation-dominated Friedmann-Lemaître-Robertson-Walker (FLRW) background cosmology, with a spectrum of small-amplitude, growing perturbations. In this Letter we study the evolution of these perturbations on very small scales and at very early times. The simplest and most natural possibility is that their spectrum was almost scale-invariant, with the rms fractional density perturbation $\epsilon \sim10^{-4}$ on all scales. However, more complicated spectra are also interesting to consider. For example, LIGO’s recent detection of $\sim 30 M_{\odot}$ black holes [@LIGO] motivated some to propose a bump in the primordial spectrum with $\epsilon \sim 10^{-1}$ on the relevant comoving scale. High peaks on this scale would have collapsed shortly after crossing the Hubble horizon, at $t\sim 10^{-4}$ seconds, to form $30 M_{\odot}$ black holes in sufficient abundance to constitute the cosmological dark matter today [@Bird]. Here we
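The quoted collapse time can be checked with a one-line estimate (ours, for orientation): in the radiation era the mass within the Hubble horizon is roughly $$M_H \sim \frac{c^{3} t}{G} \approx \frac{(3\times10^{8}\,\mathrm{m\,s^{-1}})^{3}\,(10^{-4}\,\mathrm{s})}{6.7\times10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}}} \approx 4\times10^{31}\,\mathrm{kg} \approx 20\,M_\odot,$$ so perturbations collapsing as they cross the horizon at $t \sim 10^{-4}$ s indeed form black holes of a few tens of solar masses.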
based on the kind of structures used in such networks. In this paper, we discuss some of the representational expressiveness tradeoffs that are made, often implicitly. In particular, we focus on the loss of the ability to encode partial knowledge and explore two different paths to regain this ability. Logic Based Representations {#logic-based-representations .unnumbered} --------------------------- Beginning with McCarthy’s Advice Taker [@advicetaker], logic has been the formal foundation for a wide range of knowledge representations. These have ranged across the spectrum from formal models to the more experimental languages created as part of working systems. The more formal work started with McCarthy and Hayes’s Situation Calculus [@sitcalc] and the various flavors of non-monotonic logics [@reiter], [@circumscription], and included proposed axiomatizations such as those suggested in the ‘Naive physics manifesto’ [@naivephysics] and Allen’s temporal representation [@allen]. There were a number of less formal approaches that also had their roots in logic. Notable amongst these include Minsky’s
show that a certain phenomenon occurs very rarely or occurs rather frequently. This is particularly the case in analysis and probability theory. Hence we have the notion of a measure zero set and the notion of almost everywhere or almost sure. Each locally compact group admits a translation-invariant regular Borel measure which is finite on compact sets and positive on open sets. Such a measure is unique, up to multiplication by a constant, and is called Haar measure. More precisely, if $\mu$ and $\nu$ are two such measures and $\nu$ is not identically zero, then $\mu = c \nu$ for some $c \ge 0$. Of course, in $\R ^n$ this is the Lebesgue measure. If one has no prior bias towards any particular set of points, one uses this measure. However, Haar measures only exist on locally compact groups. Therefore, in many important spaces such as $C([0,1])$, the space of
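Two standard examples (added here as a reminder; they are not in the original passage) make the definition concrete: $$\text{on } (\mathbb{R},+):\ \ \mu(A) = \int_A dx, \qquad \text{on } (\mathbb{R}_{>0},\times):\ \ \mu(A) = \int_A \frac{dx}{x},$$ the first being invariant under translations $x \mapsto x + a$ and the second under dilations $x \mapsto cx$, each unique up to a positive constant.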
can support a terrestrial planet. These analyses clarify the stability boundaries in exoplanetary systems and demonstrate that, for most exoplanetary systems, numerical simulations of the stability of potentially habitable planets are only necessary over a narrow region of parameter space. Finally, we also identify and provide a catalog of known systems which can host terrestrial planets in their habitable zones.' author: - 'Ravi kumar Kopparapu, Rory Barnes' title: Stability analysis of single planet systems and their habitable zones --- Introduction {#sec1} ============ The dynamical stability of extra-solar planetary systems can constrain planet formation models, reveal commonalities among planetary systems, and may even be used to infer the existence of unseen companions. Many authors have studied the dynamical stability of our solar system and extra-solar planetary systems [see @Wisdom1982; @Laskar1989; @RasioFord1996; @Chambers1996; @LaughlinChambers2001; @Gozdziewski2001; @Ji2002; @BQ04; @Ford2005; @Jones2006; @raymond09 for example]. These investigations have revealed that planetary systems are close to dynamical instability, illuminated the boundaries between stable
removing degeneracies in the parameter space and reducing the errors on the parameter estimates. The work we present here is an extension of our earlier work, where we demonstrated the advantage of combining measurements of ground- and space-based GW interferometers in estimating parameters of a compact binary coalescence [@synergy1]. Coalescing compact binaries composed of neutron star (NS)-NS, NS-black hole (BH), or BH-BH pairs produce GW signals during their inspiral, merger and ringdown phases. The merger and ringdown phases of at least five such events have been recorded by the LIGO-Virgo GW detector network so far [@gw_det]. These ground-based detectors are sensitive in the frequency range from a few tens of Hz to a few thousand Hz. To date, BH-BH binary mergers with total masses ranging from $\sim 20~M_{\odot}$ to $\sim 70~ M_{\odot}$ have been observed. There are already plans for third-generation detectors
of observations or zero correlation. The exact form of this distribution has applications in multiple fields, and in particular provides a way to derive the exact distribution of the Sharpe ratio under normal AR(1) assumptions.' author: - 'Eric Benhamou [^1] ^,^ [^2] ^,^ [^3]' bibliography: - 'mybib.bib' title: 'T-statistic for Autoregressive process' --- *AMS 1991 subject classification:* 62E10, 62E15 *Keywords*: t-Student, Auto regressive process, Toeplitz matrix, circulant matrix, non centered Student distribution Introduction ============ Let $X_1, \ldots, X_n$ be a random sample from a cumulative distribution function (cdf) $F(\cdot)$ with a constant mean $\mu$, and let us define the following statistic, referred to as the t-statistic: $$\label{tstatistic} T_n = T(X_n) = \frac{\sqrt{n} ( \bar X_n - \mu ) }{s_n}$$ where $\bar X_n$ is the empirical mean, $s_n^2$ the Bessel-corrected empirical variance, and $X_n$ the full history of the random sample, defined by: $$\bar{X}_n =\frac{1}{n}\sum_{i=1}^{n}X_i, \quad s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2, \quad X_n = (X_1, \ldots, X_n)^T$$ It is well
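As a quick numerical companion (our illustration, independent of the paper's derivation), one can simulate an AR(1) sample and compute $T_n$ exactly as defined above; for nonzero autocorrelation its distribution visibly departs from the classical Student-t:

```python
import numpy as np

def t_statistic(x, mu=0.0):
    """T_n = sqrt(n) (xbar - mu) / s_n with Bessel-corrected s_n."""
    n = len(x)
    return np.sqrt(n) * (x.mean() - mu) / x.std(ddof=1)

def ar1_sample(n, rho, rng):
    """Stationary AR(1): X_t = rho * X_{t-1} + eps_t, eps ~ N(0, 1)."""
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - rho**2))  # stationary start
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(0)
T = [t_statistic(ar1_sample(50, rho=0.5, rng=rng)) for _ in range(10_000)]
# Positive autocorrelation inflates the spread well beyond the iid
# Student-t value of sqrt(nu/(nu-2)) ~ 1.02 for n = 50.
print(np.std(T))
```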
is a compound state of two reggeized gluons \[1\]. Next-to-leading corrections to the BFKL equation were also calculated \[2\], which makes it possible to determine its region of applicability. In particular, the Möbius invariance of the equation, valid in LLA \[1\], turns out to be violated after taking into account next-to-leading terms. The asymptotic behaviour $\propto s^{j_{0}}$ of scattering amplitudes is governed by the $j$-plane singularities of the $t$-channel partial waves $f_{j}(t)$. The position of these singularities $\omega _{0}=j_{0}-1$ for the Feynman diagrams with $n$ reggeized gluons in the $t$-channel is proportional to eigenvalues of a Schrödinger-like equation \[3\]. For multicolour QCD, $N_{c}\rightarrow \infty$, the colour structure and the coordinate dependence of the eigenfunctions are factorized \[4\]. The wave function $f_{m,\widetilde{m}}(\overrightarrow{\rho _{1}},\overrightarrow{\rho _{2}},\ldots,\overrightarrow{\rho _{n}};\overrightarrow{\rho _{0}})$ of the colourless compound state $O_{m,\widetilde{m}}(\overrightarrow{\rho _{0}})$ depends on the two-dimensional impact parameters $\overrightarrow{\rho _{1}},\overrightarrow{\rho _{2}},\ldots,\overrightarrow{\rho _{n}}$ of the reggeized gluons. It belongs to the
doublet with the SU(2) gauge part of the electroweak theory give a nonperturbative description of the Higgs mechanism. Early studies [@Fradkin:1978dv; @Osterwalder:1977pc; @Lang:1981qg; @Seiler:1982pw; @Kuhnelt:1983mw; @Montvay:1984wy; @Jersak:1985nf; @Evertz:1985fc; @Gerdt:1984ft; @Langguth:1985dr; @Montvay:1985nk; @Langguth:1987vf; @Evertz:1989hb; @Hasenfratz:1987uc] revealed two regions in the phase diagram: the Higgs region with three massive vector bosons and a single Higgs particle, and the confinement region with QCD-like bound states of the fundamental fields. These two regions are partially separated by a first-order phase transition, but are analytically connected beyond the phase transition’s end point. Subsequent lattice studies of the SU(2)-Higgs model have explored the electroweak finite-temperature phase transition [@Jansen:1995yg; @Rummukainen:1996sx; @Laine:1998jb; @Fodor:1999at] and recent work has incorporated additional scalar doublets [@Wurtz:2009gf; @Lewis:2010ps]. In the present work, we calculate the spectrum of the standard SU(2)-Higgs model at zero temperature in the Higgs region of the phase diagram. As already mentioned, there will be a Higgs boson ($H$) and three
different from the residue characteristic. Fix a finite Galois extension $F/K$ over which $C$ is semistable [@DM]. Write $\mathcal{O}_F$ for the ring of integers of $F$, $k_F$ for the residue field of $F$, $I_F$ for the inertia group, $\cC/\mathcal{O}_F$ for the minimal regular model of $C/F$, and $\cC_{k_F}/k_F$ for its special fibre. Grothendieck defined a canonical filtration by $G_F$-stable $\Z_l$-lattices [@SGA7I §12], $$\label{eq1} 0\subset T_l(A)^t \subset T_l(A)^{I_F} \subset T_l(A);$$ $T_l(A)^t$ is sometimes referred to as the “toric part”. He showed that its graded pieces are unramified $G_F$-modules and are, canonically, $$\label{eq2} H^1(\Upsilon,\Z) \tensor_\Z\Z_l(1), \qquad T_l \Pic^0 \tilde\cC_{k_F}, \qquad H_1(\Upsilon,\Z) \tensor_\Z\Z_l,$$ where $\tilde\cC_{k_F}$ is the normalisation of $\cC_{k_F}$, $\Upsilon$ is the dual graph of $\cC_{{\bar k}_F}$ (a vertex for each irreducible component and an edge for every ordinary double point) and $H^1, H_1$ are singular (co)homology groups. Here the middle piece may be further decomposed as[^2] $$\label{eqind} T_l \Pic^0 (\tilde
in fluid mechanics, reflected mostly the macroscopic character and insensitivity of old-style experiments. Modern experiments concluded that although the no-slip postulate is valid for molecularly smooth hydrophilic surfaces down to contact [@vinogradova:03; @charlaix.e:2005; @vinogradova.oi:2009], for many other systems it does not apply when the size of a system is reduced to micro- and nanoscales. The changes in hydrodynamic behavior are caused by the impact of interfacial phenomena, above all hydrophobicity and roughness, on the flow. The effect of hydrophobicity on the flow past smooth surfaces is reasonably clear and suggests an amount of slippage described by the condition $v_s = b \partial v / \partial z$, where $v_s$ is the slip velocity at the wall, $b$ the slip length, and the axis $z$ is normal to the surface. The assumption is justified theoretically [@vinogradova:99; @barrat:99; @andrienko.d:2003; @bib:jens-kunert-herrmann:2005] and was confirmed by surface force apparatus (SFA) [@charlaix.e:2005], atomic force microscope (AFM) [@vinogradova:03],
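To see what the slip length means geometrically, consider an elementary example we add here (it is not taken from the text above): plane shear flow over a single wall at $z=0$ with shear rate $\dot\gamma$. The Navier condition $v_s = b\,\partial v/\partial z$ then gives $$v(z) = \dot\gamma\,(z + b),$$ i.e. the linear velocity profile extrapolates to zero a distance $b$ below the physical wall, which is how $b$ is commonly visualized and extracted in drainage-type measurements.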
Subsequently, the correspondence between AdS gravity and fluid dynamics (cf. [@Rangamani:2009xk] for a review) was extended in various directions, for instance to include forcing terms coming from a dilaton [@Bhattacharyya:2008ji] or from electromagnetic fields (magnetohydrodynamics) [@Hansen:2008tq; @Caldarelli:2008ze]. The gravitational dual of non-relativistic incompressible fluid flows was obtained in [@Bhattacharyya:2008kq]. In addition to providing new insights into the dynamics of gravity, the map between hydrodynamics and AdS gravity has contributed to a better understanding of various issues in fluid dynamics. One such example is the role of quantum anomalies in hydrodynamical transport [@Son:2009tf]. Moreover, it has revealed beautiful and unexpected relationships between apparently very different areas of physics; for instance, it was argued in [@Caldarelli:2008mv] that the Rayleigh-Plateau instability in a fluid tube is the holographic dual of the Gregory-Laflamme instability of a black string[^2]. The hope is that eventually the fluid/gravity correspondence may shed light on fundamental problems in hydrodynamics, such as turbulence.
By using the Planck data, the baryon acoustic oscillation data, the JLA sample of supernovae, and the Hubble constant measurement, we get $\beta=-0.010^{+0.037}_{-0.033}$ ($1\sigma$). The fit result becomes $\beta=-0.0148^{+0.0100}_{-0.0089}$ ($1\sigma$) once we further incorporate the RSD data in the analysis; the error of $\beta$ is thus substantially reduced with the help of the RSD data. Compared with previous results, ours show that a negative $\beta$ is favored by current observations, and that a relatively larger interaction rate is permitted by current RSD data.' author: - 'Yun-He Li' - 'Jing-Fei Zhang' - 'Xin Zhang[^1]' title: 'Exploring the full parameter space for an interacting dark energy model with recent observations including redshift-space distortions: Application of the parametrized post-Friedmann approach' --- Introduction {#sec:intro} ============ Dark energy and dark matter are the dominant sources driving the evolution of the current Universe [@Ade:2013zuv]. Both have so far been detected only indirectly, via their gravitational effects. There might, however, exist a direct non-gravitational interaction between them that
bind a positron [@mitroy02b; @mitroy02f]. There have been two sets of calculations that are consistent, in that they tend to predict the same binding energy and annihilation rate. The first set comprises the calculations on $e^+$Be and $e^+$Mg [@ryzhikh98c; @ryzhikh98e; @mitroy01c] with the fixed core stochastic variational method (FCSVM) [@ryzhikh98b; @ryzhikh98e; @mitroy02b]. Some time later, configuration interaction (CI) calculations were undertaken on $e^+$Be, $e^+$Mg, $e^+$Ca and $e^+$Sr [@bromley02a; @bromley02b]. The calculations for $e^+$Be and $e^+$Mg agreed to within the respective computational uncertainties, which were roughly 5-10$\%$ for the binding energy. One feature common to all the CI calculations is the slow convergence of the binding energy and the annihilation rate. The attractive electron-positron interaction leads to the formation of a Ps cluster (i.e. something akin to a positronium atom) in the outer valence region of the atom [@ryzhikh98e; @dzuba99; @mitroy02b; @saito03a]. The accurate representation of a Ps cluster
result in perspective and to relate it to key issues in the quantum theory of gravitation, such as the Wheeler-De Witt equation. The argument applies, to the case of an extreme black hole, an approach to black hole entropy based on the dimensional continuation of the Gauss-Bonnet theorem developed in [@btz]. This approach had previously been applied to non-extreme black holes only [@ct]. To exhibit the distinction between extreme and non-extreme holes as clearly as possible, we first perform the analysis for the non-extreme case and then see how it is modified in the extreme case. We will deal with gravitation theory in a spacetime of dimension $D$ with positive definite signature (Euclidean formulation). To present the argument in what we believe is its most transparent form for the purpose at hand, we will start with the Hamiltonian action and will only at the
between the 3.3 $\mu$m PAH band and the thermal emission from the Galactic dust.' author: - 'Kohji Tsumura, Toshio Matsumoto, Shuji Matsuura, Itsuki Sakon, Masahiro Tanaka, and Takehiko Wada' title: 'Low-Resolution Spectrum of the Diffuse Galactic Light and 3.3 $\mu$m PAH emission with AKARI InfraRed Camera' --- Introduction ============ The Diffuse Galactic Light (DGL) comprises starlight scattered by dust particles in interstellar space at $<$3 $\mu$m, and emission from the dust particles themselves, with some band features, at longer wavelengths[^1]. Observational studies of the DGL are thus important for investigating the dust properties in our Galaxy, and also for deriving the extragalactic background light (EBL), since the DGL is one of the foregrounds in EBL measurements. However, isolating the DGL from other diffuse emissions, especially the zodiacal light (ZL), the strongest foreground, is very difficult because of its diffuse, extended nature. A commonly-used method to estimate DGL is the correlation with
*look* similar across modalities. We therefore hypothesize that a network could recognize those structures if the network was not specialized to the gray value distribution of the training images but was invariant to absolute and relative gray value variations. Invariance to certain aspects of the data is commonly enforced by applying random transformations to the training data, such as random rotations to enforce rotational invariance, or similar transformations [@Roth16; @Khal19]. In this paper, we present a simple method for randomly transforming the gray values of an image while retaining most of the information in the image. We demonstrate that this technique enables cross-modality learning by training a previously published method for segmentation of the vertebral bodies [@Less19a] on a set of MR images and evaluating its segmentation performance on a set of CT images. Method ====== We define a transformation function $y(x)$ that maps a gray value $x$ to a new value. This
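The excerpt breaks off before the paper's definition of $y(x)$. Purely as an illustration of the stated idea, the sketch below applies a random monotone piecewise-linear remap, an assumption of ours rather than the authors' function: it scrambles absolute and relative gray values while preserving their ordering, and hence most of the structural information.

```python
import numpy as np

# Hypothetical illustration: a random monotone piecewise-linear remap y(x).
# The paper's actual transformation function is not shown in this excerpt;
# this sketch only captures the stated goal of randomizing gray values
# while retaining most of the information in the image.
def random_gray_value_transform(img, n_knots=8, rng=None):
    rng = np.random.default_rng(rng)
    lo, hi = img.min(), img.max()
    knots_x = np.linspace(lo, hi, n_knots)             # input gray levels
    knots_y = np.sort(rng.uniform(0.0, 1.0, n_knots))  # random, but monotone
    return np.interp(img, knots_x, knots_y)            # apply y(x) pixelwise

img = np.random.rand(64, 64).astype(np.float32)        # stand-in image
aug = random_gray_value_transform(img, rng=0)          # new, random gray scale
```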
through angular or luminosity distances. As an offshoot of the geometric probes, we call $H(z)$ a “kinematic probe” because of its kinematic origin. The Hubble parameter $H(z)$ is defined as $H = \dot{a}/a$, where $a$ denotes the cosmic scale factor and $\dot{a}$ is its rate of change with respect to the cosmic time (the age of the universe when the observed photon is emitted). Moreover, the cosmic scale factor $a$ is related to the redshift $z$ by the formula $a(t)/a(t_0) = 1/(1+z)$, where $t_0$ denotes the current time, which is taken to be a constant. The observational $H(z)$ data (OHD) are directly related to the expansion history of the universe. The other class is the so-called “dynamical probes”, including weak lensing, galaxy clustering and redshift-space distortions. The dynamical probes can test the law of gravity on cosmological scales, chiefly through the evolution of the linear density perturbation $\delta(z)$, where $\delta(z)
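As a concrete illustration of how $H(z)$ encodes the expansion history, the sketch below evaluates the Friedmann relation for a flat $\Lambda$CDM model; the parameter values of $H_0$ and $\Omega_m$ are generic illustrative choices, not results of this paper.

```python
import numpy as np

# Flat LambdaCDM: H(z) = H0 * sqrt(Omega_m (1+z)^3 + 1 - Omega_m).
# H0 and Omega_m are illustrative values, not fit results of this paper.
def hubble(z, H0=70.0, Om=0.3):           # H0 in km/s/Mpc
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

z = np.array([0.0, 0.5, 1.0, 2.0])
print(hubble(z))                          # OHD points constrain this curve
```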
isotropic turbulence, Kolmogorov [@k41] considered that small-scale statistics are uniquely determined by the kinematic viscosity $\nu$ and the mean rate of energy dissipation $\langle \varepsilon \rangle$. The Kolmogorov velocity $u_{\rm K} = (\nu \langle \varepsilon \rangle)^{1/4}$ and the Kolmogorov length $\eta = (\nu ^3 / \langle \varepsilon \rangle)^{1/4}$ determine the statistics of the velocity increment $\delta u_r = u(x+r)-u(x)$ at scale $r$ as $$\frac{\langle \delta u_r^n \rangle}{u_{\rm K}^n} = F_n \left( \frac{r}{\eta} \right) \quad \mbox{for} \ \ n=2,3,4,....$$ Here $\langle \cdot \rangle$ denotes an average over position $x$, and $F_n$ is a universal function. This universality is known to hold well. While $\langle \delta u_r^n \rangle$ at each $r$ differs between velocity fields, $\langle \varepsilon \rangle$, and hence $u_{\rm K}^n$ and $\eta$, differ accordingly. That is, $\langle \varepsilon \rangle$ is in equilibrium with the mean rate of energy transfer that determines $\langle \delta u_r^n \rangle$. However, the universality of small-scale statistics
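To make the definitions concrete, the following sketch computes the Kolmogorov scales and a normalized second-order increment from a synthetic velocity record; the values of $\nu$ and $\langle\varepsilon\rangle$ and the record itself are placeholders, not data from this paper.

```python
import numpy as np

# Kolmogorov scales from nu and <eps> (placeholder, air-like values).
nu, eps = 1.5e-5, 1.0e-2                  # m^2/s and m^2/s^3
u_K = (nu * eps) ** 0.25                  # Kolmogorov velocity
eta = (nu ** 3 / eps) ** 0.25             # Kolmogorov length

# Normalized structure function <du_r^2>/u_K^2 from a synthetic record u(x).
x = np.linspace(0.0, 1.0, 2**14)
u = np.random.default_rng(0).standard_normal(x.size).cumsum() * 1e-3  # stand-in
lag = 16                                  # separation r in grid points
du = u[lag:] - u[:-lag]
print((du**2).mean() / u_K**2, (x[lag] - x[0]) / eta)  # F_2 estimate at r/eta
```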
there exists a simple necessary and sufficient condition for separability. The celebrated Peres-Horodecki criterion [@Peres; @PPT] states that a state of a bipartite system living in $\mathbb{C}^2 {{\,\otimes\,}}\mathbb{C}^2$ or $\mathbb{C}^2 {{\,\otimes\,}}\mathbb{C}^3$ is separable iff its partial transpose is positive. Unfortunately, for higher-dimensional systems there is no single universal separability condition. The most general approach to the separability problem is based on the following observation [@Horodeccy-PM]: a state $\rho$ of a bipartite system living in $\mathcal{H}_A {{\,\otimes\,}}\mathcal{H}_B$ is separable iff $\mbox{Tr}(W\rho) \geq 0$ for any Hermitian operator $W$ satisfying $\mbox{Tr}(W P_A {{\,\otimes\,}}P_B)\geq 0$, where $P_A$ and $P_B$ are projectors acting on $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. Recall that a Hermitian operator $W \in \mathcal{B}(\mathcal{H}_A {{\,\otimes\,}}\mathcal{H}_B)$ is an entanglement witness [@Horodeccy-PM; @Terhal1] iff: i) it is not positive, i.e. $W \ngeq 0$, and ii) $\mbox{Tr}(W\sigma) \geq 0$ for all separable states $\sigma$. A bipartite state $\rho$ living in $\mathcal{H}_A {{\,\otimes\,}}\mathcal{H}_B$ is entangled iff there exists an
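For the $\mathbb{C}^2 {\otimes}\, \mathbb{C}^2$ case the Peres-Horodecki criterion is easy to check numerically. The sketch below tests a Bell state, chosen here purely as an example, by looking for a negative eigenvalue of the partial transpose.

```python
import numpy as np

# Peres-Horodecki (PPT) check on a two-qubit density matrix rho:
# separable in C^2 (x) C^2  <=>  the partial transpose of rho is positive.
def partial_transpose(rho, dA=2, dB=2):
    r = rho.reshape(dA, dB, dA, dB)       # rho_{(i j),(k l)} -> r[i,j,k,l]
    # transpose the A-side indices: new[i,j,k,l] = r[k,j,i,l]
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

bell = np.zeros((4, 1)); bell[0] = bell[3] = 1 / np.sqrt(2)  # (|00>+|11>)/sqrt2
rho = bell @ bell.T                       # example: a maximally entangled state
evals = np.linalg.eigvalsh(partial_transpose(rho))
print(evals.min())                        # negative => entangled (here -0.5)
```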
drawback of the current sub-Riemannian geometry literature is that it focuses almost exclusively on the study of “smooth” systems, which is sometimes too much to ask for a mathematical subject that has close connections with the physical sciences. For instance, one place where non-differentiable objects appear in a physically motivated mathematical branch (and which is the main motivation of the authors) is the area of dynamical systems. More specifically, in (partially and uniformly) hyperbolic dynamics, bundles that are only Hölder continuous are quite abundant, and their sub-Riemannian properties (i.e. accessibility and integrability) play an important role in the description and classification of the dynamics. The aim of this paper is to give a little nudge to sub-Riemannian geometry in the direction of non-differentiable objects. To get into more technical details we need some definitions. Let $\Delta$ be a $C^r$ tangent subbundle defined on a smooth manifold $M$ and $g$ a metric on $\Delta$
for standard European and American options without the need for Rannacher start-up steps.]{}\ *Keywords.* [Heat equation; Crank-Nicolson scheme; convergence; Black-Scholes; European option; American option; asymptotics; time change.]{} address: 'Mathematical Institute, University of Oxford, 24-29 St Giles’, Oxford, OX1 3LB, U.K.' author: - 'C. Reisinger' - 'A. Whitley' title: 'The impact of a natural time change on the convergence of the Crank-Nicolson scheme' --- Introduction ============ The Crank-Nicolson scheme is a popular time stepping scheme for the numerical approximation of diffusion equations and is particularly heavily used for applications in computational finance. This is due to a combination of favourable properties: its simplicity and ease of implementation (especially in one dimension); second-order accuracy in the time step for sufficiently regular data; and unconditional stability in an $L_2$ sense. However, there are well-documented problems which arise from the fact that the scheme is at the cusp of stability, in the sense that it is *A-stable* [see @S] but does not
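As a reminder of the scheme itself, the sketch below applies Crank-Nicolson to the 1D heat equation $u_t = u_{xx}$ with homogeneous Dirichlet boundaries and a kinked, payoff-like initial condition; all grid sizes are arbitrary illustration choices. It is exactly this kind of non-smooth data for which the damping behaviour of the scheme matters.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0,1], u(0)=u(1)=0:
# (I - 0.5*L) u^{n+1} = (I + 0.5*L) u^n, with L = dt * (1D Laplacian).
J, N, T = 50, 50, 0.1
dx, dt = 1.0 / J, T / N
x = np.linspace(0.0, 1.0, J + 1)
u = np.maximum(x - 0.5, 0.0)              # kink at x = 0.5, option-payoff-like
u[0] = u[-1] = 0.0

lam = dt / dx**2
L = (np.diag(np.full(J - 2, 1.0), -1) - 2.0 * np.eye(J - 1)
     + np.diag(np.full(J - 2, 1.0), 1)) * lam   # interior Laplacian times dt
A = np.eye(J - 1) - 0.5 * L               # implicit half step
B = np.eye(J - 1) + 0.5 * L               # explicit half step
for _ in range(N):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
print(u.max())
```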
<context>[NEXA_RESTORE] 63177 Aubière CEDEX, France\ Serguei.Dachian@math.univ-bpclermont.fr title: | On Limiting Likelihood Ratio Processes\ of some Change-Point Type Statistical Models --- **Keywords**: non-regularity, change-point, limiting likelihood ratio process, Bayesian estimators, maximum likelihood estimator, limiting distribution, limiting variance, asymptotic efficiency **Mathematics Subject Classification (2000)**: 62F99, 62M99 Introduction ============ Different change-point type models encountered in statistical inference for stochastic processes give rise to different limiting likelihood ratio processes. In this paper we consider two of these processes. The first one is the random process $Z_\rho$ on ${\mathbb{R}}$ defined by $$\label{proc1} \ln Z_\rho(x)=\begin{cases} \vphantom{\Big)}\rho\,\Pi_+(x)-x, &\text{if } x{\geqslant}0,\\ \vphantom{\Big)}-\rho\,\Pi_-(-x)-x, &\text{if } x{\leqslant}0,\\ \end{cases} $$ where $\rho>0$, and $\Pi_+$ and $\Pi_-$ are two independent Poisson processes on ${\mathbb{R}}_+$ with intensities $1/(e^\rho-1)$ and $1/(1-e^{-\rho})$ respectively. We also consider the random variables $$\label{vars1} \zeta_\rho=\frac{\int_{{\mathbb{R}}}x\,Z_\rho(x)\;dx}{\int_{{\mathbb{R}}}\,Z_\rho(x)\;dx} \quad\text{and}\quad\xi_\rho={\mathop{\rm argsup}\limits}_{x\in{\mathbb{R}}}Z_\rho(x) $$ related to this process, as well as to their second moments $B_\rho={\mathbf{E}}\zeta_\rho^2$ and $M_\rho={\mathbf{E}}\xi_\rho^2$. The process $Z_\rho$ (up to a linear time change) arises in some
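The process $Z_\rho$ is straightforward to simulate on a grid, which gives a Monte Carlo handle on $\zeta_\rho$ and $\xi_\rho$. The sketch below is our own illustration (the grid width and step are arbitrary truncation choices), not a method from the paper.

```python
import numpy as np

# One sample path of ln Z_rho on a finite grid, with Pi_+ and Pi_- built from
# Poisson increments of the stated intensities, then zeta_rho and xi_rho for
# that path. Grid width W and step dx are arbitrary illustration choices.
rng = np.random.default_rng(1)
rho, W, dx = 1.0, 50.0, 0.01
n = int(W / dx)
x = np.concatenate([-np.arange(n, 0, -1), np.arange(0, n)]) * dx   # [-W, W)

lam_p = 1.0 / (np.exp(rho) - 1.0)                 # intensity of Pi_+
lam_m = 1.0 / (1.0 - np.exp(-rho))                # intensity of Pi_-

Pi_p = np.cumsum(rng.poisson(lam_p * dx, n))      # Pi_+(x) on x = dx..W
Pi_m = np.cumsum(rng.poisson(lam_m * dx, n))      # Pi_-(s) on s = dx..W
lnZ = np.empty(2 * n)
lnZ[n:] = rho * Pi_p - x[n:]                      # branch x >= 0
lnZ[:n] = -rho * Pi_m[::-1] - x[:n]               # branch x <= 0: -rho*Pi_-(-x)-x

Z = np.exp(lnZ - lnZ.max())               # rescaling cancels in the ratio below
zeta = np.sum(x * Z) / np.sum(Z)          # one realisation of zeta_rho
xi = x[np.argmax(Z)]                      # one realisation of xi_rho (argsup)
print(zeta, xi)
```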
Once the change point is identified, in the second step, all network data before and after it are used together with a clustering algorithm to obtain the corresponding community structures and subsequently estimate the generating stochastic block model parameters. The first method, since it requires knowledge of the community structure, and hence clustering, at every point in time, is significantly more computationally expensive than the second one. On the other hand, it requires a significantly less stringent identifiability condition for consistent estimation of the change point and the model parameters than the second method; however, it also requires a condition on the rate of mis-allocating network nodes to their respective communities that may fail to hold in many realistic settings. Despite the apparent stringency of the identifiability condition for the second method, we show that networks generated by a stochastic block mechanism exhibiting a change in their structure can
robustness in challenging real-world environments while maintaining storage growth sub-linearity, we incorporate both multi-exemplar learning and data augmentation techniques. Using large benchmark robotic mapping datasets, we demonstrate the combined system achieving high-performance place recognition with sub-linear storage requirements, and characterize the performance-storage growth trade-off curve. The work provides the first robotic mapping system with sub-linear storage scaling properties, as well as the first large-scale demonstration, in real-world environments, of one of the proposed memory benefits of these neurons.' author: - 'Litao Yu, Adam Jacobson and Michael Milford [^1]' bibliography: - 'reference.bib' title: '**Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sub-Linear Storage Cost**' --- Introduction ============ Visual place recognition - recognising whether a current camera image matches those stored in a map or database - is a fundamental component of most robotic mapping and navigation systems [@TR:SLAM_SURVEY]. These mapping systems are typically developed and evaluated based on the quality of the map they can produce,
barring a cosmic conspiracy that puts an end to the injection spectrum at that very energy, UHECR interactions with cosmological background photons produce a sharp cutoff (the Greisen-Zatsepin-Kuzmin limit) in the spectrum corresponding to $\sim60\,\mathrm{EeV}$ [@Greisen:1966jv; @Zatsepin:1966jv], and a cutoff is indeed observed in the data [@Abbasi:2007sv; @Abraham:2008ru]. If the sources of UHECRs are extra-Galactic, they most probably correlate with the large-scale distribution of matter (large-scale structure, or LSS). The interactions with the cold background photons limit UHECR propagation to about $100\,\mathrm{Mpc}$ (for a review, see Ref. [@Kotera:2011cp]). Therefore, the UHECR flux distribution in the sky should be to some extent anisotropic, since $100\,\mathrm{Mpc}$ is roughly comparable with the scale of homogeneity expected in the standard cosmological model [@Pan:2000yg; @Scrimgeour:2012wt; @Alonso:2014xca]. How the anisotropy of UHECR sources manifests itself in the observed flux on Earth then depends on the original anisotropy of the sources, the UHECR chemical composition, and the properties of intervening
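The energy scale of the cutoff follows from simple kinematics: photopion production $p\gamma \to \Delta^+$ on a CMB photon becomes possible once the centre-of-mass energy reaches the $\Delta(1232)$ resonance. A rough order-of-magnitude version of this standard estimate is sketched below; all numbers are textbook values, not taken from this excerpt.

```python
# Order-of-magnitude GZK threshold: head-on p + gamma_CMB -> Delta(1232).
# Head-on, ultrarelativistic proton: s ~ m_p^2 + 4*E_p*eps; threshold s = m_D^2.
m_p, m_D = 0.938e9, 1.232e9               # eV, proton and Delta masses
eps = 6.3e-4                              # eV, mean CMB photon energy (~2.7 K)
E_th = (m_D**2 - m_p**2) / (4.0 * eps)
print(f"{E_th:.1e} eV")                   # ~2.5e20 eV; averaging over photon
                                          # energies and angles pushes the
                                          # effective cutoff down to ~6e19 eV
```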
by taking the essential supremum of the cost functionals over all admissible controls. We give the formulation of the related generalized Hamilton-Jacobi-Bellman (HJB) equations, and prove that the value function is their viscosity solution. [[**Keywords.**]{}  Fully coupled FBSDEs; value functions; stochastic backward semigroup; dynamic programming principle; algebraic equation; viscosity solution.]{}\ Pardoux and Peng [@PaPe1] first introduced nonlinear backward stochastic differential equations (BSDEs) driven by a Brownian motion. Since then the theory of BSDEs has developed very quickly; see El Karoui, Peng and Quenez [@ELPeQu], Peng [@Pe1], [@Pe2], [@Pe3], etc. Alongside the BSDE theory, the theory of fully coupled forward-backward stochastic differential equations (FBSDEs) has also developed very quickly; see Antonelli [@An], Cvitanic and Ma [@CM], Delarue [@D], Hu and Peng [@HP], Ma, Protter, and Yong [@MPY], Ma, Wu, Zhang, and Zhang [@MWZZ], Ma and Yong [@MY], Pardoux and Tang [@PaT], Peng and Wu [@PW], Wu [@W], Yong [@Y], [@Y2], and Zhang [@Z], etc. For more details on fully coupled FBSDEs, the reader is referred to the book of Ma and Yong [@MY]; also refer
<context>[NEXA_RESTORE] $\Gamma_{1s}$. The shift and the broadening are related to the hadronic scattering lengths $a^h_{\pi^-\ p \to \pi^-\ p }$ and $a^h_{\pi^-\ p \to \pi^0\ n }$, by the Deser-type formulae [@deser]: $$\frac{\epsilon_{1s}}{E_{1s}}=-4\frac{1}{r_B}a^h_{(\pi^- \ p \to \pi^- \ p)}(1+\delta_\epsilon) \label{eq:deser1}$$ $$\frac{\Gamma_{1s}}{E_{1s}}=8 \frac{Q_0}{r_B}(1+\frac{1}{P})(a^h_{(\pi^- \ p \to \pi^0 \ n)}(1+\delta_\Gamma))^2 \label{eq:deser2}$$ where $\epsilon_{1s}$ is the strong interaction shift of the 1s level reflecting the $\pi\,p$ scattering process. $\Gamma_{1s}$ is the width of the ground state caused by the reactions $\pi^- + p \to \pi^0 + n$ and $\pi^- + p \to \pi^0 + \gamma $. $Q_0=0.1421~fm^{-1}$ is the kinetic center of mass momentum of the $\pi^0$ in $\pi^- + p \to \pi^0 + n$ reaction, and $P=1.546 \pm 0.009$[@spuller1977] is the branching ratio of the charge exchange and radiative capture (Panofsky ratio). $\delta_{\epsilon,\Gamma}$ are corrections that permit to connect the pure hadronic scattering lengths to the measurable shift and width[@gasser2002; @sigg1996th;
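Inverting the Deser-type formulae is straightforward, as the sketch below shows. Only $Q_0$ and $P$ are quoted in the text above; the values used here for $r_B$, $E_{1s}$, the "measured" shift and width, and the corrections $\delta_{\epsilon,\Gamma}$ are illustrative placeholders.

```python
import math

# Inverting the Deser-type formulae above for the scattering lengths.
# Q0 and P are quoted in the text; r_B, E_1s, the "measured" eps_1s and
# Gamma_1s, and the corrections delta are illustrative placeholders.
Q0 = 0.1421                    # fm^-1
P = 1.546                      # Panofsky ratio
r_B = 222.56                   # fm, pionic-hydrogen Bohr radius (assumed)
E_1s = 3238.0                  # eV, 1s binding energy, magnitude (assumed)
eps_1s, Gamma_1s = 7.1, 0.85   # eV, hypothetical shift and width
d_eps = d_Gam = 0.0            # corrections switched off in this sketch

a_cc = -(eps_1s / E_1s) * r_B / (4.0 * (1.0 + d_eps))        # pi-p elastic
a_cex = math.sqrt((Gamma_1s / E_1s) * r_B
                  / (8.0 * Q0 * (1.0 + 1.0 / P))) / (1.0 + d_Gam)
print(a_cc, a_cex, "fm")       # the square root fixes only |a_cex|
```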
contribution to the [*Fermi*]{} $\gamma$-ray emission.' author: - 'P. Roustazadeh and M. Böttcher' title: 'VHE Gamma-Ray Induced Pair Cascades in the Radiation Fields of Dust Tori of AGN: Application to Cen A' --- Introduction ============ Blazars are a class of radio-loud active galactic nuclei (AGNs) comprising Flat-Spectrum Radio Quasars (FSRQs) and BL Lac objects. Their spectral energy distributions (SEDs) are characterized by non-thermal continuum spectra with a broad low-frequency component in the radio – UV or X-ray frequency range and a high-frequency component from X-rays to $\gamma$-rays, and they often exhibit substantial variability across the electromagnetic spectrum. In the VHE $\gamma$-ray regime, this variability has been observed on time scales as short as just a few minutes [@albert07; @aharonian07]. While previous generations of ground-based Atmospheric Cherenkov Telescope (ACT) facilities detected almost exclusively high-frequency peaked BL Lac objects (HBLs) as extragalactic sources of VHE $\gamma$-rays (with the notable exception of the radio galaxy M87), in recent years,
amaze and baffle us. Through neurobiology, we have an almost complete understanding of how a single neuron works, to the point that simulations of a few connected neurons can be carried out with high precision. However, human-designed neural networks have not fulfilled the promise of emulating these animal behaviors. The problem of designing the neural network [*structure*]{} can be generalized to the problem of designing complex computer programs because, in a sense, an artificial neural network is just a representation of an underlying computer program. Computer scientists have made substantial progress in this area, and routinely create increasingly complicated codes. However, it is a common experience that when these programs are confronted with unexpected situations or data, they stall and simply stop in their tracks. This is quite different from what happens in biological systems, where adequate reactions occur even in the rarest and most uncommon circumstances, as well as
depends on the nanowire parameters as well as on the superlattice dimensions and the external back gate potential. The 3D environment turns out to be essential to correctly capture and understand the phase diagram of the system and the parameter regions where topological superconductivity is established.' author: - 'Samuel D. Escribano' - Alfredo Levy Yeyati - Yuval Oreg - Elsa Prada bibliography: - 'superlattice.bib' title: Effects of the electrostatic environment on superlattice Majorana nanowires --- Introduction {#Introduction} ============ The appearance of Majorana bound states (MBSs) at the edges of topological superconductors in solid-state devices has attracted a great deal of attention from both theorists and experimentalists [@Hasan:RMP10; @Alicea:RPP12; @Beenakker:arxiv11; @Sato:JPSJ16; @Aguado:rnc17; @Lutchyn:NRM18]. These non-Abelian mid-gap zero-energy modes are intriguing from a fundamental point of view and germane to topologically protected quantum computing applications [@Nayak:RMP08; @Aasen:PRX16; @Das:NPJ15]. Due to their relative simplicity, most of the scrutiny has fallen onto one-dimensional (1D) proposals such as hybrid superconducting-semiconducting nanowires with strong spin-orbit coupling [@Lutchyn:NRM18] and
on abelian covering spaces of $M$, under mixed Dirichlet and Neumann boundary conditions? 2. What is the long time behaviour of the abelianized winding of trajectories of normally reflected Brownian motion on $M$? Our main results are Theorem \[t:hker\] and Theorem \[t:winding\], stated in Sections \[s:mainthm\] and \[s:winding\] respectively. In this section we survey the literature and place this paper in the context of existing results. Long Time Behaviour of Heat Kernels on Abelian Covers. ------------------------------------------------------ The short time behaviour of heat kernels has been extensively studied and is relatively well understood (see for instance [@BerlineGetzlerEA92; @Grigoryan99] and the references therein). The exact long time behaviour, on the other hand, is subtly related to global properties of the manifold, and our understanding of it is far from complete. There are several scenarios in which the long time asymptotics can be determined precisely. The simplest scenario is when the underlying manifold is compact, in which case the long time asymptotics is governed
result for the Affine Scaling methods, and it gives us hints about the existence of a new family of methods. Then, we introduce a second algorithm to accelerate the convergence rate of the generalized algorithm by integrating a non-linear series transformation technique. Our numerical results show that the proposed algorithms outperform the original primal Affine Scaling method. ***Key words:*** Linear Programming, Affine Scaling, Nesterov Acceleration, Dikin Process, Shanks Series Transformation Introduction {#sec:into} ============ The *Affine Scaling* (AFS) algorithm was introduced by Dikin [@dikin:1967], but it remained unnoticed by the *Operations Research* (OR) community until the seminal work of Karmarkar [@karmarkar:1984]. Karmarkar’s work transformed research in *Interior Point Methods* (IPMs) and induced a significant development in the theory of IPMs. As a result, several variants of AFS have been studied over the years (see [@jansen:1996], [@barnes:1986]). We refer to the books of Wright [@wright:1997], Ye [@ye:1997], Bertsimas [@bertsimas:1997] and Vanderbei [@vanderbei:1998] for more
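For readers unfamiliar with the method, a textbook primal Affine Scaling iteration for $\min\ c^T x$ subject to $Ax=b$, $x\geq 0$ is sketched below; this is the standard long-step template, not necessarily the exact variant generalized in this paper, and the test problem is invented.

```python
import numpy as np

# Textbook primal Affine Scaling step for: min c^T x  s.t.  A x = b, x > 0.
# Not necessarily the exact variant analysed here; the problem is invented.
def affine_scaling(A, b, c, x, alpha=0.5, iters=50):
    for _ in range(iters):
        D2 = np.diag(x**2)                            # X^2, affine rescaling
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c) # dual estimate
        r = c - A.T @ w                               # reduced costs
        dx = -(x**2) * r                              # scaled steepest descent
        if np.all(dx >= 0):                           # unbounded direction
            break
        step = alpha / np.max(-dx / x)                # stay strictly interior
        x = x + step * dx                             # A @ dx = 0, so Ax = b holds
    return x

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
print(affine_scaling(A, b, c, x=np.array([0.4, 0.3, 0.3])))  # -> near [1, 0, 0]
```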
sequence of letters of $A$ is called a [*finite word*]{} (or [*a word*]{}). An [*infinite word*]{}, or [*sequence*]{}, is a map $\mathbb{N} \to A$. The [*length*]{} of a finite word $u$ is the number $|u|$ of letters in it. The [*concatenation*]{} of two words $u_1$ and $u_2$ is denoted by $u_1u_2$. A word $v$ is a [*subword*]{} of a word $u$ if $u = v_1vv_2$ for some words $v_1$, $v_2$. If $v_1$ or $v_2$ is an empty word, then $v$ is a [*prefix*]{} or [*suffix*]{} of $u$ respectively. A sequence $W$ on a finite alphabet is called [*periodic*]{} if it has the form $W=u^{\infty}$ for some finite word $u$. A sequence of letters $W$ on a finite alphabet is called [*uniformly recurrent*]{} if for any finite subword $u$ of $W$ there exists a number $C(u, W)$ such that any subword of $W$ of length $C(u, W)$ contains $u$ as a subword.
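These definitions are easy to operationalize. The sketch below is illustrative only: since an infinite word cannot be materialized, it works on a finite prefix of $W$, so `recurrence_constant` is a finite-prefix stand-in for $C(u,W)$.

```python
def is_subword(v, u):
    # v is a subword of u iff u = v1 + v + v2 for some (possibly empty) words v1, v2.
    return v in u

def subwords(u, n):
    # All length-n subwords of the finite word u.
    return {u[i:i + n] for i in range(len(u) - n + 1)}

def recurrence_constant(u, prefix):
    # Smallest C such that every length-C subword of `prefix` contains u;
    # approximates C(u, W) when `prefix` is a long prefix of a uniformly recurrent W.
    for C in range(len(u), len(prefix) + 1):
        if all(is_subword(u, s) for s in subwords(prefix, C)):
            return C
    return None

print(recurrence_constant("ab", "ab" * 50))  # 3 for the periodic word (ab)^infinity
```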
achieves deployment-ready performance. To validate the consistency of the approach and its industrial applicability, we integrate the Layer-wise Relevance Propagation explainability technique, which enables us to further understand the behavior of the neural network on this task. In the end, the proposed system can provide higher speed, accuracy, and consistency in the process of sewer examination. Our analysis also uncovers some guidelines on how to further improve the quality of the data-gathering methodology.'
author:
- 'Mario A. Gutiérrez-Mondragón'
- 'Dario Garcia-Gasulla'
- 'Sergio Alvarez-Napagao'
- 'Jaume Brossa-Ordoñez'
- 'Rafael Gimenez-Esteban'
bibliography:
- 'ecai.bib'
title: Obstruction level detection of sewer videos using convolutional neural networks
---

Introduction
============

In the US, there are roughly 1,200,000 kilometers of sewer lines [@sterling2010state]. That is more than three times the distance between the Earth and the Moon, while serving only 4% of the world's population. The maintenance of such vast networks of pipes is thus a real challenge worldwide. As of now, the most common approach is
the standard model (SM), the partial width of the decay $D^+_s\to \ell^+\nu_\ell$ can be written as [@decayrate] $$\Gamma_{D^+_{s}\to\ell^+\nu_\ell}=\frac{G_F^2}{8\pi}|V_{cs}|^2 f^2_{D^+_{s}} m_\ell^2 m_{D^+_{s}} \left (1-\frac{m_\ell^2}{m_{D^+_{s}}^2} \right )^2,$$ where $f_{D^+_{s}}$ is the $D^+_{s}$ decay constant, $|V_{cs}|$ is the $c\to s$ Cabibbo-Kobayashi-Maskawa (CKM) matrix element, $G_F$ is the Fermi coupling constant, $m_\ell$ is the lepton mass, and $m_{D^+_{s}}$ is the $D^+_{s}$ mass. In recent years, much progress has been achieved in the measurements of $f_{D^+_{s}}$ and $|V_{cs}|$ with $D^+_s\to \ell^+\nu_\ell$ decays at the CLEO [@cleo2009; @cleo2009a; @cleo2009b], BaBar [@babar2010], Belle [@belle2013] and BESIII [@bes2016] experiments. However, compared to the precision of the most accurate lattice quantum chromodynamics (LQCD) calculation of $f_{D^+_s}$ [@FLab2018], the accuracy of the measurements is still limited. Improved measurements of $f_{D^+_{s}}$ and $|V_{cs}|$ are critical to calibrate various theoretical calculations of $f_{D^+_{s}}$ [@FLab2018; @LQCD; @etm2015; @ukqcd2017; @ukqcd2015; @milc2012; @hpqcd2010; @hpqcd2012; @milc2005; @hpqcd2008; @etm2012; @chen2014; @pacs2011; @qcdsf2007; @chiu2005; @ukqcd2001; @becirevic1999; @bordes2005; @narison2002; @badalian2007; @ebert2006; @cvetic2004; @choi2007; @salcedo2004; @wang2004; @amundson1993; @becirevic2013; @lucha2011; @hwang2010; @wang2015], such
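To give a feel for the numbers, the sketch below evaluates this partial width for the muon and tau channels. All numerical inputs are illustrative assumptions at typical PDG scales, not values measured or used in this paper.

```python
import math

GF   = 1.1663787e-5     # Fermi constant, GeV^-2
Vcs  = 0.973            # CKM |V_cs| (assumed)
fDs  = 0.2498           # D_s+ decay constant, GeV (assumed)
mDs  = 1.96835          # D_s+ mass, GeV
tau  = 5.04e-13         # D_s+ lifetime, s (assumed)
hbar = 6.582119569e-25  # GeV * s

def gamma_lnu(ml):
    # Partial width Gamma(D_s+ -> l+ nu_l) in GeV, from the formula above.
    x = (ml / mDs) ** 2
    return GF**2 / (8 * math.pi) * Vcs**2 * fDs**2 * ml**2 * mDs * (1 - x) ** 2

for name, ml in [("mu", 0.1056584), ("tau", 1.77686)]:
    bf = gamma_lnu(ml) * tau / hbar   # branching fraction = Gamma * tau / hbar
    print(f"BF(D_s+ -> {name}+ nu) ~ {bf:.2%}")   # ~0.5% (mu), ~5% (tau)
```

The $m_\ell^2$ factor is the helicity suppression: it is what makes $D_s^+\to e^+\nu_e$ practically unobservable while the $\tau^+\nu_\tau$ mode dominates.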
an infinite group of automorphisms (Barth and Peters [@BP]). On the other hand, Fano [@F] gave an Enriques surface with a finite group of automorphisms. Later Dolgachev [@D1] gave another example of such an Enriques surface. Then Nikulin [@N] proposed a classification of such Enriques surfaces in terms of their periods. Finally the second author [@Ko] classified all complex Enriques surfaces with a finite group of automorphisms geometrically. There are seven types ${\I, \II,\ldots, \VII}$ of such Enriques surfaces. The Enriques surfaces of type ${\I}$ or ${\II}$ form an irreducible one-dimensional family, and each of the remaining types consists of a unique Enriques surface. The first two types contain exactly twelve nonsingular rational curves; the remaining five types, on the other hand, contain exactly twenty nonsingular rational curves. The Enriques surface of type ${\I}$ (resp. of type ${\VII}$) is the example given by Dolgachev (resp. by Fano). We call the
Zhuang$^{1}$'
title: 'Solid-state calculation of crystalline color superconductivity'
---

Introduction
============

The ground state of exotic fermion Cooper pairing with mismatched Fermi surfaces is a longstanding problem in the theory of superconductivity [@Casalbuoni2004]. In electronic superconductors, the mismatched Fermi surfaces are normally induced by the Zeeman energy splitting $2\delta\mu$ in a magnetic field. For $s$-wave pairing at weak coupling, it is known that, at a critical field $\delta\mu_1=0.707\Delta_0$, where $\Delta_0$ is the pairing gap at vanishing mismatch, a first-order phase transition from the gapped BCS state to the normal state occurs [@CC1962]. Further theoretical studies showed that the inhomogeneous Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) state can survive in a narrow window $\delta\mu_1<\delta\mu<\delta\mu_2$, where the upper critical field is $\delta\mu_2=0.754\Delta_0$ [@LO1964; @FF1964]. However, since the thermodynamic critical field is much lower than $\delta\mu_1$ due to the strong orbital effect, it is rather hard to observe the LOFF state in ordinary superconductors [@CC1962]. In recent years, experimental evidence for the LOFF state
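As background for where $\delta\mu_1\approx 0.707\,\Delta_0$ comes from (a textbook estimate, stated here for context rather than as part of this paper's calculation): the first-order transition occurs when the Pauli-paramagnetic energy gain of the normal state matches the BCS condensation energy,
$$N(0)\,\delta\mu_1^{\,2}=\tfrac{1}{2}N(0)\,\Delta_0^{2}\quad\Longrightarrow\quad \delta\mu_1=\frac{\Delta_0}{\sqrt{2}}\approx 0.707\,\Delta_0,$$
with $N(0)$ the density of states at the Fermi surface; this is the Chandrasekhar-Clogston limit of [@CC1962].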
lowest in the whole sky, i.e., the field is an ideal sky area for far-infrared (FIR) extragalactic observations. Very deep imaging data were obtained down to $\sim 20$ mJy at $90\;\mu$m [for details, see @shirahata2009].

Catalog {#sec:catalog}
=======

We cross-correlated the ADFS point source catalog (based on $90\;\mu$m) with other known and publicly available databases, mainly SIMBAD and NED. For the 500 sources brighter than 0.0482 Jy, we searched for their counterparts at other wavelengths within a radius of $40''$. In total, 110 counterparts for 114 ADFS sources were found. As shown in Figure \[fig:ddist\], the angular distance between an ADFS source and its counterpart is in most cases smaller than $20''$. It is plausible that the more distant identifications are caused by contamination. In particular, all three stars in the sample are most probably falsely identified because of contamination (M. Fukagawa, private communication). The positional scatter map, shown in Figure \[fig:scatter\], displays a
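The matching step is conceptually simple. Below is a minimal sketch of a fixed-radius nearest-neighbour positional cross-match of the kind described (illustrative only: real catalog work would use proper spherical geometry and handle ambiguous or duplicate matches):

```python
import numpy as np

def cross_match(ra1, dec1, ra2, dec2, radius_arcsec=40.0):
    # For each source in catalog 1, find the nearest catalog-2 source within
    # the search radius. Coordinates in degrees; small-angle separation formula.
    matches = []
    for i in range(len(ra1)):
        dra = (ra2 - ra1[i]) * np.cos(np.radians(dec1[i]))  # RA compressed by cos(dec)
        ddec = dec2 - dec1[i]
        sep = np.hypot(dra, ddec) * 3600.0                   # separation in arcsec
        j = int(np.argmin(sep))
        if sep[j] <= radius_arcsec:
            matches.append((i, j, sep[j]))
    return matches
```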
stationary states of one such system can be useful for a whole class of problems in various areas of science. The subject of this text is the antiferromagnetic network, where the variables are Ising spins $S_i=\pm 1/2$. As discussed in [@dgm], the ground state problem of this network can be mapped onto the MAX-CUT problem, which belongs to the class of NP-complete optimization problems. Also, the state of the antiferromagnetic network in a weak magnetic field gives information on the minimal vertex cover of the network, which is another famous NP-complete problem [@dgm]. Further, in the ground state of the antiferromagnetic network all neighboring spins should be antiparallel, i.e. the product of each neighboring pair of spins should be negative. This can be seen as equivalent to the problem of satisfiability of $K$ conditions imposed on $N$ variables, where $N$ is the number of nodes and $K$ is the number of edges.
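The mapping is immediate to verify on small graphs. The brute-force sketch below rescales the spins to $s_i=\pm 1$ for convenience (a rescaling of the $\pm 1/2$ spins in the text) and shows that minimizing the antiferromagnetic energy maximizes the cut, since with unit couplings $E = K - 2\,\mathrm{cut}$:

```python
import itertools

def ground_state_and_maxcut(edges, n):
    # Exhaustive search (only sensible for small n): minimise E = sum s_i s_j
    # over spin configurations s in {-1,+1}^n; each cut edge contributes -1,
    # each uncut edge +1, so minimal energy <=> maximal cut.
    best = None
    for s in itertools.product([-1, +1], repeat=n):
        energy = sum(s[i] * s[j] for i, j in edges)
        cut = sum(1 for i, j in edges if s[i] != s[j])
        if best is None or energy < best[0]:
            best = (energy, cut, s)
    return best  # (minimal energy, corresponding cut size, spin configuration)

print(ground_state_and_maxcut([(0, 1), (1, 2), (0, 2)], 3))  # triangle: E = -1, cut = 2
```

The triangle illustrates frustration: not all three conditions $s_is_j=-1$ can be satisfied at once, which is exactly the satisfiability flavour of the problem.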
investigated in [@wey1], [@wey2], [@peeb70], [@sper17], and in the series of papers [@ZS1] - [@ZS4]. A review of the theory of the interaction of free electrons with radiation is presented in [@rew]. The main process by which the hot plasma inside galaxy clusters interacts with the CMB is Compton scattering, including the Doppler effect. Observations from the satellites WMAP and PLANCK revealed the existence of CMB fluctuations in the directions of rich galaxy clusters [@w1], [@w2], [@p8]-[@p], which have been interpreted in the framework of the physical processes investigated in [@wey1], [@wey2], [@ZS1], [@ZS2]. The study of this interaction is based on the solution of the Kompaneets equation [@Komp], see also [@wey1], which is an approximate form of the radiative transfer equation with non-coherent scattering, valid when the energy exchange $\Delta E_{e\gamma}$ between the electron and the photon in one scattering is much less than the photon energy $E_{\gamma}$ and the electron thermal energy $kT_e$: $$\frac{\Delta E_{e\gamma}}{E_{\gamma}}=\frac{4kT_e-E_{\gamma}}{m_ec^2}\ll 1.$$
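For scale, the sketch below plugs in representative numbers (assumed typical values for hot intracluster gas and CMB photons, not figures from this paper) to confirm the smallness condition:

```python
# Check of the Kompaneets small-exchange condition for CMB photons in cluster plasma.
k_T_e   = 8.0e3      # electron temperature in eV (~10^8 K gas, an assumed value)
E_gamma = 6.3e-4     # typical CMB photon energy in eV (~2.7 K blackbody peak scale)
m_e_c2  = 5.11e5     # electron rest energy in eV

ratio = (4 * k_T_e - E_gamma) / m_e_c2
print(f"Delta E / E ~ {ratio:.2e}")   # ~6e-2 << 1, so the diffusion approximation applies
```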
for density ratio estimation [@que1] and semi-supervised learning [@que2]. Fredholm learning can be considered a kernel method with a data-dependent kernel. This kernel is usually called the Fredholm kernel and can naturally incorporate information from the data. Although its empirical performance has been well demonstrated in previous works, there has been no learning-theory analysis of its generalization bounds and learning rates. It is well known that generalization ability and learning rate are important measures for evaluating a learning algorithm [@cucker1; @zou1; @zou2]. In this paper, we focus on this theoretical theme for regularized least square regression with the Fredholm kernel. In the learning theory literature, extensive studies have been established for least square regression with regularized kernel methods, e.g., [@shi1; @sun1; @wu2]. Although the Fredholm learning in [@que2] can also be considered a regularized kernel method, there are two key features: one is that the Fredholm kernel is associated with the "inner" kernel and
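To fix ideas, here is one natural data-dependent construction in this spirit, used inside regularized least squares. This is a sketch under an explicit assumption: we take the Fredholm kernel to compose an "outer" kernel evaluated against sample points with an "inner" kernel between them; the precise form used in [@que2] may differ, and nothing below should be read as that paper's definition.

```python
import numpy as np

def rbf(X, Z, gamma):
    # Gaussian kernel matrix between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fredholm_kernel(X, Z, U, gamma_out, gamma_in):
    # Assumed form: K_F(x,z) = (1/|U|^2) sum_{i,j} k_out(x,u_i) k_in(u_i,u_j) k_out(u_j,z),
    # where U are (possibly unlabeled) points defining the data-dependent kernel.
    Kxu = rbf(X, U, gamma_out)
    Kin = rbf(U, U, gamma_in)
    Kuz = rbf(U, Z, gamma_out)
    return Kxu @ Kin @ Kuz / len(U) ** 2

def krr_fit_predict(Xtr, ytr, Xte, U, lam=1e-2, gamma_out=1.0, gamma_in=1.0):
    # Regularized least squares (kernel ridge regression) with the kernel above.
    K = fredholm_kernel(Xtr, Xtr, U, gamma_out, gamma_in)
    alpha = np.linalg.solve(K + lam * len(Xtr) * np.eye(len(Xtr)), ytr)
    return fredholm_kernel(Xte, Xtr, U, gamma_out, gamma_in) @ alpha
```

With this form the Gram matrix is positive semi-definite (it is a congruence of the inner kernel matrix), so the ridge system above is well posed.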
integer. Let $G={\text{GL}}_n({k})$ be the group of $n$-by-$n$ invertible matrices over ${k}$ and let $\Lambda_n$ stand for the set of partitions of $n$. For $\lambda=({{\lambda}}_i) \in \Lambda_n$, written in non-increasing order, let $l(\lambda)$ denote its length, namely the number of non-zero parts. The set $\Lambda_n$ is a lattice under the opposite dominance partial order, defined by: ${{\lambda}}\le \mu$ if $\sum_{j=1}^i{{\lambda}}_j \ge\sum_{j=1}^i\mu_j$ for all $i \in \mathbb{N}$. Let $\vee$ and $\wedge$ denote the operations of join and meet, respectively, in the lattice $\Lambda_n$. We call a chain of ${k}$-vector spaces ${k}^n=x_{l(\lambda)} \supset x_{l(\lambda)-1} \supset \cdots \supset x_{0} = (0)$ a ${{\lambda}}$-flag if $\dim_{{k}}(x_{l({{\lambda}})-i+1}/x_{l({{\lambda}})-i}) = {{\lambda}}_i$ for all $1 \le i \le l({{\lambda}})$. Let $$X_\lambda=\{(x_{l(\lambda)-1},\cdots,x_{1}) \mid {k}^n = x_{l(\lambda)} \supset \cdots \supset x_{0} = (0)~\text{is a $\lambda$-flag} \}$$ be the set of all ${{\lambda}}$-flags in ${k}^n$. Let ${{\mathcal F}}_{{\lambda}}$ be the permutation representation of $G$ that arises from its action on $X_{{{\lambda}}}$ (${{\lambda}}\in \Lambda_n$).
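The opposite dominance order is a pure partial-sum comparison, as the small illustrative check below shows:

```python
from itertools import zip_longest

def leq_opposite_dominance(lam, mu):
    # lam <= mu in the opposite dominance order iff every partial sum of lam
    # is >= the corresponding partial sum of mu (lam, mu partitions of the same n).
    s_lam = s_mu = 0
    for a, b in zip_longest(lam, mu, fillvalue=0):
        s_lam += a
        s_mu += b
        if s_lam < s_mu:
            return False
    return True

# For n = 4 the chain runs (4) <= (3,1) <= (2,2) <= (2,1,1) <= (1,1,1,1).
print(leq_opposite_dominance((3, 1), (2, 1, 1)))   # True
print(leq_opposite_dominance((2, 2), (3, 1)))      # False: in fact (3,1) <= (2,2)
```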
ordering is similar to that for the charge density wave in quarter-filled systems of low-dimensional organic compounds,[@SeoRev] suggesting some physics common to  and these systems. Soon after the discovery of the transition, an x-ray diffraction measurement revealed superlattice formation of $2a\times 2b\times 4c$ in the charge-ordered phase,[@Fujii] but the detailed low-temperature structure has remained unknown. Recently, two x-ray diffraction studies of the low-temperature structure were reported. They indicate almost the same structure of space group , but their assignments of the V electronic states differ. One suggests a structure consisting of half-filled (V$^{4+}$) and empty (V$^{5+}$) ladders.[@Luedecke] This charge distribution disagrees with a recent x-ray anomalous scattering measurement, which indicates charge modulation along the $b$ axis.[@Nakao00] The other suggests a structure including three different electronic states, V$^{4+}$, V$^{5+}$ and V$^{4.5+}$.[@Boer] This structure is incompatible with the $^{51}$V NMR measurement,[@Ohama99] which clearly shows that all the V sites split into two groups.
sound reflection in room acoustics or reduce sound emissions. There is a need for innovative acoustic absorbent materials that are effective at low frequencies while being able to deal with the spatial constraints present in real applications. Innovative ultra-thin materials are also useful tools for the scientific community to manipulate sound waves and obtain negative refraction [@cummer2016controlling; @kaina2015negative], sub-wavelength imaging [@zhu2011holey; @qi2018ultrathin], cloaking [@faure2016experiments], etc. Traditional sound absorption structures use perforated and micro-perforated panels covering air cavities or porous materials [@allard2009; @maa1998]. These materials have low reflection of normally incident waves at frequencies such that the wavelength ($\lambda = c_0/f$, where $c_0$ and $f$ are the sound velocity in air and the frequency) is about four times the thickness $H$ of the material, leading to a sub-wavelength ratio $r_H = \lambda/H \simeq 4$. There has been a very significant reduction in the thickness of absorbent materials [@yang2017optimal] by using space-coiling structures [@cai2014; @chen2017].
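The $r_H\simeq 4$ rule makes the low-frequency thickness problem quantitative, as the quick arithmetic below shows (illustrative numbers; $c_0$ for air at room temperature is an assumed value):

```python
c0 = 343.0           # speed of sound in air, m/s (assumed room-temperature value)

def required_thickness(f, r_H=4.0):
    # Material thickness H at which the wavelength is r_H times the thickness.
    return c0 / f / r_H

for f in (100.0, 500.0, 1000.0):
    print(f"f = {f:6.0f} Hz -> H ~ {100 * required_thickness(f):5.1f} cm")
# ~86 cm at 100 Hz, ~17 cm at 500 Hz: low frequencies demand thick conventional
# absorbers, which is the spatial-constraint problem motivating thinner designs.
```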
described by constraint minimizers of the following variational problem: $$\label{def:eN} e(N):=\inf \Big\{ \mathcal{E}(u):\, u\in H^{\frac{1}{2}}({{\mathbb{R}}}^3)\,\ \mbox{and}\ \int_{{{\mathbb{R}}}^3} |u(x)|^2dx=N\Big\},$$ where $N>0$ denotes the stellar mass of the boson star, and the pseudo-relativistic Hartree energy functional $\mathcal{E}(u)$ is of the form $$\label{f} \mathcal{E}(u):=\int_{{{\mathbb{R}}}^3} \bar u\big( \sqrt{-\Delta +m^2}-m\big)udx-\frac{1}{2}\int_{{{\mathbb{R}}}^3}\big(|x|^{-1}\ast |u|^2\big)|u|^2dx,\ \ m>0.$$ Here the operator $\sqrt{-\Delta +m^2}$ is defined via multiplication in Fourier space by the symbol $\sqrt{|\xi|^2+m^2}$ for $\xi\in{{\mathbb{R}}}^3$; it describes the kinetic and rest energy of many self-gravitating, relativistic bosons with rest mass $m>0$, and the symbol $\ast$ stands for convolution on ${{\mathbb{R}}}^3$. Because of its physical relevance, we always focus on the case $m>0$ throughout the whole paper without special notation. The main purpose of this paper is to prove the uniqueness of minimizers of $e(N)$, provided that $N>0$ is small enough. The variational problem $e(N)$ belongs essentially to the class of $L^2$-critical constraint minimization problems, which were
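A standard scaling heuristic (background, not spelled out in this excerpt) explains why $e(N)$ is $L^2$-critical. Under the mass-preserving scaling $u_\lambda(x)=\lambda^{3/2}u(\lambda x)$, so that $\|u_\lambda\|_2^2=\|u\|_2^2$, and keeping only the massless part $\sqrt{-\Delta}$ of the kinetic operator,
$$\int_{{\mathbb{R}}^3}|\xi|\,|\widehat{u_\lambda}(\xi)|^2\,d\xi=\lambda\int_{{\mathbb{R}}^3}|\xi|\,|\widehat{u}(\xi)|^2\,d\xi, \qquad \int_{{\mathbb{R}}^3}\big(|x|^{-1}\ast|u_\lambda|^2\big)|u_\lambda|^2\,dx=\lambda\int_{{\mathbb{R}}^3}\big(|x|^{-1}\ast|u|^2\big)|u|^2\,dx,$$
so the kinetic and attractive Hartree terms compete at exactly the same rate in $\lambda$; this is why the functional is bounded from below only for masses $N$ below a critical threshold.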
forum dataset.'
author:
- Yue Yu
- Siyao Peng
- Grace Hui Yang
bibliography:
- 'citation.bib'
title: |
  Modeling Long-Range Context for\
  Concurrent Dialogue Acts Recognition
---

Task Definition
===============

The task is defined as a CDA recognition problem where, for each utterance $u_t$ (the $t$-th utterance) in a dialogue, we predict a subset of DA labels $y_t$ that describes the functionality of the utterance, drawn from a candidate set of DA labels $\mathcal{L} = \{l_1, l_2,...,l_c\}$. For a dialogue with $s$ utterances, the input to the algorithm is $\mathcal{U} = \{u_1, u_2,...,u_s\}$, and the output is $\mathcal{Y}=\{y_1, y_2,...,y_s\}$, where $y_t = \{y_t^{1}, y_t^{2},...,y_t^{c}\}$ is the annotated DA label set for $u_t$. Here, $y_t^{j} \in \{1, 0\}$ denotes whether the $t$-th utterance of the dialogue is labeled with DA label $l_j$ or not. When $\sum_{j=1}^c y_t^j > 1$, we say CDAs are recognized. Given a dialogue $\mathcal{U}$, the goal is to predict the corresponding label sets $\mathcal{Y}$.
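In code, the output side of this formulation is just per-utterance multi-label decoding. The sketch below is illustrative only: it assumes some upstream model has already produced per-label scores (e.g. sigmoid outputs), which is not specified at this point in the text.

```python
import numpy as np

def predict_cda(probs, labels, threshold=0.5):
    # probs: array of shape (s, c) with per-utterance scores for the c DA labels.
    Y = (probs >= threshold).astype(int)          # y_t^j in {0, 1}
    concurrent = Y.sum(axis=1) > 1                # utterances on which CDAs are recognized
    decoded = [[l for l, on in zip(labels, row) if on] for row in Y]
    return Y, decoded, concurrent

probs = np.array([[0.9, 0.7, 0.1], [0.2, 0.6, 0.1]])
Y, decoded, concurrent = predict_cda(probs, ["Statement", "Question", "Greeting"])
print(decoded, concurrent)  # [['Statement', 'Question'], ['Question']] [ True False]
```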
keV light curves are calculated, and a detailed search for quasi-periodic oscillations (QPOs) is carried out on these spectra using a new technique for the detection of periodic (or quasi-periodic) signals even in the presence of source noise variability. No significant peaks are found above the 95% confidence detection threshold, except during the second part of the March 1986 observation, most probably as a consequence of the ME detector malfunctioning. We discuss and compare our results with those of Papadakis & Lawrence (1993a).
author:
- '**G. Tagliaferri**'
- '**G. Bao, G. L. Israel**'
- '**L. Stella**'
- '**A. Treves**'
---

Introduction
============

NGC5548 is a bright, close-by (z=0.017) Seyfert 1 galaxy which has been extensively studied in different bands of the electromagnetic spectrum. Large variability of both lines (optical-UV) and continuum has been reported, making the source an important laboratory for exploring the mechanisms of spectral formation in AGNs. In particular, the study of correlations and
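For readers who want the flavour of such a search, here is a minimal periodogram sketch for an unevenly sampled light curve. This is illustrative only: it uses a plain Lomb-Scargle periodogram on synthetic data, not the paper's technique, and assessing significance against red source noise requires simulated noise light curves rather than a white-noise threshold.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, 500))               # uneven sampling times, s
rate = 10.0 + np.sin(2 * np.pi * t / 100.0) + rng.normal(0.0, 1.0, t.size)

freqs_hz = np.linspace(1e-3, 5e-2, 2000)                 # trial frequencies, Hz
power = lombscargle(t, rate - rate.mean(), 2 * np.pi * freqs_hz)  # needs angular freqs

best = freqs_hz[np.argmax(power)]
print(f"strongest periodicity near {1.0 / best:.0f} s")  # ~100 s for this toy signal
```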
\{ z \in {\mathbb{C}};\ \Im(z) > 0\}$ with possible poles at the cusps. Throughout this paper, we let $N$ be a positive integer and we denote by ${\Gamma}$ the congruence subgroup ${\Gamma}=\Gamma_0(N)=\left\lbrace{\left(\begin{smallmatrix}a & b \\ c & d\end{smallmatrix}\right)}\in {{\text {\rm SL}}}_2({\mathbb{Z}}); c \equiv 0 \bmod N \right\rbrace$. For a negative integer $D$ congruent to a square modulo $4N$, we consider the set $\mathcal{Q}_{D,N}$ of *positive definite* integral binary quadratic forms $\left[a,b,c\right]=ax^2+bxy+cy^2$ of discriminant $D=b^2-4ac$ such that $c$ is congruent to $0$ modulo $N$. If $N=1$, we simply write $\mathcal{Q}_{D}$. For each form $Q = \left[a,b,c\right] \in \mathcal{Q}_{D,N}$ there is an associated CM point $\alpha_Q=\frac{-b+i\sqrt{D}}{2a}$ in ${\mathbb{H}}$. The group ${\Gamma}$ acts on $\mathcal{Q}_{D,N}$ with finitely many orbits. Let $\Delta \in {\mathbb{Z}}$ be a fundamental discriminant (possibly 1) and $d$ a positive integer such that $-{\operatorname{sgn}}(\Delta)d$ and $\Delta$ are squares modulo $4N$. For a weakly holomorphic modular form $f$ of weight 0 for ${\Gamma}$,
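Concretely, elements of $\mathcal{Q}_{D,N}$ are easy to enumerate in a finite box. The sketch below lists candidate forms $[a,b,c]$ with $b^2-4ac=D$ and $N\mid c$; it is illustrative only and does not reduce the list to one representative per ${\Gamma}$-orbit.

```python
def forms_QDN(D, N, a_max=None):
    # Positive definite integral forms [a, b, c] = a x^2 + b x y + c y^2 with
    # discriminant b^2 - 4ac = D < 0 and c = 0 (mod N), for leading coefficient
    # a up to a_max. Positive definiteness follows from a > 0 and D < 0.
    a_max = abs(D) if a_max is None else a_max
    out = []
    for a in range(1, a_max + 1):
        for b in range(-a, a + 1):
            num = b * b - D                 # equals 4ac, so must be divisible by 4a
            if num % (4 * a) == 0:
                c = num // (4 * a)
                if c > 0 and c % N == 0:
                    out.append((a, b, c))   # associated CM point: (-b + i*sqrt(-D)) / (2a)
    return out

print(forms_QDN(-23, 1, a_max=2))  # [(1, -1, 6), (1, 1, 6), (2, -1, 3), (2, 1, 3)]
```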
is done via a final optimization step. Note that the method does not provide a generic solution to the multi-view photometric stereo problem, but it relaxes several common assumptions of this problem. The approach scales very well given its piecewise nature, dealing with large-scale optimization and with severe missing data. Experiments on the benchmark *Robot data-set* show the method's performance against 3D ground truth.'
author:
- |
  Reza Sabzevari$^1$, Vittori Murino$^2$, and Alessio Del Bue$^2$\
  \
  $^1$ Robotics and Perception Group, University of Zurich, Switzerland\
  $^2$ Pattern Analysis and Computer Vision (PAVIS),\
  Italian Institute of Technology, Genova, Italy
bibliography:
- 'pimps\_ref.bib'
title: 'PiMPeR: Piecewise Dense 3D Reconstruction from Multi-View and Multi-Illumination Images'
---

Conclusions {#sec:conclusions}
===========

We have presented a novel photo-geometric method for dense reconstruction from multiple views under arbitrary lighting conditions. The approach is able to cope with wide-baseline images.
$T=2$ state in $^{96}$Ag is given by $$E^{*}(J=0^{+},T=2)=BE(^{96}\text{Ag})-BE(^{96}\text{Pd})+V_{C}\,,\label{eq:exc}$$ where the $BE$s are the binding energies and $V_{C}$ includes all charge-independence violating effects. The binding energies can be obtained from the latest mass evaluation [@am11] and we assume that $V_{C}$ arises from the Coulomb interaction, which must be estimated. We use the classical form of the Coulomb energy $$E_{C}=\alpha_{C}Z^{2}/A^{1/3}\,,\label{eq:ecd}$$ supplemented by an exchange Coulomb term $$E_{xC}=\alpha_{xC}Z^{4/3}/A^{1/3}\,,\label{eq:ecx}$$ where $\alpha_{C}$ and $\alpha_{xC}$ are coefficients to be obtained from appropriate data. Several sources were compared. The simplest is the Bethe-Weizsäcker semi-empirical mass formula [@key-3; @mwk], which produces $\alpha_{C}=0.691$ MeV, $\alpha_{xC}=0$ from a fit of a four-term semi-empirical mass formula to the measured masses. An extended, ten-term mass formula [@mwk] produces $\alpha_{C}=0.774$ MeV and $\alpha_{xC}=-2.22$ MeV from a similar fit. The best mass formulation currently available is the Duflo-Zuker approach [@key-4; @mwdz] with up to 33 parameters fitted to the mass data. It includes a unified
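As a quick numerical illustration of how these fits feed into $V_C$ in Eq. (\ref{eq:exc}): the arithmetic below evaluates the $Z=46\to 47$ Coulomb-energy difference at $A=96$ using only the quoted coefficients (treating $V_C$ as this simple difference is our illustrative reading, not necessarily the paper's full prescription).

```python
A = 96

def E_C(Z, a_C, a_xC=0.0):
    # Direct plus exchange Coulomb energy, Eqs. (ecd) and (ecx), in MeV.
    return a_C * Z**2 / A**(1/3) + a_xC * Z**(4/3) / A**(1/3)

for label, a_C, a_xC in [("4-term BW fit", 0.691, 0.0), ("10-term fit", 0.774, -2.22)]:
    dV = E_C(47, a_C, a_xC) - E_C(46, a_C, a_xC)   # Pd -> Ag Coulomb displacement
    print(f"{label}: Delta E_C ~ {dV:.2f} MeV")    # ~14.0 and ~13.4 MeV respectively
```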
[@Ge1], [@Ge2], [@Hi]) that for many mathematical objects $X$ (defined over a field of characteristic zero) the formal deformation theory of $X$ is controlled by a DG Lie algebra $\mathfrak{g}=\mathfrak{g}(X)$ of (derived) infinitesimal automorphisms of $X$. This is so in case $X$ is an algebra, a compact complex manifold, a principal $G$-bundle, etc. Let ${{\mathcal M}}(X)$ denote the base of the universal deformation of $X$ and let $o\in {{\mathcal M}}(X)$ be the point corresponding to $X$. Then (under some conditions on $\mathfrak{g}$) the completion of the local ring $\hat{{{\mathcal O}}}_{{{\mathcal M}}(X),o}$ is naturally isomorphic to the linear dual of the homology space $H_0(\mathfrak{g})$. The space $H_0(\mathfrak{g})$ is a co-commutative coalgebra, hence its dual is a commutative algebra. The homology $H_0(\mathfrak{g})$ is the zeroth cohomology group of $B\mathfrak{g}$, the bar construction of $\mathfrak{g}$, which is a co-commutative DG coalgebra. It is therefore natural to consider the DG "formal moduli space" ${{\mathcal M}}^{DG}(X)$, so
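For orientation (standard deformation-theory background, recalled as context rather than as this paper's result), "controlled by" means that over a local Artin algebra $(A,\mathfrak{m}_A)$ the deformations of $X$ correspond to Maurer-Cartan elements of $\mathfrak{g}\otimes\mathfrak{m}_A$ up to gauge equivalence:
$$\operatorname{Def}_X(A)\;\cong\;\left\{\gamma\in\mathfrak{g}^{1}\otimes\mathfrak{m}_A \;:\; d\gamma+\tfrac{1}{2}[\gamma,\gamma]=0\right\}\Big/\,\text{gauge}.$$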
were annotated and the rest left unlabeled) gave a word error rate of 44.6%, and this number can be reduced to 34.2% if 4.1 hr of speech data (in which 20,000 spoken words were annotated) are given. These results are not satisfactory, but they are a good starting point.'
address: 'National Taiwan University, Taiwan'
bibliography:
- 'mybib.bib'
- 'IR\_bib.bib'
- 'ref\_dis.bib'
- 'segment.bib'
- 'transfer.bib'
- 'INTERSPEECH16.bib'
- 'ICASSP13.bib'
- 'refs.bib'
title: 'From Semi-supervised to Almost-unsupervised Speech Recognition with Very-low Resource by Jointly Learning Phonetic Structures from Audio and Text Embeddings'
---

**Index Terms**: automatic speech recognition, semi-supervised

Introduction {#sec:intro}
============

Automatic speech recognition (ASR) has achieved remarkable success in many applications [@bahdanau2016end; @amodei2016deep; @zhang2017very]. However, with existing technologies, machines have to learn from a huge amount of annotated data to achieve acceptable accuracy, which makes the development of such technologies challenging for new, low-resource languages. Collecting a large amount of speech data is expensive, not to mention having the data annotated. This remains true for at least
has been discovered by the ATLAS [@Aad:2012tfa] and CMS [@Chatrchyan:2012ufa] collaborations. It is considered to be a highly Standard Model (SM) Higgs-like particle, with measured production rates consistent with the SM Higgs boson in the $\gamma\gamma$, $ZZ^*$, $WW^*$, and $\tau\tau$ channels [@Aad:2012tfa; @Chatrchyan:2012ufa]. Further efforts are nevertheless required to determine the features of the new resonance, such as its spin, its couplings to SM particles, and its self-couplings. The spin-1 hypothesis is excluded by the observation of the $\gamma\gamma$ decay mode, according to the Landau-Yang theorem [@Landau:1948kw; @Yang:1950rg]. Many proposals have been put forward to distinguish between the spin-0 and spin-2 hypotheses, mainly focusing on kinematic distributions, e.g., angular distributions [@Choi:2002jk; @Gao:2010qx; @DeRujula:2010ys; @Englert:2010ud; @Ellis:2012wg; @Bolognesi:2012mm; @Choi:2012yg; @Ellis:2012jv; @Englert:2012xt; @Banerjee:2012ez; @Modak:2013sb; @Boer:2013fca; @Frank:2013gca], event shapes [@Englert:2013opa] and other observables [@Boughezal:2012tz; @Ellis:2012xd; @Alves:2012fb; @Geng:2012hy; @Djouadi:2013yb]. Recent measurements [@ATLAS:2013xla; @ATLAS:2013mla; @Aad:2013xqa; @CMS:xwa] favor spin-0 over specific spin-2 scenarios. As for the couplings, the current direct information or constraints are for
they occur in dense stellar systems such as the cores of globular clusters.'
author:
-
bibliography:
- 'refs.bib'
title: SPH Methods in the Modelling of Compact Objects
---

Introduction {#sec:intro}
============

Relevance of compact object encounters
--------------------------------------

The vast majority of stars in the Universe will ultimately become compact stellar objects: a white dwarf, a neutron star or a black hole. Our Galaxy therefore harbors large numbers of them, probably $\sim 10^{10}$ white dwarfs, several $10^8$ neutron stars and tens of millions of stellar-mass black holes. These objects stretch the physics known from terrestrial laboratories to extreme limits. For example, the structure of white dwarfs is governed by electron degeneracy pressure; they are therefore Earth-sized manifestations of the quantum mechanical Pauli principle. Neutron stars, on the other hand, reach in their cores multiples of the nuclear saturation density ($2.6 \times 10^{14}$ ), which makes them excellent probes of nuclear matter theories. The dimensionless compactness parameter $\mathcal{C}= (G/c^2) (M/R)=
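The compactness parameter is easy to evaluate; the sketch below does so for representative objects (the masses and radii are illustrative assumptions, not taken from this excerpt):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

objects = {
    "white dwarf":  (0.6 * M_sun, 7.0e6),                        # ~Earth-sized
    "neutron star": (1.4 * M_sun, 1.2e4),                        # ~12 km radius
    "black hole":   (1.4 * M_sun, 2 * G * 1.4 * M_sun / c**2),   # R = Schwarzschild radius
}
for name, (M, R) in objects.items():
    print(f"{name:12s}: C ~ {G / c**2 * M / R:.2e}")
# WD ~ 1e-4, NS ~ 0.17, BH = 0.5 exactly: compactness spans more than three
# orders of magnitude across the compact-object family.
```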
dynamical heating scenarios that yield time-averaged SXT intensities that are consistent with the static case. We find that it is possible to reproduce the total observed soft X-ray emission in all of the SXT filters with a dynamical heating model, indicating that nanoflare heating is consistent with the observational properties of the high-temperature solar corona.' author: - 'Harry P. Warren' - 'Amy R. Winebarger' title: 'Static and Dynamic Modeling of a Solar Active Region. I: Soft X-Ray Emission' --- Introduction ============ Understanding how the Sun’s corona is heated to high temperatures remains one of the most significant challenges in solar physics. Unfortunately, the complexity of the solar atmosphere, with its many disparate spatial and temporal scales, makes it impossible to represent with a single, all-encompassing model. Instead we need to break the problem up into smaller, more manageable pieces (e.g., see the recent review by @klimchuk2006). For example, kinetic theory or generalized MHD is used to
:= \textrm{div}\ (|\nabla u|^{p-2} \nabla u)$ and $f\in \mathcal{D}' ({\Omega})$, the space of distributions on the domain ${\Omega}$. Our assumptions are the following:\ **(A)** *$1<p<\infty$, $V\in L^r_{\emph{loc}} \ ({\Omega})$ with $r$ as in , ${\Omega}$ is a domain in $\mathbb{R}^N$, $V \geq 0$ and for all test functions $u\in \mathcal{D} ({\Omega}) \backslash \{0\}$, $$\label{eq1.1} \mathcal{Q}_V (u) := \int_{\Omega}|\nabla u|^p \,dx - \int_{\Omega}V|u|^p \,dx > 0.$$ There exists $1<q\leq p$ such that $p-1<q$, $$\label{eq1.2} r=1 \ (N<q), \quad 1<r<+\infty \ (N=q), \quad 1/r+(p-1)/q^\ast=1 \ (N>q)$$ and there exists $W \in \mathcal{C} ({\Omega})$, $W>0$, such that for all $u\in\mathcal{D}({\Omega})$, $$\label{eq1.3} {\displaystyle}\left(\int_{\Omega}(|\nabla u|^q + |u|^q)W\,dx\right)^{p/q} \leq \mathcal{Q}_V (u).$$* Let us recall that $q^\ast:=Nq/(N-q)$. Our first example of $V$ is the *quadratic Hardy potential* $(N\geq 3$, $p=2)$: $$\label{hardy1} V(x):=\left({N-2\over 2}\right)^2 |x|^{-2}.$$ The corresponding forced problem is solved in [@6] using the Brezis-Vazquez remainder term for the quadratic Hardy inequality ([@4] and [@10]). A second example is the *Hardy potential* $(1<p<N)$: $$\label{hardy2} V(x)
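For reference, the classical $p$-Hardy inequality behind this second example (a standard fact, stated here for orientation rather than taken from the excerpt): for $1<p<N$ and all $u\in\mathcal{D}(\mathbb{R}^N)$,

$$\int_{\mathbb{R}^N} |\nabla u|^p \,dx \;\geq\; \left(\frac{N-p}{p}\right)^p \int_{\mathbb{R}^N} \frac{|u|^p}{|x|^p}\,dx,$$

with $\left(\frac{N-p}{p}\right)^p$ the best constant; the quadratic Hardy potential above is exactly the $p=2$ case of the corresponding critical weight, for which $\mathcal{Q}_V(u) \geq 0$ holds but the constant cannot be improved.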
temperatures $T_C$ ($\sim$ few K) [@ferrand01] and are, therefore, inadequate for technological applications which would require FM order at room temperature. More recently, the Mn doped III-V semiconductors In$_{1-x}$Mn$_x$As [@munekata89; @ohno92] and Ga$_{1-x}$Mn$_x$As [@ohno96; @jungwirthRMP06] showed ferromagnetism at a much higher temperature, thanks to the development of molecular beam epitaxy (MBE)-growth techniques. The current high $T_C$ record of $173$K achieved in Mn-doped GaAs by using low temperature annealing techniques [@wang02; @edmonds02; @chiba03] is promising, but still too low for actual applications. In all these materials, ferromagnetism has been proven to be carrier mediated, a necessary property for spintronics since this enables the modification of magnetic behavior through charge manipulation. This has motivated a search for alternative spintronics materials with even higher $T_{\rm C}$ and carrier mediated FM. In this direction, dilute magnetic oxides [@spaldin-review], such as magnetically-doped TiO$_2$ [@matsumoto01], ZnO [@ueda01], and SnO$_2$ [@ogale03], could represent this alternative with reported $T_{\rm C}$s above room temperature and as high
Peter Dekker,$^1$ Martin Ams,$^1$ Michael J. Withford,$^1$ and Jeremy L. O’Brien$^{2,\ddagger}$' title: Laser written waveguide photonic quantum circuits ---
50\mu$m, generated from the experimental setup as commented before [@hansoon]. Our results thus create a new challenge for experimentalists to resolve the shallow energy levels of the neutron in Earth’s gravitational field in the future.' author: - Pulak Ranjan Giri title: 'Quantization of neutron in Earth’s gravity' --- The investigation of quantum phenomena in a gravitational field is certainly interesting and challenging [@nes1; @nes3; @peters; @ber] due to the weakness of the gravitational interaction. To get an idea of how weak the gravitational force is compared with the other forces, a quantitative estimate may be helpful: the gravitational attraction between two neutrons separated by a distance $r$ is $\sim 10^{-36}$ times weaker [@hartle] than the Coulomb repulsion between two electrons separated by the same distance. One therefore needs to be very careful while investigating the quantum effects of gravity. The neutron is a possible candidate on which quantum effects of gravity can be investigated, because charge neutrality will eliminate electromagnetic force
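The quoted $\sim 10^{-36}$ figure is easy to verify; a minimal sketch (illustrative, with rounded CODATA-style constants, not part of the original text), where the separation $r$ cancels in the ratio:

```python
# Illustrative check of the ~1e-36 gravity-to-Coulomb force ratio:
# gravitational attraction of two neutrons vs. Coulomb repulsion of
# two electrons at the same separation r (r cancels in the ratio).
G   = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
m_n = 1.675e-27    # neutron mass [kg]
k_e = 8.988e9      # Coulomb constant 1/(4*pi*eps0) [N m^2 C^-2]
e   = 1.602e-19    # elementary charge [C]

ratio = (G * m_n**2) / (k_e * e**2)
print(f"F_grav / F_Coulomb ~ {ratio:.1e}")   # ~ 8e-37, i.e. of order 1e-36
```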
x 22.35$\upmu$m. author: - 'Abdullah M. Zyarah Dhireesha Kudithipudi' title: 'Semi-Trained Memristive Crossbar Computing Engine with In-Situ Learning Accelerator' --- This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8750-16-1-0108. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory or the U.S. Government. Authors’ addresses: A. M. Zyarah and D. Kudithipudi, Neuromorphic AI Lab, Rochester Institute of Technology, Rochester, NY; emails: {amz6011, dxkeec}@rit.edu. Introduction ============ On-device intelligence is gaining significant attention recently as it offers local data processing and low power consumption, suitable for energy-constrained platforms (*e.g.* IoT). Porting neural networks onto embedded platforms to enable on-device
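As background for the crossbar computing model named in the title (a generic idealization, not this paper's specific engine): a memristive crossbar exploits Ohm's and Kirchhoff's laws so that applying row voltages produces column currents $I_j = \sum_i V_i G_{ij}$, i.e., an analog matrix-vector multiply over the programmed conductances $G_{ij}$. A minimal numerical sketch of this idealized behavior, ignoring wire resistance, sneak paths, and device non-idealities:

```python
import numpy as np

# Idealized crossbar: row voltages V (input vector) drive column
# currents I through a conductance matrix G (the stored weights).
rng = np.random.default_rng(0)
n_rows, n_cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # conductances [S]
V = rng.uniform(0.0, 0.5, size=n_rows)              # read voltages [V]

I = V @ G   # Kirchhoff current law per column: I_j = sum_i V_i * G_ij
print(I)    # analog dot products, one per column
```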
to retrieve, associate and reveal malicious logics at the “[*opcode level*]{}”. We demonstrate the efficacy of DroidAnalytics using 150,368 Android applications, and successfully identify 2,494 Android malware samples from 102 different families, with 342 of them being [*zero-day*]{} malware samples from six different families. To the best of our knowledge, this is the first reported Android malware analysis/detection effort at such a large scale. The evaluation shows that DroidAnalytics is a valuable tool and is effective in analyzing malware repackaging and mutations.' author: - | Min Zheng, Mingshen Sun, John C.S. Lui\ Computer Science & Engineering Department\ The Chinese University of Hong Kong bibliography: - 'paper.bib' title: 'DroidAnalytics: A Signature Based Analytic System to Collect, Extract, Analyze and Associate Android Malware' --- [**Introduction**]{} {#section: introduction} ==================== Smartphones are becoming prevalent devices for many people. Unfortunately, malware on smartphones is also increasing at an unprecedented rate. Android OS-based systems, being the most
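As a rough illustration of what an "opcode level" signature can mean (a generic sketch, not the authors' actual signature algorithm, with hypothetical example data): hashing only the opcode sequence of each method makes the signature insensitive to operand-level repackaging such as renamed classes or altered string constants.

```python
import hashlib

# Generic sketch (not DroidAnalytics' actual algorithm): a method-level
# signature over the opcode sequence only, so operand-level repackaging
# (renamed classes, altered constants) leaves it unchanged.
def method_signature(instructions: list[tuple[str, str]]) -> str:
    opcodes = " ".join(op for op, _operand in instructions)
    return hashlib.sha256(opcodes.encode()).hexdigest()[:16]

# Hypothetical dex-like instruction streams; a real system parses bytecode.
original = [("const-string", '"http://evil.example/a"'),
            ("invoke-virtual", "Lcom/a/b;->send()"),
            ("return-void", "")]
repackaged = [("const-string", '"http://evil.example/b"'),  # operand changed
              ("invoke-virtual", "Lcom/x/y;->send()"),      # class renamed
              ("return-void", "")]

assert method_signature(original) == method_signature(repackaged)
print(method_signature(original))
```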
because only a finite amount of energy is available for the amplifier. Furthermore, the utility of the device is limited in practice by the fact that the amplifier normally adds noise to the signal. A perfect quantum optical amplifier would increase the coherent amplitude of a state multiplicatively. For example, it would transform the coherent state $|\alpha\rangle$, the nearest quantum equivalent to a classical stable wave, as follows $$\begin{aligned} |\alpha \rangle \rightarrow |g\alpha \rangle.\end{aligned}$$ Quantum-level linear optical amplifiers face stringent limitations on their operation. It is impossible to amplify an unknown quantum optical signal without adding noise [@haus], the minimum value of which is a consequence of the uncertainty principle [@caves]. This required extra noise swamps the quantum properties of a signal. Were it otherwise, it would be possible to violate the no-cloning theorem [@wootters] and achieve superluminal communication [@herbert]. Ralph and Lund [@ralph1] suggested that this noise limit could be beaten
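The origin of this bound can be made concrete (a standard textbook argument, not specific to this paper): a phase-insensitive linear amplifier with amplitude gain $g$ must couple in an auxiliary mode $\hat{c}$ to preserve the bosonic commutation relations, $$\begin{aligned} \hat{b} = g\,\hat{a} + \sqrt{g^{2}-1}\,\hat{c}^{\dagger}, \qquad [\hat{b},\hat{b}^{\dagger}] = g^{2} - (g^{2}-1) = 1.\end{aligned}$$ With $\hat{c}$ in the vacuum state, its fluctuations contribute added noise of at least $\tfrac{1}{2}\left(1-1/g^{2}\right)$ quanta referred to the input, approaching half a quantum at large gain; this is the bound derived in [@caves].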
some familiar concepts of heat generation that are applicable in macroscopic systems may not be valid in nanoscale systems. So there is an imperative need to comprehend fully the heating effect and to try to mitigate it. Owing to difficulties in probing the heating process in nanostructures, not much progress had been made until several state-of-the-art experimental scenarios were conceived in recent years. [@Smit2004; @Tsutsui2006; @Huang2006; @Huang2007; @Tsutsui2008; @Tsutsui2010; @Oron-Carl2008; @Ioffe2008] There are both indirect [@Smit2004; @Tsutsui2006; @Huang2006; @Huang2007; @Tsutsui2008; @Tsutsui2010] and direct [@Oron-Carl2008; @Ioffe2008] methods, which enable the evaluation of the effective local temperatures of nanoscale junctions. It has been experimentally demonstrated that the local heating may induce a substantial temperature increase in single molecular junctions [@Huang2006; @Huang2007; @Tsutsui2008; @Tsutsui2010] because of the inefficient heat dissipation. From a microscopic point of view, the main factor of heat generation is the electron-phonon (e-p) interaction, which transfers the ordered energy of electric
is present. Compared to a clean interface, we find that incompressibility substantially weakens flow directed normal to the interface. Interestingly, for both driven and active colloids, we find that the leading-order flow normal to the interface is associated with colloid asymmetry with respect to the interfacial plane. Flow parallel to the interface, however, is not weakened. Moreover, surface-viscous stresses, if present, potentially generate very long-ranged flow on the interface itself and into the surrounding fluids. We examine the limiting forms of such flows. Our results have important implications for advective mass transport enhancement near fluid boundaries.' author: - 'Nicholas G. Chisholm' - 'Kathleen J. Stebe,' bibliography: - 'main.bib' title: Driven and active colloids at fluid interfaces --- Introduction {#sec:intro} =========== Fluid-fluid interfaces provide a rich setting for driven and active colloidal systems. Here, a “driven” colloid moves through a fluid due to external forces or torques, for example, a magnetic bead forced by a magnetic field. “Active” colloids, on