text: strings, lengths 222 to 52.6k
context: strings, lengths 539 to 52.9k
<context>[NEXA_RESTORE] --- abstract: 'In frequency-selective channels linear receivers enjoy significantly reduced complexity compared with maximum likelihood receivers at the cost of performance degradation which can be in the form of a loss of the inherent frequency diversity order or reduced coding gain. This paper demonstrates that the minimum mean-square error symbol-by-symbol linear equalizer incurs no diversity loss compared to the maximum likelihood receivers. In particular, for a channel with memory $\nu$, it achieves the full diversity order of ($\nu+1$) while the zero-forcing symbol-by-symbol linear equalizer always achieves a diversity order of one.' author: - 'Ali Tajer[^1], Aria Nosratinia[^2], Naofal Al-Dhahir' bibliography: - 'IEEEabrv.bib' - 'IIR\_LE.bib' title: 'Diversity Analysis of Symbol-by-Symbol Linear Equalizers' --- Introduction {#sec:intro} ============ In broadband wireless communication systems, the coherence bandwidth of the fading channel is significantly less than the transmission bandwidth. This results in inter-symbol interference (ISI) and at the same time provides frequency diversity that can be exploited at the receiver to enhance transmission reliability [@Proakis:book]. It is
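As background for the diversity claims quoted above, the diversity order of a detection scheme is conventionally defined from the high-SNR slope of its average error probability; with that standard definition (not taken from the abstract itself), the stated result reads

$$ d \;=\; -\lim_{\rho \to \infty} \frac{\log P_e(\rho)}{\log \rho}, \qquad d_{\mathrm{MMSE}} = \nu + 1, \qquad d_{\mathrm{ZF}} = 1, $$

where $\rho$ denotes the SNR, $P_e(\rho)$ the average error probability, and $\nu$ the channel memory.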
<context>[NEXA_RESTORE] --- author: - | The IceCube Collaboration[^1]\ [*<http://icecube.wisc.edu/collaboration/authors/icrc19_icecube>*]{}\ E-mail: bibliography: - 'references.bib' title: 'Measurement of the high-energy all-flavor neutrino-nucleon cross section with IceCube' --- Introduction {#sec:intro} ============ Neutrinos above TeV energies that traverse through the Earth may interact before exiting [@Gandhi:1995tf]. At these energies neutrino-nucleon interactions proceed via deep-inelastic scattering (DIS), whereby the neutrino interacts with the constituent quarks within the nucleon. The DIS cross sections can be derived from parton distribution functions (PDF) which are in turn constrained experimentally [@CooperSarkar:2011pa] or by using a color dipole model of the nucleon and assuming that cross-sections increase at high energies as $\ln^2 s$ [@Arguelles:2015wba]. At energies above a PeV, more exotic effects beyond the Standard Model have been proposed that predict a neutrino cross section of up to at $E_\nu > \SI{e19}{eV}$ [@Jain:2000pu]. Thus far, measurements of the high-energy neutrino cross section have been performed using data from the IceCube Neutrino Observatory. One proposed experiment,
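For context on how Earth absorption gives a handle on the cross section (a standard relation, not quoted from this proceedings): a neutrino crossing the Earth along a chord of column depth $X(\theta)$ survives with probability

$$ P_{\mathrm{surv}}(E_\nu,\theta) \;\simeq\; \exp\!\left[-\,\sigma_{\nu N}(E_\nu)\, N_A\, X(\theta)\right], $$

where $N_A$ is Avogadro's number and $X$ is measured in $\mathrm{g\,cm^{-2}}$, so the zenith-angle dependence of the measured flux constrains $\sigma_{\nu N}(E_\nu)$; $\nu_\tau$ regeneration and neutral-current energy degradation are neglected in this sketch.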
<context>[NEXA_RESTORE] --- abstract: 'Let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$ where $C$ is a smooth curve and $E_1$, $E_2$ are vector bundles over $C$. In this paper we compute the pseudo effective cones of higher codimension cycles on $X$.' address: | Institute of Mathematical Sciences\ CIT Campus, Taramani, Chennai 600113, India and Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India author: - Rupam Karmakar title: Effective cones of cycles on products of projective bundles over curves --- Introduction ============ The cones of divisors and curves on projective varieties have been extensively studied over the years and by now are quite well understood. However, more recently the theory of cones of cycles of higher dimension has been the subject of increasing interest (see [@F], [@DELV], [@DJV], [@CC], etc.). Lately, there has been significant progress in the theoretical understanding of such cycles, due to [@FL1], [@FL2] and others. But the number of examples where
<context>[NEXA_RESTORE] --- abstract: 'We classify periodically driven quantum systems on a one-dimensional lattice, where the driving process is local and subject to a chiral symmetry condition. The analysis is in terms of the unitary operator at a half-period and also covers systems in which this operator is implemented directly, and does not necessarily arise from a continuous time evolution. The full-period evolution operator is called a quantum walk, and starting the period at half time, which is called choosing another timeframe, leads to a second quantum walk. We assume that these walks have gaps at the spectral points $\pm1$, up to at most finite dimensional eigenspaces. Walks with these gap properties have been completely classified by triples of integer indices (arXiv:1611.04439). These indices, taken for both timeframes, thus become classifying for half-step operators. In addition a further index quantity is required to classify the half step operators, which decides whether a continuous
<context>[NEXA_RESTORE] --- abstract: 'Quantum backflow is a classically forbidden effect consisting in a negative flux for states with negligible negative momentum components. It has not been observed experimentally so far. We derive a general relation that connects backflow with a critical value of the particle density, paving the way for the detection of backflow by a density measurement. To this end, we propose an explicit scheme with Bose-Einstein condensates, within reach of current experimental technologies. Remarkably, the application of a positive momentum kick, via a Bragg pulse, to a condensate with a positive velocity may cause a current flow in the negative direction.' author: - 'M. Palmero' - 'E. Torrontegui' - 'J. G. Muga' - 'M. Modugno' title: 'Detecting quantum backflow by the density of a Bose-Einstein condensate' --- introduction ============ Quantum backflow is a fascinating quantum interference effect consisting in a negative current density for quantum wave packets without negative momentum components [@allcock]. It reflects a fundamental point about quantum measurements
<context>[NEXA_RESTORE] --- abstract: 'In this paper we address the problem of efficient estimation of Sobol sensitivity indices. First, we focus on general functional integrals of conditional moments of the form ${\mathbb{E}}(\psi({\mathbb{E}}(\varphi(Y)|X)))$ where $(X,Y)$ is a random vector with joint density $f$ and $\psi$ and $\varphi$ are functions that are differentiable enough. In particular, we show that asymptotically efficient estimation of this functional boils down to the estimation of crossed quadratic functionals. An efficient estimate of first-order sensitivity indices is then derived as a special case. We investigate its properties on several analytical functions and illustrate its interest on a reservoir engineering case.' author: - 'Sébastien Da Veiga[^1] and Fabrice Gamboa[^2]' title: Efficient Estimation of Sensitivity Indices --- density estimation, semiparametric Cramér-Rao bound, global sensitivity analysis. 2G20, 62G06, 62G07, 62P30 Introduction ============ In the past decade, the increasing interest in the design and analysis of computer experiments motivated the development of dedicated and sharp statistical tools [@santner03]. Design of experiments, sensitivity analysis
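To make the link to sensitivity analysis explicit (standard definitions, not taken verbatim from the abstract): choosing $\varphi(y)=y$ and $\psi(t)=t^2$ in the functional ${\mathbb{E}}(\psi({\mathbb{E}}(\varphi(Y)|X)))$ gives the second moment of the conditional mean, from which the first-order Sobol index of an input $X$ follows as

$$ S_X \;=\; \frac{\operatorname{Var}\!\big({\mathbb{E}}(Y\,|\,X)\big)}{\operatorname{Var}(Y)} \;=\; \frac{{\mathbb{E}}\!\big[{\mathbb{E}}(Y|X)^2\big]-\big({\mathbb{E}}\,Y\big)^2}{\operatorname{Var}(Y)}. $$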
<context>[NEXA_RESTORE] --- abstract: | When independent Bose-Einstein condensates (BEC), described quantum mechanically by Fock (number) states, are sent into interferometers, the measurement of the output port at which the particles are detected provides a binary measurement, with two possible results $\pm1$. With two interferometers and two BEC’s, the parity (product of all results obtained at each interferometer) has all the features of an Einstein-Podolsky-Rosen quantity, with perfect correlations predicted by quantum mechanics when the settings (phase shifts of the interferometers) are the same. When they are different, significant violations of Bell inequalities are obtained. These violations do not tend to zero when the number $N$ of particles increases, and can therefore be obtained with arbitrarily large systems, but a condition is that all particles should be detected. We discuss the general experimental requirements for observing such effects, the necessary detection of all particles in correlation, the role of the pixels of
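For reference, the two-setting Bell inequality most often considered in this kind of setup is the BCHSH inequality (standard form, quoted here for context; the abstract does not specify which inequality is violated). With $E(a,b)$ denoting the expectation of the product of the two $\pm1$ parity results at interferometer phase settings $a$ and $b$, any local hidden-variable model must satisfy

$$ \big|E(a,b) + E(a,b') + E(a',b) - E(a',b')\big| \;\le\; 2, $$

whereas quantum mechanics allows values up to $2\sqrt{2}$.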
<context>[NEXA_RESTORE] --- abstract: 'We examine Dirac’s early algebraic approach which introduces the [*standard*]{} ket and show that it emerges more clearly from a unitary transformation of the operators based on the action. This establishes a new picture that is unitarily equivalent to both the Schrödinger and Heisenberg pictures. We will call this the Dirac-Bohm picture for the reasons we discuss in the paper. This picture forms the basis of the Feynman path theory and allows us to show that the so-called ‘Bohm trajectories’ are averages of an ensemble of Feynman paths.' author: - 'B. J. Hiley[^1] and G. Dennis.' bibliography: - 'myfile.bib' date: | TPRU, Birkbeck, University of London, Malet Street,\ London WC1E 7HX.\ Physics Department, University College London, Gower Street, London WC1E 6BT. title: 'The Dirac-Bohm Picture' --- Representations and Pictures ============================ The Stone-von Neumann theorem [@jn31; @jn32; @ms30] proves that the Schrödinger representation is unique up to a unitary transformation. This means
<context>[NEXA_RESTORE] --- abstract: 'In various interaction tasks using Underwater Vehicle Manipulator Systems (UVMSs) (e.g. sampling of sea organisms, underwater welding), important factors such as: i) uncertainties and complexity of the UVMS dynamic model ii) external disturbances (e.g. sea currents and waves) iii) imperfections and noise of measuring sensors iv) steady state performance as well as v) inferior overshoot of interaction force error, should be addressed during the force control design. Motivated by the above factors, this paper presents a model-free control protocol for force control of an Underwater Vehicle Manipulator System which is in contact with a compliant environment, without incorporating any knowledge of the UVMS’s dynamic model, exogenous disturbances and the sensor noise model. Moreover, the transient and steady state response as well as reduction of overshooting force error are solely determined by certain designer-specified performance functions and are fully decoupled from the UVMS’s dynamic model, the control gain selection, as well
<context>[NEXA_RESTORE] --- abstract: | Portfolios are a critical factor not only in risk analysis, but also in insurance and financial applications. In this paper, we consider a special class of risk statistics from the perspective of a regulator. This new risk statistic can be used for the quantification of portfolio risk. By further developing the properties related to regulator-based risk statistics, we are able to derive dual representations for them. Finally, examples are given to demonstrate the application of this risk statistic.\ author: - Xiaochuan Deng - Fei Sun title: 'Regulator-based risk statistics for portfolios' --- **Introduction** ================ Research on risk is a hot topic in both quantitative and theoretical studies, and risk models have attracted considerable attention. The quantitative calculation of risk involves two problems: choosing an appropriate risk model and
<context>[NEXA_RESTORE] --- abstract: 'We have measured the contact angle of the interface of phase-separated $^{3}$He-$^{4}$He mixtures against a sapphire window. We have found that this angle is finite and does not tend to zero when the temperature approaches $T_t$, the temperature of the tri-critical point. On the contrary, it increases with temperature. This behavior is a remarkable exception to what is generally observed near critical points, i.e. “critical point wetting”. We propose that it is a consequence of the “critical Casimir effect” which leads to an effective attraction of the $^{3}$He-$^{4}$He interface by the sapphire near $T_{t}$.' address: | Laboratoire de Physique Statistique de l’Ecole Normale Supérieure\ associé aux Universités Paris 6 et Paris 7 et au CNRS\ 24 rue Lhomond 75231 Paris Cedex 05, France\ author: - 'T. Ueno[^1], S. Balibar, T. Mizusaki[^2], F. Caupin and E. Rolley' title: Critical Casimir effect and wetting by helium
<context>[NEXA_RESTORE] --- abstract: 'We perform 1D/2D/3D relativistic hydrodynamical simulations of accretion flows with low angular momentum, filling the gap between spherically symmetric Bondi accretion and disc-like accretion flows. Scenarios with different directional distributions of angular momentum of falling matter and varying values of key parameters such as the spin of the central black hole and the energy and angular momentum of the matter are considered. In some of the scenarios a shock front forms. We identify ranges of parameters for which the shock, after formation, moves towards or away from the central black hole, or a long-lasting oscillating shock is observed. The frequencies of the shock-position oscillations, which can cause flaring in the mass accretion rate, are extracted. The results scale with the mass of the central black hole and can be compared to the quasi-periodic oscillations of selected microquasars (such as GRS 1915+105, XTE J1550-564 or IGR J17091-3624), as well as to the supermassive black holes
<context>[NEXA_RESTORE] --- abstract: 'This paper proves the asymptotic stability of the multidimensional wave equation posed on a bounded open Lipschitz set, coupled with various classes of positive-real impedance boundary conditions, chosen for their physical relevance: time-delayed, standard diffusive (which includes the Riemann-Liouville fractional integral) and extended diffusive (which includes the Caputo fractional derivative). The method of proof consists in formulating an abstract Cauchy problem on an extended state space using a dissipative realization of the impedance operator, be it finite or infinite-dimensional. The asymptotic stability of the corresponding strongly continuous semigroup is then obtained by verifying the sufficient spectral conditions derived by Arendt and Batty (Trans. Amer. Math. Soc., 306 (1988)) as well as Lyubich and V[ũ]{} (Studia Math., 88 (1988)).' author: - Florian Monteghetti and Ghislain Haine and Denis Matignon title: 'Asymptotic stability of the multidimensional wave equation coupled with classes of positive-real impedance boundary conditions' --- Introduction ============ The broad
<context>[NEXA_RESTORE] --- abstract: 'The proposal of the possibility of change of signature in quantum cosmology has led to the study of this phenomenon in classical general relativity theory, where there has been some controversy about what is and is not possible. We here present a new analysis of such a change of signature, based on previous studies of the initial value problem in general relativity. We emphasize that there are various continuity suppositions one can make at a classical change of signature, and consider more general assumptions than made up to now. We confirm that in general such a change can take place even when the second fundamental form of the surface of change does not vanish.' author: - | [**Mauro Carfora**]{}$^1$ and [**George Ellis**]{}$^{1,2}$\  \ [*$^1$SISSA, Via Beirut 2-4,*]{}\ [*34013 Trieste, Italy*]{}\  \ [*$^2$Department of
<context>[NEXA_RESTORE] --- abstract: 'We adapt a method of Voisin to powers of abelian varieties in order to study orbits for rational equivalence of zero-cycles on very general abelian varieties. We deduce that a very general abelian variety of dimension at least $2k-2$ has gonality at least $k+1$. This settles a conjecture of Voisin. We also discuss how upper bounds for the dimension of orbits for rational equivalence can be used to provide new lower bounds on other measures of irrationality. In particular, we obtain a strengthening of the Sommese bound on the degree of irrationality of abelian varieties. In the appendix we present some new identities in the Chow group of zero-cycles of abelian varieties.' address: 'Department of Mathematics, University of Chicago, IL, 60637' author: - Olivier Martin title: On a Conjecture of Voisin on the Gonality of Very General Abelian Varieties --- Introduction ============ In his seminal 1969 paper [@M] Mumford shows that the Chow group of zero-cycles of
<context>[NEXA_RESTORE] --- abstract: 'The electronic spin of the nitrogen vacancy (NV) center in diamond forms an atomically sized, highly sensitive sensor for magnetic fields. To harness the full potential of individual NV centers for sensing with high sensitivity and nanoscale spatial resolution, NV centers have to be incorporated into scanning probe structures enabling controlled scanning in close proximity to the sample surface. Here, we present an optimized procedure to fabricate single-crystal, all-diamond scanning probes starting from commercially available diamond and show a highly efficient and robust approach for integrating these devices in a generic atomic force microscope. Our scanning probes consisting of a scanning nanopillar (200 nm diameter, $1-2\,\mu$m length) on a thin ($< 1\mu$m) cantilever structure, enable efficient light extraction from diamond in combination with a high magnetic field sensitivity ($\mathrm{\eta_{AC}}\approx50\pm20\,\mathrm{nT}/\sqrt{\mathrm{Hz}}$). As a first application of our scanning probes, we image the magnetic stray field of a single Ni nanorod. We
<context>[NEXA_RESTORE] [**Duality-invariant bimetric formulation of linearized gravity**]{} Claudio Bunster$^{1,2}$, Marc Henneaux$^{1,3}$ and Sergio Hörtner$^3$ ${}^1$[*Centro de Estudios Científicos (CECs), Casilla 1469, Valdivia, Chile*]{} ${}^2$[*Universidad Andrés Bello, Av. República 440, Santiago, Chile*]{} ${}^3$[*Université Libre de Bruxelles and International Solvay Institutes, ULB-Campus Plaine CP231, B-1050 Brussels, Belgium*]{}\ **Abstract** A formulation of linearized gravity which is manifestly invariant under electric-magnetic duality rotations in the internal space of the metric and its dual, and which contains both metrics as basic variables (rather than the corresponding prepotentials), is derived. In this bimetric formulation, the variables have a more immediate geometrical significance, but the action is non-local in space, contrary to what occurs in the prepotential formulation. More specifically, one finds that: (i) the kinetic term is non-local in space (but local in time); (ii) the Hamiltonian is local in space and in time; (iii) the variables are subject to two Hamiltonian constraints, one for each metric. Based in part on the talk “Gravitational
<context>[NEXA_RESTORE] --- abstract: 'Control of the Néel vector in antiferromagnetic materials is one of the challenges preventing their use as active device components. Several methods have been investigated such as exchange bias, electric current, and spin injection, but little is known about strain-mediated anisotropy. This study of the antiferromagnetic [*L*]{}1$_0$-type MnX alloys MnIr, MnRh, MnNi, MnPd, and MnPt shows that a small amount of strain effectively rotates the direction of the Néel vector by 90$^{\circ}$ for all of the materials. For MnIr, MnRh, MnNi, and MnPd, the Néel vector rotates within the basal plane. For MnPt, the Néel vector rotates from out-of-plane to in-plane under tensile strain. The effectiveness of strain control is quantified by a metric of efficiency and by direct calculation of the magnetostriction coefficients. The values of the magnetostriction coefficients are comparable with those from ferromagnetic materials. These results indicate that strain is a mechanism that can be
<context>[NEXA_RESTORE] --- abstract: | We discuss several results in electrostatics: Onsager’s inequality, an extension of Earnshaw’s theorem, and a result stemming from the celebrated conjecture of Maxwell on the number of points of electrostatic equilibrium. Whenever possible, we try to provide a brief historical context and references. [**Keywords:**]{}[*[ Electrostatics, potential theory, Onsager’s inequality, Maxwell’s problem, energy equilibria.]{}*]{} address: - 'MS 4242,Texas A$\&$M University, College Station, TX 77843-4242' - '4202 E. Fowler Ave., CMC342, Tampa, FL 33620' - '4202 E. Fowler Ave., CMC342, Tampa, FL 33620' - '4202 E. Fowler Ave., CMC342, Tampa, FL 33620' author: - Artem Abanov - Nathan Hayford - 'Dima Khavinson$^\sharp$' - Razvan Teodorescu date: | December 2019.\ $\quad \quad ^\sharp$The author’s research is supported by the Simons Foundation, under the grant 513381. title: 'Around a theorem of F. Dyson and A. Lenard: Energy Equilibria for Point Charge Distributions in Classical Electrostatics' --- Introduction. ============= Electrostatics is an ancient subject, as far as
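As a reminder of the classical fact behind the Earnshaw-type results mentioned above (a textbook statement recalled for context, not the paper's extension of it): in any charge-free region the electrostatic potential is harmonic and hence obeys the maximum principle, so it attains no interior extrema and a point charge has no position of stable equilibrium in an electrostatic field alone,

$$ \nabla^2 \Phi = 0 \ \text{ in a charge-free region} \quad\Longrightarrow\quad \Phi \ \text{has no strict local maxima or minima there}. $$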
<context>[NEXA_RESTORE] --- abstract: 'Spectral variability in hyperspectral images can result from factors including environmental, illumination, atmospheric and temporal changes. Its occurrence may lead to the propagation of significant estimation errors in the unmixing process. To address this issue, extended linear mixing models have been proposed which lead to large scale nonsmooth ill-posed inverse problems. Furthermore, the regularization strategies used to obtain meaningful results have introduced interdependencies among abundance solutions that further increase the complexity of the resulting optimization problem. In this paper we present a novel data dependent multiscale model for hyperspectral unmixing accounting for spectral variability. The new method incorporates spatial contextual information to the abundances in extended linear mixing models by using a multiscale transform based on superpixels. The proposed method results in a fast algorithm that solves the abundance estimation problem only once in each scale during each iteration. Simulation results using synthetic and real images compare the performances,
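For readers unfamiliar with the models referred to, a minimal sketch of the setting (generic notation assumed here, not quoted from the paper): in the linear mixing model each pixel spectrum $\mathbf{y}_n$ is a nonnegative, sum-to-one combination of endmember spectra, and extended linear mixing models account for spectral variability by, for example, rescaling the endmembers pixel by pixel,

$$ \mathbf{y}_n = \mathbf{M}\,\mathbf{a}_n + \mathbf{e}_n \qquad\text{and, with variability,}\qquad \mathbf{y}_n = \mathbf{M}\,\mathrm{diag}(\boldsymbol{\psi}_n)\,\mathbf{a}_n + \mathbf{e}_n, \qquad \mathbf{a}_n \ge 0,\ \ \mathbf{1}^{\top}\mathbf{a}_n = 1, $$

where $\mathbf{M}$ collects reference endmembers, $\mathbf{a}_n$ the abundances, $\boldsymbol{\psi}_n$ per-pixel scaling factors and $\mathbf{e}_n$ noise.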
<context>[NEXA_RESTORE] --- abstract: 'Recently, many convolutional neural network (CNN) methods have been designed for hyperspectral image (HSI) classification since CNNs are able to produce good representations of data, which greatly benefits from a huge number of parameters. However, solving such a high-dimensional optimization problem often requires a large amount of training samples in order to avoid overfitting. Additionally, it is a typical non-convex problem affected by many local minima and flat regions. To address these problems, in this paper, we introduce naive Gabor Networks or Gabor-Nets which, for the first time in the literature, design and learn CNN kernels strictly in the form of Gabor filters, aiming to reduce the number of involved parameters and constrain the solution space, and hence improve the performances of CNNs. Specifically, we develop an innovative phase-induced Gabor kernel, which is trickily designed to perform the Gabor feature learning via a linear combination of local low-frequency and
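Since the abstract hinges on constraining CNN kernels to the Gabor form, a minimal sketch of a standard 2D Gabor kernel may help fix ideas. This uses the textbook parameterization (orientation, wavelength, phase, envelope width); it is not the paper's phase-induced kernel or its learning scheme.

\begin{verbatim}
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi, gamma=1.0):
    """Real part of a 2D Gabor filter: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by the orientation theta
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lam + psi)
    return envelope * carrier

# Example: a 7x7 kernel oriented at 45 degrees, usable as a fixed convolutional filter
k = gabor_kernel(size=7, sigma=2.0, theta=np.pi / 4, lam=4.0, psi=0.0)
print(k.shape)  # (7, 7)
\end{verbatim}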
<context>[NEXA_RESTORE] --- abstract: 'The formation of compact stellar-mass binaries is a difficult, but interesting problem in astrophysics. There are two main formation channels: In the field via binary star evolution, or in dense stellar systems via dynamical interactions. The Laser Interferometer Gravitational-Wave Observatory (LIGO) has detected black hole binaries (BHBs) via their gravitational radiation. These detections provide us with information about the physical parameters of the system. It has been claimed that when the Laser Interferometer Space Antenna (LISA) is operating, the joint observation of these binaries with LIGO will allow us to derive the channels that lead to their formation. However, we show that for BHBs in dense stellar systems dynamical interactions could lead to high eccentricities such that a fraction of the relativistic mergers are not audible to LISA. A non-detection by LISA puts a lower limit of about $0.005$ on the eccentricity of a BHB entering the LIGO band.
<context>[NEXA_RESTORE] --- abstract: 'We define two algorithms for propagating information in classification problems with pairwise relationships. The algorithms are based on contraction maps and are related to non-linear diffusion and random walks on graphs. The approach is also related to message passing algorithms, including belief propagation and mean field methods. The algorithms we describe are guaranteed to converge on graphs with arbitrary topology. Moreover they always converge to a unique fixed point, independent of initialization. We prove that the fixed points of the algorithms under consideration define lower-bounds on the energy function and the max-marginals of a Markov random field. The theoretical results also illustrate a relationship between message passing algorithms and value iteration for an infinite horizon Markov decision process. We illustrate the practical application of the algorithms under study with numerical experiments in image restoration, stereo depth estimation and binary classification on a grid.' author: - | Pedro F.
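To illustrate the kind of scheme described, here is a minimal, hypothetical sketch (not the authors' update rules): a damped neighbor-averaging update on a graph, which is a contraction for damping $0<\alpha<1$ and therefore converges to a unique fixed point regardless of initialization, mirroring the convergence guarantee stated above.

\begin{verbatim}
import numpy as np

def propagate(unary, edges, alpha=0.5, iters=100):
    """Iterate x <- (1-alpha)*unary + alpha*(neighbor average); a contraction for 0<alpha<1."""
    n = unary.shape[0]
    x = np.zeros_like(unary)          # initialization does not affect the fixed point
    nbrs = [[] for _ in range(n)]
    for i, j in edges:                # undirected edges given as (i, j) pairs
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        avg = np.array([x[nb].mean(axis=0) if nb else x[i] for i, nb in enumerate(nbrs)])
        x = (1.0 - alpha) * unary + alpha * avg
    return x

# Toy binary classification on a 3-node chain: per-node scores for classes {0, 1}
unary = np.array([[2.0, 0.0], [0.0, 0.1], [0.0, 2.0]])
labels = propagate(unary, edges=[(0, 1), (1, 2)]).argmax(axis=1)
print(labels)
\end{verbatim}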
--- abstract: | The problem of escape of a Brownian particle from a cusp-shaped metastable potential is of special importance in nonadiabatic and weakly-adiabatic rate theory for electron transfer (ET) reactions. In particular, for weakly-adiabatic reactions, the reaction follows an adiabaticity criterion in the presence of a sharp barrier. In contrast to the non-adiabatic case, however, the ET kinetics can be considerably influenced by the medium dynamics.\ In this paper, the problem of the escape time over a dichotomously fluctuating cusp barrier is discussed with its relevance to high-temperature ET reactions in condensed media. author: - 'Bartłomiej' - Ewa - 'Paweł F.' title: Implication of Barrier Fluctuations on the Rate of Weakly Adiabatic Electron Transfer --- Introduction ============ The mechanism of electron transfer (ET) in condensed and biological media goes beyond the universal nonadiabatic approach of Marcus theory.[@Marcus1; @Marcus2; @Ulstrup; @Kuznetsov; @Chandler; @Makarov] In particular, relaxation properties of the medium may slow down
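To make the central quantity of this excerpt concrete, the following minimal sketch estimates the escape time of an overdamped Brownian particle over a barrier whose height switches dichotomously. It uses a simple linear ramp as a stand-in for the cusp potential, and all parameters (barrier heights, switching rate, diffusion constant) are arbitrary illustration values rather than anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def escape_time(h_lo=0.5, h_hi=2.0, nu=1.0, L=1.0, D=0.5, dt=1e-3, t_max=1e3):
    """One realization of the escape time of an overdamped Brownian particle
    over a linear ramp V(x) = h(t) * x / L on [0, L] (a stand-in for the cusp
    barrier), where the height h(t) flips between h_lo and h_hi as a telegraph
    process with rate nu. Reflecting wall at x = 0, absorbing boundary at x = L."""
    x, t, h = 0.0, 0.0, h_lo
    while t < t_max:
        if rng.random() < nu * dt:              # dichotomous barrier flip
            h = h_hi if h == h_lo else h_lo
        x += -(h / L) * dt + np.sqrt(2.0 * D * dt) * rng.normal()
        x = abs(x)                              # reflect at the origin
        t += dt
        if x >= L:                              # particle crossed the barrier
            return t
    return np.inf                               # no escape within t_max

times = [escape_time() for _ in range(200)]
print("mean escape time over the fluctuating barrier:", np.mean(times))
```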
--- abstract: 'A critique of the singularity theorems of Penrose, Hawking, and Geroch is given. It is pointed out that a gravitationally collapsing black hole acts as an ultrahigh energy particle accelerator that can accelerate particles to energies inconceivable in any terrestrial particle accelerator, and that when the energy $E$ of the particles comprising matter in a black hole is $\sim 10^{2}$ GeV or more, or equivalently, the temperature $T$ is $\sim 10^{15}$ K or more, the entire matter in the black hole is converted into quark-gluon plasma permeated by leptons. As quarks and leptons are fermions, it is emphasized that the collapse of a black hole to a space-time singularity is inhibited by Pauli’s exclusion principle. It is also suggested that ultimately a black hole may end up either as a stable quark star, or as a pulsating quark star which may be a source of gravitational radiation, or it may
--- abstract: 'The ground-state properties, including radii, density distributions and one-neutron separation energies, for C, N, O and F isotopes up to the neutron drip line are systematically studied by the fully self-consistent microscopic Relativistic Continuum Hartree-Bogoliubov (RCHB) theory. With the proton density distributions thus obtained, the charge-changing cross sections for C, N, O and F isotopes are calculated using the Glauber model. Good agreement with the data has been achieved. The charge-changing cross sections change only slightly with the neutron number except for proton-rich nuclei. Similar trends in the variation of proton radii and of charge-changing cross sections are observed for each isotope chain, which implies that the proton density plays an important role in determining the charge-changing cross sections.' address: - '${}^{1}$Department of Technical Physics, Peking University, Beijing 100871, China' - '${}^{2}$Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100080, China' - '${}^{3}$Center of Theoretical Nuclear Physics, National Laboratory of
--- abstract: 'We show that charge noise $S_Q$ in Josephson qubits can be produced by fluctuating two level systems (TLS) with electric dipole moments in the substrate using a flat density of states. At high frequencies the frequency and temperature dependence of the charge noise depends on the ratio $J/J_c$ of the electromagnetic flux $J$ to the critical flux $J_c$. It is not widely appreciated that TLS in small qubits can easily be strongly saturated with $J/J_c\gg 1$. Our results are consistent with experimental conclusions that $S_Q\sim 1/f$ at low frequencies and $S_Q\sim f$ at high frequencies.' author: - 'Clare C. Yu$^1$, Magdalena Constantin$^1$, and John M. Martinis$^2$' title: Effect of Two Level System Saturation on Charge Noise in Josephson Junction Qubits --- Noise and decoherence are a major obstacle to using superconducting Josephson junction qubits to construct quantum computers. Recent experiments [@Simmonds2004; @Martinis2005] indicate that a dominant source of decoherence is two level systems (TLS)
--- abstract: | We consider the problem of designing a packet-level congestion control and scheduling policy for datacenter networks. Current datacenter networks primarily inherit the principles that went into the design of the Internet, where congestion control and scheduling are distributed. While a distributed architecture provides robustness, it suffers in terms of performance. Unlike the Internet, a data center is fundamentally a “controlled” environment. This raises the possibility of designing a centralized architecture to achieve better performance. Recent solutions such as Fastpass [@perry2014fastpass] and Flowtune [@perry17flowtune] have provided a proof of this concept. This raises the question: what is the theoretically optimal performance achievable in a data center? We propose a centralized policy that guarantees a per-flow end-to-end flow delay bound of $O$(\#hops $\times$ flow-size $/$ gap-to-capacity). Effectively such an end-to-end delay will be experienced by flows even if we removed congestion control and scheduling constraints as the resulting queueing
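The quoted bound is easy to evaluate for a concrete flow; the numbers below are hypothetical and the evaluation ignores the constant hidden in the $O(\cdot)$ notation.

```python
def delay_bound(hops, flow_size_bits, gap_to_capacity_bps):
    """Order-of-magnitude evaluation of #hops * flow-size / gap-to-capacity,
    the scaling of the per-flow delay bound quoted above (constants ignored)."""
    return hops * flow_size_bits / gap_to_capacity_bps

# Hypothetical flow: 5 hops, a 12 Mbit flow, 100 Mbit/s of spare capacity.
print(delay_bound(5, 12e6, 100e6), "seconds, up to constant factors")
```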
--- author: - | Simon Donig [![image](orcid.png)](https://orcid.org/0000-0002-1741-466X)\ Chair for Digital Humanities\ University Passau, Germany\ [simon.donig@uni-passau.de](simon.donig@uni-passau.de)\ Maria Christoforaki\ Chair for Data Science\ Institute for Computer Science\ University of St.Gallen, Switzerland\ [maria.christoforaki@unisg.ch](maria.christoforaki@unisg.ch)\ Bernhard Bermeitinger [![image](orcid.png)](https://orcid.org/0000-0002-2524-1850)\ Chair for Data Science\ Institute for Computer Science\ University of St.Gallen, Switzerland\ [bernhard.bermeitinger@unisg.ch](bernhard.bermeitinger@unisg.ch)\ Siegfried Handschuh\ Chair for Data Science\ Institute for Computer Science\ University of St.Gallen, Switzerland\ [siegfried.handschuh@unisg.ch](siegfried.handschuh@unisg.ch)\ bibliography: - 'references.bib' date: December 2019 title: | Multimodal Semantic Transfer\ from Text to Image.\ Fine-Grained Image Classification\ by Distributional Semantics. --- Introduction ============ In recent years, image
--- abstract: | We present an overview of scalable load balancing algorithms which provide favorable delay performance in large-scale systems, and yet only require minimal implementation overhead. Aimed at a broad audience, the paper starts with an introduction to the basic load balancing scenario – referred to as the *supermarket model* – consisting of a single dispatcher where tasks arrive that must immediately be forwarded to one of $N$ single-server queues. The supermarket model is a dynamic counterpart of the classical balls-and-bins setup where balls must be sequentially distributed across bins. A popular class of load balancing algorithms consists of so-called power-of-$d$ or JSQ($d$) policies, where an incoming task is assigned to a server with the shortest queue among $d$ servers selected uniformly at random. As the name reflects, this class includes the celebrated Join-the-Shortest-Queue (JSQ) policy as a special case ($d = N$), which has strong stochastic
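A minimal sketch of the power-of-$d$ assignment rule described above (arrivals only, no service completions; the number of servers, the number of arrivals and $d$ are made up for illustration):

```python
import random

def jsq_d(queues, d, rng=random):
    """Power-of-d (JSQ(d)) assignment: sample d servers uniformly at random
    and send the incoming task to the shortest queue among them."""
    candidates = rng.sample(range(len(queues)), d)
    chosen = min(candidates, key=lambda i: queues[i])
    queues[chosen] += 1
    return chosen

queues = [0] * 10          # 10 single-server queues
for _ in range(100):       # 100 arriving tasks, d = 2
    jsq_d(queues, 2)
print(queues)              # noticeably more balanced than purely random assignment
```

Setting `d = len(queues)` recovers the full JSQ policy mentioned in the abstract as the special case $d = N$.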
--- author: - 'Robert J. Perry' --- LIGHT-FRONT QCD: A CONSTITUENT PICTURE OF HADRONS ================================================= MOTIVATION AND STRATEGY ----------------------- We seek to derive the structure of hadrons from the fundamental theory of the strong interaction, QCD. Our work is founded on the hypothesis that a constituent approximation can be [*derived*]{} from QCD, so that a relatively small number of quark [*and gluon*]{} degrees of freedom need be explicitly included in the state vectors for low-lying hadrons. To obtain a constituent picture, we use a Hamiltonian approach in light-front coordinates. I do not believe that light-front Hamiltonian field theory is extremely useful for the study of low-energy QCD unless a constituent approximation can be made, and I do not believe such an approximation is possible unless cutoffs that violate manifest gauge invariance and covariance are employed. Such cutoffs [*inevitably*]{} lead to relevant and marginal effective interactions ([*i.e.*]{}, counterterms) that contain functions of longitudinal momenta. It is
Astro2020 APC White Paper The Dark Energy Spectroscopic Instrument (DESI) **Thematic Areas:** $\square$ Planetary Systems $\square$ Star and Planet Formation $\square$ Formation and Evolution of Compact Objects Cosmology and Fundamental Physics Stars and Stellar Evolution Resolved Stellar Populations and their Environments Galaxy Evolution $\square$ Multi-Messenger Astronomy and Astrophysics **Principal Authors:** Michael E. Levi (Lawrence Berkeley National Laboratory)\ & Lori E. Allen (National Optical Astronomy Observatory) **Email:** melevi@lbl.gov, lallen@noao.edu **Co-authors:** Anand Raichoor (EPFL, Switzerland), Charles Baltay (Yale University), Segev BenZvi (University of Rochester), Florian Beutler (University of Portsmouth, UK), Adam Bolton (NOAO), Francisco J. Castander (IEEC, Spain), Chia-Hsun Chuang (KIPAC), Andrew Cooper (National Tsing Hua University, Taiwan), Jean-Gabriel Cuby (Aix-Marseille University, France), Arjun Dey (NOAO), Daniel Eisenstein (Harvard University), Xiaohui Fan (University of Arizona), Brenna Flaugher (FNAL), Carlos Frenk (Durham University, UK), Alma X. González-Morales (Universidad de Guanajuato, México), Or Graur (CfA), Julien Guy (LBNL), Salman Habib (ANL), Klaus Honscheid (Ohio State University), Stephanie Juneau
--- abstract: 'In recent years, the increasing interest in stochastic model predictive control (SMPC) schemes has highlighted the limitation arising from their inherent computational demand, which has restricted their applicability to slow-dynamics and high-performing systems. To reduce the computational burden, in this paper we extend the probabilistic scaling approach to obtain low-complexity inner approximations of chance-constrained sets. This approach provides probabilistic guarantees at a lower computational cost than other schemes for which the sample complexity depends on the design space dimension. To design candidate simple approximating sets, which approximate the shape of the probabilistic set, we introduce two possibilities: i) fixed-complexity polytopes, and ii) $\ell_p$-norm based sets. Once the candidate approximating set is obtained, it is scaled around its center so as to enforce the desired probabilistic guarantees. The resulting scaled set is then exploited to enforce constraints in the classical SMPC framework. The computational gain obtained with the proposed approach with
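As a sketch of the scaling step only (not the paper's full procedure): take an $\ell_\infty$-ball candidate set centred at a nominal point and, for each sampled realization of an uncertain linear constraint, compute the largest scaling for which the scaled ball still satisfies it; the final scaling is then a low order statistic of these values. The constraint model, the sample size and the use of a plain empirical quantile below are illustrative assumptions; the exact order statistic that certifies the chance constraint is taken from the probabilistic-scaling literature and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chance constraint P[a(w)^T x <= b(w)] >= 1 - eps with sampled (a, b).
n, N, eps = 2, 1000, 0.05
c = np.zeros(n)                       # centre of the candidate set (assumed feasible)
A = rng.normal(size=(N, n))           # sampled constraint normals a_i
b = 1.0 + 0.1 * rng.normal(size=N)    # sampled right-hand sides b_i

# Largest gamma_i such that c + gamma_i * B_inf satisfies the i-th sample:
# the maximum of a_i^T x over the ball equals a_i^T c + gamma * ||a_i||_1.
gammas = np.maximum((b - A @ c) / np.linalg.norm(A, ord=1, axis=1), 0.0)

# Stand-in for the certified order statistic: a low empirical quantile.
gamma_star = np.quantile(gammas, eps)
print("scaled candidate set: ||x - c||_inf <=", round(float(gamma_star), 3))
```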
--- abstract: 'In this review we present an overview of observing facilities for solar research that are planned or will come into operation in the near future. We concentrate on facilities that harbor specific potential for solar magnetometry. We describe the challenges and science goals of future magnetic measurements, the status of magnetic field measurements at different major solar observatories, and provide an outlook on possible upgrades of future instrumentation.' author: - Lucia Kleint - Achim Gandorfer bibliography: - 'journals.bib' - 'papers.bib' date: 'Received: date / Accepted: date' title: 'Prospects of solar magnetometry - from ground and in space' --- Introduction: Complementary worlds - the advantages and drawbacks of ground-based and space-borne instruments {#se:intro} ============================================================================================================= To nighttime astronomers it usually sounds like a paradox that solar magnetic measurements are photon-starved. Detecting four polarization states ($I$, $Q$, $U$, $V$) with high enough spatial resolution (sub-arcsecond) in a relatively large field-of-view (several dozen arcsec), in a short time (less than a second), plus in sufficient wavelengths
--- author: - 'Jin-Beom Bae,' - 'Dongmin Gang,' - and Kimyeong Lee bibliography: - 'ref-AdS5.bib' title: 'Magnetically charged $AdS_5$ black holes from class $\CS$ theories on hyperbolic 3-manifolds' --- [KIAS-P19038]{}\ Introduction and Conclusion =========================== Microscopic understanding of black hole entropy is one of the prominent successes of string theory. Indeed, it is well known that string theory can provide a microscopic interpretation of the Bekenstein-Hawking entropy of asymptotically flat black holes [@Strominger:1996sh]. A lot of work has been done [@Dijkgraaf:1996it; @Shih:2005uc; @David:2006yn] in order to analyze black hole entropy including quantum corrections, based on a 2d field theory approach. Meanwhile, the entropy of black holes in AdS$_3$ has been analyzed via the AdS/CFT correspondence [@Maldacena:1997re], by counting microscopic states of a 2d conformal field theory (CFT) [@Kraus:2006nb]. It has been believed that the entropy of higher-dimensional supersymmetric black holes in AdS$_d$ ($d>3$) can be understood from the boundary superconformal field theory (SCFT) using AdS/CFT. Recently, there has been remarkable progress in this direction. In [@Benini:2015noa; @Benini:2015eyy], the entropy of static dyonic BPS
--- address: 'Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, California, 94551, USA; $^\dagger$ Perceptive Software, Shawnee, KS 66226 ' author: - 'Michael P. Surh, Jess B. Sturgeon$^\dagger$, and Wilhelm G. Wolfer' bibliography: - 'CoalMethods4.bib' title: 'Void Nucleation, Growth, and Coalescence in Irradiated Metals' --- Abstract ======== A novel computational treatment of dense, stiff, coupled reaction rate equations is introduced to study the nucleation, growth, and possible coalescence of cavities during neutron irradiation of metals. Radiation damage is modeled by the creation of Frenkel pair defects and helium impurity atoms. A multi-dimensional cluster size distribution function allows independent evolution of the vacancy and helium content of cavities, distinguishing voids and bubbles. A model with sessile cavities and no cluster-cluster coalescence can result in a bimodal final cavity size distribution with coexistence of small, high-pressure bubbles and large, low-pressure voids. A model that includes unhindered cavity diffusion and coalescence ultimately removes the small helium bubbles from the system, leaving only large
--- abstract: 'We point out a surprising consequence of the usually assumed initial conditions for cosmological perturbations. Namely, a spectrum of Gaussian, linear, adiabatic, scalar, growing mode perturbations not only creates acoustic oscillations of the kind observed on very large scales today, it also leads to the production of shocks in the radiation fluid of the very early universe. Shocks cause departures from local thermal equilibrium as well as creating vorticity and gravitational waves. For a scale-invariant spectrum and standard model physics, shocks form for temperatures $1$ GeV$<T<10^{7}$ GeV. For more general power spectra, such as have been invoked to form primordial black holes, shock formation and the consequent gravitational wave emission provides a signal detectable by current and planned gravitational wave experiments, allowing them to strongly constrain conditions present in the primordial universe as early as $10^{-30}$ seconds after the big bang.' author: - 'Ue-Li Pen' - Neil Turok title: Shocks in the Early
--- abstract: 'Representing domain knowledge is crucial for any task. There has been a wide range of techniques developed to represent this knowledge, from older logic-based approaches to the more recent deep learning based techniques (i.e. embeddings). In this paper, we discuss some of these methods, focusing on the representational expressiveness tradeoffs that are often made. In particular, we focus on the ability of various techniques to encode ‘partial knowledge’ - a key component of successful knowledge systems. We introduce and describe the concepts of *ensembles of embeddings* and *aggregate embeddings* and demonstrate how they allow for partial knowledge.' author: - 'R.V.Guha' bibliography: - 'emt.bib' title: Partial Knowledge in Embeddings --- Motivation {#motivation .unnumbered} ========== Knowledge about the domain is essential to performing any task. Representations of this knowledge have ranged over a broad spectrum in terms of the features and tradeoffs. Recently, with the increased interest in deep neural networks, work has focussed on developing knowledge representations
--- abstract: 'The notion of Haar null set was introduced by J. P. R. Christensen in 1973 and reintroduced in 1992 in the context of dynamical systems by Hunt, Sauer and Yorke. During the last twenty years this notion has been useful in studying exceptional sets in diverse areas. These include analysis, dynamical systems, group theory, and descriptive set theory. Inspired by these various results, we introduce the topological analogue of the notion of Haar null set. We call it Haar meager set. We prove some basic properties of this notion, state some open problems and suggest a possible line of investigation which may lead to the unification of these two notions in a certain context.' address: 'Department of Mathematics, University of Louisville, Louisville, KY 40292, USA' author: - 'Udayan B. Darji' title: On Haar Meager Sets --- Introduction ============ Often in various branches of mathematics one would like to
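For context, the underlying notion can be recalled in the form commonly used for Polish groups (this wording is from the standard literature, not quoted from the paper): a universally measurable set $A \subseteq G$ of a Polish group $G$ is *Haar null* if there exists a Borel probability measure $\mu$ on $G$ such that $\mu(gAh) = 0$ for every $g, h \in G$; for abelian $G$ this reduces to Christensen's original condition $\mu(A + x) = 0$ for all $x \in G$.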
--- abstract: 'Incorporating hierarchical structures like constituency trees has been shown to be effective for various natural language processing (NLP) tasks. However, it is evident that state-of-the-art (SOTA) sequence-based models like the Transformer struggle to encode such structures inherently. On the other hand, dedicated models like the Tree-LSTM, while explicitly modeling hierarchical structures, do not perform as efficiently as the Transformer. In this paper, we attempt to bridge this gap with “Hierarchical Accumulation” to encode parse tree structures into self-attention at constant time complexity. Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT’14 English-German translation task. It also yields improvements over Transformer and Tree-LSTM on three text classification tasks. We further demonstrate that using hierarchical priors can compensate for data shortage, and that our model prefers phrase-level attentions over token-level attentions.' author: - | Xuan-Phi Nguyen$^\ddagger$[^1] , Shafiq Joty$^{\dagger \ddagger}$, Steven C.H. Hoi$^{\dagger}$, Richard Socher$^{\dagger}$\
--- abstract: 'We study the dynamical stability of planetary systems consisting of one hypothetical terrestrial-mass planet ($1$ or $10 \mearth$) and one massive planet ($10 \mearth - 10 \mjup$). We consider masses and orbits that cover the range of observed planetary system architectures (including non-zero initial eccentricities), determine the stability limit through N-body simulations, and compare it to the analytic Hill stability boundary. We show that for given masses and orbits of a two-planet system, a single parameter, which can be calculated analytically, describes the Lagrange stability boundary (no ejections or exchanges) but which diverges significantly from the Hill stability boundary. However, we do find that the actual boundary is fractal, and therefore we also identify a second parameter which demarcates the transition from stable to unstable evolution. We show the portions of the habitable zones of $\rho$ CrB, HD 164922, GJ 674, and HD 7924 which
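For orientation only, a frequently quoted closed-form Hill-type criterion for two small planets on circular, coplanar orbits (Gladman 1993) can be evaluated directly; this is a textbook approximation, not the Lagrange or Hill boundaries analysed in the paper, and the masses and semi-major axes below are invented for the example.

```python
import numpy as np

def mutual_hill_radius(m1, m2, a1, a2, m_star):
    """Mutual Hill radius; planet and stellar masses in the same units,
    semi-major axes in the same units."""
    return ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0) * 0.5 * (a1 + a2)

def hill_stable_circular(m1, m2, a1, a2, m_star):
    """Approximate Hill stability for circular, coplanar orbits: orbital
    separation larger than about 2*sqrt(3) mutual Hill radii."""
    return (a2 - a1) > 2.0 * np.sqrt(3.0) * mutual_hill_radius(m1, m2, a1, a2, m_star)

m_earth = 3.0e-6                       # approximate Earth mass in solar masses
print(hill_stable_circular(1 * m_earth, 10 * m_earth, 1.00, 1.15, 1.0))
```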
--- author: - Remya Nair - and Takahiro Tanaka title: 'Synergy between ground and space based gravitational wave detectors II: Localisation' --- Introduction ============ We are in an exciting era of gravitational wave (GW) astronomy. After the multiple GW detections by the LIGO-VIRGO network [@gw_det], and the successful pathfinder mission of the Laser Interferometer Space Antenna (LISA) [@lisa_pf], we can now look forward to a future where multiple GW detections by both space and ground based interferometers will be the norm. Astronomers rely on various cosmological observations to probe our Universe, including type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background, gravitational lensing, etc. Requiring consistency between these measurements and combining them has helped us converge on what we now know as the standard model of cosmology. Consistency checks between different observations of the same physical phenomenon help us identify systematic effects. On the other hand, combining measurements aids parameter estimation by
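The benefit of combining measurements mentioned above can be illustrated with the simplest possible case, inverse-variance weighting of two independent Gaussian estimates of the same parameter (a generic statistics example with made-up numbers, not the localisation analysis of the paper):

```python
import numpy as np

def combine(estimates, sigmas):
    """Inverse-variance (Fisher) combination of independent Gaussian
    measurements of one parameter; returns the combined mean and error."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# Hypothetical ground-based and space-based estimates of the same parameter.
print(combine([0.30, 0.27], [0.05, 0.08]))   # combined error < either input error
```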
--- abstract: 'In this paper, we discuss the distribution of the t-statistic under the assumption of a normal autoregressive distribution for the underlying discrete-time process. This result generalizes the classical result on the traditional t-distribution, where the underlying discrete-time process follows an uncorrelated normal distribution. For AR(1), however, the underlying process is correlated, all traditional results break down, and the resulting t-statistic follows a new distribution that converges asymptotically to a normal. We give an explicit formula for this new distribution, obtained as the ratio of two dependent distributions (a normal and the distribution of the norm of another independent normal distribution). We also provide a modified statistic that follows a non-central t-distribution. Its derivation comes from finding an orthogonal basis for the initial circulant Toeplitz covariance matrix. Our findings are consistent with the asymptotic distribution of the t-statistic derived for the asymptotic case of a large number
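A quick Monte Carlo experiment makes the breakdown of the classical t-distribution under AR(1) correlation visible; this is only a numerical illustration (the sample size, autoregressive coefficient and number of replications are arbitrary), not the explicit distribution derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, rho, sigma=1.0):
    """Zero-mean Gaussian AR(1) series x_t = rho * x_{t-1} + eps_t,
    started from the stationary distribution."""
    x = np.empty(n)
    x[0] = rng.normal(scale=sigma / np.sqrt(1.0 - rho**2))
    eps = rng.normal(scale=sigma, size=n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

def t_stat(x):
    """Classical one-sample t-statistic for the hypothesis of zero mean."""
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

n, rho, reps = 50, 0.5, 20000
stats = np.array([t_stat(ar1(n, rho)) for _ in range(reps)])
# Under iid normality the standard deviation would be close to 1 (Student-t
# with n-1 degrees of freedom); positive autocorrelation inflates it well beyond 1.
print("empirical std of the t-statistic:", round(float(stats.std()), 2))
```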
--- --- CERN-TH/98-360 **Duality symmetry of Reggeon interactions in multicolour QCD** L.N. Lipatov [$^{*}$\ Petersburg Nuclear Physics Institute,\ Gatchina, 188350, St.Petersburg, Russia]{} **Abstract** The duality symmetry of the Hamiltonian and integrals of motion for Reggeon interactions in multicolour QCD is formulated as an integral equation for the wave function of compound states of $n$ reggeized gluons. In particular, the Odderon problem in QCD is reduced to the solution of the one-dimensional Schrödinger equation. The Odderon Hamiltonian is written in a normal form, which makes it possible to express it as a function of its integrals of motion. ------------------------------------------------------------------------ $^{*}$ [*Supported by the CRDF, INTAS and INTAS-RFBR grants: RP1-253, 1867-93, 95-0311*]{} Introduction ============ The hadron scattering amplitude at high energies $\sqrt{s}$ in the leading logarithmic approximation (LLA) of perturbation theory is obtained by calculating and summing all contributions $\left( g^{2}\ln (s)\right) ^{n}$, where $g$ is the coupling constant. In this approximation the gluon is reggeized and the BFKL Pomeron
--- abstract: 'The spectrum of energy levels is computed for all available angular momentum and parity quantum numbers in the SU(2)-Higgs model, with parameters chosen to match experimental data from the Higgs-$W$ boson sector of the standard model. Several multiboson states are observed, with and without linear momentum, and all are consistent with weakly interacting Higgs and $W$ bosons. The creation operators used in this study are gauge-invariant so, for example, the Higgs operator is quadratic rather than linear in the Lagrangian’s scalar field.' author: - Mark Wurtz and Randy Lewis title: Higgs and $W$ boson spectrum from lattice simulations --- Introduction ============ The complex scalar doublet of the standard model accommodates all of the necessary masses for elementary particles. A testable prediction of this theory is the presence of a fundamental scalar particle: the Higgs boson. Recently, ATLAS and CMS have discovered a Higgs-like boson with a mass near 125 GeV [@Aad:2012tfa; @Chatrchyan:2012ufa]. Lattice simulations of the scalar
--- abstract: 'Let $C/K$ be a curve over a local field. We study the natural semilinear action of Galois on the minimal regular model of $C$ over a field $F$ where it becomes semistable. This allows us to describe the Galois action on the $l$-adic Tate module of the Jacobian of $C/K$ in terms of the special fibre of this model over $F$.' address: - 'Department of Mathematics, University of Bristol, Bristol BS8 1TW, UK' - 'King’s College London, Strand, London WC2R 2LS, UK' - 'King’s College London, Strand, London WC2R 2LS, UK' author: - 'Tim and Vladimir Dokchitser, Adam Morgan' title: Tate module and bad reduction --- Introduction ============ Let $C/K$ be a curve[^1] of positive genus over a non-Archimedean local field, with Jacobian $A/K$. Our goal is to describe the action of the absolute Galois group $G_K$ on the $l$-adic Tate module $T_l A$ in terms of the reduction of $C$ over a field where $C$ becomes semistable, for $l$
--- abstract: 'We report results of lattice Boltzmann simulations of a high-speed drainage of liquid films squeezed between a smooth sphere and a randomly rough plane. A significant decrease in the hydrodynamic resistance force as compared with that predicted for two smooth surfaces is observed. However, this force reduction does not represent slippage. The computed force is exactly the same as that between equivalent smooth surfaces obeying no-slip boundary conditions, but located at an intermediate position between peaks and valleys of asperities. The shift in hydrodynamic thickness is shown to depend on the height and density of roughness elements. Our results do not support some previous experimental conclusions on very large and shear-dependent boundary slip for similar systems.' author: - Christian Kunert - Jens Harting - 'Olga I. Vinogradova' title: 'Random-roughness hydrodynamic boundary conditions' --- [**Introduction.–**]{} It has been recently well recognized that the famous no-slip boundary condition, for more than a hundred years applied to model experiments
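As a point of reference for the drainage geometry above (a classical lubrication result, not a formula from this abstract): the magnitude of the resistance force on a smooth no-slip sphere of radius $R$ approaching a smooth no-slip plane at separation $h \ll R$ with speed $v = -dh/dt$ is

$$F \;=\; \frac{6\pi\mu R^{2} v}{h},$$

so locating the effective no-slip plane a distance $\delta$ below the tops of the asperities amounts to replacing $h$ by $h+\delta$ in this expression, which is how the observed force reduction is interpreted in the abstract.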
--- author: - 'Dietmar Klemm$^{a,b}$' - and Andrea Maiorana$^a$ title: Fluid dynamics on ultrastatic spacetimes and dual black holes --- Introduction ============ The AdS/CFT correspondence has provided us with a powerful tool to get insight into the dynamics of certain field theories at strong coupling by studying classical gravity solutions. In the long wavelength limit, where the mean free path is much smaller than any other scale, one expects that these interacting field theories admit an effective hydrodynamical description. In fact, it was shown in [@Bhattacharyya:2008jc][^1] that the five-dimensional Einstein equations with negative cosmological constant reduce to the Navier-Stokes equations on the conformal boundary of AdS$_5$. The analysis of [@Bhattacharyya:2008jc] is perturbative in a boundary derivative expansion, in which the zeroth order terms describe a conformal perfect fluid. The coefficient of the first subleading term yields the shear viscosity $\eta$ and confirms the famous result $\eta/s=1/(4\pi)$ by Policastro, Son and Starinets [@Policastro:2001yc], which was obtained by different methods.
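At zeroth order in the boundary derivative expansion mentioned above the dual stress tensor is that of a conformal perfect fluid; in standard (mostly-plus) notation, included only for orientation,

$$T^{\mu\nu} \;=\; (\epsilon + p)\,u^{\mu}u^{\nu} + p\,g^{\mu\nu}, \qquad \epsilon = 3p,$$

while the first subleading term $-2\eta\,\sigma^{\mu\nu}$ is what brings in the shear viscosity $\eta$ entering the quoted ratio $\eta/s = 1/(4\pi)$.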
--- abstract: 'Dark energy can modify the dynamics of dark matter if there exists a direct interaction between them. Thus a measurement of the structure growth, e.g., redshift-space distortions (RSD), can provide a powerful tool to constrain the interacting dark energy (IDE) models. For the widely studied $Q=3\beta H\rho_{de}$ model, previous works showed that only a very small coupling ($\beta\sim\mathcal{O}(10^{-3})$) can survive in current RSD data. However, all of these analyses had to assume $w>-1$ and $\beta>0$ due to the existence of the large-scale instability in the IDE scenario. In our recent work \[Phys. Rev. D [**90**]{}, 063005 (2014)\], we successfully solved this large-scale instability problem by establishing a parametrized post-Friedmann (PPF) framework for the IDE scenario. So we, for the first time, have the ability to explore the full parameter space of the IDE models. In this work, we reexamine the observational constraints on the $Q=3\beta H\rho_{de}$ model within the PPF framework.
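For concreteness, the background continuity equations in this class of models are commonly written as follows (sign conventions differ between papers, so this particular choice is an assumption rather than a quotation from the work described above):

$$\dot\rho_{c} + 3H\rho_{c} = Q, \qquad \dot\rho_{de} + 3H(1+w)\rho_{de} = -Q, \qquad Q = 3\beta H\rho_{de},$$

so that, in this convention, $\beta>0$ corresponds to energy being transferred from dark energy to cold dark matter.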
--- abstract: 'The Configuration Interaction (CI) method is applied to the calculation of the structures of a number of positron binding systems, including $e^+$Be, $e^+$Mg, $e^+$Ca and $e^+$Sr. These calculations were carried out in orbital spaces containing about 200 electron and 200 positron orbitals up to $\ell = 12$. Despite the very large dimensions, the binding energy and annihilation rate converge slowly with $\ell$, and the final values do contain an appreciable correction obtained by extrapolating the calculation to the $\ell \to \infty$ limit. The binding energies were 0.00317 hartree for $e^+$Be, 0.0170 hartree for $e^+$Mg, 0.0189 hartree for $e^+$Ca, and 0.0131 hartree for $e^+$Sr.' author: - 'M.W.J.Bromley' - 'J.Mitroy' title: Large dimension Configuration Interaction calculations of positron binding to the group II atoms --- Introduction ============ The ability of positrons to bind to a number of atoms is now well established [@mitroy02b; @schrader01a; @strasburger03a], and all of the group II elements of the periodic table are expected to
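The $\ell \to \infty$ correction described above is commonly obtained by assuming that the gain from each additional partial wave falls off as an inverse power of $\ell$ and summing the implied tail. The Python sketch below illustrates that strategy only; the increments, the reference energy and hence the fitted exponent are hypothetical placeholders, not values from the calculations summarized above.

```python
import numpy as np

def extrapolate_tail(l_values, increments, n_tail=4000):
    """Fit dE_l ~ A / (l + 1/2)**p to the last two computed increments and
    estimate the sum of the neglected tail l = L+1, L+2, ...  (illustrative)."""
    lA, lB = l_values[-2], l_values[-1]
    dA, dB = increments[-2], increments[-1]
    p = np.log(dA / dB) / np.log((lB + 0.5) / (lA + 0.5))
    A = dB * (lB + 0.5) ** p
    tail_l = np.arange(lB + 1, lB + 1 + n_tail)
    return A * np.sum((tail_l + 0.5) ** (-p))

# Hypothetical binding-energy increments (hartree) for l = 8..12.
l_values = np.array([8, 9, 10, 11, 12])
increments = np.array([4.1e-4, 2.9e-4, 2.1e-4, 1.6e-4, 1.2e-4])
E_truncated = 0.0150                      # hypothetical energy with all l <= 12 included
print(E_truncated + extrapolate_tail(l_values, increments))
```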
--- abstract: 'The Hamiltonian actions for extreme and non-extreme black holes are compared and contrasted and a simple derivation of the lack of entropy of extreme black holes is given. In the non-extreme case the wave function of the black hole depends on horizon degrees of freedom which give rise to the entropy. Those additional degrees of freedom are absent in the extreme case.' address: | Centro de Estudios Científicos de Santiago, Casilla 16443, Santiago 9, Chile\ and\ Institute for Advanced Study, Olden Lane, Princeton, New Jersey 0854, USA. author: - 'Claudio Teitelboim [^1]' date: September 1994 title: 'Action and Entropy of Extreme and Non-Extreme Black Holes' --- It has been recently proposed [@hawking], [@horowitz] that extreme black holes have zero entropy [@wilczek]. The purpose of this note is to adhere to this claim by providing an economical derivation of it. The derivation also helps to set the
--- abstract: 'We obtained, for the first time, the spectrum of the diffuse Galactic light (DGL) from general interstellar space in the 1.8-5.3 $\mu$m wavelength region with the low-resolution prism spectroscopy mode of the AKARI Infra-Red Camera (IRC) NIR channel. The 3.3 $\mu$m PAH band is detected in the DGL spectrum at Galactic latitude $\mid b \mid < 15^{\circ }$, and its correlations with the Galactic dust and gas are confirmed. The correlation between the 3.3 $\mu$m PAH band and the thermal emission from the Galactic dust is described not by a simple linear relation but by a relation including extinction. Using this correlation, the spectral shape of the DGL in the optically thin region ($5^{\circ } < \mid b \mid < 15^{\circ }$) was derived as a template spectrum. Assuming that the spectral shape of this template spectrum is uniform at any position, the DGL spectrum can be estimated by scaling this template spectrum using the correlation
--- abstract: 'Random transformations are commonly used for augmentation of the training data with the goal of reducing the uniformity of the training samples. These transformations normally aim at variations that can be expected in images from the same modality. Here, we propose a simple method for transforming the gray values of an image with the goal of reducing cross modality differences. This approach enables segmentation of the lumbar vertebral bodies in CT images using a network trained exclusively with MR images. The source code is made available at <https://github.com/nlessmann/rsgt>' title: Random smooth gray value transformations for cross modality learning with gray value invariant networks --- Introduction ============ Detection and segmentation networks are typically trained for a specific type of images, for instance MR images. Networks that reliably recognize an anatomical structure in those images most often completely fail to recognize the same structure in images from another imaging modality. However, a lot of structures arguably
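The idea of a random smooth gray value transformation can be pictured as jittering the identity intensity-transfer curve at a few control points and interpolating smoothly between them. The sketch below is only an illustration of that idea, assuming images rescaled to $[0,1]$; the authors' actual transformation and parameters are in the linked repository and may differ.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def random_smooth_gray_transform(image, n_knots=5, strength=0.4, seed=None):
    """Apply a random, smooth, monotone gray value mapping to an image in [0, 1].

    Illustrative re-implementation of the general idea only; see the
    repository referenced in the abstract for the authors' version."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_knots)
    y = x + strength * (rng.random(n_knots) - 0.5)   # jitter the identity curve
    y = np.sort(np.clip(y, 0.0, 1.0))                # keep the mapping monotone
    curve = PchipInterpolator(x, y)                  # shape-preserving smooth curve
    return np.clip(curve(image), 0.0, 1.0)

# Example on a random stand-in for an MR slice:
mr_slice = np.random.default_rng(0).random((64, 64))
augmented = random_smooth_gray_transform(mr_slice, seed=1)
```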
--- author: - 'Xiao-Wei Duan,' - 'Min Zhou,' - 'Tong-Jie Zhang' title: Testing consistency of general relativity with kinematic and dynamical probes --- Introduction {#sec:intro} ============ Einstein’s theory of general relativity (GR) has remained at the heart of astrophysics for almost a century after its formulation, and testing it occupies a central position in modern physics [@Berti2015]. The theory has already passed precise experimental tests on Solar System and smaller scales with flying colors. Naturally, testing general relativity on cosmological scales [@Peebles2004] is a current and future target for gravitational physics. The cosmological observations that test general relativity include two traditional classes of probes [@Perivolaropoulos2010]. One class is the so-called “geometric probes”, which includes Type Ia supernovae (as standard candles), Baryon Acoustic Oscillations (BAO) and geometric properties of weak lensing. These probes can determine the Hubble parameter $H(z)$ as a function of the redshift $z$
--- abstract: 'To study subregions of a turbulence velocity field, a long record of velocity data of grid turbulence is divided into smaller segments. For each segment, we calculate statistics such as the mean rate of energy dissipation and the mean energy at each scale. Their values fluctuate significantly, following lognormal distributions at least to a good approximation. Each segment is not under equilibrium between the mean rate of energy dissipation and the mean rate of energy transfer that determines the mean energy. These two rates still correlate among segments when their length exceeds the correlation length. There is also a correlation between the mean rate of energy dissipation and the mean total energy, characterized by the Reynolds number of the whole record, implying that the large-scale flow affects each of the segments.' author: - Hideaki Mouri - Akihiro Hori - Masanori Takaoka title: Fluctuations of statistics among subregions of a turbulence velocity field --- Introduction {#s1} ============ For locally
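A common way to compute the per-segment mean dissipation rate from a single velocity component is the isotropic surrogate $\varepsilon \simeq 15\nu \langle (\partial u/\partial x)^2 \rangle$. The sketch below applies it to a synthetic record purely for illustration; the record, the viscosity and the segment length are placeholders, and the lognormality check here just means inspecting the moments of $\ln \varepsilon$.

```python
import numpy as np

def segment_dissipation(u, dx, nu, segment_len):
    """Per-segment surrogate dissipation 15*nu*<(du/dx)^2> for a 1-D record."""
    n_seg = len(u) // segment_len
    eps = np.empty(n_seg)
    for k in range(n_seg):
        seg = u[k * segment_len:(k + 1) * segment_len]
        dudx = np.gradient(seg, dx)
        eps[k] = 15.0 * nu * np.mean(dudx ** 2)
    return eps

# Synthetic stand-in for a measured velocity record (placeholder data).
rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=2 ** 18)) * 1e-3
eps = segment_dissipation(u, dx=1e-3, nu=1.5e-5, segment_len=4096)
log_eps = np.log(eps)
print(log_eps.mean(), log_eps.std())      # moments of ln(eps) for a lognormality check
```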
--- abstract: 'We construct a new class of positive indecomposable maps in the algebra of $d \times d$ complex matrices. These maps are characterized by the ‘weakest’ positivity property and for this reason they are called atomic. This class provides a new rich family of atomic entanglement witnesses, which constitute an important tool for investigating quantum entanglement. It turns out that they are able to detect states with the ‘weakest’ quantum entanglement.' author: - | Dariusz Chruściński and Andrzej Kossakowski\ Institute of Physics, Nicolaus Copernicus University,\ Grudziadzka 5/7, 87–100 Toruń, Poland title: '**A class of positive atomic maps**' --- Introduction ============ One of the most important problems of quantum information theory [@QIT] is the characterization of mixed states of composed quantum systems. In particular, it is of primary importance to test whether a given quantum state exhibits quantum correlation, i.e. whether it is separable or entangled. For low dimensional systems
--- abstract: '[We show that an analogue of the Ball-Box Theorem holds true for a class of corank 1, non-differentiable tangent subbundles that satisfy a geometric condition. In the final section of the paper we give examples of such bundles and an application to dynamical systems.]{}' author: - Sina Türeli title: 'The Ball-Box Theorem for a Class of Non-differentiable Tangent Subbundles ' --- Introduction ============ Sub-Riemannian geometry is a generalization of Riemannian geometry which is motivated by very physical and concrete problems. It is the language for formalizing questions like: “Can we connect two thermodynamical states by adiabatic paths?” [@Car09], “Can a robot with a certain set of movement rules reach everywhere in a factory?” [@Agr04], “Can a businessman evade tax by following the rules that were set to avoid tax evasion?”, “By adjusting the current we give to a neural system, can we change the initial phase of the system to any other phase we want?” [@Li10]. However, one
--- abstract: | [We first analyse the effect of a square root transformation to the time variable on the convergence of the Crank-Nicolson scheme when applied to the solution of the heat equation with Dirac delta function initial conditions. In the original variables, the scheme is known to diverge as the time step is reduced with the ratio, $\lambda$, of the time step to space step held constant and the value of $\lambda$ controls how fast the divergence occurs. After introducing the square root of time variable we prove that the numerical scheme for the transformed partial differential equation now always converges and that $\lambda$ controls the order of convergence, quadratic convergence being achieved for $\lambda$ below a critical value. Numerical results indicate that the time change used with an appropriate value of $\lambda$ also results in quadratic convergence for the calculation of the price, delta and gamma
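To make the transformation concrete: with $\tau = \sqrt{t}$ the heat equation $u_t = u_{xx}$ becomes $u_\tau = 2\tau\, u_{xx}$, and Crank-Nicolson can then be applied in $\tau$ with the ratio $\lambda = \Delta\tau/\Delta x$ held (approximately) constant. The sketch below is an illustrative implementation of that idea with Dirichlet boundaries and the delta initial condition approximated by $1/\Delta x$ at one node; it is not claimed to reproduce the exact scheme, norms or constants analysed in the paper.

```python
import numpy as np

def cn_sqrt_time(nx=301, lam=0.4, t_final=0.25, x_max=3.0):
    """Crank-Nicolson for u_tau = 2*tau*u_xx (tau = sqrt(t)) with a discrete
    Dirac delta initial condition.  Illustrative choices of grid, lam and
    coefficient treatment; not necessarily those analysed in the paper."""
    dx = 2.0 * x_max / (nx - 1)
    x = np.linspace(-x_max, x_max, nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0 / dx                        # discrete delta function at x = 0

    tau_final = np.sqrt(t_final)
    n_steps = max(1, int(round(tau_final / (lam * dx))))
    dtau = tau_final / n_steps                   # keeps dtau/dx close to lam

    m = nx - 2                                   # interior nodes, u = 0 on the boundary
    D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) / dx ** 2
    I = np.eye(m)
    for n in range(n_steps):
        tau_mid = (n + 0.5) * dtau               # coefficient 2*tau taken at the midpoint
        lhs = I - dtau * tau_mid * D2            # dtau*tau_mid = dtau*(2*tau_mid)/2
        rhs = (I + dtau * tau_mid * D2) @ u[1:-1]
        u[1:-1] = np.linalg.solve(lhs, rhs)
    return x, u

x, u = cn_sqrt_time()
exact = np.exp(-x ** 2 / (4 * 0.25)) / np.sqrt(4 * np.pi * 0.25)   # free-space heat kernel
print(np.max(np.abs(u - exact)))
```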
--- abstract: 'Different change-point type models encountered in statistical inference for stochastic processes give rise to different limiting likelihood ratio processes. In this paper we consider two such likelihood ratios. The first one is an exponential functional of a two-sided Poisson process driven by some parameter, while the second one is an exponential functional of a two-sided Brownian motion. We establish that for sufficiently small values of the parameter, the Poisson type likelihood ratio can be approximated by the Brownian type one. As a consequence, several statistically interesting quantities (such as limiting variances of different estimators) related to the first likelihood ratio can also be approximated by those related to the second one. Finally, we discuss the asymptotics of the large values of the parameter and illustrate the results by numerical simulations.' author: - | Sergueï Dachian\ Laboratoire de Mathématiques\ Université Blaise Pascal\
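For orientation, the Brownian-type limit referred to above is usually an exponential functional of the form (standard in this literature; stated from general knowledge rather than from the excerpt)

$$Z_b(u) \;=\; \exp\!\Big(W(u) - \tfrac{|u|}{2}\Big), \qquad u \in \mathbb{R},$$

with $W$ a two-sided standard Brownian motion, while the Poisson-type ratio replaces $W$ by a suitably compensated two-sided Poisson process; the result described above is that the two are close when the driving parameter is small.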
--- abstract: 'We consider the problem of estimating the location of a single change point in a network generated by a dynamic stochastic block model mechanism. This model produces community structure in the network that exhibits change at a single time epoch. We propose two methods of estimating the change point, together with the model parameters, before and after its occurrence. The first employs a least squares criterion function, takes into consideration the full structure of the stochastic block model, and is evaluated at each point in time. Hence, as an intermediate step, it requires estimating the community structure based on a clustering algorithm at every time point. The second method comprises the following two steps: in the first one, a least squares function is used and evaluated at each time point, but [*ignores the community structure*]{} and just considers a random graph generating mechanism exhibiting a change point.
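The community-agnostic step of the second method can be pictured as a least squares scan over candidate change points, fitting one constant edge density before the candidate and one after. The sketch below does exactly that on simulated snapshots; the data are placeholders, and the actual procedure additionally estimates the block structure and the model parameters on either side of the change point.

```python
import numpy as np

def ls_change_point(adj_seq):
    """adj_seq: array of shape (T, n, n).  Return the candidate time that
    minimises the least-squares error of a 'one edge density before, one
    after' fit, ignoring any community structure (illustrative)."""
    T = adj_seq.shape[0]
    best_tau, best_err = None, np.inf
    for tau in range(1, T):                      # change between snapshots tau-1 and tau
        before, after = adj_seq[:tau], adj_seq[tau:]
        err = ((before - before.mean()) ** 2).sum() + ((after - after.mean()) ** 2).sum()
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau

# Simulated example: edge density jumps from 0.05 to 0.15 at t = 30 of 50.
rng = np.random.default_rng(0)
T, n, tau_true = 50, 80, 30
p = np.where(np.arange(T) < tau_true, 0.05, 0.15)
adj = (rng.random((T, n, n)) < p[:, None, None]).astype(float)
print(ls_change_point(adj))                      # should be close to 30
```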
--- abstract: 'Robotic and animal mapping systems share many challenges and characteristics: they must function in a wide variety of environmental conditions, enable the robot or animal to navigate effectively to find food or shelter, and be computationally tractable from both a speed and storage perspective. With regard to map storage, the mammalian brain appears to take a diametrically opposed approach to all current robotic mapping systems. Where robotic mapping systems attempt to solve the data association problem to minimise representational aliasing, neurons in the brain intentionally break data association by encoding large (potentially unlimited) numbers of places with a single neuron. In this paper, we propose a novel method based on supervised learning techniques that seeks out regularly repeating visual patterns in the environment with mutually complementary co-prime frequencies, and an encoding scheme that enables storage requirements to grow sub-linearly with the size of the environment being mapped. To improve
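One way to see why mutually co-prime repetition periods permit sub-linear storage (an arithmetic illustration only, not the learned encoding proposed in the paper) is the Chinese remainder theorem: a place index up to the product of the periods can be recovered from its residues, while the number of stored pattern templates grows only with the sum of the periods.

```python
from math import prod

def crt_decode(residues, moduli):
    """Recover x (mod prod(moduli)) from x mod m_i for pairwise co-prime m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(a, -1, m): modular inverse (Python >= 3.8)
    return x % M

# Illustrative periods: capacity 7*11*13*17 = 17017 places, storage ~ 7+11+13+17 = 48 templates.
periods = [7, 11, 13, 17]
place = 9000
residues = [place % m for m in periods]
assert crt_decode(residues, periods) == place
```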
--- author: - 'Federico R. Urban,' - 'Stefano Camera,' - and David Alonso bibliography: - 'references.bib' title: 'Detecting ultra-high energy cosmic ray anisotropies through cross-correlations' --- Introduction {#sec:intro} ============ Ultra-high energy cosmic rays (UHECRs), impacting the atmosphere of the Earth with energies in excess of $1\,\mathrm{EeV}$ ($10^{18}\,\mathrm{eV}$), have remained a mystery since their discovery 59 years ago [@Linsley:1961kt; @AlvesBatista:2019tlv]. We do not know what they are: observational data cannot yet fully distinguish between several variants of pure and mixed primary compositions [@Castellina:2019huz; @Bergman:2019aaa]. We do not know where they come from: the astrophysical sources that generate and accelerate UHECRs have not been identified yet; the type of acceleration mechanism that is responsible for their formidable energies has not been discovered, either [@Kotera:2011cp]. What we do know is that the highest energy rays are most likely extra-Galactic. First, if UHECRs were produced within the Galaxy, their arrival directions in the sky would be very different from what we observe [@Tinyakov:2015qfz; @Abbasi:2016kgr; @Aab:2017tyv]. Second,
--- author: - | Juan Li\ [School of Mathematics and Statistics, Shandong University at Weihai, Weihai 264209, P. R. China.]{}\ date: 'June 21, 2012' title: 'Note on stochastic control problems related with general fully coupled forward-backward stochastic differential equations' --- [**Abstract.**]{} In this paper we study stochastic optimal control problems for general fully coupled forward-backward stochastic differential equations (FBSDEs). In Li and Wei [@LW] the authors studied two cases for the diffusion coefficient $\sigma$ of the forward SDE: in one case $\sigma$ depends on the control but not on the second component $Z$ of the solution $(Y, Z)$ of the BSDE, and in the other case $\sigma$ depends on $Z$ but not on the control. Here we study the general case in which $\sigma$ depends on both $Z$ and the control at the same time. The recursive cost functionals are defined through controlled general fully coupled FBSDEs, and the value functions are given
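For readers unfamiliar with the setting, a controlled fully coupled FBSDE of the kind referred to above can be written schematically as follows (generic notation, not copied from the paper):

$$\begin{aligned} dX_t &= b(t, X_t, Y_t, Z_t, u_t)\,dt + \sigma(t, X_t, Y_t, Z_t, u_t)\,dB_t, \qquad X_0 = x,\\ dY_t &= -f(t, X_t, Y_t, Z_t, u_t)\,dt + Z_t\,dB_t, \qquad Y_T = \Phi(X_T), \end{aligned}$$

with recursive cost functional $J(u) = Y_0^{u}$; the case treated here is the one in which $\sigma$ genuinely depends on both $Z_t$ and the control $u_t$.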
--- abstract: 'Preliminary results on the strong interaction shift and width in pionic hydrogen ($\pi H$), obtained using an X-ray spectrometer with spherically bent crystals and CCDs as X-ray detectors, are presented. In the experiment at the Paul Scherrer Institute three different $(np\to1s)$ transitions in $\pi H$ were measured. Moreover, the pion mass measurement using the $(5 \to 4)$ transitions in pionic nitrogen and muonic oxygen is presented.' author: - | Martino Trassinelli [^1]\ *Laboratoire Kastler Brossel, Université P. et M. Curie, F-75252 Paris, France* title: 'PRECISION SPECTROSCOPY OF PIONIC ATOMS: FROM PION MASS EVALUATION TO TESTS OF CHIRAL PERTURBATION THEORY' --- Introduction ============ Pionic hydrogen atoms are unique systems to study the strong interaction at low energies [@gotta2004]. The influence of the strong interaction in pionic hydrogen can be extracted from the $(np\to1s)$ transitions. Compared to pure electromagnetic interaction, the 1s level is affected by an energy shift $\epsilon_{1s}$ and a line broadening
--- abstract: 'The growing number of extragalactic high-energy (HE, $E > 100$ MeV) and very-high-energy (VHE, $E > 100$ GeV) $\gamma$-ray sources that do not belong to the blazar class suggests that VHE $\gamma$-ray production may be a common property of most radio-loud Active Galactic Nuclei (AGN). In a previous paper, we have investigated the signatures of Compton-supported pair cascades initiated by VHE $\gamma$-ray absorption in monochromatic radiation fields, dominated by Ly$\alpha$ line emission from the Broad Line Region. In this paper, we investigate the interaction of nuclear VHE $\gamma$-rays with the thermal infrared radiation field from a circumnuclear dust torus. Our code follows the spatial development of the cascade in full 3-dimensional geometry. We provide a model fit to the broadband SED of the dust-rich, $\gamma$-ray loud radio galaxy Cen A and show that typical blazar-like jet parameters may be used to model the broadband SED, if one allows for an additional cascade
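The kinematics behind cascading on the torus photons is the standard two-photon pair-production threshold (textbook kinematics, not a result of the work described above):

$$E_\gamma\,\varepsilon\,(1-\cos\theta) \;\ge\; 2\,m_e^2 c^4,$$

so a $\sim 1$ TeV $\gamma$-ray can pair-produce on head-on photons with $\varepsilon$ of a few tenths of an eV, i.e. precisely the thermal infrared photons expected from warm circumnuclear dust.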
--- abstract: 'We present the first evolved solutions to a computational task within the [*N*]{}euronal [*Org*]{}anism [*Ev*]{}olution model (***Norgev***) of artificial neural network development. These networks display a remarkable robustness to external noise sources, and can regrow to functionality when severely damaged. In this framework, we evolved a doubling of network functionality (double-NAND circuit). The network structure of these evolved solutions does not follow the logic of human coding, and instead more resembles the decentralized dendritic connection pattern of more biological networks such as the brain.' author: - 'Alan N. Hampton$^{1}$' - | Christoph Adami$^{1,2}$\ \ $^1$Digital Life Laboratory 136-93, California Institute of Technology, Pasadena, CA 91125\ $^2$Jet Propulsion Laboratory 126-347, California Institute of Technology, Pasadena, CA 91109\ adami@caltech.edu nocite: - '[@Dittrich01]' - '[@Astor00]' title: Evolution of Robust Developmental Neural Networks --- Introduction ============ The complexity of mammalian brains, and the animal behaviors they elicit, continue to
--- abstract: 'Finding ways of creating, measuring and manipulating Majorana bound states (MBSs) in superconducting-semiconducting nanowires is a highly pursued goal in condensed matter physics. It was recently proposed that a periodic covering of the semiconducting nanowire with superconductor fingers would allow both gating and tuning the system into a topological phase while leaving room for a local detection of the MBS wavefunction. We perform a detailed, self-consistent numerical study of a three-dimensional (3D) model for a finite-length nanowire with a superconductor superlattice including the effect of the surrounding electrostatic environment, and taking into account the surface charge created at the semiconductor surface. We consider different experimental scenarios where the superlattice is on top or at the bottom of the nanowire with respect to a back gate. The analysis of the 3D electrostatic profile, the charge density, the low energy spectrum and the formation of MBSs reveals a rich phenomenology that
--- abstract: 'Let $M$ be a compact Riemannian manifold with smooth boundary. We obtain the exact long time asymptotic behaviour of the heat kernel on abelian coverings of $M$ with mixed Dirichlet and Neumann boundary conditions. As an application, we study the long time behaviour of the abelianized winding of reflected Brownian motions in $M$. In particular, we prove a Gaussian type central limit theorem showing that when rescaled appropriately, the fluctuations of the abelianized winding are normally distributed with an explicit covariance matrix.' address: - ' ^1^ Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213.' - ' ^2^ Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213.' author: - Xi Geng^1^ - Gautam Iyer^2^ bibliography: - 'refs.bib' title: Long Time Asymptotics of Heat Kernels and Brownian Winding Numbers on Manifolds with Boundary --- [^1] Introduction. ============= Consider a compact Riemannian manifold $M$ with boundary. We address the following questions in this paper: 1. What is the long time asymptotic behaviour of the heat kernel
--- author: - | Md Sarowar Morshed and Md Noor-E-Alam\ [Department of Mechanical & Industrial Engineering]{}\ [Northeastern University ]{}\ [360 Huntington Avenue, Boston, MA 02115, USA]{}\ [Email : mnalam@neu.edu]{} bibliography: - 'aafs.bib' title: Generalized Affine Scaling Algorithms for Linear Programming Problems --- Abstract {#abstract .unnumbered} ======== Interior Point Methods are widely used to solve Linear Programming problems. In this work, we present two primal Affine Scaling algorithms to achieve faster convergence in solving Linear Programming problems. In the first algorithm, we integrate Nesterov’s restarting strategy into the primal Affine Scaling method with an extra parameter, which in turn generalizes the original primal Affine Scaling method. We provide a proof of convergence for the proposed generalized algorithm considering long step sizes. We also provide a proof of convergence for the primal and dual sequences without the degeneracy assumption. This convergence result generalizes the original convergence
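For context, the classical primal Affine Scaling iteration that the paper generalizes can be sketched as below (textbook form only, without the Nesterov restart or the extra parameter introduced in the paper; the step parameter and tolerances are illustrative choices):

```python
import numpy as np

def primal_affine_scaling(A, b, c, x0, gamma=0.6, tol=1e-9, max_iter=500):
    """Classical long-step primal affine scaling for  min c.x  s.t.  A x = b, x > 0.

    x0 must be strictly feasible.  Textbook iteration only; it does NOT include
    the restarting strategy proposed in the paper."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        D2 = np.diag(x ** 2)                           # X_k^2
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
        s = c - A.T @ w                                # reduced costs
        if np.all(s >= -tol) and x @ s <= tol:         # dual feasible + tiny gap
            break
        scaled = x * s                                 # X_k s_k
        if scaled.max() <= tol:                        # descent never blocked by x >= 0
            raise RuntimeError("problem looks unbounded below")
        alpha = gamma / scaled.max()                   # long-step rule keeps x > 0
        x = x - alpha * (x ** 2) * s                   # x_{k+1} = x_k - alpha X_k^2 s_k
    return x, w

# Tiny example:  min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 1,  x >= 0  (optimum at x2 = 1).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
x_opt, w_opt = primal_affine_scaling(A, b, c, x0=np.array([0.3, 0.3, 0.4]))
print(np.round(x_opt, 4), float(c @ x_opt))            # expect x ~ (0, 1, 0), objective ~ -2
```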
--- abstract: 'For a sequence $W$ we count the number $O_W(n)$ of minimal forbidden words no longer than $n$ and prove that $$\overline{\lim_{n \to \infty}} \frac{O_W(n)}{\log_3 n} \geq 1.$$ [^1]' address: - 'Moscow Institute of Physics and Technology, Dolgoprudny, Russia' - 'C.N.R.S., École Normale Supérieure, PSL Research University, France' author: - Igor Melnikov - Ivan Mitrofanov title: On cogrowth function of uniformly recurrent sequences --- Introduction ============ A language (or a subshift) can be defined by the list of [*forbidden subwords*]{}. The linear equivalence class of the counting function for minimal forbidden words is a topological invariant of the corresponding symbolic dynamical system [@Beal]. G. Chelnokov, P. Lavrov and I. Bogdanov [@BogdCheln], [@Cheln1], [@Lavr1], [@Lavr2] estimated the minimum number of forbidden words that define a periodic sequence with a given length of period. We investigate a similar question for uniformly recurrent sequences and prove a logarithmic estimate for [*the cogrowth function*]{}. Preliminaries ============= An [*alphabet*]{} $A$ is a finite set of elements, [*letters*]{} are the elements of an alphabet. The finite
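As a concrete illustration of the object being counted, the brute-force sketch below lists the minimal forbidden words of length at most $n$ for the factor language of a finite word `w`; when `w` is a long prefix of the sequence $W$ this approximates $O_W(n)$, but it is only an illustrative computation, not the construction used in the paper.

```python
from itertools import product

def minimal_forbidden_words(w, n):
    """Brute-force sketch: words u with |u| <= n that are not factors of w
    although both u[1:] and u[:-1] are, i.e. every proper factor occurs."""
    alphabet = sorted(set(w))
    factors = {w[i:j] for i in range(len(w) + 1)
               for j in range(i, min(len(w), i + n) + 1)}
    mfw = []
    for length in range(1, n + 1):
        for letters in product(alphabet, repeat=length):
            u = "".join(letters)
            if u not in factors and u[1:] in factors and u[:-1] in factors:
                mfw.append(u)
    return mfw

# Example: for a prefix of the Thue-Morse sequence the minimal forbidden
# words of length <= 3 are '000' and '111'.
print(minimal_forbidden_words("0110100110010110", 3))
```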
--- abstract: 'Worldwide, sewer networks are designed to transport wastewater to a centralized treatment plant to be treated and returned to the environment. This process is critical for modern society, preventing waterborne illnesses, providing safe drinking water and enhancing general sanitation. To keep a sewer network fully operational, sampling inspections are performed constantly to identify obstructions. Typically, a Closed-Circuit Television system is used to record the inside of pipes and report the obstruction level, which may trigger a cleaning operation. Currently, the obstruction level assessment is done manually, which is time-consuming and inconsistent. In this work, we design a methodology to train a *Convolutional Neural Network* for identifying the level of obstruction in pipes, thus reducing the human effort required on such a frequent and repetitive task. We gathered a database of videos that are explored and adapted to generate useful frames to feed into the model. Our resulting classifier
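A minimal sketch of the kind of frame classifier described is given below. The architecture, the $224\times224$ input size and the choice of four obstruction levels are assumptions made purely for illustration; they are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_obstruction_classifier(num_levels=4, input_shape=(224, 224, 3)):
    """Small illustrative CNN for classifying pipe frames into obstruction
    levels. Expects inputs already scaled to [0, 1]; hyperparameters are
    placeholders, not the authors' model."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_levels, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```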
--- abstract: 'Using a $3.19~\mathrm{fb}^{-1}$ data sample collected at an $e^+e^-$ center-of-mass energy of $E_{\rm cm}=4.178$ GeV with the BESIII detector, we measure the branching fraction of the leptonic decay $D_s^+\to\mu^+\nu_\mu$ to be $\mathcal{B}_{D_s^+\to\mu^+\nu_\mu}=(5.49\pm0.16_{\rm stat.}\pm0.15_{\rm syst.})\times10^{-3}$. Combining our branching fraction with the masses of the $D_s^+$ and $\mu^+$ and the lifetime of the $D_s^+$, we determine $f_{D_s^+}|V_{cs}|=246.2\pm3.6_{\rm stat.}\pm3.5_{\rm syst.}~\mathrm{MeV}$. Using the $c\to s$ quark mixing matrix element $|V_{cs}|$ determined from a global standard model fit, we evaluate the $D_s^+$ decay constant $f_{D_s^+}=252.9\pm3.7_{\rm stat.}\pm3.6_{\rm syst.}$ MeV. Alternatively, using the value of $f_{D_s^+}$ calculated by lattice quantum chromodynamics, we find $|V_{cs}| = 0.985\pm0.014_{\rm stat.}\pm0.014_{\rm syst.}$. These values of $\mathcal{B}_{D_s^+\to\mu^+\nu_\mu}$, $f_{D_s^+}|V_{cs}|$, $f_{D_s^+}$ and $|V_{cs}|$ are each the most precise results to date.' title: '**Determination of the pseudoscalar decay constant $f_{D_s^+}$ via $D_s^+\to\mu^+\nu_\mu$** ' --- The leptonic decay $D^+_s\to \ell^+\nu_\ell$ ($\ell=e$, $\mu$ or $\tau$) offers a unique window into both strong and weak effects in the charm quark sector. In
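The extraction of $f_{D_s^+}|V_{cs}|$ from the measured branching fraction relies on the standard-model expression for the leptonic decay width, quoted here in its textbook form (the muon and $D_s^+$ masses and the $D_s^+$ lifetime $\tau_{D_s^+}$ are external inputs):
$$\mathcal{B}_{D_s^+\to\mu^+\nu_\mu}=\tau_{D_s^+}\,\frac{G_F^2}{8\pi}\,f_{D_s^+}^2\,|V_{cs}|^2\,m_\mu^2\,m_{D_s^+}\left(1-\frac{m_\mu^2}{m_{D_s^+}^2}\right)^2.$$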
--- abstract: 'Complex Enriques surfaces with a finite group of automorphisms are classified into seven types. In this paper, we determine which types of such Enriques surfaces exist in characteristic 2. In particular, we give a one-dimensional family of classical and supersingular Enriques surfaces with the automorphism group $\Aut(X)$ isomorphic to the symmetric group $\mathfrak{S}_5$ of degree five.' address: - 'Faculty of Science and Engineering, Hosei University, Koganei-shi, Tokyo 184-8584, Japan' - 'Graduate School of Mathematics, Nagoya University, Nagoya, 464-8602, Japan' author: - Toshiyuki Katsura - Shigeyuki Kondō title: On Enriques surfaces in characteristic 2 with a finite group of automorphisms --- [^1] Introduction {#sec1} ============ We work over an algebraically closed field $k$ of characteristic 2. Complex Enriques surfaces with a finite group of automorphisms are completely classified into seven types. The main purpose of this paper is to determine which types of such Enriques surfaces exist in characteristic 2. Recall that, over the complex numbers, a generic Enriques surface has
--- abstract: 'It is generally believed that the inhomogeneous Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) phase appears in a color superconductor when the pairing between different quark flavors occurs under mismatched Fermi surfaces. However, the real crystal structure of the LOFF phase is still unclear because an exact treatment of 3D crystal structures is rather difficult. In this work we present a solid-state-like calculation of the ground-state energy of the body-centered cubic (BCC) structure for two-flavor pairing by diagonalizing the Hamiltonian matrix in the Bloch space without assuming a small amplitude of the order parameter. We develop a computational scheme to overcome the difficulties in diagonalizing huge matrices. Our results show that the BCC structure is energetically more favorable than the 1D modulation in a narrow window around the conventional LOFF-normal phase transition point, which indicates the significance of the higher-order terms in the Ginzburg-Landau approach.' author: - 'Gaoqing Cao,$^{1}$ Lianyi He,$^{2}$ and Pengfei
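The computational core of such a calculation follows the generic pattern sketched below: assemble the Hamiltonian matrix in a Bloch basis for each quasi-momentum and diagonalize it numerically. The sketch is deliberately generic; the actual two-flavor pairing Hamiltonian, the basis truncation, and the paper's scheme for handling very large matrices are not reproduced here.

```python
import numpy as np

def bloch_spectrum(h_of_k, k_grid):
    """Diagonalize a Hermitian Bloch-space Hamiltonian H(k) on a grid of
    quasi-momenta and return the eigenvalues at every k.  h_of_k is a
    user-supplied function returning a Hermitian matrix; nothing here
    encodes the authors' pairing model."""
    return np.array([np.linalg.eigvalsh(h_of_k(k)) for k in k_grid])
```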
--- abstract: 'In order to find counterparts of the detected objects in the [*AKARI*]{} Deep Field South (ADFS) in all available wavelengths, we searched public databases (NED, SIMBAD and others). Checking 500 sources brighter than 0.0482 Jy in the [*AKARI*]{} Wide-S band, we found 114 sources with possible counterparts, among which 78 were known galaxies. We present these sources as well as our first attempt to construct spectral energy distributions (SEDs) for the most secure and most interesting sources among them, taking into account all the known data together with the [*AKARI*]{} measurements in four bands.' author: - 'Katarzyna Ma[ł]{}ek$^1$, Agnieszka Pollo$^{2,3}$, Mai Shirahata$^4$, Shuji Matsuura$^{4}$, Mitsunobu Kawada$^5$, and Tsutomu T. Takeuchi$^5$' title: 'Identifications and SEDs of the detected sources from the [*AKARI*]{} Deep Field South' --- Introduction ============ The [*AKARI*]{} Deep Field South (ADFS) is one of the deep fields close to the Ecliptic Pole. The unique property of the ADFS is that the cirrus emission density is the
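The counterpart search described above is, at its core, a positional cross-match between the AKARI detections and external catalogues. A generic sketch of such a match using `astropy` follows; the 20-arcsecond matching radius is an illustrative assumption, not the criterion used for the ADFS identifications.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def match_counterparts(ra1, dec1, ra2, dec2, radius_arcsec=20.0):
    """For each source in catalogue 1 (arrays of RA/Dec in degrees), find the
    nearest source in catalogue 2 and flag matches within the chosen radius.
    The radius is a placeholder value for illustration."""
    c1 = SkyCoord(ra=np.asarray(ra1) * u.deg, dec=np.asarray(dec1) * u.deg)
    c2 = SkyCoord(ra=np.asarray(ra2) * u.deg, dec=np.asarray(dec2) * u.deg)
    idx, sep2d, _ = c1.match_to_catalog_sky(c2)
    good = sep2d < radius_arcsec * u.arcsec
    return idx, good
```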
--- abstract: 'In random networks decorated with Ising spins, an increase of the density of frustrations reduces the transition temperature of the spin-glass ordering. This result is in contradiction to the Bethe theory. Here we investigate whether this effect depends on the small-world property of the network. The results on the specific heat and the spin susceptibility indicate that the effect appears also in spatial networks.' author: - 'Anna Mańka-Krasoń and Krzysztof Ku[ł]{}akowski' title: Frustration and collectivity in spatial networks --- [*PACS numbers:*]{} 75.30.Kz, 64.60.aq, 05.10.Ln\ [*Keywords:*]{} spatial networks; spin-glass; Introduction ============ A random network is an archetypal example of a complex system [@dgm]. If we decorate the network nodes with some additional variables, the problem can be mapped to several applications. In the simplest case, these variables are two-valued; these can be sex or opinion (yes or no) in social networks, states ON and OFF in genetic networks, ’sell’ and ’buy’ in trade networks and so on. Information on
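Simulations of this kind are typically driven by single-spin-flip Metropolis dynamics on the network. The sketch below shows one sweep for Ising spins with $\pm J$ couplings stored on the edges; it is a generic illustration, not the authors' code, and the data layout (a `networkx` graph, edge attribute `'J'`, spins in a NumPy array indexed by node) is an assumption.

```python
import numpy as np
import networkx as nx

def metropolis_sweep(G, spins, T, rng):
    """One Metropolis sweep for an Ising spin glass on a network G whose
    edges carry couplings G[i][j]['J'] in {+1, -1}; energy E = -sum J s_i s_j."""
    beta = 1.0 / T
    for i in rng.permutation(list(G.nodes())):
        dE = 2.0 * spins[i] * sum(G[i][j]["J"] * spins[j] for j in G.neighbors(i))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

# Tiny usage example: a random graph with random +/-1 couplings.
rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(100, 0.05, seed=0)
nx.set_edge_attributes(G, {e: int(rng.choice([-1, 1])) for e in G.edges()}, "J")
spins = rng.choice([-1, 1], size=G.number_of_nodes())
for _ in range(200):
    metropolis_sweep(G, spins, T=1.5, rng=rng)
```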