Columns: text (string, lengths 121 to 2.54k) and summary (string, lengths 23 to 219)
Right-handed neutrinos with MeV to GeV mass are very promising candidates for dark matter (DM). Not only can they solve the missing satellite puzzle, the cusp-core problem of inner DM density profiles, and the too-big-to-fail problem, {\it i.e.} that the unobserved satellites are too big to not have visible stars, but they can also account for the Standard Model (SM) neutrino masses at one loop. We perform a comprehensive study of the right-handed neutrino parameter space and impose the correct observed relic density and SM neutrino mass differences and mixings. We find that the DM masses are in agreement with bounds from big-bang nucleosynthesis, but that these constraints induce sizeable DM couplings to the charged SM leptons. We then point out that previously overlooked limits from current and future lepton flavour violation experiments such as MEG and SINDRUM heavily constrain the allowed parameter space. Since the DM is leptophilic, we also investigate electron recoil as a possible direct detection signal, in particular in the XENON1T experiment. We find that despite the large coupling and low backgrounds, the energy thresholds are still too high and the predicted cross sections too low due to the heavy charged mediator, whose mass is constrained by LEP limits.
MeV neutrino dark matter: Relic density, lepton flavour violation and electron recoil
The concept of intrinsic credibility has been recently introduced to check the credibility of "out of the blue" findings without any prior support. A significant result is deemed intrinsically credible if it is in conflict with a sceptical prior derived from the very same data that would make the effect non-significant. In this paper I propose to use Bayesian prior-predictive tail probabilities to assess intrinsic credibility. For the standard 5% significance level, this leads to a new p-value threshold that is remarkably close to the recently proposed p<0.005 standard. I also introduce the credibility ratio, the ratio of the upper to the lower limit of a standard confidence interval for the corresponding effect size. I show that the credibility ratio has to be smaller than 5.8 such that a significant finding is also intrinsically credible. Finally, a p-value for intrinsic credibility is proposed that is a simple function of the ordinary p-value and has a direct frequentist interpretation in terms of the probability of replicating an effect.
The Assessment of Intrinsic Credibility and a New Argument for p<0.005
This is a continuation of the previous article. For subspaces $M^n(t)$ and $M^{n-m}(t)$, which are invariant manifolds of the differential equation under consideration, we build a change of variables which splits this equation into a system of two independent equations. A notion of equivalence of linear differential equations of different orders is introduced, and necessary and sufficient conditions for this equivalence are given. These results are applied to the Floquet-Lyapunov theory for linear equations with periodic coefficients with period T. In the case when the monodromy matrix of the equation has negative eigenvalues, so that reduction in $R^m$ to an equation with constant coefficients is possible only by doubling the period of the reduction matrix, we prove that equations with negative eigenvalues of the monodromy matrix can be split off in $R^m$ with the help of a real matrix without period doubling. For the fundamental matrix of solutions $X(t)$, $X(0)=E$, of an equation with periodic coefficients, we find the representation $X(t)=\Phi(t)e^{Ht}\Phi^{+}(0)$ with real rectangular matrices $H$ and $\Phi(t)$, $\Phi(t)=\Phi(t+T)$. We give two applications of these results: 1) reduction of a nonlinear differential equation in $R^n$ with a distinguished linear part which is periodic with period T to an equation in $R^m$, $m>n$, with a constant coefficient matrix in the linear part; 2) introduction of amplitude-phase coordinates in a neighbourhood of a periodic orbit of an autonomous differential equation, with separation of a linear part with a constant coefficient matrix.
On invariant manifolds of linear differential equations. II
The hydrodynamic limit of a kinetic Cucker-Smale model is investigated. In addition to the free-transport of individuals and the Cucker-Smale alignment operator, the model under consideration includes a strong local alignment term. This term was recently derived as the singular limit of an alignment operator due to Motsch and Tadmor. The model is enhanced with the addition of noise and a confinement potential. The objective of this work is the rigorous investigation of the singular limit corresponding to strong noise and strong local alignment. The proof relies on the relative entropy method and entropy inequalities which yield the appropriate convergence results. The resulting limiting system is an Euler-type flocking system.
Hydrodynamic limit of the kinetic Cucker-Smale flocking model
In conjunction with huge recent progress in camera and computer vision technology, camera-based sensors have increasingly shown considerable promise for tactile sensing. In comparison to competing technologies (be they resistive, capacitive or magnetic based), they offer super-high resolution while suffering from fewer wiring problems. The human tactile system is composed of various types of mechanoreceptors, each able to perceive and process distinct information such as force, pressure, texture, etc. Camera-based tactile sensors such as GelSight mainly focus on high-resolution geometric sensing on a flat surface, and their force measurement capabilities are limited by the hysteresis and non-linearity of the silicone material. In this paper, we present a miniaturised dome-shaped camera-based tactile sensor that allows accurate force and tactile sensing in a single coherent system. The key novelty of the sensor design is as follows. First, we demonstrate how to build a smooth silicone hemispheric sensing medium with uniform markers on its curved surface. Second, we enhance the illumination of the rounded silicone with diffused LEDs. Third, we construct a force-sensitive mechanical structure in a compact form factor, using springs to perceive forces accurately. Our multi-modal sensor is able to acquire tactile information from multi-axis forces, local force distribution, and contact geometry, all in real-time. We apply an end-to-end deep learning method to process all the information.
A Miniaturised Camera-based Multi-Modal Tactile Sensor
The large variation in seed mass among species has inspired a vast array of theoretical and empirical research attempting to explain this variation. So far, seed mass variation has been investigated by two classes of studies: one class focuses on species varying in seed mass within communities, while the second focuses on variation between communities, most often with respect to resource gradients. Here, we develop a model capable of simultaneously explaining variation in seed mass within and between communities. The model describes resource competition (for both soil and light resources) in annual communities and incorporates two fundamental aspects: light asymmetry (higher light acquisition per unit biomass for larger individuals) and growth allometry (negative dependency of relative growth rate on plant biomass). Results show that both factors are critical in determining patterns of seed mass variation. In general, growth allometry increases the reproductive success of small-seeded species while light asymmetry increases the reproductive success of large-seeded species. Increasing availability of soil resources increases light competition, thereby increasing the reproductive success of large-seeded species and ultimately the community (weighted) mean seed mass. An unexpected prediction of the model is that maximum variation in community seed mass (a measure of functional diversity) occurs under intermediate levels of soil resources. Extensions of the model incorporating size-dependent seed survival and disturbance also show patterns consistent with empirical observations. These overall results suggest that the mechanisms captured by the model are important in determining patterns of species and functional diversity.
Seed mass diversity along resource gradients: the role of allometric growth rate and size-asymmetric competition
In this work we study families of pairs of window functions and lattices which lead to Gabor frames which all possess the same frame bounds. To be more precise, for every generalized Gaussian $g$, we will construct an uncountable family of lattices $\lbrace \Lambda_\tau \rbrace$ such that each pairing of $g$ with some $\Lambda_\tau$ yields a Gabor frame, and all pairings yield the same frame bounds. On the other hand, for each lattice we will find a countable family of generalized Gaussians $\lbrace g_i \rbrace$ such that each pairing leaves the frame bounds invariant. Therefore, we are tempted to speak about "Gabor Frame Sets of Invariance".
Gabor Frame Sets of Invariance - A Hamiltonian Approach to Gabor Frame Deformations
X-ray observations of solar flares routinely reveal an impulsive high-energy and a gradual low-energy emission component, whose relationship is one of the key issues of solar flare study. The gradual and impulsive emission components are believed to be associated with, respectively, the thermal and nonthermal components identified in spectral fitting. In this paper, a prominent, roughly 50-second-long hard X-ray (HXR) pulse of a simple GOES class C7.5 flare on 20 February 2002 is used to study the association between high-energy, non-thermal, impulsive evolution and low-energy, thermal, gradual evolution. We use regularized methods to obtain time derivatives of photon fluxes to quantify the time evolution as a function of photon energy, obtaining a break energy between impulsive and gradual behavior. These break energies are consistent with a constant value of about 11 keV, in agreement with those found spectroscopically between thermal and non-thermal components, but the relative errors of the former exceed 15% and are much greater than the few-percent errors found from the spectral fitting. These errors only weakly depend on assuming an underlying spectral model for the photons, pointing to the current data being inadequate to reduce the uncertainties rather than there being a problem associated with an assumed model. The time derivative method is used to test for the presence of a 'pivot energy' in this flare. Although these pivot energies are marginally consistent with a constant value of about 9 keV, their values in the HXR rise phase appear to be lower than those in the decay phase.
Relationship between Hard and Soft X-ray Emission Components of a Solar Flare
To a rational homology sphere graph manifold one can associate a weighted tree invariant called a splice diagram. It was shown earlier that the splice diagram determines the universal abelian cover of the manifold. In this article, we turn that proof into an algorithm to explicitly construct the universal abelian cover from the splice diagram.
Constructing Universal Abelian Covers of Graph Manifolds
We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM. We introduce three improvements over previous research that lead to this state-of-the-art result. First, we show that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to several metrics. Second, we modify the recurrent architecture to improve spatial diffusion, which allows the network to more effectively capture and propagate image information through its hidden state. Finally, in addition to lossless entropy coding, we use a spatially adaptive bit allocation algorithm to more efficiently use the limited number of bits to encode visually complex image regions. We evaluate our method on the Kodak and Tecnick image sets and compare against standard codecs as well as recently published methods based on deep neural networks.
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
The competent programmer hypothesis states that most programmers are competent enough to create correct or almost correct source code. Because this implies that bugs should usually manifest through small variations of the correct code, the competent programmer hypothesis is one of the fundamental assumptions of mutation testing. Unfortunately, it is still unclear if the competent programmer hypothesis holds and past research presents contradictory claims. Within this article, we provide a new perspective on the competent programmer hypothesis and its relation to mutation testing. We try to re-create real-world bugs through chains of mutations to understand if there is a direct link between mutation testing and bugs. The lengths of these paths help us to understand if the source code is really almost correct, or if large variations are required. Our results indicate that while the competent programmer hypothesis seems to be true, mutation testing is missing important operators to generate representative real-world bugs.
A new perspective on the competent programmer hypothesis through the reproduction of bugs with repeated mutations
We prove that any $\ell$ positive definite $d \times d$ matrices, $M_1,\ldots,M_\ell$, of full rank, can be simultaneously spectrally balanced in the following sense: for any $k < d$ such that $\ell \leq \lfloor \frac{d-1}{k-1} \rfloor$, there exists a matrix $A$ satisfying $\frac{\lambda_1(A^T M_i A) }{ \mathrm{Tr}( A^T M_i A ) } < \frac{1}{k}$ for all $i$, where $\lambda_1(M)$ denotes the largest eigenvalue of a matrix $M$. This answers a question posed by Peres, Popov and Sousi and completes the picture described in that paper regarding sufficient conditions for transience of self-interacting random walks. Furthermore, in some cases we give quantitative bounds on the transience of such walks.
How many matrices can be spectrally balanced simultaneously?
We introduce and analyze a discontinuous Galerkin method for the numerical modelling of the equations of Multiple-Network Poroelastic Theory (MPET) in the dynamic formulation. The MPET model can comprehensively describe functional changes in the brain considering multiple scales of fluids. Concerning the spatial discretization, we employ a high-order discontinuous Galerkin method on polygonal and polyhedral grids and we derive stability and a priori error estimates. The temporal discretization is based on a coupling between a Newmark $\beta$-method for the momentum equation and a $\theta$-method for the pressure equations. After presenting some numerical verification tests, we perform a convergence analysis using an agglomerated mesh of a brain-slice geometry. Finally, we present a simulation in a three-dimensional patient-specific brain reconstructed from magnetic resonance images. The model presented in this paper can be regarded as a preliminary attempt to model perfusion in the brain.
Numerical Modelling of the Brain Poromechanics by High-Order Discontinuous Galerkin Methods
We study contraction under a Markov semi-group and influence bounds for functions in $L^2$ tail spaces, i.e. functions all of whose low level Fourier coefficients vanish. It is natural to expect that certain analytic inequalities are stronger for such functions than for general functions in $L^2$. In the positive direction we prove an $L^{p}$ Poincar\'{e} inequality and moment decay estimates for mean $0$ functions and for all $1<p<\infty$, proving the degree one case of a conjecture of Mendel and Naor as well as the general degree case of the conjecture when restricted to Boolean functions. In the negative direction, we answer negatively two questions of Hatami and Kalai concerning extensions of the Kahn-Kalai-Linial and Harper Theorems to tail spaces. That is, we construct a function $f\colon\{-1,1\}^{n}\to\{-1,1\}$ whose Fourier coefficients vanish up to level $c \log n$, with all influences bounded by $C \log n/n$ for some constants $0<c,C< \infty$. We also construct a function $f\colon\{-1,1\}^{n}\to\{0,1\}$ with nonzero mean whose remaining Fourier coefficients vanish up to level $c' \log n$, with the sum of the influences bounded by $C'(\mathbb{E}f)\log(1/\mathbb{E}f)$ for some constants $0<c',C'<\infty$.
Strong Contraction and Influences in Tail Spaces
We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a "realistic" relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such conditional program generation are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on program sketches, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.
Neural Sketch Learning for Conditional Program Generation
This article constructs examples of associative submanifolds in $G_2$-manifolds obtained by resolving $G_2$-orbifolds using Joyce's generalised Kummer construction. As the $G_2$-manifolds approach the $G_2$-orbifolds, the volume of the associative submanifolds tends to zero. This partially verifies a prediction due to Halverson and Morrison.
Associative submanifolds in Joyce's generalised Kummer constructions
Gaussian process (GP) audio source separation is a time-domain approach that circumvents the inherent phase approximation issue of spectrogram based methods. Furthermore, through its kernel, GPs elegantly incorporate prior knowledge about the sources into the separation model. Despite these compelling advantages, the computational complexity of GP inference scales cubically with the number of audio samples. As a result, source separation GP models have been restricted to the analysis of short audio frames. We introduce an efficient application of GPs to time-domain audio source separation, without compromising performance. For this purpose, we used GP regression, together with spectral mixture kernels, and variational sparse GPs. We compared our method with LD-PSDTF (positive semi-definite tensor factorization), KL-NMF (Kullback-Leibler non-negative matrix factorization), and IS-NMF (Itakura-Saito NMF). Results show that the proposed method outperforms these techniques.
Sparse Gaussian Process Audio Source Separation Using Spectrum Priors in the Time-Domain
We investigate the short-term dynamical evolution of stellar grand-design spiral arms in barred spiral galaxies using a three-dimensional (3D) $N$-body/hydrodynamic simulation. Similar to previous numerical simulations of unbarred, multiple-arm spirals, we find that grand-design spiral arms in barred galaxies are not stationary, but rather dynamic. This means that the amplitudes, pitch angles, and rotational frequencies of the spiral arms are not constant, but change within a few hundred million years (i.e. the typical rotational period of a galaxy). We also find that clear grand-design spirals in barred galaxies appear only when the spirals connect with the ends of the bar. Furthermore, we find that the short-term behaviour of spiral arms in the outer regions ($R>$ 1.5--2 bar radius) can be explained by the swing amplification theory and that the effects of the bar are not negligible in the inner regions ($R<$ 1.5--2 bar radius). These results suggest that, although grand-design spiral arms in barred galaxies are affected by the stellar bar, they essentially originate not as bar-driven stationary density waves, but rather as self-excited dynamic patterns. This implies that a rigidly rotating grand-design spiral could not be a reasonable dynamical model for investigating gas flows and cloud formation, even in barred spiral galaxies.
Short-term dynamical evolution of grand-design spirals in barred galaxies
This paper is concerned with the existence of transition fronts for a one-dimensional two-patch model with KPP reaction terms. Density and flux conditions are imposed at the interface between the two patches. We first construct a pair of suitable super- and subsolutions by making full use of information on the leading edges of two KPP fronts and gluing them through the interface conditions. Then, an entire solution obtained through a limiting argument is shown to be a transition front moving from one patch to the other. This propagating solution admits asymptotic past and future speeds, and it connects two different fronts, each associated with one of the two patches. The paper thus provides the first example of a transition front for a KPP-type two-patch model with interface conditions.
KPP transition fronts in a one-dimensional two-patch habitat
This work is concerned with the study of the adaptivity properties of nonparametric regression estimators over the $d$-dimensional sphere within the global thresholding framework. The estimators are constructed by means of a form of spherical wavelets, the so-called needlets, which enjoy strong concentration properties in both harmonic and real domains. The author establishes the convergence rates of the $L^p$-risks of these estimators, focussing on their minimax properties and proving their optimality over a scale of nonparametric regularity function spaces, namely, the Besov spaces.
Adaptive global thresholding on the sphere
Most of today's popular deep architectures are hand-engineered to be generalists. However, this design procedure usually leads to massively redundant, useless, or even harmful features for specific tasks. Unnecessarily high complexities render deep nets impractical for many real-world applications, especially those without powerful GPU support. In this paper, we attempt to derive task-dependent compact models from a deep discriminant analysis perspective. We propose an iterative and proactive approach for classification tasks which alternates between (1) a pushing step, with an objective to simultaneously maximize class separation, penalize covariances, and push deep discriminants into alignment with a compact set of neurons, and (2) a pruning step, which discards less useful or even interfering neurons. Deconvolution is adopted to reverse 'unimportant' filters' effects and recover useful contributing sources. A simple network growing strategy based on the basic Inception module is proposed for challenging tasks requiring larger capacity than what the base net can offer. Experiments on the MNIST, CIFAR10, and ImageNet datasets demonstrate our approach's efficacy. On ImageNet, by pushing and pruning our grown Inception-88 model, we achieve more accurate models than Inception nets generated during growing, residual nets, and popular compact nets at similar sizes. We also show that our grown Inception nets (without hard-coded dimension alignment) clearly outperform residual nets of similar complexities.
Grow-Push-Prune: aligning deep discriminants for effective structural network compression
The Klein group contains only four elements. Nevertheless, this little group contains a number of remarkable entry points to current highways of modern representation theory of groups. In this paper, we shall describe all possible ways in which the Klein group can act on vector spaces over a field of two elements. These are called representations of the Klein group. This description involves some powerful visual methods of representation theory which build on the work of generations of mathematicians, starting roughly with the work of K. Weierstrass. We also discuss some applications to properties of duality and Heller shifts of the representations of the Klein group.
Representations of the miraculous Klein group
In the present work, we explore the existence, stability and dynamics of single and multiple vortex ring states that can arise in Bose-Einstein condensates. Earlier works have illustrated the bifurcation of such states, in the vicinity of the linear limit, for isotropic or anisotropic three-dimensional harmonic traps. Here, we extend these states to the regime of large chemical potentials, the so-called Thomas-Fermi limit, and explore their properties such as equilibrium radii and inter-ring distance, for multi-ring states, as well as their vibrational spectra and possible instabilities. In this limit, both the existence and stability characteristics can be partially traced to a particle picture that considers the rings as individual particles oscillating within the trap and interacting pairwise with one another. Finally, we examine some representative instability scenarios of the multi-ring dynamics including breakup and reconnections, as well as the transient formation of vortex lines.
Single and Multiple Vortex Rings in Three-Dimensional Bose-Einstein Condensates: Existence, Stability and Dynamics
A nonperturbative model of the glueball is studied. The model is based on the nonperturbative quantization technique suggested by Heisenberg. The 2- and 4-point Green functions for a gauge potential are expressed in terms of two scalar fields. The first scalar field describes quantum fluctuations of the subgroup $SU(n) \subset SU(N)$, and the second one describes quantum fluctuations of the coset $SU(N) / SU(n)$. An effective Lagrangian for the scalar fields is obtained. The coefficients of all terms in the Lagrangian are calculated, and it is shown that they depend on $\dim SU(n)$ and $\dim SU(N)$. It is demonstrated that a spherically symmetric solution describing the glueball does exist.
Scalar model of SU(N) glueball \`a la Heisenberg
Light extra U(1) gauge bosons, so-called hidden photons, which reside in a hidden sector have attracted much attention since they are a well-motivated feature of many scenarios beyond the Standard Model and furthermore could mediate the interaction with hidden sector dark matter. We review limits on hidden photons from past electron beam dump experiments, including two new limits from such experiments at KEK and Orsay. In addition, we study the possibility of having dark matter in the hidden sector. A simple toy model and different supersymmetric realisations are shown to provide viable dark matter candidates in the hidden sector that are in agreement with recent direct detection limits.
Hidden Photons in connection to Dark Matter
Currently, there are about three dozen known super-Earths (M < 10 M_Earth), of which 8 are transiting planets suitable for atmospheric follow-up observations. Some of these planets are exposed to extreme temperatures as they orbit close to their host stars, e.g., CoRoT-7b, and all of them have equilibrium temperatures significantly hotter than the Earth. Such planets can develop atmospheres through (partial) vaporization of their crustal and/or mantle silicates. We investigated the chemical equilibrium composition of such heated systems from 500 to 4000 K and total pressures from 10^-6 to 10^2 bars. The major gases are H2O and CO2 over broad temperature and pressure ranges, and Na, K, O2, SiO, and O at high temperatures and low pressures. We discuss the differences in atmospheric composition arising from vaporization of SiO2-rich (i.e., felsic) silicates (like Earth's continental crust) and MgO-, FeO-rich (i.e., mafic) silicates like the bulk silicate Earth. The computational results will be useful in planning spectroscopic studies of the atmospheres of Earth-like exoplanets.
Vaporization of the Earth: Application to Exoplanet Atmospheres
We consider the ability of deep neural networks to represent data that lies near a low-dimensional manifold in a high-dimensional space. We show that deep networks can efficiently extract the intrinsic, low-dimensional coordinates of such data. We first show that the first two layers of a deep network can exactly embed points lying on a monotonic chain, a special type of piecewise linear manifold, mapping them to a low-dimensional Euclidean space. Remarkably, the network can do this using an almost optimal number of parameters. We also show that this network projects nearby points onto the manifold and then embeds them with little error. We then extend these results to more general manifolds.
Efficient Representation of Low-Dimensional Manifolds using Deep Networks
The formulation of quasi-local conformal Killing horizons (CKHs) is extended to include rotation. This necessitates that the horizon be foliated by 2-spheres, which may be distorted. The matter degrees of freedom which fall through the horizon are taken to be a real scalar field. We show that these rotating CKHs also admit a first law in differential form.
Quasilocal rotating conformal Killing horizons
We study function spaces consisting of analytic functions with fast decay on horizontal strips of the complex plane with respect to a given weight function. Their duals, so-called spaces of (ultra)hyperfunctions of fast growth, generalize the spaces of Fourier hyperfunctions and Fourier ultrahyperfunctions. An analytic representation theory for their duals is developed and applied to characterize the non-triviality of these function spaces in terms of the growth order of the weight function. In particular, we show that the Gelfand-Shilov spaces of Beurling type $\mathcal{S}^{(p!)}_{(M_p)}$ and Roumieu type $\mathcal{S}^{\{p!\}}_{\{M_p\}}$ are non-trivial if and only if $$ \sup_{p \geq 2}\frac{(\log p)^p}{h^pM_p} < \infty, $$ for all $h > 0$ and some $h > 0$, respectively. We also study boundary values of holomorphic functions in spaces of ultradistributions of exponential type, which may be of quasianalytic type.
On the non-triviality of certain spaces of analytic functions. Hyperfunctions and ultrahyperfunctions of fast growth
Open Science has been a rising theme in the landscape of science policy in recent years. The goal is to make research that emerges from publicly funded science findable, accessible, interoperable and reusable (FAIR) for use by other researchers. Knowledge utilization policies aim to efficiently make scientific knowledge beneficial for society at large. This paper demonstrates how Astronomy aspires to be open and transparent, given its criteria for high research quality, which aim at pushing knowledge forward and at clear communication of findings. However, the use of quantitative metrics in research evaluation puts pressure on the researcher, such that taking the extra time for transparent publishing of data and results is difficult, given that astronomers are rewarded not for the quality of research papers but rather for their quantity. This paper explores the current mode of openness in Astronomy and how incentives due to funding, publication practices and indicators affect this field. The paper concludes with some recommendations on how policies such as making science more open have the potential to contribute to scientific quality in Astronomy.
Knowledge Utilization and Open Science Policies: Noble aims that ensure quality research or Ordering discoveries like a pizza?
Determinantal point processes (DPPs) offer an elegant tool for encoding probabilities over subsets of a ground set. Discrete DPPs are parametrized by a positive semidefinite matrix (called the DPP kernel), and estimating this kernel is key to learning DPPs from observed data. We consider the task of learning the DPP kernel, and develop for it a surprisingly simple yet effective new algorithm. Our algorithm offers the following benefits over previous approaches: (a) it is much simpler; (b) it yields equally good and sometimes even better local maxima; and (c) it runs an order of magnitude faster on large problems. We present experimental results on both real and simulated data to illustrate the numerical performance of our technique.
Fixed-point algorithms for learning determinantal point processes
We discuss the hypotheses that the cosmological baryon asymmetry and entropy were produced in the early Universe by a phase transition of the scalar fields in the framework of the spontaneous baryogenesis scenario. We show that annihilation of the matter-antimatter clouds during cosmological hydrogen recombination could distort the CMB anisotropies and polarization by delaying recombination. After recombination, the annihilation of the antibaryonic clouds (ABC) and baryonic matter can produce peak-like reionization at high redshifts, before the formation of quasars and early galaxies. We discuss the constraints on the parameters of the spontaneous baryogenesis scenario from the recent WMAP CMB anisotropy and polarization data, and possible manifestations of the antimatter clouds in the upcoming PLANCK data.
Antimatter from the cosmological baryogenesis and the anisotropies and polarization of the CMB radiation
We study the effects of coupling between layers of stochastic neural field models with laminar structure. In particular, we focus on how the propagation of waves of neural activity in each layer is affected by the coupling. Synaptic connectivities within and between each layer are determined by integral kernels of an integrodifferential equation describing the temporal evolution of neural activity. Excitatory neural fields, with purely positive connectivities, support traveling fronts in each layer, whose speeds are increased when coupling between layers is considered. Studying the effects of noise, we find coupling also serves to reduce the variance in the position of traveling fronts, as long as the noise sources to each layer are not completely correlated. Neural fields with asymmetric connectivity support traveling pulses whose speeds are decreased by interlaminar coupling. Again, coupling reduces the variance in traveling pulse position, when noise is considered that is not totally correlated between layers. To derive our stochastic results, we employ a small-noise expansion, also assuming interlaminar connectivity scales similarly. Our asymptotic results agree reasonably with accompanying numerical simulations.
Coupling layers regularizes wave propagation in laminar stochastic neural fields
We consider K-mouflage models, which are K-essence theories coupled to matter. We analyse their quantum properties and in particular the quantum corrections to the classical Lagrangian. We set up the renormalisation programme for these models and show that K-mouflage theories involve a recursive construction whereby each set of counter-terms introduces new divergent quantum contributions, which in turn must be subtracted by new counter-terms. This tower of counter-terms can be constructed by recursion and allows one to calculate the finite renormalised action of the model. In particular, the classical action is not renormalised and the finite corrections to the renormalised action contain only higher derivative operators. We establish an operational criterion for classicality, where the corrections to the classical action are negligible, and show that this is satisfied in cosmological and astrophysical situations for (healthy) K-mouflage models which pass the solar system tests. We also find that these models are quantum stable around astrophysical and cosmological backgrounds. We then consider the possible embedding of the K-mouflage models in an Ultra-Violet completion. We find that the healthy models which pass the solar system tests all violate the positivity constraint which would follow from the unitarity of the putative UV completion, implying that these healthy K-mouflage theories have no UV completion. We then analyse their behaviour at high energy and find that the classicality criterion is satisfied in the vicinity of a high-energy collision, implying that the classical K-mouflage theory can be applied in this context. Moreover, the classical description becomes more accurate as the energy increases, in a way compatible with the classicalisation concept.
The Quantum Field Theory of K-mouflage
We introduce a notion of freeness for $RO$-graded equivariant generalized homology theories, considering spaces or spectra $E$ such that the $R$-homology of $E$ splits as a wedge of the $R$-homology of induced virtual representation spheres. The full subcategory of these spectra is closed under all of the basic equivariant operations, and this greatly simplifies computation. Many examples of spectra and homology theories are included along the way. We refine this to a collection of spectra analogous to the pure and isotropic spectra considered by Hill--Hopkins--Ravenel. For these spectra, the $RO$-graded Bredon homology is extremely easy to compute, and if these spaces have additional structure, then this can also be easily determined. In particular, the homology of a space with this property naturally has the structure of a co-Tambara functor (compatibly with any additional product structure). We work this out in the example of $BU_{\mathbb R}$ and coinduced versions of it. We finish by describing readily computable bar and twisted bar spectral sequences, giving Bredon homology for various $E_{\infty}$ pushouts, and we apply this to describe the homology of $BBU_{\mathbb R}$.
Freeness and equivariant stable homotopy
An asymptotic expansion is established for time averages of translation flows on flat surfaces. This result, which extends earlier work of A. Zorich and G. Forni, yields limit theorems for translation flows. The argument, close in spirit to that of G. Forni, uses the approximation of ergodic integrals by holonomy-invariant Hoelder cocycles on trajectories of the flows. The space of holonomy-invariant Hoelder cocycles is finite-dimensional and is given by an explicit construction. First, a symbolic representation for a uniquely ergodic translation flow is obtained following S. Ito and A. M. Vershik, and then the space of cocycles is constructed using a family of finitely-additive complex-valued holonomy-invariant measures on the asymptotic foliations of a Markov compactum.
Finitely-additive measures on the asymptotic foliations of a Markov compactum
In the context of the spectral action and the noncommutative geometry approach to the standard model, we build a model based on a larger symmetry. With this "grand symmetry" it is natural to have the scalar field necessary to obtain the Higgs mass in the vicinity of 126 GeV. This larger symmetry mixes gauge and spin degrees of freedom without introducing extra fermions. Requiring the noncommutative space to be an almost commutative geometry (i.e. the product of a manifold with a finite-dimensional internal space) gives conditions for the breaking of this grand symmetry to the standard model.
Grand Symmetry, Spectral Action, and the Higgs mass
The main goal of this paper is to discuss how to integrate the capabilities of crowdsourcing platforms with workflow support systems, so as to enable a wider group of people to engage and interact with business tasks. This work is thus an attempt to expand the functional capabilities of typical business systems by allowing selected process tasks to be performed by unlimited human resources. Opening business tasks to crowdsourcing within established Business Process Management Systems (BPMS) will improve the flexibility of company processes and allow for a lower workload and greater specialization among the staff employed on-site. The presented conceptual work is based on the current international standards in this field, promoted by the Workflow Management Coalition. To this end, the functioning of business platforms was analysed and their functionality presented visually, followed by a proposal and a discussion of how to implement crowdsourcing within workflow systems.
Deploying Crowdsourcing for Workflow Driven Business Process
Keeping the two fundamental postulates of the special theory of relativity, the principle of relativity and the constancy of the one-way velocity of light in all inertial frames of reference, and assuming two generalized Finslerian structures of gravity-free space and time in the usual inertial coordinate system, we can modify the special theory of relativity. The modified theory is still characterized by the localized Lorentz transformation between any two usual inertial coordinate systems. Together with quantum mechanics, it features a convergent and invariant quantum field theory. The modified theory also involves a new velocity distribution for free particles that differs from the Maxwell distribution. It is claimed that the deviation of the new distribution from the Maxwell formula will provide an experimental means of judging the modified special relativity theory.
Velocity distribution of free particles in the modified special relativity theory
While solid-state devices offer naturally reliable hardware for modern classical computers, thus far quantum information processors resemble vacuum tube computers in being neither reliable nor scalable. Strongly correlated many-body states stabilized in topologically ordered matter offer the possibility of naturally fault-tolerant computing, but are challenging both to engineer and to control coherently, and cannot be easily adapted to different physical platforms. We propose an architecture which achieves some of the robustness properties of topological models but with a drastically simpler construction. Quantum information is stored in the symmetry-protected degenerate ground states of spin-1 chains, while quantum gates are performed by adiabatic non-Abelian holonomies using only single-site fields and nearest-neighbor couplings. Gate operations respect the symmetry, and so inherit some protection from noise and disorder from the symmetry-protected ground states.
Holonomic quantum computing in symmetry-protected ground states of spin chains
The design of better automated dialogue evaluation metrics offers the potential to accelerate evaluation research on conversational AI. However, existing trainable dialogue evaluation models are generally restricted to classifiers trained in a purely supervised manner, which suffer a significant risk from adversarial attacks (e.g., a nonsensical response that enjoys a high classification score). To alleviate this risk, we propose an adversarial training approach to learn a robust model, ATT (Adversarial Turing Test), that discriminates machine-generated responses from human-written replies. In contrast to previous perturbation-based methods, our discriminator is trained by iteratively generating unrestricted and diverse adversarial examples using reinforcement learning. The key benefit of this unrestricted adversarial training approach is allowing the discriminator to improve robustness in an iterative attack-defense game. Our discriminator shows high accuracy on strong attackers including DialoGPT and GPT-3.
An Adversarially-Learned Turing Test for Dialog Generation Models
The Cauchy problem for the Yang-Mills system in two space dimensions is treated for data with minimal regularity assumptions. In the classical case of data in $L^2$-based Sobolev spaces, we have to assume that the number of derivatives is more than $3/4$ above the critical regularity with respect to scaling. For data in $L^r$-based Fourier-Lebesgue spaces, this result can be improved by $1/4$ derivative in the sense of scaling as $r \to 1$.
Low regularity well-posedness for the Yang-Mills system in 2D
Most of the existing works on dialogue generation are data-driven models trained directly on corpora crawled from websites. They mainly focus on improving the model architecture to produce better responses but pay little attention to the quality of the training data. In this paper, we propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query. A Rank-aware Calibration (RC) network is designed to construct the multi-level contrastive optimization objectives. Since these objectives are calculated at the sentence level, they may erroneously encourage/suppress the generation of uninformative/informative words. To tackle this incidental issue, on one hand, we design an exquisite token-level strategy for estimating the instance loss more accurately. On the other hand, we build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words. We evaluate the proposed model on a carefully annotated dialogue dataset, and the results suggest that our model can generate more relevant and diverse responses compared to the baseline models.
Enhancing Dialogue Generation via Multi-Level Contrastive Learning
We prove that elliptic tubes over properly convex domains of the real projective space are C-convex and complete Kobayashi-hyperbolic. We also study a natural construction of complexification of convex real projective manifolds.
Convexity properties and complete hyperbolicity of Lempert's elliptic tubes
Quasar-galaxy pairs at small separations are important probes of gas flows in the disk-halo interface in galaxies. We study host galaxies of 198 MgII absorbers at $0.39\le z_{abs}\le1.05$ that show detectable nebular emission lines in the SDSS spectra. We report measurements of impact parameter (5.9$\le D[kpc]\le$16.9) and absolute B-band magnitude ($-18.7\le {\rm M_B}\le -22.3$ mag) of host galaxies of 74 of these absorbers using multi-band images from the DESI Legacy Imaging Survey, more than doubling the number of known host galaxies with $D\le17$ kpc. This has allowed us to quantify the relationship between MgII rest equivalent width ($W_{2796}$) and D, with best-fit parameters of $W_{2796}(D=0) = 3.44\pm 0.20$ Angstrom and an exponential scale length of 21.6$^{+2.41}_{-1.97}$ kpc. We find a significant anti-correlation between $M_B$ and D, and between $M_B$ and $W_{2796}$, consistent with the brighter galaxies producing stronger MgII absorption. We use stacked images to detect the average emission from galaxies in the full sample. Using these images and stacked spectra, we derive the mean stellar mass ($9.4\le log(M_*/M_\odot) \le 9.8$), star formation rate ($2.3\le{\rm SFR}[M_\odot yr^{-1}] \le 4.5$), age (2.5$-$4 Gyr), metallicity (12+log(O/H)$\sim$8.3) and ionization parameter (log~q[cm s$^{-1}$]$\sim$ 7.7) for these galaxies. The average $M_*$ found is lower than those of MgII absorbers studied in the literature. The average SFR and metallicity inferred are consistent with those expected for the main sequence and the known stellar mass-metallicity relation, respectively. High spatial resolution follow-up spectroscopic and imaging observations of this sample are imperative for probing gas flows close to the star-forming regions of high-$z$ galaxies.
Nature of the Galaxies On Top Of Quasars producing MgII absorption
The collapse of a massive star with low angular momentum content is commonly argued to result in the formation of a black hole without an accompanying bright transient. Our goal in this Letter is to understand the flow in and around a newly-formed black hole, involving accretion and rotation, via general relativistic hydrodynamics simulations aimed at studying the conditions under which infalling material can accrete without forming a centrifugally supported structure and, as a result, generate no effective feedback. If the feedback from the black hole is, on the other hand, significant, the collapse would be halted and we suggest that the event is likely to be followed by a bright transient. We find that feedback is only efficient if the specific angular momentum of the infalling material at the innermost stable circular orbit exceeds that of geodesic circular flow at that radius by at least $\approx 20\%$. We use the results of our simulations to constrain the maximal stellar rotation rates of the disappearing massive progenitors PHL293B-LBV and N6946-BH1, and to provide an estimate of the overall rate of disappearing massive stars. We find that about a few percent of single O-type stars with measured rotational velocities are expected to spin below the critical value before collapse and are thus predicted to vanish without a trace.
On the maximum stellar rotation to form a black hole without an accompanying luminous transient
Relativistic runaway electron avalanches (RREAs) are generally accepted as a source of thunderstorm gamma-ray radiation. Avalanches can multiply in the electric field via the relativistic feedback mechanism, based on processes with gamma-rays and positrons. This paper shows that a non-uniform electric field geometry can lead to a new RREA multiplication mechanism, "reactor feedback", due to the exchange of high-energy particles between different accelerating regions within a thundercloud. A new method for the numerical simulation of RREA dynamics within heterogeneous electric field structures is proposed. The developed analytical description and the numerical simulation enable us to derive necessary conditions for the occurrence of terrestrial gamma-ray flashes (TGFs) in a system with reactor feedback. Observable properties of TGFs influenced by the proposed mechanism are discussed.
Relativistic runaway electron avalanches within complex thunderstorm electric field structures
An oscillating universe model is discussed, in which the singularity of the initial state of the universe is avoided by postulating an upper limit on spacetime curvature. This also results in the devising of the simplest possible structure, a primitive particle (usually called a preon), which can be considered the basic constituent of matter and which has no properties except that of carrying a unit chromoelectric charge. The SU(3)xU(1) symmetry of its field results in the emergence of a unique set of structures reproducing exactly the observed variety of the fundamental fermions and gauge bosons. The discussed scheme allows one to find answers to many fundamental questions of standard particle physics and cosmology based on very few primary constituents.
An oscillating universe model based on chromoelectric fields
Vehicle-to-Vehicle (V2V) communication has great potential to improve the reaction accuracy of different driver assistance systems in critical driving situations. Cooperative Adaptive Cruise Control (CACC), which is an automated application, provides drivers with extra benefits such as traffic throughput maximization and collision avoidance. CACC systems must be designed in a way that is sufficiently robust against special maneuvers such as interfering vehicles cutting into the CACC platoon or hard braking by leading cars. To address this problem, a Neural-Network (NN)-based cut-in detection and trajectory prediction scheme is proposed in the first part of this paper. Next, a probabilistic framework is developed in which the cut-in probability is calculated based on the output of the cut-in prediction block. Finally, a specific Stochastic Model Predictive Controller (SMPC) is designed which incorporates this cut-in probability to enhance its reaction against the detected dangerous cut-in maneuver. The overall system is implemented and its performance is evaluated using realistic driving scenarios from the Safety Pilot Model Deployment (SPMD).
A Learning-based Stochastic MPC Design for Cooperative Adaptive Cruise Control to Handle Interfering Vehicles
This paper studies the optimal mechanisms for a vertically integrated utility to dispatch and incentivize third-party demand response (DR) providers in its territory. A framework with three-layer coupled Stackelberg and simultaneous games is proposed to study the interactions and competition among the profit-seeking processes of the utility, the third-party DR providers, and the individual end users (EUs) in the DR programs. Two coupled single-leader-multiple-follower Stackelberg games with a three-layer structure capture the interactions among the utility (modeled in the upper layer), the third-party DR providers (modeled in the middle layer), and the EUs in each DR program (modeled in the lower layer). The competition among the EUs in each DR program is captured through a non-cooperative simultaneous game. An inconvenience cost function is proposed to model the DR provision willingness and capacity of different EUs. The Stackelberg game between the middle-layer DR provider and the lower-layer EUs is solved by converting the original bi-level program to a single-level program. This converted single-level program is embedded in an iterative algorithm toward solving the entire coupled-games framework. Case studies are performed on the IEEE 34-bus and IEEE 69-bus test systems to illustrate the application of the proposed framework.
Optimal Utilization of Third-Party Demand Response Resources in Vertically Integrated Utilities: A Game Theoretic Approach
We study the early work scheduling problem on identical parallel machines in order to maximize the total early work, i.e., the parts of non-preemptive jobs executed before a common due date. By preprocessing and constructing an auxiliary instance which has several good properties, we propose an efficient polynomial time approximation scheme with running time $O(n)$, which improves the result in [Gy\"{o}rgyi, P., Kis, T. (2020). A common approximation framework for early work, late work, and resource leveling problems. {\it European Journal of Operational Research}, 286(1), 129-137], and a fully polynomial time approximation scheme with running time $O(n)$ when the number of machines is a fixed number, which improves the result in [Chen, X., Liang, Y., Sterna, M., Wang, W., B{\l}a\.{z}ewicz, J. (2020b). Fully polynomial time approximation scheme to maximize early work on parallel machines with common due date. {\it European Journal of Operational Research}, 284(1), 67-74], where $n$ is the number of jobs, and the hidden constant depends on the desired accuracy.
Improved approximation schemes for early work scheduling on identical parallel machines with common due date
For a graph $H$, a graph $G$ is $H$-saturated if $G$ does not contain $H$ as a subgraph but, for any $e \in E(\overline{G})$, $G+e$ contains $H$. In this note, we prove a sharp lower bound for the number of paths and walks of length $2$ in $n$-vertex $K_{r+1}$-saturated graphs. We then use this bound to give a lower bound on the spectral radii of such graphs which is asymptotically tight for each fixed $r$ and $n\to\infty$.
$K_{r+1}$-saturated graphs with small spectral radius
Eta Carinae is considered to be a massive colliding wind binary system with a highly eccentric ($e \sim 0.9$), 5.54-yr orbit. However, the companion star continues to evade direct detection, as the primary dwarfs its emission at most wavelengths. Using three-dimensional (3-D) SPH simulations of Eta Car's colliding winds and radiative transfer codes, we are able to compute synthetic observables across multiple wavebands for comparison to the observations. The models show that the presence of a companion star has a profound influence on the observed HST/STIS UV spectrum and H-alpha line profiles, as well as the ground-based photometric monitoring. Here, we focus on the bore-hole effect, wherein the fast wind from the hot secondary star carves a cavity in the dense primary wind, allowing increased escape of radiation from the hotter/deeper layers of the primary's extended wind photosphere. The results have important implications for interpretations of Eta Car's observables at multiple wavelengths.
Multi-Wavelength Implications of the Companion Star in Eta Carinae
It is known that the Kadison-Singer Problem (KS) and the Paving Conjecture (PC) are equivalent to the Bourgain-Tzafriri Conjecture (BT). Also, it is known that (PC) fails for $2$-paving projections with constant diagonal $1/2$. But the proofs of this fact are existence proofs. We will use variations of the discrete Fourier Transform matrices to construct concrete examples of these projections and projections with constant diagonal $1/r$ which are not $r$-pavable in a very strong sense. In 1989, Bourgain and Tzafriri showed that the class of zero diagonal matrices with small entries (on the order of $\le 1/log^{1+\epsilon}n$, for an $n$-dimensional Hilbert space) are {\em pavable}. It has always been assumed that this result also holds for the BT-Conjecture - although no one formally checked it. We will show that this is not the case. We will show that if the BT-Conjecture is true for vectors with small coefficients (on the order of $\le C/\sqrt{n}$) then the BT-Conjecture is true and hence KS and PC are true.
The Bourgain-Tzafriri conjecture and concrete constructions of non-pavable projections
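For flavor, here is a toy version (standard linear algebra, not the paper's specific non-pavable examples) of how discrete Fourier transform matrices yield projections with constant diagonal: any $k$ columns of the unitary DFT matrix span a subspace whose orthogonal projection has every diagonal entry equal to $k/n$.

```python
import numpy as np

n, k = 8, 4                                  # k/n = 1/2 constant diagonal
F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
V = F[:, :k]                                 # k orthonormal columns
P = V @ V.conj().T                           # orthogonal projection onto span(V)

print(np.allclose(P @ P, P))                 # True: P is idempotent
print(np.round(np.diag(P).real, 6))          # every entry equals k/n = 0.5
```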
We report the detection of the cyclotron (CR) and magneto-plasmon (MPR) resonances near the Fermi surface of a high-mobility 2DES by microwave photoresistance measurements. We observe large-amplitude photoresistance oscillations originating from higher-order CR, i.e., transitions between non-adjacent Landau levels. Such transitions are drastically enhanced at low magnetic field compared to those previously known in the high-field limit. The scattering time of the CR is found to be nearly one order of magnitude longer than that of Shubnikov-de Haas oscillations. Finally, distinct photoresistance peaks are observed in addition to the CR features. These are identified as resonances of the low-frequency MP modes at a cut-off wavelength determined by the width of the 2DES sample.
Microwave Photoresistance Measurements of Magneto-excitations near a 2D Fermi Surface
Expanding a lower-dimensional problem to a higher-dimensional space and then projecting back is often beneficial. This article rigorously investigates this perspective in the context of finite mixture models, namely how to improve inference for mixture models by using auxiliary variables. Despite the large literature on mixture models and several empirical examples, there is no previous work that gives a general theoretical justification for including auxiliary variables in mixture models, even for special cases. We provide a theoretical basis for comparing inference for multivariate mixture models with the corresponding inference for marginal univariate mixture models. Analytical results for several special cases are established. We show that the probability of correctly allocating mixture memberships and the information number for the means of the primary outcome in a bivariate model with two Gaussian mixtures are generally larger than those in each univariate model. Simulations under a range of scenarios, including misspecified models, are conducted to examine the improvement. The method is illustrated by two real applications in ecology and causal inference.
Improving Inference of Gaussian Mixtures Using Auxiliary Variables
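A quick illustrative simulation (synthetic data and hypothetical parameters, not the paper's analysis) of the claimed effect: adding a correlated auxiliary variable improves the probability of correctly allocating mixture memberships.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, n)                    # true component memberships
x = rng.normal(2.0 * z, 1.0)                 # primary outcome
y = rng.normal(2.0 * z, 1.0)                 # auxiliary variable

def allocation_accuracy(data):
    labels = GaussianMixture(n_components=2, random_state=0).fit(data).predict(data)
    acc = (labels == z).mean()
    return max(acc, 1 - acc)                 # account for label switching

print(allocation_accuracy(x.reshape(-1, 1)))           # univariate mixture
print(allocation_accuracy(np.column_stack([x, y])))    # bivariate: typically higher
```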
Solar coronal mass ejections (CMEs) produce adverse space weather effects at Earth. Planets in the close habitable zone of magnetically active M dwarfs may experience more extreme space weather than at Earth, including frequent CME impacts leading to atmospheric erosion and leaving the surface exposed to extreme flare activity. Similar erosion may occur for hot Jupiters with close orbits around solar-like stars. We have developed a model, Forecasting a CME's Altered Trajectory (ForeCAT), which predicts a CME's deflection. We adapt ForeCAT to simulate CME deflections for the mid-type M dwarf V374 Peg and hot Jupiters with solar-type hosts. V374 Peg's strong magnetic fields can trap CMEs at the M dwarf's Astrospheric Current Sheet, the location of the minimum in the background magnetic field. Solar-type CMEs behave similarly, but have much smaller deflections and do not get trapped at the Astrospheric Current Sheet. The probability of planetary impact decreases with increasing inclination of the planetary orbit with respect to the Astrospheric Current Sheet: 0.5 to 5 CME impacts per day for M dwarf exoplanets, 0.05 to 0.5 CME impacts per day for solar-type hot Jupiters. We determine the minimum planetary magnetic field necessary to shield a planet's atmosphere from the CME impacts. M dwarf exoplanets require values between tens and hundreds of Gauss. Hot Jupiters around a solar-type star, however, require a more reasonable <30 G. These values exceed the magnitude required to shield a planet from the stellar wind, suggesting CMEs may be the key driver of atmospheric losses.
Probability of CME Impact on Exoplanets Orbiting M Dwarfs and Solar-Like Stars
We report on a new multiscale method for the study of systems with wide separation of short-range forces acting on short time scales and long-range forces acting on much slower scales. We consider the case of the Poisson-Boltzmann equation, which describes the long-range forces using the Boltzmann formula (i.e., we assume the medium to be in quasi local thermal equilibrium). We developed a new approach where fields and particle information (mediated by the equations for their moments) are solved self-consistently. The new approach is implicit and numerically stable, providing exact energy conservation. We tested different implementations, all of which lead to exact energy conservation. The new method requires the solution of a large set of non-linear equations. We considered three solution strategies: Jacobian-Free Newton-Krylov; an alternative, called field hiding, based on hiding part of the residual calculation and replacing it with direct solutions; and a Direct Newton-Schwarz solver that considers a simplified single-particle-based Jacobian. The field hiding strategy proves to be the most efficient approach.
Implicit temporal discretization and exact energy conservation for particle methods applied to the Poisson-Boltzmann equation
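As a hint of what the Jacobian-Free Newton-Krylov strategy looks like in practice, here is a toy backward-Euler step for an unrelated stiff system, solved with SciPy's matrix-free Newton-Krylov solver (purely illustrative; the paper's coupled field-moment equations are far larger):

```python
import numpy as np
from scipy.optimize import newton_krylov

dt = 0.1
u_old = np.array([1.0, 0.0])

def rhs(u):                              # made-up stiff nonlinear ODE
    return np.array([-50.0 * u[0] + u[1], u[0] - 2.0 * u[1] ** 3])

def residual(u_new):                     # F(u_new) = 0 defines the implicit step
    return u_new - u_old - dt * rhs(u_new)

u_new = newton_krylov(residual, u_old)   # no Jacobian is ever assembled
print(u_new)
```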
Cosmic chronometers may be used to measure the age difference between passively evolving galaxy populations to calculate the Hubble parameter H(z). The age estimator emerges from the relationship between the amplitude of the rest frame Balmer break at 4000 angstroms and the age of a galaxy, assuming that there is one single stellar population within each galaxy. However, recent literature has shown possible contamination (up to 2.4% of the stellar mass in a high redshift sample) of a young component embedded within the predominantly old population of quiescent galaxies. We compared the data with the predictions of each model, using a new approach of distinguishing between systematic and statistical errors (in previous works, these had incorrectly been added in quadrature) and evaluating the effects of contamination by a young stellar component. The ages inferred using cosmic chronometers represent a galaxy-wide average rather than a characteristic of the oldest population alone. The average contribution from the young component to the rest luminosity at 4000 angstroms may constitute a third of the luminosity in some samples, which means that this is far from negligible. This ratio is significantly dependent on stellar mass, proportional to M^{-0.7}. Consequently, the measurements of the absolute value of the age or the differential age between different redshifts are incorrect and make the previous calculations of H(z) very inaccurate. Some cosmological models, such as the Einstein-de Sitter model or quasi-steady state cosmology, which are rejected under the assumption of a purely old population, can be made compatible with the predicted ages of the Universe as a function of redshift if we take this contamination into account. However, the static Universe models are rejected by these H(z) measurements, even when this contamination is taken into account.
Impact of young stellar components on quiescent galaxies: deconstructing cosmic chronometers
In this paper, we study locally strongly convex affine hyperspheres in the unimodular affine space $\mathbb{R}^{n+1}$ which, as Riemannian manifolds, are locally isometric to the Riemannian product of two Riemannian manifolds both possessing constant sectional curvatures. As the main result, a complete classification of such affine hyperspheres is established. Moreover, as direct consequences, affine hyperspheres of dimensions 3 and 4 with parallel Ricci tensor are also classified.
On product affine hyperspheres in $\mathbb{R}^{n+1}$
Fully homomorphic encryption is an encryption method with the property that any computation on the plaintext can be performed by a party having access to the ciphertext only. Here, we formally define and give schemes for quantum homomorphic encryption, which is the encryption of quantum information such that quantum computations can be performed given the ciphertext only. Our schemes allow for arbitrary Clifford group gates, but become inefficient for circuits with large complexity, measured in terms of the non-Clifford portion of the circuit (we use the "$\pi/8$" non-Clifford group gate, which is also known as the $T$-gate). More specifically, two schemes are proposed: the first scheme has a decryption procedure whose complexity scales with the square of the number of $T$-gates (compared with a trivial scheme in which the complexity scales with the total number of gates); the second scheme uses a quantum evaluation key of length given by a polynomial of degree exponential in the circuit's $T$-gate depth, yielding a homomorphic scheme for quantum circuits with constant $T$-depth. Both schemes build on a classical fully homomorphic encryption scheme. A further contribution of ours is to formally define the security of encryption schemes for quantum messages: we define quantum indistinguishability under chosen plaintext attacks in both the public and private-key settings. In this context, we show the equivalence of several definitions. Our schemes are the first of their kind that are secure under modern cryptographic definitions, and can be seen as a quantum analogue of classical results establishing homomorphic encryption for circuits with a limited number of multiplication gates. Historically, such results appeared as precursors to the breakthrough result establishing classical fully homomorphic encryption.
Quantum homomorphic encryption for circuits of low $T$-gate complexity
In this paper, the power flow solution of the two bus network is used to analytically characterise maximum power transfer limits of distribution networks, when subject to both thermal and voltage constraints. Traditional analytic methods are shown to reach contradictory conclusions on the suitability of reactive power for increasing power transfer. Therefore, a more rigorous analysis is undertaken, yielding two solutions, both fully characterised by losses. The first is the well-known thermal limit. The second we define as the `marginal loss-induced maximum power transfer limit'. This is a point at which the marginal increases in losses are greater than increases in generated power. The solution is parametrised in terms of the ratio of resistive to reactive impedance, and yields the reactive power required. The accuracy and existence of these solutions are investigated using the IEEE 34 bus distribution test feeder, and show good agreement with the two bus approximation. The work has implications for the analysis of reactive power interventions in distribution networks, and for the optimal sizing of distributed generation.
Loss Induced Maximum Power Transfer in Distribution Networks
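A rough numeric companion to the two-bus analysis (hypothetical per-unit values): hold the generator bus at its voltage limit, sweep the voltage angle, and watch delivered power peak while generation keeps rising; past that point, marginal losses exceed marginal generation.

```python
import numpy as np

V1 = 1.0 + 0j                    # slack bus voltage
v2 = 1.06                        # generator bus held at its voltage limit
Z = 0.08 + 0.04j                 # line impedance with distribution-like R/X ratio

delta = np.linspace(0.0, np.pi / 2, 500)
V2 = v2 * np.exp(1j * delta)
I = (V2 - V1) / Z                # current flowing from bus 2 toward bus 1
P_gen = (V2 * np.conj(I)).real   # power produced at bus 2
P_del = (V1 * np.conj(I)).real   # power actually arriving at bus 1
P_loss = np.abs(I) ** 2 * Z.real # note that P_gen - P_del equals P_loss

k = int(np.argmax(P_del))        # the loss-induced maximum power transfer point
print(delta[k], P_gen[k], P_del[k], P_loss[k])
```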
We show that the Lipschitz-free space with the Radon--Nikod\'{y}m property and a Daugavet point recently constructed by Veeorg is in fact a dual space isomorphic to $\ell_1$. Furthermore, we answer an open problem from the literature by showing that there exists a superreflexive space, in the form of a renorming of $\ell_2$, with a $\Delta$-point. Building on these two results, we are able to renorm every infinite-dimensional Banach space with a $\Delta$-point. Next, we establish powerful relations between existence of $\Delta$-points in Banach spaces and their duals. As an application, we obtain sharp results about the influence of $\Delta$-points for the asymptotic geometry of Banach spaces. In addition, we prove that if $X$ is a Banach space with a shrinking $k$-unconditional basis with $k < 2$, or if $X$ is a Hahn--Banach smooth space with a dual satisfying the Kadets--Klee property, then $X$ and its dual $X^*$ fail to contain $\Delta$-points. In particular, we get that no Lipschitz-free space with a Hahn--Banach smooth predual contains $\Delta$-points. Finally, we present a purely metric characterization of the molecules in Lipschitz-free spaces that are $\Delta$-points, and we solve an open problem about the representation of finitely supported $\Delta$-points in Lipschitz-free spaces.
Delta-points and their implications for the geometry of Banach spaces
We establish a spectral multiplier theorem associated with a Schr\"odinger operator $H=-\Delta+V(x)$ in $\mathbb{R}^3$. We present a new approach employing the Born series expansion for the resolvent. This approach provides an explicit integral representation for the difference between a spectral multiplier and a Fourier multiplier, and it allows us to treat a large class of Schr\"odinger operators without Gaussian heat kernel estimates. As an application to nonlinear PDEs, we show the local-in-time well-posedness of a 3d quintic nonlinear Schr\"odinger equation with a potential.
A Spectral Multiplier Theorem associated with a Schr\"odinger Operator
A logic is defined that allows one to express information about statistical probabilities and about degrees of belief in specific propositions. By interpreting the two types of probabilities in one common probability space, the given semantics are well suited to model the influence of statistical information on the formation of subjective beliefs. Cross-entropy minimization is a key element in these semantics, the use of which is justified by showing that the resulting logic exhibits some very reasonable properties.
A Logic for Default Reasoning About Probabilities
We report a search for muon neutrino disappearance in the $\Delta m^{2}$ region of 0.5-40 $eV^2$ using data from both SciBooNE and MiniBooNE experiments. SciBooNE data provides a constraint on the neutrino flux, so that the sensitivity to $\nu_{\mu}$ disappearance with both detectors is better than with just MiniBooNE alone. The preliminary sensitivity for a joint $\nu_\mu$ disappearance search is presented.
Search for Muon Neutrino Disappearance in a Short-Baseline Accelerator Neutrino Beam
Individual laser cooled atoms are delivered on demand from a single atom magneto-optic trap to a high-finesse optical cavity using an atom conveyor. Strong coupling of the atom with the cavity field allows simultaneous cooling and detection of individual atoms for time scales exceeding 15 s. The single atom scatter rate is studied as a function of probe-cavity detuning and probe Rabi frequency, and the experimental results are in good agreement with theoretical predictions. We demonstrate the ability to manipulate the position of a single atom relative to the cavity mode with excellent control and reproducibility.
Deterministic loading of individual atoms to a high-finesse optical cavity
We study nonintersecting Brownian motions with two prescribed starting and ending positions, in the neighborhood of a tacnode in the time-space plane. Several expressions have been obtained in the literature for the critical correlation kernel $K^{\mathrm{tac}}(x,y)$ that describes the microscopic behavior of the Brownian motions near the tacnode. One approach, due to Kuijlaars, Zhang and the author, expresses the kernel (in the single time case) in terms of a $4\times 4$ matrix valued Riemann-Hilbert problem. Another approach, due to Adler, Ferrari, Johansson, van Moerbeke and Vet\H o in a series of papers, expresses the kernel in terms of resolvents and Fredholm determinants of the Airy integral operator acting on a semi-infinite interval $[\sigma,\infty)$, involving some objects introduced by Tracy and Widom. In this paper we prove the equivalence of both approaches. We also obtain a rank-2 property for the derivative of the tacnode kernel. Finally, we find a Riemann-Hilbert expression for the multi-time extended tacnode kernel.
The tacnode kernel: equality of Riemann-Hilbert and Airy resolvent formulas
Localization transitions as a function of temperature require a many-body mobility edge in energy, separating localized from ergodic states. We argue that this scenario is inconsistent because local fluctuations into the ergodic phase within the supposedly localized phase can serve as mobile bubbles that induce global delocalization. Such fluctuations inevitably appear with a low but finite density anywhere in any typical state. We conclude that the only possibility for many-body localization to occur are lattice models that are localized at all energies. Building on a close analogy with a model of assisted two-particle hopping, where interactions induce delocalization, we argue why hot bubbles are mobile and do not localize upon diluting their energy. Numerical tests of our scenario show that previously reported mobility edges cannot be distinguished from finite-size effects.
Absence of many-body mobility edges
This paper presents a data-driven receding horizon fault estimation method for additive actuator and sensor faults in unknown linear time-invariant systems, with enhanced robustness to stochastic identification errors. State-of-the-art methods construct fault estimators with identified state-space models or Markov parameters, but they do not compensate for identification errors. Motivated by this limitation, we first propose a receding horizon fault estimator parameterized by predictor Markov parameters. This estimator provides (asymptotically) unbiased fault estimates as long as the subsystem from faults to outputs has no unstable transmission zeros. When the identified Markov parameters are used to construct the above fault estimator, zero-mean stochastic identification errors appear as model uncertainty multiplied with unknown fault signals and online system inputs/outputs (I/O). Based on this fault estimation error analysis, we formulate a mixed-norm problem for the offline robust design that regards online I/O data as unknown. An alternative online mixed-norm problem is also proposed that can further reduce estimation errors when the online I/O data have large amplitudes, at the cost of increased computational burden. Based on a geometrical interpretation of the two proposed mixed-norm problems, systematic methods to tune the user-defined parameters therein are given to achieve desired performance trade-offs. Simulation examples illustrate the benefits of our proposed methods compared to recent literature.
Data-Driven Robust Receding Horizon Fault Estimation
Axisymmetric three-dimensional solitary waves in uniform two-component mixture Bose-Einstein condensates are obtained as solutions of the coupled Gross-Pitaevskii equations with equal intracomponent but varying intercomponent interaction strengths. Several families of solitary wave complexes are found: (1) vortex rings of various radii in each of the components, (2) a vortex ring in one component coupled to a rarefaction solitary wave of the other component, (3) two coupled rarefaction waves, (4) either a vortex ring or a rarefaction pulse coupled to a localised disturbance of a very low momentum. The continuous families of such waves are shown in the momentum-energy plane for various values of the interaction strengths and the relative differences between the chemical potentials of two components. Solitary wave formation, their stability and solitary wave complexes in two-dimensions are discussed.
Solitary wave complexes in two-component mixture condensates
Spherically symmetric simulations of stellar core collapse and post-bounce evolution are used to test the sensitivity of the supernova dynamics to different variations of the input physics. We consider a state-of-the-art description of the neutrino-nucleon interactions, possible lepton-number changing neutrino reactions in the neutron star, and the potential impact of hydrodynamic mixing behind the supernova shock.
Core-collapse supernova simulations: Variations of the input physics
We study the magnetization dynamics in a ferromagnet-insulator-superconductor tunnel junction and the associated buildup of the electrical polarization. We show that for an open circuit, the induced voltage varies strongly and nonmonotonically with the precessional frequency, and can be enhanced significantly by the superconducting correlations. For frequencies much smaller or much larger than the superconducting gap, the voltage drops to zero, while when these two energy scales are comparable, the voltage is peaked at a value determined by the driving frequency. We comment on the potential utilization of the effect for the low-temperature spatially-resolved spectroscopy of magnetic dynamics.
Dynamic Magnetoelectric Effect in Ferromagnet-Superconductor Tunnel Junctions
We prove that for renormalizable Yang-Mills gauge theory with arbitrary compact gauge group (with at most a single abelian factor) and matter coupling, the absence of gauge anomalies can be established at the one-loop level. This proceeds by relating the gauge anomaly to perturbative agreement, which formalizes background independence.
Background independence and the Adler-Bardeen theorem
The results of our studies lie in developing and implementing the basic principles of digital sorting of Laguerre-Gaussian (LG) modes by radial numbers, both for a non-degenerate and a degenerate state of a vortex beam subject to perturbations in the form of a hard-edged aperture of variable radius. The digital sorting of LG beams over the orthogonal basis involves the use of higher-order intensity moments and subsequent scanning of the modulated beam images at the focal plane of a spherical lens. As a result, we obtain a system of linear equations for the squared mode amplitudes and the cross amplitudes of the perturbed beam. The solution of the equations allows one to determine the amplitude of each LG mode and to restore both the real mode array and the combined beam as a whole. First, we developed a digital sorting algorithm, and then two types of vortex beams were experimentally studied on its basis: a single LG beam and a composition of single LG beams with the same topological charges (azimuthal numbers) and different radial numbers. The beam was perturbed by means of a circular hard-edged aperture with different radii R. As a result of the perturbation, a set of secondary LG modes with different radial numbers k appears, characterized by an amplitude spectrum. The spectrum obtained makes it possible to restore both the real array of LG modes and the perturbed beam itself with a high degree of correlation. As a measure of the uncertainty induced by the perturbation, we measured the informational entropy (Shannon entropy).
Digital sorting of perturbed Laguerre-Gaussian beams by radial numbers via high-order intensity moments
We prove local Lipschitz regularity for bounded minimizers of functionals with nonstandard $p,q$-growth with the source term in the Lorentz space $L(N,1)$ under the restriction $q<p+1+p\,\min\left\{\frac 1N,\frac{2(p-1)}{Np-2p+2}\right\}$. This extends the recent work by Beck-Mingione to bounded minimizers under weaker hypothesis and is sharp for some special ranges of $p$, $q$ and $N$.
Borderline Lipschitz regularity for bounded minimizers of functionals with (p,q)-growth
We investigate the shift current bulk photovoltaic response of materials close to a band inversion topological phase transition. We find that the bulk photocurrent reverses direction across the band inversion transition, and that its magnitude is enhanced in the vicinity of the phase transition. These results are demonstrated with first principles DFT calculations of BiTeI and CsPbI$_3$ under hydrostatic pressure, and explained with an analytical model, suggesting that this phenomenon remains robust across disparate material systems.
Enhancement of bulk photovoltaic effect in topological insulators
We compute the causality/positivity bounds on the Wilson coefficients of scalar-tensor effective field theories. Two-sided bounds are obtained by extracting IR information from UV physics via dispersion relations of scattering amplitudes, making use of the full crossing symmetry. The graviton $t$-channel pole is carefully treated in the numerical optimization, taking into account the constraints with fixed impact parameters. It is shown that the typical sizes of the Wilson coefficients can be estimated by simply inspecting the dispersion relations. We carve out sharp bounds on the leading coefficients, particularly the scalar-Gauss-Bonnet couplings, and discuss how some bounds vary with the leading $(\partial\phi)^4$ coefficient, as well as phenomenological implications of the causality bounds.
Causality bounds on scalar-tensor EFTs
Cosmic Birefringence (CB) is the in-vacuo rotation of the linear polarization direction of photons during propagation, caused by parity-violating extensions of Maxwell electromagnetism. We build low resolution CB angle maps using Planck Legacy and NPIPE products and provide for the first time estimates of the cross-correlation spectra $C_L^{\alpha E}$ and $C_L^{\alpha B}$ between the CB and the CMB polarization fields. We also provide updated CB auto-correlation spectra $C_L^{\alpha\alpha}$ as well as the cross-correlation $C_L^{\alpha T}$ with the CMB temperature field. We report constraints by defining the scale-invariant amplitudes $A^{\alpha X} \equiv L(L + 1)C_L^{\alpha X}/2\pi$, where $X = \alpha, T, E, B$, finding no evidence of CB. In particular, we find $A^{\alpha E} = (-7.8 \pm 5.6)$ nK deg and $A^{\alpha B} = (0.3 \pm 4.0)$ nK deg at 68% C.L.
Planck constraints on cross-correlations between anisotropic cosmic birefringence and CMB polarization
These are (raw) lecture notes of the course read at the 6th European intensive course on Complex Analysis (Coimbra, Portugal) in 2000. Our purpose is to describe a general framework for generalizations of complex analysis. As a consequence, a classification scheme for different generalizations is obtained. The framework is based on wavelets (coherent states) in Banach spaces generated by ``admissible'' group representations. The reduced wavelet transform allows one to naturally describe in abstract terms the main objects of an analytical function theory: the Cauchy integral formula, the Hardy and Bergman spaces, the Cauchy-Riemann equation, and the Taylor expansion. Among the examples considered are classical analytical function theories (one complex variable, several complex variables, Clifford analysis, the Segal-Bargmann space) as well as new function theories developed within our framework (a function theory of hyperbolic type, a Clifford version of the Segal-Bargmann space). We also briefly discuss applications to operator theory (functional calculus) and quantum mechanics.
Spaces of Analytical Functions and Wavelets--Lecture Notes
In this contribution we deal with the problem of learning an undirected graph which encodes the conditional dependence relationships between the variables of a complex system, given a set of observations of this system. This is a central problem of modern data analysis, and it arises whenever we want to investigate a deeper relationship between random variables than the classical dependence usually measured by the covariance. In particular, we deal with the case of Gaussian Graphical Models (GGMs), for which the system of variables has a multivariate Gaussian distribution. We review the existing techniques for this problem and propose a smart implementation of the symmetric parallel regression technique, which turns out to be very competitive for learning sparse GGMs in the high-dimensional data regime.
Learning Gaussian Graphical Models by symmetric parallel regression technique
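For orientation, here is a minimal neighborhood-regression sketch (generic Meinshausen-Buhlmann style with an AND-rule symmetrization; the paper's symmetric parallel regression implementation refines this idea, and all parameters below are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]                     # plant one conditional dependence

B = np.zeros((p, p))
for j in range(p):                           # one l1-penalized regression per variable
    others = np.delete(np.arange(p), j)
    B[j, others] = Lasso(alpha=0.1).fit(X[:, others], X[:, j]).coef_

edges = np.abs(B) > 1e-8
adjacency = edges & edges.T                  # symmetrize (AND rule)
print(np.argwhere(np.triu(adjacency, 1)))    # recovered edges, e.g. [[0 1]]
```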
This paper proposes an approach to domain transfer based on a pairwise loss function that helps transfer control policies learned in simulation onto a real robot. We explore the idea in the context of a 'category level' manipulation task where a control policy is learned that enables a robot to perform a mating task involving novel objects. We explore the case where depth images are used as the main form of sensor input. Our experimental results demonstrate that the proposed method consistently outperforms baseline methods that train only in simulation or that combine real and simulated data in a naive way.
Adapting control policies from simulation to reality using a pairwise loss
In this paper, we report on the integration of a Correlation-OTDR into a portable unit (30 cm x 30 cm), based on a multi-processor system on chip (MPSoC) and a small form-factor pluggable (SFP) 2.5G transceiver. Going beyond telecommunication applications, this system is demonstrated for temperature measurements, based on the change of propagation delay with temperature in a short section of optical fiber. The temperature measurement accuracy is investigated as a function of the fiber length from 4 to 25 meters.
Fiber as a temperature sensor with portable Correlation-OTDR as interrogator
We prove that a formula predicted on the basis of non-rigorous physics arguments [Zdeborova and Krzakala: Phys. Rev. E (2007)] provides a lower bound on the chromatic number of sparse random graphs. The proof is based on the interpolation method from mathematical physics. In the case of random regular graphs the lower bound can be expressed algebraically, while in the case of the binomial random graph we obtain a variational formula. As an application we calculate improved explicit lower bounds on the chromatic number of random graphs for small (average) degrees. Additionally, we show how asymptotic formulas for large degrees that were previously obtained by lengthy and complicated combinatorial arguments can be re-derived easily from these new results.
Lower bounds on the chromatic number of random graphs
Restricting a linear system for the KP hierarchy to those independent variables $t_n$ with odd $n$, its compatibility (Zakharov-Shabat conditions) leads to the "odd KP hierarchy". The latter consists of pairs of equations for two dependent variables, taking values in a (typically noncommutative) associative algebra. If the algebra is commutative, the odd KP hierarchy is known to admit reductions to the BKP and the CKP hierarchy. We approach the odd KP hierarchy and its relation to BKP and CKP in different ways, and address the question whether noncommutative versions of the BKP and the CKP equation (and some of their reductions) exist. In particular, we derive a functional representation of a linear system for the odd KP hierarchy, which in the commutative case produces functional representations of the BKP and CKP hierarchies in terms of a tau function. Furthermore, we consider a functional representation of the KP hierarchy that involves a second (auxiliary) dependent variable and features the odd KP hierarchy directly as a subhierarchy. A method to generate large classes of exact solutions to the KP hierarchy from solutions to a linear matrix ODE system, via a hierarchy of matrix Riccati equations, then also applies to the odd KP hierarchy, and this in turn can be exploited, in particular, to obtain solutions to the BKP and CKP hierarchies.
BKP and CKP revisited: The odd KP system
Although deep learning models for chest X-ray interpretation are commonly trained on labels generated by automatic radiology report labelers, the impact of improvements in report labeling on the performance of chest X-ray classification models has not been systematically investigated. We first compare the CheXpert, CheXbert, and VisualCheXbert labelers on the task of extracting accurate chest X-ray image labels from radiology reports, finding that the VisualCheXbert labeler outperforms the CheXpert and CheXbert labelers. Next, after training image classification models using labels generated from the different radiology report labelers on one of the largest datasets of chest X-rays, we show that an image classification model trained on labels from the VisualCheXbert labeler outperforms image classification models trained on labels from the CheXpert and CheXbert labelers. Our work suggests that recent improvements in radiology report labeling can translate to the development of higher performing chest X-ray classification models.
Effect of Radiology Report Labeler Quality on Deep Learning Models for Chest X-Ray Interpretation
We prove a weighted $L_p$-estimate for the stochastic convolution associated to the stochastic heat equation with zero Dirichlet boundary condition on a planar angular domain $\mathcal{D}_{\kappa_0}\subset\mathbb{R}^2$ with angle $\kappa_0\in(0,2\pi)$. Furthermore, we use this estimate to establish existence and uniqueness of a solution to the corresponding equation in suitable weighted $L_p$-Sobolev spaces. In order to capture the singular behaviour of the solution and its derivatives at the vertex, we use powers of the distance to the vertex as weight functions. The admissible range of weight parameters depends explicitly on the angle $\kappa_0$.
An $L_p$-estimate for the stochastic heat equation on an angular domain in $\mathbb{R}^2$
Published data from long-term observations of a strip of sky at declination +5 degrees carried out at 7.6 cm on the RATAN-600 radio telescope are used to estimate some statistical properties of radio sources. Limits on the sensitivity of the survey due to noise imposed by background sources, which dominates the radiometer sensitivity, are refined. The vast majority of noise due to background sources is associated with known radio sources (for example, from the NVSS with a detection threshold of 2.3 mJy) with normal steep spectra ($\alpha = 0.7$-$0.8$, $S \propto \nu^{-\alpha}$), which have also been detected in new deep surveys at decimeter wavelengths. When all such objects are removed from the observational data, this leaves another noise component that is observed to be roughly identical in independent groups of observations. We suggest this represents a new population of radio sources that are not present in known catalogs at the 0.6 mJy level at 7.6 cm. The studied redshift dependence of the number of steep-spectrum objects shows that the sensitivity of our survey is sufficient to detect powerful FRII radio sources at any redshift, right to the epoch of formation of the first galaxies. The inferred new population is most likely associated with low-luminosity objects at redshifts z < 1. In spite of the appearance of new means of carrying out direct studies of distant galaxies, searches for objects with very high redshifts among steep and ultra-steep spectrum radio sources remains an effective method for studying the early Universe.
The KHOLOD Experiment: A Search for a New Population of Radio Sources
We measure the length of stretched wire pairs by using them as the delay line pulse shaping element in an avalanche oscillator. The circuitry and method are extremely simple, and insensitive to oscillator supply voltage, to the particular transistor (2N3904) used and to wire tension. Rudimentary tests with simulated broken wires show that these can be detected easily. It remains to be seen whether the technique will work at scale with realistic wire planes, but it should be at least as good as reflecting short pulses.
Measuring the length of stretched wires with an avalanche oscillator
This letter considers cascaded model predictive control (MPC) as a computationally lightweight method for controlling a tandem-rotor helicopter. A traditional single MPC structure is split into separate outer and inner-loops. The outer-loop MPC uses an $SE_2(3)$ error to linearize the translational dynamics about a reference trajectory. The inner-loop MPC uses the optimal angular velocity sequence of the outer-loop MPC to linearize the rotational dynamics. The outer-loop MPC is run at a slower rate than the inner-loop allowing for longer prediction time and improved performance. Monte-Carlo simulations demonstrate robustness to model uncertainty and environmental disturbances. The proposed control structure is benchmarked against a single MPC algorithm where it shows significant improvements in position and velocity tracking while using significantly less computational resources.
Cascaded Model Predictive Control of a Tandem-Rotor Helicopter
Increased capabilities such as recognition and self-adaptability are now required from IoT applications. While IoT node power consumption is a major concern for these applications, cloud-based processing is becoming unsustainable due to continuous sensor or image data transmission over the wireless network. Thus, optimized ML capabilities and data transfers should be integrated in the IoT node. Moreover, IoT applications are torn between sporadic data-logging and energy-hungry data processing (e.g. image classification). Thus, the versatility of the node is key in addressing this wide diversity of energy and processing needs. This paper presents SamurAI, a versatile IoT node bridging this gap in processing and in energy by leveraging two on-chip sub-systems: a low power, clock-less, event-driven Always-Responsive (AR) part and an energy-efficient On-Demand (OD) part. AR contains a 1.7MOPS event-driven, asynchronous Wake-up Controller (WuC) with a 207ns wake-up time optimized for sporadic computing, while OD combines a deep-sleep RISC-V CPU and 1.3TOPS/W Machine Learning (ML) for more complex tasks up to 36GOPS. This architecture partitioning achieves best-in-class versatility metrics such as peak performance to idle power ratio. On an applicative classification scenario, it demonstrates system power gains, up to 3.5x compared to cloud-based processing, and thus extended battery lifetime.
SamurAI: A Versatile IoT Node With Event-Driven Wake-Up and Embedded ML Acceleration
We consider the class of short-rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process $x(t)$ in the terminal measure, $r(t) = a(t)\exp(x(t))$. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction $V(t_1,t_2) = -\mathrm{Cov}(x(t_1),x(t_2))$. We consider in some detail the Black-Karasinski model with $x(t)$ an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions $V(x,y) = -\alpha (e^{-\gamma |x - y|} - e^{-\gamma (x + y)})$. An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
Equivalence of interest rate models and lattice gases
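For the reader's convenience, the quoted interaction follows from standard Ornstein-Uhlenbeck facts (a sketch, assuming the process is started at $x(0)=0$ with mean-reversion rate $\gamma$ and volatility $\sigma$):

```latex
dx(t) = -\gamma\, x(t)\, dt + \sigma\, dW(t), \quad x(0) = 0
\;\Longrightarrow\;
\operatorname{Cov}\bigl(x(s), x(t)\bigr)
  = \frac{\sigma^{2}}{2\gamma}\Bigl(e^{-\gamma |t-s|} - e^{-\gamma (t+s)}\Bigr),
```

so $V = -\mathrm{Cov}$ reproduces the attractive potential above with $\alpha = \sigma^{2}/2\gamma$.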
Ultrafast ultrasound imaging remains an active area of interest in the ultrasound community due to its ultra-high frame rates. Recently, a wide variety of studies based on deep learning have sought to improve ultrafast ultrasound imaging. Most of these approaches have been performed on radio frequency (RF) signals. However, in-phase/quadrature (I/Q) digital beamformers are now widely used as low-cost strategies. In this work, we used complex convolutional neural networks for reconstruction of ultrasound images from I/Q signals. We recently described a convolutional neural network architecture called ID-Net, which exploited an inception layer designed for reconstruction of RF diverging-wave ultrasound images. In the present study, we derive the complex equivalent of this network; i.e., the Complex-valued Inception for Diverging-wave Network (CID-Net) that operates on I/Q data. We provide experimental evidence that CID-Net provides the same image quality as that obtained from RF-trained convolutional neural networks; i.e., using only three I/Q images, the CID-Net produces high-quality images that can compete with those obtained by coherently compounding 31 RF images. Moreover, we show that CID-Net outperforms the straightforward architecture that consists of processing the real and imaginary parts of the I/Q signal separately, which thereby indicates the importance of consistently processing the I/Q signals using a network that exploits the complex nature of such signals.
Complex Convolutional Neural Networks for Ultrafast Ultrasound Image Reconstruction from In-Phase/Quadrature Signal
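A minimal sketch of the complex-valued convolution that such networks are built from (Trabelsi-style real/imaginary weight pairs; the actual CID-Net inception layout and channel counts are not reproduced here):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k, **kw):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, **kw)   # real part of W
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, **kw)   # imaginary part of W

    def forward(self, xr, xi):
        # (Wr + jWi)(xr + jxi) = (Wr xr - Wi xi) + j(Wr xi + Wi xr)
        return self.conv_r(xr) - self.conv_i(xi), self.conv_r(xi) + self.conv_i(xr)

# One I/Q frame: in-phase and quadrature parts as separate real tensors.
layer = ComplexConv2d(1, 8, 3, padding=1)
yr, yi = layer(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(yr.shape, yi.shape)    # torch.Size([1, 8, 64, 64]) twice
```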
In this paper a new approach to the investigation of the quantum and statistical mechanics of the Early Universe (Planck scale) - density matrix deformation - is proposed. The deformation is understood as an extension of a particular theory by the inclusion of one or several additional parameters in such a way that the initial theory appears in the limiting transition...
The Density Matrix Deformation in Physics of the Early Universe and Some of its Implications
GX 3+1 is a low-mass X-ray binary that has been persistently bright since its discovery in 1964. It was found to be an X-ray burster twenty years ago, proving that the compact object in this system is a neutron star. The burst rate is so low that only 18 bursts were reported prior to 1996. The Wide Field Cameras on BeppoSAX have, through a dedicated monitoring program on the Galactic center region, increased the number of X-ray bursts from GX 3+1 by 61. Since GX 3+1 exhibits a slow (order of years) modulation in the persistent flux of about 50%, these observations open up the unique possibility to study burst properties as a function of mass accretion rate for very low burst rates. This is the first time that bursts have been detected from GX 3+1 in the high state. From the analysis we learn that all bursts are short, with e-folding decay times smaller than 10 s. Therefore, all bursts are due to unstable helium burning. Furthermore, the burst rate drops sixfold in a fairly narrow range of 2-20 keV flux; we discuss possible origins for this.
Burst-properties as a function of mass accretion rate in GX 3+1
Recent works have proposed that software developers' positive emotions have a positive impact on their productivity. In this paper we investigate two data sources: developers' chat messages (from Slack and Hipchat) and the source code commits of a single co-located Agile team over 200 working days. Our regression analysis shows that the number of chat messages is the best predictor, predicting productivity measured both in the number of commits and in lines of code with $R^2$ of 0.33 and 0.27, respectively. We then add sentiment analysis variables until the AIC of our model no longer improves, obtaining $R^2$ values of 0.37 (commits) and 0.30 (lines of code). Thus, analyzing chat sentiment improves productivity prediction over chat activity alone, but the difference is modest. This work supports the idea that emotional state and productivity are linked in software development. We find that three positive sentiment metrics, but surprisingly also one negative sentiment metric, are associated with higher productivity.
Chat activity is a better predictor than chat sentiment on software developers productivity
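A schematic re-creation of the modeling step (entirely synthetic data; the study itself used a real team's chat logs and commits): fit a baseline activity-only regression, then keep sentiment variables only while the AIC improves.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
days = 200
msgs = rng.poisson(40, days)                  # chat messages per day
pos = rng.normal(0.3, 0.1, days)              # positive-sentiment share (made up)
commits = 0.2 * msgs + 10 * pos + rng.normal(0, 2, days)

base = sm.OLS(commits, sm.add_constant(msgs)).fit()
full = sm.OLS(commits, sm.add_constant(np.column_stack([msgs, pos]))).fit()
print(base.rsquared, base.aic)                # chat activity only
print(full.rsquared, full.aic)                # keep sentiment only if AIC drops
```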
Adaptive designs are commonly used in clinical and drug development studies for optimum utilization of available resources. In this article, we consider the problem of estimating the effect of the selected (better) treatment using a two-stage adaptive design. Consider two treatments with their effectiveness characterized by two normal distributions having different unknown means and a common unknown variance. The treatment associated with the larger mean effect is labeled as the better treatment. In the first stage of the design, each of the two treatments is independently administered to different sets of $n_1$ subjects, and the treatment with the larger sample mean is chosen as the better treatment. In the second stage, the selected treatment is further administered to $n_2$ additional subjects. We deal with the problem of estimating the mean of the selected treatment under this adaptive design. We extend the result of \cite{cohen1989two} by obtaining the uniformly minimum variance conditionally unbiased estimator (UMVCUE) of the mean effect of the selected treatment when multiple observations are available in the second stage. We show that the maximum likelihood estimator (a weighted sample average based on the first and the second stage data) is minimax and admissible for estimating the mean effect of the selected treatment. We also propose some plug-in estimators obtained by plugging in the pooled sample variance in place of the common variance $\sigma^2$, in some of the estimators proposed by \cite{misra2022estimation} for the situations where $\sigma^2$ is known. The performances of various estimators of the mean effect of the selected treatment are compared via a simulation study. For illustration purposes, we also provide a real-data application.
On Estimating the Selected Treatment Mean under a Two-Stage Adaptive Design
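A simulation sketch of the design and the selection bias it induces (illustrative parameters only): select the treatment with the larger first-stage mean, pool with the second-stage data, and measure the bias of the naive weighted-average (maximum likelihood) estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.0, 0.5])                # true treatment effects (hypothetical)
sigma, n1, n2, reps = 1.0, 20, 40, 20000

errors = []
for _ in range(reps):
    stage1 = rng.normal(mu[:, None], sigma, (2, n1)).mean(axis=1)
    sel = int(np.argmax(stage1))                        # "better" treatment
    stage2 = rng.normal(mu[sel], sigma, n2).mean()
    mle = (n1 * stage1[sel] + n2 * stage2) / (n1 + n2)  # pooled average
    errors.append(mle - mu[sel])

print(np.mean(errors))   # positive: selection inflates the naive estimate
```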
The great success of helioseismology resides in the remarkable progress achieved in the understanding of the structure and dynamics of the solar interior. This success mainly relies on the ability to conceive, implement, and operate specific instrumentation with enough sensitivity to detect and measure small fluctuations (in velocity and/or intensity) on the solar surface that are well below one meter per second or a few parts per million. Furthermore, the limitation of ground observations imposed by the day-night cycle (thus a periodic discontinuity in the observations) was overcome with the deployment of ground-based networks --properly placed at different longitudes all over the Earth-- allowing longer and continuous observations of the Sun and consequently increasing their duty cycles. In this chapter, we start with a short historical overview of helioseismology. Then we describe the different techniques used for helioseismic analyses, along with a description of the main instrumental concepts. In particular, we focus on the instruments that have been operating long enough to study solar magnetic activity. Finally, we highlight the main results obtained with such high-duty-cycle observations (>80%) over the last few decades.
Helioseismology: Observations and Space Missions
Consider a remote estimation problem where a sensor wants to communicate the state of an uncertain source to a remote estimator over a finite time horizon. The uncertain source is modeled as an autoregressive process with bounded noise. Given that the sensor has a limited communication budget, the sensor must decide when to transmit the state to the estimator who has to produce real-time estimates of the source state. In this paper, we consider the problem of finding a scheduling strategy for the sensor and an estimation strategy for the estimator to jointly minimize the worst-case maximum instantaneous estimation error over the time horizon. This leads to a decentralized minimax decision-making problem. We obtain a complete characterization of optimal strategies for this decentralized minimax problem. In particular, we show that an open loop communication scheduling strategy is optimal and the optimal estimate depends only on the most recently received sensor observation.
Worst-case Guarantees for Remote Estimation of an Uncertain Source
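A back-of-the-envelope sketch of the worst-case analysis (scalar source $x_{t+1} = a x_t + w_t$ with $|w_t| \le c$; if the estimator last heard from the sensor $k$ steps ago and propagates that value, the adversary can force an error up to $c(|a|^k - 1)/(|a| - 1)$). Evenly spread open-loop schedules keep the largest gap, and hence the worst-case error, small:

```python
def worst_case_error(schedule, horizon, a, c):
    """Worst-case instantaneous estimation error over the horizon for a
    fixed (open-loop) set of transmission times."""
    worst, last = 0.0, 0
    for t in range(1, horizon + 1):
        if t in schedule:
            last = t                       # fresh observation: error resets
        k = t - last
        bound = c * k if a == 1 else c * (abs(a) ** k - 1) / (abs(a) - 1)
        worst = max(worst, bound)
    return worst

# Budget of 3 transmissions over 12 steps (illustrative numbers):
print(worst_case_error({4, 8, 12}, 12, a=1.2, c=0.5))   # evenly spread
print(worst_case_error({1, 2, 3}, 12, a=1.2, c=0.5))    # clustered: much worse
```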
In this work we explore many directions in the framework of gauge-gravity dualities. In type IIB theory we give an explicit derivation of the local metric for five branes wrapped on rigid two-cycles. Our derivation involves various interplays between warp factors, dualities and fluxes and the final result confirms our earlier predictions. We also find a novel dipole-like deformation of the background due to an inherent orientifold projection in the full global geometry. The supergravity solution for this deformation takes into account various things like the presence of a non-trivial background topology and fluxes as well as branes. Considering these, we manage to calculate the precise local solution using equations of motion. We also show that this dipole-like deformation has the desired property of decoupling the Kaluza-Klein modes from the IR gauge theory. Finally, for the heterotic theory we find new non-Kahler complex manifolds that partake in the full gauge-gravity dualities and study the mathematical structures of these manifolds including the torsion classes, Betti numbers and other topological data.
Gauge-Gravity Dualities, Dipoles and New Non-Kahler Manifolds