text stringlengths 138–2.38k | labels sequencelengths 6–6 | Predictions sequencelengths 1–3 |
---|---|---|
Title: Spherical polyharmonics and Poisson kernels for polyharmonic functions,
Abstract: We introduce and develop the notion of spherical polyharmonics, which are a
natural generalisation of spherical harmonics. In particular we study the
theory of zonal polyharmonics, which allows us, analogously to zonal harmonics,
to construct Poisson kernels for polyharmonic functions on the union of rotated
balls. We find the representation of Poisson kernels and zonal polyharmonics in
terms of the Gegenbauer polynomials. We show the connection between the
classical Poisson kernel for harmonic functions on the ball, Poisson kernels
for polyharmonic functions on the union of rotated balls, and the Cauchy-Hua
kernel for holomorphic functions on the Lie ball. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: SPH calculations of Mars-scale collisions: the role of the Equation of State, material rheologies, and numerical effects,
Abstract: We model large-scale ($\approx$2000km) impacts on a Mars-like planet using a
Smoothed Particle Hydrodynamics code. The effects of material strength and of
using different Equations of State on the post-impact material and temperature
distributions are investigated. The properties of the ejected material in terms
of escaping and disc mass are analysed as well. We also study potential
numerical effects in the context of density discontinuities and rigid body
rotation. We find that in the large-scale collision regime considered here
(with impact velocities of 4km/s), the effect of material strength is
substantial for the post-impact distribution of the temperature and the
impactor material, while the influence of the Equation of State is more subtle
and present only at very high temperatures. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A global sensitivity analysis and reduced order models for hydraulically-fractured horizontal wells,
Abstract: We present a systematic global sensitivity analysis using the Sobol method
which can be utilized to rank the variables that affect two quantities of
interest -- pore pressure depletion and stress change -- around a
hydraulically-fractured horizontal well based on their degree of importance.
These variables include rock properties and stimulation design variables. A
fully-coupled poroelastic hydraulic fracture model is used to account for pore
pressure and stress changes due to production. To ease the computational cost
of a simulator, we also provide reduced order models (ROMs), which can be used
to replace the complex numerical model with a rather simple analytical model,
for calculating the pore pressure and stresses at different locations around
hydraulic fractures. The main findings of this research are: (i) mobility,
production pressure, and fracture half-length are the main contributors to the
changes in the quantities of interest. The percentage of the contribution of
each parameter depends on the location with respect to pre-existing hydraulic
fractures and the quantity of interest. (ii) As time progresses, the effect
of mobility decreases and the effect of production pressure increases. (iii)
These two variables are also dominant for horizontal stresses at large
distances from hydraulic fractures. (iv) At zones close to hydraulic fracture
tips or inside the spacing area, other parameters such as fracture spacing and
half-length are the dominant factors that affect the minimum horizontal stress.
The results of this study will provide useful guidelines for the stimulation
design of legacy wells and secondary operations such as refracturing and infill
drilling. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
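As a rough illustration of the Sobol method named in the abstract above, the sketch below estimates first-order sensitivity indices for a toy analytical function with a standard pick-and-freeze estimator; the toy function, variable names, and sample sizes are placeholders, not the paper's poroelastic simulator or its reduced order models.

```python
# Minimal first-order Sobol index sketch (Saltelli-style pick-and-freeze estimator).
# `toy_model` stands in for the expensive simulator; it is NOT the paper's model.
import numpy as np

def toy_model(x):
    # x has shape (N, 3): placeholder inputs (e.g. mobility, production pressure, half-length)
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
N, d = 10_000, 3
A = rng.uniform(size=(N, d))              # two independent input sample matrices
B = rng.uniform(size=(N, d))
fA, fB = toy_model(A), toy_model(B)
var = np.var(np.concatenate([fA, fB]))    # total output variance estimate

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # replace column i of A by column i of B
    fABi = toy_model(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var # first-order Sobol index estimate
    print(f"S_{i} ~ {S_i:.2f}")
```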
Title: The PdBI Arcsecond Whirlpool Survey (PAWS). The Role of Spiral Arms in Cloud and Star Formation,
Abstract: The process that leads to the formation of the bright star forming sites
observed along prominent spiral arms remains elusive. We present results of a
multi-wavelength study of a spiral arm segment in the nearby grand-design
spiral galaxy M51 that belongs to a spiral density wave and exhibits nine gas
spurs. The combined observations of the (ionized, atomic, molecular, dusty)
interstellar medium (ISM) with star formation tracers (HII regions, young
<10Myr stellar clusters) suggest (1) no variation in giant molecular cloud
(GMC) properties between arm and gas spurs, (2) gas spurs and extinction
feathers arising from the same structure with a close spatial relation between
gas spurs and ongoing/recent star formation (despite higher gas surface
densities in the spiral arm), (3) no trend in star formation age either along
the arm or along a spur, (4) evidence for strong star formation feedback in gas
spurs, (5) tentative evidence for star formation triggered by stellar feedback
for one spur, and (6) GMC associations (GMAs) being no special entities but the
result of blending of gas arm/spur cross-sections in lower resolution
observations. We conclude that there is no evidence for a coherent star
formation onset mechanism that can be solely associated with the presence of the
spiral density wave. This suggests that other (more localized) mechanisms are
important to delay star formation such that it occurs in spurs. The evidence of
star formation proceeding over several million years within individual spurs
implies that the mechanism that leads to star formation acts or is sustained
over a longer time-scale. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: Higher structure in the unstable Adams spectral sequence,
Abstract: We describe a variant construction of the unstable Adams spectral
sequence for a space $Y$, associated to any free simplicial resolution of
$H^*(Y;R)$ for $R=\mathbb{F}_p$ or $\mathbb{Q}$. We use this construction to
describe the differentials and filtration in the spectral sequence in terms of
appropriate systems of higher cohomology operations. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Deciphering noise amplification and reduction in open chemical reaction networks,
Abstract: The impact of random fluctuations on the dynamical behavior of complex
biological systems is a longstanding issue, whose understanding would shed
light on the evolutionary pressure that nature imposes on the intrinsic noise
levels and would allow rationally designing synthetic networks with controlled
noise. Using the Itō stochastic differential equation formalism, we performed
both analytic and numerical analyses of several model systems containing
different molecular species in contact with the environment and interacting
with each other through mass-action kinetics. These systems represent for
example biomolecular oligomerization processes, complex-breakage reactions,
signaling cascades or metabolic networks. For chemical reaction networks with
zero deficiency values, which admit a detailed- or complex-balanced steady
state, all molecular species are uncorrelated. The number of molecules of each
species follows a Poisson distribution and their Fano factors, which measure the
intrinsic noise, are equal to one. Systems with deficiency one have an
unbalanced non-equilibrium steady state and a non-zero S-flux, defined as the
flux flowing between the complexes multiplied by an adequate stoichiometric
coefficient. In this case, the noise on each species is reduced if the flux
flows from the species of lowest to highest complexity, and is amplified if the
flux goes in the opposite direction. These results are generalized to systems
of deficiency two, which possess two independent non-vanishing S-fluxes, and we
conjecture that a similar relation holds for higher deficiency systems. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Mathematics"
] |
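For reference, the noise measure invoked in the abstract above is the Fano factor; the statement below is the standard definition and the standard Poisson fact, not the paper's own notation.

```latex
F_i \;=\; \frac{\operatorname{Var}(n_i)}{\langle n_i \rangle},
\qquad
n_i \sim \mathrm{Poisson}(\lambda_i)
\;\Longrightarrow\;
\operatorname{Var}(n_i) = \langle n_i \rangle = \lambda_i,
\quad F_i = 1 .
```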
Title: Diffraction-Aware Sound Localization for a Non-Line-of-Sight Source,
Abstract: We present a novel sound localization algorithm for a non-line-of-sight
(NLOS) sound source in indoor environments. Our approach exploits the
diffraction properties of sound waves as they bend around a barrier or an
obstacle in the scene. We combine a ray tracing based sound propagation
algorithm with a Uniform Theory of Diffraction (UTD) model, which simulates
bending effects by placing a virtual sound source on a wedge in the
environment. We precompute the wedges of a reconstructed mesh of an indoor
scene and use them to generate diffraction acoustic rays to localize the 3D
position of the source. Our method identifies the convergence region of those
generated acoustic rays as the estimated source position based on a particle
filter. We have evaluated our algorithm in multiple scenarios consisting of a
static and dynamic NLOS sound source. In our tested cases, our approach can
localize a source position with an average accuracy error of 0.7m, measured by
the L2 distance between estimated and actual source locations in a 7m*7m*3m
room. Furthermore, we observe 37% to 130% improvement in accuracy over a
state-of-the-art localization method that does not model diffraction effects,
especially when a sound source is not visible to the robot. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Density large deviations for multidimensional stochastic hyperbolic conservation laws,
Abstract: We investigate the density large deviation function for a multidimensional
conservation law in the vanishing viscosity limit, when the probability
concentrates on weak solutions of a hyperbolic conservation law. When the
conductivity and diffusivity matrices are proportional, i.e. an
Einstein-like relation is satisfied, the problem has been solved in [4]. When
this proportionality does not hold, we compute explicitly the large deviation
function for a step-like density profile, and we show that the associated
optimal current has a non trivial structure. We also derive a lower bound for
the large deviation function, valid for a general weak solution, and leave the
general large deviation function upper bound as a conjecture. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: mixup: Beyond Empirical Risk Minimization,
Abstract: Large deep neural networks are powerful, but exhibit undesirable behaviors
such as memorization and sensitivity to adversarial examples. In this work, we
propose mixup, a simple learning principle to alleviate these issues. In
essence, mixup trains a neural network on convex combinations of pairs of
examples and their labels. By doing so, mixup regularizes the neural network to
favor simple linear behavior in-between training examples. Our experiments on
the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show
that mixup improves the generalization of state-of-the-art neural network
architectures. We also find that mixup reduces the memorization of corrupt
labels, increases the robustness to adversarial examples, and stabilizes the
training of generative adversarial networks. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
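The mixup rule described above (convex combinations of pairs of examples and labels) can be sketched in a few lines; the snippet below is an illustrative NumPy version with an assumed Beta(alpha, alpha) mixing coefficient, not the authors' reference implementation.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Convex-combine a batch with a randomly shuffled copy of itself.

    x : (batch, ...) inputs;  y : (batch, num_classes) one-hot labels.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing weight drawn from Beta(alpha, alpha)
    perm = rng.permutation(len(x))      # random pairing of examples
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

# Example: mix a toy batch of 4 examples with 3 classes.
x = np.random.rand(4, 8)
y = np.eye(3)[[0, 1, 2, 0]]
x_mix, y_mix = mixup_batch(x, y)
```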
Title: Suzaku Analysis of the Supernova Remnant G306.3-0.9 and the Gamma-ray View of Its Neighborhood,
Abstract: We present an investigation of the supernova remnant (SNR) G306.3$-$0.9 using
archival multi-wavelength data. The Suzaku spectra are well described by
two-component thermal plasma models: The soft component is in ionization
equilibrium and has a temperature $\sim$0.59 keV, while the hard component has
temperature $\sim$3.2 keV and ionization time-scale $\sim$$2.6\times10^{10}$
cm$^{-3}$ s. We clearly detected the Fe K-shell line at an energy of $\sim$6.5 keV
from this remnant. The overabundances of Si, S, Ar, Ca, and Fe confirm that the
X-ray emission has an ejecta origin. The centroid energy of the Fe-K line
supports that G306.3$-$0.9 is a remnant of a Type Ia supernova (SN) rather than
a core-collapse SN. The GeV gamma-ray emission from G306.3$-$0.9 and its
surrounding were analyzed using about 6 years of Fermi data. We report
the non-detection of G306.3$-$0.9 and the detection of a new extended gamma-ray
source in the south-west of G306.3$-$0.9 with a significance of
$\sim$13$\sigma$. We discuss several scenarios for these results with the help
of data from other wavebands to understand the SNR and its neighborhood. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Japanese Sentiment Classification using a Tree-Structured Long Short-Term Memory with Attention,
Abstract: Previous approaches to training syntax-based sentiment classification models
required phrase-level annotated corpora, which are not readily available in
many languages other than English. Thus, we propose the use of tree-structured
Long Short-Term Memory with an attention mechanism that pays attention to each
subtree of the parse tree. Experimental results indicate that our model
achieves the state-of-the-art performance in a Japanese sentiment
classification task. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Covariances, Robustness, and Variational Bayes,
Abstract: Mean-field Variational Bayes (MFVB) is an approximate Bayesian posterior
inference technique that is increasingly popular due to its fast runtimes on
large-scale datasets. However, even when MFVB provides accurate posterior means
for certain parameters, it often mis-estimates variances and covariances.
Furthermore, prior robustness measures have remained undeveloped for MFVB. By
deriving a simple formula for the effect of infinitesimal model perturbations
on MFVB posterior means, we provide both improved covariance estimates and
local robustness measures for MFVB, thus greatly expanding the practical
usefulness of MFVB posterior approximations. The estimates for MFVB posterior
covariances rely on a result from the classical Bayesian robustness literature
relating derivatives of posterior expectations to posterior covariances and
include the Laplace approximation as a special case. Our key condition is that
the MFVB approximation provides good estimates of a select subset of posterior
means---an assumption that has been shown to hold in many practical settings.
In our experiments, we demonstrate that our methods are simple, general, and
fast, providing accurate posterior uncertainty estimates and robustness
measures with runtimes that can be an order of magnitude faster than MCMC. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
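The "classical Bayesian robustness" result alluded to above is of the exponential-tilting type; a standard identity of this form (stated here for orientation only, not necessarily in the paper's exact notation) is:

```latex
p_\epsilon(\theta) \propto p(\theta)\, e^{\epsilon\, g(\theta)}
\quad\Longrightarrow\quad
\left.\frac{d}{d\epsilon}\right|_{\epsilon=0} \mathbb{E}_{p_\epsilon}\!\left[\theta\right]
= \operatorname{Cov}_{p}\!\left(\theta,\, g(\theta)\right).
```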
Title: Generalized Approximate Message-Passing Decoder for Universal Sparse Superposition Codes,
Abstract: Sparse superposition (SS) codes were originally proposed as a
capacity-achieving communication scheme over the additive white Gaussian noise
channel (AWGNC) [1]. Very recently, it was discovered that these codes are
universal, in the sense that they achieve capacity over any memoryless channel
under generalized approximate message-passing (GAMP) decoding [2], although
this decoder has never been stated for SS codes. In this contribution we
introduce the GAMP decoder for SS codes, we confirm empirically the
universality of this communication scheme through its study on various channels
and we provide the main analysis tools: state evolution and potential. We also
compare the performance of GAMP with the Bayes-optimal MMSE decoder. We
empirically illustrate that despite the presence of a phase transition
preventing GAMP from reaching the optimal performance, spatial coupling allows us
to boost the performance, which eventually tends to capacity in a proper limit. We
also prove that, in contrast with the AWGNC case, SS codes for binary input
channels have a vanishing error floor in the limit of large codewords.
Moreover, the performance of Hadamard-based encoders is assessed for practical
implementations. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Simultaneous non-vanishing for Dirichlet L-functions,
Abstract: We extend the work of Fouvry, Kowalski and Michel on correlation between
Hecke eigenvalues of modular forms and algebraic trace functions in order to
establish an asymptotic formula for a generalized cubic moment of modular
L-functions at the central point s = 1/2 and for prime moduli q. As an
application, we exploit our recent result on the mollification of the fourth
moment of Dirichlet L-functions to derive that for any pair
$(\omega_1,\omega_2)$ of multiplicative characters modulo q, there is a
positive proportion of $\chi$ (mod q) such that $L(\chi, 1/2 ), L(\chi\omega_1,
1/2 )$ and $L(\chi\omega_2, 1/2)$ are simultaneously not too small. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Parallelism, Concurrency and Distribution in Constraint Handling Rules: A Survey,
Abstract: Constraint Handling Rules is an effective concurrent declarative programming
language and a versatile computational logic formalism. CHR programs consist of
guarded reactive rules that transform multisets of constraints. One of the main
features of CHR is its inherent concurrency. Intuitively, rules can be applied
to parts of a multiset in parallel. In this comprehensive survey, we give an
overview of concurrent and parallel as well as distributed CHR semantics,
standard and more exotic, that have been proposed over the years at various
levels of refinement. These semantics range from the abstract to the concrete.
They are related by formal soundness results. Their correctness is established
as correspondence between parallel and sequential computations. We present
common concise sample CHR programs that have been widely used in experiments
and benchmarks. We review parallel CHR implementations in software and
hardware. The experimental results obtained show a consistent parallel speedup.
Most implementations are available online. The CHR formalism can also be used
to implement and reason with models for concurrency. To this end, the Software
Transaction Model, the Actor Model, Colored Petri Nets and the Join-Calculus
have been faithfully encoded in CHR. Under consideration in Theory and Practice
of Logic Programming (TPLP). | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: The Query Complexity of Cake Cutting,
Abstract: We study the query complexity of cake cutting and give lower and upper bounds
for computing approximately envy-free, perfect, and equitable allocations with
the minimum number of cuts. The lower bounds are tight for computing connected
envy-free allocations among n=3 players and for computing perfect and equitable
allocations with the minimum number of cuts between n=2 players.
We also formalize moving knife procedures and show that a large subclass of
this family, which captures all the known moving knife procedures, can be
simulated efficiently with arbitrarily small error in the Robertson-Webb query
model. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Superconducting properties of Cu intercalated Bi$_2$Se$_3$ studied by Muon Spin Spectroscopy,
Abstract: We present muon spin rotation measurements on superconducting Cu intercalated
Bi$_2$Se$_3$, which was suggested as a realization of a topological
superconductor. We observe clear evidence of the superconducting transition
below 4 K, where the width of magnetic field distribution increases as the
temperature is decreased. The measured broadening at mK temperatures suggests a
large London penetration depth in the $ab$ plane ($\lambda_{\mathrm{eff}}\sim
1.6$ $\mathrm{\mu}$m). We show that the temperature dependence of this
broadening follows the BCS prediction, but could be consistent with several gap
symmetries. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Efficient and consistent inference of ancestral sequences in an evolutionary model with insertions and deletions under dense taxon sampling,
Abstract: In evolutionary biology, the speciation history of living organisms is
represented graphically by a phylogeny, that is, a rooted tree whose leaves
correspond to current species and branchings indicate past speciation events.
Phylogenies are commonly estimated from molecular sequences, such as DNA
sequences, collected from the species of interest. At a high level, the idea
behind this inference is simple: the further apart in the Tree of Life are two
species, the greater is the number of mutations to have accumulated in their
genomes since their most recent common ancestor. In order to obtain accurate
estimates in phylogenetic analyses, it is standard practice to employ
statistical approaches based on stochastic models of sequence evolution on a
tree. For tractability, such models necessarily make simplifying assumptions
about the evolutionary mechanisms involved. In particular, commonly omitted are
insertions and deletions of nucleotides -- also known as indels.
Properly accounting for indels in statistical phylogenetic analyses remains a
major challenge in computational evolutionary biology. Here we consider the
problem of reconstructing ancestral sequences on a known phylogeny in a model
of sequence evolution incorporating nucleotide substitutions, insertions and
deletions, specifically the classical TKF91 process. We focus on the case of
dense phylogenies of bounded height, which we refer to as the taxon-rich
setting, where statistical consistency is achievable. We give the first
polynomial-time ancestral reconstruction algorithm with provable guarantees
under constant rates of mutation. Our algorithm succeeds when the phylogeny
satisfies the "big bang" condition, a necessary and sufficient condition for
statistical consistency in this context. | [
1,
0,
1,
1,
0,
0
] | [
"Quantitative Biology",
"Statistics",
"Computer Science"
] |
Title: Pattern-forming fronts in a Swift-Hohenberg equation with directional quenching - parallel and oblique stripes,
Abstract: We study the effect of domain growth on the orientation of striped phases in
a Swift-Hohenberg equation. Domain growth is encoded in a step-like parameter
dependence that allows stripe formation in a half plane, and suppresses
patterns in the complement, while the boundary of the pattern-forming region is
propagating with fixed normal velocity. We construct front solutions that leave
behind stripes in the pattern-forming region that are parallel to or at a small
oblique angle to the boundary.
Technically, the construction of stripe formation parallel to the boundary
relies on ill-posed, infinite-dimensional spatial dynamics. Stripes forming at
a small oblique angle are constructed using a functional-analytic, perturbative
approach. Here, the main difficulties are the presence of continuous spectrum
and the fact that small oblique angles appear as a singular perturbation in a
traveling-wave problem. We resolve the former difficulty using a farfield-core
decomposition and Fredholm theory in weighted spaces. The singular perturbation
problem is resolved using preconditioners and boot-strapping. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Exploring RNN-Transducer for Chinese Speech Recognition,
Abstract: End-to-end approaches have drawn much attention recently for significantly
simplifying the construction of an automatic speech recognition (ASR) system.
RNN transducer (RNN-T) is one of the popular end-to-end methods. Previous
studies have shown that RNN-T is difficult to train and a very complex training
process is needed for a reasonable performance. In this paper, we explore RNN-T
for a Chinese large vocabulary continuous speech recognition (LVCSR) task and
aim to simplify the training process while maintaining performance. First, a
new strategy of learning rate decay is proposed to accelerate the model
convergence. Second, we find that adding convolutional layers at the beginning
of the network and using ordered data can discard the pre-training process of
the encoder without loss of performance. Besides, we design experiments to find
a balance among GPU memory usage, training cycle, and model performance.
Finally, we achieve a 16.9% character error rate (CER) on our test set, which is
a 2% absolute improvement over a strong BLSTM CE system with a language model
trained on the same text corpus. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Stationary crack propagation in a two-dimensional visco-elastic network model,
Abstract: We investigate crack propagation in a simple two-dimensional visco-elastic
model and find a scaling regime in the relation between the propagation
velocity and energy release rate or fracture energy, together with lower and
upper bounds of the scaling regime. On the basis of our result, the existence
of the lower and upper bounds is expected to be universal or model-independent:
the present simple simulation model provides generic insight into the physics
of crack propagation, and the model will be a first step towards the
development of a more refined coarse-grained model. Relatively abrupt changes
of velocity are predicted near the lower and upper bounds for the scaling
regime and the positions of the bounds could be good markers for the
development of tough polymers, for which we provide simple views that could be
useful as guiding principles for toughening polymer-based materials. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A note on the fundamental group of Kodaira fibrations,
Abstract: The fundamental group $\pi$ of a Kodaira fibration is, by definition, the
extension of a surface group $\Pi_b$ by another surface group $\Pi_g$, i.e. \[
1 \rightarrow \Pi_g \rightarrow \pi \rightarrow \Pi_b \rightarrow 1. \]
Conversely, we can inquire about what conditions need to be satisfied by a
group of that sort in order to be the fundamental group of a Kodaira fibration.
In this short note we collect some restrictions on the image of the classifying
map $m \colon \Pi_b \to \Gamma_g$ in terms of the coinvariant homology of
$\Pi_g$. In particular, we observe that if $\pi$ is the fundamental group of a
Kodaira fibration with relative irregularity $g-s$, then $g \leq 1+ 6s$, and we
show that this effectively constrains the possible choices for $\pi$, namely
that there are group extensions as above that fail to satisfy this bound, hence
cannot be the fundamental group of a Kodaira fibration. In particular this
provides examples of symplectic $4$--manifolds that fail to admit a Kähler
structure for reasons that eschew the usual obstructions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Split-and-augmented Gibbs sampler - Application to large-scale inference problems,
Abstract: This paper derives two new optimization-driven Monte Carlo algorithms
inspired from variable splitting and data augmentation. In particular, the
formulation of one of the proposed approaches is closely related to the
alternating direction method of multipliers (ADMM) main steps. The proposed
framework enables the derivation of faster and more efficient sampling schemes than the
current state-of-the-art methods and can embed the latter. By sampling
efficiently the parameter to infer as well as the hyperparameters of the
problem, the generated samples can be used to approximate Bayesian estimators
of the parameters to infer. Additionally, the proposed approach brings
confidence intervals at a low cost contrary to optimization methods.
Simulations on two often-studied signal processing problems illustrate the
performance of the two proposed samplers. All results are compared to those
obtained by recent state-of-the-art optimization and MCMC algorithms used to
solve these problems. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
Title: Primordial perturbations from inflation with a hyperbolic field-space,
Abstract: We study primordial perturbations from hyperinflation, proposed recently and
based on a hyperbolic field-space. In the previous work, it was shown that the
field-space angular momentum supported by the negative curvature modifies the
background dynamics and enhances fluctuations of the scalar fields
qualitatively, assuming that the inflationary background is almost de Sitter.
In this work, we confirm and extend the analysis based on the standard approach
of cosmological perturbation in multi-field inflation. At the background level,
to quantify the deviation from de Sitter, we introduce the slow-varying
parameters and show that steep potentials, which usually cannot drive
inflation, can drive inflation. At the linear perturbation level, we obtain the
power spectrum of primordial curvature perturbation and express the spectral
tilt and running in terms of the slow-varying parameters. We show that
hyperinflation with power-law type potentials has already been excluded by the
recent Planck observations, while exponential-type potential with the exponent
of order unity can be made consistent with observations as far as the power
spectrum is concerned. We also argue that, in the context of a simple $D$-brane
inflation, the hyperinflation requires exponentially large hyperbolic extra
dimensions but that masses of Kaluza-Klein gravitons can be kept relatively
heavy. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Learning Sparse Representations in Reinforcement Learning with Sparse Coding,
Abstract: A variety of representation learning approaches have been investigated for
reinforcement learning; much less attention, however, has been given to
investigating the utility of sparse coding. Outside of reinforcement learning,
sparse coding representations have been widely used, with non-convex objectives
that result in discriminative representations. In this work, we develop a
supervised sparse coding objective for policy evaluation. Despite the
non-convexity of this objective, we prove that all local minima are global
minima, making the approach amenable to simple optimization strategies. We
empirically show that it is key to use a supervised objective, rather than the
more straightforward unsupervised sparse coding approach. We compare the
learned representations to a canonical fixed sparse representation, called
tile-coding, demonstrating that the sparse coding representation outperforms a
wide variety of tile-coding representations.
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: A Variational Characterization of Rényi Divergences,
Abstract: Atar, Chowdhary and Dupuis have recently exhibited a variational formula for
exponential integrals of bounded measurable functions in terms of Rényi
divergences. We develop a variational characterization of the Rényi
divergences between two probability distributions on a measurable space in terms
of relative entropies. When combined with the elementary variational formula
for exponential integrals of bounded measurable functions in terms of relative
entropy, this yields the variational formula of Atar, Chowdhary and Dupuis as a
corollary. We also develop an analogous variational characterization of the
Rényi divergence rates between two stationary finite state Markov chains in
terms of relative entropy rates. When combined with Varadhan's variational
characterization of the spectral radius of square matrices with nonnegative
entries in terms of relative entropy, this yields an analog of the variational
formula of Atar, Chowdhary and Dupuis in the framework of finite state Markov
chains. | [
1,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Statistics"
] |
Title: Interlayer coupling and gate-tunable excitons in transition metal dichalcogenide heterostructures,
Abstract: Bilayer van der Waals (vdW) heterostructures such as MoS2/WS2 and MoSe2/WSe2
have attracted much attention recently, particularly because of their type II
band alignments and the formation of interlayer exciton as the lowest-energy
excitonic state. In this work, we calculate the electronic and optical
properties of such heterostructures with the first-principles GW+Bethe-Salpeter
Equation (BSE) method and reveal the important role of interlayer coupling in
deciding the excited-state properties, including the band alignment and
excitonic properties. Our calculation shows that due to the interlayer
coupling, the low energy excitons can be widely tunable by a vertical gate
field. In particular, the dipole oscillator strength and radiative lifetime of
the lowest energy exciton in these bilayer heterostructures are varied by over
an order of magnitude within a practical external gate field. We also build a
simple model that captures the essential physics behind this tunability and
allows the extension of the ab initio results to a large range of electric
fields. Our work clarifies the physical picture of interlayer excitons in
bilayer vdW heterostructures and predicts a wide range of gate-tunable
excited-state properties of 2D optoelectronic devices. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Enumeration of singular varieties with tangency conditions,
Abstract: We construct the algebraic cobordism theory of bundles and divisors on
varieties. It has a simple basis (over Q) from projective spaces and its rank
is equal to the number of Chern numbers. An application of this algebraic
cobordism theory is the enumeration of singular subvarieties with given tangency
conditions along a fixed smooth divisor, where the subvariety is the zero locus
of a section of a vector bundle. We prove that the generating series of numbers
of such subvarieties gives a homomorphism from the algebraic cobordism group to
the power series ring. This implies that the enumeration of singular
subvarieties with tangency conditions is governed by universal polynomials of
Chern numbers, when the vector bundle is sufficiently ample. This result
combines and generalizes the Caporaso-Harris recursive formula, Gottsche's
conjecture, classical De Jonquiere's Formula and node polynomials from tropical
geometry. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: ClusterNet: Detecting Small Objects in Large Scenes by Exploiting Spatio-Temporal Information,
Abstract: Object detection in wide area motion imagery (WAMI) has drawn the attention
of the computer vision research community for a number of years. WAMI poses
a number of unique challenges including extremely small object sizes, both
sparse and densely-packed objects, and extremely large search spaces (large
video frames). Nearly all state-of-the-art methods in WAMI object detection
report that appearance-based classifiers fail in this challenging data and
instead rely almost entirely on motion information in the form of background
subtraction or frame-differencing. In this work, we experimentally verify the
failure of appearance-based classifiers in WAMI, such as Faster R-CNN and a
heatmap-based fully convolutional neural network (CNN), and propose a novel
two-stage spatio-temporal CNN which effectively and efficiently combines both
appearance and motion information to significantly surpass the state-of-the-art
in WAMI object detection. To reduce the large search space, the first stage
(ClusterNet) takes in a set of extremely large video frames, combines the
motion and appearance information within the convolutional architecture, and
proposes regions of objects of interest (ROOBI). These ROOBI can contain from a
single object to clusters of several hundred objects due to the large video frame size
and varying object density in WAMI. The second stage (FoveaNet) then estimates
the centroid location of all objects in that given ROOBI simultaneously via
heatmap estimation. The proposed method exceeds state-of-the-art results on the
WPAFB 2009 dataset by 5-16% for moving objects and nearly 50% for stopped
objects, as well as being the first proposed method in wide area motion imagery
to detect completely stationary objects. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Viscous dynamics of drops and bubbles in Hele-Shaw cells: drainage, drag friction, coalescence, and bursting,
Abstract: In this review article, we discuss recent studies on drops and bubbles in
Hele-Shaw cells, focusing on how scaling laws exhibit crossovers from the
three-dimensional counterparts and on topics in which viscosity plays
an important role. By virtue of progress in analytical theory and high-speed
imaging, dynamics of drops and bubbles have actively been studied with the aid
of scaling arguments. However, compared with three dimensional problems,
studies on the corresponding problems in Hele-Shaw cells are still limited.
This review demonstrates that the effect of confinement in the Hele-Shaw cell
introduces new physics allowing different scaling regimes to appear. For this
purpose, we discuss various examples that are potentially important for
industrial applications handling drops and bubbles in confined spaces by
showing agreement between experiments and scaling theories. As a result, this
review provides a collection of problems in hydrodynamics that may be
analytically solved or that may be worth studying numerically in the near
future. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Detection of Nonlinearly Distorted OFDM Signals via Generalized Approximate Message Passing,
Abstract: In this paper, we propose a practical receiver for multicarrier signals
subjected to a strong memoryless nonlinearity. The receiver design is based on
a generalized approximate message passing (GAMP) framework, and this allows
real-time algorithm implementation in software or hardware with moderate
complexity. We demonstrate that the proposed receiver can provide more than a
2dB gain compared with an ideal uncoded linear OFDM transmission at a BER range
$10^{-4}\div10^{-6}$ in the AWGN channel, when the OFDM signal is subjected to
clipping nonlinearity and the crest-factor of the clipped waveform is only
1.9dB. Simulation results also demonstrate that the proposed receiver provides
significant performance gain in frequency-selective multipath channels. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Poynting's theorem in magnetic turbulence,
Abstract: Poynting's theorem is used to obtain an expression for the turbulent
power-spectral density as function of frequency and wavenumber in low-frequency
magnetic turbulence. No reference is made to Elsasser variables as is usually
done in magnetohydrodynamic turbulence mixing mechanical and electromagnetic
turbulence. We rather stay with an implicit form of the mechanical part of
turbulence as suggested by electromagnetic theory in arbitrary media. All of
mechanics and flows is included into a turbulent response function which by
appropriate observations can be determined from knowledge of the turbulent
fluctuation spectra. This approach is not guided by the wish of developing a
complete theory of turbulence. It aims at the identification of the response
function from observations as input into a theory which afterwards attempts its
interpretation. Combination of both the magnetic and electric power spectral
densities leads to a representation of the turbulent response function, i.e.
the turbulent conductivity spectrum $\sigma_{\omega k}$ as function of
frequency $\omega$ and wavenumber $k$. It is given as the ratio of magnetic to
electric power spectral densities in frequency space. This knowledge allows for
formally writing down a turbulent dispersion relation. Power law inertial range
spectra result in a power law turbulent conductivity spectrum. These can be
compared with observations in the solar wind. Keywords: MHD turbulence,
turbulent dispersion relation, turbulent response function, solar wind
turbulence | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Exploration-exploitation tradeoffs dictate the optimal distributions of phenotypes for populations subject to fitness fluctuations,
Abstract: We study a minimal model for the growth of a phenotypically heterogeneous
population of cells subject to a fluctuating environment in which they can
replicate (by exploiting available resources) and modify their phenotype within
a given landscape (thereby exploring novel configurations). The model displays
an exploration-exploitation trade-off whose specifics depend on the statistics
of the environment. Most notably, the phenotypic distribution corresponding to
maximum population fitness (i.e. growth rate) requires a non-zero exploration
rate when the magnitude of environmental fluctuations changes randomly over
time, while a purely exploitative strategy turns out to be optimal in two-state
environments, independently of the statistics of switching times. We obtain
analytical insight into the limiting cases of very fast and very slow
exploration rates by directly linking population growth to the features of the
environment. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Statistics"
] |
Title: Optimizing Mission Critical Data Dissemination in Massive IoT Networks,
Abstract: Mission critical data dissemination in massive Internet of things (IoT)
networks imposes constraints on the message transfer delay between devices. Due
to low power and communication range of IoT devices, data is foreseen to be
relayed over multiple device-to-device (D2D) links before reaching the
destination. The coexistence of a massive number of IoT devices poses a
challenge in maximizing the successful transmission capacity of the overall
network alongside reducing the multi-hop transmission delay in order to support
mission critical applications. There is a delicate interplay between the
carrier sensing threshold of the contention based medium access protocol and
the choice of packet forwarding strategy selected at each hop by the devices.
The fundamental problem in optimizing the performance of such networks is to
balance the tradeoff between conflicting performance objectives such as the
spatial frequency reuse, transmission quality, and packet progress towards the
destination. In this paper, we use a stochastic geometry approach to quantify
the performance of multi-hop massive IoT networks in terms of the spatial
frequency reuse and the transmission quality under different packet forwarding
schemes. We also develop a comprehensive performance metric that can be used to
optimize the system to achieve the best performance. The results can be used to
select the best forwarding scheme and tune the carrier sensing threshold to
optimize the performance of the network according to the delay constraints and
transmission quality requirements. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Gaussian fluctuations of Jack-deformed random Young diagrams,
Abstract: We introduce a large class of random Young diagrams which can be regarded as
a natural one-parameter deformation of some classical Young diagram ensembles;
a deformation which is related to Jack polynomials and Jack characters. We show
that each such random Young diagram converges asymptotically to some limit
shape and that the fluctuations around the limit are asymptotically Gaussian. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Statistics"
] |
Title: Revisiting (logarithmic) scaling relations using renormalization group,
Abstract: We explicitly compute the critical exponents associated with logarithmic
corrections (the so-called hatted exponents) starting from the renormalization
group equations and the mean field behavior for a wide class of models at the
upper critical behavior (for short and long range $\phi^n$-theories) and below
it. This allows us to check the scaling relations among these critical
exponents obtained by analysing the complex singularities (Lee-Yang and Fisher
zeroes) of these models. Moreover, we have obtained an explicit method to
compute the $\hat{\coppa}$ exponent [defined by $\xi\sim L (\log
L)^{\hat{\coppa}}$] and, finally, we have found a new derivation of the scaling
law associated with it. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods,
Abstract: We obtain a Bernstein-type inequality for sums of Banach-valued random
variables satisfying a weak dependence assumption of general type and under
certain smoothness assumptions of the underlying Banach norm. We use this
inequality in order to investigate in the asymptotic regime the error upper
bounds for the broad family of spectral regularization methods for reproducing
kernel decision rules, when trained on a sample coming from a $\tau-$mixing
process. | [
0,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Statistics",
"Computer Science"
] |
Title: Inverse monoids and immersions of cell complexes,
Abstract: An immersion $f : {\mathcal D} \rightarrow \mathcal C$ between cell complexes
is a local homeomorphism onto its image that commutes with the characteristic
maps of the cell complexes. We study immersions between finite-dimensional
connected $\Delta$-complexes by replacing the fundamental group of the base
space by an appropriate inverse monoid. We show how conjugacy classes of the
closed inverse submonoids of this inverse monoid may be used to classify
connected immersions into the complex. This extends earlier results of Margolis
and Meakin for immersions between graphs and of Meakin and Szakács on
immersions into $2$-dimensional $CW$-complexes. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Optimal Experiment Design for Causal Discovery from Fixed Number of Experiments,
Abstract: We study the problem of causal structure learning over a set of random
variables when the experimenter is allowed to perform at most $M$ experiments
in a non-adaptive manner. We consider the optimal learning strategy in terms of
minimizing the portions of the structure that remains unknown given the limited
number of experiments in both Bayesian and minimax setting. We characterize the
theoretical optimal solution and propose an algorithm, which designs the
experiments efficiently in terms of time complexity. We show that for bounded
degree graphs, in the minimax case and in the Bayesian case with uniform
priors, our proposed algorithm is a $\rho$-approximation algorithm, where
$\rho$ is independent of the order of the underlying graph. Simulations on both
synthetic and real data show that the performance of our algorithm is very
close to the optimal solution. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Economically Efficient Combined Plant and Controller Design Using Batch Bayesian Optimization: Mathematical Framework and Airborne Wind Energy Case Study,
Abstract: We present a novel data-driven nested optimization framework that addresses
the problem of coupling between plant and controller optimization. This
optimization strategy is tailored towards instances where a closed-form
expression for the system dynamic response is unobtainable and simulations or
experiments are necessary. Specifically, Bayesian Optimization, which is a
data-driven technique for finding the optimum of an unknown and
expensive-to-evaluate objective function, is employed to solve a nested
optimization problem. The underlying objective function is modeled by a
Gaussian Process (GP); then, Bayesian Optimization utilizes the predictive
uncertainty information from the GP to determine the best subsequent control or
plant parameters. The proposed framework differs from the majority of co-design
literature where there exists a closed-form model of the system dynamics.
Furthermore, we utilize the idea of Batch Bayesian Optimization at the plant
optimization level to generate a set of plant designs at each iteration of the
overall optimization process, recognizing that there will exist economies of
scale in running multiple experiments in each iteration of the plant design
process. We validate the proposed framework for a Buoyant Airborne Turbine
(BAT). We choose the horizontal stabilizer area, longitudinal center of mass
relative to center of buoyancy (plant parameters), and the pitch angle
set-point (controller parameter) as our decision variables. Our results
demonstrate that these plant and control parameters converge to their
respective optimal values within only a few iterations. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics",
"Quantitative Finance"
] |
Title: Lagrangian fibers of Gelfand-Cetlin systems,
Abstract: Motivated by the study of Nishinou-Nohara-Ueda on the Floer theory of
Gelfand-Cetlin systems over complex partial flag manifolds, we provide a
complete description of the topology of Gelfand-Cetlin fibers. We prove that
all fibers are \emph{smooth} isotropic submanifolds and give a complete
description of the fiber to be Lagrangian in terms of combinatorics of
Gelfand-Cetlin polytope. Then we study (non-)displaceability of Lagrangian
fibers. After a few combinatorial and numerical tests for the displaceability,
using the bulk-deformation of Floer cohomology by Schubert cycles, we prove
that every full flag manifold $\mathcal{F}(n)$ ($n \geq 3$) with a monotone
Kirillov-Kostant-Souriau symplectic form carries a continuum of
non-displaceable Lagrangian tori which degenerates to a non-torus fiber in the
Hausdorff limit. In particular, the Lagrangian $S^3$-fiber in $\mathcal{F}(3)$
is non-displaceable, a question which was raised by Nohara-Ueda, who
computed its Floer cohomology to be vanishing. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: A local ensemble transform Kalman particle filter for convective scale data assimilation,
Abstract: Ensemble data assimilation methods such as the Ensemble Kalman Filter (EnKF)
are a key component of probabilistic weather forecasting. They represent the
uncertainty in the initial conditions by an ensemble which incorporates
information coming from the physical model with the latest observations.
High-resolution numerical weather prediction models run at operational centers
are able to resolve non-linear and non-Gaussian physical phenomena such as
convection. There is therefore a growing need to develop ensemble assimilation
algorithms able to deal with non-Gaussianity while staying computationally
feasible. In the present paper we address some of these needs by proposing a
new hybrid algorithm based on the Ensemble Kalman Particle Filter. It is fully
formulated in ensemble space and uses a deterministic scheme such that it has
the ensemble transform Kalman filter (ETKF) instead of the stochastic EnKF as a
limiting case. A new criterion for choosing the proportion of particle filter
and ETKF update is also proposed. The new algorithm is implemented in the COSMO
framework and numerical experiments in a quasi-operational convective-scale
setup are conducted. The results show the feasibility of the new algorithm in
practice and indicate a strong potential for such local hybrid methods, in
particular for forecasting non-Gaussian variables such as wind and hourly
precipitation. | [
0,
1,
0,
1,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: Resolving the age bimodality of galaxy stellar populations on kpc scales,
Abstract: Galaxies in the local Universe are known to follow bimodal distributions in
the global stellar populations properties. We analyze the distribution of the
local average stellar-population ages of 654,053 sub-galactic regions resolved
on ~1-kpc scales in a volume-corrected sample of 394 galaxies, drawn from the
CALIFA-DR3 integral-field-spectroscopy survey and complemented by SDSS imaging.
We find a bimodal local-age distribution, with an old and a young peak
primarily due to regions in early-type galaxies and star-forming regions of
spirals, respectively. Within spiral galaxies, the older ages of bulges and
inter-arm regions relative to spiral arms support an internal age bimodality.
Although regions of higher stellar-mass surface-density, mu*, are typically
older, mu* alone does not determine the stellar population age and a bimodal
distribution is found at any fixed mu*. We identify an "old ridge" of regions
of age ~9 Gyr, independent of mu*, and a "young sequence" of regions with age
increasing with mu* from 1-1.5 Gyr to 4-5 Gyr. We interpret the former as
regions containing only old stars, and the latter as regions where the relative
contamination of old stellar populations by young stars decreases as mu*
increases. The reason why this bimodal age distribution is not inconsistent
with the unimodal shape of the cosmic-averaged star-formation history is that
i) the dominating contribution by young stars biases the age low with respect
to the average epoch of star formation, and ii) the use of a single average age
per region is unable to represent the full time-extent of the star-formation
history of "young-sequence" regions. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: From 4G to 5G: Self-organized Network Management meets Machine Learning,
Abstract: In this paper, we provide an analysis of self-organized network management,
with an end-to-end perspective of the network. Self-organization as applied to
cellular networks is usually referred to as Self-organizing Networks (SONs), and
it is a key driver for improving Operations, Administration, and Maintenance
(OAM) activities. SON aims at reducing the cost of installation and management
of 4G and future 5G networks, by simplifying operational tasks through the
capability to configure, optimize and heal itself. To satisfy 5G network
management requirements, this autonomous management vision has to be extended
to the end-to-end network. In the literature and also in some instances of products
available in the market, Machine Learning (ML) has been identified as the key
tool to implement autonomous adaptability and take advantage of experience when
making decisions. In this paper, we survey how network management can
significantly benefit from ML solutions. We review and provide the basic
concepts and taxonomy for SON, network management and ML. We analyse the
available state of the art in the literature, standardization, and in the
market. We pay special attention to 3rd Generation Partnership Project (3GPP)
evolution in the area of network management and to the data that can be
extracted from 3GPP networks, in order to gain knowledge and experience in how
the network is working, and improve network performance in a proactive way.
Finally, we go through the main challenges associated with this line of
research, both in 4G and in 5G as it is being designed, while identifying new
directions for research. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A simulation technique for slurries interacting with moving parts and deformable solids with applications,
Abstract: A numerical method for particle-laden fluids interacting with a deformable
solid domain and mobile rigid parts is proposed and implemented in a full
engineering system. The fluid domain is modeled with a lattice Boltzmann
representation, the particles and rigid parts are modeled with a discrete
element representation, and the deformable solid domain is modeled using a
Lagrangian mesh. The main issue of this work, since separately each of these
methods is a mature tool, is to develop coupling and model-reduction approaches
in order to efficiently simulate coupled problems of this nature, as occur in
various geological and engineering applications. The lattice Boltzmann method
incorporates a large-eddy simulation technique using the Smagorinsky turbulence
model. The discrete element method incorporates spherical and polyhedral
particles for stiff contact interactions. A neo-Hookean hyperelastic model is
used for the deformable solid. We provide a detailed description of how to
couple the three solvers within a unified algorithm. The technique we propose
for rubber modeling/coupling exploits a simplification that prevents having to
solve a finite-element problem each time step. We also develop a technique to
reduce the domain size of the full system by replacing certain zones with
quasi-analytic solutions, which act as effective boundary conditions for the
lattice Boltzmann method. The major ingredients of the routine are
separately validated. To demonstrate the coupled method in full, we simulate
slurry flows in two kinds of piston-valve geometries. The dynamics of the valve
and slurry are studied and reported over a large range of input parameters. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Mathematics",
"Computer Science"
] |
Title: On the Spectrum of Random Features Maps of High Dimensional Data,
Abstract: Random feature maps are ubiquitous in modern statistical machine learning,
where they generalize random projections by means of powerful, yet often
difficult to analyze nonlinear operators. In this paper, we leverage the
"concentration" phenomenon induced by random matrix theory to perform a
spectral analysis on the Gram matrix of these random feature maps, here for
Gaussian mixture models of simultaneously large dimension and size. Our results
are instrumental to a deeper understanding on the interplay of the nonlinearity
and the statistics of the data, thereby allowing for a better tuning of random
feature-based techniques. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
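The random-feature-map abstract above lends itself to a small numerical illustration. The sketch below is a toy under stated assumptions, not the paper's analysis: the dimensions, mixture means, and the ReLU nonlinearity are choices made here purely for illustration.

```python
# Toy sketch: spectrum of the Gram matrix of ReLU random features on a Gaussian mixture.
import numpy as np

rng = np.random.default_rng(0)
p, n, m = 64, 512, 1024                      # data dimension, sample size, number of random features

means = np.stack([np.ones(p), -np.ones(p)]) / np.sqrt(p)     # two mixture means (illustrative)
labels = rng.integers(0, 2, size=n)
X = means[labels] + rng.standard_normal((n, p)) / np.sqrt(p)

W = rng.standard_normal((m, p))              # random projection directions
S = np.maximum(W @ X.T, 0.0)                 # ReLU random feature map, shape (m, n)
G = S.T @ S / m                              # Gram matrix of the random features

eigvals = np.linalg.eigvalsh(G)              # empirical spectrum to inspect
print("top 5 eigenvalues:", np.round(eigvals[-5:], 3))
```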
Title: Solving the multi-site and multi-orbital Dynamical Mean Field Theory using Density Matrix Renormalization,
Abstract: We implement an efficient numerical method to calculate response functions of
complex impurities based on the Density Matrix Renormalization Group (DMRG) and
use it as the impurity-solver of the Dynamical Mean Field Theory (DMFT). This
method uses the correction vector to obtain precise Green's functions on the
real frequency axis at zero temperature. By using a self-consistent bath
configuration with very low entanglement, we take full advantage of the DMRG to
calculate dynamical response functions paving the way to treat large effective
impurities such as those corresponding to multi-orbital interacting models and
multi-site or multi-momenta clusters. This method leads to reliable
calculations of non-local self energies at arbitrary dopings and interactions
and at any energy scale. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Learning from Between-class Examples for Deep Sound Recognition,
Abstract: Deep learning methods have achieved high performance in sound recognition
tasks. Deciding how to feed the training data is important for further
performance improvement. We propose a novel learning method for deep sound
recognition: Between-Class learning (BC learning). Our strategy is to learn a
discriminative feature space by recognizing the between-class sounds as
between-class sounds. We generate between-class sounds by mixing two sounds
belonging to different classes with a random ratio. We then input the mixed
sound to the model and train the model to output the mixing ratio. The
advantages of BC learning are not limited only to the increase in variation of
the training data; BC learning leads to an enlargement of Fisher's criterion in
the feature space and a regularization of the positional relationship among the
feature distributions of the classes. The experimental results show that BC
learning improves the performance on various sound recognition networks,
datasets, and data augmentation schemes, in which BC learning proves to be
always beneficial. Furthermore, we construct a new deep sound recognition
network (EnvNet-v2) and train it with BC learning. As a result, we achieved a
performance that surpasses the human level. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
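The data-construction step described in the BC-learning abstract above (mix two sounds from different classes, regress on the mixing ratio) is simple enough to sketch. The snippet below is a simplified illustration only: the waveforms are random placeholders, and the plain linear mix ignores the energy normalisation a real implementation would likely apply.

```python
# Minimal sketch of between-class mixing: the mixed waveform becomes the input,
# the random mixing ratio becomes the training target.
import numpy as np

def bc_mix(sound_a, sound_b, rng):
    """Mix two waveforms from different classes with a random ratio r in [0, 1]."""
    r = rng.uniform(0.0, 1.0)
    mixed = r * sound_a + (1.0 - r) * sound_b
    return mixed, r

rng = np.random.default_rng(0)
a = rng.standard_normal(16000)   # placeholder 1-second clip standing in for class A audio
b = rng.standard_normal(16000)   # placeholder clip standing in for class B audio
x, target_ratio = bc_mix(a, b, rng)
print("mixing ratio used as label:", round(target_ratio, 3))
```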
Title: On nonlinear profile decompositions and scattering for a NLS-ODE model,
Abstract: In this paper, we consider a Hamiltonian system combining a nonlinear
Schrödinger equation (NLS) and an ordinary differential equation (ODE). This system
is a simplified model of the NLS around soliton solutions. Following Nakanishi
\cite{NakanishiJMSJ}, we show scattering of $L^2$ small $H^1$ radial solutions.
The proof is based on Nakanishi's framework and Fermi Golden Rule estimates on
$L^4$ in time norms. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Chain effects of clean water: The Mills-Reincke phenomenon in early twentieth-century Japan,
Abstract: This study explores the validity of chain effects of clean water, which are
known as the "Mills-Reincke phenomenon," in early twentieth-century Japan.
Recent studies have reported that water purifications systems are responsible
for huge contributions to human capital. Although a few studies have
investigated the short-term effects of water-supply systems in pre-war Japan,
little is known about the benefits associated with these systems. By analyzing
city-level cause-specific mortality data from the years 1922-1940, we found
that eliminating typhoid fever infections decreased the risk of deaths due to
non-waterborne diseases. Our estimates show that for one additional typhoid
death, there were approximately one to three deaths due to other causes, such
as tuberculosis and pneumonia. This suggests that the observed Mills-Reincke
phenomenon could have resulted from the prevention of typhoid fever in a
previously-developing Asian country. | [
0,
0,
0,
1,
1,
0
] | [
"Quantitative Biology",
"Statistics"
] |
Title: Learning Transferable Architectures for Scalable Image Recognition,
Abstract: Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves a 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the features learned by NASNet, used with the Faster-RCNN
framework, surpass the state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO
dataset. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Clamped seismic metamaterials: Ultra-low broad frequency stop-bands,
Abstract: The regularity of earthquakes, their destructive power, and the nuisance of
ground vibration in urban environments, all motivate designs of defence
structures to lessen the impact of seismic and ground vibration waves on
buildings. Low frequency waves, in the range $1$ to $10$ Hz for earthquakes and
up to a few tens of Hz for vibrations generated by human activities, cause a
large amount of damage or inconvenience; depending on the geological
conditions, they can travel considerable distances and may match the resonant
fundamental frequency of buildings. The ultimate aim of any seismic
metamaterial, or any other seismic shield, is to protect over this entire range
of frequencies; the long wavelengths and low frequencies involved have meant
this has been unachievable to date.
Elastic flexural waves, applicable in the mechanical vibrations of thin
elastic plates, can be designed to have a broad zero-frequency stop-band using
a periodic array of very small clamped circles. Inspired by this experimental
and theoretical observation, albeit in a situation far removed from seismic
waves, we demonstrate that it is possible to achieve elastic surface (Rayleigh)
and body (pressure P and shear S) wave reflectors at very large wavelengths in
structured soils modelled as a fully elastic layer periodically clamped to
bedrock.
We identify zero frequency stop-bands that only exist in the limit of columns
of concrete clamped at their base to the bedrock. In a realistic configuration
of a sedimentary basin 15 meters deep we observe a zero frequency stop-band
covering a broad frequency range of $0$ to $30$ Hz. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Difference analogue of second main theorems for meromorphic mapping into algebraic variety,
Abstract: In this paper, we prove some difference analogues of second main theorems for
meromorphic mappings from $\mathbb{C}^m$ into an algebraic variety $V$ intersecting a finite
set of fixed hypersurfaces in subgeneral position. As an application, we prove
a result on the algebraic degeneracy of holomorphic curves intersecting
hypersurfaces and a difference analogue of Picard's theorem on holomorphic
curves. Furthermore, we obtain a second main theorem for meromorphic mappings
intersecting hypersurfaces in N-subgeneral position for the Veronese embedding in
$\mathbb{P}^n(\mathbb{C})$ and a uniqueness theorem for mappings sharing hypersurfaces. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Intersections of $ω$ classes in $\overline{\mathcal{M}}_{g,n}$,
Abstract: We provide a graph formula which describes an arbitrary monomial in $\omega$
classes (also referred to as stable $\psi$ classes) in terms of a simple family
of dual graphs (pinwheel graphs) with edges decorated by rational functions in
$\psi$ classes. We deduce some numerical consequences and in particular a
combinatorial formula expressing top intersections of $\kappa$ classes on $\mathcal{M}_g$ in
terms of top intersections of $\psi$ classes. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: SPIRou Input Catalog: Activity, Rotation and Magnetic Field of Cool Dwarfs,
Abstract: Based on optical high-resolution spectra obtained with CFHT/ESPaDOnS, we
present new measurements of activity and magnetic field proxies of 442 low-mass
K5-M7 dwarfs. The objects were analysed as potential targets to search for
planetary-mass companions with the new spectropolarimeter and high-precision
velocimeter, SPIRou. We have analysed their high-resolution spectra in an
homogeneous way: circular polarisation, chromospheric features, and Zeeman
broadening of the FeH infrared line. The complex relationship between these
activity indicators is analysed: while no strong connection is found between
the large-scale and small-scale magnetic fields, the latter relates with the
non-thermal flux originating in the chromosphere.
We then examine the relationship between various activity diagnostics and the
optical radial-velocity jitter available in the literature, especially for
planet host stars. We use this to derive for all stars an activity merit
function (higher for quieter stars) with the goal of identifying the most
favorable stars where the radial-velocity jitter is low enough for planet
searches. We find that the main contributors to the RV jitter are the
large-scale magnetic field and the chromospheric non-thermal emission.
In addition, three stars (GJ 1289, GJ 793, and GJ 251) have been followed
along their rotation using the spectropolarimetric mode, and we derive their
magnetic topology. These very slow rotators are good representatives of future
SPIRou targets. They are compared to other stars where the magnetic topology is
also known. The poloidal component of the magnetic field is predominant in all
three stars. | [
0,
1,
0,
0,
0,
0
] | [
"Astrophysics",
"Physics"
] |
Title: Objective Procedure for Reconstructing Couplings in Complex Systems,
Abstract: Inferring directional connectivity from point process data of multiple
elements is desired in various scientific fields such as neuroscience,
geography, economics, etc. Here, we propose an inference procedure for this
goal based on the kinetic Ising model. The procedure is composed of two steps:
(1) determination of the time-bin size for transforming the point-process data
to discrete time binary data and (2) screening of relevant couplings from the
estimated networks. For these, we develop simple methods based on information
theory and computational statistics. Applications to data from artificial and
in vitro neuronal networks show that the proposed procedure performs
fairly well when identifying relevant couplings, including the discrimination
of their signs, with low computational cost. These results highlight the
potential utility of the kinetic Ising model to analyze real interacting
systems with event occurrences. | [
0,
0,
0,
0,
1,
0
] | [
"Physics",
"Statistics",
"Quantitative Biology"
] |
Title: Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines: A Majorization--Minimization Algorithm Approach,
Abstract: Support vector machines (SVMs) are an important tool in modern data analysis.
Traditionally, support vector machines have been fitted via quadratic
programming, either using purpose-built or off-the-shelf algorithms. We present
an alternative approach to SVM fitting via the majorization--minimization (MM)
paradigm. Algorithms that are derived via MM algorithm constructions can be
shown to monotonically decrease their objectives at each iteration, as well as
be globally convergent to stationary points. We demonstrate the construction of
iteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm,
for SVM risk minimization problems involving the hinge, least-square,
squared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net
penalizations. Successful implementations of our algorithms are presented via
some numerical examples. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
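For the SVM-fitting abstract above, one concrete special case is easy to sketch: the squared-hinge loss with a 2-norm penalty, handled by repeatedly refitting a ridge problem on the points that currently violate the margin. This is an illustrative iteratively-reweighted scheme under those specific assumptions, not the paper's general MM construction or its other loss/penalty combinations.

```python
# Sketch: squared-hinge SVM with 2-norm penalty via repeated ridge fits on margin violators.
import numpy as np

def squared_hinge_svm(X, y, lam=1.0, n_iter=50):
    """Approximately minimise sum_i max(0, 1 - y_i x_i^T w)^2 + lam * ||w||^2."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        active = y * (X @ w) < 1.0                 # points inside or beyond the margin
        Xa, ya = X[active], y[active]
        # For labels in {-1, +1}: (1 - y x^T w)^2 = (y - x^T w)^2, i.e. a ridge problem.
        w = np.linalg.solve(Xa.T @ Xa + lam * np.eye(p), Xa.T @ ya)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(200))
w = squared_hinge_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```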
Title: Scholars on Twitter: who and how many are they?,
Abstract: In this paper we present a novel methodology for identifying scholars with a
Twitter account. By combining bibliometric data from Web of Science and Twitter
users identified by Altmetric.com we have obtained the largest set of
individual scholars matched with Twitter users made so far. Our methodology
consists of a combination of matching algorithms that consider different
linguistic elements of both author names and Twitter names, followed by a
rule-based scoring system that weights the common occurrence of several
elements related to the names, individual elements, and activities of both
the matched Twitter users and scholars. Our results indicate that about 2% of the
overall population of scholars in the Web of Science is active on Twitter. By
domain we find a strong presence of researchers from the Social Sciences and
the Humanities. Natural Sciences is the domain with the lowest level of
scholars on Twitter. Researchers on Twitter also tend to be younger than those
that are not on Twitter. As this is a bibliometric-based approach, it is
important to highlight the reliance of the method on the number of publications
produced and tweeted by the scholars; thus, the share of scholars on Twitter
ranges between 1% and 5% depending on their level of productivity. Further
research is suggested in order to improve and expand the methodology. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: General notions of regression depth function,
Abstract: As a measure for the centrality of a point in a set of multivariate data,
statistical depth functions play important roles in multivariate analysis,
because one may conveniently construct descriptive as well as inferential
procedures relying on them. Many depth notions have been proposed in the
literature to fit to different applications. However, most of them are mainly
developed for the location setting. In this paper, we discuss the possibility
of extending some of them into the regression setting. A general concept of
regression depth function is also provided. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: The g-Good-Neighbor Conditional Diagnosability of Locally Twisted Cubes,
Abstract: In the work of Peng et al. in 2012, a new measure was proposed for fault
diagnosis of systems: namely, g-good-neighbor conditional diagnosability, which
requires that any fault-free vertex has at least g fault-free neighbors in the
system. In this paper, we establish the g-good-neighbor conditional
diagnosability of locally twisted cubes under the PMC model and the MM^* model. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Preliminary corrosion studies of IN-RAFM steel with stagnant Lead Lithium at 550 C,
Abstract: Corrosion of Indian RAFMS (reduced activation ferritic martensitic steel)
material with liquid metal, lead-lithium (Pb-Li), has been studied under static
conditions, maintaining Pb-Li at 550 C for different durations: 2500, 5000,
and 9000 hours. The corrosion rate was calculated from weight-loss measurements.
Microstructure analysis was carried out using SEM and chemical composition by
SEM-EDX measurements. Micro Vickers hardness and tensile testing were also
carried out. Chromium was found to leach from the near-surface regions, and
surface hardness was found to decrease in all three cases. Grain boundaries
were affected, and some grains became detached from the surface, giving rise to
pebble-like structures in the surface micrographs. There was no significant reduction
in the tensile strength, after exposure to liquid metal. This paper discusses
the experimental details and the results obtained. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Robust Estimation of Change-Point Location,
Abstract: We introduce a robust estimator of the location parameter for the
change-point in the mean based on the Wilcoxon statistic and establish its
consistency for $L_1$ near epoch dependent processes. It is shown that the
consistency rate depends on the magnitude of change. A simulation study is
performed to evaluate finite sample properties of the Wilcoxon-type estimator
in standard cases, as well as under heavy-tailed distributions and disturbances
by outliers, and to compare it with a CUSUM-type estimator. It shows that the
Wilcoxon-type estimator is equivalent to the CUSUM-type estimator in standard
cases, but outperforms the CUSUM-type estimator in presence of heavy tails or
outliers in the data. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
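The change-point abstract above compares a Wilcoxon-type estimator with a CUSUM-type one. A rank-based sketch in that spirit is shown below; it is a generic rank-CUSUM location estimator written here for illustration and may differ in details from the estimator analysed in the paper.

```python
# Rank-based (Wilcoxon-flavoured) change-point location estimate.
import numpy as np
from scipy.stats import rankdata

def rank_change_point(x):
    """Return k maximising |sum of ranks up to k minus its no-change expectation|."""
    n = len(x)
    cum = np.cumsum(rankdata(x))
    k = np.arange(1, n)                           # candidate split points 1..n-1
    stat = np.abs(cum[:-1] - k * (n + 1) / 2.0)
    return int(np.argmax(stat)) + 1               # observations before the estimated change

rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_t(df=2, size=150),         # heavy-tailed first segment
                    rng.standard_t(df=2, size=150) + 2.0])   # mean shift of size 2
print("estimated change point:", rank_change_point(x))       # true change after 150 points
```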
Title: Linear time-periodic dynamical systems: An H2 analysis and a model reduction framework,
Abstract: Linear time-periodic (LTP) dynamical systems frequently appear in the
modeling of phenomena related to fluid dynamics, electronic circuits, and
structural mechanics via linearization centered around known periodic orbits of
nonlinear models. Such LTP systems can reach orders that make repeated
simulation or other necessary analysis prohibitive, motivating the need for
model reduction.
We develop here an algorithmic framework for constructing reduced models that
retains the linear time-periodic structure of the original LTP system. Our
approach generalizes optimal approaches that have been established previously
for linear time-invariant (LTI) model reduction problems. We employ an
extension of the usual H2 Hardy space defined for the LTI setting to
time-periodic systems and within this broader framework develop an a posteriori
error bound expressible in terms of related LTI systems. Optimization of this
bound motivates our algorithm. We illustrate the success of our method on two
numerical examples. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics",
"Physics",
"Computer Science"
] |
Title: Utilizing artificial neural networks to predict demand for weather-sensitive products at retail stores,
Abstract: One key requirement for effective supply chain management is the quality of
its inventory management. Various inventory management methods are typically
employed for different types of products based on their demand patterns,
product attributes, and supply network. In this paper, our goal is to develop
robust demand prediction methods for weather sensitive products at retail
stores. We employ historical datasets from Walmart, whose customers and markets
are often exposed to extreme weather events which can have a huge impact on
sales regarding the affected stores and products. We want to accurately predict
the sales of 111 potentially weather-sensitive products around the time of
major weather events at 45 Walmart retail locations in the U.S.
Intuitively, we may expect an uptick in the sales of umbrellas before a big
thunderstorm, but it is difficult for replenishment managers to predict the
level of inventory needed to avoid being out-of-stock or overstock during and
after that storm. While they rely on a variety of vendor tools to predict sales
around extreme weather events, they mostly employ a time-consuming process that
lacks a systematic measure of effectiveness. We employ all the methods critical
to any analytics project and start with data exploration. Critical features are
extracted from the raw historical dataset for demand forecasting accuracy and
robustness. In particular, we employ Artificial Neural Network for forecasting
demand for each product sold around the time of major weather events. Finally,
we evaluate our models to assess their accuracy and robustness. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Finance"
] |
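As a generic companion to the demand-forecasting abstract above, the sketch below fits a small feed-forward regressor on synthetic weather-style features with scikit-learn. The feature names, data, and network size are placeholders chosen here; nothing below comes from the Walmart dataset or the paper's actual model.

```python
# Generic sketch: a small neural network regressor on synthetic weather-like features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
features = np.column_stack([
    rng.normal(20, 8, n),        # temperature-like feature (assumed)
    rng.gamma(2.0, 2.0, n),      # precipitation-like feature (assumed)
    rng.integers(0, 7, n),       # day-of-week feature (assumed)
])
units_sold = 5 + 0.8 * features[:, 1] + rng.normal(0, 1, n)   # synthetic sales target

X_tr, X_te, y_tr, y_te = train_test_split(features, units_sold, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```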
Title: Continuously tempered Hamiltonian Monte Carlo,
Abstract: Hamiltonian Monte Carlo (HMC) is a powerful Markov chain Monte Carlo (MCMC)
method for performing approximate inference in complex probabilistic models of
continuous variables. In common with many MCMC methods, however, the standard
HMC approach performs poorly in distributions with multiple isolated modes. We
present a method for augmenting the Hamiltonian system with an extra continuous
temperature control variable which allows the dynamic to bridge between
sampling a complex target distribution and a simpler unimodal base
distribution. This augmentation both helps improve mixing in multimodal targets
and allows the normalisation constant of the target distribution to be
estimated. The method is simple to implement within existing HMC code,
requiring only a standard leapfrog integrator. We demonstrate experimentally
that the method is competitive with annealed importance sampling and simulated
tempering methods at sampling from challenging multimodal distributions and
estimating their normalising constants. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
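The tempered-HMC abstract above notes that the method needs only a standard leapfrog integrator. That generic building block is sketched below (unit mass matrix assumed); it is the plain integrator, not the paper's temperature-augmented dynamic.

```python
# Standard leapfrog integrator used inside Hamiltonian Monte Carlo.
import numpy as np

def leapfrog(q, p, grad_log_prob, step_size, n_steps):
    """Half momentum step, alternating full position/momentum steps, half momentum step."""
    q, p = q.copy(), p.copy()
    p += 0.5 * step_size * grad_log_prob(q)
    for _ in range(n_steps - 1):
        q += step_size * p
        p += step_size * grad_log_prob(q)
    q += step_size * p
    p += 0.5 * step_size * grad_log_prob(q)
    return q, p

# Example target: standard normal, log pi(q) = -q^2/2, so grad log pi(q) = -q.
q0, p0 = np.array([1.0]), np.array([0.5])
q1, p1 = leapfrog(q0, p0, lambda q: -q, step_size=0.1, n_steps=20)
print("new position and momentum:", q1, p1)
```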
Title: Scaling Law for Three-body Collisions in Identical Fermions with $p$-wave Interactions,
Abstract: We experimentally confirmed the threshold behavior and scattering length
scaling law of the three-body loss coefficients in an ultracold spin-polarized
gas of $^6$Li atoms near a $p$-wave Feshbach resonance. We measured the
three-body loss coefficients as functions of temperature and scattering volume,
and found that the threshold law and the scattering length scaling law hold in
limited temperature and magnetic field regions. We also found that the
breakdown of the scaling laws is due to the emergence of the effective-range
term. This work is an important first step toward full understanding of the
loss of identical fermions with $p$-wave interactions. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: An attentive neural architecture for joint segmentation and parsing and its application to real estate ads,
Abstract: In processing human produced text using natural language processing (NLP)
techniques, two fundamental subtasks that arise are (i) segmentation of the
plain text into meaningful subunits (e.g., entities), and (ii) dependency
parsing, to establish relations between subunits. In this paper, we develop a
relatively simple and effective neural joint model that performs both
segmentation and dependency parsing together, instead of one after the other as
in most state-of-the-art works. We will focus in particular on the real estate
ad setting, aiming to convert an ad to a structured description, which we name
property tree, comprising the tasks of (1) identifying important entities of a
property (e.g., rooms) from classifieds and (2) structuring them into a tree
format. In this work, we propose a new joint model that is able to tackle the
two tasks simultaneously and construct the property tree by (i) avoiding the
error propagation that would arise from performing the subtasks one after the other in a
pipelined fashion, and (ii) exploiting the interactions between the subtasks.
For this purpose, we perform an extensive comparative study of the pipeline
methods and the new proposed joint model, reporting an improvement of over
three percentage points in the overall edge F1 score of the property tree.
Also, we propose attention methods, to encourage our model to focus on salient
tokens during the construction of the property tree. Thus we experimentally
demonstrate the usefulness of attentive neural architectures for the proposed
joint model, showcasing a further improvement of two percentage points in edge
F1 score for our application. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: On algebraically integrable domains in Euclidean spaces,
Abstract: Let $D$ be a bounded domain in $\mathbb R^n$, with $n$ odd, whose boundary is
infinitely smooth. We prove that if the volume cut off from the domain by
a hyperplane is an algebraic function of the hyperplane, free of real singular
points, then the domain is an ellipsoid. This partially answers a question of
V.I. Arnold: whether odd-dimensional ellipsoids are the only algebraically
integrable domains? | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Survey of Model Compression and Acceleration for Deep Neural Networks,
Abstract: Deep convolutional neural networks (CNNs) have recently achieved great
success in many visual recognition tasks. However, existing deep neural network
models are computationally expensive and memory intensive, hindering their
deployment in devices with low memory resources or in applications with strict
latency requirements. Therefore, a natural thought is to perform model
compression and acceleration in deep networks without significantly decreasing
the model performance. During the past few years, tremendous progress has been
made in this area. In this paper, we survey the recent advanced techniques for
compacting and accelerating CNN models. These techniques are roughly
categorized into four schemes: parameter pruning and sharing, low-rank
factorization, transferred/compact convolutional filters, and knowledge
distillation. Methods of parameter pruning and sharing are described first,
after which the other techniques are introduced. For each scheme,
we provide insightful analysis regarding the performance, related applications,
advantages, and drawbacks. Then we go through a few very recent
additional successful methods, for example, dynamic capacity networks and
stochastic depth networks. After that, we survey the evaluation metrics, the
main datasets used for evaluating the model performance and recent benchmarking
efforts. Finally, we conclude this paper, discuss remaining challenges and
possible directions on this topic. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Stochastic Gradient Monomial Gamma Sampler,
Abstract: Recent advances in stochastic gradient techniques have made it possible to
estimate posterior distributions from large datasets via Markov Chain Monte
Carlo (MCMC). However, when the target posterior is multimodal, mixing
performance is often poor. This results in inadequate exploration of the
posterior distribution. A framework is proposed to improve the sampling
efficiency of stochastic gradient MCMC, based on Hamiltonian Monte Carlo. A
generalized kinetic function is leveraged, delivering superior stationary
mixing, especially for multimodal distributions. Techniques are also discussed
to overcome the practical issues introduced by this generalization. It is shown
that the proposed approach is better at exploring complex multimodal posterior
distributions, as demonstrated on multiple applications and in comparison with
other stochastic gradient MCMC methods. | [
1,
0,
0,
1,
0,
0
] | [
"Statistics",
"Computer Science"
] |
Title: Training Neural Networks Using Features Replay,
Abstract: Training a neural network using backpropagation algorithm requires passing
error gradients sequentially through the network. The backward locking prevents
us from updating network layers in parallel and fully leveraging the computing
resources. Recently, there have been several works trying to decouple and parallelize
the backpropagation algorithm. However, all of them suffer from severe accuracy
loss or memory explosion when the neural network is deep. To address these
challenging issues, we propose a novel parallel-objective formulation for the
objective function of the neural network. After that, we introduce features
replay algorithm and prove that it is guaranteed to converge to critical points
for the non-convex problem under certain conditions. Finally, we apply our
method to training deep convolutional neural networks, and the experimental
results show that the proposed method achieves faster convergence, lower
memory consumption, and better generalization error than compared methods. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: The anti-spherical category,
Abstract: We study a diagrammatic categorification (the "anti-spherical category") of
the anti-spherical module for any Coxeter group. We deduce that Deodhar's
(sign) parabolic Kazhdan-Lusztig polynomials have non-negative coefficients,
and that a monotonicity conjecture of Brenti's holds. The main technical
observation is a localisation procedure for the anti-spherical category, from
which we construct a "light leaves" basis of morphisms. Our techniques may be
used to calculate many new elements of the $p$-canonical basis in the
anti-spherical module. The results use generators and relations for Soergel
bimodules ("Soergel calculus") in a crucial way. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Unified Treatment of Spin Torques using a Coupled Magnetisation Dynamics and Three-Dimensional Spin Current Solver,
Abstract: A three-dimensional spin current solver based on a generalised spin
drift-diffusion description, including the spin Hall effect, is integrated with
a magnetisation dynamics solver. The resulting model is shown to simultaneously
reproduce the spin-orbit torques generated using the spin Hall effect, spin
pumping torques generated by magnetisation dynamics in multilayers, as well as
the spin transfer torques acting on magnetisation regions with spatial
gradients, whilst field-like and spin-like torques are reproduced in a spin
valve geometry. Two approaches to modelling interfaces are analysed, one based
on the spin mixing conductance and the other based on continuity of spin
currents where the spin dephasing length governs the absorption of transverse
spin components. In both cases analytical formulas are derived for the
spin-orbit torques in a heavy metal / ferromagnet bilayer geometry, showing in
general both field-like and damping-like torques are generated. The limitations
of the analytical approach are discussed, showing that even in a simple bilayer
geometry, due to the non-uniformity of the spin currents, a full
three-dimensional treatment is required. Finally the model is applied to the
quantitative analysis of the spin Hall angle in Pt by reproducing published
experimental data on the ferromagnetic resonance linewidth in the bilayer
geometry. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Quantum Speed Limit is Not Quantum,
Abstract: The quantum speed limit (QSL), or the energy-time uncertainty relation,
describes the fundamental maximum rate for quantum time evolution and has been
regarded as being unique in quantum mechanics. In this study, we obtain a
classical speed limit corresponding to the QSL using the Hilbert space for the
classical Liouville equation. Thus, classical mechanics has a fundamental speed
limit, and QSL is not a purely quantum phenomenon but a universal dynamical
property of the Hilbert space. Furthermore, we obtain similar speed limits for
the imaginary-time Schroedinger equations such as the master equation. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A dual framework for low-rank tensor completion,
Abstract: One of the popular approaches for low-rank tensor completion is to use the
latent trace norm regularization. However, most existing works in this
direction learn a sparse combination of tensors. In this work, we fill this gap
by proposing a variant of the latent trace norm that helps in learning a
non-sparse combination of tensors. We develop a dual framework for solving the
low-rank tensor completion problem. We first show a novel characterization of
the dual solution space with an interesting factorization of the optimal
solution. Overall, the optimal solution is shown to lie on a Cartesian product
of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian
optimization framework to propose a computationally efficient trust-region
algorithm. The experiments illustrate the efficacy of the proposed algorithm on
several real-world datasets across applications. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Human perception in computer vision,
Abstract: Computer vision has made remarkable progress in recent years. Deep neural
network (DNN) models optimized to identify objects in images exhibit
unprecedented task-trained accuracy and, remarkably, some generalization
ability: new visual problems can now be solved more easily based on previous
learning. Biological vision (learned in life and through evolution) is also
accurate and general-purpose. Is it possible that these different learning
regimes converge to similar problem-dependent optimal computations? We
therefore asked whether the human system-level computation of visual perception
has DNN correlates and considered several anecdotal test cases. We found that
perceptual sensitivity to image changes has DNN mid-computation correlates,
while sensitivity to segmentation, crowding and shape has DNN end-computation
correlates. Our results quantify the applicability of using DNN computation to
estimate perceptual loss, and are consistent with the fascinating theoretical
view that properties of human perception are a consequence of
architecture-independent visual learning. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Analyses and estimation of certain design parameters of micro-grooved heat pipes,
Abstract: A numerical analysis of heat conduction through the cover plate of a heat
pipe is carried out to determine the temperature of the working substance,
average temperature of heating and cooling surfaces, heat spread in the
transmitter, and the heat bypass through the cover plate. Analysis has been
extended for the estimation of heat transfer requirements at the outer surface
of the condenser under different heat load conditions using a Genetic
Algorithm. This paper also presents the estimation of an average heat transfer
coefficient for the boiling and condensation of the working substance inside
the microgrooves corresponding to a known temperature of the heat source. The
equation of motion of the working fluid in the meniscus of an equilateral
triangular groove has been presented, from which a new term, the minimum
surface tension required to avoid the dry-out condition, is defined.
Quantitative results showing the effect of thickness of cover plate, heat load,
angle of inclination and viscosity of the working fluid on the different
aspects of the heat transfer, minimum surface tension required to avoid dry
out, velocity distribution of the liquid, and radius of liquid meniscus inside
the micro-grooves have been presented and discussed. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Merge decompositions, two-sided Krohn-Rhodes, and aperiodic pointlikes,
Abstract: This paper provides short proofs of two fundamental theorems of finite
semigroup theory whose previous proofs were significantly longer, namely the
two-sided Krohn-Rhodes decomposition theorem and Henckell's aperiodic pointlike
theorem, using a new algebraic technique that we call the merge decomposition.
A prototypical application of this technique decomposes a semigroup $T$ into a
two-sided semidirect product whose components are built from two subsemigroups
$T_1,T_2$, which together generate $T$, and the subsemigroup generated by their
setwise product $T_1T_2$. In this sense we decompose $T$ by merging the
subsemigroups $T_1$ and $T_2$. More generally, our technique merges semigroup
homomorphisms from free semigroups. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Retrospective Higher-Order Markov Processes for User Trails,
Abstract: Users form information trails as they browse the web, check in with a
geolocation, rate items, or consume media. A common problem is to predict what
a user might do next for the purposes of guidance, recommendation, or
prefetching. First-order and higher-order Markov chains have been widely used
to study such sequences of data. First-order Markov chains are easy to
estimate, but lack accuracy when history matters. Higher-order Markov chains,
in contrast, have too many parameters and suffer from overfitting the training
data. Fitting these parameters with regularization and smoothing only offers
mild improvements. In this paper we propose the retrospective higher-order
Markov process (RHOMP) as a low-parameter model for such sequences. This model
is a special case of a higher-order Markov chain where the transitions depend
retrospectively on a single history state instead of an arbitrary combination
of history states. There are two immediate computational advantages: the number
of parameters is linear in the order of the Markov chain and the model can be
fit to large state spaces. Furthermore, by providing a specific structure to
the higher-order chain, RHOMPs improve the model accuracy by efficiently
utilizing history states without risks of overfitting the data. We demonstrate
how to estimate a RHOMP from data and we demonstrate the effectiveness of our
method on various real application datasets spanning geolocation data, review
sequences, and business locations. The RHOMP model uniformly outperforms
higher-order Markov chains, Kneser-Ney regularization, and tensor
factorizations in terms of prediction accuracy. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
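For context on the user-trail abstract above, the snippet below fits the first-order baseline it compares against: a smoothed first-order transition matrix estimated from short trails. This is deliberately the baseline, not the retrospective higher-order (RHOMP) model itself, and the trails are made-up toy data.

```python
# First-order Markov baseline for user trails, with additive (Laplace) smoothing.
import numpy as np

def fit_first_order(trails, n_states, alpha=1.0):
    """Estimate a smoothed first-order transition matrix from a list of trails."""
    counts = np.full((n_states, n_states), alpha)
    for trail in trails:
        for a, b in zip(trail[:-1], trail[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

trails = [[0, 1, 2, 1, 0], [2, 2, 1, 0], [0, 1, 1, 2]]   # toy state sequences
P = fit_first_order(trails, n_states=3)
print(np.round(P, 2))                                     # rows sum to one
```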
Title: Geometrically stopped Markovian random growth processes and Pareto tails,
Abstract: Many empirical studies document power law behavior in size distributions of
economic interest such as cities, firms, income, and wealth. One mechanism for
generating such behavior combines independent and identically distributed
Gaussian additive shocks to log-size with a geometric age distribution. We
generalize this mechanism by allowing the shocks to be non-Gaussian (but
light-tailed) and dependent upon a Markov state variable. Our main results
provide sharp bounds on tail probabilities and simple formulas for Pareto
exponents. We present two applications: (i) we show that the tails of the
wealth distribution in a heterogeneous-agent dynamic general equilibrium model
with idiosyncratic endowment risk decay exponentially, unlike models with
investment risk where the tails may be Paretian, and (ii) we show that a random
growth model for the population dynamics of Japanese prefectures is consistent
with the observed Pareto exponent but only after allowing for Markovian
dynamics. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Quantitative Finance"
] |
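The mechanism in the random-growth abstract above (i.i.d. shocks to log-size accumulated over a geometrically distributed age) can be simulated in a few lines. The shock and stopping parameters below are arbitrary illustration values, and the tail-index estimate is a crude Hill-style calculation rather than the paper's sharp bounds.

```python
# Simulate geometrically stopped Gaussian random growth and eyeball the Pareto tail.
import numpy as np

rng = np.random.default_rng(0)
n_units, p_reset = 100_000, 0.02                  # number of units, per-period stopping probability
ages = rng.geometric(p_reset, size=n_units)       # geometric age distribution
log_sizes = np.array([rng.normal(0.0, 0.2, a).sum() for a in ages])
sizes = np.exp(log_sizes)

tail = np.sort(sizes)[-n_units // 100:]           # largest 1% of observations
hill = 1.0 / np.mean(np.log(tail / tail[0]))      # crude Hill-style tail-index estimate
print("estimated Pareto exponent:", round(hill, 2))
```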
Title: Self-supervised learning: When is fusion of the primary and secondary sensor cue useful?,
Abstract: Self-supervised learning (SSL) is a reliable learning mechanism in which a
robot enhances its perceptual capabilities. Typically, in SSL a trusted,
primary sensor cue provides supervised training data to a secondary sensor cue.
In this article, a theoretical analysis is performed on the fusion of the
primary and secondary cue in a minimal model of SSL. A proof is provided that
determines the specific conditions under which it is favorable to perform
fusion. In short, it is favorable when (i) the prior on the target value is
strong or (ii) the secondary cue is sufficiently accurate. The theoretical
findings are validated with computational experiments. Subsequently, a
real-world case study is performed to investigate if fusion in SSL is also
beneficial when assumptions of the minimal model are not met. In particular, a
flying robot learns to map pressure measurements to sonar height measurements
and then fuses the two, resulting in better height estimation. Fusion is also
beneficial in the opposite case, when pressure is the primary cue. The analysis
and results are encouraging to study SSL fusion also for other robots and
sensors. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Playing Atari with Six Neurons,
Abstract: Deep reinforcement learning on Atari games maps pixels directly to actions;
internally, the deep neural network bears the responsibility of both extracting
useful information and making decisions based on it. Aiming at devoting entire
deep networks to decision making alone, we propose a new method for learning
policies and compact state representations separately but simultaneously for
policy approximation in reinforcement learning. State representations are
generated by a novel algorithm based on Vector Quantization and Sparse Coding,
trained online along with the network, and capable of growing its dictionary
size over time. We also introduce new techniques allowing both the neural
network and the evolution strategy to cope with varying dimensions. This
enables networks of only 6 to 18 neurons to learn to play a selection of Atari
games with performance comparable---and occasionally superior---to
state-of-the-art techniques using evolution strategies on deep networks two
orders of magnitude larger. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Contraction and uniform convergence of isotonic regression,
Abstract: We consider the problem of isotonic regression, where the underlying signal
$x$ is assumed to satisfy a monotonicity constraint, that is, $x$ lies in the
cone $\{ x\in\mathbb{R}^n : x_1 \leq \dots \leq x_n\}$. We study the isotonic
projection operator (projection to this cone), and find a necessary and
sufficient condition characterizing all norms with respect to which this
projection is contractive. This enables a simple and non-asymptotic analysis of
the convergence properties of isotonic regression, yielding uniform confidence
bands that adapt to the local Lipschitz properties of the signal. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
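The isotonic projection discussed in the abstract above is computed in practice by the pool-adjacent-violators algorithm (PAVA). A compact unweighted version is sketched below; the paper's contraction analysis concerns properties of this projection, not the algorithm itself.

```python
# Pool-adjacent-violators: Euclidean projection onto {x : x_1 <= ... <= x_n}.
import numpy as np

def isotonic_projection(y):
    values, weights = [], []          # block means and block sizes
    for v in y:
        values.append(float(v))
        weights.append(1)
        while len(values) > 1 and values[-2] > values[-1]:   # merge violating blocks
            w = weights[-2] + weights[-1]
            m = (weights[-2] * values[-2] + weights[-1] * values[-1]) / w
            values[-2:] = [m]
            weights[-2:] = [w]
    return np.repeat(values, weights)

y = np.array([1.0, 3.0, 2.0, 2.5, 5.0, 4.0])
print(isotonic_projection(y))          # [1.  2.5 2.5 2.5 4.5 4.5]
```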
Title: Radially distributed values and normal families,
Abstract: Let $L_0$ and $L_1$ be two distinct rays emanating from the origin and let
${\mathcal F}$ be the family of all functions holomorphic in the unit disk
${\mathbb D}$ for which all zeros lie on $L_0$ while all $1$-points lie on
$L_1$. It is shown that ${\mathcal F}$ is normal in ${\mathbb
D}\backslash\{0\}$. The case where $L_0$ is the positive real axis and $L_1$ is
the negative real axis is studied in more detail. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Characterizing Exoplanet Habitability,
Abstract: A habitable exoplanet is a world that can maintain stable liquid water on its
surface. Techniques and approaches to characterizing such worlds are essential,
as performing a census of Earth-like planets that may or may not have life will
inform our understanding of how frequently life originates and is sustained on
worlds other than our own. Observational techniques like high contrast imaging
and transit spectroscopy can reveal key indicators of habitability for
exoplanets. Both polarization measurements and specular reflectance from oceans
(also known as "glint") can provide direct evidence for surface liquid water,
while constraining surface pressure and temperature (from moderate resolution
spectra) can indicate liquid water stability. Indirect evidence for
habitability can come from a variety of sources, including observations of
variability due to weather, surface mapping studies, and/or measurements of
water vapor or cloud profiles that indicate condensation near a surface.
Approaches to making the types of measurements that indicate habitability are
diverse, and have different considerations for the required wavelength range,
spectral resolution, maximum noise levels, stellar host temperature, and
observing geometry. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: Combining learned and analytical models for predicting action effects,
Abstract: One of the most basic skills a robot should possess is predicting the effect
of physical interactions with objects in the environment. This enables optimal
action selection to reach a certain goal state. Traditionally, dynamics are
approximated by physics-based analytical models. These models rely on specific
state representations that may be hard to obtain from raw sensory data,
especially if no knowledge of the object shape is assumed. More recently, we
have seen learning approaches that can predict the effect of complex physical
interactions directly from sensory input. It is however an open question how
far these models generalize beyond their training data. In this work, we
investigate the advantages and limitations of neural network based learning
approaches for predicting the effects of actions based on sensory input and
show how analytical and learned models can be combined to leverage the best of
both worlds. As physical interaction task, we use planar pushing, for which
there exists a well-known analytical model and a large real-world dataset. We
propose to use a convolutional neural network to convert raw depth images or
organized point clouds into a suitable representation for the analytical model
and compare this approach to using neural networks for both perception and
prediction. A systematic evaluation of the proposed approach on a very large
real-world dataset shows two main advantages of the hybrid architecture.
Compared to a pure neural network, it significantly (i) reduces required
training data and (ii) improves generalization to novel physical interaction. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Qualification Conditions in Semi-algebraic Programming,
Abstract: For an arbitrary finite family of semi-algebraic/definable functions, we
consider the corresponding inequality constraint set and we study qualification
conditions for perturbations of this set. In particular we prove that all
positive diagonal perturbations, save perhaps a finite number of them, ensure
that any point within the feasible set satisfies Mangasarian-Fromovitz
constraint qualification. Using the Milnor-Thom theorem, we provide a bound for
the number of singular perturbations when the constraints are polynomial
functions. Examples show that the order of magnitude of our exponential bound
is relevant. Our perturbation approach provides a simple protocol to build
sequences of "regular" problems approximating an arbitrary
semi-algebraic/definable problem. Applications to sequential quadratic
programming methods and sum of squares relaxation are provided. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A one-dimensional model for water desalination by flow-through electrode capacitive deionization,
Abstract: Capacitive deionization (CDI) is a fast-emerging water desalination
technology in which a small cell voltage of ~1 V across porous carbon
electrodes removes salt from feedwaters via electrosorption. In flow-through
electrode (FTE) CDI cell architecture, feedwater is pumped through macropores
or laser perforated channels in porous electrodes, enabling highly compact
cells with parallel flow and electric field, as well as rapid salt removal. We
here present a one-dimensional model describing water desalination by FTE CDI,
and a comparison to data from a custom-built experimental cell. The model
employs simple cell boundary conditions derived via scaling arguments. We show
good model-to-data fits with reasonable values for fitting parameters such as
the Stern layer capacitance, micropore volume, and attraction energy. Thus, we
demonstrate that from an engineering modeling perspective, an FTE CDI cell may
be described with simpler one-dimensional models, unlike more typical
flow-between electrodes architecture where 2D models are required. | [
1,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Preventing Hospital Acquired Infections Through a Workflow-Based Cyber-Physical System,
Abstract: Hospital acquired infections (HAI) are infections acquired within the
hospital from healthcare workers, patients or from the environment, but which
have no connection to the initial reason for the patient's hospital admission.
HAI are a serious world-wide problem, leading to an increase in mortality
rates, duration of hospitalisation as well as significant economic burden on
hospitals. Although clear preventive guidelines exist, studies show that
compliance with them is frequently poor. This paper details the software
perspective for an innovative cyber-physical system, based on business process
software, that will be implemented as part of a European Union-funded research
project. The system is composed of a network of sensors mounted in different
sites around the hospital, a series of wearables used by the healthcare workers
and a server side workflow engine. For better understanding, we describe the
system through the lens of a single, simple clinical workflow that is
responsible for a significant portion of all hospital infections. The goal is
that when completed, the system will be configurable in the sense of
facilitating the creation and automated monitoring of those clinical workflows
that, when combined, account for over 90% of hospital infections. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: OpenML Benchmarking Suites and the OpenML100,
Abstract: We advocate the use of curated, comprehensive benchmark suites of machine
learning datasets, backed by standardized OpenML-based interfaces and
complementary software toolkits written in Python, Java and R. Major
distinguishing features of OpenML benchmark suites are (a) ease of use through
standardized data formats, APIs, and existing client libraries; (b)
machine-readable meta-information regarding the contents of the suite; and (c)
online sharing of results, enabling large scale comparisons. As a first such
suite, we propose the OpenML100, a machine learning benchmark suite of
100~classification datasets carefully curated from the thousands of datasets
available on OpenML.org. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: From mindless mathematics to thinking meat?,
Abstract: Deconstruction of the theme of the 2017 FQXi essay contest is already an
interesting exercise in its own right: Teleology is rarely useful in physics
--- the only known mainstream physics example (black hole event horizons) has a
very mixed score-card --- so the "goals" and "aims and intentions" alluded to
in the theme of the 2017 FQXi essay contest are already somewhat pushing the
limits. Furthermore, "aims and intentions" certainly carries the implication of
consciousness, and opens up a whole can of worms related to the mind-body
problem. As for "mindless mathematical laws", that allusion is certainly in
tension with at least some versions of the "mathematical universe hypothesis".
Finally "wandering towards a goal" again carries the implication of
consciousness, with all its attendant problems.
In this essay I will argue, simply because we do not yet have any really good
mathematical or physical theory of consciousness, that the theme of this essay
contest is premature, and unlikely to lead to any resolution that would be
widely accepted in the mathematics or physics communities. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: When Streams of Optofluidics Meet the Sea of Life,
Abstract: Luke P. Lee is a Tan Chin Tuan Centennial Professor at the National
University of Singapore. In this contribution he describes the power of
optofluidics as a research tool and reviews new insights within the areas of
single cell analysis, microphysiological analysis, and integrated systems. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology"
] |
Title: Willis Theory via Graphs,
Abstract: We study the scale and tidy subgroups of an endomorphism of a totally
disconnected locally compact group using a geometric framework. This leads to
new interpretations of tidy subgroups and the scale function. Foremost, we
obtain a geometric tidying procedure which applies to endomorphisms as well as
a geometric proof of the fact that tidiness is equivalent to being minimizing
for a given endomorphism. Our framework also yields an endomorphism version of
the Baumgartner-Willis tree representation theorem. We conclude with a
construction of new endomorphisms of totally disconnected locally compact
groups from old via HNN-extensions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: The Effect of Site-Specific Spectral Densities on the High-Dimensional Exciton-Vibrational Dynamics in the FMO Complex,
Abstract: The coupled exciton-vibrational dynamics of a three-site model of the FMO
complex is investigated using the Multi-layer Multi-configuration
Time-dependent Hartree (ML-MCTDH) approach. Emphasis is put on the effect of
the spectral density on the exciton state populations as well as on the
vibrational and vibronic non-equilibrium excitations. Models which use either a
single or site-specific spectral densities are contrasted to a spectral density
adapted from experiment. For the transfer efficiency, the total integrated
Huang-Rhys factor is found to be more important than details of the spectral
distributions. However, the latter are relevant for the obtained
non-equilibrium vibrational and vibronic distributions and thus influence the
actual pattern of population relaxation. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: On the nonparametric maximum likelihood estimator for Gaussian location mixture densities with application to Gaussian denoising,
Abstract: We study the Nonparametric Maximum Likelihood Estimator (NPMLE) for
estimating Gaussian location mixture densities in $d$-dimensions from
independent observations. Unlike usual likelihood-based methods for fitting
mixtures, NPMLEs are based on convex optimization. We prove finite sample
results on the Hellinger accuracy of every NPMLE. Our results imply, in
particular, that every NPMLE achieves near parametric risk (up to logarithmic
multiplicative factors) when the true density is a discrete Gaussian mixture
without any prior information on the number of mixture components. NPMLEs can
naturally be used to yield empirical Bayes estimates of the Oracle Bayes
estimator in the Gaussian denoising problem. We prove bounds for the accuracy
of the empirical Bayes estimate as an approximation to the Oracle Bayes
estimator. Here our results imply that the empirical Bayes estimator performs
at nearly the optimal level (up to logarithmic multiplicative factors) for
denoising in clustering situations without any prior knowledge of the number of
clusters. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
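A common computational shortcut for the NPMLE discussed above is to restrict the mixing distribution to a fixed grid of atoms and maximise the (convex) likelihood over the weights with EM-style multiplicative updates. The sketch below uses that grid simplification with unit-variance components; it is an approximation for illustration, not the unrestricted NPMLE studied in the paper.

```python
# Grid-restricted NPMLE sketch: EM updates for mixing weights of unit-variance Gaussians.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])   # toy mixture sample

grid = np.linspace(x.min(), x.max(), 100)                  # candidate atom locations
w = np.full(grid.size, 1.0 / grid.size)                    # uniform initial weights
dens = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)

for _ in range(500):                                       # multiplicative (EM) weight updates
    resp = dens * w
    resp /= resp.sum(axis=1, keepdims=True)                # responsibilities, rows sum to one
    w = resp.mean(axis=0)

print("atoms with weight > 1%:", np.round(grid[w > 0.01], 2))
```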
Title: PRE-render Content Using Tiles (PRECUT). 1. Large-Scale Compound-Target Relationship Analyses,
Abstract: Visualizing a complex network is a computationally intensive process and
depends heavily on the number of components in the network. One way to solve
this problem is not to render the network in real time. PRE-render Content
Using Tiles (PRECUT) is a process to convert any complex network into a
pre-rendered network. Tiles are generated from pre-rendered images at different
zoom levels, and navigating the network simply becomes delivering relevant
tiles. PRECUT is exemplified by performing large-scale compound-target
relationship analyses. Matched molecular pair (MMP) networks were created using
compounds and the target class description found in the ChEMBL database. To
visualize MMP networks, the MMP network viewer has been implemented in COMBINE
and as a web application, hosted at this http URL. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Convolution Semigroups of Probability Measures on Gelfand Pairs, Revisited,
Abstract: Our goal is to find classes of convolution semigroups on Lie groups $G$ that
give rise to interesting processes in symmetric spaces $G/K$. The
$K$-bi-invariant convolution semigroups are a well-studied example. An
appealing direction for the next step is to generalise to right $K$-invariant
convolution semigroups, but recent work of Liao has shown that these are in
one-to-one correspondence with $K$-bi-invariant convolution semigroups. We
investigate a weaker notion of right $K$-invariance, but show that this is, in
fact, the same as the usual notion. Another possible approach is to use
generalised notions of negative definite functions, but this also leads to
nothing new. We finally find an interesting class of convolution semigroups
that are obtained by making use of the Cartan decomposition of a semisimple Lie
group, and the solution of certain stochastic differential equations. Examples
suggest that these are well-suited for generating random motion along geodesics
in symmetric spaces. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Statistics"
] |
Title: Temporal correlation detection using computational phase-change memory,
Abstract: For decades, conventional computers based on the von Neumann architecture
have performed computation by repeatedly transferring data between their
processing and their memory units, which are physically separated. As
computation becomes increasingly data-centric and as the scalability limits in
terms of performance and power are being reached, alternative computing
paradigms are searched for in which computation and storage are collocated. A
fascinating new approach is that of computational memory where the physics of
nanoscale memory devices is used to perform certain computational tasks within
the memory unit in a non-von Neumann manner. Here we present a large-scale
experimental demonstration using one million phase-change memory devices
organized to perform a high-level computational primitive by exploiting the
crystallization dynamics. Also presented is an application of such a
computational memory to process real-world data-sets. The results show that
this co-existence of computation and storage at the nanometer scale could be
the enabler for new, ultra-dense, low power, and massively parallel computing
systems. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Quantum Interference of Glory Rescattering in Strong-Field Atomic Ionization,
Abstract: During the ionization of atoms irradiated by linearly polarized intense laser
fields, we find for the first time that the transverse momentum distribution of
photoelectrons can be well fitted by a squared zeroth-order Bessel function
because of the quantum interference effect of Glory rescattering. The
characteristic of the Bessel function is determined by the common angular
momentum of a bunch of semiclassical paths termed Glory trajectories, which
are launched with different nonzero initial transverse momenta distributed on a
specific circle in the momentum plane and finally deflected to the same
asymptotic momentum, which is along the polarization direction, through
post-tunneling rescattering. Glory rescattering theory (GRT) based on the
semiclassical path-integral formalism is developed to address this effect
quantitatively. Our theory can resolve the long-standing discrepancies between
existing theories and experiments on the fringe location, predict the sudden
transition of the fringe structure in holographic patterns, and shed light on
the quantum interference aspects of low-energy structures in strong-field
atomic ionization. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On vector measures and extensions of transfunctions,
Abstract: We are interested in extending operators defined on positive measures, called
here transfunctions, to signed measures and vector measures. Our methods use a
somewhat nonstandard approach to measures and vector measures. The necessary
background, including proofs of some auxiliary results, is included. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |