abstract (string, 6–6.09k chars) | id (string, 9–16 chars) | time (int64, 725k–738k) |
---|---|---|
It is by now well established that Dirac fermions coupled to non-Abelian
gauge theories can undergo an Anderson-type localization transition. This
transition affects eigenmodes in the lowest part of the Dirac spectrum, the
ones most relevant to the low-energy physics of these models. Here we review
several aspects of this phenomenon, mostly using the tools of lattice gauge
theory. In particular, we discuss how the transition is related to the
finite-temperature transitions leading to the deconfinement of fermions, as
well as to the restoration of chiral symmetry that is spontaneously broken at
low temperature. Other topics we touch upon are the universality of the
transition, and its connection to topological excitations (instantons) of the
gauge field and the associated fermionic zero modes. While the main focus is on
Quantum Chromodynamics, we also discuss how the localization transition appears
in other related models with different fermionic contents (including the
quenched approximation), gauge groups, and in different space-time dimensions.
Finally we offer some speculations about the physical relevance of the
localization transition in these models.
| 2104.14388 | 737,909 |
Quantum spins of mesoscopic size are a well-studied playground for
engineering non-classical states. If the spin represents the collective state
of an ensemble of qubits, its non-classical behavior is linked to entanglement
between the qubits. In this work, we report on an experimental study of
entanglement in dysprosium's electronic spin. Its ground state, of angular
momentum $J=8$, can formally be viewed as a set of $2J$ qubits symmetric upon
exchange. To access entanglement properties, we partition the spin by optically
coupling it to an excited state $J'=J-1$, which removes a pair of qubits in a
state defined by the light polarization. Starting with the well-known W and
squeezed states, we extract the concurrence of qubit pairs, which quantifies
their non-classical character. We also directly demonstrate entanglement
between the 14- and 2-qubit subsystems via an increase in entropy upon
partition. In a complementary set of experiments, we probe decoherence of a
state prepared in the excited level $J'=J+1$ and interpret spontaneous emission
as a loss of a qubit pair in a random state. This allows us to contrast the
robustness of pairwise entanglement of the W state with the fragility of the
coherence involved in a Schr\"odinger cat state. Our findings open up the
possibility to engineer novel types of entangled atomic ensembles, in which
entanglement occurs within each atom's electronic spin as well as between
different atoms.
| 2104.14389 | 737,909 |
We provide a class of quantum evolutions beyond the Markovian semigroup. This
class is governed by a hybrid Davies-like generator such that dissipation is
controlled by a suitable memory kernel and decoherence by a standard GKLS
generator. These two processes commute, and both of them commute with the
unitary evolution controlled by the system's Hamiltonian. The corresponding
memory kernel gives rise to a semi-Markov evolution of the diagonal elements of
the density matrix. However, the corresponding evolution need not be completely
positive. The role of the decoherence generator is to restore complete
positivity. Hence, to pose the dynamical problem one needs two processes: one
generated by a classical semi-Markov memory kernel and the other by a purely
quantum decoherence generator. This scheme is illustrated for a qubit evolution.
| 2104.14390 | 737,909 |
Atomic interference experiments can probe the gravitational redshift via the
internal energy splitting of atoms and thus give direct access to test the
universality of the coupling between matter-energy and gravity at different
spacetime points. By including possible violations of the equivalence principle
in a fully quantized treatment of all degrees of freedom, we characterize how
the sensitivity to gravitational redshift violations arises in atomic clocks
and atom interferometers, as well as their underlying limitations.
Specifically, we show that: (i.) Contributions beyond linear order to trapping
potentials lead to such a sensitivity of trapped atomic clocks. (ii.) While
Bragg-type interferometers, even with a superposition of internal states, with
state-independent, linear interaction potentials are at first insensitive to
gravitational redshift tests, modified configurations, for example by
relaunching the atoms, can mimic such tests under certain conditions.
(iii.) Guided atom interferometers are comparable to atomic clocks. (iv.)
Internal transitions lead to state-dependent interaction potentials through
which light-pulse atom interferometers can become sensitive to gravitational
redshift violations.
| 2104.14391 | 737,909 |
Intelligent task placement and management of tasks in large-scale fog
platforms is challenging due to the highly volatile nature of modern workload
applications and sensitive user requirements of low energy consumption and
response time. Container orchestration platforms have emerged to alleviate this
problem with prior art either using heuristics to quickly reach scheduling
decisions or AI-driven methods like reinforcement learning and evolutionary
approaches to adapt to dynamic scenarios. The former often fail to quickly
adapt in highly dynamic environments, whereas the latter have run-times that
are slow enough to negatively impact response time. Therefore, there is a need
for scheduling policies that are both reactive to work efficiently in volatile
environments and have low scheduling overheads. To achieve this, we propose a
Gradient Based Optimization Strategy using Back-propagation of gradients with
respect to Input (GOBI). Further, we leverage the accuracy of predictive
digital-twin models and simulation capabilities by developing a Coupled
Simulation and Container Orchestration Framework (COSCO). Using this, we create
a hybrid simulation driven decision approach, GOBI*, to optimize Quality of
Service (QoS) parameters. Co-simulation and the back-propagation approaches
allow these methods to adapt quickly in volatile environments. Experiments
conducted using real-world data on fog applications using the GOBI and GOBI*
methods, show a significant improvement in terms of energy consumption,
response time, Service Level Objective and scheduling time by up to 15, 40, 4,
and 82 percent respectively when compared to the state-of-the-art algorithms.
| 2104.14392 | 737,909 |
In a quantum-noise limited system, weak-value amplification using
post-selection normally does not produce more sensitive measurements than
standard methods for ideal detectors: the increased weak value is compensated
by the reduced power due to the small post-selection probability. Here we
experimentally demonstrate recycled weak-value measurements using a pulsed
light source and optical switch to enable nearly deterministic weak-value
amplification of a mirror tilt. Using photon counting detectors, we demonstrate
a signal improvement by a factor of $4.4 \pm 0.2$ and a signal-to-noise ratio
improvement of $2.10 \pm 0.06$, compared to a single-pass weak-value
experiment, and also compared to a conventional direct measurement of the tilt.
The signal-to-noise ratio improvement could reach around 6 for the parameters
of this experiment, assuming lower loss elements.
| 2104.14393 | 737,909 |
We experimentally achieve wave mode conversion and rainbow trapping in an
elastic waveguide loaded with an array of resonators. Rainbow trapping is a
phenomenon that induces wave confinement as a result of a spatial variation of
the wave velocity, here promoted by gently varying the length of consecutive
resonators. By breaking the geometrical symmetry of the waveguide, we combine
the wave speed reduction with a reflection mechanism that mode-converts
flexural waves impinging on the array into torsional waves travelling along
opposite directions. The framework presented herein may open new opportunities
in the context of wave manipulation through the realization of novel structural
components with concurrent wave conversion and energy trapping capabilities.
| 2104.14394 | 737,909 |
What does it mean today to study a problem from a computational point of
view? We focus on parameterized complexity and on Column 16 "Graph Restrictions
and Their Effect" of D. S. Johnson's Ongoing guide, where several puzzles were
proposed in a summary table with 30 graph classes as rows and 11 problems as
columns. Several of the 330 entries remain unclassified into Polynomial or
NP-complete after 35 years. We provide a full dichotomy for the Steiner Tree
column by proving that the problem is NP-complete when restricted to Undirected
Path graphs. We revise Johnson's summary table according to the granularity
provided by the parameterized complexity for NP-complete problems.
| 2104.14395 | 737,909 |
In robotics, accurate ground-truth position fostered the development of
mapping and localization algorithms through the creation of cornerstone
datasets. In outdoor environments and over long distances, total stations are
the most accurate and precise measurement instruments for this purpose. Most
total station-based systems in the literature are limited to three Degrees Of
Freedom (DOFs), due to the use of a single-prism tracking approach. In this
paper, we present preliminary work on measuring a full pose of a vehicle,
bringing the referencing system to six DOFs. Three total stations are used to
track in real time three prisms attached to a target platform. We describe the
structure of the referencing system and the protocol for acquiring the ground
truth with this system. We evaluated its precision in a variety of different
outdoor environments, ranging from open-sky to forest trails, and compare this
system with another popular source of reference position, the Real Time
Kinematics (RTK) positioning solution. Results show that our approach is the
most precise, reaching average errors of 10 mm in position and 0.6 deg in orientation. This
difference in performance was particularly stark in environments where Global
Navigation Satellite System (GNSS) signals can be weaker due to overreaching
vegetation.
| 2104.14396 | 737,909 |
In this paper, we study the feasibility of coupling the PN ranging with
filtered high-order modulations, and investigate the simultaneous demodulation
of a high-rate telemetry stream while tracking the PN ranging sequence.
Accordingly, we design a receiver scheme that is able to perform a parallel
cancellation, in closed-loop, of the ranging and the telemetry signal
reciprocally. From our analysis, we find that the non-constant envelope
property of the modulation causes an additional jitter on the PN ranging timing
estimation that, on the other hand, can be limited by properly sizing the
receiver loop bandwidth.
Our study proves that the use of filtered high-order modulations combined
with PN ranging outperforms the state-of-the-art in terms of spectral
efficiency and achievable data rate, while having comparable ranging
performance.
| 2104.14397 | 737,909 |
We study the MaxCut problem for graphs $G=(V,E)$. The problem is NP-hard;
there are two main approximation algorithms with theoretical guarantees: (1)
the Goemans \& Williamson algorithm uses semi-definite programming to provide a
0.878 MaxCut approximation (which, if the Unique Games Conjecture is true, is
the best that can be done in polynomial time) and (2) Trevisan proposed an
algorithm using spectral graph theory from which a 0.614 MaxCut approximation
can be obtained. We discuss a new approach using a specific quadratic program
and prove that its solution can be used to obtain at least a 0.502 MaxCut
approximation. The algorithm seems to perform well in practice.
| 2104.14404 | 737,909 |
A communicating system is $k$-synchronizable if all of the message sequence
charts representing the executions can be divided into slices of $k$ sends
followed by $k$ receptions. It was previously shown that, for a fixed given
$k$, one could decide whether a communicating system is $k$-synchronizable.
This result is interesting because the reachability problem can be solved for
$k$-synchronizable systems. However, the decision procedure assumes that the
bound $k$ is fixed. In this paper we improve this result and show that it is
possible to decide if such a bound $k$ exists.
| 2104.14408 | 737,909 |
Within the framework of Kohn-Sham density functional theory (DFT), the
ability to provide good predictions of water properties by employing a strongly
constrained and appropriately normed (SCAN) functional has been extensively
demonstrated in recent years. Here, we further advance the modeling of water by
building a more accurate model on the fourth rung of Jacob's ladder with the
hybrid functional, SCAN0. In particular, we carry out both classical and
Feynman path-integral molecular dynamics calculations of water with the SCAN0
functional and the isobaric-isothermal ensemble. In order to generate the
equilibrated structure of water, a deep neural network potential is trained
from the atomic potential energy surface based on ab initio data obtained from
SCAN0 DFT calculations. For the electronic properties of water, a separate deep
neural network potential is trained using the Deep Wannier method based on the
maximally localized Wannier functions of the equilibrated trajectory at the
SCAN0 level. The structural, dynamic, and electric properties of water were
analyzed. The hydrogen-bond structures, density, infrared spectra, diffusion
coefficients, and dielectric constants of water, in the electronic ground
state, are computed using a large simulation box and long simulation time. For
the properties involving electronic excitations, we apply the GW approximation
within many-body perturbation theory to calculate the quasiparticle density of
states and bandgap of water. Compared to the SCAN functional, mixing exact
exchange mitigates the self-interaction error in the meta-generalized-gradient
approximation and further softens liquid water towards the experimental
direction. For most of the water properties, the SCAN0 functional shows a
systematic improvement over the SCAN functional.
| 2104.14410 | 737,909 |
In this note we confirm the guess of Calegari, Garoufalidis and Zagier in
arXiv:1712.04887 that $R_\zeta=c_\zeta^2$, where $R_\zeta$ is their map on
$K_3$ defined using the cyclic quantum dilogarithm and $c_\zeta$ is the Chern
class map on $K_3$.
| 2104.14413 | 737,909 |
We present a search for continuous gravitational-wave emission due to r-modes
in the pulsar PSR J0537-6910 using data from the LIGO-Virgo Collaboration
observing run O3. PSR J0537-6910 is a young energetic X-ray pulsar and is the
most frequent glitcher known. The inter-glitch braking index of the pulsar
suggests that gravitational-wave emission due to r-mode oscillations may play
an important role in the spin evolution of this pulsar. Theoretical models
confirm this possibility and predict emission at a level that can be probed by
ground-based detectors. In order to explore this scenario, we search for r-mode
emission in the epochs between glitches by using a contemporaneous timing
ephemeris obtained from NICER data. We do not detect any signals in the
theoretically expected band of 86-97 Hz, and report upper limits on the
amplitude of the gravitational waves. Our results improve on previous amplitude
upper limits from r-modes in J0537-6910 by a factor of up to 3 and place
stringent constraints on theoretical models for r-mode driven spin-down in PSR
J0537-6910, especially for higher frequencies at which our results reach below
the spin-down limit defined by energy conservation.
| 2104.14417 | 737,909 |
The paper discusses how robots enable occupant-safe continuous protection for
students when schools reopen. Conventionally, fixed air filters are not used as
a key pandemic prevention method for public indoor spaces because they are
unable to trap the airborne pathogens in time in the entire room. However, by
combining the mobility of a robot with air filtration, the efficacy of cleaning
up the air around multiple people is largely increased. A disinfection co-robot
prototype is thus developed to provide continuous and occupant-friendly
protection to people gathering indoors, specifically for students in a
classroom scenario. In a static classroom with students sitting in a grid
pattern, the mobile robot is able to serve up to 14 students per cycle while
reducing the worst-case pathogen dosage by 20%, and with higher robustness
compared to a static filter. The extent of robot protection is optimized by
tuning the passing distance and speed, such that a robot is able to serve more
people given a threshold of worst-case dosage a person can receive.
| 2104.14418 | 737,909 |
We report on experimental results obtained from collisions of slow highly
charged Ar9+ ions with a carbon monoxide dimer (CO)2 target. A COLTRIMS setup
and a Coulomb explosion imaging approach are used to reconstruct the structure
of the CO dimers. The three dimensional structure is deduced from the 2-body
and 3-body dissociation channels from which both the intermolecular bond length
and the relative orientation of the two molecules are determined. For the
3-body channels, the experimental data are interpreted with the help of a
classical model in which the trajectories of the three emitted fragments are
numerically integrated. We measured the equilibrium intermolecular distance to
be Re = 4.2 A. The orientation of both CO molecules with respect to the dimer
axis is found to be quasi-isotropic due to the large vibrational temperature of
the gas jet.
| 2104.14419 | 737,909 |
Non-small cell lung cancer (NSCLC) is a serious disease and has a high
recurrence rate after surgery. Recently, many machine learning methods have
been proposed for recurrence prediction. Methods using gene data have high
prediction accuracy but come at a high cost. Although radiomics signatures
using only CT images are inexpensive, their accuracy is relatively low. In
this paper, we propose a genotype-guided radiomics method (GGR) for obtaining
high prediction accuracy at low cost. We used a public radiogenomics dataset of
NSCLC, which includes CT images and gene data. The proposed method is a
two-step method consisting of two models. The first model is a gene estimation
model, which estimates gene expression from radiomics features and deep
features extracted from computed tomography (CT) images. The second model
predicts recurrence using the estimated gene expression data. The proposed GGR
method is designed based on hybrid features, a combination of handcrafted and
deep learning-based features. The experiments demonstrated that the prediction
accuracy can be improved significantly from 78.61% (existing radiomics method)
and 79.14% (deep learning method) to 83.28% by the proposed GGR.
| 2104.14420 | 737,909 |
The posterior over Bayesian neural network (BNN) parameters is extremely
high-dimensional and non-convex. For computational reasons, researchers
approximate this posterior using inexpensive mini-batch methods such as
mean-field variational inference or stochastic-gradient Markov chain Monte
Carlo (SGMCMC). To investigate foundational questions in Bayesian deep
learning, we instead use full-batch Hamiltonian Monte Carlo (HMC) on modern
architectures. We show that (1) BNNs can achieve significant performance gains
over standard training and deep ensembles; (2) a single long HMC chain can
provide a comparable representation of the posterior to multiple shorter
chains; (3) in contrast to recent studies, we find posterior tempering is not
needed for near-optimal performance, with little evidence for a "cold
posterior" effect, which we show is largely an artifact of data augmentation;
(4) BMA performance is robust to the choice of prior scale, and relatively
similar for diagonal Gaussian, mixture of Gaussian, and logistic priors; (5)
Bayesian neural networks show surprisingly poor generalization under domain
shift; (6) while cheaper alternatives such as deep ensembles and SGMCMC methods
can provide good generalization, they provide distinct predictive distributions
from HMC. Notably, deep ensemble predictive distributions are similarly close
to HMC as standard SGLD, and closer than standard variational inference.
| 2104.14421 | 737,909 |
The IPv6 over Low-powered Wireless Personal Area Network (6LoWPAN) protocol
was introduced to allow the transmission of Internet Protocol version 6 (IPv6)
packets using the smaller-size frames of the IEEE 802.15.4 standard, which is
used in many Internet of Things (IoT) networks. The primary duty of the 6LoWPAN
protocol is packet fragmentation and reassembly. However, the protocol standard
currently does not include any security measures, not even authenticating the
fragments immediate sender. This lack of immediate-sender authentication opens
the door for adversaries to launch several attacks on the fragmentation
process, such as the buffer-reservation attacks that lead to a Denial of
Service (DoS) attack and resource exhaustion of the victim nodes. This paper
proposes a security integration between 6LoWPAN and the Routing Protocol for
Low Power and Lossy Networks (RPL) through the Chained Secure Mode (CSM)
framework as a possible solution. Since the CSM framework provides a mean of
immediate-sender trust, through the use of Network Coding (NC), and an
integration interface for the other protocols (or mechanisms) to use this trust
to build security decisions, 6LoWPAN can use this integration to build a
chain-of-trust along the fragments routing path. A proof-of-concept
implementation was done in Contiki Operating System (OS), and its security and
performance were evaluated against an external adversary launching a
buffer-reservation attack. The results from the evaluation showed significant
mitigation of the attack with almost no increase in power consumption, which
demonstrates the great potential of such an integration to secure the forwarding
process at the 6LoWPAN Adaptation Layer.
| 2104.14422 | 737,909 |
Multicollinearity inflates the variance of the Ordinary Least Squares
estimators due to the correlation between two or more independent variables
(including the constant term). A widely applied solution is to estimate with
penalized estimators (such as the ridge estimator, the Liu estimator, etc.),
which reduce the mean square error at the cost of introducing bias. Although
the variance diminishes with these procedures, inference and goodness of fit
appear to be lost. Alternatively, raise regression (\cite{Garcia2011} and
\cite{Salmeron2017}) mitigates the problems generated by multicollinearity
without losing inference and while keeping the coefficient of determination.
This paper fully formalizes the raise estimator, summarizing all previous
contributions: its mean square error, the variance inflation factor, the
condition number, the adequate selection of the variable to be raised,
successive raising, and the relation between the raise and the ridge estimator.
As a novelty, we also present the estimation method and the relation between
raising and residualization, and we analyze the norm of the estimator, the
behaviour of the individual and joint significance tests, and the behaviour of
the mean square error and the coefficient of variation. The usefulness of raise
regression as an alternative for mitigating multicollinearity is illustrated
with two empirical applications.
| 2104.14423 | 737,909 |
We present a novel method for finite element analysis of inelastic structures
containing Shape Memory Alloys (SMAs). Phenomenological constitutive models for
SMAs lead to material nonlinearities that require substantial computational
effort to resolve. Finite element analysis methods, which rely on Gauss
quadrature integration schemes, must solve two sets of coupled differential
equations: one at the global level and the other at the local, i.e. Gauss point
level. In contrast to the conventional return mapping algorithm, which solves
these two sets of coupled differential equations separately using a nested
Newton procedure, we propose a scheme to solve the local and global
differential equations simultaneously. In the process we also derive
closed-form expressions used to update the internal state variables, and unify
the popular closest-point and cutting plane methods with our formulas.
Numerical testing indicates that our method allows for larger thermomechanical
loading steps and provides increased computational efficiency, over the
standard return mapping algorithm.
| 2104.14424 | 737,909 |
We discuss the fundamental noise limitations of a ferromagnetic torque sensor
based on a levitated magnet in the tipping regime. We evaluate the optimal
magnetic field resolution taking into account the thermomechanical noise and
the mechanical detection noise at the standard quantum limit (SQL). We find
that the Energy Resolution Limit (ERL), pointed out in recent literature as a
relevant benchmark for most classes of magnetometers, can be surpassed by many
orders of magnitude. Moreover, similarly to the case of a ferromagnetic
gyroscope, it is also possible to surpass the standard quantum limit for
magnetometry with independent spins, arising from spin-projection noise. Our
finding indicates that magnetomechanical systems optimized for magnetometry can
achieve a magnetic field resolution per unit volume several orders of magnitude
better than any conventional magnetometer. We discuss possible implications,
focusing on fundamental physics problems such as the search for exotic
interactions beyond the standard model.
| 2104.14425 | 737,909 |
Discovering novel high-level concepts is one of the most important steps
needed for human-level AI. In inductive logic programming (ILP), discovering
novel high-level concepts is known as predicate invention (PI). Although seen
as crucial since the founding of ILP, PI is notoriously difficult and most ILP
systems do not support it. In this paper, we introduce POPPI, an ILP system
that formulates the PI problem as an answer set programming problem. Our
experiments show that (i) PI can drastically improve learning performance when
useful, (ii) PI is not too costly when unnecessary, and (iii) POPPI can
substantially outperform existing ILP systems.
| 2104.14426 | 737,909 |
Network visualisation, drawn from attitudinal survey data, exposes the
structure of opinion-based groups. We make use of these network projections to
identify the groups reliably through community detection algorithms and to
examine social-identity-based polarisation. Our goal is to present a method for
revealing polarisation in attitudinal surveys. This method can be broken down
into the following steps: data preparation, construction of similarity-based
networks, algorithmic identification of opinion-based groups, and
identification of item importance for community structure. We examine the
method's performance and possible scope by applying it to empirical data
and to a broad range of synthetic data sets. The empirical data application
points out possible conclusions (i.e. social-identity polarisation), whereas
the synthetic data sets mark out the method's boundaries. In addition to an
application example on a political attitude survey, our results suggest that the
method works for various surveys but is also moderated by the efficacy of the
community detection algorithms. Concerning the identification of opinion-based
groups, we provide a solid method to rank each item's influence on group
formation and its value as a group identifier. We discuss how this network approach to
identifying polarization can classify non-overlapping opinion-based groups even
in the absence of extreme opinions.
| 2104.14427 | 737,909 |
We take the opportunity offered by Schirmacher, Bryk, Ruocco et al. to
explain more in depth the details of our theoretical model (Phys. Rev. Lett.
112, 145501 (2019)) and its improved and extended version (Phys. Rev. Research
2, 013267, (2020)). We unequivocally show the presence of a boson peak (BP)
anomaly in solids due to anharmonic diffusive damping (of the Akhiezer type)
and anharmonic corrections to the low-energy plane wave $\omega=v\,k$
dispersion relation. Finally, we emphasize the need of going beyond the
old-fashion "harmonic disorder" BP picture which is clearly unable to explain
the recent experimental observations of BP in perfectly ordered crystals. For
the sake of openness and transparency, we provide a Mathematica file
downloadable by all interested readers who wish to check our calculations.
| 2104.14428 | 737,909 |
Randomized Numerical Linear Algebra (RandNLA) is a powerful class of methods,
widely used in High Performance Computing (HPC). RandNLA provides approximate
solutions to linear algebra functions applied to large signals, at reduced
computational costs. However, the randomization step for dimensionality
reduction may itself become the computational bottleneck on traditional
hardware. Leveraging near constant-time linear random projections delivered by
LightOn Optical Processing Units we show that randomization can be
significantly accelerated, at negligible precision loss, in a wide range of
important RandNLA algorithms, such as RandSVD or trace estimators.
| 2104.14429 | 737,909 |
Recently, researchers have tried to use a few anomalies for video anomaly
detection (VAD) instead of only normal data during the training process. A side
effect of data imbalance occurs when a few abnormal samples face a vast number
of normal samples. The latest VAD works use a triplet loss or a data re-sampling
strategy to
lessen this problem. However, there is still no elaborately designed structure
for discriminative VAD with a few anomalies. In this paper, we propose a
DiscRiminative-gEnerative duAl Memory (DREAM) anomaly detection model to take
advantage of a few anomalies and solve data imbalance. We use two shallow
discriminators to tighten the normal feature distribution boundary along with a
generator for the next frame prediction. Further, we propose a dual memory
module to obtain a sparse feature representation in both normality and
abnormality space. As a result, DREAM not only solves the data imbalance
problem but also learns a reasonable feature space. Further theoretical analysis
shows that our DREAM also works for unknown anomalies. Compared with the
previous methods on UCSD Ped1, UCSD Ped2, CUHK Avenue, and ShanghaiTech, our
model outperforms all the baselines with no extra parameters. The ablation
study demonstrates the effectiveness of our dual memory module and
discriminative-generative network.
| 2104.14430 | 737,909 |
This work considers a Poisson noise channel with an amplitude constraint. It
is well-known that the capacity-achieving input distribution for this channel
is discrete with finitely many points. We sharpen this result by introducing
upper and lower bounds on the number of mass points. In particular, the upper
bound of order $\mathsf{A} \log^2(\mathsf{A})$ and lower bound of order
$\sqrt{\mathsf{A}}$ are established where $\mathsf{A}$ is the constraint on the
input amplitude. In addition, along the way, we show several other properties
of the capacity and capacity-achieving distribution. For example, it is shown
that the capacity is equal to $ - \log P_{Y^\star}(0)$ where $P_{Y^\star}$ is
the optimal output distribution. Moreover, an upper bound on the values of the
probability masses of the capacity-achieving distribution and a lower bound on
the probability of the largest mass point are established.
| 2104.14431 | 737,909 |
Hydrodynamic flow of charge carriers in graphene is an energy flow unlike the
usual mass flow in conventional fluids. In neutral graphene, the energy flow is
decoupled from the electric current, making it diffcult to observe the
hydrodynamic effects and measure the viscosity of the electronic fluid by means
of electric current measurements. Nevertheless one can observe nonuniform
current densities by confining the charge flow to a narrow channel, where the
current can exhibit the well known ballistic-diffusive crossover. The standard
diffusive behavior with the uniform current density across the channel is
achieved under the assumptions of specular scattering on the channel
boundaries. This flow can also be made nonuniform by applying weak magnetic
fields. In this case, the curvature of the current density profile is
determined by the quasiparticle recombination processes dominated by the
disorder-assisted electron-phonon scattering - the so-called supercollisions.
| 2104.14432 | 737,909 |
In this note, we present deterministic algorithms for the Hidden Subgroup
Problem. The algorithm for abelian groups achieves the same asymptotic query
complexity as the optimal randomized algorithm. The algorithm for non-abelian
groups comes within a polylogarithmic factor of the optimal randomized query
complexity.
| 2104.14436 | 737,909 |
Imagine, you enter a grocery store to buy food. How many people do you overlap
with in this store? How much time do you overlap with each person in the store?
In this paper, we answer these questions by studying the overlap times between
customers in the infinite server queue. We compute in closed form the steady
state distribution of the overlap time between a pair of customers and the
distribution of the number of customers that an arriving customer will overlap
with. Finally, we define a residual process that counts the number of
overlapping customers that overlap in the queue for at least $\delta$ time
units and compute its mean, variance, and distribution in the exponential
service setting.
| 2104.14437 | 737,909 |
Strong field processes in the non-relativistic regime are insensitive to the
electron spin, i.e. the observables appear to be independent of this electron
property. This does not have to be the case for several active electrons where
Pauli principle may affect the their dynamics. We exemplify this statement
studying model atoms with three active electrons interacting with strong pulsed
radiation, using an ab-initio time-dependent Schr\"odinger equation on a grid.
In our restricted dimensionality model we are able, for the first time, to
analyse momenta correlations of the three outgoing electrons using Dalitz
plots. We show that significant differences are obtained between model Neon and
Nitrogen atoms. These differences are traced back to the different symmetries
of the electronic wavefunctions, and directly related to the different initial
state spin components.
| 2104.14438 | 737,909 |
Machine learning potentials have emerged as a powerful tool to extend the
time and length scales of first principles-quality simulations. Still, most
machine learning potentials cannot distinguish different electronic spin
orientations and thus are not applicable to materials in different magnetic
states. Here, we propose spin-dependent atom-centered symmetry functions as a
new type of descriptor taking the atomic spin degrees of freedom into account.
When used as input for a high-dimensional neural network potential (HDNNP),
accurate potential energy surfaces of multicomponent systems describing
multiple magnetic states can be constructed. We demonstrate the performance of
these magnetic HDNNPs for the case of manganese oxide, MnO. We show that the
method predicts the magnetically distorted rhombohedral structure in excellent
agreement with density functional theory and experiment. Its efficiency allows
us to determine the N\'{e}el temperature considering structural fluctuations,
entropic effects, and defects. The method is general and is expected to be
useful also for other types of systems like oligonuclear transition metal
complexes.
| 2104.14439 | 737,909 |
By generalizing our automated algebra approach from homogeneous space to
harmonically trapped systems, we have calculated the fourth- and fifth-order
virial coefficients of universal spin-1/2 fermions in the unitary limit,
confined in an isotropic harmonic potential. We present results for said
coefficients as a function of trapping frequency (or, equivalently,
temperature), which compare favorably with previous Monte Carlo calculations
(available only at fourth order) as well as with our previous estimates in the
untrapped limit (high temperature, low frequency). We use our estimates of the
virial expansion, together with resummation techniques, to calculate the
compressibility and spin susceptibility.
| 2104.14440 | 737,909 |
Inconsistencies regarding the nature of globular cluster multiple population
radial distributions are a matter for concern given their role in testing or
validating cluster dynamical evolution modeling. In this study, we present a
re-analysis of eight globular cluster radial distributions using publicly
available ground-based ugriz and UBVRI photometry; correcting for a systematic
error identified in the literature. We detail the need for including and
considering not only K-S probabilities but critical K-S statistic values as
well when drawing conclusions from radial distributions, as well as the impact
of sample incompleteness. Revised cumulative radial distributions are
presented, and the literature of each cluster reviewed to provide a fuller
picture of our results. We find that many multiple populations are not as
segregated as once thought, and that there is a pressing need for better
understanding of the spatial distributions of multiple populations in globular
clusters.
| 2104.14441 | 737,909 |
Starting from $\mathbb{C}^*$-actions on complex projective varieties, we
construct and investigate birational maps among the corresponding extremal
fixed point components. We study the case in which such birational maps are
locally described by toric flips, either of Atiyah type or so-called
non-equalized. We relate this notion of toric flip with the property of the
action being non-equalized. Moreover, we find explicit examples of rational
homogeneous varieties admitting a $\mathbb{C}^*$-action whose weighted blow-up
at the extremal fixed point components gives a birational map among two
projective varieties that is locally a toric non-equalized flip.
| 2104.14442 | 737,909 |
We present an effective field theory describing the relevant interactions of
the Standard Model with an electrically neutral particle that can account for
the dark matter in the Universe. The possible mediators of these interactions
are assumed to be heavy. The dark matter candidates that we consider have spin
0, 1/2 or 1, belong to an electroweak multiplet with arbitrary isospin and
hypercharge and their stability at cosmological scales is guaranteed by
imposing a $\mathbb{Z}_2$ symmetry. We present the most general framework for
describing the interaction of the dark matter with standard particles, and
construct a general non-redundant basis of the gauge-invariant operators up to
dimension six. The basis includes multiplets with non-vanishing hypercharge,
which can also be viable DM candidates. We give two examples illustrating the
phenomenological use of such a general effective framework. First, we consider
the case of a scalar singlet, provide convenient semi-analytical expressions
for the relevant dark matter observables, use present experimental data to set
constraints on the Wilson coefficients of the operators, and show how the
interplay of different operators can open new allowed windows in the parameter
space of the model. Then we study the case of a lepton isodoublet, which
involves co-annihilation processes, and we discuss the impact of the operators
on the particle mass splitting and direct detection cross sections. These
examples highlight the importance of the contribution of the various
non-renormalizable operators, which can even dominate over the gauge
interactions in certain cases.
| 2104.14443 | 737,909 |
The dynamics of physical systems is often constrained to lower dimensional
sub-spaces due to the presence of conserved quantities. Here we propose a
method to learn and exploit such symmetry constraints building upon Hamiltonian
Neural Networks. By enforcing cyclic coordinates with appropriate loss
functions, we find that we can achieve improved accuracy on simple classical
dynamics tasks. By fitting analytic formulae to the latent variables in our
network we recover that our networks are utilizing conserved quantities such as
(angular) momentum.
| 2104.14444 | 737,909 |
We study finite first-order satisfiability (FSAT) in the constructive setting
of dependent type theory. Employing synthetic accounts of enumerability and
decidability, we give a full classification of FSAT depending on the
first-order signature of non-logical symbols. On the one hand, our development
focuses on Trakhtenbrot's theorem, stating that FSAT is undecidable as soon as
the signature contains an at least binary relation symbol. Our proof proceeds
by a many-one reduction chain starting from the Post correspondence problem. On
the other hand, we establish the decidability of FSAT for monadic first-order
logic, i.e. where the signature only contains at most unary function and
relation symbols, as well as the enumerability of FSAT for arbitrary enumerable
signatures. To showcase an application of Trakhtenbrot's theorem, we continue
our reduction chain with a many-one reduction from FSAT to separation logic.
All our results are mechanised in the framework of a growing Coq library of
synthetic undecidability proofs.
| 2104.14445 | 737,909 |
We consider the influence of transverse confinement on the instability
properties of velocity and density distributions reminiscent of those
pertaining to exchange flows in stratified inclined ducts, such as the recent
experiment of Lefauve et al. (J. Fluid Mech. 848, 508-544, 2018). Using a
normal mode streamwise and temporal expansion for flows in ducts with various
aspect ratios $B$ and non-trivial transverse velocity profiles, we calculate
two-dimensional (2D) dispersion relations with associated eigenfunctions
varying in the 'crosswise' direction, in which the density varies, and the
spanwise direction, both normal to the duct walls and to the flow direction. We
also compare these 2D dispersion relations to the so-called one-dimensional
(1D) dispersion relation obtained for spanwise invariant perturbations, for
different aspect ratios $B$ and bulk Richardson numbers $Ri_b$. In this limited
parameter space, the presence of lateral walls has a stabilizing effect.
Furthermore, accounting for spanwise-varying perturbations results in a
plethora of unstable modes, the number of which increases as the aspect ratio
is increased. These modes present an odd-even regularity in their spatial
structures, which is rationalized by comparison to the so-called
one-dimensional oblique (1D-O) dispersion relation obtained for oblique waves.
Finally, we show that in most cases, the most unstable 2D mode is the one that
oscillates the least in the spanwise direction, as a consequence of viscous
damping. However, in a limited region of the parameter space and in the absence
of stratification, we show that a secondary mode with a more complex `twisted'
structure dominated by crosswise vorticity becomes more unstable than the least
oscillating Kelvin-Helmholtz mode associated with spanwise vorticity.
| 2104.14446 | 737,909 |
A parallel implementation of a compatible discretization scheme for
steady-state Stokes problems is presented in this work. The scheme uses
generalized moving least squares to generate differential operators and apply
boundary conditions. This meshless scheme allows a high-order convergence for
both the velocity and pressure, while also incorporating finite-difference-like
sparse discretization. Additionally, the method is inherently scalable: the
stencil generation process requires local inversion of matrices amenable to GPU
acceleration, and the divergence-free treatment of velocity replaces the
traditional saddle point structure of the global system with elliptic diagonal
blocks amenable to algebraic multigrid. The implementation in this work uses a
variety of Trilinos packages to exploit this local and global parallelism, and
benchmarks demonstrating high-order convergence and weak scalability are
provided.
| 2104.14447 | 737,909 |
We prove that for every $n \ge 2$, there exists a pseudoconvex domain $\Omega
\subset \mathbb{C}^n$ such that $\mathfrak{c}^0(\Omega) \subsetneq
\mathfrak{c}^1(\Omega)$, where $\mathfrak{c}^k(\Omega)$ denotes the core of
$\Omega$ with respect to $\mathcal{C}^k$-smooth plurisubharmonic functions on
$\Omega$. Moreover, we show that there exists a bounded continuous
plurisubharmonic function on $\Omega$ that is not the pointwise limit of a
sequence of $\mathcal{C}^1$-smooth bounded plurisubharmonic functions on
$\Omega$.
| 2104.14448 | 737,909 |
Signed network embedding is an approach to learn low-dimensional
representations of nodes in signed networks with both positive and negative
links, which facilitates downstream tasks such as link prediction with general
data mining frameworks. Due to the distinct properties and significant added
value of negative links, existing signed network embedding methods usually
design dedicated methods based on social theories such as balance theory and
status theory. However, existing signed network embedding methods ignore the
characteristics of multiple facets of each node and mix them up in one single
representation, which limits the ability to capture the fine-grained attentions
between node pairs. In this paper, we propose MUSE, a MUlti-faceted
attention-based Signed network Embedding framework to tackle this problem.
Specifically, a joint intra- and inter-facet attention mechanism is introduced
to aggregate fine-grained information from neighbor nodes. Moreover, balance
theory is also utilized to guide information aggregation from multi-order
balanced and unbalanced neighbors. Experimental results on four real-world
signed network datasets demonstrate the effectiveness of our proposed
framework.
| 2104.14449 | 737,909 |
In the first part of the paper, we study the discontinuous Galerkin (DG) and
$C^0$ interior penalty ($C^0$-IP) finite element approximation of the periodic
strong solution to the fully nonlinear second-order
Hamilton--Jacobi--Bellman--Isaacs (HJBI) equation with coefficients satisfying
the Cordes condition. We prove well-posedness and perform abstract a posteriori
and a priori analyses which apply to a wide family of numerical schemes. These
periodic problems arise as the corrector problems in the homogenization of HJBI
equations. The second part of the paper focuses on the numerical approximation
to the effective Hamiltonian of ergodic HJBI operators via DG/$C^0$-IP finite
element approximations to approximate corrector problems. Finally, we provide
numerical experiments demonstrating the performance of the numerical schemes.
| 2104.14450 | 737,909 |
We determine exactly the short-distance effective potential between two
"guest" charges immersed in a two-dimensional two-component charge-asymmetric
plasma composed of positively ($q_1 = +1$) and negatively ($q_2 = -1/2$)
charged point particles. The result is valid over the whole regime of
stability, where the Coulombic coupling (dimensionless inverse temperature)
$\beta <4$. At high Coulombic coupling $\beta>2$, this model features
like-charge attraction. Also, there cannot be repulsion between
opposite charges at short distances, at variance with large-distance
interactions.
| 2104.14451 | 737,909 |
The problem of restoring images corrupted by Poisson noise is common in many
application fields and, because of its intrinsic ill-posedness, it requires
regularization techniques for its solution. The effectiveness of such
techniques depends on the value of the regularization parameter balancing data
fidelity and regularity of the solution. Here we consider the Total Generalized
Variation regularization introduced in [SIAM J. Imag. Sci, 3(3), 492-526,
2010], which has demonstrated its ability of preserving sharp features as well
as smooth transition variations, and introduce an automatic strategy for
defining the value of the regularization parameter. We solve the corresponding
optimization problem by using a 3-block version of ADMM. Preliminary numerical
experiments support the proposed approach.
| 2104.14452 | 737,909 |
X-ray and gamma-ray emissions observed in lightning and long sparks are
usually connected with the bremsstrahlung of high-energy runaway electrons.
Here, an alternative physical mechanism for producing X-ray and gamma-ray
emissions caused by the polarization current and associated electromagnetic
field moving with relativistic velocity along a curved discharge channel has
been proposed. It is pointed out that lightning and spark discharges should
also produce a coherent radio-frequency radiation. The influence of the
conductivity and the radius of the lightning channel on the propagation
velocity of electromagnetic waves, taking into account the absorption, has
been investigated. The existence of fast electromagnetic surface waves
propagating along the lightning discharge channel at a speed close to the speed
of light in vacuum is shown. The possibility of the production of microwave,
X-ray and gamma-ray emissions by a polarization current pulse moving along a
curved path via synchrotron radiation mechanism during the lightning leader
steps formation and the very beginning of the return stroke stage is pointed
out. The existence of long tails in the power spectrum is shown, which explains
observations of photon energies in the range of 10-100 MeV in terrestrial
gamma-ray flashes (TGFs), as well as the measured power spectrum of laboratory
spark discharges.
| 2104.14454 | 737,909 |
Selenium is a crucial earth-abundant and non-toxic semiconductor with a wide
range of applications across the semiconductor industries. Selenium has drawn
attention from scientific communities for its wide range of applicability: from
photovoltaics to imaging devices. Its usage as a photosensitive material
largely involves the synthesis of the amorphous phase (a-Se) via various
experimental techniques. However, the ground state crystalline phase of this
material, known as the trigonal selenium (\textit{t}-Se), is not extensively
studied for its optimum electronic and optical properties. In this work, we
present density functional theory (DFT) based systematic studies on the
ultra-thin $(10\overline{1}0)$ surface slabs of \textit{t}-Se. We report the
surface energies, work function, electronic and optical properties as a
function of the number of layers for $(10\overline{1}0)$ surface slabs to assess
its suitability for applications as a photosensitive material.
| 2104.14455 | 737,909 |
This chapter addresses the question of how to efficiently solve
many-objective optimization problems in a computationally demanding black-box
simulation context. We shall motivate the question by applications in machine
learning and engineering, and discuss specific harsh challenges in using
classical Pareto approaches when the number of objectives is four or more.
Then, we review solutions combining approaches from Bayesian optimization,
e.g., with Gaussian processes, and concepts from game theory like Nash
equilibria, Kalai-Smorodinsky solutions and detail extensions like
Nash-Kalai-Smorodinsky solutions. We finally introduce the corresponding
algorithms and provide some illustrating results.
| 2104.14456 | 737,909 |
There have recently been detections of radio emission from low-mass stars,
some of which are indicative of star-planet interactions. Motivated by these
exciting new results, here we present stellar wind models for the active
planet-hosting M dwarf AU Mic. Our models incorporate the large-scale
photospheric magnetic field map of the star, reconstructed using the
Zeeman-Doppler Imaging method. We use our models to assess if planet-induced
radio emission could be generated in the corona of AU Mic, through a mechanism
analogous to the sub-Alfv\'enic Jupiter-Io interaction. In the case that AU Mic
has a mass-loss rate of 27 times that of the Sun, we find that both planets b
and c in the system can induce radio emission from 10 MHz to 3 GHz in the
corona of the host star for the majority of their orbits, with peak flux
densities of 10 mJy. Our predicted emission bears a striking similarity to that
recently reported from GJ 1151 by Vedantham et al. (2020), which is indicative
of being induced by a planet. Detection of such radio emission would allow us
to place an upper limit on the mass-loss rate of the star.
| 2104.14457 | 737,909 |
This paper studies the identification of causal effects of a continuous
treatment using a new difference-in-difference strategy. Our approach allows
for endogeneity of the treatment, and employs repeated cross-sections. It
requires an exogenous change over time which affects the treatment in a
heterogeneous way, stationarity of the distribution of unobservables and a rank
invariance condition on the time trend. On the other hand, we do not impose any
functional form restrictions or an additive time trend, and we are invariant to
the scaling of the dependent variable. Under our conditions, the time trend can
be identified using a control group, as in the binary difference-in-differences
literature. In our scenario, however, this control group is defined by the
data. We then identify average and quantile treatment effect parameters. We
develop corresponding nonparametric estimators and study their asymptotic
properties. Finally, we apply our results to the effect of disposable income on
consumption.
| 2104.14458 | 737,909 |
In this Tutorial, we give a pedagogical introduction to Majorana bound states
(MBSs) arising in semiconducting nanostructures. We start by briefly reviewing
the well-known Kitaev chain toy model in order to introduce some of the basic
properties of MBSs before proceeding to describe more experimentally relevant
platforms. Here, our focus lies on simple `minimal' models where the Majorana
wave functions can be obtained explicitly by standard methods. In a first part,
we review the paradigmatic model of a Rashba nanowire with strong spin-orbit
interaction (SOI) placed in a magnetic field and proximitized by a conventional
$s$-wave superconductor. We identify the topological phase transition
separating the trivial phase from the topological phase and demonstrate how the
explicit Majorana wave functions can be obtained in the limit of strong SOI. In
a second part, we discuss MBSs engineered from proximitized edge states of
two-dimensional (2D) topological insulators. We introduce the Jackiw-Rebbi
mechanism leading to the emergence of bound states at mass domain walls and
show how this mechanism can be exploited to construct MBSs. Due to their recent
interest, we also include a discussion of Majorana corner states in 2D
second-order topological superconductors. This Tutorial is mainly aimed at
graduate students -- both theorists and experimentalists -- seeking to
familiarize themselves with some of the basic concepts in the field.
| 2104.14459 | 737,909 |
Plasma accelerators driven by intense laser or particle beams provide
gigavolt-per-meter accelerating fields, promising to drastically shrink
particle accelerators for high-energy physics and photon science. Applications
such as linear colliders and free-electron lasers (FELs) require high energy
and energy efficiency, but also high stability and beam quality. The latter
includes low energy spread, which can be achieved by precise beam loading of
the plasma wakefield using longitudinally shaped bunches, resulting in
efficient and uniform acceleration. However, the plasma wavelength, which sets
the scale for the region of very large accelerating fields to be 100 $\mu$m or
smaller, requires bunches to be synchronized and shaped with extreme temporal
precision, typically on the femtosecond scale. Here, a self-correction
mechanism is introduced, greatly reducing the susceptibility to jitter. Using
multiple accelerating stages, each with a small bunch compression between them,
almost any initial bunch, regardless of current profile or injection phase,
will self-correct into the current profile that flattens the wakefield, damping
the relative energy spread and any energy offsets. As a consequence, staging
can be used not only to reach high energies, but also to produce the exquisite
beam quality and stability required for a variety of applications.
| 2104.14460 | 737,909 |
Recently, it has been proposed that fruitful synergies may exist between Deep
Learning (DL) and Case Based Reasoning (CBR); that there are insights to be
gained by applying CBR ideas to problems in DL (what could be called DeepCBR).
In this paper, we report on a program of research that applies CBR solutions to
the problem of Explainable AI (XAI) in DL. We describe a series of
twin-systems pairings of opaque DL models with transparent CBR models that
allow the latter to explain the former using factual, counterfactual and
semi-factual explanation strategies. This twinning shows that functional
abstractions of DL (e.g., feature weights, feature importance and decision
boundaries) can be used to drive these explanatory solutions. We also raise the prospect that this research applies to the problem of Data Augmentation in
DL, underscoring the fecundity of these DeepCBR ideas.
| 2104.14461 | 737,909 |
We present an approach to studying and predicting the spatio-temporal
progression of infectious diseases. We treat the problem by adopting a partial
differential equation (PDE) version of the Susceptible, Infected, Recovered,
Deceased (SIRD) compartmental model of epidemiology, which is achieved by
replacing compartmental populations by their densities. Building on our recent
work (Computational Mechanics, 66, 1177, 2020), we replace our earlier use of
global polynomial basis functions with those having local support, as
epitomized in the finite element method, for the spatial representation of the
SIRD parameters. The time dependence is treated by inferring constant
parameters over time intervals that coincide with the time step in
semi-discrete numerical implementations. In combination, this amounts to a
scheme of field inversion of the SIRD parameters over each time step. Applied
to data over ten months of 2020 for the pandemic in the US state of Michigan
and to all of Mexico, our system inference via field inversion infers
spatio-temporally varying PDE SIRD parameters that replicate the progression of
the pandemic with high accuracy. It also produces accurate predictions, when compared against data, for a three-week period into 2021. Of note are the insights suggested on the spatio-temporal variation of infection, recovery and death rates, as well as on the patterns of the population's mobility revealed by the diffusivities of the compartments.
| 2104.14462 | 737,909 |
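A minimal sketch of the forward model behind the abstract above: one explicit finite-difference step of a density-based PDE SIRD system in one spatial dimension. The reaction terms, parameter names, and values are illustrative assumptions; the paper's model, discretization, and inversion scheme are more elaborate.

```python
import numpy as np

def sird_pde_step(S, I, R, D, beta, gamma, delta, diff, dx, dt):
    """One explicit Euler step of a toy 1D PDE SIRD model with compartment densities.
    Parameter names and reaction terms are illustrative, not the paper's exact form."""
    lap = lambda u: (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic Laplacian
    N = S + I + R + D
    new_inf = beta * S * I / np.maximum(N, 1e-12)
    dS = diff * lap(S) - new_inf
    dI = diff * lap(I) + new_inf - (gamma + delta) * I
    dR = diff * lap(R) + gamma * I
    dD = diff * lap(D) + delta * I
    return S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD

# Localized outbreak spreading over a 1D domain (illustrative values).
nx = 200
S = np.full(nx, 1000.0); I = np.zeros(nx); R = np.zeros(nx); D = np.zeros(nx)
I[nx // 2] = 10.0
for _ in range(500):
    S, I, R, D = sird_pde_step(S, I, R, D, beta=0.3, gamma=0.1, delta=0.01,
                               diff=0.5, dx=1.0, dt=0.1)
print("total infected density:", I.sum())
```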
In this paper we define and explore the analytic spread $\ell(\mathcal I)$ of
a filtration in a local ring. We show that, especially for divisorial and
symbolic filtrations, some basic properties of the analytic spread of an ideal
extend to filtrations, even when the filtration is non-Noetherian. We also
illustrate some significant differences between the analytic spread of a
filtration and the analytic spread of an ideal with examples.
In the case of an ideal $I$, we have the classical bounds
$\mbox{ht}(I)\le\ell(I)\le \dim R$. The upper bound $\ell(\mathcal I)\le \dim
R$ is true for filtrations $\mathcal I$, but the lower bound is not true for
all filtrations. We show that for the filtration $\mathcal I$ of symbolic
powers of a height two prime ideal $\mathfrak p$ in a regular local ring of
dimension three (a space curve singularity), so that $\mbox{ht}(\mathcal I) =2$
and $\dim R=3$, we have that $0\le \ell(\mathcal I)\le 2$ and all values of 0,1
and 2 can occur. In the cases of analytic spread 0 and 1 the symbolic algebra
is necessarily non-Noetherian. The symbolic algebra is non-Noetherian if and
only if $\ell(\mathfrak p^{(n)})=3$ for all symbolic powers of $\mathfrak p$
and if and only if $\ell(\mathcal I_a)=3$ for all truncations $\mathcal I_a$ of
$\mathcal I$.
| 2104.14463 | 737,909 |
We investigate the electronic structure of tungsten ditelluride (WTe$_2$)
flakes with different thicknesses in magneto-transport studies. The
temperature-dependent resistance and magnetoresistance (MR) measurements both
confirm the breaking of carrier balance induced by thickness reduction, which
suppresses the `turn-on' behavior and large positive MR. The Shubnikov-de Haas oscillation studies further confirm the thickness-dependent change of
electronic structure of WTe$_2$ and reveal a possible temperature-sensitive
electronic structure change. Finally, we report the thickness-dependent
anisotropy of the Fermi surface, which reveals that multi-layer WTe$_2$ is electronically a 3D material and that the anisotropy decreases as the thickness decreases.
| 2104.14464 | 737,909 |
In this work, we propose a Cross-view Contrastive Learning framework for
unsupervised 3D skeleton-based action Representation (CrosSCLR), by leveraging
multi-view complementary supervision signal. CrosSCLR consists of both
single-view contrastive learning (SkeletonCLR) and cross-view consistent
knowledge mining (CVC-KM) modules, integrated in a collaborative learning
manner. CVC-KM exchanges high-confidence positive/negative samples and their distributions among views according to their embedding similarity, ensuring cross-view consistency of the contrastive context, i.e., similar distributions. Extensive
experiments show that CrosSCLR achieves remarkable action recognition results
on the NTU-60 and NTU-120 datasets under unsupervised settings, yielding higher-quality action representations. Our code is available at
https://github.com/LinguoLi/CrosSCLR.
| 2104.14466 | 737,909 |
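For context on the single-view contrastive component mentioned above, here is a generic InfoNCE-style loss of the kind SkeletonCLR builds on. This is a schematic sketch (tensor shapes, temperature value, and the memory-queue convention are my assumptions), not the authors' released implementation, which is at the linked repository.

```python
import torch
import torch.nn.functional as F

def info_nce(query, key, queue, temperature=0.07):
    """Generic InfoNCE loss: query/key are (B, D) embeddings of two augmentations
    of the same skeleton sequence; queue is a (K, D) bank of negatives."""
    query = F.normalize(query, dim=1)
    key = F.normalize(key, dim=1)
    queue = F.normalize(queue, dim=1)
    l_pos = (query * key).sum(dim=1, keepdim=True)   # (B, 1) positive similarities
    l_neg = query @ queue.t()                        # (B, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)           # positives sit at index 0
```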
Articulatory-to-acoustic (forward) mapping is a technique to predict speech
using various articulatory acquisition techniques as input (e.g. ultrasound
tongue imaging, MRI, lip video). The advantage of lip video is that it is
easily available and affordable: most modern smartphones have a front camera.
There are already a few solutions for lip-to-speech synthesis, but they mostly
concentrate on offline training and inference. In this paper, we propose a
system built from a backend for deep neural network training and inference and
a frontend in the form of a mobile application. Our initial evaluation shows that the scenario is feasible: a top-5 classification accuracy of 74%, combined with feedback from the mobile application user, suggests that the speech-impaired might be able to communicate using this solution.
| 2104.14467 | 737,909 |
In this paper, we address the speech denoising problem, where white Gaussian
additive noise is to be removed from a given speech signal. Our approach is
based on a redundant, analysis-sparse representation of the original speech
signal. We pick an eigenvector of the Zauner unitary matrix and -- under
certain assumptions on the ambient dimension -- we use it as window vector to
generate a spark-deficient Gabor frame. The analysis operator associated with such a frame is a (highly) redundant Gabor transform, which we use as a sparsifying transform in the denoising procedure. We conduct computational
experiments on real-world speech data, solving the analysis basis pursuit
denoising problem, with four different choices of analysis operators, including
our Gabor analysis operator. The results show that our proposed redundant Gabor
transform outperforms -- in all cases -- Gabor transforms generated by
state-of-the-art window vectors of time-frequency analysis.
| 2104.14468 | 737,909 |
Hybrid Morphology Radio Sources (HyMoRS) are a very rare and newly discovered
subclass of radio galaxies that have mixed FR morphology, i.e., FR-I structure on one side of the core and FR-II structure on the other. We systematically searched for HyMoRS using VLA Faint Images
of the Radio Sky at Twenty-cm (FIRST) survey at 1400 MHz and identified
forty-five confirmed HyMoRS and five candidate HyMoRS. Our findings significantly increase the known sample size of HyMoRS. HyMoRS may play an essential role in understanding the interaction of jets with the interstellar medium and the much-debated topic of the FR dichotomy. We identified optical/IR
counterparts for thirty-nine sources in our catalogue. In our sample, five sources show quasar-like behavior. We estimated the spectral index and radio luminosity of the HyMoR sources in our catalogue where possible. We
found that the source J1336+2329 ($\log L=26.93$ W Hz$^{-1}$sr$^{-1}$) was the
most luminous and the source J1204+3801, a Quasar, was the farthest HyMoRS
(with redshift $z$=1.28) in our sample. With this enlarged sample of newly discovered sources, various statistical properties were studied.
| 2104.14469 | 737,909 |
Boosted by the simultaneous translation shared task at IWSLT 2020, promising
end-to-end online speech translation approaches were recently proposed. They
consist in incrementally encoding a speech input (in a source language) and
decoding the corresponding text (in a target language) with the best possible
trade-off between latency and translation quality. This paper investigates two
key aspects of end-to-end simultaneous speech translation: (a) how to encode
efficiently the continuous speech flow, and (b) how to segment the speech flow
in order to alternate optimally between reading (R: encoding input) and writing
(W: decoding output) operations. We extend our previously proposed end-to-end
online decoding strategy and show that while replacing BLSTM by ULSTM encoding
degrades performance in offline mode, it actually improves both efficiency and
performance in online mode. We also measure the impact of different methods to
segment the speech signal (using fixed interval boundaries, oracle word
boundaries or randomly set boundaries) and show that our best end-to-end online
decoding strategy is surprisingly the one that alternates R/W operations on
fixed-size blocks in our English-German speech translation setup.
| 2104.14470 | 737,909 |
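A schematic of the read/write alternation on fixed-size blocks discussed above. The `decode_step` callable and the "return None to wait" convention are my hypothetical stand-ins for the incremental encoder/decoder; this is not the paper's decoding strategy verbatim.

```python
def alternate_read_write(speech_blocks, decode_step):
    """Fixed-block online decoding policy: read (encode) one block of speech frames,
    then write (decode) tokens until the model defers, and repeat."""
    output, encoded = [], []
    for block in speech_blocks:          # R: encode the next fixed-size block
        encoded.extend(block)
        while True:                      # W: emit tokens until the decoder defers
            token = decode_step(encoded, output)
            if token is None:            # None == wait for more input
                break
            output.append(token)
    return output
```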
The transformation method is a powerful tool for providing the constitutive
parameters of the transformed material in the new coordinates. In
transformation elasticity, a general curvilinear change of coordinates
transforms conventional Hooke's law into a different constitutive law in which
the transformed material is not only anisotropic but also polar and chiral and
no known elastic solid satisfies. However, this state-of-the-art description
provides no insight as to what the underlying microstructure of this
transformed material could be, the design of which is a major challenge in this
field. The study aims to theoretically justify the fundamental need for the
polar material by critically revisiting the discrete transformation method. The
key idea is to let transformation gradient operate not only on the elastic
properties but on the underlying architectures of the mechanical lattice. As an
outstanding application, we leverage the proposed design paradigm to physically
construct a polar lattice metamaterial for the observation of elastic carpet
cloaking. Numerical simulations are then implemented to show excellent cloaking
performance under different static and dynamic mechanical loads. The approach
presented herein could promote and accelerate new designs of lattice topologies
for transformation elasticity in particular, and can be extended to realize other emerging elastic properties and unlock peculiar functions, both static and dynamic, in general.
| 2104.14471 | 737,909 |
In previous work, we studied the Gan-Gross-Prasad problem for unipotent representations of finite classical groups. In this paper, we deduce the Gan-Gross-Prasad problem for arbitrary representations from the unipotent case via the Lusztig correspondence.
| 2104.14473 | 737,909 |
Given a set P of n points in the plane, a unit-disk graph G_{r}(P) with
respect to a radius r is an undirected graph whose vertex set is P such that an
edge connects two points p, q \in P if the Euclidean distance between p and q
is at most r. The length of any path in G_r(P) is the number of edges of the
path. Given a value \lambda>0 and two points s and t of P, we consider the
following reverse shortest path problem: finding the smallest r such that the
shortest path length between s and t in G_r(P) is at most \lambda. It was known
previously that the problem can be solved in O(n^{4/3} \log^3 n) time. In this
paper, we present an algorithm of O(\lfloor \lambda \rfloor \cdot n \log n)
time and another algorithm of O(n^{5/4} \log^2 n) time.
| 2104.14476 | 737,909 |
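A brute-force baseline for the reverse shortest path problem defined above: the optimal radius is one of the O(n^2) pairwise distances, so one can binary-search over them with a BFS decision procedure. This is only a didactic sketch running far slower than the paper's algorithms, and it assumes the instance is feasible at the largest pairwise distance.

```python
from collections import deque
from itertools import combinations
import math

def reverse_shortest_path(points, s, t, lam):
    """Smallest r such that the hop distance between points[s] and points[t]
    in the unit-disk graph G_r(points) is at most lam (brute-force baseline)."""
    def hops(r):
        adj = [[] for _ in points]
        for i, j in combinations(range(len(points)), 2):
            if math.dist(points[i], points[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
        dist = {s: 0}
        q = deque([s])
        while q:                         # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist.get(t, math.inf)

    candidates = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    lo, hi = 0, len(candidates) - 1
    while lo < hi:                       # feasibility is monotone in r
        mid = (lo + hi) // 2
        if hops(candidates[mid]) <= lam:
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

print(reverse_shortest_path([(0, 0), (1, 0), (2, 0), (3, 0)], s=0, t=3, lam=3))
```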
Usually, opinion formation models assume that individuals have an opinion
about a given topic which can change due to interactions with others. However,
individuals can have different opinions on different topics, and therefore n-dimensional models are best suited to deal with these cases. While there have been many efforts to develop analytical treatments of one-dimensional opinion models, less attention has been paid to multidimensional ones. In this work, we
develop an analytical approach for multidimensional models of continuous
opinions where dimensions can be correlated or uncorrelated. We show that for
any generic reciprocal interactions between agents, the mean value of the initial
opinion distribution is conserved. Moreover, for positive social influence
interaction mechanisms, the variance of opinion distributions decreases with
time and the system converges to a delta distribution. In particular, we calculate the convergence time when agents move closer by a discrete amount after interacting, showing a clear difference between correlated and
uncorrelated cases.
| 2104.14477 | 737,909 |
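A toy simulation illustrating the two conservation/contraction properties stated above for reciprocal, positive-influence interactions: the opinion mean is conserved and the total variance shrinks. The symmetric pairwise update rule, the step size, and the correlation structure are simple illustrative choices, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=500, n_dims=2, step=0.05, n_steps=20000, correlated=False):
    """Pairwise reciprocal positive-influence dynamics in n_dims opinion dimensions."""
    cov = np.full((n_dims, n_dims), 0.8) + 0.2 * np.eye(n_dims) if correlated else np.eye(n_dims)
    x = rng.multivariate_normal(np.zeros(n_dims), cov, size=n_agents)
    mean0 = x.mean(axis=0).copy()
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        shift = step * (x[j] - x[i])
        x[i] += shift          # i and j move toward each other by the same amount,
        x[j] -= shift          # so the sum (and hence the mean) is conserved
    return np.abs(x.mean(axis=0) - mean0).max(), x.var(axis=0).sum()

print("max mean drift, total variance:", simulate())
print("max mean drift, total variance (correlated):", simulate(correlated=True))
```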
Human evaluation of modern high-quality machine translation systems is a
difficult problem, and there is increasing evidence that inadequate evaluation
procedures can lead to erroneous conclusions. While there has been considerable
research on human evaluation, the field still lacks a commonly-accepted
standard procedure. As a step toward this goal, we propose an evaluation
methodology grounded in explicit error analysis, based on the Multidimensional
Quality Metrics (MQM) framework. We carry out the largest MQM research study to
date, scoring the outputs of top systems from the WMT 2020 shared task in two
language pairs using annotations provided by professional translators with
access to full document context. We analyze the resulting data extensively,
finding among other results a substantially different ranking of evaluated
systems from the one established by the WMT crowd workers, exhibiting a clear
preference for human over machine output. Surprisingly, we also find that
automatic metrics based on pre-trained embeddings can outperform human crowd
workers. We make our corpus publicly available for further research.
| 2104.14478 | 737,909 |
This paper describes my personal appreciation of some of Tini Veltman's great
research achievements and how my own research career has followed the pathways
he opened. Among the topics where he has been the most influential have been
the pursuit and study of the Higgs boson and the calculation of radiative
corrections that enabled the masses of the top quark and the Higgs boson to be
predicted ahead of their discoveries. The search for physics beyond the
Standard Model may require a complementary approach, such as the search for
non-renormalizable interactions via the Standard Model Effective Field Theory.
| 2104.14479 | 737,909 |
In this paper, we rigorously derive a Boltzmann equation for mixtures from
the many body dynamics of two types of hard sphere gases. We prove that the
microscopic dynamics of two gases with different masses and diameters is well
defined, and introduce the concept of a two parameter BBGKY hierarchy to handle
the non-symmetric interaction of these gases. As a corollary of the derivation,
we prove Boltzmann's propagation of chaos assumption for the case of a mixture of gases.
| 2104.14480 | 737,909 |
We use an up-to-date compilation of Tully-Fisher data to search for transitions in the evolution of the Tully-Fisher relation. Using this recently published data compilation, we find hints at the $\approx 3\sigma$ level for a
transition at a critical distance $D_c \simeq 17 Mpc$. The zero point
(intercept) amplitude of the transition is $\Delta \log A_B \simeq 0.2 \pm
0.06$ while the slope remains practically unchanged. If the transition is
interpreted as due to a gravitational strength transition, it would imply a
shift of the effective gravitational constant to lower values for distances
larger than $D_c\simeq 17 Mpc$ by $\frac{\Delta G}{G}=-0.1 \pm 0.03$. Such a
shift is of the anticipated sign and magnitude but at somewhat lower distance
(redshift) than the gravitational transition recently proposed to address the
Hubble and growth tensions ($\frac{\Delta G}{G}\simeq -0.1$ at transition
redshift $z_t\lesssim 0.01$ ($D_c\lesssim 40 Mpc$)).
| 2104.14481 | 737,909 |
In addition to its suite of narrow dense rings, Uranus is surrounded by an
extremely complex system of dusty rings that were most clearly seen by the
Voyager spacecraft after it flew past the planet. A new analysis of the highest
resolution images of these dusty rings reveals that a number of them are less
than 20 km wide. The extreme narrowness of these rings, along with the fact
that most of them do not appear to fall close to known satellite resonances,
should provide new insights into the forces responsible for sculpting the
Uranian ring system.
| 2104.14482 | 737,909 |
Multi-state survival analysis considers several potential events of interest
along a disease pathway. Such analyses are crucial to model complex patient
trajectories and are increasingly being used in epidemiological and health
economic settings. Multi-state models often make the Markov assumption, whereby
an individual's future trajectory is dependent only upon their present state,
not their past. In reality, there may be transitional dependence upon previous events and/or more than one timescale, for example time since entry to
the current or previous state(s). The aim of this study was to develop an
illness-death Weibull model allowing for multiple timescales to impact the
future risk of death. Following this, we evaluated the performance of the
multiple timescale model against a Markov illness-death model in a set of
plausible simulation scenarios when the Markov assumption was violated. Guided
by a study in breast cancer, data were simulated from Weibull baseline
distributions, with hazard functions dependent on single and multiple
timescales. Markov and non-Markov models were fitted that either account for or ignore the underlying data structure. Ignoring the presence of multiple timescales led to
bias in underlying transition rates between states and associated covariate
effects, while transition probabilities and lengths of stay were fairly
robustly estimated. Further work may be needed to evaluate different estimands
or more complex multi-state models. Software implementations in Stata are also
described for simulating and estimating multiple timescale multi-state models.
| 2104.14483 | 737,909 |
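A minimal sketch of simulating an illness-death model with Weibull hazards, in the spirit of the simulation study above. The latent-times (competing risks) construction, the clock-reset choice for the ill-to-dead transition, and all parameter values are my illustrative assumptions; the paper's simulations, estimands, and Stata implementations are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(42)

def weibull_time(shape, scale, size=None):
    """Event time with Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape-1)."""
    return scale * rng.weibull(shape, size)

def simulate_illness_death(n, pars):
    """Illness-death model with Weibull baseline hazards via latent competing times.
    The ill->dead clock restarts at illness onset (time since state entry), which is
    exactly the kind of extra timescale a pure Markov model ignores."""
    records = []
    for _ in range(n):
        t_ill = weibull_time(*pars["healthy->ill"])
        t_dead = weibull_time(*pars["healthy->dead"])
        if t_dead <= t_ill:
            records.append(("dead_without_illness", t_dead))
        else:
            t_dead_after = t_ill + weibull_time(*pars["ill->dead"])
            records.append(("dead_after_illness", t_dead_after))
    return records

pars = {"healthy->ill": (1.2, 5.0), "healthy->dead": (1.0, 10.0), "ill->dead": (1.5, 3.0)}
print(simulate_illness_death(3, pars))
```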
We formalize the notion of vector semi-inner products and introduce a class
of vector seminorms which are built from these maps. The classical Pythagorean
theorem and parallelogram law are then generalized to vector seminorms that
have a geometric mean closed vector lattice for codomain. In the special case
that this codomain is a square root closed, semiprime $f$-algebra, we provide a
sharpening of the triangle inequality as well as a condition for equality.
| 2104.14484 | 737,909 |
The $\{0,\frac{1}{2}\}$-closure of a rational polyhedron $\{ x \colon Ax \le
b \}$ is obtained by adding all Gomory-Chv\'atal cuts that can be derived from
the linear system $Ax \le b$ using multipliers in $\{0,\frac{1}{2}\}$. We show
that deciding whether the $\{0,\frac{1}{2}\}$-closure coincides with the
integer hull is strongly NP-hard. A direct consequence of our proof is that,
testing whether the linear description of the $\{0,\frac{1}{2}\}$-closure
derived from $Ax \le b$ is totally dual integral, is strongly NP-hard.
| 2104.14486 | 737,909 |
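To make the $\{0,\frac{1}{2}\}$-closure concrete, the sketch below enumerates, by brute force, all $\{0,\frac{1}{2}\}$-Gomory-Chv\'atal cuts of a small integral system. This exponential enumeration is purely didactic and is not an algorithm from the paper; the example system is the standard odd-cycle relaxation.

```python
import itertools
import math
import numpy as np

def half_cuts(A, b):
    """All {0,1/2}-Gomory-Chvatal cuts of Ax <= b (A, b integral): for multipliers
    lambda in {0,1/2}^m with lambda^T A integral, the cut is
    (lambda^T A) x <= floor(lambda^T b). Keeps only multipliers where rounding
    actually strengthens the inequality."""
    m = A.shape[0]
    cuts = []
    for pattern in itertools.product((0.0, 0.5), repeat=m):
        lam = np.array(pattern)
        row, rhs = lam @ A, lam @ b
        if np.allclose(row, np.round(row)) and not math.isclose(rhs, math.floor(rhs)):
            cuts.append((np.round(row).astype(int), math.floor(rhs)))
    return cuts

# Odd cycle of length 3 (edge inequalities x_i + x_j <= 1): the only nontrivial
# {0,1/2}-cut is x_1 + x_2 + x_3 <= 1.
A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
b = np.array([1, 1, 1])
print(half_cuts(A, b))
```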
We present high-pressure electrical transport measurements on the newly
discovered V-based superconductors $A$V$_3$Sb$_5$ ($A$ = Rb and K), which have
an ideal Kagome lattice of vanadium. Two superconducting domes under pressure
are observed in both compounds, as previously observed in their sister compound
CsV$_3$Sb$_5$. For RbV$_3$Sb$_5$, the $T_c$ increases from 0.93 K at ambient
pressure to the maximum of 4.15 K at 0.38 GPa in the first dome. The second
superconducting dome has the highest $T_c$ of 1.57 K at 28.8 GPa. KV$_3$Sb$_5$
displays a similar double-dome phase diagram; however, its two maximum $T_c$s are lower, and the $T_c$ drops faster in the second dome than in RbV$_3$Sb$_5$. An
integrated temperature-pressure phase diagram of $A$V$_3$Sb$_5$ ($A$ = Cs, Rb
and K) is constructed, showing that the ionic radius of the intercalated
alkali-metal atoms has a significant effect. Our work demonstrates that
double-dome superconductivity under pressure is a common feature of these
V-based Kagome metals.
| 2104.14487 | 737,909 |
We consider affine representable algebras, that is, finitely generated
algebras over a field that can be embedded into some matrix algebra over a
commutative algebra. We show that the commutative algebra can in fact be chosen to be a polynomial algebra. We also give a hopefully palatable proof of a theorem of
V.T. Markov stating that the Gelfand-Kirillov dimension of any affine
representable algebra is an integer.
| 2104.14488 | 737,909 |
We consider Dirac-like operators with piecewise constant mass terms on spin
manifolds, and we study the behaviour of their spectra when the mass parameters
become large. In several asymptotic regimes, effective operators appear: the
extrinsic Dirac operator and a generalized MIT Bag Dirac operator. This extends
some results previously known for the Euclidean spaces to the case of general
spin geometry.
| 2104.14489 | 737,909 |
We present theoretical transmission spectra of a strongly driven, damped,
flux qubit coupled to a dissipative resonator in the ultrastrong coupling
regime. Such a qubit-oscillator system, described within a dissipative Rabi
model, constitutes the building block of superconducting circuit QED platforms.
The addition of a strong drive allows one to characterize the system properties
and study novel phenomena, leading to a better understanding and control of the
qubit-oscillator system. In this work, the calculated transmission of a weak
probe field quantifies the response of the qubit, in frequency domain, under
the influence of the quantized resonator and of the strong microwave drive. We
find distinctive features of the entangled driven qubit-resonator spectrum,
namely resonant features and avoided crossings, modified by the presence of the
dissipative environment. The magnitude, positions, and broadening of these
features are determined by the interplay among qubit-oscillator detuning, the
strength of their coupling, the driving amplitude, and the interaction with the
heat bath. This work establishes the theoretical basis for future experiments
in the driven ultrastrong coupling regime and their impact on the development of novel quantum technologies with superconducting circuits.
| 2104.14490 | 737,909 |
We compute explicitly the Khovanov polynomials (using the computer program
from katlas.org) for the two simplest families of the satellite knots, which
are the twisted Whitehead doubles and the two-strand cables. We find that a
quantum group decomposition for the HOMFLY polynomial of a satellite knot can
be extended to the Khovanov polynomial, whose quantum group properties are not
manifest. Namely, the Khovanov polynomial of a twisted Whitehead double or
two-strand cable (the two simplest satellite families) can be presented as a
naively deformed linear combination of the pattern and companion invariants.
For a given companion, the satellite polynomial "smoothly" depends on the
pattern but for the "jump" at one critical point defined by the s-invariant of
the companion knot. A similar phenomenon is known for the knot Floer homology
and tau-invariant for the same kind of satellites.
| 2104.14491 | 737,909 |
The COVID-19 pandemic has spurred a large amount of observational studies
reporting linkages between the risk of developing severe COVID-19 or dying from
it, and sex and gender. By reviewing a large body of related literature and
conducting a fine-grained analysis based on sex-disaggregated data from 61
countries spanning 5 continents, we discover several confounding factors that
could possibly explain the supposed male vulnerability to COVID-19. We thus
highlight the challenge of making causal claims based on available data, given
the lack of statistical significance and potential existence of biases.
Informed by our findings on potential variables acting as confounders, we
contribute a broad overview of the issues that bias, explainability and fairness entail in data-driven analyses. Thus, we outline a set of discriminatory policy
consequences that could, based on such results, lead to unintended
discrimination. To raise awareness on the dimensionality of such foreseen
impacts, we have compiled an encyclopedia-like reference guide, the Bias
Catalog for Pandemics (BCP), to provide definitions and emphasize realistic
examples of bias in general, and within the COVID-19 pandemic context. These
are categorized within a division of bias families and a 2-level priority
scale, together with preventive steps. In addition, we provide the Bias Priority Recommendations on how to best use and apply this catalog, along with guidelines to address real-world research questions. The objective is
to anticipate and avoid disparate impact and discrimination, by considering
causality, explainability, bias and techniques to mitigate the latter. With
these, we hope to 1) contribute to designing and conducting fair and equitable
data-driven studies and research; and 2) interpret and draw meaningful and
actionable conclusions from these.
| 2104.14492 | 737,909 |
We propose a new way to compute the genus zero Gopakumar-Vafa invariants for
two families of non-toric non-compact Calabi-Yau threefolds that admit simple
flops: Reid's Pagodas, and Laufer's examples. We exploit the duality between
M-theory on these threefolds, and IIA string theory with D6-branes and
O6-planes. From this perspective, the GV invariants are detected as
five-dimensional open string zero modes. We propose a definition for genus zero
GV invariants for threefolds that do not admit small crepant resolutions. We
find that in most cases, non-geometric T-brane data is required in order to
fully specify the invariants.
| 2104.14493 | 737,909 |
We present a detailed analysis of the temperature dependence of the thermal
conductivity of a ferroelectric PbTiO3 thin film deposited in a
composition-spread geometry enabling a continuous range of compositions from
~25% titanium-deficient to ~20% titanium-rich to be studied. By fitting the
experimental results to the Debye model we deconvolve and quantify the two main
phonon scattering sources in the system: ferroelectric domain walls (DWs) and
point defects. Our results prove that ferroelectric DWs are the main agent
limiting the thermal conductivity in this system, not only in the
stoichiometric region of the thin film ([Pb]/[Ti]~1), but also when the
concentration of cation point defects is significant (up to ~15%). Hence, DWs
in ferroelectric materials are a source of phonon scattering at least as
effective as point defects. Our results demonstrate the viability and
effectiveness of using reconfigurable DWs to control the thermal conductivity
in solid-state devices.
| 2104.14494 | 737,909 |
We study Krasnoselskii-Mann style iterative algorithms for approximating
fixpoints of asymptotically weakly contractive mappings, with a focus on
providing generalised convergence proofs along with explicit rates of
convergence. More specifically, we define a new notion of being asymptotically
$\psi$-weakly contractive with modulus, and present a series of abstract
convergence theorems which both generalise and unify known results from the
literature. Rates of convergence are formulated in terms of our modulus of
contractivity, in conjunction with other moduli and functions which form
quantitative analogues of additional assumptions that are required in each
case. Our approach makes use of ideas from proof theory, in particular our
emphasis on abstraction and on formulating our main results in a quantitative
manner. As such, the paper can be seen as a contribution to the proof mining
program.
| 2104.14495 | 737,909 |
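The iteration studied above has a very simple computational form, sketched below. The choice of mapping in the usage example and the constant step sequence are my illustrative assumptions; the paper's setting is asymptotically weakly contractive mappings in normed spaces with explicit rates.

```python
def krasnoselskii_mann(T, x0, steps=1000, alpha=lambda n: 0.5):
    """Krasnoselskii-Mann iteration x_{n+1} = (1 - a_n) x_n + a_n T(x_n)."""
    x = x0
    for n in range(steps):
        a = alpha(n)
        x = (1 - a) * x + a * T(x)
    return x

# Example: T is a contractive map on the positive reals with fixpoint sqrt(2).
print(krasnoselskii_mann(lambda x: 0.5 * (x + 2.0 / x), x0=1.0))
```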
We study the possibility of low scale leptogenesis along with dark matter
(DM) in the presence of primordial black holes (PBH). For a common setup to
study both leptogenesis and DM we consider the minimal scotogenic model which
also explains light neutrino mass at radiative level. While PBH in the mass
range of $0.1-10^5$ g can, in principle, affect leptogenesis, the required
initial PBH fraction usually leads to overproduction of DM whose thermal
freeze-out occurs before PBH evaporation. PBH can lead to non-thermal source of
leptogenesis as well as dilution of thermally generated lepton asymmetry via
entropy injection, with the latter being dominant. The parameter space of
scotogenic model which leads to overproduction of baryon or lepton asymmetry in
standard cosmology can be made consistent in the presence of PBH with
appropriate initial mass and energy fraction. On the other hand, for such PBH
parameters, the DM is constrained to be in light mass regime where its
freeze-out occurs after PBH evaporation.
| 2104.14496 | 737,909 |
We present a non perturbative and formally exact approach for charge
transport in interacting nanojunctions based on a real time path integral
formulation of the reduced system dynamics. An expansion of the influence
functional in terms of the number of tunneling transitions, and integration of
the Grassmann variables between the tunneling times, allows us to obtain a
still exact generalized master equation (GME) for the populations of the
reduced density matrix (RDM) in the occupation number representation, as well
as a formally exact expression for the current. By borrowing the nomenclature
of the famous spin-boson problem, we characterize the two-state dynamics of
such degrees of freedom on the forward and backward branches in terms of single
four-state paths with alternating blips and sojourns. This allows a
diagrammatic representation of the GME kernel and its parametrization in terms
of sequences of blips and sojourns. We apply our formalism to the exactly
solvable resonant level model (RLM) and to the single impurity Anderson
model (SIAM), the latter being a prototype system for studying strong
correlations. For both systems, we demonstrate a hierarchical diagrammatic
structure of the exact GME kernel. While the hierarchy closes at the
second-tier level for the RLM, this is not the case for the interacting SIAM.
Upon inspection of the GME, known results from various perturbative and
nonperturbative approximation schemes to quantum transport in the SIAM are
recovered. Finally, a noncrossing approximation for the hierarchical kernel is
developed, which enables us to systematically decrease temperature at each next
level of the approximation. Analytical results for a simplified fourth-tier
scheme are presented.
| 2104.14497 | 737,909 |
Understanding how electrolyte solutions behave out of thermal equilibrium is
a long-standing endeavor in many areas of chemistry and biology. Although
mean-field theories are widely used to model the dynamics of electrolytes, it
is also important to characterize the effects of fluctuations in these systems.
In a previous work, we showed that the dynamics of the ions in a strong
electrolyte that is driven by an external electric field can generate
long-ranged correlations manifestly different from the equilibrium screened
correlations; in the nonequilibrium steady state, these correlations give rise
to a novel long-range fluctuation-induced force (FIF). Here, we extend these
results by considering the dynamics of the strong electrolyte after it is
quenched from thermal equilibrium upon the application of a constant electric
field. We show that the asymptotic long-distance limit of both charge and
density correlations is generally diffusive in time. These correlations give
rise to long-ranged FIFs acting on the neutral confining plates with long-time
regimes that are governed by power-law temporal decays toward the steady-state
value of the force amplitude. These findings show that nonequilibrium
fluctuations have nontrivial implications on the dynamics of objects immersed
in a driven electrolyte, and they could be useful for exploring new ways of
controlling long-distance forces in charged solutions.
| 2104.14498 | 737,909 |
We address an inherent difficulty in welfare-theoretic fair machine learning,
proposing an equivalently-axiomatically justified alternative, and studying the
resulting computational and statistical learning questions. Welfare metrics
quantify overall wellbeing across a population of one or more groups, and
welfare-based objectives and constraints have recently been proposed to
incentivize fair machine learning methods to produce satisfactory solutions
that consider the diverse needs of multiple groups. Unfortunately, many
machine-learning problems are more naturally cast as loss minimization, rather
than utility maximization tasks, which complicates direct application of
welfare-centric methods to fair-ML tasks. In this work, we define a
complementary measure, termed malfare, measuring overall societal harm (rather
than wellbeing), with axiomatic justification via the standard axioms of
cardinal welfare. We then cast fair machine learning as a direct malfare
minimization problem, where a group's malfare is its risk (expected loss).
Surprisingly, the axioms of cardinal welfare (malfare) dictate that this is not
equivalent to simply defining utility as negative loss. Building upon these
concepts, we define fair-PAC learning, where a fair PAC-learner is an algorithm
that learns an $\varepsilon$-$\delta$ malfare-optimal model with bounded sample
complexity, for any data distribution, and for any malfare concept. We show
broad conditions under which, with appropriate modifications, many standard
PAC-learners may be converted to fair-PAC learners. This places fair-PAC
learning on firm theoretical ground, as it yields statistical, and in some
cases computational, efficiency guarantees for many well-studied
machine-learning models, and is also practically relevant, as it democratizes
fair ML by providing concrete training algorithms and rigorous generalization
guarantees for these models.
| 2104.14504 | 737,909 |
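A small numerical reading of the malfare idea above: aggregate per-group risks with a weighted power mean, which interpolates between the utilitarian average risk and the worst-group risk. This is a hedged sketch of one natural family consistent with the cardinal-welfare axioms; the paper develops the exact axiomatic characterization, and the group risks and weights below are made-up numbers.

```python
import numpy as np

def power_mean_malfare(risks, weights, p):
    """Weighted p-power-mean malfare of per-group risks (expected losses).
    p = 1 is the weighted average risk; p -> infinity approaches the worst-group risk."""
    risks = np.asarray(risks, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    if np.isinf(p):
        return risks.max()
    return (weights @ risks**p) ** (1.0 / p)

group_risks = [0.10, 0.30]      # hypothetical per-group risks of a candidate model
w = [0.8, 0.2]                  # hypothetical group weights
for p in (1, 2, np.inf):
    print(p, power_mean_malfare(group_risks, w, p))
```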
Difference schemes are considered for dynamical systems $ \dot x = f (x) $
with a quadratic right-hand side, which have $t$-symmetry and are reversible.
Reversibility is interpreted in the sense that the Cremona transformation is
performed at each step in the calculations using a difference scheme. The
inheritance of periodicity and the Painlev\'e property by the approximate
solution is investigated. In the computer algebra system Sage, such values are
found for the step $ \Delta t $, for which the approximate solution is a
sequence of points with the period $ n \in \mathbb N $. Examples are given and
hypotheses about the structure of the sets of initial data generating sequences
with the period $ n $ are formulated.
| 2104.14507 | 737,909 |
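The abstract above does not name a specific scheme; one well-known representative of this class for quadratic right-hand sides is Kahan's linearly implicit discretization, whose step map is birational (a Cremona transformation) and which is symmetric under swapping the endpoints together with the sign of the step. The sketch below uses Kahan's closed form and an illustrative Lotka-Volterra system purely as an example; it is offered under the assumption that this is the kind of scheme meant, not as the paper's scheme.

```python
import numpy as np

def kahan_step(f, jac, x, h):
    """One step of Kahan's method for a quadratic vector field xdot = f(x):
    x' = x + h * (I - (h/2) Df(x))^{-1} f(x)."""
    n = len(x)
    return x + h * np.linalg.solve(np.eye(n) - 0.5 * h * jac(x), f(x))

# Illustrative quadratic system: planar Lotka-Volterra flow.
f = lambda x: np.array([x[0] * (1 - x[1]), x[1] * (x[0] - 1)])
jac = lambda x: np.array([[1 - x[1], -x[0]], [x[1], x[0] - 1]])

x = np.array([1.5, 0.5])
for _ in range(5):
    x = kahan_step(f, jac, x, h=0.1)
print(x)
```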
For every tuple $d_1,\dots, d_l\geq 2,$ let
$\mathbb{R}^{d_1}\otimes\cdots\otimes\mathbb{R}^{d_l}$ denote the tensor
product of $\mathbb{R}^{d_i},$ $i=1,\dots,l.$ Let us denote by $\mathcal{B}(d)$
the hyperspace of centrally symmetric convex bodies in $\mathbb{R}^d,$
$d=d_1\cdots d_l,$ endowed with the Hausdorff distance, and by
$\mathcal{B}_\otimes(d_1,\dots,d_l)$ the subset of $\mathcal{B}(d)$ consisting
of the convex bodies that are closed unit balls of reasonable crossnorms on
$\mathbb{R}^{d_1}\otimes\cdots\otimes\mathbb{R}^{d_l}.$ It is known that
$\mathcal{B}_\otimes(d_1,\dots,d_l)$ is a closed, contractible and locally
compact subset of $\mathcal{B}(d).$ The hyperspace
$\mathcal{B}_\otimes(d_1,\dots,d_l)$ is called the space of tensorial bodies.
In this work we determine the homeomorphism type of
$\mathcal{B}_\otimes(d_1,\dots,d_l).$ We show that even if
$\mathcal{B}_\otimes(d_1,\dots,d_l)$ is not convex with respect to the
Minkowski sum, it is an Absolute Retract homeomorphic to
$\mathcal{Q}\times\mathbb{R}^p,$ where $\mathcal{Q}$ is the Hilbert cube and
$p=\frac{d_1(d_1+1)+\cdots+d_l(d_l+1)}{2}.$ Among other results, the relation
between the Banach-Mazur compactum and the Banach-Mazur type compactum
associated to $\mathcal{B}_\otimes(d_1,\dots,d_l)$ is examined.
| 2104.14509 | 737,909 |
In an edge modification problem, we are asked to modify at most $k$ edges to
a given graph to make the graph satisfy a certain property. Depending on the
operations allowed, we have the completion problems and the edge deletion
problems. A great deal of effort has been devoted to understanding the
kernelization complexity of these problems. We revisit several well-studied
edge modification problems, and develop improved kernels for them:
\begin{itemize}
\item a $2 k$-vertex kernel for the cluster edge deletion problem,
\item a $3 k^2$-vertex kernel for the trivially perfect completion problem,
\item a $5 k^{1.5}$-vertex kernel for the split completion problem and the
split edge deletion problem, and
\item a $5 k^{1.5}$-vertex kernel for the pseudo-split completion problem and
the pseudo-split edge deletion problem.
\end{itemize}
Moreover, our kernels for split completion and pseudo-split completion have
only $O(k^{2.5})$ edges. Our results also include a $2 k$-vertex kernel for the
strong triadic closure problem, which is related to cluster edge deletion.
| 2104.14510 | 737,909 |
In event-based sensing, many sensors independently and asynchronously emit
events when there is a change in their input. Event-based sensing can present
significant improvements in power efficiency when compared to traditional
sampling, because (1) the output is a stream of events where the important
information lies in the timing of the events, and (2) the sensor can easily be
controlled to output information only when interesting activity occurs at the
input.
Moreover, event-based sampling can often provide better resolution than
standard uniform sampling. Not only does this occur because individual
event-based sensors have higher temporal resolution, it also occurs because the
asynchrony of events allows for less redundant and more informative encoding.
We would like to explain how such curious results come about.
To do so, we use ideal time encoding machines as a proxy for event-based
sensors. We explore time encoding of signals with low rank structure, and apply
the resulting theory to video. We then see how the asynchronous firing times of
the time encoding machines allow for better reconstruction than in the standard
sampling case, if we have a high spatial density of time encoding machines that
fire less frequently.
| 2104.14511 | 737,909 |
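The ideal time encoding machines used above as a proxy for event-based sensors can be illustrated with a standard integrate-and-fire sketch: integrate the biased input and emit an event at each threshold crossing. Parameter names and values here are illustrative, and the test signal is a simple sinusoid rather than the low-rank video signals studied in the paper.

```python
import numpy as np

def integrate_and_fire_tem(signal, dt, bias=1.0, kappa=1.0, threshold=0.05):
    """Ideal integrate-and-fire time encoding machine: integrate (signal + bias)/kappa
    and record an event time whenever the integral reaches the threshold, then reset.
    The bias must exceed max|signal| so the integral increases monotonically."""
    events, acc = [], 0.0
    for k, u in enumerate(signal):
        acc += dt * (u + bias) / kappa
        if acc >= threshold:
            events.append(k * dt)   # event (spike) time
            acc -= threshold        # reset by subtraction
    return events

t = np.arange(0, 1, 1e-4)
x = 0.3 * np.sin(2 * np.pi * 5 * t)
spikes = integrate_and_fire_tem(x, dt=1e-4)
print(len(spikes), "events; mean inter-event time:", np.diff(spikes).mean())
```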
The AGM postulates by Alchourr\'{o}n, G\"{a}rdenfors, and Makinson continue
to represent a cornerstone in research related to belief change. We generalize
the approach of Katsuno and Mendelzon (KM) for characterizing AGM base revision
from propositional logic to the setting of (multiple) base revision in
arbitrary monotonic logics. Our core result is a representation theorem using
the assignment of total - yet not transitive - "preference" relations to belief
bases. We also provide a characterization of all logics for which our result
can be strengthened to preorder assignments (as in KM's original work).
| 2104.14512 | 737,909 |
Starting from a general many-body fermionic Hamiltonian, we derive the
equations of motion (EOM) for nucleonic propagators in a superfluid system. The
resulting EOM is of the Dyson type formulated in the basis of Bogoliubov's
quasiparticles. As the leading contributions to the dynamical kernel of this
EOM in strongly-coupled regimes contain phonon degrees of freedom in various
channels, an efficient method of calculating phonon's characteristics is
required to successfully model these kernels. The traditional quasiparticle
random phase approximation (QRPA) solvers are typically used for this purpose in nuclear structure calculations; however, they become prohibitively expensive in non-spherical geometries. In this work, by linking the notion of the
quasiparticle-phonon vertex to the variation of the Bogoliubov's Hamiltonian,
we show that the recently developed finite-amplitude method (FAM) can be
efficiently employed to compute the vertices within the FAM-QRPA. To illustrate
the validity of the method, calculations based on the relativistic
density-dependent point-coupling Lagrangian are performed for the
single-nucleon states in heavy and medium-mass nuclei with axial deformations.
The cases of $^{38}$Si and $^{250}$Cf are presented and discussed.
| 2104.14513 | 737,909 |
We present a spinning black hole solution in $d$ dimensions with a maximal
number of rotation parameters in the context of the Eistein-Maxwell-Dilaton
theory. An interesting feature of such a solution is that it accommodates
Lifshitz black holes when the rotation parameters are set to zero. We verify
the rotating nature of the black hole solution by performing the quasi-local
analysis of conserved charges and defining the corresponding angular momenta.
In addition, we perform the thermodynamical analysis of the black hole
configuration, show that the first law of thermodynamics is completely
consistent, and obtain a Smarr-like formula. We further study the thermodynamic
stability of the constructed solution from a local viewpoint, by computing the
associated specific heats, and a global perspective, by using the so-called new
thermodynamic geometry. We finally make some comments related to a pathology
found in the causal structure of the obtained rotating black hole spacetime.
| 2104.14514 | 737,909 |
The DES-CMASS sample (DMASS) is designed to optimally combine the weak
lensing measurements from the Dark Energy Survey (DES) and redshift-space
distortions (RSD) probed by the CMASS galaxy sample from the Baryonic
Oscillation Spectroscopic Survey (BOSS). In this paper, we demonstrate the
feasibility of adopting DMASS as the equivalent of BOSS CMASS for a joint
analysis of DES and BOSS in the framework of modified gravity. We utilize the
angular clustering of the DMASS galaxies, cosmic shear of the DES
METACALIBRATION sources, and cross-correlation of the two as data vectors. By
jointly fitting the combination of the data with the RSD measurements from the
BOSS CMASS sample and Planck data, we obtain the constraints on modified
gravity parameters $\mu_0 = -0.37^{+0.47}_{-0.45}$ and $\Sigma_0 =
0.078^{+0.078}_{-0.082}$. We do not detect any significant deviation from
General Relativity. Our constraints of modified gravity measured with DMASS are
tighter than those with the DES Year 1 redMaGiC galaxy sample with the same
external data sets by $29\%$ for $\mu_0$ and $21\%$ for $\Sigma_0$, and
comparable to the published results of the DES Year 1 modified gravity analysis
despite this work using fewer external data sets. This improvement is mainly
because the galaxy bias parameter is shared and more tightly constrained by
both CMASS and DMASS, effectively breaking the degeneracy between the galaxy
bias and other cosmological parameters. Such an approach to optimally combine
photometric and spectroscopic surveys using a photometric sample equivalent to
a spectroscopic sample can be applied to combining future surveys having a
limited overlap such as DESI and LSST.
| 2104.14515 | 737,909 |
We demonstrate how by using a reinforcement learning algorithm, the deep
cross-entropy method, one can find explicit constructions and counterexamples
to several open conjectures in extremal combinatorics and graph theory. Amongst
the conjectures we refute are a question of Brualdi and Cao about maximizing
permanents of pattern avoiding matrices, and several problems related to the
adjacency and distance eigenvalues of graphs.
| 2104.14516 | 737,909 |
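The core loop behind the approach above is the cross-entropy method; the sketch below shows a plain (non-deep) version over bit strings, such as the upper triangle of an adjacency matrix. The paper's deep variant replaces the independent-Bernoulli sampling distribution with a neural network that emits the construction sequentially, and the toy scoring function here stands in for a real graph-theoretic objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy_search(score, n_bits, iters=200, batch=200, elite_frac=0.1, lr=0.3):
    """Cross-entropy method over bit strings: sample candidates, keep the elite
    fraction by score, and move the sampling probabilities toward the elites."""
    p = np.full(n_bits, 0.5)
    best, best_score = None, -np.inf
    n_elite = max(1, int(batch * elite_frac))
    for _ in range(iters):
        samples = (rng.random((batch, n_bits)) < p).astype(int)
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]
        p = (1 - lr) * p + lr * elite.mean(axis=0)      # refit the distribution
        if scores.max() > best_score:
            best_score, best = scores.max(), samples[scores.argmax()]
    return best, best_score

# Toy objective standing in for a graph score (e.g., an eigenvalue or permanent bound).
print(cross_entropy_search(lambda s: s.sum(), n_bits=45)[1])
```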
New computing technologies inspired by the brain promise fundamentally
different ways to process information with extreme energy efficiency and the
ability to handle the avalanche of unstructured and noisy data that we are
generating at an ever-increasing rate. To realise this promise requires a brave
and coordinated plan to bring together disparate research communities and to
provide them with the funding, focus and support needed. We have done this in
the past with digital technologies; we are in the process of doing it with
quantum technologies; can we now do it for brain-inspired computing?
| 2104.14517 | 737,909 |
Generalizing our recent joint paper with Vasily Pestun, we construct a family
of $SO(2r),Sp(2r),SO(2r+1)$ rational Lax matrices polynomial in the spectral
parameter, parametrized by the divisors on the projective line with
coefficients being dominant integral coweights of associated Lie algebras. To
this end, we provide the RTT realization of the antidominantly shifted extended
Drinfeld Yangians of $\mathfrak{so}_{2r}, \mathfrak{sp}_{2r},
\mathfrak{so}_{2r+1}$, and their coproduct homomorphisms.
| 2104.14518 | 737,909 |
We introduce an automata model for describing interesting classes of
differential privacy mechanisms/algorithms that include known mechanisms from
the literature. These automata can model algorithms whose inputs can be an
unbounded sequence of real-valued query answers. We consider the problem of
checking whether there exists a constant $d$ such that the algorithm described by such an automaton is $d\epsilon$-differentially private for all positive
values of the privacy budget parameter $\epsilon$. We show that this problem
can be decided in time linear in the automaton's size by identifying a
necessary and sufficient condition on the underlying graph of the automaton.
This paper's results are the first decidability results known for algorithms
with an unbounded number of query answers taking values from the set of reals.
| 2104.14519 | 737,909 |
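The mechanisms referred to above, with unbounded streams of real-valued query answers and Laplace-noised comparisons, include textbook examples such as AboveThreshold (the sparse vector technique), sketched below. This is the standard textbook mechanism, shown only to make the shape of such algorithms concrete; it is not claimed to be the paper's automaton model.

```python
import numpy as np

def above_threshold(query_answers, threshold, epsilon, rng=np.random.default_rng()):
    """AboveThreshold: scan an (arbitrarily long) stream of real-valued query answers
    and report, differentially privately, the index of the first answer that exceeds
    the threshold. Noise scales follow the standard Dwork-Roth presentation."""
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, a in enumerate(query_answers):
        if a + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i       # first (noisily) above-threshold query
    return None            # no query crossed the noisy threshold

print(above_threshold([0.1, 0.4, 0.2, 0.9, 0.3], threshold=0.7, epsilon=1.0))
```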