abstract (stringlengths 6–6.09k) | id (stringlengths 9–16) | time (int64 725k–738k)
---|---|---|
Deflection of light due to massive objects was predicted by Einstein in his
General Theory of Relativity. This deflection of light has been calculated by
many researchers in the past for spherically symmetric objects. In reality,
however, most of these gravitating objects are not spherical; instead, they are
ellipsoidal (oblate) in shape. The objective of the present work is to study
theoretically the effect of this ellipticity on the trajectory of a light ray.
Here, we obtain a converging series expression for the deflection of a light
ray due to an ellipsoidal gravitating object, characterised by an ellipticity
parameter. As a limiting case, setting the ellipticity parameter to zero
recovers the same expression for the deflection as that due to a Schwarzschild
object. It is also found that the additional contribution to the deflection
angle due to this ellipticity, though small, could be typically higher than the
similar contribution caused by the rotation of a celestial object. Therefore,
for a precise estimate of the deflection due to a celestial object, the
calculations presented here would be useful.
| 2104.14168 | 737,909 |
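For reference, the spherically symmetric limit that the series reduces to at zero ellipticity is the classic weak-field deflection $\alpha \simeq 4GM/(c^2 b)$ for impact parameter $b$. A minimal sketch evaluating this baseline for a Sun-grazing ray (the ellipticity correction itself is paper-specific and not reproduced here):

```python
import math

# Weak-field light deflection by a spherical mass: alpha = 4GM / (c^2 b),
# evaluated for a ray grazing the Sun; constants in SI units.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_sun = 1.989e30     # solar mass [kg]
R_sun = 6.957e8      # solar radius [m], used as impact parameter b

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"deflection = {alpha_arcsec:.2f} arcsec")  # ~1.75 arcsec
```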
Recently, learning-based approaches for 3D model reconstruction have
attracted attention owing to their modern applications, such as Extended
Reality (XR), robotics, and self-driving cars. Several approaches have shown
good performance in reconstructing 3D shapes by learning solely from images,
i.e., without using 3D models in training. Challenges, however, remain in
texture generation due to the gap between the 2D and 3D modalities. In previous
work, the grid sampling mechanism from Spatial Transformer Networks was adopted
to sample color from an input image to formulate texture. Despite its success,
the existing framework has limitations on the search scope in sampling,
resulting in flaws in the generated texture and consequently in the rendered 3D
models. In this paper, to solve that issue, we present a novel sampling
algorithm that optimizes the gradient of the predicted coordinates based on the
variance of the sampling image. Taking into account the semantics of the image,
we adopt the Frechet Inception Distance (FID) to form a loss function in
learning, which helps bridge the gap between rendered images and input images.
As a result, we greatly improve the generated texture. Furthermore, to optimize
3D shape reconstruction and to accelerate convergence at training, we adopt
part segmentation and template learning in our model. Without any 3D
supervision in learning, and with only a collection of single-view 2D images,
the shape and texture learned by our model outperform those from previous work.
We demonstrate the performance with experimental results on a publicly
available dataset.
| 2104.14169 | 737,909 |
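The grid-sampling mechanism referenced above is available in PyTorch as `torch.nn.functional.grid_sample`; a minimal sketch of sampling texture colors from an input image at predicted UV coordinates (the random coordinates here are placeholders for the output of a coordinate-prediction network):

```python
import torch
import torch.nn.functional as F

# Input image: batch of 1 RGB image, 256x256.
img = torch.rand(1, 3, 256, 256)

# Predicted sampling coordinates for a 64x64 texture map, one (x, y)
# pair per texel, normalized to [-1, 1] as grid_sample expects.
uv = torch.rand(1, 64, 64, 2) * 2 - 1

# Bilinearly sample colors from the image at the predicted coordinates.
texture = F.grid_sample(img, uv, mode="bilinear", align_corners=True)
print(texture.shape)  # torch.Size([1, 3, 64, 64])
```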
Proactive tile-based virtual reality (VR) video streaming employs the current
tracking data of a user to predict future requested tiles, then renders and
delivers the predicted tiles before playback. Very recently, privacy protection
in VR video streaming has started to raise concerns. However, existing privacy
protection may fail even with federated learning at the head-mounted display
(HMD). This is because when the HMD requests the predicted tiles and the
prediction is accurate, the real requested tiles and the corresponding user
behavior-related data can still be recovered at the multi-access edge computing
server. In this paper, we consider how to protect privacy even with accurate
predictors and investigate the impact of the privacy requirement on the quality
of experience (QoE). To this end, we first add extra camouflaged tile requests
in addition to the real tile requests and model the privacy requirement as the
spatial degree of privacy (sDoP). By ensuring sDoP, the real tile requests can
be hidden and privacy can be protected. Then, we jointly optimize the durations
for prediction, computing, and transmitting, aimed at maximizing the
privacy-aware QoE given an arbitrary predictor and configured resources. From
the obtained optimal closed-form solution, we find that increasing sDoP
improves the communication and computing capability and hence the QoE, but
degrades the prediction performance and hence the QoE. The overall impact
depends on which factor dominates the QoE. Simulation with two predictors on a
real dataset verifies the analysis and shows that the overall impact of sDoP is
to improve the QoE.
| 2104.14170 | 737,909 |
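A minimal sketch of the camouflage idea: pad the real tile requests with decoy tiles so an observer cannot tell which tiles were actually predicted. The decoy-selection rule and the mapping from sDoP to a padding factor below are illustrative assumptions, not the paper's exact construction:

```python
import random

def camouflaged_request(real_tiles, all_tiles, sdop_factor, rng=random.Random(0)):
    """Return a request set hiding `real_tiles` among decoys.

    sdop_factor is an illustrative stand-in for sDoP: the request is
    padded until it is sdop_factor times the size of the real request.
    """
    target_size = int(sdop_factor * len(real_tiles))
    candidates = [t for t in all_tiles if t not in real_tiles]
    decoys = rng.sample(candidates, max(0, target_size - len(real_tiles)))
    return set(real_tiles) | set(decoys)

tiles = range(200)              # tile indices of one frame
real = {10, 11, 12, 30, 31}     # predicted (real) tiles
print(sorted(camouflaged_request(real, tiles, sdop_factor=3)))
```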
We study systems of String Equations where block variables need to be
assigned strings so that their concatenation gives a specified target string.
We investigate this problem under a multivariate complexity framework,
searching for tractable special cases such as systems of equations with few
block variables or few equations. Our main results include a polynomial-time
algorithm for size-2 equations, and hardness for size-3 equations, as well as
hardness for systems of two equations, even with tight constraints on the block
variables. We also study a variant where few deletions are allowed in the
target string, and give XP algorithms in this setting when the number of block
variables is constant.
| 2104.14171 | 737,909 |
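To make the problem concrete, a single size-2 equation asks for strings $x, y$ with $xy = t$ for a target $t$; a brute-force solver just enumerates the split point, which is the starting point of the polynomial-time behavior for size-2 equations. A toy sketch (single equation only; the paper's systems share block variables across equations, which is what makes them hard):

```python
def solve_size2(target):
    """Enumerate all assignments (x, y) with x + y == target.

    For a single size-2 equation every split point gives a solution;
    in a *system*, shared block variables must satisfy all equations
    simultaneously, which is the setting the paper analyzes.
    """
    return [(target[:i], target[i:]) for i in range(len(target) + 1)]

for x, y in solve_size2("abab"):
    print(repr(x), repr(y))
```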
We study the average number $\mathcal{A}(G)$ of colors in the non-equivalent
colorings of a graph $G$. We show some general properties of this graph
invariant and determine its value for some classes of graphs. We then
conjecture several lower bounds on $\mathcal{A}(G)$ and prove that these
conjectures are true for specific classes of graphs such as triangulated graphs
and graphs with maximum degree at most 2.
| 2104.14172 | 737,909 |
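Here, non-equivalent colorings correspond to partitions of the vertex set into nonempty independent sets, and $\mathcal{A}(G)$ averages the number of blocks over all such partitions. A brute-force sketch for small graphs (exponential in the number of vertices, for illustration only):

```python
def partitions(elems):
    """Yield all set partitions of a list, as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        yield [[first]] + part                      # new block for `first`
        for i in range(len(part)):                  # or join an existing block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]

def average_colors(vertices, edges):
    """Average block count over all partitions into independent sets."""
    def independent(block):
        return not any((u, v) in edges or (v, u) in edges
                       for u in block for v in block if u != v)
    counts = [len(p) for p in partitions(list(vertices))
              if all(independent(b) for b in p)]
    return sum(counts) / len(counts)

# Path on 3 vertices: valid colorings {0}{1}{2} and {0,2}{1}, so A = 2.5.
print(average_colors([0, 1, 2], {(0, 1), (1, 2)}))
```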
Many fundamental machine learning tasks can be formulated as a problem of
learning with vector-valued functions, where we learn multiple scalar-valued
functions together. Although there is some generalization analysis on different
specific algorithms under the empirical risk minimization principle, a unifying
analysis of vector-valued learning under a regularization framework is still
lacking. In this paper, we initiate the generalization analysis of regularized
vector-valued learning algorithms by presenting bounds with a mild dependency
on the output dimension and a fast rate on the sample size. Our analysis
relaxes the existing assumptions on the restrictive constraints on hypothesis
spaces, the smoothness of loss functions, and the low-noise condition. To understand the
interaction between optimization and learning, we further use our results to
derive the first generalization bounds for stochastic gradient descent with
vector-valued functions. We apply our general results to multi-class
classification and multi-label classification, which yield the first bounds
with a logarithmic dependency on the output dimension for extreme multi-label
classification with the Frobenius regularization. As a byproduct, we derive a
Rademacher complexity bound for loss function classes defined in terms of a
general strongly convex function.
| 2104.14173 | 737,909 |
Molecular communication via diffusion (MCvD) is considered one of the most
feasible communication paradigms for nanonetworks, especially for
bio-nanonetworks, which usually operate in water-rich biological environments.
Two effects that deteriorate the signal in MCvD are noise and inter-symbol
interference (ISI). The expected channel impulse response of MCvD has a long,
slowly attenuating tail due to molecular diffusion, which causes ISI and
further limits the data rate of MCvD. The extent to which ISI and noise are
suppressed in an MCvD system determines its effectiveness, especially at a high
data rate. Although ISI-suppression approaches have been investigated, most of
them are addressed as non-essential parts of other topics, such as signal
detection or modulation. Furthermore, most of the state-of-the-art
ISI-suppression approaches work by subtracting the estimated ISI from the total
signal. In this work, we investigate ISI suppression from a new perspective:
filters that remove ISI without any ISI estimation. The principles for a good
design of ISI-suppression filters in MCvD are investigated. Based on these
principles, an ISI-suppression filter with good anti-noise capability and an
associated signal detection scheme are proposed for MCvD scenarios with both
ISI and noise. We compare the proposed scheme with the state-of-the-art
ISI-suppression approaches. The results show that the proposed ISI-suppression
scheme can recover signals severely deteriorated by both ISI and noise, which
could not be effectively detected by the state-of-the-art ISI-suppression
approaches.
| 2104.14174 | 737,909 |
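The long ISI-causing tail mentioned above can be seen in the commonly used first-hitting-time channel model for a point transmitter and a fully absorbing spherical receiver (radius $r_r$, center distance $d$, diffusion coefficient $D$), whose hitting-rate density is $f(t) = \frac{r_r}{d}\,\frac{d - r_r}{\sqrt{4\pi D t^3}}\,e^{-(d-r_r)^2/(4Dt)}$. A sketch showing its slow decay (parameter values are arbitrary illustrations):

```python
import numpy as np

def hitting_rate(t, d=10e-6, r_r=5e-6, D=79.4e-12):
    """First-hitting rate density for a fully absorbing spherical receiver.

    d: transmitter-to-receiver-center distance [m]; r_r: receiver radius [m];
    D: diffusion coefficient [m^2/s] (illustrative values).
    """
    dist = d - r_r
    return (r_r / d) * dist / np.sqrt(4 * np.pi * D * t**3) \
        * np.exp(-dist**2 / (4 * D * t))

t = np.linspace(1e-3, 2.0, 2000)             # seconds
h = hitting_rate(t)
i1s = np.searchsorted(t, 1.0)
print(f"peak at {t[np.argmax(h)]*1e3:.0f} ms; "
      f"density at 1 s is still {h[i1s]/h.max():.3f} of the peak")
```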
We present initial limit Datalog, a new extensible class of constrained Horn
clauses for which the satisfiability problem is decidable. The class may be
viewed as a generalisation to higher-order logic (with a simple restriction on
types) of the first-order language limit Datalog$_Z$ (a fragment of Datalog
modulo linear integer arithmetic), but can be instantiated with any suitable
background theory. For example, the fragment is decidable over any countable
well-quasi-order with a decidable first-order theory, such as natural number
vectors under componentwise linear arithmetic, and words of a bounded,
context-free language ordered by the subword relation. Formulas of initial
limit Datalog have the property that, under some assumptions on the background
theory, their satisfiability can be witnessed by a new kind of term model which
we call entwined structures. Whilst the set of all models is typically
uncountable, the set of all entwined structures is recursively enumerable, and
model checking is decidable.
| 2104.14175 | 737,909 |
We introduce a dense and a dilute loop model on causal dynamical
triangulations. Both models are characterised by a geometric coupling constant
$g$ and a loop parameter $\alpha$ in such a way that the purely geometric
causal triangulation model is recovered for $\alpha=1$. We show that the dense
loop model can be mapped to a solvable planar tree model, whose partition
function we compute explicitly and use to determine the critical behaviour of
the loop model. The dilute loop model can likewise be mapped to a planar tree
model; however, a closed-form expression for the corresponding partition
function is not obtainable using the standard methods employed in the dense
case. Instead, we derive bounds on the critical coupling $g_c$ and apply
transfer matrix techniques to examine the critical behaviour for $\alpha$
small.
| 2104.14176 | 737,909 |
The evaluation of robot capabilities to navigate human crowds is essential to
conceive new robots intended to operate in public spaces. This paper initiates
the development of a benchmark tool to evaluate such capabilities; our
long-term vision is to provide the community with a simulation tool that
generates virtual crowded environments to test robots, to establish standard
scenarios and metrics to evaluate navigation techniques in terms of safety and
efficiency, and thus to enable benchmarking of robots' crowd navigation
capabilities. This paper presents the architecture of the simulation tool,
introduces first scenarios and evaluation metrics, and reports early results
demonstrating that our solution is suitable for use as a benchmark tool.
| 2104.14177 | 737,909 |
We report the synthesis and crystal structure of an organic-inorganic
compound, ethylenediammonium lead iodide, NH3CH2CH2NH3PbI4. Synchrotron-based
single-crystal X-ray diffraction experiments revealed that the pristine and
thermally treated crystals differ in the behaviour of the organic cation, which
is characterized by a partial disorder in the thermally treated crystal. Based
on current-voltage measurements, increased disorder of the organic cation is
associated with enhanced photoconductivity. This compound could be a potential
candidate for interface engineering in lead halide perovskite-based
optoelectronic devices.
| 2104.14178 | 737,909 |
The time evolution of a two-component collisionless plasma is modeled by the
Vlasov-Poisson system. In this work, the setting is two and one-half
dimensional, that is, the distribution functions of the particle species are
independent of the third space dimension. We consider the case that an external
magnetic field is present in order to confine the plasma in a given infinitely
long cylinder. After discussing global well-posedness of the corresponding
Cauchy problem, we construct stationary solutions which indeed have support
away from their confinement device. Then, in the main part of this work we
investigate the stability of such steady states, both with respect to
perturbations in the initial data, where we employ the energy-Casimir method,
and also with respect to perturbations in the external magnetic field.
| 2104.14179 | 737,909 |
Kinetic models of biochemical systems used in the modern literature often
contain hundreds or even thousands of variables. While these models are
convenient for detailed simulations, their size is often an obstacle to
deriving mechanistic insights. One way to address this issue is to perform an
exact model reduction by finding a self-consistent lower-dimensional projection
of the corresponding dynamical system.
Recently, a new algorithm, CLUE, has been designed and implemented, which
allows one to construct an exact linear reduction of the smallest possible
dimension such that the fixed variables of interest are preserved. It turned
out that allowing arbitrary linear combinations (as opposed to the zero-one
combinations used in prior approaches) may yield a much smaller reduction.
However, there was a drawback: some of the new variables did not have clear
physical meaning, thus making the reduced model harder to interpret.
We design and implement an algorithm that, given an exact linear reduction,
re-parametrizes it by performing an invertible transformation of the new
coordinates to improve the interpretability of the new variables. We apply our
algorithm to three case studies and show that "uninterpretable" variables
disappear entirely in all the case studies.
The implementation of the algorithm and the files for the case studies are
available at https://github.com/xjzhaang/LumpingPostiviser.
| 2104.14180 | 737,909 |
We consider an electronic bound state of the usual, non-relativistic,
molecular Hamiltonian with Coulomb interactions and fixed nuclei. Away from
appropriate collisions, we prove the real analyticity of all the reduced
densities and density matrices, that are associated to this bound state. We
provide a similar result for the associated reduced current density.
| 2104.14181 | 737,909 |
Particle scattering is a powerful tool to unveil the nature of various
subatomic phenomena. The key quantity is the scattering amplitude whose
analytic structure carries the information of the quantum states. In this work,
we present a first attempt to extract the pole configuration of inelastic
scatterings using deep learning. Motivated by the recent new hadron phenomena,
we develop a curriculum learning method for deep neural networks to analyze
coupled-channel scattering problems.
We show how effectively the method works to extract the pole configuration
associated with resonances in the $\pi N$ scatterings.
| 2104.14182 | 737,909 |
We consider finite- and infinite-dimensional first-order consensus systems
with time-constant interaction coefficients. For symmetric coefficients,
convergence to consensus is classically established by proving, for instance,
that the usual variance is an exponentially decreasing Lyapunov function. We
investigate here the convergence to consensus in the non-symmetric case: we
identify a positive weight which allows us to define a weighted mean
corresponding to the consensus, and we obtain exponential convergence towards
consensus. Moreover, we compute the sharp exponential decay rate.
| 2104.14183 | 737,909 |
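A minimal numerical sketch of the non-symmetric situation: for $\dot x = -Lx$ with a (non-symmetric) graph Laplacian $L$, the conserved weighted mean is $w^\top x$ where $w$ is the positive left null vector of $L$ normalized to sum to one, and all states converge to $w^\top x(0)$. The matrix values below are an arbitrary strongly connected example:

```python
import numpy as np
from scipy.linalg import expm, null_space

# Non-symmetric interaction weights a_ij >= 0 (arbitrary example).
A = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.5, 0.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian, L @ ones = 0

# Positive left null vector of L: the weight defining the conserved mean.
w = null_space(L.T)[:, 0]
w = w / w.sum()                         # normalization also fixes the sign

x0 = np.array([1.0, -2.0, 4.0])
x_inf = expm(-100.0 * L) @ x0           # long-time state
print("weighted mean w.x0 :", w @ x0)
print("consensus state    :", x_inf)    # all entries ~ w @ x0
```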
This work investigates uncertainty-aware deep learning (DL) in tactile
robotics based on a general framework introduced recently for robot vision. For
a test scenario, we consider optical tactile sensing in combination with DL to
estimate the edge pose as a feedback signal to servo around various 2D test
objects. We demonstrate that uncertainty-aware DL can improve the pose
estimation over deterministic DL methods. The system estimates the uncertainty
associated with each prediction, which is used along with temporal coherency to
improve the predictions via a Kalman filter, and hence improve the tactile
servo control. The robot is able to robustly follow all of the presented
contour shapes, not only reducing the error by a factor of two but also
smoothing the trajectory away from the undesired noisy behaviour caused by
previous deterministic networks. In our view, as the field of tactile robotics matures
in its use of DL, the estimation of uncertainty will become a key component in
the control of physically interactive tasks in complex environments.
| 2104.14184 | 737,909 |
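A minimal sketch of the fusion step described above: a scalar Kalman filter whose measurement-noise variance at each step is taken from the network's per-prediction uncertainty, so uncertain pose estimates are down-weighted by temporal coherency. The random-walk process model and all numbers are illustrative assumptions:

```python
import numpy as np

def kalman_track(measurements, variances, q=0.01):
    """Scalar Kalman filter with per-measurement noise variance.

    measurements: pose estimates from the network (e.g., edge angle)
    variances:    the network's predictive variance for each estimate
    q:            process-noise variance of the random-walk pose model
    """
    x, p = measurements[0], variances[0]    # initialize from first estimate
    track = [x]
    for z, r in zip(measurements[1:], variances[1:]):
        p = p + q                           # predict (random walk)
        k = p / (p + r)                     # gain: small when r is large
        x = x + k * (z - x)                 # update with measurement z
        p = (1 - k) * p
        track.append(x)
    return np.array(track)

rng = np.random.default_rng(0)
true = np.linspace(0.0, 1.0, 50)            # slowly varying true pose
var = rng.uniform(0.01, 0.5, 50)            # heteroscedastic uncertainty
meas = true + rng.normal(0, np.sqrt(var))
print(np.abs(kalman_track(meas, var) - true).mean(),
      "<", np.abs(meas - true).mean())      # filtered error is smaller
```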
We investigate the inference of variable-length codes in other domains of
computer science, such as noisy information transmission or information
retrieval and storage: in such topics, traditionally, mostly constant-length
codewords are used. The study relies on the two concepts of independent and
closed sets. We focus on those word relations whose images are computed by
applying some peculiar combinations of deletion, insertion, or substitution. In
particular, characterizations of variable-length codes that are maximal in the
families of $\tau$-independent or $\tau$-closed codes are provided.
| 2104.14185 | 737,909 |
Current dense symmetric eigenvalue (EIG) and singular value decomposition
(SVD) implementations may suffer from the lack of concurrency during the
tridiagonal and bidiagonal reductions, respectively. This performance
bottleneck is typical for the two-sided transformations due to the Level-2 BLAS
memory-bound calls. Therefore, the current state-of-the-art EIG and SVD
implementations may achieve only a small fraction of the system's sustained
peak performance. The QR-based Dynamically Weighted Halley (QDWH) algorithm may
be used as a pre-processing step toward the EIG and SVD solvers, while
mitigating the aforementioned bottleneck. QDWH-EIG and QDWH-SVD expose more
parallelism, while relying on compute-bound matrix operations. Both run closer
to the sustained peak performance of the system, but at the expense of
performing more FLOPS than the standard EIG and SVD algorithms. In this paper,
we introduce a new QDWH-based solver for computing the partial spectrum for EIG
(QDWHpartial-EIG) and SVD (QDWHpartial-SVD) problems. By optimizing the
rational function underlying the algorithms only in the desired part of the
spectrum, QDWHpartial-EIG and QDWHpartial-SVD algorithms efficiently compute a
fraction (say 1-20%) of the corresponding spectrum. We develop high-performance
implementations of QDWHpartial-EIG and QDWHpartial-SVD on distributed-memory
systems and demonstrate their numerical robustness. Experimental
results using up to 36K MPI processes show performance speedups for
QDWHpartial-SVD up to 6X and 2X against PDGESVD from ScaLAPACK and KSVD,
respectively. QDWHpartial-EIG outperforms PDSYEVD from ScaLAPACK up to 3.5X but
remains slower compared to ELPA. QDWHpartial-EIG achieves, however, a better
occupancy of the underlying hardware by extracting higher sustained peak
performance than ELPA, which is critical moving forward with accelerator-based
supercomputers.
| 2104.14186 | 737,909 |
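For context, QDWH computes the polar decomposition $A = U_p H$ via a dynamically weighted variant of Halley's iteration; the unweighted fixed-point core is $X_{k+1} = X_k (3I + X_k^\top X_k)(I + 3 X_k^\top X_k)^{-1}$, which converges to the orthogonal polar factor. A dense sketch of that core only (the actual QDWH uses QR-based, dynamically weighted steps for speed and stability):

```python
import numpy as np

def halley_polar(A, iters=30):
    """Orthogonal polar factor of A via the (unweighted) Halley iteration.

    QDWH accelerates this with dynamic weights and QR-based updates; this
    sketch shows only the mathematical fixed point it builds on.
    """
    X = A / np.linalg.norm(A, 2)        # scale so singular values <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.T @ X
        X = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
U = halley_polar(A)
H = U.T @ A
print(np.allclose(U.T @ U, np.eye(6)))                            # orthogonal
print(np.allclose(H, H.T) and np.all(np.linalg.eigvalsh(H) > 0))  # H is SPD
```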
Let $\hat{\mathfrak g}$ be a complex semi-simple Lie algebra and $\mathfrak g$
be a semisimple subalgebra of $\hat{\mathfrak g}$. Consider the branching
problem of decomposing the simple $\hat{\mathfrak g}$-representations $\hat V$
as a sum of simple $\mathfrak g$-representations $V$. When $\hat{\mathfrak g} =
\mathfrak g \times \mathfrak g$, it is the tensor product decomposition. The
multiplicity space $\mathrm{Mult}(V, \hat V)$ satisfies $\hat V = \bigoplus_V
\mathrm{Mult}(V, \hat V) \otimes V$, where the sum runs over the isomorphism
classes of simple $\mathfrak g$-representations. In the case when $\mathfrak g$
is spherical of minimal rank, we describe $\mathrm{Mult}(V, \hat V)$ as the
intersection of kernels of powers of root operators in some weight space of the
dual space $\hat V^*$ of $\hat V$. When $\hat{\mathfrak g} = \mathfrak g \times
\mathfrak g$, we recover by geometric methods a well-known result.
Since its inception, the E.U.'s Common Agricultural Policy (CAP) has aimed at
ensuring an adequate and stable farm income. While recognizing that the CAP
pursues a larger set of objectives, this thesis focuses on the impact of the
CAP on the level and the stability of farm income in Italian farms. It uses
microdata from a highly standardized dataset, the Farm Accountancy Data Network
(FADN), which is available in all E.U. countries. This allows, if deemed
useful, the replication of the analyses in other countries. The thesis first
assesses the Income Transfer Efficiency (i.e., how much of the support
translates to farm income) of several CAP measures. Secondly, it analyses the
role of a specific and relatively new CAP measure (the Income Stabilisation
Tool, IST) that is specifically aimed at stabilising farm income. Thirdly, it
assesses the potential use of Machine Learning procedures to develop adequate
ratemaking for the IST; these procedures are used to predict indemnity levels
because this is an essential point for a similar insurance scheme. The
assessment of ratemaking is challenging: the indemnity distribution is
zero-inflated, non-continuous, right-skewed, and several factors can
potentially explain it. We address these problems by using Tweedie
distributions and three Machine Learning procedures. The objective is to assess
whether this improves the ratemaking, using the prospective application of the
Income Stabilisation Tool in Italy as a case study. We look at the econometric
performance of the models and the impact of using their predictions in
practice. Some of these procedures efficiently predict indemnities, using a
limited number of regressors and ensuring the scheme's financial stability.
| 2104.14188 | 737,909 |
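Tweedie distributions with power parameter $1 < p < 2$ are compound Poisson-gamma laws, which is what makes them suitable for zero-inflated, right-skewed indemnities. A minimal sketch with scikit-learn's `TweedieRegressor` on synthetic data (covariates and the data-generating process are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                     # illustrative farm covariates

# Synthetic compound Poisson-gamma indemnities: many exact zeros,
# right-skewed positive losses otherwise.
n_claims = rng.poisson(np.exp(0.3 * X[:, 0]))
y = np.array([rng.gamma(2.0, 500.0, k).sum() for k in n_claims])

# power in (1, 2) selects the compound Poisson-gamma family.
model = TweedieRegressor(power=1.5, alpha=0.1, max_iter=1000)
model.fit(X, y)
print("share of zero indemnities:", (y == 0).mean())
print("predicted mean indemnity :", model.predict(X[:5]).round(1))
```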
We introduce and parameterize a chemomechanical model of microtubule dynamics
on the dimer level, which is based on the allosteric tubulin model and includes
attachment, detachment and hydrolysis of tubulin dimers as well as stretching
of lateral bonds, bending at longitudinal junctions, and the possibility of
lateral bond rupture and formation. The model is computationally efficient such
that we reach sufficiently long simulation times to observe repeated
catastrophe and rescue events at realistic tubulin concentrations and
hydrolysis rates, which allows us to deduce catastrophe and rescue rates. The
chemomechanical model also allows us to gain insight into microscopic features
of the GTP-tubulin cap structure and microscopic structural features triggering
microtubule catastrophes and rescues. Dilution simulations show qualitative
agreement with experiments. We also explore the consequences of a possible
feedback of mechanical forces onto the hydrolysis process and the GTP-tubulin
cap structure.
| 2104.14189 | 737,909 |
This paper aims at solving the FX market volatility modeling problem and
finding the most suitable approach to this task. The validity of two competing
approaches, the classical econometric generalized conditional
heteroscedasticity approach and mathematical tools (singular spectrum analysis
and dynamical systems stability analysis), is tested on major currency pairs
(EUR/USD, USD/JPY, GBP/USD) and unique high-frequency USD/RUB data. The study
shows that both mathematical tools, understudied in the econometric discourse,
have great potential within the scope of the discussed problem, as both of them
show promising results in all experiments covered in this research.
| 2104.14190 | 737,909 |
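The econometric baseline referenced here is typically a GARCH(1,1) fit to log-returns; a minimal sketch using the `arch` package on synthetic heavy-tailed returns (real tick data would first be resampled to a regular grid):

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=6, size=2000)   # toy heavy-tailed % log-returns

# Classical GARCH(1,1) conditional-volatility model.
am = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant")
res = am.fit(disp="off")
print(res.params)                           # mu, omega, alpha[1], beta[1]

forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])           # 5-step-ahead variance path
```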
We investigate the H I envelope of the young, massive GMCs in the star-forming
regions N48 and N49, which are located within the high column density H I ridge
between two kpc-scale supergiant shells, LMC 4 and LMC 5. New long-baseline H I
21 cm line observations with the Australia Telescope Compact Array (ATCA) were
combined with archival shorter-baseline data and single-dish data from the
Parkes telescope, for a final synthesized beam size of 24.75" by 20.48", which
corresponds to a spatial resolution of ~6 pc in the LMC. It is newly revealed
that the H I gas is highly filamentary, and that the molecular clumps are
distributed along filamentary H I features. In total, 39 filamentary features
are identified, and their typical width is ~21 (8-49) pc. We propose a scenario
in which the GMCs were formed via gravitational instabilities in atomic gas
which was initially accumulated by the two shells and then further compressed
by their collision. This suggests that GMC formation involves the filamentary
nature of the atomic medium.
| 2104.14191 | 737,909 |
Ultralight scalars, which are states that are either exactly massless or much
lighter than any other massive particle in the model, appear in many new
physics scenarios. Axions and majorons constitute well-motivated examples of
this type of particle. In this work, we explore the phenomenology of these
states in low-energy leptonic observables adopting a model independent approach
that includes both scalar and pseudoscalar interactions. Then, we consider
processes in which the ultralight scalar $\phi$ is directly produced, such as
$\mu \to e \, \phi$, or acts as a mediator, as in $\tau \to \mu \mu \mu$.
Finally, contributions to the charged leptons' magnetic and electric moments
are studied as well. In particular, it is shown that the muon $g-2$ anomaly can
be explained, provided that a mechanism suppressing the experimental bounds on
the coupling between the ultralight scalar and a pair of muons is introduced.
| 2104.14192 | 737,909 |
This paper is concerned with multiplicity results for parametric singular
double phase problems in $\mathbb{R}^N$ via the Nehari manifold approach. It is
shown that the problem under consideration has at least two nontrivial weak
solutions provided the parameter is sufficiently small. The idea is to split
the Nehari manifold into three disjoint parts minimizing the energy functional
on two of them. The third set turns out to be the empty set for small values of
the parameter.
| 2104.14193 | 737,909 |
Two-loop MHV amplitudes in planar ${\cal N} = 4$ supersymmetric Yang-Mills
theory are known to exhibit many intriguing forms of cluster-algebraic
structure. We leverage this structure to upgrade the symbols of the eight- and
nine-particle amplitudes to complete analytic functions. This is done by
systematically projecting onto the components of these amplitudes that take
different functional forms, and matching each component to an ansatz of
multiple polylogarithms with negative cluster-coordinate arguments. The
remaining additive constant can be determined analytically by comparing the
collinear limit of each amplitude to known lower-multiplicity results. We also
observe that the nonclassical part of each of these amplitudes admits a unique
decomposition in terms of a specific $A_3$ cluster polylogarithm, and explore
the numerical behavior of the remainder function along lines in the positive
region.
| 2104.14194 | 737,909 |
Nickel-based complex oxides have served as a playground for decades in the
quest for a copper-oxide analog of the high-temperature (high-Tc)
superconductivity. They may provide key points towards understanding the
mechanism of the high-Tc and an alternative route for a room-temperature
superconductor. The recent discovery of superconductivity in the infinite-layer
nickelate thin films has put this pursuit to an end. Having complete control in
material preparation and a full understanding of the properties and electronic
structures becomes the center of gravity of current research in nickelates.
Thus far, material synthesis remains challenging. The demonstration of perfect
diamagnetism is still missing, and understanding the role of the interface and
bulk in the superconducting properties is still lacking. Here, we synthesized
high-quality Nd0.8Sr0.2NiO2 thin films with different thicknesses and
investigated the interface and strain effects on the electrical, magnetic and
optical properties. The perfect diamagnetism is demonstrated, confirming the
occurrence of superconductivity in the thin films. Unlike the thick films in
which the normal-state Hall coefficient (RH) changes sign from negative to
positive as the temperature decreases, the RH of films thinner than 6.1 nm
remains negative over the whole temperature range below 300 K, suggesting a
thickness-driven band structure modification. The X-ray spectroscopy reveals
the Ni-O hybridization nature in doped finite-layer nickelates, and the
hybridization is enhanced as the thickness decreases. Consistent with band
structure calculations on nickelate/SrTiO3 interfaces, the interface and strain
effect induce the dominating electron-like band in the ultrathin film, thus
causing the sign-change of the RH.
| 2104.14195 | 737,909 |
We consider a one-dimensional stochastic differential equation driven by a
Wiener process, where the diffusion coefficient depends on an ergodic fast
process. The averaging principle is satisfied: it is well-known that the slow
component converges in distribution to the solution of an averaged equation,
with generator determined by averaging the square of the diffusion coefficient.
We propose a version of the averaging principle, where the solution is
interpreted as the sum of two terms: one depending on the average of the
diffusion coefficient, the other giving fluctuations around that average. Both
the average and fluctuation terms contribute to the limit, which illustrates
why it is required to average the square of the diffusion coefficient to find
the limit behavior.
| 2104.14196 | 737,909 |
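A numerical illustration of why the square of the diffusion coefficient is averaged: simulating $dX_t = \sigma(Y_{t/\epsilon})\,dW_t$ with a fast Ornstein-Uhlenbeck process $Y$ and comparing $\mathrm{Var}(X_1)$ with $\overline{\sigma^2} = \mathbb{E}[\sigma(Y)^2]$ under the invariant law. The choice of $\sigma$ and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, T, n_steps, n_paths = 0.01, 1.0, 2000, 20000
dt = T / n_steps
sigma = lambda y: 1.0 + 0.5 * np.sin(y)         # illustrative coefficient

X = np.zeros(n_paths)                           # slow component, X(0) = 0
Y = rng.standard_normal(n_paths)                # fast OU, stationary start
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    X += sigma(Y) * dW                          # slow Euler-Maruyama step
    Y += -Y * dt / eps + np.sqrt(2.0 / eps) * dB  # fast OU, invariant N(0,1)

sig2_bar = np.mean(sigma(rng.standard_normal(10**6)) ** 2)
print("Var X(1)           :", X.var())          # ~ 1.11
print("mean of sigma^2    :", sig2_bar)         # ~ 1.11, matches the limit
print("(mean of sigma)^2  :", 1.0)              # the naive average is wrong
```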
We present the hybrid hadron string dynamic (HydHSD) model connecting the
parton-hadron-string dynamic model (PHSD) and a hydrodynamic model taking into
account shear viscosity within the Israel-Stewart approach. The performance of
the code is tested on the pion and proton rapidity and transverse mass
distributions calculated for Au+Au and Pb+Pb collisions at AGS--SPS energies.
The influence of the switch time from transport to hydro models, the viscous
parameter, and freeze-out time are discussed. Since the applicability of the
Israel-Stewart hydrodynamics assumes the perturbative character of the viscous
stress tensor, $\pi^{\mu\nu}$, which should not exceed the ideal
energy-momentum tensor, $T_{\rm id}^{\mu\nu}$, hydrodynamical codes usually
rescale the shear stress tensor if the inequality $\|\pi^{\mu\nu}\|\ll \|T_{\rm
id}^{\mu\nu}\|$ is not fulfilled in some sense. We show that the form of the
corresponding condition plays an important role in the sensitivity of
hydrodynamic calculations to the viscous parameter -- a ratio of the shear
viscosity to the entropy density, $\eta/s$. It is shown that the constraints
used in the vHLLE and MUSIC models give the same results for the observables.
With these constraints, the rapidity distributions and transverse momentum
spectra are most sensitive to a change of the $\eta/s$ ratio. As an
alternative, a strict condition is used. We performed global fits of the
rapidity and transverse mass distributions of pions and protons. It was also found that
$\eta/s$ as a function of the collision energy monotonically increases from
$E_{\rm lab}=6A$GeV up to $E_{\rm lab}=40A$GeV and saturates for higher SPS
energies. We observe that it is difficult to simultaneously reproduce the pion
and proton rapidity distributions within our model with the present choice of
the equation of state without a phase transition.
| 2104.14197 | 737,909 |
We design numerical schemes for a class of slow-fast systems of stochastic
differential equations, where the fast component is an Ornstein-Uhlenbeck
process and the slow component is driven by a fractional Brownian motion with
Hurst index $H>1/2$. We establish the asymptotic preserving property of the
proposed scheme: when the time-scale parameter goes to $0$, a limiting scheme
which is consistent with the averaged equation is obtained. With this numerical
analysis point of view, we thus illustrate the recently proved averaging result
for the considered SDE systems and the main differences with the standard
Wiener case.
| 2104.14198 | 737,909 |
We estimate the short- to medium term impact of six major past pandemic
crises on the CO2 emissions and energy transition to renewable electricity. The
results show that the previous pandemics led on average to a 3.4-3.7% fall in
the CO2 emissions in the short-run (1-2 years since the start of the pandemic).
The effect is present only in the rich countries, as well as in countries with
the highest pandemic death toll (where it disappears only after 8 years) and in
countries that were hit by the pandemic during economic recessions. We found
that the past pandemics increased the share of electricity generated from
renewable sources within the five-year horizon by 1.9-2.3 percentage points in
the OECD countries and by 3.2-3.9 percentage points in countries experiencing
economic recessions. We discuss the implications of our findings in the context
of CO2 emissions and the transition to renewable energy in the post-COVID-19
era.
| 2104.14199 | 737,909 |
Recommender systems have achieved great success in modeling users'
preferences on items and predicting the next item a user will consume.
Recently, there have been many efforts to utilize time information of users'
interactions with items to capture inherent temporal patterns of user behaviors
and offer timely recommendations at a given time. Existing studies regard the
time information as a single type of feature and focus on how to associate it
with user preferences on items. However, we argue they are insufficient for
fully learning the time information because the temporal patterns of user
preference are usually heterogeneous. A user's preference for a particular item
may 1) increase periodically or 2) evolve over time under the influence of
significant recent events, and each of these two kinds of temporal pattern
appears with some unique characteristics. In this paper, we first define the
unique characteristics of the two kinds of temporal pattern of user preference
that should be considered in time-aware recommender systems. Then we propose a
novel recommender system for timely recommendations, called TimelyRec, which
jointly learns the heterogeneous temporal patterns of user preference
considering all of the defined characteristics. In TimelyRec, a cascade of two
encoders captures the temporal patterns of user preference using a proposed
attention module for each encoder. Moreover, we introduce an evaluation
scenario that evaluates the performance on predicting an interesting item and
when to recommend the item simultaneously in top-K recommendation (i.e.,
item-timing recommendation). Our extensive experiments on a scenario for item
recommendation and the proposed scenario for item-timing recommendation on
real-world datasets demonstrate the superiority of TimelyRec and the proposed
attention modules.
| 2104.14200 | 737,909 |
Very recently, To et al. have experimentally explored granular flow in a
cylindrical silo with a bottom wall that rotates horizontally with respect to
the lateral wall \cite{Kiwing2019}. Here, we numerically reproduce their
experimental findings, in particular the peculiar behavior of the mass flow
rate $Q$ as a function of the frequency of rotation $f$. Namely, we find that
for small outlet diameters $D$ the flow rate increases with $f$, while for
larger $D$ a non-monotonic behavior is confirmed. Furthermore, using a
coarse-graining technique, we compute the macroscopic density, momentum, and
the stress tensor fields. These results show conclusively that changes in the
discharge process are directly related to changes in the flow pattern from
funnel flow to mass flow. Moreover, by decomposing the mass flux (linear
momentum field) at the orifice into two main factors: macroscopic velocity and
density fields, we find that the non-monotonic behavior of the linear
momentum is caused by density changes rather than by changes in the macroscopic
velocity. In addition, by analyzing the spatial distribution of the kinetic
stress, we find that for small orifices increasing rotational shear enhances
the mean kinetic pressure $\langle p^k \rangle$ and the system dilatancy. This
reduces the stability of the arches, and, consequently, the volumetric flow
rate increases monotonically. For large orifices, however, we detected that
$\langle p^k \rangle$ changes non-monotonically, which might explain the
non-monotonic behavior of $Q$ when varying the rotational shear.
| 2104.14201 | 737,909 |
Uncertainty quantification is a key aspect in robotic perception, as
overconfident or point estimators can lead to collisions and damages to the
environment and the robot. In this paper, we evaluate scalable approaches to
uncertainty quantification in single-view supervised depth learning,
specifically MC dropout and deep ensembles. For MC dropout, in particular, we
explore the effect of dropout at different levels in the architecture. We
demonstrate that adding dropout in the encoder leads to better results than
adding it in the decoder, the latter being the usual approach in the literature
for similar problems. We also propose the use of depth uncertainty in the
application of pseudo-RGBD ICP and demonstrate its potential for improving the
accuracy in such a task.
| 2104.14202 | 737,909 |
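A minimal sketch of MC dropout at inference: keep only the dropout layers stochastic at test time, run several forward passes, and take the per-pixel mean and variance as the depth estimate and its uncertainty. The toy network is a placeholder for an actual depth architecture:

```python
import torch
import torch.nn as nn

net = nn.Sequential(                     # placeholder depth network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(0.2),                   # dropout on the "encoder" side
    nn.Conv2d(16, 1, 3, padding=1),
)

def mc_dropout_depth(model, image, n_samples=20):
    """Mean depth and predictive variance via stochastic forward passes."""
    model.eval()
    for m in model.modules():            # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(image) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

img = torch.rand(1, 3, 64, 64)
depth, var = mc_dropout_depth(net, img)
print(depth.shape, var.shape)            # (1, 1, 64, 64) each
```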
Recent research on unsupervised domain adaptation (UDA) has demonstrated
that end-to-end ensemble learning frameworks serve as a compelling option for
UDA tasks. Nevertheless, these end-to-end ensemble learning methods often lack
flexibility as any modification to the ensemble requires retraining of their
frameworks. To address this problem, we propose a flexible
ensemble-distillation framework for performing semantic segmentation based UDA,
allowing any arbitrary composition of the members in the ensemble while still
maintaining its superior performance. To achieve such flexibility, our
framework is designed to be robust against the output inconsistency and the
performance variation of the members within the ensemble. To examine the
effectiveness and the robustness of our method, we perform an extensive set of
experiments on both GTA5 to Cityscapes and SYNTHIA to Cityscapes benchmarks to
quantitatively inspect the improvements achievable by our method. We further
provide detailed analyses to validate that our design choices are practical and
beneficial. The experimental evidence validates that the proposed method indeed
offers superior performance, robustness, and flexibility in semantic
segmentation based UDA tasks against contemporary baseline methods.
| 2104.14203 | 737,909 |
Electricity exchanges offer several trading possibilities for market
participants: starting with futures products through the spot market consisting
of the auction and continuous part, and ending with the balancing market. This
variety of choice creates a new question for traders - when to trade to
maximize the gain. This problem is not trivial especially for trading larger
volumes as the market participants should also consider their own price impact.
The following paper raises this issue considering two markets: the hourly EPEX
Day-Ahead Auction and the quarter-hourly EPEX Intraday Auction. We consider a
realistic setting which includes a forecasting study and a suitable evaluation.
For a meaningful optimization many price scenarios are considered that we
obtain using bootstrap with models that are well-known and researched in the
electricity price forecasting literature. The own market impact is predicted by
mimicking the demand or supply shift in the respective auction curves. A number
of trading strategies are considered, e.g., minimization of the trading costs,
and risk-neutral or risk-averse agents. Additionally, we provide theoretical
results for risk-neutral agents. In particular, we show when the optimal trading
path coincides with the solution that minimizes transaction costs. The
application study is conducted using German market data, but the presented
methods can be easily utilized for any other pair of auction-based markets.
They could also be generalized to other market types, which is discussed in the paper as
well. The empirical results show that market participants could increase their
gains significantly compared to simple benchmark strategies.
| 2104.14204 | 737,909 |
We present the novel Efficient Line Segment Detector and Descriptor (ELSD) to
simultaneously detect line segments and extract their descriptors in an image.
Unlike the traditional pipelines that conduct detection and description
separately, ELSD utilizes a shared feature extractor for both detection and
description, to provide the essential line features to the higher-level tasks
like SLAM and image matching in real time. First, we design the one-stage
compact model, and propose to use the mid-point, angle and length as the
minimal representation of line segment, which also guarantees the
center-symmetry. The non-centerness suppression is proposed to filter out the
fragmented line segments caused by lines' intersections. The fine offset
prediction is designed to refine the mid-point localization. Second, the line
descriptor branch is integrated with the detector branch, and the two branches
are jointly trained in an end-to-end manner. In the experiments, the proposed
ELSD achieves state-of-the-art performance on the Wireframe and YorkUrban
datasets, in both accuracy and efficiency. The line description
ability of ELSD also outperforms the previous works on the line matching task.
| 2104.14205 | 737,909 |
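The minimal line-segment representation mentioned above is convenient because it is center-symmetric: (midpoint, angle, length) maps to the two endpoints as $p_{1,2} = m \mp \frac{\ell}{2}(\cos\theta, \sin\theta)$. A small sketch of the round trip between the two parameterizations:

```python
import math

def to_endpoints(mx, my, theta, length):
    """(midpoint, angle, length) -> endpoints; center-symmetric by design."""
    dx, dy = 0.5 * length * math.cos(theta), 0.5 * length * math.sin(theta)
    return (mx - dx, my - dy), (mx + dx, my + dy)

def to_mal(p1, p2):
    """Endpoints -> (midpoint x, midpoint y, angle, length)."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    theta = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    length = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return mx, my, theta, length

p1, p2 = to_endpoints(100.0, 50.0, math.pi / 6, 40.0)
print(to_mal(p1, p2))   # recovers (100.0, 50.0, pi/6, 40.0)
```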
Quasi-equilibrium approximation is a widely used closure approximation
approach for model reduction with applications in complex fluids, materials
science, etc. It is based on the maximum entropy principle and leads to
thermodynamically consistent coarse-grain models. However, its high
computational cost is a known barrier for fast and accurate applications.
Despite its good mathematical properties, there are very few works on the fast
and efficient implementations of quasi-equilibrium approximations. In this
paper, we give efficient implementations of quasi-equilibrium approximations
for antipodally symmetric problems on unit circle and unit sphere using
polynomial and piecewise polynomial approximations. Compared to the existing
methods using linear or cubic interpolations, our approach achieves high
accuracy (double precision) with much less storage cost. The methods proposed
in this paper can be directly extended to handle other moment closure
approximation problems.
| 2104.14206 | 737,909 |
Scene graph generation has emerged as an important problem in computer
vision. While scene graphs provide a grounded representation of objects, their
locations and relations in an image, they do so only at the granularity of
proposal bounding boxes. In this work, we propose the first, to our knowledge,
framework for pixel-level segmentation-grounded scene graph generation. Our
framework is agnostic to the underlying scene graph generation method and
addresses the lack of segmentation annotations in target scene graph datasets
(e.g., Visual Genome) through transfer and multi-task learning from, and with,
an auxiliary dataset (e.g., MS COCO). Specifically, each target object being
detected is endowed with a segmentation mask, which is expressed as a
lingual-similarity weighted linear combination over categories that have
annotations present in an auxiliary dataset. These inferred masks, along with a
novel Gaussian attention mechanism which grounds the relations at a pixel-level
within the image, allow for improved relation prediction. The entire framework
is end-to-end trainable and is learned in a multi-task manner with both target
and auxiliary datasets.
| 2104.14207 | 737,909 |
Deep learning (DL) frameworks have been extensively designed, implemented,
and used in software projects across many domains. However, due to the lack of
knowledge or information, time pressure, complex context, etc., various
uncertainties emerge during the development, leading to assumptions made in DL
frameworks. Though not all the assumptions are negative to the frameworks,
being unaware of certain assumptions can result in critical problems (e.g.,
system vulnerability and failures, inconsistencies, and increased cost). As the
first step of addressing the critical problems, there is a need to explore and
understand the assumptions made in DL frameworks. To this end, we conducted an
exploratory study to understand self-claimed assumptions (SCAs) about their
distribution, classification, and impacts using code comments from nine popular
DL framework projects on GitHub. The results are that: (1) 3,084 SCAs are
scattered across 1,775 files in the nine DL frameworks, ranging from 1,460
(TensorFlow) to 8 (Keras) SCAs. (2) There are four types of validity of SCAs:
Valid SCA, Invalid SCA, Conditional SCA, and Unknown SCA, and four types of
SCAs based on their content: Configuration and Context SCA, Design SCA, Tensor
and Variable SCA, and Miscellaneous SCA. (3) Both valid and invalid SCAs may
have an impact within a specific scope (e.g., in a function) on the DL
frameworks. Certain technical debt is induced when making SCAs. Source code is
written and decisions are made based on SCAs. This is the first study
investigating SCAs in DL frameworks, and it helps researchers and
practitioners to gain a comprehensive understanding of the assumptions made. We
also provide the first dataset of SCAs for further research and practice in
this area.
| 2104.14208 | 737,909 |
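A minimal sketch of how such self-claimed assumptions can be mined from a codebase: scan source files for comments containing assumption keywords. The keyword pattern and file handling are simplifications of a real SCA extraction pipeline, which would also filter false positives and classify matches:

```python
import pathlib
import re

# Illustrative keyword pattern for assumption-like Python comments.
PATTERN = re.compile(r"#.*\bassum(e[sd]?|ing|ption)\b", re.IGNORECASE)

def mine_scas(root):
    """Yield (file, line number, comment) for assumption-like comments."""
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for i, line in enumerate(lines, 1):
            m = PATTERN.search(line)
            if m:
                yield str(path), i, m.group(0).lstrip("# ").strip()

for hit in mine_scas("."):
    print(hit)
```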
To determine whether some often-used lexical association measures assign high
scores to n-grams that chance could have produced as frequently as observed, we
used an extension of Fisher's exact test to sequences longer than two words to
analyse a corpus of four million words. The results, based on the
precision-recall curve and a new index called chance-corrected average
precision, show that, as expected, simple-ll is extremely effective. They also
show, however, that MI3 is more efficient than the other hypothesis-test-based
measures and even reaches a performance level almost equal to simple-ll for
3-grams. It is additionally observed that some measures are more efficient for
3-grams than for 2-grams, while others stagnate.
| 2104.14209 | 737,909 |
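For reference, the simple-ll measure for a 2-gram is Dunning's log-likelihood ratio over the 2x2 contingency table, $G^2 = 2\sum_{ij} O_{ij}\ln(O_{ij}/E_{ij})$. A small sketch computing it from corpus counts (the counts below are made up):

```python
import math

def simple_ll(f_xy, f_x, f_y, n):
    """Log-likelihood ratio G^2 for the bigram xy in a corpus of n tokens."""
    # Observed 2x2 contingency table for (x occurs) x (y follows).
    o = [[f_xy,       f_x - f_xy],
         [f_y - f_xy, n - f_x - f_y + f_xy]]
    row = [sum(o[0]), sum(o[1])]
    col = [o[0][0] + o[1][0], o[0][1] + o[1][1]]
    g2 = 0.0
    for i in range(2):
        for j in range(2):
            e = row[i] * col[j] / n          # expected under independence
            if o[i][j] > 0:
                g2 += o[i][j] * math.log(o[i][j] / e)
    return 2 * g2

# Made-up counts: "strong tea" 30 times; "strong" 500, "tea" 200, n = 1e6.
print(round(simple_ll(30, 500, 200, 1_000_000), 1))
```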
Graph representation learning has become a ubiquitous component in many
scenarios, ranging from social network analysis to energy forecasting in smart
grids. In several applications, ensuring the fairness of the node (or graph)
representations with respect to some protected attributes is crucial for their
correct deployment. Yet, fairness in graph deep learning remains
under-explored, with few solutions available. In particular, the tendency of
similar nodes to cluster on several real-world graphs (i.e., homophily) can
dramatically worsen the fairness of these procedures. In this paper, we propose
a biased edge dropout algorithm (FairDrop) to counteract homophily and improve
fairness in graph representation learning. FairDrop can be plugged in easily on
many existing algorithms, is efficient, adaptable, and can be combined with
other fairness-inducing solutions. After describing the general algorithm, we
demonstrate its application on two benchmark tasks, specifically, as a random
walk model for producing node embeddings, and to a graph convolutional network
for link prediction. We prove that the proposed algorithm can successfully
improve the fairness of all models up to a small or negligible drop in
accuracy, and compares favourably with existing state-of-the-art solutions. In
an ablation study, we demonstrate that our algorithm can flexibly interpolate
between biasing towards fairness and an unbiased edge dropout. Furthermore, to
better evaluate the gains, we propose a new dyadic group definition to measure
the bias of a link prediction task when paired with group-based fairness
metrics. In particular, we extend the metric used to measure the bias in the
node embeddings to take into account the graph structure.
| 2104.14210 | 737,909 |
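A minimal sketch of the biased-dropout idea: drop homophilic edges (endpoints sharing the protected attribute) with a higher probability than heterophilic ones, with a parameter interpolating toward unbiased dropout. The exact probabilities and the interpolation rule are illustrative assumptions, not FairDrop's published schedule:

```python
import numpy as np

def biased_edge_dropout(edges, attr, p_base=0.5, delta=0.25, rng=None):
    """Keep each edge with a probability biased against homophily.

    edges: array of shape (E, 2) with node indices
    attr:  protected attribute per node
    delta: bias strength; delta = 0 recovers unbiased dropout at p_base
    """
    rng = rng or np.random.default_rng(0)
    same = attr[edges[:, 0]] == attr[edges[:, 1]]    # homophilic edges
    p_drop = np.where(same, p_base + delta, p_base - delta)
    keep = rng.random(len(edges)) >= p_drop
    return edges[keep]

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])
attr = np.array([0, 0, 1, 1])                        # protected attribute
print(biased_edge_dropout(edges, attr))
```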
We theoretically investigate the phase and voltage correlation dynamics under
a current noise including thermal and quantum fluctuations in a resistively and
capacitively shunted Josephson (RCSJ) junction. Within the linear regime, an
external current is found to shift and intensify the deterministic
contributions in phase and voltage. In addition to the deterministic
contribution, we observe the relaxation of autocorrelation functions of phase
and voltage to finite values due to the current noise. We also find an earlier
decay of coherence at a higher temperature in which thermal fluctuations
dominate over quantum ones.
| 2104.14211 | 737,909 |
We seek to find the precursors of the Herbig Ae/Be stars in the solar
vicinity within 500 pc from the Sun. We do this by creating an optically
selected sample of intermediate mass T-Tauri stars (IMTT stars) here defined as
stars of masses $1.5 M_{\odot}\leq M_* \leq 5 M_{\odot}$ and spectral type
between F and K3, from the literature. We use literature optical photometry
(0.4-1.25$\mu$m) and distances determined from \textit{Gaia} DR2 parallax
measurements together with Kurucz stellar model spectra to place the stars in a
HR-diagram. With Siess evolutionary tracks we identify intermediate mass
T-Tauri stars from the literature and derive masses and ages. We use Spitzer
spectra to classify the disks around the stars into Meeus Group I and Group II
disks based on their [F$_{30}$/F$_{13.5}$] spectral index. We also examine the
10$\mu$m silicate dust grain emission and identify emission from Polycyclic
Aromatic Hydrocarbons (PAH). From this we build a qualitative picture of the
disks around the intermediate mass T-Tauri stars and compare this with
available spatially resolved images at infrared and at sub-millimeter
wavelengths to confirm our classification. We find 49 intermediate mass T-Tauri
stars with infrared excess. The identified disks are similar to the older
Herbig Ae/Be stars in disk geometries and silicate dust grain population.
Spatially resolved images at infrared and sub-mm wavelengths suggest that gaps and
spirals are also present around the younger precursors to the Herbig Ae/Be
stars. Comparing the timescale of stellar evolution towards the main sequence
and current models of protoplanetary disk evolution the similarity between
Herbig Ae/Be stars and the intermediate mass T-Tauri stars points towards an
evolution of Group I and Group II disks that is disconnected, suggesting that
they represent two different evolutionary paths.
| 2104.14212 | 737,909 |
We introduce the tree distance, a new distance measure on graphs. The tree
distance can be computed in polynomial time with standard methods from convex
optimization. It is based on the notion of fractional isomorphism, a
characterization based on a natural system of linear equations whose integer
solutions correspond to graph isomorphism. By results of Tinhofer (1986, 1991)
and Dvo\v{r}\'ak (2010), two graphs G and H are fractionally isomorphic if and
only if, for every tree T, the number of homomorphisms from T to G equals the
corresponding number from T to H, which means that the tree distance of G and H
is zero. Our main result is that this correspondence between the equivalence
relations "fractional isomorphism" and "equal tree homomorphism densities" can
be extended to a correspondence between the associated distance measures. Our
result is inspired by a similar result due to Lov\'asz and Szegedy (2006) and
Borgs, Chayes, Lov\'asz, S\'os, and Vesztergombi (2008) that connects the cut
distance of graphs to their homomorphism densities (over all graphs), which is
a fundamental theorem in the theory of graph limits. We also introduce the path
distance of graphs and take the corresponding result of Dell, Grohe, and Rattan
(2018) for exact path homomorphism counts to an approximate level. Our results
answer an open question of Grohe (2020).
We establish our main results by generalizing our definitions to graphons as
this allows us to apply techniques from functional analysis. We prove the
fairly general statement that, for every "reasonably" defined graphon
pseudometric, an exact correspondence to homomorphism densities can be turned
into an approximate one. We also provide an example of a distance measure that
violates this reasonableness condition. This incidentally answers an open
question of Greb\'ik and Rocha (2021).
| 2104.14213 | 737,909 |
Quantitative trading is an integral part of financial markets with high
calculation speed requirements, yet no quantum algorithms have been introduced
into this field so far. In this work, we propose quantum algorithms for
high-frequency statistical arbitrage trading by utilizing variable-time
condition number estimation and quantum linear regression. The algorithm
complexity has been reduced from the classical benchmark $O(N^2 d)$ to
$O(\sqrt{d}\,\kappa^2 (\log(1/\epsilon))^2)$, showing quantum advantage, where
$N$ is the length of the trading data, $d$ is the number of stocks, $\kappa$ is
the condition number, and $\epsilon$ is the desired precision. Moreover, two
tool algorithms for condition number estimation and cointegration testing are
developed.
| 2104.14214 | 737,909 |
The stochastic network calculus (SNC) holds promise as a framework to
calculate probabilistic performance bounds in networks of queues. A great
challenge to accurate bounds and efficient calculations are stochastic
dependencies between flows due to resource sharing inside the network. However,
by carefully utilizing the basic SNC concepts in the network analysis the
necessity of taking these dependencies into account can be minimized. To that
end, we fully unleash the power of the pay multiplexing only once principle
(PMOO, known from the deterministic network calculus) in the SNC analysis. We
choose an analytic combinatorics presentation of the results in order to ease
complex calculations. In tree-reducible networks, a subclass of general
feedforward networks, we obtain a perfect analysis in terms of avoiding the
need to take internal flow dependencies into account. In a comprehensive
numerical evaluation, we demonstrate how this unleashed PMOO analysis can
reduce the known gap between simulations and SNC calculations significantly,
and how it favourably compares to state-of-the-art SNC calculations in terms of
accuracy and computational effort. Driven by these promising results, we also
consider general feedforward networks, when some flow dependencies have to be
taken into account. To that end, the unleashed PMOO analysis is extended to the
partially dependent case and a case study of a canonical example topology,
known as the diamond network, is provided, again displaying favourable results
over the state of the art.
| 2104.14215 | 737,909 |
(abridged) Context. The origin of hot exozodiacal dust and its connection
with outer dust reservoirs remains unclear. Aims. We aim to explore the
possible connection between hot exozodiacal dust and warm dust reservoirs (>
100 K) in asteroid belts. Methods. We use precision near-infrared
interferometry with VLTI/PIONIER to search for resolved emission at H band
around a selected sample of nearby stars. Results. Our observations reveal the
presence of resolved near-infrared emission around 17 out of 52 stars, four of
which are shown to be due to a previously unknown stellar companion. The 13
other H-band excesses are thought to originate from the thermal emission of hot
dust grains. Taking into account earlier PIONIER observations, and after
reevaluating the warm dust content of all our PIONIER targets through spectral
energy distribution modeling, we find a detection rate of
$17.1^{+8.1}_{-4.6}\%$ for H-band excess around main sequence stars hosting
warm dust belts, which is statistically compatible with the occurrence rate of
$14.6^{+4.3}_{-2.8}\%$ found
around stars showing no signs of warm dust. After correcting for the
sensitivity loss due to partly unresolved hot disks, under the assumption that
they are arranged in a thin ring around their sublimation radius, we however
find tentative evidence at the $3\sigma$ level that H-band excesses around
stars with outer dust reservoirs (warm or cold) could be statistically larger
than H-band excesses around stars with no detectable outer dust. Conclusions.
Our observations do not suggest a direct connection between warm and hot dust
populations, at the sensitivity level of the considered instruments, although
they bring to light a possible correlation between the level of H-band excesses
and the presence of outer dust reservoirs in general.
| 2104.14216 | 737,909 |
With the aim of better understanding the numerical properties of the lattice
Boltzmann method (LBM), a general methodology is proposed to derive its
hydrodynamic limits in the discrete setting. It relies on a Taylor expansion in
the limit of low Knudsen numbers. With a single asymptotic analysis, two kinds
of deviations from the Navier-Stokes (NS) equations are explicitly evidenced:
consistency errors, inherited from the kinetic description of the LBM, and
numerical errors attributed to its space and time discretization. The
methodology is applied to the Bhatnagar-Gross-Krook (BGK), the regularized and
the multiple relaxation time (MRT) collision models in the isothermal
framework. Deviation terms are systematically confronted with linear analyses
in order to validate their expressions, interpret them, and provide explanations
for their numerical properties. The low dissipation of the BGK model is then
related to a particular pattern of its error terms in the Taylor expansion.
Similarly, dissipation properties of the regularized and MRT models are
explained by a phenomenon referred to as hyperviscous degeneracy. The latter
consists in an unexpected resurgence of high-order Knudsen effects induced by a
large numerical pre-factor. It is at the origin of over-dissipation and severe
instabilities in the low-viscosity regime.
| 2104.14217 | 737,909 |
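For illustration, the low-Knudsen expansion referred to above typically takes the standard Chapman-Enskog-like form (a generic sketch; the paper's exact expansion may differ):

$$ f_i = f_i^{(0)} + \epsilon f_i^{(1)} + \epsilon^2 f_i^{(2)} + \mathcal{O}(\epsilon^3), \qquad \epsilon \sim \mathrm{Kn}, $$

where $f_i$ are the discrete distribution functions and the NS behaviour is recovered from the moments of the expansion up to second order; consistency and discretization errors then appear as extra terms in these moment equations.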
In this paper, an approximation of the image of the closed ball of the space
$L_p$ $(p>1)$ centered at the origin with radius $r$ under the Hilbert-Schmidt
integral operator $F(\cdot):L_p\rightarrow L_q$ $\displaystyle
\left(\frac{1}{p}+\frac{1}{q}=1\right)$ is presented. An error estimate for the
given approximation is obtained.
| 2104.14218 | 737,909 |
We investigate which plane curves admit rational families of quasi-toric
relations. This extends previous results of Takahashi and Tokunaga in the
positive case and of the author in the negative case.
| 2104.14219 | 737,909 |
X-ray flux from the inner hot region around the central compact object in a
binary system illuminates the upper surface of the accretion disc, which then
behaves like a corona. This region can be photoionised by the illuminating
radiation and can thus emit various emission lines. We study these line spectra
in black hole X-ray binaries for different accretion flow parameters, including
the flow geometry. The range of model parameters explored captures the maximum
possible observational features. We also shed light on the routinely observed
Fe line emission properties as functions of the model parameters, ionization
rate, and
Fe abundances. We find that the Fe line equivalent width $W_{\rm E}$ decreases
with increasing disc accretion rate and increases with the column density of
the illuminated gas. Our estimated line properties are in agreement with
observational signatures.
| 2104.14220 | 737,909 |
In this paper, we investigate the influence of the Earth's orbit on the shadow
of Sgr A*. Motivated by the fact that the Earth's orbit is inclined with
respect to the galactic plane, we consider the black hole shadow for arbitrary
inclinations and different velocities of observers. It is found that the
rotation axis of a black hole might not be extractable from its shadow, since
the way the shadow is distorted depends not only on the spin of the black hole
but also on the velocity of the observer. Namely, the appearance of the shadow
can be rotated by an angle in the celestial sphere of a moving observer. In
order to account for the Earth's orbit in the shadow of Sgr A*, we present a
formalism for calculating the shadow in terms of an expansion in the local
velocity. It shows that the influence of the orbital velocity of the Earth on
the shadow of Sgr A* is much larger than that of the displacement along the
Earth's orbit. The deviation of the size of the shadow is around $10^{-4}$, and
the deviation of the distortion parameter of the shadow is around $10^{-14}$.
| 2104.14221 | 737,909 |
Recently, there has been an increasing concern about the privacy issue raised
by using personally identifiable information in machine learning. However,
previous portrait matting methods were all based on identifiable portrait
images. To fill the gap, we present P3M-10k in this paper, which is the first
large-scale anonymized benchmark for Privacy-Preserving Portrait Matting.
P3M-10k consists of 10,000 high-resolution face-blurred portrait images along
with high-quality alpha mattes. We systematically evaluate both trimap-free and
trimap-based matting methods on P3M-10k and find that existing matting methods
show different generalization capabilities when following the
Privacy-Preserving Training (PPT) setting, i.e., "training on face-blurred
images and testing on arbitrary images". To devise a better trimap-free
portrait matting model, we propose P3M-Net, which leverages the power of a
unified framework for both semantic perception and detail matting, and
specifically emphasizes the interaction between them and the encoder to
facilitate the matting process. Extensive experiments on P3M-10k demonstrate
that P3M-Net outperforms the state-of-the-art methods in terms of both
objective metrics and subjective visual quality. Besides, it shows good
generalization capacity under the PPT setting, confirming the value of P3M-10k
for facilitating future research and enabling potential real-world
applications. The source code and dataset will be made publicly available.
| 2104.14222 | 737,909 |
Complicated assembly processes can be described as a sequence of two main
activities: grasping and insertion. While general grasping solutions are common
in industry, insertion is still only applicable to small subsets of problems,
mainly ones involving simple shapes in fixed locations and in which the
variations are not taken into consideration. Recently, RL approaches with prior
knowledge (e.g., LfD or residual policy) have been adopted. However, these
approaches might be problematic in contact-rich tasks since interaction might
endanger the robot and its equipment. In this paper, we tackle this challenge
by formulating insertion as a regression problem. By combining visual and
force inputs, we demonstrate that our method can scale to 16 different
insertion tasks in less than 10 minutes. The resulting policies are robust to
changes in the socket position, orientation or peg color, as well as to small
differences in peg shape. Finally, we demonstrate an end-to-end solution for 2
complex assembly tasks with multi-insertion objectives when the assembly board
is randomly placed on a table.
| 2104.14223 | 737,909 |
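As a rough illustration of the regression formulation, visual and force inputs can be fused into a single network that regresses a corrective motion. The sketch below is hypothetical (architecture, layer sizes, and the 3-DoF output are assumptions, not the authors' model):

```python
import torch
import torch.nn as nn

# Hypothetical visuo-force regressor: an image encoder and a wrench encoder
# are concatenated and mapped to a corrective end-effector displacement.
class InsertionRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.force = nn.Sequential(nn.Linear(6, 32), nn.ReLU())  # 6-axis wrench
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, img, wrench):
        return self.head(torch.cat([self.vision(img), self.force(wrench)], dim=-1))

policy = InsertionRegressor()
delta = policy(torch.randn(1, 3, 64, 64), torch.randn(1, 6))  # e.g. (dx, dy, dz)
```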
Owing to the noticeable structural similarity between group-IV and group-V
elemental monolayers and their adjacency in the periodic table, whether
combinations of group-IV and group-V elements can form stable nanosheet
structures with promising properties has attracted great research interest. In
this work, we performed first-principles simulations to investigate the
elastic, vibrational and electronic properties of the carbon nitride (CN)
nanosheet in the puckered honeycomb structure with covalent interlayer bonding.
It has been demonstrated that the structural stability of the CN nanosheet is
essentially maintained by the strong interlayer $\sigma$ bonding between
adjacent carbon atoms in the opposite atomic layers. A negative Poisson's ratio
in the out-of-plane direction under biaxial deformation, and the extreme
in-plane stiffness of the CN nanosheet, only slightly inferior to that of
monolayer graphene, are revealed. Moreover, the highly anisotropic mechanical
and electronic responses of the CN nanosheet to tensile strain have been
explored.
| 2104.14224 | 737,909 |
"Changing-look quasars" (CLQs) are active galactic nuclei (AGN) showing
extreme variability that results in a transition from Type 1 to Type 2. The
short timescales of these transitions present a challenge to the unified model
of AGN and the physical processes causing these transitions remain poorly
understood. CLQs also provide interesting samples for the study of AGN host
galaxies since the central emission disappears almost entirely. Previous
searches for CLQs have utilised photometric variability or SDSS classification
changes to systematically identify CLQs, this approach may miss lower
luminosity CLQs. In this paper, we aim to use spectroscopic data to asses if
analysis difference spectra can be used to detect further changing look quasars
missed by photometric searches. We search SDSS-II DR 7 repeat spectra for
sources that exhibit either a disappearance or appearance of both broad line
emission and accretion disk continuum emission by directly analysing the
difference spectrum between two epochs of observation. From a sample of 24,782
objects with difference spectra, our search yielded six CLQs within the
redshift range $0.1 \leq z \leq 0.3$, including four newly identified sources.
Spectral analysis indicates that changes in accretion rate can explain the
changing-look behaviour. While a change in dust extinction fits the changes in
spectral shape, the time-scales of the changes observed are too short for
obscuration from torus clouds. Using difference spectra was shown to be an
effective and sensitive way to detect CLQs. We recover CLQs an order of
magnitude lower in luminosity than those found by photometric searches and
achieve higher completeness than spectroscopic searches relying on pipeline
classification.
| 2104.14225 | 737,909 |
We investigate how different fairness assumptions affect results concerning
lock-freedom, a typical liveness property targeted by session type systems. We
fix a minimal session calculus and systematically take into account all known
fairness assumptions, thereby identifying precisely three interesting and
semantically distinct notions of lock-freedom, all of which have a sound
session type system. We then show that, by using a general merge operator in an
otherwise standard approach to global session types, we obtain a session type
system complete for the strongest amongst those notions of lock-freedom, which
assumes only justness of execution paths, a minimal fairness assumption for
concurrent systems.
| 2104.14226 | 737,909 |
Results. We illustrate our profile-fitting technique and present the K\,{\sc
i} velocity structure of the dense ISM along the paths to all targets. As a
validation test of the dust map, we show comparisons between distances to
several reconstructed clouds with recent distance assignments based on
different techniques. Target star extinctions estimated by integration in the
3D map are compared with their K\,{\sc i} 7699 A absorptions, and the degree of
correlation is found to be comparable to that between the same K\,{\sc i} line
and the total hydrogen column for stars distributed over the sky that are part
of a published high-resolution survey. We show images of the updated dust
distribution in a series of vertical planes in the Galactic longitude interval
150-182.5 deg and our estimated assignments of radial velocities to the opaque
regions. Most clearly defined K\,{\sc i} absorptions can be assigned to a dense
dust cloud between the Sun and the target star. It appeared relatively
straightforward to find a velocity pattern consistent with all absorptions and
ensuring coherence between adjacent lines of sight, with the exception of a few
weak lines. We compare our results with recent determinations of velocities of
several clouds and find good agreement. These results demonstrate that the
extinction-K\,{\sc i} relationship is tight enough to allow linking the radial
velocity of the K\,{\sc i} lines to the dust clouds seen in 3D, and that their
combination may be a valuable tool in building a 3D kinetic structure of the
dense ISM. We discuss limitations and perspectives for this technique.
| 2104.14227 | 737,909 |
Gamma-Ray Integrated Detectors (GRID) is a student project designed to use
multiple gamma-ray detectors carried by nanosatellites (CubeSat), forming a
full-time and all-sky gamma-ray detection network to monitor the transient
gamma-ray sky in the multi-messenger astronomy era. A compact CubeSat gamma-ray
detector has been designed and implemented for GRID, including its hardware and
firmware. The detector employs four Gd2Al2Ga3O12 : Ce (GAGG:Ce) scintillators
coupled with four silicon photomultiplier (SiPM) arrays to achieve a high
detection efficiency of gamma rays between 10 keV and 2 MeV with low power and
small dimensions. The first detector, designed by the undergraduate student
team and carried onboard a commercial CubeSat, was launched into a
Sun-synchronous orbit on 29 October 2018. The detector has been in a normal
observation state and has accumulated approximately one month of data after the
on-orbit functional and performance tests in 2019.
| 2104.14228 | 737,909 |
Due to the widespread use of tools and the development of text processing
techniques, the size and range of clinical data are not limited to structured
data. The rapid growth of recorded information has led to big data platforms in
healthcare that could be used to improve patients' primary care and serve
various secondary purposes. Patient similarity assessment is one of the
secondary tasks in identifying patients who are similar to a given patient, and
it helps derive insights from similar patients' records to provide better
treatment. This type of assessment is based on calculating the distance between
patients. Since representing and calculating the similarity of patients plays
an essential role in many secondary uses of electronic records, this article
examines a new data representation method for Electronic Medical Records (EMRs)
while taking into account the information in clinical narratives for similarity
computing. Some previous works are based on structured data types, while other
works only use unstructured data. However, a comprehensive representation of
the information contained in the EMR requires the effective aggregation of both
structured and unstructured data. To address the limitations of previous
methods, we propose a method that captures the co-occurrence of different
medical events, including signs, symptoms, and diseases extracted via
unstructured data and structured data. It integrates data as discriminative
features to construct a temporal tree, considering the difference between
events that have short-term and long-term impacts. Our results show that
considering signs, symptoms, and diseases in every time interval leads to lower
MSE and higher precision compared to baseline representations that do not
consider this information or that consider it separately from structured data.
| 2104.14229 | 737,909 |
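A toy sketch of the co-occurrence idea (an assumed representation, not the paper's temporal-tree construction): count pairs of medical events that fall in the same time interval, so that short-term co-occurrences become discriminative features.

```python
from collections import Counter
from itertools import combinations

# (day, event) pairs as extracted from structured data and clinical narratives
record = [(0, "fever"), (0, "cough"), (3, "pneumonia"), (40, "fever")]

def cooccurrence(record, window=7):
    buckets = {}
    for day, event in record:
        buckets.setdefault(day // window, set()).add(event)  # group by interval
    counts = Counter()
    for events in buckets.values():
        counts.update(combinations(sorted(events), 2))       # pairs per interval
    return counts

print(cooccurrence(record))
# Counter({('cough','fever'):1, ('cough','pneumonia'):1, ('fever','pneumonia'):1})
```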
Geometrical chirality is a universal phenomenon that is encountered on many
different length scales ranging from geometrical shapes of various living
organisms to protein and DNA molecules. Interaction of chiral matter with
chiral light - that is, electromagnetic field possessing a certain handedness -
underlies our ability to discriminate enantiomers of chiral molecules. In this
context, it is often desired to have an optical cavity that would efficiently
couple to only a specific (right or left) molecular enantiomer, and not couple
to the opposite one. Here, we demonstrate a single-handedness chiral optical
cavity supporting only an eigenmode of a given handedness without the presence
of modes of other helicity. Resonant excitation of the cavity with light of
appropriate handedness enables formation of a helical standing wave with a
uniform chirality density, while the opposite handedness does not cause any
resonant effects. Furthermore, only chiral emitters of the matching handedness
efficiently interact with such a chiral eigenmode, enabling a
handedness-selective light-matter coupling strength. The proposed system
expands the set of tools available for investigations of chiral matter and
opens the door to studies of the chiral electromagnetic vacuum.
| 2104.14230 | 737,909 |
Although the expansion of the Universe explicitly breaks the time-translation
symmetry, cosmological predictions for the stochastic gravitational wave
background (SGWB) are usually derived under the so-called stationary
hypothesis. By dropping this assumption and keeping track of the time
dependence of gravitational waves at all length scales, we derive the expected
unequal-time (and equal-time) waveform of the SGWB generated by scaling
sources, such as cosmic defects. For extinct and smooth enough sources, we show
that all observable quantities are uniquely and analytically determined by the
holomorphic Fourier transform of the anisotropic stress correlator. Both the
strain power spectrum and the energy density parameter are shown to have an
oscillatory fine structure; they differ significantly on large scales while
running in phase opposition at large wavenumbers $k$. We then discuss scaling
sources that are neither extinct nor smooth and which generate a singular
Fourier transform of the anisotropic stress correlator. For these, we find the
appearance of interference patterns on top of the above-mentioned fine
structure, as well as atypical behaviour at small scales. For instance, we
expect the
rescaled strain power spectrum $k^2 \mathcal{P}_h$ generated by long cosmic
strings in the matter era to oscillate around a scale invariant plateau. These
singular sources are also shown to produce orders of magnitude difference
between the rescaled strain spectra and the energy density parameter suggesting
that only the former should be used for making reliable observable predictions.
Finally, we discuss how measuring such a fine structure in the SGWB could
disambiguate the possible cosmological sources.
| 2104.14231 | 737,909 |
Reconfigurable optical systems are the object of continuing, intensive
research activities, as they hold great promise for realizing a new generation
of compact, miniaturized, and flexible optical devices. However, current
reconfigurable systems often tune only a single state variable triggered by an
external stimulus, thus, leaving out many potential applications. Here we
demonstrate a reconfigurable multistate optical system enabled by phase
transitions in vanadium dioxide (VO2). By controlling the phase-transition
characteristics of VO2 with simultaneous stimuli, the responses of the optical
system can be reconfigured among multiple states. In particular, we show a
quadruple-state dynamic plasmonic display that responds to both temperature
tuning and hydrogen-doping. Furthermore, we introduce an electron-doping scheme
to locally control the phase-transition behavior of VO2, enabling an optical
encryption device encoded by multiple keys. Our work points the way toward
advanced multistate reconfigurable optical systems, which substantially
outperform current optical devices in both breadth of capabilities and
functionalities.
| 2104.14232 | 737,909 |
Transit photometry is perhaps the most successful method for detecting
exoplanets to date. However, a substantial amount of signal processing is
needed since the dip in the signal detected, an indication that there is a
planet in transit, is minuscule compared to the overall background signal due
mainly to its host star. In this paper, we put forth a feasible and
straightforward method to enhance the signal and reduce noise. We discuss how
to achieve higher planetary signals by subtracting equal halves of the host
star - a folded detection. This results in a light curve with a doubled
peak-to-peak signal, $2R_p^2/R_s^2$, compared to the usual transit. We derive
an expression for the light curve and investigate the effect of two common
noise sources: white Gaussian background noise and the noise due to the
occurrence of sunspots. We show, both in simulation and analytically, that the
folded transit reduces the effective noise by a factor of $1/\sqrt{2}$. This
reduction and the doubling of the signal enable (1) fewer transit measurements
to obtain a definitive transiting-planet signal, and (2) detection of smaller
planetary radii than with the usual transit for the same number of transit
data. Furthermore, we show that in the presence of multiple sunspots, the
estimation of planetary parameters is more accurate. While our calculations may
be very simple, they cover the basic concept of planetary transits.
| 2104.14233 | 737,909 |
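A minimal numerical check of the folded-detection claim (a toy model with assumed depth and noise values; not the authors' pipeline): the difference of the two stellar halves swings by twice the transit depth, while its per-sample noise equals that of the ordinary summed light curve, giving the quoted $\sqrt{2}$ gain in signal-to-noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth, sigma = 50_000, 1e-3, 5e-4   # samples, Rp^2/Rs^2, per-half noise

def halves(blocked_left, blocked_right):
    left  = 0.5 - blocked_left  + sigma * rng.standard_normal(n)
    right = 0.5 - blocked_right + sigma * rng.standard_normal(n)
    return left, right

lL, rL = halves(depth, 0.0)   # planet in front of the left half
lR, rR = halves(0.0, depth)   # planet in front of the right half

usual_dip    = 1.0 - np.mean(np.concatenate([lL + rL, lR + rR]))  # ~ depth
folded_swing = np.mean(rL - lL) - np.mean(rR - lR)                # ~ 2*depth

print(f"usual dip       ~ {usual_dip:.1e}")        # ~ 1e-3
print(f"folded pk-to-pk ~ {folded_swing:.1e}")     # ~ 2e-3 (doubled signal)
print(f"noise (both)    ~ {np.std(lL + rL):.1e}")  # same per-sample noise
```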
Attracted by their scalability towards practical codeword lengths, we revisit
the idea of Turbo-autoencoders for end-to-end learning of PHY-Layer
communications. For this, we study the existing concepts of Turbo-autoencoders
from the literature and compare the concept with state-of-the-art classical
coding schemes. We propose a new component-wise training algorithm based on the
idea of Gaussian a priori distributions that reduces the overall training time
by almost an order of magnitude. Further, we propose a new serial architecture inspired
by classical serially concatenated Turbo code structures and show that a
carefully optimized interface between the two component autoencoders is
required. To the best of our knowledge, these serial Turbo autoencoder
structures are the best known neural network-based learned sequences that can
be trained from scratch without any required expert knowledge in the domain of
channel codes.
| 2104.14234 | 737,909 |
The use of deep neural networks (DNNs) in safety-critical applications like
mobile health and autonomous driving is challenging due to numerous
model-inherent shortcomings. These shortcomings are diverse and range from a
lack of generalization, through insufficient interpretability, to problems with
malicious inputs. Cyber-physical systems employing DNNs are therefore likely to
suffer from safety concerns. In recent years, a zoo of state-of-the-art
techniques aiming to address these safety concerns has emerged. This work
provides a structured and broad overview of them. We first identify categories
of insufficiencies to then describe research activities aiming at their
detection, quantification, or mitigation. Our paper addresses both machine
learning experts and safety engineers: the former might profit from the broad
range of machine learning topics covered and the discussions on limitations of
recent methods; the latter might gain insights into the specifics of modern ML
methods. We moreover hope that our contribution fuels discussions on
desiderata for ML systems and strategies on how to propel existing approaches
accordingly.
| 2104.14235 | 737,909 |
Learning to re-identify or retrieve a group of people across non-overlapped
camera systems has important applications in video surveillance. However, most
existing methods focus on (single) person re-identification (re-id), ignoring
the fact that people often walk in groups in real scenarios. In this work, we
take a step further and consider employing context information for identifying
groups of people, i.e., group re-id. We propose a novel unified framework based
on graph neural networks to simultaneously address the group-based re-id tasks,
i.e., group re-id and group-aware person re-id. Specifically, we construct a
context graph with group members as its nodes to exploit dependencies among
different people. A multi-level attention mechanism is developed to formulate
both intra-group and inter-group context, with an additional self-attention
module for robust graph-level representations by attentively aggregating
node-level features. The proposed model can be directly generalized to tackle
group-aware person re-id using node-level representations. Meanwhile, to
facilitate the deployment of deep learning models on these tasks, we build a
new group re-id dataset that contains more than 3.8K images with 1.5K annotated
groups, an order of magnitude larger than existing group re-id datasets.
Extensive experiments on the novel dataset as well as three existing datasets
clearly demonstrate the effectiveness of the proposed framework for both
group-based re-id tasks. The code is available at
https://github.com/daodaofr/group_reid.
| 2104.14236 | 737,909 |
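For intuition, node-level member features can be attentively pooled into a graph-level group representation; the following is a minimal hypothetical sketch of such a self-attention pooling step, not the paper's full multi-level architecture:

```python
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Score each group member with a linear layer, then take the
    attention-weighted sum as the graph-level group embedding."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, node_feats):                     # (num_members, dim)
        w = torch.softmax(self.score(node_feats), 0)   # attention over members
        return (w * node_feats).sum(0)                 # (dim,)

group_embedding = AttnPool(128)(torch.randn(5, 128))   # a group of 5 people
```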
Table Structure Recognition is an essential part of end-to-end tabular data
extraction in document images. The recent success of deep learning model
architectures in computer vision has yet to be reflected in table structure
recognition, largely because extensive datasets for this domain are
still unavailable while labeling new data is expensive and time-consuming.
Traditionally, in computer vision, these challenges are addressed by standard
augmentation techniques that are based on image transformations like color
jittering and random cropping. As demonstrated by our experiments, these
techniques are not effective for the task of table structure recognition. In
this paper, we propose TabAug, a re-imagined Data Augmentation technique that
produces structural changes in table images through replication and deletion of
rows and columns. It also consists of a data-driven probabilistic model that
allows control over the augmentation process. To demonstrate the efficacy of
our approach, we perform experimentation on ICDAR 2013 dataset where our
approach shows consistent improvements in all aspects of the evaluation
metrics, with cell-level correct detections improving from 92.16% to 96.11%
over the baseline.
| 2104.14237 | 737,909 |
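The structural operations can be illustrated on a plain cell grid (a toy sketch; the actual TabAug works on table images and uses a data-driven probabilistic model to control the operations):

```python
import random

def tab_aug(table, p=0.2, rng=random):
    """Randomly replicate/delete rows and columns of a table given as a
    list of rows of cell strings (toy version of the structural changes)."""
    rows = [r[:] for r in table]
    if len(rows) > 1 and rng.random() < p:             # row deletion
        rows.pop(rng.randrange(len(rows)))
    if rng.random() < p:                               # row replication
        i = rng.randrange(len(rows))
        rows.insert(i, rows[i][:])
    if len(rows[0]) > 1 and rng.random() < p:          # column deletion
        j = rng.randrange(len(rows[0]))
        rows = [r[:j] + r[j+1:] for r in rows]
    if rng.random() < p:                               # column replication
        j = rng.randrange(len(rows[0]))
        rows = [r[:j+1] + [r[j]] + r[j+1:] for r in rows]
    return rows

print(tab_aug([["h1", "h2"], ["a", "b"], ["c", "d"]]))
```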
We prove that the absolute extendability constant of a finite metric space
may be determined by computing relative projection constants of certain
Lipschitz-free spaces. As an application, we show that $\mbox{ae}(3)=4/3$ and
$\mbox{ae}(4)\geq (5+4\sqrt{2})/7$. Moreover, we discuss how to compute
relative projection constants by solving linear programming problems.
| 2104.14238 | 737,909 |
Dense high-energy monoenergetic proton beams are vital for a wide range of
applications; thus, modern laser-plasma-based ion acceleration methods aim to
obtain
high-energy proton beams with energy spread as low as possible. In this work,
we put forward a quantum radiative compression method to post-compress a highly
accelerated proton beam and convert it to a dense quasi-monoenergetic one. We
find that when the relativistic plasma produced by radiation pressure
acceleration collides head-on with an ultraintense laser beam, large-amplitude
plasma oscillations are excited due to quantum radiation-reaction and the
ponderomotive force, which induce compression of the phase space of protons
located in its acceleration phase with negative gradient. Our three-dimensional
spin-resolved QED particle-in-cell simulations show that hollow-structure
proton beams with a peak energy $\sim$ GeV, a relative energy spread of a few
percent, and a particle number $N_p\sim10^{10}$ (or $N_p\sim 10^9$ with a $1\%$
energy spread) can be produced in near-future laser facilities, which may
fulfill the requirements of important applications, such as radiography of
ultra-thick dense materials or serving as injectors for hadron colliders.
| 2104.14239 | 737,909 |
Although there is a clear indication that stages of residential decision
making are characterized by their own stakeholders, activities, and outcomes,
many studies on residential low-carbon technology adoption only implicitly
address stage-specific dynamics. This paper explores stakeholder influences on
residential photovoltaic adoption from a procedural perspective, so-called
stakeholder dynamics. The major objective is the understanding of underlying
mechanisms to better exploit the potential for residential photovoltaic uptake.
Four focus groups have been conducted in close collaboration with the
independent institute for social science research SINUS Markt- und
Sozialforschung in East Germany. By applying a qualitative content analysis,
major influence dynamics within three decision stages are synthesized with the
help of egocentric network maps from the perspective of residential
decision-makers. Results indicate that actors closest in terms of emotional and
spatial proximity such as members of the social network represent the major
influence on residential PV decision-making throughout the stages. Furthermore,
decision-makers with a higher level of knowledge are more likely to move on to
the subsequent stage. A shift from passive exposure to proactive search takes
place through the process, but this shift is less pronounced among risk-averse
decision-makers who continuously request proactive influences. The discussions
revealed largely unexploited potential regarding local utilities and local
governments, which are perceived as independent, trustworthy, and credible
stakeholders. Public stakeholders must fulfill their
responsibility in achieving climate goals by advising, assisting, and financing
services for low-carbon technology adoption at the local level. Supporting
community initiatives through political frameworks appears to be another
promising step.
| 2104.14240 | 737,909 |
This paper investigates the problem of straight-line path following for
magnetic helical microswimmers. The control objective is to make the helical
microswimmer converge to a straight line without violating the step-out
frequency constraint. The proposed feedback control solution is based on an
optimal decision strategy (ODS) that is cast as a trust-region subproblem
(TRS), i.e., a quadratic program over a sphere. The ODS-based control strategy
minimizes the difference between the microrobot velocity and an integral
line-of-sight (ILOS)-based reference vector field while respecting the magnetic
saturation constraints and ensuring the absolute continuity of the control
input. Due to the embedded integral action in the reference vector field, the
microswimmer will follow the desired straight line by compensating for the
drift effect of the environmental disturbances as well as the microswimmer
weight.
| 2104.14241 | 737,909 |
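For intuition only: in the special case of an isotropic quadratic objective whose only constraint is the saturation bound, the trust-region subproblem reduces to a radial projection of the ILOS reference onto the admissible ball. A toy sketch under these simplifying assumptions (not the paper's ODS implementation):

```python
import numpy as np

def ods_step(u_ref, u_max):
    """Closest admissible input to the reference field u_ref under the
    saturation bound ||u|| <= u_max (isotropic toy case only)."""
    norm = np.linalg.norm(u_ref)
    return u_ref if norm <= u_max else u_ref * (u_max / norm)

print(ods_step(np.array([3.0, 4.0]), u_max=2.5))  # -> [1.5 2. ]
```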
We report experimental and theoretical evidence of strong electron-plasmon
interaction in n-doped single-layer MoS2. Angle-resolved photoemission
spectroscopy (ARPES) measurements reveal the emergence of distinctive
signatures of polaronic coupling in the electron spectral function.
Calculations based on many-body perturbation theory illustrate that electronic
coupling to two-dimensional (2D) carrier plasmons provides an exhaustive
explanation of the experimental spectral features and their energies. These
results constitute compelling evidence of the formation of plasmon-induced
polaronic quasiparticles, suggesting that highly-doped transition-metal
dichalcogenides may provide a new platform to explore strong-coupling phenomena
between electrons and plasmons in 2D.
| 2104.14242 | 737,909 |
In this article, we analyze perinatal data with birth weight (BW) as
primarily interesting response variable. Gestational age (GA) is usually an
important covariate and is included in polynomial form. However, in contrast
to this univariate regression, bivariate modeling of BW and GA is recommended
to distinguish effects on each, on both, and between them. Rather than a
parametric bivariate distribution, we apply conditional copula regression,
where marginal distributions of BW and GA (not necessarily of the same form)
can be estimated independently, and where the dependence structure is modeled
conditional on the covariates separately from these marginals. In the resulting
distributional regression models, all parameters of the two marginals and the
copula parameter are observation-specific. Besides biometric and obstetric
information, data on drinking water contamination and maternal smoking are
included as environmental covariates. While the Gaussian distribution is
suitable for BW, the skewed GA data are better modeled by the three-parametric
Dagum distribution. The Clayton copula performs better than the Gumbel and the
symmetric Gaussian copula, indicating lower tail dependence (stronger
dependence when both variables are low), although this non-linear dependence
between BW and GA is surprisingly weak and only influenced by Cesarean section.
A non-linear trend of BW on GA is detected by a classical univariate model that
is polynomial with respect to the effect of GA. Linear effects on BW mean are
similar in both models, while our distributional copula regression also reveals
covariates' effects on all other parameters.
| 2104.14243 | 737,909 |
The goal of this work is to find the simplest UV completion of Accidental
Composite Dark Matter Models (ACDM) that can dynamically generate an asymmetry
for the DM candidate, the lightest \textit{dark baryon} (DCb), and
simultaneously annihilate the symmetric component. In this framework the DCb is
a bound state of a confining $\text{SU}(N)_{\text{DC}}$ gauge group, and can
interact weakly with the visible sector. The constituents of the DCb can
possess non-trivial charges under the Standard Model gauge group. The
generation of asymmetry for such candidate is a two-flavor variation of the
\emph{out-of-equilibrium} decay of a heavy scalar, with mass $M_\phi\gtrsim
10^{15}$ GeV. Below the scale of the scalars, the models recover accidental
stability, or long-livedness, of the DM candidate. The symmetric component is
annihilated by residual confined interactions provided that the mass of the DCb
$m_{\text{DCb}} \lesssim 75$ TeV. We implement the mechanism of asymmetry
generation, or a variation of it, in all the original ACDM models, managing to
generate the correct asymmetry for DCb of masses in this range. For some of the
models found, the stability of the DM candidate is not spoiled even considering
generic GUT completions or asymmetry generation mechanisms in the visible
sector.
| 2104.14244 | 737,909 |
Wasserstein distance induces a natural Riemannian structure for the
probabilities on the Euclidean space. This insight of classical transport
theory is fundamental to numerous applications in various fields of pure and
applied mathematics.
We believe that an appropriate probabilistic variant, the adapted Wasserstein
distance AW, can play a similar role for the class FP of filtered processes,
i.e. stochastic processes together with a filtration. In contrast to other
topologies for stochastic processes, probabilistic operations such as the
Doob-decomposition, optimal stopping and stochastic control are continuous
w.r.t. AW. We also show that (FP,AW) is a geodesic space, isometric to a
classical Wasserstein space, and that martingales form a closed geodesically
convex subspace.
| 2104.14245 | 737,909 |
Due to the increasing size of HPC machines, the presence of faults is becoming
an eventuality that applications must face. Natively, MPI provides no support
for execution past the detection of a fault, and this is becoming more and more
constraining. The introduction of ULFM (User Level Fault Mitigation library)
provided a possible way to overcome a fault during application execution, at
the cost of code modifications. ULFM is intrusive in the application and also
requires a deep understanding of its recovery procedures.
In this paper we propose Legio, a framework that lowers the complexity of
introducing resiliency in an embarrassingly parallel MPI application. By hiding
ULFM behind the MPI calls, the library can expose resiliency features to the
application in a transparent manner, thus removing any integration effort. Upon
a fault, the failed nodes are discarded and the execution continues only with
the non-failed ones. A hierarchical implementation of the solution has also
been proposed to reduce the overhead of the repair process when scaling towards
a large number of nodes.
We evaluated our solutions on the Marconi100 cluster at CINECA, showing that
the overhead introduced by the library is negligible and it does not limit the
scalability properties of MPI. Moreover, we also integrated the solution in
real-world applications to further prove its robustness by injecting faults.
| 2104.14246 | 737,909 |
In 2017 Skabelund constructed two new examples of maximal curves
$\tilde{\mathcal{S}}_q$ and $\tilde{\mathcal{R}}_q$ as covers of the Suzuki and
Ree curves, respectively. The resulting Skabelund curves are analogous to the
Giulietti-Korchm\'aros cover of the Hermitian curve. In this paper a complete
characterization of all Galois subcovers of the Skabelund curves
$\tilde{\mathcal{S}}_q$ and $\tilde{\mathcal{R}}_q$ is given. Calculating the
genera of the corresponding curves, we find new additions to the list of known
genera of maximal curves over finite fields.
| 2104.14247 | 737,909 |
These lectures present some basic ideas and techniques in the spectral
analysis of lattice Schrodinger operators with disordered potentials. In
contrast to the classical Anderson tight binding model, the randomness is also
allowed to possess only finitely many degrees of freedom. This refers to
dynamically defined potentials, i.e., those given by evaluating a function
along an orbit of some ergodic transformation (or of several commuting such
transformations on higher-dimensional lattices). Classical localization
theorems by Frohlich--Spencer for large disorder are presented, both for random
potentials in all dimensions and for quasi-periodic ones on the line. After
providing the needed background on subharmonic functions, we then
discuss the Bourgain-Goldstein theorem on localization for quasiperiodic
Schrodinger cocycles assuming positive Lyapunov exponents.
| 2104.14248 | 737,909 |
In this paper we study a family of limsup sets that are defined using
iterated function systems. Our main result is an analogue of Khintchine's
theorem for these sets. We then apply this result to the topic of intrinsic
Diophantine Approximation on self-similar sets. In particular, we define a new
height function for an element of $\mathbb{Q}^d$ contained in a self-similar
set in terms of its eventually periodic representations. For limsup sets
defined with respect to this height function, we obtain a detailed description
of their metric properties. The results of this paper hold in arbitrary
dimensions and without any separation conditions on the underlying iterated
function system.
| 2104.14249 | 737,909 |
Tractable safety-ensuring algorithms for cyber-physical systems are important
in critical applications. Approaches based on Control Barrier Functions assume
continuous enforcement, which is not possible in an online fashion. This paper
presents two tractable algorithms to ensure forward invariance of discrete-time
controlled cyber-physical systems. Both approaches are based on Control Barrier
Functions to provide strict mathematical safety guarantees. The first algorithm
exploits Lipschitz continuity and formulates the safety condition as a robust
program which is subsequently relaxed to a set of affine conditions. The second
algorithm is inspired by tube-NMPC and uses an affine Control Barrier Function
formulation in conjunction with an auxiliary controller to guarantee safety of
the system. We combine an approximate NMPC controller with the second algorithm
to guarantee strict safety despite approximated constraints and show its
effectiveness experimentally on a mini-Segway.
| 2104.14250 | 737,909 |
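For reference, one standard discrete-time CBF condition that guarantees forward invariance of the safe set $\{x : h(x)\geq 0\}$ (a textbook form; the paper's exact conditions may differ) is

$$ h(x_{k+1}) - h(x_k) \geq -\gamma\, h(x_k), \qquad \gamma \in (0,1], $$

since then $h(x_k) \geq (1-\gamma)^k h(x_0) \geq 0$ for all $k$ whenever $h(x_0) \geq 0$.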
In this letter, we propose a computationally efficient method for joint
selection of cancellation carriers (CCs) and calculation of their values
minimizing the out-of-band (OOB) power in non-contiguous (NC-) OFDM
transmission. The proposed CC selection method achieves higher OOB power
attenuation than algorithms known from the literature, as well as a noticeable
improvement in reception performance.
| 2104.14251 | 737,909 |
The role of gravity in human motor control is at the same time obvious and
difficult to isolate. It can be assessed by performing experiments in variable
gravity. We propose that adiabatic invariant theory may be used to reveal
nearly-conserved quantities in human voluntary rhythmic motion, an individual
being seen as a complex time-dependent dynamical system with bounded motion in
phase-space. We study an explicit realization of our proposal: An experiment in
which we asked participants to perform $\infty$-shaped motion of their right
arm during a parabolic flight, either at self-selected pace or at a metronome's
given pace. Gravity varied between $0$ and $1.8$ $g$ during a parabola. We
compute the adiabatic invariants in participant's frontal plane assuming a
separable dynamics. It appears that the adiabatic invariant in vertical
direction increases linearly with $g$, in agreement with our model. Differences
between the free and metronome-driven conditions show that participants'
adaptation to variable gravity is maximal without constraint. Furthermore,
motion in the participant's transverse plane induces trajectories that may be
linked to higher-derivative dynamics. Our results show that adiabatic
invariants are relevant quantities to show the changes in motor strategy in
time-dependent environments.
| 2104.14252 | 737,909 |
Thanks to Atkinson (1938), we know the first two terms of the asymptotic
formula for the square mean integral value of the Riemann zeta function $\zeta$
on the critical line. Following both his work and the approach of Titchmarsh
(1986), we present an explicit version of the Atkinson formula, improving on a
recent bound by Simoni\v{c} (2019). We use mostly classical tools, such as the
approximate functional equation and the explicit convexity bounds of the zeta
function given by Backlund (1918).
| 2104.14253 | 737,909 |
In the high energy limit of hadron collisions, the evolution of the gluon
density in the longitudinal momentum fraction can be deduced from the Balitsky
hierarchy of equations or, equivalently, from the nonlinear
Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner (JIMWLK) equation. The
solutions of the latter can be studied numerically by using its reformulation
in terms of a Langevin equation. In this paper, we present a comprehensive
study of systematic effects associated with the numerical framework, in
particular the ones related to the inclusion of the running coupling. We
consider three proposed ways in which the running of the coupling constant can
be included: "square root" and "noise" prescriptions and the recent proposal by
Hatta and Iancu. We implement them both in position and momentum spaces and we
investigate and quantify the differences in the resulting evolved gluon
distributions. We find that the systematic differences associated with the
implementation technicalities can be of a similar magnitude as differences in
running coupling prescriptions in some cases, or much smaller in other cases.
| 2104.14254 | 737,909 |
Low-rank tensors are an established framework for high-dimensional
least-squares problems. We propose to extend this framework by including the
concept of block-sparsity. In the context of polynomial regression each
sparsity pattern corresponds to some subspace of homogeneous multivariate
polynomials. This allows us to adapt the ansatz space to align better with
known sample complexity results. The resulting method is tested in numerical
experiments and demonstrates improved computational resource utilization and
sample efficiency.
| 2104.14255 | 737,909 |
We study how to design edge server placement and server scheduling policies
under workload uncertainty for 5G networks. We introduce a new metric called
resource pooling factor to handle unexpected workload bursts. Maximizing this
metric offers a strong enhancement on top of robust optimization against
workload uncertainty. Using both real traces and synthetic traces, we show that
the proposed server placement and server scheduling policies not only
demonstrate better robustness against workload uncertainty than existing
approaches, but also significantly reduce the cost of service providers.
Specifically, in order to achieve close-to-zero workload rejection rate, the
proposed server placement policy reduces the number of required edge servers by
about 25% compared with the state-of-the-art approach; the proposed server
scheduling policy reduces the energy consumption of edge servers by about 13%
without causing much impact on the service quality.
| 2104.14256 | 737,909 |
The quantum geometry of Bloch bands fundamentally affects a wide range of
physical phenomena. For example, the quantum Hall effect is governed by the
Chern number, and superconductivity by the distance between the Bloch states --
the quantum metric. Here, we show that key properties of a weakly interacting
Bose-Einstein condensate (BEC) depend on the underlying quantum geometry, and
in the flat band limit they radically depart from those of a dispersive system.
The speed of sound becomes proportional to the quantum metric of the condensed
state, and depends linearly on the interaction energy. The fraction of
particles depleted out of the condensate and the quantum fluctuations of the
density-density correlation obtain a finite value for infinitesimally small
interactions directly determined by the quantum distance, in striking contrast
to dispersive bands where they vanish with the interaction strength. Our
results reveal that non-trivial quantum geometry allows stability of a flat
band BEC and anomalously strong quantum correlation effects.
| 2104.14257 | 737,909 |
We construct the global phase portraits of inflationary dynamics in
teleparallel gravity models with a scalar field nonminimally coupled to torsion
scalar. The adopted set of variables can clearly distinguish between different
asymptotic states as fixed points, including the kinetic and inflationary
regimes. The key role in the description of inflation is played by the
heteroclinic orbits which run from the asymptotic saddle points to the late
time attractor point and are approximated by nonminimal slow roll conditions.
To seek the asymptotic fixed points we outline a heuristic method in terms of
the "effective potential" and "effective mass", which can be applied for any
nonminimally coupled theories. As particular examples we study positive
quadratic nonminimal couplings with quadratic and quartic potentials, and note
how the portraits differ qualitatively from the known scalar-curvature
counterparts. For quadratic models inflation can only occur at small nonminimal
coupling to torsion, as for larger coupling the asymptotic de Sitter saddle
point disappears from the physical phase space. Teleparallel models with
quartic potentials are not viable for inflation at all, since for small
nonminimal coupling the asymptotic saddle point exhibits weaker than
exponential expansion, and for larger coupling disappears too.
| 2104.14258 | 737,909 |
The Dual-Frequency synthetic aperture radar (DFSAR) system manifested on the
Chandrayaan-2 spacecraft represents a significant step forward in radar
exploration of solid solar system objects. It combines SAR at two wavelengths
(L- and S-bands) and multiple resolutions with several polarimetric modes in
one lightweight ($\sim$ 20 kg) package. The resulting data from DFSAR support
calculation of the 2$\times$2 complex scattering matrix for each resolution
cell, which enables lunar near-surface characterization in terms of radar
polarization properties at different wavelengths and incidence angles. In this
paper, we report on the calibration and preliminary performance
characterization of DFSAR data based on the analysis of a sample set of crater
regions on the Moon. Our calibration analysis provided a means to compare
on-orbit performance with pre-launch measurements and the results matched with
the pre-launch expected values. Our initial results show that craters in both
permanently shadowed regions (PSRs) and non-PSRs that are classified as
Circular Polarization Ratio (CPR)-anomalous in previous S-band radar analyses
appear anomalous at L-band also. We also observe that material evolution and
physical properties at their interior and proximal ejecta are decoupled. For
Byrgius C crater region, we compare our analysis of dual-frequency radar data
with the predicted behaviours of theoretical scattering models. If crater age
estimates are available, comparison of their radar polarization properties at
multiple wavelengths similar to that of the three unnamed south polar crater
regions shown in this study may provide new insights into how the rockiness of
craters evolves with time.
| 2104.14259 | 737,909 |
A formalisation of G\"odel's incompleteness theorems using the Isabelle proof
assistant is described. This is apparently the first mechanical verification of
the second incompleteness theorem. The work closely follows {\'S}wierczkowski
(2003), who gave a detailed proof using hereditarily finite set theory. The
adoption of this theory is generally beneficial, but it poses certain technical
issues that do not arise for Peano arithmetic. The formalisation itself should
be useful to logicians, particularly concerning the second incompleteness
theorem, where existing proofs are lacking in detail.
| 2104.14260 | 737,909 |
Practical quantum computing is rapidly becoming a reality. To harness quantum
computers' real potential in software applications, one needs an in-depth
understanding of the characteristics of quantum computing platforms (QCPs) that
are relevant from the Software Engineering (SE) perspective.
Restrictions on copying, deleting, and transmitting qubit states, and a hard
dependency on quantum algorithms, are a few of the many examples of QCP
characteristics that have significant implications for building quantum
software.
Thus, developing quantum software requires a paradigm shift in thinking by
software engineers. This paper presents the key findings from the SE
perspective, resulting from an in-depth examination of state-of-the-art QCPs
available today. The main contributions that we present include i) Proposing a
general architecture of the QCPs, ii) Proposing a programming model for
developing quantum software, iii) Determining architecturally significant
characteristics of QCPs, and iv) Determining the impact of these
characteristics on various Quality Attributes (QAs) and Software Development
Life Cycle (SDLC) activities.
We show that the nature of QCPs makes them useful mainly in specialized
application areas such as scientific computing. Except for performance and
scalability, most of the other QAs (e.g., maintainability, testability, and
reliability) are adversely affected by different characteristics of a QCP.
| 2104.14261 | 737,909 |
We have developed spin-resolved resonant electron energy-loss spectroscopy
(SR-rEELS) at primary energies of 0.3--1.5 keV, which correspond to the core
excitations of $2p\to3d$ absorption of transition metals and $3d\to4f$
absorption of rare earths. Element-specific carrier and valence plasmons can be
observed by using the resonance enhancement of core absorptions. Spin-resolved
plasmons were also observed using a spin-polarized electron source from a
GaAs/GaAsP strained superlattice photocathode. Furthermore, this primary energy
corresponds to an electron penetration depth of 1 to 10 nm and thus provides
bulk-sensitive EELS spectra. The methodology is expected to complement the
element-selective observation of elementary excitations by resonant inelastic
x-ray scattering and resonant photoelectron spectroscopy.
| 2104.14262 | 737,909 |
Ziegler introduced the idea of a good partition $\{X_{p}:p\in P\}$ of a
$T_{3}$-topological space, where $P$ is a finite partially ordered set,
satisfying $\overline{X_{p}}=\bigcup_{q\leqslant p}X_{q}$ for all $p\in P$.
Good partitions of Stone spaces arise naturally in the study of
$\omega$-categorical structures, and a key concept for studying them is that of
a $p$-trim open set which meets precisely those $X_{q}$ for which $q\geqslant
p$. This paper develops the theory of infinite partitions of Stone spaces
indexed by a poset where the trim sets form a neighbourhood base for the
topology. We study the interplay between order properties of the poset and
topological properties of the partition, examine extensions and completions of
such partitions, and derive necessary and sufficient conditions on the poset
for the existence of the various types of partition studied. We also identify
circumstances in which a second countable Stone space with a trim partition
indexed by a given poset is unique up to homeomorphism, subject to choices on
the isolated point structure and boundedness of the partition elements. One
corollary of our results is that there is a partition $\{X_{r}:r\in[0,1]\}$ of
the Cantor set such that $\overline{X_{r}}=\bigcup_{s\leqslant r}X_{s}\text{
for all }r\in[0,1]$.
| 2104.14263 | 737,909 |
Liquid State Machines are brain inspired spiking neural networks (SNNs) with
random reservoir connectivity and bio-mimetic neuronal and synaptic models.
Reservoir computing networks are proposed as an alternative to deep neural
networks to solve temporal classification problems. Previous studies suggest
2nd order (double exponential) synaptic waveform to be crucial for achieving
high accuracy for TI-46 spoken digits recognition. The proposal of long-time
range (ms) bio-mimetic synaptic waveforms is a challenge to compact and power
efficient neuromorphic hardware. In this work, we analyze the role of synaptic
orders, namely $\delta$ (high output for a single time step), 0th (rectangular
with a finite pulse width), 1st (exponential fall) and 2nd order (exponential
rise and fall) and synaptic timescales on the reservoir output response and on
the TI-46 spoken digits classification accuracy under a more comprehensive
parameter sweep. We find the optimal operating point to be correlated to an
optimal range of spiking activity in the reservoir. Further, the proposed 0th
order synapses perform at par with the biologically plausible 2nd order
synapses. This is a substantial relaxation for circuit designers, as synapses
are the most abundant components in an in-memory implementation of SNNs. The
circuit benefits of 0th order synapses for both analog and mixed-signal
realizations are highlighted, demonstrating 2-3 orders of magnitude savings in
area and power consumption by eliminating Op-Amps and Digital-to-Analog
Converter circuits.
This has major implications on a complete neural network implementation with
focus on peripheral limitations and algorithmic simplifications to overcome
them.
| 2104.14264 | 737,909 |
Code reviews are one of the effective methods to estimate defectiveness in
source code. However, the existing methods are either dependent on experts or
inefficient. In this paper, we improve the performance (in terms of speed and
memory usage) of our existing code review assisting tool--CRUSO. The central
idea of the approach is to estimate the defectiveness for an input source code
by using the defectiveness score of similar code fragments present in various
StackOverflow (SO) posts.
The significant contributions of our paper are i) SOpostsDB: a dataset
containing the PVA vectors and the SO posts information, ii) CRUSO-P: a code
review assisting system based on PVA models trained on \emph{SOpostsDB}. For a
given input source code, CRUSO-P labels it as one of: Likely to be defective,
Unlikely to be defective, or Unpredictable. To develop CRUSO-P, we processed >3
million SO posts and 188,200+ GitHub source files. CRUSO-P is designed to work
with source code written in the popular programming languages C, C#, Java,
JavaScript, and Python.
CRUSO-P outperforms CRUSO with an improvement of 97.82% in response time and
a storage reduction of 99.15%. CRUSO-P achieves the highest mean accuracy score
of 99.6% when tested with the C programming language, thus achieving an
improvement of 5.6% over the existing method.
| 2104.14265 | 737,909 |
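A toy sketch of the retrieval idea (hypothetical thresholds and stand-in data; not the actual CRUSO-P pipeline): label an input code vector by the mean defectiveness score of its nearest StackOverflow posts in the PVA embedding space.

```python
import numpy as np

def classify(vec, db_vecs, db_scores, k=5, lo=0.4, hi=0.6):
    sims = db_vecs @ vec / (np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(vec))
    score = db_scores[np.argsort(sims)[-k:]].mean()   # mean over k nearest posts
    if score >= hi:
        return "Likely to be defective"
    if score <= lo:
        return "Unlikely to be defective"
    return "Unpredictable"

rng = np.random.default_rng(0)
db, scores = rng.standard_normal((100, 32)), rng.random(100)  # stand-in PVA data
print(classify(rng.standard_normal(32), db, scores))
```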
Weighted monadic second-order logic is a weighted extension of monadic
second-order logic that captures exactly the behaviour of weighted automata.
Its semantics is parameterized with respect to a semiring on which the values
that weighted formulas output are evaluated. Gastin and Monmege (2018) gave
abstract semantics for a version of weighted monadic second-order logic to give
a more general and modular proof of the equivalence of the logic with weighted
automata. We focus on the abstract semantics of the logic and we give a
complete axiomatization both for the full logic and for a fragment without
general sum, thus giving a more fine-grained understanding of the logic. We
discuss how common decision problems for logical languages can be adapted to
the weighted setting, and show that many of these are decidable, though they
inherit bad complexity from the underlying first- and second-order logics.
However, we show that a weighted adaptation of satisfiability is undecidable
for the logic when one uses the abstract interpretation.
| 2104.14266 | 737,909 |
We present the design and experimental validation of source seeking control
algorithms for a unicycle mobile robot that is equipped with novel 3D-printed
flexible graphene-based piezoresistive airflow sensors. Based solely on a local
gradient measurement from the airflow sensors, we propose and analyze a
projected gradient ascent algorithm to solve the source seeking problem. In the
case of partial sensor failure, we propose a combination of Extremum-Seeking
Control with our projected gradient ascent algorithm. For both control laws, we
prove the asymptotic convergence of the robot to the source. Numerical
simulations were performed to validate the algorithms and experimental
validations are presented to demonstrate the efficacy of the proposed methods.
| 2104.14267 | 737,909 |
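A minimal sketch of the projected-gradient idea (toy field and parameters assumed; the paper's controller also handles unicycle kinematics and sensor failure):

```python
import numpy as np

SOURCE = np.array([2.0, 1.0])

def field(p):                       # hypothetical airflow intensity, peak at SOURCE
    return -np.sum((p - SOURCE) ** 2)

def grad(p, h=1e-4):                # stand-in for the sensors' local gradient
    e = np.eye(2)
    return np.array([(field(p + h * e[i]) - field(p - h * e[i])) / (2 * h)
                     for i in range(2)])

def project(v, vmax=0.5):           # projection onto the admissible velocity set
    n = np.linalg.norm(v)
    return v if n <= vmax else v * (vmax / n)

p = np.zeros(2)
for _ in range(200):                # discretized update, step size 0.1
    p = p + 0.1 * project(grad(p))

print(p)                            # ends up near the source at (2, 1)
```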