The optical matrix formalism is applied to find parameters such as the focal
distance, the back and front focal points, the principal planes, and the equation
relating object and image distances for a thick spherical lens immersed in air.
The formalism is then applied to systems composed of two, three, and N thick
lenses in cascade. It is found that a simple Gaussian equation suffices to
relate object and image distances regardless of the number of lenses.
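As a minimal illustration of the matrix formalism described above, the Python sketch below builds the system matrix of a single thick lens in air and reads off the effective focal length and the back/front focal distances from its elements; the radii, thickness, index, and lens spacing are hypothetical values chosen for demonstration, not taken from the paper.

```python
import numpy as np

def refraction(n1, n2, R):
    """Paraxial refraction at a spherical surface of radius R (from index n1 to n2)."""
    return np.array([[1.0, 0.0],
                     [(n1 - n2) / (n2 * R), n1 / n2]])

def translation(t):
    """Free propagation over a distance t; the ray vector is (height, angle)."""
    return np.array([[1.0, t],
                     [0.0, 1.0]])

# Hypothetical thick lens in air: R1 = +50 mm, R2 = -50 mm, thickness 10 mm, n = 1.5.
n, R1, R2, t = 1.5, 50.0, -50.0, 10.0
M = refraction(n, 1.0, R2) @ translation(t) @ refraction(1.0, n, R1)

A, B = M[0]
C, D = M[1]
f_eff = -1.0 / C      # effective focal length
bfd   = -A / C        # back focal distance, measured from the last surface
ffd   = -D / C        # front focal distance, measured from the first surface
print(f"EFL = {f_eff:.2f} mm, BFD = {bfd:.2f} mm, FFD = {ffd:.2f} mm")

# Cascading N identical lenses separated by an air gap d is just a matrix product:
d = 20.0
M_N = np.linalg.matrix_power(translation(d) @ M, 3) @ M   # four lenses, three gaps
```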
|
Recent works have shown that learned models can achieve significant
performance gains, especially in terms of perceptual quality measures, over
traditional methods. Hence, the state of the art in image restoration and
compression is being redefined. This special issue covers the state of the
art in learned image/video restoration and compression to promote further
progress in innovative architectures and training methods for effective and
efficient networks for image/video restoration and compression.
|
The Laser Interferometer Space Antenna, LISA, will detect gravitational wave
signals from Extreme Mass Ratio Inspirals, where a stellar mass compact object
orbits a supermassive black hole and eventually plunges into it. Here we report
on LISA's capability to detect whether the smaller compact object in an Extreme
Mass Ratio Inspiral is endowed with a scalar field, and to measure its scalar
charge -- a dimensionless quantity that acts as a measure of how much scalar
field the object carries. By direct comparison of signals, we show that LISA
will be able to detect and measure the scalar charge with an accuracy of the
order of a percent, which is an unprecedented level of precision. This result is
independent of the origin of the scalar field and of the structure and other
properties of the small compact object, so it can be seen as a generic
assessment of LISA's capabilities to detect new fundamental fields.
|
Communication between workers and the master node to collect local stochastic
gradients is a key bottleneck in a large-scale federated learning system.
Various recent works have proposed to compress the local stochastic gradients
to mitigate the communication overhead. However, robustness to malicious
attacks is rarely considered in such a setting. In this work, we investigate
the problem of Byzantine-robust federated learning with compression, where the
attacks from Byzantine workers can be arbitrarily malicious. We point out that
a vanilla combination of compressed stochastic gradient descent (SGD) and
geometric median-based robust aggregation suffers from both stochastic and
compression noise in the presence of Byzantine attacks. In light of this
observation, we propose to jointly reduce the stochastic and compression noise
so as to improve the Byzantine-robustness. For the stochastic noise, we adopt
the stochastic average gradient algorithm (SAGA) to gradually eliminate the
inner variations of regular workers. For the compression noise, we apply the
gradient difference compression and achieve compression for free. We
theoretically prove that the proposed algorithm reaches a neighborhood of the
optimal solution at a linear convergence rate, and the asymptotic learning
error is in the same order as that of the state-of-the-art uncompressed method.
Finally, numerical experiments demonstrate the effectiveness of the proposed
method.
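For intuition, the toy Python sketch below aggregates compressed worker gradients with a geometric median computed by the standard Weiszfeld iteration. It is only a schematic stand-in for the proposed algorithm: the top-k compressor is a placeholder for whichever compression operator is actually used, and the SAGA-based variance reduction is omitted.

```python
import numpy as np

def topk_compress(g, k):
    """Keep the k largest-magnitude entries of g and zero the rest (stand-in compressor)."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of the row vectors in `points`."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - z, axis=1)
        w = 1.0 / np.maximum(d, eps)
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

rng = np.random.default_rng(0)
dim, n_regular, n_byzantine = 50, 8, 2
regular = rng.normal(loc=1.0, scale=0.1, size=(n_regular, dim))        # honest gradients
byzantine = rng.normal(loc=-50.0, scale=5.0, size=(n_byzantine, dim))  # arbitrary attacks
grads = np.vstack([regular, byzantine])

compressed = np.array([topk_compress(g, k=10) for g in grads])
robust_update = geometric_median(compressed)   # largely insensitive to the malicious rows
```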
|
Many emerging cyber-physical systems, such as autonomous vehicles and robots,
rely heavily on artificial intelligence and machine learning algorithms to
perform important system operations. Since these highly parallel applications
are computationally intensive, they need to be accelerated by graphics
processing units (GPUs) to meet stringent timing constraints. However, despite
the wide adoption of GPUs, efficiently scheduling multiple GPU applications
while providing rigorous real-time guarantees remains a challenge. In this
paper, we propose RTGPU, which can schedule the execution of multiple GPU
applications in real time to meet hard deadlines. Each GPU application can have
multiple CPU execution and memory copy segments, as well as GPU kernels. We
start with a model to explicitly account for the CPU and memory copy segments
of these applications. We then consider the GPU architecture in the development
of a precise timing model for the GPU kernels and leverage a technique known as
persistent threads to implement fine-grained kernel scheduling with improved
performance through interleaved execution. Next, we propose a general method
for scheduling parallel GPU applications in real time. Finally, to schedule
multiple parallel GPU applications, we propose a practical real-time scheduling
algorithm based on federated scheduling and grid search (for GPU kernel
segments) with uniprocessor fixed priority scheduling (for multiple CPU and
memory copy segments). Our approach provides superior schedulability compared
with previous work, and gives real-time guarantees to meet hard deadlines for
multiple GPU applications according to comprehensive validation and evaluation
on a real NVIDIA GTX1080Ti GPU system.
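To make the CPU-side schedulability analysis concrete, here is a small sketch of the classical response-time iteration for uniprocessor fixed-priority preemptive scheduling of CPU and memory-copy segments. The task parameters are assumed illustrative values, and this is the textbook analysis rather than the paper's exact formulation.

```python
import math

def response_time(tasks):
    """Classical response-time analysis for fixed-priority preemptive uniprocessor
    scheduling. `tasks` is a list of (WCET, period, deadline) sorted by decreasing priority."""
    results = []
    for i, (C_i, T_i, D_i) in enumerate(tasks):
        R = C_i
        while True:
            # Interference from all higher-priority tasks released within window R.
            interference = sum(math.ceil(R / T_j) * C_j for (C_j, T_j, _) in tasks[:i])
            R_new = C_i + interference
            if R_new == R or R_new > D_i:
                R = R_new
                break
            R = R_new
        results.append((R, R <= D_i))
    return results

# Hypothetical CPU/memory-copy segments: (WCET, period, deadline) in milliseconds.
tasks = [(1.0, 5.0, 5.0), (2.0, 10.0, 10.0), (3.0, 20.0, 20.0)]
for (R, ok) in response_time(tasks):
    print(f"worst-case response time = {R:.1f} ms, schedulable = {ok}")
```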
|
Supervisory control and data acquisition (SCADA) systems have been
continuously leveraging the evolution of network architecture, communication
protocols, next-generation communication techniques (5G, 6G, Wi-Fi 6), and the
internet of things (IoT). However, SCADA systems have become the most profitable
and alluring targets for ransomware attackers. This paper proposes a novel deep
learning-based ransomware detection framework in the SCADA-controlled
electric vehicle charging station (EVCS) with a performance analysis of three
deep learning algorithms, namely a deep neural network (DNN), a 1D convolution
neural network (CNN), and a long short-term memory (LSTM) recurrent neural
network. All three deep learning-based simulated frameworks achieve around 97%
average accuracy (ACC), more than 98% average area under the curve (AUC), and a
comparably high average F1-score under 10-fold stratified cross-validation, with
an average false alarm rate (FAR) below 1.88%. The ransomware-driven
distributed denial of service (DDoS) attack tends to shift the SOC profile by
exceeding the SOC control thresholds, and its severity increases as the attack
progression and penetration increase. The ransomware-driven false data injection
(FDI) attack has the potential to damage the entire BES or physical system by
manipulating the SOC control thresholds. Which deep learning algorithm to deploy
is a design choice and an optimization issue based on the tradeoffs between the
performance metrics.
|
Spectral lines from formaldehyde (H2CO) molecules at cm wavelengths are
typically detected in absorption and trace a broad range of environments, from
diffuse gas to giant molecular clouds. In contrast, thermal emission of
formaldehyde lines at cm wavelengths is rare. In previous observations with the
100m Robert C. Byrd Green Bank Telescope (GBT), we detected 2 cm formaldehyde
emission toward NGC7538 IRS1 - a high-mass protostellar object in a prominent
star-forming region of our Galaxy. We present further GBT observations of the 2
cm and 1 cm H2CO lines to investigate the nature of the 2 cm H2CO emission. We
conducted observations to constrain the angular size of the 2 cm emission
region based on an East-West and North-South cross-scan map. Gaussian fits of
the spatial distribution in the East-West direction show a deconvolved size (at
half maximum) of the 2 cm emission of 50" +/- 8". The 1 cm H2CO observations
revealed emission superimposed on a weak absorption feature. A non-LTE
radiative transfer analysis shows that the H2CO emission is consistent with
quasi-thermal radiation from dense gas (~10^5 to 10^6 cm^-3). We also report
detection of 4 transitions of CH3OH (12.2, 26.8, 28.3, 28.9 GHz), the (8,8)
transition of NH3 (26.5 GHz), and a cross-scan map of the 13 GHz SO line that
shows extended emission (> 50").
|
Dynamically switchable half-/quarter-wave plates have recently been a focus
of research in the terahertz regime. Conventional design philosophy leads to multilayer
metamaterials or narrowband metasurfaces. Here we propose a novel design
philosophy and a VO2-metal hybrid metasurface for achieving a broadband,
dynamically switchable half-/quarter-wave plate (HWP/QWP) based on the
transition from the overdamped to the underdamped resonance. Results show that,
by varying the VO2 conductivity by three orders of magnitude, the proposed
metasurface's function can be switched between an HWP with polarization
conversion ratio larger than 96% and a QWP with ellipticity close to -1 over
the broad working band of 0.8-1.2 THz. We expect that the proposed design
philosophy will advance the engineering of metasurfaces for dynamically
switchable functionalities beyond the terahertz regime.
|
We develop a new type of orthogonal polynomial, the modified discrete
Laguerre (MDL) polynomials, designed to accelerate the computation of bosonic
Matsubara sums in statistical physics. The MDL polynomials lead to a rapidly
convergent Gaussian "quadrature" scheme for Matsubara sums, and more generally
for any sum $F(0)/2 + F(h) + F(2h) + \cdots$ of exponentially decaying summands
$F(nh) = f(nh)e^{-nhs}$ where $hs>0$. We demonstrate this technique for
computation of finite-temperature Casimir forces arising from quantum field
theory, where evaluation of the summand $F$ requires expensive electromagnetic
simulations. A key advantage of our scheme, compared to previous methods, is
that the convergence rate is nearly independent of the spacing $h$
(proportional to the thermodynamic temperature). We also prove convergence for
any polynomially decaying $F$.
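For reference, the target quantity is simply the half-weighted sum written above. The toy Python snippet below evaluates it by brute force for a made-up smooth $f$, which is exactly the expensive evaluation pattern the proposed quadrature is designed to shortcut; it does not implement the MDL scheme itself.

```python
import numpy as np

def brute_force_sum(f, h, s, n_terms=200_000):
    """Evaluate F(0)/2 + F(h) + F(2h) + ... with F(x) = f(x) * exp(-x * s)."""
    n = np.arange(n_terms)
    x = n * h
    F = f(x) * np.exp(-x * s)
    return 0.5 * F[0] + F[1:].sum()

# Made-up smooth integrand standing in for an expensive electromagnetic simulation.
f = lambda x: 1.0 / (1.0 + x**2)
for h in (0.01, 0.1, 1.0):     # brute-force cost grows as the spacing h shrinks
    print(h, brute_force_sum(f, h=h, s=1.0))
```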
|
The growth rate of the number of scientific publications is constantly
increasing, creating important challenges in the identification of valuable
research and, more generally, in various scholarly data management applications. In
this context, measures which can effectively quantify the scientific impact
could be invaluable. In this work, we present BIP! DB, an open dataset that
contains a variety of impact measures calculated for a large collection of more
than 100 million scientific publications from various disciplines.
|
Context: The high energy emission regions of rotation powered pulsars are
studied using folded light curves (FLCs) and phase resolved spectra (PRS).
Aims: This work uses the NICER observatory to obtain the highest resolution FLC
and PRS of the Crab pulsar at soft X-ray energies.
Methods: NICER has accumulated about 347 ksec of data on the Crab pulsar. The data are processed
using the standard analysis pipeline. Stringent filtering is done for spectral
analysis. The individual detectors are calibrated in terms of long time light
curve (LTLC), raw spectrum and deadtime. The arrival times of the photons are
referred to the solar system's barycenter and the rotation frequency $\nu$ and
its time derivative $\dot \nu$ are used to derive the rotation phase of each
photon.
Results: The LTLCs, raw spectra and deadtimes of the individual
detectors are statistically similar; the latter two show no evolution with
epoch; detector deadtime is independent of photon energy. The deadtime for the
Crab pulsar, taking into account the two types of deadtime, is only approx 7%
to 8% larger than that obtained using the cleaned events. Detector 00 behaves
slightly differently from the rest, but can be used for spectral work. The PRS
of the two peaks of the Crab pulsar are obtained at a resolution of better than
1/512 in rotation phase. The FLC very close to the first peak rises slowly and
falls faster. The spectral index of the PRS is almost constant very close to
the first peak.
Conclusions: The high resolution FLC and PRS of the peaks
of the Crab pulsar provide important constraints for the formation of caustics
in the emission zone.
|
The many-body-theory approach to positronium-atom interactions developed in
[Phys. Rev. Lett. \textbf{120}, 183402 (2018)] is applied to the sequence of
noble-gas atoms He-Xe. The Dyson equation is solved separately for an electron
and positron moving in the field of the atom, with the entire system enclosed
in a hard-wall spherical cavity. The two-particle Dyson equation is solved to
give the energies and wave functions of the Ps eigenstates in the cavity. From
these, we determine the scattering phase shifts and cross sections, and values
of the pickoff annihilation parameter $^1Z_\text{eff}$ including short-range
electron-positron correlations via vertex enhancement factors. Comparisons are
made with available experimental data for elastic and momentum-transfer cross
sections and $^1Z_\text{eff}$. Values of $^1Z_\text{eff}$ for He and Ne,
previously reported in [Phys. Rev. Lett. \textbf{120}, 183402 (2018)], are
found to be in near-perfect agreement with experiment, and for Ar, Kr, and Xe
within a factor of 1.2.
|
Search-based test generation is guided by feedback from one or more fitness
functions - scoring functions that judge solution optimality. Choosing
informative fitness functions is crucial to meeting the goals of a tester.
Unfortunately, many goals - such as forcing the class-under-test to throw
exceptions, increasing test suite diversity, and attaining Strong Mutation
Coverage - do not have effective fitness function formulations. We propose that
meeting such goals requires treating fitness function identification as a
secondary optimization step. An adaptive algorithm that can vary the selection
of fitness functions could adjust its selection throughout the generation
process to maximize goal attainment, based on the current population of test
suites. To test this hypothesis, we have implemented two reinforcement learning
algorithms in the EvoSuite unit test generation framework, and used these
algorithms to dynamically set the fitness functions used during generation for
the three goals identified above.
We have evaluated our framework, EvoSuiteFIT, on a set of Java case examples.
EvoSuiteFIT techniques attain significant improvements for two of the three
goals, and show limited improvements on the third when the number of
generations of evolution is fixed. Additionally, for two of the three goals,
EvoSuiteFIT detects faults missed by the other techniques. The ability to
adjust fitness functions allows strategic choices that efficiently produce more
effective test suites, and examining these choices offers insight into how to
attain our testing goals. We find that adaptive fitness function selection is a
powerful technique to apply when an effective fitness function does not already
exist for achieving a testing goal.
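The sketch below illustrates the general idea of treating fitness-function choice as a secondary optimization step, using a simple epsilon-greedy bandit. It is only a schematic stand-in, not the two reinforcement learning algorithms actually implemented in EvoSuiteFIT, and the fitness-function names and reward signal are placeholders.

```python
import random

class EpsilonGreedySelector:
    """Pick which fitness function to optimize next, based on observed goal attainment."""
    def __init__(self, fitness_functions, epsilon=0.1):
        self.arms = list(fitness_functions)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                       # explore
        return max(self.arms, key=lambda a: self.values[a])       # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n       # running mean

# Hypothetical usage inside a test-generation loop:
selector = EpsilonGreedySelector(["exception_count", "output_diversity", "strong_mutation"])
for generation in range(100):
    chosen = selector.select()
    # ... run one generation of search optimizing `chosen`, measure goal attainment ...
    reward = random.random()   # placeholder for the measured improvement
    selector.update(chosen, reward)
```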
|
While object semantic understanding is essential for most service robotic
tasks, 3D object classification is still an open problem. Learning from
artificial 3D models alleviates the cost of annotation necessary to approach
this problem, but most methods still struggle with the differences existing
between artificial and real 3D data. We conjecture that the cause of these
issues is that many methods learn directly from point coordinates
instead of from the shape, as the former are hard to center and to scale
reliably under variable occlusions. We introduce spherical kernel point convolutions
that directly exploit the object surface, represented as a graph, and a voting
scheme to limit the impact of poor segmentation on the classification results.
Our proposed approach improves upon state-of-the-art methods by up to 36% when
transferring from artificial objects to real objects.
|
After observing the Higgs boson by the ATLAS and CMS experiments at the LHC,
accurate measurements of its properties, which allow us to study the
electroweak symmetry breaking mechanism, become a high priority for particle
physics. The most promising way of extracting the Higgs self-coupling at hadron
colliders is to examine double Higgs production, especially in the $b
\bar{b} \gamma \gamma$ channel. In this work, we present a full loop
calculation of both SM and New Physics effects in Higgs pair production at
next-to-leading order (NLO), including the loop-induced processes $gg\to HH$,
$gg\to HHg$, and $qg \to qHH$. We also included the calculation of the
corrections from diagrams with only one QCD coupling in $qg \to qHH$, which was
neglected in the previous studies. With the latest observed limit on the HH
production cross-section, we studied the constraints on the effective Higgs
couplings for the LHC at center-of-mass energies of 14 TeV and a provisional
100 TeV proton collider within the Future-Circular-Collider (FCC) project. To
obtain results better than using total cross-section alone, we focused on the
$b \bar{b} \gamma \gamma$ channel and divided the differential cross-section
into low and high bins based on the total invariant mass and $p_{T}$ spectra.
The new physics effects are further constrained by including extra kinematic
information. However, some degeneracy persists, as shown in previous studies,
especially in determining the Higgs trilinear coupling. Our analysis shows that
the degeneracy is reduced by including the full NLO corrections.
|
In this paper, we propose an energy-efficient optimal altitude for an aerial
access point (AAP), which acts as a flying base station to serve a set of
ground user equipment (UE). Since the ratio of total energy consumed by the
aerial vehicle to the communication energy is very large, we include the aerial
vehicle's energy consumption in the problem formulation. After considering the
energy consumption model of the aerial vehicle, our objective is translated
into a non-convex optimization problem of maximizing the global energy
efficiency (GEE) of the aerial communication system, subject to altitude and
minimum individual data rate constraints. First, the non-convex fractional
objective function is solved by using a sequential convex programming (SCP)
optimization technique. To compare the result of SCP with the global optimum of
the problem, we reformulate the initial problem as a monotonic fractional
optimization problem (MFP) and solve it using the polyblock outer approximation
(PA) algorithm. Numerical results show that the candidate solution obtained
from SCP is the same as the global optimum found using the monotonic fractional
programming technique. Furthermore, the impact of the aerial vehicle's energy
consumption on the optimal altitude determination is also studied.
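For background on the fractional-programming flavor of this problem, here is a generic Dinkelbach-style iteration for ratio maximization over a single altitude variable. This is explicitly a different, simpler technique than the SCP and polyblock outer approximation methods used in the paper, and the rate and power models below are made-up placeholders.

```python
import numpy as np

# Placeholder models: sum rate R(h) and total consumed power P(h) versus altitude h (m).
def sum_rate(h):
    return 20.0 * np.log2(1.0 + 1e4 / (h**2 + 100.0))   # toy path-loss-driven rate

def total_power(h):
    return 50.0 + 0.02 * h + 2e-4 * h**2                 # toy propulsion + communication power

def dinkelbach_max_ratio(h_grid, tol=1e-6, max_iter=50):
    """Maximize R(h)/P(h) by repeatedly maximizing R(h) - lambda * P(h)."""
    lam = 0.0
    for _ in range(max_iter):
        vals = sum_rate(h_grid) - lam * total_power(h_grid)
        h_star = h_grid[np.argmax(vals)]
        if vals.max() < tol:
            break
        lam = sum_rate(h_star) / total_power(h_star)
    return h_star, lam   # lam is the achieved energy-efficiency ratio

h_grid = np.linspace(10.0, 500.0, 2000)
h_opt, gee = dinkelbach_max_ratio(h_grid)
print(f"optimal altitude ~ {h_opt:.1f} m, energy efficiency ~ {gee:.3f} (toy units)")
```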
|
Generative Adversarial Networks (GANs) have demonstrated unprecedented
success in various image generation tasks. The encouraging results, however,
come at the price of a cumbersome training process, during which the generator
and discriminator are alternately updated in two stages. In this paper, we
investigate a general training scheme that enables training GANs efficiently in
only one stage. Based on the adversarial losses of the generator and
discriminator, we categorize GANs into two classes, Symmetric GANs and
Asymmetric GANs, and introduce a novel gradient decomposition method to unify
the two, allowing us to train both classes in one stage and hence alleviate the
training effort. We also computationally analyze the efficiency of the proposed
method, and empirically demonstrate that the proposed method yields a solid
$1.5\times$ acceleration across various datasets and network architectures.
Furthermore, we show that the proposed method is readily applicable to other
adversarial-training scenarios, such as data-free knowledge distillation. The
code is available at https://github.com/zju-vipa/OSGAN.
|
In order to satisfy timing constraints, modern real-time applications require
massively parallel accelerators such as General Purpose Graphic Processing
Units (GPGPUs). Generation after generation, the number of computing clusters
made available in novel GPU architectures is steadily increasing, hence,
investigating suitable scheduling approaches is now mandatory. Such scheduling
approaches are related to mapping different and concurrent compute kernels
within the GPU computing clusters, hence grouping GPU computing clusters into
schedulable partitions. In this paper we propose novel techniques to define GPU
partitions; this allows us to define suitable task-to-partition allocation
mechanisms in which tasks are GPU compute kernels featuring different timing
requirements. Such mechanisms will take into account the interference that GPU
kernels experience when running in overlapping time windows. Hence, an
effective and simple way to quantify the magnitude of such interference is also
presented. We demonstrate the efficiency of the proposed approaches against the
classical techniques that considered the GPU as a single, non-partitionable
resource.
|
Electrons in low-temperature solids are governed by the non-relativistic
Schr\"{o}dinger equation, since the electron velocities are much slower
than the speed of light. Remarkably, the low-energy quasi-particles given by
electrons in various materials can behave as relativistic Dirac/Weyl fermions
that obey the relativistic Dirac/Weyl equation. We refer to these materials as
"Dirac/Weyl materials", which provide a tunable platform to test relativistic
quantum phenomena in table-top experiments. More interestingly, different types
of physical fields in these Weyl/Dirac materials, such as magnetic
fluctuations, lattice vibration, strain, and material inhomogeneity, can couple
to the "relativistic" quasi-particles in a similar way as the $U(1)$ gauge
coupling. As these fields do not have gauge-invariant dynamics in general, we
refer to them as "pseudo-gauge fields". In this chapter, we overview the
concept and physical consequences of pseudo-gauge fields in Weyl/Dirac
materials. In particular, we will demonstrate that pseudo-gauge fields can
provide a unified understanding of a variety of physical phenomena, including
chiral zero modes inside a magnetic vortex core of magnetic Weyl semimetals, a
giant current response at magnetic resonance in magnetic topological
insulators, and piezo-electromagnetic response in time-reversal invariant
systems. These phenomena are deeply related to various concepts in high-energy
physics, such as chiral anomaly and axion electrodynamics.
|
The bug triaging process, an essential process of assigning bug reports to
the most appropriate developers, is closely related to the quality and costs of
software development. As manual bug assignment is a labor-intensive task,
especially for large-scale software projects, many machine-learning-based
approaches have been proposed to automatically triage bug reports. Although
developer collaboration networks (DCNs) are dynamic and evolving in the
real-world, most automated bug triaging approaches focus on static tossing
graphs at a single time slice. Also, none of the previous studies consider
periodic interactions among developers. To address the problems mentioned
above, in this article, we propose a novel spatial-temporal dynamic graph
neural network (ST-DGNN) framework, including a joint random walk (JRWalk)
mechanism and a graph recurrent convolutional neural network (GRCNN) model. In
particular, JRWalk aims to sample local topological structures in a graph with
two sampling strategies by considering both node importance and edge
importance. GRCNN has three components with the same structure, i.e.,
hourly-periodic, daily-periodic, and weekly-periodic components, to learn the
spatial-temporal features of dynamic DCNs. We evaluated our approach's
effectiveness by comparing it with several state-of-the-art graph
representation learning methods in two domain-specific tasks that belong to
node classification. In the two tasks, experiments on two real-world,
large-scale developer collaboration networks collected from the Eclipse and
Mozilla projects indicate that the proposed approach outperforms all the
baseline methods.
|
We propose an algorithm that uses linear function approximation (LFA) for
stochastic shortest path (SSP). Under minimal assumptions, it obtains sublinear
regret, is computationally efficient, and uses stationary policies. To our
knowledge, this is the first such algorithm in the LFA literature (for SSP or
other formulations). Our algorithm is a special case of a more general one,
which achieves regret scaling as the square root of the number of episodes, given
access to a certain computation oracle.
|
This paper develops a new empirical Bayesian inference algorithm for solving
a linear inverse problem given multiple measurement vectors (MMV) of
under-sampled and noisy observable data. Specifically, by exploiting the joint
sparsity across the multiple measurements in the sparse domain of the
underlying signal or image, we construct a new support informed sparsity
promoting prior. Several applications can be modeled using this framework, and
as a prototypical example we consider reconstructing an image from synthetic
aperture radar (SAR) observations using nearby azimuth angles. Our numerical
experiments demonstrate that using this new prior not only improves accuracy of
the recovery, but also reduces the uncertainty in the posterior when compared
to standard sparsity promoting priors.
|
Current interest in the universe motivates us to go beyond Einstein's
General Theory of Relativity. One of the interesting proposals comes from a new
class of teleparallel gravity named symmetric teleparallel gravity, i.e.,
$f(Q)$ gravity, where the non-metricity term $Q$ is responsible for the fundamental
interaction. The vital role of these alternative modified theories of gravity is
to address these recent interests and to present a realistic cosmological model.
This manuscript's main objective is to study the traversable wormhole
geometries in $f(Q)$ gravity. We construct the wormhole geometries for three
cases: (i) by assuming a relation between the radial and lateral pressure, (ii)
considering phantom energy equation of state (EoS), and (iii) for a specific
shape function in the fundamental interaction of gravity (i.e. for linear form
of $f(Q)$). Besides, we discuss two wormhole geometries for a general case of
$f(Q)$ with two specific shape functions. Then, we discuss the viability of
shape functions and the stability analysis of the wormhole solutions for each
case. We find that the null energy condition (NEC) is violated for each wormhole
model, which leads us to conclude that our outcomes are realistic and stable. Finally, we
discuss the embedding diagrams and volume integral quantifier to have a
complete view of wormhole geometries.
|
We study temporally localized structures in doubly resonant degenerate
optical parametric oscillators in the absence of temporal walk-off. We focus on
states formed through the locking of domain walls between the zero and a
non-zero continuous wave solution. We show that these states undergo collapsed
snaking and we characterize their dynamics in the parameter space.
|
Experience replay enables off-policy reinforcement learning (RL) agents to
utilize past experiences to maximize the cumulative reward. Prioritized
experience replay that weighs experiences by the magnitude of their
temporal-difference error ($|\text{TD}|$) significantly improves the learning
efficiency. But how $|\text{TD}|$ is related to the importance of experience is
not well understood. We address this problem from an economic perspective, by
linking $|\text{TD}|$ to value of experience, which is defined as the value
added to the cumulative reward by accessing the experience. We theoretically
show the value metrics of experience are upper-bounded by $|\text{TD}|$ for
Q-learning. Furthermore, we successfully extend our theoretical framework to
maximum-entropy RL by deriving the lower and upper bounds of these value
metrics for soft Q-learning, which turn out to be the product of $|\text{TD}|$
and "on-policyness" of the experiences. Our framework links two important
quantities in RL: $|\text{TD}|$ and value of experience. We empirically show
that the bounds hold in practice, and experience replay using the upper bound
as priority improves maximum-entropy RL in Atari games.
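Concretely, the standard way $|\text{TD}|$ enters prioritized experience replay is as a sampling priority. The sketch below shows that baseline mechanism, proportional prioritization, which is the scheme the abstract refers to; it does not implement the paper's value-of-experience bounds themselves, and importance-sampling corrections are omitted for brevity.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay: P(i) is proportional to |TD_i|^alpha."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:          # drop the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(abs(td_error) + 1e-6)

    def sample(self, batch_size):
        p = np.array(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        for i, td in zip(idx, td_errors):
            self.priorities[i] = abs(td) + 1e-6

buf = PrioritizedReplayBuffer(capacity=10_000)
buf.add(("s", "a", 1.0, "s_next"), td_error=0.5)
idx, batch = buf.sample(batch_size=1)
```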
|
We tackle the problem of unsupervised synthetic-to-real domain adaptation for
single image depth estimation. An essential building block of single image
depth estimation is an encoder-decoder task network that takes RGB images as
input and produces depth maps as output. In this paper, we propose a novel
training strategy to force the task network to learn domain invariant
representations in a self-supervised manner. Specifically, we extend
self-supervised learning from traditional representation learning, which works
on images from a single domain, to domain invariant representation learning,
which works on images from two different domains by utilizing an image-to-image
translation network. Firstly, we use an image-to-image translation network to
transfer domain-specific styles between synthetic and real domains. This style
transfer operation allows us to obtain similar images from the different
domains. Secondly, we jointly train our task network and Siamese network with
the same images from the different domains to obtain domain invariance for the
task network. Finally, we fine-tune the task network using labeled synthetic
and unlabeled real-world data. Our training strategy yields improved
generalization capability in the real-world domain. We carry out an extensive
evaluation on two popular datasets for depth estimation, KITTI and Make3D. The
results demonstrate that our proposed method outperforms the state-of-the-art
on all metrics, e.g. by 14.7% on Sq Rel on KITTI. The source code and model
weights will be made available.
|
We propose an algorithm for automatic, targetless, extrinsic calibration of a
LiDAR and camera system using semantic information. We achieve this goal by
maximizing mutual information (MI) of semantic information between sensors,
leveraging a neural network to estimate semantic mutual information, and matrix
exponential for calibration computation. Using kernel-based sampling to sample
data from camera measurement based on LiDAR projected points, we formulate the
problem as a novel differentiable objective function which supports the use of
gradient-based optimization methods. We also introduce an initial calibration
method using 2D MI-based image registration. Finally, we demonstrate the
robustness of our method and quantitatively analyze the accuracy on a synthetic
dataset and also evaluate our algorithm qualitatively on KITTI360 and RELLIS-3D
benchmark datasets, showing improvement over recent comparable approaches.
|
Providing multi-connectivity services is an important goal for next
generation wireless networks, where multiple access networks are available and
need to be integrated into a coherent solution that efficiently supports both
reliable and non-reliable traffic. Based on virtual network interfaces and per-path
congestion-controlled tunnels, the MP-DCCP-based multi-access aggregation
framework is a novel solution that flexibly supports different path
schedulers and congestion control algorithms as well as reordering modules. The
framework has been implemented within the Linux kernel space and has been
tested over different prototypes. Experimental results have shown that the
overall performance strongly depends upon the congestion control algorithm used
on the individual DCCP tunnels, denoted as CCID. In this paper, we present an
implementation of the BBR (Bottleneck Bandwidth Round Trip propagation time)
congestion control algorithm for DCCP in the Linux kernel. We show how BBR is
integrated into the MP-DCCP multi-access framework and evaluate its performance
over both single and multi-path environments. Our evaluation results show that
BBR improves the performance compared to CCID2 for multi-path scenarios due to
the faster response to changes in the available bandwidth, which reduces
latency and increases performance, especially for unreliable traffic. The
MP-DCCP framework code, including the new CCID5, is available as open source.
|
Recent developments in representational learning for information retrieval
can be organized in a conceptual framework that establishes two pairs of
contrasts: sparse vs. dense representations and unsupervised vs. learned
representations. Sparse learned representations can further be decomposed into
expansion and term weighting components. This framework allows us to understand
the relationship between recently proposed techniques such as DPR, ANCE,
DeepCT, DeepImpact, and COIL, and furthermore, gaps revealed by our analysis
point to "low hanging fruit" in terms of techniques that have yet to be
explored. We present a novel technique dubbed "uniCOIL", a simple extension of
COIL that achieves, to our knowledge, the current state of the art in sparse
retrieval on the popular MS MARCO passage ranking dataset. Our implementation
using the Anserini IR toolkit is built on the Lucene search library and is thus
fully compatible with standard inverted indexes.
|
We prove a Gannon-Lee theorem for non-globally hyperbolic Lorentzian
metrics of regularity $C^1$, the most general regularity class currently
available in the context of the classical singularity theorems. Along the way
we also prove that any maximizing causal curve in a $C^1$-spacetime is a
geodesic and hence of $C^2$-regularity.
|
Alphas are stock prediction models capturing trading signals in a stock
market. A set of effective alphas can generate weakly correlated high returns
to diversify the risk. Existing alphas can be categorized into two classes:
Formulaic alphas are simple algebraic expressions of scalar features, and thus
can generalize well and be mined into a weakly correlated set. Machine learning
alphas are data-driven models over vector and matrix features. They are more
predictive than formulaic alphas, but are too complex to mine into a weakly
correlated set. In this paper, we introduce a new class of alphas to model
scalar, vector, and matrix features which possess the strengths of these two
existing classes. The new alphas predict returns with high accuracy and can be
mined into a weakly correlated set. In addition, we propose a novel alpha
mining framework based on AutoML, called AlphaEvolve, to generate the new
alphas. To this end, we first propose operators for generating the new alphas
and selectively injecting relational domain knowledge to model the relations
between stocks. We then accelerate the alpha mining by proposing a pruning
technique for redundant alphas. Experiments show that AlphaEvolve can evolve
initial alphas into the new alphas with high returns and weak correlations.
|
In this article we generalize the concepts that were used in the PhD thesis
of Drudge to classify Cameron-Liebler line classes in PG$(n,q), n\geq 3$, to
Cameron-Liebler sets of $k$-spaces in PG$(n,q)$ and AG$(n,q)$. In his PhD
thesis, Drudge proved that every Cameron-Liebler line class in PG$(n,q)$
intersects every $3$-dimensional subspace in a Cameron-Liebler line class in
that subspace. We are using the generalization of this result for sets of
$k$-spaces in PG$(n,q)$ and AG$(n,q)$. Together with a basic counting argument
this gives a very strong non-existence condition, $n\geq 3k+3$. This condition
can also be improved for $k$-sets in AG$(n,q)$, with $n\geq 2k+2$.
|
We consider the problem of communicating a general bivariate function of two
classical sources observed at the encoders of a classical-quantum multiple
access channel. Building on the techniques developed for the case of a
classical channel, we propose and analyze a coding scheme based on coset codes.
The proposed technique enables the decoder to recover the desired function without
recovering the sources themselves. We derive a new set of sufficient conditions
that are weaker than those currently known for the identified examples. This work is
based on a new ensemble of coset codes that are proven to achieve the capacity
of a classical-quantum point-to-point channel.
|
Motivated by recent observations of ergodicity breaking due to Hilbert space
fragmentation in 1D Fermi-Hubbard chains with a tilted potential [Scherg et
al., arXiv:2010.12965], we show that the same system also hosts quantum
many-body scars in a regime $U\approx \Delta \gg J$ at electronic filling
factor $\nu=1$. We numerically demonstrate that the scarring phenomenology in
this model is similar to other known realisations such as Rydberg atom chains,
including persistent dynamical revivals and ergodicity-breaking many-body
eigenstates. At the same time, we show that the mechanism of scarring in the
Fermi-Hubbard model is different from other examples in the literature: the
scars originate from a subgraph, representing a free spin-1 paramagnet, which
is weakly connected to the rest of the Hamiltonian's adjacency graph. Our work
demonstrates that correlated fermions in tilted optical lattices provide a
platform for understanding the interplay of many-body scarring and other forms
of ergodicity breaking, such as localisation and Hilbert space fragmentation.
|
We map the likelihood of GW190521, the heaviest detected binary black hole
(BBH) merger, by sampling under different mass and spin priors designed to be
uninformative. We find that a source-frame total mass of $\sim$$150 M_{\odot}$
is consistently supported, but posteriors in mass ratio and spin depend
critically on the choice of priors. We confirm that the likelihood has a
multi-modal structure with peaks in regions of mass ratio representing very
different astrophysical scenarios. The unequal-mass region ($m_2 / m_1 < 0.3$)
has an average likelihood $\sim$$e^6$ times larger than the equal-mass region
($m_2 / m_1 > 0.3$) and a maximum likelihood $\sim$$e^2$ larger. Using
ensembles of samples across priors, we examine the implications of
qualitatively different BBH sources that fit the data. We find that the
equal-mass solution has poorly constrained spins and at least one black hole
mass that is difficult to form via stellar collapse due to pair instability.
The unequal-mass solution can avoid this mass gap entirely but requires a
negative effective spin and a precessing primary. Either of these scenarios is
more easily produced by dynamical formation channels than field binary
co-evolution. The sensitive comoving volume-time of the mass gap solution is
$\mathcal{O}(10)$ times larger than the gap-avoiding solution. After accounting
for this distance effect, the likelihood still reverses the advantage to favor
the gap-avoiding scenario by a factor of $\mathcal{O}(100)$ before considering
mass and spin priors. Posteriors are easily driven away from this
high-likelihood region by common prior choices meant to be uninformative,
making GW190521 parameter inference sensitive to the assumed mass and spin
distributions of mergers in the source's astrophysical channel. This may be a
generic issue for similarly heavy events given current detector sensitivity and
waveform degeneracies.
|
The ESO workshop "Ground-based thermal infrared astronomy" was held on-line
October 12-16, 2020. Originally planned as a traditional in-person meeting at
ESO in Garching in April 2020, it was rescheduled and transformed into a fully
on-line event due to the COVID-19 pandemic. With 337 participants from 36
countries the workshop was a resounding success, demonstrating the wide
interest of the astronomical community in the science goals and the toolkit of
ground-based thermal infrared astronomy.
|
Federated Learning (FL) is an emerging decentralized learning framework
through which multiple clients can collaboratively train a learning model.
However, a major obstacle that impedes the wide deployment of FL lies in
massive communication traffic. To train high dimensional machine learning
models (such as CNN models), heavy communication traffic can be incurred by
exchanging model updates via the Internet between clients and the parameter
server (PS), implying that network resources can be easily exhausted.
Compressing model updates is an effective way to reduce the traffic amount.
However, a flexible unbiased compression algorithm applicable for both uplink
and downlink compression in FL is still absent from existing works. In this
work, we devise the Model Update Compression by Soft Clustering (MUCSC)
algorithm to compress model updates transmitted between clients and the PS. In
MUCSC, it is only necessary to transmit cluster centroids and the cluster ID of
each model update. Moreover, we prove that: 1) The compressed model updates are
unbiased estimation of their original values so that the convergence rate by
transmitting compressed model updates is unchanged; 2) MUCSC can guarantee that
the influence of the compression error on the model accuracy is minimized.
Then, we further propose the boosted MUCSC (B-MUCSC) algorithm, a biased
compression algorithm that can achieve an extremely high compression rate by
grouping insignificant model updates into a super cluster. B-MUCSC is suitable
for scenarios with very scarce network resources. Ultimately, we conduct
extensive experiments with the CIFAR-10 and FEMNIST datasets to demonstrate
that our algorithms can not only substantially reduce the volume of
communication traffic in FL, but also improve the training efficiency in
practical networks.
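To convey the centroid-plus-ID idea behind this kind of compression, the sketch below quantizes a model-update vector with plain k-means and transmits only centroids and cluster IDs. This is a simplified hard-clustering stand-in: the actual MUCSC uses soft clustering and unbiasedness guarantees that are not reproduced here, and the update size and cluster count are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_compress(update, n_clusters=16, seed=0):
    """Quantize a model-update vector to k centroids; send centroids + cluster IDs."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    ids = km.fit_predict(update.reshape(-1, 1))
    centroids = km.cluster_centers_.ravel()
    return centroids, ids.astype(np.uint8)          # what actually goes over the wire

def cluster_decompress(centroids, ids):
    return centroids[ids]

rng = np.random.default_rng(1)
update = rng.normal(size=20_000).astype(np.float32)  # e.g., a flattened CNN layer update
centroids, ids = cluster_compress(update)
restored = cluster_decompress(centroids, ids)

raw_bits = update.size * 32
sent_bits = centroids.size * 32 + ids.size * 8       # 8-bit IDs for 16 clusters
print(f"compression ratio ~ {raw_bits / sent_bits:.1f}x, "
      f"MSE = {np.mean((update - restored) ** 2):.4f}")
```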
|
We consider string theory vacua with tadpoles for dynamical fields and
uncover universal features of the resulting spacetime-dependent solutions. We
argue that the solutions can extend only a finite distance $\Delta$ away in the
spacetime dimensions over which the fields vary, scaling as $\Delta^n\sim {\cal
T}$ with the strength of the tadpole ${\cal T}$. We show that naive
singularities arising at this distance scale are physically replaced by ends of
spacetime, related to the cobordism defects of the swampland cobordism
conjecture and involving stringy ingredients like orientifold planes and
branes, or exotic variants thereof. We illustrate these phenomena in large
classes of examples, including AdS$_5\times T^{1,1}$ with 3-form fluxes, 10d
massive IIA, M-theory on K3, the 10d non-supersymmetric $USp(32)$ strings, and
type IIB compactifications with 3-form fluxes and/or magnetized D-branes. We
also describe a 6d string model whose tadpole triggers spontaneous
compactification to a semirealistic 3-family MSSM-like particle physics model.
|
For a bounded domain $D \subset \mathbb{C}^n$, let $K_D = K_D(z) > 0$ denote
the Bergman kernel on the diagonal and consider the reproducing kernel Hilbert
space of holomorphic functions on $D$ that are square integrable with respect
to the weight $K_D^{-d}$, where $d \geq 0$ is an integer. The corresponding
weighted kernel $K_{D, d}$ transforms appropriately under biholomorphisms and
hence produces an invariant K\"{a}hler metric on $D$. Thus, there is a
hierarchy of such metrics starting with the classical Bergman metric that
corresponds to the case $d=0$. This note is an attempt to study this class of
metrics in much the same way as the Bergman metric has been with a view towards
identifying properties that are common to this family. When $D$ is strongly
pseudoconvex, the scaling principle is used to obtain the boundary asymptotics
of these metrics and several invariants associated to them. It turns out that
all these metrics are complete on strongly pseudoconvex domains.
|
We present evolutionary models for solar-like stars with an improved
treatment of convection that results in a more accurate estimate of the radius
and effective temperature. This is achieved by improving the calibration of the
mixing-length parameter, which sets the length scale in the 1D convection model
implemented in the stellar evolution code. Our calibration relies on the
results of 2D and 3D radiation hydrodynamics simulations of convection to
specify the value of the adiabatic specific entropy at the bottom of the
convective envelope in stars as a function of their effective temperature,
surface gravity and metallicity. For the first time, this calibration is fully
integrated within the flow of a stellar evolution code, with the mixing-length
parameter being continuously updated at run-time. This approach replaces the
more common, but questionable, procedure of calibrating the length scale
parameter on the Sun, and then applying the solar-calibrated value in modeling
other stars, regardless of their mass, composition and evolutionary status. The
internal consistency of our current implementation makes it suitable for
application to evolved stars, in particular to red giants. We show that the
entropy calibrated models yield a revised position of the red giant branch that
is in better agreement with observational constraints than that of standard
models.
|
In this chapter we describe the history and evolution of the iCub humanoid
platform. We start by describing the first version as it was designed during
the RobotCub EU project and illustrate how it evolved to become the platform
that is adopted by more than 30 laboratories worldwide. We complete the
chapter by illustrating some of the research activities that are currently
carried out on the iCub robot, i.e. visual perception, event-driven sensing and
dynamic control. We conclude the chapter with a discussion of the lessons we
learned and a preview of the upcoming next release of the robot, iCub 3.0.
|
Unfitted finite element methods have emerged as a popular alternative to
classical finite element methods for the solution of partial differential
equations and allow modeling arbitrary geometries without the need for a
boundary-conforming mesh. On the other hand, the efficient solution of the
resultant system is a challenging task because of the numerical
ill-conditioning that typically results from the formulation of such methods.
We use an adaptive geometric multigrid solver for the solution of the mixed
finite cell formulation of saddle-point problems and investigate its
convergence in the context of the Stokes and Navier-Stokes equations. We
present two smoothers for the treatment of cut cells in the finite cell method
and analyze their effectiveness for the model problems using a numerical
benchmark. Results indicate that the presented multigrid method is capable of
solving the model problems independently of the problem size and is robust with
respect to the depth of the grid hierarchy.
|
For finite samples with binary outcomes, penalized logistic regression such as
ridge logistic regression (RR) has the potential of achieving smaller mean
squared errors (MSE) of coefficients and predictions than maximum likelihood
estimation. There is evidence, however, that RR is sensitive to small or sparse
data situations, yielding poor performance in individual datasets. In this
paper, we elaborate this issue further by performing a comprehensive simulation
study, investigating the performance of RR in comparison to Firth's correction
that has been shown to perform well in low-dimensional settings. Performance of
RR strongly depends on the choice of complexity parameter that is usually tuned
by minimizing some measure of the out-of-sample prediction error or information
criterion. Alternatively, it may be determined according to prior assumptions
about true effects. As shown in our simulation and illustrated by a data
example, values optimized in small or sparse datasets are negatively correlated
with optimal values and suffer from substantial variability which translates
into large MSE of coefficients and large variability of calibration slopes. In
contrast, if the degree of shrinkage is pre-specified, accurate coefficients
and predictions can be obtained even in non-ideal settings such as encountered
in the context of rare outcomes or sparse predictors.
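To make the two tuning philosophies concrete, a minimal scikit-learn sketch (synthetic data and assumed settings, not the simulation design of the paper) contrasts cross-validated selection of the ridge penalty with a pre-specified degree of shrinkage:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

# Small synthetic dataset with a rare outcome (illustrative assumption only).
X, y = make_classification(n_samples=120, n_features=10, n_informative=4,
                           weights=[0.9, 0.1], random_state=0)

# (a) Complexity parameter tuned by cross-validation on the small dataset.
cv_model = LogisticRegressionCV(Cs=20, cv=5, penalty="l2",
                                scoring="neg_log_loss", max_iter=5000).fit(X, y)

# (b) Degree of shrinkage pre-specified from prior assumptions about effect sizes.
fixed_model = LogisticRegression(penalty="l2", C=0.5, max_iter=5000).fit(X, y)

print("CV-selected C:", cv_model.C_[0])
print("coefficients (CV-tuned):     ", np.round(cv_model.coef_.ravel(), 2))
print("coefficients (pre-specified):", np.round(fixed_model.coef_.ravel(), 2))
```

Repeating such a comparison over many simulated small datasets is what reveals the variability of the tuned penalty that the abstract describes.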
|
In this paper, for a given compact 3-manifold with an initial Riemannian
metric and a symmetric tensor, we establish the short-time existence and
uniqueness theorem for an extension of the cross curvature flow. We give an example of
this flow on manifolds.
|
We describe Substitutional Neural Image Compression (SNIC), a general
approach for enhancing any neural image compression model, that requires no
data or additional tuning of the trained model. It boosts compression
performance toward a flexible distortion metric and enables bit-rate control
using a single model instance. The key idea is to replace the image to be
compressed with a substitutional one that outperforms the original one in a
desired way. Finding such a substitute is inherently difficult for conventional
codecs, yet surprisingly favorable for neural compression models thanks to
their fully differentiable structures. With gradients of a particular loss
backpropagated to the input, a desired substitute can be efficiently crafted
iteratively. We demonstrate the effectiveness of SNIC, when combined with
various neural compression models and target metrics, in improving compression
quality and performing bit-rate control measured by rate-distortion curves.
Empirical results of control precision and generation speed are also discussed.
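The core idea, iteratively crafting a substitute input by backpropagating a rate-distortion loss through a frozen codec, can be sketched in PyTorch as below. The `ToyCodec`, its rate proxy, the loss weight, and the optimization settings are made-up stand-ins, not a real neural compression model or the paper's exact procedure.

```python
import torch
import torch.nn as nn

class ToyCodec(nn.Module):
    """Made-up differentiable 'codec': a tiny autoencoder standing in for a trained model."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1))

    def forward(self, x):
        latent = self.enc(x)
        recon = self.dec(latent)
        rate_proxy = latent.abs().mean()   # crude stand-in for an entropy/bit-rate term
        return recon, rate_proxy

codec = ToyCodec().eval()
for p in codec.parameters():               # the trained model stays frozen
    p.requires_grad_(False)

x = torch.rand(1, 3, 64, 64)               # image to be compressed
substitute = x.clone().requires_grad_(True)  # the substitutional input we optimize
opt = torch.optim.Adam([substitute], lr=1e-2)
lam = 0.1                                  # trade-off steering the target bit rate

for step in range(100):
    recon, rate = codec(substitute)
    # Distortion is measured against the ORIGINAL image, rate against the substitute.
    loss = torch.mean((recon - x) ** 2) + lam * rate
    opt.zero_grad()
    loss.backward()
    opt.step()

# `substitute` is then fed to the codec in place of x at encoding time.
```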
|
When two solids at different temperatures are separated by a vacuum gap they
relax toward their equilibrium state by exchanging heat either by radiation,
phonon or electron tunneling, depending on their separation distance and on the
nature of materials. The interplay between this exchange of energy and its
spreading through each solid entirely drives the relaxation dynamics. Here we
highlight a significant slowing down of this process in the extreme near-field
regime at distances where the heat flux exchanged between the two solids is
comparable to or even dominates the flux carried by conduction inside each
solid. This mechanism, leading to a strong effective increase of the system
thermal inertia, should play an important role in the temporal evolution of
thermal state of interacting solids systems at nanometric and subnanometric
scales.
|
We investigate the modification of gravitational fields generated by
topological defects on a generalized Duffin-Kemmer-Petiau (DKP) oscillator for
a spin-0 particle in a spinning cosmic string background. The generalized DKP
oscillator equation in the spinning cosmic string background is established, and
the impact of the Cornell potential on the generalized DKP oscillator is
presented. We present the influence of the space-time and potential parameters on the
energy levels.
|
Two non-intrusive uncertainty propagation approaches are proposed for the
performance analysis of engineering systems described by expensive-to-evaluate
deterministic computer models with parameters defined as interval variables.
These approaches employ a machine learning based optimization strategy, the
so-called Bayesian optimization, for evaluating the upper and lower bounds of a
generic response variable over the set of possible responses obtained when each
interval variable varies independently over its range. The lack of knowledge
caused by not evaluating the response function for all the possible
combinations of the interval variables is accounted for by developing a
probabilistic description of the response variable itself by using a Gaussian
Process regression model. An iterative procedure is developed for selecting a
small number of simulations to be evaluated for updating this statistical model
by using well-established acquisition functions and to assess the response
bounds. In both approaches, an initial training dataset is defined. While one
approach builds iteratively two distinct training datasets for evaluating
separately the upper and lower bounds of the response variable, the other
builds iteratively a single training dataset. Consequently, the two approaches
will produce different bound estimates at each iteration. The upper and lower
bound responses are expressed as point estimates obtained from the mean
function of the posterior distribution. Moreover, a confidence interval on each
estimate is provided for effectively communicating to engineers when these
estimates are obtained for a combination of the interval variables for which no
deterministic simulation has been run. Finally, two metrics are proposed to
define conditions for assessing if the predicted bound estimates can be
considered satisfactory.
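A stripped-down version of the idea, fitting a Gaussian Process surrogate to a few expensive evaluations and reading off a point estimate plus a confidence interval for the response maximum over the interval variables, might look like this. The test function, bounds, and simple UCB acquisition rule are assumptions for illustration, not the acquisition functions or the two training-set strategies used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_model(x):                  # stand-in for the deterministic computer model
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(2 * x[:, 1])

lower, upper = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # interval variables' ranges
rng = np.random.default_rng(0)
X = lower + (upper - lower) * rng.random((8, 2))            # small initial training set
y = expensive_model(X)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                                         # iterative refinement
    gp.fit(X, y)
    cand = lower + (upper - lower) * rng.random((2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sd)]                 # simple UCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_model(x_next[None, :]))

mu, sd = gp.predict(cand, return_std=True)
i = np.argmax(mu)
print(f"estimated upper bound ~ {mu[i]:.3f}  (roughly +/- {1.96 * sd[i]:.3f} at 95%)")
```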
|
We report constraints on light dark matter through its interactions with
shell electrons in the PandaX-II liquid xenon detector with a total 46.9
tonne$\cdot$day exposure. To effectively search for these very low energy
electron recoils, ionization-only signals are selected from the data. 1821
candidates are identified within an ionization signal range of 50 to 75
photoelectrons, corresponding to a mean electronic recoil energy from 0.08 to
0.15 keV. The 90% C.L. exclusion limit on the scattering cross section between
the dark matter and electron is calculated based on Poisson statistics. Under
the assumption of point interaction, we provide the world's most stringent
limit within the dark matter mass range from 15 to 30 $\rm MeV/c^2$, with the
corresponding cross section from $2.5\times10^{-37}$ to $3.1\times10^{-38}$
cm$^2$.
|
As of 2020, the international workshop on Procedural Content Generation
enters its second decade. The annual workshop, hosted by the international
conference on the Foundations of Digital Games, has collected a corpus of 95
papers published in its first 10 years. This paper provides an overview of the
workshop's activities and surveys the prevalent research topics emerging over
the years.
|
Ma-Ma-Yeh made a beautiful observation that a transformation of the grammar
of Dumont instantly leads to the $\gamma$-positivity of the Eulerian
polynomials. We notice that the transformed grammar bears a striking
resemblance to the grammar for 0-1-2 increasing trees also due to Dumont. The
appearance of the factor of two fits perfectly in a grammatical labeling of
0-1-2 increasing plane trees. Furthermore, the grammatical calculus is
instrumental to the computation of the generating functions. This approach can
be adapted to study the $e$-positivity of the trivariate second-order Eulerian
polynomials first introduced by Dumont in the contexts of ternary trees and
Stirling permutations, and independently defined by Janson, in connection with
the joint distribution of the numbers of ascents, descents and plateaux over
Stirling permutations.
|
High-Performance Big Data Analytics (HPDA) applications are characterized by
huge volumes of distributed and heterogeneous data that require efficient
computation for knowledge extraction and decision making. Designers are moving
towards a tight integration of computing systems combining HPC, Cloud, and IoT
solutions with artificial intelligence (AI). Matching the application and data
requirements with the characteristics of the underlying hardware is a key
element to improve the predictions thanks to high performance and better use of
resources.
We present EVEREST, a novel H2020 project started on October 1st, 2020 that
aims at developing a holistic environment for the co-design of HPDA
applications on heterogeneous, distributed, and secure platforms. EVEREST
focuses on programmability issues through a data-driven design approach, the
use of hardware-accelerated AI, and an efficient runtime monitoring with
virtualization support. In the different stages, EVEREST combines
state-of-the-art programming models, emerging communication standards, and
novel domain-specific extensions. We describe the EVEREST approach and the use
cases that drive our research.
|
This article summarises the current status of classical communication
networks and identifies some critical open research challenges that can only be
solved by leveraging quantum technologies. So far, the main goal of quantum
communication networks has been security. However, quantum networks can do more
than just exchange secure keys or serve the needs of quantum computers. In
fact, the scientific community is still investigating the possible use
cases and benefits that quantum communication networks can bring. Thus, this
article aims at pointing out and clearly describing how quantum communication
networks can enhance in-network distributed computing and reduce the overall
end-to-end latency, beyond the intrinsic limits of classical technologies.
Furthermore, we also explain how entanglement can reduce the communication
complexity (overhead) that future classical virtualised networks will
experience.
|
We present a new multiscale method to study the N-Methyl-D-Aspartate (NMDA)
neuroreceptor starting from the reconstruction of its crystallographic
structure. Thanks to the combination of homology modelling, Molecular Dynamics
and Lattice Boltzmann simulations, we analyse the allosteric transition of NMDA
upon ligand binding and compute the receptor response to ionic passage across
the membrane.
|
This paper presents an analytical model to quantify noise in a bolometer
readout circuit. A frequency-domain analysis of the noise model is presented,
which includes the effects of noise from the bias resistor and the sensor
resistor, the voltage and current noise of the amplifier, and the cable
capacitance. The analytical model is initially verified by using several
standard SMD resistors as sensors in the range of 0.1 - 100 MOhm and measuring
the RMS noise of the bolometer readout circuit. Noise measurements on several
indigenously developed neutron transmutation doped Ge temperature sensors have
been carried out over a temperature range of 20 - 70 mK, and the measured data
are compared with the noise calculated using the analytical model. The effect
of different sensor resistances on the noise of the bolometer readout circuit,
in line with the analytical model and measured data, is presented in this paper.
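As an illustration of how such a noise budget is typically assembled, the following is a minimal sketch that sums the main contributions in quadrature: the Johnson noise of the sensor and bias resistors (attenuated by the resistive divider), the amplifier voltage noise, and the amplifier current noise flowing through the parallel source resistance. The component values, the fixed temperatures, and the neglect of the cable-capacitance roll-off are simplifying assumptions for illustration only; this is not the full model of the paper.

    import numpy as np

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def johnson_noise(R, T):
        """Johnson-Nyquist voltage noise density of a resistor, V/sqrt(Hz)."""
        return np.sqrt(4.0 * k_B * T * R)

    def readout_noise_density(R_s, T_s, R_b, T_b, e_n, i_n):
        """Input-referred voltage noise density of a simple biased-sensor readout.

        R_s, T_s: sensor resistance and temperature; R_b, T_b: bias resistor values;
        e_n, i_n: amplifier voltage and current noise densities. Cable capacitance
        (which rolls off the response at high frequency) is ignored in this sketch.
        """
        divider_s = R_b / (R_s + R_b)      # sensor noise seen at the amplifier input
        divider_b = R_s / (R_s + R_b)      # bias-resistor noise seen at the input
        R_par = R_s * R_b / (R_s + R_b)    # source resistance seen by the amplifier
        return np.sqrt((johnson_noise(R_s, T_s) * divider_s) ** 2
                       + (johnson_noise(R_b, T_b) * divider_b) ** 2
                       + e_n ** 2
                       + (i_n * R_par) ** 2)

    # Hypothetical values: 10 MOhm sensor at 50 mK, 100 MOhm bias resistor at 4 K,
    # amplifier with 5 nV/sqrt(Hz) and 1 fA/sqrt(Hz), 100 Hz measurement bandwidth.
    density = readout_noise_density(R_s=10e6, T_s=0.05, R_b=100e6, T_b=4.0,
                                    e_n=5e-9, i_n=1e-15)
    print("RMS noise over 100 Hz bandwidth:", density * np.sqrt(100.0), "V")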
|
This paper presents an AI system applied to waste location and robotic grasping.
The experimental setup is based on a parameter study to train a deep-learning
network based on Mask-RCNN to perform waste location in indoor and outdoor
environments, using five different classes and generating a new waste dataset.
Initially, the AI system obtains the RGBD data of the environment, followed by
the detection of objects using the neural network. Then, the 3D object shape
is computed using the network result and the depth channel. Finally, the shape
is used to compute a grasp for a robot arm with a two-finger gripper. The
objective is to classify the waste into groups to improve a recycling strategy.
|
The unscented Kalman inversion (UKI) method presented in [1] is a general
derivative-free approach for the inverse problem. UKI is particularly suitable
for inverse problems where the forward model is given as a black box and may
not be differentiable. The regularization strategies, convergence properties,
and speed-up strategies [1,2] of the UKI have been thoroughly studied, and the
method is capable of handling noisy observation data and solving chaotic
inverse problems. In this paper, we study the uncertainty quantification
capability of the UKI. We propose a modified UKI that can closely approximate
the mean and covariance of the posterior distribution for well-posed inverse
problems with large observation data. Theoretical guarantees for both linear
and nonlinear inverse problems are presented. Numerical results, including
learning of permeability parameters in subsurface flow and of the Navier-Stokes
initial condition from solution data at positive times, are presented. The results
obtained by the UKI require only $O(10)$ iterations, and match well with the
expected results obtained by the Markov Chain Monte Carlo method.
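For readers unfamiliar with the flavour of such derivative-free Kalman-type iterations, the sketch below implements a generic iterated unscented-transform update for a toy parameter estimation problem. The forward map, noise level, sigma-point parameter and the absence of any covariance regularization are illustrative simplifications; this is not the modified UKI of this paper, whose precise update and scaling rules are given in [1,2].

    import numpy as np

    def unscented_points(m, C, kappa=1.0):
        """Standard sigma points and weights for a Gaussian with mean m, covariance C."""
        n = m.size
        L = np.linalg.cholesky((n + kappa) * C + 1e-12 * np.eye(n))  # small jitter for safety
        pts = np.vstack([m] + [m + L[:, i] for i in range(n)] + [m - L[:, i] for i in range(n)])
        w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        return pts, w

    def kalman_update(m, C, G, y, Sigma_eta):
        """One unscented Kalman-type update of the parameter mean and covariance."""
        pts, w = unscented_points(m, C)
        Gp = np.array([G(p) for p in pts])                  # forward model evaluations
        y_hat = w @ Gp                                      # predicted observation
        C_ty = (w[:, None] * (pts - m)).T @ (Gp - y_hat)    # parameter-observation covariance
        C_yy = (w[:, None] * (Gp - y_hat)).T @ (Gp - y_hat) + Sigma_eta
        K = C_ty @ np.linalg.inv(C_yy)                      # Kalman gain
        return m + K @ (y - y_hat), C - K @ C_yy @ K.T

    # Toy inverse problem: recover theta from y = G(theta) + noise.
    G = lambda th: np.array([th[0] ** 3 + th[1], np.sin(th[1])])
    rng = np.random.default_rng(0)
    theta_true = np.array([0.8, -0.3])
    Sigma_eta = 1e-4 * np.eye(2)
    y = G(theta_true) + rng.multivariate_normal(np.zeros(2), Sigma_eta)

    m, C = np.zeros(2), np.eye(2)
    for _ in range(10):                                     # O(10) iterations, as in the abstract
        m, C = kalman_update(m, C, G, y, Sigma_eta)
    print("estimate:", m, " truth:", theta_true)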
|
Following G.~Gr\"atzer and E.~Knapp, 2009, a planar semimodular lattice $L$
is \emph{rectangular}, if~the left boundary chain has exactly one
doubly-irreducible element, $c_l$, and the right boundary chain has exactly one
doubly-irreducible element, $c_r$, and these elements are complementary.
The Cz\'edli-Schmidt Sequences, introduced in 2012, construct rectangular
lattices. We use them to prove some structure theorems. In particular, we prove
that for a slim (no $\mathsf{M}_3$ sublattice) rectangular lattice~$L$, the
congruence lattice $\Con L$ has exactly $\length[c_l,1] + \length[c_r,1]$ dual
atoms and a dual atom in $\Con L$ is a congruence with exactly two classes. We
also describe the prime ideals in a slim rectangular lattice.
|
We explore the tail of various waiting time datasets of processes that follow
a nonstationary Poisson distribution with a sinusoidal driver. Analytically, we
find that the distribution of large waiting times of such processes can be
described using a power law slope of -2.5. We show that this result applies
more broadly to any nonstationary Poisson process driven periodically. Examples
of such processes include solar flares, coronal mass ejections, geomagnetic
storms, and substorms. We also discuss how the power law specifically relates
to the behavior of the driver near its minima.
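As a minimal illustration (with made-up rate parameters), such a process can be simulated by thinning a homogeneous Poisson process with the sinusoidally modulated rate, after which the empirical waiting-time distribution can be binned logarithmically and its large-waiting-time slope compared against the analytical value of -2.5.

    import numpy as np

    def sinusoidal_poisson_events(lam0, amp, period, t_max, rng):
        """Event times of a Poisson process with rate lam(t) = lam0*(1 + amp*sin(2*pi*t/period)),
        simulated by thinning a homogeneous process of rate lam0*(1 + amp)."""
        lam_max = lam0 * (1.0 + amp)
        t, events = 0.0, []
        while t < t_max:
            t += rng.exponential(1.0 / lam_max)                 # candidate event
            lam_t = lam0 * (1.0 + amp * np.sin(2.0 * np.pi * t / period))
            if rng.random() < lam_t / lam_max:                  # accept with prob lam(t)/lam_max
                events.append(t)
        return np.array(events)

    rng = np.random.default_rng(1)
    events = sinusoidal_poisson_events(lam0=1.0, amp=0.99, period=1000.0, t_max=1e6, rng=rng)
    waits = np.diff(events)

    # Log-binned waiting-time histogram and a power-law fit to the large-wait tail.
    bins = np.logspace(np.log10(waits.min()), np.log10(waits.max()), 60)
    hist, edges = np.histogram(waits, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    tail = (centers > 10.0) & (hist > 0)
    slope = np.polyfit(np.log10(centers[tail]), np.log10(hist[tail]), 1)[0]
    print("fitted tail slope (analytical expectation is about -2.5):", slope)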
|
The present work looks at semiautomatic rings with automatic addition and
comparisons which are dense subrings of the real numbers and asks how these can
be used to represent geometric objects such that certain operations and
transformations are automatic. The underlying ring always has to be a countable
dense subring of the real numbers, and addition, comparisons, and
multiplication by constants need to be automatic. It is shown that the ring
can be selected such that equilateral triangles can be represented and
rotations by 30 degrees are possible, while the standard representation of the
b-adic rationals does not allow this.
|
To further improve the learning efficiency and performance of reinforcement
learning (RL), in this paper we propose a novel uncertainty-aware model-based
RL (UA-MBRL) framework, and then implement and validate it in autonomous
driving under various task scenarios. First, an action-conditioned ensemble
model with the ability of uncertainty assessment is established as the virtual
environment model. Then, a novel uncertainty-aware model-based RL framework is
developed based on the adaptive truncation approach, providing virtual
interactions between the agent and environment model, and improving RL's
training efficiency and performance. The developed algorithms are then
implemented in end-to-end autonomous vehicle control tasks, validated and
compared with state-of-the-art methods under various driving scenarios. The
validation results suggest that the proposed UA-MBRL method surpasses the
existing model-based and model-free RL approaches, in terms of learning
efficiency and achieved performance. The results also demonstrate the good
adaptiveness and robustness of the proposed method under various autonomous
driving scenarios.
|
Electronic health records represent a holistic overview of patients'
trajectories. Their increasing availability has fueled new hopes to leverage
them and develop accurate risk prediction models for a wide range of diseases.
Given the complex interrelationships of medical records and patient outcomes,
deep learning models have shown clear merits in achieving this goal. However, a
key limitation of these models remains their capacity in processing long
sequences. Capturing the whole history of medical encounters is expected to
lead to more accurate predictions, but the inclusion of records collected for
decades and from multiple resources can inevitably exceed the receptive field
of the existing deep learning architectures. This can result in missing
crucial, long-term dependencies. To address this gap, we present Hi-BEHRT, a
hierarchical Transformer-based model that can significantly expand the
receptive field of Transformers and extract associations from much longer
sequences. Using multimodal large-scale linked longitudinal electronic health
records, Hi-BEHRT exceeds the state-of-the-art BEHRT by 1% to 5% in area under
the receiver operating characteristic curve (AUROC) and 3% to 6% in area under
the precision-recall curve (AUPRC) on average, and by 3% to 6% (AUROC) and
3% to 11% (AUPRC) for patients with a long medical history, for 5-year heart
failure, diabetes, chronic kidney disease, and stroke risk prediction.
Additionally, because pretraining for hierarchical Transformers is not well
established, we provide an effective end-to-end contrastive pre-training
strategy for Hi-BEHRT using EHR, improving its transferability for predicting
clinical events with a relatively small training dataset.
|
Microscopic organisms, such as bacteria, have the ability to colonize
surfaces and develop biofilms that can cause diseases and infections.
Most bacteria secrete a significant amount of extracellular polymer substances
that are relevant for biofilm stabilization and growth. In this work, we apply
computer simulations and perform experiments to investigate the impact of
polymer size and concentration on early biofilm formation and growth. We
observe that bacterial cells form loose, disorganized clusters whenever the
effect of diffusion exceeds that of cell growth and division. Addition of
model polymeric molecules induces particle self-assembly and aggregation to
form compact clusters in a polymer size- and concentration-dependent fashion.
We also find that a large polymer size or concentration leads to the development
of intriguing stripe-like and dendritic colonies. The results obtained by
Brownian dynamics simulations closely resemble the morphologies that we
experimentally observe in biofilms of a Pseudomonas putida strain with added
polymers. The analysis of the Brownian dynamics simulation results suggests the
existence of a threshold polymer concentration that distinguishes between two
growth regimes. Below this threshold, the main force driving polymer-induced
compaction is hindrance of bacterial cell diffusion, while collective effects
play a minor role. Above this threshold, especially for large polymers,
polymer-induced compaction is a collective phenomenon driven by depletion
forces. Well above this concentration threshold, severely limited diffusion
drives the formation of filaments and dendritic colonies.
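For orientation, the sketch below shows the kind of overdamped Brownian dynamics update on which such particle-based simulations are built, using a generic soft pair repulsion and made-up parameters; the actual model of this work additionally includes cell growth and division and the polymer-induced (depletion) interactions discussed above.

    import numpy as np

    def brownian_step(pos, force_fn, D, dt, rng):
        """Overdamped Brownian dynamics step: x <- x + D*F*dt + sqrt(2*D*dt)*noise,
        with forces expressed in units of kT per unit length."""
        return pos + D * force_fn(pos) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(pos.shape)

    def soft_repulsion(pos, sigma=1.0, eps=10.0):
        """Linear soft repulsion between overlapping particles (simple O(N^2) loop)."""
        F = np.zeros_like(pos)
        for i in range(len(pos)):
            d = pos[i] - pos                      # vectors pointing from neighbours to particle i
            r = np.linalg.norm(d, axis=1)
            mask = (r > 1e-12) & (r < sigma)
            if mask.any():
                mag = eps * (1.0 - r[mask] / sigma)
                F[i] = np.sum(mag[:, None] * d[mask] / r[mask][:, None], axis=0)
        return F

    rng = np.random.default_rng(2)
    pos = rng.uniform(0.0, 20.0, size=(200, 2))   # 200 "cells" in a 2-D box, arbitrary units
    for _ in range(1000):
        pos = brownian_step(pos, soft_repulsion, D=0.1, dt=1e-3, rng=rng)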
|
In this paper, the interfacial motion between two immiscible viscous fluids
in the confined geometry of a Hele-Shaw cell is studied. We consider the
influence of a thin wetting film trailing behind the displaced fluid, which
dynamically affects the pressure drop at the fluid-fluid interface by
introducing a nonlinear dependence on the interfacial velocity. In this
framework, two cases of interest are analyzed: The injection-driven flow
(expanding evolution), and the lifting plate flow (shrinking evolution). In
particular, we investigate the possibility of controlling the development of
fingering instabilities in these two different Hele-Shaw setups when wetting
effects are taken into account. By employing linear stability theory, we find
the proper time-dependent injection rate $Q(t)$ and the time-dependent lifting
speed ${\dot b}(t)$ required to control the number of emerging fingers during
the expanding and shrinking evolution, respectively. Our results indicate that
the consideration of wetting leads to an increase in the magnitude of $Q(t)$
[and ${\dot b}(t)$] in comparison to the non-wetting strategy. Moreover, a
spectrally accurate boundary integral approach is utilized to examine the
validity and effectiveness of the controlling protocols in the fully nonlinear
regime of the dynamics, and confirms that the proposed injection and lifting
schemes are feasible strategies to prescribe the morphologies of the resulting
patterns in the presence of the wetting film.
|
We revisit the theoretical properties of Hamiltonian stochastic differential
equations (SDEs) for Bayesian posterior sampling, and we study the two types of
errors that arise from numerical SDE simulation: the discretization error and
the error due to noisy gradient estimates in the context of data subsampling.
Our main result is a novel analysis for the effect of mini-batches through the
lens of differential operator splitting, revising previous literature results.
The stochastic component of a Hamiltonian SDE is decoupled from the gradient
noise, for which we make no normality assumptions. This leads to the
identification of a convergence bottleneck: when considering mini-batches, the
best achievable error rate is $\mathcal{O}(\eta^2)$, with $\eta$ being the
integrator step size. Our theoretical results are supported by an empirical
study on a variety of regression and classification tasks for Bayesian neural
networks.
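To fix ideas about the setting analysed here, the following is a minimal sketch of an Euler-type discretization of an underdamped (Hamiltonian) Langevin SDE driven by noisy mini-batch gradients, for a toy Bayesian linear regression posterior; the step size, friction and batch size are made-up illustrative values, and the scheme is not the specific splitting analysed in the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy Bayesian linear regression: U(theta) = (1/(2*0.25)) * ||y - X theta||^2 + 0.5*||theta||^2
    N, d = 2000, 5
    X = rng.standard_normal((N, d))
    theta_star = rng.standard_normal(d)
    y = X @ theta_star + 0.5 * rng.standard_normal(N)

    def minibatch_grad(theta, batch_size=64):
        """Unbiased mini-batch estimate of grad U(theta)."""
        idx = rng.choice(N, size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        return (N / batch_size) * 4.0 * Xb.T @ (Xb @ theta - yb) + theta

    eta, gamma = 1e-4, 5.0          # integrator step size and friction (illustrative)
    theta, v = np.zeros(d), np.zeros(d)
    samples = []
    for t in range(20000):
        g = minibatch_grad(theta)
        # momentum/position update of the underdamped Langevin SDE (Euler-type)
        v = v - eta * g - eta * gamma * v + np.sqrt(2.0 * gamma * eta) * rng.standard_normal(d)
        theta = theta + eta * v
        if t > 5000:
            samples.append(theta.copy())
    print("posterior mean estimate:", np.mean(samples, axis=0))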
|
We present the results that are necessary in the ongoing lattice calculations
of the gluon parton distribution functions (PDFs) within the pseudo-PDF
approach. We identify the two-gluon correlator functions that contain the
invariant amplitude determining the gluon PDF in the light-cone $z^2 \to 0$
limit, and perform one-loop calculations in the coordinate representation in an
explicitly gauge-invariant form. Ultraviolet (UV) terms, which contain a
$\ln(-z^2)$ dependence, cancel in the reduced Ioffe-time distribution (ITD), and we
obtain the matching relation between the reduced ITD and the light-cone ITD.
Using a kernel form, we get a direct connection between lattice data for the
reduced ITD and the normalized gluon PDF.
|
Considerable research effort has been directed towards algorithmic fairness,
but real-world adoption of bias reduction techniques is still scarce. Existing
methods are either metric- or model-specific, require access to sensitive
attributes at inference time, or carry high development or deployment costs.
This work explores the unfairness that emerges when optimizing ML models solely
for predictive performance, and how to mitigate it with a simple and easily
deployed intervention: fairness-aware hyperparameter optimization (HO). We
propose and evaluate fairness-aware variants of three popular HO algorithms:
Fair Random Search, Fair TPE, and Fairband. We validate our approach on a
real-world bank account opening fraud case-study, as well as on three datasets
from the fairness literature. Results show that, without extra training cost,
it is feasible to find models with a 111% mean increase in fairness and just a
6% decrease in performance compared with fairness-blind HO.
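To make the intervention concrete, the sketch below shows what a fairness-aware random search could look like: each sampled hyperparameter configuration is scored jointly on predictive performance and on a fairness proxy, and the configuration maximizing a weighted combination is kept. The synthetic data, the demographic-parity proxy and the trade-off weight alpha are hypothetical placeholders, not the Fair Random Search, Fair TPE or Fairband procedures evaluated in the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def demographic_parity_ratio(y_pred, group):
        """Ratio of positive prediction rates between two groups (1.0 means parity)."""
        r0, r1 = y_pred[group == 0].mean(), y_pred[group == 1].mean()
        lo, hi = min(r0, r1), max(r0, r1)
        return lo / hi if hi > 0 else 1.0

    def fairness_aware_random_search(X, y, group, n_trials=30, alpha=0.5, seed=0):
        """Random search selecting on alpha*performance + (1 - alpha)*fairness."""
        rng = np.random.default_rng(seed)
        Xtr, Xva, ytr, yva, gtr, gva = train_test_split(X, y, group,
                                                        test_size=0.3, random_state=seed)
        best = None
        for _ in range(n_trials):
            params = {"n_estimators": int(rng.integers(50, 300)),
                      "max_depth": int(rng.integers(2, 12)),
                      "min_samples_leaf": int(rng.integers(1, 20))}
            model = RandomForestClassifier(random_state=seed, **params).fit(Xtr, ytr)
            pred = model.predict(Xva)
            perf = (pred == yva).mean()                    # accuracy as the performance proxy
            fair = demographic_parity_ratio(pred, gva)     # fairness proxy
            score = alpha * perf + (1.0 - alpha) * fair
            if best is None or score > best[0]:
                best = (score, params, model)
        return best

    # Synthetic example with a made-up binary protected attribute.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 8))
    group = (rng.random(2000) < 0.4).astype(int)
    y = ((X[:, 0] + 0.5 * group + 0.3 * rng.standard_normal(2000)) > 0).astype(int)
    score, params, model = fairness_aware_random_search(X, y, group, n_trials=20)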
|
LIDAR sensors are usually used to provide autonomous vehicles with 3D
representations of their environment. In ideal conditions, geometrical models
could detect the road in LIDAR scans, at the cost of a manual tuning of
numerical constraints, and a lack of flexibility. We instead propose an
evidential pipeline, to accumulate road detection results obtained from neural
networks. First, we introduce RoadSeg, a new convolutional architecture that is
optimized for road detection in LIDAR scans. RoadSeg is used to classify
individual LIDAR points as either belonging to the road, or not. Yet, such
point-level classification results need to be converted into a dense
representation, that can be used by an autonomous vehicle. We thus secondly
present an evidential road mapping algorithm, that fuses consecutive road
detection results. We benefitted from a reinterpretation of logistic
classifiers, which can be seen as generating a collection of simple evidential
mass functions. An evidential grid map that depicts the road can then be
obtained, by projecting the classification results from RoadSeg into grid
cells, and by handling moving objects via conflict analysis. The system was
trained and evaluated on real-life data. A Python implementation maintains a 10
Hz frame rate. Since road labels were needed for training, a soft labelling
procedure, relying on lane-level HD maps, was used to generate coarse training and
validation sets. An additional test set was manually labelled for evaluation
purposes. So as to reach satisfactory results, the system fuses road detection
results obtained from three variants of RoadSeg, processing different LIDAR
features.
|
Spiral structure is ubiquitous in the Universe, and the pitch angle of arms
in spiral galaxies provides an important observable in efforts to discriminate
between different mechanisms of spiral arm formation and evolution. In this
paper, we present a hierarchical Bayesian approach to galaxy pitch angle
determination, using spiral arm data obtained through the Galaxy Builder
citizen science project. We present a new approach to deal with the large
variations in pitch angle between different arms in a single galaxy, which
obtains full posterior distributions on parameters. We make use of our pitch
angles to examine previously reported links between bulge and bar strength and
pitch angle, finding no correlation in our data (with a caveat that we use
observational proxies for both bulge size and bar strength which differ from
other work). We test a recent model for spiral arm winding, which predicts
uniformity of the cotangent of pitch angle between some unknown upper and lower
limits, finding our observations are consistent with this model of transient
and recurrent spiral pitch angle as long as the pitch angle at which most
winding spirals dissipate or disappear is larger than 10 degrees.
|
Source code spends most of its time in a broken or incomplete state during
software development. This presents a challenge to machine learning for code,
since high-performing models typically rely on graph structured representations
of programs derived from traditional program analyses. Such analyses may be
undefined for broken or incomplete code. We extend the notion of program graphs
to work-in-progress code by learning to predict edge relations between tokens,
training on well-formed code before transferring to work-in-progress code. We
consider the tasks of code completion and localizing and repairing variable
misuse in a work-in-progress scenario. We demonstrate that training
relation-aware models with fine-tuned edges consistently leads to improved
performance on both tasks.
|
The objective of Federated Learning (FL) is to perform statistical inference
for data which are decentralised and stored locally on networked clients. FL
raises many constraints which include privacy and data ownership, communication
overhead, statistical heterogeneity, and partial client participation. In this
paper, we address these problems in the framework of the Bayesian paradigm. To
this end, we propose a novel federated Markov Chain Monte Carlo algorithm,
referred to as Quantised Langevin Stochastic Dynamics (QLSD), which may be seen
as an extension of Stochastic Gradient Langevin Dynamics to the FL setting and
which handles the communication bottleneck using gradient compression. To improve
performance, we then introduce variance reduction techniques, which lead to two
improved versions coined \texttt{QLSD}$^{\star}$ and \texttt{QLSD}$^{++}$. We
give both non-asymptotic and asymptotic convergence guarantees for the proposed
algorithms. We illustrate their performance on various Bayesian federated
learning benchmarks.
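For intuition about the basic ingredients, the following is a minimal sketch of a quantised Langevin-type round: each client computes a stochastic gradient of its local potential, compresses it with an unbiased random quantiser, and the server performs a Langevin step with the aggregated compressed gradients plus the prior gradient. The toy Gaussian target, quantisation level and step size are made-up illustrations, not the precise \texttt{QLSD} algorithm or its variance-reduced variants.

    import numpy as np

    rng = np.random.default_rng(4)

    def random_quantize(v, levels=4):
        """Unbiased stochastic quantisation (randomized rounding onto `levels` levels)."""
        norm = np.linalg.norm(v)
        if norm == 0.0:
            return v
        scaled = np.abs(v) / norm * levels
        lower = np.floor(scaled)
        q = lower + (rng.random(v.shape) < (scaled - lower))    # E[q] = scaled, so E[Q(v)] = v
        return np.sign(v) * q * norm / levels

    # Toy federated target: Gaussian likelihood N(theta, I) split across clients, N(0, I) prior.
    d, n_clients = 3, 5
    data = [rng.standard_normal((100, d)) + np.array([1.0, -2.0, 0.5]) for _ in range(n_clients)]

    def client_grad(theta, Xc, batch=20):
        idx = rng.choice(len(Xc), size=batch, replace=False)
        return (len(Xc) / batch) * (theta - Xc[idx]).sum(axis=0)   # mini-batch grad of client potential

    theta, gamma, samples = np.zeros(d), 1e-3, []
    for it in range(5000):
        g = sum(random_quantize(client_grad(theta, Xc)) for Xc in data) + theta   # + prior gradient
        theta = theta - 0.5 * gamma * g + np.sqrt(gamma) * rng.standard_normal(d)  # Langevin step
        if it > 1000:
            samples.append(theta.copy())
    print("posterior mean estimate:", np.mean(samples, axis=0))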
|
The complex-step derivative approximation is a numerical differentiation
technique that can achieve analytical accuracy, to machine precision, with a
single function evaluation. In this letter, the complex-step derivative
approximation is extended to be compatible with elements of matrix Lie groups.
As with the standard complex-step derivative, the method is still able to
achieve analytical accuracy, up to machine precision, with a single function
evaluation. Compared to a central-difference scheme, the proposed complex-step
approach is shown to have superior accuracy. The approach is applied to two
different pose estimation problems, and is able to recover the same results as
an analytical method when available.
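For readers unfamiliar with the underlying idea, the scalar complex-step approximation is f'(x) ~ Im[f(x + ih)]/h, which involves no subtraction and therefore no cancellation error; the sketch below compares it with a central difference on a simple analytic function. The Lie-group extension of this letter requires additional machinery (perturbations in the Lie algebra and the group exponential map) that is not reproduced here.

    import numpy as np

    def complex_step_derivative(f, x, h=1e-200):
        """Complex-step derivative: accurate to machine precision with one evaluation."""
        return np.imag(f(x + 1j * h)) / h

    def central_difference(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2.0 * h)

    f = lambda x: np.exp(x) * np.sin(x)
    df_exact = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))

    x0 = 1.3
    print("complex-step error     :", abs(complex_step_derivative(f, x0) - df_exact(x0)))
    print("central-difference error:", abs(central_difference(f, x0) - df_exact(x0)))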
|
The problem of simultaneous rigid alignment of multiple unordered point sets
which is unbiased towards any of the inputs has recently attracted increasing
interest, and several reliable methods have been newly proposed. While being
remarkably robust towards noise and clustered outliers, current approaches
require sophisticated initialisation schemes and do not scale well to large
point sets. This paper proposes a new resilient technique for simultaneous
registration of multiple point sets by interpreting the latter as particle
swarms rigidly moving in the mutually induced force fields. Thanks to the
improved simulation with altered physical laws and acceleration of globally
multiply-linked point interactions with a 2^D-tree (D is the space
dimensionality), our Multi-Body Gravitational Approach (MBGA) is robust to
noise and missing data while supporting more massive point sets than previous
methods (with 10^5 points and more). In various experimental settings, MBGA is
shown to outperform several baseline point set alignment approaches in terms of
accuracy and runtime. We make our source code available for the community to
facilitate the reproducibility of the results.
|
It is nontrivial to store rapidly growing big data nowadays, which demands
high-performance lossless compression techniques. Likelihood-based generative
models have seen success in lossless compression, where flow-based models are
desirable because they allow exact data likelihood optimisation with bijective
mappings. However, common continuous flows are in contradiction with
the discreteness of coding schemes, which requires either 1) imposing strict
constraints on flow models that degrades the performance or 2) coding numerous
bijective mapping errors which reduces the efficiency. In this paper, we
investigate volume preserving flows for lossless compression and show that a
bijective mapping without error is possible. We propose Numerical Invertible
Volume Preserving Flow (iVPF) which is derived from the general volume
preserving flows. By introducing novel computation algorithms on flow models,
an exact bijective mapping is achieved without any numerical error. We also
propose a lossless compression algorithm based on iVPF. Experiments on various
datasets show that the algorithm based on iVPF achieves state-of-the-art
compression ratio over lightweight compression algorithms.
|
This paper presents a model addressing welfare-optimal policies for a
demand-responsive transportation service, where passengers cause external
travel time costs for other passengers due to route changes. Optimal pricing
and trip production policies are modelled both on the aggregate level and on
the network level. The aggregate model is an extension of the flat pricing
model of Jokinen (2016), but the occupancy rate is now modelled as an
endogenous variable depending on demand and capacity levels. The network model
makes it possible to describe differences between routes in terms of occupancy
rate and efficient trip combining. Moreover, the model defines the optimal
differentiated pricing for routes.
|
In this article, we introduce a notion of relative mean metric dimension with
potential for a factor map $\pi: (X,d, T)\to (Y, S)$ between two topological
dynamical systems. To link it with ergodic theory, we establish four
variational principles in terms of metric entropy of partitions, Shapira's
entropy, Katok's entropy and Brin-Katok local entropy respectively. Some
results on local entropy with respect to a fixed open cover are obtained in the
relative case. We also partially answer an open question raised by Shi
\cite{Shi} for a very well-partitionable compact metric space, and in general
we obtain a variational inequality involving the box dimension of the space.
Corresponding inner variational principles given an invariant measure of
$(Y,S)$ are also investigated.
|
We introduce a quantum interferometric scheme that uses states that are sharp
in frequency and delocalized in position. The states are frequency modes of a
quantum field that is trapped at all times in a finite volume potential, such
as a small box potential. This allows for significant miniaturization of
interferometric devices. Since the modes are in contact at all times, it is
possible to estimate physical parameters of global multi-mode channels. As an
example, we introduce a three-mode scheme and calculate precision bounds in the
estimation of parameters of two-mode Gaussian channels. This scheme can be
implemented in several systems, including superconducting circuits, cavity-QED
and cold atoms. We consider a concrete implementation using the ground state
and two phononic modes of a trapped Bose-Einstein condensate. We apply this to
show that frequency interferometry can improve the sensitivity of phononic
gravitational wave detectors by several orders of magnitude, even if the
squeezing is much smaller than previously assumed and the system suffers from
short phononic lifetimes. Other applications range from magnetometry,
gravimetry and gradiometry to dark matter/energy searches.
|
An analytic solution is presented for the nonlinear semiclassical dynamics of
the superradiant photonic condensate that arises in the Dicke model of two-level
atoms dipolar coupled to the electromagnetic field in a microwave cavity. In
the adiabatic limit with respect to the photonic degree of freedom, the system
is approximately integrable and its evolution is expressed via Jacobi elliptic
functions of real time. Periodic trajectories of the semiclassical coordinate
of the photonic condensate either localise around the two degenerate minima of
the condensate ground state energy or traverse between them over the saddle
point. An exact mapping of the semiclassical dynamics of the photonic
condensate onto the motion of an unstable Lagrange 'sleeping top' is found. An
analytic expression is presented for the frequency dependence of the
transmission coefficient along a transmission line inductively coupled to the
resonant cavity with the superradiant condensate. Sharp transmission drops
reflect the Fourier spectrum of the semiclassical motion of the photonic
condensate and of the 'sleeping top' nodding.
|
Molecular ferroelectrics have captured immense attention due to their
advantages over inorganic oxide ferroelectrics, such as being environmentally
friendly, low-cost, flexible, and foldable. However, the mechanisms of
ferroelectric switching and phase transition in molecular ferroelectrics are
still poorly understood, making the development of novel molecular
ferroelectrics less efficient. In this work, we provide a methodology that
combines molecular dynamics (MD) simulations on a polarized force field, named
polarized crystal charge (PCC), with an enhanced sampling technique,
replica-exchange molecular dynamics (REMD), to simulate such mechanisms. With
this procedure, we have investigated a promising molecular ferroelectric
material, the (R)/(S)-3-quinuclidinol crystal. We have simulated the
ferroelectric hysteresis loops of both enantiomers and obtained spontaneous
polarizations of 7/8 $\mu$C cm$^{-2}$ and corresponding coercive electric
fields of 15 kV cm$^{-1}$. We also find Curie temperatures of 380/385 K for the
ferroelectric-paraelectric phase transition of the two enantiomers. All of the
simulated results are highly compatible with experimental values. Besides
that, we predict an additional Curie temperature of
about 600 K. This finding is further validated by principal component analysis
(PCA). Our work would significantly promote the future exploration of
multifunctional molecular ferroelectrics for the next generation of intelligent
devices.
|
We report on new stability conditions for evolutionary dynamics in the
context of population games. We adhere to the prevailing framework consisting
of many agents, grouped into populations, that interact noncooperatively by
selecting strategies with a favorable payoff. Each agent is repeatedly allowed
to revise its strategy at a rate referred to as revision rate. Previous
stability results considered either that the payoff mechanism was a memoryless
potential game, or allowed for dynamics (in the payoff mechanism) at the
expense of precluding any explicit dependence of the agents' revision rates on
their current strategies. Allowing the dependence of revision rates on
strategies is relevant because the agents' strategies at any point in time are
generally unequal. To allow for strategy-dependent revision rates and payoff
mechanisms that are dynamic (or memoryless games that are not potential), we
focus on an evolutionary dynamics class obtained from a straightforward
modification of one that stems from the so-called impartial pairwise comparison
strategy revision protocol. Revision protocols consistent with the modified
class retain from those in the original one the advantage that the agents
operate in a fully decentralized manner and with minimal information
requirements - they need to access only the payoff values (not the mechanism)
of the available strategies. Our main results determine conditions under which
system-theoretic passivity properties are assured, which we leverage for
stability analysis.
|
Collective action demands that individuals efficiently coordinate how much,
where, and when to cooperate. Laboratory experiments have extensively explored
the first part of this process, demonstrating that a variety of
social-cognitive mechanisms influence how much individuals choose to invest in
group efforts. However, experimental research has been unable to shed light on
how social cognitive mechanisms contribute to the where and when of collective
action. We leverage multi-agent deep reinforcement learning to model how a
social-cognitive mechanism--specifically, the intrinsic motivation to achieve a
good reputation--steers group behavior toward specific spatial and temporal
strategies for collective action in a social dilemma. We also collect
behavioral data from groups of human participants challenged with the same
dilemma. The model accurately predicts spatial and temporal patterns of group
behavior: in this public goods dilemma, the intrinsic motivation for reputation
catalyzes the development of a non-territorial, turn-taking strategy to
coordinate collective action.
|
In this paper we study zero-noise limits of $\alpha$-stable noise perturbed
ODEs which are driven by an irregular vector field $A$ with asymptotics
$A(x)\sim \overline{a}(\frac{x}{\left\vert x\right\vert })\left\vert
x\right\vert ^{\beta -1}x$ at zero, where $\overline{a}>0$ is a continuous
function and $\beta \in (0,1)$. The results established in this article can be
considered a generalization of those in the seminal works of Bafico \cite{Ba}
and Bafico, Baldi \cite{BB} to the multi-dimensional case. Our approach for
proving these results is inspired by techniques in \cite{PP_self_similar} and
is based on the analysis of an SDE for $t\to \infty$, which is obtained through
a transformation of the perturbed ODE.
|
A Private Set Operation (PSO) protocol involves at least two parties with
their private input sets. The goal of the protocol is for the parties to learn
the output of a set operation, i.e. set intersection, on their input sets,
without revealing any information about the items that are not in the output
set. Commonly, the outcome of the set operation is revealed to the parties and
no one else. However, in many application areas of PSO the result of the set
operation should be learned by an external participant who does not have an
input set. We call this participant the decider. In this paper, we present new
variants of multi-party PSO, where there is a decider who gets the result. All
parties except the decider have a private set. These other parties neither
learn the result nor anything else from the protocol. Moreover, we present a
generic solution to the problem of PSO.
|
One of the biggest challenges in multi-agent reinforcement learning is
coordination, and a typical application scenario for it is traffic signal
control. Recently, this problem has attracted a rising number of researchers
and has become a hot research field with great practical significance. In this
paper, we propose a novel method called MetaVRS (Meta Variational Reward
Shaping) for traffic signal coordination control. By heuristically applying an
intrinsic reward on top of the environmental reward, MetaVRS can capture the
agent-to-agent interplay. Besides, latent variables generated by a VAE are
brought into the policy to automatically trade off exploration and exploitation
and optimize the policy. In addition, meta learning is used in the decoder for
faster adaptation and better approximation. Empirically, we demonstrate that
MetaVRS substantially outperforms existing methods and shows superior
adaptability, which has far-reaching significance for multi-agent traffic
signal coordination control.
|
We present the first measurement of the homogeneity index, $\mathcal{H}$, a
fractal or Hausdorff dimension of the early Universe from the Planck CMB
temperature variations $\delta T$ in the sky. This characterization of the
isotropy scale is model-free and purely geometrical, independent of the
amplitude of $\delta T$. We find evidence of homogeneity ($\mathcal{H}=0$) for
scales larger than $\theta_{\mathcal{H}} = 65.9 \pm 9.2 \deg $ on the CMB sky.
This finding is at odds with the $\Lambda$CDM prediction, which assumes a
scale-invariant infinite universe. Such an anomaly is consistent with the
well-known low quadrupole amplitude in the angular $\delta T$ spectrum, but it
is quantified here in a direct and model-independent way. We estimate the significance of our finding
for $\mathcal{H}=0$ using a principal component analysis from the sampling
variations of the observed sky. This analysis is validated with an independent
theoretical prediction of the covariance matrix based purely on data. Assuming
translation invariance (and flat geometry $k=0$) we can convert the isotropy
scale $\theta_\mathcal{H}$ into a (comoving) homogeneity scale of
$\chi_\mathcal{H} \simeq 3.3 c/H_0$, which is very close to the trapped surface
generated by the observed cosmological constant $\Lambda$.
|
XRISM is an X-ray astronomical mission by JAXA, NASA, ESA and other
international participants that is planned for launch in 2022 (Japanese fiscal
year) to quickly restore high-resolution X-ray spectroscopy of astrophysical
objects. To enhance the scientific outputs of the mission, the Science
Operations Team (SOT) is structured independently from the instrument teams and
the Mission Operations Team. The responsibilities of the SOT are divided into
four categories: 1) guest observer program and data distributions, 2)
distribution of analysis software and the calibration database, 3) guest
observer support activities, and 4) performance verification and optimization
activities. As a first step, lessons on science operations learned from
past Japanese X-ray missions are reviewed, and 15 lessons are
identified. Among them, a) the importance of early preparation of the
operations from the ground stage, b) construction of an independent team for
science operations separate from the instrument development, and c) operations
with well-defined duties by appointed members are recognized as key lessons.
Then, the team structure and the task division between the mission and science
operations are defined; the tasks are shared among Japan, US, and Europe and
are performed by three centers, the SOC, SDC, and ESAC, respectively. The SOC
is designed to perform tasks close to the spacecraft operations, such as
spacecraft planning, quick-look health checks, pre-pipeline processing, etc.,
and the SDC covers tasks regarding data calibration processing, maintenance of
analysis tools, etc. The data-archive and user-support activities are covered
both by the SOC and SDC. Finally, the science-operations tasks and tools are
defined and prepared before launch.
|
We study an expanding two-fluid model of non-relativistic dark matter and
radiation which are allowed to interact during a certain time span and to
establish an approximate thermal equilibrium. Such interaction which generates
an effective bulk viscous pressure at background level is expected to be
relevant for times around the transition from radiation to matter dominance. We
quantify the magnitude of this pressure for dark matter particle masses within
the range $1 {\rm eV} \lesssim m_{\chi} \lesssim 10 {\rm eV}$ around the
matter-radiation equality epoch (i.e., redshift $z_{\rm eq}\sim 3400$) and
demonstrate that the existence of a transient bulk viscosity has consequences
which may be relevant for addressing current tensions of the standard
cosmological model: i) the additional (negative) pressure contribution modifies
the expansion rate around $z_{\rm eq}$, yielding a larger $H_0$ value and ii)
large-scale structure formation is impacted by suppressing the amplitude of
matter overdensity growth via a new viscous friction-term contribution to the
M\'esz\'aros effect. As a result, both the $H_0$ and the $S_8$ tensions of the
current standard cosmological model are significantly alleviated.
|
For an autonomous robotic system, monitoring surgeon actions and assisting
the main surgeon during a procedure can be very challenging. The challenges
come from the peculiar structure of the surgical scene, the greater similarity
in appearance of actions performed via tools in a cavity compared to, say,
human actions in unconstrained environments, as well as from the motion of the
endoscopic camera. This paper presents ESAD, the first large-scale dataset
designed to tackle the problem of surgeon action detection in endoscopic
minimally invasive surgery. ESAD aims to contribute to increasing the
effectiveness and reliability of surgical assistant robots by realistically
testing their awareness of the actions performed by a surgeon. The dataset
provides bounding box annotation for 21 action classes on real endoscopic video
frames captured during prostatectomy, and was used as the basis of a recent
MIDL 2020 challenge. We also present an analysis of the dataset conducted using
the baseline model which was released as part of the challenge, and a
description of the top performing models submitted to the challenge together
with the results they obtained. This study provides significant insight into
what approaches can be effective and can be extended further. We believe that
ESAD will serve in the future as a useful benchmark for all researchers active
in surgeon action detection and assistive robotics at large.
|
Traditionally, genetic algorithms have been used for the optimization of
unimodal and multimodal functions. Earlier researchers worked with constant
probabilities for GA control operators like crossover and mutation to tune the
optimization in specific domains. Recent advancements in this field have
introduced adaptive approaches to probability determination. In adaptive
mutation, primarily poor individuals are utilized to explore the state space,
so the mutation probability is usually generated proportionally to the
difference between the fitness of the best chromosome and that of the
individual (fMAX - f). However, this approach is susceptible to the nature of
the fitness distribution during optimization. This paper presents an
alternative approach to mutation probability generation using chromosome rank
to avoid any susceptibility to the fitness distribution. Experiments are done
to compare the results of a simple genetic algorithm (SGA) with constant
mutation probability and the adaptive approaches within a limited resource
constraint for unimodal and multimodal functions and the Travelling Salesman
Problem (TSP). Measurements are made of the average best fitness, the number of
generations evolved, and the percentage of trials in which the global optimum
is achieved. The results demonstrate that the rank-based adaptive mutation
approach is superior to the fitness-based adaptive approach as well as to the
SGA in a multimodal problem space.
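A minimal sketch of the two mutation-probability schemes being contrasted, with illustrative functional forms and a made-up p_max cap (the exact formulas and resource limits used in the experiments are those specified in the paper):

    import numpy as np

    def fitness_based_mutation_prob(fitness, p_max=0.2):
        """Adaptive mutation proportional to (fMAX - f): sensitive to the fitness distribution."""
        f = np.asarray(fitness, dtype=float)
        spread = f.max() - f.min()
        if spread == 0.0:
            return np.full(f.shape, p_max / 2.0)
        return p_max * (f.max() - f) / spread

    def rank_based_mutation_prob(fitness, p_max=0.2):
        """Adaptive mutation from chromosome rank: insensitive to how fitness values are spread."""
        f = np.asarray(fitness, dtype=float)
        rank = np.argsort(np.argsort(-f))      # 0 for the best chromosome, n-1 for the worst
        n = len(f)
        return p_max * rank / max(n - 1, 1)

    pop_fitness = np.array([9.9, 9.8, 9.7, 5.0, 1.0])   # heavily skewed fitness distribution
    print(fitness_based_mutation_prob(pop_fitness))     # top three chromosomes barely mutate
    print(rank_based_mutation_prob(pop_fitness))        # probabilities spread evenly up to p_max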
|
A new class of test functions for black box optimization is introduced.
Affine OneMax (AOM) functions are defined as compositions of OneMax and
invertible affine maps on bit vectors. The black box complexity of the class is
upper bounded by a polynomial of large degree in the dimension. The proof
relies on discrete Fourier analysis and the Kushilevitz-Mansour algorithm.
Tunable complexity is achieved by expressing invertible linear maps as finite
products of transvections. The black box complexity of sub-classes of AOM
functions is studied. Finally, experimental results are given to illustrate the
performance of search algorithms on AOM functions.
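A minimal sketch (with hypothetical parameter choices) of how an AOM instance can be generated and evaluated: an invertible linear map over GF(2) is built as a product of random transvections, combined with a random translation, and the function value is the OneMax count of the transformed bit vector.

    import numpy as np

    def random_transvection(n, rng):
        """Identity plus one off-diagonal 1 over GF(2): an elementary invertible map."""
        T = np.eye(n, dtype=np.uint8)
        i, j = rng.choice(n, size=2, replace=False)
        T[i, j] = 1
        return T

    def affine_onemax(n, n_transvections, seed=0):
        """Return f(x) = OneMax(A x + b mod 2), with A a product of transvections."""
        rng = np.random.default_rng(seed)
        A = np.eye(n, dtype=np.uint8)
        for _ in range(n_transvections):
            A = (random_transvection(n, rng) @ A) % 2
        b = rng.integers(0, 2, size=n, dtype=np.uint8)

        def f(x):
            x = np.asarray(x, dtype=np.uint8)
            return int((((A @ x) % 2) ^ b).sum())
        return f

    f = affine_onemax(n=20, n_transvections=15, seed=1)   # complexity tunable via the product length
    x = np.random.default_rng(2).integers(0, 2, size=20, dtype=np.uint8)
    print(f(x), f(np.zeros(20, dtype=np.uint8)))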
|
The aberrations in an optical microscope are commonly measured and corrected
at one location in the field of view, within the so-called isoplanatic patch.
Full-field correction is desirable for high-resolution imaging of large
specimens. Here we present a novel wavefront detector, based on pupil sampling
with sub-apertures, which measures the aberrated wavefront phase at each
position of the specimen. Based on this measurement, we propose a region-wise
deconvolution that provides an anisoplanatic reconstruction of the sample
image. Our results indicate that the measurement and correction of the
aberrations can be performed in a wide-field fluorescence microscope over its
entire field of view.
|
A rapid decline in mortality and fertility has become a major issue in many
developed countries over the past few decades. A precise model for forecasting
demographic movements is important for decision making in social welfare
policies and resource budgeting for the government and many industry sectors.
This article introduces a novel non-parametric approach using Gaussian process
regression with a natural cubic spline mean function and a spectral mixture
covariance function for mortality and fertility modelling and forecasting.
Unlike most existing approaches in the demographic modelling literature, which
rely on time parameters to decide how the whole mortality or fertility curve
shifts from one year to another over time, we consider the mortality and
fertility curves through their components, the age-specific mortality and
fertility rates, and assume each of them follows a Gaussian process over time
to fit the whole curves in a discrete but intensive style. The proposed
Gaussian process regression approach shows significant improvements in terms of
precision and robustness compared to other mainstream demographic modelling
approaches in short-, mid- and long-term
forecasting using the mortality and fertility data of several developed
countries in our numerical experiments.
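For concreteness, the sketch below runs GP regression over time for a single synthetic age-specific log rate, using the standard one-dimensional spectral mixture kernel and a fitted polynomial as a stand-in for the natural cubic spline mean; all names, hyperparameters and data are illustrative, and the hyperparameter estimation of the actual model is not reproduced.

    import numpy as np

    def spectral_mixture_kernel(t1, t2, weights, freqs, variances):
        """1-D spectral mixture kernel: sum_q w_q * exp(-2*pi^2*tau^2*v_q) * cos(2*pi*tau*mu_q)."""
        tau = t1[:, None] - t2[None, :]
        K = np.zeros_like(tau, dtype=float)
        for w, mu, v in zip(weights, freqs, variances):
            K += w * np.exp(-2.0 * np.pi**2 * tau**2 * v) * np.cos(2.0 * np.pi * tau * mu)
        return K

    def gp_forecast(t_train, y_train, t_test, kern_params, mean_fn, noise=1e-4):
        """Standard GP posterior mean about a fixed mean function."""
        K = spectral_mixture_kernel(t_train, t_train, *kern_params) + noise * np.eye(len(t_train))
        Ks = spectral_mixture_kernel(t_test, t_train, *kern_params)
        alpha = np.linalg.solve(K, y_train - mean_fn(t_train))
        return mean_fn(t_test) + Ks @ alpha

    # Synthetic log mortality rate of one age group over calendar years.
    years = np.arange(1960, 2020, dtype=float)
    log_rate = -3.0 - 0.02 * (years - 1960) + 0.05 * np.sin(years / 4.0)
    mean_fn = np.poly1d(np.polyfit(years, log_rate, 3))       # stand-in for the spline mean
    params = ([0.02, 0.01], [0.0, 0.25], [1e-4, 1e-3])        # weights, frequencies, variances
    future = np.arange(2020, 2041, dtype=float)
    forecast = gp_forecast(years, log_rate, future, params, mean_fn)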
|
Driving is a routine activity for many, but it is far from simple. Drivers
deal with multiple concurrent tasks, such as keeping the vehicle in the lane,
observing and anticipating the actions of other road users, reacting to
hazards, and dealing with distractions inside and outside the vehicle. Failure
to notice and respond to the surrounding objects and events can cause
accidents. The ongoing improvements of the road infrastructure and vehicle
mechanical design have made driving safer overall. Nevertheless, the problem of
driver inattention has remained one of the primary causes of accidents.
Therefore, understanding where the drivers look and why they do so can help
eliminate sources of distractions and identify unsafe attention patterns.
Research on driver attention has implications for many practical applications
such as policy-making, improving driver education, enhancing road
infrastructure and in-vehicle infotainment systems, as well as designing
systems for driver monitoring, driver assistance, and automated driving. This
report covers the literature on changes in drivers' visual attention
distribution due to factors internal and external to the driver. Aspects of
attention during driving have been explored across multiple disciplines,
including psychology, human factors, human-computer interaction, intelligent
transportation, and computer vision, each offering different perspectives,
goals, and explanations for the observed phenomena. We link cross-disciplinary
theoretical and behavioral research on driver's attention to practical
solutions. Furthermore, limitations and directions for future research are
discussed. This report is based on over 175 behavioral studies, nearly 100
practical papers, 20 datasets, and over 70 surveys published since 2010. A
curated list of papers used for this report is available at
\url{https://github.com/ykotseruba/attention_and_driving}.
|
Magnetic anisotropies play a key role in tailoring the magnetic behavior of
ferromagnetic systems. Further, they are also essential elements for
manipulating the thermoelectric response in anomalous Nernst (ANE) and
longitudinal spin Seebeck (LSSE) systems. Here we propose a theoretical
approach and explore the role of magnetic anisotropies in the magnetization and
thermoelectric response of noninteracting multidomain ferromagnetic systems.
The magnetic behavior and the thermoelectric curves are calculated from a
modified Stoner-Wohlfarth model for an isotropic system, a uniaxial magnetic
one, as well as for a system having a mixture of uniaxial and cubic
magnetocrystalline magnetic anisotropies. Remarkable modifications of the
magnetic behavior with the anisotropy are verified, and the thermoelectric
response is shown to be strongly affected by these changes. Further, the
fingerprints of the energy contributions to the thermoelectric response are
disclosed. To test the robustness of our theoretical approach, we engineer
films having the specific magnetic properties and compare experimental data
directly with theoretical results. Thus, experimental evidence is provided to
confirm the validity of our theoretical approach. The results go beyond the
traditional reports focusing on magnetically saturated films and show how the
thermoelectric effect behaves along the whole magnetization curve. Our findings
reveal a promising way to explore the ANE and LSSE effects as a powerful tool
to study magnetic anisotropies, as well as to employ systems with magnetic
anisotropy as sensing elements in technological applications.
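As a reference point for the kind of calculation involved, the sketch below computes a Stoner-Wohlfarth-like hysteresis loop for a single domain with uniaxial anisotropy by tracking the local energy minimum while the reduced field is swept; the brute-force minimization, the parameter values, and the absence of the cubic term, the multidomain ensemble and the thermoelectric response are simplifications relative to the modified model of this work.

    import numpy as np

    def sw_energy(theta, h, psi):
        """Reduced Stoner-Wohlfarth energy density: uniaxial anisotropy plus Zeeman term.
        theta: magnetization angle, psi: easy-axis angle, h = H/H_K (reduced field)."""
        return 0.5 * np.sin(theta - psi) ** 2 - h * np.cos(theta)

    def hysteresis_loop(psi=np.deg2rad(30.0), n_h=401):
        fields = np.concatenate([np.linspace(2.0, -2.0, n_h), np.linspace(-2.0, 2.0, n_h)])
        thetas = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
        theta_eq, m_parallel = 0.0, []
        for h in fields:
            E = sw_energy(thetas, h, psi)
            is_min = (E < np.roll(E, 1)) & (E < np.roll(E, -1))     # local minima on the grid
            candidates = thetas[is_min]
            # follow the minimum closest to the previous state (this produces the hysteresis)
            diff = np.angle(np.exp(1j * (candidates - theta_eq)))
            theta_eq = candidates[np.argmin(np.abs(diff))]
            m_parallel.append(np.cos(theta_eq))                     # magnetization along the field
        return fields, np.array(m_parallel)

    H, M = hysteresis_loop()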
|
IW And stars are a recently recognized subgroup of dwarf novae which are
characterized by (often repetitive) slowly rising standstills terminated by
brightening, but the exact mechanism for this variation is not yet identified.
We have identified BO Cet, which had been considered as a novalike cataclysmic
variable, as a new member of IW And stars based on the behavior in 2019-2020.
In addition to this, the object showed dwarf nova-type outbursts in 2020-2021,
and superhumps having a period 7.8% longer than the orbital one developed at
least during one long outburst. This object has been confirmed as an SU
UMa-type dwarf nova with an exceptionally long orbital period (0.1398 d). BO
Cet is thus the first cataclysmic variable showing both SU UMa-type and IW
And-type features. We obtained a mass ratio (q) of 0.31-0.34 from the
superhumps in the growing phase (stage A superhumps). At this q, the radius of
the 3:1 resonance, responsible for tidal instability and superhumps, and the
tidal truncation radius are very similar. We interpret that in some occasions
this object showed IW And-type variation when the disk size was not large
enough, but that the radius of the 3:1 resonance could be reached as the result
of thermal instability. We also discuss that there are SU UMa-type dwarf novae
above q=0.30, which is above the previously considered limit (q~0.25) derived
from numerical simulations and that this is possible since the radius of the
3:1 resonance is inside the tidal truncation radius. We constrained the mass of
the white dwarf to be larger than 1.0 Msol, which may be responsible for the IW
And-type behavior and the observed strength of the He II emission. The exact
reason why this object is unique in showing both SU UMa-type and IW And-type
features, however, is still unsolved.
|
We develop time-splitting finite difference methods, using implicit
Backward-Euler and semi-implicit Crank-Nicolson discretization schemes, to
study spin-orbit-coupled spinor Bose-Einstein condensates with coherent
coupling in quasi-one- and quasi-two-dimensional traps. The split equations
involving kinetic energy and spin-orbit coupling operators are solved using
either time-implicit Backward-Euler or semi-implicit Crank-Nicolson methods. We
explicitly develop the method for pseudospin-1/2, spin-1, and spin-2
condensates. The results for ground states obtained with time-splitting
Backward-Euler and Crank-Nicolson methods are in excellent agreement with
time-splitting Fourier spectral method which is one of the popular methods to
solve the mean-field models for spin-orbit coupled spinor condensates. We
confirm the emergence of different phases in spin-orbit coupled pseudospin-1/2,
spin-1, and spin-2 condensates with coherent coupling.
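For orientation, the sketch below shows the generic building block of such schemes in the simplest setting: a semi-implicit Crank-Nicolson step for the one-dimensional kinetic-energy part combined, in split-step fashion, with a pointwise nonlinear phase rotation. The grid, time step and interaction strength are made-up values, and the spin-orbit and coherent-coupling operators of the actual pseudospin-1/2, spin-1 and spin-2 solvers are not included.

    import numpy as np

    def crank_nicolson_kinetic_step(psi, dx, dt):
        """One Crank-Nicolson step of i dpsi/dt = -0.5 d^2 psi/dx^2 (zero boundary conditions):
        (I + 0.5j*dt*H) psi_new = (I - 0.5j*dt*H) psi."""
        n = psi.size
        lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) / dx**2
        H = -0.5 * lap
        A = np.eye(n) + 0.5j * dt * H
        B = np.eye(n) - 0.5j * dt * H
        return np.linalg.solve(A, B @ psi)

    def split_step(psi, dx, dt, g=1.0):
        """Strang-type splitting: half nonlinear phase, full kinetic CN step, half nonlinear phase."""
        psi = psi * np.exp(-0.5j * dt * g * np.abs(psi) ** 2)
        psi = crank_nicolson_kinetic_step(psi, dx, dt)
        return psi * np.exp(-0.5j * dt * g * np.abs(psi) ** 2)

    x = np.linspace(-10.0, 10.0, 256)
    dx = x[1] - x[0]
    psi = np.exp(-x**2).astype(complex)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalize the wavefunction
    for _ in range(200):
        psi = split_step(psi, dx, dt=1e-3)
    print("norm after evolution:", np.sum(np.abs(psi) ** 2) * dx)   # Crank-Nicolson preserves the norm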
|
We consider two models of random cones together with their duals. Let
$Y_1,\dots,Y_n$ be independent and identically distributed random vectors in
$\mathbb R^d$ whose distribution satisfies some mild condition. The random
cones $G_{n,d}^A$ and $G_{n,d}^B$ are defined as the positive hulls
$\text{pos}\{Y_1-Y_2,\dots,Y_{n-1}-Y_n\}$, respectively
$\text{pos}\{Y_1-Y_2,\dots,Y_{n-1}-Y_n,Y_n\}$, conditioned on the event that
the respective positive hull is not equal to $\mathbb R^d$. We prove limit
theorems for various expected geometric functionals of these random cones, as
$n$ and $d$ tend to infinity in a coordinated way. This includes limit theorems
for the expected number of $k$-faces and the $k$-th conic quermassintegrals, as
$n$, $d$ and sometimes also $k$ tend to infinity simultaneously. Moreover, we
uncover a phase transition in high dimensions for the expected statistical
dimension for both models of random cones.
|
A proper orthogonal decomposition-based B-splines B\'ezier elements method
(POD-BSBEM) is proposed as a non-intrusive reduced-order model for uncertainty
propagation analysis for stochastic time-dependent problems. The method uses a
two-step proper orthogonal decomposition (POD) technique to extract the reduced
basis from a collection of high-fidelity solutions called snapshots. A third
POD level is then applied on the data of the projection coefficients associated
with the reduced basis to separate the time-dependent modes from the stochastic
parametrized coefficients. These are approximated in the stochastic parameter
space using B-spline basis functions defined in the corresponding B\'ezier
element. The accuracy and the efficiency of the proposed method are assessed
using benchmark steady-state and time-dependent problems and compared to the
reduced order model-based artificial neural network (POD-ANN) and to the
full-order model-based polynomial chaos expansion (Full-PCE). The POD-BSBEM is
then applied to analyze the uncertainty propagation through a flood wave flow
stemming from a hypothetical dam-break in a river with a complex bathymetry.
The results confirm the ability of the POD-BSBEM to accurately predict the
statistical moments of the output quantities of interest with a substantial
speed-up for both offline and online stages compared to other techniques.
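To illustrate the first ingredient only, the sketch below extracts a POD reduced basis from a synthetic snapshot matrix via the SVD and projects the snapshots onto it; the two-level POD, the third POD level applied to the projection coefficients, and the B-splines Bezier interpolation in the stochastic parameter space are not reproduced here, and all parameters are illustrative.

    import numpy as np

    def pod_basis(snapshots, energy_tol=1e-6):
        """POD modes capturing a (1 - energy_tol) fraction of the snapshot energy.
        snapshots: array of shape (n_dof, n_snapshots), one high-fidelity solution per column."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
        return U[:, :r], s[:r]

    # Synthetic snapshots: parametrized pulses sampled on a 1-D grid.
    x = np.linspace(0.0, 1.0, 500)
    params = np.linspace(0.2, 0.8, 40)
    S = np.column_stack([np.exp(-200.0 * (x - p) ** 2) for p in params])

    Phi, s = pod_basis(S, energy_tol=1e-4)
    coeffs = Phi.T @ S                       # projection coefficients of each snapshot
    reconstruction = Phi @ coeffs
    print("modes kept:", Phi.shape[1],
          "max reconstruction error:", np.abs(S - reconstruction).max())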
|
Nowadays, datacenters lean on a computer-centric approach based on monolithic
servers which include all necessary hardware resources (mainly CPU, RAM,
network and disks) to run applications. Such an architecture comes with two
main limitations: (1) difficulty in achieving full resource utilization and (2)
coarse granularity for hardware maintenance. Recently, many works have
investigated a resource-centric approach called disaggregated architecture,
where the datacenter is composed of self-contained resource boards
interconnected using fast interconnection technologies, each resource board including instances of
one resource type. The resource-centric architecture allows each resource to be
managed (maintenance, allocation) independently. LegoOS is the first work which
studied the implications of disaggregation on the operating system, proposing
to disaggregate the operating system itself. They demonstrated the suitability
of this approach, considering mainly CPU and RAM resources. However, they did
not study the implications of disaggregation for network resources. We
reproduced a LegoOS infrastructure and extended it to support disaggregated
networking. We show that networking can be disaggregated following the same
principles, and that classical networking optimizations such as DMA, DDIO or
loopback can be reproduced in such an environment. Our evaluations show the
viability of the approach and the potential of future disaggregated
infrastructures.
|
Previous methods decompose the blind super-resolution (SR) problem into two
sequential steps: \textit{i}) estimating the blur kernel from given
low-resolution (LR) image and \textit{ii}) restoring the SR image based on the
estimated kernel. This two-step solution involves two independently trained
models, which may not be well compatible with each other. A small estimation
error of the first step could cause a severe performance drop of the second
one. On the other hand, the first step can only utilize limited
information from the LR image, which makes it difficult to predict a highly
accurate blur kernel. Towards these issues, instead of considering these two
steps separately, we adopt an alternating optimization algorithm, which can
estimate the blur kernel and restore the SR image in a single model.
Specifically, we design two convolutional neural modules, namely
\textit{Restorer} and \textit{Estimator}. \textit{Restorer} restores the SR
image based on the predicted kernel, and \textit{Estimator} estimates the blur
kernel with the help of the restored SR image. We alternate these two modules
repeatedly and unfold this process to form an end-to-end trainable network. In
this way, \textit{Estimator} utilizes information from both LR and SR images,
which makes the estimation of the blur kernel easier. More importantly,
\textit{Restorer} is trained with the kernel estimated by \textit{Estimator},
instead of the ground-truth kernel, thus \textit{Restorer} could be more
tolerant to the estimation error of \textit{Estimator}. Extensive experiments
on synthetic datasets and real-world images show that our model can largely
outperform state-of-the-art methods and produce more visually favorable results
at a much higher speed. The source code is available at
\url{https://github.com/greatlog/DAN.git}.
|